Image-based Semantic Segmentation Network for the Brazilian Cerrado based on Public Databases

  • Daniel C. de Coimbra CPQD
  • Silas P. W. de Oliveira CPQD
  • Dimas A. M. Lemes PUC-Campinas
  • José G. Picolo PUC-Campinas
  • Guilherme Ribeiro Sales CPQD
  • Valentino Corso CPQD
  • Cides S. Bezerra CPQD

Abstract

We have developed fully convolutional networks (FCN) for semantic segmentation of satellite imagery based on five Land Use and Land Cover (LULC) categories: native vegetation, agriculture, pasture, urban region, and waterbody. To this end, we gathered and preprocessed public Brazilian data into an annotated dataset with 26,000 segmented 224 × 224 image patches. We obtained images from the Sino-Brazilian CBERS-04A satellite program and segmentation masks from the TerraClass project (INPE/Embrapa). We performed transfer learning on four backbone models: DeepLabV3+, MobileNet, ResNet-50, and VGG-16. We evaluated their performance with IoU, obtaining the respective scores of 45.96%, 34.40%, 45.58%, and 62.78%. However, our dataset is class-imbalanced, and a balanced IoU yields scores lower than 20% for all models, indicating specialization on the majority classes. Despite these shortcomings, our models produce masks with 16-times higher pixel density than previously available masks, and they take only images as input, without external data or expert curation.
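The balanced IoU reported above can be understood as a macro average of per-class IoU scores, which weights each LULC class equally regardless of its pixel frequency. A minimal NumPy sketch (illustrative only, not the code used in the paper; function and variable names are our own):

```python
import numpy as np

def per_class_iou(pred, target, num_classes):
    """IoU for each class, given integer label maps of equal shape.

    Classes absent from both prediction and target get NaN so they
    can be excluded from the macro average.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        intersection = np.logical_and(pred_c, target_c).sum()
        union = np.logical_or(pred_c, target_c).sum()
        ious.append(intersection / union if union > 0 else np.nan)
    return np.array(ious)

# Toy 4x4 masks with two classes (0 = background, 1 = e.g. waterbody)
pred = np.array([[0, 0, 1, 1],
                 [0, 0, 1, 1],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
target = np.array([[0, 0, 1, 1],
                   [0, 0, 0, 1],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])

ious = per_class_iou(pred, target, num_classes=2)
mean_iou = np.nanmean(ious)  # balanced (macro-averaged) IoU
```

Because the macro average gives minority classes the same weight as the dominant ones, a model that only segments the majority classes well will score much lower on this metric, which is the behavior described in the abstract.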

Keywords: semantic segmentation, satellite imagery, Brazilian Cerrado, fully convolutional, transfer learning

References

S. Minaee, Y. Boykov, F. Porikli, A. Plaza, N. Kehtarnavaz, D. Terzopoulos. “Image segmentation using deep learning: a survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, v. 44, n. 7, pp. 3523-42, 2022.

Y. LeCun et al. “Handwritten digit recognition with a back-propagation network,” Advances in Neural Information Processing Systems (NeurIPS), 1989.

E. Shelhamer, J. Long, T. Darrell. “Fully convolutional networks for semantic segmentation,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3431–40, 2015.

R. Strudel, R. Garcia, I. Laptev, C. Schmid. “Segmenter: transformer for semantic segmentation,” International Conference on Computer Vision (ICCV), 2021.

W. Yang, Y. Yuan, R. Gou, X. Li. “Semantic segmentation of agricultural images: A survey,” Information Processing in Agriculture, v. 10, n. 4, pp. 172-86, 2023.

J. Wang, W. Liu, A. Gou. “Numerical characteristics and spatial distribution of panoramic Street Green View index based on SegNet semantic segmentation in Savannah,” Urban Forestry & Urban Greening, v. 69, 2022.

M. Miranda. “AI4LUC: pixel-based classification of land use and land cover via deep learning and a Cerrado image dataset,” Master’s dissertation, INPE, 2023.

Terra Class Project. “Mapeamento do uso e cobertura da terra do cerrado,” 2013. [link]

L. F. Assis et al. “TerraBrasilis: A spatial data analytics infrastructure for large-scale thematic mapping,” ISPRS International Journal of Geo-Information, v. 8, n. 11, 2019.

L. Chen, Y. Zhu, G. Papandreou, F. Schroff, H. Adam. “Encoder-decoder with atrous separable convolution for semantic image segmentation,” European Conference on Computer Vision (ECCV), 2018.

A. Howard et al. “MobileNets: efficient convolutional neural networks for mobile vision applications,” ArXiv, 2017. [link]

K. He, X. Zhang, S. Ren, J. Sun. “Deep residual learning for image recognition,” IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

K. Simonyan, A. Zisserman. “Very deep convolutional networks for large-scale image recognition,” International Conference on Learning Representations (ICLR), 2015.

TensorFlow Team. “Model garden - object detection and segmentation.” [link]

Keras Team. “Keras applications,” Keras 3 API documentation. [link]

O. Ronneberger, P. Fischer, T. Brox. “U-Net: convolutional networks for biomedical image segmentation,” International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 234–241, 2015.

European Commission & European Space Agency. “Introduction to remote sensing,” SEOS Project e-Learning Tutorials. [link]

A. Polidorio, C. Franco, N. Imai, A. Tommaselli, M. Galo. “Correção radiométrica de imagens multiespectrais CBERS e Landsat ETM usando atributos de reflectância de cor,” Anais do XII Simpósio Brasileiro de Sensoriamento Remoto, INPE, pp. 4241–48, 2005.

T. Akiyama, J. Junior, A. Tommaselli. “Correção geométrica de imagens CBERS-4/PAN com modelos generalizados usando como referência dados do sistema nacional de gestão fundiária,” Anuário do Instituto de Geociências (UFRJ), v. 41, n. 2, 2018.

K. He, G. Gkioxari, P. Dollár, R. Girshick. “Mask R-CNN,” IEEE International Conference on Computer Vision (ICCV), 2017.

CBERS Program. “About CBERS-04A: Uses and Applications,” Brazilian Ministry of Science, Technology, and Innovation. [link]
Published
06/11/2024
COIMBRA, Daniel C. de; OLIVEIRA, Silas P. W. de; LEMES, Dimas A. M.; PICOLO, José G.; SALES, Guilherme Ribeiro; CORSO, Valentino; BEZERRA, Cides S. Image-based Semantic Segmentation Network for the Brazilian Cerrado based on Public Databases. In: WORKSHOP DE VISÃO COMPUTACIONAL (WVC), 19., 2024, Rio Paranaíba/MG. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 25-30. DOI: https://doi.org/10.5753/wvc.2024.34008.
