Multispectral Image Segmentation With Dimensionality Reduction Using Autoencoders

  • Eliton Albuquerque UFRGS
  • Claudio R. Jung UFRGS


Autoencoder (AE) implementations based on neural networks have achieved impressive results in dimensionality reduction tasks such as multispectral (MS) imagery compression. Dimensionality reduction is necessary when dealing with large multispectral datasets, since data captured across many narrow spectral bands incur high processing and storage costs, particularly when such high-dimensional MS data are fed to deep learning networks. Traditional compression techniques such as Principal Component Analysis (PCA) are popular in remote sensing applications. However, applying them to MS data may render the data unusable for computer vision (CV) tasks such as image segmentation, especially under severe compression. AEs, on the other hand, generalize well over complex data, especially when combined with other CV pipelines. For the relevant problem of semantic segmentation, results degrade considerably when using images whose dimensionality was reduced with PCA. Vanilla autoencoders trained with the traditional MSE loss improve segmentation over PCA but still fall considerably behind the results obtained with uncompressed data, which indicates a potential domain shift. In this work, we show that training an AE with a combination of the MSE loss and an additional proxy loss based on a pre-trained segmentation module can significantly improve the AE restoration process, alleviating the accuracy drop of semantic segmentation even at strong compression rates. Our code is available at
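As a minimal sketch of the idea described above — an AE trained with MSE plus a proxy loss from a frozen, pre-trained segmentation network — the combined objective could look as follows. This is written in PyTorch; the architecture, class names (`SpectralAE`, `combined_loss`), and weighting factor are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class SpectralAE(nn.Module):
    """Toy autoencoder that compresses the spectral axis with 1x1 convolutions.
    Illustrative only; the paper's actual architecture may differ."""

    def __init__(self, n_bands=12, n_latent=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(n_bands, 8, kernel_size=1), nn.ReLU(),
            nn.Conv2d(8, n_latent, kernel_size=1),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(n_latent, 8, kernel_size=1), nn.ReLU(),
            nn.Conv2d(8, n_bands, kernel_size=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def combined_loss(ae, seg_model, x, labels, lam=0.1):
    """MSE reconstruction loss plus a proxy segmentation loss computed by a
    pre-trained segmentation model on the reconstructed image.

    seg_model is assumed frozen (requires_grad_(False)); gradients still
    flow back to the AE through the reconstruction x_hat. The weight `lam`
    is a hypothetical hyperparameter balancing the two terms."""
    x_hat = ae(x)
    mse = nn.functional.mse_loss(x_hat, x)
    logits = seg_model(x_hat)  # (B, n_classes, H, W)
    proxy = nn.functional.cross_entropy(logits, labels)
    return mse + lam * proxy
```

Minimizing this combined loss pushes the AE to produce reconstructions that not only match the input spectrally (MSE term) but also remain useful to the downstream segmenter (proxy term), which is how the domain shift between compressed and uncompressed data can be reduced.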

How to Cite

ALBUQUERQUE, Eliton; JUNG, Claudio R. Multispectral Image Segmentation With Dimensionality Reduction Using Autoencoders. In: CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI), 36., 2023, Rio Grande/RS. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 229-234.