SSMSS: A Model of Semantic Segmentation for Matching of Satellite and Sonar Images

Abstract


Establishing accurate correspondence between sonar and satellite images is a nontrivial task due to differences in modality, resolution, and environmental noise, especially in GPS-denied underwater environments. This work investigates the integration of semantic segmentation for matching satellite and sonar images. We evaluated a diverse set of state-of-the-art architectures, including convolutional models (U-Net, U-Net++, FPN, PSPNet, LinkNet, and DeepLab v3+) and attention-based models (MA-Net and SegFormer), focusing on their ability to capture local structures, multiscale features, and contextual dependencies relevant for robust cross-modal matching. Experimental results indicate that, while convolutional networks deliver efficient and accurate segmentation of salient structures, attention-based models improve matching performance in complex scenarios by modeling long-range spatial dependencies. Among the architectures evaluated, MA-Net achieves superior performance, with a pixel accuracy of 0.9519, a mean IoU of 0.9494, and a matching score of 0.3618, underscoring the effectiveness of attention mechanisms. These findings lay the groundwork for future research on unified segmentation and matching frameworks specifically designed for autonomous underwater navigation.
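To make the evaluation concrete, the sketch below (not the authors' code) shows how one of the evaluated architectures, MA-Net, could be instantiated and how the reported metrics, pixel accuracy and mean IoU, are typically computed. It assumes the segmentation_models_pytorch library, a ResNet-34 encoder, and single-channel binary masks; all of these choices are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: instantiate an MA-Net segmentation model and compute
# pixel accuracy and mean IoU. Library, encoder, and mask format are assumptions.
import torch
import segmentation_models_pytorch as smp

# MA-Net with an ImageNet-pretrained ResNet-34 encoder (encoder choice is assumed).
model = smp.MAnet(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=3,   # e.g. satellite RGB; sonar imagery might use in_channels=1
    classes=1,       # binary segmentation of salient structures
)

def pixel_accuracy(pred: torch.Tensor, target: torch.Tensor) -> float:
    """Fraction of pixels whose predicted class matches the ground truth."""
    return (pred == target).float().mean().item()

def mean_iou(pred: torch.Tensor, target: torch.Tensor, num_classes: int = 2) -> float:
    """Intersection-over-union averaged over the classes present in the union."""
    ious = []
    for c in range(num_classes):
        inter = ((pred == c) & (target == c)).sum().item()
        union = ((pred == c) | (target == c)).sum().item()
        if union > 0:
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy usage on a single 256x256 image, thresholding the logits at zero.
x = torch.randn(1, 3, 256, 256)
with torch.no_grad():
    logits = model(x)                        # shape (1, 1, 256, 256)
pred = (logits.squeeze(1) > 0).long()        # hard binary prediction
target = torch.randint(0, 2, (1, 256, 256))  # placeholder ground-truth mask
print(pixel_accuracy(pred, target), mean_iou(pred, target))
```

The same pattern applies to the convolutional baselines (e.g., smp.Unet, smp.FPN, smp.DeepLabV3Plus), which makes architecture comparisons a matter of swapping the model constructor while keeping the metric code fixed.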
Keywords: Location awareness, Accuracy, Satellites, Attention mechanisms, Semantic segmentation, Semantics, Sonar, Satellite images, Robots, Context modeling, Satellite Map, Acoustic Image, Image Mapping
Published
13/10/2025
QUISPE, Marco Antonio Quiroz; RAMOS, Jose David Garcia; BRIÃO, Stephanie Loi; DÍAZ-AMADO, José Alberto; DREWS-JR, Paulo Lilles Jorge. SSMSS: A Model of Semantic Segmentation for Matching of Satellite and Sonar Images. In: SIMPÓSIO BRASILEIRO DE ROBÓTICA E SIMPÓSIO LATINO AMERICANO DE ROBÓTICA (SBR/LARS), 17., 2025, Vitória/ES. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 237-242.