Towards evaluating FLIM attention regions

  • Matheus A. Cerqueira UNICAMP
  • Bárbara C. Benato USP
  • Alexandru C. Telea University of Utrecht
  • Alexandre X. Falcão UNICAMP

Abstract


FLIM (Feature Learning from Image Markers) interactively trains convolutional networks from user-defined attention regions. Although FLIM yields shallow networks, trained from very few weakly annotated images, that are competitive with large deep models, the performance of a FLIM model depends on those attention regions. Much effort has gone into improving the FLIM framework itself, but the marked locations and their relationship to model performance have not been investigated. In this work, we open ways to evaluate the impact of image-marker locations on FLIM. For that, we exploit multiple marker positions, determine their relevance, and assign a color code to each location, producing a heatmap of marker locations that reflects FLIM performance for those image regions. Our results show how FLIM performance varies with the background marker location, revealing trends toward better or worse marker scenarios.
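The heatmap construction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each probed marker position comes with a scalar performance score (e.g., the IoU of the FLIM model trained with a marker at that spot), and it spreads each score over the image grid with a Gaussian kernel (the kernel choice and `sigma` are assumptions) before normalizing, so that every pixel receives a score-weighted average of the nearby markers.

```python
import numpy as np

def marker_heatmap(shape, markers, scores, sigma=5.0):
    """Build a marker-location performance heatmap.

    shape   : (H, W) of the image grid
    markers : list of (row, col) marker positions that were probed
    scores  : one performance value per marker (hypothetical metric,
              e.g., IoU of the model trained from that marker)
    sigma   : assumed spatial spread of each marker's influence
    """
    H, W = shape
    rows, cols = np.mgrid[0:H, 0:W]
    heat = np.zeros(shape, dtype=float)    # score-weighted accumulator
    weight = np.zeros(shape, dtype=float)  # kernel-mass accumulator
    for (r, c), s in zip(markers, scores):
        # Gaussian splat centered on this marker position
        g = np.exp(-((rows - r) ** 2 + (cols - c) ** 2) / (2 * sigma ** 2))
        heat += s * g
        weight += g
    # Normalized heatmap: weighted average of nearby marker scores
    return np.where(weight > 0, heat / np.maximum(weight, 1e-12), 0.0)
```

The resulting array can be rendered with any colormap to obtain the color-coded view of better/worse marker locations; with a single marker the map is flat at that marker's score, and with several markers each region is dominated by its closest markers.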

Published
30/09/2025
CERQUEIRA, Matheus A.; BENATO, Bárbara C.; TELEA, Alexandru C.; FALCÃO, Alexandre X. Towards evaluating FLIM attention regions. In: WORKSHOP DE TRABALHOS EM ANDAMENTO - CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI), 38., 2025, Salvador/BA. Proceedings [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 150-155.
