Analysis of Visual Explainers Using Clustering Techniques
Abstract
Artificial Intelligence (AI) has become increasingly present in everyday life, raising the demand for methods that explain the decisions made by its models, a field that has come to be known as Explainable Artificial Intelligence (XAI). In the area of visual explainability, several techniques and metrics have been proposed, yet there is still no consensus on how to evaluate them in a comparative and reliable way. This work proposes a new methodology for analyzing explainer behavior based on clustering techniques, allowing multiple explainers and metrics to be evaluated simultaneously. The results indicate that explainer behavior varies according to the dataset and the explanation model used.
Published
29/09/2025
How to Cite
OLIVEIRA, Lázaro Raimundo de; XAVIER JÚNIOR, João Carlos; CANUTO, Anne M. P.. Analysis of Visual Explainers Using Clustering Techniques. In: ENCONTRO NACIONAL DE INTELIGÊNCIA ARTIFICIAL E COMPUTACIONAL (ENIAC), 22., 2025, Fortaleza/CE. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 463-474. ISSN 2763-9061. DOI: https://doi.org/10.5753/eniac.2025.13657.
