IMAT: A Tool for the Analysis of Interpretable Machine Learning Models
Abstract
The transparency and interpretability of Artificial Intelligence (AI) and Machine Learning (ML) models are increasingly relevant in applications involving social network analysis and data mining. Although advanced models, such as deep learning models, offer robust solutions to complex problems, their growing complexity makes it difficult to understand the decisions they make. This lack of transparency can undermine user trust and limit the adoption of these technologies. To address this challenge, this paper presents IMAT (Interpretable Models Analysis Tool), a tool developed to generate flowcharts that map each step of the data processing performed by deep learning models. IMAT aims to provide a clear and accessible visualization of the data flow and internal operations of these models, from the input to the generation of the response, making them easier to interpret. In addition, this work discusses IMAT's architecture and features and demonstrates its application to sentiment analysis of tweets using a MultiLayer Perceptron (MLP), evaluating the implications and limitations of the results obtained.
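To make the abstract's description concrete, the sketch below illustrates, in Python, the kind of pipeline IMAT is described as visualizing: a small MLP sentiment classifier over tweet text, followed by a Graphviz flowchart of the processing stages from input to prediction. This is a minimal, hypothetical sketch under assumed libraries (TensorFlow/Keras and the graphviz package with the Graphviz binaries installed); the data, stage names, and hyperparameters are illustrative and do not reflect IMAT's actual implementation.

    # Hypothetical sketch (not IMAT's code): a small MLP sentiment classifier
    # over toy tweet text, plus a Graphviz flowchart of the processing stages,
    # in the spirit of the per-step diagrams described in the abstract.
    import tensorflow as tf
    from tensorflow.keras import layers
    import graphviz

    tweets = ["great product, loved it", "terrible service, never again",
              "absolutely fantastic experience", "worst purchase of my life"]
    labels = tf.constant([1.0, 0.0, 1.0, 0.0])  # 1 = positive, 0 = negative

    # Step 1: turn raw text into TF-IDF feature vectors.
    vectorizer = layers.TextVectorization(max_tokens=1000, output_mode="tf_idf")
    vectorizer.adapt(tweets)
    features = vectorizer(tf.constant(tweets))

    # Step 2: a small MLP classifier on top of the vectorized tweets.
    model = tf.keras.Sequential([
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(features, labels, epochs=10, verbose=0)

    # Step 3: render a flowchart of the pipeline, from raw input to prediction.
    dot = graphviz.Digraph("mlp_pipeline")
    stages = ["raw tweet", "text vectorization (TF-IDF)",
              "dense layer (16 units, ReLU)", "dense layer (1 unit, sigmoid)",
              "sentiment score"]
    for i, stage in enumerate(stages):
        dot.node(str(i), stage)
        if i > 0:
            dot.edge(str(i - 1), str(i))
    dot.render("mlp_pipeline", format="png", cleanup=True)  # writes mlp_pipeline.png

A tool such as IMAT would derive a diagram of this kind from the model itself rather than from a hand-written list of stages, but the resulting input-to-response flowchart is analogous.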
Published
20/07/2025
How to Cite
ASSIS, André; DANTAS, Jamilson; ANDRADE, Ermeson. IMAT: Uma Ferramenta para Análise de Modelos de Aprendizado de Máquina Interpretáveis. In: BRAZILIAN WORKSHOP ON SOCIAL NETWORK ANALYSIS AND MINING (BRASNAM), 14., 2025, Maceió/AL. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 134-147. ISSN 2595-6094. DOI: https://doi.org/10.5753/brasnam.2025.8924.
