Explainable Artificial Intelligence Using Forward-Forward Networks: A Study Involving Quantitative Analysis

  • Vitor L. Fabris Eldorado Institute of Research
  • Juliane R. de Oliveira Eldorado Institute of Research
  • Camille H. B. Silva Eldorado Institute of Research
  • Vanessa Cassenote Eldorado Institute of Research
  • José V. N. A. da Silva Eldorado Institute of Research
  • Rodrigo R. Arrais Eldorado Institute of Research
  • Renata De Paris Eldorado Institute of Research

Abstract


The field of eXplainable Artificial Intelligence (XAI) aims to understand the output of machine learning algorithms. We observed that the literature falls short in proposing systematic evaluations of XAI metrics and often relies on human perception for assessment. This paper assesses XAI methods applied to the Forward-Forward (FF) algorithm proposed by Geoffrey Hinton. Through a quantitative and critical analysis of XAI algorithms, mainly SHAP, LIME, and Grad-CAM, this study assesses the effectiveness of LIME by comparing the ground-truth image with the LIME mask output using traditional evaluation metrics. Our contributions are to improve the understanding of FF outputs using XAI and to provide a systematic strategy for evaluating XAI metrics. We demonstrate that the proposed metrics effectively highlight the features considered by the FF network when correctly or incorrectly classifying images, allowing for a quantitative distinction between the two cases.
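For illustration, the sketch below shows one way a LIME explanation mask could be compared quantitatively against a ground-truth mask using traditional metrics (IoU, precision, recall) and an information-theoretic score (adjusted mutual information, cf. Vinh et al. 2010). The function name, binarization threshold, and metric choices are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch (illustrative only): comparing a binarized LIME importance map
# against a ground-truth segmentation mask with traditional evaluation metrics.
import numpy as np
from sklearn.metrics import (adjusted_mutual_info_score, jaccard_score,
                             precision_score, recall_score)

def evaluate_lime_mask(lime_weights: np.ndarray,
                       ground_truth: np.ndarray,
                       threshold: float = 0.5) -> dict:
    """Compare a LIME importance map with a binary ground-truth mask.

    lime_weights: 2-D array of per-pixel (or per-superpixel) importances.
    ground_truth: 2-D binary array marking the object of interest.
    threshold:    importance cut-off used to binarize the LIME map (hypothetical choice).
    """
    pred = (lime_weights >= threshold).astype(int).ravel()
    truth = ground_truth.astype(int).ravel()
    return {
        "iou":       jaccard_score(truth, pred),
        "precision": precision_score(truth, pred, zero_division=0),
        "recall":    recall_score(truth, pred, zero_division=0),
        "ami":       adjusted_mutual_info_score(truth, pred),
    }

# Toy usage: a synthetic ground-truth square and a noisy importance map
# that happens to highlight the same region.
if __name__ == "__main__":
    gt = np.zeros((8, 8), dtype=int)
    gt[2:6, 2:6] = 1
    lime = np.random.rand(8, 8) * 0.4
    lime[2:6, 2:6] += 0.5
    print(evaluate_lime_mask(lime, gt))
```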

Keywords: Explainable AI, Quantitative evaluation, Forward-Forward Neural Network, Deep Learning

References

Alzubaidi, L., Al-Sabaawi, A., Bai, J., Dukhan, A., Alkenani, A. H., Al-Asadi, A., Alwzwazy, H. A., Manoufali, M., Fadhel, M. A., Albahri, A., et al. (2023). Towards risk-free trustworthy artificial intelligence: Significance and requirements. International Journal of Intelligent Systems, 2023(1):4459198.

Bitton, R., Malach, A., Meiseles, A., Momiyama, S., Araki, T., Furukawa, J., Elovici, Y., and Shabtai, A. (2022). Latent SHAP: Toward Practical Human-Interpretable Explanations.

da Silva, M. V. S., Arrais, R. R., da Silva, J. V. S., Tânios, F. S., Chinelatto, M. A., Pereira, N. B., Paris, R. D., Domingos, L. C. F., Villaça, R. D., Fabris, V. L., da Silva, N. R. B., de Faria, A. C. A. M., da Silva, J. V. N. A., de Oliveira Marucci, F. C. Q., de Souza Neto, F. A., Silva, D. X., Kondo, V. Y., and dos Santos, C. F. G. (2023). eXplainable Artificial Intelligence on Medical Images: A Survey.

Das, A. and Rad, P. (2020). Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A survey. CoRR, abs/2006.11371.

Erion, G., Janizek, J. D., Sturmfels, P., Lundberg, S. M., and Lee, S.-I. (2021). Improving performance of deep learning models with axiomatic attribution priors and expected gradients. Nature Machine Intelligence, 3(7):620–631.

Hinton, G. (2022). The Forward-Forward Algorithm: Some preliminary investigations.

Holzinger, A., Saranti, A., Molnar, C., Biecek, P., and Samek, W. (2022). Explainable AI Methods - A Brief Overview, pages 13–38. Springer International Publishing, Cham.

Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. In Bengio, Y. and LeCun, Y., editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.

Lecun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324.

Li, B., Qi, P., Liu, B., Di, S., Liu, J., Pei, J., Yi, J., and Zhou, B. (2023). Trustworthy AI: From Principles to Practices. ACM Comput. Surv., 55(9).

Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, pages 4768–4777, Red Hook, NY, USA. Curran Associates Inc.

Nguyen, H. T. T., Cao, H. Q., Nguyen, K. V. T., and Pham, N. D. K. (2021). Evaluation of explainable artificial intelligence: SHAP, LIME, and CAM. In Proceedings of the FPT AI Conference, pages 1–6.

Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier.

Saeed, W. and Omlin, C. (2023). Explainable AI (XAI): A systematic meta-survey of current challenges and future opportunities. Knowledge-Based Systems, 263:110273.

Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017). Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 618–626.

Vinh, N. X., Epps, J., and Bailey, J. (2010). Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11(95):2837–2854.

Zhou, J., Gandomi, A. H., Chen, F., and Holzinger, A. (2021). Evaluating the Quality of Machine Learning Explanations: A Survey on Methods and Metrics. Electronics, 10(5).
Published
17/11/2024
FABRIS, Vitor L.; OLIVEIRA, Juliane R. de; SILVA, Camille H. B.; CASSENOTE, Vanessa; SILVA, José V. N. A. da; ARRAIS, Rodrigo R.; PARIS, Renata De. Explainable Artificial Intelligence Using Forward-Forward Networks: A Study Involving Quantitative Analysis. In: ENCONTRO NACIONAL DE INTELIGÊNCIA ARTIFICIAL E COMPUTACIONAL (ENIAC), 21., 2024, Belém/PA. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 577-588. ISSN 2763-9061. DOI: https://doi.org/10.5753/eniac.2024.245025.