Interpreting Convolutional Neural Networks for Brain Tumor Classification: An Explainable Artificial Intelligence Approach

Abstract


Brain tumors pose a complex medical challenge, requiring a specific approach for accurate diagnosis and effective treatment. Early detection can significantly improve outcomes and quality of life for patients with brain tumors. Magnetic resonance imaging (MRI) is a powerful diagnostic tool, and convolutional neural networks (CNNs) are efficient deep learning algorithms for image analysis. In this study, we explored two CNN models for brain tumor classification and applied hyperparameter optimization and data augmentation techniques to achieve an accuracy of up to 96%. In addition, we used Explainable Artificial Intelligence (XAI) techniques to visualize and interpret the behavior of the CNN models. Our results show that the CNN models accurately classified MRI images containing brain tumors, and the XAI techniques helped us identify the patterns and features the models used to make predictions. This study supports the development of more reliable medical diagnoses for brain tumors using CNN models and XAI techniques. The source code is available at https://github.com/dieineb/Bracis23; the repository also contains images generated during the experiments.
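The abstract does not name the specific XAI technique here, but a common choice for visualizing CNN decisions on medical images is Grad-CAM. As a minimal sketch (not the authors' implementation), the core Grad-CAM computation can be expressed in NumPy, assuming the last convolutional layer's activations and the gradients of the predicted class score with respect to them have already been extracted from the model:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Compute a Grad-CAM heatmap from conv-layer activations and gradients.

    activations, gradients: arrays of shape (C, H, W) taken from the
    last convolutional layer for a single input image.
    """
    # Global-average-pool the gradients to get one importance weight per channel
    weights = gradients.mean(axis=(1, 2))
    # Weighted sum of the activation channels
    cam = np.einsum('c,chw->hw', weights, activations)
    # ReLU: keep only features that positively influence the class score
    cam = np.maximum(cam, 0)
    # Normalize to [0, 1] so it can be overlaid on the MRI slice
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with random feature maps (stand-ins for real model outputs)
rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))
grads = rng.random((8, 7, 7))
heatmap = grad_cam(acts, grads)
print(heatmap.shape)  # (7, 7)
```

In practice, the resulting heatmap is upsampled to the input image size and overlaid on the MRI slice, highlighting the regions the network relied on for its prediction.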

Published
25/09/2023
How to Cite

SCHIAVON, Dieine Estela Bernieri; BECKER, Carla Diniz Lopes; BOTELHO, Viviane Rodrigues; PIANOSKI, Thatiane Alves. Interpreting Convolutional Neural Networks for Brain Tumor Classification: An Explainable Artificial Intelligence Approach. In: BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 12., 2023, Belo Horizonte/MG. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 77-91. ISSN 2643-6264.