Comparing LIME and SHAP Global Explanations for Human Activity Recognition
Abstract
The development of complex machine learning (ML) models has increased in recent years, making it essential to understand the decisions these models make. In this context, eXplainable Artificial Intelligence (XAI) has emerged as a field of study that aims to provide explanations for the decisions made by ML models. This work compares two state-of-the-art XAI techniques, LIME and SHAP, in the context of Human Activity Recognition (HAR). Since LIME provides only local explanations, we present a way to compute global feature importance from LIME explanations through a global aggregation approach, and we use correlation metrics to compare the feature importances produced by LIME and SHAP across different HAR datasets and models. The results show that correlation metrics alone are not enough to conclude whether the techniques agree, so we also employ a feature-removal-and-retrain approach and show that, despite some divergences in the correlation metrics, both XAI techniques successfully identify the most and least important features used by the model for the task.
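The abstract's global aggregation and correlation steps can be illustrated with a minimal sketch. This is not the paper's implementation: the aggregation rule (mean absolute weight per feature), the Spearman correlation choice, and all data below are assumptions for illustration only.

```python
# Illustrative sketch only: aggregate per-instance LIME weights into a
# global importance vector, then compare against a (hypothetical) SHAP
# global importance vector with Spearman rank correlation.

def global_importance(per_instance_weights):
    """One possible global aggregation: mean absolute weight per feature."""
    n = len(per_instance_weights)
    n_feat = len(per_instance_weights[0])
    return [sum(abs(w[j]) for w in per_instance_weights) / n
            for j in range(n_feat)]

def spearman(a, b):
    """Spearman rank correlation (ties not handled; for illustration)."""
    def ranks(x):
        order = sorted(range(len(x)), key=lambda i: x[i])
        r = [0] * len(x)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((ra[i] - rb[i]) ** 2 for i in range(n))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Toy data: local LIME weights for 3 instances over 4 features.
lime_local = [[0.2, -0.5, 0.1, 0.0],
              [0.3, -0.4, 0.0, 0.1],
              [0.1, -0.6, 0.2, 0.0]]
lime_global = global_importance(lime_local)      # [0.2, 0.5, 0.1, 0.033...]
shap_global = [0.25, 0.50, 0.08, 0.05]           # hypothetical mean |SHAP value|
rho = spearman(lime_global, shap_global)         # agreement of feature rankings
```

A high rank correlation suggests the two techniques order features similarly, but, as the abstract notes, the feature-removal-and-retrain check is still needed to confirm that the top-ranked features actually matter to the model.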
Published
17/11/2024
How to Cite
ALVES, Patrick; DELGADO, Jaime; GONZALEZ, Luis; ROCHA, Anderson R.; BOCCATO, Levy; BORIN, Edson. Comparing LIME and SHAP Global Explanations for Human Activity Recognition. In: BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 13., 2024, Belém/PA. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 172-186. ISSN 2643-6264.