Explainable Artificial Intelligence for Behavioral Simulation: SHAP-Based Analysis of Decision-Making in Virtual Rats
Abstract
The opacity of artificial neural networks (ANNs) has raised growing concerns about the interpretability of their decisions, particularly in behavioral simulations. To address this, the field of explainable artificial intelligence (XAI) has introduced tools that clarify the internal logic of machine learning models. This study applies SHAP (SHapley Additive exPlanations) to analyze the behavior of virtual rats controlled by ANNs in the Elevated Plus Maze, a widely used paradigm for assessing anxiety. The networks were evolved through genetic algorithms, and SHAP values were used to identify the contribution of each input (recurrent neurons and environmental sensors) to decision-making. The results show that long-range sensors, especially those detecting obstacles on the left, had the highest relevance, although most decisions were driven by a small subset of recurrent neurons. Network configurations with two or three hidden neurons were most frequent, indicating compact but effective internal processing. This approach enhances model transparency and provides an ethical alternative to live-animal testing, in alignment with the principles of the 3Rs (Reduction, Refinement, and Replacement).
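As a rough illustration of the attribution step described above, the sketch below shows how SHAP values might be computed for an ANN controller whose inputs mix environmental sensors and recurrent-neuron activations. This is a minimal sketch using the `shap` library's `KernelExplainer`, not the authors' implementation; the input names, the toy controller, and the sample sizes are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): attributing a virtual-rat
# controller's decisions to its inputs with SHAP. Input layout and the toy
# controller are placeholders chosen only to illustrate the workflow.
import numpy as np
import shap

rng = np.random.default_rng(0)
feature_names = [
    "long_range_left", "long_range_right", "short_range_left",
    "short_range_right", "recurrent_1", "recurrent_2",
]

# Stand-in for an evolved ANN controller: maps an input state to a decision score.
W = rng.normal(size=(len(feature_names),))
def controller_predict(X):
    return np.tanh(X @ W)

# Background distribution of input states, and the states whose decisions we explain.
background = rng.uniform(-1, 1, size=(100, len(feature_names)))
states_to_explain = rng.uniform(-1, 1, size=(10, len(feature_names)))

explainer = shap.KernelExplainer(controller_predict, background)
shap_values = explainer.shap_values(states_to_explain)

# Mean absolute SHAP value per input gives a rough measure of its overall relevance,
# analogous to ranking sensors and recurrent neurons by their contribution.
relevance = np.abs(shap_values).mean(axis=0)
for name, value in sorted(zip(feature_names, relevance), key=lambda p: -p[1]):
    print(f"{name}: {value:.3f}")
```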
Published
29/09/2025
How to Cite
FREIRE, Thálita Guimarães; TINÓS, Renato; COSTA, Ariadne Andrade. Explainable Artificial Intelligence for Behavioral Simulation: SHAP-Based Analysis of Decision-Making in Virtual Rats. In: BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 35., 2025, Fortaleza/CE. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 89-103. ISSN 2643-6264.
