Evaluation of Explainable Artificial Intelligence Methods for Deep Time Series Classification of Biosignals

  • Diego R. de Sá UFC
  • César Lincoln C. Mattos UFC
  • Regis P. Magalhães UFC

Abstract


Deep learning models for time series classification, such as 1D Convolutional Neural Networks, are increasingly used to classify time-dependent clinical data, such as electrocardiograms (ECGs) and electroencephalograms (EEGs). However, such complex architectures behave as black boxes, and the interpretability of new predictions is lost. Explainable artificial intelligence (XAI) techniques aim to mitigate this issue by providing some degree of explanation for the model's output given a new input. For biosignals, represented as multivariate time series, it is desirable to indicate which dimensions and which segments led the model to its final outcome. In this context, this work evaluates and compares different XAI methods, whether gradient-based or perturbation-based, to better understand the predictions produced by deep learning models in biosignal classification tasks. Experimental results on a synthetic dataset and a real-world ECG dataset indicate that perturbation-based approaches are best suited for faithful and robust explanations, while gradient-based methods achieve comparable results at lower computational cost.
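To make the perturbation-based idea concrete, the sketch below implements a minimal occlusion-style relevance map for a multivariate time series: each channel segment is replaced by a baseline value and the resulting drop in the model's score is recorded as that segment's relevance. The toy scoring function, window size, and baseline here are illustrative assumptions, not the paper's actual models or experimental setup.

```python
import numpy as np

def occlusion_saliency(model, x, window=10, baseline=0.0):
    """Perturbation-based relevance: slide a window over each channel,
    replace the segment with a baseline value, and record the drop in
    the model's score. High drop = high relevance for that segment."""
    n_channels, n_steps = x.shape
    base_score = model(x)
    saliency = np.zeros_like(x)
    for c in range(n_channels):
        for start in range(0, n_steps, window):
            x_pert = x.copy()
            x_pert[c, start:start + window] = baseline
            # Relevance of the occluded segment on channel c
            saliency[c, start:start + window] = base_score - model(x_pert)
    return saliency

# Toy "model": scores a 2-channel series by the energy of channel 0 only
model = lambda x: float((x[0] ** 2).sum())

rng = np.random.default_rng(0)
x = np.zeros((2, 100))
x[0, 40:60] = 1.0            # informative segment on channel 0
x[1] = rng.normal(size=100)  # irrelevant noise channel

sal = occlusion_saliency(model, x, window=10)
print(sal[0, 40:60].max())   # the informative segment gets high relevance
print(np.abs(sal[1]).max())  # the noise channel gets zero relevance
```

Because every window on every channel requires a separate forward pass, this kind of perturbation-based explanation is considerably more expensive than a single gradient computation, which matches the abstract's cost observation.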
Published
29/09/2025
SÁ, Diego R. de; MATTOS, César Lincoln C.; MAGALHÃES, Regis P. Evaluation of Explainable Artificial Intelligence Methods for Deep Time Series Classification of Biosignals. In: BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 35., 2025, Fortaleza/CE. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 539-554. ISSN 2643-6264.