Explainability of COVID-19 Classification Models Using Dimensionality Reduction of SHAP Values

  • Daniel Matheus Kuhn (UFRGS)
  • Melina Silva de Loreto (Hospital de Clínicas de Porto Alegre)
  • Mariana Recamonde-Mendoza (UFRGS)
  • João Luiz Dihl Comba (UFRGS)
  • Viviane Pereira Moreira (UFRGS)

Abstract

The public health crisis triggered by COVID-19 intensified the demand for predictive models to assist in the diagnosis and prognosis of patients affected by the disease. This work evaluates several machine learning classifiers that predict the risk of COVID-19 mortality from information available at the time of hospital admission. We also apply a visualization technique based on a state-of-the-art explainability approach (SHAP) which, combined with a dimensionality reduction technique, yields insights into the relationships among the features the classifiers rely on in their predictions. Our experiments on two real datasets showed promising results, reaching a sensitivity of up to 84% and an AUROC of 92% (95% CI, [0.89–0.95]).
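
To illustrate the general idea of combining SHAP values with dimensionality reduction, the sketch below computes per-patient SHAP values for a trained classifier and projects them to 2D so that patients whose predictions are driven by similar feature contributions appear close together. This is a minimal sketch, not the authors' actual pipeline: the stand-in dataset, the RandomForest classifier, and the t-SNE projection are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's exact pipeline): train a
# classifier, compute SHAP values per patient, project them to 2D.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import TSNE
from sklearn.model_selection import train_test_split

# Stand-in tabular dataset for the admission-time clinical features.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# SHAP values quantify each feature's contribution to each individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Depending on the shap version, a binary classifier yields either a list of
# two arrays (one per class) or a single 3D array; keep the positive class.
if isinstance(shap_values, list):
    shap_values = shap_values[1]
elif shap_values.ndim == 3:
    shap_values = shap_values[:, :, 1]

# Reduce the per-patient SHAP vectors to two dimensions for plotting;
# nearby points correspond to predictions explained by similar features.
embedding = TSNE(n_components=2, random_state=42).fit_transform(shap_values)
print(embedding.shape)  # (n_test_patients, 2)
```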
Published
25/09/2023
KUHN, Daniel Matheus; LORETO, Melina Silva de; RECAMONDE-MENDOZA, Mariana; COMBA, João Luiz Dihl; MOREIRA, Viviane Pereira. Explainability of COVID-19 Classification Models Using Dimensionality Reduction of SHAP Values. In: BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 12., 2023, Belo Horizonte/MG. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 415-430. ISSN 2643-6264.