Evaluation of Post-hoc Explanations for Malaria Detection

  • Vinícius Araújo Universidade Federal de Campina Grande
  • Leandro Marinho Universidade Federal de Campina Grande


Post-hoc explanation techniques have been advocated as crucial for increasing trust in complex Machine Learning (ML) models. However, it is still not well understood whether such explanations are actually useful or easy for users to understand. In this work, we explore the extent to which the explanations produced by SHAP, a state-of-the-art post-hoc explainer, help humans make better decisions. In a malaria classification scenario, we designed an experiment with 120 volunteers to assess whether humans, starting with zero knowledge about the classification mechanism, could replicate the performance of a complex ML classifier after having access to the model's explanations. Our results show that this is indeed the case: when presented with the ML model's outcomes and the corresponding explanations, humans improve their classification performance, indicating that they understood how the model makes its decisions.

Keywords: deep learning, explainability, explanation evaluation, SHAP


Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., and Elhadad, N. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. KDD '15. ACM, New York, NY, USA, pp. 1721–1730, 2015.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition. pp. 248–255, 2009.

Doshi-Velez, F. and Kim, B. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608, 2017.

Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., and Kagal, L. Explaining explanations: An overview of interpretability of machine learning. In 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA). pp. 80–89, 2018.

Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Pedreschi, D., and Giannotti, F. A survey of methods for explaining black box models, 2018.

LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature 521 (7553): 436–444, 2015.

Lundberg, S. M. and Lee, S.-I. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems 30, I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett (Eds.). Curran Associates, Inc., pp. 4765–4774, 2017.

NLM. Malaria datasets. U.S. National Library of Medicine.

Ribeiro, M. T., Singh, S., and Guestrin, C. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13-17, 2016. pp. 1135–1144, 2016.

Sarkar, D. D. Detecting malaria with deep learning, 2019.

Sayres, R., Taly, A., Rahimy, E., Blumer, K., Coz, D., Hammel, N., Krause, J., Narayanaswamy, A., Rastegar, Z., Wu, D., Xu, S., Barb, S., Joseph, A., Shumski, M., Smith, J., Sood, A. B., Corrado, G. S., Peng, L., and Webster, D. R. Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy. Ophthalmology 126 (4): 552–564, 2019.

Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. CoRR vol. abs/1409.1556, 2014.

Štrumbelj, E. and Kononenko, I. Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems vol. 41, pp. 647–665, 2013.

ARAÚJO, Vinícius; MARINHO, Leandro. Evaluation of Post-hoc Explanations for Malaria Detection. In: SYMPOSIUM ON KNOWLEDGE DISCOVERY, MINING AND LEARNING (KDMILE), 8., 2020, Online Event. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2020. p. 225–232. ISSN 2763-8944. DOI: https://doi.org/10.5753/kdmile.2020.11980.