Comparative analysis of explainability methods of Artificial Intelligence in the educational scenario: a case study on dropout
Abstract
With the increasing application of Artificial Intelligence in education, it is essential to understand the reasons behind model outputs so that decisions based on them are safe. This work presents preliminary results of experiments applying XAI methods to school dropout data. Three methods were analyzed: SHAP, LIME and Anchor. SHAP and LIME produced detailed explanations, but their complex visual representations may demand additional technical knowledge from audiences such as school managers and teachers. In contrast, Anchor, with its rule-based approach, proved simpler and more intuitive, making predictions easier to understand and thus a more accessible option for the educational context.
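To illustrate why Anchor's rule-based output is considered more accessible, the sketch below shows how an anchor-style rule is judged by its precision (how often covered instances share the prediction) and coverage (how much of the data the rule applies to). The dropout records, feature names and thresholds here are purely hypothetical, not data or rules from the study:

```python
# Hypothetical dropout records: (absences, avg_grade, dropped_out).
# Values are illustrative only.
records = [
    (30, 4.0, True),
    (25, 5.5, True),
    (28, 3.8, True),
    (2, 8.5, False),
    (5, 7.0, False),
    (27, 6.9, True),
    (3, 9.1, False),
    (26, 4.5, False),  # counterexample: high absences, but no dropout
]

def anchor_rule(record):
    """Candidate anchor: IF absences > 20 AND avg_grade < 6 THEN dropout."""
    absences, avg_grade, _ = record
    return absences > 20 and avg_grade < 6

# Coverage: fraction of all records the rule applies to.
covered = [r for r in records if anchor_rule(r)]
coverage = len(covered) / len(records)

# Precision: among covered records, fraction where the prediction holds.
precision = sum(1 for r in covered if r[2]) / len(covered)

print(f"coverage={coverage:.2f}, precision={precision:.2f}")
```

An Anchor explanation is simply such an IF-THEN rule reported together with these two numbers, which is why it can be read without interpreting feature-attribution plots.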
Keywords:
Explainable Artificial Intelligence, Comparison of XAI Methods, School Dropout, Educational Data Mining, Understandable Predictions
References
Adadi, A. and Berrada, M. (2018). Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access, v. 6, p. 52138–52160. DOI: 10.1109/ACCESS.2018.2870052. Accessed: 3 Aug. 2024.
Alamri, R. and Alharbi, B. (2021). Explainable student performance prediction models: A systematic review. IEEE Access, 9:33132–33143.
Alvarez-Melis, D. and Jaakkola, T. S. (2018). Towards Robust Interpretability with Self-Explaining Neural Networks. In: Neural Information Processing Systems.
Batista, G., Prati, R. and Monard, M.-C. (2004). A study of the behavior of several methods for balancing machine learning training data. SIGKDD Explorations, v. 6, p. 20–29.
Carvalho, D. V., Pereira, E. M. and Cardoso, J. S. (2019). Machine Learning Interpretability: A Survey on Methods and Metrics. Electronics, v. 8, n. 8, p. 832. Available at: [link]. Accessed: 28 Jul. 2024.
Chen, J. et al. (2019). MOOC dropout prediction using a hybrid algorithm based on decision tree and extreme learning machine. Mathematical Problems in Engineering, v. 2019, p. 1-11. DOI: 10.1155/2019/8404653. Accessed: 2 Aug. 2024.
Chitti, M., Chitti, P. and Jayabalan, M. (2020). Need for interpretable student performance prediction. In 2020 13th International Conference on Developments in eSystems Engineering (DeSE), pages 269–272.
Cristobal, R. et al. (2013). Web usage mining for predicting final marks of students that use Moodle courses. Computer Applications in Engineering Education, Wiley Periodicals, v. 21, n. 1, p. 135-146.
Da Gama Neto, M. V. (2022). Análise comparativa das técnicas de Explainable AI e um novo método para geração de explicações textuais. Universidade Federal de Pernambuco, 10 Mar.
Canha, D. M. de Carvalho Martins. (2022). Building a benchmark framework for eXplainable Artificial Intelligence (XAI) methods. Instituto Superior Técnico, Lisboa.
Fernandes, M. (2022). Inteligência artificial explicável aplicada a aprendizado de máquina: Um estudo para identificar estresse ocupacional em profissionais da saúde. Undergraduate thesis (Engenharia de Computação), Universidade Federal de Santa Catarina, Araranguá. Available at: [link]. Accessed: 27 Jul. 2024.
Huynh-Cam, T.-T., Chen, L.-S. and Le, H. (2021). Using decision trees and random forest algorithms to predict and determine factors contributing to first-year university students' learning performance. Algorithms, v. 14, n. 11, p. 318.
IBGE - Instituto Brasileiro de Geografia e Estatística. (2023).
Jayaprakash, S., Krishnan, S. and Jaiganesh, V. (2020). Predicting students' academic performance using an improved random forest classifier. In: 2020 International Conference on Emerging Smart Computing and Informatics (ESCI), Proceedings, p. 238-243. DOI: 10.1109/ESCI48226.2020.9167547.
Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R., editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc.
Marbouti, F., Diefes-Dux, H. A. and Madhavan, K. (2016). Models for early prediction of at-risk students in a course using standards-based grading. Computers & Education, v. 103, p. 1-15.
Nayebi, A. et al. (2022). An empirical comparison of explainable Artificial Intelligence methods for clinical data: A case study on traumatic brain injury. Available at: [link]. Accessed: 20 Jul. 2024.
Neto, M. V. G., Vasconcelos, G. C. and Zanchettin, C. (2021). Mineração de dados aplicada à predição do desempenho de escolas e técnicas de interpretabilidade dos modelos. In Anais do XXXII Simpósio Brasileiro de Informática na Educação, pages 773–782. SBC.
Oliveira, D. F. N. (2020). Master's dissertation. Escola Politécnica, Universidade de São Paulo. DOI: 10.11606/D.3.2020.tde-08032021-102116. Accessed: 2 Aug. 2024.
Qin, F., Li, K., and Yan, J. (2020). Understanding user trust in artificial intelligence-based educational systems: Evidence from China. British Journal of Educational Technology, 51(5):1693–1710.
Rachha, A. and Seyam, M. (2023). Explainable AI in education: Current trends, challenges, and opportunities. In SoutheastCon 2023, pages 232–239.
Ribeiro, M. T., Singh, S. and Guestrin, C. (2016). "Why should I trust you?": Explaining the predictions of any classifier. In: KDD '16, p. 1135–1144, New York, NY, USA. Association for Computing Machinery.
Ribeiro, M. T., Singh, S. and Guestrin, C. (2018). Anchors: High-precision model-agnostic explanations. Association for the Advancement of Artificial Intelligence. Available at: [link]. Accessed: 2 Aug. 2024.
Published
2024-11-04
How to Cite
SILVA, Francisco da C.; FEITOSA, Rodrigo M.; BATISTA, Luiz A.; SANTANA, André M. Comparative analysis of explainability methods of Artificial Intelligence in the educational scenario: a case study on dropout. In: BRAZILIAN SYMPOSIUM ON COMPUTERS IN EDUCATION (SBIE), 35., 2024, Rio de Janeiro/RJ. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 2968-2977. DOI: https://doi.org/10.5753/sbie.2024.244433.
