A Review on the Use of Interpretability Frameworks in Machine Learning

  • Ivo de Abreu Araújo (UFPA)

Abstract


Machine learning models have enabled intelligent solutions across many sectors and applications of society, owing to the robust predictions produced by their learning processes. Understanding the decisions of complex models is therefore essential for trusting their results. This article presents a review whose goal is to analyze the use of interpretability frameworks on black-box models. The results obtained from the analysis of 143 studies confirm that model interpretability has been consolidating through frameworks such as LIME and SHAP, which can map the factors that influence predictive outcomes.
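To illustrate the attribution idea that underlies SHAP (this sketch is not from the reviewed studies, and the toy model and baseline are illustrative assumptions): each feature receives its average marginal contribution to the prediction over all feature orderings, with absent features held at a baseline value.

```python
from itertools import combinations
from math import factorial

# Toy "black box": a linear model f(x) = 2*x0 + 3*x1 (weights are illustrative).
def model(x):
    return 2 * x[0] + 3 * x[1]

def shapley_values(model, x, baseline):
    """Exact Shapley attributions for one instance x: the average marginal
    contribution of each feature over all subsets of the other features,
    with absent features replaced by the baseline."""
    n = len(x)

    def value(subset):
        z = [x[i] if i in subset else baseline[i] for i in range(n)]
        return model(z)

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for k in range(n):
            for s in combinations(others, k):
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (value(set(s) | {i}) - value(set(s)))
        phi.append(total)
    return phi

# For a linear model, feature i's attribution is w_i * (x_i - baseline_i).
print(shapley_values(model, x=[1.0, 1.0], baseline=[0.0, 0.0]))  # [2.0, 3.0]
```

Practical SHAP implementations approximate this exponential-cost computation; the exact enumeration above is only feasible for a handful of features.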

References

Barredo-Arrieta, A., Laña, I., and Del Ser, J. (2019). What lies beneath: A note on the explainability of black-box machine learning models for road traffic forecasting. In Proc. IEEE Intelligent Transportation Systems Conf. (ITSC), pages 2232–2237.

El Shawi, R., Sherif, Y., Al-Mallah, M., and Sakr, S. (2019). Interpretability in healthcare: A comparative study of local machine learning interpretability techniques. In Proc. IEEE 32nd Int. Symp. Computer-Based Medical Systems (CBMS), pages 275–280.

Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., and Kagal, L. (2018). Explaining explanations: An overview of interpretability of machine learning. In Proc. IEEE 5th Int. Conf. Data Science and Advanced Analytics (DSAA), pages 80–89.

Goodman, B. and Flaxman, S. (2017). European union regulations on algorithmic decision-making and a “right to explanation”. AI magazine, 38(3):50–57.

Jordan, M. I. and Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245):255–260.

Lakkaraju, H., Bach, S. H., and Leskovec, J. (2016). Interpretable Decision Sets: A Joint Framework for Description and Prediction. KDD ’16, pages 1675–1684, San Francisco, California, USA. Association for Computing Machinery.

Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Advances in neural information processing systems, pages 4765–4774.

Luštrek, M., Gams, M., Martinčić-Ipšić, S., et al. (2016). What makes classification trees comprehensible? Expert Systems with Applications, 62:333–346.

Malhi, A., Kampik, T., Pannu, H., Madhikermi, M., and Främling, K. (2019). Explaining machine learning-based classifications of in-vivo gastral images. In 2019 Digital Image Computing: Techniques and Applications (DICTA), pages 1–7. IEEE.

Messalas, A., Kanellopoulos, Y., and Makris, C. (2019). Model-agnostic interpretability with Shapley values. In Proc. 2019 10th Int. Conf. Information, Intelligence, Systems and Applications (IISA), pages 1–7.

Molnar, C. (2019). Interpretable machine learning: A Guide for Making Black Box Models Explainable. Leanpub.

Monteiro de Aquino, R. and Cozman, F. (2019). Natural language explanations of classifier behavior. In Proc. IEEE Second Int. Conf. Artificial Intelligence and Knowledge Engineering (AIKE), pages 239–242.

Mothilal, R. K., Sharma, A., and Tan, C. (2020). Explaining machine learning classifiers through diverse counterfactual explanations. FAT* ’20, pages 607–617, Barcelona, Spain. Association for Computing Machinery.

Ponce, H. and de Lourdes Martinez-Villaseñor, M. (2017). Interpretability of artificial hydrocarbon networks for breast cancer classification. In Proc. Int. Joint Conf. Neural Networks (IJCNN), pages 3535–3542.

Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135–1144.

Yang, C., Rangarajan, A., and Ranka, S. (2018). Global model interpretation via recursive partitioning. In Proc. IEEE 20th Int. Conf. High Performance Computing and Communications; IEEE 16th Int. Conf. Smart City; IEEE 4th Int. Conf. Data Science and Systems (HPCC/SmartCity/DSS), pages 1563–1570.

Zhang, W., Zhou, Y., and Yi, B. (2019). An interpretable online learner’s performance prediction model based on learning analytics. In Proceedings of the 2019 11th International Conference on Education Technology and Computers, pages 148–154.
Published
23/11/2021
ARAÚJO, Ivo de Abreu. Uma Revisão sobre o uso de Frameworks de Interpretabilidade em Aprendizado de Máquina. In: ENCONTRO UNIFICADO DE COMPUTAÇÃO DO PIAUÍ (ENUCOMPI), 14., 2021, Picos. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. p. 105-112. DOI: https://doi.org/10.5753/enucompi.2021.17760.