Uma Revisão sobre o uso de Frameworks de Interpretabilidade em Aprendizado de Máquina

  • Ivo de Abreu Araújo (UFPA)

Abstract


Machine learning models have enabled intelligent solutions across many sectors and applications of society thanks to the robust predictive capabilities that emerge from their learning processes. Understanding the decisions of complex models therefore becomes essential for trusting their results. This paper presents a review whose objective is to analyze the use of interpretability frameworks with black-box models. The results, obtained from the analysis of 143 studies, confirm that model interpretability has been consolidating through frameworks such as LIME and SHAP, which are able to map the factors that influence predictive results.
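To illustrate the attribution idea behind frameworks such as SHAP, the sketch below computes exact Shapley values for a toy black-box model. This is a hypothetical, minimal example of the underlying game-theoretic technique (the `shapley_values` helper and the toy model are illustrative, not taken from the reviewed studies; the SHAP library itself uses faster approximations):

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, baseline, instance):
    """Exact Shapley values: each feature's average marginal contribution
    over all coalitions. Features absent from a coalition are replaced
    by their baseline value."""
    n = len(instance)

    def value(coalition):
        x = [instance[i] if i in coalition else baseline[i] for i in range(n)]
        return predict(x)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                # Shapley weight for a coalition of size k (feature i excluded).
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(subset) | {i}) - value(set(subset)))
    return phi

# Toy "black-box": a linear model with an interaction term.
model = lambda x: 2 * x[0] + 3 * x[1] + x[0] * x[1]

phi = shapley_values(model, baseline=[0, 0], instance=[1.0, 2.0])
# Efficiency property: attributions sum to f(instance) - f(baseline).
print(phi, sum(phi))  # → [3.0, 7.0] 10.0
```

The efficiency property shown in the last line is what makes such attributions useful for interpretation: the prediction is decomposed exactly into per-feature contributions.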

References

Barredo-Arrieta, A., Laña, I., and Del Ser, J. (2019). What lies beneath: A note on the explainability of black-box machine learning models for road traffic forecasting. In Proc. IEEE Intelligent Transportation Systems Conf. (ITSC), pages 2232–2237.

El Shawi, R., Sherif, Y., Al-Mallah, M., and Sakr, S. (2019). Interpretability in healthcare: a comparative study of local machine learning interpretability techniques. In Proc. IEEE 32nd Int. Symp. Computer-Based Medical Systems (CBMS), pages 275–280.

Gilpin, L. H., Bau, D., Yuan, B. Z., Bajwa, A., Specter, M., and Kagal, L. (2018). Explaining explanations: An overview of interpretability of machine learning. In Proc. IEEE 5th Int. Conf. Data Science and Advanced Analytics (DSAA), pages 80–89.

Goodman, B. and Flaxman, S. (2017). European union regulations on algorithmic decision-making and a “right to explanation”. AI magazine, 38(3):50–57.

Jordan, M. I. and Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245):255–260.

Lakkaraju, H., Bach, S. H., and Leskovec, J. (2016). Interpretable Decision Sets: A Joint Framework for Description and Prediction. KDD ’16, pages 1675–1684, San Francisco, California, USA. Association for Computing Machinery.

Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, pages 4765–4774.

Luštrek, M., Gams, M., Martinčić-Ipšić, S., et al. (2016). What makes classification trees comprehensible? Expert Systems with Applications, 62:333–346.

Malhi, A., Kampik, T., Pannu, H., Madhikermi, M., and Främling, K. (2019). Explaining machine learning-based classifications of in-vivo gastral images. In 2019 Digital Image Computing: Techniques and Applications (DICTA), pages 1–7. IEEE.

Messalas, A., Kanellopoulos, Y., and Makris, C. (2019). Model-agnostic interpretability with Shapley values. In Proc. 2019 10th Int. Conf. Information, Intelligence, Systems and Applications (IISA), pages 1–7.

Molnar, C. (2019). Interpretable machine learning: A Guide for Making Black Box Models Explainable. Leanpub.

Monteiro de Aquino, R. and Cozman, F. (2019). Natural language explanations of classifier behavior. In Proc. IEEE Second Int. Conf. Artificial Intelligence and Knowledge Engineering (AIKE), pages 239–242.

Mothilal, R. K., Sharma, A., and Tan, C. (2020). Explaining machine learning classifiers through diverse counterfactual explanations. FAT* ’20, pages 607–617, Barcelona, Spain. Association for Computing Machinery.

Ponce, H. and de Lourdes Martinez-Villaseñor, M. (2017). Interpretability of artificial hydrocarbon networks for breast cancer classification. In Proc. Int. Joint Conf. Neural Networks (IJCNN), pages 3535–3542.

Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). “Why should I trust you?”: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135–1144.

Yang, C., Rangarajan, A., and Ranka, S. (2018). Global model interpretation via recursive partitioning. In Proc. IEEE 20th Int. Conf. High Performance Computing and Communications; IEEE 16th Int. Conf. Smart City; IEEE 4th Int. Conf. Data Science and Systems (HPCC/SmartCity/DSS), pages 1563–1570.

Zhang, W., Zhou, Y., and Yi, B. (2019). An interpretable online learner’s performance prediction model based on learning analytics. In Proceedings of the 2019 11th International Conference on Education Technology and Computers, pages 148–154.
Published: 2021-11-23

ARAÚJO, Ivo de Abreu. Uma Revisão sobre o uso de Frameworks de Interpretabilidade em Aprendizado de Máquina. In: UNIFIED COMPUTING MEETING OF PIAUÍ (ENUCOMPI), 14., 2021, Picos. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. p. 105-112. DOI: https://doi.org/10.5753/enucompi.2021.17760.