Abstract
Machine learning models are widespread across many fields owing to their remarkable performance on a variety of tasks. Some applications demand greater interpretability, which often means understanding the mechanism underlying the algorithm. Feature importance is the most common form of explanation and is essential in data mining, especially in applied research. Analysts frequently need to compare the effect of features over time, across models, or even across studies; for this, a single metric per feature, shared by all models, is more suitable, as it gives better first-order insight into feature behavior across these scenarios. The \(\beta\)-coefficients of additive models, such as logistic regression, have been widely used for this purpose: they describe the relationship between a predictor and the outcome in a single number that indicates both its direction and its size. For black-box models, however, no metric with these characteristics exists, and even the \(\beta\)-coefficients of logistic regression have limitations. This paper discusses those limitations together with existing alternatives for overcoming them, and proposes new metrics of feature importance. Like the coefficients, these metrics indicate the size and direction of a feature's effect, but on the probability scale and within a model-agnostic framework. An experiment on openly available breast cancer data from the UCI Archive verified the suitability of these metrics, and a second experiment on real-world data demonstrated how they can be helpful in practice.
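To make the idea concrete, below is a minimal Python sketch of the kind of summary metric the abstract describes: a single signed number per feature, on the probability scale, obtained here by averaging ALE-style local effects of a black-box classifier. The helper name `ale_summary`, the bin count, the random forest model, and the use of scikit-learn's copy of the UCI breast cancer data are illustrative assumptions, not the authors' implementation (their code is linked in the Notes below).

```python
# Illustrative sketch only: summarizes a feature's accumulated-local-effects
# (ALE) curve into one signed number on the probability scale, loosely
# analogous to an average marginal effect. Not the authors' implementation.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

def ale_summary(model, X, feature, n_bins=20):
    """Average local effect of `feature` on the predicted probability.

    Returns a single signed value: roughly, how much the predicted
    probability of the positive class changes per unit increase of the
    feature, averaged over the data distribution.
    """
    x = X[:, feature]
    # Quantile-based bin edges so each bin holds a similar number of points.
    edges = np.unique(np.quantile(x, np.linspace(0, 1, n_bins + 1)))
    local_effects = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (x >= lo) & (x <= hi)
        if not mask.any():
            continue
        X_lo, X_hi = X[mask].copy(), X[mask].copy()
        X_lo[:, feature] = lo
        X_hi[:, feature] = hi
        # Local effect: change in predicted probability across the bin,
        # normalized by the bin width to put it on a per-unit scale.
        delta = model.predict_proba(X_hi)[:, 1] - model.predict_proba(X_lo)[:, 1]
        local_effects.append(delta.mean() / (hi - lo))
    return float(np.mean(local_effects)) if local_effects else 0.0

# Usage on the UCI breast cancer data mentioned in the abstract.
data = load_breast_cancer()
clf = RandomForestClassifier(random_state=0).fit(data.data, data.target)
for i, name in enumerate(data.feature_names[:5]):
    print(f"{name}: {ale_summary(clf, data.data, i):+.4f}")
```

A positive value suggests that increasing the feature raises the predicted probability of the positive class, mirroring how the sign and magnitude of a logistic regression coefficient are read, but without assuming any particular model form.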
Notes
1. The implementation code can be found in this repository: https://github.com/rogerioluizsi/summary_ale.git.
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Silva Filho, R.L.C., Adeodato, P.J.L., dos Santos Brito, K. (2021). Interpreting Classification Models Using Feature Importance Based on Marginal Local Effects. In: Britto, A., Valdivia Delgado, K. (eds) Intelligent Systems. BRACIS 2021. Lecture Notes in Computer Science, vol. 13073. Springer, Cham. https://doi.org/10.1007/978-3-030-91702-9_32
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-91701-2
Online ISBN: 978-3-030-91702-9
eBook Packages: Computer Science, Computer Science (R0)