Contrasting Explain-ML with Interpretability Machine Learning Tools in Light of Interactive Machine Learning Principles

Authors

B. G. C. O. Lopes; L. S. Soares; R. O. Prates; M. A. Gonçalves

DOI:

https://doi.org/10.5753/jis.2022.2556

Keywords:

Human-centered computing, User studies, Information visualization, Computing methodologies, Machine Learning, Semiotic inspection method

Abstract

How complex Machine Learning (ML) models generate their results is not fully understood, even by very knowledgeable users. If users cannot interpret or trust the predictions a model generates, they will not use them. Furthermore, the human role is often not properly considered in the development of ML systems. In this article, we present the design, implementation and evaluation of Explain-ML, an Interactive Machine Learning (IML) system for Explainable Machine Learning that follows the principles of Human-Centered Machine Learning (HCML). We assess the user experience with the Explain-ML interpretability strategies and contrast it with an analysis of how other IML tools address the IML principles. To do so, we analyzed the results of the evaluation of Explain-ML with potential users in light of principles for IML system design and conducted a systematic inspection of three other tools – RuleMatrix, Explanation Explorer and ATMSeer – using the Semiotic Inspection Method (SIM). Our results yielded positive indicators regarding Explain-ML and the process that guided its development. Our analyses also highlighted aspects of the IML principles that are relevant from the users’ perspective. By contrasting the Explain-ML results with the SIM inspections of the other tools, we were able to identify common interpretability strategies. We believe that the results reported in this work contribute to the understanding and consolidation of the IML principles, ultimately advancing knowledge in HCML.

References

Adebayo, J. and Kagal, L. (2016). Iterative orthogonal feature projection for diagnosing bias in black-box models. arXiv preprint arXiv:1611.04967.

Adler, P., Falk, C., Friedler, S. A., Nix, T., Rybeck, G., Scheidegger, C., Smith, B., and Venkatasubramanian, S. (2018). Auditing black-box models for indirect influence. Knowledge and Information Systems, 54(1):95–122.

Carroll, J. (2000). Introduction to this Special Issue on “Scenario-Based System Development”. Interacting with Computers, 13(1):41–42.

Cortez, P. and Embrechts, M. J. (2011). Opening black box data mining models using sensitivity analysis. In 2011 IEEE Symposium on Computational Intelligence and Data Mining (CIDM), pages 341–348. IEEE.

Cunha, W., Mangaravite, V., Gomes, C., Canuto, S., Resende, E., Nascimento, C., Viegas, F., França, C., Martins, W. S., Almeida, J. M., et al. (2021). On the cost-effectiveness of neural and non-neural approaches and representations for text classification: A comprehensive comparative study. Information Processing & Management, 58(3):102481.

De Souza, C. S. (2005). The semiotic engineering of human-computer interaction. MIT Press.

De Souza, C. S. and Leitão, C. F. (2009). Semiotic engineering methods for scientific research in HCI. Synthesis Lectures on Human-Centered Informatics, 2(1):1–122.

De Souza, C. S., Leitão, C. F., Prates, R. O., and Da Silva, E. J. (2006). The semiotic inspection method. In Proc. of the VII Brazilian Symposium on Human Factors in Computing Systems, pages 148–157.

Demiralp, Ç. (2016). Clustrophile: A tool for visual clustering analysis. In KDD 2016 Workshop on Interactive Data Exploration and Analytics, pages 37–45.

Doshi-Velez, F. and Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608.

Dudley, J. J. and Kristensson, P. O. (2018). A review of user interface design for interactive machine learning. ACM TIIS, 8(2):8.

Štrumbelj, E. and Kononenko, I. (2010). An efficient explanation of individual classifications using game theory. Journal of Machine Learning Research, 11(Jan):1–18.

Fails, J. A. and Olsen Jr, D. R. (2003). Interactive machine learning. In Proc. of the 8th International Conference on Intelligent User Interfaces, pages 39–45.

Fiebrink, R. and Gillies, M. (2018). Introduction to the special issue on human-centered machine learning. ACM TIIS, 8(2):7.

Flick, U. (2008a). Designing qualitative research. Sage Publications Ltd., 1st edition.

Flick, U. (2008b). Managing quality in qualitative research. Sage Publications Ltd., 1st edition.

Gillies, M., Fiebrink, R., Tanaka, A., Garcia, J., Bevilacqua, F., Heloir, A., Nunnari, F., Mackay, W., Amershi, S., Lee, B., et al. (2016). Human-centred machine learning. In Proc. of the 2016 CHI, pages 3558–3565.

Goldstein, A., Kapelner, A., Bleich, J., and Pitkin, E. (2015). Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation. J. of Comput. and Graphical Statistics, 24(1):44–65.

Goodman, B. and Flaxman, S. (2017). European Union regulations on algorithmic decision-making and a “right to explanation”. AI Magazine, 38(3):50–57.

Guidotti, R., Monreale, A., Ruggieri, S., Pedreschi, D., Turini, F., and Giannotti, F. (2018a). Local rule-based explanations of black box decision systems. arXiv preprint arXiv:1805.10820.

Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., and Pedreschi, D. (2018b). A survey of methods for explaining black box models. ACM computing surveys (CSUR), 51(5):93.

Hall, A., Bosevski, D., and Larkin, R. (2006). Blogging by the dead. In Proc. of the 4th Nordic conference on Human-computer interaction: changing roles, pages 425–428.

Hall, P. and Gill, N. (2018). Introduction to Machine Learning Interpretability. O’Reilly Media, Incorporated.

Han, Q., Zhu, W., Heimerl, F., Koch, S., and Ertl, T. (2016). A visual approach for interactive co-training. In KDD 2016 Workshop on Interactive Data Exploration and Analytics, pages 46–52.

Hooker, G. (2004). Discovering additive structure in black box functions. In Proc. of ACM SIGKDD, pages 575–580.

Krause, J., Dasgupta, A., Swartz, J., Aphinyanaphongs, Y., and Bertini, E. (2017). A workflow for visual diagnostics of binary classifiers using instance-level explanations. In 2017 IEEE Conference on Visual Analytics Science and Technology (VAST), pages 162–172. IEEE.

Krause, J., Perer, A., and Bertini, E. (2016a). Using visual analytics to interpret predictive machine learning models. In Proc. of the 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016).

Krause, J., Perer, A., and Bertini, E. (2018). A user study on the effect of aggregating explanations for interpreting machine learning models. In ACM KDD Workshop on Interactive Data Exploration and Analytics.

Krause, J., Perer, A., and Ng, K. (2016b). Interacting with predictions: Visual inspection of black-box machine learning models. In Proc. of the 2016 CHI, pages 5686–5697.

Cloudera Fast Forward Labs (2020). Interpretability, Report FF06. Technical report, Cloudera Fast Forward Labs. [link].

Lazar, J., Feng, J. H., and Hochheiser, H. (2017). Research methods in human-computer interaction. Morgan Kaufmann.

Linardatos, P., Papastefanopoulos, V., and Kotsiantis, S. (2021). Explainable AI: A review of machine learning interpretability methods. Entropy, 23(1):18.

Lopes, B. G., Soares, L. S., Prates, R. O., and Gonçalves, M. A. (2021). Analysis of the user experience with a multiperspective tool for explainable machine learning in light of interactive principles. In Proc. of the XX Brazilian Symposium on Human Factors in Computing Systems, pages 1–11.

Lopes, B. G. C. O. (2020). Explain-ml: A human-centered multiperspective and interactive visual tool for explainable machine learning. Master’s thesis, Universidade Federal de Minas Gerais, Belo Horizonte, Minas Gerais, Brasil.

Lundberg, S. M. and Lee, S.-I. (2017). A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, pages 4765–4774.

Madsen, S. and Nielsen, L. (2010). Exploring persona-scenarios - using storytelling to create design ideas. In Katre, D., Orngreen, R., Yammiyavar, P., and Clemmensen, T., editors, Human Work Interaction Design, pages 57–66.

Ming, Y., Qu, H., and Bertini, E. (2019). RuleMatrix: Visualizing and understanding classifiers with rules. IEEE Transactions on Visualization and Computer Graphics, 25(1):342–352.

Mohseni, S., Zarei, N., and Ragan, E. D. (2021). A multidisciplinary survey and framework for design and evaluation of explainable ai systems. ACM TIIS, 11(3-4):1–45.

Mosqueira-Rey, E., Pereira, E. H., Alonso-Ríos, D., and Bobes-Bascarán, J. (2022). A classification and review of tools for developing and interacting with machine learning systems. In Proc. of the 37th ACM/SIGAPP Symposium on Applied Computing, pages 1092–1101.

Neto, M. P. and Paulovich, F. V. (2021). Explainable matrix - visualization for global and local interpretability of random forest classification ensembles. IEEE Transactions on Visualization and Computer Graphics, 27(2):1427–1437.

Pereira, F. H. S., Prates, R. O., Maciel, C., and Pereira, V. C. (2017). Combining configurable interaction anticipation challenges and volitional aspects in the analysis of digital posthumous communication systems. SBC Journal on Interactive Systems, 8(2):77–88.

Preece, J., Sharp, H., and Rogers, Y. (2019a). Interaction Design: Beyond Human - Computer Interaction. Wiley Publishing, 5th edition.

Preece, J., Sharp, H., and Rogers, Y. (2019b). Interaction design: beyond human-computer interaction, page 408. John Wiley & Sons.

Ramos, G., Suh, J., Ghorashi, S., Meek, C., Banks, R., Amershi, S., Fiebrink, R., Smith-Renner, A., and Bansal, G. (2019). Emerging perspectives in human-centered machine learning. In Extended Abstracts of the 2019 CHI Conference, page W11.

Ribeiro, M. T., Singh, S., and Guestrin, C. (2016). Why should I trust you?: Explaining the predictions of any classifier. In Proc. of the 22nd ACM SIGKDD, pages 1135–1144.

Ribeiro, M. T., Singh, S., and Guestrin, C. (2018). Anchors: High-precision model-agnostic explanations. In Thirty-Second AAAI Conference on Artificial Intelligence.

Rosson, M. B. and Carroll, J. M. (2002). Scenario-based usability engineering. In Proc. of the 4th conference on Designing interactive systems: processes, practices, methods, and techniques, pages 413–413.

Singh, S., Ribeiro, M. T., and Guestrin, C. (2016). Programs as black-box explanations. arXiv preprint arXiv:1611.07579.

Smilkov, D., Carter, S., Sculley, D., Viégas, F. B., and Wattenberg, M. (2016). Direct-manipulation visualization of deep networks. In KDD 2016 Workshop on Interactive Data Exploration and Analytics, pages 115–119.

Tolomei, G., Silvestri, F., Haines, A., and Lalmas, M. (2017). Interpretable predictions of tree-based ensembles via actionable feature tweaking. In Proc. of the 23rd ACM SIGKDD, pages 465–474.

Turner, R. (2016). A model explanation system. In 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP), pages 1–6. IEEE.

Vidovic, M. M.-C., Görnitz, N., Müller, K.-R., and Kloft, M. (2016). Feature importance measure for non-linear learning algorithms. arXiv preprint arXiv:1611.07567.

Vilone, G. and Longo, L. (2021). Notions of explainability and evaluation approaches for explainable artificial intelligence. Information Fusion, 76:89–106.

Wang, Q., Ming, Y., Jin, Z., Shen, Q., Liu, D., Smith, M. J., Veeramachaneni, K., and Qu, H. (2019). ATMSeer: Increasing transparency and controllability in automated machine learning. In Proc. of the 2019 CHI, page 681.

Wondimu, N. A., Buche, C., and Visser, U. (2022). Interactive machine learning: A state of the art review. arXiv preprint arXiv:2207.06196.

Zhang, J., Wang, Y., Molino, P., Li, L., and Ebert, D. S. (2019). Manifold: A model-agnostic framework for interpretation and diagnosis of machine learning models. IEEE Transactions on Visualization and Computer Graphics, 25(1):364–373.

Zhao, X., Wu, Y., Lee, D. L., and Cui, W. (2019). iForest: Interpreting random forests via visual analytics. IEEE Transactions on Visualization and Computer Graphics, 25(1):407–416.

Zhou, J., Gandomi, A. H., Chen, F., and Holzinger, A. (2021). Evaluating the quality of machine learning explanations: A survey on methods and metrics. Electronics, 10(5):593.

Published

2022-11-21

How to Cite

LOPES, B. G. C. O.; SOARES, L. S.; PRATES, R. O.; GONÇALVES, M. A. Contrasting Explain-ML with Interpretability Machine Learning Tools in Light of Interactive Machine Learning Principles. Journal on Interactive Systems, Porto Alegre, RS, v. 13, n. 1, p. 313–334, 2022. DOI: 10.5753/jis.2022.2556. Available at: https://sol.sbc.org.br/journals/index.php/jis/article/view/2556. Accessed: 19 Apr. 2024.

Issue

Vol. 13 No. 1 (2022)

Section

Regular Paper
