State of the Art on Requirements Engineering and Explainability in Machine Learning-Based Systems

  • Lívia Mancine (IFG / UFG)
  • João Lucas Soares (UFG)
  • Taciana Novo Kudo (UFG)
  • Renato F. Bulcão-Neto (UFG)

Abstract


With the recent growth in the use of Machine Learning (ML)-based software, concerns have arisen about explaining the results these systems generate. Explanations promote transparency and increase stakeholder trust. Explainability, the term used to refer to these explanations, is considered a non-functional requirement (NFR) that substantially impacts the quality of ML systems, and it has become a mandatory requirement under laws in several countries. Additionally, Explainable Artificial Intelligence (XAI) is a field that studies methods supporting explainability in ML-based systems, focusing mainly on technical explanations. This study is not limited to technical explanations; it provides a comprehensive overview of Requirements Engineering (RE) and the explainability requirement in ML-based systems. To this end, we planned and executed a Systematic Mapping Study protocol with automatic searches in six databases. Of the 200 articles returned, 27 satisfied the selection criteria and were analyzed and reported. Our findings reveal that explainability is an emerging quality NFR in ML-based systems that challenges classical RE paradigms.
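As an illustration of the selection step described above, the Python sketch below shows how records returned by automatic database searches might be filtered against inclusion/exclusion criteria. The Record fields, criteria, time window, and sample data are hypothetical placeholders and do not come from the study's actual protocol.

    # Minimal, hypothetical sketch of the study-selection step of a Systematic
    # Mapping Study: retrieved records are filtered against illustrative
    # inclusion/exclusion criteria (not the criteria used in the paper).
    from dataclasses import dataclass

    @dataclass
    class Record:
        title: str
        year: int
        venue: str
        peer_reviewed: bool

    def satisfies_criteria(rec: Record) -> bool:
        """Illustrative inclusion/exclusion check for a single retrieved record."""
        mentions_topic = any(term in rec.title.lower()
                             for term in ("explainab", "requirement", "machine learning"))
        recent_enough = rec.year >= 2019  # hypothetical time window, not the paper's
        return rec.peer_reviewed and recent_enough and mentions_topic

    # Records as they might come back from the automatic searches (made-up examples).
    retrieved = [
        Record("Explainability as a non-functional requirement", 2019, "RE", True),
        Record("A general opinion piece on AI hype", 2023, "blog", False),
        Record("Requirements engineering for machine learning", 2021, "SEAA", True),
    ]

    selected = [rec for rec in retrieved if satisfies_criteria(rec)]
    print(f"{len(selected)} of {len(retrieved)} records kept for analysis")

In a real protocol, the criteria would also cover duplicates, language, and study type, and each exclusion would be documented for traceability.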

Keywords: Requirements Engineering, Explainability, Machine Learning, Systematic Mapping Study

Published
14/10/2024
MANCINE, Lívia; SOARES, João Lucas; KUDO, Taciana Novo; BULCÃO-NETO, Renato F. Estado da Arte sobre Engenharia de Requisitos e Explicabilidade em Sistemas Baseados em Aprendizado de Máquina. In: WORKSHOP DE REVISÕES SISTEMÁTICAS DE LITERATURA EM SISTEMAS MULTIMÍDIAS E WEB - SIMPÓSIO BRASILEIRO DE SISTEMAS MULTIMÍDIA E WEB (WEBMEDIA), 30., 2024, Juiz de Fora/MG. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 143-158. ISSN 2596-1683. DOI: https://doi.org/10.5753/webmedia_estendido.2024.243944.