Towards a process for Trustworthy AI systems development
Abstract
This work aims to define a comprehensive Software Development Life Cycle (SDLC) tailored to Trustworthy Artificial Intelligence (AI) systems. Such systems are understood as complete software solutions that incorporate AI components (such as machine learning models or intelligent agents) whose behavior must align with ethical principles, legal requirements, and technical robustness. The proposed SDLC is grounded in a multidimensional taxonomy of trustworthiness covering lawfulness, non-maleficence, beneficence, autonomy, justice, explicability, and technology. By integrating these principles into every development phase, the SDLC supports the design of AI systems that are not only effective and innovative but also aligned with human values, regulations, and societal expectations. The methodology follows a Design Science Research approach, ensuring the model's relevance, feasibility, and adaptability to evolving technological and regulatory contexts.
