Communicating Ethical Considerations in Generative AI Systems

Abstract


Introduction: Although Generative AI brings gains in automation and efficiency, its growing adoption raises ethical concerns. Objective: This study analyzes how popular generative AI systems communicate ethical considerations to their users. Methodology: We applied the Semiotic Inspection Method, supported by a Semiotic Engineering-based epistemic tool, to evaluate ChatGPT, Gemini, and Claude. The analysis was guided by the principles of Beneficence, Non-Maleficence, Autonomy, Justice, and Explicability. Results: The findings reveal inconsistent and opaque ethical communication across the three systems. The study maps the design space of generative AI in relation to ethics and demonstrates the method's value in advancing ethical AI research.

Keywords: Ethics of AI, Semiotic Inspection Method, Generative AI, Ethical Design

Published
08/09/2025
GOMES, Libiane; SANTANA SILVEIRA, João Carlos; MARTINS, Helena; APARECIDA LANA, Cristiane; BENTO VILLELA, Maria Lúcia. Communicating Ethical Considerations in Generative AI Systems. In: SIMPÓSIO BRASILEIRO SOBRE FATORES HUMANOS EM SISTEMAS COMPUTACIONAIS (IHC), 24., 2025, Belo Horizonte/MG. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 718-742. DOI: https://doi.org/10.5753/ihc.2025.10939.