Evaluation of Ethics Tools in Evaluating Ethical Considerations of Portuguese Language Models

  • Jhessica Silva UNICAMP
  • Alef Ferreira UFG
  • Diego Moreira UNICAMP
  • Gabriel Santos UNICAMP
  • Gustavo Bonil UNICAMP
  • João Gondim UNICAMP
  • Luiz Pereira UNICAMP
  • Helena Maia UNICAMP
  • Nadia Silva UFG
  • Simone Hashiguti UNICAMP
  • Sandra Avila UNICAMP
  • Helio Pedrini UNICAMP

Abstract


This paper presents a study that uses AI Ethics Tools (AIETs) to raise ethical considerations about language models developed for the Portuguese language. AIETs are intended to help developers, companies, governments, and other interested parties establish trust, transparency, and responsibility in their technologies. This work briefly discusses whether AIETs can help developers reflect ethically on their technologies. The study is based on interviews with the developers of four language models, conducted using the AIETs Harms Modeling and Model Cards. The results suggest that AIETs serve as a guide for elaborating ethical considerations but require prior knowledge of AI ethics.
Keywords: Ethics Tools, Language Models, Ethical Considerations Survey

References

Bender, E. M., Gebru, T., McMillan-Major, A., and Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In ACM FAccT, pages 610–623.

Brown, N., Xie, B., Sarder, E., Fiesler, C., and Wiese, E. S. (2024). Teaching Ethics in Computing: A Systematic Literature Review of ACM Computer Science Education Publications. ACM Transactions on Computing Education, 24(1):1–36.

Brown, T., Mann, B., Ryder, N., Subbiah, M., et al. (2020). Language Models are Few-Shot Learners. In NeurIPS, volume 33, pages 1877–1901.

Goetze, T. S. (2023). Integrating Ethics into Computer Science Education: Multi-, Inter-, and Transdisciplinary Approaches. In ACM SIGCSE, pages 645–651.

Hershcovich, D., Frank, S., Lent, H., de Lhoneux, M., Abdou, M., Brandl, S., Bugliarello, E., Cabello Piqueras, L., Chalkidis, I., Cui, R., Fierro, C., Margatina, K., Rust, P., and Søgaard, A. (2022). Challenges and Strategies in Cross-Cultural NLP. In ACL, pages 6997–7013.

Hovy, D. and Prabhumoye, S. (2021). Five Sources of Bias in Natural Language Processing. Language and Linguistics Compass, 15(8):e12432.

Johnson, R. L., Pistilli, G., Menéndez-González, N., Duran, L. D. D., Panai, E., Kalpokiene, J., and Bertulfo, D. J. (2022). The Ghost in the Machine Has an American Accent: Value Conflict in GPT-3.

Kemp, S. (2023). Digital 2023: Global Overview Report. We Are Social.

Microsoft (2022). Harms Modeling – Azure Application Architecture Guide.

Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., and Gebru, T. (2019). Model Cards for Model Reporting. In ACM FAccT, pages 220–229.

OpenAI (2023). GPT-4 Technical Report.

Radford, A., Narasimhan, K., Salimans, T., and Sutskever, I. (2018). Improving Language Understanding by Generative Pre-Training. OpenAI.

Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2019). Language Models are Unsupervised Multitask Learners. OpenAI Blog, 1(8):9.

Santos, G., Moreira, D., Ferreira, A., Silva, J., Pereira, L., Bueno, P., Sousa, T., Maia, H., da Silva, N., Colombini, E., Pedrini, H., and Avila, S. (2023). CAPIVARA: Cost-Efficient Approach for Improving Multilingual CLIP Performance on Low-Resource Languages. In Workshop on Multi-lingual Representation Learning (MRL), EMNLP, pages 184–207.
Published
2024-11-27
SILVA, Jhessica et al. Evaluation of Ethics Tools in Evaluating Ethical Considerations of Portuguese Language Models. In: LATIN AMERICAN ETHICS ON ARTIFICIAL INTELLIGENCE, 1., 2024, Niterói. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 61-64. DOI: https://doi.org/10.5753/laai-ethics.2024.32452.
