Ethics of AI: Do the Face Detection Models Act with Prejudice?

  • Marcos Vinícius Ferreira UFBA / Neodados
  • Ariel Almeida UFBA
  • João Paulo Canario UFBA
  • Matheus Souza Integra - Association of Public Transportation Companies
  • Tatiane Nogueira UFBA
  • Ricardo Rios UFBA

Abstract

This work presents a study of an ethical issue in Artificial Intelligence: the presence of racial bias in face detection. Our analyses were performed on a real-world system designed to detect fraud in public transportation in Salvador, Brazil. The experiments were conducted in three steps. First, we individually analyzed a sample of images and added labels for each user's gender and race. Then, we applied well-established detectors, based on different Convolutional Neural Network architectures, to find faces in the labeled images. Finally, we used statistical tests to assess whether there is a relationship between the detectors' error rates and those labels. According to our results, we observed significant biases leading to higher error rates on images of Black people; errors were more likely for both Black men and Black women. Based on these conclusions, we highlight the risk of deploying computational systems that may harm minority groups that have been historically neglected.
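To make the three-step pipeline concrete, the sketch below shows one plausible implementation. It assumes a hypothetical CSV of manually labeled images (labeled_faces.csv with path, race, and gender columns), uses MTCNN from the `mtcnn` package as a representative CNN-based detector, and applies a chi-squared test of independence as the statistical step; the paper does not publish its code or name its exact tests, so every identifier here is illustrative.

```python
import cv2
import pandas as pd
from mtcnn import MTCNN
from scipy.stats import chi2_contingency

# Hypothetical input: one row per image with the manual race/gender
# labels from the paper's first step. Column names are illustrative.
labels = pd.read_csv("labeled_faces.csv")  # columns: path, race, gender

detector = MTCNN()  # one CNN-based detector; the paper compares several

records = []
for row in labels.itertuples(index=False):
    bgr = cv2.imread(row.path)
    if bgr is None:  # skip unreadable files
        continue
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)   # MTCNN expects RGB input
    found = len(detector.detect_faces(rgb)) > 0  # detection success/failure
    records.append({"race": row.race, "detected": found})

results = pd.DataFrame(records)

# Contingency table of race group vs. detection outcome, followed by a
# chi-squared test of independence between error rates and the label.
table = pd.crosstab(results["race"], results["detected"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.2f} (dof = {dof}), p = {p_value:.4f}")
# A small p-value indicates detection errors are not independent of race.
```

The same crosstab and test could be repeated per detector and per gender label to mirror the paper's full comparison across architectures and demographic groups.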
Keywords: Face detection, Racism, Ethics, Deep learning
Published
29/11/2021
How to Cite

FERREIRA, Marcos Vinícius; ALMEIDA, Ariel; CANARIO, João Paulo; SOUZA, Matheus; NOGUEIRA, Tatiane; RIOS, Ricardo. Ethics of AI: Do the Face Detection Models Act with Prejudice?. In: BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 10., 2021, Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. ISSN 2643-6264.