Bias in Machine Learning and its social implications: a case study on facial recognition
Abstract
This work studies the biases introduced during the machine learning process and their moral, ethical, and social implications. We revisit a framework that situates the different types of bias within the stages of the machine learning pipeline, from data collection, through pre-processing, to post-processing. We then present a case study on facial recognition to illustrate the biases that can potentially be introduced during these stages and their social implications.
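The framework above locates bias sources along the pipeline; audits such as Buolamwini and Gebru (2018) quantify one common symptom, unequal accuracy across demographic groups. The Python sketch below shows one way such a disparity can be measured; the data, the group labels, and the accuracy_by_group helper are hypothetical illustrations, not material from the paper.

# Illustrative sketch (not from the paper): measuring an accuracy gap
# across demographic subgroups, in the spirit of the "Gender Shades"
# audit (Buolamwini and Gebru, 2018). All data here is synthetic.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return per-group accuracy for a classifier's predictions."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

# Synthetic face-matching labels: 1 = correct match, 0 = no match.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

per_group = accuracy_by_group(y_true, y_pred, groups)
gap = max(per_group.values()) - min(per_group.values())
print(per_group)                   # {'A': 0.75, 'B': 0.5}
print(f"accuracy gap: {gap:.2f}")  # large gaps flag potential bias

A large gap between the best- and worst-served groups (here 0.25) is the kind of signal that motivates the bias-mitigation toolkits cited in the references (e.g., AI Fairness 360).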
Keywords:
machine learning, bias, facial recognition
References
Almeida, S. (2019). Racismo estrutural. Pólen Produção Editorial LTDA.
Bellamy, R. K., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., Lohia, P., Martino, J., Mehta, S., Mojsilovic, A., et al. (2018). AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv preprint arXiv:1810.01943.
Bissoto, A., Fornaciali, M., Valle, E., and Avila, S. (2019). (De)Constructing bias on skin lesion datasets. In IEEE Conference on Computer Vision and Pattern Recognition Workshops.
Buolamwini, J. and Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency, pages 77–91.
Burkov, A. (2019). The hundred-page machine learning book (in Portuguese).
Castelvecchi, D. (2020). Is facial recognition too biased to be let loose? Nature, 587(7834):347–349.
Caton, S. and Haas, C. (2020). Fairness in machine learning: A survey. arXiv preprint arXiv:2010.04053.
Francisco, P. A. P., Hurel, L. M., and Rielli, M. M. (2020). Regulação do reconhecimento facial no setor público. Data Privacy Brasil. https://igarape.org.br/wp-content/uploads/2020/06/2020-06-09-Regulao-do-reconhecimento-facial-no-setor-pblico.pdf.
Instituto Igarapé (2019). Reconhecimento facial no Brasil. https://igarape.org.br/infografico-reconhecimento-facial-no-brasil.
Latour, B. (1999). Pandora’s hope: essays on the reality of science studies. Harvard University Press.
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., and Galstyan, A. (2019). A survey on bias and fairness in machine learning. arXiv preprint arXiv:1908.09635.
Molina, D., Causa, L., and Tapia, J. (2020). Toward to reduction of bias for gender and ethnicity from face images using automated skin tone classification. In International Conference of the Biometrics Special Interest Group, pages 281–289.
Moraes, T. G., Almeida, E. C., and de Pereira, J. R. L. (2020). Smile, you are being identified! risks and measures for the use of facial recognition in (semi-) public spaces. AI and Ethics, pages 1–14.
Nunes, P. (2019). Novas ferramentas, velhas práticas: reconhecimento facial e policiamento no Brasil. Retratos da violência: cinco meses de monitoramento, análise e descobertas (Rede de Observatório de Segurança). http://observatorioseguranca.com.br/wp-content/uploads/2019/11/1relatoriorede.pdf.
Olteanu, A., Castillo, C., Diaz, F., and Kıcıman, E. (2019). Social data: Biases, methodological pitfalls, and ethical boundaries. Frontiers in Big Data, 2:13.
O’Neil, C. (2020). Algoritmos de Destruição em Massa. Editora Rua do Sabão, 1st edition.
ONU Mulheres, Insper, M. M., P. B. (2016). Vieses inconscientes, equidade de gênero e o mundo corporativo: lições da oficina vieses inconscientes. https://www.onumulheres.org.br/wp-content/uploads/2016/04/Vieses inconscientes 16 digital.pdf.
Pinch, T. J. (1992). Opening black boxes: Science, technology and society. Social Studies of Science, 22(3):487–510
Rosa, A., Pessoa, S. A., and Lima, F. S. (2020). Neutralidade tecnológica: reconhecimento facial e racismo. Revista V!RUS, 21. http://www.nomads.usp.br/virus/virus21/?sec=4&item=9&lang=pt.
Silva, T. (2019). Visão computacional e vieses racializados: branquitude como padrão no aprendizado de máquina. II COPENE Nordeste: Epistemologias Negras e Lutas Antirracistas, pages 29–31.
Silva, T. and Birhane, A. (2020). Comunidades, algoritmos e ativismos digitais: olhares afrodiaspóricos. LiteraRua.
Suresh, H. and Guttag, J. V. (2019). A framework for understanding unintended consequences of machine learning. arXiv preprint arXiv:1901.10002.
Vilarino, R. and Vicente, R. (2021). Dissecting racial bias in a credit scoring system experimentally developed for the Brazilian population. arXiv preprint arXiv:2011.09865.
Published
2021-07-19
How to Cite
RUBACK, Lívia; AVILA, Sandra; CANTERO, Lucia. Bias in Machine Learning and its social implications: a case study on facial recognition. In: WORKSHOP ON THE IMPLICATIONS OF COMPUTING IN SOCIETY (WICS), 2., 2021, Evento Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. p. 90-101. ISSN 2763-8707. DOI: https://doi.org/10.5753/wics.2021.15967.
