Abstract
This work presents a study on an ethical issue in Artificial Intelligence: the presence of racial biases in face detection models. Our analyses were performed on a real-world system designed to detect fraud in public transportation in Salvador (Brazil). Our experiments were conducted in three steps. First, we individually analyzed a sample of images and annotated each one with labels for the user's gender and race. Next, we applied well-established detectors, based on different Convolutional Neural Network architectures, to find faces in the previously labeled images. Finally, we used statistical tests to assess whether there is a relation between the error rates and these labels. According to our results, we noticed important biases, leading to higher error rates when images were taken of Black people. We also noticed that errors are more likely for both Black men and Black women. Based on our conclusions, we highlight the risk of deploying computational software that might adversely affect minority groups that have been historically neglected.
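To make the three-step evaluation concrete, the sketch below illustrates one way the second and third steps could be carried out: run an off-the-shelf CNN-based face detector over the labeled images and test whether detection failures are associated with the race label. This is a minimal illustration under stated assumptions, not the authors' implementation; the MTCNN detector from the facenet-pytorch package, the CSV layout (path, race, gender columns), the file name labeled_images.csv, and the choice of a chi-square test of independence are all assumptions made for the example.

# Minimal sketch (not the authors' code): detect faces in labeled images
# with an off-the-shelf CNN detector and test whether detection failures
# are independent of the race label.
import pandas as pd
from PIL import Image
from facenet_pytorch import MTCNN            # assumed detector; any CNN-based detector would do
from scipy.stats import chi2_contingency

# Hypothetical CSV with one row per image: path,race,gender
labels = pd.read_csv("labeled_images.csv")

detector = MTCNN(keep_all=True, device="cpu")

def face_detected(path):
    """Return True if the detector finds at least one face in the image."""
    img = Image.open(path).convert("RGB")
    boxes, _ = detector.detect(img)
    return boxes is not None and len(boxes) > 0

# An "error" here means the detector failed to find a face in an image known to contain one.
labels["error"] = [not face_detected(p) for p in labels["path"]]

# Contingency table of detection errors by race label, then a chi-square test of independence.
table = pd.crosstab(labels["race"], labels["error"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p-value = {p_value:.4f}")
# A small p-value suggests that detection errors and the race label are not independent.

The same contingency-table analysis can be repeated per detector architecture and for the gender label, which mirrors the comparison across labels described in the abstract.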
Acknowledgment
This work was partially supported by CAPES (Coordination for the Improvement of Higher Education Personnel – Brazilian Federal Government Agency). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this research. Finally, we also thank the company Integra, responsible for public transportation in Salvador, for supporting this work. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of CAPES, NVIDIA, and Integra.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Ferreira, M.V., Almeida, A., Canario, J.P., Souza, M., Nogueira, T., Rios, R. (2021). Ethics of AI: Do the Face Detection Models Act with Prejudice?. In: Britto, A., Valdivia Delgado, K. (eds) Intelligent Systems. BRACIS 2021. Lecture Notes in Computer Science, vol 13074. Springer, Cham. https://doi.org/10.1007/978-3-030-91699-2_7
DOI: https://doi.org/10.1007/978-3-030-91699-2_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-91698-5
Online ISBN: 978-3-030-91699-2
eBook Packages: Computer Science, Computer Science (R0)