The Systemic Causes Behind Gender Bias in AI: A Systematic Literature Mapping
Abstract
Artificial Intelligence (AI) has been transforming daily life, shaping critical decisions in fields such as healthcare, security, and the labor market. Its use, however, raises ethical and social concerns, particularly regarding the biases embedded in AI systems. This study addresses gender bias specifically, investigating its systemic causes (biased data, algorithmic bias, or user behavior) through a Systematic Literature Mapping (SLM). The search retrieved 176 studies from the IEEE Xplore, ACM Digital Library, and Scopus databases; guided by inclusion and exclusion criteria, the selection process deemed 12 of them relevant to the study’s objective. The analysis identified biased data as the primary cause of gender bias, amplified by algorithmic bias in half of the selected studies. User behavior was minimally explored, being the main focus of only one article.
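As a minimal sketch of how a screening step like the one described above can be tallied, consider the Python fragment below. The paper publishes no tooling, so every record field, criterion, and count here is hypothetical; it only illustrates how inclusion and exclusion criteria narrow a retrieved set down to the selected studies.

from dataclasses import dataclass

@dataclass
class Record:
    title: str
    source: str                # "IEEE Xplore", "ACM Digital Library", or "Scopus"
    peer_reviewed: bool        # example inclusion criterion (assumed)
    on_systemic_causes: bool   # example inclusion criterion (assumed)

def include(record: Record) -> bool:
    # A study is selected only if it meets every inclusion criterion;
    # failing any criterion acts as an exclusion criterion.
    return record.peer_reviewed and record.on_systemic_causes

# Toy stand-in for the 176 retrieved studies.
retrieved = [
    Record("Bias in word embeddings", "ACM Digital Library", True, True),
    Record("A general survey of chatbots", "IEEE Xplore", True, False),
    Record("Unreviewed workshop abstract", "Scopus", False, True),
]

selected = [r for r in retrieved if include(r)]
print(f"retrieved={len(retrieved)}, selected={len(selected)}")  # retrieved=3, selected=1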
References
Brunet, M.-E., Alkalay-Houlihan, C., Anderson, A., and Zemel, R. (2019). Understanding the origins of bias in word embeddings. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), pages 1275–1294, Long Beach, California, USA. PMLR.
Cheong, J., Kuzucu, S., Kalkan, S., and Gunes, H. (2023). Towards gender fairness for mental health prediction. In Elkind, E., editor, Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI-23, pages 5932–5940. International Joint Conferences on Artificial Intelligence Organization. AI for Good track.
Cho, W. I., Kim, J., Yang, J., and Kim, N. S. (2021). Towards cross-lingual generalization of translation gender bias. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, pages 449–457, New York, NY, USA. Association for Computing Machinery.
da Silva Souza, N. C. (2023). Uma abordagem para identificação do viés de gênero em modelos de PLN [An approach to identifying gender bias in NLP models]. Trabalho de Conclusão de Curso (MBA), Instituto de Ciências Matemáticas e de Computação, Universidade de São Paulo, São Carlos.
Dervişoğlu, H. and Amasyalı, M. F. (2021). Bias detection and mitigation in sentiment analysis. In 2021 Innovations in Intelligent Systems and Applications Conference (ASYU), pages 1–6.
Ghosh, S. and Caliskan, A. (2023). ChatGPT perpetuates gender bias in machine translation and ignores non-gendered pronouns: Findings across Bengali and five other low-resource languages. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’23, pages 901–912, New York, NY, USA. Association for Computing Machinery.
GLOBO (2024). Na educação, na saúde e até dentro de casa: como a inteligência artificial já faz parte da nossa rotina. g1.globo.com. [link]. Accessed on 3 Feb. 2025.
Hall, P. and Ellis, D. (2023). A systematic review of socio-technical gender bias in AI algorithms. Online Information Review, 47(7):1264–1279.
Kopeinik, S., Mara, M., Ratz, L., Krieg, K., Schedl, M., and Rekabsaz, N. (2023). Show me a “male nurse”! How gender bias is reflected in the query formulation of search engine users. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, CHI ’23, New York, NY, USA. Association for Computing Machinery.
Lima, R. M. d., Pisker, B., and Correa, V. S. (2023). Journal of Telecommunications and the Digital Economy, 11(2):8–30.
Matthews, S., Hudzina, J., and Sepehr, D. (2022). Gender and racial stereotype detection in legal opinion word embeddings. In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI 2022), volume 36, pages 12026–12033, Vancouver, Canada. AAAI Press.
Nadeem, A., Abedin, B., and Marjanovic, O. (2020). Gender bias in AI: A review of contributing factors and mitigating strategies. In ACIS 2020 Proceedings - 31st Australasian Conference on Information Systems.
Nadeem, A., Marjanovic, O., and Abedin, B. (2022). Gender bias in AI-based decision-making systems: a systematic literature review. Australasian Journal of Information Systems, 26.
Njoto, S., Cheong, M., Lederman, R., McLoughney, A., Ruppanner, L., and Wirth, A. (2022). Gender bias in AI recruitment systems: A sociological- and data science-based case study. In 2022 IEEE International Symposium on Technology and Society (ISTAS), volume 1, pages 1–7.
Parreira, M. T., Gillet, S., Winkle, K., and Leite, I. (2023). How did we miss this? A case study on unintended biases in robot social behavior. In Companion of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, HRI ’23, pages 11–20, New York, NY, USA. Association for Computing Machinery.
Rizhinashvili, D., Sham, A. H., and Anbarjafari, G. (2022). Gender neutralisation for unbiased speech synthesising. Electronics (Switzerland), 11(10).
Sogancioglu, G., Kaya, H., and Salah, A. A. (2023). The effects of gender bias in word embeddings on patient phenotyping in the mental health domain. In 2023 11th International Conference on Affective Computing and Intelligent Interaction (ACII), pages 1–8.
Thelwall, M. (2018). Gender bias in machine learning for sentiment analysis. Online Information Review, 42(3):343–354.
Published
2025-07-20
How to Cite
DOLABELLA, Rafaela Toledo; SILVA, Thais Regina de Moura Braga; BRAGA E SILVA, Gláucia; BATISTA, Estela Miranda. The Systemic Causes Behind Gender Bias in AI: A Systematic Literature Mapping. In: WOMEN IN INFORMATION TECHNOLOGY (WIT), 19., 2025, Maceió/AL. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 252-263. ISSN 2763-8626. DOI: https://doi.org/10.5753/wit.2025.9141.
