On the evaluation of algorithm fairness strategies: a use case on gender bias

  • Samuel de Morais Lima IFPB
  • Alex Sandro da Cunha Rêgo IFPB
  • Damires Yluska de Souza Fernandes IFPB

Abstract

Research Context: The growing use of Machine Learning (ML) techniques in predictive models that support decision-making and the development of information systems, while fostering advancements, has introduced significant ethical challenges, notably the emergence of biases that may lead to unfair decisions.

Scientific and/or Practical Problem: The main scientific and practical challenge addressed is the identification and mitigation of algorithmic biases that may reproduce or exacerbate discrimination against minority groups, with a specific focus on the sensitive attribute gender.

Proposed Solution and/or Analysis: This study presents an experimental evaluation of different bias mitigation strategies, including the EqOddsPostprocessing and Reweighing methods from the AI Fairness 360 toolkit, the application of weights, and the randomization of sensitive attribute values. Fairness was assessed using the Equal Opportunity and Demographic Parity metrics.

Related IS Theory: This research is grounded in the theory of Algorithmic Fairness, which aims to ensure impartiality, and in the concept of Socio-Technical Bias, which acknowledges that socially embedded prejudices are reflected in the outcomes produced by ML algorithms.

Research Method: An experimental evaluation was conducted on a binary classification problem using the Portuguese SATDAP dataset. The baseline model was built with the Decision Tree algorithm, chosen for its interpretability. The methodology comprised five experimental scenarios designed to test the mitigation strategies and to assess fairness through cross-validation.

Summary of Results: Findings showed that the Reweighing method produced fairer predictions according to the fairness metrics, in addition to yielding a slight but notable improvement in performance metrics such as accuracy, precision, recall, and F1-score.

Contributions and Impact to the IS Area: This research reinforces the importance of integrating ethical guidelines and bias mitigation methodologies into the development of ML systems, contributing to the construction of solutions that foster predictive fairness and counter discrimination without harming predictive performance.
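For reference, the two fairness criteria named above have standard formulations. Writing Ŷ for the model's prediction, Y for the true label, and A for the sensitive attribute (here gender, with A = 1 taken as the privileged group), they require, respectively:

    P(\hat{Y} = 1 \mid A = 0) = P(\hat{Y} = 1 \mid A = 1)                    % Demographic Parity
    P(\hat{Y} = 1 \mid Y = 1, A = 0) = P(\hat{Y} = 1 \mid Y = 1, A = 1)      % Equal Opportunity

In practice each criterion is reported as the difference between its two sides, so values close to zero indicate fairer predictions.

To make the evaluated pipeline concrete, the sketch below pairs the Reweighing preprocessor from the AI Fairness 360 toolkit with a scikit-learn Decision Tree, the combination the abstract describes. It is a minimal sketch, not the authors' code: the DataFrames df_train and df_test and the column names 'gender' and 'target' are hypothetical, since the SATDAP schema is not given here.

    # Minimal Reweighing sketch with AIF360 + scikit-learn.
    # Assumes numeric pandas DataFrames df_train/df_test with a binary
    # 'gender' column and a binary 'target' label (hypothetical names).
    from sklearn.tree import DecisionTreeClassifier
    from aif360.datasets import BinaryLabelDataset
    from aif360.algorithms.preprocessing import Reweighing
    from aif360.metrics import ClassificationMetric

    privileged = [{'gender': 1}]
    unprivileged = [{'gender': 0}]

    # Wrap the raw splits in AIF360's dataset abstraction.
    train = BinaryLabelDataset(df=df_train, label_names=['target'],
                               protected_attribute_names=['gender'])
    test = BinaryLabelDataset(df=df_test, label_names=['target'],
                              protected_attribute_names=['gender'])

    # Reweighing assigns each instance a weight that balances the joint
    # distribution of group membership and label before training.
    rw = Reweighing(unprivileged_groups=unprivileged,
                    privileged_groups=privileged)
    train_rw = rw.fit_transform(train)

    # Train the interpretable baseline using the computed sample weights.
    clf = DecisionTreeClassifier(random_state=0)
    clf.fit(train_rw.features, train_rw.labels.ravel(),
            sample_weight=train_rw.instance_weights)

    # Score the held-out split and compute both fairness metrics.
    test_pred = test.copy()
    test_pred.labels = clf.predict(test.features).reshape(-1, 1)
    metric = ClassificationMetric(test, test_pred,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
    print('Demographic parity difference:', metric.statistical_parity_difference())
    print('Equal opportunity difference:', metric.equal_opportunity_difference())

The post-processing alternative the study also evaluates, EqOddsPostprocessing (from aif360.algorithms.postprocessing), would instead be fitted on true and predicted labels after training, since it adjusts predictions rather than sample weights.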

References

Barocas, S., Hardt, M., and Narayanan, A. (2023). Fairness and Machine Learning: Limitations and Opportunities. MIT Press.

Barocas, S. and Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3):671–732.

Bellamy, R. K. E., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., Lohia, P., Martino, J., Mehta, S., Mojsilovic, A., Nagar, S., Ramamurthy, K. N., Richards, J. T., Saha, D., Sattigeri, P., Singh, M., Varshney, K. R., and Zhang, Y. (2018). AI fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. CoRR, abs/1810.01943.

Binns, R. (2018). What can political philosophy teach us about algorithmic fairness? IEEE Security & Privacy, 16(3):73–80.

Brasil (2018). Lei nº 13.709, de 14 de agosto de 2018. [link]. Accessed: 2024-12-18.

Castaneda, J., Jover, A., Calvet, L., Yanes, S., Juan, A., and Sainz, M. (2022). Dealing with gender bias issues in data-algorithmic processes: A social-statistical perspective. Algorithms, 15(9):1–16.

Caton, S. and Haas, C. (2024). Fairness in machine learning: A survey. ACM Computing Surveys, 56(7):1–38.

Dutenhefner, P., Lemos, G., Rezende, T., Fernandes, J., Tuler, D., Pappa, G., Paixão, G., Ribeiro, A., and Meira Jr., W. (2024). ECG-ResNeXt: age prediction in pediatric electrocardiograms and its correlations with comorbidities. In Anais do XXI Encontro Nacional de Inteligência Artificial e Computacional, pages 49–60, Porto Alegre, RS, Brasil. SBC.

Favaretto, M., De Clercq, E., and Elger, B. S. (2019). Big data and discrimination: perils, promises and solutions. A systematic review. Journal of Big Data, 6:12.

Fernandes, D. Y. d. S. and Rêgo, A. S. d. C. (2023). Viés, ética e responsabilidade social em modelos preditivos. Computação Brasil, (51):19–23.

Ferrara, E. (2024). Fairness and bias in artificial intelligence: A brief survey of sources, impacts, and mitigation strategies. Sci, 6(1):3.

Freitas, C. C. (2024). Análise preditiva e equitativa com AI Fairness 360. Orientador: Clerivaldo José Roccia.

Hastings, C., Qian, L., Gibson, C., Obiomon, P., and Dong, X. (2024). Comprehensive validation on reweighting samples for bias mitigation via AIF360. Applied Sciences, 14(9):3826.

Kordzadeh, N. and Ghasemaghaei, M. (2021). Algorithmic bias: review, synthesis, and future research directions. European Journal of Information Systems.

Martini, V. and Berton, L. (2024). Fairness analysis in AI algorithms in healthcare: A study on post-processing approaches. In Anais do XXI Encontro Nacional de Inteligência Artificial e Computacional, pages 553–564, Porto Alegre, RS, Brasil. SBC.

Martins, M. V. et al. (2021). Early prediction of student's performance in higher education: a case study. In Trends and Applications in Information Systems and Technologies, volume 1 of Advances in Intelligent Systems and Computing. Springer.

Mehrabi, N. et al. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 54(6):1–35.

Menezes, H. F. et al. (2021). Bias and fairness in face detection. In Conference on Graphics, Patterns and Images (SIBGRAPI), Online. Sociedade Brasileira de Computação.

Pagano, T. P. et al. (2023). Bias and unfairness in machine learning models: A systematic review on datasets, tools, fairness metrics, and identification and mitigation methods. Big Data and Cognitive Computing, 7(1):15.

Ribeiro, H. B., da Silva, L. O., and de Andrade Lira Rabêlo, R. (2024). Estratégias para lidar com desbalanceamento de dados em aprendizado de máquina. In Anais Estendidos do X Simpósio Brasileiro de Computação Aplicada à Saúde. Sociedade Brasileira de Computação.

Ruback, L., Avila, S., and Cantero, L. (2021). Vieses no aprendizado de máquina e suas implicações sociais: Um estudo de caso no reconhecimento facial. In Anais do 2º Workshop sobre as Implicações da Computação na Sociedade (WICS), pages 90–101, Evento Online. Sociedade Brasileira de Computação.

Silva, F., Feitosa, R., Batista, L., and Santana, A. (2024). Análise comparativa de métodos de explicabilidade da inteligência artificial no cenário educacional: um estudo de caso sobre evasão. In Anais do XXXV Simpósio Brasileiro de Informática na Educação, pages 2968–2977, Porto Alegre, RS, Brasil. SBC.

Uddin, S., Lu, H., Rahman, A., et al. (2024). A novel approach for assessing fairness in deployed machine learning algorithms. Scientific Reports, 14:17753.

Wong, P.-H. (2019). The socio-technical dimensions of fairness in machine learning. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, pages 1–14, Honolulu, HI, USA.

Yang, K., Huang, B., Stoyanovich, J., and Schelter, S. (2020). Fairness-aware instrumentation of preprocessing pipelines for machine learning. In Proceedings of the Workshop on Human-In-the-Loop Data Analytics (HILDA ’20), page 4, New York, NY, USA. ACM.
Published
25/05/2026
LIMA, Samuel de Morais; RÊGO, Alex Sandro da Cunha; FERNANDES, Damires Yluska de Souza. On the evaluation of algorithm fairness strategies: a use case on gender bias. In: SIMPÓSIO BRASILEIRO DE SISTEMAS DE INFORMAÇÃO (SBSI), 22., 2026, Vitória/ES. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2026. p. 1142–1160. DOI: https://doi.org/10.5753/sbsi.2026.248732.