Algorithmic Fairness: Instrumentalization, Conceptual Limits, and Challenges in Software Engineering
Abstract
This article describes ongoing research aimed at understanding the concept of fairness in the field of software engineering, the factors that underlie the creation and instrumentalization of these concepts, and the limitations software engineering faces when applying them. The expansion of the field of study known as "algorithmic fairness" consists fundamentally in the creation of mathematical and formal mechanisms and procedures to conceptualize, evaluate, and reduce biases and discrimination caused by algorithms. We conducted a systematic mapping study in the context of fairness in software engineering, covering the metrics and definitions of algorithmic fairness, as well as the procedures and techniques for building fairer decision-making systems. We then discuss the limitations that arise from understanding fairness as a software attribute and as the outcome of decision-making, as well as the influence the field inherits from the construction of computational thinking, which is constantly developed around abstractions. Finally, we reflect on possible paths that may help us move beyond the limits of algorithmic fairness.
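To illustrate the kind of formal definitions the abstract refers to (not metrics taken from this paper specifically), a minimal sketch of two common algorithmic-fairness criteria, demographic parity and equal opportunity, computed over binary predictions for two demographic groups; all names and data below are hypothetical:

```python
# Minimal sketch of two common algorithmic-fairness metrics:
# demographic parity difference and equal-opportunity difference.
# Illustrative only; the field's literature defines many such criteria.

def demographic_parity_diff(y_pred, group):
    """P(Y_hat = 1 | group = 0) - P(Y_hat = 1 | group = 1)."""
    g0 = [p for p, g in zip(y_pred, group) if g == 0]
    g1 = [p for p, g in zip(y_pred, group) if g == 1]
    return sum(g0) / len(g0) - sum(g1) / len(g1)

def equal_opportunity_diff(y_true, y_pred, group):
    """True-positive-rate difference between the two groups."""
    def tpr(gval):
        pairs = [(t, p) for t, p, g in zip(y_true, y_pred, group)
                 if g == gval and t == 1]
        return sum(p for _, p in pairs) / len(pairs)
    return tpr(0) - tpr(1)

# Toy data: binary labels/predictions for two groups (0 and 1).
y_true = [1, 1, 0, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(round(demographic_parity_diff(y_pred, group), 2))   # → 0.0
print(round(equal_opportunity_diff(y_true, y_pred, group), 2))  # → -0.33
```

Note that the two criteria can disagree on the same predictions: here the positive-prediction rates are equal across groups, yet the true-positive rates are not, which is one concrete way the paper's point about competing formalizations of fairness manifests.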
Published
20/07/2025

How to Cite
VALENÇA, Lucas Rodrigues; SANTOS, Ronnie de Souza. Justiça Algorítmica: Instrumentalização, Limites Conceituais e Desafios na Engenharia de Software. In: WORKSHOP SOBRE AS IMPLICAÇÕES DA COMPUTAÇÃO NA SOCIEDADE (WICS), 6., 2025, Maceió/AL. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 225-234. ISSN 2763-8707. DOI: https://doi.org/10.5753/wics.2025.8032.
