Artificial intelligence discrimination: how to deal with it?
Abstract
The emergence of artificial intelligence has brought many benefits to society by automating activities such as driving cars, delivering products, and classifying items, and by predicting trends with greater accuracy. However, depending on how it is used, it can also reproduce persistent problems in society, such as discrimination. In this paper, we discuss discrimination by artificial intelligence. We begin by describing the problem and showing that it is both recurring and current. We then trace its origin and propose a strategy for dealing with it and preventing its recurrence. Lastly, we discuss future work and how the proposed strategy can be put into practice.
Keywords:
Computing, citizenship and the welfare state; Computing and diversity; Cultural, political and social implications of AI
References
Allen, A. (2016). The “three black teenagers” search shows it is society, not Google, that is racist. [link]. Accessed: 2022-02-06.
Bissoto, A., Fornaciali, M., Valle, E., and Avila, S. (2019). (de)constructing bias on skin lesion datasets.
Borgesius, F. J. Z. (2020). Strengthening legal protection against discrimination by algorithms and artificial intelligence. The International Journal of Human Rights, 24(10):1572–1593.
Buolamwini, J. and Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on Fairness, Accountability and Transparency (FAT*).
Caton, S. and Haas, C. (2020). Fairness in machine learning: A survey.
Discovery (2018). Discovery Brasil — inteligência artificial IBM. https://www.youtube.com/watch?v=W95YlM5-iPk. Accessed: 2022-02-11.
Dwork, C., Hardt, M., Pitassi, T., Reingold, O., and Zemel, R. (2012). Fairness through awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, ITCS ’12, pages 214–226, New York, NY, USA. Association for Computing Machinery.
Ferrer, X., van Nuenen, T., Such, J. M., Coté, M., and Criado, N. (2021). Bias and discrimination in AI: A cross-disciplinary perspective. IEEE Technology and Society Magazine, 40(2):72–80.
Hajian, S., Domingo-Ferrer, J., and Martínez-Ballesté, A. (2011). Rule protection for indirect discrimination prevention in data mining. In Torra, V., Narukawa, Y., Yin, J., and Long, J., editors, Modeling Decisions for Artificial Intelligence, pages 211–222, Berlin, Heidelberg. Springer Berlin Heidelberg.
Kamiran, F. and Calders, T. (2011). Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems, 33:1–33.
Lefevre, S., Carvalho, A., and Borrelli, F. (2015). Autonomous car following: A learning-based approach. In 2015 IEEE Intelligent Vehicles Symposium (IV), pages 920–926.
Luong, B. T., Ruggieri, S., and Turini, F. (2011). k-NN as an implementation of situation testing for discrimination discovery and prevention. In Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’11, pages 502–510, New York, NY, USA. Association for Computing Machinery.
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., and Galstyan, A. (2022). A survey on bias and fairness in machine learning.
Mujtaba, D. F. and Mahapatra, N. R. (2019). Ethical considerations in AI-based recruitment. In 2019 IEEE International Symposium on Technology and Society (ISTAS), pages 1–7.
Olteanu, A., Castillo, C., Diaz, F., and Kıcıman, E. (2019). Social data: Biases, methodological pitfalls, and ethical boundaries. Frontiers in Big Data, 2.
Parker, R. (2012). Stigma, prejudice and discrimination in global public health. Cadernos de Saúde Pública [online], 28(1):164–169.
Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
Rosa, A., Pessoa, S. A., and Lima, F. S. (2020). Neutralidade tecnológica: reconhecimento facial e racismo. Revista V!RUS, 21.
Silva, T. (2019). Visão computacional e vieses racializados: branquitude como padrão no aprendizado de máquina. II COPENE Nordeste: Epistemologias Negras e Lutas Antirracistas, pages 29–31.
Sowell, T. (2019). Discrimination and Disparities. Basic Books.
Sperrle, F., Schlegel, U., El-Assady, M., and Keim, D. (2019). Human trust modeling for bias mitigation in artificial intelligence. In ACM CHI 2019 Workshop: Where is the Human? Bridging the Gap Between AI and HCI.
Suresh, H. and Guttag, J. (2021). A framework for understanding sources of harm throughout the machine learning life cycle. In Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO ’21).
Vilarino, R. and Vicente, R. (2021). An experiment on the mechanisms of racial bias in ML-based credit scoring in Brazil.
von Eschenbach, W. (2021). Transparency and the black box problem: Why we do not trust AI. Philosophy & Technology, 34.
York, C. (2016). Three black teenagers: Is Google racist? It’s not them, it’s us. [link]. Accessed: 2022-02-05.
Published
31/07/2022
How to Cite
NIEMIEC, William; BORGES, Rafael F.; BARONE, Dante A. C. Artificial intelligence discrimination: how to deal with it?. In: WORKSHOP SOBRE AS IMPLICAÇÕES DA COMPUTAÇÃO NA SOCIEDADE (WICS), 3., 2022, Niterói. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2022. p. 93-100. ISSN 2763-8707. DOI: https://doi.org/10.5753/wics.2022.222604.