Unfairness in Machine Learning for Web Systems Applications

  • Diego Minatel (USP)
  • Nícolas Roque dos Santos (USP)
  • Angelo Cesar Mendes da Silva (USP)
  • Mariana Cúri (USP)
  • Ricardo Marcondes Marcacini (USP)
  • Alneu de Andrade Lopes (USP)

Abstract

Machine learning models are increasingly present in our society; many of them are embedded in Web Systems and directly shape the content we consume daily. Nonetheless, on several occasions, these models have been responsible for decisions that spread prejudice, or for decisions that, if made by humans, would be punishable. After several cases of this nature came to light, research and discussion topics such as Fairness in Machine Learning and Artificial Intelligence Ethics gained importance and urgency in our society. Thus, one way to make Web Systems fairer in the future is to show how they can currently be unfair. To support these discussions and serve as a reference for cases of unfairness in machine learning decisions, this work organizes in a single document known decision-making processes, wholly or partially supported by machine learning models, that propagated prejudice, stereotypes, and inequality in Web Systems. We group the cases into relevant categories of unfairness (such as Web Search and Deep Fake) and, when possible, present the solution adopted by those involved. Furthermore, we discuss approaches to mitigate or prevent discriminatory effects in machine-learning-based decision-making in Web Systems.

Keywords: Data Bias, Machine Learning, Unfairness Examples, Web Systems

References

Eleni Adamopoulou and Lefteris Moussiades. 2020. An overview of chatbot technology. In Artificial Intelligence Applications and Innovations. Springer, 373–383

Shruti Agarwal, Hany Farid, Yuming Gu, Mingming He, Koki Nagano, and Hao Li. 2019. Protecting World Leaders Against Deep Fakes. In CVPR Workshops, Vol. 1. 38

Maryam Ahmed. 2020. UK passport photo checker shows bias against dark-skinned women. [link]

Leigh Alexander. 2016. Do Google’s ’unprofessional hair’ results show it is racist? [link].

Kiana Alikhademi, Emma Drobina, Diandra Prioleau, Brianna Richardson, Duncan Purves, and Juan E Gilbert. 2022. A review of predictive policing from the perspective of fairness. Artificial Intelligence and Law (2022), 1–17

Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. Machine Bias: Risk Assessments In Criminal Sentencing. [link].

Avaaz. 2020. Why is YouTube Broadcasting Climate Misinformation to Millions? [link].

Emily Badger. 2017. How Redlining’s Racist Effects Lasted for Decades. [link].

Solon Barocas, Moritz Hardt, and Arvind Narayanan. 2017. Fairness in machine learning. NIPS Tutorial 1 (2017)

Solon Barocas and Andrew D Selbst. 2016. Big data’s disparate impact. Calif. L. Rev. 104 (2016), 671

Yotam Berger. 2017. Israel Arrests Palestinian Because Facebook Translated ’Good Morning’ to ’Attack Them’. [link].

Sam Biddle. 2022. The internet’s new favorite AI proposes torturing Iranians and surveilling mosques. [link].

James Bisbee, Megan Brown, Angela Lai, Richard Bonneau, Joshua A Tucker, and Jonathan Nagler. 2022. Election Fraud, YouTube, and Public Perception of the Legitimacy of President Biden. Journal of Online Trust and Safety 1, 3 (2022). https://doi.org/10.54501/jots.v1i3.60

Su Lin Blodgett and Brendan O’Connor. 2017. Racial Disparity in Natural Language Processing: A Case Study of Social Media African-American English. (2017)

Colin R Blyth. 1972. On Simpson’s paradox and the sure-thing principle. J. Amer. Statist. Assoc. 67, 338 (1972), 364–366

Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in neural information processing systems 29 (2016)

BRASIL. 1988. Constituição da República Federativa do Brasil. Brasília, DF: Centro Gráfico

Thomas Brewster. 2021. Fraudsters Cloned Company Director’s Voice In $35 Million Heist, Police Find. [link].

Joy Buolamwini and Timnit Gebru. 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. In Conference on fairness, accountability and transparency. 77–91

Sean Burch. 2017. Facebook’s ‘People You May Know’ Feature Is Outing Sex Workers. [link].

Amelia Butterly. 2015. Google Image search for CEO has Barbie as first female result. [link]

Flavio Calmon, Dennis Wei, Bhanukiran Vinzamuri, Karthikeyan Natesan Ramamurthy, and Kush R Varshney. 2017. Optimized pre-processing for discrimination prevention. Advances in Neural Information Processing Systems 30 (2017), 3992–4001

Stevie Chancellor. 2023. Toward Practices for Human-Centered Machine Learning. Commun. ACM 66, 3 (2023), 78–85.

Bobby Chesney and Danielle Citron. 2019. Deep fakes: A looming challenge for privacy, democracy, and national security. Calif. L. Rev. 107 (2019), 1753

Mark Coeckelbergh. 2020. AI ethics. MIT Press

Sam Corbett-Davies and Sharad Goel. 2018. The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning. (2018)

Bo Cowgill and Catherine E Tucker. 2020. Algorithmic Fairness and Economics. The Journal of Economic Perspectives (2020)

Kate Crawford. 2013. Think again: Big data. Foreign Policy 9 (2013)

Jeffrey Dastin. 2020. Amazon scraps secret AI recruiting tool that showed bias against women. [link].

Maria De-Arteaga, Alexey Romanov, Hanna Wallach, Jennifer Chayes, Christian Borgs, Alexandra Chouldechova, Sahin Geyik, Krishnaram Kenthapadi, and Adam Tauman Kalai. 2019. Bias in bios: A case study of semantic representation bias in a high-stakes setting. In Conference on Fairness, Accountability, and Transparency. 120–128.

Ana Luisa Zago de Moraes, Lutiana Valadares Fernandes Barbosa, and Viviane Ceolin Dallasta Del Grossi. 2022. Inteligência artificial e direitos humanos: aportes para um marco regulatório no Brasil. Editora Dialética

Ben Dickson. 2022. Scammers used AI-generated faces to pose as a Boston law firm. [link].

Marina Drosou, HV Jagadish, Evaggelia Pitoura, and Julia Stoyanovich. 2017. Diversity in big data: A review. Big data 5, 2 (2017), 73–84

Charles Duhigg. 2012. How Companies Learn Your Secrets. [link].

Todd Feathers. 2021. Major Universities Are Using Race as a “High Impact Predictor” of Student Success. [link].

Michael Feldman, Sorelle A Friedler, John Moeller, Carlos Scheidegger, and Suresh Venkatasubramanian. 2015. Certifying and removing disparate impact. In ACM SIGKDD. 259–268

Daniela Frabalise. 2018. Apenas 17% dos programadores brasileiros são mulheres. [link].

Viraj Gaur. 2022. Deepfakes Invade LinkedIn, Delhi Firm Offers ‘Ready to Use’ Profiles: Report. [link].

Dave Gershgorn. 2017. Alphabet’s hate-fighting AI doesn’t understand hate yet. [link].

Shirin Ghaffary. 2021. Racist trolls attacked England’s soccer team. Fans fought back. [link].

Noah Giansiracusa. 2021. Facebook Uses Deceptive Math to Hide Its Hate Speech Problem. [link].

Ben Green and Lily Hu. 2018. The myth in the methodology: Towards a recontextualization of fairness in machine learning. In Proceedings of the machine learning: the debates workshop

Nina Grgic-Hlaca, Muhammad Bilal Zafar, Krishna P Gummadi, and Adrian Weller. 2016. The case for process fairness in learning: Feature selection for fair decision making. In NIPS Symposium on Machine Learning and the Law, Vol. 1. 2

James Griffiths. 2016. New Zealand passport robot thinks this Asian man’s eyes are closed. [link].

Ben Guarino. 2016. Google faulted for racial bias in image search results for black teenagers. [link].

Haleluya Hadero. 2023. Deepfake porn could be a growing problem amid AI race. [link].

Moritz Hardt, Eric Price, and Nati Srebro. 2016. Equality of opportunity in supervised learning. Advances in neural information processing systems 29 (2016), 3315–3323

GM Harshvardhan, Mahendra Kumar Gourisaria, Manjusha Pandey, and Siddharth Swarup Rautaray. 2020. A comprehensive survey and analysis of generative models in machine learning. Computer Science Review 38 (2020), 100285

Alex Hern. 2020. Twitter apologises for ’racist’ image-cropping algorithm. [link].

Kashmir Hill. 2016. Facebook recommended a psychiatrist’s patients friend each other — and there’s no clear explanation. [link].

Alex Horton. 2018. A fake photo of Emma González went viral on the far right, where Parkland teens are villains. [link].

Ayanna Howard and Jason Borenstein. 2018. The ugly truth about ourselves and our robot creations: the problem of bias and social inequity. Science and engineering ethics 24, 5 (2018), 1521–1536

Ciaran Jones. 2018. Facial recognition wrongly identified 2,000 people as possible criminals when Champions League final came to Cardiff. [link].

Edward Ongweso Jr. 2022. Scammers Use Elon Musk Deepfake to Steal Crypto. [link].

Edward Ongweso Jr. 2022. This Startup Is Selling Tech to Make Call Center Workers Sound Like White Americans. [link].

JTA. 2016. Microsoft Pulls Robot After It Tweets ’Hitler Was Right I Hate the Jews’. [link].

Faisal Kamiran and Toon Calders. 2012. Data preprocessing techniques for classification without discrimination. Knowledge and Information Systems 33, 1 (2012), 1–33.

Paramjit Kaur, Kewal Krishan, Suresh K Sharma, and Tanuj Kanchan. 2020. Facial-recognition algorithms: A literature review. Medicine, Science and the Law 60, 2 (2020), 131–139

Siobhan Kennedy. 2017. Potentially deadly bomb ingredients are ‘frequently bought together’ on Amazon. [link].

Oluwanifemi Kolawole. 2020. Why Facebook labelled content from #LekkiMassacre2020 incident ’false’. [link].

Adam DI Kramer, Jamie E Guillory, and Jeffrey T Hancock. 2014. Experimental evidence of massive-scale emotional contagion through social networks. National Academy of Sciences 111, 24 (2014), 8788–8790

Preethi Lahoti, Krishna P Gummadi, and Gerhard Weikum. 2019. iFair: Learning individually fair data representations for algorithmic decision making. In Conference on Data Engineering (ICDE). IEEE, 1334–1345

David Lazer, Ryan Kennedy, Gary King, and Alessandro Vespignani. 2014. The parable of Google Flu: traps in big data analysis. Science 343, 6176 (2014), 1203–1205

Dave Lee. 2016. Tay: Microsoft issues apology over racist chatbot fiasco. [link]

Sam Levin. 2016. A beauty contest was judged by AI and the robots didn’t like dark skin. [link].

Bing Liu and Lei Zhang. 2012. A Survey of Opinion Mining and Sentiment Analysis. Springer US, Boston, MA, 415–463

Linyuan Lü, Matúš Medo, Chi Ho Yeung, Yi-Cheng Zhang, Zi-Ke Zhang, and Tao Zhou. 2012. Recommender systems. Physics reports 519, 1 (2012), 1–49

Kristian Lum and James Johndrow. 2016. A statistical framework for fair predictive algorithms. (2016)

Justin McCurry. 2021. South Korean AI chatbot pulled from Facebook after hate speech towards minorities. [link].

Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2021. A survey on bias and fairness in machine learning. ACM Comput. Surv. 54, 6 (2021), 1–35.

Ruby Mellen. 2020. Buenos Aires is using facial recognition system that tracks child suspects, rights group says. [link].

Maria Mellor. 2020. Why is TikTok creating filter bubbles based on your race? (2020). [link].

Petra Molnar. 2019. Technology on the margins: AI and global migration management from a human rights perspective. Cambridge International Law Journal 8 (12 2019), 305–330. https://doi.org/10.4337/cilj.2019.02.07

Jornal Nacional. 2022. Deepfake: conteúdo do Jornal Nacional é adulterado para desinformar os eleitores. [link].

Mark Newman. 2010. Networks: An Introduction. Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199206650.001.0001.

CBS News. 2019. Doctored Nancy Pelosi video highlights threat of "deepfake" tech. [link].

Safiya Umoja Noble. 2018. Algorithms of oppression. In Algorithms of oppression. New York University Press

Alexandra Olteanu, Carlos Castillo, Fernando Diaz, and Emre Kiciman. 2019. Social data: Biases, methodological pitfalls, and ethical boundaries. Frontiers in Big Data 2 (2019), 13

Dana Pessach and Erez Shmueli. 2022. A review on fairness in machine learning. ACM Computing Surveys (CSUR) 55, 3 (2022), 1–44.

Rute Pina. 2022. Feminismo: Google mostra anúncio que deturpa conceito como 1º resultado. [link].

Geoff Pleiss, Manish Raghavan, Felix Wu, Jon Kleinberg, and Kilian Q Weinberger. 2017. On fairness and calibration. Advances in Neural Information Processing Systems 30 (2017), 5680–5689

Kevin Roose. 2019. The Making of a YouTube Radical. [link].

Adam Rose. 2010. Are Face-Detection Cameras Racist? [link]

Adam Satariano and Paul Mozur. 2023. The People Onscreen Are Fake. The Disinformation Is Real. [link].

Tatiana Serafin. 2022. Ukraine’s President Zelensky Takes the Russia/Ukraine War Viral. Orbis 66, 4 (2022), 460–476

Tom Simonite. 2018. When It Comes to Gorillas, Google Photos Remains Blind. [link].

Tom Simonite. 2019. Artificial Intelligence Is Coming for Our Faces. [link].

Jacob Snow. 2018. Amazon’s Face Recognition Falsely Matched 28 Members of Congress With Mugshots. [link].

Felix Stahlberg. 2020. Neural machine translation: A review. Journal of Artificial Intelligence Research 69 (2020), 343–418

Catherine Stupp. 2019. Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case. [link].

Kristi Sturgill. 2020. Santa Cruz becomes the first U.S. city to ban predictive policing. [link].

Anisa Subedar and Will Yates. 2015. The disturbing YouTube videos that are tricking children. [link]

Shiliang Sun, Chen Luo, and Junyu Chen. 2017. A review of natural language processing techniques for opinion mining systems. Information Fusion 36 (2017), 10–25.

Latanya Sweeney. 2013. Discrimination in online ad delivery. Queue 11, 3 (2013), 10–29.

Kat Tenbarge. 2023. Found through Google, bought with Visa and Mastercard: Inside the deepfake porn economy. [link].

Andrew Thompson. 2017. Google’s Sentiment Analyzer Thinks Being Gay Is Bad. [link].

Catherine Thorbecke. 2022. It didn’t take long for Meta’s new chatbot to say something offensive. [link].

Songül Tolan, Marius Miron, Emilia Gómez, and Carlos Castillo. 2019. Why machine learning may lead to unfairness: Evidence from risk assessment for juvenile justice in catalonia. In Conference on Artificial Intelligence and Law. 83–92.

AS Tolba, AH El-Baz, and AA El-Harby. 2006. Face recognition: A literature review. International Journal of Signal Processing 2, 2 (2006), 88–103

Ruben Tolosana, Ruben Vera-Rodriguez, Julian Fierrez, Aythami Morales, and Javier Ortega-Garcia. 2020. Deepfakes and beyond: A Survey of face manipulation and fake detection. Information Fusion 64 (2020), 131–148. https://doi.org/10.1016/j.inffus.2020.06.014

Tony Ho Tran. 2022. OpenAI’s Impressive New Chatbot Isn’t Immune to Racism. [link].

Megan Twohey and Gabriel J.X. Dance. 2022. Amazon’s algorithm suggests products for suicide attempts. [link].

Siva Vaidhyanathan. 2012. The Googlization of everything:(and why we should worry). Univ of California Press.

Joseph Walker. 2012. Meet the New Boss: Big Data. [link].

J. West and C. Bergstrom. 2019. Which Face Is Real. [link]

Levi Winslow. 2023. Twitch’s Popular AI-Powered Seinfeld Show Gets Banned For Transphobia. [link].

Shanique Yates. 2023. Lawsuit Claims Workday’s AI And Screening Tools Discriminate Against Those Black, Disabled Or Over Age 40. [link].

Kyra Yee, Uthaipon Tantipongpipat, and Shubhanshu Mishra. 2021. Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency. arXiv preprint arXiv:2105.08667 (2021)

Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. 2017. Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In International conference on world wide web. 1171–1180.

Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, and Krishna P Gummadi. 2017. Fairness constraints: Mechanisms for fair classification. In Artificial Intelligence and Statistics. PMLR, 962–970

Rich Zemel, Yu Wu, Kevin Swersky, Toni Pitassi, and Cynthia Dwork. 2013. Learning fair representations. In International Conference on Machine Learning. 325–333

Brian Hu Zhang, Blake Lemoine, and Margaret Mitchell. 2018. Mitigating unwanted biases with adversarial learning. In AAAI/ACM Conference on AI, Ethics, and Society. 335–340.

Shuai Zhang, Lina Yao, Aixin Sun, and Yi Tay. 2019. Deep learning based recommender system: A survey and new perspectives. ACM computing surveys (CSUR) 52, 1 (2019), 1–38

Shoshana Zuboff. 2019. The age of surveillance capitalism: The fight for a human future at the new frontier of power: Barack Obama’s books of 2019. Profile books.
Published
23/10/2023
How to Cite

MINATEL, Diego; DOS SANTOS, Nícolas Roque; DA SILVA, Angelo Cesar Mendes; CÚRI, Mariana; MARCACINI, Ricardo Marcondes; LOPES, Alneu de Andrade. Unfairness in Machine Learning for Web Systems Applications. In: SIMPÓSIO BRASILEIRO DE SISTEMAS MULTIMÍDIA E WEB (WEBMEDIA), 29., 2023, Ribeirão Preto/SP. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 144–153.