Automated Ableism: A Systematic Review on AI and Discrimination against People with Disabilities

  • Janaina Nogueira de Souza Lopes UFMS
  • Valéria Quadros dos Reis UFMS
  • Amaury Antônio de Castro Junior UFMS
  • Anderson Corrêa de Lima UFMS

Abstract


Research Context: Artificial Intelligence (AI) systems have expanded into decision-making processes but raise concerns about algorithmic discrimination, with ableism remaining an underexplored dimension in Information Systems (IS). Scientific and/or Practical Problem: Despite advances in the literature on algorithmic bias, discrimination against people with disabilities (PwD) in AI systems remains underexplored. This gap limits the development of fairer and more inclusive AI systems. Proposed Solution and/or Analysis: This study presents a Systematic Literature Review (SLR) on how AI systems can reproduce, intensify, or mitigate ableist practices. Related IS Theory: The review is grounded in algorithmic justice, fairness in machine learning, and socio-technical perspectives on inclusion. Research Method: Following PRISMA and the PICOC framework, 26 articles published between 2020 and 2025 were analyzed and assessed against quality criteria. Summary of Results: Three main trends were identified: AI that reinforces ableism, mitigation initiatives, and ethical-regulatory debates. The problem has been discussed mainly by researchers from the United States and Europe, highlighting the need for studies that consider different languages and diverse social and cultural contexts. Contributions and Impact on the IS Field: The study addresses a neglected dimension of algorithmic bias and provides insights for researchers, practitioners, and policymakers committed to developing inclusive AI.

Published
25/05/2026
LOPES, Janaina Nogueira de Souza; REIS, Valéria Quadros dos; CASTRO JUNIOR, Amaury Antônio de; LIMA, Anderson Corrêa de. Automated Ableism: A Systematic Review on AI and Discrimination against People with Disabilities. In: SIMPÓSIO BRASILEIRO DE SISTEMAS DE INFORMAÇÃO (SBSI), 22., 2026, Vitória/ES. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2026. p. 537-555. DOI: https://doi.org/10.5753/sbsi.2026.248567.