Design propositions for the critical analysis of educational machine learning-based applications using emancipatory pedagogy

Abstract


In education, machine learning-based applications provide support and analytical insights to students, teachers, and administrators. However, not everyone is treated equally by these technologies. Algorithmic bias can translate into unequal opportunities for individuals based solely on their demographic data. Following a design science research approach, we investigate multiple sources of bias in the machine learning pipeline and use emancipatory pedagogy as a kernel theory to elaborate design propositions to mitigate this problem. We map the sources of bias to potential actions, providing theoretical lenses for addressing bias in the development of educational intelligent systems. These propositions should equip researchers with a critical analysis of the development of intelligent systems in education.

Keywords: design science, critical theory, design proposition, educational intelligent systems, emancipatory pedagogy

Published
16/11/2022
How to Cite

PENTEADO, Bruno Elias. Design propositions for the critical analysis of educational machine learning-based applications using emancipatory pedagogy. In: SIMPÓSIO BRASILEIRO DE INFORMÁTICA NA EDUCAÇÃO (SBIE), 33., 2022, Manaus. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2022. p. 1138-1150. DOI: https://doi.org/10.5753/sbie.2022.225710.