May We Consult ChatGPT in Our Human-Computer Interaction Written Exam? An Experience Report After a Professor Answered Yes

Abstract


Using ChatGPT in education presents challenges for evaluating students: it requires distinguishing original ideas from those generated by the model, assessing critical thinking skills, and gauging subject mastery accurately, all of which can affect fair assessment practices. The Human-Computer Interaction (HCI) course described in this experience report has allowed consultation of textbooks, slides, and other materials during exams for over five years. This report presents reflections on allowing ChatGPT as a source of consultation in a written HCI exam in 2023. It analyses the types of questions ChatGPT was able to solve immediately, without mediation, and the types of questions that could benefit from ChatGPT's assistance without compromising the assessment of the higher-level learning outcomes professors aim to evaluate in teaching HCI. The paper uses Bloom's taxonomy to analyse the different questions and abilities assessed and the extent to which they can be solved solely by using ChatGPT. It also discusses questions that require mediation, prior lived experience in class, and understanding of the knowledge acquired in class, and that therefore cannot be answered by simply copying and pasting them into ChatGPT. These discussions can prompt reflection on the learning outcomes that can be assessed in written HCI exams and on how professors should revisit their experiences and expectations for exams in an age of increasingly capable generative artificial intelligence resources.
Keywords: HCI education, evaluation, ChatGPT, open-book exams

Published
16/10/2023
How to Cite

FREIRE, André Pimenta; CARDOSO, Paula Christina Figueira; SALGADO, André de Lima. May We Consult ChatGPT in Our Human-Computer Interaction Written Exam? An Experience Report After a Professor Answered Yes. In: SIMPÓSIO BRASILEIRO SOBRE FATORES HUMANOS EM SISTEMAS COMPUTACIONAIS (IHC), 22., 2023, Maceió/AL. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023.