Large Language Models for Education: Grading Open-Ended Questions Using ChatGPT

  • Gustavo Pinto UFPA / Zup Innovation
  • Isadora Cardoso-Pereira Zup Innovation
  • Danilo Monteiro Zup Innovation
  • Danilo Lucena UFPE / Zup Innovation
  • Alberto Souza Zup Innovation
  • Kiev Gama UFPE

Abstract

To tackle increasingly sophisticated problems, software professionals face the constant challenge of improving their skills. However, for these individuals to enhance their skills, their study and training process must involve feedback that is both immediate and accurate. In software companies, where many professionals undergo training but few qualified professionals are available to grade their work, delivering effective feedback becomes even more challenging. To address this challenge, this work explores the use of Large Language Models (LLMs) to support the grading of open-ended questions in technical training. In this study, we used ChatGPT to grade open-ended questions answered by 42 industry professionals on two topics. Evaluating the grades and feedback provided by ChatGPT, we observed that it is capable of identifying semantic details in responses that other metrics cannot capture. Furthermore, we noticed that, in general, subject matter experts tended to agree with the grades and feedback given by ChatGPT.
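The abstract does not describe how ChatGPT was invoked or prompted. The sketch below is only an illustration, not the authors' pipeline: it assumes the OpenAI chat completions API, a hypothetical prompt template, and an illustrative 0-10 scale, and shows how a single open-ended answer could be sent to the model for a grade and textual feedback.

# A minimal sketch (not the authors' pipeline) of grading one open-ended
# answer with the OpenAI chat completions API. The model name, prompt
# wording, and 0-10 scale are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def grade_answer(question: str, reference_answer: str, student_answer: str) -> str:
    """Ask the model for a grade and short feedback on a single answer."""
    prompt = (
        "You are grading an open-ended question from a technical training.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference_answer}\n"
        f"Professional's answer: {student_answer}\n"
        "Give a grade from 0 to 10 and brief feedback justifying it."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the grading as reproducible as possible
    )
    return response.choices[0].message.content

# Example (hypothetical inputs):
# print(grade_answer(
#     "What does idempotency mean for an HTTP endpoint?",
#     "Repeating the request has the same effect as issuing it once.",
#     "Calling it twice does not change the stored result."))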

Keywords: Open-ended Questions, ChatGPT, Automated grading
Published
25/09/2023
PINTO, Gustavo; CARDOSO-PEREIRA, Isadora; MONTEIRO, Danilo; LUCENA, Danilo; SOUZA, Alberto; GAMA, Kiev. Large Language Models for Education: Grading Open-Ended Questions Using ChatGPT. In: SIMPÓSIO BRASILEIRO DE ENGENHARIA DE SOFTWARE (SBES), 37., 2023, Campo Grande/MS. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 293–302.