Online assessments with parametric questions and automatic corrections: an improvement for MCTest using Google Forms and Sheets
Abstract
Evaluating students efficiently has always been a challenge in many areas of knowledge. Since we are all undergoing a pandemic period, efficient evaluations have become necessary and urgent. The main objective of this paper is to adapt MCTest, a web platform devoted to generating and correcting individualized exams automatically. We addressed the problem of distance student evaluation by leveraging MCTest, which provides a free-of-charge solution for creating parametric questions with LaTeX and Python. The automatic correction, our original contribution, is carried out with Google Forms and Sheets. The adapted solution was successfully applied to a Calculus class with 100 students.
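As a rough illustration of the parametric questions described above, the sketch below generates one seeded Calculus multiple-choice item in Python. The function name, question template, and distractor rules are illustrative assumptions for this page, not MCTest's actual API.

```python
import random

def make_parametric_question(seed):
    """Generate one parametric Calculus item: differentiate f(x) = a*x^n.

    Hypothetical sketch: each seed yields a distinct but equivalent exam
    variant, which is the core idea behind individualized parametric exams.
    """
    rng = random.Random(seed)
    a = rng.randint(2, 9)
    n = rng.randint(2, 5)
    correct = f"{a*n}x^{{{n-1}}}"
    # Distractors model common mistakes: dropping the factor n,
    # or keeping the original exponent.
    distractors = [f"{a}x^{{{n-1}}}", f"{a*n}x^{{{n}}}", f"{a}x^{{{n}}}"]
    stem = f"What is the derivative of $f(x) = {a}x^{{{n}}}$?"
    choices = distractors + [correct]
    rng.shuffle(choices)
    return {"stem": stem, "choices": choices, "answer": correct}

q = make_parametric_question(seed=42)
print(q["stem"])
for i, c in enumerate(q["choices"]):
    print(f"({chr(97 + i)}) ${c}$")
```

Because the generator is seeded, the same seed always reproduces the same variant, so the answer key can be regenerated at correction time instead of being stored.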
Keywords:
Automated Assessment, Automatic Item Generator, Blended Learning
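The Forms-and-Sheets correction step amounts to looking up the answer key for each student's exam variant and counting matches. The minimal Python stand-in below mimics that lookup; the names and data layout are hypothetical, not the paper's actual spreadsheet schema.

```python
def grade_responses(answer_keys, responses):
    """Score multiple-choice answers against per-variant answer keys.

    Simplified stand-in for the Google Sheets correction described in the
    paper: each student's row carries an exam identifier plus their marked
    alternatives, and the score is the number of matches with that key.
    """
    scores = {}
    for student, (exam_id, answers) in responses.items():
        key = answer_keys[exam_id]
        scores[student] = sum(1 for got, want in zip(answers, key) if got == want)
    return scores

keys = {"exam-01": ["a", "c", "b"], "exam-02": ["d", "a", "a"]}
resp = {"alice": ("exam-01", ["a", "c", "d"]),
        "bob": ("exam-02", ["d", "a", "a"])}
print(grade_responses(keys, resp))  # {'alice': 2, 'bob': 3}
```

In the actual workflow this comparison would be a spreadsheet formula over the Forms response sheet; the Python version only makes the logic explicit.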
Published
24/11/2020
How to Cite
ZAMPIROLLI, Francisco de Assis; BATISTA, Valério Ramos; ARRAZOLA, Edson; ANTUNES JÚNIOR, Irineu. Online assessments with parametric questions and automatic corrections: an improvement for MCTest using Google Forms and Sheets. In: SIMPÓSIO BRASILEIRO DE INFORMÁTICA NA EDUCAÇÃO (SBIE), 31., 2020, Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2020. p. 51-60. DOI: https://doi.org/10.5753/cbie.sbie.2020.51.