Identifying Evidences of Computer Programming Skills Through Automatic Source Code Evaluation

Abstract


This research is situated in the context of teaching computer programming. Continuously assessing the source code produced by students in a timely manner is a challenging task for teachers. The literature presents different methods for the automatic evaluation of source code, mostly focused on technical aspects. This research presents the A-Learn EvId method, whose main differential is the evaluation of high-level skills rather than technical aspects. The following results are highlighted: an update of the state of the art through a systematic mapping study; a set of 37 skills identifiable through 9 automatic source code evaluation strategies; and the construction of datasets totaling 8,651 source codes.
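As an illustration of how automatic source code evaluation can surface evidence of skills rather than purely technical metrics, the sketch below walks a Python AST and maps syntactic constructs to named skills. This is a minimal, hypothetical example, not the A-Learn EvId method itself: the `SKILLS` catalog and the `detect_skills` function are invented here for illustration, assuming a skill can be evidenced by the presence of a language construct in the student's program.

```python
import ast

# Hypothetical skill catalog: maps an AST node type to a skill label.
# A real skill model would be far richer than this illustrative mapping.
SKILLS = {
    ast.For: "iteration (for loop)",
    ast.While: "iteration (while loop)",
    ast.FunctionDef: "procedural abstraction",
    ast.If: "conditional logic",
}

def detect_skills(source: str) -> set:
    """Return the set of skill labels evidenced in the given source code."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        for node_type, skill in SKILLS.items():
            if isinstance(node, node_type):
                found.add(skill)
    return found

# A small student submission used as input.
student_code = """
def total(values):
    s = 0
    for v in values:
        s += v
    return s
"""

print(sorted(detect_skills(student_code)))
# → ['iteration (for loop)', 'procedural abstraction']
```

Because the analysis is static, it runs on submissions that do not yet execute correctly, which matters for continuous assessment of novice programmers.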

Keywords: Computer Programming, Automatic Evaluation, Skill-based Assessment

Published
24/11/2020
How to Cite

PORFIRIO, Andres J.; PEREIRA, Roberto; MASCHIO, Eleandro. Identifying Evidences of Computer Programming Skills Through Automatic Source Code Evaluation. In: WORKSHOPS DO CONGRESSO BRASILEIRO DE INFORMÁTICA NA EDUCAÇÃO (WCBIE), 9., 2020, Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2020. p. 01-10. DOI: https://doi.org/10.5753/cbie.wcbie.2020.01.