Use of Rubrics in Informatics and Education Research - A Systematic Literature Review

Abstract


This paper presents the results of a Systematic Literature Review (SLR) that analyzed articles on the use of rubrics in the programming learning process, drawn from three important international research platforms, namely ScienceDirect, Scopus, and Web of Science. The Parsifal tool was used to support the execution of the SLR. The rubric approach to learning provides a broad range of evaluation criteria and expected performance standards for education and skill development, linking student activity to learning and pedagogical objectives. Rubric content spanning several dimensions yields robust metrics to assess student learning. Although rubrics are widely used in education, summaries of the academic literature on this topic in programming learning are extremely rare. Applying the SLR research protocol, approximately 200 papers entered the study-selection stage. Titles, abstracts, and keywords were analyzed to find studies that met the inclusion criteria; after the exclusion criteria were applied, about 11% of the papers were accepted. The accepted articles were then read in full. Our categorization of resources was used to assist data extraction in Parsifal, improving the quality of the data exported from Parsifal to Excel. This allowed a more customized statistical analysis, for instance, the creation of graphs that are not available in Parsifal. The findings provide an understanding of how rubrics are explored in programming learning, the resources used to apply them, and the challenges in this process, especially those related to empirical studies, feedback, and the definition of rubrics applied to programming learning.
Keywords: Systematic Literature Reviews, Rubrics, Programming Learning, Assessment, Feedback

References

Dannelle D. Stevens, Antonia J. Levi, Barbara E. Walvoord, 2013. Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback, and Promote Student Learning. Second Edition. Stylus Publishing, LLC. ISBN 978-1-57922-587-2.


Alistair Campbell, 2005. Application of ICT and Rubrics to the Assessment Process Where Professional Judgement Is Involved: The Features of an e-Marking Tool. Assessment & Evaluation in Higher Education 30 (5):529–537. DOI: 10.1080/02602930500187055.


Malini Y. Reddy and Heidi Andrade, 2010. A Review of Rubric Use in Higher Education. Assessment & Evaluation in Higher Education, vol. 35, no. 4, pp. 435-448. DOI: 10.1080/02602930902862859.


RubiStar - Create Rubrics for your Project-Based Learning Activities. Available at http://rubistar.4teachers.org/index.php. Accessed September 2020.


Al-Khalifa K. Amal and Marie Devlin, 2020. Evaluating a Peer Assessment Approach in Introductory Programming Courses. In United Kingdom & Ireland Computing Education Research conference. (UKICER '20). ACM Press, New York, NY, USA, 51–58. DOI: https://doi.org/10.1145/3416465.3416467


Xihui Zhang, John D. Crabtree, Mark G. Terwilliger & Tyler T. Redman, 2020. Assessing Students’ Object-Oriented Programming Skills with Java: The “Department Employee” Project, Journal of Computer Information Systems, 60:3, 274-286, DOI: 10.1080/08874417.2018.1467243


Dorodchi M., Dehbozorgi N., Frevert T.K., 2017. "I wish I could rank my exam's challenge level!": An Algorithm of Bloom's Taxonomy in teaching CS1. Proceedings - Frontiers in Education Conference, FIE, 2017-October , pp. 1-5. DOI: 10.1109/FIE.2017.8190523


Sebastian Garces; Guity Ravai; Camilo Vieira; Alejandra J. Magana, 2019. Effects of Self-explanations as Scaffolding Tool for Learning Computer Programming. IEEE Frontiers in Education Conference (FIE), Covington, KY, USA, 2019, pp. 1-6, doi: 10.1109/FIE43999.2019.9028561.


Tuukka Ahoniemi, Essi Lahtinen, and Tommi Reinikainen. 2008. Improving pedagogical feedback and objective grading. SIGCSE Bull. 40, 1 (March 2008), 72–76. DOI:https://doi.org/10.1145/1352322.1352162.


Nilima Salankar, 2019. Impact of Rubrics, ADDIE and Gagne Model on the Performance of Students in Programming Subject. International Journal of Engineering and Advanced Technology (IJEAT), ISSN 2249-8958, Volume 9, Issue 1, October 2019.


Barbara Kitchenham and Stuart M. Charters, (2007). Guidelines for Performing Systematic Literature Reviews in Software Engineering (EBSE 2007-001). Keele University and Durham University Joint Report.


Torrey Trust and Emrah Pektas, 2018. Using the ADDIE Model and Universal Design for Learning Principles to Develop an Open Online Course for Teacher Professional Development. Journal of Digital Learning in Teacher Education, 34(4), 219-233.


Maknuna, Rafiqa Durotul et al., 2017. Effectiveness of Use of Technical Skill Assessment Instruments to Increase Web Programming Competency. DOI: 10.2991/ICOVET-17.2017.9.


Osvaldo Clúa and Maria Feldgen, "A first course in Operating Systems with and without rubrics," 2011 Frontiers in Education Conference (FIE), Rapid City, SD, 2011, pp. F3D-1-F3D-5, doi: 10.1109/FIE.2011.6142820.


Dirson S. Campos, António J. Mendes, Maria J. Marcelino, Deller J. Ferreira and Lenice M. Alves, 2012. A Multinational Case Study on using Diverse Feedback Types Applied to Introductory Programming Learning. 42nd Annual Frontiers in Education Conference (FIE); Seattle, WA; United States; 3 October 2012.


Mike Wu, Milan Mosse, Noah Goodman, Chris Piech, 2019, Zero Shot Learning for Code Education: Rubric Sampling with Deep Learning Inference. The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), 782-790. ISSN 2374-3468 (Online), Volume 33 Number 1 (2019).


Katrin Becker, 2003. Grading Programming Assignments Using Rubrics. ACM SIGCSE Bulletin, 35, 253. DOI: 10.1145/961290.961613.


Marco Carmosino and Mia Minnes. 2020. Adaptive Rubrics. In Proceedings of the 51st ACM Technical Symposium on Computer Science Education (SIGCSE '20). Association for Computing Machinery, New York, NY, USA, 549–555. DOI:https://doi.org/10.1145/3328778.3366946.


Shanon M. Reckinger and Bryce E. Hughes. “Measuring Differences in Performance by Varying Formative Assessment Construction Guided by Learning Style Preferences.” (2017).


Shanon M. Reckinger and Bryce E. Hughes, 2018. Partnering Strategies for Paired Formative Assessment in Programming. American Society for Engineering Education Annual Conference & Exposition.


Norka Bedregal-Alpaca, Jaén Doris S. Tupacyupanqui and Víctor Manuel C. Aparicio, 2019. The TPACK model as the basis of a didactic proposal for the teaching-learning of Linear Programming. IEEE World Conference on Engineering Education (EDUNINE), Lima, Peru, 2019, pp. 1-6, doi: 10.1109/EDUNINE.2019.8875763.


Srimadhaven T, Chris Junni AV, Naga Harshith, Jessenth Ebenezer S, Shabari Girish S, Priyaadharshini M, 2019. Learning Analytics: Virtual Reality for Programming Course in Higher Education. 9th World Engineering Education Forum, WEEF 2019. Procedia Computer Science 172 (2020) 433–437.


Dianne Raubenheimer, Jeff Joines and Ami Craig, 2009. Using Computational Tools to Enhance Problem Solving. ASEE Annual Conference & Exposition, Austin, Texas. DOI: 10.18260/1-2--4610.


Gihan Osman, Susan C. Herring, 2007. Interaction, facilitation, and deep learning in cross-cultural chat: A case study. The Internet and Higher Education 10 (2007) 125–141. DOI: https://doi.org/10.1016/j.iheduc.2007.03.004


Kristina von Hausswolff and Anna Eckerdal, "Measuring Programming Knowledge in a Research Context," 2018 IEEE Frontiers in Education Conference (FIE), San Jose, CA, USA, 2018, pp. 1-9. DOI: 10.1109/FIE.2018.8658615.


Marcos Gestal, Carlos Fernandez-Lozano, Cristian R. Munteanu, Juan R. Rabuñal and Julian Dorado, 2018. Evaluation as a Continuous Improvement Process in the Learning of Programming Languages. In: Graña M. et al. (eds) International Joint Conference SOCO’18-CISIS’18-ICEUTE’18. SOCO’18-CISIS’18-ICEUTE’18 2018. Advances in Intelligent Systems and Computing, vol 771. Springer, Cham. https://doi.org/10.1007/978-3-319-94120-2_50


Sook-Young Choi, 2019. Development of an Instructional Model based on Constructivism for Fostering Computational Thinking. International Journal of Innovative Technology and Exploring Engineering (IJITEE) ISSN: 2278-3075, Volume-8 Issue-3C, January 2019.


S. Joseph, C. Rickett, M. Northcote and B. J. Christian, 2020. “Who are you to judge my writing?”: Student collaboration in the co-construction of assessment rubrics. New Writing, 17:1, 31-49. DOI: 10.1080/14790726.2019.1566368.


David Vaccari and Siva Thangam, 2010. A Proposed Doctoral Assessment Procedure and Rubric for Science and Engineering. Paper presented at the 2010 Annual Conference & Exposition, Louisville, Kentucky. DOI: 10.18260/1-2--16106.


Gerriet Janssen, Valerie Meier, Jonathan Trace, 2015. Building a Better Rubric: Mixed Methods Rubric Revision. Assessing Writing, Volume 26, 2015, Pages 51-66, ISSN 1075-2935, https://doi.org/10.1016/j.asw.2015.07.002.
Published
2021-04-26
CAMPOS, Dirson Santos de; FERREIRA, Deller James. Use of Rubrics in Informatics and Education Research - A Systematic Literature Review. In: BRAZILIAN SYMPOSIUM ON COMPUTING EDUCATION (EDUCOMP), 1. , 2021, On-line. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021 . p. 83-92. ISSN 3086-0733. DOI: https://doi.org/10.5753/educomp.2021.14474.