Model LEEM: Evaluating and improving the learner experience with the use of DICTs
Abstract
Learner eXperience (LX) can be defined as the perceptions and performance of learners interacting with learning environments, educational products, and resources. In this master's research, we proposed a model with different forms of assessment that integrate most of the LX elements. The Learner Experience Evaluation Model (LEEM) aims to evaluate and improve LX using Digital Information and Communication Technologies (DICTs). LEEM consists of three evaluation stages (pre, during, and post) to continuously monitor and record LX progress. In short, LEEM is expected to help educators rethink their teaching strategies when learners report difficulties with the adopted resources.
Published
04/11/2024
How to Cite
SANTOS, Gabriela Corbari dos; SILVA, Deivid Eive dos S.; VALENTIM, Natasha M. C. Model LEEM: Evaluating and improving the learner experience with the use of DICTs. In: CONCURSO ALEXANDRE DIRENE (CTD-IE) - DISSERTAÇÕES DE MESTRADO - CONGRESSO BRASILEIRO DE INFORMÁTICA NA EDUCAÇÃO (CBIE), 13., 2024, Rio de Janeiro/RJ. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 58-69. DOI: https://doi.org/10.5753/cbie_estendido.2024.243437.