Automatic Generation of Learning Objects Using Text Summarizer Based on Deep Learning Models
A learning object (LO) is an entity, digital or otherwise, that can be used, reused, or referenced during technologically supported teaching and learning. Although LOs are typically multimedia, combining synchronized audio, video, text, and images, they can disseminate knowledge even when they consist only of educational text. Producing such texts, however, demands considerable time and effort, which motivates the search for new ways to generate this content. This article presents a solution for generating text-based LOs through summaries produced by Deep Learning models. The approach was evaluated in a supervised experiment in which volunteers rated educational texts about computing generated by three types of summarizers. The results are positive and allow us to compare the performance of the summarizers as generators of text-based LOs. The findings also suggest that post-processing the models' output can improve the readability of the generated content.
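To make the idea of a text summarizer concrete, the sketch below shows a minimal frequency-based extractive summarizer in pure Python. This is an illustration only, not the method evaluated in the article: the function name, the sentence-splitting heuristic, and the scoring scheme are all assumptions, and the article's summarizers are Deep Learning models rather than this simple baseline.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Score sentences by average word frequency and keep the
    top n_sentences, preserved in their original order.
    A hypothetical baseline, not the paper's models."""
    # Naive sentence split on end-of-sentence punctuation.
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    # Document-wide word frequencies used as sentence scores.
    freq = Counter(re.findall(r'\w+', text.lower()))
    scored = []
    for i, sentence in enumerate(sentences):
        tokens = re.findall(r'\w+', sentence.lower())
        if tokens:
            scored.append((sum(freq[t] for t in tokens) / len(tokens), i))
    # Pick the highest-scoring sentences, then restore document order.
    keep = sorted(i for _, i in sorted(scored, reverse=True)[:n_sentences])
    return ' '.join(sentences[i] for i in keep)
```

A Deep Learning abstractive summarizer would instead generate new sentences, but the input/output contract (long educational text in, short readable text out) is the same as in this sketch.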