An Empirical Study of Information Retrieval and Machine Reading Comprehension Algorithms for an Online Education Platform
Abstract
This paper presents an empirical study of information retrieval and machine reading comprehension techniques in the context of an online education platform. Specifically, our application answers conceptual questions posed by students in technology courses. To that end, we explore a pipeline consisting of a document retriever and a document reader. We find that combining TF-IDF document representations for retrieval with the RoBERTa deep learning model for reading documents and answering questions yields the best performance with respect to F-score. Overall, without a fine-tuning step, the deep learning models show a significant performance gap compared to F-scores previously reported on other datasets.
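The retriever stage of the pipeline can be sketched as follows: documents are ranked by cosine similarity between the TF-IDF vectors of the documents and of the question, and the top-ranked passages would then be handed to the reader (e.g. RoBERTa) for answer extraction. This is a minimal, self-contained illustration; the toy corpus and question below are placeholders, not the paper's course material.

```python
import math
from collections import Counter

# Illustrative placeholder corpus (not the paper's data).
corpus = [
    "a variable stores a value that the program can read and modify",
    "a for loop repeats a block of code a fixed number of times",
    "a function groups reusable statements under a single name",
]
question = "how can i repeat a block of code several times"

def tf_idf(tokens, idf):
    """Raw term frequency weighted by inverse document frequency."""
    counts = Counter(tokens)
    return {t: counts[t] * idf.get(t, 0.0) for t in counts}

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

tokenized = [doc.split() for doc in corpus]
n_docs = len(tokenized)
# Inverse document frequency over the corpus vocabulary (with +1 smoothing).
idf = {
    t: math.log(n_docs / sum(t in doc for doc in tokenized)) + 1.0
    for doc in tokenized for t in doc
}

doc_vecs = [tf_idf(doc, idf) for doc in tokenized]
query_vec = tf_idf(question.split(), idf)

# Rank documents by similarity to the question; the top document would
# be passed to the machine-reading model in the full pipeline.
scores = [cosine(query_vec, d) for d in doc_vecs]
best = max(range(n_docs), key=scores.__getitem__)
print(best, corpus[best])
```

On the toy example above, the question about repeating code retrieves the document describing a for loop, since it shares the discriminative terms "block", "of", "code", and "times".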
References
Barker, P. (2002). On being an online tutor. Innovations in Education and Teaching International, 39(1):3–13.
Bernath, U. and Rubin, E. (2001). Professional development in distance education – a successful experiment and future directions. Innovations in Open & Distance Learning, Successful Development of Online and Web-Based Learning, pages 213–223.
Clark, K., Luong, M.-T., Le, Q. V., and Manning, C. D. (2020). Electra: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations.
Damasceno, A. R., Martins, A. R., Chagas, M. L., Barros, E. M., Maia, P. H. M., and Oliveira, F. C. (2020). Stuart: an intelligent tutoring system for increasing scalability of distance education courses. In Proceedings of the 19th Brazilian Symposium on Human Factors in Computing Systems, pages 1–10.
Denis, B., Watland, P., Pirotte, S., and Verday, N. (2004). Roles and competencies of the e-tutor. In Networked Learning 2004: A Research Based Conference on Networked learning and lifelong learning: Proceedings of the fourth international conference, Lancaster, pages 150–157.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Fu, B., Qiu, Y., Tang, C., Li, Y., Yu, H., and Sun, J. (2020). A survey on complex question answering over knowledge base: Recent advances and challenges. arXiv preprint arXiv:2007.13069.
Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., and Lempitsky, V. (2016). Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096–2030.
Hartmann, N. S., Fonseca, E. R., Shulby, C. D., Treviso, M. V., Rodrigues, J. S., and Aluísio, S. M. (2017). Portuguese word embeddings: Evaluating on word analogies and natural language tasks. In Anais do XI Simpósio Brasileiro de Tecnologia da Informação e da Linguagem Humana, pages 122–131. SBC.
Hermann, K. M., Kocisky, T., Grefenstette, E., Espeholt, L., Kay, W., Suleyman, M., and Blunsom, P. (2015). Teaching machines to read and comprehend. Advances in Neural Information Processing Systems, 28:1693–1701.
Jones, K. S. (1972). A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28:11–21.
Kolomiyets, O. and Moens, M.-F. (2011). A survey on question answering technology from an information retrieval perspective. Information Sciences, 181(24):5412–5434.
Kusner, M., Sun, Y., Kolkin, N., and Weinberger, K. (2015). From word embeddings to document distances. In International Conference on Machine Learning, pages 957–966. PMLR.
Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. (2020). Albert: A lite bert for self-supervised learning of language representations. In International Conference on Learning Representations.
Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C. H., and Kang, J. (2020). Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234–1240.
Lentell, H. (2004). The importance of the tutor in open and distance learning. In Rethinking Learner Support in Distance Education, pages 76–88. Routledge.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692.
Pennington, J., Socher, R., and Manning, C. D. (2014). Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532–1543.
Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. (2016). Squad: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383–2392.
Rondeau, M.-A. and Hazen, T. J. (2018). Systematic error analysis of the stanford question answering dataset. In Proceedings of the Workshop on Machine Reading for Question Answering, pages 12–20.
Simpson, O. and Sharma, R. C. (2002). Book review: Supporting students in open and distance learning. International Review of Research in Open and Distance Learning, 3(3).
Taylor, W. L. (1953). “cloze procedure”: A new tool for measuring readability. Journalism Quarterly, 30(4):415–433.
Van der Maaten, L. and Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11):2579–2605.
Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. (2018). Glue: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 353–355.
Wen, D., Cuzzola, J., Brown, L., and Kinshuk, D. (2012). Instructor-aided asynchronous question answering system for online education and distance learning. International Review of Research in Open and Distributed Learning, 13(5):102–125.