Textual Datasets For Portuguese-Brazilian Language Models
Abstract
Advances in Natural Language Processing have produced new state-of-the-art models and reached new levels of performance on complex tasks over unstructured text. Most new architectures and models focus on the English language, and we observe a low availability of datasets that can be used to train new models in other languages. This work presents four new textual datasets for language modeling in Brazilian Portuguese. Our datasets were generated through distinct methodologies designed to obtain data of different natures. Two of our datasets were originally built from data in online Web forums. We also distribute a translated version of MultiWOZ and a cleaned version of BrWaC. The original datasets are made available in a structured format to ease their use when training NLP models, with questions, answers, and conversations already identified.
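As a rough illustration of what this structured distribution could look like in practice, the sketch below loads a hypothetical JSON release of one of the forum-derived datasets. The file name and the question/answer/conversation field names are assumptions made for illustration, not the authors' published schema.

```python
import json

# Hypothetical layout for one of the forum-derived datasets: each record
# carries an identified question, its answers, and the surrounding
# conversation. File name and field names are assumed, not the real schema.
with open("forum_dataset.json", encoding="utf-8") as f:
    records = json.load(f)

for record in records[:3]:
    print("Q:", record["question"])
    for answer in record["answers"]:
        print("A:", answer)
    print("Turns in conversation:", len(record["conversation"]))
```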
Keywords:
Dataset, NLP, Portuguese
References
Baroni, M., Bernardini, S., Ferraresi, A., and Zanchetta, E. (2009). The WaCky wide web: a collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation, 43(3):209-226.
Budzianowski, P., Wen, T.-H., Tseng, B.-H., Casanueva, I., Ultes, S., Ramadan, O., and Gasic, M. (2018). MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding.
Gonçalves, L. (2022). IMDB PT-BR. https://www.kaggle.com/datasets/luisfredgs/imdb-ptbr.
Guillou, P. (2020). GPorTuguese-2 (Portuguese GPT-2 small): a language model for Portuguese text generation (and more NLP tasks).
Howard, J. and Ruder, S. (2018). Universal language model fine-tuning for text classification.
HuggingFace (2022a). Hugging Face - The AI community building the future. https://huggingface.co/datasets?languages=languages:en. Accessed: 2022-05-25.
HuggingFace (2022b). Hugging Face - The AI community building the future. https://huggingface.co/datasets?languages=languages:pt. Accessed: 2022-05-25.
Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., and Amodei, D. (2020). Scaling laws for neural language models.
Lowe, R., Pow, N., Serban, I., and Pineau, J. (2016). The Ubuntu Dialogue Corpus: A large dataset for research in unstructured multi-turn dialogue systems.
Meta (2021). Main page - Meta, discussion about Wikimedia projects.
Poncelas, A., Lohar, P., Way, A., and Hadley, J. (2020). The impact of indirect machine translation on sentiment classification.
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. (2019). Language models are unsupervised multitask learners.
Rajpurkar, P., Jia, R., and Liang, P. (2018). Know what you don't know: Unanswerable questions for SQuAD.
Sanches, M., C. de Sá, J., M. de Souza, A., Silva, D., R. de Souza, R., Reis, J., and Villas, L. (2022). MCCD: Generating human natural language conversational datasets. In Proceedings of the 24th International Conference on Enterprise Information Systems - Volume 2: ICEIS, pages 247-255. INSTICC, SciTePress.
Sharir, O., Peleg, B., and Shoham, Y. (2020). The cost of training NLP models: A concise overview.
Souza, F., Nogueira, R., and Lotufo, R. (2020). BERTimbau: Pretrained BERT models for Brazilian Portuguese. In Cerri, R. and Prati, R. C., editors, Intelligent Systems, pages 403-417, Cham. Springer International Publishing.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). Attention is all you need.
Wagner, J., Wilkens, R., Idiart, M., and Villavicencio, A. (2018). The BrWaC corpus: A new open resource for Brazilian Portuguese.
Wang, A., Singh, A., Michael, J., Hill, F., Levy, O., and Bowman, S. R. (2018). GLUE: A multi-task benchmark and analysis platform for natural language understanding.
Published
19/09/2022
How to Cite
SANCHES, Matheus Ferraroni; DE SÁ, Jader M. C.; FOERSTE, Henrique T. S.; SOUZA, Rafael R.; DOS REIS, Julio C.; VILLAS, Leandro A. Textual Datasets For Portuguese-Brazilian Language Models. In: DATASET SHOWCASE WORKSHOP (DSW), 4., 2022, Búzios. Proceedings [...]. Porto Alegre: Sociedade Brasileira de Computação, 2022. p. 1-12. DOI: https://doi.org/10.5753/dsw.2022.224294.