Comparing Contextual Embeddings for Semantic Textual Similarity in Portuguese
Abstract
Semantic textual similarity (STS) measures how semantically similar two sentences are. For Portuguese, the STS literature is still in its early stages, but it includes important initiatives such as the ASSIN and ASSIN 2 shared tasks. The state of the art on those datasets is a contextual embedding produced by a BERT model pre-trained and fine-tuned for Portuguese. In this work, we investigate the application of Sentence-BERT (SBERT) contextual embeddings to these datasets. Compared to BERT, SBERT is computationally more efficient, which enables its application to scalable unsupervised learning problems. Given the absence of SBERT models pre-trained for Portuguese and the computational cost of such training, we adopt multilingual models and also fine-tune them for Portuguese. Results show that SBERT embeddings were competitive, especially after fine-tuning, numerically surpassing BERT on ASSIN 2 and the results observed during the shared tasks on all datasets considered.
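As a minimal sketch of the SBERT setup described in the abstract, the snippet below scores Portuguese sentence pairs with a multilingual Sentence-BERT model through the sentence-transformers library. The model name is an illustrative multilingual checkpoint and the sentence pairs are made up for demonstration; neither is necessarily what the paper evaluated.

```python
# Minimal sketch: scoring sentence similarity with a multilingual SBERT model.
# Assumes `pip install sentence-transformers`; the checkpoint below is an
# illustrative multilingual model, not necessarily the one used in the paper.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("distiluse-base-multilingual-cased-v1")

# Hypothetical Portuguese sentence pairs for demonstration.
sentence_pairs = [
    ("O gato dorme no sofá.", "Um felino está dormindo no sofá."),
    ("O gato dorme no sofá.", "A bolsa de valores caiu hoje."),
]

for s1, s2 in sentence_pairs:
    # Each sentence is encoded independently into a fixed-size vector;
    # this is what makes SBERT cheaper than pairwise BERT inference,
    # since embeddings can be precomputed and reused.
    emb1 = model.encode(s1, convert_to_tensor=True)
    emb2 = model.encode(s2, convert_to_tensor=True)
    score = util.cos_sim(emb1, emb2).item()
    print(f"{score:.3f}  {s1!r} vs {s2!r}")
```

Because each sentence maps to a single vector, similarity over n sentences costs n encoder passes plus cheap cosine comparisons, rather than the O(n²) cross-encoder passes a pairwise BERT approach would need.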
Keywords:
Deep learning, Natural language processing, Semantic textual similarity, Word embeddings
Published
29/11/2021
How to Cite
ANDRADE JUNIOR, José E.; CARDOSO-SILVA, Jonathan; BEZERRA, Leonardo C. T. Comparing Contextual Embeddings for Semantic Textual Similarity in Portuguese. In: BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 10., 2021, Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. ISSN 2643-6264.