Evaluating Recent Legal Rhetorical Role Labeling Approaches Supported by Transformer Encoders
Abstract
Pre-trained Transformer models have been used to improve the results of several NLP tasks, including Legal Rhetorical Role Labeling (Legal RRL). This task assigns semantic functions, such as fact and argument, to sentences from judgment documents. Several Legal RRL works exploit pre-trained Transformers to encode sentences, but only a few employ approaches other than fine-tuning to improve model performance. In this work, we implement three such approaches and evaluate them on the same datasets to better assess their impact. In our experiments, the approaches based on data augmentation and positional encoders do not provide performance gains to our models. Conversely, the models based on the DFCSC approach outperform the appropriate baselines and perform remarkably well: the lowest and highest improvements are 5.9% and 10.4%, respectively.
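To make the task concrete, the sketch below shows sentence-level rhetorical role classification with a pre-trained Transformer encoder and a classification head. It is only an illustration of the general setup the abstract describes: the model checkpoint, the label set, and the example sentence are assumptions, not the configuration, datasets, or DFCSC approach evaluated in the paper, and the fine-tuning loop is omitted.

```python
# Minimal sketch: classifying a judgment sentence into a rhetorical role
# using a pre-trained Transformer encoder (hypothetical setup).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical subset of rhetorical roles; real Legal RRL label sets differ.
LABELS = ["fact", "argument", "ruling"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)

sentence = "The appellant contends that the lower court erred."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# The classification head is randomly initialized here, so the prediction
# is arbitrary until the model is fine-tuned on labeled Legal RRL data.
print(LABELS[int(logits.argmax(dim=-1))])
```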
Published
25/09/2023
How to Cite
LIMA, Alexandre Gomes de; MORENO, José G.; DKAKI, Taoufiq; ARANHA, Eduardo Henrique da S.; BOUGHANEM, Mohand. Evaluating Recent Legal Rhetorical Role Labeling Approaches Supported by Transformer Encoders. In: BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 12., 2023, Belo Horizonte/MG. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 18-32. ISSN 2643-6264.