APVAT: Attention Perturbation in Virtual Adversarial Training for Semi-supervised Learning
Abstract
A significant portion of available textual corpora lacks the annotated labels needed to train supervised models, and manual labeling of such data is costly and often impractical. Semi-supervised learning (SSL) has therefore emerged as a pivotal approach for harnessing both labeled and unlabeled data. This paper explores the integration of adversarial perturbation with SSL, focusing on text classification. We propose APVAT, a novel approach that injects adversarial perturbation into the attention mechanism within Virtual Adversarial Training (VAT). We conducted experiments on five benchmark datasets, examining the impact of different embeddings: fastText, GloVe, BERT, and GPT-2. Our contributions are twofold. First, APVAT improves classification accuracy over previous methods even when labeled data is scarce (e.g., 10% of the training set): the perturbation effectively augments the training data, leading to more efficient model learning and resource savings. Second, APVAT reduces processing time and the number of training epochs required. These findings show that combining adversarial perturbation of the attention mechanism with SSL for text classification offers a promising avenue for advancing the field.
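The core mechanism described in the abstract (a VAT-style consistency loss where the virtual adversarial perturbation is applied to attention weights rather than to input embeddings) can be sketched roughly as follows. This is an illustrative toy only, not the paper's implementation: the one-layer attention pooler, the finite-difference gradient in place of backpropagation, and the `xi`/`eps` values are all assumptions made for the sketch.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def predict(token_vecs, attn_logits, W):
    """Toy classifier: attention-pool token vectors, then linear + softmax."""
    attn = softmax(attn_logits)   # attention distribution over tokens
    pooled = attn @ token_vecs    # attention-weighted sum of token vectors
    return softmax(pooled @ W)

def vat_perturbation(token_vecs, attn_logits, W, xi=0.1, eps=1.0, h=1e-4):
    """One power-iteration step for the adversarial direction in *attention*
    space, using finite differences instead of backprop (toy setting).
    xi, eps, h are illustrative values, not the paper's hyperparameters."""
    p = predict(token_vecs, attn_logits, W)
    d = np.random.randn(*attn_logits.shape)
    d /= np.linalg.norm(d)
    base = kl(p, predict(token_vecs, attn_logits + xi * d, W))
    grad = np.zeros_like(d)
    for i in range(d.size):
        d_h = d.copy()
        d_h[i] += h
        grad[i] = (kl(p, predict(token_vecs, attn_logits + xi * d_h, W)) - base) / h
    return eps * grad / np.linalg.norm(grad)

def vat_loss(token_vecs, attn_logits, W):
    """Consistency loss: KL between clean and adversarially perturbed
    attention predictions. Needs no labels, so it applies to unlabeled data."""
    p = predict(token_vecs, attn_logits, W)
    r_adv = vat_perturbation(token_vecs, attn_logits, W)
    q = predict(token_vecs, attn_logits + r_adv, W)
    return kl(p, q)
```

In training, this unsupervised consistency term would be added to the usual cross-entropy loss on the labeled subset, which is how VAT-style methods exploit unlabeled examples.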
Published
29/09/2025
How to Cite
DUARTE, José Marcio; MILIOS, Evangelos; BERTON, Lilian. APVAT: Attention Perturbation in Virtual Adversarial Training for Semi-supervised Learning. In: BRAZILIAN CONFERENCE ON INTELLIGENT SYSTEMS (BRACIS), 35., 2025, Fortaleza/CE. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 162-176. ISSN 2643-6264.
