Semi-supervised siamese network using self-supervision under scarce annotation improves class separability and robustness to attack
Abstract
Self-supervised learning approaches have been shown to benefit feature learning by training models on a pretext task. In this context, learning from limited data can be tackled by combining semi-supervised learning with self-supervision. In this paper we combine the traditional supervised learning paradigm with the rotation-prediction self-supervised task, which are used simultaneously to train a siamese model with a joint loss function and shared weights. In particular, we are interested in the case in which the proportion of labeled to unlabeled data is small. We investigate the effectiveness of a compact feature space obtained after training under such a limited-annotation scenario, in terms of both linear class separability and robustness under attack. The study includes images from multiple domains: natural images (STL-10 dataset), products (Fashion-MNIST dataset), and biomedical images (Malaria dataset). We show that in scenarios where only a few labeled examples are available, a model augmented with a self-supervised task can take advantage of the unlabeled data to improve the learned representation in terms of linear discrimination, as well as enabling learning even under attack. We also discuss design choices for the self-supervised task and failure cases across the different datasets.
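To make the joint training scheme described above concrete, the following is a minimal PyTorch sketch of a siamese model with a shared backbone, a supervised classification head, and a rotation-prediction head trained with a joint loss. It is an illustration under stated assumptions, not the paper's implementation: the names `backbone`, `cls_head`, `rot_head`, and the weighting factor `lambda_rot` are hypothetical, and the actual architecture and loss weighting in the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemiSupervisedSiamese(nn.Module):
    """Two branches sharing one backbone (shared weights): a supervised
    classification head for labeled images and a rotation-prediction
    head (0/90/180/270 degrees) for unlabeled images."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone                 # shared by both branches
        self.cls_head = nn.Linear(feat_dim, num_classes)
        self.rot_head = nn.Linear(feat_dim, 4)   # four rotation classes

    def forward(self, x_labeled, x_unlabeled):
        z_lab = self.backbone(x_labeled)
        z_unl = self.backbone(x_unlabeled)
        return self.cls_head(z_lab), self.rot_head(z_unl)


def rotate_batch(x):
    """Build the rotation pretext task: each image receives a random
    multiple-of-90-degree rotation; that multiple is the target label."""
    targets = torch.randint(0, 4, (x.size(0),), device=x.device)
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(x, targets)])
    return rotated, targets


def joint_loss(model, x_labeled, y_labeled, x_unlabeled, lambda_rot=1.0):
    """Joint objective: supervised cross-entropy on the few labeled
    images plus rotation-prediction cross-entropy on unlabeled ones.
    lambda_rot is an assumed hyperparameter balancing the two terms."""
    x_rot, y_rot = rotate_batch(x_unlabeled)
    cls_logits, rot_logits = model(x_labeled, x_rot)
    loss_sup = F.cross_entropy(cls_logits, y_labeled)
    loss_rot = F.cross_entropy(rot_logits, y_rot)
    return loss_sup + lambda_rot * loss_rot
```

In each training step, a small labeled batch and a larger unlabeled batch would pass through the same backbone, so the pretext gradient from the abundant unlabeled data regularizes the representation learned from scarce annotations.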
Keywords:
Training, Representation learning, Graphics, Annotations, Supervised learning, Predictive models, Robustness, Deep learning, Attack, Self-supervision, Self-supervised learning
Published
18/10/2021
How to Cite
CAVALLARI, Gabriel B.; PONTI, Moacir A. Semi-supervised siamese network using self-supervision under scarce annotation improves class separability and robustness to attack. In: CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI), 34., 2021, Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021.