Analyzing the Effects of Dimensionality Reduction for Unsupervised Domain Adaptation

  • Renato Sergio Lopes Junior UFMG
  • William Robson Schwartz UFMG

Abstract


Deep neural networks are extensively used to solve a variety of computer vision problems. However, for these networks to obtain good results, a large amount of training data is necessary. In image classification, this training data consists of images and labels indicating the class portrayed by each image. Obtaining such a large labeled dataset is very time- and resource-consuming. Domain adaptation methods therefore allow different, but semantically related, datasets that are already labeled to be used during training, eliminating the labeling cost. In this work, the effects of embedding dimensionality reduction in a state-of-the-art domain adaptation method are analyzed. Furthermore, we experiment with a different approach that uses the available data from all domains to compute the confidence of pseudo-labeled samples. We show through experiments on commonly used datasets that the proposed modifications lead to better results in the target domain in some scenarios.
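To make the two ideas in the abstract concrete, the sketch below illustrates one plausible instantiation: reducing the dimensionality of feature embeddings before pseudo-labeling, and scoring pseudo-label confidence against class prototypes that may be computed from all domains. The choice of PCA, the prototype-based labeling, and all function names are illustrative assumptions, not the paper's actual method.

```python
import numpy as np
from sklearn.decomposition import PCA


def reduce_embeddings(source_feats, target_feats, n_components=64):
    """Fit PCA on the pooled source and target embeddings and project both
    into a lower-dimensional space (PCA is an assumed choice of reducer)."""
    pca = PCA(n_components=n_components)
    pca.fit(np.vstack([source_feats, target_feats]))
    return pca.transform(source_feats), pca.transform(target_feats)


def pseudo_label_confidence(target_feats, prototypes):
    """Assign pseudo-labels to target samples by nearest class prototype and
    return a per-sample confidence (softmax over negative squared distances).
    The prototypes could be class means estimated from all available domains."""
    # Pairwise squared Euclidean distances, shape (n_samples, n_classes).
    dists = ((target_feats[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    logits = -dists
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return probs.argmax(axis=1), probs.max(axis=1)
```

High-confidence pseudo-labeled target samples could then be added to the training set, which is a common pattern in pseudo-labeling pipelines.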
Keywords: Training, Dimensionality reduction, Graphics, Deep learning, Computer vision, Costs, Training data, Machine Learning, Domain Adaptation, Transfer Learning
Published
18/10/2021
How to Cite

LOPES JUNIOR, Renato Sergio; SCHWARTZ, William Robson. Analyzing the Effects of Dimensionality Reduction for Unsupervised Domain Adaptation. In: CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI), 34., 2021, Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021.