Simultaneous Learning Loss for Improved Cross-Domain Knowledge Transfer
Abstract
Leveraging auxiliary data to improve performance on a target task is a potent strategy, yet it is often hindered by negative transfer, where domain misalignment degrades accuracy. We introduce Simultaneous Learning Loss (SLL), an objective function designed to enable robust cross-domain knowledge transfer by explicitly regularizing the shared representation space. SLL integrates a domain-balancing term with an InterGroup Penalty (GP), which directly discourages feature-level confusion between target and auxiliary domains without requiring architectural changes. We conduct extensive experiments on 12 cross-domain dataset pairs, using four target benchmarks and seven lightweight architectures. SLL consistently outperforms standard cross-entropy and multi-task learning baselines, achieving accuracy gains of up to 20.58 percentage points and demonstrating superior robustness against negative transfer. Our work establishes that explicitly penalizing inter-domain confusion is a powerful and generalizable principle for improving knowledge transfer.
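The abstract describes SLL as a composite objective: a standard classification loss augmented with a domain-balancing term and an InterGroup Penalty that discourages feature-level confusion between target and auxiliary domains. The exact formulation is not given here, so the following PyTorch-style sketch is only illustrative of how such a composite loss might be structured; the function name, the weighting hyperparameters, and the specific forms of the balancing and penalty terms are assumptions, not the authors' definitions.

```python
import torch
import torch.nn.functional as F

def simultaneous_learning_loss(logits, labels, features, domain_ids,
                               lambda_balance=1.0, lambda_gp=1.0):
    """Illustrative composite objective: classification loss plus a
    domain-balancing term and an inter-group penalty on shared features.
    This is a sketch under assumed forms, not the paper's exact SLL."""
    # Standard classification loss over samples from both domains.
    ce = F.cross_entropy(logits, labels)

    # Assumed domain-balancing term: average the per-sample losses of the
    # target domain (domain_ids == 0) and the auxiliary domain separately,
    # so each domain contributes equally regardless of batch composition.
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    target_mask = (domain_ids == 0)
    aux_mask = ~target_mask
    balance = 0.0
    if target_mask.any() and aux_mask.any():
        balance = 0.5 * (per_sample[target_mask].mean()
                         + per_sample[aux_mask].mean())

    # Assumed inter-group penalty: penalize similarity between the mean
    # feature vectors of the two domains, discouraging feature-level
    # confusion between target and auxiliary samples.
    gp = 0.0
    if target_mask.any() and aux_mask.any():
        mu_t = features[target_mask].mean(dim=0)
        mu_a = features[aux_mask].mean(dim=0)
        gp = F.cosine_similarity(mu_t, mu_a, dim=0).clamp(min=0.0)

    return ce + lambda_balance * balance + lambda_gp * gp
```

Because the penalty acts only on the shared feature space, such an objective can be dropped into an existing training loop without architectural changes, which is consistent with how the abstract positions SLL.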
Keywords:
Training, Graphics, Accuracy, Benchmark testing, Multitasking, Linear programming, Robustness, Knowledge transfer, Standards, Image classification
Published
30/09/2025
How to Cite
CASTRO, Pedro; SILVA, Pedro H. L.; MENOTTI, David; MOREIRA, Gladston; LUZ, Eduardo. Simultaneous Learning Loss for Improved Cross-Domain Knowledge Transfer. In: CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI), 38., 2025, Salvador/BA. Proceedings [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 86-91.
