Improving Multi-Domain Learning by Balancing Batches With Domain Information

  • Alexandre Thurow Bender UFPel
  • Emillyn Mellyne Gobetti Souza UFPel
  • Ihan Belmonte Bender UFPel
  • Ulisses Brisolara Corrêa UFPel
  • Ricardo Matsumura Araujo UFPel

Abstract


Collections of data obtained or generated under similar conditions are referred to as domains or data sources. The distinct conditions under which data are acquired or generated are often neglected, yet understanding them is vital to addressing the phenomena that emerge from these differences and hinder model generalization. Multi-domain learning seeks the best way to train a model so that it performs adequately on all domains seen during training. This paper explores multi-domain learning techniques that use explicit information about the domain of each example, in addition to its class. We evaluate a general approach (Stew), which simply mixes all available data, and propose two novel batch domain-regularization methods: Balanced Domains and Loss Sum. We train machine learning models with these approaches on multi-source datasets for image and audio classification tasks. The results suggest that training with the Loss Sum method improves on models otherwise trained on a mix of all available data.
Keywords: Multi-Domain Learning, Batch Regularization, Classification Task, Image, Audio
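
The two proposed batch methods admit a short illustration. The sketch below is a minimal, PyTorch-style reading of the abstract only, not the authors' published code: `make_balanced_batch` and `loss_sum` are assumed names, and the exact batching details are illustrative.

    # A minimal sketch, assuming a PyTorch-style setup. Function names and
    # details are assumptions inferred from the abstract, not the paper's code.
    import random
    from collections import defaultdict

    import torch
    import torch.nn.functional as F

    def make_balanced_batch(examples, batch_size):
        """Balanced Domains (assumed reading): draw the same number of
        examples from every domain so each batch covers all domains equally.
        `examples` is a list of (input, label, domain) triples."""
        by_domain = defaultdict(list)
        for x, y, d in examples:
            by_domain[d].append((x, y, d))
        per_domain = batch_size // len(by_domain)  # assumes enough examples per domain
        batch = []
        for items in by_domain.values():
            batch.extend(random.sample(items, per_domain))
        random.shuffle(batch)
        return batch

    def loss_sum(model, batch):
        """Loss Sum (assumed reading): compute the classification loss
        separately per domain and sum the per-domain losses, so no single
        domain dominates the gradient of a mixed batch."""
        by_domain = defaultdict(list)
        for x, y, d in batch:
            by_domain[d].append((x, y))
        total = torch.tensor(0.0)
        for items in by_domain.values():
            xs = torch.stack([x for x, _ in items])
            ys = torch.tensor([y for _, y in items])
            total = total + F.cross_entropy(model(xs), ys)
        return total

Under this reading, Loss Sum differs from plain mixing only in how the batch loss aggregates: averaging over all examples lets well-represented domains dominate, while summing per-domain losses weights each domain's contribution equally.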

References

Devansh Arpit, Huan Wang, Yingbo Zhou, and Caiming Xiong. 2021. Ensemble of averages: Improving model selection and boosting performance in domain generalization. arXiv preprint arXiv:2110.10832 (2021). https://doi.org/10.48550/arXiv.2110.10832

William Chan, Daniel Park, Chris Lee, Yu Zhang, Quoc Le, and Mohammad Norouzi. 2021. Speechstew: Simply mix all available speech recognition data to train one large neural network. arXiv preprint arXiv:2104.02133 (2021). https://doi.org/10.48550/arXiv.2104.02133

Roza Chojnacka, Jason Pelecanos, Quan Wang, and Ignacio Lopez Moreno. 2021. Speakerstew: Scaling to many languages with a triaged multilingual text-dependent and text-independent speaker verification system. arXiv preprint arXiv:2104.02125 (2021). https://doi.org/10.48550/arXiv.2104.02125

Pedro Domingos. 2012. A few useful things to know about machine learning. Commun. ACM 55, 10 (2012), 78–87. https://doi.org/10.1145/2347736.2347755

Robert M French. 1999. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences 3, 4 (1999), 128–135. https://doi.org/10.1016/s1364-6613(99)01294-2

Yaroslav Ganin and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. In International Conference on Machine Learning. PMLR, 1180–1189. https://doi.org/10.48550/arXiv.1409.7495

Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. The Journal of Machine Learning Research 17, 1 (2016), 2096–2030. https://doi.org/10.1007/978-3-319-58347-1_10

Ian J Goodfellow, Mehdi Mirza, Da Xiao, Aaron Courville, and Yoshua Bengio. 2013. An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211 (2013). https://doi.org/10.48550/arXiv.1312.6211

Ishaan Gulrajani and David Lopez-Paz. 2020. In search of lost domain generalization. arXiv preprint arXiv:2007.01434 (2020). https://doi.org/10.48550/arXiv.2007.01434

Abhinav Jain, Hima Patel, Lokesh Nagalapatti, Nitin Gupta, Sameep Mehta, Shanmukha Guttula, Shashank Mujumdar, Shazia Afzal, Ruhi Sharma Mittal, and Vitobha Munigala. 2020. Overview and Importance of Data Quality for Machine Learning Tasks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (New York, NY, USA, 2020-08-20) (KDD ’20). Association for Computing Machinery, 3561–3562. https://doi.org/10.1145/3394486.3406477

Guoliang Kang, Lu Jiang, Yi Yang, and Alexander G Hauptmann. 2019. Contrastive adaptation network for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 4893–4902. https://doi.org/10.1109/CVPR.2019.00503

Egoitz Laparra, Steven Bethard, and Timothy A Miller. 2020. Rethinking domain adaptation for machine learning over clinical language. JAMIA Open 3, 2 (2020), 146–150. https://doi.org/10.1093/jamiaopen/ooaa010

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature 521, 7553 (2015), 436–444. https://doi.org/10.1038/nature14539

Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales. 2017. Deeper, broader and artier domain generalization. In Proceedings of the IEEE International Conference on Computer Vision. 5542–5550. https://doi.org/10.1109/ICCV.2017.591

Haoliang Li, Sinno Jialin Pan, Shiqi Wang, and Alex C Kot. 2018. Domain generalization with adversarial feature learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 5400–5409. https://doi.org/10.1109/CVPR.2018.00566

Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Paden Tomasello, Jacob Kahn, Gilad Avidov, Ronan Collobert, and Gabriel Synnaeve. 2020. Rethinking evaluation in ASR: Are our models robust enough? arXiv preprint arXiv:2010.11745 (2020). https://doi.org/10.21437/Interspeech.2021-1758

Yajing Liu, Xinmei Tian, Ya Li, Zhiwei Xiong, and Feng Wu. 2019. Compact feature learning for multi-domain image classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 7193–7201. https://doi.org/10.1109/CVPR.2019.00736

Gautham J Mysore. 2014. Can we automatically transform speech recorded on common consumer devices in real-world environments into professional production quality speech?—a dataset, insights, and challenges. IEEE Signal Processing Letters 22, 8 (2014), 1006–1010. https://doi.org/10.1109/LSP.2014.2379648

Jaemin Na, Heechul Jung, Hyung Jin Chang, and Wonjun Hwang. 2021. Fixbi: Bridging domain spaces for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 1094–1103. https://doi.org/10.1109/CVPR46437.2021.00115

Arun Narayanan, Ananya Misra, Khe Chai Sim, Golan Pundak, Anshuman Tripathi, Mohamed Elfeky, Parisa Haghani, Trevor Strohman, and Michiel Bacchiani. 2018. Toward domain-invariant speech recognition via large scale training. In 2018 IEEE Spoken Language Technology Workshop (SLT). IEEE, 441–447. https://doi.org/10.1109/SLT.2018.8639610

Shuteng Niu, Yongxin Liu, Jian Wang, and Houbing Song. 2020. A decade survey of transfer learning (2010–2020). IEEE Transactions on Artificial Intelligence 1, 2 (2020), 151–166. https://doi.org/10.1109/TAI.2021.3054609

Joaquin Quinonero-Candela, Masashi Sugiyama, Anton Schwaighofer, and Neil D Lawrence. 2008. Dataset shift in machine learning. MIT Press.

Joao Ribeiro, Francisco S Melo, and Joao Dias. 2019. Multi-task learning and catastrophic forgetting in continual reinforcement learning. arXiv preprint arXiv:1909.10008 (2019). https://doi.org/10.48550/arXiv.1909.10008

Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. 2010. Adapting visual category models to new domains. In European Conference on Computer Vision. Springer, 213–226. https://doi.org/10.1007/978-3-642-15561-1_16

Nithya Sambasivan, Shivani Kapania, Hannah Highfill, Diana Akrong, Praveen Paritosh, and Lora M Aroyo. 2021. “Everyone wants to do the model work, not the data work”: Data Cascades in High-Stakes AI. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–15. https://doi.org/10.1145/3411764.3445518

E Schweighofer. 2022. Data-Centric Machine Learning: Improving Model Performance and Understanding Through Dataset Analysis. In Legal Knowledge and Information Systems: JURIX 2021: The Thirty-fourth Annual Conference, Vilnius, Lithuania, 8-10 December 2021, Vol. 346. IOS Press, 54. https://doi.org/10.3233/FAIA210316

Anthony Sicilia, Xingchen Zhao, Davneet S Minhas, Erin E O’Connor, Howard J Aizenstein, William E Klunk, Dana L Tudorascu, and Seong Jae Hwang. 2021. Multi-domain learning by meta-learning: Taking optimal steps in multi-domain loss landscapes by inner-loop learning. In 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). IEEE, 650–654.

Joaquin Vanschoren. 2018. Meta-learning: A survey. arXiv preprint arXiv:1810.03548 (2018). https://doi.org/10.48550/arXiv.1810.03548

Karl Weiss, Taghi M Khoshgoftaar, and DingDing Wang. 2016. A survey of transfer learning. Journal of Big Data 3, 1 (2016), 1–40. https://doi.org/10.1186/s40537-016-0043-6

Shaoan Xie, Zibin Zheng, Liang Chen, and Chuan Chen. 2018. Learning semantic representations for unsupervised domain adaptation. In International Conference on Machine Learning. PMLR, 5423–5432.

Tongkun Xu, Weihua Chen, Pichao Wang, Fan Wang, Hao Li, and Rong Jin. 2021. Cdtrans: Cross-domain transformer for unsupervised domain adaptation. arXiv preprint arXiv:2109.06165 (2021). https://doi.org/10.48550/arXiv.2109.06165

Xiang Xu, Xiong Zhou, Ragav Venkatesan, Gurumurthy Swaminathan, and Orchid Majumder. 2019. d-sne: Domain adaptation using stochastic neighborhood embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2497–2506. https://doi.org/10.1007/978-3-030-45529-3_3
Published
23/10/2023
How to Cite

BENDER, Alexandre Thurow; SOUZA, Emillyn Mellyne Gobetti; BENDER, Ihan Belmonte; CORRÊA, Ulisses Brisolara; ARAUJO, Ricardo Matsumura. Improving Multi-Domain Learning by Balancing Batches With Domain Information. In: SIMPÓSIO BRASILEIRO DE SISTEMAS MULTIMÍDIA E WEB (WEBMEDIA), 29., 2023, Ribeirão Preto/SP. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 96–103.