Continuous Integration for Machine Learning Experiments Reproducibility: a Practical Study
The development of a Machine Learning (ML) model depends on many training variables. Both architecture-related variables, such as initial weights and hyperparameters, and environment-related variables, such as datasets and framework versions, can affect model metrics and experiment reproducibility. An application cannot be considered trustworthy if it produces good results only in one specific environment. Therefore, to avoid reproducibility issues, a set of good practices must be adopted. This paper reports a practical experience of developing a machine learning application using a workflow that ensures the reproducibility of its experiments and, consequently, the application's reliability, while also improving team productivity.
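The sources of nondeterminism named above, initial weights on the one hand and framework versions on the other, can both be controlled in code. The sketch below (hypothetical helper names, not taken from the paper) shows one common way to do so: fixing random seeds and fingerprinting the runtime environment so a CI job can detect drift between runs.

```python
# Minimal sketch, assuming a plain Python experiment: fix the random
# seeds that determine initial weights, and hash a description of the
# environment so CI can flag version drift between two runs.
import hashlib
import json
import platform
import random


def fix_seeds(seed: int = 42) -> None:
    """Seed the random number generators the experiment uses."""
    random.seed(seed)
    # If ML frameworks are installed, seed them here as well, e.g.:
    # numpy.random.seed(seed); torch.manual_seed(seed)


def environment_fingerprint(package_versions: dict) -> str:
    """Hash the runtime description; a changed hash means the
    environment (and possibly the results) may differ."""
    description = {
        "python": platform.python_version(),
        "packages": dict(sorted(package_versions.items())),
    }
    payload = json.dumps(description, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()


fix_seeds(42)
print(environment_fingerprint({"scikit-learn": "1.4.2"}))
```

A CI pipeline can store the fingerprint alongside the model metrics; if a later run reproduces the metrics only under a different fingerprint, the discrepancy points to an environment change rather than a code change.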