Ensemble Co-Teaching for Robust Learning of Deep Neural Networks under Label Noise
Abstract
Training deep neural networks under label noise is challenging because of their ability to memorize corrupted labels. Various methods have been developed to enable robust learning under such conditions. Among them, methods based on multiple networks, such as Stochastic Co-Teaching, have demonstrated superior performance in identifying correctly labeled instances compared to state-of-the-art approaches. In this paper, we propose a new method, Ensemble Co-Teaching, which introduces ensemble learning into robust training by applying perturbations to the network weights. These perturbations promote diversity between the two networks and improve their ability to identify cleanly labeled samples. The proposed Ensemble Co-Teaching method improved accuracy, reaching 91.0% compared to 88.9% for the Co-Teaching method.
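To make the selection-and-update mechanism concrete, below is a minimal PyTorch sketch of one co-teaching step with an added weight-perturbation routine in the spirit of the abstract. The names co_teaching_step, small_loss_indices, and perturb_weights, the Gaussian form of the noise, and the values of sigma and keep_fraction are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def small_loss_indices(logits, labels, keep_fraction):
    # Per-sample cross-entropy; the samples with the smallest loss are the
    # ones most likely to carry clean labels (the co-teaching heuristic).
    losses = F.cross_entropy(logits, labels, reduction="none")
    num_keep = max(1, int(keep_fraction * labels.size(0)))
    return torch.argsort(losses)[:num_keep]

def perturb_weights(model, sigma=0.01):
    # Assumed diversity mechanism: add small Gaussian noise to every
    # parameter so the two networks do not converge to identical solutions.
    with torch.no_grad():
        for p in model.parameters():
            p.add_(sigma * torch.randn_like(p))

def co_teaching_step(net1, net2, opt1, opt2, x, y, keep_fraction):
    logits1, logits2 = net1(x), net2(x)
    idx1 = small_loss_indices(logits1.detach(), y, keep_fraction)
    idx2 = small_loss_indices(logits2.detach(), y, keep_fraction)
    # Cross-update: each network trains on its peer's small-loss selection.
    loss1 = F.cross_entropy(logits1[idx2], y[idx2])
    loss2 = F.cross_entropy(logits2[idx1], y[idx1])
    opt1.zero_grad(); loss1.backward(); opt1.step()
    opt2.zero_grad(); loss2.backward(); opt2.step()

# Example usage with two small classifiers (architecture is illustrative):
net1 = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
net2 = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
perturb_weights(net2, sigma=0.01)  # assumed: perturb one network for diversity
opt1 = torch.optim.Adam(net1.parameters(), lr=1e-3)
opt2 = torch.optim.Adam(net2.parameters(), lr=1e-3)
x, y = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))
co_teaching_step(net1, net2, opt1, opt2, x, y, keep_fraction=0.8)

In a full training loop, keep_fraction would typically be scheduled to decrease over the first epochs toward one minus the estimated noise rate, as in the original Co-Teaching method of Han et al. (2018).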
Keywords:
Label Noise, Deep Learning, Co-Teaching
References
Han, B., Yao, Q., Yu, X., Niu, G., Xu, M., Hu, W., Tsang, I., and Sugiyama, M. (2018). Co-teaching: Robust training of deep neural networks with extremely noisy labels. In Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018).
Jiang, L., Zhou, Z., Leung, T., Li, L., and Fei-Fei, L. (2018). Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In Proceedings of the International Conference on Machine Learning (ICML 2018).
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324.
Lee, J. and Chung, S. (2020). Robust training with ensemble consensus. In Proceedings of the 8th International Conference on Learning Representations (ICLR 2020).
Malach, E. and Shalev-Shwartz, S. (2017). Decoupling “when to update” from “how to update”. In Proceedings of the Conference on Neural Information Processing Systems (NIPS 2017).
Patrini, G., Rozza, A., Menon, A., Nock, R., and Qu, L. (2017). Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017).
van Rooyen, B., Menon, A., and Williamson, R. (2015). Learning with symmetric label noise: The importance of being unhinged. In Proceedings of the 28th International Conference on Neural Information Processing Systems.
Shen, Y. and Sanghavi, S. (2019). Learning with bad training data via iterative trimmed loss minimization. In Proceedings of the International Conference on Machine Learning (ICML 2019).
Song, H., Kim, M., Park, D., Shin, Y., and Lee, J. (2022). Learning from noisy labels with deep neural networks: A survey. IEEE Transactions on Neural Networks and Learning Systems, 34(11):8135–8153.
Vos, B., Jansen, G., and Isgum, I. (2023). Stochastic co-teaching for training neural networks with unknown levels of label noise. Scientific Reports, 13(16875).
Wei, H., Feng, L., Chen, X., and An, B. (2020). Combating noisy labels by agreement: A joint training method with co-regularization. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR 2020).
Yu, X., Han, B., Yao, J., Niu, G., Tsang, I., and Sugiyama, M. (2019). How does disagreement help generalization against label corruption? In Proceedings of the International Conference on Machine Learning (ICML 2019).
Published
17/11/2024
How to Cite
MIYAJI, Renato O.; CORRÊA, Pedro L. P. Ensemble Co-Teaching for Robust Learning of Deep Neural Networks under Label Noise. In: ENCONTRO NACIONAL DE INTELIGÊNCIA ARTIFICIAL E COMPUTACIONAL (ENIAC), 21., 2024, Belém/PA. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 777-786. ISSN 2763-9061. DOI: https://doi.org/10.5753/eniac.2024.245147.