Few Data Diversification in Training Generative Adversarial Networks

  • Lucas Fontes Buzutti Centro Universitário FEI
  • Carlos Eduardo Thomaz Centro Universitário FEI


The first GANs produced sharp images only at relatively low resolution, with limited variation and unstable training. Later works proposed new GAN models capable of generating sharp, high-resolution images with a high level of variation. However, these models rely on virtually unlimited, highly diversified image sets. We discuss here the use of these models with real-world image sets, which are typically composed of limited sample sizes.

Keywords: generative adversarial nets, limited data set, synthetic images, limited sample size sets


How to Cite

BUZUTTI, Lucas Fontes; THOMAZ, Carlos Eduardo. Few Data Diversification in Training Generative Adversarial Networks. In: WORKSHOP DE VISÃO COMPUTACIONAL (WVC), 17., 2021, Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. p. 70-75. DOI: https://doi.org/10.5753/wvc.2021.18892.