A methodology for performance and cost evaluation of neural network training in cloud environments

  • Cláudio Márcio de Araújo Moura Filho (UFRPE)
  • Érica Teixeira Gomes de Souza (UFRPE)

Abstract


Deep neural networks are widely used to solve pattern-recognition problems, and many studies seek ways to optimize their performance. Training these networks, however, requires suitable hardware, which can be very expensive for small and medium-sized organizations. The objective of this work is to propose a methodology for evaluating the performance and cost of training convolutional neural networks in cloud environments, identifying the factors that most affect training time and estimating the total financial cost of the environment for this task. The results show that factors such as input image size and network architecture have a strong impact on the training-time metric and, consequently, on the total cost.
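Under a pay-as-you-go pricing model, the total financial cost of a training run follows directly from the measured training time and the instance's hourly price. The sketch below illustrates this relationship; the instance names, hourly prices, and training times are hypothetical placeholders, not values reported in the paper.

```python
# Minimal sketch of the cost-estimation idea described in the abstract:
# total cost = measured training time (hours) x hourly instance price.
# All instance names, prices, and durations below are hypothetical placeholders.

def training_cost(training_time_hours: float, hourly_price_usd: float) -> float:
    """Estimate the cost of one training run on a pay-as-you-go instance."""
    return training_time_hours * hourly_price_usd

# Hypothetical scenarios: larger input images and deeper architectures
# increase training time and therefore the total cost of the run.
scenarios = [
    {"instance": "gpu-small", "hourly_price_usd": 0.50, "training_time_hours": 2.0},
    {"instance": "gpu-large", "hourly_price_usd": 3.00, "training_time_hours": 0.5},
]

for s in scenarios:
    cost = training_cost(s["training_time_hours"], s["hourly_price_usd"])
    print(f'{s["instance"]}: {s["training_time_hours"]} h x '
          f'${s["hourly_price_usd"]}/h = ${cost:.2f}')
```

In this model, a faster but more expensive instance can still reduce the total cost whenever it shortens training time enough to offset its higher hourly price.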

Published
2024-07-21
MOURA FILHO, Cláudio Márcio de Araújo; SOUZA, Érica Teixeira Gomes de. A methodology for performance and cost evaluation of neural network training in cloud environments. In: WORKSHOP ON PERFORMANCE OF COMPUTER AND COMMUNICATION SYSTEMS (WPERFORMANCE), 23., 2024, Brasília/DF. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 1-12. ISSN 2595-6167. DOI: https://doi.org/10.5753/wperformance.2024.1986.