Evaluation of Energy Consumption for Machine Learning Training using ARM-based Single-Board Computers

  • Felipe Bernardo (LNCC)
  • André Yokoyama (LNCC)
  • Bruno Schulze (LNCC)
  • Mariza Ferro (LNCC)

Abstract


This work evaluates the use of ARM-based single-board computers for training Machine Learning (ML) algorithms. An experimental setup was developed in which the XGBoost algorithm was trained with 36 hyperparameter configurations on four different architectures. In addition, their efficiency (energy consumption, acquisition cost, and execution time) was compared with the main architectures used for training ML algorithms (x86 and GPU). The results show that this type of architecture can become a viable and greener alternative, not only for inference but also for the training phase of these algorithms.
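
The experimental code is not reproduced here; the following is a minimal Python sketch of the kind of sweep the abstract describes: training XGBoost over a 36-configuration hyperparameter grid, timing each run, and estimating energy as average power multiplied by training time. The grid values, the synthetic dataset, and the AVG_POWER_WATTS constant are illustrative assumptions, not the authors' actual settings or measurements.

```python
# Hypothetical sketch (not the paper's harness): sweep 36 XGBoost hyperparameter
# configurations, time each training run, and estimate energy as E = P * t,
# where P is an assumed average board power draw measured externally.
import itertools
import time

import xgboost as xgb
from sklearn.datasets import make_classification

# Assumed grid: 4 x 3 x 3 = 36 configurations (illustrative values only).
grid = {
    "max_depth": [4, 6, 8, 10],
    "learning_rate": [0.05, 0.1, 0.3],
    "n_estimators": [100, 200, 400],
}

# Synthetic dataset standing in for the paper's workload.
X, y = make_classification(n_samples=50_000, n_features=40, random_state=42)

AVG_POWER_WATTS = 5.0  # assumed average power draw of the ARM board under load

results = []
for depth, lr, n_est in itertools.product(*grid.values()):
    model = xgb.XGBClassifier(
        max_depth=depth,
        learning_rate=lr,
        n_estimators=n_est,
        tree_method="hist",  # CPU-friendly histogram method, suitable for ARM boards
        n_jobs=-1,
    )
    start = time.perf_counter()
    model.fit(X, y)
    elapsed = time.perf_counter() - start
    energy_joules = AVG_POWER_WATTS * elapsed  # E = P * t
    results.append((depth, lr, n_est, elapsed, energy_joules))

for depth, lr, n_est, elapsed, energy in results:
    print(f"depth={depth} lr={lr} trees={n_est} "
          f"time={elapsed:.1f}s energy={energy:.0f}J")
```

In practice, per-run energy would come from a physical power meter or the board's onboard sensors rather than a fixed constant; the constant is used here only to keep the sketch self-contained.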

Published
26/10/2021
How to Cite

BERNARDO, Felipe; YOKOYAMA, André; SCHULZE, Bruno; FERRO, Mariza. Avaliação do Consumo de Energia para o Treinamento de Aprendizado de Máquina utilizando Single-board computers baseadas em ARM. In: SIMPÓSIO EM SISTEMAS COMPUTACIONAIS DE ALTO DESEMPENHO (SSCAD), 22., 2021, Belo Horizonte. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. p. 60-71. DOI: https://doi.org/10.5753/wscad.2021.18512.