Exploiting Non-conventional DVFS on GPUs: Application to Deep Learning

  • Francisco Mendes INESC-ID / Universidade de Lisboa
  • Pedro Tomás Universidade de Lisboa
  • Nuno Roma INESC-ID / Universidade de Lisboa

Abstract


The use of Graphics Processing Units (GPUs) to accelerate Deep Neural Network (DNN) training and inference is already widely adopted, allowing for a significant increase in the performance of these applications. However, this performance gain comes at the cost of a corresponding increase in energy consumption. While several solutions have been proposed to perform Voltage-Frequency (V-F) scaling on GPUs, these remain one-dimensional, simply adjusting the frequency while relying on default voltage settings. To overcome this, this paper introduces a methodology to fully characterize the impact of non-conventional Dynamic Voltage and Frequency Scaling (DVFS) on GPUs. The proposed approach was applied to an AMD Vega 10 Frontier Edition GPU. When applying this non-conventional DVFS scheme to DNNs, the obtained results show that it is possible to safely decrease the GPU voltage, allowing for a significant reduction of the energy consumption (up to 38%) and of the Energy-Delay Product (EDP) (up to 41%) in the training of CNN models, with no degradation of the networks' accuracy.
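The paper itself does not include code, but on Linux the kind of non-conventional V-F adjustment described above can be performed on AMD GPUs through the amdgpu driver's OverDrive sysfs interface (`pp_od_clk_voltage`), which accepts per-state frequency/voltage overrides. The sketch below is a hypothetical illustration only: the sysfs path, the performance-state index (7), and the 1600 MHz / 1050 mV values are illustrative, not the paper's measured settings, and applying them requires root plus the OverDrive feature bit enabled in `amdgpu.ppfeaturemask`. It is shown in dry-run form (commands are printed, not executed) to avoid accidentally reprogramming a GPU.

```shell
# Hypothetical undervolting sketch via the amdgpu OverDrive sysfs interface.
# Dry-run: 'run' only prints each command; replace the echo with `eval "$*"`
# (as root) to actually apply the settings.
SYSFS=/sys/class/drm/card0/device/pp_od_clk_voltage

run() { echo "+ $*"; }   # print the command instead of executing it

# Override sclk state 7: 1600 MHz at 1050 mV (illustrative values)
run "echo 's 7 1600 1050' > $SYSFS"
# Commit the modified voltage-frequency table
run "echo 'c' > $SYSFS"
```

In practice, a characterization methodology like the paper's would sweep such voltage values downward at each frequency, checking for computation errors at every step, until the minimum safe voltage is found.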
Keywords: Graphics processing units, Benchmark testing, Kernel, Random access memory, Training, Voltage control, Memory management, GPU, DVFS, Undervolting
Published
08/09/2020
MENDES, Francisco; TOMÁS, Pedro; ROMA, Nuno. Exploiting Non-conventional DVFS on GPUs: Application to Deep Learning. In: INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE AND HIGH PERFORMANCE COMPUTING (SBAC-PAD), 32., 2020, Porto/Portugal. Proceedings [...]. Porto Alegre: Sociedade Brasileira de Computação, 2020. p. 1-9.