Comparison of Different Adaptable Cache Bypassing Approaches

  • Mariana Carmin Universidade Federal do Paraná
  • Leandro Augusto Ensina Universidade Federal do Paraná
  • Marco Antonio Zanata Alves Universidade Federal do Paraná

Abstract


Most modern microprocessors employ a deep cache hierarchy to hide the latency of accessing main memory. As the number of cores grows, so does the shared Last-Level Cache (LLC), which consumes a large portion of the chip's total power and area. For applications with poor temporal and spatial locality, this same cache hierarchy can become an extra latency barrier. Sophisticated solutions are therefore needed to ensure optimal resource utilization and mitigate such cache problems. In this scenario, an adaptive cache mechanism can benefit these applications, improving overall system performance and decreasing energy consumption. When multiple programs run concurrently, adapting each application's use of the LLC avoids cache conflicts and cache pollution, increasing system performance. In this paper, we assess two approaches, based on regression and classification models, to adapt the use of the LLC at run-time, both relying on hardware counters. Analyzing the efficiency and overhead of each model with SPEC CPU 2006 and 2017, we observe better performance for the classification model based on the Random Forest algorithm for both single- and multi-program workloads.
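The idea of a counter-driven bypass predictor can be illustrated with a minimal sketch. Note that the feature set, labels, and training data below are entirely hypothetical and do not reproduce the paper's actual counters, thresholds, or methodology; the sketch only shows the general shape of a Random Forest classifier mapping hardware-counter samples to a bypass/no-bypass decision.

```python
# Hypothetical sketch: a Random Forest deciding whether an application
# phase should bypass the LLC, based on sampled hardware counters.
# Features, labels, and data are made up for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical counter-derived features: [LLC miss ratio, MPKI, reuse proxy]
X = rng.random((200, 3))
# Hypothetical label: 1 = bypass the LLC, 0 = cache normally
y = (X[:, 0] > 0.7).astype(int)

model = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
# Counters sampled at run-time would be fed to the trained model:
decision = model.predict([[0.9, 0.5, 0.2]])[0]
```

In a real deployment, the model would be trained offline on profiled workloads and queried periodically at run-time, trading prediction accuracy against the hardware/software overhead of sampling counters and evaluating the forest.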
Keywords: memory architectures, bypass, last-level cache, artificial intelligence, machine learning
Published
21/11/2022
How to Cite

CARMIN, Mariana; ENSINA, Leandro Augusto; ALVES, Marco Antonio Zanata. Comparison of Different Adaptable Cache Bypassing Approaches. In: SIMPÓSIO BRASILEIRO DE ENGENHARIA DE SISTEMAS COMPUTACIONAIS (SBESC), 12., 2022, Fortaleza/CE. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2022. p. 1-8. ISSN 2237-5430.