Avoiding Unnecessary Caching with History-Based Preemptive Bypassing

  • Arthur M. Krause UFRGS
  • Paulo C. Santos UFRGS
  • Philippe O. A. Navaux UFRGS

Abstract


Cache memories can account for more than half of the area and energy consumption of modern processors, a share that will only grow with the current trend toward larger on-die memories. Although these components are very effective when the access pattern is cache-friendly, they incur extra, unnecessary latency when they cannot serve the data, and placing data that is never reused in them adds up to significant energy waste. This work introduces HBPB, a mechanism that detects whether a memory access is cache-friendly and allows the cache to be bypassed for accesses that are not known to be. Our approach allows the processor to quickly detect when caching an access is inadequate, improving overall access latency while reducing energy waste and cache pollution. The presented solution achieves reductions of up to 28.6% in energy consumption and 19.5% in latency for SPEC applications, with further power and performance improvements across various workloads.
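The abstract does not detail HBPB's internal structures, so the sketch below only illustrates one common way a history-based bypass decision can be made: a small table of saturating counters indexed by a hash of the memory instruction's PC, trained on cache evictions and consulted on misses. The table size, counter width, function names, and training policy are all assumptions for illustration, not the paper's actual design.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical history table: 2-bit saturating reuse counters indexed by a
 * hash of the load/store PC. Sizes and policies are illustrative only. */
#define TABLE_ENTRIES 1024
#define COUNTER_MAX   3

static uint8_t reuse_counter[TABLE_ENTRIES];

static inline unsigned table_index(uint64_t pc)
{
    return (unsigned)((pc >> 2) % TABLE_ENTRIES);
}

/* Consulted on a cache miss: if this instruction's history shows little
 * reuse, skip allocation and forward the line directly to the requester. */
bool should_bypass(uint64_t pc)
{
    return reuse_counter[table_index(pc)] < 2;
}

/* Trained when a cached line is evicted: strengthen the counter if the line
 * was referenced again after insertion, weaken it otherwise. */
void train_on_eviction(uint64_t insert_pc, bool line_was_reused)
{
    uint8_t *c = &reuse_counter[table_index(insert_pc)];
    if (line_was_reused) {
        if (*c < COUNTER_MAX)
            (*c)++;
    } else if (*c > 0) {
        (*c)--;
    }
}
```

With such a predictor, accesses whose history shows no reuse avoid both the allocation latency and the pollution cost of occupying a cache line; counters that saturate high keep well-behaved accesses cached as usual.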
Keywords: Cache Memory, Energy Efficiency, Cache Bypassing
Published
02/11/2022
KRAUSE, Arthur M.; SANTOS, Paulo C.; NAVAUX, Philippe O. A. Avoiding Unnecessary Caching with History-Based Preemptive Bypassing. In: INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE AND HIGH PERFORMANCE COMPUTING (SBAC-PAD), 34., 2022, Bordeaux/France. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2022. p. 71-80.