Kub: Enabling Elastic HPC Workloads on Containerized Environments

  • Daniel Medeiros, KTH Royal Institute of Technology
  • Jacob Wahlgren, KTH Royal Institute of Technology
  • Gabin Schieffer, KTH Royal Institute of Technology
  • Ivy Peng, KTH Royal Institute of Technology

Abstract


The conventional model of resource allocation in HPC systems is static: a job cannot leverage newly available resources in the system or release underutilized resources during execution. In this paper, we present Kub, a methodology that enables elastic execution of HPC workloads on Kubernetes so that the resources allocated to a job can be dynamically scaled during execution. One main optimization of our method is to maximize the reuse of the originally allocated resources so that disruption to the running job is minimized. The scaling procedure is coordinated among nodes through remote procedure calls on Kubernetes for deploying workloads in the cloud. We evaluate our approach using one synthetic benchmark and two production-level MPI-based HPC applications, GROMACS and CM1. Our results demonstrate that the benefits of adapting the allocated resources depend on the workload characteristics. In the tested cases, a properly chosen scaling point for increasing resources during execution achieved up to a 2x speedup. Moreover, the overhead of checkpointing and data reshuffling significantly influences the selection of optimal scaling points and requires application-specific knowledge.
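As a minimal illustrative sketch only, and not the authors' Kub implementation, the snippet below shows one way a containerized MPI job's worker pool could be resized on Kubernetes by patching the replica count of a StatefulSet through the official Python client; the StatefulSet name "mpi-workers", the namespace "hpc", and the target replica count are hypothetical placeholders.

```python
# Hypothetical sketch: grow or shrink a containerized job's worker pool by
# patching the replica count of a Kubernetes StatefulSet. This is NOT the
# paper's Kub method, only an illustration of the underlying Kubernetes API.
from kubernetes import client, config


def scale_workers(name: str, namespace: str, replicas: int) -> None:
    """Request a new replica count; existing pods are reused where possible."""
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    apps = client.AppsV1Api()
    apps.patch_namespaced_stateful_set_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )


if __name__ == "__main__":
    # Hypothetical scaling point: expand the job to 8 workers; the application
    # would then checkpoint, redistribute data, and resume with the new ranks.
    scale_workers("mpi-workers", "hpc", replicas=8)
```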
Keywords: HPC, Cloud, scaling, Kubernetes, Elasticity, Malleability
Published
17/10/2023
How to Cite

MEDEIROS, Daniel; WAHLGREN, Jacob; SCHIEFFER, Gabin; PENG, Ivy. Kub: Enabling Elastic HPC Workloads on Containerized Environments. In: INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE AND HIGH PERFORMANCE COMPUTING (SBAC-PAD), 35., 2023, Porto Alegre/RS. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 219-229.