Impact of a Dynamic Allocation Policy for Resource and Job Management Systems in Deadline-Oriented Scenarios
Abstract
As High Performance Computing (HPC) becomes a tool used in many different workflows, Quality of Service (QoS) grows increasingly important. In many cases this includes the reliable execution of an HPC job and the delivery of its results by a given deadline. The Resource and Job Management System (RJMS, or simply RMS) receives job requests and must execute the jobs under a deadline-oriented policy to support such workflows. In this paper, we evaluate how well static resource management policies cope with deadline-constrained HPC jobs, and we explore two variations of a dynamic policy in this context. Our preliminary results clearly show that a dynamic policy is needed to meet the requirements of a modern deadline-oriented RMS scenario.
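The contrast between static and dynamic allocation can be illustrated with a minimal sketch. The code below is not the policy evaluated in the paper; it is a toy admission check under the (optimistic) assumption of linear speedup, with hypothetical job attributes (`work` in node-hours, `deadline` in hours) chosen for illustration. A static policy fixes the node count at submission time, while an idealized dynamic policy may grow the allocation to any currently free nodes to meet the deadline.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    work: float      # total node-hours required (hypothetical estimate)
    deadline: float  # hours from now by which results are due

def meets_deadline_static(job: Job, nodes: int) -> bool:
    # Static policy: the allocation size is fixed at submission time,
    # so the runtime is the total work divided by that fixed node count.
    return job.work / nodes <= job.deadline

def meets_deadline_dynamic(job: Job, free_nodes: int) -> bool:
    # Dynamic policy (idealized): the RJMS may enlarge the allocation
    # up to all currently free nodes. Linear speedup is assumed here,
    # which is an optimistic simplification for illustration only.
    return job.work / free_nodes <= job.deadline

job = Job("simulation", work=64.0, deadline=10.0)
print(meets_deadline_static(job, nodes=4))        # 64/4 = 16 h > 10 h deadline
print(meets_deadline_dynamic(job, free_nodes=8))  # 64/8 = 8 h <= 10 h deadline
```

Even in this simplified model, the statically allocated job misses its deadline while the dynamically grown allocation meets it, which is the intuition the paper's evaluation makes precise.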