Abstract
The recent progress in applying Reinforcement Learning to Resource Management has presented Markov Decision Processes (MDPs) without a deeper analysis of the impact of design decisions on agent performance. In this paper, we compare and contrast four different MDP variations, discussing their computational requirements and their impact on agent performance by means of an empirical analysis. We conclude by showing that, in our experiments, when Multi-Layer Perceptrons are used as the approximation function, a compact state representation allows agents to be transferred between environments, and that the transferred agents perform well, outperforming specialized agents in 80% of the tested scenarios even without retraining.
Notes
- 1.
Some schedulers allow memory resources to be oversubscribed in their default configuration, motivated by the fact that jobs do not use their peak memory throughout their entire lifetimes.
- 2.
Some authors leave the \(\gamma \) component out of the definition of the MDP. Keeping it in the definition yields a more general formulation, since it allows one to model continuing (non-terminating) learning settings (see the return equation after these notes).
- 3.
The value shown for \(R_2\) might seem to contradict the previous discussion, but the MDP is set up so that, when jobs are scheduled successfully, \(R_{t+1}=0\).
- 4.
In our example, for each job \(j_i\), in time step 1, \(\pi \) would give the probabilities of choosing each job given an empty cluster: \(\pi (j_1 \mid s_1)\), \(\pi (j_2 \mid s_1)\), and \(\pi (j_3 \mid s_1)\), such that, by total probability, \(\sum _{i} \pi (j_i \mid s_1) = 1\) (a policy sketch follows these notes).
- 5.
Normalization is needed so that \(\widehat{J(\theta )}\) approximates an average over samples; otherwise, \(\widehat{J(\theta )}\rightarrow \infty \) as \(N\rightarrow \infty \) (the estimator is written out after these notes).
- 6.
Jobs in the wait queue that the agent cannot choose to schedule.
- 7.
Truncating the list of jobs violates the Markov property, since, once the list overflows, the agent cannot know how many jobs are in the system (an aliasing example follows these notes).
- 8.
Parentheses group elements. In the first vector, there are five parenthesized pairs to indicate the time horizon of 5, and two parenthesized elements to represent the job slots in window \(W\) (a layout sketch follows these notes).
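Regarding note 2: the standard discounted return (textbook RL notation, not a formula reproduced from this paper) shows why keeping \(\gamma \) in the tuple accommodates non-terminating settings. For bounded rewards, the infinite sum stays finite whenever \(\gamma < 1\):

\[
G_t = \sum _{k=0}^{\infty } \gamma ^{k} R_{t+k+1}, \qquad |G_t| \le \frac{R_{\max }}{1-\gamma } \quad \text {for } 0 \le \gamma < 1.
\]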
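Regarding note 4: a minimal sketch of such a stochastic policy (hypothetical code; the paper's actual policy network is not shown here). A softmax over per-job scores produces a valid distribution, so the job-selection probabilities sum to one by construction:

```python
import numpy as np

def softmax_policy(scores: np.ndarray) -> np.ndarray:
    """Turn one score per schedulable job into selection probabilities.

    Subtracting the max is the usual numerical-stability trick; the
    output sums to 1 by construction (the total-probability property
    note 4 refers to).
    """
    z = np.exp(scores - scores.max())
    return z / z.sum()

# Illustrative scores for three jobs competing for an empty cluster.
probs = softmax_policy(np.array([1.2, 0.3, -0.5]))
print(probs, probs.sum())  # e.g. [0.63 0.26 0.11], summing to 1.0
```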
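Regarding note 5: the point is the usual Monte Carlo averaging (a standard estimator, not necessarily the exact one used in the paper). Over \(N\) sampled trajectories,

\[
\widehat{J(\theta )} = \frac{1}{N} \sum _{i=1}^{N} \sum _{t=0}^{T_i} \gamma ^{t} R_t^{(i)},
\]

and without the \(1/N\) factor the outer sum of (bounded) per-trajectory returns grows without bound as \(N \rightarrow \infty \).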
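Regarding note 7: a small hypothetical illustration (the window size and encoding below are placeholders, not the paper's) of how truncation makes distinct states observationally identical:

```python
WINDOW = 3  # hypothetical number of visible job slots

def observe(wait_queue: list[int]) -> tuple[int, ...]:
    """Expose only the first WINDOW job ids of the wait queue.

    Once the queue overflows the window, different underlying states
    collapse onto the same observation, so the observation alone no
    longer determines the system state: the Markov property is lost.
    """
    return tuple(wait_queue[:WINDOW])

short_queue = [101, 102, 103, 104]                    # 1 hidden job
long_queue = [101, 102, 103] + list(range(200, 300))  # 100 hidden jobs
assert observe(short_queue) == observe(long_queue)    # indistinguishable
```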
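Regarding note 8: a sketch of the grouping being described (the contents of each pair, treating each job slot as a single scalar, and the choice \(W = 2\) are assumptions made here for illustration; the paper defines the actual semantics):

```python
# Hypothetical flattening of the first vector described in note 8:
# five pairs for the time horizon of 5, then W = 2 job-slot entries.
HORIZON, W = 5, 2

horizon_pairs = [(0.0, 0.0)] * HORIZON  # one (placeholder) pair per step
job_slots = [0.0] * W                   # one (placeholder) entry per slot

# Flattened into the input an MLP would consume: 5 * 2 + 2 = 12 features.
state = [x for pair in horizon_pairs for x in pair] + job_slots
assert len(state) == HORIZON * 2 + W
```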