Impact of Heterogeneity on Multi-Agent Reinforcement Learning

  • Rodrigo Fonseca Marques, Pontifícia Universidade Católica de Minas Gerais
  • Zenilton Kleber Gonçalves do Patrocínio Júnior, Pontifícia Universidade Católica de Minas Gerais

Abstract

Most Multi-Agent Reinforcement Learning (MARL) methods and studies use homogeneous agents, and most work on heterogeneity concentrates on agents with different skill sets. In real-world applications, however, agents frequently possess the same set of skills but at different degrees of proficiency. In this paper, we propose a novel model for heterogeneous agents in a MARL system in which agents share a common skill set but differ in the intensity of each skill. Experiments were carried out in Soccer Twos, a game that mixes cooperation and competition, and in Tennis, a competitive game. Results demonstrate that heterogeneous agents outperform homogeneous ones in both environments and, in Soccer Twos, also acquire organizational abilities.
Keywords: Multi-agent Systems, Reinforcement Learning
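
As a rough illustration of the idea described in the abstract, the sketch below models agents that share one skill set (a common policy) but execute it at different intensities. Everything here is an assumption for illustration only: the SharedPolicy stand-in, the intensity coefficient, and the action scaling are hypothetical and do not reproduce the paper's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

class SharedPolicy:
    """Stand-in for a policy shared by all agents (e.g., one trained with PPO)."""
    def act(self, observation: np.ndarray) -> np.ndarray:
        # Placeholder: a real policy would map observations to actions.
        return np.tanh(rng.normal(size=3))  # e.g., [move_x, move_z, kick]

class HeterogeneousAgent:
    """Agent with the common skill set, executed at its own intensity."""
    def __init__(self, policy: SharedPolicy, intensity: float):
        self.policy = policy
        self.intensity = intensity  # hypothetical: degree to which skills are expressed

    def act(self, observation: np.ndarray) -> np.ndarray:
        # Same skills for every agent; only the magnitude differs.
        return self.intensity * self.policy.act(observation)

policy = SharedPolicy()
team = [HeterogeneousAgent(policy, k) for k in (0.6, 0.8, 1.0)]
obs = np.zeros(8)
for i, agent in enumerate(team):
    print(f"agent {i}: action = {agent.act(obs)}")
```

Varying the intensity coefficients across a team is one simple way to induce heterogeneity of degree while keeping the skill set itself identical, which is the distinction the abstract draws against heterogeneity of skill sets.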

Published
25/09/2023
How to Cite

MARQUES, Rodrigo Fonseca; PATROCÍNIO JÚNIOR, Zenilton Kleber Gonçalves do. Impact of Heterogeneity on Multi-Agent Reinforcement Learning. In: ENCONTRO NACIONAL DE INTELIGÊNCIA ARTIFICIAL E COMPUTACIONAL (ENIAC), 20., 2023, Belo Horizonte/MG. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 1048-1062. ISSN 2763-9061. DOI: https://doi.org/10.5753/eniac.2023.234582.