Scale-Invariant Reinforcement Learning in Real-Time Strategy Games

  • Marcelo Luiz Harry Diniz Lemos (UFMG)
  • Ronaldo E. Silva Vieira (UFMG)
  • Anderson Rocha Tavares (UFRGS)
  • Leandro Soriano Marcolino (Lancaster University)
  • Luiz Chaimowicz (UFMG)

Abstract

Real-time strategy (RTS) games pose a significant challenge for artificial game-playing agents because they combine several fundamental AI problems. Despite these difficulties, agents trained with Deep Reinforcement Learning have been successful, with bots such as AlphaStar beating even expert human players. Many RTS games include several distinct world maps with different dimensions, which affect the agent's observation and the representation of game states. However, most current architectures are restricted to fixed input sizes or require extensive and complex training. In this paper, we overcome these limitations by combining Grid-Wise Control with Spatial Pyramid Pooling (SPP). Specifically, we employ the encoder-decoder framework provided by the GridNet architecture and enhance the critic component of PPO with an SPP layer. The new layer generates a fixed-size representation of any game state regardless of the initial observation dimensions, allowing the agent to act on maps of any size. Our evaluation demonstrates that the proposed method improves the model's flexibility and provides a more effective and efficient solution for training autonomous agents in multiple RTS game scenarios.
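The scale invariance the abstract attributes to the SPP layer can be illustrated with a minimal sketch. This is plain NumPy, not the authors' implementation, and the pyramid levels (1×1, 2×2, 4×4 grids) are illustrative assumptions: each level max-pools the feature map over an n×n grid of cells, so the concatenated output has a fixed length for any input height and width.

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Pool a (C, H, W) feature map into a fixed-length vector.

    For each pyramid level n, the map is divided into an n x n grid and
    max-pooled per cell, so the output length is C * sum(n * n for n in
    levels) regardless of H and W.
    """
    c, h, w = feature_map.shape
    pooled = []
    for n in levels:
        # Cell boundaries are computed adaptively so any H, W is covered.
        h_edges = np.linspace(0, h, n + 1, dtype=int)
        w_edges = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                cell = feature_map[:, h_edges[i]:h_edges[i + 1],
                                      w_edges[j]:w_edges[j + 1]]
                pooled.append(cell.max(axis=(1, 2)))
    return np.concatenate(pooled)

# Maps of different sizes yield vectors of identical length,
# so a fixed-size critic head can consume observations from any map:
small = spatial_pyramid_pool(np.random.rand(32, 8, 8))
large = spatial_pyramid_pool(np.random.rand(32, 16, 24))
assert small.shape == large.shape == (32 * (1 + 4 + 16),)
```

In the paper's setting this fixed-length vector would feed the PPO critic's value head; the choice of max pooling and of the level set here is a common SPP convention, not a detail confirmed by the abstract.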
Published
2023-11-06
How to Cite
LEMOS, Marcelo Luiz Harry Diniz et al. Scale-Invariant Reinforcement Learning in Real-Time Strategy Games. Proceedings of the Brazilian Symposium on Computer Games and Digital Entertainment (SBGames), [S.l.], p. 11–19, nov. 2023. ISSN 0000-0000. Available at: <https://sol.sbc.org.br/index.php/sbgames/article/view/27663>. Date accessed: 17 may 2024.