Moving towards automated game play-testing
Abstract
Introduction: Prototyping games requires multiple testing phases, some of which focus on balancing rules and keeping the game fun. These phases are typically carried out by human testers, who may bias the results and require a significant time investment. Objective: This work in progress reports the implementation of a framework for stress-testing games during the development phase. Method: The data for decision-making is generated by intelligent agents trained with an AlphaZero-inspired method, which combines residual neural networks with the Monte-Carlo Tree Search algorithm. Expected results: Game designers should be able to describe their game on our platform. Training and deploying these agents will allow us to detect potential balancing problems and dominant strategies, providing valuable information to guide changes to the rules and thus improve the player experience.
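To illustrate the AlphaZero-style search step the method relies on, the sketch below shows the PUCT selection rule that combines the network's move priors with visit statistics. It is a minimal illustration, not the framework's implementation: the `policy_value` function is a hypothetical stand-in for the trained residual network, and the uniform priors it returns are a placeholder assumption.

```python
import math

# Hypothetical stand-in for the residual network: given a state, return a
# prior probability for each legal move and a value estimate in [-1, 1].
# A trained ResNet would replace this uniform placeholder.
def policy_value(state, legal_moves):
    p = 1.0 / len(legal_moves)
    return {move: p for move in legal_moves}, 0.0

class Node:
    """One edge's statistics in the search tree."""
    def __init__(self, prior):
        self.prior = prior       # P(s, a) from the network
        self.visits = 0          # N(s, a)
        self.value_sum = 0.0     # W(s, a)
        self.children = {}       # move -> Node

    def q(self):
        # Mean action value Q(s, a); 0 for unvisited edges.
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """AlphaZero's PUCT rule:
    argmax_a  Q(s,a) + c_puct * P(s,a) * sqrt(sum_b N(s,b)) / (1 + N(s,a))
    Unvisited moves with high priors get a large exploration bonus."""
    total_visits = sum(child.visits for child in node.children.values())
    best_move, best_score = None, -math.inf
    for move, child in node.children.items():
        exploration = c_puct * child.prior * math.sqrt(total_visits) / (1 + child.visits)
        score = child.q() + exploration
        if score > best_score:
            best_move, best_score = move, score
    return best_move
```

In a full search loop this selection step would be applied repeatedly from the root, expanding leaves with `policy_value` and backing the value estimate up the visited path; the design choice here is that move priors steer exploration before any simulation statistics exist.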
