Towards playing Risk with a hybrid Monte Carlo based agent

  • René G. Ferrari (UFSM)
  • Joaquim V. C. Assunção (UFSM)

Abstract


Over the last few decades, games have proven to be excellent test environments for artificial intelligence research due to their well-defined rules and clear evaluation methods. Aiming to advance the field, this paper proposes the development and analysis of a hybrid Monte Carlo based agent for Risk, a well-known strategy board game. To evaluate it, the proposed agent will face a heuristic agent based on a previously tested design. The expected outcome is to identify the advantages, drawbacks, and efficiency of using Monte Carlo methods in games such as Risk.
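As an illustration of the kind of estimate a Monte Carlo based agent can build on, the sketch below is our own minimal example, not the implementation described in the paper: it estimates the attacker's probability of conquering a territory under the standard Risk dice rules by simulating many complete battles.

```python
import random


def simulate_battle(attackers: int, defenders: int) -> bool:
    """Simulate one complete Risk battle and return True if the attacker conquers.

    Standard dice rules: the attacker must leave one army behind and rolls up to
    three dice, the defender rolls up to two, the highest dice are compared
    pairwise, and ties favour the defender.
    """
    while attackers > 1 and defenders > 0:
        att_dice = sorted((random.randint(1, 6) for _ in range(min(3, attackers - 1))),
                          reverse=True)
        def_dice = sorted((random.randint(1, 6) for _ in range(min(2, defenders))),
                          reverse=True)
        # Compare the highest dice pairwise; each comparison removes one army.
        for a, d in zip(att_dice, def_dice):
            if a > d:
                defenders -= 1
            else:
                attackers -= 1
    return defenders == 0


def estimate_win_probability(attackers: int, defenders: int, trials: int = 10_000) -> float:
    """Monte Carlo estimate of the attacker's chance of taking the territory."""
    wins = sum(simulate_battle(attackers, defenders) for _ in range(trials))
    return wins / trials


if __name__ == "__main__":
    # Example: 5 attacking armies against 3 defenders.
    print(f"P(attacker conquers) ~ {estimate_win_probability(5, 3):.3f}")
```

Estimates of this kind can inform an agent's attack decisions; a full game-playing agent would extend the idea from single battles to evaluating entire turns.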

Keywords: Artificial Intelligence, Monte Carlo, Agents, Risk

Published: 24/10/2022

FERRARI, René G.; ASSUNÇÃO, Joaquim V. C. Towards playing Risk with a hybrid Monte Carlo based agent. In: TRILHA DE COMPUTAÇÃO – ARTIGOS CURTOS - SIMPÓSIO BRASILEIRO DE JOGOS E ENTRETENIMENTO DIGITAL (SBGAMES), 21., 2022, Natal/RN. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2022. p. 301-306. DOI: https://doi.org/10.5753/sbgames_estendido.2022.225471.