Evaluating Deep Learning-based Chess-Engine Endgame Strategies

  • Epitácio Pessoa de Brito Neto UFPB
  • Telmo de Menezes e Silva Filho University of Bristol
  • Thaís Gaudencio do Rêgo UFPB
  • Yuri Malheiros UFPB

Abstract


Artificial Intelligence has been used to challenge human players in chess for decades. In 1997, IBM’s Deep Blue defeated the best chess player of that time, and chess engines have continued to improve since then. However, it is not clear whether these new high-performing chess engines are learning to replicate the way human grandmasters play or devising new strategies to win. Therefore, in this paper, we evaluated two chess engines that use deep learning approaches, Stockfish NNUE and Lc0, comparing their moves in endgame situations to the moves found in chess theory books. We collected 19 types of endgames and ran the engines on them to check whether they replicate the books’ moves. We then computed the similarity, that is, the percentage of matching moves. In our experiments, Lc0 achieved 40.20% similarity and Stockfish NNUE 22.50%. These results show that the engines replicate some moves from chess theory books, but for the most part their play differs from what is expected of human players.
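The similarity metric described above can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the book line and the engine's replies are available as parallel lists of moves in the same notation (e.g. SAN strings), and the function name is hypothetical.

```python
def move_similarity(book_moves, engine_moves):
    """Percentage of positions where the engine's move matches the book move.

    Both arguments are parallel sequences of moves in the same notation
    (e.g. SAN strings); positions are compared index by index.
    """
    if not book_moves:
        return 0.0
    matches = sum(b == e for b, e in zip(book_moves, engine_moves))
    return 100.0 * matches / len(book_moves)

# Hypothetical 5-move endgame line: the engine agrees with the book on
# 2 of the 5 positions, giving 40.0% similarity.
book = ["Kf6", "Kg6", "Kh6", "g4", "g5"]
engine = ["Kf6", "Ke6", "Kd6", "g4", "h4"]
print(move_similarity(book, engine))  # prints 40.0
```

In practice the engine moves would come from querying Stockfish NNUE or Lc0 at each book position (e.g. via a UCI interface), but the aggregation step reduces to this percentage of matching moves.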

Published
06/11/2023
How to Cite

BRITO NETO, Epitácio Pessoa de; MENEZES E SILVA FILHO, Telmo de; RÊGO, Thaís Gaudencio do; MALHEIROS, Yuri. Evaluating Deep Learning-based Chess-Engine Endgame Strategies. In: TRILHA DE COMPUTAÇÃO – ARTIGOS CURTOS - SIMPÓSIO BRASILEIRO DE JOGOS E ENTRETENIMENTO DIGITAL (SBGAMES), 22., 2023, Rio Grande/RS. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 282-287. DOI: https://doi.org/10.5753/sbgames_estendido.2023.233983.