Deep Reinforcement Learning Using a Low-Dimensional Observation Filter for Visually Complex Video Game Playing
Abstract
Deep Reinforcement Learning (DRL) has produced great achievements since it was proposed, including the ability to process raw visual input data. However, training an agent to perform tasks from image feedback remains challenging: it requires processing large amounts of data from high-dimensional observation spaces, frame by frame, while the agent's actions are computed end-to-end by deep neural network policies. Image pre-processing is an effective way to reduce these high-dimensional spaces, eliminating unnecessary information from the scene and supporting the extraction of features and their representation in the agent's neural network. Modern video games, because of their visual complexity, exemplify this kind of challenge for DRL algorithms. In this paper, we propose a low-dimensional observation filter that allows a deep Q-network agent to successfully play a visually complex, modern video game called Neon Drive.
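To make the idea of reducing a high-dimensional observation space concrete, the sketch below shows a common DQN-style pre-processing pipeline (grayscale conversion, downsampling, and normalization). This is a minimal illustration under assumed parameters, not the paper's actual filter: the function name `preprocess_frame` and the 84×84 target resolution are our own choices, borrowed from the standard Atari DQN setup.

```python
import numpy as np

def preprocess_frame(frame, out_size=84):
    """Reduce an RGB game frame to a low-dimensional observation.

    Steps (a generic DQN-style pipeline, not the paper's exact filter):
      1. grayscale via luminance weighting,
      2. naive downsampling by strided slicing,
      3. normalization of pixel intensities to [0, 1].
    """
    # Weighted sum over the color channels -> single grayscale plane (H, W).
    gray = frame @ np.array([0.299, 0.587, 0.114])
    # Stride so the result is roughly out_size x out_size, then crop exactly.
    step_h = max(1, gray.shape[0] // out_size)
    step_w = max(1, gray.shape[1] // out_size)
    small = gray[::step_h, ::step_w][:out_size, :out_size]
    # Scale 0..255 intensities into [0, 1] for the network input.
    return (small / 255.0).astype(np.float32)

# Example: a synthetic 210x160 RGB frame (classic Atari screen size).
frame = np.random.randint(0, 256, size=(210, 160, 3), dtype=np.uint8)
obs = preprocess_frame(frame)
print(obs.shape)  # (84, 84)
```

A filter like this shrinks each observation from 210×160×3 = 100,800 values to 84×84 = 7,056, which is what makes frame-by-frame end-to-end training tractable; a production pipeline would typically use proper area-averaging resizing (e.g. OpenCV) rather than strided slicing.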