LLM-Driven Intrinsic Motivation for Sparse Reward Reinforcement Learning
Abstract
This paper explores the combination of two intrinsic motivation strategies to improve the efficiency of reinforcement learning (RL) agents in environments with extremely sparse rewards, where traditional learning struggles due to infrequent positive feedback. We propose integrating Variational State as Intrinsic Reward (VSIMR), which uses Variational AutoEncoders (VAEs) to reward state novelty, with an intrinsic reward approach derived from Large Language Models (LLMs). The LLMs leverage their pre-trained knowledge to generate reward signals from environment and goal descriptions, guiding the agent. We implemented this combined approach with an Advantage Actor-Critic (A2C) agent in the MiniGrid DoorKey environment, a standard sparse-reward benchmark. Our empirical results show that the combined strategy significantly improves agent performance and sample efficiency compared with each strategy used individually and with a standard A2C agent, which failed to learn. Analysis of the learning curves indicates that the two reward signals complement each other: VSIMR drives exploration of new states, while the LLM-derived rewards promote progressive exploitation towards the goal.
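To make the reward composition described above concrete, the following is a minimal sketch (not the authors' implementation) of how a VSIMR-style VAE novelty bonus and an LLM-derived goal-progress score could be added to the extrinsic MiniGrid reward before the A2C update. The StateVAE architecture, the keyword-matching stand-in for the LLM query, the weighting coefficients beta_vsimr and beta_llm, and the 7x7x3 observation size are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class StateVAE(nn.Module):
    """Tiny VAE over flattened grid observations (hypothetical sizes)."""

    def __init__(self, obs_dim: int, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, obs_dim)
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar


def vsimr_reward(vae: StateVAE, obs: torch.Tensor) -> float:
    """VSIMR-style novelty bonus: the VAE loss (reconstruction + KL) is
    high for states the model has not yet captured, i.e. novel states."""
    with torch.no_grad():
        recon, mu, logvar = vae(obs)
        recon_err = F.mse_loss(recon, obs, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return float(recon_err + kl)


def llm_reward(state_description: str, goal_description: str) -> float:
    """Placeholder for an LLM scoring progress of the described state
    towards the goal description. A real implementation would prompt a
    model such as Llama 3 or Gemini with both descriptions and parse a
    numeric score; here simple keyword matching stands in for that call."""
    score = 0.0
    if "key" in state_description:
        score += 0.5
    if "door is open" in state_description:
        score += 0.5
    return score


def shaped_reward(r_ext, obs, state_desc, goal_desc, vae,
                  beta_vsimr=0.01, beta_llm=0.1):
    """Combined signal fed to the A2C update (coefficients are illustrative)."""
    return (r_ext
            + beta_vsimr * vsimr_reward(vae, obs)
            + beta_llm * llm_reward(state_desc, goal_desc))


if __name__ == "__main__":
    vae = StateVAE(obs_dim=147)  # 7x7x3 MiniGrid observation, flattened
    obs = torch.rand(147)
    print(shaped_reward(0.0, obs,
                        "agent holds the key",
                        "open the door and reach the goal", vae))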
References
Aubret, A., Matignon, L., and Hassas, S. (2019). A survey on intrinsic motivation in reinforcement learning. arXiv preprint arXiv:1908.06976.
Barto, A. G. (2012). Intrinsic motivation and reinforcement learning. In Intrinsically motivated learning in natural and artificial systems, pages 17–47. Springer.
Cao, Y., Zhao, H., Cheng, Y., Shu, T., Chen, Y., Liu, G., Liang, G., Zhao, J., Yan, J., and Li, Y. (2024). Survey on large language model-enhanced reinforcement learning: Concept, taxonomy, and methods. IEEE Transactions on Neural Networks and Learning Systems.
Chakraborty, S., Weerakoon, K., Poddar, P., Elnoor, M., Narayanan, P., Busart, C., Tokekar, P., Bedi, A. S., and Manocha, D. (2023). Re-move: An adaptive policy design for robotic navigation tasks in dynamic environments via language-based feedback. arXiv preprint arXiv:2303.07622.
Chevalier-Boisvert, M., Dai, B., Towers, M., Perez-Vicente, R., Willems, L., Lahlou, S., Pal, S., Castro, P. S., and Terry, J. (2023). Minigrid & miniworld: Modular & customizable reinforcement learning environments for goal-oriented tasks. In Advances in Neural Information Processing Systems 36, New Orleans, LA, USA.
Devidze, R., Kamalaruban, P., and Singla, A. (2022). Exploration-guided reward shaping for reinforcement learning under sparse rewards. Advances in Neural Information Processing Systems, 35:5829–5842.
Grattafiori, A., Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten, A., Vaughan, A., et al. (2024). The llama 3 herd of models. arXiv preprint arXiv:2407.21783.
Klissarov, M., Islam, R., Khetarpal, K., and Precup, D. (2019). Variational state encoding as intrinsic motivation in reinforcement learning. In Task-Agnostic Reinforcement Learning Workshop at the International Conference on Learning Representations, volume 15, pages 16–32.
Ma, Y. J., Liang, W., Wang, G., Huang, D.-A., Bastani, O., Jayaraman, D., Zhu, Y., Fan, L., and Anandkumar, A. (2023). Eureka: Human-level reward design via coding large language models. arXiv preprint arXiv:2310.12931.
Piaget, J., Cook, M., et al. (1952). The origins of intelligence in children, volume 8. International Universities Press, New York.
Ryan, R. M. and Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55(1):68.
Sutton, R. S. and Barto, A. G. (2018). Reinforcement learning: An introduction. MIT Press, Cambridge, MA, 2nd edition.
Team, G., Georgiev, P., Lei, V. I., Burnell, R., Bai, L., Gulati, A., Tanzer, G., Vincent, D., Pan, Z., Wang, S., et al. (2024). Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint arXiv:2403.05530.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.
Yu, J., Wang, X., Tu, S., Cao, S., Zhang-Li, D., Lv, X., Peng, H., Yao, Z., Zhang, X., Li, H., et al. (2023). Kola: Carefully benchmarking world knowledge of large language models. arXiv preprint arXiv:2306.09296.
Zahavy, T., Xu, Z., Veeriah, V., Hessel, M., Oh, J., van Hasselt, H. P., Silver, D., and Singh, S. (2020). A self-tuning actor-critic algorithm. Advances in Neural Information Processing Systems, 33:20913–20924.
Published
2025-09-29
How to Cite
QUADROS, André; SILVA, Cassio; ALVES, Ronnie. LLM-Driven Intrinsic Motivation for Sparse Reward Reinforcement Learning. In: NATIONAL MEETING ON ARTIFICIAL AND COMPUTATIONAL INTELLIGENCE (ENIAC), 22., 2025, Fortaleza/CE. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 345-355. ISSN 2763-9061. DOI: https://doi.org/10.5753/eniac.2025.12425.
