Memory-based Deep Reinforcement Learning for Humanoid Locomotion under Noisy Scenarios

  • Samuel Chenatti (UNICAMP)
  • Esther L. Colombini (UNICAMP)

Abstract

This paper proposes a model-free, memory-augmented Deep Reinforcement Learning (DRL) method that can cope with noisy sensors in humanoid locomotion. DRL-based agents are promising for automatically learning to control robots in complex simulated environments; however, model-free control algorithms still struggle in challenging noisy scenarios for humanoid robots. This work shows how the Soft Actor-Critic (SAC) algorithm can benefit from the memory introduced by LSTMs to mitigate the side effects of Partially Observable Markov Decision Processes (POMDPs). We demonstrate that LSTM-SAC is a viable path towards DRL for POMDPs by applying it to a bipedal locomotion task with the NAO robot in various noisy scenarios.
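For illustration only, below is a minimal PyTorch sketch of the general idea the abstract describes: replacing the feed-forward SAC actor with an LSTM so the policy can integrate a history of noisy observations. All names and dimensions here (LSTMGaussianActor, hidden_dim=256, the NAO-like obs_dim/act_dim) are assumptions for the sketch, not the authors' implementation.

    import torch
    import torch.nn as nn

    class LSTMGaussianActor(nn.Module):
        """Recurrent SAC actor (illustrative): an LSTM summarizes the
        observation history and a Gaussian head emits tanh-squashed actions."""

        def __init__(self, obs_dim: int, act_dim: int, hidden_dim: int = 256):
            super().__init__()
            self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
            self.mu = nn.Linear(hidden_dim, act_dim)
            self.log_std = nn.Linear(hidden_dim, act_dim)

        def forward(self, obs_seq, hidden=None):
            # obs_seq: (batch, time, obs_dim); hidden carries memory across steps.
            out, hidden = self.lstm(obs_seq, hidden)
            mu = self.mu(out)
            log_std = self.log_std(out).clamp(-20, 2)   # common SAC bounds
            dist = torch.distributions.Normal(mu, log_std.exp())
            raw = dist.rsample()                         # reparameterized sample
            action = torch.tanh(raw)                     # squash to [-1, 1]
            # Tanh-corrected log-probability, as in the standard SAC objective.
            log_prob = (dist.log_prob(raw)
                        - torch.log(1 - action.pow(2) + 1e-6)).sum(-1)
            return action, log_prob, hidden

    # Hypothetical NAO-like dimensions; at each control step the recurrent
    # state h is fed back in, so noisy readings are filtered over time.
    actor = LSTMGaussianActor(obs_dim=35, act_dim=12)
    obs = torch.randn(1, 1, 35)          # one noisy sensor reading
    action, logp, h = actor(obs)         # reuse h at the next step

The key design point is that the recurrent hidden state acts as a learned summary of past observations, which is what allows a model-free algorithm like SAC to behave reasonably when single observations are corrupted by noise.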
Keywords: Deep learning, Three-dimensional displays, Humanoid robots, Reinforcement learning, Transformers, Trajectory, Sensors
Published
18/10/2022
How to Cite

CHENATTI, Samuel; COLOMBINI, Esther L. Memory-based Deep Reinforcement Learning for Humanoid Locomotion under Noisy Scenarios. In: SIMPÓSIO BRASILEIRO DE ROBÓTICA E SIMPÓSIO LATINO AMERICANO DE ROBÓTICA (SBR/LARS), 19., 2022, São Bernardo do Campo/SP. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2022. p. 205-210.