Deep Reinforcement Learning for Mapless Navigation of Unmanned Aerial Vehicles

  • Ricardo Grando FURG
  • Junior de Jesus FURG
  • Paulo Drews-Jr FURG

Abstract


This paper presents a deep reinforcement learning-based system for goal-oriented mapless navigation of Unmanned Aerial Vehicles (UAVs). In this context, image-based sensing approaches are the most common. However, they demand high-processing-power hardware, which is heavy and difficult to embed into a small autonomous UAV. Our approach instead uses localization data and simple sparse range data to train the intelligent agent. We base our approach on two state-of-the-art Deep-RL techniques for terrestrial robots: Deep Deterministic Policy Gradient (DDPG) and Soft Actor-Critic (SAC). We compare their performance with a classic geometric-based tracking controller for mapless navigation of UAVs. Based on experimental results, we conclude that Deep-RL algorithms are effective for performing mapless navigation and obstacle avoidance with UAVs. Our vehicle successfully performed the two proposed tasks, reaching the desired goal and outperforming the geometric-based tracking controller in obstacle avoidance capability.
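The abstract describes an agent trained from localization data and sparse range readings rather than images. As a minimal illustrative sketch only, the snippet below shows one common way such a state vector is assembled for mapless navigation (a fixed number of downsampled range beams plus the relative distance, bearing, and altitude difference to the goal); the function name, beam count, and state layout are assumptions, not the paper's exact formulation.

```python
import numpy as np

def build_state(ranges, position, goal, yaw, num_beams=10):
    """Assemble a compact state vector from sparse range readings and
    localization data (hypothetical layout, for illustration only)."""
    ranges = np.asarray(ranges, dtype=np.float64)

    # Downsample the raw scan to a fixed number of sparse beams.
    idx = np.linspace(0, len(ranges) - 1, num_beams).astype(int)
    sparse = ranges[idx]

    # Relative goal information: distance, bearing w.r.t. heading, altitude gap.
    dx, dy, dz = np.subtract(goal, position)
    dist = np.linalg.norm([dx, dy, dz])
    bearing = np.arctan2(dy, dx) - yaw
    bearing = np.arctan2(np.sin(bearing), np.cos(bearing))  # wrap to [-pi, pi]

    return np.concatenate([sparse, [dist, bearing, dz]])

# Example usage with dummy sensor and localization data.
state = build_state(ranges=np.random.uniform(0.2, 5.0, 180),
                    position=(0.0, 0.0, 1.0),
                    goal=(3.0, 2.0, 1.5),
                    yaw=0.1)
print(state.shape)  # (13,): 10 range beams + distance, bearing, delta-z
```

A state of this kind would be fed to the actor networks of DDPG or SAC, which output continuous velocity commands for the UAV.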
Keywords: Navigation, Robots, Task analysis, Unmanned aerial vehicles, Rotors, Mobile robots, Robot sensing systems, Deep Reinforcement Learning, Unmanned Aerial Vehicles, Mapless Navigation
Published
09/11/2020
How to Cite

GRANDO, Ricardo; DE JESUS, Junior; DREWS-JR, Paulo. Deep Reinforcement Learning for Mapless Navigation of Unmanned Aerial Vehicles. In: SIMPÓSIO BRASILEIRO DE ROBÓTICA E SIMPÓSIO LATINO AMERICANO DE ROBÓTICA (SBR/LARS), 17., 2020, Natal. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2020. p. 335-340.