Addressing Lane Keeping and Intersections using Deep Conditional Reinforcement Learning

  • Vítor A. S. Silva USP
  • Valdir Grassi USP

Abstract


End-to-end deep reinforcement learning (DRL) methods have been widely used to solve self-driving tasks. However, they usually deal only with simple problems, such as lane keeping. When more complex tasks need to be solved, traditional DRL algorithms fail. In this work we combine Conditional Learning and Proximal Policy Optimization (PPO) to solve the problems of turning at intersections and lane keeping in an end-to-end fashion. In our approach we trained three PPO sub-policies to perform right turns, perform left turns, and follow the lane. The sub-policies are activated one at a time according to a command sent by the local planner. We also used three different image transformations to verify their impact on learning speed and generalization. Experiments were conducted in an urban scenario composed of several T-type intersections in the CARLA simulator. Results show that our approach is feasible and achieves good performance in accomplishing the goal. Moreover, we confirmed that properly choosing an image transformation can improve sample efficiency and generalization capability.
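As an illustration of the command-gated architecture described in the abstract, the sketch below shows how three trained sub-policies might be dispatched one at a time according to the local planner's command, with an image transformation applied before inference. The `Command`, `SubPolicy`, and `make_agent` names, the grayscale transformation, and the action layout are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch of command-conditioned sub-policy dispatch (illustrative only;
# names and the image transformation are assumptions, not the paper's code).
from enum import Enum
from typing import Callable, Dict

import numpy as np


class Command(Enum):
    """High-level command issued by the local planner at each step."""
    FOLLOW_LANE = 0
    TURN_LEFT = 1
    TURN_RIGHT = 2


class SubPolicy:
    """Placeholder for a trained PPO sub-policy (e.g., a loaded actor network)."""

    def __init__(self, name: str):
        self.name = name

    def act(self, observation: np.ndarray) -> np.ndarray:
        # A real sub-policy would run a forward pass of its PPO actor;
        # here we return a dummy [steer, throttle] action.
        return np.zeros(2, dtype=np.float32)


def make_agent(
    preprocess: Callable[[np.ndarray], np.ndarray],
    policies: Dict[Command, SubPolicy],
) -> Callable[[np.ndarray, Command], np.ndarray]:
    """Return a driving agent that activates exactly one sub-policy per step."""

    def agent(raw_image: np.ndarray, command: Command) -> np.ndarray:
        obs = preprocess(raw_image)        # image transformation under test
        return policies[command].act(obs)  # only the commanded branch runs

    return agent


if __name__ == "__main__":
    # Grayscale conversion stands in for one of the image transformations.
    grayscale = lambda img: img.mean(axis=-1, keepdims=True)

    agent = make_agent(
        preprocess=grayscale,
        policies={
            Command.FOLLOW_LANE: SubPolicy("follow_lane"),
            Command.TURN_LEFT: SubPolicy("turn_left"),
            Command.TURN_RIGHT: SubPolicy("turn_right"),
        },
    )

    frame = np.zeros((88, 200, 3), dtype=np.float32)  # dummy camera frame
    action = agent(frame, Command.TURN_RIGHT)
    print("steer, throttle:", action)
```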
Keywords: Training, Roads, Decision making, Reinforcement learning, Turning, Task analysis, Collision avoidance
Published
11/10/2021
How to Cite

SILVA, Vítor A. S.; GRASSI, Valdir. Addressing Lane Keeping and Intersections using Deep Conditional Reinforcement Learning. In: SIMPÓSIO BRASILEIRO DE ROBÓTICA E SIMPÓSIO LATINO AMERICANO DE ROBÓTICA (SBR/LARS), 13., 2021, Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. p. 330-335.