Multi Camera System Analysis for Autonomous Navigation using End-to-End Deep Learning

  • José A. Diaz Amado (USP)
  • Jean Amaro (USP)
  • Iago P. Gomes (USP)
  • Denis Wolf (USP)
  • F. S. Osorio (USP)

Abstract


This work aims to present an autonomous vehicle navigation system based on an End-to-End Deep Learning approach, and to study the impact of different image input configurations on system performance. The methodology proposed in this work was to adopt and test different configurations of RGB and Depth images captured from a Kinect device. We adopted a multi-camera system, composed of 3 cameras, with different RGB and/or Depth input configurations. Two main systems were developed in order to study and validate the different input configurations: the first one based on a realistic simulator and the second one based on a mini-car (small-scale vehicle). Starting with the simulations, it was possible to choose the best camera/input configuration, which we then validated on the real vehicle (mini-car) with real sensors/cameras. The experimental results demonstrated that a multi-camera solution based on 3 cameras allows us to obtain better autonomous navigation control results in an End-to-End Deep Learning based approach, with a very small final error when using the proposed camera configurations.
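To make the approach concrete, the sketch below shows what a 3-camera end-to-end driving network might look like in PyTorch: a shared convolutional encoder processes each camera view (RGB plus a Depth channel), and the fused features are regressed to a steering command. This is a minimal illustrative sketch in the spirit of PilotNet-style end-to-end models, not the authors' architecture; the `MultiCameraPilot` name, channel count, image size, and layer widths are all assumptions.

```python
# Hypothetical sketch of a multi-camera end-to-end driving network.
# Not the paper's model: channel count, resolution, and layer widths are assumed.
import torch
import torch.nn as nn

class MultiCameraPilot(nn.Module):
    def __init__(self, in_channels: int = 4):  # e.g. RGB (3) + Depth (1) per camera
        super().__init__()
        # One shared convolutional encoder applied to each camera view.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (N, 64, 1, 1)
            nn.Flatten(),             # -> (N, 64)
        )
        # Fuse the 3 per-camera feature vectors and regress a steering command.
        self.head = nn.Sequential(
            nn.Linear(3 * 64, 100), nn.ReLU(),
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 1),  # steering output
        )

    def forward(self, left, center, right):
        feats = [self.encoder(view) for view in (left, center, right)]
        return self.head(torch.cat(feats, dim=1))

# Example: three 4-channel (RGB-D) views at 120x160, batch of 8.
model = MultiCameraPilot(in_channels=4)
views = [torch.randn(8, 4, 120, 160) for _ in range(3)]
steering = model(*views)  # shape: (8, 1)
```

Sharing one encoder across the three views keeps the parameter count low and lets each camera contribute the same kind of visual features before fusion; per-view encoders or early fusion of stacked images are equally plausible variants under this reading of the paper.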

Keywords: Deep Learning, End-to-End, Self-Driving Car, Image based Navigation, RGB Depth.

Published
09/09/2019
How to Cite

AMADO, José A. Diaz; AMARO, Jean; GOMES, Iago P.; WOLF, Denis; OSORIO, F. S. Multi Camera System Analysis for Autonomous Navigation using End-to-End Deep Learning. In: WORKSHOP DE VISÃO COMPUTACIONAL (WVC), 15., 2019, São Bernardo do Campo. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2019. p. 25-30. DOI: https://doi.org/10.5753/wvc.2019.7623.