Deep-Learning-Based Visual Odometry Models for Mobile Robotics

  • Frederico Luiz Martins de Sousa (UFOP)
  • Natália F. de C. Meira (UFOP)
  • Ricardo Augusto Rabelo Oliveira (UFOP)
  • Mateus Coelho Silva (UFOP)

Abstract


Odometry is a common problem in navigation systems, where the position of a vehicle or other carrier must be estimated within the environment. To perform autonomous tasks, robotic and intelligent devices need to be aware of their position in the environment. Many strategies exist to solve the odometry problem. This work explores a visual odometry solution that uses a deep neural network to infer a robotic vehicle's position in a known, previously mapped environment. To perform this task, a first robot, equipped with a LiDAR, an IMU, and a camera, maps the environment using a SLAM technique. The data gathered by this first robot serves as ground truth to train the neural network; afterwards, other robots equipped with only a camera can localize themselves in the environment. We also present a validation and evaluation of the neural network.
Keywords: mobile robotics, odometry, edge computing, deep neural networks, Robot Operating System
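
The abstract outlines a two-stage pipeline: a sensor-rich robot builds a SLAM map whose poses serve as ground truth, and a neural network is then trained to regress a pose from a single camera image. The following is a minimal sketch of that second stage, assuming a PyTorch setup; the network layout, loss, and planar (x, y, yaw) pose parameterization are illustrative assumptions, not the paper's actual architecture:

```python
# Sketch (illustrative, not the authors' model): a CNN that regresses a
# planar pose (x, y, yaw) from one camera frame, trained against poses
# recorded by the SLAM-equipped mapping robot.
import torch
import torch.nn as nn


class VisualOdometryNet(nn.Module):
    """Regresses a 2D pose (x, y, yaw) from a single RGB image."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (B, 128, 1, 1)
        )
        self.head = nn.Linear(128, 3)  # outputs (x, y, yaw)

    def forward(self, image):
        return self.head(self.features(image).flatten(1))


def train_step(model, optimizer, images, slam_poses):
    """One optimization step against SLAM-derived ground-truth poses."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(images), slam_poses)
    loss.backward()
    optimizer.step()
    return loss.item()


model = VisualOdometryNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Dummy batch standing in for (camera frame, SLAM pose) pairs
# recorded during the mapping phase.
images = torch.randn(8, 3, 128, 128)
poses = torch.randn(8, 3)
print(train_step(model, optimizer, images, poses))
```

In deployment, a camera-only robot would feed live frames through the trained model to obtain its estimated pose, so the LiDAR/IMU-equipped robot is needed only during the mapping and data-collection phase.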

Published
22/11/2021
How to Cite

SOUSA, Frederico Luiz Martins de; MEIRA, Natália F. de C.; OLIVEIRA, Ricardo Augusto Rabelo; SILVA, Mateus Coelho. Deep-Learning-Based Visual Odometry Models for Mobile Robotics. In: TRABALHOS EM ANDAMENTO - SIMPÓSIO BRASILEIRO DE ENGENHARIA DE SISTEMAS COMPUTACIONAIS (SBESC), 11., 2021, Evento Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. p. 128-133. ISSN 2763-9002. DOI: https://doi.org/10.5753/sbesc_estendido.2021.18504.