Using images to avoid collisions and bypass obstacles in indoor environments
Abstract
Convolutional Neural Networks (CNNs) have contributed substantially to the advancement of autonomous navigation techniques, and such systems can be adapted to facilitate the movement of robots and of visually impaired people. This work presents an approach that uses images to avoid collisions and bypass obstacles in indoor environments. The constructed dataset uses forward and lateral speed information recorded during walks to determine collisions and obstacle-avoidance maneuvers. The VGG16, ResNet50, and DroNet architectures were used to evaluate the dataset. Finally, reflections on the dataset characteristics are added, and the performance of the CNNs is presented.
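To make the approach described in the abstract more concrete, the sketch below shows what such a pipeline could look like, assuming a TensorFlow/Keras transfer-learning setup with an ImageNet-pretrained ResNet50 backbone and a two-output head (collision probability plus an avoidance signal, loosely inspired by DroNet). The speed-threshold labeling rule, thresholds, and function names are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only -- not the authors' implementation. Assumes a
# TensorFlow/Keras setup with an ImageNet-pretrained ResNet50 backbone used
# via transfer learning to predict a collision probability and a lateral
# avoidance signal, loosely mirroring DroNet's two-output design.
import tensorflow as tf
from tensorflow.keras import layers, models


def label_frame(forward_speed, lateral_speed,
                stop_thresh=0.05, turn_thresh=0.3):
    """Hypothetical labeling rule (an assumption, not the paper's criterion):
    near-zero forward speed marks a collision/stop, a pronounced lateral
    speed marks an obstacle-avoidance maneuver, anything else is free path."""
    if abs(forward_speed) < stop_thresh:
        return "collision"
    if abs(lateral_speed) > turn_thresh:
        return "avoidance"
    return "free_path"


def build_model(input_shape=(224, 224, 3)):
    # Frozen pretrained backbone: only the small classification head is trained.
    backbone = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet", input_shape=input_shape)
    backbone.trainable = False

    inputs = layers.Input(shape=input_shape)
    x = tf.keras.applications.resnet50.preprocess_input(inputs)
    x = backbone(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)

    # Two heads: probability of collision and a steering/avoidance signal.
    collision = layers.Dense(1, activation="sigmoid", name="collision")(x)
    steering = layers.Dense(1, activation="tanh", name="steering")(x)
    return models.Model(inputs, [collision, steering])


model = build_model()
model.compile(
    optimizer="adam",
    loss={"collision": "binary_crossentropy", "steering": "mse"},
    metrics={"collision": "accuracy"})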
Published
18/10/2021
How to Cite
MEDEIROS, David Silva de; ARAÚJO, Thiago Henrique; SILVA JÚNIOR, Elias Teodoro da; RAMALHO, Geraldo Luis Bezerra. Using images to avoid collisions and bypass obstacles in indoor environments. In: WORKSHOP DE TRABALHOS EM ANDAMENTO - CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI), 34., 2021, Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2021. p. 158-161. DOI: https://doi.org/10.5753/sibgrapi.est.2021.20030.