VEM-SLAM - Virtual Environment Modelling through SLAM
Abstract: Creating a virtual environment for virtual reality applications can be an expensive task, especially when we aim to build a virtual environment based on a real one. In this work, we integrate solutions for 2D object detection and monocular visual SLAM to map a real indoor environment, recognize the static objects it contains, estimate their poses, and build a similar virtual environment. The problem of mapping a real environment has received attention with the advances in SLAM and in robust object detection within the Computer Vision field. The Simultaneous Localization and Mapping (SLAM) problem in robotics consists of creating a (generally geometric) map of the scene while estimating the viewer's pose. Solutions to this problem are used in several areas where a map of the environment, and the geometric information extracted from it, is desirable. Object detection allows us to identify objects in the scene according to the object classes of a reference database; for detection in 2D images, the best solutions are based on convolutional neural networks. Multiple methods are necessary to create a virtual environment by extracting the 3D geometric information of the objects in the scene, and they vary according to the model of the reference 3D object. We also propose a new integration between an object detector and a keyframe-based monocular SLAM solution. As a result, we obtained an improvement in the estimation of the camera's trajectory compared to the original SLAM method, and we demonstrate the use of our system by creating virtual environments analogous to the real ones.
Keywords: Virtual reality, SLAM, object detection
MOURA, Gustavo Magalhães; VIEIRA, Marcelo Bernardes; DA SILVA, Rodrigo Luis de Souza. VEM-SLAM - Virtual Environment Modelling through SLAM. In: SIMPÓSIO DE REALIDADE VIRTUAL E AUMENTADA (SVR), 22., 2020, Evento Online. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2020. p. 327-336.