A Multisensory AI-based Framework for Accessible Navigation of Visually Impaired Users in Virtual Environments: Preliminary Results

  • Luiza M. F. Cintra AKCIT
  • Elisa A. M. Oliveira AKCIT
  • Gustavo H. W. Barbosa AKCIT
  • Matheus D. Negrão AKCIT
  • Valdemar V. G. Neto UFG
  • Rafael T. Sousa AKCIT
  • Sofia L. C. Paiva UFG
  • Arlindo R. G. Filho AKCIT

Abstract

Virtual reality has the potential to deliver highly immersive experiences, but for individuals with visual impairments, these environments often remain inaccessible and exclusionary. This paper introduces an AI-driven framework that redefines how such users interact with 3D virtual worlds. The system employs Vision-Language Models (VLMs) for real-time semantic scene understanding, translating visual information into auditory cues and haptic feedback. This multisensory approach allows users to perceive spatial layouts, recognize objects, and navigate with greater autonomy. By bridging the gap between visual content and non-visual perception, the framework turns virtual reality into a more inclusive, equitable, and engaging medium.
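The abstract describes a pipeline in which a VLM's semantic scene understanding is translated into auditory and haptic cues, but gives no implementation details. As an illustration only, the sketch below shows one plausible mapping stage: it assumes the VLM (or a downstream parser) yields a bearing and distance for each detected object, and converts those into stereo pan, volume, and vibration intensity. All names (`Cue`, `detection_to_cue`) and the specific mapping functions are hypothetical, not taken from the paper.

```python
import math
from dataclasses import dataclass


@dataclass
class Cue:
    """Hypothetical multisensory cue derived from one VLM detection."""
    pan: float     # stereo position, -1.0 (far left) .. +1.0 (far right)
    volume: float  # 0.0 .. 1.0, louder for closer objects
    haptic: float  # 0.0 .. 1.0 vibration intensity, ramps up near the user


def detection_to_cue(angle_deg: float, distance_m: float,
                     max_range_m: float = 10.0) -> Cue:
    """Map an object's bearing and distance to audio/haptic parameters.

    angle_deg: bearing relative to the user's gaze (0 = straight ahead,
               negative = left, positive = right).
    distance_m: distance to the object; beyond max_range_m it is silent.
    """
    # Sine of the bearing gives a natural left/right pan, clamped to [-1, 1].
    pan = max(-1.0, min(1.0, math.sin(math.radians(angle_deg))))
    # Linear proximity term: 1.0 when touching, 0.0 at or past max range.
    proximity = max(0.0, 1.0 - distance_m / max_range_m)
    # Haptics fade faster than audio so vibration signals only nearby objects.
    return Cue(pan=pan, volume=proximity, haptic=proximity ** 2)
```

For example, a chair detected 2 m away at the user's far left would pan fully left, play at 80% volume, and vibrate at 64% intensity under this (assumed) mapping; a real system would feed these parameters to a spatial audio engine and a controller's haptics API.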

Published
30/09/2025
CINTRA, Luiza M. F.; OLIVEIRA, Elisa A. M.; BARBOSA, Gustavo H. W.; NEGRÃO, Matheus D.; G. NETO, Valdemar V.; SOUSA, Rafael T.; PAIVA, Sofia L. C.; G. FILHO, Arlindo R. A Multisensory AI-based Framework for Accessible Navigation of Visually Impaired Users in Virtual Environments: Preliminary Results. In: WORKSHOP DE TRABALHOS EM ANDAMENTO - SIMPÓSIO DE REALIDADE VIRTUAL E AUMENTADA (SVR), 27., 2025, Salvador/BA. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 234-240. DOI: https://doi.org/10.5753/svr_estendido.2025.15760.