Improving VR Accessibility Through Automatic 360 Scene Description Using Multimodal Large Language Models

Abstract


Advancements in Virtual Reality (VR) technology hold immense promise for enriching immersive experiences. Despite this progress, a significant gap remains in addressing accessibility concerns, particularly in automatically providing descriptive information for VR scenes. This paper explores the potential of leveraging Multimodal Large Language Models (MLLMs) to automatically generate text descriptions of 360 VR scenes in response to Speech-to-Text (STT) prompts. As a case study, we conduct experiments in educational VR museum settings, enriching dynamic experiences across various contexts. Despite minor challenges in adapting MLLMs to VR scenes, the experiments demonstrate that they can generate high-quality descriptions. Our findings provide insights for enhancing VR experiences and ensuring accessibility for individuals with disabilities or diverse needs.
Keywords: Virtual Reality (VR), Accessibility, Multimodal Large Language Models (MLLMs), 3D Scene
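
As a rough illustration of the pipeline the abstract describes, the Python sketch below shows one possible wiring under stated assumptions: a spoken request is transcribed by an STT model, the transcription is combined with the current 360 viewport image in a prompt, and an MLLM (e.g., a vision-language model such as Qwen-VL or CogVLM) returns the scene description. All function names, stubs, and prompt wording here are illustrative assumptions, not the authors' implementation; the STT and MLLM backends are left as placeholders to be replaced by real models.

from dataclasses import dataclass


@dataclass
class SceneQuery:
    audio_path: str       # user's spoken request captured in the VR headset
    viewport_image: str   # 360 frame or rendered viewport crop for the current view


def transcribe_speech(audio_path: str) -> str:
    """Placeholder STT step; a real system would run an ASR model on the audio file."""
    # Illustrative stub: pretend the user asked this question aloud.
    return "What artworks are in front of me and what do they depict?"


def query_mllm(image_path: str, prompt: str) -> str:
    """Placeholder MLLM call; a real system would send the image and prompt to a vision-language model."""
    return f"[MLLM description of {image_path} for prompt: {prompt!r}]"


def describe_scene(query: SceneQuery) -> str:
    # 1. Speech-to-Text: turn the spoken request into a textual prompt.
    user_prompt = transcribe_speech(query.audio_path)

    # 2. MLLM: condition the description on both the 360 view and the transcribed prompt.
    instruction = (
        "You are an accessibility assistant inside a VR museum. "
        "Describe the current scene for a user who cannot see it, answering: "
        f"{user_prompt}"
    )
    return query_mllm(query.viewport_image, instruction)


if __name__ == "__main__":
    # Hypothetical file names, used only to exercise the flow end to end.
    print(describe_scene(SceneQuery("question.wav", "museum_room_360.png")))
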

Published
September 30, 2024
OLIVEIRA, Elisa Ayumi Masasi de; SILVA, Diogo Fernandes Costa; GALVÃO FILHO, Arlindo Rodrigues. Improving VR Accessibility Through Automatic 360 Scene Description Using Multimodal Large Language Models. In: SIMPÓSIO DE REALIDADE VIRTUAL E AUMENTADA (SVR), 26., 2024, Manaus/AM. Proceedings [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 289-293.