H.761 Support of a New Input Node and a New "recognition" Node-Event to Enable Multimodal User Interactions

  • Alan L. V. Guedes, PUC-Rio
  • Sergio Colcher, PUC-Rio

Abstract

Multimedia languages traditionally focus on synchronizing a multimedia presentation (based on media and time abstractions) and on supporting user interaction for a single user, usually limited to keyboard and mouse input. Recent advances in recognition technologies, however, have given rise to a new class of multimodal user interfaces (MUIs). In short, MUIs process two or more combined user input modalities (e.g., speech, pen, touch, gesture, gaze, and head and body movements) in a coordinated manner with output modalities. An individual input modality corresponds to a specific type of user-generated information (e.g., speech, pen strokes) captured by input devices or sensors (e.g., a motion sensor). An individual output modality corresponds to information consumed by the user through stimuli perceived by the human senses; the computer system produces those stimuli through audiovisual or actuation devices (e.g., tactile feedback). In this proposal, we aim at extending the NCL multimedia language to take advantage of such multimodal features.
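To illustrate the kind of authoring the proposal targets, the sketch below shows a hypothetical NCL document in which an input node holds a speech-recognition grammar and a link starts a video when a "recognition" event occurs on it. The element name input, the condition role onRecognition, the grammar file commands.srgs, and its media type are assumptions made here for illustration; the abstract does not define the concrete syntax of the extension.

<!-- Hypothetical sketch of the proposed extension: an input node whose content
     is a speech-recognition grammar, and a link that starts a video when the
     "recognition" event occurs. Element and role names are illustrative only. -->
<ncl id="multimodalExample" xmlns="http://www.ncl.org.br/NCL3.0/EDTVProfile">
  <head>
    <connectorBase>
      <causalConnector id="onRecognitionStart">
        <!-- assumed new condition role tied to the "recognition" node-event -->
        <simpleCondition role="onRecognition"/>
        <simpleAction role="start"/>
      </causalConnector>
    </connectorBase>
  </head>
  <body>
    <!-- the port starts the input node, i.e., begins listening for speech -->
    <port id="entry" component="speechCmd"/>
    <!-- assumed new input node: its content is a speech grammar (e.g., SRGS) -->
    <input id="speechCmd" src="commands.srgs" type="application/srgs+xml"/>
    <media id="video" src="movie.mp4"/>
    <link xconnector="onRecognitionStart">
      <bind role="onRecognition" component="speechCmd"/>
      <bind role="start" component="video"/>
    </link>
  </body>
</ncl>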
Published
29/10/2019
How to Cite

GUEDES, Alan L. V.; COLCHER, Sergio. H.761 Support of a New Input Node and a New "recognition" Node-Event to Enable Multimodal User Interactions. In: WORKSHOP FUTURO DA TV DIGITAL INTERATIVA - SIMPÓSIO BRASILEIRO DE SISTEMAS MULTIMÍDIA E WEB (WEBMEDIA), 1., 2019, Florianópolis. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2019. p. 213-214. ISSN 2596-1683. DOI: https://doi.org/10.5753/webmedia_estendido.2019.8172.