Providing Multimodal User Interaction in NCL
Abstract
This proposal extends NCL events and connectors to support multimodal user interaction, introducing new event types such as touch, motion, eyeMotion, pointer, voiceRecognition, gestureRecognition and faceRecognition. New predefined roles, such as onTouch, onMotion, onEyeMotion, onPointer, onVoiceRecognition, onGestureRecognition and onFaceRecognition, are provided to express connector conditions.
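As an illustration of how such roles could appear in an NCL document, the sketch below shows a causal connector and a link using the proposed onVoiceRecognition role. It follows standard NCL connector syntax; the `utterance` parameter, the `recognizer` component, and the exact attributes accepted by the new role are assumptions for illustration, not the paper's definitive syntax.

```xml
<!-- Hypothetical sketch: a causal connector with the proposed
     onVoiceRecognition condition role. Parameter names here
     (e.g. "utterance") are illustrative assumptions. -->
<causalConnector id="onVoiceRecognitionStart">
  <connectorParam name="utterance"/>
  <simpleCondition role="onVoiceRecognition"/>
  <simpleAction role="start"/>
</causalConnector>

<!-- A link binding the connector: when the (assumed) recognizer
     component detects the given utterance, video1 starts. -->
<link xconnector="onVoiceRecognitionStart">
  <linkParam name="utterance" value="play"/>
  <bind role="onVoiceRecognition" component="recognizer"/>
  <bind role="start" component="video1"/>
</link>
```

The other proposed roles (onTouch, onMotion, onGestureRecognition, etc.) would plug into connectors in the same way, as condition roles analogous to NCL's existing onSelection.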
Keywords:
Multimodal User Interaction, NCL, IoT, authoring
Published
2019-10-29
How to Cite
BARRETO, Fábio; MONTEVECCHI, Eyre Brasil B.; ABREU, Raphael; SANTOS, Joel A. F. dos; MUCHALUAT-SAADE, Debora C.
Providing Multimodal User Interaction in NCL. In: FUTURE OF INTERACTIVE DIGITAL TV WORKSHOP - BRAZILIAN SYMPOSIUM ON MULTIMEDIA AND THE WEB (WEBMEDIA), 1., 2019, Florianópolis.
Anais [...].
Porto Alegre: Sociedade Brasileira de Computação, 2019. p. 203-204.
ISSN 2596-1683.
DOI: https://doi.org/10.5753/webmedia_estendido.2019.8167.