Extending NCL to Support Multiuser and Multimodal Interactions

  • Álan Lívio Vasconcelos Guedes, PUC-Rio
  • Roberto Gerson De Albuquerque Azevedo, PUC-Rio
  • Sérgio Colcher, PUC-Rio
  • Simone D. J. Barbosa, PUC-Rio

Abstract

Recent advances in technologies for speech, touch, and gesture recognition have given rise to a new class of user interfaces that not only explores multiple modalities but also allows for multiple interacting users. Even so, current declarative multimedia languages—e.g., HTML, SMIL, and NCL—support only limited forms of user input (mainly keyboard and mouse) for a single user. In this paper, we study how the NCL multimedia language could take advantage of these new recognition technologies. To do so, we revisit the model behind NCL, named NCM (Nested Context Model), and extend it with first-class concepts supporting multiuser and multimodal features. To evaluate our approach, we instantiate the proposal and discuss some usage scenarios, developed as NCL applications with our extended features.
Published
08/11/2016
GUEDES, Álan Lívio Vasconcelos; AZEVEDO, Roberto Gerson De Albuquerque; COLCHER, Sérgio; BARBOSA, Simone D. J. Extending NCL to Support Multiuser and Multimodal Interactions. In: BRAZILIAN SYMPOSIUM ON MULTIMEDIA AND THE WEB (WEBMEDIA), 22., 2016, Teresina. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2016. p. 39-46.
