Toc-Toc, Tic-Tac, Triiiimm! Utilização de Som em Interfaces Multimodais
Abstract
As the information presented by the interfaces of applications on computers and mobile devices becomes increasingly visually intensive, the visual channel becomes overloaded and our capacity to assimilate information is limited. Audio plays a significant role in our daily lives, yet it has been little explored in the way we interact with computers and mobile devices. This article discusses the need to integrate different sensory modes in multimodal interfaces, particularly the use of sound information, and addresses relevant concepts such as auditory icons, earcons, attention, semiosis, abductive processes, anticipation, speech acts, etc.
References
Abowd, G. D., Dey, A. K., Brown, P. J., Davies, N., Smith, M. and Steggles, P. (1999). “Towards a Better Understanding of Context and Context-Awareness”, Proceedings of the 1st International Symposium on Handheld and Ubiquitous Computing, Karlsruhe, Germany, Lecture Notes In Computer Science, vol. 1707, p. 304-307, Springer Berlin, Heidelberg.
Austin, J.L. (1975). “How to do Things with Words”, Cambridge, MA, Harvard University Press.
Bahrick, L. E. and Lickliter R. (2002). “Intersensory Redundancy Guides Early Perceptual and Cognitive Development”, Advances in Child Development and Behavior, vol. 30. p. 153-187, Elsevier B.V., Academic Press.
Barrass, S. (2005). “A perceptual framework for the auditory display of scientific data”, ACM Transactions on Applied Perception (TAP), vol. 2, n° 4, p. 389-402, ACM, New York, USA.
Becklen, R. and Cervone, D. (1983). “Selective Looking and the Noticing of Unexpected Events”, Memory & Cognition, vol. 11, p. 601-608.
Blattner, M. M., Sumikawa, D. A. and Greenberg, R. M. (1989). “Earcons and Icons: Their Structure and Common Design Principles”, Human-Computer Interaction vol. 4, n° 1, p. 11-44, L. Erlbaum Associates Inc., Hillsdale, NJ, USA.
Brewster, S.A. and Walker, V.A. (2000). “Non-Visual Interfaces for Wearable Computers”, IEE Workshop on Wearable Computing (00/145), IEE Press.
Brewster, S.A. (2005). “Multimodal Interaction and Proactive Computing”, In British Council Workshop on Proactive Computing, Nizhny Novgorod, Russia.
Brown, M., Newsome, S. and Glinert, E. (1989). “An Experiment into the Use of Auditory Cues to Reduce Visual Workload”, Proceedings of the CHI '89 Conference on Human Factors in Computer Systems, New York, ACM, p. 339-346.
Cherry, E. C. (1953). “Some Experiments on the Recognition of Speech with One and with Two Ears”, The Journal of the Acoustical Society of America, vol. 25, n° 5, p. 975-979, September.
Dey, A. K. (2001). “Understanding and Using Context”, Personal and Ubiquitous Computing, vol. 5, n° 1, p. 4-7, Springer-Verlag London Ltd., February.
Driver, J. (2001). “A Selective Review of Selective Attention Research from the Past Century”, British Journal of Psychology, vol. 92, England, p. 53-78, The British Psychological Society.
Edwards, A. D. N. (1989). “Soundtrack: an auditory interface for blind users”, Human-Computer Interaction, vol. 4, n° 1, L. Erlbaum Associates Inc., Hillsdale, NJ, p. 45-66.
Flowers, J. H., Buhman, D. C. and Turnage, K. D. (2005). “Data sonification from the desktop: Should sound be part of standard data analysis software?”, Transactions on Applied Perception (TAP), vol. 2, n° 4, p. 467-472, ACM, New York, USA.
Gaver, W. W. (1988). “Everyday Listening and Auditory Icons”, Doctoral Dissertation, University of California, San Diego.
Gaver, W.W. (1989). “The SonicFinder: An Interface that Uses Auditory Icons”, Human-Computer Interaction, vol. 4, n° 1, p. 67-94, L. Erlbaum Associates Inc., Hillsdale, NJ, USA.
Greenberg, S. (2001). “Context as a Dynamic Construct”, Human-Computer Interaction, vol. 16, n° 2, p. 257-268, L. Erlbaum Associates Inc., Hillsdale, NJ, USA.
Hermann, T. (2008). “Taxonomy and Definitions for Sonification and Auditory Display”, Proceedings of the 14th International Conference on Auditory Display, Paris, France.
James, W. (1890). “Attention”, The principles of psychology (Vol. 1), Chapter 11, Holt, New York, USA.
Kahneman, D. (1973). “Attention and Effort”. Englewood Cliffs, NJ, Prentice-Hall.
Kleinrock, L. (1996). “Nomadicity: Anytime, Anywhere in a Disconnected World”, Mobile Networks and Applications, Special Issue on Mobile Computing and System Services, vol. 1, n° 4, J.C. Baltzer AG, Science Publishers, p. 351-357, December.
McGee, M. R., Gray, P. D. and Brewster, S.A. (2000). “The Effective Combination of Haptic and Auditory Textural Information”, Proceedings of the Haptic Human-Computer Interaction 2000, First International Workshop, Glasgow, UK, p. 118-126, August.
Murphy, E., Kuber, R., Strain, P., McAllister, G. and Yu, W. (2007). “Developing Multi-modal Interfaces for Visually Impaired People to Access the Internet”, Proceedings of the 13th International Conference on Auditory Display, Montreal, Canada.
Nadin, M. (2003). “Anticipation - The End Is Where We Start From”, Lars Müller Publishers, Baden, Switzerland.
Oppermann, R. and Specht, M. (2001). “Contextualized Information Systems for an Information Society for All”, Proceedings of HCI International 2001, The 9th International Conference on Human-Computer Interaction, New Orleans, USA, p. 850-854, August.
Pashler, H. (1995). “Attention and Visual Perception: Analyzing Divided Attention”, Visual Cognition, Chapter 2, Stephen Michael Kosslyn, Daniel N. Osherson (Eds.), p. 71-99, MIT Press.
Pashler, H., Johnston, J. C. and Ruthruff, E. (2001). “Attention and Performance”, Annual Review of Psychology, vol. 52, Palo Alto, CA, USA, p. 629-651, Annual Reviews.
Santaella, L. (2004). “O Método Anticartesiano de C. S. Peirce”, Editora UNESP, São Paulo, SP, Brasil.
Saussure, F. (1910). “Curso de Linguística Geral (Cours de Linguistique Générale)”, Editora Cultrix, 2006, São Paulo, SP, Brasil.
Searle, J. (1979). “Expressão e Significado (Expression and Meaning)”, Martins Fontes, 2002, São Paulo, SP, Brasil.
Treisman, A. and Gelade, G. (1980). “A Feature Integration Theory of Attention”, Cognitive Psychology, vol. 12, p. 97-136.
Walker, B. N. and Nees, M. A. (2009). “Theory of Sonification”, In T. Hermann, A. Hunt, & J. Neuhoff (Eds.), Handbook of Sonification, New York: Academic Press, in press.
Want, R., Hopper, A., Falcão, V. and Gibbons, J. (1992). “The Active Badge Location System”, ACM Transactions on Information Systems, vol. 10, n° 1, January, p. 91-102.
Winograd, T. (2001). “Architectures for Context”, Human-Computer Interaction, vol. 16, n° 2, L. Erlbaum Associates Inc. Hillsdale, NJ, USA, p. 401-419, December.
Published
20/07/2009
How to Cite
LAUFER, Carlos; SCHWABE, Daniel. Toc-Toc, Tic-Tac, Triiiimm! Utilização de Som em Interfaces Multimodais. In: SEMINÁRIO INTEGRADO DE SOFTWARE E HARDWARE (SEMISH), 36., 2009, Bento Gonçalves/RS. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2009. p. 201-215. ISSN 2595-6205.
