An Exploratory Study on User Experience Evaluation Methods for Chatbots
Abstract
Companies are increasingly investing in developing and evaluating text-based conversational agents (chatbots). However, little is yet known about how to assess the quality of chatbots from a user experience (UX) perspective. This study investigated the applicability, feasibility, and acceptance of three general UX evaluation methods (AttrakDiff, Think Aloud, and the MAX Board) for evaluating chatbots. To this end, we conducted an exploratory study assessing the UX of a chatbot called ANA. Based on the results, we believe the three approaches are complementary and key to capturing the whole user experience of using chatbots.
References
Barbosa, M., Nakamura, W. T., Valle, P., Guerino, G. C., Finger, A. F., Lunardi, G. M., and Silva, W. (2022). UX of chatbots: An exploratory study on acceptance of user experience evaluation methods. In ICEIS (2), pages 355–363.
Cavalcante, E., Rivero, L., and Conte, T. (2015). MAX: A method for evaluating the post-use user experience through cards and a board. In SEKE 2015, pages 495–500.
Fernandes, U. d. S., Prates, R. O., Chagas, B. A., and Barbosa, G. A. (2021). Analyzing molic’s applicability to model the interaction of conversational agents: A case study on ana chatbot. In IHC 2021, pages 1–7.
Fiore, D., Baldauf, M., and Thiel, C. (2019). “Forgot your password again?” Acceptance and user experience of a chatbot for in-company IT support. In MUM 2019, pages 1–11.
Følstad, A. and Skjuve, M. (2019). Chatbots for customer service: user experience and motivation. In CUI 2019, pages 1–9.
Guerino, G. C., Silva, W. A. F., Coleti, T. A., and Valentim, N. M. C. (2021). Assessing a technology for usability and user experience evaluation of conversational systems: An exploratory study. In ICEIS 2021, volume 2, pages 461–471.
Hassenzahl, M., Burmester, M., and Koller, F. (2003). AttrakDiff: Ein Fragebogen zur Messung wahrgenommener hedonischer und pragmatischer Qualität. In Mensch & Computer 2003, pages 187–196. Springer.
ISO 9241-210 (2011). ISO/IEC 9241-210: Ergonomics of human-system interaction – part 210: Human-centred design for interactive systems.
Jain, M., Kumar, P., Kota, R., and Patel, S. N. (2018). Evaluating and informing the design of chatbots. In Designing Interactive Systems Conference, pages 895–906.
Jaspers, M. W., Steen, T., Van Den Bos, C., and Geenen, M. (2004). The think aloud method: a guide to user interface design. Int. Journal of Medical Informatics, 73(11-12):781–795.
Lewis, J. R. and Sauro, J. (2021). Usability and user experience: Design and evaluation. Handbook of Human Factors and Ergonomics, pages 972–1015.
Luger, E. and Sellen, A. (2016). “like having a really bad pa” the gulf between user expectation and experience of conversational agents. In CHI 2016, pages 5286–5297.
Marques, L. C., Nakamura, W. T., Valentim, N. M. C., Rivero, L., and Conte, T. (2018). Do scale type techniques identify problems that affect user experience? User experience evaluation of a mobile application (S). In SEKE, pages 451–450.
Nielsen, J. (2000). Why you only need to test with 5 users. Available at: [link]. Accessed: 06/09/21.
Rapp, A., Curti, L., and Boldi, A. (2021). The human side of human-chatbot interaction: A systematic literature review of ten years of research on text-based chatbots. International Journal of Human-Computer Studies, page 102630.
Rivero, L. and Conte, T. (2017). A systematic mapping study on research contributions on ux evaluation technologies. In IHC 2017, pages 1–10.
Sivaji, A. and Tzuaan, S. S. (2012). Website user experience (ux) testing tool development using open source software (oss). In SEANES, pages 1–6.
Smestad, T. L. and Volden, F. (2018). Chatbot personalities matters. In International Conference on Internet Science, pages 170–181. Springer.
Valentim, N. M. C., Rabelo, J., Silva, W., Coutinho, W., Mota, Á., and Conte, T. (2014). Avaliando a qualidade de um aplicativo web móvel através de um teste de usabilidade: um relato de experiência. In SBQS 2015, pages 256–263. SBC.
