AI, Emotions, and Interaction: A Social Matching System with ChatGPT-Based Conversational Agents
Abstract
This article presents the development and evaluation of Tiander, an experimental platform inspired by dating applications, designed to investigate interaction between humans and artificial intelligence agents. The system was implemented as a web application that supports simulated profiles operated by conversational assistants based on ChatGPT. For experimental purposes, two versions were created: one with dynamic AI-generated responses and the other with fixed responses that mimic limited human interaction. The sample will consist of (N) participants, randomly divided between the two groups, who will interact with male or female profiles according to their initial choice. At the end of the experience, participants will answer a questionnaire assessing aspects such as the perceived authenticity, empathy, trust, and humanity attributed to the interaction. The proposal contributes to contemporary debates on ethics, interactive system design, and the role of intelligent assistants in social and affective contexts. The data obtained will allow an empirical analysis of users' perceptions of different levels of simulated intelligence, broadening the understanding of AI's impact in interpersonal interaction environments.
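The abstract describes two experimental conditions: profiles whose replies are generated dynamically by ChatGPT, and profiles that return fixed, scripted replies. The paper does not publish its source code, so the following is only a minimal illustrative sketch of how such condition routing might look, assuming the OpenAI Python SDK; the persona prompt, model name, and scripted replies are hypothetical placeholders, not the authors' actual materials.

```python
# Illustrative sketch of Tiander-style condition routing (not the authors' code).
# Assumes the OpenAI Python SDK; PERSONA_PROMPT and SCRIPTED_REPLIES are
# hypothetical placeholders standing in for the study's actual materials.
import itertools
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA_PROMPT = (
    "You are a friendly person on a dating app. "
    "Reply briefly and naturally to the user's message."
)

# Fixed-response condition: a short script cycled in order,
# mimicking a limited, non-adaptive conversational partner.
SCRIPTED_REPLIES = itertools.cycle([
    "Hi! Nice to match with you :)",
    "That's interesting, tell me more.",
    "I like movies and hiking. What about you?",
])

def reply(user_message: str, condition: str) -> str:
    """Route a message to the dynamic (LLM) or fixed-response condition."""
    if condition == "dynamic":
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat-capable model works
            messages=[
                {"role": "system", "content": PERSONA_PROMPT},
                {"role": "user", "content": user_message},
            ],
        )
        return response.choices[0].message.content
    # "fixed" condition: the reply ignores the message content entirely
    return next(SCRIPTED_REPLIES)
```

Keeping both conditions behind the same `reply` interface means the rest of the web application (and the participants) cannot distinguish the conditions except through the content of the replies themselves, which is the manipulation the questionnaire is designed to measure.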
Published
12/11/2025
How to Cite
BUCIOR, Lucas; PLENTZ, Patricia Della Méa. AI, Emotions, and Interaction: A Social Matching System with ChatGPT-Based Conversational Agents. In: ESCOLA REGIONAL DE APRENDIZADO DE MÁQUINA E INTELIGÊNCIA ARTIFICIAL DA REGIÃO SUL (ERAMIA-RS), 1., 2025, Porto Alegre/RS. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 25-28. DOI: https://doi.org/10.5753/eramiars.2025.16382.