Comparative Analysis of Facial Expression Recognition Systems for Evaluating Emotional States in Virtual Humans

  • Jonas de Araújo Luz Junior UNIFOR
  • Maria Andréia Formico Rodrigues UNIFOR

Abstract


The digital animation process is a complex endeavour, requiring professional animators to acquire substantial expertise and technique through years of study and practice. Facial animation in particular, where virtual humans must express specific mental states or emotions with a high degree of realism, is further complicated by the "Uncanny Valley" phenomenon. In this context, it is posited that pre-validated facial expressions for certain emotions could serve as references for novice or inexperienced animators when animating and posing the faces of their virtual humans using morph targets, also known as blend shapes or shape keys. This research presents a comparative study of two Facial Expression Recognition (FER) systems that employ pre-trained facial recognition models, applied here to emotion recognition in virtual humans. Since these systems were designed and trained for facial recognition in real humans rather than for this purpose, this study investigates how well they transfer to scenarios where virtual humans replace real humans. This assessment is a critical step towards evaluating the feasibility of integrating FER models into a support tool for the facial animation and posing of virtual humans. Through this investigation, this research provides evidence of the reliability of applying the FER library and Deepface systems to emotion recognition in virtual humans, contributing new ways to enhance the digital animation process and to overcome the inherent complexities of facial animation.
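The comparison described in the abstract can be illustrated with a minimal sketch: assuming each FER system reports a dominant-emotion label for a rendered virtual-human image, one can measure each system's accuracy against the intended (posed) emotion and the agreement rate between the two systems. The function name and all labels below are illustrative assumptions, not the paper's actual data or code.

```python
# Hypothetical sketch of the comparison step between two FER systems.
# Each system is assumed to output one dominant-emotion label per image.

def compare_fer_outputs(intended, fer_labels, deepface_labels):
    """Return each system's accuracy against the intended emotions,
    plus the mutual agreement rate between the two systems."""
    n = len(intended)
    acc_fer = sum(a == b for a, b in zip(fer_labels, intended)) / n
    acc_deepface = sum(a == b for a, b in zip(deepface_labels, intended)) / n
    agreement = sum(a == b for a, b in zip(fer_labels, deepface_labels)) / n
    return {"fer": acc_fer, "deepface": acc_deepface, "agreement": agreement}

# Illustrative labels for four posed expressions (not the paper's results).
intended = ["happy", "sad", "angry", "surprise"]
fer_out = ["happy", "sad", "fear", "surprise"]
deepface_out = ["happy", "neutral", "fear", "surprise"]

print(compare_fer_outputs(intended, fer_out, deepface_out))
```

In practice, the labels would come from calls to the FER library and Deepface on screenshots of the posed virtual humans; the aggregation shown here is independent of which system produced them.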
Keywords: comparative analysis, facial expression recognition systems, evaluation, emotional states, virtual humans

References

Anderson R Avila, Zahid Akhtar, Joao F Santos, Douglas O’Shaughnessy, and Tiago H Falk. 2018. Feature pooling of modulation spectrum features for improved speech emotion recognition in the wild. IEEE Transactions on Affective Computing 12, 1 (2018), 177–188.

David Burden and Maggi Savin-Baden. 2019. Virtual humans: Today and tomorrow. CRC Press.

Qiong Cao, Li Shen, Weidi Xie, Omkar M. Parkhi, and Andrew Zisserman. 2018. VGGFace2: A dataset for recognising faces across pose and age. arXiv:1710.08092 [cs.CV]

Daz Productions, Inc. 2023. Daz 3D - 3D Models and 3D Software | Daz 3D. https://www.daz3d.com/

Daz Productions, Inc. 2023. Daz 3D Animation Studio Tools & Features | Daz 3D. https://www.daz3d.com/technology/

Daz Productions, Inc. 2023. Daz to Unity Bridge. Daz Productions, Inc. https://www.daz3d.com/daz-to-unity-bridge

Daniel Valente de Macedo and Maria Andréia Formico Rodrigues. 2011. Experiences with rapid mobile game development using Unity engine. Computers in Entertainment 9 (2011), 14:1–14:12. https://doi.org/10.1145/2027456.2027460

Paul Ekman. 1973. Cross-cultural studies of facial expression. In Darwin and Facial Expression: A Century of Research in Review (1973), 169–222.

Paul Ekman and Wallace V Friesen. 1971. Constants across cultures in the face and emotion. Journal of personality and social psychology 17, 2 (1971), 124.

Paul Ekman and Wallace V Friesen. 1976. Measuring facial movement. Environmental psychology and nonverbal behavior 1 (1976), 56–75.

Paul Ekman and Dacher Keltner. 1970. Universal facial expressions of emotion. California Mental Health Research Digest 8, 4 (1970), 151–158.

A.J. Ferri. 2007. Willing Suspension of Disbelief: Poetic Faith in Film. Lexington Books. https://books.google.com.br/books?id=yB_ZOzxqMVcC

Alan J Fridlund and Erika L Rosenberg. 1995. Human facial expression: An evolutionary view. Nature 373, 6515 (1995), 569–569.

Prashant Gohel, Priyanka Singh, and Manoranjan Mohanty. 2021. Explainable AI: current status and future directions. arXiv:2107.07045 [cs] https://doi.org/10.48550/arXiv.2107.07045

Ian J. Goodfellow, Dumitru Erhan, Pierre Luc Carrier, Aaron Courville, Mehdi Mirza, Ben Hamner, Will Cukierski, Yichuan Tang, David Thaler, Dong-Hyun Lee, Yingbo Zhou, Chetan Ramaiah, Fangxiang Feng, Ruifan Li, Xiaojie Wang, Dimitris Athanasakis, John Shawe-Taylor, Maxim Milakov, John Park, Radu Ionescu, Marius Popescu, Cristian Grozea, James Bergstra, Jingjing Xie, Lukasz Romaszko, Bing Xu, Zhang Chuang, and Yoshua Bengio. 2013. Challenges in Representation Learning: A report on three machine learning contests. arXiv:1307.0414 [stat.ML]

Jonathan Gratch, Jeff Rickel, Elisabeth André, Justine Cassell, Eric Petajan, and Norman Badler. 2002. Creating interactive virtual humans: Some assembly required. IEEE Intelligent Systems 17, 4 (2002), 54–63.

Jonas De Araújo Luz Junior, Maria Andréia Formico Rodrigues, and Jessica Hammer. 2021. A storytelling game to foster empathy and connect emotionally with breast cancer journeys. In 2021 IEEE 9th International Conference on Serious Games and Applications for Health (SeGAH). IEEE, 1–8.

Nadia Magnenat-Thalmann and Daniel Thalmann. 2005. Handbook of virtual humans. John Wiley & Sons.

Masahiro Mori, Karl F. MacDorman, and Norri Kageki. 2012. The Uncanny Valley [From the Field]. IEEE Robotics & Automation Magazine 19, 2 (2012), 98–100. https://doi.org/10.1109/MRA.2012.2192811

NumFOCUS, Inc. 2023. Jupyter Notebook. https://jupyter.org/about

NumFOCUS, Inc. 2023. Pandas - Python Data Analysis Library. https://pandas.pydata.org/

Frederic I. Parke and Keith Waters. 2008. Computer Facial Animation (second ed.). AK Peters Ltd.

Ygor R. Serpa, Leonardo A. Pires, and Maria Andréia Formico Rodrigues. 2019. Milestones and New Frontiers in Deep Learning. In the 32nd SIBGRAPI Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T). 22–35. https://doi.org/10.1109/SIBGRAPI-T.2019.00008

Emmanuel V.B. Sampaio, Lucie Lévêque, Matthieu Perreira da Silva, and Patrick Le Callet. 2022. Are Facial Expression Recognition Algorithms Reliable in the Context of Interactive Media? A New Metric to Analyse Their Performance. In EmotionIMX: Considering Emotions in Multimedia Experience (ACM IMX 2022 Workshop). Aveiro, Portugal. https://hal.science/hal-03789571

Sefik Ilkin Serengil and Alper Ozpinar. 2020. LightFace: A Hybrid Deep Face Recognition Framework. In 2020 Innovations in Intelligent Systems and Applications Conference (ASYU). IEEE, 23–27. https://doi.org/10.1109/ASYU50717.2020.9259802

Justin Shenk, Aaron CG, Octavio Arriaga, and Owlwasrowk. 2021. justinshenk/fer: Zenodo. https://doi.org/10.5281/zenodo.5362356

Unity Technologies. 2023. Unity Real-Time Development Platform | 3D, 2D, VR & AR Engine. https://unity.com

Guido Van Rossum and Fred L. Drake. 2009. Python 3 Reference Manual. CreateSpace, Scotts Valley, CA.
Published
2023-11-06
How to Cite

LUZ JUNIOR, Jonas de Araújo; RODRIGUES, Maria Andréia Formico. Comparative Analysis of Facial Expression Recognition Systems for Evaluating Emotional States in Virtual Humans. In: SIMPÓSIO DE REALIDADE VIRTUAL E AUMENTADA (SVR), 25., 2023, Rio Grande/RS. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 38–47.