Comparison Study of Automated Facial Expression Recognition Models

  • Murilo de Souza Preto (UFABC)
  • Fernando Teubl Ferreira (UFABC)
  • Celso Setsuo Kurashima (UFABC)

Abstract


Facial expressions play a crucial role in human nonverbal communication, and within psychology there is strong consensus on the existence of five key emotions: anger, fear, disgust, sadness, and happiness. This paper evaluates multiple facial expression recognition models, assessing their performance across different machines and databases. By identifying the strengths and weaknesses of each option, the study seeks to determine comparatively the most suitable model for specific tasks or scenarios. On each computer, every database was processed with each detection model while the runtime required for facial expression detection was measured. The detection models, Residual Masking Network and DeepFace, were tested on the Extended Cohn-Kanade and AffectNet databases. The collected data point towards a higher average accuracy for Residual Masking Network but a faster runtime for DeepFace. Therefore, DeepFace may be preferred in scenarios where time constraints are a primary concern, processing capability is limited, or the emphasis is on recognizing happiness or neutral expressions, while Residual Masking Network may be favored when higher detection accuracy is the goal.
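
As a rough illustration of the benchmarking procedure described above, the sketch below times per-image emotion prediction for both models on a folder of images. It is not the authors' evaluation code: it assumes the pip packages deepface and rmn (a reference Residual Masking Network implementation) with their documented entry points DeepFace.analyze and RMN.detect_emotion_for_single_frame, and the dataset path is hypothetical.

```python
import glob
import time

import cv2
from deepface import DeepFace  # assumed: pip package "deepface"
from rmn import RMN            # assumed: pip package "rmn" (Residual Masking Network)

images = glob.glob("dataset/ck_plus/*.png")  # hypothetical local dataset path
rmn_model = RMN()
results = {"DeepFace": [], "RMN": []}

for path in images:
    frame = cv2.imread(path)

    # DeepFace: restrict the analysis to the emotion attribute and time it.
    start = time.perf_counter()
    df_out = DeepFace.analyze(img_path=path, actions=["emotion"], enforce_detection=False)
    df_out = df_out if isinstance(df_out, list) else [df_out]  # newer versions return a list
    results["DeepFace"].append((df_out[0]["dominant_emotion"], time.perf_counter() - start))

    # Residual Masking Network: per-frame detection, assuming the rmn API
    # returns a list of face dictionaries containing an "emo_label" field.
    start = time.perf_counter()
    rmn_out = rmn_model.detect_emotion_for_single_frame(frame)
    label = rmn_out[0]["emo_label"] if rmn_out else None
    results["RMN"].append((label, time.perf_counter() - start))

# Report the average runtime per image (seconds) for each model.
for name, rows in results.items():
    times = [t for _, t in rows]
    print(name, sum(times) / len(times) if times else float("nan"))
```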

References

P. Ekman, “What scientists who study emotion agree about,” Perspectives on Psychological Science, vol. 11, no. 1, pp. 31–34, 2016, PMID: 26817724. [Online]. Available: https://doi.org/10.1177/1745691615596992

J. Stouten and D. De Cremer, “‘Seeing is believing’: The effects of facial expressions of emotion and verbal communication in social dilemmas,” Journal of Behavioral Decision Making, vol. 23, no. 3, pp. 271–287, Jul. 2010. [Online]. Available: [link].

S. Trichas, B. Schyns, R. Lord, and R. Hall, “‘Facing’ leaders: Facial expression and leadership perception,” The Leadership Quarterly, vol. 28, no. 2, pp. 317–333, Apr. 2017. [Online]. Available: [link].

M. A. Assari and M. Rahmati, “Driver drowsiness detection using face expression recognition,” in 2011 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), Nov. 2011, pp. 337–341.

Q. Meng, X. Hu, J. Kang, and Y. Wu, “On the effectiveness of facial expression recognition for evaluation of urban sound perception,” Science of The Total Environment, vol. 710, p. 135484, Mar. 2020. [Online]. Available: [link].

C. Bustos, N. Elhaouij, A. Sole-Ribalta, J. Borge-Holthoefer, A. Lapedriza, and R. Picard, “Predicting driver self-reported stress by analyzing the road scene,” arXiv:2109.13225 [cs], Sep. 2021. [Online]. Available: [link]

S. Li and W. Deng, “Deep facial expression recognition: A survey,” IEEE Transactions on Affective Computing, vol. 13, no. 3, pp. 1195–1215, Jul. 2022. [Online]. Available: https://doi.org/10.1109/taffc.2020.2981446

“Free Image on Pixabay - Beard, Face, Man, Model, Mustache,” 2016. [Online]. Available: [link].

P. Lucey, J. F. Cohn, T. Kanade, J. Saragih, Z. Ambadar, and I. Matthews, “The Extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression,” in 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, 2010, pp. 94–101.

A. Mollahosseini, B. Hasani, and M. H. Mahoor, “AffectNet: A database for facial expression, valence, and arousal computing in the wild,” IEEE Transactions on Affective Computing, vol. 10, no. 1, pp. 18–31, Jan. 2019. [Online]. Available: https://doi.org/10.1109/taffc.2017.2740923

“OpenCV: Cascade Classifier,” Jul. 2023, [Online; accessed 26. Jul. 2023]. [Online]. Available: [link].

May 2022, [Online; accessed 26. Jul. 2023]. [Online]. Available: [link]

L. Pham, T. H. Vu, and T. A. Tran, “Facial expression recognition using residual masking network,” in 2020 25th International Conference on Pattern Recognition (ICPR), 2021, pp. 4513–4519.

S. I. Serengil and A. Ozpinar, “Hyperextended lightface: A facial attribute analysis framework,” in 2021 International Conference on Engineering and Emerging Technologies (ICEET). IEEE, 2021, pp. 1–4. [Online]. Available: https://doi.org/10.1109/ICEET53442.2021.9659697

“DeepFace: Closing the Gap to Human-Level Performance in Face Verification,” Meta Research, Aug. 2023, [Online; accessed 13. Aug. 2023]. [Online]. Available: [link].

“Visual Geometry Group - University of Oxford,” Aug. 2023, [Online; accessed 13. Aug. 2023]. [Online]. Available: [link]
Published
Nov. 6, 2023
PRETO, Murilo de Souza; FERREIRA, Fernando Teubl; KURASHIMA, Celso Setsuo. Comparison Study of Automated Facial Expression Recognition Models. In: WORKSHOP DE TRABALHOS DA GRADUAÇÃO - CONFERENCE ON GRAPHICS, PATTERNS AND IMAGES (SIBGRAPI), 36., 2023, Rio Grande/RS. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2023. p. 152-155. DOI: https://doi.org/10.5753/sibgrapi.est.2023.27470.