Possibilitando o Reconhecimento de Expressões Faciais em Aplicações Ginga-NCL

  • Pedro Alves Valentim (UFF)
  • Fábio Barreto (UFF)
  • Débora C. Muchaluat-Saade (UFF)

Abstract


As the facial recognition research field grows, so do the possibilities for digital TV applications. In the current state of the art, however, no single algorithm can be assumed to be the best choice for every kind of application. This work proposes an architecture that enables facial expression recognition for TV in a way that is agnostic to the recognition algorithm. As a proof of concept, the proposal was developed for the Ginga middleware. Two implementations are presented: the first based on the current version of the Ginga middleware and the second on a proposed extended version of the middleware, exploring the viability of the present work.
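
To illustrate how an application could consume recognition results independently of the underlying algorithm, the NCLua sketch below reacts to a detected facial expression by signaling the NCL document through a standard presentation event. This is only a minimal sketch: the event class ('user'), the fields expression and confidence, and the anchor name onSmile are illustrative assumptions, not the interface actually defined by the proposed architecture.

    -- Minimal NCLua sketch. Assumptions: the recognizer component delivers its
    -- results to this script as events of class 'user' carrying the hypothetical
    -- fields 'expression' and 'confidence'; the NCL document defines an anchor
    -- named 'onSmile' on this media object and a link that reacts to it.

    local THRESHOLD = 0.8  -- minimum confidence to accept a recognized expression

    local function handler(evt)
      -- Ignore everything that is not a (hypothetical) recognition result.
      if evt.class ~= 'user' or evt.expression == nil then
        return
      end

      -- React only to confident detections of a smiling viewer.
      if evt.expression == 'happiness' and (evt.confidence or 0) >= THRESHOLD then
        -- Notify the NCL document by starting the 'onSmile' anchor, so that
        -- an NCL link can trigger the desired presentation behavior.
        event.post({
          class  = 'ncl',
          type   = 'presentation',
          label  = 'onSmile',
          action = 'start',
        })
      end
    end

    event.register(handler)

Keeping the recognizer behind such a generic event interface is what allows the recognition algorithm to be replaced without changing the application logic.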

References

ABNT. 2011. Digital terrestrial television - Data coding and transmission specification for digital broadcasting - Part 2: Ginga-NCL for fixed and mobile receivers - XML application language for application coding. ABNT NBR 15606-2:2011 standard.

Fábio Barreto, Raphael S. de Abreu, Eyre Brasil B. Montevecchi, Marina I. P. Josué, Pedro A. Valentim, and Debora C. Muchaluat-Saade. 2020. Extending Ginga-NCL to Specify Multimodal Interactions With Multiple Users. In Anais do XXVI Simpósio Brasileiro de Sistemas Multimídia e Web. SBC.

Fábio Barreto, Eyre Brasil B. Montevecchi, Raphael Abreu, Joel A. F. dos Santos, and Debora C. Muchaluat-Saade. 2019. Providing Multimodal User Interaction in NCL. In Anais Estendidos do XXV Simpósio Brasileiro de Sistemas Multimídia e Web. SBC, 203–204.

Fábio Barreto, Eyre Brasil B. Montevecchi, Raphael Abreu, Joel A. F. dos Santos, and Debora C. Muchaluat-Saade. 2019. Providing multi-user in NCL with User Agent and User Profile. In Anais Estendidos do XXV Simpósio Brasileiro de Sistemas Multimídia e Web (Florianópolis). SBC, Porto Alegre, RS, Brasil, 205–206. https://doi.org/10.5753/webmedia_estendido.2019.8168

Carlos Eduardo CF Batista, Luiz Fernando Gomes Soares, and Guido Lemos de Souza Filho. 2010. Estendendo o uso das classes de dispositivos Ginga-NCL. In Anais Principais do XVI Simpósio Brasileiro de Sistemas Multimídia e Web. SBC, 27–34.

Joseph Bullington. 2005. ’Affective’ computing and emotion recognition systems: the future of biometric surveillance. In Proceedings of the 2nd annual conference on Information security curriculum development. 95–99.

Erik Cambria. 2016. Affective computing and sentiment analysis. IEEE Intelligent Systems 31, 2 (2016), 102–107.

Julian L Center Jr and Christopher R Wren. 2004. Videoconferencing method with tracking of face and dynamic bandwidth allocation. US Patent 6,680,745.

Jeffrey S Coffin and Darryl Ingram. 1999. Facial recognition system for security access and identification. US Patent 5,991,429.

Jeff F Cohn and Fernando De la Torre. 2015. Automated face analysis for affective computing. (2015).

R. Cowie, E. Douglas-Cowie, N. Tsapatsoulis, G. Votsis, S. Kollias, W. Fellenz, and J. G. Taylor. 2001. Emotion recognition in human-computer interaction. IEEE Signal Processing Magazine 18, 1 (2001), 32–80.

Rafael Rossi de Mello Brandao, Guido Lemos de Souza Filho, Carlos Eduardo Coelho Freire Batista, and Luiz Fernando Gomes Soares. 2010. Extended features for the Ginga-NCL environment: Introducing the LuaTV API. In 2010 Proceedings of 19th International Conference on Computer Communications and Networks. IEEE, 1–6.

Paul Ekman. 1993. Facial expression and emotion. American Psychologist 48, 4 (1993), 384.

Paul Ekman. 1999. Facial expressions. Handbook of Cognition and Emotion 16, 301 (1999), e320.

Nickolaos Fragopanagos and John G Taylor. 2005. Emotion recognition in human–computer interaction. Neural Networks 18, 4 (2005), 389–405.

Barbara S Guzak, Hung-Tack Kwan, and Janki Y Vora. 2011. Multiple sensory channel approach for translating human emotions in a computing environment. US Patent App. 12/540,735.

Urs Hunkeler, Hong Linh Truong, and Andy Stanford-Clark. 2008. MQTT-S—A publish/subscribe protocol for Wireless Sensor Networks. In 2008 3rd International Conference on Communication Systems Software and Middleware and Workshops (COMSWARE’08). IEEE, 791–798.

Roberto Ierusalimschy. 2006. Programming in Lua. Lua.org Publisher.

ITU. 2014. Nested Context Language (NCL) and Ginga-NCL. http://www.itu.int/rec/T-REC-H.761. ITU-T Recommendation H.761.

Abhishek Jha. 2007. Class room attendance system using facial recognition system. The International Journal of Mathematics, Science, Technology and Management 2, 3 (2007), 4–7.

Andrea R Johnson, Matthew J Johnson, Nico Toutenhoofd, and Clinton L Fenton. 2014. Assisted photo-tagging with facial recognition models. US Patent 8,861,804.

Takeo Kanade, Jeffrey F Cohn, and Yingli Tian. 2000. Comprehensive database for facial expression analysis. In Proceedings Fourth IEEE International Conference on Automatic Face and Gesture Recognition (Cat. No. PR00580). IEEE, 46–53.

E. P. Kukula and S. J. Elliott. 2004. Evaluation of a facial recognition algorithm across three illumination conditions. IEEE Aerospace and Electronic Systems Magazine 19, 9 (2004), 19–23.

Yong Li, Jiabei Zeng, Shiguang Shan, and Xilin Chen. 2018. Occlusion aware facial expression recognition using CNN with attention mechanism. IEEE Transactions on Image Processing 28, 5 (2018), 2439–2450.

Christine L Lisetti and David E Rumelhart. 1998. Facial Expression Recognition Using a Neural Network. In FLAIRS Conference. 328–332.

Daniel McDuff, Abdelrahman Mahmoud, Mohammad Mavadati, May Amr, Jay Turcot, and Rana el Kaliouby. 2016. AFFDEX SDK: a cross-platform real-time multi-face expression recognition toolkit. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. 3723–3726.

David A Monroe. 2009. Method for incorporating facial recognition technology in a multimedia surveillance system. US Patent 7,634,662.

Iordanis Mpiperis, Sotiris Malassiotis, and Michael G Strintzis. 2008. Bilinear models for 3-D face and facial expression recognition. IEEE Transactions on Information Forensics and Security 3, 3 (2008), 498–511.

Rosalind W Picard. 1999. Affective Computing for HCI. In HCI (1). Citeseer, 829–833.

Rosalind W Picard. 2000. Affective computing. MIT Press.

Rosalind W Picard. 2003. Affective computing: challenges. International Journal of Human-Computer Studies 59, 1-2 (2003), 55–64.

Lawrence Sirovich and Michael Kirby. 1987. Low-dimensional procedure for the characterization of human faces. JOSA A 4, 3 (1987), 519–524.

Jianhua Tao, Tieniu Tan, and Rosalind W Picard. 2007. Affective computing and intelligent interaction. In Second International Conference, ACII. Springer.

Matthew A Turk and Alex P Pentland. 1991. Face recognition using eigenfaces. In Proceedings. 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE Computer Society, 586–587.

Siyue Xie and Haifeng Hu. 2017. Facial expression recognition with FRR-CNN. Electronics Letters 53, 4 (2017), 235–237.

Álan L. V. Guedes and Simone D. J. Barbosa. 2019. Extending multimedia languages to support multimodal-multiuser interactions. In Anais Estendidos do XXV Simpósio Brasileiro de Sistemas Multimídia e Web (Florianópolis). SBC, Porto Alegre, RS, Brasil, 5–8. https://doi.org/10.5753/webmedia_estendido.2019.8125
Published
30/11/2020
How to Cite

VALENTIM, Pedro Alves; BARRETO, Fábio; MUCHALUAT-SAADE, Débora C.. Possibilitando o Reconhecimento de Expressões Faciais em Aplicações Ginga-NCL. In: WORKSHOP DE TRABALHOS DE INICIAÇÃO CIENTÍFICA - SIMPÓSIO BRASILEIRO DE SISTEMAS MULTIMÍDIA E WEB (WEBMEDIA), 26., 2020, São Luís. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2020. p. 53-56. ISSN 2596-1683. DOI: https://doi.org/10.5753/webmedia_estendido.2020.13062.