Computer Vision and Neural Networks for Libras Recognition
Abstract
In recent years, there have been many efforts to improve the inclusion of people with disabilities, and the study of sign languages has become an important research area worldwide. This project therefore aims to develop an information system for the automatic recognition of Brazilian Sign Language (LIBRAS). Recognition is performed by processing video alone, without relying on auxiliary hardware. Given the difficulty of building such a system, the process was divided into stages: in addition to dynamically identifying signs and their context, neural-network techniques and tools were used to extract the features of interest and classify them. Furthermore, a dataset of signs covering the LIBRAS alphabet was built, together with a tool that interprets, with the aid of a webcam, the sign performed by a user and transcribes it on the screen.
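The staged pipeline described above (frame acquisition, preprocessing, feature extraction, classification, on-screen transcription) can be sketched as follows. This is a minimal illustration only: all function and class names are hypothetical, and a nearest-centroid classifier over toy templates stands in for the neural-network stages used in the actual system.

```python
import numpy as np

def preprocess(frame: np.ndarray, size: int = 8) -> np.ndarray:
    """Stage 1: reduce an RGB frame to a small grayscale patch."""
    gray = frame.mean(axis=2)                     # naive grayscale conversion
    h, w = gray.shape
    ys = np.linspace(0, h - 1, size).astype(int)  # coarse resize by sampling
    xs = np.linspace(0, w - 1, size).astype(int)
    return gray[np.ix_(ys, xs)]

def extract_features(patch: np.ndarray) -> np.ndarray:
    """Stage 2: flatten and L2-normalise the patch into a feature vector."""
    v = patch.ravel().astype(float)
    n = np.linalg.norm(v)
    return v / n if n else v

class LetterClassifier:
    """Stage 3: nearest-centroid classification over per-letter templates
    (a stand-in for the neural-network classifier)."""
    def __init__(self, templates):
        self.labels = list(templates)
        self.centroids = np.stack([templates[l] for l in self.labels])

    def predict(self, features: np.ndarray) -> str:
        dists = np.linalg.norm(self.centroids - features, axis=1)
        return self.labels[int(np.argmin(dists))]

def transcribe(frame: np.ndarray, clf: LetterClassifier) -> str:
    """Full pipeline: frame in, recognised letter out."""
    return clf.predict(extract_features(preprocess(frame)))
```

In a real deployment the frame would come from a webcam (e.g. OpenCV's `cv2.VideoCapture`) and the predicted letter would be rendered on screen; here, synthetic frames can exercise the same stages end to end.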