Recognition of LIBRAS hand configurations based on two-dimensional Fisher discriminant analysis, using depth images
Abstract
Deaf people communicate using sign language; however, they can only be understood by others who share this knowledge, who are usually other deaf people. Many of the people who interact with deaf individuals in education, health, and leisure settings do not know sign language, which seriously hinders the inclusion of deaf people, since they are unable to make themselves understood. This study presents a methodology for the automatic recognition of gestures representing the hand configurations of LIBRAS (Brazilian Sign Language). The approach consists of building a database of depth images captured with the Kinect® sensor; to these images we apply the (2D)2LDA technique to reduce their dimensionality and generate new features for the classification step. The system segments the hand image and recognizes all 61 hand configurations of the language, with an average hit rate of 95.7%. Because the capture device is insensitive to lighting, background, and the colors of clothing and skin, no restrictions are imposed on the environment.
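The dimensionality-reduction step described above can be illustrated with a minimal sketch of the (2D)2LDA idea (Noushath et al., 2006): Fisher-style between-class and within-class scatter matrices are computed in both the row and column directions of the image matrices, and each image is projected on both sides to a small discriminant feature matrix. The function names and the synthetic data below are hypothetical, chosen only for illustration; they are not the authors' implementation.

```python
import numpy as np

def scatter_eigvecs(imgs, labels, d, row_direction):
    """Top-d discriminant directions for one direction of 2D-LDA.

    imgs: (n, h, w) stack of image matrices; labels: (n,) class ids.
    """
    classes = np.unique(labels)
    M = imgs.mean(axis=0)                         # global mean image
    Sb = 0.0                                      # between-class scatter
    Sw = 0.0                                      # within-class scatter
    for c in classes:
        Xc = imgs[labels == c]
        Mc = Xc.mean(axis=0)                      # class mean image
        D = Mc - M
        if row_direction:                         # (h, h) scatter matrices
            Sb = Sb + len(Xc) * D @ D.T
            Sw = Sw + sum((x - Mc) @ (x - Mc).T for x in Xc)
        else:                                     # (w, w) scatter matrices
            Sb = Sb + len(Xc) * D.T @ D
            Sw = Sw + sum((x - Mc).T @ (x - Mc) for x in Xc)
    # Generalized eigenproblem Sb v = lambda Sw v, solved via pinv(Sw) @ Sb
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)
    return vecs[:, order[:d]].real                # keep the top-d directions

def fit_2d2lda(imgs, labels, d_rows, d_cols):
    """Fit left (row-direction) and right (column-direction) projections."""
    L = scatter_eigvecs(imgs, labels, d_rows, row_direction=True)
    R = scatter_eigvecs(imgs, labels, d_cols, row_direction=False)
    return L, R

def project(img, L, R):
    """Reduce an (h, w) image to a (d_rows, d_cols) feature matrix."""
    return L.T @ img @ R
```

In this two-directional scheme an h-by-w image shrinks to a d_rows-by-d_cols feature matrix, which is far smaller than the vector a classical (vectorized) LDA would operate on, and those compact features then feed the classification step.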
References
Carneiro, A., P. Cortez, and R. Costa. (2009), Reconhecimento de Gestos da LIBRAS com Classificadores Neurais a partir dos Momentos Invariantes de Hu: Interaction, p. 190-195.
Chao, S., Z. Tianzhu, B. Bing-Kun, X. Changsheng, and M. Tao. (2013), Discriminative Exemplar Coding for Sign Language Recognition With Kinect: Cybernetics, IEEE Transactions on, v. 43, p. 1418-1428.
Deimel, B., and S. Schröter. (1998), Improving Hand Gesture Recognition Via Video Based Methods for the Separation of the Forearm from the Human Hand, Dekanat Informatik, Univ.
Lamar, M. V., M. S. Bhuiyan, and A. Iwata. (1999), Hand gesture recognition using morphological principal component analysis and an improved CombNET-II: Systems, Man, and Cybernetics, 1999. IEEE SMC '99 Conference Proceedings. 1999 IEEE International Conference on, v. 4, p. 57-62.
Maraqa, M., F. Al-Zboun, M. Dhyabat, and R. A. Zitar. (2012), Recognition of Arabic Sign Language (ArSL) using recurrent neural networks: Journal of Intelligent Learning Systems and Applications, v. 4, p. 41.
Noushath, S., G. Hemantha Kumar, and P. Shivakumara. (2006), (2D)2LDA: An efficient approach for face recognition: Pattern Recognition, v. 39, p. 1396-1400.
Otsu, N. (1975), A threshold selection method from gray-level histograms: Automatica, v. 11, p. 23-27.
Pimenta, N., and R. M. de Quadros. (2010), Curso de LIBRAS 1: iniciante, LSB Vídeo.
Porfirio, A. J., K. Lais Wiggers, L. E. S. Oliveira, and D. Weingaertner. (2013), LIBRAS Sign Language Hand Configuration Recognition Based on 3D Meshes: Systems, Man, and Cybernetics (SMC), 2013 IEEE International Conference on, p. 1588-1593.
Prokop, R. J., and A. P. Reeves. (1992), A survey of moment-based techniques for unoccluded object representation and recognition: CVGIP: Graphical Models and Image Processing, v. 54, p. 438-460.
Rakun, E., M. Andriani, I. W. Wiprayoga, K. Danniswara, and A. Tjandra. (2013), Combining depth image and skeleton data from Kinect for recognizing words in the sign system for Indonesian language (SIBI [Sistem Isyarat Bahasa Indonesia]): Advanced Computer Science and Information Systems (ICACSIS), 2013 International Conference on, p. 387-392.
Ribeiro, H. L., and A. Gonzaga. (2006), Reconhecimento de gestos de mão usando o algoritmo GMM e vetor de características de momentos de imagem.
Theodoridis, S., and K. Koutroumbas. (2008), Pattern Recognition, Fourth Edition, Academic Press, 900 p.
