Game Interface for Inclusive Teaching Through Educational Robotics
Abstract
This article presents a system that uses computer vision and machine learning to interpret Libras gestures, with a focus on promoting social and educational inclusion through an interactive game. The system integrates the YOLO architecture for real-time detection and classification of hand signs. The training data comprises 1735 images of static gestures from the Libras alphabet, captured under different conditions and manually annotated. The model achieved an Overall Average Accuracy of 98.9% and an Overall Accuracy of 89.5%. Gesture classification allows the user to control a game environment in which sign language signs correspond to movements that draw geometric shapes. To the best of our knowledge, the direct integration of Libras sign recognition with educational robotics through a game, with an explicit focus on inclusion, constitutes the novelty of this research.
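The abstract does not include code; as a rough illustration of the pipeline it describes, the sketch below pairs a YOLO detector with a webcam loop and maps recognized Libras letters to drawing commands. The weights file `libras_yolo.pt`, the confidence threshold, and the letter-to-shape mapping are all illustrative assumptions, not artifacts from the paper.

```python
# Minimal sketch (not the authors' code): real-time Libras letter detection
# with a YOLO model, mapping recognized letters to hypothetical drawing
# actions for a game environment like the one described in the paper.
import cv2
from ultralytics import YOLO

# Hypothetical weights, standing in for a model trained on the 1735-image dataset.
model = YOLO("libras_yolo.pt")

# Hypothetical mapping from recognized letters to geometric-shape drawing actions.
LETTER_TO_ACTION = {
    "C": "draw_circle",
    "Q": "draw_square",
    "T": "draw_triangle",
}

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Run detection on the current webcam frame.
    result = model(frame, verbose=False)[0]
    for box in result.boxes:
        conf = float(box.conf[0])
        if conf < 0.5:  # confidence threshold (illustrative)
            continue
        letter = result.names[int(box.cls[0])]
        action = LETTER_TO_ACTION.get(letter)
        if action:
            print(f"Detected '{letter}' ({conf:.2f}) -> {action}")
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to quit
        break
cap.release()
```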
Keywords:
YOLO, Visualization, Sign language, Accuracy, Shape, Education, Training data, Games, Real-time systems, Robots, LIBRAS, Educational Robotics
Published
13/10/2025
How to Cite
LIMA, Júlia M.; HUDSON, Thayron M.; BRANDÃO, Alexandre S. Game Interface for Inclusive Teaching Through Educational Robotics. In: SIMPÓSIO BRASILEIRO DE ROBÓTICA E SIMPÓSIO LATINO AMERICANO DE ROBÓTICA (SBR/LARS), 17., 2025, Vitória/ES. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2025. p. 397-401.
