Recognizing Human Actions: A Deep Learning Model for UAV Piloting
Abstract
This project explores action recognition through a deep learning model based on Convolutional Neural Networks, establishing the foundation for human-robot interaction in a scenario where Unmanned Aerial Vehicles (UAV) are controlled exclusively by visual commands. The model analyzes images captured by an onboard camera and classifies them into nine categories. Each category issues a specific command based on human actions performed by individuals properly equipped with personal protective equipment. The results demonstrate the feasibility of the proposed approach, opening room for improvements aimed at its use in more complex scenarios.
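The abstract describes a pipeline in which a nine-way classifier output is mapped to a UAV command. A minimal sketch of that final mapping step is shown below; the action labels and command names are illustrative assumptions, not the categories actually used in the paper.

```python
# Hypothetical sketch of the command-issuing step: a classifier scores a
# frame over nine action categories (e.g. a CNN softmax output), and the
# top-scoring category selects a UAV command. All names here are
# illustrative assumptions.

ACTIONS = ["take_off", "land", "move_up", "move_down", "move_left",
           "move_right", "move_forward", "move_backward", "hover"]

def action_to_command(scores):
    """Map a 9-way score vector to its highest-scoring action label."""
    if len(scores) != len(ACTIONS):
        raise ValueError("expected one score per action category")
    best = max(range(len(scores)), key=lambda i: scores[i])
    return ACTIONS[best]

# Example: highest score at index 1 selects the second label.
print(action_to_command([0.05, 0.60, 0.05, 0.05, 0.05,
                         0.05, 0.05, 0.05, 0.05]))
```

In a real system this mapping would typically also apply a confidence threshold before issuing a command, so that ambiguous frames produce no action.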
Keywords:
Deep learning, Analytical models, Visualization, System performance, Redundancy, Human-robot interaction, Autonomous aerial vehicles, Reliability, Robots, Videos
Published
13/11/2024
How to Cite
CABRAL, Iohana A. Torres; WEHRMEISTER, Marco Aurelio; LAZZARETTI, André Eugenio; LOPES, Heitor Silverio. Recognizing Human Actions: A Deep Learning Model for UAV Piloting. In: SIMPÓSIO BRASILEIRO DE ROBÓTICA E SIMPÓSIO LATINO AMERICANO DE ROBÓTICA (SBR/LARS), 16., 2024, Goiânia/GO. Anais [...]. Porto Alegre: Sociedade Brasileira de Computação, 2024. p. 44-49.
