Physics-Based Motion Control Through DRL Reward Functions
Abstract
Producing natural physics-based motion for articulated characters is a challenging problem. To achieve good visual quality, the animator must tune the high-dimensional parameters of a motion controller while also keeping the controller functioning correctly, yet those parameters generally have an unintuitive relationship with the resulting motion. Deep Reinforcement Learning (DRL) has recently been explored to address this problem: with DRL, a neural network can be set up with observation and action parameters, and the animation can be controlled through a reward function. Nevertheless, choosing good parameters and a suitable reward function is not a simple task. In this paper, we investigate how the animator can control the motion by manipulating simple reward functions. We propose a DRL-based control structure in which the reward function can be adapted both to the desired motion and to the morphology of the controlled character. Moreover, we introduce speed into the training process so that, after training the neural network, the character can adapt its motion to different speeds in real time. Through a series of tests, we assess the animation and speed control of characters with different morphologies.
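To make the idea of animator-facing reward shaping concrete, the following is a minimal sketch of a speed-conditioned locomotion reward. It is an illustration only, not the paper's exact formulation: the term names (`forward_velocity`, `torso_tilt`, `joint_torques`), the weights, and the exponential shaping are all assumptions. The commanded `target_speed` would also be part of the network's observation, which is what allows the speed to be changed in real time after training.

```python
import numpy as np

def locomotion_reward(state, target_speed,
                      w_vel=0.7, w_upright=0.2, w_effort=0.1):
    """Hypothetical speed-conditioned reward; weights and terms are
    illustrative, not the paper's exact formulation."""
    # Reward matching the commanded target speed (the speed term
    # injected into training), peaking at 1 when velocities match.
    vel_term = np.exp(-2.0 * (state["forward_velocity"] - target_speed) ** 2)
    # Reward keeping the torso upright; cos(tilt) is 1 when vertical.
    upright_term = max(0.0, np.cos(state["torso_tilt"]))
    # Penalize actuation effort to discourage jittery, unnatural motion.
    effort_term = np.exp(-0.1 * np.sum(np.square(state["joint_torques"])))
    return w_vel * vel_term + w_upright * upright_term + w_effort * effort_term
```

Under this kind of structure, adapting the controller to a new motion style or character morphology amounts to reweighting or swapping individual terms rather than retuning low-level controller parameters directly.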