Mathgamer
- 11 April 2016
Character Controllers using Motion VAEs
Work by the University of British Columbia and Electronic Arts.
A fundamental problem in computer animation is that of realizing purposeful and realistic human movement given a sufficiently-rich set of motion capture clips. We learn data-driven generative models of human movement using autoregressive conditional variational autoencoders, or Motion VAEs. The latent variables of the learned autoencoder define the action space for the movement and thereby govern its evolution over time. Planning or control algorithms can then use this action space to generate desired motions. In particular, we use deep reinforcement learning to learn controllers that achieve goal-directed movements. We demonstrate the effectiveness of the approach on multiple tasks. We further evaluate system-design choices and describe the current limitations of Motion VAEs.
In this work, the authors present a kinematics-based motion model, meaning it does not need a physics engine to predict the motion of the character. It simply looks at the previous few poses of the character in motion and outputs future poses that continue the same motion. It does so by training an autoregressive variational autoencoder model on motion capture data.
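To make this concrete, here is a minimal PyTorch sketch of one training step for such an autoregressive conditional VAE, assuming a single previous pose as the condition and hypothetical names and layer sizes (PoseEncoder, PoseDecoder, POSE_DIM, LATENT_DIM); the plain feed-forward decoder here is a stand-in for the mixture-of-experts decoder described next.

```python
import torch
import torch.nn as nn

POSE_DIM, LATENT_DIM = 267, 32  # hypothetical sizes, not the paper's

class PoseEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * POSE_DIM, 256), nn.ELU(),
            nn.Linear(256, 256), nn.ELU(),
        )
        self.mu = nn.Linear(256, LATENT_DIM)      # mean of q(z | p_t, p_{t-1})
        self.logvar = nn.Linear(256, LATENT_DIM)  # log-variance of q

    def forward(self, prev_pose, cur_pose):
        h = self.net(torch.cat([prev_pose, cur_pose], dim=-1))
        return self.mu(h), self.logvar(h)

class PoseDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + POSE_DIM, 256), nn.ELU(),
            nn.Linear(256, 256), nn.ELU(),
            nn.Linear(256, POSE_DIM),
        )

    def forward(self, z, prev_pose):
        # Autoregressive: the decoder is conditioned on the previous pose.
        return self.net(torch.cat([z, prev_pose], dim=-1))

def train_step(encoder, decoder, prev_pose, cur_pose, beta=0.2):
    mu, logvar = encoder(prev_pose, cur_pose)
    # Reparameterization trick: z = mu + sigma * eps, so sampling stays differentiable.
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    recon = decoder(z, prev_pose)
    recon_loss = (recon - cur_pose).pow(2).mean()       # reconstruct the current pose
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    return recon_loss + beta * kl                       # beta weights the KL term
```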
The model is autoregressive because it uses previous poses to reconstruct the current pose. The pose encoder outputs a latent distribution from which a latent variable z is sampled; this sampling adds slight variations to the reconstruction, ensuring the output poses are not repetitive, and these slight variations make the combined motion look more natural and realistic. The decoder then reconstructs the pose using a mixture of experts: a gating network computes blending weights over several expert networks, which keeps the individual elements of the output pose, such as hand or body movements, consistent with the overall motion of the body.
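A hedged sketch of such a mixture-of-experts decoder is below. Note one simplification: in the paper the gating coefficients blend the experts' network weights, whereas this sketch blends the experts' outputs; MoEDecoder, NUM_EXPERTS, and the layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

POSE_DIM, LATENT_DIM, NUM_EXPERTS = 267, 32, 6  # hypothetical sizes

class MoEDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Gating network: maps (z, previous pose) to a blend weight per expert.
        self.gate = nn.Sequential(
            nn.Linear(LATENT_DIM + POSE_DIM, 64), nn.ELU(),
            nn.Linear(64, NUM_EXPERTS),
        )
        # Each expert is a small decoder that produces a full pose.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(LATENT_DIM + POSE_DIM, 256), nn.ELU(),
                nn.Linear(256, POSE_DIM),
            )
            for _ in range(NUM_EXPERTS)
        ])

    def forward(self, z, prev_pose):
        x = torch.cat([z, prev_pose], dim=-1)
        w = torch.softmax(self.gate(x), dim=-1)               # (B, NUM_EXPERTS)
        outs = torch.stack([e(x) for e in self.experts], 1)   # (B, NUM_EXPERTS, POSE_DIM)
        # Blend the experts' poses with the gating weights.
        return (w.unsqueeze(-1) * outs).sum(dim=1)
```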
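Finally, as the abstract notes, the latent variables define the action space for control: a policy trained with deep reinforcement learning picks a latent z at each frame, and the frozen decoder advances the pose toward the goal. A conceptual rollout loop, with `policy` and `goal` as hypothetical placeholders rather than the paper's implementation, might look like this:

```python
import torch

@torch.no_grad()
def rollout(policy, decoder, init_pose, goal, num_frames=120):
    """Drive the Motion VAE with a learned controller (conceptual sketch)."""
    pose, frames = init_pose, []
    for _ in range(num_frames):
        z = policy(pose, goal)   # the policy's action is the MVAE latent
        pose = decoder(z, pose)  # autoregressive step: (z, pose) -> next pose
        frames.append(pose)
    return torch.stack(frames)
```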