Motion In-betweening for Physically Simulated Characters



We present a motion in-betweening framework that generates high-quality, physically plausible character animation from temporally sparse keyframes, which serve as soft animation constraints. More specifically, we use deep reinforcement learning to train imitation policies for physically simulated characters, where each policy has access to only limited information. Once trained, the simulated characters can adapt to external perturbations while following the given sparse input keyframes. We demonstrate the performance of our framework on two different motion datasets and compare our results with those generated by a baseline imitation policy.
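To make the idea of keyframes as soft constraints concrete, the sketch below shows one plausible form of a keyframe-tracking reward for such a policy: the current simulated pose is scored against a target interpolated between the bracketing keyframes. This is an illustrative assumption, not the paper's actual reward; the function name, pose layout, and the exponentiated-error form are all hypothetical (though the latter is common in imitation-style RL rewards).

```python
import numpy as np

def keyframe_tracking_reward(sim_pose, keyframes, t, sigma=0.5):
    """Hypothetical reward: keep the simulated pose close to the target
    implied by sparse keyframes, treated as soft constraints.

    sim_pose  : (J,) current simulated joint values (assumed layout)
    keyframes : time-sorted list of (time, pose) pairs, temporally sparse
    t         : current simulation time
    """
    # Linearly interpolate between the two keyframes bracketing time t.
    times = [kt for kt, _ in keyframes]
    i = int(np.searchsorted(times, t))
    i = min(max(i, 1), len(keyframes) - 1)
    (t0, p0), (t1, p1) = keyframes[i - 1], keyframes[i]
    alpha = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
    target = (1 - alpha) * np.asarray(p0) + alpha * np.asarray(p1)
    # Soft constraint: an exponentiated tracking error rewards staying
    # near the interpolated keyframe target without hard-pinning the pose,
    # leaving room to react to external perturbations.
    err = np.linalg.norm(np.asarray(sim_pose) - target)
    return float(np.exp(-(err ** 2) / (sigma ** 2)))
```

Because the constraint is soft, a perturbation that pushes the character off the keyframe trajectory only lowers the reward rather than invalidating the state, which is what lets the simulated character recover while still steering back toward the next keyframe.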

