We present a method for reproducing complex multi-character interactions for physically simulated humanoid characters using deep reinforcement learning.
We present a system that animates children’s drawings of the human figure, is robust to the variance inherent in these depictions, and is simple enough for anyone to use.
We focus on the underexplored question of how to personalize these systems while preserving privacy.
We introduce an alternative formulation called “user-centric ranking,” based on a transposed view that casts users as “tokens” and items as “documents.” We show that this formulation has a number of advantages and shows fewer signs of quality saturation when trained on substantially larger datasets.
We present InterWild, which brings MoCap and in-the-wild (ITW) samples into shared domains for robust 3D interacting-hands recovery in the wild, despite limited ITW 2D/3D interacting-hands data.
In this work, we propose a 3D compositional morphable model of eyeglasses that accurately incorporates high-fidelity geometric and photometric interaction effects.
This paper demonstrates an approach for learning highly semantic image representations without relying on hand-crafted data augmentations.
We present the first neural relighting approach for rendering high-fidelity personalized hands that can be animated in real time under novel illumination.
In this paper, we propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations that are fully compatible with the massively parallel graphics rendering pipeline.