Simulation and Retargeting of Complex Multi-Character Interactions
We present a method for reproducing complex multi-character interactions for physically simulated humanoid characters using deep reinforcement learning.
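A minimal sketch of the generic recipe behind such systems (not the authors' exact objective): a physics-simulated control policy is trained to maximize an imitation reward that tracks each character's reference pose and, crucially, the relative arrangement between the two characters so that contacts line up. The weights and the interaction term below are illustrative assumptions.

```python
# Sketch of a motion-imitation reward for two interacting simulated
# characters. Generic DRL recipe; weights and terms are assumptions.
import numpy as np

def imitation_reward(sim_joints_a, ref_joints_a,
                     sim_joints_b, ref_joints_b,
                     w_pose=2.0, w_interact=1.0):
    """sim_/ref_joints_*: (J, 3) world-space joint positions."""
    # Per-character pose tracking: penalize deviation from the reference clip.
    pose_err = (np.linalg.norm(sim_joints_a - ref_joints_a, axis=-1).mean()
                + np.linalg.norm(sim_joints_b - ref_joints_b, axis=-1).mean())
    # Interaction term: preserve the relative positions between the two
    # characters, which is what keeps contacts (e.g., a handshake) aligned.
    rel_sim = sim_joints_a[:, None] - sim_joints_b[None, :]
    rel_ref = ref_joints_a[:, None] - ref_joints_b[None, :]
    interact_err = np.linalg.norm(rel_sim - rel_ref, axis=-1).mean()
    return np.exp(-w_pose * pose_err) + np.exp(-w_interact * interact_err)
```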
Diffusion models have achieved impressive performance on many content-creation applications, such as image-to-image translation and text-to-image generation. We propose a framework based on diffusion models for consistent and realistic long-term novel view synthesis.
Recognizing human activities is a decades-old problem in computer vision. With recent advancements in user-assistive augmented reality and virtual reality (AR/VR) systems...
We present the design of a productionized end-to-end stereo depth sensing system that performs pre-processing, online stereo rectification, and stereo depth estimation with...
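For orientation, here is a minimal sketch of the rectify-then-match pipeline the abstract outlines, using OpenCV stand-ins; a production system replaces parts of this with its own (partly learned) components, so treat every call as illustrative.

```python
# Classical stereo depth pipeline: rectify both views, match, convert
# disparity to depth. OpenCV stand-ins for the stages the abstract names.
import cv2
import numpy as np

def stereo_depth(left, right, K1, d1, K2, d2, R, T):
    """left/right: grayscale views; K*/d*: intrinsics and distortion;
    R, T: rotation/translation from the left to the right camera."""
    size = (left.shape[1], left.shape[0])
    # Online rectification: warp both views so epipolar lines become rows.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
    mapL = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    mapR = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
    left_r = cv2.remap(left, mapL[0], mapL[1], cv2.INTER_LINEAR)
    right_r = cv2.remap(right, mapR[0], mapR[1], cv2.INTER_LINEAR)
    # Disparity estimation (SGBM returns fixed-point disparity scaled by 16).
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disp = sgbm.compute(left_r, right_r).astype(np.float32) / 16.0
    # Depth from disparity: z = focal_length * baseline / disparity.
    f, baseline = P1[0, 0], float(np.linalg.norm(T))
    return f * baseline / np.clip(disp, 1e-3, None)
```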
We introduce RoDynRF, an algorithm for reconstructing dynamic radiance fields from casual videos. Unlike existing approaches, we do not require accurate camera poses as input. Our method jointly optimizes the camera poses and two radiance fields that model the static and dynamic elements of the scene. Our approach includes a coarse-to-fine optimization strategy, epipolar geometry to exclude moving pixels, deformation fields, time-dependent appearance models, and regularization losses for improved consistency.
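An illustrative PyTorch sketch of the core idea: jointly optimize per-frame camera poses and two radiance fields, one static and one time-conditioned. The tiny MLPs, the density-weighted compositing, and all sizes below are assumptions for illustration, not the paper's architecture.

```python
# Two radiance fields (static + dynamic) plus learnable per-frame poses,
# all updated by one optimizer. Simplified stand-in for the real method.
import torch
import torch.nn as nn

class TinyField(nn.Module):
    def __init__(self, in_dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 4))   # RGB + density

    def forward(self, x):
        out = self.mlp(x)
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3:])

num_frames = 60                               # length of the casual video
static_field = TinyField(in_dim=3)            # queried at (x, y, z)
dynamic_field = TinyField(in_dim=4)           # queried at (x, y, z, t)
poses = nn.Parameter(torch.zeros(num_frames, 6))  # free axis-angle + translation
opt = torch.optim.Adam([*static_field.parameters(),
                        *dynamic_field.parameters(), poses], lr=5e-4)

# One step, with placeholder samples (rays would be cast from poses[i]):
pts = torch.rand(1024, 3)                     # sampled 3D points
t = torch.full((1024, 1), 0.5)                # normalized frame time
rgb_s, sigma_s = static_field(pts)
rgb_d, sigma_d = dynamic_field(torch.cat([pts, t], dim=-1))
w = sigma_d / (sigma_s + sigma_d + 1e-8)      # density-weighted compositing
rgb = (1 - w) * rgb_s + w * rgb_d
```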
In this paper, we propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations that are fully compatible with the massively parallel graphics rendering pipeline.
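As a rough sketch of what "baking" a radiance field into a mesh asset involves (the functions `density_fn` and `feature_fn` are hypothetical stand-ins, and the resolution and threshold are assumptions, not the paper's pipeline):

```python
# Bake a radiance field into a mesh: sample density on a grid, extract an
# iso-surface, then store per-vertex appearance features for a tiny
# decoder to shade at render time on the GPU rasterizer.
import numpy as np
from skimage import measure

def bake(density_fn, feature_fn, res=128, threshold=10.0):
    # 1) Sample the field's density on a regular grid.
    xs = np.linspace(-1, 1, res)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
    sigma = density_fn(grid.reshape(-1, 3)).reshape(res, res, res)
    # 2) Extract a triangle mesh at a density iso-surface (marching cubes).
    verts, faces, _, _ = measure.marching_cubes(sigma, level=threshold)
    verts = verts / (res - 1) * 2 - 1          # back to [-1, 1] coordinates
    # 3) Bake appearance features at each vertex; a small MLP decodes them
    #    per pixel at render time, so the parallel pipeline does the rest.
    feats = feature_fn(verts)
    return verts, faces, feats
```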
We present the first neural relighting approach for rendering high fidelity personalized hands that can be animated in real-time under novel illumination.
We propose a method for high-quality facial texture reconstruction from RGB images captured with a single smartphone, which we equip with an inexpensive polarization foil.
In this work, we propose a 3D compositional morphable model of eyeglasses that accurately incorporates high-fidelity geometric and photometric interaction effects.
We propose using a global point cloud that is dynamically updated each frame, along with a learned fusion approach in image space.
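A minimal sketch of that idea: keep a persistent global point cloud, splat it into the current view each frame, and let a small network fuse the splatted buffer with the new frame in image space. The pinhole projection and the fusion net below are illustrative assumptions.

```python
# Persistent point cloud -> per-frame splat -> learned image-space fusion.
import torch
import torch.nn as nn

def project(points, colors, K, H, W):
    """Pinhole projection of an (N, 3) cloud into a 3 x H x W color buffer."""
    uvz = points @ K.T                           # camera space -> pixel space
    z = uvz[:, 2].clamp(min=1e-6)
    u = (uvz[:, 0] / z).long().clamp(0, W - 1)
    v = (uvz[:, 1] / z).long().clamp(0, H - 1)
    buf = torch.zeros(3, H, W)
    buf[:, v, u] = colors.T                      # nearest-point splat (no z-test)
    return buf

fusion = nn.Sequential(                          # learned image-space fusion
    nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1))

# Each frame: splat the (updated) global cloud, then fuse with the new frame.
H, W = 64, 64
K = torch.tensor([[50., 0., W / 2], [0., 50., H / 2], [0., 0., 1.]])
cloud_xyz = torch.rand(500, 3) + torch.tensor([0., 0., 1.])  # points in front
cloud_rgb = torch.rand(500, 3)
frame = torch.rand(3, H, W)
splat = project(cloud_xyz, cloud_rgb, K, H, W)
fused = fusion(torch.cat([splat, frame], dim=0).unsqueeze(0))
```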