This paper presents a stylized novel view synthesis method. Applying state-of-the-art stylization methods to novel views frame by frame often causes jittering artifacts due to the ...
In this paper, we propose the first learned passthrough method and assess its performance using a custom VR headset that contains a stereo pair of RGB cameras.
In this paper, we formulate seamless illumination harmonization as an illumination exchange and aggregation problem. Specifically, we first apply a physically-based rendering...
Instead of working with processed depth maps, we model the raw ToF sensor measurements to improve reconstruction quality and avoid issues with low reflectance regions, multi-path interference, and a sensor’s limited unambiguous depth range.
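The limited unambiguous depth range mentioned above stems from phase wrapping in continuous-wave ToF sensors: depth is recovered from a correlation phase that repeats every 2π. A minimal sketch of this relationship, assuming a single-frequency continuous-wave sensor (the modulation frequency of 20 MHz is an illustrative value, not from the paper):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def unambiguous_range(f_mod_hz):
    # Maximum depth before the correlation phase wraps around 2*pi.
    return C / (2.0 * f_mod_hz)

def phase_to_depth(phase_rad, f_mod_hz):
    # Depth recovered from the measured correlation phase (single frequency).
    # Depths beyond unambiguous_range() alias back into [0, unambiguous_range).
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

# At 20 MHz modulation, depths beyond ~7.5 m alias back into range.
print(unambiguous_range(20e6))          # ~7.49 m
print(phase_to_depth(math.pi, 20e6))    # half the unambiguous range
```

Modeling raw measurements rather than unwrapped depth maps lets a method reason about this aliasing directly instead of inheriting errors from the sensor's built-in phase unwrapping.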
In video transmission applications, video signals are transmitted over lossy channels, resulting in low-quality received signals. To restore videos on recipient edge devices in real-time, we introduce an efficient video restoration network, EVRNet.
We develop a new algorithm, Deep 3D Mask Volume, which enables temporally stable view extrapolation from binocular videos of dynamic scenes, captured by static cameras.
We present a method for building high-fidelity animatable 3D face models that can be posed and rendered with novel lighting environments in real-time.
We present a method that learns a spatiotemporal neural irradiance field for dynamic scenes from a single video. Our learned representation enables free-viewpoint rendering of the input video. Our method builds upon recent advances in implicit representations. Learning a spatiotemporal irradiance field from a single video poses significant challenges because the video contains only one observation of the scene at any point in time.
Cerebral blood flow is an important biomarker of brain health and function, as it regulates the delivery of oxygen and substrates to tissue and the removal of metabolic waste products. Moreover, blood flow changes in specific areas of the brain are correlated with neuronal activity in those areas. Diffuse correlation spectroscopy (DCS) is a promising noninvasive optical technique for monitoring cerebral blood flow and for measuring functional activation of the cortex during tasks. However, adoption of current state-of-the-art DCS is hindered by a trade-off between sensitivity to the cortex and signal-to-noise ratio (SNR).
We propose an end-to-end trainable burst denoising pipeline which jointly captures high-resolution and high-frequency deep features derived from wavelet transforms. In our model, fine local details are preserved in high-frequency sub-band features to enhance the final perceptual quality, while the low-frequency sub-band features carry structural information for faithful reconstruction and final objective quality.
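The sub-band split described above rests on a standard wavelet decomposition: a signal is separated into a low-frequency band carrying coarse structure and a high-frequency band carrying local detail, and the two bands together allow exact reconstruction. A minimal sketch using a one-level 1D Haar transform (the simplest wavelet; the paper's actual transform and feature extraction are not specified here):

```python
import math

def haar_decompose(x):
    # One-level 1D Haar transform: split x (even length) into a
    # low-frequency sub-band (pairwise averages, coarse structure)
    # and a high-frequency sub-band (pairwise differences, detail).
    s = math.sqrt(2.0)
    low  = [(x[2 * k] + x[2 * k + 1]) / s for k in range(len(x) // 2)]
    high = [(x[2 * k] - x[2 * k + 1]) / s for k in range(len(x) // 2)]
    return low, high

def haar_reconstruct(low, high):
    # Inverse one-level Haar transform: the two sub-bands
    # recover the original signal exactly (perfect reconstruction).
    s = math.sqrt(2.0)
    x = []
    for a, d in zip(low, high):
        x.append((a + d) / s)
        x.append((a - d) / s)
    return x

signal = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
low, high = haar_decompose(signal)
recon = haar_reconstruct(low, high)
```

Because the transform is invertible, a network can process the two sub-bands with different objectives (perceptual quality on the high band, fidelity on the low band) and still merge them into a consistent output.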