August 8, 2022

Meta graphics research at SIGGRAPH 2022

By: Meta Research

The SIGGRAPH conference is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2022, the 49th annual conference hosted by ACM SIGGRAPH, will take place as a hybrid event, with live events August 8–11 at the Vancouver Convention Center and virtual content available starting July 25 through October 31.

Meta researchers in AI and AR/VR are presenting their work in oral spotlights and poster sessions. In this blog post, we highlight two presentations from the graphics team at Meta:

  • Neural Shadow Mapping on Monday, August 8, from 11:14 to 11:19 AM
  • Efficient Estimation of Boundary Integrals for Path-Space Differentiable Rendering on Wednesday, August 10, from 10:45 to 10:53 AM

Learn more about each of these papers below.

Neural Shadow Mapping

Sayantan Datta, Derek Nowrouzezahrai, Christoph Schied, Zhao Dong

Shadows provide important geometric, depth, and shading cues, and they play a crucial role in defining spatial relationships for 3D graphics rendering in AR/VR. The human eye takes cues from shadows to judge where a light source originates and how an object relates spatially to its surroundings (e.g., a virtual object sitting on a real table). High-quality shadow rendering has been investigated comprehensively in graphics, but existing approaches suffer from quality degradation due to approximation (e.g., pre-filtering-based shadow mapping) and/or require dedicated hardware support (e.g., GPU-based ray-traced shadows), which makes them difficult to deploy on resource-limited AR/VR systems. Recent progress in applying neural networks to enhance existing real-time rendering pipelines has achieved high-quality, low-cost rendering of complex appearances such as hair, fur, and luminaires.

Motivated by this recent progress, we proposed a machine learning-based method that generates high-quality hard and soft shadows for dynamic objects in real time. Our approach requires no ray-tracing hardware, runs fast (< 6 ms per frame), has a small memory footprint (< 1.5 MB), and is easy to deploy on commodity low-end GPU hardware. We used the output of "vanilla" rasterization-based shadow mapping (i.e., no cascades) to hallucinate temporally stable hard and soft shadows, and we designed a compact neural architecture, informed by the statistics of penumbra sizes across a diversity of scenes, that supports fast training and generalizes to unseen dynamic objects. As shown in the figure below, we demonstrated improved quality over state-of-the-art high-performance pre-filtering-based methods while retaining support for dynamic scenes and approaching ray-traced references.

Our hard and soft shadowing method approaches the quality of offline ray tracing while striking a favorable position on the performance-accuracy spectrum. On the high-performance end, we produce higher-quality results than 𝑛 × 𝑛 Moment Shadow Maps (MSM-𝑛). Requiring only vanilla shadow-mapping inputs, we generate visual (and temporal) results that approach the ray-traced reference, surpassing more costly denoised interactive ray-tracing methods.
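To make the recipe above concrete, here is a minimal sketch, under stated assumptions, of how a compact convolutional network could map screen-space buffers derived from a vanilla shadow map to a per-pixel shadow estimate. The input channels, layer widths, and overall layout below are illustrative placeholders, not the paper's actual architecture.

```python
# Minimal sketch (PyTorch), NOT the paper's architecture: a compact
# encoder-decoder that maps rasterized shadow-map features to a
# per-pixel soft-shadow estimate. Inputs/channel counts are assumptions.
import torch
import torch.nn as nn

class CompactShadowNet(nn.Module):
    def __init__(self, in_channels=4, base=16):
        super().__init__()
        # Encoder: extract features, then downsample once.
        self.enc1 = nn.Sequential(
            nn.Conv2d(in_channels, base, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU())
        # Decoder: upsample back to full resolution.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(base, 1, 3, padding=1),
            nn.Sigmoid())  # shadow value in [0, 1]

    def forward(self, x):
        # x: (N, C, H, W) screen-space buffers, e.g., shadow-map
        # visibility, projected shadow-map depth, camera depth, N·L.
        h = self.enc1(x)
        h = self.enc2(h)
        return self.dec(h)

# Example: predict shadows for a 256x256 frame from 4 buffer channels.
net = CompactShadowNet()
buffers = torch.rand(1, 4, 256, 256)
shadow = net(buffers)  # (1, 1, 256, 256) per-pixel shadow intensity
```

Keeping the network shallow and narrow is the kind of design choice that makes a sub-6 ms, sub-1.5 MB budget plausible on low-end GPUs; as noted above, the actual architecture is sized using penumbra-size statistics gathered across a diversity of scenes.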

Efficient Estimation of Boundary Integrals for Path-Space Differentiable Rendering

Kai Yan, Christoph Lassner, Brian Budge, Zhao Dong, Shuang Zhao

In computer graphics, physics-based rendering techniques generate photorealistic images by simulating light transport, that is, by solving a dedicated mathematical equation involving various scene properties such as camera pose, scene geometry (e.g., vertex positions), and spatially varying material properties. By contrast, physics-based differentiable rendering (PBDR) methods aim to compute derivatives of photorealistic images exhibiting complex light-transport effects (e.g., soft shadows, interreflection, and caustics) with respect to arbitrary scene properties. This level of generality makes PBDR a key component for solving challenging inverse rendering problems: recovering scene geometry and material properties from physical measurements (e.g., photographs) using gradient-based optimization. In many inverse rendering scenarios, reconstructing high-quality scene properties under natural environment light is difficult because the 360-degree lighting condition significantly increases optimization complexity.
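For background on where the boundary integrals in the paper's title come from: differentiating a rendering integral with respect to a scene parameter generically splits into an interior term and a boundary term over the parameter-dependent discontinuities of the integrand (e.g., visibility silhouettes). The schematic identity below is a generic sketch of this standard decomposition, not the paper's exact formulation.

```latex
% Schematic interior/boundary split when differentiating a rendering
% integral with respect to a scene parameter \pi; \Omega is the domain
% (e.g., path space) and \partial\Omega(\pi) its evolving discontinuity
% boundary. Notation is illustrative.
\frac{\mathrm{d}}{\mathrm{d}\pi}\int_{\Omega} f(x;\pi)\,\mathrm{d}x
  \;=\;
  \underbrace{\int_{\Omega}\frac{\partial f}{\partial\pi}(x;\pi)\,\mathrm{d}x}_{\text{interior term}}
  \;+\;
  \underbrace{\int_{\partial\Omega(\pi)} f(x;\pi)\,
    \big\langle n(x),\, v(x)\big\rangle\,\mathrm{d}\ell(x)}_{\text{boundary term}}
```

The boundary term lives on a lower-dimensional set that moves with the scene parameters, which is what makes sampling it efficiently difficult and what the paper's estimation method targets.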

In this paper, motivated by the path guiding technique in physics-based rendering, we developed a new guiding-based importance sampling method to handle complex lighting effectively and efficiently in a PBDR-based inverse rendering pipeline. Our approach also naturally combines multiple guided samplers using the classic multiple importance sampling strategy, further improving inverse rendering quality and performance. Our experiments demonstrate that our method achieves significantly better scene reconstruction quality with performance improvements of at least one order of magnitude. In the following figure, we show the effectiveness of our method by applying it to reconstruct high-quality geometry and surface materials of real-world objects under natural environment light, starting from a simple sphere.

Shape geometry and material reconstruction of real-world objects under natural environment light using our new PBDR-based inverse rendering pipeline. Row 1 contains reference photos, and row 2 contains an animation showing the optimization process starting from a sphere.
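As a reference point for the multiple importance sampling combination mentioned above, here is a minimal, generic sketch of the classic balance heuristic combining two sampling strategies for a 1D integral. The integrand and densities are toy placeholders, and this is textbook MIS rather than the paper's guided boundary sampler.

```python
# Minimal sketch of multiple importance sampling (MIS) with the
# balance heuristic, combining two sampling strategies on [0, 1].
# The integrand f and densities p1/p2 are toy placeholders.
import math
import random

def f(x):
    # Toy integrand: a sharp peak at 0.5 (true integral ~ sqrt(pi/200)).
    return math.exp(-200.0 * (x - 0.5) ** 2)

def p1(x):
    return 1.0  # uniform density on [0, 1]

def sample_p1():
    return random.random()

def p2(x):
    # "Guided" density: triangle peaked at 0.5, normalized on [0, 1].
    return 4.0 * min(x, 1.0 - x)

def sample_p2():
    # Inverse-CDF sampling of the triangle density.
    u = random.random()
    if u < 0.5:
        return math.sqrt(u / 2.0)
    return 1.0 - math.sqrt((1.0 - u) / 2.0)

def mis_estimate(n):
    # One sample per technique per iteration. The balance-heuristic
    # weight w_t = p_t / (p1 + p2) simplifies the weighted contribution
    # w_t * f / p_t to f / (p1 + p2) for both techniques.
    total = 0.0
    for _ in range(n):
        x = sample_p1()
        total += f(x) / (p1(x) + p2(x))
        y = sample_p2()
        total += f(y) / (p1(y) + p2(y))
    return total / n

print(mis_estimate(100_000))  # ~0.1253, the true integral of f
```

In the paper's setting the combined strategies are guided samplers for boundary integrals rather than toy 1D densities, but the weighting logic that blends them is the same.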

To learn more about AR/VR research at Meta, visit the AR/VR research area page.