Avatars Grow Legs: Generating Smooth Human Motion from Sparse Tracking Inputs with Diffusion Model
Yuming Du, Robin Kips, Albert Pumarola, Sebastian Starke, Ali Thabet, Artsiom Sanakoyeu
European Signal Processing Conference (EUSIPCO)
The demand for reproducing realistic and immersive auditory experiences has increased since the rise of virtual reality (VR). To achieve this goal, various types of microphone arrays, together with processing and rendering approaches, are used to capture the true sound field. However, the captured auditory information is often limited to the location of the recording device and does not support expanded reproduction regions. One way to address this limitation is to interpolate the sound field using recordings from multiple microphone arrays. In this paper, we propose a sound field interpolation approach based on plane-wave expansion in the spherical harmonic domain. The proposed method employs the ℓ1 norm to optimally map the true sound field onto a set of virtual plane waves. An evaluation of the reproduction error, together with a comparison against the errors obtained when employing the ℓ2 norm and when using a single microphone array with the plane-wave method, suggests that the proposed method provides higher reproduction accuracy over a larger region of interest.
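The abstract describes mapping measured sound-field coefficients onto a set of virtual plane waves via an ℓ1-norm criterion in the spherical harmonic (SH) domain. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's implementation: it assumes an encoding in which a unit plane wave from a given direction has SH coefficients conj(Y_n^m), builds a dictionary over a random direction grid, and solves the ℓ1-regularised fit with a plain ISTA loop. The grid size, regularisation weight `lam`, and helper names (`sh_plane_wave_dictionary`, `l1_plane_wave_fit`) are illustrative choices, not values or code from the paper.

```python
# Minimal sketch (not the authors' code) of l1-based plane-wave decomposition in
# the spherical-harmonic (SH) domain. Assumption: a unit plane wave from direction
# (azimuth, colatitude) is encoded with SH coefficients conj(Y_n^m). Solver, grid
# size, and regularisation weight are illustrative, not taken from the paper.
import numpy as np
from scipy.special import sph_harm


def sh_plane_wave_dictionary(order, azimuths, colatitudes):
    """Columns are SH coefficient vectors of unit plane waves from a grid of directions."""
    cols = []
    for az, col in zip(azimuths, colatitudes):
        coeffs = [np.conj(sph_harm(m, n, az, col))
                  for n in range(order + 1) for m in range(-n, n + 1)]
        cols.append(coeffs)
    return np.array(cols, dtype=complex).T          # shape: ((order+1)^2, Q)


def l1_plane_wave_fit(a, H, lam=1e-2, n_iter=500):
    """ISTA for min_w 0.5*||a - H w||_2^2 + lam*||w||_1 with complex soft-thresholding."""
    L = np.linalg.norm(H, 2) ** 2                   # Lipschitz constant of the data-term gradient
    w = np.zeros(H.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = H.conj().T @ (H @ w - a)                # gradient of 0.5*||a - H w||^2
        z = w - g / L
        mag = np.abs(z)
        phase = np.where(mag > 0, z / np.maximum(mag, 1e-12), 0)
        w = phase * np.maximum(mag - lam / L, 0.0)  # shrink magnitudes, keep phases
    return w                                        # sparse virtual plane-wave amplitudes


# Toy usage: encode one plane wave, then recover a sparse weight vector on a coarse grid.
order = 3
rng = np.random.default_rng(0)
Q = 64
az = rng.uniform(0, 2 * np.pi, Q)
col = np.arccos(rng.uniform(-1, 1, Q))
H = sh_plane_wave_dictionary(order, az, col)
a = H[:, 10] + 0.01 * (rng.standard_normal(H.shape[0]) + 1j * rng.standard_normal(H.shape[0]))
w = l1_plane_wave_fit(a, H, lam=0.05)
print("dominant virtual plane-wave index:", np.argmax(np.abs(w)))
```

The ℓ1 penalty is what concentrates the fitted energy on a few virtual plane-wave directions; under the abstract's framing, this sparsity is what allows the interpolated field to remain accurate over a larger region than a dense ℓ2 fit would.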