Avatars Grow Legs: Generating Smooth Human Motion from Sparse Tracking Inputs with Diffusion Model
Yuming Du, Robin Kips, Albert Pumarola, Sebastian Starke, Ali Thabet, Artsiom Sanakoyeu

International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
In augmented reality applications, where room geometries and material properties are not readily available, it is desirable to obtain a representation of the sound field in a room from a limited set of room impulse response (RIR) measurements. In this paper, we propose a novel method for 2D interpolation of room modes from a sparse set of RIR measurements that are non-uniformly sampled within a space. We first obtain the mode parameters of a measured room. Following common-acoustical-pole theory, the mode frequencies and decay rates are kept constant over space, and a unique set of mode amplitudes is obtained for each measurement location. Based on the general solution to the Helmholtz equation, these mode amplitudes are modeled as periodic functions of 2D spatial location. For low-frequency room modes, the model parameters are found with sequential non-linear least squares. Results show accurate spatial interpolation of perceptually relevant low-frequency modes in rooms with simple geometries and non-rigid walls.
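To make the spatial amplitude model concrete, below is a minimal Python sketch, not the authors' implementation: it assumes a single mode, synthetic amplitude data, an illustrative 4 m by 3 m room, and a simple plane-wave-pair form for the periodic 2D amplitude function, then recovers the model parameters with non-linear least squares (scipy.optimize.least_squares) and interpolates the amplitude at an unmeasured position:

import numpy as np
from scipy.optimize import least_squares

# Spatial model for one room mode: per the general Helmholtz solution, the
# mode amplitude is a periodic function of the 2D position (x, y).
# A single plane-wave pair per axis is assumed here, purely for illustration.
def amp_field(params, x, y):
    a, kx, ky, px, py = params
    return a * np.cos(kx * x + px) * np.cos(ky * y + py)

# Synthetic stand-in for measured data: sparse, non-uniformly sampled mic
# positions, with per-position mode amplitudes such as a common-pole fit of
# the measured RIRs would produce.
rng = np.random.default_rng(0)
xy = rng.uniform(low=[0.0, 0.0], high=[4.0, 3.0], size=(12, 2))
true_params = np.array([1.0, 0.8, 1.0, 0.2, -0.1])
measured = amp_field(true_params, xy[:, 0], xy[:, 1])
measured += 0.02 * rng.standard_normal(measured.shape)

# Non-linear least squares on the spatial amplitude model. The paper fits
# modes sequentially, one at a time; only a single mode is shown here.
def residual(params):
    return amp_field(params, xy[:, 0], xy[:, 1]) - measured

fit = least_squares(residual, x0=[0.5, 1.0, 1.0, 0.0, 0.0])

# Interpolate the mode amplitude at an unmeasured position.
print("fitted params:", np.round(fit.x, 3))
print("A(2.0, 1.5) =", float(amp_field(fit.x, 2.0, 1.5)))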