Avatars Grow Legs: Generating Smooth Human Motion from Sparse Tracking Inputs with Diffusion Model
Yuming Du, Robin Kips, Albert Pumarola, Sebastian Starke, Ali Thabet, Artsiom Sanakoyeu
DAIR: Data Augmented Invariant Regularization
Uncertainty and Robustness in Deep Learning Workshop at ICML
A fundamental problem in machine learning is to learn representations that are invariant to certain transformations. For example, image representations should be invariant to translation, rotation, and changes in color or background; natural language representations ought to be invariant to named entities. Data augmentation is a simple yet powerful way to instill such invariances. However, data augmentation requires either additional data collection or careful engineering to capture all invariances. In this paper, we argue that a simple yet effective additional loss, called Data Augmented Invariant Regularization (DAIR), can improve performance even further. DAIR promotes additional invariance on top of data augmentation at little marginal cost and is compatible with any learning model. We empirically evaluate DAIR on two vision tasks, Colored MNIST and Rotated MNIST, and demonstrate that it provides non-trivial gains beyond data augmentation, outperforming invariant risk minimization.
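The abstract leaves the exact form of the invariance loss unspecified; the sketch below illustrates the idea under one plausible choice, a squared-difference penalty between the per-sample task losses on an example and its augmented counterpart. The function name dair_loss and the weight lam are illustrative assumptions, not taken from the paper.

import torch.nn.functional as F

def dair_loss(model, x, x_aug, y, lam=1.0):
    """Task loss on an example and its augmentation, plus a penalty
    that pulls the two per-sample losses together.

    The squared-root form of the penalty and the weight `lam` are
    illustrative assumptions; the paper may use a different form."""
    # Per-sample cross-entropy on the clean and augmented inputs.
    loss_clean = F.cross_entropy(model(x), y, reduction="none")
    loss_aug = F.cross_entropy(model(x_aug), y, reduction="none")
    # Invariance regularizer: penalize disagreement between the losses
    # the model incurs on the two views of the same example.
    reg = (loss_clean.sqrt() - loss_aug.sqrt()).pow(2)
    return (loss_clean + loss_aug + lam * reg).mean()

Regularizing the per-sample losses rather than intermediate features adds only one extra forward pass per batch, which fits the abstract's claims of little marginal cost and compatibility with any learning model.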