We present MultiAct, the first framework to generate long-term 3D human motion from multiple action labels. MultiAct takes into account both action and motion conditions with a...
We verify through experiments on widely available image sets that the resulting SVMs provide superior accuracy compared to well-established deep neural network benchmarks...
To address this, we propose a self-supervised approach for correspondence estimation that learns from multiview consistency in short RGB-D video sequences. Our approach combines...
We show that our general performance model not only achieves low prediction error on DLRM, which has highly customized configurations and is dominated by multiple factors, but also...
We conduct real-world experiments in which the robot is tasked with achieving a relative target angle. We show that our approach outperforms a sliding-window-based MLP in a zero-shot...
We present MidasTouch, a tactile perception system for online global localization of a vision-based touch sensor sliding on an object surface.
Generalized in-hand manipulation has long been an unsolved challenge of robotics. As a small step towards this grand goal, we demonstrate how to design and learn a simple adaptive...
We explore this alternate setting with access to the underlying world state only during training and investigate ways of “baking in” the state knowledge along with the primary...
In this work, we present a new, more inclusive bias measurement dataset, HOLISTICBIAS, which includes nearly 600 descriptor terms across 13 different demographic axes.
In this work, we frame this problem as a few-shot learning task, and show significant gains with decomposing the task into its "constituent" parts.