A Differentiable Recipe for Learning Visual Non-Prehensile Planar Manipulation

The Conference on Robot Learning (CoRL)

Abstract

Specifying tasks with videos is a powerful technique for acquiring novel and general robot skills. However, the need to reason over mechanics and dexterous interactions makes it challenging to scale learning for contact-rich manipulation. In this work, we focus on the problem of visual non-prehensile planar manipulation: given a video of an object in planar motion, find contact-aware robot actions that reproduce the same object motion. We propose a novel architecture, Differentiable Learning for Manipulation (DLM), that combines video-decoding neural models with priors from contact mechanics by leveraging differentiable optimization and finite-difference-based simulation. Through extensive simulated experiments, we investigate the interplay between traditional model-based techniques and modern deep learning approaches. We find that our modular and fully differentiable architecture performs better than learning-only methods on unseen objects and motions. Code is available at https://github.com/baceituno/dlm.
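To illustrate the general idea of coupling a learned model with a differentiable dynamics rollout so that gradients flow end to end, the sketch below uses a toy damped point-mass "simulator" and a small MLP in JAX. All names, the dynamics, and the architecture here are illustrative assumptions for exposition only, not the authors' DLM implementation or their contact-mechanics priors.

```python
# Hypothetical, minimal sketch of end-to-end training through a differentiable
# simulator: an encoder proposes planar forces from a target object trajectory
# (standing in for decoded video features), a toy differentiable rollout
# predicts the resulting motion, and the loss compares it with the target.
# NOT the authors' implementation; the dynamics and network are placeholders.
import jax
import jax.numpy as jnp

T, HIDDEN = 20, 32              # horizon length and encoder width (arbitrary)
DT, MASS, DAMPING = 0.05, 1.0, 0.5


def init_params(key):
    """Tiny MLP mapping a flattened target trajectory to a force sequence."""
    k1, k2 = jax.random.split(key)
    return {
        "w1": 0.1 * jax.random.normal(k1, (2 * T, HIDDEN)),
        "b1": jnp.zeros(HIDDEN),
        "w2": 0.1 * jax.random.normal(k2, (HIDDEN, 2 * T)),
        "b2": jnp.zeros(2 * T),
    }


def encoder(params, target_traj):
    """Stand-in for the video-decoding network: trajectory -> planar forces."""
    h = jnp.tanh(target_traj.reshape(-1) @ params["w1"] + params["b1"])
    return (h @ params["w2"] + params["b2"]).reshape(T, 2)


def rollout(forces):
    """Toy differentiable planar dynamics: damped point mass, semi-implicit Euler."""
    def step(state, f):
        pos, vel = state
        vel = vel + DT * (f - DAMPING * vel) / MASS
        pos = pos + DT * vel
        return (pos, vel), pos

    _, positions = jax.lax.scan(step, (jnp.zeros(2), jnp.zeros(2)), forces)
    return positions


def loss_fn(params, target_traj):
    """End-to-end loss: gradients flow through the simulator into the encoder."""
    predicted = rollout(encoder(params, target_traj))
    return jnp.mean((predicted - target_traj) ** 2)


# Example usage: imitate a straight-line planar motion with one gradient step.
target = jnp.stack([jnp.linspace(0.0, 0.5, T), jnp.zeros(T)], axis=-1)
params = init_params(jax.random.PRNGKey(0))
grads = jax.grad(loss_fn)(params, target)
params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)
print("loss after one step:", float(loss_fn(params, target)))
```

In the paper's setting, the analytic point-mass rollout above would be replaced by contact mechanics handled via differentiable optimization and finite-difference-based simulation, but the training signal has the same shape: a trajectory-matching loss differentiated through the physics back into the neural model.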
