Configuring Contact for Robots

Manipulation of Rigid Point Cloud Objects

Our project aims to integrate affordance perception models with sequential planning, with the goal of progressing toward autonomous robotic systems capable of manipulating novel objects in unstructured settings. We plan to extend a recently developed framework for planning sequences of actions that manipulate rigid-body point clouds, while leveraging state-of-the-art point-cloud encoders such as Graph Attention Networks (GATs) or Neural Fields. We plan to apply this framework to: 1) learn affordance models for composite manipulation actions (sequences of simple interaction primitives), such as first toppling an object, then grasping it, and finally placing it at a different location; and 2) perform interactive perception and manipulation of scenes containing several rigid objects represented as point clouds.
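
As a rough illustration of how such a pipeline might fit together, the sketch below chains interaction primitives (topple, grasp, place) by scoring them with an affordance model over an encoded point cloud and re-encoding the scene after each simulated interaction. This is a minimal sketch under assumed interfaces, not the project's actual framework: the encoder, affordance heads, and transition model are hypothetical stand-ins, and all function names are placeholders.

```python
# Minimal sketch (hypothetical, not the authors' implementation): a point-cloud
# encoder produces a latent state, per-primitive affordance heads score how
# feasible each interaction primitive is, and a greedy search composes
# primitives (e.g., topple -> grasp -> place) into a plan.
import numpy as np

PRIMITIVES = ["topple", "grasp", "place"]

def encode_point_cloud(points: np.ndarray) -> np.ndarray:
    """Stand-in for a learned encoder (e.g., a GAT or Neural Field):
    here we just use crude geometric statistics as the latent state."""
    centroid = points.mean(axis=0)
    extent = points.max(axis=0) - points.min(axis=0)
    return np.concatenate([centroid, extent])

def affordance_scores(latent: np.ndarray) -> dict:
    """Stand-in for learned affordance heads: map the latent state to a
    feasibility score in [0, 1] for each primitive."""
    rng = np.random.default_rng(abs(hash(latent.tobytes())) % (2**32))
    return {p: float(rng.uniform(0.2, 1.0)) for p in PRIMITIVES}

def apply_primitive(points: np.ndarray, primitive: str) -> np.ndarray:
    """Stand-in for a simulated/learned transition model: each primitive
    yields a new, rigidly transformed point cloud."""
    if primitive == "topple":   # rotate 90 degrees about the x-axis
        R = np.array([[1, 0, 0], [0, 0, -1], [0, 1, 0]], dtype=float)
        return points @ R.T
    if primitive == "grasp":    # lift the object
        return points + np.array([0.0, 0.0, 0.1])
    if primitive == "place":    # translate to a new location
        return points + np.array([0.3, 0.0, -0.1])
    return points

def plan_sequence(points: np.ndarray, horizon: int = 3) -> list:
    """Greedily chain the highest-affordance primitive at each step,
    re-encoding the point cloud after every (simulated) interaction."""
    plan = []
    for _ in range(horizon):
        scores = affordance_scores(encode_point_cloud(points))
        best = max(scores, key=scores.get)
        plan.append((best, scores[best]))
        points = apply_primitive(points, best)
    return plan

if __name__ == "__main__":
    cloud = np.random.default_rng(0).uniform(-0.05, 0.05, size=(512, 3))
    for step, (primitive, score) in enumerate(plan_sequence(cloud)):
        print(f"step {step}: {primitive} (affordance {score:.2f})")
```

In the envisioned framework, the toy scoring and transition functions would be replaced by learned models, and the greedy loop by a proper sequential planner searching over composite actions.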

Alberto Rodriguez, MIT

Read more on this project from the MCube Lab.