AI & Robotics

AI & Robotics at the Science Hub is a team of MIT and Amazon researchers working together at the intersection of autonomous mobile robots, optimization, control, and perception. Key challenges in AI & robotics include simultaneous localization and mapping (SLAM) and applying recent deep-learning advances to robotics applications. This interdisciplinary approach results in technology that brings state-of-the-art robotics to homes around the world.

Simulating how robots can carefully handle objects at high speed in cluttered spaces

Autonomous systems that can manipulate objects at high speed remain scarce and limited. This project will investigate autonomous systems that can move highly diverse objects through densely occupied spaces at high speed. A main challenge is enabling robots to work and interact with objects that may be rigid, articulated (e.g., books), separable (e.g., boxes), or deformable (e.g., bags) without damaging them. For scalability, the robotic system should be flexible: able to improve continuously with experience and to generalize to previously unseen objects, materials, shapes, and motions.

This work is a collaboration between MIT faculty Phillip Isola, Alberto Rodriguez, and Russ Tedrake, and Amazon researchers Andrew Marchese and Lesley Yu.


Building the brains behind the robots that fill your Amazon order

Every time you place an Amazon order, a fleet of robots is hard at work packaging your items. With up to a thousand robots moving through a given warehouse, and dozens of robots working together to fill just your order, efficient coordination is a must. This project aims to optimize the algorithms behind the robots' behaviors and movements so that they can seamlessly sort inventory and avoid collisions while ensuring on-time fulfillment of customer orders.
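One classic way to coordinate many robots on a shared floor is prioritized planning in a time-expanded grid: each robot plans in turn and reserves the cells it will occupy at each time step, so later robots route around both stationary and moving traffic. The sketch below is a minimal illustration of that idea, not the team's actual algorithm; the grid model, priority ordering, and the simplification that goal cells are not blocked after arrival are all assumptions.

```python
from collections import deque

def plan_agent(width, height, start, goal, reserved, max_t=50):
    """Breadth-first search in the time-expanded grid.

    `reserved` maps ((x, y), t) -> agent id for space-time cells
    already claimed by higher-priority agents."""
    moves = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # wait or step
    frontier = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while frontier:
        (x, y), t, path = frontier.popleft()
        if (x, y) == goal:
            return path
        if t >= max_t:
            continue
        for dx, dy in moves:
            nxt = (x + dx, y + dy)
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                continue
            if (nxt, t + 1) in seen or (nxt, t + 1) in reserved:
                continue  # already visited, or vertex conflict at t+1
            other = reserved.get((nxt, t))
            if other is not None and reserved.get(((x, y), t + 1)) == other:
                continue  # edge (swap) conflict: head-on exchange
            seen.add((nxt, t + 1))
            frontier.append((nxt, t + 1, path + [nxt]))
    return None  # no conflict-free path within max_t steps

def prioritized_plan(width, height, agents):
    """Plan agents one at a time in priority order, reserving each
    path in space-time. Simplification: an agent's goal cell is not
    blocked after it arrives."""
    reserved, paths = {}, []
    for i, (start, goal) in enumerate(agents):
        path = plan_agent(width, height, start, goal, reserved)
        for t, cell in enumerate(path):
            reserved[(cell, t)] = i
        paths.append(path)
    return paths
```

For example, two robots swapping ends of a 3x2 grid cannot pass through each other in the corridor; the second robot detours through the free row and both arrive without ever sharing a cell at the same time step. Real warehouse systems use far more sophisticated optimization, but the space-time reservation idea is the common core.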

This work is a collaboration between MIT faculty Cynthia Barnhart, Alex Jacquillat, and Cathy Wu, and Amazon researchers Michael Wolf and Lesley Yu.

Safety and Predictability in Robot Navigation for Last-mile Delivery

Economically viable last-mile autonomous delivery requires robots that are both safe and resilient in the face of previously unseen environments. With these goals in mind, this project aims to develop algorithms that enhance the safety, availability, and predictability of mobile robots used for last-mile delivery. One goal of the work is to extend our state-of-the-art navigation algorithms, based on reinforcement learning (RL) and model-predictive control (MPC), to incorporate uncertainties induced by the real world (e.g., imperfect perception, heterogeneous pedestrians). This will enable the learning of cost-to-go models that more accurately predict how long the robot might take to reach its goal in a real-world scenario. A second goal is to provide formal safety guarantees for AI-based navigation algorithms in dynamic environments, extending our recent neural-network verification algorithms that compute reachable sets for static obstacle avoidance.
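A standard building block for the kind of neural-network verification mentioned above is interval bound propagation: pushing a box of possible inputs through each layer to obtain sound element-wise bounds on every reachable output. The sketch below is a generic illustration of that technique, not the project's verification algorithm; the tiny two-layer ReLU network is an assumed example.

```python
import numpy as np

def interval_bound_propagation(layers, lb, ub):
    """Propagate an input box [lb, ub] through affine + ReLU layers.

    Each layer is a pair (W, b). Returns element-wise bounds (lo, hi)
    guaranteed to contain every output reachable from the input box."""
    lo, hi = np.asarray(lb, float), np.asarray(ub, float)
    for i, (W, b) in enumerate(layers):
        center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
        new_center = W @ center + b
        new_radius = np.abs(W) @ radius   # |W| maps the box radius soundly
        lo, hi = new_center - new_radius, new_center + new_radius
        if i < len(layers) - 1:           # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi
```

The bounds are conservative (often loose), which is exactly the trade-off: a reachable set that over-approximates the network's outputs can still certify that a collision region is never entered.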

Jonathan How, MIT


Multimodal tactile sensing

We propose to build the best tactile sensor in existence. We can already build robot fingers with extremely high spatial acuity (GelSight), but these fingers lack two important aspects of touch: temperature and vibration. We will incorporate thermal sensing with an array of liquid-crystal dots covering the tactile display; the camera will infer the temperature distribution from the dots' colors. For vibration, we will evaluate several sensing methods, including IMUs, microphones, and mouse cameras.
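Thermochromic liquid crystals shift hue with temperature, so "infer temperature from color" can be reduced to reading the hue of each dot and mapping it through a calibration curve. The sketch below illustrates that pipeline under assumed, hypothetical calibration points; the real sensor's optics and calibration would differ.

```python
import colorsys
import numpy as np

def hue_of_patch(rgb_patch):
    """Mean hue (0..1) of an RGB image patch with values in 0..1,
    e.g. the pixels covering one liquid-crystal dot."""
    r, g, b = rgb_patch.reshape(-1, 3).mean(axis=0)
    return colorsys.rgb_to_hsv(float(r), float(g), float(b))[0]

def calibrate(hues, temps_c):
    """Fit a hue -> temperature map by piecewise-linear interpolation
    over calibration measurements (hue assumed monotone in temperature)."""
    order = np.argsort(hues)
    h, t = np.asarray(hues, float)[order], np.asarray(temps_c, float)[order]
    return lambda hue: float(np.interp(hue, h, t))
```

In use, each dot in the camera image yields a hue, and the calibrated map turns the dot array into a dense temperature image alongside the existing geometry signal.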

Edward Adelson, MIT

Soft artificial intelligent skin for tactile-environmental sensing and human-robot interaction

Multimodal sensory information can help robots interact more safely, effectively, and collaboratively with humans. This requires realizing an artificial epidermis with compact integrated electronics that can be wrapped onto arbitrary surfaces. We therefore propose a bio-inspired, high-density sensor array fabricated on a soft, deformable substrate as a large-scale intelligent artificial skin. This effort will scale up the fabrication of a dense artificial sensor and apply self-organizing networking and novel sensing techniques for large-scale interaction, localization, and visualization, with applications in robotics, human-machine interfaces, indoor/environmental sensing, and cross-reality.

Joseph Paradiso, MIT


Manipulation of Rigid Point Cloud Objects

Our project aims to integrate affordance-perception models with sequential planning, with the goal of progressing toward autonomous robotic systems capable of manipulating novel objects in unstructured settings. We plan to extend a recently developed framework that plans sequences of actions to manipulate rigid-body point clouds, leveraging state-of-the-art point-cloud encoders such as Graph Attention Networks (GATs) or Neural Fields. We plan to apply this framework to: 1) learn affordance models for composite manipulation actions (sequences of simple interaction primitives), such as first toppling an object in order to grasp it and finally place it at a different location; and 2) perform interactive perception and manipulation of rigid-body point clouds containing several objects.
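To give a flavor of the graph-attention encoders mentioned above: a GAT layer projects each node's (e.g., point's) features and aggregates over its neighbors with learned, softmax-normalized attention weights. The NumPy sketch below shows one such layer in its standard single-head form; it is a generic illustration, not the project's encoder, and the random weights are placeholders.

```python
import numpy as np

def gat_layer(X, adj, W, a, slope=0.2):
    """One single-head graph-attention layer.

    X: (N, F) node features; adj: (N, N) 0/1 adjacency with self-loops;
    W: (F, Fp) shared projection; a: (2*Fp,) attention vector."""
    H = X @ W                                     # project node features
    Fp = H.shape[1]
    # e[i, j] = LeakyReLU(a[:Fp] . H_i + a[Fp:] . H_j)
    e = (H @ a[:Fp])[:, None] + (H @ a[Fp:])[None, :]
    e = np.where(e > 0, e, slope * e)             # LeakyReLU
    e = np.where(adj > 0, e, -1e9)                # mask non-neighbors
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)    # softmax over neighbors
    return att @ H                                # weighted aggregation
```

Because attention weights sum to one over each node's neighborhood, every output feature is a convex combination of projected neighbor features, which makes the layer robust to varying neighborhood sizes in irregular point clouds.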

Alberto Rodriguez, MIT
