Giving robots a better feel for object manipulation




The model enhances a robot's ability to mold materials into target shapes and interact with liquids and solid objects.



A new "particle simulator" developed by MIT researchers improves the robots' ability to shape materials into simulated target shapes and interact with solid objects and liquids. This could give the robots a refined touch for industrial applications or for personal robotics – such as modeling clay or rolling sticky sushi rice.

Courtesy of the researchers

A new learning system developed by MIT researchers improves robots' ability to mold materials into target shapes and to make predictions about interacting with solid objects and liquids. The system, known as a learning-based particle simulator, could give industrial robots a more refined touch – and it may have fun applications in personal robotics, such as modeling clay shapes or rolling sticky rice into sushi.

In robotic planning, physical simulators are models that capture how different materials respond to force. Robots are "trained" using the models to predict the outcomes of their interactions with objects, such as pushing a solid box or poking deformable clay. But traditional learning-based simulators focus mainly on rigid objects and are unable to handle fluids or softer objects. Some more accurate physics-based simulators can handle diverse materials, but they rely heavily on approximation techniques that introduce errors when robots interact with objects in the real world.

In a paper presented at the International Conference on Learning Representations in May, the researchers describe a new model that learns to capture how small pieces of different materials – "particles" – interact when they are poked and prodded. The model learns directly from data in cases where the underlying physics of the movements is uncertain or unknown. Robots can then use the model as a guide to predict how liquids, as well as rigid and deformable materials, will react to the force of their touch. As the robot handles objects, the model also helps to further refine the robot's control.

In experiments, a two-fingered robotic hand called "RiceGrip" accurately shaped a deformable foam into a desired configuration – such as a "T" shape – that serves as a proxy for sushi rice. In short, the researchers' model serves as a kind of "intuitive physics" brain that robots can use to reconstruct three-dimensional objects somewhat similarly to how humans do.

"Humans have an intuitive physics model in our heads, where we can imagine how an object will behave if we push or squeeze it. Based on this intuitive model, humans can perform amazing manipulation tasks that are far beyond the reach of today's robots, "says first author Yunzhu Li, a graduate student at the Computer Science and Artificial Intelligence Laboratory (CSAIL) . "We want to build this kind of intuitive model for robots, to let them do what humans can do."

"When children are 5 months old, they already have different expectations for solids and liquids," adds co-author Jiajun Wu, a graduate student at CSAIL. "This is something we've known from a very early age so maybe it's something we should try to model for robots."

Joining Li and Wu on the paper are Russ Tedrake, a CSAIL researcher and a professor in the Department of Electrical Engineering and Computer Science (EECS); Joshua Tenenbaum, a professor in the Department of Brain and Cognitive Sciences; and Antonio Torralba, a professor in EECS and director of the MIT-IBM Watson AI Lab.

Dynamic graphs

A key innovation behind the model, called the "dynamic particle interaction network" (DPI-Nets), was to create dynamic interaction graphs, consisting of thousands of nodes and edges, that can capture the complex behaviors of the so-called particles. In the graphs, each node represents a particle. Neighboring nodes are connected to one another with directed edges, which represent the interaction passed from one particle to the other. In the simulator, particles are hundreds of small spheres combined to make up some liquid or a deformable object.
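To make the idea concrete, here is a minimal sketch of how such a dynamic interaction graph might be built, assuming particles are simply connected to all neighbors within a fixed radius. The radius, data structures, and function names are illustrative, not the authors' implementation:

```python
import numpy as np
from scipy.spatial import cKDTree

def build_interaction_graph(positions, radius=0.05):
    """Connect each particle to all neighbors within `radius`.

    positions : (N, 3) array of particle coordinates.
    Returns a list of directed (sender, receiver) edges; an interaction
    is modeled as a message passed along each such edge.
    """
    tree = cKDTree(positions)
    pairs = tree.query_pairs(r=radius)       # unordered neighbor pairs (i < j)
    edges = []
    for i, j in pairs:
        edges.append((i, j))                 # particle i influences particle j
        edges.append((j, i))                 # and vice versa
    return edges

# Example: 500 particles sampled inside a small cube standing in for a piece of foam
particles = np.random.uniform(0.0, 0.1, size=(500, 3))
edges = build_interaction_graph(particles)
print(f"{len(particles)} nodes, {len(edges)} directed edges")
```

Because particles move at every time step, the neighbor search would be rerun as the simulation evolves, which is what makes the graph "dynamic."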

The graphs serve as the basis for a machine-learning system called a graph neural network. In training, the model learns over time how particles in different materials react and reshape. It does so by implicitly computing various properties for each particle – such as its mass and elasticity – to predict if and where the particle will move in the graph when perturbed.

The model then uses a "propagation" technique, which instantaneously spreads a signal throughout the graph. The researchers customized the technique for each type of material – rigid, deformable, and liquid – so the signal predicts the positions of the particles at certain incremental time steps. At each step, it moves and reconnects particles, if needed.
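A minimal sketch of one such propagation round, written as generic message passing over the particle graph, is shown below. The feature sizes, network layers, and class names are assumptions for illustration; the actual DPI-Nets architecture differs:

```python
import torch
import torch.nn as nn

class PropagationStep(nn.Module):
    """One round of message passing over the particle graph (illustrative only).

    Node features might encode a particle's position, velocity, and material
    type; the real DPI-Nets feature set and network sizes differ.
    """
    def __init__(self, node_dim=16, msg_dim=16):
        super().__init__()
        self.edge_fn = nn.Sequential(nn.Linear(2 * node_dim, msg_dim), nn.ReLU())
        self.node_fn = nn.Sequential(nn.Linear(node_dim + msg_dim, node_dim), nn.ReLU())
        self.motion_head = nn.Linear(node_dim, 3)   # predicted displacement per particle

    def forward(self, node_feats, senders, receivers):
        # A message flows along each directed edge from sender to receiver.
        msgs = self.edge_fn(torch.cat([node_feats[senders], node_feats[receivers]], dim=-1))
        # Sum the incoming messages at each receiving particle.
        agg = torch.zeros(node_feats.shape[0], msgs.shape[-1])
        agg = agg.index_add(0, receivers, msgs)
        updated = self.node_fn(torch.cat([node_feats, agg], dim=-1))
        return updated, self.motion_head(updated)

# Toy usage: 4 particles, 2 directed edges (0 -> 1 and 1 -> 0)
feats = torch.randn(4, 16)
senders, receivers = torch.tensor([0, 1]), torch.tensor([1, 0])
new_feats, displacements = PropagationStep()(feats, senders, receivers)
```

Running several such rounds lets the influence of a push spread from the contact point outward through the graph before the particle positions are updated for the next time step.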

For example, if a solid box is pushed, the perturbed particles will be moved forward. Because all particles inside the box are rigidly connected to each other, every other particle in the object moves the same calculated distance, rotation, and any other dimension. The particle connections remain intact, and the box moves as a single unit. But if an area of deformable foam is indented, the effect is different: the perturbed particles move forward a lot, the surrounding particles move forward only slightly, and particles farther away don't move at all. With liquids being sloshed around in a glass, particles may jump completely from one end of the graph to the other. The graph must learn to predict where and how much all affected particles move, which is computationally complex.

Shaping and adapting

In their paper, the researchers demonstrate the model by tasking the two-fingered RiceGrip robot with clamping deformable foam into target shapes. The robot first uses a depth-sensing camera and object-recognition techniques to identify the foam. The researchers randomly select particles inside the perceived shape to initialize the particle positions. Then the model adds edges between the particles and reconstructs the foam into a dynamic graph customized for deformable materials.
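As a rough illustration of that initialization step, the sketch below samples particle positions inside an occupancy grid that a depth-sensing front end might produce; the grid representation, arguments, and function name are assumptions, not the paper's pipeline. The resulting positions could then be connected with a neighbor search like the one sketched earlier:

```python
import numpy as np

def initialize_particles(occupancy, origin, voxel_size, n_particles=300, rng=None):
    """Randomly sample particle positions inside an occupancy grid produced
    by a depth-sensing / object-recognition front end (names are illustrative).

    occupancy : 3-D boolean array marking voxels inside the perceived foam.
    origin    : world-frame coordinates of the grid's corner voxel.
    Returns an (n_particles, 3) array of world-frame positions.
    """
    rng = rng or np.random.default_rng()
    filled = np.argwhere(occupancy)                           # indices of occupied voxels
    picks = filled[rng.integers(0, len(filled), n_particles)]
    jitter = rng.uniform(0.0, voxel_size, size=(n_particles, 3))
    return origin + picks * voxel_size + jitter               # a random point in each chosen voxel
```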

Because of the learned simulations, the robot already has a good idea of how each touch, given a certain amount of force, will affect each of the particles in the graph. As the robot starts indenting the foam, it iteratively matches the real-world positions of the particles to the targeted positions of the particles. Whenever the particles don't align, it sends an error signal to the model. That signal tweaks the model to better match the real-world physics of the material.
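The sketch below shows one way such a predict-act-correct loop could look in code: the learned simulator scores candidate gripper actions against the target particle positions, and the prediction error after execution is fed back to refine the model. The `model` interface and `adapt` hook are hypothetical stand-ins, not the authors' API:

```python
import numpy as np

def choose_action(model, particles, target, candidate_actions):
    """Pick the gripper action whose predicted outcome best matches the target shape.

    `model(particles, action)` stands in for the learned simulator: it returns the
    predicted particle positions after the action is applied. All names are illustrative.
    """
    best_action, best_cost = None, np.inf
    for action in candidate_actions:
        predicted = model(particles, action)
        cost = np.mean(np.linalg.norm(predicted - target, axis=-1))
        if cost < best_cost:
            best_action, best_cost = action, cost
    return best_action

def online_correction(model, predicted, observed):
    """After executing an action, the gap between predicted and observed particle
    positions is the error signal used to refine the simulator."""
    error = observed - predicted
    model.adapt(error)          # hypothetical adaptation hook on the learned model
    return np.mean(np.linalg.norm(error, axis=-1))
```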

Next, the researchers aim to improve the model to help robots better predict interactions in partially observable scenarios, such as knowing how a stack of boxes will move when pushed, even if only the boxes on the surface are visible and most of the other boxes are hidden.

The researchers are also exploring ways to combine the model with an end-to-end perception module that operates directly on images. This will be a joint project with Dan Yamins's group; Yamins recently completed his postdoc at MIT and is now an assistant professor at Stanford University. "You're dealing with these cases all the time where there's only partial information," Wu says. "We're extending our model to learn the dynamics of all particles, while we see only a small portion."


