For toddlers, playing with toys is not all fun and games—it’s an important way for them to learn how the world works. Taking a cue from this, researchers from the University of California, Berkeley have developed a robot that, like a child, learns from scratch, experimenting with objects to figure out how best to move them around. In doing so, the robot is essentially able to see into its own future.
A robotic learning system developed by researchers at Berkeley’s Department of Electrical Engineering and Computer Sciences visualizes the consequences of its future actions to discover ways of moving objects through time and space. Called Vestri, the system uses a technique known as visual foresight to manipulate objects it’s never encountered before, and even to steer around obstacles that might be in the way.
Crucially, the system starts as a tabula rasa, using unsupervised, unguided exploratory sessions to figure out how the world works. That’s an important advance because the system doesn’t require an army of programmers to code in every single possible physical contingency, which, given how complicated and varied the world is, would be a hideously onerous (and likely intractable) task. In the future, scaled-up versions of this self-taught predictive system could make robots more adaptable in factory and residential settings, and help self-driving vehicles anticipate events on the road.
Led by UC Berkeley assistant professor Sergey Levine, the researchers built a robot that can predict what it’ll see through its camera if it performs a certain sequence of movements. As noted, the system is not pre-programmed; instead it learns through a process called model-based reinforcement learning. It sounds fancy, but it’s similar to the way a toddler learns to move objects around through repetition and trial and error. Child psychologists call this “motor babbling,” and the UC Berkeley researchers applied the same methodology and terminology to Vestri.
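The babbling-then-planning loop can be illustrated with a deliberately tiny sketch. This is an assumption-laden toy, not the Vestri code: here the “world” is a single object position with one unknown physical constant, the robot plays with random actions, fits a model to what it observed, and then uses that model to predict and plan.

```python
import random

# Toy "motor babbling" sketch (an illustration, not the Vestri system):
# the world is a 1-D object position with unknown dynamics
# s' = s + K * a, and the robot must discover K through random play.

TRUE_K = 0.5  # hidden physics the robot does not know in advance

def step(state, action):
    """Ground-truth environment the robot can only sample, not inspect."""
    return state + TRUE_K * action

# 1. Babbling phase: apply random actions, record (s, a, s') transitions.
random.seed(0)
transitions = []
state = 0.0
for _ in range(50):
    action = random.uniform(-1.0, 1.0)
    nxt = step(state, action)
    transitions.append((state, action, nxt))
    state = nxt

# 2. Model fitting: least-squares estimate of K from the observed motion.
num = sum(a * (nxt - s) for s, a, nxt in transitions)
den = sum(a * a for _, a, _ in transitions)
learned_k = num / den

# 3. Planning: use the learned model to choose the action that should
#    carry the object to a goal, and predict the outcome ("foresight").
goal = 3.0
planned_action = (goal - state) / learned_k
predicted = state + learned_k * planned_action  # what the robot expects
actual = step(state, planned_action)            # what really happens

print(round(learned_k, 3))      # recovers 0.5 from play alone
print(round(actual - goal, 3))  # 0.0: the plan reaches the goal
```

The point of the sketch is the ordering: the model is never hand-coded; it is distilled from self-generated experience and only then used to plan.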
“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualise how different behaviors will affect the world around it,” said Levine in a statement. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”
To train the system, the researchers let the robot “play” with several objects on a small table. A form of artificial intelligence known as deep learning was applied to recurrent video prediction, allowing the bot to foresee how an image’s pixels would move from one frame to the next based on its movements. In tests, the robot’s self-acquired model of the world allowed it to handle objects it had never dealt with before and move them to desired locations, sometimes maneuvering them around obstacles.
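The key idea in this style of video prediction is that the model doesn’t paint new frames from scratch: it predicts where existing pixels will move, then rolls that motion forward. A minimal, non-neural sketch of that idea (entirely an assumption for illustration, using a 1-D row of pixels and a brute-force search instead of a deep network):

```python
# Minimal illustration of action-conditioned pixel-motion prediction:
# rather than generating a new image, predict where pixels MOVE, then
# apply that motion repeatedly to "see" several steps into the future.

def shift(frame, dx):
    """Shift every pixel of a 1-D 'image' by dx, padding with zeros."""
    n = len(frame)
    out = [0] * n
    for i, v in enumerate(frame):
        j = i + dx
        if 0 <= j < n:
            out[j] = v
    return out

def fit_motion(before, after):
    """'Learn' the per-action pixel displacement that best explains an
    observed training pair, by scoring every candidate shift."""
    n = len(before)
    best_dx, best_err = 0, float("inf")
    for dx in range(-n + 1, n):
        err = sum((p - q) ** 2 for p, q in zip(shift(before, dx), after))
        if err < best_err:
            best_dx, best_err = dx, err
    return best_dx

# One observed interaction: the robot pushed, and the blob moved right.
before = [0, 0, 9, 9, 0, 0, 0, 0, 0, 0]
after  = [0, 0, 0, 0, 9, 9, 0, 0, 0, 0]
dx_per_push = fit_motion(before, after)

# Visual foresight: roll the motion model forward to predict frames
# the camera has not yet seen (two more pushes ahead).
predicted = shift(shift(after, dx_per_push), dx_per_push)
print(dx_per_push)  # 2 pixels per push
print(predicted)    # blob predicted at the right edge of the frame
```

The real system replaces the brute-force search with a recurrent convolutional network and handles full 2-D camera images, but the motion-transport principle is the same.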
“Children can learn about their world by playing with toys, moving them around, grasping, and so forth. Our aim with this research is to enable a robot to do the same: to learn about how the world works through autonomous interaction,” Levine said. “The capabilities of this robot are still limited, but its skills are learned entirely automatically, and allow it to predict complex physical interactions with objects that it has never seen before by building on previously observed patterns of interaction.”
As Levine notes, the system is still pretty basic, and it can only “see” a few seconds into the future. Eventually, a self-taught system like this could learn the lay of the land inside a factory, and have the foresight to avoid human workers and other robots sharing the same environment. It could also be applied to autonomous vehicles, where such a predictive model could, for instance, allow a car to pass a slow-moving vehicle by briefly moving into the oncoming traffic lane, or to avoid a collision.
For Levine’s team, the next step will be to get the robot to perform more complex tasks, such as picking up and putting down objects, and manipulating deformable materials like cloth and rope, as well as fragile items. This latest research will be presented later today at the Neural Information Processing Systems conference in Long Beach, California. [NIPS Conference]