File:Chelsea Finn and Vestri the robot, UC Berkeley.jpg
Chelsea_Finn_and_Vestri_the_robot,_UC_Berkeley.jpg (369 × 505 pixels, file size: 36 KB, MIME type: image/jpeg)
Summary
Description | Chelsea Finn and Vestri the robot, UC Berkeley.jpg
English: Vestri the robot imagines how to perform tasks
UC Berkeley researchers have developed a robotic learning technology that enables robots to imagine the future of their actions so they can figure out how to manipulate objects they have never encountered before. In the future, this technology could help self-driving cars anticipate events on the road and lead to more intelligent robotic assistants in homes, but the initial prototype focuses on learning simple manual skills entirely from autonomous play.

Using this technology, called visual foresight, the robots can predict what their cameras will see if they perform a particular sequence of movements. These robotic imaginations are still relatively simple for now, with predictions reaching only several seconds into the future, but they are enough for the robot to figure out how to move objects around on a table without disturbing obstacles. Crucially, the robot can learn to perform these tasks without any help from humans or prior knowledge about physics, its environment or what the objects are. That's because the visual imagination is learned entirely from scratch through unattended, unsupervised exploration, in which the robot plays with objects on a table. After this play phase, the robot builds a predictive model of the world and can use this model to manipulate new objects that it has not seen before.

"In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it," said Sergey Levine, assistant professor in Berkeley's Department of Electrical Engineering and Computer Sciences, whose lab developed the technology. "This can enable intelligent planning of highly flexible skills in complex real-world situations."

The research team will demonstrate the visual foresight technology at the Neural Information Processing Systems conference in Long Beach, California, on December 5.

At the core of this system is a deep learning technology based on convolutional recurrent video prediction, or dynamic neural advection (DNA). DNA-based models predict how pixels in an image will move from one frame to the next based on the robot's actions. Recent improvements to this class of models, as well as greatly improved planning capabilities, have enabled robotic control based on video prediction to perform increasingly complex tasks, such as sliding toys around obstacles and repositioning multiple objects. (A minimal illustrative sketch of this prediction-and-planning loop appears after the summary fields below.)

"In the past, robots have learned skills with a human supervisor helping and providing feedback. What makes this work exciting is that the robots can learn a range of visual object manipulation skills entirely on their own," said Chelsea Finn, a doctoral student in Levine's lab and inventor of the original DNA model.

Full story: https://news.berkeley.edu/2017/12/04/...

Featured researchers: Assistant Professor Sergey Levine, doctoral student Chelsea Finn, and graduate student Frederik Ebert.

Video by Roxanne Makasdjian and Stephen McNally. Music: "Plastic of Paper" by Wes Hutchinson; "New Phantom" and "Believer" by Silent Partner.

http://news.berkeley.edu/ http://www.facebook.com/UCBerkeley http://twitter.com/UCBerkeley http://instagram.com/ucberkeleyofficial https://plus.google.com/+berkeley
Date |
Source | Vestri the robot imagines how to perform tasks |
Author | UC Berkeley |
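
The description above explains the core mechanism: a learned video-prediction model (dynamic neural advection) imagines the frames a candidate sequence of actions would produce, and the robot picks the actions whose imagined outcome moves a chosen object pixel toward its goal. The following is a minimal, illustrative Python sketch of such a prediction-and-planning loop. It is only a rough approximation under stated assumptions, not the UC Berkeley implementation; the model interface `predict_frames`, the cross-entropy-method style search, and all parameter values are hypothetical.

```python
import numpy as np

def plan_actions(predict_frames, current_frame, start_pixel, goal_pixel,
                 horizon=10, n_candidates=200, n_elite=20, n_iters=3, action_dim=4):
    """Search for the action sequence whose *predicted* video moves the chosen
    object pixel closest to the goal pixel (cross-entropy-method style search).

    predict_frames is a hypothetical stand-in for the learned video-prediction
    model: given the current camera frame, a designated object pixel, and a
    candidate action sequence, it returns the imagined frames and the predicted
    final location of that pixel.
    """
    goal = np.asarray(goal_pixel, dtype=float)
    mean = np.zeros((horizon, action_dim))
    std = np.ones((horizon, action_dim))
    for _ in range(n_iters):
        # Sample candidate action sequences around the current distribution.
        candidates = mean + std * np.random.randn(n_candidates, horizon, action_dim)
        costs = []
        for actions in candidates:
            # The model "imagines" the consequences of these actions.
            _frames, predicted_pixel = predict_frames(current_frame, start_pixel, actions)
            costs.append(np.linalg.norm(np.asarray(predicted_pixel, dtype=float) - goal))
        # Refit the sampling distribution to the lowest-cost (elite) sequences.
        elite = candidates[np.argsort(costs)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean  # best action sequence found
```

In practice such a planner would typically run in a model-predictive-control loop: execute only the first planned action, observe a new camera frame, and re-plan, so that prediction errors several seconds into the future do not accumulate.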
Licensing
- You are free:
- to share – to copy, distribute and transmit the work
- to remix – to adapt the work
- Under the following conditions:
- attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
This file, which was originally posted to an external website, has not yet been reviewed by an administrator or reviewer to confirm that the above license is valid. See Category:License review needed for further instructions.
File history
Click on a date/time to view the file as it appeared at that time.
 | Date/Time | Thumbnail | Dimensions | User | Comment
---|---|---|---|---|---
current | 21:08, 2 August 2022 | | 369 × 505 (36 KB) | GRuban (talk | contribs) | {{Information |description={{en|1=Vestri the robot imagines how to perform tasks UC Berkeley researchers have developed a robotic learning technology that enables robots to imagine the future of their actions so they can figure out how to manipulate objects they have never encountered before. In the future, this technology could help self-driving cars anticipate future events on the road and produce more intelligent robotic assistants in homes, but the initial prototype focuses on learning s...
You cannot overwrite this file.
File usage on Commons
The following page uses this file:
File usage on other wikis
The following other wikis use this file:
- Usage on de.wikipedia.org
- Usage on en.wikipedia.org
- Usage on es.wikipedia.org
- Usage on fr.wikipedia.org
- Usage on www.wikidata.org