[T]he Google car was able to do so much more than its predecessors in large part because the company had the resources to do something no other robotic car research project ever could: develop an ingenious but extremely expensive mapping system. These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway.

Creating these maps is not just a matter of adapting Google Maps to the task:
[B]efore [Google]'s vision for ubiquitous self-driving cars can be realized, all 4 million miles of U.S. public roads will need to be mapped, plus driveways, off-road trails, and everywhere else you'd ever want to take the car. So far, only a few thousand miles of road have gotten the treatment, [...]. The company frequently says that its car has driven more than 700,000 miles safely, but those are the same few thousand mapped miles, driven over and over again.

And there is this very crucial caveat:
Another problem with maps is that once you make them, you have to keep them up to date, a challenge Google says it hasn't yet started working on. Considering all the traffic signals, stop signs, lane markings, and crosswalks that get added or removed every day throughout the country, keeping a gigantic database of maps current is vastly difficult.

Odd as it may seem in the constantly changing world of computing, it is important to remember the past. Every now and then, in any research community, a researcher develops an innovation that proves hugely influential, often eclipsing other valuable work being undertaken at the same time. For example, Sebastian Thrun, among others, has done phenomenally important work on probabilistic techniques for robot navigation. His work provides the conceptual foundation for the Google self-driving car, and much current robotics research is dedicated to extending and improving ideas he pioneered.
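To give a feel for that foundation, here is a toy version of the Bayesian localization idea Thrun helped popularize: the robot maintains a probability distribution over where it might be, sharpening it with each noisy sensor reading and blurring it with each noisy motion. This is a minimal sketch only; the corridor layout, sensor accuracy, and motion noise below are invented for illustration and have nothing to do with any production system.

```python
# Toy Markov localization: a robot on a circular corridor of discrete
# cells updates a belief distribution from noisy door-sensor readings
# and noisy motion. All numbers here are made up for illustration.

world = [1, 0, 0, 1, 0]                    # 1 = door, 0 = wall (hypothetical map)
belief = [1.0 / len(world)] * len(world)   # start fully uncertain

P_HIT, P_MISS = 0.8, 0.2                   # assumed sensor accuracy
P_EXACT, P_UNDER, P_OVER = 0.8, 0.1, 0.1   # assumed motion noise

def sense(belief, measurement):
    """Bayes update: reweight each cell by how well it explains the reading."""
    posterior = [b * (P_HIT if world[i] == measurement else P_MISS)
                 for i, b in enumerate(belief)]
    total = sum(posterior)
    return [p / total for p in posterior]

def move(belief, step):
    """Convolve the belief with a noisy motion model (corridor wraps around)."""
    n = len(belief)
    return [P_EXACT * belief[(i - step) % n]
            + P_UNDER * belief[(i - step + 1) % n]   # undershot by one cell
            + P_OVER * belief[(i - step - 1) % n]    # overshot by one cell
            for i in range(n)]

# One sense-move cycle: see a door, then drive one cell to the right.
belief = sense(belief, 1)
belief = move(belief, 1)
print([round(b, 3) for b in belief])
```

Note that even this elegant probabilistic machinery presupposes a map (the world array) against which readings are matched, which is precisely the dependency at issue in what follows.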
So what should we be remembering from the past? The previous revolution in robotics was the invention of subsumption by Rodney Brooks. (This is what everyone was excited about back when I started graduate school.) Let's look at how Brooks described his goals:
I wish to build completely autonomous mobile agents that co-exist in the world with humans, and are seen by those humans as intelligent beings in their own right. I will call such agents Creatures.

Consider one of the key requirements Brooks specifies for a competent Creature:

A Creature should be robust with respect to its environment; minor changes in the properties of the world should not lead to total collapse of the Creature's behavior; rather one should expect only a gradual change in capabilities of the Creature as the environment changes more and more.

In describing the subsumption approach, Brooks goes on to describe the role of representation in his scheme:

[I]ndividual layers extract only those aspects of the world which they find relevant - projections of a representation into a simple subspace, if you like. Changes in the fundamental structure of the world have less chance of being reflected in every one of those projections than they would have of showing up as a difficulty in matching some query to a central single world model.

All of this is summarized in his famous conclusion:

When we examine very simple level intelligence we find that explicit representations and models of the world simply get in the way. It turns out to be better to use the world as its own model.

Now, as I see it, the lesson of the Google self-driving car, read in light of Rodney Brooks's thinking, is that its dependence on a high-fidelity world representation is a symptom of our ongoing inability to develop general artificial intelligence, in spite of the almost unthinkable level of resources Google is throwing at the project. It is easy to be a critic from the outside, without seeing what the engineers on the inside are seeing, but I can't help wondering whether revisiting the ideas Brooks introduced in the 1980s and 1990s might be conceptually helpful in their endeavor.
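For readers who never encountered subsumption, a few lines of code can convey its flavor. The sketch below is mine, not Brooks's: it collapses his suppression and inhibition wiring into a fixed-priority scan over behaviors, and the sensor fields and behavior names are invented for illustration. What matters is that no layer consults a stored map; every control cycle simply re-reads the world.

```python
# A subsumption-flavored controller, loosely after Brooks. Behaviors are
# ordered by priority; the first one that produces a command suppresses
# everything below it. Each behavior looks only at raw sensor readings,
# taken fresh each cycle - the world serves as its own model.

from typing import Callable, Optional

# A behavior maps raw sensor readings to a command, or None to stay silent.
Behavior = Callable[[dict], Optional[str]]

def avoid(sensors: dict) -> Optional[str]:
    """Most basic competence: never collide."""
    return "turn_away" if sensors["obstacle_cm"] < 30 else None

def seek_goal(sensors: dict) -> Optional[str]:
    """Higher-level competence: head toward a visible beacon."""
    return "steer_to_beacon" if sensors["beacon_visible"] else None

def wander(sensors: dict) -> Optional[str]:
    """Default competence: drift when nothing else fires."""
    return "drift_randomly"

# Ordered from most to least critical.
BEHAVIORS: list[Behavior] = [avoid, seek_goal, wander]

def control_cycle(sensors: dict) -> str:
    for behavior in BEHAVIORS:
        command = behavior(sensors)
        if command is not None:
            return command
    return "halt"

# No persistent map is kept; each cycle just re-reads the sensors.
print(control_cycle({"obstacle_cm": 120, "beacon_visible": True}))  # steer_to_beacon
print(control_cycle({"obstacle_cm": 15,  "beacon_visible": True}))  # turn_away
```

Contrast this with the mapping-first pipeline described above: whatever robustness this controller has comes from each behavior's direct coupling to fresh sensor readings, not from the fidelity of a central world model.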
Even though this pioneering work in subsumption is almost 30 years old, there are still useful lessons to be learned from it. In a few weeks, at the AAAI Fall Symposium on Knowledge, Skill, and Behavior Transfer, I will be presenting a paper describing my recent work on learning subsumption behaviors by imitating human actions. Something important I have learned over the years is that older research that does not conform to current fads can still be a source of cutting-edge ideas. In fact, when other researchers are clustering around the latest fashion, revisiting older ideas can reveal opportunities to make contributions that others are overlooking.