The dream of self-driving cars goes back a long time, but a good place to start is Munich, Germany, 1987.
1987: Prometheus & VaMoRs / VaMP
Ernst Dickmanns' research group at UniBW Munich demonstrates vision-based lane keeping in a modified Mercedes van.
By 1994, they are driving vehicles in highway traffic, changing lanes autonomously, and tracking other cars to avoid collisions. Dickmanns begins a collaboration with the U.S. Army Research Lab to continue researching self-driving technology, but retires from research in 2001.
1989: ALVINN & NavLab
In 1989, Dean Pomerleau (advised by Chuck Thorpe) trains ALVINN, a neural network that steers CMU's NavLab research vehicle; Todd Jochem later joins the group's autonomous-driving work.
The network learns from camera footage of human-driven test runs, and predicts a steering direction (along with a confidence estimate) for each incoming video frame.
The research group builds tail-light following, lane tracking, parallel parking, off-road obstacle avoidance, and other autonomous-driving features into successive versions of the NavLab platform.
The research is funded by DARPA, a technology research agency within the U.S. Department of Defense.
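The approach amounts to behavioral cloning: frames from a human-driven run become training inputs, and the human's steering choice becomes the label. The sketch below is illustrative only, not ALVINN's actual architecture or code; apart from the 30x32 input retina, every dimension, name, and piece of data here is made up for the example.

```python
# Illustrative behavioral-cloning sketch (NOT ALVINN's actual code or exact
# architecture): a tiny two-layer network maps a downsampled camera frame to
# a distribution over discrete steering bins, trained to imitate a human driver.
import numpy as np

rng = np.random.default_rng(0)

IN = 30 * 32   # ALVINN used a 30x32 input retina; everything else is illustrative
HIDDEN = 16    # illustrative; ALVINN's hidden layer was much smaller
OUT = 9        # steering bins from hard-left to hard-right

W1 = rng.normal(0, 0.1, (IN, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, OUT))

def forward(frame):
    """Return hidden activations and a softmax over steering bins."""
    h = np.tanh(frame @ W1)
    logits = h @ W2
    p = np.exp(logits - logits.max())
    return h, p / p.sum()

def train_step(frame, target_bin, lr=0.05):
    """One SGD step on the cross-entropy loss (plain backprop)."""
    global W1, W2
    h, p = forward(frame)
    grad_logits = p.copy()
    grad_logits[target_bin] -= 1.0          # dL/dlogits for cross-entropy
    grad_h = (W2 @ grad_logits) * (1.0 - h ** 2)
    W2 -= lr * np.outer(h, grad_logits)
    W1 -= lr * np.outer(frame, grad_h)

def make_frame(bin_idx):
    """Synthetic 'road' frame: a bright stripe whose column encodes lane position."""
    img = rng.normal(0, 0.05, (30, 32))
    col = 2 + bin_idx * 3
    img[:, col:col + 3] += 1.0
    return img.ravel()

# Imitate the "human driver": the correct steering bin follows the stripe.
for _ in range(2000):
    b = int(rng.integers(0, OUT))
    train_step(make_frame(b), b)

# After training, the most confident bin should track the stripe's position.
correct = sum(int(np.argmax(forward(make_frame(b))[1]) == b) for b in range(OUT))
print(f"{correct}/{OUT} test frames steered correctly")
```

The confidence output falls out for free: the softmax over bins is both the steering prediction (its argmax) and a measure of how sure the network is (how peaked the distribution is).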
In July 1995, Jochem and Pomerleau take NavLab 5 on a 96% autonomous trip across the U.S. using a new steering system called RALPH.
Later that year, Jochem and Pomerleau start a company to commercialize their lane-keeping system.
2004-2005: Stanley & the DARPA Grand Challenge
DARPA, hoping to accelerate research into autonomy, offers a million dollars to the fastest autonomous vehicle that can complete an off-road course in the Mojave Desert. No vehicle manages to cover more than 8 of the route's 142 miles, and no prize is awarded.
DARPA doubles the prize to two million dollars and asks teams to try again next year. This time, the results are more impressive.
The Stanford vehicle ("Stanley") wins after the leading CMU vehicle suffers a mechanical failure.
In July 2007, Sebastian Thrun of the Stanford team gives a talk about their winning tech to employees at Google.
In 2007, Israeli startup Mobileye announces that its EyeQ1 system-on-a-chip will provide lane-departure warning and vehicle detection in select Volvo cars.
The company has already demonstrated the underlying technology for vision-based lane and car tracking.
They've also demonstrated pedestrian detection, which is planned for the EyeQ2.
Mobileye has previously collaborated with the UCLA DARPA Grand Challenge team, building a prototype version of their monocular lane/vehicle-detection system into the race vehicle.
2007: Boss & the DARPA Urban Challenge
Following the successful desert challenge, robotics teams come together again in November 2007, this time to complete a 60-mile route through an urban environment.
This requires teams to interpret lane markings, avoid dynamic obstacles (other cars), handle intersections and merging, and satisfy a host of other DARPA evaluation criteria.
Many teams from the desert challenge return. This time, CMU's vehicle ("Boss") wins, with Stanford's "Junior" (now sponsored by Google) coming in second.
After the challenge, Chris Urmson, technology leader of the CMU team, demonstrates their winning vehicle to WIRED. Urmson predicts that similar self-driving technology will be available to consumers by 2020.
Satisfied with the progress so far, DARPA ends the challenges.
In 2009, Google hires Chris Urmson, Sebastian Thrun, and other challenge veterans to continue working on autonomous vehicle research.
2011: Google announces SDC program
At IROS (the International Conference on Intelligent Robots and Systems) 2011, Chris Urmson and Sebastian Thrun present the progress of their work at Google.
2014: Tesla announces Autopilot
Electric-car maker Tesla announces that Autopilot will start shipping in its vehicles. Autopilot encompasses lane keeping, adaptive cruise control, automatic emergency braking, and garage parking. The feature relies on the latest EyeQ3 system from Mobileye, which has just raised a billion dollars in its IPO.
In June 2015, Chris Urmson, now the head of Google's Self-Driving Car project, gives a TED talk explaining how Google's self-driving system works, emphasizing the difference between highway driving (simple) and everything else (complicated).
He is confident that Google will commercialize the research project soon.
In August 2016, Chris Urmson steps down from Google SDC. In December, Google SDC is spun out into a new corporation (under Alphabet) called Waymo, and Urmson launches Aurora, a new self-driving-car startup.
Urmson's cofounders at Aurora are Drew Bagnell, another veteran of CMU's DARPA team, and Sterling Anderson, who has just left his position as director of Autopilot at Tesla.
2016: Mobileye breaks with Tesla
After a fatal crash in May 2016, Mobileye terminates its association with Tesla. Tesla is already building its own team to pursue full autonomy, but now has to develop its second-generation AP2 system from scratch.
Mobileye researchers are building deep neural networks into their system now. At CVPR 2016, Amnon Shashua shows their technology for acquiring 3D vehicle bounding boxes, drivable space segmentation, lane semantics, and 10cm-accurate localization, all from vision only.
The technology is planned for the EyeQ4. Mobileye posts a demonstration of their high-accuracy vision-based localization (REM) on their website.
As progress in machine learning makes perception problems easier, a variety of startups have begun pursuing the goal of full self driving.
In January and February 2017, Cruise posts footage of their system driving autonomously in San Francisco traffic.
In June 2017, Peter Gao, Cruise's computer-vision team leader, gives a presentation about Cruise to Stanford students.
Amidst a bevy of self-driving startups, Cruise is second only to Waymo in reducing its disengagement numbers. (The definition of a 'disengagement' isn't precisely standardized across companies, and plenty of companies aren't on that list, but Waymo and Cruise have a strong lead.)
In a close third is the $800 million stealth-mode startup Zoox.
In 2014, Stanford-DARPA-team veteran Jesse Levinson cofounds Zoox with Australian designer Tim Kentley-Klay. Three years later, Kentley-Klay gives the first (to my knowledge) public presentation of their work on autonomous vehicles at the Unreal Engine SIGGRAPH user group.
They show transfer learning of lane tracking, object segmentation, and vehicle pose estimation from simulated data to real world cars.
A year later, they post clips of their full self-driving system in action, including 3D bounding boxes and tracking for cars and pedestrians, traffic-light reading, GPS-free egomotion, driving in rain, and handling of complex multi-car scenarios.
While Cruise, Zoox, Waymo, and others continue working towards full self-driving with customized, sensor-rich vehicles, a startup called Comma.ai has been releasing its OpenPilot driver-assistance software for free on GitHub. The software runs on a $700 dashcam, which Comma sells online.
At version 0.2.6, their system is already approaching the lane-keeping and cruise-control abilities of Tesla's Autopilot.
At version 0.5, their system is comparable to Autopilot in smoothness, and supports driver monitoring as well as an increasing number of vehicles.
Improvements to Comma's system are limited by a lack of HD maps. But while Waymo, Cruise, Zoox, and co. manually pre-map each new city before driving it, Comma can crowd-source HD maps automatically. This is possible because Comma has thousands of users collecting data; most startups have nowhere near this scale.
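The scale argument can be made concrete with a toy model (an illustration of the statistics, not Comma's actual mapping pipeline): if each drive contributes an independent, noisy GPS-level trace of the same lane, averaging N traces shrinks the error roughly as 1/sqrt(N), so a fleet of thousands can reach an accuracy no single survey drive could.

```python
# Toy model of crowd-sourced mapping (NOT Comma's actual pipeline): each drive
# contributes a noisy GPS-level trace of the same lane centerline, and averaging
# N independent traces shrinks the error roughly as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)

true_lane = np.linspace(0.0, 100.0, 200)  # "true" 1-D lane centerline, meters
GPS_NOISE = 3.0                           # per-drive position noise, meters

def one_drive():
    """One user's noisy trace of the lane."""
    return true_lane + rng.normal(0.0, GPS_NOISE, true_lane.shape)

errs = []
for n in (1, 100, 10_000):
    est = np.mean([one_drive() for _ in range(n)], axis=0)  # fleet average
    errs.append(np.abs(est - true_lane).mean())
    print(f"{n:>6} drives -> mean error {errs[-1]:.3f} m")
```

With meter-scale consumer GPS noise, ten thousand drives bring the averaged centerline down to a few centimeters of error, the regime an HD map needs; a fleet with only a handful of traces per road can't get there.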
But Mobileye has millions.
2018: Mobileye prepares for full self-driving
After being purchased by Intel in 2017, Mobileye continues to advance their self-driving system, aiming for fully redundant vision and radar/lidar driving mechanisms using their crowd-sourced, camera-based mapping system ("REM").
In May 2018, Mobileye begins testing a 100-car fleet. They demonstrate their prototype vision-only self-driving system to reporters in busy Jerusalem traffic.
2020: The future of self-driving
Self-driving cars are still limited. They are limited in geographic scope (Waymo, Cruise) or functional scope (Tesla, Comma), often both, and will continue to be limited for the foreseeable future.
It's worth noting that, of all the videos above, very few show a car without a human in the driver's seat, and those are all recorded on closed courses.
But although general self-driving is AI-complete, most of the hard problems have been solved for enough circumstances to be useful. The closer we get to Chris Urmson's guess of 2020 for limited autonomy, the more I'm inclined to believe it.
Progress in self driving has no standardized benchmarks and questionable terminology, so it's hard to tell what progress looks like without having reference points. This post is my collection of reference points. It's based around videos, which are a good approximation for firsthand experience, and fun to watch.
Dates on the left are (usually) the date of one of the videos; the work the videos represent typically starts much earlier.
Links on the right are to primary sources (more videos, research papers, interviews) wherever I could find them. Research papers weren't included if I couldn't find an open-access version.
There are a lot of self-driving teams and researchers not mentioned above (Aptiv, Argo, NVIDIA, Torc, Voyage, almost every car manufacturer, Apple...). This doesn't indicate anything about their technology (in some cases, it indicates that they haven't released enough video).
The earliest information here is largely drawn from Prof. Schmidhuber's robot car history page.
The header photo is a lightly edited version of photo 73 from the CMU / Red Team photo gallery.