So I got in my car yesterday and made the 45-mile drive down from Mountain View. And I was in a pretty special car with some pretty special capabilities, and I'll get back to that car in a bit. But for now, I want you to just appreciate the fact that we can do that kind of drive at all. You know, for our ancestors, getting down to Silicon Valley would have meant a full day on horseback. But for us, we can just make the decision to go down to Palo Alto and not bring an overnight bag. You know, even if you got here by walking, you have to admit that cars have really shaped all of our lives and the cities we live in. It's truly a miracle that we have cars around. Unfortunately, that miracle is the most dangerous thing most of us participate in. I'll give you the good news first. After years of relentlessly increasing, the number of auto fatalities per year has finally started dropping, and it's now down to only 30,000 people per year. That's still pretty high. Some of the credit for that goes to the weak economy: fewer people on the road, fewer people die. But most of it seems to be due to more people using seat belts and improvements from car manufacturers such as airbags and better crumple zones. Now, if we continued that trend, we'd be in pretty good shape. Unfortunately, it doesn't look like that's the case. There are only so many more people we can get to use seat belts at this point. We've done a really good job of surviving accidents; from now on, we need to not have accidents. And as if the deaths weren't enough, there are other problems with driving. Even non-fatal accidents come with huge repair bills. And think of the time we waste. Americans spend four billion hours per year stuck in traffic, and even longer than that driving. We spend so much time staring out the windshield because we're supposed to be paying attention, but we're not even good at that. Most accidents that you see are from human error.
If you had a component of a system you were building that caused 90% of your failures, you would do something about it. With that in mind, DARPA, the research wing of the U.S. military, sponsored a challenge in the desert where teams of driverless vehicles were supposed to drive 150 miles through the Mojave Desert with no person in the car. The winning team made it 7 miles of the 150 before they went off the road and ran into a rock. Clearly, things were not off to a great start. But undaunted, they ran the contest again the next year, and that year five teams completed the entire course. That's a huge improvement in just a year. As interesting as it is to drive 20 miles per hour through an unpopulated desert, that's not how most of us do our commuting. So recognizing that, they ran a more realistic scenario, which they called the Urban Challenge. They set up a simulated suburban environment and again had driverless cars complete a race, obeying traffic laws, interacting with other self-driving cars and with chase vehicles driven by human beings. This was a limited set of traffic interactions. There were four-way stops, but no traffic signals, certainly no merges onto the freeway, and there was one minor collision. But really, the sense of progress was astounding. At this point DARPA essentially declared victory and moved on to sponsoring other forms of science fiction, which many of you may know about. But the baton was passed. At this point, Google stepped in. In 2009, Google formed a team to work on self-driving cars. They hired a lot of the winners from the earlier DARPA challenges, as well as engineers from within Google, such as myself. The goal here was to do this for real, to do it with a sense of urgency, to drive in real environments, and to do it safely, because this is really important. A lot of people ask why Google would even bother doing this. After all, it doesn't have a lot to do with searching the internet or serving email or serving ads.
I hope I've made it clear by this point that this is a really important problem. As a software engineer, this is the most important thing I know how to work on. The other question that people ask is, why is Google even qualified to do this? After all, we've never built a car. We don't do that sort of thing. The answer is that we think that this is fundamentally a software problem. That's the difficult part of this. We think it's something where you can apply massive amounts of data, and our existing experience in building digital maps, to produce a real solution to this really challenging technical problem. Approaches to date for self-driving cars can broadly be put into two categories. On the one hand, you could put all of your intelligence in the vehicle and make no assumptions about what's around the corner. This is epitomized in the desert challenges, where the teams really didn't know what route they were going to drive until a couple of hours before the event started. The problem with this is if you have no idea what's around the corner, you can't drive very fast. You have to be very conservative. It's really not how the rest of us drive. We make assumptions about how roads are laid out and about how other drivers are going to behave, but you're not allowed to do that in the desert challenge. At the other extreme, you could put all of your intelligence into the road and have dumb cars. In the extreme form of this, you could build entirely separate lanes that were only for self-driving cars, or you could even build rails. You'd essentially have a monorail. The problem with that, of course, is that infrastructure is incredibly expensive to build out. You've got a real chicken-and-egg problem here. No one is going to spend billions of dollars laying out tracks for cars which don't even exist yet. Fortunately, there's a middle ground which we've chosen to take at Google. We've built a virtual infrastructure in the form of digital maps.
It's not that we don't know anything about what's around the corner. We actually have a pretty good idea about what's around the corner. We put some of the intelligence into building the maps and some of the intelligence into the onboard software. Quite a bit, actually, but we're allowed to make certain assumptions about what the world is going to look like. When all of you go out and drive in the real world, you're not driving as a blank-slate idiot who's never seen a road before. You have a lot of knowledge about how things are laid out, and often you've driven the road before and can make assumptions. And we allow ourselves to do that. Whenever you talk about a system that's complicated, it helps to break it down into layers, and I'm going to do that here. First, I'm going to tell you a bit about our platform, the hardware, the sensors, our embedded computing platform, and then I'll move on to how the cars figure out where they are. Once they've established that, they need to understand what's going on in the world around them, and then finally take a course of action. I'll start with the sensors. The most obvious one everyone sees is the laser. It's the big spinning bucket on top. That laser is a laser rangefinder, which uses the speed of light to figure out the distance to objects. It spins ten times a second, and in each of those rotations, it produces 100,000 3D points. The huge advantage of the laser is that it gives us a 360-degree view, and it's incredibly precise, giving us about 5-centimeter accuracy on any given measurement. The problems with the laser are that it can only see so far, and the farther out you get, the more the points get spread out in the angular direction. The other problem is that it suffers from anything that affects photons, such as rain, and we find that the laser works less well in the sort of heavy rain that you might have seen yesterday. The other sensor that you can see on the outside is the radar.
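Before we get to the radar, the laser's numbers above can be sketched with a little arithmetic. This is a hedged illustration of textbook time-of-flight physics and angular point density, not the sensor's actual firmware; the function names are invented for the sketch.

```python
# Illustrative sketch of the laser figures above: a rangefinder measures
# distance from a pulse's round-trip time, and 100,000 points per rotation
# spread out with range. Textbook physics, not the real sensor internals.
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_time_of_flight(round_trip_s: float) -> float:
    """Distance to a target from the laser pulse's round-trip time."""
    return C * round_trip_s / 2.0

def point_spacing(distance_m: float, points_per_rotation: int = 100_000) -> float:
    """Lateral gap between neighboring points in one scan line at a given range."""
    return distance_m * 2.0 * math.pi / points_per_rotation

# A pulse returning after ~200 ns hit something roughly 30 m away,
# where neighboring points in a scan line are still only millimeters apart.
print(round(range_from_time_of_flight(200e-9), 2))  # → 29.98
print(round(point_spacing(30.0) * 1000, 1))         # → 1.9 (mm)
```

The second function is why range hurts: double the distance and the gaps between points double too, so small objects eventually fall between the beams.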
It's not as precise as the laser by any means, but it has a couple of important advantages. It can see much farther than the laser. It can also see through cars: you see not just the car in front of you, but the car in front of it, and possibly the car in front of that. It can also give you direct measurements of speed via the Doppler effect. A disadvantage of the radar is that it has trouble distinguishing stationary Coke cans from stationary cars, so it's mostly useful for determining the position of moving vehicles. On the inside, we have a couple more sensors. The most obvious ones there are the cameras. The general computer vision problem remains almost impossibly difficult, but we don't have to solve that. Instead, a lot of driving is really tuned to the human vision system; people build things so that they pop out in human vision. The obvious example here is traffic signals. If you want to know if a traffic signal is red or green, they've made it really easy for a person with less-than-fantastic vision to make that determination, and we can use cameras to figure that out as well. The last sensor I want to talk about is the positioning system, which incorporates GPS, accelerometers, and gyroscopes to give us a rough position within the world, which we then refine, and I'll talk about that later. The last thing I want to mention, in the trunk, is the computing platform. There are two computers back there. The first one I'll talk about is the drive-by-wire system. It would be great if auto manufacturers just gave us essentially joystick control, but they don't give us that. Instead, we have to interface at a pretty low level, and the drive-by-wire system is a traditional embedded computing platform with a very weak processor and no operating system to speak of, running a very tight loop actuating the car. Next to that is a workstation-class computer with a quad-core processor running FreeBSD. Just kidding. It's running Linux. I'll talk a bit about how we use Linux.
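First, though, the Doppler measurement mentioned above can be sketched in one formula. The 77 GHz carrier is a typical automotive radar band assumed for illustration; the talk doesn't specify the hardware.

```python
# Sketch of the Doppler relationship a radar uses to measure speed directly.
# The 77 GHz carrier is an assumed, typical automotive radar band.
C = 299_792_458.0  # speed of light, m/s

def relative_speed(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Closing speed (m/s) from the measured frequency shift.

    A target closing at speed v shifts the radar return by 2*v*f/c,
    so v = shift * c / (2*f).
    """
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# A shift of about 5.1 kHz corresponds to a car closing at ~10 m/s.
print(round(relative_speed(5137.0), 1))  # → 10.0
```

Note the limitation the talk points out: a stationary Coke can and a stationary car both produce zero shift, which is why the radar is mainly useful for moving vehicles.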
I apologize that I don't have awesome videos to show for the Linux part. I'm sure all of you are kind of used to that. It's a lightly modified version of Ubuntu. We mostly stripped things out of Ubuntu to get it to run. We don't run it as a hard real-time system, which surprises some people. Instead, we aim for real-time-ish, and we verify that. We have a couple of processes that really do have hard deadlines that they need to meet. For those, we run them using SCHED_FIFO, and we have as many of those as we have cores on the processor, and so in practice, they always get a time slice when they'd like one. The rest of the system we can roughly break down into batch processes, which should run when they get a chance, and things which are important but not as critical as the hard real-time things, and for those we use Linux cgroups to manage the processor load between them. We separately monitor whether any given process exceeds its time slice. If it does, we throw a warning and hand control back to the human driver, who's always present. If that happens, that's a bug, and we fix it. In practice, after we've driven several thousand miles with the same set of software, we've pretty much established that that sort of thing won't happen. I don't think I mentioned it earlier, but we have at this point driven 400,000 miles autonomously. How many of you have seen one of the Google self-driving cars driving around? Okay, so a fair amount. I see them constantly, but I work in the building where they all park, so... But they're out there and they're driving in real scenarios, and we find that incredibly useful. The DARPA challenges were fantastic, but they were very contrived environments, and you really need to learn how things work for real on real roads. And when I say we've driven 400,000 miles, obviously we physically drove those 400,000 miles once, but we kept all of the data from that.
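The real-time-ish discipline described above, timing every iteration against its budget and flagging overruns, can be sketched in a few lines. The 100 ms budget and all the names here are assumptions for illustration, not the actual system.

```python
import time

# Minimal sketch of soft-real-time verification: run a control loop,
# measure each iteration against its deadline, and record any overrun so
# control can be handed back to the driver. Budget and names are assumed.

DEADLINE_S = 0.100  # per-iteration budget for a hypothetical 10 Hz loop

def run_loop(step, iterations):
    """Run `step` repeatedly; return the indices of iterations that overran."""
    overruns = []
    for i in range(iterations):
        start = time.monotonic()
        step()
        if time.monotonic() - start > DEADLINE_S:
            overruns.append(i)  # in the real system: warn, hand over control
    return overruns

# A step that always fits its budget produces no overruns.
print(run_loop(lambda: None, 5))  # → []
```

On Linux, a process can request the SCHED_FIFO class mentioned above with `os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))`, though that normally requires elevated privileges.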
We record all of the sensor logs. We record everything that the processes were telling each other, and we're able to play all of that back any time we want to make a change. And so we've used those 400,000 miles over and over, any time we want to figure out whether a change we're making is going to make things better or worse, and we look for regressions in that data. I'd now like to tell you a little bit about how the car thinks about itself. As a blank slate, the car has some basic positioning information. It has GPS information, which is pretty awful, actually; that's accurate to maybe five meters on a good day, 30 meters if you're in a really bad area surrounded by buildings or mountains. But we know pretty precisely which direction we're pointed, and we know pretty precisely how fast we're going. On top of that, we then position ourselves on the map by comparing what we see in the real world to the map, and I'll show you a bit more about that later. What you're seeing here is essentially what the world looks like in infrared, and we're able to line those up. On top of that map, we build a logical map, which tells us not just that this pixel happens to be white and this pixel happens to be a shade of gray, but that there's a lane that goes through here, that it has this speed limit, and that if you'd like to turn left at this intersection, you're allowed to move into this other lane, but you need to wait for that traffic signal to turn green and you need to wait for people who might be in this crosswalk. It's much easier to reason about this sort of map than about raw pixels. Having established that, we can then figure out what's going on in the world. I'll talk a bit more about how this works in detail, but roughly we need to figure out that a given point that we saw in the distance happens to be a car that's moving in a certain direction and that it might be turning left, because that's allowed. Once we have that, we plan our route through the world.
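The record-and-replay regression testing described a moment ago can be sketched like this. The toy log, the handler names, and the brake-decision example are all hypothetical; the real system replays full sensor and inter-process message logs.

```python
# Hedged sketch of log-replay regression testing: feed recorded messages
# through two versions of a component and report where they disagree.
# The toy log and the handlers are invented for illustration.

def replay(log, handler):
    """Run every recorded (timestamp, message) pair through `handler`."""
    return [handler(ts, msg) for ts, msg in log]

def find_regressions(log, old_handler, new_handler):
    """Timestamps where the new software's decision differs from the old."""
    disagreements = []
    for (ts, msg), old, new in zip(log, replay(log, old_handler),
                                   replay(log, new_handler)):
        if old != new:
            disagreements.append(ts)
    return disagreements

log = [(0.0, "clear"), (0.1, "pedestrian"), (0.2, "clear")]
should_brake_v1 = lambda ts, msg: msg == "pedestrian"
should_brake_v2 = lambda ts, msg: msg != "clear"   # a candidate change
print(find_regressions(log, should_brake_v1, should_brake_v2))  # → []
```

The payoff is exactly what the talk describes: 400,000 physically driven miles become a reusable test suite, because any change can be checked against every mile ever recorded.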
We want to stay roughly in the center of our lane, but of course if someone's blocking that or edging a little too close to our edge, we'll make corrections. We obviously don't want to hit the guy in front of us, and so we don't want to exceed his speed and run into the back of him. We're also paying attention to the guy in front of him and the red light that we don't want to run. Once you've got all that, you can really forget about everything else; all that matters is driving that corridor and not hitting the guy in front of you. You need to actuate the steering wheel, the gas, and the brake, and ultimately that's the entire job of the car. I talked a bit about digital maps before, and this is the part I work on, so I'll talk a bit more about it. We build lots of different kinds of maps. Some of them are pixel-based, where we've essentially built 10-centimeter-by-10-centimeter pixels representing our entire world. For each of those we have the infrared reflectivity and also the height, and so we can build really detailed views of what the world looks like. And as I mentioned, we then build a logical map on top of that, which is mostly done by human work with some help from computers. Conceptually, the way we build the pixel-based maps is we drive around and the laser collects data, which we then project onto the ground. We average all those points, subtract out things that look like they might be cars, and you ultimately end up with a really detailed view of what the world looks like. In practice we take multiple passes of cars and stitch them together, so that you end up with a really complete view without any of the quirks, like there happened to be a truck parked there one day but not the next. Once we have that map, we're able to line up what the laser sees in real time with the map, so that we can figure out the exact position of the car. In this view, the rectangular car is where the GPS thinks we are.
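The pixel-map construction just described, bucketing laser returns into 10-centimeter ground cells and averaging across passes, can be sketched as follows. This is illustrative only; the real pipeline also subtracts out parked cars and stitches together many drives, and every name here is invented.

```python
import math
from collections import defaultdict

# Sketch of the reflectivity-map idea: bucket laser returns into 10 cm
# ground cells and average the infrared reflectivity per cell, so that
# multiple passes over the same spot blend together.

CELL_M = 0.10  # 10 cm x 10 cm cells

def build_reflectivity_map(returns):
    """returns: iterable of (x_m, y_m, reflectivity) -> {cell: mean value}."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for x, y, refl in returns:
        cell = (math.floor(x / CELL_M), math.floor(y / CELL_M))
        sums[cell] += refl
        counts[cell] += 1
    return {cell: sums[cell] / counts[cell] for cell in sums}

# Two passes over the same spot land in the same cell and average out.
grid = build_reflectivity_map([(1.00, 2.00, 0.8), (1.02, 2.03, 0.6)])
print({cell: round(v, 2) for cell, v in grid.items()})  # → {(10, 20): 0.7}
```

Averaging is also what makes the one-day quirks fade: a truck parked on one pass but absent on the next contributes to the cell's mean only weakly once enough passes accumulate.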
The GPS thinks we're somewhere in the bike lane, and it would be really bad if we tried to drive as if we were there. Each of those little red dots represents a hypothesis about where we think we really are, and the average of those is our conclusion about where the vehicle really is. We're constantly lining up the features we see in the real world with the features we've seen in the map to produce our exact location, much more accurately than you would get from GPS. As I mentioned, we also build maps in 3D. Besides making really cool-looking videos, this is useful for figuring out what kind of acceleration profile we should use as we approach or leave a hill, and also for figuring out whether a point we see in the distance represents a tree, a bush, the ground, or another car. Unless you know what the ground looks like, it's hard to make that distinction. Now that we've figured out where we are, we need to understand what the world looks like. What you're seeing here in the bottom left is a picture of what we see out the windshield. You're going to see some pedestrians walking very slowly across the street in downtown Mountain View, and what you see in the main window is what the car understands about the world. If you look really closely at those pedestrian boxes, you can see some laser points, and that's all we get as raw data. But we have perception algorithms which cluster those and figure out that those are in fact people, and that they are in fact moving in that direction, and obviously we're going to yield to those people, because it would look bad if we didn't. The tracking problem is actually incredibly hard. It would be fantastic if the laser and the radar just told us, hey, there's a bicycle over there, but they don't do that. And what you see here is motorcycles doing lane splitting on 101.
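The clustering step mentioned above, grouping raw laser points into objects like pedestrians, can be sketched with a greedy single-pass grouping. The real perception stack is far richer, and this simplistic version can even split a cluster that a later point would have bridged; it is only meant to show the idea.

```python
# Greedy sketch of point clustering: laser returns that lie close together
# get grouped into one object (say, a pedestrian). Illustrative only.

def cluster_points(points, max_gap_m=0.5):
    """Group 2D points, attaching each to the first cluster within max_gap_m."""
    clusters = []
    for x, y in points:
        for c in clusters:
            if any((x - cx) ** 2 + (y - cy) ** 2 <= max_gap_m ** 2
                   for cx, cy in c):
                c.append((x, y))
                break
        else:  # no existing cluster close enough: start a new object
            clusters.append([(x, y)])
    return clusters

# Two tight returns form one object; the far point is a second object.
pts = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0)]
print(len(cluster_points(pts)))  # → 2
```

Tracking then has to associate these clusters frame to frame and estimate their velocity, which is where the hard cases below come from.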
Lane splitting isn't legal in most states, but in California motorcycles are allowed to drive between two cars, and this is a really hard problem, because if you were to just look at the raw data, a motorcycle next to two cars looks to the laser like a giant wall of metal. What we in fact know is that one of those things is a motorcycle and that it's moving at a faster speed, and it was incredibly difficult to be able to track the motorcycle as it passes near and through these other two cars. Of course, now that we understand the world, we need to drive through it, and this is an example of us driving on a racetrack that we set up, demonstrating the control capabilities of the car. We set up a racetrack and had a bunch of Google employees try to race the car, and in fact we beat every single one of them. A lot of them are good drivers, and I trust that if they had a few more practice laps they would have eventually beaten us. This is not something that we've put a lot of effort into, but it's really impressive that on our first try against these humans we beat them. This is a Prius; it's not a sports car. There's only so much you can do with it, but it is really impressive the precision with which we can reliably control the vehicle. Of course, it's not just about steering; you need to know where to go. What you're seeing here is the end of one of our first attempts at driving a really long route, where we ended up at a roundabout near Lake Tahoe. We're following the path of the roundabout, and we're going to let this other car merge into our roundabout and not hit him, obviously. We didn't tell it what to do after that, though, so it just kept driving in circles. After a couple of minutes this got a little frustrating, and we took manual control of the system. And you can't just drive down the center of the lane.
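The talk doesn't say which steering controller the car actually uses, so as a purely illustrative stand-in for the path-tracking problem, here is pure pursuit, a classic scheme: steer along the circle that passes through the car and a lookahead point on the desired path, using a simple bicycle model. The wheelbase value is an assumption (roughly Prius-sized).

```python
import math

# Pure pursuit, a classic path-tracking rule, as an illustrative stand-in.
# Not the car's actual controller; all parameters here are assumptions.

def pure_pursuit_steering(goal_x_m, goal_y_m, wheelbase_m=2.7):
    """Steering angle (rad) toward a lookahead point goal_x_m ahead and
    goal_y_m to the left, for a bicycle-model car with the given wheelbase."""
    d2 = goal_x_m ** 2 + goal_y_m ** 2   # squared distance to the goal point
    curvature = 2.0 * goal_y_m / d2      # curvature of the joining circle
    return math.atan(wheelbase_m * curvature)

# A goal point dead ahead needs no steering; one off to the left needs some.
print(pure_pursuit_steering(10.0, 0.0))  # → 0.0
```

One nice property for the situations described next: biasing away from a hazard is just a matter of shifting the lookahead point off the lane center, rather than changing the controller itself.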
This is an example of us driving through the foothills near Palo Alto, where there's a massive truck taking up the majority of this two-way road and a giant cliff on the other side, and we didn't... I know the video ends here, but we didn't die. Hey, thanks, yeah, I wasn't in the car. Obviously we're not driving down the exact center of the lane. We're aware of the cliff on one side, and we're aware of the probable trajectory of the truck, and we need to split the difference and drive safely, and that's something we had to put a lot of work into, and it's something you really only notice when you accumulate hundreds of thousands of miles in real-world driving situations. A lot of driving is social. This is most obvious at a four-way stop. I hope all of you know the proper way to do a four-way stop: you stop, wait for your turn, the first person goes first, and then you move counter-clockwise. And if you actually did that, you would never get to go; no one actually does that. What you're seeing here is six cars going through this intersection without giving us a chance to move, and eventually we start to edge into the intersection, which is what any reasonable human driver would do, and having asserted ourselves, we're now allowed to move through the intersection. All of you do this. The other extremely social situation in driving is merges, which I don't have a video of, but it's something we spend a lot of time on. It's really social. A lot of it is making eye contact with the other person, and we obviously can't do that, but an important part of it is just speeding up or slowing down to assert yourself and communicate to the other person where you intend to merge, and we essentially do that process as well. I wish I could tell you that we were done and that there was nothing left that we needed to work on. Obviously that's not the case; I have to go to work after this and work on some of these things. This is the bug-on-the-windshield problem, and it's really hard.
It's hard enough to tell if a traffic light is red or green in the best conditions, but when your view is obstructed, if the windshield is cracked, if something is just messing with your sensor, that's a really hard problem, and that's what we've been spending a lot of our time working on lately: just making sure that we understand those rare situations and that we gracefully hand control back to the human. In our current driving mode we've always got a person there; we're trying to do this very safely, and the question now is how do we detect these unusual situations and hand control back to the human in a safe way. Eventually we'll need to turn on the windshield wiper or something to clear up the bug splatter, or have redundant sensors, but at this point it's more a problem of detection. Another question people often ask is how do you deal with snow? The answer is we've never tried, and we'd probably not do all that well. There are two aspects to that problem. One is the control problem. A lot of people are very proud of how they drive in snow or on ice, and we think we can eventually solve that problem; it's the sort of thing that computers would be pretty good at if you put enough software into it. The other problem is perception. Snowbanks are essentially giant piles of the standing water that causes problems for lasers, and reasoning about the world when there's a mountain next to you that didn't used to be there is going to be a hard problem that we're eventually going to have to work on, but it's not something that we've put a lot of time into yet. Really, right now we're trying to nail down regular driving, and that's a hard enough problem. And then you've got problems where people just do stupid things. What you see now is someone driving up an off-ramp. I don't know what happened to this person, but it doesn't seem like a good idea. And if you drive 400,000 miles, you eventually see a lot of weird stuff.
And that's the advantage of having driven so many miles in real-world conditions, with trained safety drivers ready to take over in the situation where something really weird happens. We can play back these logs later and see what the car would have done in the situations it faced in real time. And you're going to run into, maybe the wrong choice of words, you're going to experience a lot of these situations as you drive in the real world, and that's going to be a big problem for us. We make certain assumptions: that other people value their own lives, that they aren't going to do stupid things. All of you make those assumptions when you get out on the road. If you were being perfectly safe, you would never leave your driveway, because someone could always swerve into your lane, and, you know, that's the end. But we do get in our cars, we do pull out of the driveway, because driving is so important to us. I don't want to leave on a sour note; I'd like to leave on something positive. Earlier I talked about how important it is to save lives and to save time for people who are able to drive. But there are a lot of people who aren't able to drive, and we'd like to help them too. I hope you can all see the full video of this on YouTube, but what we ran was an experiment where we took a blind user, Steve Mahan, and had him commute to and from work a couple of days using one of our cars, and we eventually, with a police escort, put him behind the wheel and had him experience driving himself, so to speak, for the first time in years, and this really was a magical moment for him, and we're hoping that someday we can provide this for people like Steve and really change a lot of lives. I'm really excited about what we're doing. I think it's the future. I wish it were done today, and it's not, but I'm really optimistic. So thank you all for listening, and I have a little time for questions in case any of you have something to ask. Thanks.