I'm Zoz, this is Hacking Driverless Vehicles. Quick self-intro: my academic background is autonomous robots, but I'm probably best known for a little cable TV show I did with Kingpin called Prototype This. We did some autonomous robot stuff, tried to hack together some cool demos, basically. So for example, we did a UAV that would protect unguarded beaches, so you'd press a button if you're out in the ocean in trouble, and it would fly out and drop you a life preserver. Sometimes it also dropped itself on the ground. We learned a lot about unmanned air systems on that episode and a lot about iteration in the design process. We also turned our hand to driverless ground vehicles for one of the most pressing technological challenges in the ground space, which is maintaining the American lead in high-speed pizza delivery. So here's the solution for local delivery, sharing routing space with pedestrians, and here's the long-distance method, operating within the shared automobile network. So this shot right here, this was the first ever autonomous crossing of a US highway bridge. It can be hard to convince human delivery drivers to make this trip, so screw them, let's let the robots do it. Since Prototype This, one of the things I've been doing is hosting autonomous vehicle competitions for RoboNation and the AUVSI Foundation. These are student competitions, university level and below: a ground vehicle competition, two air vehicle competitions, one for boats, and one for submarines, and I'll say a little bit more about this at the end. But I wanna say a few words at the start here about the motivation for this talk. I'm a huge fan of unmanned vehicles. I love robots, I think they're the future, and they are definitely coming because there are so many advantages: energy efficiency, not having to carry a human driver, not having to carry food and water or bathrooms, or go out of its way to acquire them.
Same with time efficiency, not having to deal with fatigue, boredom, taking rest stops, operator changeovers and so on. And all the new applications that are gonna be enabled where it wasn't practical to do it with a human driver. So the revolution is coming, you can't stop it. Even if you want to, it's here. But like everything else that humans have ever made, these systems are gonna be hacked. So I wanna start that conversation now. I wanna start talking about it before the systems are too entrenched for us to go back on decisions. So what I'm definitely not trying to do here is spread FUD, fear, uncertainty, and doubt. So it's not gonna be some kind of alarmist, anti-robot propaganda. You like that one? How about this one? That's not it at all. It's not every presentation where I get to make two Hitler jokes on the same slide. I couldn't resist. Hope it's not too soon. But I think that this revolution is coming, and I really got a sense of this recently. This is footage I shot a year ago at SUAS. This is a vehicle called Fire Scout. It's an autonomous unmanned robot helicopter. And it's doing a take-off run there, and it just looks like it's sitting still on the ground. It's so stable, and you look at that, and you think, yeah, that's how to do it. Let's let the robot drive. So I'm not trying to interfere with this process at all. This is my point. DEF CON is not a security conference per se. It's a hacking conference, a hacking convention. And to me, that's hacking in the old-school sense. Figuring out how things work, what's wrong with them, if there's stuff wrong with them, and how to improve on them. So this talk is about getting people excited about contributing to that conversation. The analysis, discussion, design, and the eventual acceptance of driverless vehicles, right? So you can think of this as a recruitment talk. It's sort of like General Alexander's NSA keynote last year. 
The big difference, of course, being that I don't care, I don't wanna see what you fap to on the internet, all right? You can keep that to yourself. So we're gonna talk about vulnerabilities and interfering with driverless vehicles, but it's out of love. It's kind of tough love, right? It's kind of like when you teach people to swim the Australian way, by putting them in the pool and throwing in a crocodile. This is true. This is how we all learn it. Speaking of managing vulnerabilities, right? So there's the Fire Scout, it's very stable, but notice that the people operating that robot, setting it off, are hiding behind the start cart down here, just in case anything goes wrong. That's a security mindset. So when I talk about exploits and countermeasures, I want you guys to think about counter-countermeasures, right? When I say something, think: okay, here's how we design around that, here's how we fix that problem and increase the robustness of the system. Right, so the unmanned system space is kind of wide. Unmanned basically means that there's no human driver or pilot on board, but it doesn't necessarily mean that you don't have someone off board controlling it or supervising it. So an unmanned system is not necessarily autonomous; you could have supervised autonomy. You might also have a safety pilot on board, or it might be carrying passengers. So there might be people on there, but they just don't have a direct control role. Of course, the military are the early adopters. Most of this field has been dominated by military spending and military applications for a number of reasons, and I'm sure you can guess many of them. And a pretty significant amount of the uptake has been in the airspace. So for example, this is Global Hawk. If you look at Global Hawk flight hours, it's looking pretty much like an exponential curve, right? And that's because it works well. There's a lot of simplifications that apply to the airspace, but they really wanna push these changes down to the other domains.
So 12 years ago, Congress insisted that one third of all operational ground vehicles were to be unmanned by 2015. Now Congress can put up the money and say whatever the hell it likes. It doesn't mean they're gonna get it, right? And we're pretty clearly not gonna make that 2015 deadline, but the will is there. But I'm not gonna talk a lot about military vehicles in this talk, because the capabilities are largely classified, so it's very speculative. They already have a really active interest in resistance to adversarial engagements, right? So they already think a lot about the security stuff. There's a different cost equation. They have the highest quality sensors. And of course, hopefully, most of us will never encounter one of these, unless Edward Snowden is in the audience. Here's just a quick example of a military UGV, a driverless ground vehicle, specifically designed with threats in mind, right? You can see a lot of sensors designed for looking for people. It's got weapons on board. And yet you just have to get close enough to this thing to press one of these kill switches. It's kind of like putting an unshielded reactor exhaust port on a Death Star, right? You just gotta love that. Presumably they'll remember to remove those in the final version. But let's start thinking more about where these things are gonna show up in our backyards. So we've got transportation. Nothing bad about these guys, by the way. We love these guys, they're friends. Not singling them out, but they're doing pioneering stuff in transportation. Oceanography, terrestrial mapping. Filmmaking is big, right? If you go to the filmmaking conferences, you see UAVs everywhere. Some of the weird esoteric ones you wouldn't necessarily think of, like power line inspection with UAVs. And of course, logistics, like the pizza delivery. Lots, lots more. And there are two main priorities that the industry advocacy group has for unmanned systems in the civil sphere.
Precision agriculture. Actually, a lot of combine harvesters now are practically mobile robots, right? Because they can do optimized paths over the fields and so on. And secondly, self-driving cars. This is big, right? Wide applicability over the entire country. Roadblocks to uptake in the civil domain: shared infrastructure, right? They have to share this stuff with humans. It's much tougher if the robots have to interoperate with humans. And hand in hand with that goes public acceptance. Do people trust the safety and robustness? So you don't just have to demonstrate that, you've gotta convince the public, give them a perception of it. Also privacy is important. So now that we're talking about safety and robustness, the fun stuff here is failure. Let's take a look at that. So here's a couple of classic failures. First of all, a UAV failure. This is the RQ-3 DarkStar. There's a surviving example in the Smithsonian Air and Space in Washington, DC. It was supposed to cost $10 million per unit, ultimately, right? That's for units 11 to 20; the first couple were very expensive. The first prototype failed on its second flight test, on its second takeoff, actually. Now I've seen the video of this crash with my own eyes, but I was not able to obtain it for you guys. I apologize for that. So you're just gonna have to imagine what happens here. Here's a non-crash RQ-3 takeoff. So just extrapolate from that and imagine if it didn't take off like that: it's coming down the runway, and it starts to wobble up and down and oscillate, and these oscillations get bigger and bigger, and eventually it pitches nose up and comes down hard, and it's a huge fireball and millions of dollars down the drain. There's the quote from the Journal of Unmanned Systems Engineering about what happened. Here's what happened as it was explained to me by researchers involved: they had modeled the takeoff run with the flight control system on an asphalt runway.
The second flight takeoff was on a prefabricated concrete runway. And those gaps, those cracks between the prefabricated concrete panels, were underdamped by the control system, and they set up this oscillation that eventually caused the failure of the vehicle. Just those small impulses to the inertial sensors. So the moral of that story is that the expectations of the designers are critical. Even a seemingly trivial detail like the runway composition can mean the difference between success and failure of these systems. And so if there's gonna be exploitation, there's a good chance that it's gonna happen at these cracks between the boundaries of the designers' expectations. Here's the second example. This was the favored vehicle to win the first DARPA Grand Challenge back in 2004, the desert race from Los Angeles to Las Vegas, fully autonomous. This vehicle was done by a CMU offshoot called Red Team Racing, and it got a few miles, seven miles or something like that, before it took a hairpin turn wrong and ran off the side of the road. Its engine caught fire and it was all over. While we wait for that video to catch up, let's see if we can play both of them at the same time. Hey, look at that. There it is, failed. What went wrong? Apart from the fact that diesel engines don't like running on weird angles. They had a huge team, and they had extensively mapped the course beforehand. Even though they only got the final route two hours beforehand, they knew every road in the area very precisely; they had people walking that course with GPS receivers. And one of the people on that team told me that their map was so good they could have just about made it on map data alone, and their big problem was they paid too much attention to their other sensors. If they'd just ignored the laser rangefinder on that curve, they would have got through just fine.
So the moral of the story there is that the robot faces a constant battle deciding what it knows best, what information's reliable and what isn't. So correctly estimating the state that you can't observe, that hidden state, is the key to all decision-making under uncertainty. So hacks and exploits have some of their best chances of succeeding if they subvert or undermine that state estimation process. So let's take a look at some of these logic structures at a high level. Just like with humans, we can think about the behavioral logic of a robot in a hierarchical fashion. So at the bottom, we have the control loops, stability maintenance. This stuff typically runs independently at a high cycle rate. It might run on a completely different computer from the other stuff. And so sometimes I'll see a robot that's completely crashed. It's not doing a damn thing, but it's maintaining perfect stability in the water or in the air. It's just hanging out there. Above that, you might see something like collision avoidance, right? That's preservation of the robot. It's kind of like, you know, Asimov's third law, I think. Taking precedence over everything except low level control. Then above that, navigation and localization, right? So that's part of the mission. It might even be all of the mission if it's just a navigation mission. And then above that, high level mission task planners, reasoners, stuff like that. So what can we take away from that arrangement? Well, first of all, there's an implicit dependency, right? So if you attack at a lower level in the hierarchy, you can defeat everything above it. If the robot can't maintain functionality there, then it definitely can't at the higher levels. So I like to think of it, you know, for those people who have office jobs: no one thinks about filling out their TPS report while they're actually being kicked in the balls, right? You can go back to work and try that and see if it's true.
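That bottom-up dependency can be sketched as a subsumption-style priority arbiter: the lowest layer that wants control wins, so defeating a low layer starves everything above it. All the state fields, thresholds, and command names here are invented for illustration, not taken from any real vehicle's code.

```python
def stability(state):
    # Lowest layer: fix attitude before anything else gets a say.
    return "correct_attitude" if abs(state["roll"]) > 0.1 else None

def collision_avoidance(state):
    # Self-preservation, Asimov's-third-law style.
    return "swerve" if state["obstacle_range"] < 2.0 else None

def navigation(state):
    return "drive_to_waypoint" if state["waypoint_range"] > 1.0 else None

def mission(state):
    # Highest layer only runs when everything below it is satisfied.
    return "execute_task"

# Layers ordered lowest (highest priority) first.
LAYERS = [stability, collision_avoidance, navigation, mission]

def arbitrate(state):
    # First layer that wants control gets it; attack a low layer and
    # nothing above it ever runs.
    for layer in LAYERS:
        command = layer(state)
        if command is not None:
            return command
```

So an attacker who can keep `stability` permanently unhappy (those runway impulses, say) never has to touch the mission logic at all.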
But secondly, more engineering effort might have been spent guaranteeing robustness at those lower levels. Stability is super important, but getting lost once in a while, you might be able to recover from that. So the lower layers might be juicier attack targets, but they also might be better defended and harder to find bugs in. So a couple of examples that I mentioned before from Prototype This, just looking at the way things were arranged. The life-saving drone had an autopilot that did all the stability maintenance, with low-level control loops for all of the basic airworthiness, tunable for the different environmental conditions you might expect to encounter. Nothing in the way of collision avoidance. This is missing from just about all UAVs, and that's one of the things that needs to be implemented and designed in order for that shared-airspace arrangement to happen and for UAVs to be approved for shared airspace. Navigation and localization is GPS-based, with waypoints, and this also involves control loops, PID loops, controlling how the aircraft approaches those waypoints and how it decides whether it's hit one or not, or whether it should go back around for another try. And then at the top, we had our kind of bombing run planner that would actually set up a temporary waypoint to give it the best possible approach path so it could do that impact point estimation. So the system was, of course, fully vulnerable to collision, because there was no effort to not be. And the high level logic depends on one single sensor, the GPS. So that single point of failure is a big vulnerability, and they're actually really common in the robot sphere. Local pizza delivery has to have all kinds of control for stability maintenance because it's balancing on two wheels, and it's got to do weight shifting for when the pizza gets removed and the center of gravity changes. Lots and lots of collision avoidance. That's pretty much almost everything that the system does.
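The low-level control loops that keep coming up here (the waypoint-approach PID loops on the drone, the balance control on the pizza robot) are mostly variations on PID control. Here's a minimal sketch; the gains and the toy 1-D vehicle model are invented for illustration, not from either vehicle.

```python
class PID:
    """Minimal PID controller of the kind used in waypoint-approach loops."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        # Accumulate the integral term and estimate the error derivative.
        self.integral += error * dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy 1-D vehicle driving toward a waypoint at 10 m (illustrative numbers).
pid = PID(kp=1.0, ki=0.0, kd=2.0)
x, v, dt = 0.0, 0.0, 0.05
for _ in range(400):
    accel = pid.step(10.0 - x, dt)  # error = remaining distance to the waypoint
    v += accel * dt
    x += v * dt
# x has now settled at (approximately) the waypoint
```

With these gains the loop is critically damped, which is the kind of tuning those runway impulses on DarkStar evidently did not have for concrete panel gaps.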
The main strength is dealing with those dynamic obstacles that aren't on the map. At the high level, navigation is by route planning from a map that's pre-generated using simultaneous localization and mapping (SLAM). So that's where the discrimination between static and dynamic obstacles happens, and then the high level task is, again, a simple one: it's just to dispense the correct pizza when the correct credit card gets given to it. So this kind of system's vulnerable to redirection, trapping, and map confusion attacks, right? All those things that attack where the robot thinks it is. And of course, you can always try and get the pizza out if you didn't pay for it, you know? So that's the mission stuff. So now that we're thinking about that logic hierarchy, let's look more generally at the upper end of the hierarchy and what kind of logic's going on there. Usually, some form of state machine represents the mission and the eventualities that the designer has envisaged. So the robot considers itself to be in some state, and then at each decision point, it can either stay in that state or transition to a new state. And eventually, you get a directed graph of all these states. They define what logic the robot runs at any given time. So these states may correspond to tasks, and the transitions may be task completions, or they might be context switches caused by things like priority shifts, or they might just be simple timeouts, anything like that. And as represented here, the states may contain subordinate states, and reasoners and planners and so on. And so for the math people watching, if this looks kind of like a Markov chain, there's a good reason for that. These feature a lot in robot control systems. The thing to be aware of, though, with these state machines, is that the state machine is not necessarily deterministic, right? Just because we think we're in a state doesn't mean that we're actually there.
And so there's this hidden state that the robot can't necessarily observe and has to figure out, and that's where things get tricky. So to put some labels on this stuff and not have it be totally abstract, here's the RoboSub mission. I've chosen this because it's kind of linear, and it's easily broken down into these various mission states. So first the sub has to navigate through a start gate, and this is often done open loop, right? You just point it in the right direction and drive it for a certain amount of time. But then you have to start making decisions. The sub's got to start looking for a buoy and then trying to touch it. So it's got to decide: can it see the buoy? Has it touched it yet? Should we try again? So you've got some choices, and then you've got to start looking for a path on the bottom of the pool, and you've got more choices once you find it. All the different subtasks you might want to do. The obstacle course. Identifying these targets and dropping markers on them. Finding the torpedo targets and firing torpedoes through them. And then there's an underwater manipulation task. So you've got to determine the state of that in terms of finding it, and then also whether you've managed to complete that task or whether you've managed to make any progress on it. You can time out from all of those tasks, right? So a lot of these are vision guided, but at a certain point you can transition from anywhere in the graph to the final task, because it uses a completely different sensor: the hydrophone localization task. So you've got to find this pinger and go and retrieve a package. So looking at this from the point of view of second-guessing the designers, where are the vulnerabilities and potential exploits? Oh, there's the package. They may be in the state estimation, right? What does the robot think it's trying to do versus where actually is it? The transitions between states: can we spoof them or prevent them from occurring?
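To make that transition-spoofing question concrete, a RoboSub-style mission like the one above boils down to a transition table plus events. The state and event names below are paraphrased from the mission description; the structure is a generic sketch, not any team's actual code.

```python
# (state, event) -> next state; timeouts fall through to the next task.
TRANSITIONS = {
    ("gate", "gate_passed"): "buoy",
    ("buoy", "buoy_touched"): "path",
    ("buoy", "timeout"): "path",
    ("path", "path_found"): "obstacle_course",
    ("path", "timeout"): "obstacle_course",
    ("obstacle_course", "done"): "marker_drop",
    ("obstacle_course", "timeout"): "marker_drop",
    ("marker_drop", "done"): "torpedoes",
    ("marker_drop", "timeout"): "torpedoes",
    ("torpedoes", "done"): "manipulation",
    ("torpedoes", "timeout"): "manipulation",
}

def step(state, event):
    # The hydrophone task uses a completely different sensor and is
    # reachable from anywhere in the graph.
    if event == "pinger_heard":
        return "pinger_search"
    # Unrecognized events leave the state unchanged -- and that gap
    # between the state we think we're in and reality is where the
    # exploits live.
    return TRANSITIONS.get((state, event), state)
```

Spoofing a transition is then just manufacturing an event: fake a pinger and the sub abandons whatever it was doing, from anywhere in the mission.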
Or are there bugs in the states themselves? Just unexpected conditions or results. And here's a key thing. When designers watch the robot in action, they don't necessarily know, even though they programmed the whole thing, why it's doing what it's doing. They can only kind of guess from the output, right? Because there's so much complexity going into a single output. So until you see the logs, you don't necessarily know. So the would-be exploiter also has to put themselves in the mind of the designer and think about what they might have been thinking about and what they might have got wrong. But I don't want to talk about attacks that would work on any vehicle, even human-driven vehicles, because there's no point in that, like digging a big pit and perfectly camouflaging it. So I want to talk about relevant physical attacks: attacks on the robot's input mechanisms. The input mechanisms are the sensors. They can be active or passive, which is an important distinction that we'll cover as we go. And some common examples are GPS, of course, for terrestrial vehicles, laser rangefinders, cameras, millimeter wave radar, digital compasses and inertial measurement units, the wheel encoders, and then for the specialty vehicles, like the subs, we have things like Doppler velocity logs, scanning sonars, and pressure transducers for the air and subsurface, like barometric altimeters, for example. In addition to all of this, there's the map, and we'll definitely talk about that too. So sensors don't give a perfect picture of the world, just a best guess, and you've got plenty of sources of uncertainty. We all know about noise, of course. It's a constant battle. Associated with noise is drift. Latency and update rate come into play. So when we were doing the life-saving drone, we had a one Hertz update rate on the GPS, with an unknown timestamp on each fix.
So that meant that the vehicle could be anywhere within about a 70 meter distance by the time we got that position estimate. And when your position estimate can be invalid to the tune of up to 70 meters, that makes it hard to do a five meter precision bombing run. And you have to model these uncertainties under various assumptions. So you need to know what the underlying noise models of the sensors are. So a lot of people don't use GPS because they can't get the noise model of the GPS from individual units, so they can't develop a noise model that's super reliable. You might think that fusing sensors together might be more useful than a single sensor, and that's true in many cases. Fusing sensors and registering them together can be a lot more useful than taking the separate sensors. But what do you do when the sensors disagree? Which one do you trust, and how much? So the robustness of the robot in the end may come down to how smart it is at discounting one single bad or spoofed sensor, even though it might have a whole suite of sensors on board. So let's look at sensor attacks, two basic kinds. Denial: basically, preventing the sensor from recovering any useful data. And then spoofing: causing the sensor to retrieve information that's specifically incorrect, that the attacker wants it to retrieve. And then you've got a basic attack mode choice here. You can either directly attack the sensors, so give them instantaneous bad data, or you can try and mess with the aggregated sensor data that they're accumulating over time, to lead the robot into long-term poor inferences. So I'm gonna quickly go through most of the common sensors here and leave the specialized domain sensors for another time. GPS, we know, is a major reference for vehicles that have access to it in the atmosphere. Denial is straight up jamming. You can buy a GPS jammer from a number of sketchy Chinese websites.
You can also find plans for them online if you wanna build your own. And it's basically throwing a big bucket of RF noise at the GPS frequencies, right? The transmissions from the satellites are weak, so you just overpower them. You can also spoof them. You need to generate fake GPS satellite signals at a higher power than the satellites themselves in order to override the receiver. And this has been demonstrated with UAVs. This is a group from UT Austin demonstrating taking over a UAV using GPS spoofing. So the attacker on the left is broadcasting a GPS signal that is at a higher power, and first it's aligning to the GPS signal that the UAV is really receiving. And those three dots are matched filter trackers on that output, finding that peak. And then you start to move your signal off, and you take the tracking points with you. And so now you're convincing the UAV that it's in a position that it's not really in, and its flight control system is trying to correct for that and moving it somewhere else. So here's some video of doing it to a real UAV. They're gonna take control over it and convince it via GPS that it is moving upwards at a certain speed. And the flight control system is gonna try and account for that and move the helicopter down. And you can see, if the safety pilot doesn't take over here, you could drive it straight into the ground. This technique was claimed to have been used by the Iranians when they brought down an RQ-170 surveillance UAV. There's a picture of it captured on display in Tehran. They said they spoofed it into the ground. This is widely believed to be a model, because the original crashed. But it seems unlikely to me, because military systems use crypto GPS, and the military don't rely on GPS alone for UAVs; jamming is a threat, but spoofing is much less likely for military GPS. But we do know that for civilian systems, GPS is easily jammed due to that weak amplitude of the real signals, and easily spoofed too.
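Coming back to the sensor-disagreement question from a moment ago, the simplest defense against one spoofed channel like this is robust fusion that discounts the outlier before averaging. This is a toy sketch: the median gate, the threshold, and the readings are all invented, and a real system would use a proper filter with per-sensor noise models.

```python
def fuse(readings, gate=3.0):
    """Fuse redundant position estimates, discounting outliers.

    Readings far from the median (e.g. one spoofed GPS fix among an
    IMU, odometry, and map-matching estimate) are excluded before
    averaging, so a single bad sensor can't drag the estimate away.
    """
    ordered = sorted(readings)
    median = ordered[len(ordered) // 2]
    trusted = [r for r in readings if abs(r - median) <= gate]
    return sum(trusted) / len(trusted)

# Three honest sensors near 10 m, one spoofed GPS claiming 55 m:
fused = fuse([10.1, 9.9, 10.0, 55.0])  # stays near 10, spoof discounted
```

Of course this only works while the honest sensors outnumber the spoofed ones; walk the spoof in slowly enough to stay inside the gate and you're back to the aggregated-data attack.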
So in the civilian realm the GPS is used as a primary sensor, unlike the military. You take the GPS and maybe resolve the last couple of meters with your laser rangefinders. So it's important to point out here that those sensors all have filters on them, right? So the GPS will drag the vehicle slowly onto an incorrect trajectory rather than just snapping it off its path. So here's some Grand Challenge footage from 2005. Again, without access to the logs it's hard to know exactly what's happening here, but because there's nothing on the road it looks like we're seeing some GPS drift off the road that's then being corrected by the laser rangefinders. So that's typical of what you might see when something's relying on GPS. Here's another DARPA Grand Challenge example. Looks like a GPS guided run that's drifting off the road here. Who knew that a van could drive over a Jersey barrier? Something maybe for the next Defcon Cannonball run to keep in mind. Okay, next up: laser rangefinder, or LiDAR, right? It's basically a scanning laser rangefinder. Originally a sensor for industrial automation, but then the robot guys got hold of them and were like, oh, this is awesome, right? So we're gonna start using these outside. So it's mechanically scanned by a rotating mirror, and it essentially measures time of flight, right? So it's an active sensor. It depends on the return signal coming back. These are primarily used for collision avoidance and map making. I don't know why I do these bullets because I always forget to advance them. They return a point cloud of reflected distances within the laser range. So you can do denial on them by actively overpowering them, or by preventing a return signal with dust, smoke, mist, that kind of stuff. You can also spoof them by manipulating the surface absorbance of the things that they're looking at.
So basically manipulating absorbance and reflectivity to give the receiver specifically incorrect information about what it's looking at. LiDARs are often 2D sensors, so they're highly orientation dependent depending on how they're mounted, right? They're often mounted in a push broom configuration, looking down at the ground in front. What that means is you're looking for obstacles really nicely, but if the ground slopes up, it can just look like a brick wall. So you can see here some output; this is from the pizza delivery. On the left is the LiDAR output as the robot comes along. And you can see that basically you get these ranges stopping when you see an obstacle. And if you look over there, as it sort of turns off to the right where the street slopes up, you can see that range just drop away to nothing. It kind of looks obstacle-like. In addition to that, if it's shooting down over the top of a low obstacle, or particularly a discontinuity like a curb, then it can miss it entirely, right? So it can fall in a ditch if the ditch is oriented right. It's an active emission sensor, so it only knows about the active signal returned to the receiver. So no return means that it assumes that nothing is there. Think about that. Over the horizon and out of range return no signal. So most of the world returns no data. So what that means is that things that absorb at the laser frequencies look exactly like nothing. And things that are transparent at the laser frequencies also look like nothing. So if you were to paint an absorbent tunnel on a wall, right, it's just like Wile E. Coyote. The robot would not see that. Similarly, if you were to make obstacles out of glass, it sees right through them. Now of course glass is transparent, but also reflective. So there's a limit to this. It might miss a bottle, but it's probably gonna see a cowboy.
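That "no return looks like nothing" behavior can be sketched as a naive beam classifier: in this toy model, a beam with no echo reports max range and gets labeled free space, so a laser-absorbent wall is indistinguishable from open road. All the ranges and thresholds here are invented for illustration.

```python
MAX_RANGE = 30.0  # meters: beams with no echo report max range

def interpret(scan, expected_ground=10.0, tol=2.0):
    """Naively label each push-broom beam from its reported range."""
    labels = []
    for r in scan:
        if r >= MAX_RANGE:
            # No echo: absorbent, transparent, or specular surfaces
            # all land here and get treated as empty space.
            labels.append("free")
        elif r < expected_ground - tol:
            # Return much shorter than the expected ground hit.
            labels.append("obstacle")
        else:
            labels.append("ground")
    return labels

# Normal ground, a real wall at 4 m, and a laser-absorbent wall (no echo):
labels = interpret([10.0, 4.0, 30.0])  # ["ground", "obstacle", "free"]
```

The same logic cuts the other way: a specular puddle that bounces the beam into the sky also produces max range, so instead of an invisible wall you get a phantom hole.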
Some new LiDARs also have multi-echo suppression, so that if you have glass that's really close to them, they're specifically designed to ignore it, so they don't get confused by glass over the sensor. It's all about what gets returned to the sensor. So reflective things confuse it. For example, a puddle on the road. It's very reflective, and it can make faraway things like obstacles look near if the angles match up, right? So this is something that the robot actually has to deal with. Or if there's nothing to be reflected, the signal goes out into space, and that lack of return at the puddle makes it look like a big hole in the road. And it's not just water and puddles. Even fresh asphalt can fail to give a return and look like a big hole in the road. One of the DARPA Grand Challenge vehicles, I won't say which one, ran into a brand new black SUV, just plowed into it, because it was so shiny and reflective and recently washed that it looked like nothing. So even a new car can be a problem. That's what the millimeter wave radar is for. So you can definitely use reflective surfaces to make things look like a ditch that the vehicle won't consider safe to drive over, so it will have to take a different route. People out there, the bad guys out there, have good reason to try and use these kinds of techniques. This is just something for fun. I found this in my travels on the internet. These are documents from al-Qaeda in the Arabian Peninsula that were captured from an al-Qaeda offshoot in Timbuktu in Mali. And you'll have to trust me on what it says unless you read Arabic. But this is item two. It's referring to a Russian-made Rakhal GPS jammer. This is a document on how to avoid UAVs and drone strikes. And then item three advises them to place reflective plates on their vehicles to reflect off the laser designator, just to make the missile miss slightly, right? That could be the difference between life and death.
Of course, for this to work, you'd need a material that was reflective at the laser wavelength. So who knows whether these guys can make it work or not, but it's on a list of techniques. Laser reflectance is also a feature. The road mostly gives a decent return, unless it's that fresh asphalt. But the white lines are quite reflective, so they look like gaps in the road. So they actually use this to do road line detection. So a fun consequence of this is that you could make fake road markings in a way that's invisible to the human, like black on black, but the robot is gonna see them perfectly. So you could paint some black on black swervy lines, for example, and the human does not know why the robot is swerving all over the road. Or you could even, you know, try to be a little more kind about this and leave the lines as they are, but do your black on black as hidden messages for the humans back at base, for when they go and look at the map. Cameras are used as well, but not as much as you might think, because vision is really hard. Specialized object detection sometimes; sometimes stereo is used to get a depth map, but it's noisy. Often what people will do is colorize the LiDAR data with the cameras, right? So you're registering your laser rangefinder data to the color information from your camera. Why isn't this video going? Come on, come on. All right, here we go. So this is DARPA Grand Challenge stuff from '05. This is Stanley. The way that Stanley drove so fast is that it used its lasers to get an idea of what was road and what wasn't road. And then it used the camera information to match colors and say, all right, everything that looks like road in front of me, I'm gonna extrapolate that based on color out to the horizon, and that's where I can drive, and I'm just gonna compute my path based on that. That was a really nice technique. Of course, cameras are easily dazzled and subject to blinding attacks, just like we always talk about with anti-surveillance stuff.
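That color-extrapolation trick can be sketched in a few lines. Grayscale values stand in for a real color model, and the numbers are invented; the point is that the camouflage attack works precisely because a road-colored obstacle passes the similarity test.

```python
def road_model(confirmed_road_pixels):
    """Model 'road' as the mean value of pixels the LiDAR confirmed as road."""
    return sum(confirmed_road_pixels) / len(confirmed_road_pixels)

def looks_like_road(pixel, model, tol=15.0):
    """Extrapolate: anything close enough to the road model counts as drivable."""
    return abs(pixel - model) <= tol

# Laser-confirmed road patch just in front of the vehicle:
model = road_model([100, 104, 96])

distant_road = looks_like_road(103, model)      # True: extrapolated as drivable
off_road = looks_like_road(200, model)          # False: don't drive there
painted_obstacle = looks_like_road(98, model)   # True: road-colored obstacle passes
```

So the assumption being exploited isn't in the camera at all; it's in the inference that color similarity implies drivability.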
You can also do spoofing with it, right? Because just like camouflage works for the Mark 1 human eyeball, you can use camouflage techniques. You can mess with those color assumptions, right? If something's saying, all right, there's stuff in front of me that looks like road, it's that color, and everything that's that color is road, well, you just make your obstacles out of road-colored stuff. Repeating patterns and tessellations confuse the hell out of stereo cameras, because they don't know what matches up with what. Next sensor: millimeter wave radar. This is being used a lot for vehicle applications to do that collision avoidance stuff. You've probably all seen millimeter wave radar in one form or another, because this is the stuff at the airport that shows off your junk to the TSA. Primarily used, as I said, for collision avoidance, looking for things that reflect the radar well, like signs and other vehicles. Lower resolution than the LiDAR, produces kind of fuzzy images, and it lives in kind of a weird world where everything's a mirror, right? Lots of stuff is very reflective, so you can't use it a lot for fine decision-making. Like any radar, you can confuse it with chaff, right? Spitting out things that reflect the radar signal. It also gets a big return from things like signs. So for an overhead sign, the robot might be programmed to ignore that as an obstacle because it's getting such a huge return from it, but if there happens to be a dynamic obstacle underneath it, it might miss it. Next, IMUs and compasses. IMU stands for inertial measurement unit, so basically you're integrating the output of accelerometers and gyros. This is the primary navigation sensor for a lot of systems, because they can be very, very robust and because they can be very resistant to any kind of spoofing or attack. You can get everything from high-fidelity models to these hobbyist ones that are often pretty noisy.
A commercial aircraft IMU, or a commercial robot IMU, like a Boeing 777 IMU, has a cumulative error of about 0.1% of the total distance traveled. So they're used on a lot of these Arctic UUVs and things like that, because what it means is you travel 300 kilometers, and when you get to the destination and pop up and get a GPS fix, your cumulative distance error is about 300 meters. That's really easy to deal with. Very difficult to interfere with, because they're all on board, fully encapsulated; you're just recording what the robot feels, which is why the military systems depend on them. However, the compasses especially are very susceptible to magnetic fields, so there's potential for physical attacks with magnets. Another part of doing this dead reckoning for ground vehicles is wheel odometry: basically encoders on the wheels giving you rotation information that you can integrate up. They're actually a really key component, because they give you good speed information relative to the ground and they let you know when you're stopped for sure. It's one of the only sensors that can tell you for sure that you're stopped, which you might not otherwise be able to do, especially if you're in a tunnel, right, and you can't use GPS. They're so important that, actually, when we did our pizza delivery, the run got interrupted at the end because the vehicle took a turn tight and scraped off the wheel encoder. And that was bad enough coming off the bridge that we had to rescue the vehicle at that point. There it is, hanging off the side, very sad. So there are some things you could look at to increase the wheel odometry uncertainty, or just to remove the encoders entirely: odometry drift by changing the wheel diameter slightly, for example, if you've got physical access to the vehicle; slippery surfaces might cause drift in the odometry; and of course, when you remove them, potentially unpredictable behavior or stoppage.
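As a sketch of why a small wheel-diameter change matters, here's a toy differential-drive dead-reckoning loop (my own illustration, not any vehicle's real code): every distance estimate is scaled by the assumed circumference, so a 2% diameter error puts the position estimate off by 2% of distance traveled, the same percentage-of-distance behavior as the IMU drift above.

```python
import math

def dead_reckon(ticks_left, ticks_right, wheel_diameter_m,
                ticks_per_rev, track_width_m):
    """Integrate per-step encoder tick counts from a differential-drive
    robot into an (x, y, heading) pose. The meters-per-tick constant is
    where a wrong wheel diameter silently biases every estimate."""
    m_per_tick = math.pi * wheel_diameter_m / ticks_per_rev
    x = y = heading = 0.0
    for tl, tr in zip(ticks_left, ticks_right):
        dl, dr = tl * m_per_tick, tr * m_per_tick
        heading += (dr - dl) / track_width_m   # differential turn
        d = (dl + dr) / 2.0                    # average forward travel
        x += d * math.cos(heading)
        y += d * math.sin(heading)
    return x, y, heading
```

Running the same tick stream with a diameter that's 2% too large yields a pose exactly 2% further along, which is the "change the wheel diameter slightly" attack in miniature.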
So now we've talked about all these physical attacks we can do; it's sounding like a James Bond car package, right? We've got our GPS jammer to knock out the absolute localization. We've got smoke, dust or vapor ejectors to confuse the LiDAR; the IR lasers perform particularly badly in mist. We've got chaff dispensers for the millimeter wave radar; it could be sparse enough that a human wouldn't see anything, but these fine metal particles make the robot just stop suddenly all the time. Glass caltrops; if you're James Bond, you have to have caltrops. And of course, an oil slick to prevent the encoders from telling when the car's really stopped. So it's nice if you've got a James Bond budget for your countermeasures, and of course an Aston Martin to put them on. But there's another really important thing to talk about besides all these sensor attacks: the map. The old-school mobile robots went into the world pretty much knowing nothing about it and just used their sensors; lots of emphasis on doing SLAM, using the sensors to build up the map as they went along. But that aggregated map data is now so cheap and ubiquitous that there's a huge emphasis on pre-acquired map data. Think about, for example, what Google does as another huge part of their business: mapping. Here's an example video of mapping with sensor data; this is the kind of map that we created for the Treasure Island pizza delivery. The map is so comprehensive that it's often treated as the ground truth, right? It's really powerful, because it reduces the recognition load on the robot in real time. The robot can instantly map its sensor data to static features such as traffic lights, trees, vegetation, even speed control and traffic signs, speed bumps, stuff like that. But reliance on one single thing, even a big thing like this, can also be a weakness, a single point of failure.
So there are potentially all kinds of things that we can make use of if they're relying too much on this map. Traffic lights, for example. This is how some vehicles locate traffic lights. Robustly locating a traffic light is hard, right? They could be anywhere that you can see, and you've got to do vision to find them. And you've got to be 100% right: you can't just have your robot go around occasionally blowing through a red light. The robot has to get it completely right, because you're on the same road as humans. If it were just robots on the road, maybe you could deal with that kind of thing, but the human is expecting that if the light's red, that guy is going to stop. But if you've got a map of every single traffic light, and it's registered to your GPS, then from anywhere you see it, you know exactly where to look for that traffic light. Then detecting the color of the traffic light is trivial: you know where it is, you just have to look for that blob. But now you've got a potential schism between the human's assumptions and the robot's assumptions, because the human assumes that the robot can see the traffic light under any conditions, but the robot assumes that the traffic light is exactly where the map says it is. So if for some reason that traffic light gets moved, shifted around or altered, human drivers have no problem with that; they're going to see it fine, it's going to be fine. But the robot isn't going to recognize the new state. Vegetation detection is another example. Let's say the robot has some kind of rules for determining what's vegetation and what kind of vegetation it is. So you might have some blob of vegetation, and you've got your colorized LiDAR, your laser range finder plus your camera, looking for green stuff. Or you might have some kind of transmission classifier: how much of the laser is coming back, because the vegetation is not 100% reflective.
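As a sketch of that "just look where the map says" trick, here's a toy version (the map structure, the pinhole camera, and all names are my invention): the robot projects the surveyed light position into the image and classifies the color of a tiny window around it, with no full-image search, which is exactly why the scheme breaks if the physical light moves.

```python
import numpy as np

# Hypothetical surveyed map: light id -> world coordinates (x, y, z) in meters.
LIGHT_MAP = {"5th_and_main": np.array([120.4, 88.1, 5.2])}

def project_to_pixel(world_pt, cam_pos, focal_px, center):
    """Toy pinhole camera looking down +x with no rotation; a real
    system would use the full vehicle pose and camera calibration."""
    rel = world_pt - cam_pos
    u = center[0] + focal_px * (rel[1] / rel[0])
    v = center[1] - focal_px * (rel[2] / rel[0])
    return int(round(u)), int(round(v))

def light_state(image, u, v, win=2):
    """Classify by the dominant channel in a small window around the
    map-predicted pixel -- detection reduces to a trivial color check."""
    patch = image[max(v - win, 0):v + win + 1, max(u - win, 0):u + win + 1]
    return "red" if patch[..., 0].mean() > patch[..., 1].mean() else "green"
```

Note the brittleness: `light_state` samples whatever happens to be at the predicted pixel, so relocating or occluding the real light leaves the robot reading the wrong patch of sky.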
And you know that if it's on the ground, then it's grass and you can drive over it. And you've got all your trees mapped out, they're all in the map, so you know where those all are, and you treat them as static obstacles, so it's no big deal. But since the last time you drove that route, the trees have grown and the foliage is overhanging the road, and suddenly the robot spots it as dynamic obstacles: here's this vegetation right in front of me. And the robot's going crazy and stopping everywhere, even though it's this light vegetation that it could just drive right through. So the meta point here is that the rules for guessing what things are that the human designers have come up with are often pretty brittle. They represent the best efforts the designers have been able to make at designing acceptable tests. But when you have this great thing that looks like the truth, in the form of the map, you come to depend on it, and dependence on the map may exacerbate the brittleness of these discrimination rules in a way that opens the door for exploitation. One more thing about the map: you've got to have constant updates. This video is an example simulation of what robot vehicles could do at an intersection if they have completely reliable real-time local information, right? There's no need to stop, they just fucking go through the intersection. Totally terrifying, right? So you could do that kind of thing if you have that real-time map update. But if you have a local map, then you can't do that kind of stuff, and you're vulnerable to these unexpected real-world features, things that pop up in the real world that aren't on the map. And if you have a remote map, then you're vulnerable to all those attacks on the network, right? So you can do denial, jamming the 4G map updates, and also spoofing: you can man-in-the-middle the map data as it comes through, all the cellular intercept techniques that we're familiar with from other parts of DEF CON.
So looking at some of the general vulnerabilities here, let's talk about the overarching logic structures again and how we might craft an exploit. We want to maximize the uncertainty facing the vehicle in order to cause mission failure. Some of the maneuvers that a vehicle needs to do when it can only do onboard sensing are more uncertain, and therefore more fragile, than others, simply because of the geometry. One example is a right turn on red: you've got oncoming traffic from the left, and the view could be blocked by other vehicles, the same problem a human driver has. So the robot is necessarily gonna be more cautious here, and this provides an opportunity to trick it. We might want to force the robot to require manual assistance, right? To be unable to continue without supervision. We might want to confuse or annoy the occupants so they abandon robot vehicle transportation; even regularly dropping the vehicle back into manual mode might do that. Or inconvenience the other road users, right? If the robot stops and blocks traffic, you've got robot road rage. So getting back to these fragile maneuvers like the right turn on red: if you can make it too uncertain, so that the vehicle sits there and blocks traffic, or it ventures out at the wrong time and gets T-boned, then that's the vulnerability to be exploited here. Now, if you have physical access to the vehicle, you can do these kinds of physical attacks on the logic, kind of like a 21st-century version of slashing the tires. Obviously highly dependent on the configuration and mission. But if you have the ability, for example, to get near the compass or the millimeter wave radar and stick on a device that has a strong electromagnet and also a 4G modem, you could figure out where on the map the vehicle is doing a fragile maneuver and mess with its compass at just the right time.
An obvious style of attack is redirecting the robot somewhere away from where it's supposed to go, or even trapping it in a spot that it can't get out of: an attack on the collision avoidance and navigation layers, forcing it to postpone its high-level tasks. So you can have obstacles that move: you can force the robot to stop if you can put obstacles around it so that it can't get out, or you can have moving obstacles that guide the robot off the path, somewhere you wanna take it. You can even have obstacle swarms of other robots. Artificial traffic lights are another one. The robot's depending on the map, so you can't just put up a fake traffic light, but if you can use the real traffic light and modify it so the robot thinks it's in a different state, then a human would figure it out right away, but the robot has to stop. Another general task- or mission-failure attack is clobbering. This is a term from the cruise missile world: basically you make the robot run into something, like a piece of terrain. So you're subverting its collision avoidance, ultimately to incapacitate the vehicle, perhaps. You might wanna completely crash it into something, or you might wanna scrape off sensors. You can do this with subtle deviations from the map, changing things on the map, especially near those fragile maneuvers, or by changing things post-mapping. You can do it by imitating light vegetation, so it thinks it can go through something but it can't. Simulating obstacles at speed, so the robot has to stop suddenly or swerve and perhaps run into something else. Disguising entrances and walls, like the fake tunnel: putting reflective and absorbent materials within the localization noise so it goes too close to one side, or an overhanging piece that scrapes off the top sensors. Or obstacles, as I mentioned, underneath big radar reflectors that the robot is normally programmed to ignore, like big overhead traffic signs.
So now that I've said all of these things that you could potentially do to mess up a robot, mean and nasty things to driverless vehicles, I wanna reiterate: driverless vehicles are cool. Don't do any of these things. I'm saying this: don't hassle the Hoff. I mean, don't hacksaw the bots. Instead, if you're into autonomous vehicles and getting involved in the future of transportation and all these other things, why not get involved in the hard challenge of actually making them work, right? Screwing them up is easy. Getting them right is the cool part. So for any students here, I'd like to close out by just mentioning some stuff about the autonomous robot competitions, especially the three that I'm involved with: SUAS, RoboBoat and RoboSub, in that order on the slide. I want more DEF CON people involved in these competitions, because DEF CON people like to push the envelope. So here is just a quick run-through of the tasks and what you guys might be interested in. SUAS: waypoint navigation, searching for and identifying secret symbols on the ground, and connecting to a narrow-beam Wi-Fi network and downloading the secret codes, right? This is DEF CON right here. Secret codes, war flying, and also coming soon, hopefully, package dropping, right? Legit excuses to write bombing runs for UAVs. Cool challenges in SUAS involve visual map making, registering with GPS because you've got to report locations as well as codes, panorama stitching if you're into that, and automatic target ID; not a lot of teams are doing this yet, so there's lots of opportunity to put together a sophisticated entry. RoboBoat is actually one of the most difficult competitions because of the challenge stations. We've got things like channel navigation, directing water cannons and darts onto a target, identifying thermally hot items on ground stations, disabling water sprays, deploying a rover and retrieving a package.
A team this year had a boat that launched a quadcopter to retrieve this package. Capture the flag from another boat. So this is all DEF CON stuff, right? Camera and LiDAR sensor integration: this year a team couldn't afford a LiDAR off the shelf, so they hacked one out of a robotic vacuum cleaner and reverse engineered it. That's what we need more of. Discrimination between vegetation and water, and detecting when the robot is stuck up on things, right? People just haven't been good at that, but that's what we need people with a security mindset to think about. RoboSub is in many ways the big one, because underwater is the poster child for autonomy, just because you don't have the communications bandwidth to remote control even if you wanted to. So we've got 3D navigation, target recognition underwater, shooting torpedoes and dropping markers, manipulating objects, and package recovery with a sonar pinger, all without GPS. I'm just flying through this because I know I'm slightly running over time. One big thing, again, that I think we need a hacker mindset for is all these things that people don't think of before they go under the water, like thermal management. But the thing that I most want people to be involved in is this: I think the rules need to be hacked, right? They're there to have loopholes found in them. That's what the people in this room do. This UAV is the ScanEagle. It doesn't need a runway, right? Planes need runways? Hell no, let's just fly it into a cable and catch it. So this kind of stuff is what I want to see. Nontraditional vehicles, experimental power supplies. There are dimension limits, there are all these rules like that in the competition, but they apply at the start. Who's to say you can't change your dimensions while you're doing it and hack things that way? Swarms of vehicles, right? Let's get Voltron on this stuff. So I think that this is the ultimate hacker sport, right?
It's technologically awesome, it's bloody hard, and there are loopholes to be exploited. So I hope that people here in the audience who are students and have eligibility will check them out. There's gonna be a big-daddy RoboBoat next year in Singapore called RobotX. They'll give you a $50,000 boat and $25,000 for sensors if your team is selected to compete. All right, that's it. Just before the goons drag me off, I want one more propaganda poster, because I had those hacked anti-robot propaganda posters at the start, when I was delivering my disclaimer about not wanting to spread fear, uncertainty and doubt about the robot revolution. So here's a motivational propaganda poster about how one day we might live the ultimate Austin Powers-style dream: letting the robot take care of the driving while we get down to business in the back seat. I hope you'll get involved in making that come true. Thank you.