So up next, we're going to talk about some current stuff, but also thinking very far forward, because everybody hates driving, right? I'd much rather have my car drive me to work than have to drive myself. We want to make sure that that's nice and secure. So these folks are going to talk about whether we can trust self-driving vehicles. Let's give them a big round of applause. Have a good time. Thank you. I'm so excited to stand here. Good afternoon. Today I bring you our latest work on attacking self-driving vehicles. The title is Can You Trust Your Autonomous Vehicles? I would like to talk about our latest work on vehicle security. I'm Jianhao Liu from China, and I work for Qihoo 360 on the SkyGo Team, which focuses on vehicle cybersecurity research. I'm Chen Yan from Zhejiang University, and Dr. Xu is my advisor. She is a professor at Zhejiang University and the University of South Carolina. I believe she's hiding somewhere in the audience because she wants us to do all the work. Okay. In this talk, we first introduce what an autonomous vehicle is, then the idea of car hacking via sensors and the attacks we performed in the real world. At last, we discuss possible defenses. With the development of car hacking, ranging from conventional cars with telematics to autonomous cars, the car is increasingly interacting with the environment. That opens up a new attack surface. In this talk, we show you our work on autonomous vehicles. So what are autonomous vehicles? An autonomous vehicle can sense its surroundings and make driving decisions using machine learning algorithms. Basically, it's a car that can drive itself without a human doing anything. According to the international standard, autonomous driving can be divided into five levels. For example, at level one, adaptive cruise control, we must keep our hands on the steering wheel. At level three, conditional automation, hands can be off the steering wheel, yet the driver still needs to take over from time to time. Level five is full automation.
A car can handle all the driving modes and drives itself without a human in it. So basically, you can sleep in your car. Typically, Tesla is considered around level two to three, and eventually the Google car would be level five. This is the architecture of autonomous vehicles. First, the car has sensors to monitor its surroundings, and more advanced cars will have V2X. V2X stands for vehicle-to-everything. The sensor data then guides vehicle movement to plan and control the path. The driving plans are shown to the driver by the HMI; HMI means the human-machine interface. All the driving decisions are executed by the car. This is how automatic driving works. Let me show a few automatic driving applications. They include autonomous lane keeping, autonomous lane change, autonomous overtaking, autonomous highway merge, autonomous highway exit, and autonomous interchange. Autonomous vehicles have a rich set of sensors, which include the following. Ultrasonic sensors can detect obstacles nearby. Cameras can detect road scenes and lane lines and measure the distance and speed of cars. LiDAR creates a 3D map by scanning the environment for planning driving decisions. Radar can detect cars from middle range to long range and measure the distance to the car in front, its speed, and its moving direction. Because of these sensors, the car can sense the environment and identify what kind of obstacles are nearby. Finally, the car can make driving decisions. Of course, automatic driving is controlled by electronics. To convert a regular car into a self-driving car, one has to add electronics to control the actuators directly. This way, the computer can command the brakes, the electronic power steering, and so on. So how can we attack autonomous vehicles? The sensor data guides the travel route of the car, and the sensors serve as the input to plan and control the car. That's why we set the scope of our attacks on the sensors.
Attacking the sensors on autonomous cars: if we can modify the sensor data, then driving decisions will be made based on fake data. What is displayed on the HMI may be wrong and misleading. The path planning may not be correct, which leads to wrong execution. In short, the reliability of the sensors affects the reliability of the autonomous driving vehicle. Up to now, the most advanced automatic driving car that we have access to is Tesla. Tesla has an advanced autopilot system, which realizes autonomous driving between level two and level three. Basically, Tesla has all the features of autonomous driving. Though the autopilot system still requires the driver to place his hands on the steering wheel, it has really changed people's driving habits. Unluckily, this habit change has led to a recent incident, which was caused by sensor malfunction. Thus, the reliability of sensors is important. If the autopilot can fail in a normal yet special case, what will happen if there are intentional, malicious attacks, such as someone scheming to cause a traffic accident? So, there are three types of sensors in Tesla: a millimeter wave radar, a middle-range radar mounted in the front of the Tesla; a camera, a forward-looking camera mounted on the windshield under the rear-view mirror; and 12 ultrasonic sensors, mounted around the front and rear bumpers. In the videos, we will show how we can fool the sensors on the cars, which makes the autopilot of Tesla malfunction. Let me show you a few videos to give you the highlights of our work. The first is spoofing the ultrasonic sensors to make the HMI malfunction. Now, Yan Chen is behind the car. Yan Chen is here. He is getting closer now, but the HMI doesn't display the distance. Okay, turn it off. Now Yan Chen turns off the device, and the HMI displays him. So we can make the HMI mistaken. Turn it off. Next, thank you. The next video is the ghost car. This is our attack to create a ghost car: we make the car believe there is a ghost car in front.
So, we can start the autopilot system and start driving. But there is no car in front. When the car passes Yan Chen, the ghost car can force our car to stop. The display shows we are about to hit the ghost car, so the car stops. Thank you. I guess I'll take over from here. The first type of attack is on ultrasonic sensors, and we have tested this attack on Tesla, Audi, Volkswagen, and Ford. So, what is an ultrasonic sensor? It is a sensor that measures distance, generally within two meters. It is used in parking scenarios, like parking assist systems, parking space detection, and self-parking. And on Tesla, there is also a feature called Summon, which means you can park the car without even being inside the car. So, in a parking scenario like this, there will generally be a display of distance, either acoustic or visual, so that we can know the sensor readings. So, how can ultrasonic sensors be misused? Imagine someone dislikes the owner of a shop, and he wants the car to keep backing into the glass wall. So he does something to the sensor so that the car does not stop where it should. What will happen? Or, I believe most of you want to protect your parking spot. It is really annoying when someone keeps parking in your parking spot. So, instead of putting up a sign, if you can do something to the sensor that makes the car stop in the middle of parking, that would be awesome. So, before going into how these misuses can be done, let me walk you through how an ultrasonic sensor works. An ultrasonic sensor emits ultrasound and receives echoes based on the piezoelectric effect. I believe this technology was inspired by bats. So, the sensor generates an ultrasonic pulse, which propagates, hits an obstacle, bounces back, and creates a received pulse. The sensor measures the propagation time between the transmitted pulse and the received pulse, and knowing the speed of sound in air, it can calculate the distance from a very simple formula: distance equals the speed of sound times the propagation time, divided by two.
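The time-of-flight calculation the sensor performs can be sketched in a few lines of Python. This is only an illustration of the formula above; the speed-of-sound constant and the timing value are assumed example numbers, not taken from any real sensor firmware.

```python
# Illustrative sketch of ultrasonic time-of-flight ranging.
# The round-trip time is halved because the pulse travels out and back.

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius (assumed)

def distance_from_echo(round_trip_s: float) -> float:
    """Distance to the obstacle, given the echo round-trip time in seconds."""
    return SPEED_OF_SOUND * round_trip_s / 2

# An echo arriving 5.8 ms after the pulse puts the obstacle about 1 m away.
print(round(distance_from_echo(0.0058), 2))  # 0.99
```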
So, there are three types of attacks on ultrasonic sensors. The first one is the jamming attack. Jamming generates ultrasonic noise that causes denial of service of the ultrasonic sensor. The second is the spoofing attack: it crafts fake echo pulses so that it can alter the measured distance. The third one is acoustic quieting. It means that this attack can diminish the original ultrasonic pulse so that it can hide obstacles. To validate these attacks, this is the equipment we use. First, we need ultrasonic transducers that can emit ultrasound. Second, we need signal sources that can generate excitation signals. In our case, we use either an Arduino or a signal generator. To make this study faster and cheaper, we used off-the-shelf ultrasonic hardware, but you can totally design your own portable jammer. So, the basic idea of the jamming attack is to inject ultrasonic noise at the resonant frequency of the sensor, which is generally between 40 and 50 kilohertz, and it causes denial of service of the ultrasonic sensor. It's illustrated in the right figure. First, there is an ultrasonic sensor with its transmitted pulse and the received echo pulses. If the jammer generates ultrasonic noise, this noise will be received by the sensor, and it will fully cover the original echoes. We have tested this attack in the laboratory on eight models of standalone sensors and also on four vehicles. For the indoor experiments, as you can see in the right figure, it is a figure of the received electrical signal out of the sensor. When there's no jamming, you can see the excitation pulse and the following echo pulses. That is how it normally works. But when there's a weak jamming signal, you can see that the noise floor has been raised. And as we increase the noise level, when there's strong jamming, the noise can fully hide the original echoes, so no measurement is possible. So, what about the sensors? What is the reading of the sensors? Basically, we got two very opposite types of results.
The first one is zero distance, which means the sensor detects something very close. The other one is maximum distance, which means the sensor cannot detect anything. So, how should cars behave under a jamming attack? Should it be zero distance or maximum distance? If it is zero distance, it means the car detects something, so it will stop. But if it's maximum distance, it means the car cannot detect anything, so it will not stop and will keep moving. So, obviously, zero distance is the fail-safe option for vehicles, right? However, according to our experiments on cars, the result is, unfortunately, maximum distance. Let me show you a video that demonstrates how it is really maximum distance. This is an ultrasonic sensor on an Audi Q3, and this is an ultrasonic jammer, which is wired to a computer. Now, on the screen of the car, you can see that the jammer has been detected as an obstacle, displayed as a white bar. And we read the data from the OBD: the detected distance is 28 centimeters. Now, let's turn on the jammer. The obstacle disappears, and the detected distance becomes the maximum. So, in conclusion, the jamming attack can produce a maximum-distance output and can hide obstacles. Let me summarize the results of the jamming attack. On standalone sensors, there are zero-distance results and maximum-distance results, depending on the sensor. And on cars with parking systems, the result is maximum distance. Interestingly, the manual of the Tesla Model S says that if a sensor is unable to provide feedback, the instrument panel will display an alert message. However, we have never seen this alert message. Another question is, how will the car behave in self-parking and Summon, when the car actually drives itself based on these false sensor readings? Let me show you a video of how we do this attack on Tesla Summon. As you can see, there's nobody in the car. This is me standing in front of the car holding an ultrasonic jammer.
And now, Jianhao turns on Tesla Summon. Normally the car will not move, because I have been detected, right? However, when we jam the sensor, it moves and hits me. That hurts. In conclusion, the jamming attack can also hide obstacles when the car is driving by itself. You might ask, well, the attack distance is only, like, 20 centimeters. Can it be longer? Of course. If we increase the voltage level of the jammer, for example outputting at 20 volts with a function generator instead of 5 volts from an Arduino, we can increase the attack distance. So, in this video, there's a man standing behind the Tesla. This is not me. This is another brave man in our lab. His name is Wei Bing. This is more dangerous. Now the interferer is off, and we turn on Tesla Summon, and you can see that the car starts reversing. However, it will not keep moving, because the man has been detected. Now we turn on the function generator to power the interferer. Watch closely. Now we turn on Tesla Summon again. Well, it moves, hits the man, and hits the interferer. The car only stops because the interferer has been hit and stopped working. Thank you. So, the jamming attack distance can be increased if you have the budget, right? Let me summarize the results of the jamming attack on self-parking and Summon. In these scenarios, the car does not stop under strong jamming. It might hit someone or something. So, there's another question: why do some sensors output zero distance and some output maximum distance? Well, we believe it is because of different sensor designs. For zero distance, the sensor compares the signal with a fixed threshold. If the voltage level of the signal exceeds the threshold, it believes there is a valid echo. The jamming signal actually raises the voltage level, so the sensor thinks, hey, there's an echo right after I transmit. So the reading is zero.
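The zero-distance behavior described above can be sketched as a toy fixed-threshold detector. This is a hypothetical illustration, not reverse-engineered firmware; the sample rate, threshold, and waveform values are all assumptions.

```python
# Toy model of fixed-threshold echo detection, illustrating why jamming noise
# can read as a zero-distance echo. All values here are assumptions.

SPEED_OF_SOUND = 343.0  # m/s

def first_echo_distance(samples, threshold, sample_rate_hz):
    """Distance of the first sample above the threshold, or None if no echo."""
    for i, amplitude in enumerate(samples):
        if amplitude > threshold:
            time_of_flight = i / sample_rate_hz
            return SPEED_OF_SOUND * time_of_flight / 2
    return None  # a maximum-distance design would report max range here

# Quiet capture at 100 kHz: one real echo at sample 580 (about 1 m away).
quiet = [0.0] * 1000
quiet[580] = 1.0
print(first_echo_distance(quiet, 0.5, 100_000))   # ~0.99 m

# Jammed capture: noise exceeds the threshold right after transmission,
# so the fixed-threshold sensor reports an obstacle at zero distance.
jammed = [0.8] * 1000
print(first_echo_distance(jammed, 0.5, 100_000))  # 0.0 m
```

A maximum-distance sensor behaves like the `None` branch instead: under strong jamming it never sees a distinct echo, so it reports nothing ahead.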
For maximum distance, we started with the sensor on the Audi Q3: we broke it open, probed it, and reversed the schematic. But we didn't find any useful information, because it uses an application-specific IC, so all the signals are processed inside the chip. To make it easier, we studied another sensor, the MaxSonar MB1200, which also outputs maximum distance. Basically, we had to destroy the transducer on top of it and expose the circuits. This is how it works when there's no jamming. The white line shows the time of flight, and the blue line shows the echoes. You can see that there is an excitation pulse and there are echo pulses, and if you watch closely, the time of flight exactly matches the first echo pulse. When there's weak jamming, you can see that the noise floor has been raised, but the measurement is still correct. However, when there's strong jamming, you can see that the signal is totally overwhelmed by noise. It seems that there is no echo, so the sensor outputs the maximum. We believe it uses an adaptive threshold, which is used for noise suppression. The designers definitely had good intentions in designing this, but they didn't consider malicious scenarios. The second type of attack is the spoofing attack. Basically, the idea is to inject ultrasonic pulses at a certain time to fool the sensor. For example, if we craft a fake pulse right before the first original one, we can spoof the propagation time and thereby manipulate the distance. But this attack is non-trivial, because only the first valid echo will be processed. So there is an effective time slot, which is right after the transmitted pulse and before the first echo pulse. You have to inject within this slot to make it successful. And if we change the arrival time of the fake echo, we can manipulate the sensor readings, right?
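Since the sensor processes only the first echo, the spoofed reading is determined by whichever pulse arrives first. A minimal sketch of that logic, with timing values that are illustrative assumptions:

```python
# Sketch of spoofing: injecting a fake echo earlier than the real one
# shortens the measured distance, because the sensor trusts the first echo.

SPEED_OF_SOUND = 343.0  # m/s

def reported_distance(real_echo_s, fake_echo_s=None):
    """Distance from whichever echo arrives first after the transmit pulse."""
    earliest = min(t for t in (real_echo_s, fake_echo_s) if t is not None)
    return SPEED_OF_SOUND * earliest / 2

real = 0.0116  # real obstacle about 2 m away (11.6 ms round trip)
print(round(reported_distance(real), 2))          # honest reading, ~1.99 m
print(round(reported_distance(real, 0.0029), 2))  # fake pulse at 2.9 ms, ~0.5 m
```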
This video demonstrates the spoofing attack on Tesla. Oh, sorry. So, this is the jammer connected to a computer. You can see that the jammer has been detected as an obstacle, and the distance is 66 centimeters. And now it starts spoofing. Wow. The distance has been altered; it says STOP. And if you look outside the vehicle, there's nothing moving. If you look at the instrument panel, the spoofing is still going on. So, in conclusion, the spoofing attack can alter the distance. And this is a demo of the spoofing attack on the Audi. In this video, we just randomly alter the distance. At first, nothing is in front of the car. And I assure you, the jumping bars are not a volume indicator for the music. So, the spoofing attack also alters the distance on the Audi. Let me summarize the results of the spoofing attack. The spoofing attack can manipulate sensor readings both on standalone sensors and on cars, so we can make the car stop where it shouldn't. The third type of attack is acoustic quieting. One method is acoustic cancellation, which means that we cancel the original pulse with sound of the reverse phase, so when they add up, there is no echo at all. In our experiments, we observed that with proper phase and amplitude adjustment, we are able to cancel ultrasound. But if you want to cancel ultrasound from a car, you're going to need dedicated hardware. An easier way to do this is cloaking, which means that we absorb the ultrasound with some kind of sound-absorbing material, like acoustic damping foam, which is very cheap. It has the same effect as jamming: it can hide obstacles. This is how we cloak a car. Now, we drive toward this lovely car, and you can see that it has been detected and displayed as the red bars on the screen. Now we apply the acoustic damping foam. Well, it disappears. And we drive closer to the car, still nothing. Now we remove the damping foam, and it reappears.
So, in conclusion, cloaking can hide a car. What about a human? Can cloaking also hide a human? We tried this. This is me walking in front of the car, and you can see that I have been detected by the sensor. But now, if I wear the damping foam, I'm invisible. Still nothing. Well, can you think of a new way to wear this foam? Here we go. This is the foam scarf. It also works. So, cloaking can hide a human. If you want a car, a human, or glass to be invisible, just buy this. By the way, behind the glass door is my advisor's office. This is what happens when you let your students do all the work. I'm sorry. The second type of attack is on millimeter wave radars. We have tested this attack on the Tesla Model S, because the other three cars don't have a radar. So, what is an MMW radar? It measures distance, angle, speed, shape, et cetera, from short to long distances. It is useful in high-speed and safety-critical applications like adaptive cruise control, collision warning, and blind spot detection. So, how can radars be misused? It is similar. Imagine you are driving on the highway and there's danger ahead of you, and you want to stop. But if someone does something to the radar, the car does not stop where it should. It could cause some serious accidents. And if there is danger behind you and you want to stay away from it, but the radar tells you that there's something ahead of you, you have to stop. That would be terrible. So, let me walk you through how a radar works. A radar transmits and receives electromagnetic waves and measures the propagation time, et cetera. It is similar to ultrasonic sensors, except that the signal is RF. When we are dealing with RF, it is difficult to measure the time directly, because the signal travels at the speed of light. So, in order to do this, we have to do modulation to make this process easier.
One of the most popular modulation schemes is FMCW, which is a kind of frequency modulation. The Doppler effect can be used to measure the relative speed, and there are two major frequency bands, at 24 and 76 gigahertz. This is how frequency-modulated continuous wave works. Basically, it is a sweeping-frequency signal, so the frequency varies with time. When the signal is transmitted, hits a target, and bounces back, we receive a similar, delayed signal. What we want is the reflection time, but measuring it directly is difficult. So instead we measure the difference frequency, fd, and calculate the time from it, knowing the ramp slope. Sometimes, when the car is moving relative to the target, there will also be a Doppler frequency shift. Before doing the attacks, the first thing we have to do is understand the real signal. We have to analyze the signal to find out the frequency range, the modulation process, the ramp slope, the number and duration of the ramps, and the cycle time. After doing this, we can know whether a jamming attack or a spoofing attack is feasible, right? So, this is kind of like a family picture of all the equipment we used. Special thanks to the Keysight Open Lab for providing us free access to this equipment, which is three times the price of a Tesla. I'm going to explain which ones we used later. Oh, I forgot one thing: it doesn't have to be so expensive, because you can actually just buy a radar and modify it to be your own signal generator. This is how we analyzed the signal. First, we receive the radar signal with a horn antenna, which is connected to a harmonic mixer, and analyze the signal in the frequency domain on the signal analyzer and in the time domain on the oscilloscope. Basically, what we found is that the radar operates at 76.65 gigahertz as the center frequency, and the bandwidth is 450 megahertz.
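The range calculation from the difference (beat) frequency can be sketched as follows. Only the 450 MHz bandwidth comes from our measurement; the 10 ms ramp duration and the beat frequency are made-up example values, not the radar's actual parameters.

```python
# Sketch of FMCW ranging: the beat (difference) frequency between the
# transmitted and received chirps is proportional to the round-trip delay.
# The 10 ms ramp duration is an assumed example, not a measured parameter.

C = 3e8  # speed of light, m/s

def fmcw_range(beat_hz, bandwidth_hz, ramp_s):
    """Range from the beat frequency, given the chirp bandwidth and duration."""
    slope = bandwidth_hz / ramp_s  # frequency sweep rate, Hz/s
    round_trip = beat_hz / slope   # delay that produced this beat frequency
    return C * round_trip / 2

# 450 MHz sweep over a 10 ms ramp: a 15 kHz beat corresponds to about 50 m.
print(round(fmcw_range(15e3, 450e6, 10e-3), 1))  # 50.0
```

This is also why spoofing works: an attacker who replays a chirp with a shifted delay shifts the beat frequency, and the radar converts that directly into a false range.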
The modulation is FMCW, and we have learned all the details of the chirps, but I'm not going to tell you, because I want to be responsible. So, the idea of the jamming attack is to jam the radar within the same frequency band, which is 76 to 77 gigahertz. We can jam at a fixed frequency, like this, or we can jam at a sweeping frequency, like this, which covers the whole frequency band. The idea of the spoofing attack is to spoof the radar with a similar RF signal, something like this. Pretty straightforward. To generate the radar signal, we generate a signal with a signal generator at about 12 gigahertz, multiply it up with a frequency multiplier, and transmit it with a horn antenna. Before showing you the results, let me introduce how the autopilot display looks. The blue icons mean that traffic-aware cruise control and autosteer are on, and the blue car means the car ahead of you has been detected and locked. We had to do the experiments with the car and the equipment stationary, because if the car were moving and our attack succeeded, the car might hit the equipment. And if we damage equipment that is three times the price of a Tesla, I won't be able to graduate. So, this is a demo of the jamming attack. In this video, I am standing in front of the Tesla controlling the radio interferer, as you can see from the camera of the mobile phone. Now the autopilot is turned on, and the car containing the equipment has been detected and shown as a blue car. Right now the interferer is turned off. When we turn it on, you can see that the blue car disappears. And when we turn off the interferer, it reappears. We kept trying this many, many times, and it works every time. So, the jamming attack on radar can hide obstacles, and the car may not stop where it should. Let me summarize the results of the radar attacks. The jamming attack can hide obstacles that have already been detected, and both fixed-frequency and sweeping-frequency jamming work.
For the spoofing attack, we can spoof the distance of the car ahead. Basically, what we have seen is that the car on the display actually jumps forward and backward. The third type of attack is on cameras. We have tested standalone cameras from Mobileye and Point of View, and tested them on the Tesla Model S, which has a Mobileye camera. A camera detects objects by computer vision. There are forward cameras and backward cameras. They are used for lane departure warning, lane keeping, traffic sign recognition, and also for parking assistance. So, how can cameras be misused? The camera is mainly used for steering. If the camera does not work, the car may not steer where it should, so there can be accidents. The attack we have on cameras is the blinding attack. There are three types of interferers we use: an LED spot, a laser pointer, and an infrared LED spot, which are all very cheap. And there are two scenarios: one is that we point the interferer directly at the camera, and the other is that we point the interferer at the calibration board and let it reflect back to the camera. This is the result of blinding with the LED. When the LED is pointed toward the calibration board, there is only partial blinding. But when it faces the camera directly, there is total blinding. And this is the result when we use a laser beam. It is even more prominent: either a fixed laser beam or a wobbling laser beam can cause total blinding. And there is something we didn't expect: permanent damage to the camera. You can see that there is a black scar on the image. We had to send it back to the vendor and have it repaired, which cost us a lot of money, which I don't care about, because it is Jianhao's camera. This is a demo of blinding the camera with a laser beam. This is the view from the camera. Now we point the laser beam at the calibration board, and you can see that the effect is not very strong.
However, when we point the laser beam directly at the camera, you can see that the view is white and blurry, and you cannot see anything. You can imagine what will happen if the camera on the car is blinded like this. So, lasers can blind cameras. We have also tested infrared; it doesn't work very well. We have tested blinding the cameras on Tesla. Well, the good news is that the Tesla actually gives you an alert message that asks you to take over when the camera is blinded. So that is kind of a relief. We have submitted our findings to Tesla and got a positive response. They appreciate our work, and they are looking into these issues. Looking forward, how can we improve these sensors? To begin with, the sensor has to fail safe. For example, for the zero or maximum distance of ultrasonic sensors, it should be zero distance, so that the car will stop instead of hitting something. It should also be designed with an anomaly detection function. I believe at least the jamming attack is easy to detect, because there is an abnormally strong signal level. Also, increase the redundancy of sensors, such as using multiple ultrasonic sensors for measuring one distance, and use different types of sensors as a kind of double check. And in the system that does the sensor data fusion, it is better if the trustworthiness of the sensors is evaluated, so that when the system does not have enough confidence in the sensor data, it will stop the car from self-driving. So it can fail safe. Safety is always more important than convenience, right? So, what's next? In the future, we hope to tap the output of the sensors directly, instead of taking a black-box approach, and to read the sensor data and the actuator data. We hope to carry out moving-vehicle experiments to examine whether these attacks are feasible when the vehicle is moving on the road.
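The anomaly detection idea, flagging jamming from an abnormally strong signal level, could look roughly like this. The baseline level and the multiplier are invented thresholds for illustration only, not values from any real sensor.

```python
# Hedged sketch of jamming detection via the noise floor between echoes.
# The baseline and the factor are assumed values; the point is only the
# shape of the check a fail-safe sensor could perform.

def noise_floor(samples, echo_indices):
    """Mean amplitude of samples that are not part of a known echo."""
    quiet = [a for i, a in enumerate(samples) if i not in echo_indices]
    return sum(quiet) / len(quiet)

def is_jammed(samples, echo_indices, baseline=0.05, factor=5.0):
    """Flag jamming when the noise floor exceeds factor times the baseline."""
    return noise_floor(samples, echo_indices) > baseline * factor

normal = [0.02] * 100
normal[40] = 1.0                 # one legitimate echo
print(is_jammed(normal, {40}))   # False: background is quiet

jammed = [0.6] * 100             # noise floor raised by a jammer
print(is_jammed(jammed, set()))  # True
```

On a detection like this, a fail-safe design would alert the driver or stop the car rather than report maximum distance.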
And we hope to measure the maximum attack range and angle, and also how we can improve the performance of these attacks. In conclusion, I hope what you can take away from this work is that attacking existing sensors on cars is feasible. We have found many ways to fool sensors. Some attacks are easy, some are non-trivial. But the sky is not falling. It's not like someone on the roadside can easily just attack your sensors. For the manufacturers: sensors should be designed with security in mind, always thinking about intentional attacks, especially when the sensors are going to play a very important role in self-driving cars. For customers: don't fully trust semi-autonomous cars yet; you always have to be careful yourself. Will we have fully secure autonomous cars in the future? Let's wait and see. These are the people we would like to thank. Without their help, this work would not have been possible. These are our colleagues who helped us in the experiments. If you want to know more details about this work, please check out our white paper or just write us emails. Thank you. Thank you. If you have questions, you can come up here; we'd like to answer.