First of all, I'd like to thank my incredible wife, who gave birth three months ago to two incredible twins. Without her and her support I wouldn't be here, so I wouldn't be able to give this talk. She made it possible that I can do my daddy duty while also doing this talk over here. So catch up with her after the talk. Talk to her, and don't only talk about baby stuff to her, because she is in the industry as well. You can definitely ask her questions about cars and security and data analysis, actually.

A little disclaimer up front. I have to do this because I wear several hats. Right now I don't represent anybody other than me. I'm not representing my home company, I'm not representing any of the consortia that I'm working for. This is just me. I use some material which was published before, or has not been published yet, by the US DOT or CAMP. I got permission for that, but that's all. I'm not representing them, I'm not speaking on their behalf. It's purely me, my views, my opinions.

What is V2X? Anybody here in the room that hasn't — or let's do it the other way around: who has heard about V2X and knows what it is? Okay, there are a few that haven't heard about it, so maybe a quick introduction. It's about vehicles and infrastructure exchanging messages in an unmanaged way, so there is a direct connection between those devices. There are several standards out there to do this: there's one that's Wi-Fi based, and there's another one, cellular V2X, that's cellular based. And the idea is that without any infrastructure like cell towers or roadside units, they can exchange messages directly. There are some applications that go over infrastructure for sure, especially when there's a backend involved, but they don't have to for all the applications. There are applications that actually require direct interaction, for example for latency reasons.
And I think the easiest way to introduce this is to quickly go through a couple of applications so you get a picture of what's done there. The first one I want to show here is forward collision warning. There's a car in front of you which suddenly hits the brakes pretty hard, and at the same time it's sending a message telling you where it is, where it's heading, what speed it has, and that it is braking right now. And then regardless of whether the car is right in front of you, so that your other car sensors like LiDAR or radar could see it, you still get this message and can react to it. Even if there are a couple of cars in between, you would still get this message and be able to react. So there are a couple of advantages to V2X communication over the standard or classical sensors that we have in cars nowadays.

Another one is traffic light assistance. There's a traffic light which sends out the information of when the next green or the next red comes up, and based on that you could give a recommendation to the driver to slow down because they wouldn't make the green light, or start up your automatic start-stop system in time for the next green phase.

Another one is the intelligent traffic signal: traffic signals that look at the current traffic situation, because the cars are sending messages all the time, and based on that they can count how many cars are there, especially at the intersections, and decide to change the green light or the red light accordingly in order to optimize traffic flow. And now you think: awesome, Sybil attack, I can get a green light. Right? And you're probably right, if you're capable of doing that.

Another one: road condition reporting.
So snow plows are driving around and reporting back to some central traffic management system what the current road condition is, and if that's too bad, or the weather on the road is pretty bad and it's icy, the traffic management system could decide to reroute traffic in order to avoid congestion or, in the worst case, crashes due to the ice out there. And now you think: awesome, traffic rerouting. If I'm able to manipulate those messages and tell everybody, hey, you can't go through this street here because it's totally icy, it's blocked, whatever, suddenly all the traffic goes in other directions.

How does this work? Just two examples that I brought in terms of message content. The BSM is the basic safety message. What's in there: speed, position, heading, acceleration, timestamp, a signature and a certificate. Based on that you can implement a whole bunch of V2V safety applications, crash avoidance applications. And another one is the SPaT, which is about traffic light phases, plus again the signature and certificate. Why I'm showing this here: there are two items that appear in both of those messages, and that is the signature and the certificate. So that's security.

So much for a quick introduction to V2X. If you want to know more, Craig is here — you can buy his book in the vendor area. There is a pretty good introduction to V2X in there, which I think is a pretty good starting point if you want to learn more. If you're already a little more advanced and you want to implement stuff, a quick pointer to a project that was just published at Black Hat by Onward Security — it's the last point on the slide. They published an open source DSRC validation tool which generates messages that are compliant with the necessary standards. You can do this with the hardware that's listed there and GNU Radio.
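To make the message contents concrete, here is a minimal sketch of the two message types. The field names and types are illustrative only, not the actual ASN.1 definitions from the standards; the point is simply that both messages carry the same security envelope.

```python
from dataclasses import dataclass

@dataclass
class BasicSafetyMessage:
    # Kinematic state that V2V crash avoidance applications consume
    speed_mps: float        # speed
    latitude: float         # position
    longitude: float
    heading_deg: float      # heading
    accel_mps2: float       # acceleration
    timestamp_ms: int       # timestamp
    # Security envelope, present in every message type
    signature: bytes
    certificate: bytes      # pseudonym certificate

@dataclass
class SPaT:
    # Traffic light phase information
    intersection_id: int
    current_phase: str      # e.g. "red" or "green"
    time_to_change_s: float
    # The same security envelope again
    signature: bytes
    certificate: bytes

bsm = BasicSafetyMessage(13.9, 36.1, -115.2, 90.0, -4.5, 1_000, b"sig", b"cert")
spat = SPaT(42, "red", 12.0, b"sig", b"cert")

# The common denominator across both message types is the envelope:
assert bsm.signature and bsm.certificate
assert spat.signature and spat.certificate
```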
There's also a Linux kernel patch, which I think is already in the upstream, that implements the necessary radio capabilities for ath9k-based Wi-Fi cards. So if you want to play around with that, those might be good starting points for you.

So now let's look a little bit into security. I did a talk here at the Car Hacking Village two years ago; if you want to deep dive into how the system is built and how it works, just look up this talk — it's on YouTube, I think. The important point I want to highlight here is pseudonym certificates. Those certificates that we saw in the messages before are so-called pseudonym certificates. They are used to create signatures and enable the other side, the receiving end, to verify those signatures. They're called pseudonym because cars have a whole bunch of them, so they can hide their identity behind them. They exchange them on a frequent basis so that nobody can use the certificate to track a car. If they always used the same one, you would just need to listen to one message at the starting point and another message at the endpoint of the trip, and if the same certificate shows up at both points, you know that this car was traveling from A to B. With pseudonym certs there is some form of protection against that, because the cars keep changing certificates, so you can't use the certificate for tracking. And the idea goes through the whole communication stack: all identifiers are exchanged on a regular basis, at the same time, so that there's no identification in there. I wanted to highlight this because pseudonymity is an important concept for the privacy protections in the system. And later on, when we look at misbehavior detection, which is what this talk is about, we need to break this pseudonymity to a certain point, because otherwise we're not able to identify bad actors.
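The rotation idea can be sketched like this. This is a toy model, not a real stack implementation: the only point it illustrates is that the pseudonym certificate and the lower-layer identifiers (here just a MAC address) change at the same instant, so no single identifier survives a rotation.

```python
import secrets

class PseudonymManager:
    """Illustrative sketch of identifier rotation across the stack."""

    def __init__(self, certs):
        self.certs = list(certs)          # pool of pseudonym certificates
        self.index = 0
        self.mac = secrets.token_bytes(6) # current link-layer identifier

    def rotate(self):
        # Switch to the next pseudonym cert and change all lower-layer
        # identifiers at the same time, so nothing links old to new.
        self.index = (self.index + 1) % len(self.certs)
        self.mac = secrets.token_bytes(6)

    @property
    def current_cert(self):
        return self.certs[self.index]

# 20 certificates valid per week was the number discussed in the US
mgr = PseudonymManager([f"cert-{i}".encode() for i in range(20)])
cert_before = mgr.current_cert
mgr.rotate()
assert mgr.current_cert != cert_before
```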
The first line of defense for V2X security, if you talk about signatures and certificates, is for sure the private key. If you look at what is proposed in upcoming regulations, especially in the US, there are certain standards that you have to implement or certify your devices against. Over here in the US it's FIPS 140-2. In Europe there is the idea of a Common Criteria based certification, and the protection profile for that is about to be published within the next weeks. The point that I want to make here is that this first line of defense might not be enough. Because if you look at, for example, FIPS 140-2, there are no protections against side channel attacks required in there. And we all know that this is kind of daily business for you guys. So that might not be enough. It doesn't mean that OEMs or other device manufacturers that are implementing V2X right now don't add to that, that they don't add side channel protections. But if you look at this from a regulation point of view, it is not enough; you need to go beyond that.

And if you're building walls, like the security castles that we oftentimes see on slides, don't forget that there might be miners that try to go around your walls, under your walls, over your walls. Side channels and the security hardware itself are not the only assets that you need to protect. In fact, if the device is generating a message, and the applications built on top depend on, for example, the position within this message, the question is: how does the vehicle that's sending the message generate this position? And then you come to GPS as one of the inputs. You don't necessarily need access to the car's keys in order to get a message into the system which doesn't reflect reality, which doesn't reflect the actual position of a car.
You just need to fake the GPS input to the car, and then the car itself generates the message for you — a message which does not reflect reality, because you just spoofed the GPS position for that car. There is actually a recent publication where this was demonstrated: with a device that costs, I think, about 225 bucks in hardware, and some clever software, they were able to spoof the GPS position in a way that Google Maps actually rerouted the car to a different endpoint. What they took into consideration with their software is the actual street network. They were able to spoof to a position in a way that Google Maps recalculated the route and then ended up at a different endpoint, which is C over here, instead of D, where the user originally wanted to go. If you take this and apply it to V2X, you see how easily — or not how easily, because nobody has demonstrated it yet — but you see a potential pitfall for V2X messages as well.

This whole introduction is just about highlighting the importance of misbehavior detection, and especially the research on it. A quick definition of what I mean when I say misbehavior: the willful or inadvertent transmission of incorrect data on the network. Inadvertent because in this case somebody spoofed the inputs, so the original device doesn't do anything willfully. And incorrect means it doesn't reflect reality: it doesn't reflect your position, it doesn't reflect your speed, it doesn't reflect your heading. Misbehavior detection, on the other side, is the process of identifying this misbehavior — actually figuring out that somebody sent a message that did not reflect reality, that included incorrect data.

A couple of selected research approaches on this; this part is heavily based on work from colleagues in Europe. One idea that came up in research was a verifiable path history.
If there were a way to figure out where a car was in the past, you could extrapolate to the current position and then figure out if this car is actually faking its position. This would be especially helpful against Sybil attacks, because if a car has a whole bunch of certificates and an attacker is able to get access to the private keys and the certificates, they could simulate a whole bunch of cars — they wouldn't need to use those certificates for just one car. For example, in the US there were talks about having 20 certificates valid per week. In Europe we are talking about 100 certificates per week. So in Europe, if you get access to that — that's always a big if, but if you are able to get around the first line of defense — you would be able to simulate 100 cars at the same time. And then you get a green light, right?

So, verifiable path history. The idea was that there are roadside units that send you a timestamp beacon — actually a signed timestamp beacon — that you would then incorporate into your messages. There is only a very small chance that a couple of cars get the very same timestamps from a couple of those roadside units and would be able to show you a beacon string that looks exactly the same. So if you see a whole bunch of cars that have exactly the same verifiable path, you have a high chance that you're seeing a Sybil attack going on right now. The issue with that is that we need new protocols. An issue not necessarily in the sense that this won't work, but nobody has taken it further than that so far. It's, I think, a pretty good idea, but we don't have a protocol yet which reflects it. We would need a new roadside unit service that somebody has to develop, and we would require a high coverage of roadside units that we don't have nowadays.
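The beacon-comparison step behind verifiable path history can be reduced to a toy check. The beacon string format and the function below are invented for illustration; the idea is just that two honest cars are very unlikely to present identical beacon strings, so exact duplicates suggest one attacker simulating several cars.

```python
from collections import Counter

def sybil_suspects(path_histories):
    """Flag cars whose signed-beacon path history is identical to
    another car's — a strong hint that one device is behind both."""
    counts = Counter(tuple(h) for h in path_histories.values())
    return {car for car, h in path_histories.items()
            if counts[tuple(h)] > 1}

# Hypothetical beacon strings: "rsuN@tM" = beacon from RSU N at time M.
histories = {
    "A": ["rsu1@t100", "rsu2@t160"],
    "B": ["rsu1@t101", "rsu2@t161"],
    # C and D present the exact same path -> likely one attacker device
    "C": ["rsu1@t102", "rsu2@t162"],
    "D": ["rsu1@t102", "rsu2@t162"],
}
assert sybil_suspects(histories) == {"C", "D"}
```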
So there's a huge cost factor to that, and in this research that's not reflected; nobody has actually calculated what it would cost to deploy a system like that just for misbehavior detection. So the question is: where do we go from there?

The next one is pseudonym linkability. If you get all those pseudonym certs from the cars at a traffic light — same situation, intelligent traffic light, you're trying to decide whether to change the green phase for a certain direction — and you could take all those pseudonym certs and learn which ones belong to a single car and which ones belong to different cars, you would again be able to figure out if there's a Sybil attack going on. But this would break the privacy point. If there's a single entity in this whole system — and actually this would be a whole bunch of points, all the traffic lights that implement this intelligent traffic light application — that is able to break the pseudonymity in order to figure out which pseudonym certs belong to the same car, then an attacker just needs to break into an RSU, get this capability, and can start tracking vehicles with it. That's one point. The other point is, again, that right now we haven't really figured out how you could do this quickly, because it doesn't help you if you get this information like 30 minutes later — not to speak of days, which is where the system currently is with this capability. You would need to get this pretty much immediately. And nobody has calculated so far how much traffic we would see with that, what the performance requirements are, and how we would implement it. So again, there's something to this idea, but nobody has taken it any step further so far.

Then there are a couple more that are radio signal based. You could do triangulation, you could look into power differences. Again, nobody really took these any further so far.
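The pseudonym linkability check from a couple of paragraphs up can be sketched like this. The linkage oracle below is entirely hypothetical — it stands in for whatever capability would map a pseudonym cert back to its issuing device — and the sketch also makes the privacy cost obvious: whoever holds that oracle can track cars.

```python
def sybil_check_with_linkage(certs_seen, link_oracle):
    """Count certs seen at an intersection vs. the distinct devices
    behind them, according to a (hypothetical) linkage oracle.
    Far more certs than devices suggests a Sybil attack."""
    devices = {link_oracle(cert) for cert in certs_seen}
    return len(certs_seen), len(devices)

# Toy cert naming "devN-i"; the oracle just recovers the device part.
oracle = lambda cert: cert.split("-")[0]
certs = [f"dev{i % 2}-{i}" for i in range(10)]

# Ten pseudonym certs at the light, but only two real devices:
n_certs, n_devices = sybil_check_with_linkage(certs, oracle)
assert (n_certs, n_devices) == (10, 2)
```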
There are other schemes that build on the swarm nature of the network itself, where you vote on whether a situation that somebody is reporting on the V2X network right now is actually happening. If there are a couple of devices that have other capabilities, like radar, to verify whether there's an obstacle or whether somebody is actually braking, you could vote on that and then let everybody know: okay, you can trust this message, or you can't. Again, there is something to this idea, but we need a new protocol for it. And there is again an issue with pseudonym cert changes: what if the attacker just changes the pseudonym certs in between? Then you need to do this again and again and again with each change. And the question is, what is the effect on automotive hardware? How much more computational power do you need for that? How much more communication will you have on the restricted channels that are available for V2X? Again: who's working on that?

Reputation based is kind of similar. If over time other devices figure out that the messages you send are actually trustworthy, you get a kind of higher reputation score, and that could be used to inform newly added devices in the system that you are actually a trustworthy source. But then again, this could be used as an identifier — actually, you need to have an identifier to do this over time, because otherwise you wouldn't know what to attach the score to. So this again has implications for the pseudonymity, for the privacy aspect of the system, and nobody has really looked into this so far.

Then there is another interesting approach, which I think was presented just recently: we do multi-source fusion within the car itself, on the receiving end. I think it's a logical step.
If you have radar, if you have LiDAR, if you have other sensors, and V2X is just another sensor, you could fuse all of that information and see what the probability is that there actually is somebody braking, or that there actually is an obstacle somewhere. I think this is worth looking into, but again the question is: what are the requirements on the automotive side? And maybe I have to highlight this for people who are not working on the OEM side of the automotive industry: computational power is restricted, energy is restricted, memory is restricted, because we are trying to do things cheaply. And we have special requirements on hardware, because we need to accommodate way higher temperature ranges than consumer hardware does. So there are a couple of specialties of automotive hardware that you need to keep in mind when you design a system like this. OEMs are sometimes fighting over cents: a couple of cents added to a car — and if you look at one of the big car companies, five million cars per year — a cent might make a difference there.

So all of this is research. The question is, where are we with actual implementations? Do we have anything implemented and tested out, and do we have the data to see if this actually works — performance-wise, in terms of quickly detecting misbehavior, but also in terms of the automotive restrictions? The only thing that I know of so far is work done by CAMP and by the U.S. DOT, and I just want to dip into this a little bit in the next couple of slides. They have a concept where they differentiate between local misbehavior detection and global misbehavior detection. Local misbehavior detection is the process that identifies misbehavior locally in a device, or at least creates a suspicion that there is something wrong, and then collects data.
It's like a monitor in classical IT security speak: some node in the network that reports back to you whenever there's anything fishy. And the global side is the backend that collects all of these reports and then tries to figure out if there was actually something going on. Because on the device level you might not be able to make a hard binary decision — yes, this was misbehavior; no, this wasn't. So the idea is that you have a backend which collects those reports from a whole bunch of devices and then, hopefully, makes a decision call to identify whether this was misbehavior or not.

How does this work? The two methods that, I think, were implemented so far on actual devices are these. First, proximity plausibility: if you see a couple of cars driving around, their reported positions might start overlapping — because of bad GPS reception, or because there's an attack going on. And if this happens a couple of times, then you on the receiving end start getting suspicious and say there's something going on. So I file a report: I collect all those messages I got that looked like they were overlapping, sign them, and send them off to the global side. The second is false warnings: I got messages that caused me to issue a warning to my driver, for example a forward collision warning — there's a car braking right in front of you and therefore you should engage your brakes. So I already made the decision in my device that there's something going on; I warn my driver, or engage the brakes if I have an autonomous car. But then nothing happens. Especially when I just put it out to the driver: there's no driver reaction, he's not engaging the brakes, he's not steering away, and there wasn't a crash. So something about this data was wrong.
I clearly got data saying there's somebody in my lane braking pretty hard, and I'm only that far away, so there has to be a driver reaction — but there wasn't. So again, I collect this data and send it off to the global side. This process is called misbehavior reporting: collecting the evidence and then sending it off to the global side.

The only two algorithms that are implemented so far, and that got some form of testing, are device based and event based. Device based is a pure counter: how often do I see the same pseudonym cert sending messages that led the receiving end to create a misbehavior report? It's just a counter. Whenever I see the same pseudonym cert, I count up, and when a threshold is reached — five times, for example — I decide, okay, this device is misbehaving, so I put it, or rather its certificates, on the CRL and let everybody know not to trust this device anymore. The other one is a little more sophisticated. It's location specific: I have a predefined area, and I put all the misbehavior reports coming from this area together. I look at the pseudonym certs, and I use capabilities that were built into the system — here I'm referencing again my earlier talk, or Craig's book, for how this actually works — but there is a way for the misbehavior authority within the system to check whether those reports are about the same device or about different devices. It doesn't get the pseudonym certs resolved for that, so it can't start tracking; it just gets the binary information: they belong together, or they don't. And if they belong together, to the same device, and again a threshold is reached, the misbehavior detection authority puts the device, or its certificates, on the CRL. That is the last step, which is called device revocation — or sometimes blacklisting, because in Europe we don't do revocation, we just blacklist the device so it doesn't get new certificates.
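The device-based algorithm really is just a counter with a threshold. A minimal sketch of the global side (class and method names are mine, not from the CAMP/U.S. DOT work):

```python
from collections import Counter

class GlobalMisbehaviorAuthority:
    """Sketch of the device-based algorithm: count misbehavior
    reports per pseudonym certificate and revoke (put on the CRL)
    once a threshold is reached."""

    def __init__(self, threshold=5):
        self.threshold = threshold
        self.report_counts = Counter()
        self.crl = set()   # certificate revocation list

    def ingest_report(self, pseudonym_cert):
        # Count up each time a report names the same pseudonym cert.
        self.report_counts[pseudonym_cert] += 1
        if self.report_counts[pseudonym_cert] >= self.threshold:
            self.crl.add(pseudonym_cert)

authority = GlobalMisbehaviorAuthority(threshold=5)
authority.ingest_report("cert-A")          # one report: below threshold
for _ in range(5):
    authority.ingest_report("cert-B")      # five reports: revoked
assert "cert-B" in authority.crl
assert "cert-A" not in authority.crl
```

Note that nothing in this counter asks *who* filed the reports or whether the reported overlaps are even mutually consistent — which is exactly what the framing attack described next exploits.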
Either way, there's some form of penalty for the device in order to exclude it from the network.

So how does this work? An example: we have a misconfigured device or an actual attacker — it doesn't matter in this case. There are three cars, A, C and D, that are just regular V2X equipped cars, and a suspicious device B. B's position is slightly off. It could be a programming error, it could be that we are in downtown New York City and get bad GPS reception, or it could be that there's actually an attacker spoofing the GPS inputs to this car. And therefore the messages of this device make it look like there's a ghost car in my lane — in the lane of A, C and D. This is the ghost car on the bottom right. So we go further, and then we see a first overlap. All the cars recognize this: they see those messages overlap, and therefore those cars had to overlap, and they start collecting the evidence. They see one overlap, another overlap, and a third overlap. In this example the threshold for local misbehavior detection is three, so after three overlaps I decide, okay, this looks suspicious, and I send off a report. And as this happened pretty quickly, the chance that the sending device was changing its pseudonym certs is pretty low, so it looks like three overlaps with the same pseudonym cert. If the global misbehavior detection looks at those reports, it can decide that B is misbehaving and should therefore be revoked.

How could you trick this? Again, this is on a conceptual level — we didn't do a proof of concept — but conceptually you could say: I have three cars hacked. If you're capable of hacking one car, the chance that you can hack a different car of the same model year and same brand is pretty high, because of the reuse in the automotive industry. So let's assume you were able, as an attacker, to gain access to the private keys of three different cars of the same model type.
Those cars are labeled here A, B and C. What you do now is pick a victim, which is V in this case, the yellow car. The attackers are red, and with your messages you create a couple of fake cars, A1, B1 and C1. So you're sending messages that make it look like you are in this lane, where you're actually not — you're just one lane to the left in this case. If you send out those messages over the V2X network, everybody will see them, and the other cars, the green cars around there, will notice those overlaps of your fake cars with the victim. As you have hacked the cars, you can exchange pseudonym certs for those fake cars — you have either 20 in the US or 100 in Europe, so you could use a different pseudonym cert for each message that shows an overlap. But you actually don't need to, because once you have overlapped three times, it looks like V is always the one overlapping, whereas the others each overlap only once — they're just overlapping with V. So with the currently implemented methods on the global side, the misbehavior authority would actually revoke the victim's car. As an attacker you would have reached your goal of getting this car out of the system, because now nobody trusts its messages anymore, and therefore in crash situations, for example, nobody will actually believe that the car is where it reports to be.

So what do we do? And that's a big question mark, because if you look at what different stakeholders are doing right now — I only found two stakeholders doing anything so far. First of all, the U.S. DOT: they took the work that they created together with CAMP and decided that the connected vehicle pilots being deployed here in the U.S. right now — in New York City, in Wyoming and in Tampa — should implement some form of minimum viable misbehavior detection product in their devices.
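The framing attack above boils down to a counting problem. A toy demonstration, under the same assumptions as the slides (threshold of three, naive per-car counting of reported overlaps):

```python
from collections import Counter

def naive_revocation(overlap_pairs, threshold=3):
    """Count how often each car appears in reported overlaps and
    return the set that crosses the threshold. This mimics the
    currently implemented global-side counting, not a real system."""
    counts = Counter()
    for a, b in overlap_pairs:
        counts[a] += 1
        counts[b] += 1
    return {car for car, n in counts.items() if n >= threshold}

# Three attacker ghosts each overlap once with the victim V.
# Each ghost appears once; V appears three times.
overlaps = [("V", "A1"), ("V", "B1"), ("V", "C1")]

# The honest victim is the one that gets revoked:
assert naive_revocation(overlaps) == {"V"}
```

This is why simply counting reports per target is not enough: the authority would also need to weigh who is doing the reporting and whether the reporters themselves are consistent over time.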
So the capability of doing local misbehavior detection in some form should be implemented, so that we get a somewhat larger scale test out in the real world — which is good. And the other part is a sensor based approach to misbehavior detection. This is the step where we start doing sensor fusion: we take LiDAR and radar information, add V2X information on top, and see if there are any devices that report a position which doesn't reflect physical reality. And if so, again report it back up. I think those are necessary steps, and I can only appreciate the work that the U.S. DOT is doing there.

But what else do we see right now? In France there's a project called Secure Cooperative Autonomous Systems, and they're actually looking into viable misbehavior detection approaches right now. The idea is that they implement the full chain: they follow the same approach — local and global misbehavior detection, creating reports, and then finally revoking or blacklisting. But that's it. And if you look at the news lately, our industry is deploying these systems. We have OEMs that are putting this into their cars; we have roadside operators that are putting devices out there. This is happening right now. But it looks like nobody is really solving the issue of what happens next whenever somebody is able to get around the first line of defense. This is a big concern of mine — I don't know if you share it. The question that I'm asking myself, and trying to ask you, is: why is that? Why is nobody taking this seriously enough? If you look at the applications — and I just showed 5 out of, I think, 50 or 70 applications that are defined by now for V2X — I think this is critical infrastructure. And for critical infrastructure we need to look at additional lines of defense beyond the first line, especially with systems like these that will operate for 30, 40, 50 years into the future.
We have to anticipate that at some point somebody will be able to get around this. So what do we do? Why is it that nobody actually takes this seriously? Any ideas in the audience? Nope. I have a couple of gut feelings here, so this is a hypothesis that I want to try out. Maybe there is some misconception about the status of V2X security. Maybe, especially in the higher ranks, people think that we have it covered: we are already working on this, we are already spending so much money on secure hardware and on figuring out how to protect the private keys, so that's enough, that's all we can do. Is that the case? Or is it just that somebody did a risk analysis and said, yeah, we don't have that many cars and roadside units on the road yet, so this is an issue which we can handle and figure out somewhere down the road? Could be.

What adds to this: in the last two years, I oftentimes saw papers published in the car hacking area where respectable researchers — and hats off to them — were able to get, for example, into an entertainment system. And when you read their papers you're pretty excited about it, especially when you work in automotive: hey, there was somebody able to get around my first line of defense. So what did they do with that? And then in the end, they kind of claim: we would have easily been able to control the car with that. And then — nothing. And I oftentimes think: okay, so you made me excited about your research; the media might take just this last line, saying they could remotely steer the car or brake or whatever. And then it's just "we might have been able" or "we would have been able" — but no proof of concept. From a researcher's point of view, I think — yeah, I don't know. And the feedback that I oftentimes get from the industry is then: yeah, you know, those researchers, they were able to get in there, but we have it covered.
"There was no chance that they would ever get to the steering wheel or the brakes and whatnot. This is just for highlighting their research in the press." I don't know. Maybe we need to get to the point where research takes this additional step. And I know it's hard work, I know it can be costly, I know it takes time, I know it doesn't make immediate news, especially if you are not able to create a proof of concept — but maybe we should go there and say: proof of concept or GTFO. Would that help? Because I think in the researcher and security expert community, we all have more or less an idea of what they would be able to do. But just one or two steps up the chain, it's all "no, we don't need to work on anything there, because we have it covered; there was no proof of concept, nobody demonstrated that, they're just claiming". So they don't do anything. They don't put the budget in place, they don't put the time in place, they don't put the people in place to work on that stuff. Maybe this adds to it. I don't know. Any other ideas?

Yeah. So the question was why we emphasize privacy so much — actually it was two questions — given that everybody is carrying around a cell phone already and locations get sent everywhere; and, on the other hand, isn't there anything else in the stack that you could use as a fingerprint, and therefore as an ID, to track devices? To the last question, my answer is: the idea is that there's none of that in there. MAC addresses get changed, all of the identifiers get changed, and there's already work — I think there are radio chips available — that makes it hard, if not impossible, to do RF fingerprinting and use that as an identifier. Why do we do that? Because, first of all, we are expecting regulation, and there are laws in place. I do understand that in the market over here they have to protect privacy, or at least give the option for it.
And second, I think the OEM industry has to cater not only to the general public but also to security- and privacy-conscious citizens. So there is an interest in the industry to protect your privacy, because we are broadcasting this information. It's not just Google or some app operator collecting your location data, where you maybe even have a chance to configure your device so that it doesn't send your location. No, we are broadcasting this. Everybody can collect this information; you just need to set up a passive listening device somewhere, and nobody would ever know that you're tracking. Anybody could do this, and that's the point: you don't want your spouse checking where you're going, or maybe you are a high-level target that needs to be protected. So we want to protect the privacy of our customers as best as we can. Yeah. If you turn off V2X, you lose all the safety applications, and that might be something you don't want. Okay, so where do we go from here? We certainly don't want to go back to those times, I hope, and just drop all the online connectivity out of the cars, because that doesn't work. Maybe for the last point I made about the proof of concept: maybe we in the industry should critically comment on publications that are coming out, making statements about the probability that an attack actually works and how good the research was. Maybe this helps to create a public picture of the work that is going on, and a more realistic view higher up in our organizations. Maybe we have to get in touch with the V2X application developers, because there are way more of them than there are security folks, and tell them: you have to define, for each of your applications, what misbehavior is, where it could lead, and how you can detect it within your application. Because right now it's a very small group of developers doing this, and as I said, there are 70, 80, whatever, applications out there.
There's no way we can ever catch up, so we need application developers to have at least a basic understanding of what the security issues are and what they can add to prevent them. Have a conversation within your company about V2X, raise the issue, and tell them what the problems are and that we need to work on them. There's another idea. Yeah, that would be pretty awesome. I think the Car Hacking Village actually discussed this two years ago: adding a V2X module to their badge in order to start working on this. Unfortunately they never managed to do it, or I don't know what the issues were, but there have only been discussions so far. But maybe that's actually a good idea. Maybe we should have V2X CTF challenges next year in the Car Hacking Village. Anything else? Yeah. No problem. I don't have an answer on this right now, and that's actually my point: we don't have the answers yet. We need more research, or more industry-led, applicable, large-scale testing of what research already did, in order to figure out whether it works. There are already all kinds of good ideas out there. The selected research approaches that I showed today are just a small subset of the ideas out there, and I chose them because they went further and actually tested how they perform: how quickly and how often do they identify real misbehavior under all the environmental conditions that we have in automotive, and how much does it cost? That's always a question; if it's too costly, nobody will implement it. So there are tons of good ideas. My point is that we need to work on them in order to get them implemented. Yeah. So there's an application: if you're in a distress situation, an emergency call for example, you don't change your pseudonym. There are a couple of sources for this. I don't know how much time I still have, but as long as nobody kicks me out, I'm happy to take more questions. Yeah. That's a pretty good idea.
So what is implemented so far in the US is that there is a CRL server that provides the certificate revocation lists, and it's the responsibility of the cars or the devices to connect on a regular basis to get them. With DSRC there's an issue, because you can't necessarily assume that the devices have cellular connectivity, so you need other approaches, like roadside units that provide a connection to the CRL server. Or there was another research approach with collaborative sharing, where cars that already have a fresh CRL share it with other cars that don't have it yet. But as far as I know it was never really implemented; there might be some simulation work going on right now. But this is an issue. When you look at C-V2X, the newer technology, which is different from the Wi-Fi-based IEEE standard, you already have a cellular modem in there, so the assumption is that you have much more regular connectivity to some backend and can therefore download this information. There's some margin of error built into the applications. Actually, you are not supposed to send BSM messages if your position accuracy is outside a certain threshold, which I think is 1.4 meters or something like that. But that depends on the device being able to tell that its GPS is out of accuracy. If you spoof, there's a good chance the device is not able to tell, especially if it doesn't have additional measures like dead reckoning for its positioning, and then it wouldn't know and would still send these messages. And in terms of overlap: again, this is the reason why we don't immediately report every overlap that we see locally, but wait until we have seen, say, three or five overlaps. And we haven't figured out the real threshold for this yet; again, large-scale deployments are missing so far to test this out.
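The accuracy gate just described can be sketched roughly as follows. The ~1.4 m figure is the approximate value mentioned above; the function and variable names are hypothetical and do not come from any V2X standard.

```python
# Sketch of the BSM position-accuracy gate described above. The ~1.4 m
# threshold is the approximate figure from the talk; all names here are
# hypothetical, not taken from the SAE/IEEE specifications.

POSITION_ACCURACY_THRESHOLD_M = 1.4  # suppress BSMs beyond this self-reported error

def should_send_bsm(self_reported_accuracy_m: float) -> bool:
    """Return True if the device's own GPS accuracy estimate is good enough
    to broadcast a Basic Safety Message (BSM)."""
    return self_reported_accuracy_m <= POSITION_ACCURACY_THRESHOLD_M

# The weakness discussed above: under spoofing, the receiver may still report
# high confidence, so this gate passes and bogus positions get broadcast.
print(should_send_bsm(0.8))   # honest, accurate fix -> True
print(should_send_bsm(3.0))   # degraded fix -> False
```

The catch, as noted above, is that this gate trusts the device's own accuracy estimate, which is exactly what a GPS spoofer undermines unless independent positioning such as dead reckoning is available.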
But the idea is that you don't report each and every overlap immediately, because there could be some weird GPS condition going on at that moment. So those are the hard, tough questions for the overall system. I can only answer them as well as I can, because that's not necessarily security; it's governance and politics and so on. My personal view is that we will see many CAs that are operated by the OEMs, for example, and that are independent of the country the vehicle is operated in. They might still be restricted to a certain market: there's a very small chance that a car produced for the North American market, which is Canada, the USA, and Mexico, ever gets shipped to Europe or China and operated there, so there could be different CAs for the North American region and for the European region. But there are also discussions that, for example, the U.S. DOT stands up a CA, and then the Canadian government has to do one and the Mexican government has to do one. So there are a couple of different concepts; as long as we don't have regulation, we don't know, and it's up to the industry to figure it out and decide. That's especially a question of latency, I think. I think that an overlap creates a suspicion locally that something is going on, because especially on the receiving end, for an overlap, you don't know which one is wrong: you get two messages from different cars that seem to overlap, and you don't know if car A or car B is wrong. This is the reason why this data is then sent off to a global misbehavior detection backend. The global backend collects a number of reports, and if the same device shows up in different reports over and over again, then there is a high probability that this device is somehow misbehaving.
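The two-stage scheme just described could be sketched like this: a local detector that waits for several overlaps from the same sender before filing a report, and a global backend that only flags a device once it shows up in reports from enough distinct reporters. The thresholds (three local overlaps, five distinct reporters) are purely illustrative; as noted above, the real values haven't been settled.

```python
from collections import defaultdict

# Illustrative thresholds; the talk notes the real values are still open.
LOCAL_OVERLAP_THRESHOLD = 3     # overlaps seen locally before reporting
GLOBAL_REPORTER_THRESHOLD = 5   # distinct reporters before acting on a device

class LocalOverlapDetector:
    """Counts position overlaps per sender; a single overlap could be a
    GPS glitch, so we only report after repeated observations."""
    def __init__(self):
        self.overlaps = defaultdict(int)

    def observe_overlap(self, sender_id: str) -> bool:
        """Record one overlap; return True once a report should be filed."""
        self.overlaps[sender_id] += 1
        return self.overlaps[sender_id] >= LOCAL_OVERLAP_THRESHOLD

class GlobalMisbehaviorBackend:
    """Aggregates reports: a device reported by many different vehicles is
    probably misbehaving, since a single receiver can't tell which of two
    overlapping cars is the wrong one."""
    def __init__(self):
        self.reporters = defaultdict(set)

    def file_report(self, reporter_id: str, suspect_id: str) -> bool:
        """Return True once the suspect crosses the revocation threshold."""
        self.reporters[suspect_id].add(reporter_id)
        return len(self.reporters[suspect_id]) >= GLOBAL_REPORTER_THRESHOLD

det = LocalOverlapDetector()
backend = GlobalMisbehaviorBackend()
for _ in range(3):
    report = det.observe_overlap("car_B")   # True on the third overlap
for reporter in ["car_A", "car_C", "car_D", "car_E", "car_F"]:
    revoke = backend.file_report(reporter, "car_B")
print(report, revoke)  # True True
```

Note that this also makes the latency problem discussed next concrete: several overlaps, then several independent reports, have to accumulate before revocation, and crashes can happen in that window.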
But between the first detected overlap, the decision on the backend side to actually revoke the device, and then getting the information out to all the other cars that this device is now revoked, there could actually be crashes happening. Does anybody know if I'm running out of time? No? Yeah. So that's another good example: if you have multiple local misbehavior detection approaches implemented, overlap-based and warning-based, then you have a higher probability of making a local decision about which one not to trust. But again, especially with the warning-based approach, it might be too late for the driver: when you figure out that there was no crash, there's a suspicion that somebody was sending false messages, but in case there is a crash, it's too late. Everybody was right, the messages were right, but it doesn't help you anymore. Okay, I get the signal that we are over, so if you'd like to continue the conversation, just hit me up after the talk, talk to my wife, or send me a message on Twitter. Thanks.