So thanks a lot for joining this session on securing the connected car. We'll see if more people join over time. It's always interesting to see the shocked expression on people's faces when they make all this noise with the door, so we'll see if we can see that again. But until then, let's look at what happened to the car over the last couple of decades. You can see that in the 1990s, electronics started to be introduced into the car. Close to 2000, you have telematics: basically, what's the fuel consumption, how far you've driven, things like that, statistics about the car. Then in the early 2000s, you start to get infotainment systems, basically maps and entertainment, information and audio and music systems. And then towards 2010, you see cars starting to get more and more connected as well. So you can see this trend that more and more software comes into the car, up until today. And going forward, right now there's a lot of focus on assisted driving in cars: assisted parking, for example, and warning systems if you drift outside your lane while driving. So basically, to make driving safer. Obviously there's a lot of intelligence there as well. And going forward, you've seen a lot of articles about what the bigger tech companies are focusing on in terms of automotive, autonomous automotive I should say. So yeah, that's self-driving cars, basically. A lot of big names are in that, like Apple, Google, Uber, and a lot of existing OEMs, or car manufacturers, are working a lot on that as well. So obviously, when we get there, there will be a lot of software running in the cars, and this has several implications. So, just a short introduction after this overview: my name is Eystein Stenberg. If you forget it, you can think of the German physicist, Einstein. That's a very easy way to remember it, especially in the US and at Starbucks. It's a classic.
I've worked in systems management and security for about seven years now, with a background in cryptography. I'm working on a project called Mender.io, which creates an over-the-air updater. So if you're interested in that, we also have a booth upstairs. This is the rest of the session, broken down into three pieces. First, we'll have a look at the opportunities we will have once this more advanced technology environment unfolds in the car, with more and more software. Then we'll have a look at some of the security implications. We'll go through one quite famous attack and go one level deeper to better understand what actually happened. And finally, we'll have a look at patching cars, and patching in general, and how it can be addressed. So why is more and more software appearing in your cars every day? One reason is obviously more net revenue for the car manufacturers. A new thing with cars, once you can connect them and update them and a lot of features are built on software, is that you will see the same thing that's happening with your smartphone, where you can actually buy features after the car has rolled off the manufacturing plant. This is now happening, people are starting to look into it, and there are some big numbers behind it as well. Tesla does this today; they are quite far ahead, as you might know, in the car manufacturing space. They have an OTA system, so consumers can buy additional software after they purchase the car. The other side of why you see more and more software is that you can also get cost savings, especially if you use open source software. So here is a typical IVI stack. IVI, a nice three-letter abbreviation, is in-vehicle infotainment, so basically the infotainment system.
At the top you have the HMI, or the UI basically, then some applications, middleware, maybe an updater, the operating system, board support packages, and hardware. The interesting thing to take away is that the cost and value trade-off here is a bit skewed. You can see the bottom layers are very expensive to maintain; approximately 60% of the time and effort is spent on these bottom layers. Meanwhile, the differentiation happens at the top, because you have new applications or new features. That's what the users will notice, and that's how they recognize the brand of the car manufacturer. So what makes sense here is to use open source for these bottom layers, because there is no differentiation there anyway. That's what I think will happen, and it has also started to happen now, with projects like AGL, Automotive Grade Linux, becoming more and more popular. Traditionally, all of this stack is developed by the car manufacturers or contractors for them. I mentioned OTA updates, and obviously with all this software that we will see in the cars, in particular open source software, you need to be able to update it, because as you know software contains bugs, and the more you have of it, the more bugs there will be. Unfortunately, you can also see that reality in today's car recalls: research by ABI Research says that 33% of them could have been avoided if an OTA updater had been available, or installed, on the cars. This number will only grow, and there's a lot of savings there, obviously. We'll have a look at one next, the Fiat Chrysler hack. How many of you have heard of it? Okay, almost everybody, so that's good. In that case 1.4 million vehicles were recalled, which is obviously quite expensive. Hopefully you don't know the entire story and the sequence of events, because that's what we'll have a look at. But this was the outcome, so to speak.
There was a hack presented at the Black Hat conference in 2015 by two researchers, Charlie Miller and Chris Valasek. They managed to get full control of a car remotely: the steering, the brakes, everything was under the attackers' control. And they didn't modify the car in advance either, so it was a true remote exploit. But the manufacturer didn't have any way to fix it once this happened, so that's why they had to recall 1.4 million cars. There's also an update from August where they extended the attack to update the ECUs. We'll have a look at the internals of the car, but basically they could update all the microcontrollers as well, over the CAN bus. To start off looking at how this happened, we'll have a look at how the car is laid out. So this is the head unit, and it contains a Wi-Fi hotspot. Many of these cars do that because the manufacturers want to offer subscription services. Basically, the car has a 3G connection, and it can distribute that connection over Wi-Fi to all the passengers. I still don't have kids myself, but I can see the scenario where it's useful: you give the kids a laptop or a tablet and Wi-Fi access, and you subscribe to that from the car manufacturer. You could potentially do tethering as well, but this is an alternative. And this Wi-Fi was password protected; that's how the setup was. So what the researchers started with was to have a look at this password and try to guess it, which wasn't that hard. It was based on the system time after the car was provisioned, which is pretty much the same on every car before the actual clock gets set correctly. So it was pretty much January 1st, 2013, 00:00, plus or minus a little bit. You can guess that in a couple of attempts. And then there was a software vulnerability in the multimedia system. So now you can go up to a car, guess the Wi-Fi password, and bridge into this multimedia system.
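The core weakness is worth spelling out: when a password is derived only from a guessable clock, the search space collapses to a handful of seconds around the assumed provisioning time. Here is a minimal sketch of that idea. The derivation function below is entirely hypothetical (the real head unit used a different algorithm), but the guessing loop works the same way against any scheme whose only input is the clock.

```python
import hashlib
from datetime import datetime, timezone

def derive_password(epoch_seconds: int) -> str:
    # HYPOTHETICAL derivation: hash the provisioning time and keep the
    # first 8 hex characters. The real algorithm differed, but the
    # weakness is identical: the only input is a predictable clock.
    digest = hashlib.sha256(str(epoch_seconds).encode()).hexdigest()
    return digest[:8]

def guess_password(matches, start: int, window: int):
    # Try every second in a small window around the assumed
    # provisioning time; `matches` stands in for a connection attempt.
    for offset in range(window):
        candidate = derive_password(start + offset)
        if matches(candidate):
            return candidate, offset
    return None, None

# Assumed provisioning epoch: January 1st, 2013, 00:00 UTC.
start = int(datetime(2013, 1, 1, tzinfo=timezone.utc).timestamp())

# Simulate a head unit provisioned 42 seconds after first boot:
secret = derive_password(start + 42)
found, offset = guess_password(lambda c: c == secret, start, 120)
```

A two-minute window means at most 120 connection attempts, which is trivial; a password drawn from a proper random source would make this loop useless.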
You could argue that this is not yet much of a safety issue. It's definitely a security issue, but now you can go close to a car and maybe turn up the volume or look up the GPS coordinates, and since you're already so close to the car, knowing where it is doesn't matter much. So not that severe, maybe. What they did to make it more severe is that they breached the Sprint 3G network, so now you can do the same things remotely: you can look up where the car is, and you can also control the audio system. So how many of you have heard about the CAN bus, or know a little bit about it? Okay, about half. All right, so this is an internal system bus on the car. About 70 electronic control units, including very safety-sensitive systems like the transmission, the brakes, and the airbags, are connected to this bus, and they send and receive messages on it. So it's definitely an area you want to keep secure from any vulnerabilities or third parties. In this particular car there was also a V850 chip, which was able to read from the CAN bus and send diagnostics information to the multimedia, or infotainment, system. The reason that has to be there is that if the tire pressure is low, you've probably seen that you get a notification about it, or you're running out of oil, or maybe there's something wrong with the brakes. You have to notify the driver about these things, so there has to be some flow of information from the CAN bus to this higher-level interface to the user. But the good thing is that this was designed in such a way that it's only possible to read, so the information should flow one way, theoretically. Obviously, if you change the logic in this chip, then you can make it write as well, and that's exactly what happened in this case, because remember, the attackers already had control over the multimedia system because of that vulnerability.
They were able to update this chip with new firmware. Obviously there should have been some authenticity checks here, so that not everybody can just update that chip, but there weren't. So now they can read from and write to the CAN bus, and that's how you can put this all together: you have a cellular breach through the Sprint network, a vulnerability in the IVI system lets you update the firmware of this V850 chip and thereby gain read and write access to the CAN bus, and when you have that, you can control the car remotely. So there are a couple of lessons. The Wi-Fi hotspot didn't get used in the end, but a guessable credential is a common theme we can recognize, and then there was a remote service that was accessible but vulnerable. This happens in other systems too, obviously. The update to the V850 chip, as we mentioned, did not have proper authenticity checks, and then of course, when the disaster happened, there was no way to fix it. So there are some lessons we can take from this attack, and as we mentioned, more and more software means more and more ways to attack the system. There's a statistic that there are between 1 and 25 bugs per thousand lines of code. If you are very good developers, maybe you have two bugs per thousand lines, but it obviously depends on the development process being used, and on how fast you need to get something to market versus how much you focus on security; that determines which end of that range you land in. But the point is that it's a ratio, so the more software you have, the more vulnerabilities you have as well. What we should do in order to end up at the lower end here is to rely on well-maintained software instead of writing it ourselves. There are also a couple of very well-known security design principles. The principle of least privilege: if something doesn't need to run as root, then it probably shouldn't.
Then there's separation of privilege: maybe a component like the media system didn't have to be able to directly write to this V850 chip. Maybe they could be separated, so that if you compromise one of them, you wouldn't be able to carry out both of these tasks. You see separation of privilege all over the place, also in terms of virtual machines and hypervisors and things like that. And then Kerckhoffs's principle is about cryptography. Basically it means that you shouldn't rely on security by obscurity: the only thing you should assume is secret is the key that you use in the cryptography. So don't invent your own secret algorithm and hope that nobody figures out how it works. Here's the third piece of this presentation; it's about patching. We'll have a look at why, especially in embedded, it happens so late. The picture you see here is from broad research on information systems, not just embedded, and it basically tells you how this plays out. If you look at the bottom axis, you see days, like 100 days and 200 days; I don't know if you can see it in the back. And on the vertical axis you see the probability of an exploit being public. So five to ten days after a vulnerability is found, there is less than a 10% chance that there is a public exploit for that vulnerability. After 60 days, there is a 90% chance or more that there is an exploit. And the problem is that the average time from when a vulnerability is found until it gets patched is 110 days, which leaves quite a big gap for just taking an exploit off the internet and applying it. You have this huge window here. Obviously, if you could move to a five-to-ten-day patch cycle, from when you know about a vulnerability until it's actually fixed, then you would have a much better probability of surviving. You could also ask why this happens: why do we not patch more frequently?
Obviously, if you're in the field of security, it's hard to show value, maybe until it's too late. You should have patched, but you can do other things until something bad actually happens. So it's an invisible problem. Another thing is that it can be very costly or risky, depending on how you do it and what kind of systems you are updating. Maybe it's a manual process, like in the Fiat Chrysler case, where you have to drive the car to the retailer in order to get it updated. That's obviously not cheap enough. And production downtime is never a good thing in terms of risk. So these are some of the causes. Then there are some things we in particular have discovered about patching embedded devices; maybe some of you have as well. How many of you have built your own over-the-air updater at least once? Okay, six people, so about 10% maybe. That's what we found a lot as well: people tend to build over-the-air updaters as homegrown projects, as part of development projects, in the belief that it's a simple problem. What can go wrong? You're just downloading some binary, installing it, maybe rebooting. But there are some extra problems you will have in embedded that you might not think about initially. First, physical access to the embedded devices is very expensive. If something does go wrong, then the device is bricked, as many call it, and what do you do then? Maybe you have to recall it. You have unreliable power: it could be a device that runs on battery, or it could be a user that just unplugs it. So you need to be able to handle this. What happens if you're updating the device and you lose power, or you're updating 1,000 devices and two of them lose power? Then you also have the connectivity issue, which on embedded devices can vary very widely.
It could be a 3G connection, maybe even Wi-Fi, but typically more of these low-bandwidth connections; devices can move around, so you can lose connectivity, or it can be very intermittent. So when you do updates, you have to manage that aspect as well. And of course, there's the security aspect of the network itself. In general, when you have embedded devices and you use wireless technologies, you should always assume that somebody could be listening or injecting packets as well. There are countless examples in the OTA space of some set-top box that just screams to be updated, and the first update it gets, it will just install, no questions asked. It's pretty common, unfortunately. These are some of the things you should worry about if you work on OTA updates, and also the reason why cars especially are a bit slow to adopt it: there are so many things to think about. I won't go through all of this, but this is the basic flow that an updater should follow. First you detect and download the update, obviously; then there are some integrity and authenticity checks to do; then you install it. And the last step is a bit interesting in embedded: what do you do when something goes wrong? You most likely want a rollback strategy for recovering from broken updates. There are also a lot of different ways you can apply updates. I think the most traditional way in embedded is to do full image updates. There are a couple of trade-offs in terms of size and installation time, the ability to roll back, and consistency. Consistency means basically: if you test something in your test environment and it looks good and works, what is the probability it will work in the production environment? That's what I mean by consistency in this context; it's consistency between devices, basically.
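That flow, detect and download, verify, install, roll back on failure, can be sketched as a small skeleton. This is not any real updater's code, just an illustration where the device-specific steps are placeholder callables passed in by the caller.

```python
import hashlib

def apply_update(download, expected_sha256, install, rollback):
    # Minimal sketch of the update flow: download, verify integrity,
    # install, and fall back to rollback if installation fails.
    # download/install/rollback are placeholders for device-specific logic.
    image = download()
    # Integrity check before touching persistent storage.
    if hashlib.sha256(image).hexdigest() != expected_sha256:
        return "rejected"
    try:
        install(image)
    except Exception:
        rollback()  # catch-all recovery: power loss, bad image, bad config
        return "rolled-back"
    return "installed"
```

In a real updater you would also verify authenticity (a signature, not just a checksum) before installing, and rollback would typically mean switching back to a known-good partition rather than calling a function.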
With full images this is pretty easy, because the devices are quite identical: the devices you have in test, you flash with the same thing as you flash the production devices with. So you get a pretty good sense of them being the same. But if you have a package-based update, that's the second column, this is a bit more tricky, because you might have a different set of packages installed in these two environments, maybe different versions of libraries. That's why it's a bit more tricky to deal with consistency in package-based updates. Rollback is also a bit harder. I think package-based updates are what people typically start with if they build their own. What did you guys use, those of you who built one? Full image? Full image, okay. And then rollback, of course: how do you do that with a package? If you uninstall it, that doesn't necessarily bring the system back to the same state. Another option is Docker, or containers. This is a newer approach to deploying updates that might be interesting. So how do you avoid breakage in general? You probably can't entirely, but there are some basic things you can do to reduce the risk. When you do an update of a device, you should have some kind of integrity check. Typically this comes in the form of a checksum, maybe a SHA-256. You ship the checksum with the data, computed on the remote end, and then you compute the checksum on the device and compare them, obviously after you've done the installation. You will catch basic problems with the network or storage corruption, things like that, and it's very easy to implement as well, so that's probably what you should start with. Then rollback is something we discussed briefly: basically the ability to go back to the previous state if the new version doesn't work, for whatever reason. It's kind of a catch-all strategy: if you lose power or connectivity, or maybe the application was just incorrectly configured, you can still go back to the previous version.
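One common way to get that catch-all rollback property with full-image updates is a dual (A/B) partition layout: write the new image to the inactive partition, and only commit to it once it boots up healthy, so the known-good side is never touched. Below is a toy model of just the bookkeeping, under the assumption of a hypothetical two-slot device; a real implementation lives in the bootloader, usually with watchdog-backed boot counters.

```python
class DualSlotDevice:
    # Toy model of A/B partition rollback bookkeeping (hypothetical
    # device, illustration only).
    def __init__(self):
        self.slots = {"A": "v1.0", "B": None}
        self.active = "A"

    def inactive(self):
        return "B" if self.active == "A" else "A"

    def stage(self, version):
        # Write the new image to the slot we are NOT running from,
        # so the current system stays intact whatever happens.
        self.slots[self.inactive()] = version

    def try_boot(self, healthy: bool):
        # Tentatively boot the new slot; commit only if it comes up
        # healthy, otherwise stay on the known-good slot.
        candidate = self.inactive()
        if healthy:
            self.active = candidate  # commit the update
        return self.slots[self.active]
```

This is why losing power mid-update is survivable in this scheme: until the commit, the bootloader still points at the old, intact partition.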
And this property is a bit more tricky to develop unless you have designed for it up front, in terms of what kind of update you are deploying as well. Then an interesting one is phased rollout. It actually has several names; another name is campaign management. All good IT words have many names, I guess. But it's used by many big infrastructure companies on the server side as well. For example, when Facebook has developed a new feature, they will deploy it to maybe Australia first, or some part of Australia, or users between 25 and 35; they have some way to segment it, depending on the feature. Then you do monitoring and you see: okay, do we see any spikes in the traffic, do we see any complaints from the users, any irregularities? If not, you can include New Zealand as part of the deployment, maybe, and then you expand from there. You can segment it in any way that you want. This is a very generic way to reduce the risk of making changes in general, and it's being used a lot. Actually, you probably do this already, because you have a test environment and a production environment, so you kind of roll changes through those stages. So the summary is that in order to have more security in the car, we should use open source software where there is no differentiation, or at least very well-maintained software, and focus on quality instead of quantity, I suppose. And there should be a way to make changes to the devices, because as you know, as engineers, things are never perfect. There needs to be a way to improve the product after it leaves the shop, basically. And then there are these well-known security design principles: Kerckhoffs's principle, the principle of least privilege, and separation of privilege. If you look at many of the attacks, they could have been avoided if one or more of these had been applied.
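A simple way to implement that kind of segmentation deterministically is to hash each device ID into a bucket from 0 to 99 and compare it against the current rollout percentage. The sketch below is just one illustration of the idea (the feature-name salt is my own addition, so that different rollouts pick different devices); its nice property is that expanding from 5% to 50% keeps the original 5% included.

```python
import hashlib

def in_rollout(device_id: str, feature: str, percent: int) -> bool:
    # Deterministically place a device into a bucket 0-99. Salting with
    # the feature name gives each rollout an independent segmentation,
    # so the same devices are not always the guinea pigs.
    h = hashlib.sha256(f"{feature}:{device_id}".encode()).hexdigest()
    bucket = int(h, 16) % 100
    return bucket < percent

# Expanding a deployment from 5% to 50% keeps the first wave included:
ids = [f"device-{i}" for i in range(1000)]
wave1 = {d for d in ids if in_rollout(d, "update-2.0", 5)}
wave2 = {d for d in ids if in_rollout(d, "update-2.0", 50)}
```

Because the bucket depends only on the hash, a device in the 5% wave (bucket below 5) is automatically in the 50% wave too, and no server-side state is needed to remember who got the update first.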
If you look a bit deeper at what happened with this Cherokee case, you can also ask yourself: what happens after an attack? On the left-hand side you can see how Fiat Chrysler responded. It's a bit small, so I'll read it: after their car got attacked, they said that the attack required unique and extensive technical knowledge, and that manipulating the software constitutes a criminal action. Tesla, on the other hand, actually got hacked in September 2016, quite recently, by a Chinese security research company, Keen Lab is the name, at least. And Tesla said they would pay them a monetary reward for their work: they helped us find a problem we needed to fix, and that's what we did. So they have quite different methodologies and approaches to improving. That's also something to think about: how do you want to manage these processes? So with that, I can take any questions or comments, if anyone has any. So the question is whether Mender does any third-party penetration testing. Yes, we have one going on right now, actually. We have a third party doing black-box testing, and we're also going to do more of a white-box testing approach. So we're planning to continue this and do it regularly. Yeah, absolutely. So in some cases you don't want the ability to roll back, because if you have version 1.0 of your product and you find a vulnerability, you upgrade to version 1.1, and then if an attacker can make you go back to version 1.0, they can still exploit the vulnerability. That's at least one case I know about. Was that what you were thinking about? Yeah, sure. I think there's a term for this, but I can't remember it right now. But definitely, in some cases you don't want the ability to roll back. The type of rollback I was talking about is more about automated rollbacks when you actually do the update. After the update has succeeded, you might not ever want to go back to that older version.
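The downgrade scenario from the question can be blocked with a monotonic version check on the client, which is the kind of guarantee frameworks in this space aim to provide. A minimal sketch, assuming versions are comparable tuples:

```python
def accept_update(installed: tuple, offered: tuple) -> bool:
    # Refuse anything that is not strictly newer than what is already
    # installed, so an attacker replaying an old (even validly signed)
    # image cannot push the device back to a vulnerable release.
    return offered > installed
```

So going from 1.0 to 1.1 is accepted, but replaying 1.0 onto a device running 1.1, or re-offering 1.1 itself, is refused; the deliberate, post-failure rollback discussed earlier happens below this check, inside the update transaction.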
And there are frameworks for this. There is one called The Update Framework. Have you heard about it? No? Okay, so there are some frameworks that will basically prohibit the client from going back to older versions. It sounds like a really easy problem, but like most things in security, it's quite hard, actually. So, any other questions? Yes, please. The question: there are some special issues with patching open source software relating to license compliance, for copyleft or attribution. Do I have any insights into how to stay compliant with the licenses and still deliver patches in automotive? So, how can we stay compliant with open source licenses when we do patching? I know it's a difficult question. You typically have to ship the actual software, and especially if you modify it, you have to ship the source code that you modified. And there are some tools for controlling this process. I'm not sure if you're familiar with the Yocto Project? Yeah, okay. Oh, you are from... okay. I'm not from the project, I'm just wearing my Yocto shirt. Oh, okay, you're not part of the project, got it. So then you know what I'm going to say, but there are some controls for how to avoid certain types of licenses when you do the build, or at least get a warning. But I don't have a much more specific answer than that, sorry. Any other questions? Okay. So thanks a lot for coming, and enjoy the conference.