Today's first talk is Jan Tobias Mühlberg and he will be presenting Sancus 2.0: open-source trusted computing for the IoT. So let's welcome him. Thank you very much everyone. Good morning ladies and gentlemen. So we had an introduction to the topic already. There are a couple of things here that involve trusted computing. I'll stay here. Good. We have Sancus, we have trusted computing. I'll go through these things one by one and first introduce the basic concepts of what we understand by trusted computing and how you can use it. I want to make sure that we understand that this is a talk about an open-source trusted computing platform, as far as I know one of the only open-source trusted computing platforms available at the moment, and that this is mostly about embedded security. So we're not looking into DRM for desktop systems or something like that. This is specifically a security mechanism for the embedded world. But you know, this is kind of the first talk of a security session, so I want to talk a little bit about what security is, how security normally works and what you have to do to actually get a system somewhat secure. And you see there are a couple of different systems on the right side of that slide that obviously have very different requirements, very different use cases, very different security properties as well. And I think it's very important that when we talk about security, when we design security solutions, that we don't just talk about features of specific systems, but that we know exactly what system we have, how that system is supposed to work in a specific context, what users do with it, how they interact with the system; that we then talk about what security properties we need for this kind of system; and that we understand how security can possibly be implemented and what requirements have to be satisfied by this implementation.
And I think it's very important that we understand that these security requirements are not features of the systems; they are what we desire, they define where we want to be. And based on this we then choose technology to implement specific things, but we can only choose these specific technologies once we know what kind of attacker we are supposed to deal with. If we are working on a system that is, you know, a smart fridge in a private home, we want to understand what attackers would potentially do with the system, how they would interact with it, what they could potentially achieve by abusing the system in one way or another. And obviously that is completely different from what you have when you look into a banking website, or into one of these little smart card readers for authenticating with a banking website, or when you talk about smart cars or safety-critical infrastructure like medical devices or the smart grid. In all these contexts we have very, very different requirements to deal with. So talking about requirements and talking about system models and attacker models also means that we have to deal with how attacks evolve over time. There is a very interesting xkcd comic that tries to explain how certain low-level attacks on CPU architectures or memory architectures work. So we've all heard about Spectre and we've all heard about Meltdown; Rowhammer came a few years before. All of these attacks, as they come along, change our understanding of systems very much, and if someone tells us there is a new zero-day vulnerability, if someone tells us that, oh well, cloud centres and data centres are actually also vulnerable to Rowhammer attacks, then we have to rethink these models.
We have to reconsider our design choices, reconsider our attacker models, and understand how we can adapt the security mechanisms that we adopted for the systems we are using to deal with these new things, or maybe change to completely different mechanisms. That's something very important, and that's why we have to rely upon open-source mechanisms to implement security, because I believe that that is one of the only opportunities for us to understand how these mechanisms work, what the exact security guarantees are, and how we can use them in a well-defined way to achieve something, rather than just putting layers and layers of complexity on top of each other without actually fixing the holes that might be in the system somewhere. So the fourth point here on my list of what you have to do to get security right is to embrace change: to understand how attacks evolve, how systems evolve, and to be able to adapt your system and your security methodology, your security approach, to these changes in attacker models, in vulnerabilities, in technology that emerges and that might allow you to implement better stuff. And I think that's the point that makes it important to think about security requirements not as features but as requirements, because only that allows you to then seek out a better technology, choose a better technology to implement security in your system. So I think that's the basic introduction.
Trusted computing. Yeah, well, treacherous computing; I was actually thinking about putting one of these treacherous computing stickers on my laptop before giving a trusted computing talk at FOSDEM, just to get some plausible deniability. So, trusted computing: according to the Trusted Computing Group we talk about protecting computations at endpoints, protecting data at endpoints, using hardware extensions to enforce specific behaviour. And well, that's also where it can easily go bad, and that's what my next slide is about. But for now, these are the things you normally have in a trusted computing system. There are endorsement keys that are hard-baked into your system and that you can use to bind specific software components or specific hardware components to a user by using certificates, and you can then use public and private key cryptography there to authenticate these systems to a remote third party, which kind of breaks your ideas of anonymity on the web pretty much. So we have this concept of a trusted third party that can act as a mediator to implement anonymity on top of it, but of course you have to trust it; there's no way around this in these settings. The interesting things that we mostly work with are what is in the middle there. So this kind of memory curtaining gives you the guarantee that, for example, a specific piece of data can only be accessed by a specific piece of software, and that the software cannot be interfered with. And we also have remote attestation, a process that I'll try to explain on the next couple of slides, which allows a remote party to attest that a piece of software, as it executes, is in a state that you can actually trust. And those are the building blocks that we talk about.
In practice there is a whole range of different architectures around already: think of Intel CPUs that implement the SGX technology, or ARM that implements TrustZone, or Trusted Platform Modules, which is what the next talk on this stage is going to be about. They all have different features, they all implement a different subset of, and different extensions to, these basic primitives. So it matters a lot which one you choose, and that you understand exactly what guarantees you get out of it. If you then look into the Wikipedia article about what you can potentially do with trusted computing, you find this kind of stuff, and I think most of us here will agree that it's actually not what we are interested in. What we were always wondering is: can you use these kinds of primitives to do something that really matters? Not to protect computer games or to do DRM, but to protect, for example, critical infrastructure. Can we use this to get some security guarantees about medical devices, about automotive computing, which is one example I'll show later, or about, for example, the smart grid, and guarantee that it's very hard for an attacker to use this infrastructure to, say, take down an entire country in a split second? Then there is the treacherous computing critique by Richard Stallman; I have a couple of sentences from one of his papers copied here. This is particularly interesting if you see the idea of trusted computing in the light of recent vulnerabilities in hardware and software. So I think it's very important to understand that trusted computing technology as we have it today does not help you against vulnerabilities in the trusted software you actually deploy; it will never help you against that. It just makes it harder to inspect these specific vulnerabilities, to understand what you're deploying, especially if you're on a platform that supports encrypted deployment of code and things like that.
So it also doesn't protect you against all kinds of side-channel vulnerabilities. These are explicitly out of scope for many trusted computing architectures, in the documentation and in the specification, and again you have to gain some understanding of these properties to understand what kinds of attackers you can protect against. Is that an attacker that just downloads some hack from a website, or are you trying to protect against a governmental institution that has really strong abilities to research and reverse-engineer your architectures and your designs, and use that to try to harm your critical infrastructure? Very different approaches here. So what kind of useful stuff can we do with trusted computing? That's where it all gets interesting. To do useful stuff, you first want to make sure that you use these primitives in a way that is understandable for the developers and for the entities that deploy software, that is non-invasive from a privacy perspective, that actually makes sense, and that aims to provide measurable security for systems that matter. And I think in the Sancus work, which has been going on since 2012 already, we've been trying to follow a couple of lines here that aim at enabling users to use this technology and to understand this technology rather than restricting them, and at building upon well-understood open-source components, like compiler technology from LLVM and an open-source microcontroller design, and sticking these things together and extending them to actually build an open-source trusted computing platform that is meant to be analysed, to be understood, to be reused in various contexts. So, I've been talking about remote attestation a couple of times already. I'm now trying to follow this approach of making you understand this technology by presenting exactly what remote attestation means in the context of, for example, Sancus-like architectures. For that, I think you first have to understand what kinds of risks there are in microcontroller systems.
So one common thing is that you don't have any form of isolation between different processes in these systems. If you look at ECUs in cars, for example, or control systems in the power grid and in industrial control systems, you will see a lot of very little microcontrollers deployed, like 16-bit MCUs, that often have no security mechanisms built in. That means if you deploy some application in these contexts, and then you deploy another application on your MCU, and then a third application, they all share the same address space and it's very likely that something goes wrong. If you have a buffer overflow in one of these different components, which might even come from different stakeholders, it's quite likely that, you know, a bug in one component compromises your entire system, compromises the interactions of other software modules that you rely on. So if our little robot there comes along and wants to get some measurements out of the system, wants to get a sensor reading from your microcontroller, wants to control an actuator, which might be a critical thing, a brake controller in a car or something like that, you can easily end up with situations where this is not reliable, where you have no assurance of the integrity of your software or of the security of the communication whatsoever. We address exactly this situation with the Sancus approach, and with trusted computing architectures in general. So one thing we give you with this approach of memory curtaining is strong isolation between different components. For example the third component, the red one down there, just can't overlap with other components in the system anymore once they use these technology primitives. That is, we introduce a little unit in your microcontroller that is just a few gates big and that allows us to capture certain properties of each software module.
For example, we would want to know where the text section of that module is, which is the actual code you're executing; we would want to know where the data section of that particular module is; and we want to know, of course, the value of the program counter. And then we can, for example, very simply say that only if the program counter points somewhere into the text section can we access the data section. Otherwise, the data section is not available. So no other module would ever be able to interfere with the confidential state of that one module we are trying to protect. Now we can also use these properties of a specific software module that we captured to, for example, generate unique keys that are related to that module. So we again extend our microcontroller a little bit, by sticking in another little unit, and say that we use the module identity, let's say the text section and the addresses of the data section and so forth, to generate a key that is specific to that module. We can use that key, even with a third party, even with the deployer of that software, to securely communicate with the module, and get some guarantees that this specific module has been deployed in a secure way on a microcontroller. Now of course there are some issues here. If you think about just generating a key directly from the module code and the loading context, that would not be really confidential, because anyone could potentially know where your module is deployed. So you need some additional steps. For example, you want to use that endorsement key that I've mentioned before, which is baked into the hardware, integrated into the key derivation process, to actually get a proper secret that also binds the software module to a specific microcontroller. But that's essentially all that attestation is about.
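As a rough illustration of that access rule, here is a minimal C sketch with hypothetical names and a simplified flat address space. The real check happens in hardware on every memory access; this just simulates the decision.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical layout of one protected module: a text (code) section
 * and a private data section, each a half-open address range. */
typedef struct {
    uint16_t text_start, text_end; /* code section [start, end) */
    uint16_t data_start, data_end; /* private data section [start, end) */
} module_layout;

/* Returns true if an access to `addr` is allowed while executing at `pc`:
 * the module's private data is reachable only while the program counter
 * lies inside the module's own text section. */
static bool access_allowed(const module_layout *m, uint16_t pc, uint16_t addr)
{
    bool in_data = addr >= m->data_start && addr < m->data_end;
    if (!in_data)
        return true; /* not this module's private data: no restriction here */
    return pc >= m->text_start && pc < m->text_end;
}
```

Note that the rule is purely about addresses and the program counter, which is why the hardware unit implementing it can stay tiny.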
So in this specific context, our little robot knows pretty much for sure that he's communicating with the right piece of software, and he can even extend this to multiple pieces of software. For example, if these other modules also use our technology, he can extend these guarantees to say: okay, our module is running on a specific microcontroller, and it expects another module with a certain identity, with a certain key, to run on exactly the same microcontroller, and therefore we get a chain of trust. So you can basically use this to get a trusted path on one microcontroller that extends from a piece of application code to a driver module to actual hardware access. That's what we do in Sancus. And then, of course, if you have this kind of secure communication remotely, you can extend this to remote sites, and you get guarantees that say that, essentially, you have a trusted path between input and output components, input and output device drivers, on different nodes of a distributed application. I think that's a pretty cool thing; that's a pretty strong guarantee that we've not seen in industrial control systems so far. But you know, if you want that kind of stuff, then where do you go? What kind of technology do you have at the moment to implement that? Well, here is a map of hardware and trusted computing architectures that give you attestation, that give you isolation, and if you look at the open-source column on the right there, and if you also look at the lightweight, embedded column, you see that basically Sancus is the only thing that is available at the moment. You don't have to take pictures, the slides are online.
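The attestation exchange described above can be condensed into a few lines of C. This is a hypothetical simulation, not the Sancus API: the toy keyed hash stands in for the MAC the hardware would compute with the module key, and it is of course not cryptographically secure.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Placeholder keyed hash (FNV-style) -- NOT a secure MAC. */
static uint32_t toy_mac(uint32_t key, const uint8_t *msg, size_t len)
{
    uint32_t h = key ^ 0x811c9dc5u;
    for (size_t i = 0; i < len; i++) {
        h ^= msg[i];
        h *= 16777619u;
    }
    return h;
}

/* Node side: the hardware MACs the verifier's fresh nonce with the
 * module key, which only exists if this exact module is deployed
 * on this exact node. */
static uint32_t attest_respond(uint32_t module_key, const uint8_t nonce[4])
{
    return toy_mac(module_key, nonce, 4);
}

/* Verifier side: the software provider can derive the module key
 * offline, so it recomputes the expected response and compares. */
static int attest_verify(uint32_t expected_module_key,
                         const uint8_t nonce[4], uint32_t response)
{
    return toy_mac(expected_module_key, nonce, 4) == response;
}
```

The fresh nonce is what makes the response unforgeable by replay: an attacker who recorded an old exchange cannot answer a new challenge.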
So I think that's quite a unique thing for Sancus at the moment: we are really the only architecture available that you can basically download, deploy on FPGAs, that you can experiment with, that you can use to build experimental software and to do research, to actually find out how to program these kinds of modules, because isolation is often not easy to deal with, and how to develop application scenarios on this kind of architecture. What I'm going to present on the next couple of slides is a bit of an introduction to what we've done in Sancus and how it works, and then I want to present an application scenario from the automotive domain that shows you how you can securely communicate within a car and make sure that, for example, rogue microcontrollers and remote attackers cannot interfere with the communication that controls the critical behaviour of the car. I have a little video, if there's still time at the end, to show that kind of stuff off. So, Sancus. Sancus is built upon a 16-bit MCU, the openMSP430. It's an open hardware design that you can just download, in its Verilog hardware description, and extend with whatever kind of stuff you want. Specifically, we put in a little memory access control unit that gives you that kind of isolation guarantee, and we also built in a crypto core that allows you to store keys in hardware, so that they never leak to software, so that they can only be used by specific crypto algorithms through a hardware API and cannot be used in other ways. Where do we start with this software module and software isolation stuff? So here it is. In Sancus, you would see a software module, as I explained before in the attestation example, as a text section that contains the executable code of your module, and a data section that is located somewhere else in this contiguous address space and contains the runtime data, the runtime state, of your module.
And we use these two to actually generate an identity for the module. That is, we use maybe a hash of the text section, plus the start and end addresses of the data section, to say that a module is linked and deployed on a specific node in that kind of setup, and we can combine this with a key that is baked into the hardware to generate module-specific keys. There is another extension that I have been talking about before, and that is the entry point you see there at the beginning of the text section. The interesting thing is that Sancus modules cannot be accessed at any given point. You can't just jump to some arbitrary location in the text section or call an arbitrary function. Instead, you declare specific functions to be entry point functions of your protected module, and then you can only jump into these specific entry point functions. And if you then make sure that the interfaces of these functions are sufficiently restrictive, that there are no, let's say, function pointers passed, or stuff like that that you could easily abuse for a return-oriented programming attack, you actually get very strong guarantees that your modules are pretty hard to abuse, that an attacker cannot easily manipulate your module to change the control flow and make it perform cryptographic operations, for example, that are not intended in the application source code. An interesting side effect of this kind of setup, extending a very simple microcontroller, is that I think we get quite a strong understanding of how the architecture works, how instructions are processed, how side channels evolve or don't evolve, which is something you typically don't get on larger architectures like, for example, x86. There's no pipelining here, there's no speculative execution. So this is even Spectre-resistant, I believe, but of course there's no proof; that's something we still have to do. This is also a very power-efficient architecture.
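A minimal sketch of that entry-point rule, again in C with made-up names: control flow arriving from outside the module may only land on the declared entry point, while code already inside the module can branch freely. In the real architecture this is enforced by the hardware on every jump; here it is just a model of the decision.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint16_t text_start, text_end; /* protected code section [start, end) */
    uint16_t entry;                /* the single declared entry point (simplified) */
} protected_module;

/* Is a branch from `pc` to `target` allowed under the entry-point rule? */
static bool branch_allowed(const protected_module *m,
                           uint16_t pc, uint16_t target)
{
    bool from_inside = pc >= m->text_start && pc < m->text_end;
    bool to_inside   = target >= m->text_start && target < m->text_end;

    if (!to_inside)
        return true;           /* branches outside the module: unrestricted here */
    if (from_inside)
        return true;           /* intra-module control flow is free */
    return target == m->entry; /* the outside world must enter at the entry point */
}
```

This is what rules out jumping into the middle of a cryptographic routine: the only reachable code paths are the ones the entry point functions expose.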
So the MSP430 basically runs for, I think, 12 years on an AA cell; that is more than the shelf life of that cell. We have extended it a little bit, especially with the crypto core, and of course the power consumption rises because of this, but that's only about 6% if you're not using the crypto core continuously. So basically we still have a very long runtime on battery power, a CPU that is still suitable for energy harvesting or similar technologies. There's a link to the source code of the architecture and the support software if you want to have a look at it, if you want to use it. I want to talk a bit about how keys are managed, how keys are derived in this architecture. So essentially what you have is a computing node, a Sancus MCU, we call it N in this context here, and each of these things has a key baked in. At the moment these are static keys, but you can think of using physically unclonable functions to generate these keys initially, something like that. Now from that key we generate a software provider key. So we can name a software provider, we just give him an ID, and we can use this ID to compute a key that is specific to that software provider. And then, using this key, the software provider can deploy software, and the identity of each of the software modules he or she deploys will be used to compute another key, the software module key. Only the software provider can infer this key offline, independent of what happens on the MCU, and communicate securely with a specific module. So those are the guarantees that you then use to securely communicate and to establish attestation guarantees in your network of Sancus modules. Now, what do we do with all this? I've been presenting this slide a lot of times in the last couple of months. You might have heard that cars can be hacked, right?
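That two-step derivation can be sketched as follows. This is a hypothetical simulation: a toy keyed hash stands in for the key-derivation function the hardware implements, and the key and identity encodings are made up for illustration.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Placeholder KDF (FNV-style keyed hash) -- NOT cryptographically secure. */
static uint32_t toy_kdf(uint32_t key, const uint8_t *input, size_t len)
{
    uint32_t h = key ^ 0x811c9dc5u;
    for (size_t i = 0; i < len; i++) {
        h ^= input[i];
        h *= 16777619u;
    }
    return h;
}

/* Provider key: derived from the node key N and a software provider ID. */
static uint32_t provider_key(uint32_t node_key, uint16_t provider_id)
{
    uint8_t id[2] = { (uint8_t)(provider_id >> 8), (uint8_t)provider_id };
    return toy_kdf(node_key, id, sizeof id);
}

/* Module key: derived from the provider key and the module identity
 * (hash of the text section plus the section addresses). */
static uint32_t module_key(uint32_t prov_key,
                           const uint8_t *identity, size_t len)
{
    return toy_kdf(prov_key, identity, len);
}
```

The point of the hierarchy is that handing a provider their provider key lets them derive every one of their module keys offline, without ever learning the node key or any other provider's keys.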
So there have been a couple of papers in the last couple of years that show that you can use the remote connectivity of a car, the Wi-Fi or cellular connectivity, to interfere with a car's safety-critical functions. Researchers have shown that you can actually remotely steer a car off the road while it is driving, make the car engage the brakes, and things like that. This is pretty bad. And we've been employing Sancus in a demo setup to actually mitigate this issue, by not just providing authentication on the internal buses that provide this kind of functionality, that exchange messages between the different controllers and make sure that these controllers interact and control the car. So you need cryptography to protect these messages, but then again, what happens if you're an attacker and you manage to redeploy software somewhere, or you manage to abuse the software on a component to make it encrypt or decrypt something? You would basically be able to control a lot of the car's behaviour with this kind of attack, and then of course your cryptographic keys would probably leak. And, well, the baseline is: you need guarantees on top of just message authentication on these automotive buses, guarantees that give you some form of software security, that allow you to measure the state of the individual software modules, that allow you to protect keys from being abused by malicious code, by return-oriented programming attacks and things like that. So we've actually come up with a neat design for deploying software modules in such reactive, safety-critical systems. And we've also been building a little demo setup that involves a car dashboard, some actual car components, interacting with a couple of ECUs that run our Sancus hardware, and that shows that an attacker can interfere with the unprotected components, and might harm the availability of the critical components, but does not interfere with the security of the protected components.
So how does it all work together? A car is a fairly complex bus system. You have a lot of electronic control units installed there, and they all interact via different networks; CAN is just one of them. There have been specifications for doing message authentication on these bus systems; they have been in the AUTOSAR standard since 2015, I think, but as far as I know there are no industrially viable implementations yet. There have been academic proposals for communication protocols that give you some form of key exchange between these components and give you message authentication there, so it would be pretty hard for an attacker to just inject messages. One approach is LeiA, and we have the first author of the LeiA approach sitting here in the front row. Another approach is VatiCAN; I don't think we have anyone from VatiCAN here today. So this has been proposed, but there are no efficient implementations yet. One issue is that if you do your cryptography in software, you also have a timing problem, because many of these microcontrollers are just not fast enough to do this in a way that lets you actually maintain full control over your car and brake in time. You need some hardware support to do that. The next thing is, of course, what about software security? As I explained before, if you were able to run a return-oriented programming attack against one of the software components on a controller, would you leak keys? Would you be able to control the software to send fake messages, or to inject messages that are encrypted, that are signed, that are, so to say, authenticated for the network, and control the car? Probably yes, and that's something we wanted to address here.
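To make the message-authentication idea concrete, here is a much-simplified C sketch in the spirit of such protocols; it is not the actual LeiA or VatiCAN design. Each frame carries a monotonic counter for freshness and a MAC over counter and payload, computed with a key shared between sender and receiver. The toy keyed hash stands in for a real MAC, and real protocols additionally truncate the tag to fit on the bus.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Placeholder keyed hash -- NOT a secure MAC. */
static uint32_t toy_mac(uint32_t key, const uint8_t *msg, size_t len)
{
    uint32_t h = key ^ 0x811c9dc5u;
    for (size_t i = 0; i < len; i++) { h ^= msg[i]; h *= 16777619u; }
    return h;
}

typedef struct {
    uint16_t can_id;     /* CAN identifier */
    uint32_t counter;    /* freshness counter, must increase monotonically */
    uint8_t  payload[8]; /* classic CAN data field */
    uint32_t tag;        /* MAC over counter || payload */
} auth_frame;

/* Sender: compute the tag before putting the frame on the bus. */
static void send_frame(auth_frame *f, uint32_t key)
{
    uint8_t buf[12];
    memcpy(buf, &f->counter, 4);
    memcpy(buf + 4, f->payload, 8);
    f->tag = toy_mac(key, buf, sizeof buf);
}

/* Receiver: accept only fresh frames with a valid tag. */
static int accept_frame(const auth_frame *f, uint32_t key, uint32_t *last_ctr)
{
    uint8_t buf[12];
    memcpy(buf, &f->counter, 4);
    memcpy(buf + 4, f->payload, 8);
    if (f->counter <= *last_ctr)
        return 0; /* replayed or stale frame */
    if (toy_mac(key, buf, sizeof buf) != f->tag)
        return 0; /* forged or corrupted frame */
    *last_ctr = f->counter;
    return 1;
}
```

The counter is what stops an attacker from simply recording a valid "brake" frame and replaying it later; the tag is what stops injecting fabricated frames without the key.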
So what you need, on top of a protocol that gives you message authentication, is a way to encapsulate software security and software component authentication in that system, and to do it all fast enough, probably with hardware support for cryptographic operations, so that it's actually viable in a real-time context, in a safety-critical system. And essentially that's where Sancus comes into play. In such a car network you have a lot of software, right? You don't just have your critical application, which consists of, let's say, a device driver for your brake pedal, a piece of application code that transmits brake messages to some ABS system, and a piece of software on the ABS system that gathers information from the different rotation sensors and the state of the individual brakes to send out new signals. You also have little operating systems, you have a lot of software that implements communication protocols, and all that forms a pretty big software stack. So what we're trying to do with our approach here is to isolate the critical application code in these protected modules, so that only they are able to encrypt and decrypt, to use the hardware crypto and protection primitives of the Sancus architecture, while all the rest is untrusted. That doesn't mean that this untrusted software is unnecessary. It's certainly necessary for functionality, but it's not necessary to maintain the security properties of the distributed application you have in that car. In a traditional setup, you have various ways in which this kind of security can be harmed. You have rogue control units; someone might just stick in one of these black-box dongles from your insurer that could potentially inject messages; someone might run a remote attack against the car.
This is basically all ruled out by our approach, because you first have this kind of software authentication, you then have message authentication over the bus, and trusted computing, Sancus, is the infrastructure to implement this. Now, talking about performance, there are some measurements we've done. The interesting perspective here is that if you do it in software, your authentication will typically need six or seven milliseconds on an AVR controller or on a Sancus-style controller. If you do it in hardware, you are actually fast enough: you're below the three-millisecond margin, which allows you to use this kind of stuff in an automotive context, in a safety-critical system. That's quite important. Another interesting observation is that the software isolation we do, on top of just doing cryptography, isn't very expensive. It imposes an overhead of maybe about 5%, which is typically quite doable within the safety margins of these systems. And of course, importantly, there are many, many applications of this kind of system that go far beyond automotive computing. If you think about smart grid applications: the smart grid of Ukraine has been hacked quite recently. We really want to protect these kinds of systems, in a European context, in a national security context. We want to protect medical devices. We want to protect industrial control systems that could potentially also be abused, with disastrous consequences for our society, for our countries. So there's a whole range of applications where our technology can apply. And what we outline in the paper I'm citing there at the bottom is essentially a notion of authentic execution that you can use in these contexts. Authentic execution means that you can explain all outputs of one of these embedded control systems based on the inputs you give to that system and the software code of the system.
Modulo, of course, bugs in the hardware; modulo, of course, bugs in the software. But these are orthogonal lines of research, orthogonal lines of work in testing and in system verification, to give you very strong guarantees that you've actually implemented the right thing and that there are no easily abusable channels or mechanisms in these components. That's what you have to rule out. So we're actually almost at the end of this talk. I still have a video to show and I still want to summarise. I think the key thing for security is to focus on understanding: understanding systems, understanding attackers, understanding security requirements. Without this, you will never be able to design lasting, reliable security solutions for anything. And it is my quite strong belief that to do this, you need to rely on open-source components, be it hardware, be it software. I believe that there is no other way to come up with a solution that is adaptable, that can be well understood, and that can actually give you some lasting notion of security for these systems. The next thing is: trusted computing is just one little building block that gives you certain features, like attestation, like key management, for example, that you can stick into specific systems, but you have to be willing to adapt. So if it turns out at some point that certain trusted computing architectures are not reliable anymore, or that you need stronger guarantees than your choice of architecture provides, you either want to adapt the architecture, or you want to adapt your system and your attacker model and go for a new design. Now, Sancus is made exactly for this. Sancus is made for understanding.
Sancus is, to a large extent, a research prototype that we built to understand how to develop this kind of software, how to use security primitives as we know them from trusted computing, and how to analyse this whole system to understand the implications, to understand the weaknesses, to understand how to adapt these systems and change them so that they can work as reliable security solutions in future scenarios. So Sancus is particularly neat in that sense: it's open source, it builds on open-source hardware and extends that hardware, and it has a very small trusted computing base, only in hardware, that allows you to do these key derivation steps, to do attestation steps, attest software, attest and isolate multiple individual components and make them interact in a secure way, even in a distributed context where you have a huge network of these little microcontrollers that somehow must interact to achieve something: some safety-critical functionality, some IoT sensing functionality, something like that. There are many applications of this kind of stuff. But of course, talking about this as a research prototype, there's a lot of ongoing and open work to look into. We've worked a lot on the parts that are green up there, and I think we understand them quite well. One of them is IoT trust assessment. So, can you use an individual trusted software module in the context of an untrusted, let's say Contiki-like, operating system kernel, and use this one module to do some inspection of the running system? Can you, for example, make sure that you get reliable information, that no one can mess with, about the software that is currently running, maybe even about the integrity of that software? Has it been modified? Has an attacker been tampering with the list of processes? Things like that. That's what you can do here.
Of course, that's kind of invasive already, because that goes into the direction of treacherous computing. Another thing we've been looking into is secure I/O. So this is what I've been talking about before, this notion of extending a trusted path from input devices, over device drivers, over application modules that do some data processing, again over device drivers, to output modules. I think we have a strong, even formal, notion of how this can work and what security guarantees we get out of it. The third thing is programming models for developing these applications and deploying them. So we came up with a little extension of C, a reactive-style language that you can use to write application modules that can then even be deployed on multiple nodes and interact securely, and the whole key negotiation, the whole configuration of that network, is pretty much automatic. That software is also in the public repositories. But of course, there's a lot of stuff we don't understand very well. One is, of course, talking about a research prototype, that there are maturity issues. So I think at the moment we are still not really sure what the minimal hardware extensions are that you need to implement these kinds of guarantees. Is that really a crypto core? Is that really a memory management unit? Or would you maybe also need something more like hardware-supported scheduling mechanisms? Or maybe would you need less? That is to say, doing your cryptography in software is maybe even much better, because then you can change it; you can adapt to faults, to vulnerabilities that are found in these algorithms. There are a lot of issues there. Of course, we also have issues with tool maturity. So this whole thing builds on LLVM, which, for our architecture, has a couple of weaknesses, and the kind of compiler architecture we've been building around this has its own problems and needs to be enhanced. There's a master's student working on this at the moment.
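The reactive programming model mentioned above can be sketched in plain C. This is only an illustration of the pattern, not the real Sancus tooling syntax: a module exposes input handlers and raises output events, its state stays private (on real hardware, in isolated memory), and a deployment step wires one module's outputs to another module's inputs, possibly across nodes. All names here are invented for the sketch.

```c
#include <stdint.h>
#include <stddef.h>

/* A module output is an event the framework routes to a remote input. */
typedef void (*output_cb)(uint32_t value);

/* Module-private state; on real hardware this lives in isolated memory
 * that only the module's own entry points can touch. */
static uint32_t last_reading;
static output_cb on_alert;   /* output: raised when a threshold is exceeded */

/* Input handler: invoked with an (already authenticated) sensor value. */
static void input_sensor(uint32_t value)
{
    last_reading = value;
    if (value > 100 && on_alert)
        on_alert(value);     /* raise the "alert" output event */
}

/* Deployment step: the framework connects outputs to remote inputs and
 * handles the key negotiation for that connection automatically. */
static void connect_alert(output_cb cb)
{
    on_alert = cb;
}

/* Tiny demo sink standing in for a remote module's input handler. */
static uint32_t alerts_seen;
static void demo_sink(uint32_t v) { alerts_seen = v; }
```

In the real system the connection between `input_sensor` and `demo_sink` would cross node boundaries over an authenticated channel; here the callback just models that wiring.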
There's a lot of work to be done to actually make this a product; that's what it is not at the moment. Then we also have to understand attacks against these architectures better. So even assuming that isolation gives you very little chance for software attacks like return-oriented programming or something like that, we still need to understand side channels in this context. What can you learn, for example, by observing timing behavior of the system, by observing memory access patterns, by observing interrupt behavior? There are many options here, especially in the context of an embedded device, something that you might have on your premises, that you might have unlimited access to, that you can screw open and disassemble and inspect, that you can interact with by sending signals, that you can interact with by controlling timing aspects and so on. There are many issues that an attacker could potentially abuse. Another thing we are working on right now: in the scenario of these automotive components or power grid components or medical devices, for example, we really need real-time guarantees. And understanding real-time is not easy, especially if you do cryptographic operations on top of that, because those do not take a fixed amount of time. So you normally want to do some form of worst-case execution time analysis on your software, to know that it's done within, let's say, one millisecond or something like that. But if you do cryptographic operations, if you have a cryptographic core in your component that needs to be locked to execute an operation, you have to wait for it to come back; all these kinds of interactions are very difficult to manage for these kinds of systems. And of course, I've mentioned it before, you want to have very strong formal guarantees that this software, that this hardware, cannot be tampered with.
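One small, concrete example of the timing side channels mentioned above: a naive byte-wise tag comparison that returns at the first mismatch leaks, through its running time, how many leading bytes of a MAC an attacker guessed correctly. The standard mitigation is to accumulate the difference over all bytes so the comparison takes the same time regardless of where the tags differ. This is a generic textbook technique, not code from the Sancus repositories.

```c
#include <stdint.h>
#include <stddef.h>

/* Constant-time tag comparison: OR together the XOR of every byte pair,
 * so execution time does not depend on the position of the first
 * mismatch. Returns 1 iff all bytes match. */
static int tags_equal(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```

On a small microcontroller without caches this loop really is data-independent in time, which is one reason constant-time coding is more tractable on this class of hardware than on a desktop CPU.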
So you want some form of formalization or formal verification over these systems that actually makes sure that if you deploy a piece of software, and it passes through several steps, it has no bugs, it has no memory safety vulnerabilities like the Heartbleed thing, which you could easily abuse if it's ever deployed in a protected module. And that's an open line of research, an orthogonal line of research, that I'm very interested in, but it's not covered at the moment to an extent where we could say we understand this, we can deploy this kind of software. I think that's the end of the technical part. I still have a video for you; I really want to show you a video. So there's the setup. There's no audio channel, so I'll just talk a little bit over it. You see essentially a car setup that has several little microcontrollers, on the left side, on the right side, there are these things there. It also has two dashboards. Normally you don't have two dashboards in a car; in this case we have two because the right side is showing you how an unprotected car would behave when it is under attack, and the left side shows you how our protected setup would behave. So you have central components that could, for example, simulate an ABS system, and you have these little ECUs on the outside that could, for example, be brake controllers or controllers that measure rotation speed. You have one keyboard, that's the left keyboard, which I'm pressing right now, which is used to simulate driver interactions. So that could be your brake pedal, your steering wheel; you can increase traction on one side, you can increase speed, and that is all displayed on both dashboards, and the individual microcontrollers will receive these messages, do authentication, find out that they're legitimate, and then react or not react. Now there's an attack going on: you can use the red keypad to actually simulate the attack.
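The receive path of a protected ECU in this demo can be sketched in a few lines of C. This is an illustrative toy, not the real demo code: `toy_mac` stands in for the actual authenticated-messaging primitive, and the names are invented. The point is the policy: a message whose MAC does not verify is rejected and the controller keeps its previous value.

```c
#include <stdint.h>
#include <stddef.h>

/* Toy keyed hash standing in for the real message-authentication
 * primitive; illustrative only. */
static uint32_t toy_mac(uint32_t key, uint32_t msg)
{
    uint32_t h = key ^ msg;
    h ^= h >> 16;
    h *= 0x45d9f3bu;
    h ^= h >> 16;
    return h;
}

/* A protected ECU: a shared key and the last accepted actuator value. */
typedef struct {
    uint32_t key;
    uint32_t value;
} ecu_t;

/* Receive path: update the state iff the MAC verifies; otherwise
 * reject the message and keep the previous value. Returns 1 on
 * acceptance, 0 on rejection. */
static int ecu_receive(ecu_t *ecu, uint32_t msg, uint32_t mac)
{
    if (toy_mac(ecu->key, msg) != mac)
        return 0;            /* bad MAC: reject, state unchanged */
    ecu->value = msg;
    return 1;
}
```

An attacker on the central node can inject messages, but without the key every injected message fails this check, which is exactly the "needle stays put" behavior on the protected dashboard. (A real protocol would also include counters or nonces against replay, which this sketch omits.)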
So an attacker will inject messages, will execute software on the central component, and this will be detected by the protected ECUs, which show that, okay, there's something going on, there's a bad message being received that cannot be authenticated, and this results in the message being rejected; the controller stays with its previous value. The interesting thing is, you see there's a needle dropping on the secure display on the left side. So there are implications of this attack. If an attacker controls software on a central computing node, of course, he will probably do something. He will, let's say, inhibit the brake system from working properly, he will inhibit messages from being passed from one component to the display or something like that, but he does so on only one ECU. He probably will not be able to change the safety-critical behavior of the entire network, and that's an important guarantee we get here. So by controlling this unprotected software, you lose availability guarantees of your network, but you would probably still be able to steer your car in a safe manner off the road without just crashing into the next tree. That's it, I think I'm done. Thank you very much for listening to my ramblings and I'm happy to answer your questions. Can you put your hands up? From your future work slides, I took away that you do not have preemption right now. So on your architecture, would something like a microkernel be realizable, where you have forceful preemption of the untrusted parts? Well, we have some ideas for implementing preemption. So I would put that under our future work, but some microkernel-like architecture is certainly in scope for managing software there, yes. And then the concept of separating the system into trusted and untrusted portions is something that's been done in the past for confidentiality and integrity; that's pretty well researched. But you have availability as the most important thing on the table, right?
At least in these kinds of systems, yes. Yeah, and isn't that an issue when you have untrusted software which could, by some malformed packet, for example, be made to stop, and then you have no availability guarantees? How do you think you're going to solve that? For this kind of thing, you lose availability guarantees if your untrusted software is actually required to implement them, right? So that's what I meant before. You can isolate the application code for security properties, but not for availability. You might lose availability there, because that depends on untrusted code. But of course, you can extend that: you can say you slowly integrate your scheduler into your trusted computing base, you integrate your protocol implementations into your trusted computing base, and so forth. And by combining these things, you actually get a microkernel-like architecture and you get availability, yes. But of course, preemption is something you need for that, which is currently not available. Let's thank our speaker again.