My name is Dominic. I work for Intel Open Source, and we're going to talk about security. Those who know me know I have been working in security for quite a bit of time. I did a project with British Telecom, I worked in the TV industry, where security has a long tradition, and after that I moved to IVI.

The first thing I want to raise, really strongly, is that a lot of people believe that in embedded, security is a problem mostly for the press, but not for the engineers, not for the management. What we have to realize is that attacking IoT is actually a very viable business, and it's viable in different areas. One of them has not yet started, but I expect it will start soon. I see people taking photos of the slide; it's very nice, but I can tell you I've already posted it on the website, so you should have it.

The ransom model that has been very popular in the last two years on PCs is certainly going to start in IoT. Why would you want to ransom something? You can deploy it at scale quite easily, and you can lock down very expensive items. As I told you, I'm working on Automotive Grade Linux, and it's a very interesting proposition to lock someone's car: one day you will come out in the morning, and the only thing your GPS system will display is "please pay 500 euros in bitcoin before using your car", and you will have to pay. It's an interesting model.

It can also be used for competitive advantage, especially when you want to collect R&D or manufacturing data: they are quite badly protected, and they are highly valuable. You can also disturb a production line. I was recently discussing with a guy in the food industry, and one of his fears was that someone with bad intentions would block his manufacturing line just before Christmas, because that would kill him; that would completely kill the company. And finally, indirect attacks, and that's what we have seen in the last two weeks.
You may have read in the press that, for the first time, a one-terabit-per-second DDoS attack was launched against a French cloud provider called OVH. That was the first time ever that a DDoS attack exceeded one terabit per second of bandwidth, and it was achieved mostly by hacking video surveillance cameras. So it is definitely a very viable business, and we cannot just say "oh, in IoT we are protected, we only do small devices, who is going to hack small devices?" It's going to happen, it's already happening, and video cameras especially have been very weak. Another example: Dick Cheney had to get his pacemaker replaced because they realized it was not protected, and you could remotely activate the emergency mode. Not that so many people would try to do anything bad to Dick Cheney, you know, he's so popular... but no, that's not something you want to hear.

So we really have to understand the risk, and the risk is in one way tempting to forget. If we look at it from the software side, we tend to believe that we are all the same, all hidden in the crowd: why would it be me? For a developer, it's an absolute nightmare to look at security. He has to think about all the potential weaknesses, he has to defend against every possible user error, and we all assume that long-term support for the kernel and applications is provided for free. That's the developer side. If you take it to the extreme, you would say: it's so difficult, why bother? It's too complex; in any case, I will fail.

The good thing with security is that you don't have to be good, you just have to be less bad than your neighbor. It's a system which is moving all the time. It's a bit like houses: if you live in an area which is not very safe, and your house is a bit more difficult to break into, then it's likely your neighbor's house that will be burgled.
It's not pleasant, but it's far more pleasant than it being your house. If you lived in an absolutely perfect world, you could leave your house open. I'm lucky enough to live in a place where, when I go out, I leave the house open. It's very practical: a friend visits and has something to give me, he can just get into the house and take what he has to take, or leave it. But not everybody can do that. In embedded software, we've been behaving a little bit with that open-house policy: default root passwords which are published on the internet, open ports, kernels which are five or ten years old and have a few tens of well-published zero-day vulnerabilities. That is the development side.

On the black hat side, the side of the baddies, in one way the work is a little bit easier. You just need one security hole, you just need one entry point, and then you're in. You can count on being helped by careless users and careless developers. That's a good thing when you want to hack a system: you can take as a principle that the user and the developer will have misbehaved, so you do social engineering and try to find out who they are. And you see things far worse than just a post-it with a password glued to a screen. You see terrible stuff.

And once again, the long-term business viability is very good. If you think in terms of return on investment and put yourself in a black hat's situation, it is a very promising business proposition: if you weigh the cost against the return, it's extremely interesting. And they are well organized. We have a good international network here in the Linux and open source developer community, but the black hats are not bad at all themselves. They are extremely well organized. They have good ways of sharing information, sharing business, sharing tricks. If you go on the dark web, you will find that people even rate each other, so you know that what you buy is good, that the guy is reliable.
Some even provide after-sales support, in case you would not know how to use the system. So we really need to understand that risk, and that is potentially the biggest problem I face today. I've been talking about security in embedded for the last five years, in quite a number of conferences, and quite sadly I have to say that I have seen very little progress. There are more and more people in the room when I talk; that's the only good thing. When I started five years ago, I had just a handful of people, and most of them were friends who came for support.

So we are starting to get a bit of a hearing, but we have difficulty, at multiple layers, in understanding the risk. With the developers, because it's a pain for them: they really have to work harder and organize themselves, so they're a bit lazy on that side. With the people providing the tools and the continuous integration: when you tell your QA manager that he is now going to have to sign every single image he produces, so that the continuous integration tests are done on a signed image, he tells you "yes, but this is going to require a specific server, and who is going to provide the key?" Yes, it is difficult.

And it's difficult with management, where the difficulty is slightly different. Obviously they look at the cost, but they also look at the risk. If, as a manager, I say I am going to provide a secure system, then I have to commit to it, and as a manager you put your head on the block when you say something like that: if it doesn't work, you are fired. So there is a kind of status quo. But it's not going to last. One thing I am sure of is that before I retire, which is not that far away, you can see my white hair, it will happen one day, security will be taken into account. So that's a good thing.
At least I've been talking for a few years in empty rooms, but I can see that it is coming. And the good thing is that we have most of the technology, and we have the capability to really raise the level, in one way, quite easily. So what are the fundamentals?

The first one: you have to minimize the attack surface. That's the base rule. The more services you provide that are not needed, the more trouble you are going to have. Yes, you may have a debug daemon running on your system; in production, it should not be there. And the only way to not end up with it in production is to remove it from day one and make it optional, so people have to install it manually.

You have to control the code that gets in. That's something which is done far too little. People are starting to check licenses, but a lot of people are not running code checkers for buffer overruns or stack violations. There is static code analysis you can do; that's one step. But also trust your code: get your developers to sign their code, so you know that the code which gets into your Git repository is valid code. Most of the companies I know don't do it. So if a hacker has the capability to get into the system, for example through a video camera like it happened not that long ago, they can push code to the Git repo that becomes valid code, because nobody ever checks that. It's very simple to do. So trusting your code is interesting.

Obviously, provide a bulletproof update model; I will come back to that. I can never insist enough on the fact that the update must be bulletproof. Having one available at all is step number one, which is already big progress, because on quite a number of embedded devices the upgrade process is so difficult that it is effectively impossible in the field. But it has to be bulletproof.

Tracking security patches: that is something where embedded is potentially quite far behind, in the sense that most people tend to be stuck with the one version they are using.
They are not capable of moving to the next release of the kernel. They are not capable of moving to the next release of the build system. I was discussing yesterday with Automotive Grade Linux: we are likely going to have to move to the next Yocto. It's not too difficult, but the natural tendency is to say "no, we're going to stick with the one we have". That's really bad. So you have to put a process in place which allows it.

Use hardware security helpers when they are available. It should be obvious, but most people still store their keys on a disk or on flash. Stop lateral movement: that's far more complex; we will see how Automotive Grade Linux does it, but it's a significantly more complex scenario. And develop, and run QA, with security turned on. You cannot turn security on later.

And finally, never rely on humans. If you want to fail, rely on humans. "All developers are good, they are going to take care." No. "Customers will change the default password." No. "QA will remove the special extra debugging software before shipping the image to the customer." No. That doesn't work. You have to have processes which take care of it. Which brings us to a first fundamental: security cannot be added after the fact. If you develop your project without security and you say "oh, we will add security after the proof of concept", you have already failed. The good news is that you then know immediately that you have failed; that's practical. But it is a reality. So you really have to sell that to your management very quickly.

Why can we not rely on humans? There are two different reasons. The first one is that the people who really understand security, in particular in embedded, are almost non-existent. I'll give you a few numbers. We have today, in the world, about nine million mobile software developers, about eight million web developers, and about half a million embedded developers. And out of that half million, only a handful are security aware.
So your chance of hiring them, or of having one on your team, is about the same as winning the lottery by buying a single ticket. It's almost nil. So you definitely cannot rely on that model; it just doesn't work. And if you take IoT, because we are at an IoT summit, the numbers vary, but today people seem to agree on 30 billion devices by 2020. OK, it was 50 billion a year ago, so it's getting more reasonable. How many developers are we going to need to develop these devices? Actually, more developers than we have in the embedded world today. So these devices are not going to be developed by embedded developers; that's the first conclusion. You have to realize they are going to be developed mostly by web developers, and potentially mobile developers, and we are going to have to adapt to that. For them, security knowledge is non-existent, because security is part of the platform; at the application level, they don't deal with it. So that is one problem: the skills are not available. The second one is that, as I've already said, and I repeat it because it's difficult for us, because we are developers and we tend to believe that developers are the best people in the world: we are unreliable. We find any excuse to put it off until tomorrow.

Now, I told you the concepts are known and the technologies are known. We all know the basic principles of this type of thing. The first one... sorry, I wanted to use the mouse pointer here, but I cannot. We know that our software is going to run on hardware. It's a pain; as software developers, we would all love to not have any hardware, but it has some good side effects: me working for Intel, it allows me to be paid every month, so I'm quite happy you have to run on hardware. And on the hardware, we have a number of areas; typically I would put them in three domains. The first one, that we all know, is secure boot.
Now, depending on the hardware vendor, it can be different. At Intel, it's part of our UEFI model. Secure boot fundamentally lets you know that the core software you are going to run on your hardware is known, and that your hardware is in a safe state to receive it. That's what it does for you. Every vendor has it. Then we tend to have a TPM, or a TrustZone for our friends from Cambridge, which allows you to have some safe storage and potentially some safe execution of dedicated code with very key functions, like validating something or encrypting something. The final one is only beginning. It's been available in certain technologies for years, like in TV, and on mobile, and it's coming to traditional embedded: the capability to manage identities, which is a very interesting point, the capability to have an identity which is specific to every single device. That comes from the hardware.

So then you put your Linux on top, and your Linux has to be up to date, because if it's not up to date, you're going to be in trouble. Now, up to date doesn't mean you have the very latest kernel; you could work with an LTSI, for example. But you have to apply the security patches. A lot of people in embedded, the first thing they do is make their own kernel. They add a few drivers from here and there, and that's it: that's the end of the maintenance. That's not good. People are laughing; they know it.

And then you come to a more complex area: we have the capability to harden the OS. It's been done in Linux in the cloud and in the enterprise for quite a long time. We can harden Linux quite well with mandatory access control and namespaces. We can create domains, we can restrict the APIs you can call in the kernel. There are plenty of models available there.
And then you're going to have APIs, and you're going to put your application on top. That fundamentally describes the base principles that everyone should use. I have hardware; it has some functionality, and I take advantage of it. I am using Linux, or another real-time OS for smaller systems, but here I'm talking mostly to the Linux community, and Linux has some facilities: I am going to use them, not deactivate them, to start with. And then I'm going to separate the applications from the core OS.

That is a very important point. We all know the principle very well now, because Android has made it common practice that applications are not part of the OS, and Chrome OS as well, so we are getting used to the concept. But in embedded, it's new. In embedded, we used to build the application into the OS as one package, one big blob, and that was the system. That's not really viable for security, and it has some side effects. Obviously, if you have applications separated from the OS, it means you need an application framework. There is today no standard application framework in embedded Linux. There are candidates; I will show you one for automotive, which could be used for other industries. You have to create default policies: a packaging where you say what is going to be authorized, because a good principle is that everything which is not explicitly authorized is strictly forbidden, which means you have to know what you are going to authorize; those are typically policies. And finally, you have to manage all the signing hassle, to know where your code is coming from, whether it should be running here, who made it, and so on, which is actually more complex than it seems.

So, do we know what we can trust? You can only know what you can trust if you have a valid chain of trust, which has to start with a trusted boot. There is no alternative to that.
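The default-deny principle described above, everything not explicitly authorized is strictly forbidden, can be sketched in a few lines. This is a toy model, not any real framework's API; the privilege names and the `PRIVILEGES` table are illustrative only.

```python
# Toy default-deny policy check. Names are illustrative, not a real API.

# Privileges each installed application declared at install time.
PRIVILEGES = {
    "navigation-ui": {"gps:read", "map:render"},
    "media-player": {"audio:play"},
}

def is_allowed(app: str, privilege: str) -> bool:
    """Default deny: anything not explicitly granted is forbidden."""
    return privilege in PRIVILEGES.get(app, set())

print(is_allowed("navigation-ui", "gps:read"))   # granted at install time
print(is_allowed("media-player", "gps:read"))    # never granted: denied
print(is_allowed("unknown-app", "audio:play"))   # unknown app: denied
```

The key design point is the shape of the lookup: an unknown application or an undeclared privilege falls through to "deny" with no special-casing needed.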
If you are permanently connected, which was the case when I was working on set-top boxes, you can find some shortcuts around it. But for most devices, without activating trusted boot, and without a valid chain of signatures, you will never know whether you can trust anything, because your base will be unstable and you will be building on mud. You have to separate application installation from your OS, and you have to have an update system. And that update system should validate that updates only come from a trusted origin, because if you install a fake update, you are not really making progress. I will make one specific point on containers, and if people want, we can discuss containers privately later: containers are not a miracle solution, especially when it comes to updates, because you have no control over what gets into the container. That's a big side effect you really have to think about.

Once you know what you can trust, you build a layered architecture. The idea is that you design the system in such a way that if one part of it gets broken into, and it's not really if, but rather when, the attacker cannot move from one place to another. Obviously, the most untrusted part is everything dealing with the outside world: most of the time, the UI. It could be an API if the system is UI-less. But whatever deals with the outside world is what you cannot trust, so you have to isolate it very well. And the best way is to isolate it behind APIs where you have defined which services are acceptable for that application. The first embedded system I worked on which had that was Tizen; it came from Samsung. They had the very basic principle of saying: this application can do this, this, and that, and that's it. Every time it tried to do something else, it would be rejected. Then you have the services.
And the services are quite often part of the OS and trusted. That is less and less acceptable. Ideally, you want to be able to treat your services like your applications, in such a way that you don't trust them completely: you install them outside of the OS and run them in a contained domain. This is really interesting, because it means that if someone breaks into a service, he does not have the capability to spread sideways. Otherwise, take a Bluetooth daemon, and I take Bluetooth as an example because it's extremely complex code: if someone can break into it, and you run BlueZ as root, which most people do, then when he breaks in, he is root. And if you have not layered your system in such a way that root cannot do everything, and most people let root be the king, then he is in, and he is the king.

This is actually how the big hack through video cameras happened. People came in through a side maintenance interface and became root; from root they changed the VLAN the camera was connected to, which is very easy once you are root; then they could listen on another VLAN, get into the management of the router, patch the router, and progress that way. So you definitely want that containment. It is a bit difficult to implement, but there are OSes which do it.

And finally, at some point you have a platform and system services that you do have to trust. The trick to being able to trust that part is to make it as small as possible. The more services, applications, and extras you add on top of your very basic core system, the more risk you take. So what do you want in your core system? The absolute minimum: the trusted boot, the base kernel, the file system, then the daemon that loads your services, which are half trusted, and the application framework that loads your applications, which are completely untrusted.
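The service containment described above, dropping root, limiting capabilities, and fencing resources with cgroups and namespaces, maps directly onto stock systemd directives. This is an illustrative sketch only: the unit name, user, capability, and limits are invented for the example, and a real Bluetooth stack would need its own tuning.

```ini
# Illustrative unit file; service name, user, and limits are examples only.
[Unit]
Description=Contained Bluetooth service (sketch)

[Service]
ExecStart=/usr/bin/bt-service
# Drop root: run under a dedicated unprivileged user
User=bluetooth
# The process can never re-acquire privileges
NoNewPrivileges=yes
# Keep only the capabilities the daemon genuinely needs
CapabilityBoundingSet=CAP_NET_ADMIN
# Read-only view of /usr and /etc, private /tmp namespace
ProtectSystem=strict
PrivateTmp=yes
# cgroup limits: a compromised daemon cannot eat all RAM or CPU
MemoryMax=64M
CPUQuota=20%

[Install]
WantedBy=multi-user.target
```

With a unit like this, breaking into the daemon no longer means becoming root, and the cgroup limits bound the damage a runaway or hijacked process can do.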
That's what you want in your core OS. I talked a little bit about updates. I did a project a few years ago which had the capability to cut every single telephone line in 30 million houses in the UK. Believe it or not, I had a security guy on my back for quite a long time. He was very worried that someone would hack the system and actually cut the telephone lines in those 30 million houses. And one thing we learned from that was that the only defense we had in case of trouble was the update. The trick was: how could we update a system which is already compromised? And that's what you need to think about. A lot of people think about updates in terms of how to optimize them, how to make them quicker, how to be able to roll back. All of that is nice to have, but it's not critical. The only thing which is really critical is: how can I upgrade a system which has already been compromised? Which is a far more complex proposition, and you have to think about how to do it.

The way we did it at British Telecom was that the upgrade software was actually not on the box. The only thing we had on the box was the capability to download the software that would actually perform the upgrade. So when the upgrade software was downloaded, if there were specific things we had to do before we could upgrade, we had the capability to get them; they came along afterwards. It was not perfect; we built that system 15 years ago, and at the time the risks were not as bad as today. But you will still have to think about that.

Obviously you have to sign the update, and you have to think about how your keys are going to be protected. Protecting keys is a very serious challenge. You cannot trust anyone, and especially you cannot trust your manufacturing if you are manufacturing in a low-cost country. In a high-cost country you cannot trust them much either, but in a low-cost one even less. And it's a problem.
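Verifying that an update comes from a trusted origin before applying it can be sketched as follows. A real system should use asymmetric signatures (e.g. RSA or Ed25519) with the verification key anchored in the trusted boot chain or in hardware; the shared-secret HMAC here stands in only to keep the sketch self-contained, and the key and payload values are illustrative.

```python
# Sketch: reject any update whose signature does not verify.
import hmac
import hashlib

TRUSTED_KEY = b"device-provisioned-secret"   # illustrative; keep in hardware

def sign_update(payload: bytes, key: bytes = TRUSTED_KEY) -> bytes:
    """Producer side: compute the signature shipped with the update."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_update(payload: bytes, signature: bytes,
                  key: bytes = TRUSTED_KEY) -> bool:
    """Device side: constant-time comparison avoids timing side channels."""
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

update = b"firmware-image-v2"
good_sig = sign_update(update)
print(verify_update(update, good_sig))             # genuine update accepted
print(verify_update(b"tampered-image", good_sig))  # tampered payload rejected
```

The important property is that the device refuses to apply anything that fails verification, so a compromised box cannot be fed a fake "update" that entrenches the attacker.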
So you have to really think about the complete chain of protection of the keys in that area. And obviously, non-reproducibility: you don't want someone who cracks one device to be able to crack all devices. That's not acceptable. Sometimes, with mass-market products, you have to live with it. But if you can afford to have, in the update system, something which is specific to each device, it can be quite simple. It can be just one file: when the request comes in, you take the ID of the device, you add it to a file, and you push it, in such a way that it becomes more difficult to reproduce an identical device just by replaying memory. It will increase your resilience quite a lot.

So far I've been talking in general terms. I will now take a practical example: Automotive Grade Linux. Automotive Grade Linux is a distribution that could just as well be called "industrial IoT Linux". It's fundamentally a Linux which is secured, provides display management, and has an application framework. To be honest, it doesn't have much that is specific to automotive, except that it fits those requirements. It was derived from Tizen, and then they made quite a few interesting improvements in certain areas.

So, service isolation. Obviously you can use systemd to start your services, and you can drop privileges. You can activate mandatory access control; in AGL we do use MAC. You have cgroups, which can stop someone from eating all your CPU or your RAM. And namespaces can be used to further enforce isolation. In one way, there is no magic there; it's just organization. Now, the first time you do that on an OS, I can tell you that when you boot, nothing is going to work, because the rights will be wrong. That's where it's interesting to work with people who have done it before.

More interesting is segregating the applications from the OS. And AGL went a bit further, in the sense that they not only segregate the applications, which is something we were already doing in Tizen, they also segregate the services.
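The per-device, non-reproducible material mentioned above is often achieved by deriving a device-unique key from a master secret plus the device identity, so cracking one device never exposes the fleet. This is a sketch under stated assumptions: the master key and IDs are invented, production code would keep the master key in an HSM and use a standard KDF such as HKDF.

```python
# Sketch: derive a distinct key per device from a master key + device ID.
import hmac
import hashlib

def device_key(master_key: bytes, device_id: str) -> bytes:
    """One HMAC invocation as a minimal stand-in for a real KDF."""
    return hmac.new(master_key, device_id.encode(), hashlib.sha256).digest()

master = b"factory-master-key"   # never leaves the HSM in a real deployment
k1 = device_key(master, "device-0001")
k2 = device_key(master, "device-0002")
print(k1 != k2)                                   # each device differs
print(k1 == device_key(master, "device-0001"))    # but stays deterministic
```

Because the derivation is deterministic, the backend can recompute any device's key on demand without storing millions of them, while a dump of one device yields nothing usable against its neighbors.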
And that is done through the application manager, which is capable of loading the applications, running them, starting them. But when they are started, they are started under the control of the security manager. Fundamentally, what the security manager does is that when an application wants to use an API, it checks that the application was declared, at installation time, as allowed to use that service. You all know that from your telephone, whether you use the little Android robot or the Apple one: they have the same principle. Applications have privileges which declare what the application can do. That's the basic principle. Obviously these rights have to be managed somewhere, and that is in the security manager. And it has to be interfaced with the entire system, so that every time you create an API, that API is made dependent on a specific privilege.

What is a bit original in the way it's been implemented in AGL is that the services are implemented on a binder model. In one way, a service is an application which runs without a UI, and you talk to it through a WebSocket. You can use D-Bus for legacy, but the real model is to use WebSockets. So you have a REST-style API, and that API is managed like you would do it in the cloud: you have a token system, and the token is passed to the application at initialization. If the token is wrong, the call stops working, which means that if a fake application tried to use a socket which had been opened by a valid application, it would be rejected.

That's quite unique in the embedded world, in the sense that it provides a very nice level of control and isolation, because it really stops someone who has managed to get a rogue application running in the system: a fake, crooked application would have difficulty consuming resources which have been established by other applications. On that point, it's interesting.
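The token mechanism just described, where the framework hands each application a token at initialization and every later call must present it, can be illustrated with a toy in-process model. All names here (`BinderService`, `register_client`, the verb string) are invented for the sketch and are not the AGL API.

```python
# Toy model of binder-style token checking. Not a real AGL interface.
import secrets

class BinderService:
    def __init__(self):
        self._tokens = set()

    def register_client(self) -> str:
        """Called by the framework when it starts an application."""
        token = secrets.token_hex(16)
        self._tokens.add(token)
        return token

    def call(self, token: str, verb: str) -> str:
        """Every API call must carry a token the framework issued."""
        if token not in self._tokens:
            raise PermissionError("unknown token: call rejected")
        return f"executed {verb}"

svc = BinderService()
tok = svc.register_client()
print(svc.call(tok, "navigation/get-position"))
try:
    svc.call("forged-token", "navigation/get-position")
except PermissionError:
    print("forged call rejected")
```

The point of the design is that possession of the socket is not enough: a rogue process that piggybacks on an open connection still fails the token check.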
It's also interesting in the sense that it relies on technologies that web developers know. And remember what I said at the very beginning: we don't have enough engineers to develop IoT devices. So implementing the security model on a concept which is known and understood by nine million developers is very interesting, because it means you can actually pull all those people in to start doing embedded, and not only embedded, but secure embedded.

So, practically, this is the way it's implemented, and I'll give an example. The transport and access control are done through WebSockets; the access control is done through OAuth 2. And then you have services: a navigation service, for example, can do the map handling and the points of interest. It doesn't have to be the UI. And a service can call another service, which means you can extend an AGL core to become whatever you want it to be. Most people today are trying to do something in cars with it, which, with Automotive Grade Linux, is not surprising, but as I told you, it could very well be an industrial controller for a machine. You run your services on top of this transport and call them through it.

Now, every service is controlled with a MAC. If you have not looked at mandatory access control, there are three which are very popular. SELinux, which is very well known because Red Hat uses it. AppArmor, which is used by Ubuntu and openSUSE. And SMACK, which is used mostly in embedded devices, for the reason that it does roughly the same thing but smaller. It was originally used by Philips in televisions; it's used in every Samsung Tizen device, so more or less every TV, watch, and gadget made by Samsung today is using it; and it's used in AGL. But they all do the same thing: more or less, you have a label attached to an executable, and that label defines what that executable is capable of touching. So it's really a control.
And the executable does not have the capability to change that label by itself. It's not like discretionary access control: when I put an access control on a file, I have the capability to change it, so if I become root, I can decide that anyone will be able to read the file. Not with SMACK. With SMACK, it's a bit more tightly controlled. And anyone who has tried to play with it knows that the configuration can be a bit painful, because it requires you to be extremely well organized. But the beauty of a ready-made system like AGL is that it's provided for you.

Now, how does it work in the application framework logic? It's very simple. The application has a binder client, and can even embed a binder service if you want. So your application can have a fairly wide domain if you want, but it runs under a specific isolation control. The application can only talk to other services through the standardized transport. So if the application requires other services, it calls them through the transport system, which, as I told you, is mostly based on WebSockets, or D-Bus for legacy. And those services can themselves chain calls onward. It creates a little bit of heaviness in the system. The transport uses JSON: once again, a technology that web developers know, so we can pick them up.

Let's look at the cost of it, because when you look at that, you say "oh, this is really heavy, it's going to bring my embedded system down". It's actually not bad at all, even on small and medium-sized CPUs. I did the test on a MinnowBoard, and my friend Stéphane, who is at the back, did the test on the Porter board, which is from Renesas. The results are roughly equivalent: we could manage about 30,000 data exchanges per second. That's not bad, because these are small CPUs, so it would fit most usages even on a small CPU. The cost is not prohibitive at all. If you use D-Bus, it will bring your system down very quickly; WebSockets are significantly quicker than D-Bus.
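To make the JSON-over-WebSocket transport concrete, here is a sketch of what a verb call and its reply could look like. The exact AGL wire format differs; the field names, verb, and coordinates below are invented for illustration, the point is only that both sides speak plain JSON, which any web developer can read.

```python
# Sketch of a JSON request/reply exchange on a binder-style transport.
import json

def make_request(api: str, verb: str, args: dict, token: str) -> str:
    """Client side: serialize a verb call, carrying the access token."""
    return json.dumps({"api": api, "verb": verb,
                       "args": args, "token": token})

def handle_request(raw: str) -> str:
    """Service side: a real binder would check the token, then dispatch."""
    msg = json.loads(raw)
    reply = {"verb": msg["verb"], "status": "success",
             "response": {"lat": 45.76, "lon": 4.83}}   # dummy payload
    return json.dumps(reply)

raw = make_request("navigation", "get-position", {}, "abc123")
reply = json.loads(handle_request(raw))
print(reply["status"])
```

Because the payloads are ordinary JSON strings, the same exchange works unchanged whether the service sits on the same board, on another CPU, or in the cloud.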
Now, when you write an application, what are the side effects? First, you will have to decide what your application has the right to do. That's a very important point, because it will define the privileges you are going to request. And if you don't know what your application has to do, then you'd better not write it anyway. You will access your services through WebSockets, mostly. You will have your own run domain; that's done by the application framework, you don't have to do it. And then you typically write the front end. Our advice is to write the front end in HTML5, because then you can write these one-page applications with the very nice frameworks that people use today. It's interesting because it allows you to make very nice applications, and it allows you to debug running on a PC while connected through a WebSocket to your embedded development platform, which is a real luxury. But it's also possible to develop with other technologies; it's not tied to HTML5. You can use QML, for people who are using Qt, or you could use anything else, to be honest; at the end of the day, it doesn't really matter. And you are going to use the REST-style APIs to talk with your services.

Once you have done that, you make a package, and that package is based on the W3C widget format, to which you add what you need as privileges. You sign all of that, and then it is installed by the application framework, independently from the OS. That's a very important point to remember. So that's the only special thing you have to do.

That's what we have today. If you go on the AGL website, you will find the AGL 2.0 candidate, with an application framework, and you will be able to play with it. Where are we going next? Now that we have a transport system which works on WebSockets, we can actually work in different manners.
We can have multiple CPUs talking together over a network, or we can work with the cloud. There is no need for the service to be on the embedded device; it becomes completely transparent. So we have the capability to extract services and send them to the cloud, as well as run them locally, as long as we are connected, because the technology we're using is very common in the cloud area. That's the next step. It should come reasonably quickly, in the sense that it's more an implementation detail than a real architecture change; the architecture we have is already doing that job. The step after that is a bit more complex. I especially like it because, working for Intel, virtualization is a domain where we are pretty strong. We believe there is an interest in allowing virtualization to run in this type of environment at a very low level, especially when you want to deal with functional safety applications. Functional safety is out of reach for Linux today, and it's likely going to be out of reach for quite a bit of time. So, by running a very low-level hypervisor, we could have some functional-safety-compliant applications running on the same hardware. But on the other side, we could also run, as a client of AGL, another operating system which is potentially more friendly to what we want to do. For example, to run games: you have the system running the basic applications, but you want a game on the side. You may not want to run it on the same OS and have to manage all the codec issues and potential risks, so I think a virtual machine is of very strong interest there. So, as a conclusion: the technologies for implementing decent security are available already, so all the excuses for not doing it because it's not available are no good. Management is not more ready than engineering, and they are still capable of pushing security out of the schedule.
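The location transparency described above can be sketched in a few lines: the client sends the same JSON envelope whether the service lives in the same process, on another CPU on the network, or in the cloud; only the socket address changes. This is an illustrative stdlib-only sketch, not AGL code, and the envelope fields are assumptions.

```python
import json
import socket
import threading

def serve(conn):
    """Service side: answer one JSON request on a connected socket."""
    req = json.loads(conn.recv(4096).decode())
    reply = {"id": req["id"], "status": "ok",
             "response": {"echo": req["args"]}}
    conn.sendall(json.dumps(reply).encode())
    conn.close()

def call(addr, api, verb, args):
    """Client side: identical regardless of where the service lives."""
    with socket.create_connection(addr) as s:
        s.sendall(json.dumps({"id": 1, "api": api, "verb": verb,
                              "args": args}).encode())
        return json.loads(s.recv(4096).decode())

# Start a 'service' on localhost; swapping addr for a remote host or a
# cloud endpoint would not change the client code at all.
srv = socket.create_server(("127.0.0.1", 0))
addr = srv.getsockname()
threading.Thread(target=lambda: serve(srv.accept()[0]), daemon=True).start()

print(call(addr, "sensor", "read", {"unit": "C"})["status"])
```

A real deployment would of course add TLS and the framework's privilege checks on top; the sketch only shows why moving a service off-device is an implementation detail rather than an architecture change.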
And engineering sees security as a brake on innovation, and that is really wrong. I've been in security for quite a bit of time and I can tell you there is a lot of innovation in security. It may not be as sexy as doing UI work, because typically people don't see what you do, but I can tell you that for your paycheck at the end of the month, and for finding a job, it's a very good domain. There is a lot of innovation and there are a lot of opportunities there. The complexity is huge, and I would not advise, I would actually advise you against trying to do it by yourself. You should take a ready-made implementation, and it doesn't matter which: obviously we do AGL, so that's the best, and Stéphane agrees with me. But there is Philippe on the side, who is working on Tizen, and Tizen also has a security model. Even Snappy, from Ubuntu, has a security model. It's based on containers, so it's in one way less secure and more difficult to manage, because you don't have any control of what gets into the container when you control the platform. But these are models which are viable. Trying to do it by yourself, unless you really have an army of very skilled people, is likely a bad idea. With this, we may have a few minutes for questions. Yes? [Question from the audience about who owns the system.] Should I comment on that? It's clear that the systems are not yours; they are actually owned by someone else in the car. [Audience member:] While I would like, if I buy a car, the car to be secured against outside attackers, at the point where you try to make the car secure from me, the owner, I get pretty angry at you. [Speaker:] So you would like us not to do that? Would you be happier knowing that your system, with your remote key control, could be opened by anyone? No? No, but that's different: you want to hack in yourself. Okay. That's a very valid question. I have known that question in TV for years.
In TV, we had that problem for years. It is very clear that when you buy a device where people are committing to maintain it for you, they are taking legal responsibility that it behaves in compliance with the law. You know that, by law, your car is not allowed to do certain things as soon as it moves at more than two kilometers per hour. If the car manufacturer ships you a system which could do that, he is actually liable. So if you want to hack your system in a car, like if you want to hack your TV, you will have to change the system. That's very clear. You are not going to get from a car manufacturer, a TV manufacturer, or a telephone manufacturer a device that, as delivered, is easy to hack at a low level, because there are legal compliance commitments they cannot break. And it's very unpleasant, I do agree. Yes, you can get a phone, but that phone is not necessarily legal to use in your country; that's the trick. So if you want to go outside of the law and you make that decision clearly, that's perfectly possible. But don't expect any company or corporation to take that risk for you. They may not lock it down completely, but they are not going to take that risk. Yes, you're absolutely right: there is a part of the population, the hacking population, who will actually break into devices. They have been there for years, and they will be there. I have been part of them. It's just a reality, but that's not the business reality of a manufacturer. If you are a service company and you have to develop a management system for a factory building devices or making food, the fact that a hobbyist can hack into his own device is the least of your problems. The fact that someone walking by the company is not capable of breaking in is far more important. But you're right, it's a brake on that point. Next question? Yes?
Okay, so the question, if I translate it, is: if you create a system which has only one application, would you need all that complexity? Obviously it's possible, with a system running a single application, to simplify it. But you still have to develop a secure Linux, and you still have to develop a secure update mechanism. So the cost of doing all of that specifically for yourself, and maintaining it, is likely going to be more than taking a ready-made Linux which is secure and writing a single application on top. So it's likely not as critical in that type of situation, but everything you have around it still makes it interesting, because it comes ready-made, with people who know how to use it, have documented it, and will maintain it for you. Yes? Yeah, so the question, if I translate it simply for people who will watch the video afterwards, is: if you make a very small device which is not capable of running a full Linux with all its complexity, is there some standard system or pattern to develop it securely? Yes, a few principles remain very true. Number one is that the online update capability must allow you to update the system even if it has been compromised. So I would say, with very small devices, the number one security question is going to be: can I upgrade the system? And number two: can I upgrade the system and be sure that no one else has upgraded it for me with the wrong thing? If you have that, you have already put yourself at a fairly reasonable level for a small system, because you're likely not going to be able to do much more than that. I'll take two examples: Zephyr, which is sponsored by Intel so I have to talk about it, but mbed as well, which I quite like because the user interface is really friendly. They both have a security model. Okay, it's not as sophisticated as what I described.
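That second rule, being sure that no one else upgraded the device for you, comes down to verifying a signature before applying an image. Here is a minimal sketch of that check. Real devices use asymmetric signatures (the device holds only a public key, often burned into ROM); HMAC with a shared secret is used here only to keep the sketch self-contained, and the key and image are of course invented.

```python
import hashlib
import hmac

DEVICE_KEY = b"factory-provisioned-secret"  # hypothetical trusted key

def sign_update(image, key):
    """Vendor side: sign the update image."""
    return hmac.new(key, image, hashlib.sha256).hexdigest()

def apply_update(image, signature, key):
    """Device side: refuse any image whose signature does not verify."""
    expected = hmac.new(key, image, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        raise ValueError("rejecting update: bad signature")
    return True  # in a real device: write image to the inactive slot

image = b"firmware v2"
ok = apply_update(image, sign_update(image, DEVICE_KEY), DEVICE_KEY)
print(ok)
```

The first rule, updating even a compromised system, is a matter of keeping the updater and its key material out of reach of the application, for instance in a bootloader with an A/B slot scheme.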
They are designed to run a small OS with one application, but they have still ported an SSL/TLS stack, so they have the capability to secure their communication. They have implemented a model of keys, and they have an update system. That's the absolute minimum; if you don't have that, you're at serious risk. Time for a last question. Yes? So the question is: is virtualization viable from a performance point of view in the embedded world? Virtualization has a cost; as I told you, our security model has a cost too. It's not too high, but it's there. We tend to evaluate the cost of message passing and security at around 10% of CPU; roughly, that's what we think gets eaten by the security model. Virtualization tends to be in the same range, and low-level virtualization is very low. The ones working as a very thin hypervisor, like Jailhouse from Siemens, which was presented on Tuesday here, are below 1%, but they do very little. Now, if you have full virtualization, the cost can go anywhere from 3% to 15%, depending on how much help you have in your hardware. And this is why I like virtualization, working for Intel: we have a lot of hardware helpers which reduce the cost of virtualization. With the right implementation, with KVM, you can be as low as 5% of cost for running virtualization. So it depends on what it provides you. 5% is a lot and a little at the same time. If it allows you to run in a way that otherwise you would not be able to run, then it's cheap. If it allows you to provide a solution very quickly because the customer wanted to run Android and you had not planned for it, and you want to put Android on top of your system in three weeks, it's very cheap. And finally, I would say, if it means you have to buy a more expensive, more powerful CPU, at Intel we're quite happy with that. Okay, with this, thank you.
The slides are already on the website, so I'll leave you with that. If you have any questions, you have my email.