I'm really glad that you're all here and that today I can introduce Joanna Rutkowska to you. She will be talking about reasonably trustworthy x86 systems. She is the founder and leader of the Invisible Things Lab and also of something many of you probably use, the Qubes OS project. She has presented numerous attacks on Intel-based systems and on virtualization systems, but today she will not only speak about the problems of those machines, but will also present some solutions to make them more secure. Give it up for Joanna Rutkowska.

Let's start by stating something obvious: personal computers have really become extensions of our brains. I think most of you will probably agree with that. Yet the problem is that they are insecure and untrustworthy, which personally bothers me a lot. Here I want to make a quick digression about the vocabulary I'm going to be using during this presentation. There are three adjectives related to trust, and people often confuse them. Trusted means something that, by definition, can compromise the security of the whole system, which means we don't like things to be trusted. A trusted third party, a trusted certification authority: we don't like that, because trusted doesn't necessarily mean secure. So what is secure? Secure means resistant to attacks. This laptop might be resistant to attacks in the sense that if I open an email attachment, the attachment doesn't compromise my system, or if I plug in a USB slide changer, I might be hoping that this action will not compromise my whole personal computer. And yet something can be secure but not trustworthy. A good example of this might be the Intel Management Engine, which I'm going to be talking about more later. It might be very resistant to attacks, and yet it might be a backdoor.
A backdoor that is very resistant to attacks, yet not acting in the interest of the user. So it's not good, whatever good means in your assumed moral reference frame. Now, there has been, of course, a lot of work in the last 20 years or more to build solutions that provide security and trustworthiness. Most of this work has focused on the application layer: things like GnuPG and PGP, Tor, all the secure communication protocols and programs. But it is clear that any effort on the application level is meaningless if we cannot trust our operating system, because the operating system is the trusted part: if the operating system is compromised, then everything is lost. There have been some efforts here too, notably the project I started five years ago, Qubes OS, which now has more than a dozen people working on it, and which tries to address the problem of the operating system being part of the TCB. What we try to do is shrink the amount of trusted code to an absolute minimum. There have also been other efforts in this area. But operating systems are not something I'm going to be discussing today. What I'm going to be discussing today is the final layer: the hardware. Because what the operating system is to applications, the hardware is to the operating system. Most of the efforts so far to create secure and trustworthy operating systems have always assumed that the hardware is trusted. For most operating systems, that means that a single malicious peripheral on this laptop, like a malicious Wi-Fi module or maybe the embedded controller, can again compromise my whole personal computer, my whole digital life. So what to do about it? Before we discuss that, we should quickly recap the problems with present personal computer platforms. Specifically, I'm going to be focusing on the x86 platform, and more specifically the Intel x86 platform, which means laptops.
This picture shows what a typical modern laptop looks like. You can see that it consists of a processor in the center, and then there is memory, some peripherals, a keyboard and a monitor. Very simple. It can be this simple because present Intel processors really integrate everything and the kitchen sink. Ten years ago, we used to have a processor, a north bridge, a south bridge, and perhaps even more discrete elements on the motherboard. Today, nearly all of these elements have been integrated into one processor package. This is Broadwell: this long element here is the CPU and GPU, and the other one is said to be the PCH. PCH stands for Platform Controller Hub, which is what used to be called the south bridge and north bridge; the line between these has somewhat blurred. Of course, there is only one company making these. It's an American company called Intel. It's a completely opaque construction; we have absolutely no way to examine what's inside. But obviously, the advantage is that it makes the construction of laptops very easy now, and lots of vendors can produce sexy little laptops like the one I have here. On this picture, we now see some more of the elements that are in this processor. When you say processor today, it's no longer just a CPU. The processor is now the CPU, the GPU, the memory controller, the PCI Express root complex, and some south bridge functionality, for example the SATA controller, and so on. As well as something called the Management Engine, which we will discuss in a moment. There are a few more elements here that are important. The most important for us is the SPI flash element. Because what's interesting is that despite all the integration that has happened to the processor and the other peripherals, the storage for the firmware, so the storage where your BIOS as well as other firmware is kept, is still a discrete element. We will see this element on one of the next slides.
So let's now consider the problem of boot security. Everybody understands that boot security, so how to start the chain of trust for whatever software is going to be running later, is of paramount importance. I think I'm out of range. Connected with boot security is the problem of malicious peripherals that I mentioned briefly before. So we will now be thinking: can we assure that only good code is started, and how might the peripherals interfere here? Again, we will look at this SPI flash, because if we are considering boot security, we would like to understand what code is loaded on the platform. And if we think about where this code is stored, it turns out that it is stored on the SPI flash and potentially also on some of the discrete elements. Let me state it again: this whole integrated processor package has everything and the kitchen sink except for the flash, so except for the storage of the firmware. Here we have one of these SPI flash chips. This one is from my laptop, actually. It's a little microcontroller, and it typically stores the firmware for the things listed here. The question is: let's say I have got this laptop from a store. How can I actually verify what firmware is really on this chip? Well, I can perhaps boot it into some minimal Linux and try to ask it. But of course, if there is something malicious on the motherboard, not necessarily on this chip, I will not get reliable answers. Another question: let's say I somehow know that there is something trustworthy on this SPI chip. Can I somehow enforce read-only-ness of this firmware? Well, there have been some efforts to do that, like a project by Peter Stuge, who took a soldering iron and grounded one of the pins. One of these eight pins is called write protect, and if you ground it, it tells the chip to discard any write commands. But again, remember, this chip is still a little microcontroller; really, it's a little computer.
So it might ignore whatever you request it to do. It's not like you are cutting off the signal for write commands; you are merely asking the chip to ignore them. So if you don't trust the chip in the first place, this doesn't provide a reliable way to enforce read-only-ness. Finally, can I upload my own firmware? Can I choose to use whatever BIOS I want? Again, we don't seem to have luck here. And as I mentioned, this is just one of the places on the platform where state is stored. The embedded controller is a whole other microcontroller, with its own internal flash, or if not, using another SPI chip to fetch its firmware from. The disk is another microcontroller, really a small computer, typically having its own flash for its own firmware. And perhaps the same goes for the Wi-Fi module. Now, for many years, myself and lots of other people believed that technologies like the TPM, Trusted Execution Technology, and UEFI Secure Boot, which I never really liked, but many people did, could somehow solve the problem of secure boot. But all these technologies have been shown to fail horribly at this promise. And these were just the tip of the iceberg of the problems with secure boot. The short story is: today we cannot really assure a secure boot. Before we move on: Intel TXT, Trusted Execution Technology, was introduced by Intel in the hope of putting the BIOS outside of the TCB, the trusted computing base, for the platform. The idea was that TXT, which you can think of as a special instruction of the processor acting as the root of trust, would let you start the chain of trust without trusting the BIOS, as well as other peripherals like a Wi-Fi card, which might be malicious, perhaps. And that was just great, and I really liked the technology. And with my team, we have done lots of research on TXT.
But one of the first attacks that we presented, back in 2009, showed that we could bypass TXT by having a malicious SMM. The SMM is loaded by the BIOS. So it turned out that the BIOS could not really be put outside of the TCB so easily, because if it was malicious, it would provide a malicious SMM, and the SMM could then bypass TXT. Intel's response was: okay, but worry not, we have a technology called STM, the SMM Transfer Monitor, which is a little hypervisor to sandbox the SMM, which might be malicious. So they built a special dedicated technology to sandbox this SMM. And then it turned out this is not so easy, because as usual, the devil was in the details, and six years have passed, and we still have not seen any real STM in the wild. Which is just an example of how hopeless this approach to providing secure boot is on the x86 platform. Another problem with x86 that has risen to prominence in recent years is the Intel Management Engine. One of these additional things that Intel has put into the integrated processor is called the Management Engine. The Management Engine is a little microcontroller inside your processor. It has its own internal RAM and its own internal peripherals, like a DMA engine with access to the host RAM. Of course, it only allows Intel-signed firmware, and it also has its own private ROM inside the processor that nobody can inspect and nobody knows what it does. It runs a whole bunch of proprietary programs, and it even runs Intel's own proprietary operating system. And this is all happening all the time, whenever you have any power connected to your processor, even in sleep mode. It's running all the time here on my computer. It can be doing anything it wants.
Obviously, when I say something like that, the first thought for security people is that this is an ideal backdooring or rootkitting infrastructure, which is true. However, there is another problem, which I call the zombification of personal computing, that I will discuss in a moment. I'm just stressing that these are two somewhat independent problems with this ME. About 10 or more years ago, I used to be a very active stealth malware researcher, especially a rootkit researcher, and back then, if I was to imagine an ideal infrastructure for writing rootkits, I couldn't possibly imagine anything better than the ME, because the ME has access to essentially everything that is important. As I mentioned, it has unconstrained access to DRAM, to the actual CPU, to the GPU; it can also talk to your networking card, especially the Ethernet card, whose controller is also in the south bridge, in the processor. It can also talk to the SPI flash; it has its own dedicated partition on the SPI flash, which it can use to store whatever it wants. This is really problematic. And we don't know what it runs. But the other problem, which is perhaps less obvious, is what I call the zombification of general purpose computing. About a year ago, a book was published by one of the Intel architects who designed the ME. I highly recommend this book; it's about the only somewhat official source of information about the Intel ME. What the book made clear is that the model of computing that Intel envisions for the future starts from the model we have today, which looks like this. The size of the boxes attempts to represent the amount of logic, or involvement, of each of the layers in the processing of user data. Obviously, most of this processing is done in the applications. But we also have some involvement from the operating system, and also from the hardware, of course.
For example, when we want to generate a random number (whoops, that's a mouse, actually), we would usually ask the operating system to return us a random number, because the operating system can generate it using timings of interrupts, whatever. But again, most of the logic, most of the code, is in the application layer. And this is good, because thanks to computing being general purpose, every one of us can write applications. We can argue about the best way to implement some crypto, and some people can write it one way, other people can write it another way, and that's good. Now, this is the model that Intel wants to go to. Intel essentially wants to take all the logic that touches data out of the apps and our operating systems, and move it to the Intel ME. Because remember, the Intel ME is also an operating system, a separate operating system. Only this is an operating system that nobody knows how it works. It's an operating system whose source code nobody has any possibility to look at, or even to reverse engineer, because we cannot really even analyze the binaries. It's an operating system that is fully controlled by Intel, not to mention that any functionality it offers is also fully controlled by Intel, without anybody being able to verify what it does. And it might not even be malicious; perhaps they are just implementing something wrong: bugs, security bugs, right? But then, of course, Intel believes that whatever Intel writes must be secure. For some reason, they must have missed a number of papers that my team and others have published in the recent 10 years. So the questions are: can we disable the Intel ME? Or can we control what code runs there? Or can we at least see what code is there? And as far as I'm aware, the answer is unfortunately no.
As I mentioned, I have this cool laptop that runs Qubes OS, of course, but it not only runs Qubes OS: it also runs, side by side, the Intel ME's proprietary operating system. And I can't do anything about it. About six or seven years ago, my team did some work on Intel AMT. I believe this was the first, and probably the only, published work where we managed to actually inject code into the ME. That was back in the times when the ME was not in the processor but in the north bridge; it was in the Q35 or Q45 chipset, if I remember correctly. So we actually demonstrated how to inject a rootkit into the ME. Of course, Intel then patched it, and they continue to think that whatever they write will always be secure. For a number of years after that presentation, I used to believe that we could use VT-d, the Intel IOMMU technology, perhaps together with TXT, to effectively sandbox the ME, because some of the specifications I saw suggested that VT-d should be able to sandbox ME accesses to host memory. And because we use VT-d heavily in Qubes, thanks to Xen using it, I was not that worried about the ME. Unfortunately, it turned out that the ME can just bypass VT-d, and this is a feature of the ME. Which brings us to the rather sad conclusion that if we look at the Intel x86 platform, the war is perhaps lost here. It might be lost even if we didn't have the ME, even if we somehow managed to convince Intel to get rid of the ME, or at least to offer OEMs, the laptop vendors, an option to disable it by fusing something. Because the problem with secure boot that I mentioned earlier, and that I analyzed in more detail in a paper I released two months ago, is that it really is hopeless, given the complexity of the architecture, where we have, well, ring three, ring zero, okay.
Then we have SMM, then we have virtualization, then we have STM to sandbox the SMM, and the interactions between all of these. This all doesn't look like it could be solved effectively, which of course bothers me a lot, for at least purely egoistic reasons, because I spent the last five years of my life on the Qubes project, and with such a state of things, my whole Qubes project becomes somehow meaningless. But sometimes, if the situation is so bad, perhaps the only way to solve the problem is to change the rules of the game, because you cannot really win under the old rules. So that's why I wanted to share with you today an approach that starts with recognizing that most of the problems here are related to the persistent state that is stored pretty much everywhere on your platform, which usually keeps the firmware, but not only that. So let's imagine that we could do a clean separation of state from hardware. This is the current picture, this is your laptop, and the reddish boxes are the persistent state. That means these are places where malware can persist. So you reinstall the operating system, but the malware can still reinfect it. These are also places where malware can store secrets once it steals them from you. Imagine malware that might only be stealing my disk encryption key, and it stores it somewhere on the disk, or maybe on the SPI flash, or maybe in the Wi-Fi module firmware, or maybe in the embedded controller firmware; somewhere on those rectangles. If the malware does that, it's a pretty fatal situation, because if my laptop gets stolen, or seized perhaps, then the adversary, who has the key to the malware's blobs, can just decrypt those blobs, and the blobs will reveal my disk encryption key, and then the game is over. Another problem with the state is that it might be revealing a lot of personally identifiable information, however you read this PII acronym.
These are, for example, MAC addresses, or maybe the processor serial number, or maybe the ME serial number, or maybe the list of SSIDs of Wi-Fi networks that the ME has seen recently. How do you know this is not being stored somewhere on your SPI flash? You don't know what is stored there: even though I can take off my SPI flash, or just connect an EEPROM programmer to it and read its contents, some of it might well be encrypted. So we recognize that the state might be problematic. Now imagine a picture where we have the laptop with no persistent state storage, which is this blue rectangle. Let's call it a stateless laptop. And then we have another element that we're going to call the trusted stick, for lack of a sexier name. It's going to keep all the firmware and the platform configuration, all the system partitions, like boot and root, and all the user partitions. The firmware and system partitions will be exposed in a read-only manner. So even if malware, perhaps traditional malware that got into my system through a malicious attachment, found a weakness in the BIOS, or maybe in the chipset, that would normally allow it to reflash the BIOS (and we have seen plenty of such attacks in the last several years), it would not succeed, because the trusted stick, which is going to be a simple FPGA-implemented device, will just be exposing read-only storage. So we see that firmware infections can be prevented this way. Also, there is no place to store stolen secrets. Again, the same malware running in the ME can still steal my disk encryption key or my PGP private key, but it has no place to store it, so somebody who later takes my laptop will not be able to find it there. You might say: maybe it will be able to store it on the stick.
But then again, the stick's firmware and system partitions are read-only, and the user partitions are encrypted, encrypted by the stick itself. So even if the ME can send something to be stored there, nobody besides the user can really get their hands on that blob. We also get a reliable way to verify what firmware we use, and the ability to choose what firmware we want to use. Because we can just take the stick, plug it into a trustworthy computer, some Lenovo X60 from 15 years ago that we have running coreboot, and just analyze all the elements, whatever. So we finally have a way to verify and upload firmware in a reliable way. Thanks to the actual laptop having no state, we can have something like Tails finally doing what it advertises. I can boot Tails or something like it, use it, shut it down, and there are no more traces of my activity there. I can give my laptop to somebody else, or I can boot some other environment, perhaps, I don't know, Windows, to play games or whatever. So what would it take to have such a stateless laptop? This is the simplest version, where the only modification is to take the SPI flash chip, essentially put it outside the laptop on the trusted stick, and just route the wiring, just four wires, to the trusted stick. And that's pretty much it. Oh, and I also got rid of the disk. I also had to ensure that whatever discrete devices remain, which in this case are the embedded controller and the Wi-Fi module, do not have flash memory, but use something like OTP memory instead. If that is not possible, we can further get rid of the Wi-Fi and use an external, USB-connected one. And for the embedded controller, it should be much easier, because the embedded controller is always something that the OEM chooses. So we can just talk to whatever OEM would like to implement the stateless laptop and ask them to use an embedded controller with essentially ROM instead of flash.
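Since the stick can be inspected on a separate trusted machine, verifying its read-only firmware partition reduces to hashing a dumped image and comparing it against a known-good value. A minimal sketch in Python; the sample image and the idea of a vendor-published hash are my assumptions for illustration, not something any vendor ships today:

```python
import hashlib

def verify_firmware_dump(dump: bytes, expected_sha256_hex: str) -> bool:
    # Return True only if the dumped firmware image exactly matches
    # the published known-good SHA-256 hash.
    return hashlib.sha256(dump).hexdigest() == expected_sha256_hex

# Illustrative use on the trusted computer:
image = b"\xde\xad\xbe\xef" * 4             # stands in for the dumped partition
golden = hashlib.sha256(image).hexdigest()  # would normally be published by the vendor
assert verify_firmware_dump(image, golden)
assert not verify_firmware_dump(image + b"\x00", golden)
```

Any mismatch, even a single flipped bit, makes the hashes differ, which is exactly the property we want when checking whether a stick still carries the firmware we put on it.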
So that's the simplest version, which is really simple. This is a more complex version, where we also add something that I call here an SPI multiplexer, which allows sharing the firmware storage not just with the processor, but also with the embedded controller, and perhaps also with the disk. Because maybe we would actually like to have an internal disk: an internal disk will always be faster and bigger than whatever disk we put on our trusted stick. You might object that, come on, a disk is not a stateless thing, right? A disk is made specifically to store state persistently. But this is a special disk, which I will describe in a few minutes: a special disk running trusted firmware and providing read-only and encrypted partitions for everything. Now, for the trusted stick: as I mentioned, the trusted stick is envisioned to have read-only and encrypted partitions. The read-only partitions are for the firmware and the system code. The firmware block is something that we would like to export over SPI, typically. And the system partition is something that we make visible to the operating system by pretending to be a USB mass storage device, or actually implementing the USB mass storage protocol, or maybe the SD protocol. For the encrypted partition, the important thing is that the encryption should be implemented by the stick itself. So we have some key here, and the question is what inputs should be taken to derive this key from. It could be a secret that is persistent on the stick, combined with a passphrase that the user enters using the regular keyboard, plus maybe a secret from the TPM. And when I say TPM, I mean a firmware TPM inside the processor, using storage provided by the encrypted user partition. Actually, by an encrypted firmware partition. Yeah, and the optional internal disk that I just mentioned should essentially do the same as the stick.
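The key derivation just described, combining a secret persistent on the stick, the user's passphrase, and optionally a TPM-held secret, could be sketched as follows. This is purely illustrative: the choice of scrypt and its parameters is my assumption, standing in for whatever KDF the stick's logic would actually implement.

```python
import hashlib

def derive_user_partition_key(stick_secret: bytes, passphrase: str,
                              tpm_secret: bytes = b"") -> bytes:
    # The per-stick secret (plus the optional TPM-held secret) serves as
    # the salt; the user's passphrase is stretched with a memory-hard KDF.
    salt = hashlib.sha256(stick_secret + tpm_secret).digest()
    return hashlib.scrypt(passphrase.encode("utf-8"), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

key = derive_user_partition_key(b"per-stick-secret", "correct horse battery")
assert len(key) == 32  # a 256-bit key for the stick's partition encryption
```

The point of mixing in the stick secret and the TPM secret is that neither the passphrase alone nor a copied blob alone is enough to reconstruct the key.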
Because it will be running trusted firmware, which it will be fetching from the trusted stick, the disk itself will not have any flash memory. So because we trust the hardware of the disk, and we trust the firmware, we trust it to provide read-only and encrypted partitions, just like the ones I mentioned on the stick. Which is nice, because it relieves the stick from acting as a mass storage device, and that has nice practical consequences. So that's the picture with the internal trusted disk, which you see just here. As you can see, it also takes its firmware from the trusted stick. And there is even an open source project, OpenSSD, and it looks like people have already built an open hardware, open firmware SSD, a very performant disk. So this is not science fiction, even for the SSD. OK, so this all looks very nice, but there is one problem. Even though malware might not have any place on the laptop to store stolen secrets, it still might try to leak them over the network. Let's differentiate now between classic malware and sophisticated malware. Classic malware is something you get via an attachment or some drive-by attack, which we'll discuss in a moment. For now, let's focus on sophisticated malware, so a hypothetical rootkit in the ME. Before we move on: for obvious reasons, such sophisticated malware would not be interested in getting caught easily. So it would not be establishing a TCP connection to an NSA.gov server or whatever, right? That would be plain stupid. Having that in mind, let's consider a few scenarios. Scenario number zero is an air-gapped system. Even though it might be air-gapped, remember there is still the ME running there. If the computer is not inside a Faraday cage, there are still plenty of other networks and devices around it, which means that the ME can theoretically use your Wi-Fi card, or even the speaker, to establish a covert channel with, say, your phone, which might be just nearby.
So in order to make such a system truly air-gapped, knowing that we cannot get rid of the ME, we really need to have kill switches for any transmitting devices, including the speakers. And apparently even that might not be enough, because some people have shown covert channels that use things like power fluctuations or temperature fluctuations. But let's leave those exotic examples aside. A more interesting scenario is a closed network of trusted nodes. In that scenario, we assume that all these people are trusted. Again, by definition, that means any of these people can compromise the security of anybody else. We really don't like trusted things, but let's start with something. Now, even though each of these trusted peers, who run stateless laptops, has this potentially malicious ME inside, we're going to fit a small proxy. So a modification that we should additionally make, which I have not shown you before, is that rather than connecting the Wi-Fi module directly to the processor, which is not good, because it gives the processor full authority over this Wi-Fi module, we would like to connect it through some proxy that would be doing some kind of tunneling, something like a VPN, or maybe Torifying any traffic that is generated there. So even though the ME might be willing to send some traffic, maybe not explicit traffic, maybe piggybacking on some user-generated traffic by only modifying, I don't know, TCP initial sequence numbers or something, it will all still be happening inside the tunnel. Again, some people might say: yeah, but the ME might still be modulating the timings of the generated packets, and this way try to convey some information. Well, we can't really do much about that, but on the other hand, it would be extremely difficult for the ME to do that, implementation-wise. Finally, there's the scenario where we want to connect with anybody, not just with a trusted computer.
Say, with some website on the internet that might or might not be trusted. Again, we have this proxy, which, by the way, might be implemented inside this embedded controller, which, if you remember, runs trusted firmware, because it fetches its firmware from our trusted stick. So the proxy, again, is tunneling any potential leakage from the ME, which means that a malicious ISP, or any part of the infrastructure here, still cannot really retrieve the secrets that the ME tries to leak. But of course, at some point, we need to terminate this tunnel. It might be at the VPN server, or it might be at a Tor exit node, or maybe at the server itself, if it's, say, an onion server. In which case anybody here can still presumably get the secrets from the ME in this cloud here, and unfortunately we can do nothing about that, except for hoping that if we used, perhaps, Tor for this first part of the equation, then at least a malicious administrator of, say, the server would not be able to correlate whose disk decryption key it really is. Although this is very tricky: that would only work if this computer was a special purpose computer. If you used something like Qubes here, and you wanted to have different partitions, one for your personal stuff, and only one of them was a special secret partition, that wouldn't work, because the ME would still be able to gather your identifying information from whatever other partitions you have. Because, again, it has unconstrained access to all the host memory. But still, using this proxy pushes the hypothetical adversary in the ME into the extreme difficulty of needing to piggyback on some higher-level protocols and establish exotic covert channels, compared to what they can do today, when they can simply steal the key and store it on an SPI flash partition, or maybe on your disk. This is orders of magnitude more difficult for them.
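To make the piggybacking idea concrete, here is a toy sketch, entirely hypothetical and not the ME's actual behaviour, of how a single secret byte could ride in the low bits of a TCP initial sequence number, and why a proxy that terminates the connection and originates its own, picking fresh ISNs, destroys that channel:

```python
import os

def embed_byte(secret: int) -> int:
    # Leaking side: hide one byte in the low 8 bits of an otherwise
    # random-looking 32-bit initial sequence number.
    return (int.from_bytes(os.urandom(3), "big") << 8) | (secret & 0xFF)

def extract_byte(isn: int) -> int:
    # A passive observer on the path recovers the byte.
    return isn & 0xFF

def proxy_rewrite(_inner_isn: int) -> int:
    # A tunneling proxy terminates the inner connection and opens its
    # own, so the outer ISN is freshly chosen and carries no secret.
    return int.from_bytes(os.urandom(4), "big")

assert extract_byte(embed_byte(0x42)) == 0x42  # leaks without the proxy
```

With the proxy in place, an outside observer only ever sees the ISNs chosen by the proxy itself, which is why the hypothetical ME rootkit is pushed towards much harder timing-based channels instead.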
We mentioned the sophisticated malware, and I mentioned that classic malware is a different story. Classic malware doesn't need to be shy about leaking something through whatever means you can think of, perhaps by sending an email to somebody. But obviously, the classic malware problem can be addressed quite reasonably well at the operating system level, for example using compartmentalization. But here comes a problem: a malicious BIOS. Let me step back a little bit. So far, we have been assuming that we don't really need to trust the BIOS, because with this stateless laptop and a trusted stick, even if the BIOS was malicious, it would, again, not be able to change anything in its own firmware partition, and would not be able to store stolen secrets anywhere. So it's convenient to think that the BIOS might not need to be trusted. But then again, a compromised BIOS might instead be providing privilege escalation backdoors for classic malware that executes on your compartmentalized operating system, such as a way to do a VM escape. Such things are trivial to implement. And we don't want classic malware, which means we want to ensure that the BIOS does not provide such backdoors. To make it short, we need an open source BIOS, something like coreboot. And it's great that we have coreboot, and we could hope for coreboot to become such a BIOS for the stateless laptop. Even though coreboot is not fully open source, because it relies on the so-called Intel FSP, the Firmware Support Package, an Intel blob that is needed to initialize your DRAM and other silicon, it should still be reasonably easy to ensure that the FSP does not provide SMM backdoors. So this is a solvable problem. Finally, there's this question. Let's say half a year from now, or a year from now, Purism or somebody tells you: here is the stateless laptop, you can order it for just $2,000. OK, so you got the laptop, but how do you know it really is a stateless laptop?
Maybe it is full of state-carrying elements. Maybe it's full of radio devices that are emanating signals everywhere. This comes down to the problem of how we compare two different PCBs, two different printed circuit boards. As far as I'm aware, right now our industry has no way to compare two different PCBs and state: yes, they look identical. Because if we had that, then we could have the laptop vendor, which would obviously have to be an open hardware vendor, publish the schematics and pictures of the boards. And then anybody who orders the laptop would have an opportunity to, say, photograph the board and use a diff tool to compare whether it really looks the same. Sure, we would not be able to see inside the chips, but at least a geometry-wise comparison would be a tremendous step toward making malicious modifications by vendors very difficult. This is a computer vision problem, kind of, right? You have two photos of two PCBs, and you have a tool to compare them. I believe Jacob Appelbaum already mentioned this some, I don't know, a year ago, probably. It's a great research project for all you academic people sitting here. Here's an example of a board that... well, I have no idea. I got this laptop, I open it, I see this board. Sure, I can identify some integrated circuit elements, like this embedded controller here. But really, maybe it's connected somehow differently. Maybe there is some other flash element there. I don't know. I would like to have the ability to check this against a golden image that some experts would analyze in depth and declare safe to use.

Many people say that perhaps we should all give up on Intel x86, because of the ME, for example. Yet this is not such a nice idea, or maybe not such a silver bullet, I should say. First, we have ARM. Everybody says: why not ARM? Let's go to ARM. But first, there's no such thing as an ARM processor. ARM just sells specifications, or IP.
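Stepping back to the PCB-diffing idea for a moment: even before solving the hard computer vision parts (image registration, lighting normalization), the core check can be sketched very simply. This toy version, entirely my own illustration, treats the golden image and the user's photo as aligned 2D grayscale arrays and flags the board if too many pixels deviate:

```python
# Toy sketch of the PCB "diff tool" idea: compare a (pre-aligned,
# normalized) photo of a board against a golden reference image published
# by the open hardware vendor.  Images are 2D lists of 0..255 grayscale
# values; a real tool would need registration, perspective correction and
# lighting normalization before this step.

def pcb_diff_fraction(golden, sample, tolerance=10):
    """Fraction of pixels differing from the golden image by > tolerance."""
    assert len(golden) == len(sample) and len(golden[0]) == len(sample[0])
    total = len(golden) * len(golden[0])
    differing = sum(
        1
        for row_g, row_s in zip(golden, sample)
        for g, s in zip(row_g, row_s)
        if abs(g - s) > tolerance
    )
    return differing / total

def looks_identical(golden, sample, max_fraction=0.01):
    """Accept the board only if almost no pixels deviate from the golden image."""
    return pcb_diff_fraction(golden, sample) <= max_fraction
```

The interesting output is not the boolean but the *locations* of the differing pixels: an extra flash chip or a rerouted trace would show up as a dense cluster that an expert could then inspect.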
And then there are vendors, like Samsung, Texas Instruments, et cetera, who take this IP and design and make their own SoC. This is still a proprietary processor, and they can put whatever they want inside. For example, we have TrustZone. By itself it's not as closed as the ME, but there is nothing that would prevent a vendor from taking TrustZone, locking it down, and ending up with something like the ME very easily. It's just a matter of the vendor being willing to do that. Also, the diversity of the processors makes it difficult for operating systems like Qubes, which would like to use advanced technologies like the IOMMU for isolation, to actually support all of them, because different SoCs might implement completely different versions of, or even completely different, technologies for doing that. Another alternative, a much better one, is to use open hardware processors. Currently, that means FPGA-implemented processors. In the future, maybe we will have 3D printers that will allow everybody to print their own processor. That would be great, but it's probably not coming any time in the next 10 or 20 years. And meanwhile, the performance, and the lack of security technologies like the IOMMU or virtualization, don't make this a viable solution for the coming, say, five years at least. And even then, even if we had such an open-source processor, this clean separation of state would still make lots of sense. Again, because firmware infections can be easily prevented, because malware, if it gets there somehow, still has no place to store stolen secrets, because it provides a reliable way to verify or upload firmware, and because it makes it easy to boot multiple environments and share laptops with others. And I know that most of you will now say: yeah, that might be a cool idea, but the market will never buy into that, right? But understanding that personal computers really are, as I said, extensions of our brains, we should stop thinking about market forces as the ultimate force shaping what our personal computing looks like.
Just like we didn't resort to market forces to give us human rights, right? We should not count on market forces to give us trustworthy personal computers, because it just might not be in the interest of the market. So hopefully some legislation could be of help here. Maybe the European Union could do something here. Because it's really funny: I often talk with other engineers, and we all know that our world now really runs on computers. And yet, apparently, almost every engineer I talk to says something like: yeah, but the salespeople will never do that, the business will never agree to that. But if the world runs on computers, shouldn't it be us, the engineers, who have the final say in how computer technology should look? I'll just leave you with this thought. Thank you very much.