If you have questions or if I am going too fast, please raise your hand and I will try to slow down and explain. There will be time for questions at the end as well, so please think of some questions, and I will look forward to answering them. I like tea; because I'm British, that is something I share with Japanese people, who also like tea. I am the author of this book, Trust in Computer Systems and the Cloud, and I am also the executive director of the Confidential Computing Consortium, which we will speak about a little bit later on. First, what is confidential computing? Here is the brief description, and I will explain a little bit more about what it is. It is the protection of data in use using hardware-based, attested trusted execution environments. So, let us briefly talk about those. The first thing is that data has three states. The first state is data in transit, when it is moving on the network; we use TLS or IPsec to protect it there. The second state is data at rest, so we might protect it in a database or an encrypted file system, for instance. The third state is data in use, when it is being processed, when it is in RAM, and that is very difficult to protect; we will talk a bit about that. Confidential computing uses hardware-based capabilities in chips (CPUs, GPUs and DPUs, for instance), and it is cryptographically attested: you have verifiable information with which you can prove that it is working.
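The three states can be sketched in a few lines of Python. This is purely an illustration, not production cryptography: the SHA-256 counter-mode keystream stands in for a real cipher such as AES, and the point is only that conventional encryption covers the first two states, while processing still requires plaintext in RAM, which is the gap trusted execution environments close.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. Illustration only; a real
    # system would use an authenticated cipher such as AES-GCM.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice restores the plaintext.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

record = b"patient 42: blood type O+"
key = b"storage-key"

at_rest = xor_crypt(key, record)      # state 1: encrypted on disk
in_transit = at_rest                  # state 2: TLS or IPsec would wrap this
in_use = xor_crypt(key, at_rest)      # state 3: plaintext again, in RAM

assert in_use == record               # processing needs plaintext: the gap TEEs close
assert at_rest != record
```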
It is part of the family of privacy-enhancing technologies, like fully homomorphic encryption or secure multi-party computation, and can work with those technologies; it is less a competitor and more complementary to them. It is available on almost all clouds right now. You can buy chips and computers which have it right now. It is very fast, unlike some technologies such as homomorphic encryption; it is close to normal processing speed. You can use it for general computing: you do not need to rewrite your algorithms, for instance, to use confidential computing. Let us explain a little bit about why it is important and how it works. This is a standard virtualisation model, which should be quite familiar to everybody. You have a host operating system and you have maybe two workloads, two applications, which are running. Isolation is about protecting things from each other. Typically, we talk about the CIA triad. C is confidentiality: keeping things so that other people, other processes, cannot see them. I is integrity: protecting them from being changed by unauthorised entities. The last one is availability, making sure that your data and your application can be used when you wish. Generally, it is pretty easy to know when something is available: if your application is not running, you can usually tell. Protecting confidentiality and integrity is much more complex. Let us talk about the three types of isolation. The first one is workload from workload. The workload on the left should not be able to interfere with your workload on the right; you want to protect the confidentiality and integrity of that workload. Type two is host from workload. You do not want your host (your operating system, your hypervisor, your other applications running as part of the host) to be interfered with by workloads. If you are running a cloud, for instance, if you are a CSP or a hyperscaler, you want to protect your infrastructure from the workloads.
These first two are fairly easy; we know how to do these things. Hypervisors, containers, seccomp and SELinux are all techniques we use to provide them. I should say that the slides are available now if you wish to look at them later on. What about the third type of isolation? This is protecting the workload from the host. This is more difficult, because virtual machines, hypervisors and containers do not protect against this. The way that virtualisation typically works is that your VM manager, your hypervisor, your container management system understands the data and the memory pages that your workloads are running in, and can change them if it wishes. This is fine if you trust everything, but I don't trust everything. I'm a security guy, and banks don't trust everything, and healthcare providers don't trust everything. Nobody trusts everything. This is the problem that we are trying to address. Privacy-enhancing technologies like confidential computing are largely about providing these types of isolation, particularly type three. Let's go back to this definition. This is exactly what confidential computing does. It provides protection not just for your data, but also for the application while it is in use. It uses capabilities provided at the chip level to protect that data, to encrypt it whilst it's in use. It creates what's called a trusted execution environment, which allows you to create a section, a set of memory pages, into which you can load your application and its associated data and protect it. I won't go into detail today, as we don't have the time, but there are also mechanisms called attestation, which take measurements of the code and data and then sign them cryptographically, so that you can check that these have been set up correctly.
This is important because, let's say, you go to a cloud provider and you say, well, I'm going to use confidential computing because I don't trust you. I have my customers' data and I don't trust you not to look at it. More importantly, the regulator, the government, doesn't trust you not to look at it. So you say, please will you set up a trusted execution environment? And they say, yes, it's done. And I say, wait a moment, how can I trust you to do that? If I said I don't trust you, how can I trust you to set that up? The answer is that you use the capabilities in the chip to provide you with a cryptographically signed measurement which you can verify, so that you don't need to trust your cloud provider. You don't need to trust the host OS, the hypervisor, the container management and all of those pieces. You're reducing the trust; I will talk a little bit about that as well. So TEEs encrypt the workloads, and they protect the integrity and the confidentiality of your workloads. They don't protect availability because, of course, hypervisors or kernels can still do resource starvation: they can say, well, I'm not going to give you the clock cycles, or I'm not going to process data in and out on the network, for instance. But they do provide confidentiality and integrity protection. So why is this important to the cloud? This is a slightly complex picture, taken from my book, in fact, and it is roughly what your stack looks like if you're putting something in the cloud. There are an awful lot of different layers here, right? In the virtual machine, which is the bit on the left, you've got all of your stuff: your application, the middleware, the user space, the kernel, the bootloader and the BIOS. Let's say you control those completely. But there are many pieces on the host system that you do not and cannot control, and it doesn't matter how big a customer you are.
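The attestation flow just described can be sketched as follows. This is a toy model under loud assumptions: real hardware signs quotes with an asymmetric device key fused in at manufacture and verified through the vendor's certificate chain, whereas here an HMAC over a shared secret stands in for that signature, and all the function names are invented for illustration.

```python
import hashlib
import hmac

def measure(code: bytes, data: bytes) -> bytes:
    # Measurement: a digest of everything loaded into the TEE.
    return hashlib.sha256(code + data).digest()

def attest(device_key: bytes, measurement: bytes, nonce: bytes) -> bytes:
    # The chip "signs" the measurement plus a fresh nonce. Real hardware
    # uses an asymmetric key fused in at manufacture; HMAC is a stand-in.
    return hmac.new(device_key, measurement + nonce, hashlib.sha256).digest()

def verify(device_key: bytes, expected: bytes, nonce: bytes, quote: bytes) -> bool:
    # The relying party recomputes the quote over the measurement it expects.
    good = hmac.new(device_key, expected + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(quote, good)

# The relying party deployed the workload, so it knows the expected measurement.
code, data = b"my-application", b"my-config"
expected = measure(code, data)

nonce = b"fresh-challenge"   # a fresh nonce prevents replay of old quotes
quote = attest(b"chip-secret", measure(code, data), nonce)

assert verify(b"chip-secret", expected, nonce, quote)       # TEE set up correctly
assert not verify(b"chip-secret", measure(b"evil", data), nonce, quote)
```

The important property is that trust flows from the chip, not from the cloud provider: the host can relay the quote but cannot forge one for a workload it has tampered with.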
If you go to Google or Amazon Web Services or to Azure or to Rackspace or to any other CSP, they will not give you the details of all of these different pieces. And typically, different actors, different entities, will control or provide these different layers, which is why they're in different colours. So your firmware may come from one provider, your bootloader and kernel from another, your BIOS from someone else: the stack may come from many different sources. And this is part of the problem. Although the cloud is great, allowing us to do digital transformation, to move things into the cloud, to scale out as we need, and to avoid having to worry about managing all of the stack, there are trust issues. How can I know that the bootloader or the firmware or the kernel will not interfere with, export, or otherwise do bad things to the things in my VM? With a container, it's very similar. What you really want to do is not trust any of that; you want to avoid trusting all of them. And this is what a TEE does. Now, there is one thing you are going to need to trust, and that is the CPU, or the GPU if you're using one; I haven't shown that here. That is because something has to run the instructions, something has to do the processing, so you have to trust that to some degree, which is why we do the attestation and the cryptographic proof. But what a TEE, a trusted execution environment, does is protect your application and your workload from all of the rest of it. And so this allows you to do all the things that you want to do. It allows you to have data privacy, whether that is for customer data, patient data, cryptographic keys, pharmaceutical modelling, or AI models, for instance. It allows you to think about data sovereignty, which is a very big question at the moment, because many companies wish to use, for instance, cloud providers or other big companies to process and manage data.
But many countries are very keen that there should not be the possibility that other countries, or actors from other countries, can get access to that data. This is something that I'm very pleased governments are taking seriously. And we're seeing many countries saying, well, you can't use particular applications or particular cloud providers because we don't trust them. This is a concern which can go away if you use confidential computing. Because if, let's say, the host computer is in, I don't know, Germany, but I am a British company and I don't trust the German company... it's more likely to be the French; for many years we've had disagreements with the French in the United Kingdom. But let's say I don't trust them. Even if I don't trust them, if I put my application into a trusted execution environment, I can have cryptographic assurances that they cannot change or mess with it. So this is all good. It allows us to think about using the cloud in new ways. But it is important to be thinking about open source. Those are the pieces that you want to be open source, because you want as much as possible of what you do trust to be open source. I mean, really, you want that piece, your application, to be open source as well, right? But that is your application, and you can do with it what you want. So why is that important? The first reason is that the best security is possible with the fewest number of trust relationships. I don't want to have to worry about having relationships with four, five, six, or, with the previous picture, all of those different entities. I want to have as few trust relationships as possible. The other thing is that security is always complex and is always difficult to do correctly, or as correctly as possible. There is no perfect security; sorry, but there just isn't. But the more that is visible and auditable, the better. And that is a really important word. I am a techie; my background is as a programmer.
I program very badly, so I moved to management. For me, I am interested in doing all this stuff right, but for business people and for regulators and for companies, auditability is absolutely vital, and open source allows everything to be auditable. So from my point of view: the less code, the fewer trust relationships, and the more open source, the better, because it is auditable, it is visible, and you can check what's going on. Without open source, you cannot have transparency, and transparency is important. It's always one of those strange things that security people, who worry about confidentiality, want transparency. But you want the underlying technologies to be as transparent as absolutely possible. There's an interesting thing that's missing from this. Can anyone guess what's missing that I haven't shown as open source? I'll take that: the CPU. I am happiest, shall we say, if the CPU and the associated firmware and instruction set are also open source. And there are some providers who are doing that. For instance, RISC-V is represented in the Confidential Computing Consortium, and that's all open source. And some of the other providers are also looking to move more and more of what they do to open source. So let me talk a little bit about the confidential computing ecosystem. I think I'll also have some time for a slightly more technical conversation about what's going on, if people are interested. We'll do that, but let me first do this. When you're building an application and you want to run it, there are a whole bunch of different things you need. Here's the first bit: you need a chip to run your application on. So we need silicon designers, manufacturers and vendors. There are a number of models. You've got people like AMD or Intel or Huawei, who sit on the left: they design, make and sell their own silicon.
On the right, you've got other people, like Arm or RISC-V and Samsung, for instance (and Intel and AMD actually), who either create CPU and silicon designs or manufacture and sell them. So you need that. You also need your OEMs, the people who create and make the hardware. That might be people like Fujitsu or Mitsubishi or Dell or HP, who are making this hardware and selling it to the data centres, or to you as customers as well. And then you have operating system vendors: your Microsofts, your Canonicals, your Red Hats, your SUSEs. Those are the main ones I can think of at the moment; there are some others around as well, and Huawei does this too. You need all of them because, in order to use confidential computing, it needs to be turned on in the kernel, and you need the correct libraries and the correct management pieces. And then you need the TEE platform projects and vendors. When I showed you this picture here, those orange pieces, somebody needs to do that, right? Someone needs to put that together and package it and make it all work, and that is what TEE platform projects and vendors do. In the Confidential Computing Consortium, we have a number of projects doing that; I will talk about those in a bit. These are the pieces which are most important to be open source, because these are the bits that you need to be able to trust. And on top of that, you want to be able actually to use this, right? So it needs to be provided by CSPs and hyperscalers: your Alibabas, your Azures, your Googles, people like that. And then you have ISVs. For each sector you can think of (banking, pharmaceuticals, automotive, telecoms), you will have people writing applications which use confidential computing, to make it easy for you to adopt. And SaaS providers: people are going to be providing these as SaaS products as well.
And of course, last but not least, the end users on top. From my personal point of view, I'd like to see all of that open source, all the time, but we don't live in that world yet. Keep coming back for another 20 or 30 years and maybe we will; I don't know. This is the right conference for that. So what is the Confidential Computing Consortium? Well, it's about doing this, right? We are a community, first of all. We are part of the Linux Foundation, which is why I'm here. We want to create and promote open source projects to manage data in use, and to promote the adoption of confidential computing through open collaboration. So we do a number of things. We do technical work: we have some very active work in our technical advisory groups and the SIGs, special interest groups, looking at low-level technical issues about protocols, about attestation, and about how you can apply this to regulators and standards, the work of NIST or CSA, for instance. We also do outreach and marketing. I suspect that many of you here didn't know much about confidential computing. How many of you had heard of confidential computing before you saw this talk listed? Okay, so that's not enough. I'm pleased that more people now know about confidential computing, but we need to let people know that this is available. We wish to grow the ecosystem, because we need all of those boxes to be filled in, and we are also working with regulators and standards bodies so that they know, when you're talking about GDPR, for instance, that if you use confidential computing in a particular way, you can meet some of those requirements about data protection, privacy protection and data sovereignty. So we're working with a number of regulators, standards bodies and governments, and we want to do more of that as well. This is, I think, slightly out of date, but it's pretty close: a list of our members. There are some premium members down the bottom.
You'll see some big ones. Our general members, on the right, include some big ones too, but many startups as well; we encourage startups. We have members who are making billions of dollars and have tens of thousands of employees, and we have members who are making no money and have two or three employees. We also have some associate members, who are other consortia, academic or government organisations, like RISC-V and Linaro and people like that. If you don't see your organisation listed here, but you are at this talk, then you should consider joining. That's my selling bit done for the day. We have members or projects in all of these different boxes that I showed you before, and some of them span multiple boxes. If you think about, I don't know, Microsoft: they are an OEM, they are an operating system vendor, they are an ISV, they are a CSP. So they span multiple boxes. It's sometimes interesting to see different parts of organisations arguing about which are the most important parts; that can be fun. I used to be at Red Hat. IBM and Red Hat have very different priorities, but it's open source, it's the Linux Foundation: everyone can get along and be very friendly. Yes, good. Let's hope. These are the open source projects. There are two more which are in the process of being approved at the moment: Rust SPDM and the Certifier Framework. There are at least two more which are going through at the moment, which have been presented to the CCC. One is the Islet (I-S-L-E-T) project from Samsung; another is work between, I think, Huawei and Red Hat, I can't remember exactly. We have other ones coming along as well. How can you get involved in all of the interesting things that are going on? We have a varied make-up of members, very different types of organisations.
Across all of these different areas, from very large to very small. One of the things that we are doing at the moment, and one of the reasons that I'm speaking to you here, is expanding into the Asia-Pacific region. There is a lot of interest, a lot of pull from customers, and a lot of regulatory interest in privacy-enhancing technologies and data-in-use protection. Our members are doing work here, so we want to be spending more time and growing the membership to represent the needs of Linux Foundation members and general interest members across Asia-Pacific. We are absolutely committed to open source and we want people to get involved. But not everyone can get involved as much: if you're a small company, you maybe can't do as much, and you can focus on the pieces that you're interested in. Some of the things that are very big, as I said, are industry visibility, making sure people know about it, and Asia-Pacific growth. We also want to help people understand good ways to use the technology that are useful for them. So we're working on use cases which we can publish, so that people say, oh yes, that fits what I need to do, for instance. There are so many use cases, so many different sectors; it's difficult to think of a sector in which you don't care about the privacy and integrity of your data. Everyone has data that they want to protect. But this is not just about security, it's about risk. When it comes down to it, all security should really be about risk. The sort of people who care about this are your CISOs, your chief information security officers. Although their title has the word security in it, none of them care about security. I'm sorry, they don't. They care about risk: managing risk, understanding risk, mitigating risk.
If you have technologies, such as any privacy-enhancing technology, including confidential computing, which allow you to change how you manage and think about the risk to your organisation, to your customers, to their data, to your business, then that is something new that you should be paying attention to. Before I go to questions, I will quickly talk about what happens here. Basically, normally, when you're doing computing, you have a bunch of memory pages which make up a workload in RAM. This is what it looks like. What a TEE does is encrypt those memory pages so that, in RAM, any other process with access to those memory pages, including the hypervisor or the kernel, can't look at them. The exact technology is slightly different between the different implementations available, but this is the basic idea. Each TEE instance has a different cryptographic key, which means that one can't look at another; there is isolation between them. It's only when a memory page actually goes into the CPU that it is decrypted, and it is encrypted again when it goes back into RAM, which provides the protection. That's just a very brief description. We have a few minutes left, I believe, five minutes, I think, for questions. Are there any questions that I can try to answer for you? The question was: for multi-core workloads accessing the same memory, is that possible? The answer is absolutely yes. With the early technologies it wasn't so easy, but the Intel, AMD and Arm folks have absolutely nailed that question. Where it gets interesting is when you're doing it across different systems and trying to think about migration; there are some very complex issues there. One thing that's happening now, beginning to take off: I talked mainly about CPUs, but if you think of AI workloads, for instance, they often use GPUs. How can you establish a trust relationship between a GPU and a CPU?
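That per-instance keying can be modelled in a few lines. This is a toy sketch only: the `TEE` class and the SHA-256 counter-mode XOR stand in for the memory encryption engine in real silicon, which uses a proper cipher (and, in some implementations, integrity protection as well) with keys that never leave the chip.

```python
import hashlib
import os

def page_xor(key: bytes, page: bytes) -> bytes:
    # Toy page cipher (SHA-256 counter-mode XOR): applying it twice with
    # the same key restores the page. Real engines use AES, not this.
    ks = b""
    counter = 0
    while len(ks) < len(page):
        ks += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(page, ks))

class TEE:
    # Each instance holds its own key; in real hardware the key is
    # generated by, and never leaves, the chip.
    def __init__(self):
        self.key = os.urandom(32)

    def store(self, page: bytes) -> bytes:
        return page_xor(self.key, page)   # RAM only ever sees ciphertext

    def load(self, enc: bytes) -> bytes:
        return page_xor(self.key, enc)    # decrypted only inside the CPU

tee_a, tee_b = TEE(), TEE()
secret = b"workload A's page"
ram = tee_a.store(secret)

assert tee_a.load(ram) == secret   # the owning TEE recovers its page
assert tee_b.load(ram) != secret   # another TEE (or the hypervisor) cannot
```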
There's work going on on exactly that, in places like the IETF, which, again, the Confidential Computing Consortium is involved with, to make sure that it all happens. So: yes to your question, and beyond. Any other questions, please? Oh, there's one over there, sir. Are there any good reasons not to use trusted computing? Confidential computing; trusted computing is rather different. I take issue with trusted computing, but you have to read the book to understand why. Are there any good reasons not to? Well, it's not easy for all workloads yet; it's getting there. It's not provided on all CPUs. Typically, it's a server-based technology. That's changing, because Arm has brought out capabilities in Armv9, and RISC-V is doing the same. So, for instance, the Islet project I talked about from Samsung is using this technology on mobile phones and other mobile devices. So it doesn't fit everything yet. The performance difference is negligible, generally not a problem. There's one interesting one, which we don't talk about much: workload density. Typically, when you're running applications, let's say a set of containers on a host in the cloud (think of a pod, something like that), many of them will be able to share memory pages, because you've got a single kernel, and unless you're writing to those pages, which often you're not, you can share them. You typically can't do that with confidential computing, so your workload density may be reduced. Now, there are some ways around that, some interesting stuff happening, which you need to be aware of; in most use cases it's not a problem. So I think it's about ease of use. Again, things like GPU offload are coming; the technology is not mature yet. So I think it's largely about maturity and people knowing about it. Any other questions? I think we have time for a couple more. Yes.
Okay, so the question is: how would a cracker attack the data in a TEE, in a GPU for example? Thank you for using the word cracker; I really appreciate that. We use the word hacker negatively too much. So, the answer is that there are a number of attacks, some easier, some harder. As I said, there is no perfect security. There are a number of side-channel attacks which can, in certain circumstances, leak data. So, for instance, if you're doing work with cryptographic key material, you may need to look at things like constant-time cryptography implementations within your trusted execution environment. Most of the attacks require long-term physical access to the chip, and there are other mitigations and protections for those sorts of things, but most attacks are very difficult with confidential computing. It depends on the technology and the maturity, but most of them require long-term physical access and a scanning electron microscope, or the ability, for instance, to vary voltages on the chip. These are quite complex, long-term attacks. I think we have time for maybe one more question, if we have any more. Please, sir. So, the question is about confidential containers; the presentation today has mainly focused on confidential computing itself. Confidential containers use confidential computing technology underneath. There are a number of projects, some of which sit in the CNCF, some of which sit in the Confidential Computing Consortium, and they are all very friendly. As for workloads, it depends how you define a workload. You can make the entire pod a workload, or you could say just the container instance itself is a workload. How you draw that boundary depends on how you define your isolation granularity, basically. You could put an entire operating system in it, if you wish.
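The constant-time point is worth a concrete sketch. A naive byte-by-byte comparison returns at the first mismatch, so its running time leaks how many leading bytes an attacker has guessed correctly, even when the comparison runs inside a TEE; Python's `hmac.compare_digest` is the standard constant-time alternative.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Leaks timing: returns at the first mismatching byte, so an attacker
    # measuring response times can recover a secret one byte at a time.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Runtime does not depend on where (or whether) the inputs differ.
    return hmac.compare_digest(a, b)

expected_tag = b"expected-mac-value"
assert constant_time_equal(expected_tag, b"expected-mac-value")
assert not constant_time_equal(expected_tag, b"unexpected-mac-val")
```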
And again, I think it's important to think not only about how many trust relationships you have, because if you think about not just a pod but even a single container, the number of dependencies can be very high, but also about your TCB. The smaller your trusted computing base, the smaller the TCB, the higher the level of security you may be able to assure. So there are interesting trade-offs here, some of which are technological, some of which are supply-chain, some of which are process-based. Great question, thank you. So, thank you very much indeed for your questions. Please do get in touch with me if you wish; I am available via Sched or... it's a long way back. There we go. So, you can find me via that. Thank you for your time and have a good conference. Arigato.