Hello, everyone. I'm Sarah, a cloud orchestration software engineer at Intel. The container runtime and language runtime are my focus now, and I have made some performance optimization contributions to the PHP open source community. My teammate is Yongli He, a senior cloud software engineer at Intel with 15 years of experience and many contributions to network security and the Linux system. His cloud journey began with OpenStack in 2011, where he eventually became a committer. In 2021, he embarked on cloud native and Kubernetes, enhancing Istio service mesh security. Today, it's my honor to be here to present our topic: a WebAssembly runtime for FaaS protected by TEE. We will show you the background of confidential computing and TEE environments, the current solutions, and why we choose a WebAssembly runtime for FaaS protected by TEE. We have four parts.

Firstly, what is traditional confidential computing? Data security is now a critical issue. Confidential computing focuses on securing data in use, which is a strong demand in many cloud computation cases. The data being processed, and the techniques used to process it, are accessible only to authorized programming code; they are invisible and unknowable to anyone else, including the cloud provider.

Confidential computing is important in many commercial scenarios. For example, financial firms face an endless problem of digital theft and fraud. Recently, confidential computing has been employed in an AI-based money laundering detection approach utilizing federated learning. In this strategy, teams, often in different companies, collaborate to build a shared prediction model. Unlike standard machine learning approaches that require data centralization, federated learning allows the training data to be kept in the local environment, such as a bank's internal system, with no need to store data in the cloud. The companies can use confidential computing to make sure that the right programs are operating on the right data, rather than the data being shared across all companies. Besides this, there are all sorts of scenarios in financial services where confidential computing could help. As many companies rely more and more on public and hybrid cloud services, data privacy in the cloud is imperative. Confidential computing provides more data security assurance and encourages companies to move more of their sensitive data and computing workloads into the public cloud.

Specifically, a hardware-based trusted execution environment (TEE) in the CPU is built for confidential computing. How does a TEE secure data in use? An embedded encryption key mechanism ensures that the keys are accessible to authorized application code only. If the authorized code is altered or hacked, the TEE denies access and cancels the computation. In this way, sensitive data is protected in memory.

Now, three hardware platforms already support TEEs: Intel SGX, ARM TrustZone, and AMD SEV. They have different implementations. For example, Intel has been a pioneer in confidential computing by introducing Software Guard Extensions, known as Intel SGX, and it continues this strength with the upcoming Trust Domain Extensions, TDX. AMD introduced Secure Encrypted Virtualization to isolate guests from the hypervisor. ARM TrustZone isolates critical security firmware, assets, and private information from the rest of the application. As we can see, different TEE environments have different programming models based on their own hardware platforms, which brings overhead for programmers.
For example, software developers need to create enclaves and debug applications enabled by Intel SGX: a trusted function is called, and the code running inside the enclave consumes the sensitive data directly. Intel TDX, meanwhile, introduces a new architecture to help deploy hardware-isolated virtual machines. AMD SEV is an extension to the AMD-V architecture which supports running encrypted virtual machines under the control of KVM; encrypted VMs have their pages secured such that only the guest itself has access to the unencrypted version.

ARM already has a client-side solution in mobile. Applications in the conventional OS and in the TEE belong to the untrusted and trusted worlds, respectively. A TEE-based kernel is used for scheduling, memory management, crypto methods, and other basic OS functions. The TEE functional API defines the interfaces for communicating with the trusted world from untrusted-world applications, while trusted applications have access to the OS functions exposed by the TEE internal API. Furthermore, the hypervisor and virtualization make it possible to have multiple TEEs on a single device: there are two separate hypervisors in one device. For the secure world on the right, the monitor mode manages the actual switch of the core's state from the normal world to the secure world, and the boot code sets up the initial state of the secure world. However, all of the above is just a client-side solution, and many developers are trying to bring TEEs to cloud computing and more scenarios.

So how do TEEs appear in cloud native? Take the Confidential Containers project, whose goal is to standardize confidential computing at the container level. It enables Kubernetes users to deploy confidential container workloads using familiar workflows and tools, without extensive knowledge of the underlying confidential computing technologies. Azure Kubernetes Service, for example, already supports adding confidential computing VM nodes as agent pools in a cluster. These nodes let you run sensitive workloads while keeping a dedicated block of memory encrypted per container. The Kubernetes scheduler dispatches these TEE containers in the cluster, which is convenient in a cloud native scene.

Let's dive into the working model for TEE containers briefly. TEE images need to be built twice: the normal build is the same as the process for normal images, and then the TEE development tool rebuilds the image and encrypts some layers. Finally, we push these container images into the image registry, which is responsible for storing and delivering encrypted container images. A secret storage service is responsible for storing the secrets that the workload needs in order to run, such as disk decryption keys, and it delivers these keys to the TEE application.

In summary, the Confidential Containers project has several advantages: it keeps the TEE's confidential data secure, it removes the cloud service provider from the trusted computing base, it constructs a general attestation infrastructure, it is compliant with the OCI runtime specification, it can be deployed on any public cloud Kubernetes platform, and it really standardizes confidential computing at the container level.

Now let's take a look at a specific example of CoCo, confidential containers. Inclavare Containers provides the industry with an open source container runtime architecture for confidential computing, led by the Alibaba Cloud operating system security team. Let's see the workflow of Inclavare Containers.
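From the user's perspective, the whole flow described next is triggered by nothing more exotic than a pod that selects the TEE runtime. As a minimal, hedged sketch, assuming a Kubernetes RuntimeClass named `rune` has been registered for Inclavare (the pod name and image below are hypothetical), creating such a pod with client-go might look like this:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig and build a clientset.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Assumption: a RuntimeClass named "rune" points at the Inclavare runtime.
	runtimeClass := "rune"
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "enclave-demo"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &runtimeClass,
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "registry.example.com/enclave-app:latest", // hypothetical image
			}},
		},
	}

	// Creating the pod is what kicks off the CRI request traced below.
	if _, err := clientset.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

Everything after this point is handled by the runtime stack, as follows.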
Kubernetes initiates a container runtime interface (CRI) request to containerd, such as a request to create a pod; a CRI plugin is provided in containerd. After containerd receives the request, it forwards it to shim-rune. shim-rune can create both runc and rune containers. Here, for example, to create a rune container, it uses a libOS to convert a common image into a TEE image. rune then creates an enclave in the container and runs the application in the enclave: rune loads the Intel SGX driver into the container, creates a process in the container, the init-runelet, and then uses the init-runelet to create the enclave. The enclave is a TEE protected by Intel SGX; it includes the libOS, the language runtime, and the applications. Now we can see that a trusted application is running.

Well, this project really lowers the high threshold of confidential computing and provides developers with real practice. However, the current confidential container technology still has some shortcomings. The use and development costs are relatively high due to the different hardware, since a TEE-encrypted container image for one platform cannot easily be reused on another, which brings a big storage and network overhead at the same time. Encryption induces a bigger image size and a bigger footprint, and the isolation is still too coarse because the whole workload is put into the TEE. So we want to try another solution: a Wasm runtime. Now let my teammate Yongli introduce the Wasm runtime in detail. Thank you.

Hey, everyone, this is Yongli He. I'm very glad to present this to you. Thanks for watching. We're going to go through two sections. In section three, we talk about how Wasm could be used in a serverless platform through two user cases. In section four, we talk about how to put a Wasm runtime into a hardware TEE.

In section three, then, we talk about what can be done by combining Wasm and the hardware TEE, following two user cases: the open source serverless platform Knative, and the Wasm-native platform Fermyon Spin. WebAssembly is a binary instruction format for a stack-based virtual machine. Wasm is designed as a portable compilation target for many programming languages, enabling us to deploy our apps on the web. It also enables us to deploy Wasm applications on the server by using a specific Wasm runtime. For server-side applications, especially in the edge cloud, safety and a trusted running environment are what we demand. We combined Wasm with hardware trusted execution environments and thereby provide a safe and trusted cloud. This means we need to put the Wasm runtime into the hardware trusted execution environment. We will soon see that we gain more than just a safety solution; we will talk more about it shortly.

We have several different hardware trusted execution environment models. For example, a compact solution like Intel SGX requires a specific SDK and tools to build your applications, but TDX instead, at the VM level, provides more familiar tools for developers, requires less effort, and can transition legacy applications smoothly into the trusted domain.

WebAssembly running in a trusted execution environment has more benefits than we expected initially. For example, in a function-as-a-service platform, and more specifically in Knative, function boot time is a key factor for users. A Wasm function has a unique fast-boot benefit, even when running in a trusted environment, because the Wasm runtime is small and identical for every Wasm application.
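To make that last point concrete, here is a minimal sketch of why the runtime is identical for every Wasm application: the program below contains nothing runtime-specific, and once compiled to a `.wasm` binary it can be executed by any WASI-capable runtime.

```go
// hello.go - an ordinary Go program with no runtime-specific code.
package main

import "fmt"

func main() {
	fmt.Println("hello from a wasm sandbox")
}
```

With Go 1.21 or later this can be compiled with `GOOS=wasip1 GOARCH=wasm go build -o hello.wasm hello.go`, and the same hello.wasm then runs unchanged under Wasmtime, WAMR, or any other WASI runtime. The runtime that boots it is shared by every such module, which is exactly what makes snapshotting and pre-booting attractive.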
In traditional applications, by contrast, each application has its own unique and different runtime requirements. We had already tried to improve image boot-up speed by letting an application boot from a snapshot, and that's very helpful. Combined with WebAssembly, the snapshot solution becomes even more attractive: it means less disk space taken and quick boot, and a pre-booted pool of function runtimes then becomes possible.

We chose two example platforms to introduce more about the combination of Wasm and trusted execution environments: Knative, an open source function-as-a-service platform, and Spin, a Wasm-native cloud. Knative has two main components, Serving and Eventing. These two components work together to automate and manage applications. Knative takes care of the details of networking and autoscaling, so the development team can focus more on the serving logic itself instead of every detail of the platform. In Knative, Functions is a key component. Knative Functions provides a simple programming model that does not require in-depth knowledge of Kubernetes and containers. Knative functions are easily created and easily deployed onto the Knative platform. When you build or run a function, the container image is generated automatically for you. Each time you invoke your code, Knative boots several pods for you as required and then runs your function, that is, your code.

Combining the function-as-a-service platform with a trusted-domain-based Wasm application means we need to put the whole Wasm runtime and the Wasm binary into the trusted domain. This gives us a safe and trusted function-as-a-service platform, which addresses several concerns from users. For example, a company may run a FaaS platform and need to secure the interaction with their functions and protect their critical data; putting the functions and the data in a trusted domain can prevent malicious attacks and malicious operators. A tenant company may want to protect their function code and algorithm data, keep their data in the trusted domain, and not leak their data to the cloud service provider or to other tenants of the same provider. To address this concern, we build the function as a Wasm binary and then encrypt the binary. Using WebAssembly here is mandatory: a FaaS usually builds your function from your source code, but for security we are going to encrypt our Wasm binary, which totally breaks the FaaS cloud's workflow, because the platform can no longer produce a correct container image to run the customer's binary application. With Wasm we can address this problem, because the WebAssembly runtime is the same for every Wasm binary.

Now let's discuss another example: building a trusted execution environment with a Wasm-native cloud. Spin is a Wasm cloud; every application in the cloud is a Wasm binary. Spin is a kind of Wasm runtime built on a Wasm engine, of course, and Spin also interfaces Wasm binaries to the host in a standard way. Here is a very simplified chart to elaborate how Spin works. Spin is an open source framework for building and running FaaS microservices with WebAssembly. It aims to be the easiest way to get started with WebAssembly microservices, taking advantage of the latest developments in the WebAssembly component model and the Wasmtime runtime. Spin offers a simple CLI that helps you create, distribute, and execute applications, and Spin also offers a standard library to help your Wasm applications interact with the cloud and with host-side services.
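As a small illustration of that standard library, here is a hedged sketch of a Spin HTTP component in Go (built with TinyGo; the SDK import path varies by Spin version, so treat the module path below as an assumption):

```go
package main

import (
	"fmt"
	"net/http"

	// Spin's Go SDK; the exact module path depends on the SDK version.
	spinhttp "github.com/fermyon/spin/sdk/go/v2/http"
)

func init() {
	// Register the handler; Spin's host side invokes it for each HTTP request.
	spinhttp.Handle(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from a Spin component!")
	})
}

// main is intentionally empty: the Wasm component is driven by the host.
func main() {}
```

Note that the handler never opens a socket itself; the HTTP trigger lives on the host side, which brings us to the next point.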
The Wasm engine links your application with host components, and those host components provide your Wasm application with services like HTTP, Redis, and other APIs. To make the Wasm runtime secure and trusted in the Wasm-native cloud, we conclude that putting all of Spin into a trusted domain is a smooth solution. For a VM-like TEE, for example the Intel Trust Domain Extensions, this is easy to do.

In section four, we talk about how we put the Wasm runtime into a trusted domain. We partition this topic into several small sections and talk about three things: first, how this works for Trust Domain Extensions; second, how it could work for SGX in the cloud; and last, what Wasm runtimes are available to begin with.

For a VM-level trusted domain, we have everything needed to support a Wasm runtime, but we need to treat that VM a little bit differently. For example, for Intel TDX there are TDX tools provided to help developers build individual components and packages, install pre-built binaries, and even create guest images. This is a very nice starting point for building a Wasm runtime on Intel TDX.

As another example, the SGX enclave needs different support. We evaluated the open source EGo project, and it is very helpful. Developing confidential applications normally requires special knowledge and significant code changes, but with EGo you can skip that and write your code as you usually do, as if SGX did not exist at all. With EGo, we don't need to refactor the Wasm runtime; we can use three simple commands to build, sign, and run the Wasm runtime in the enclave. As shown here, EGo provides a Wasm runtime example, which also means we immediately get a working starting point. That's an exciting thing when starting to explore a new project like this one: we can put our energy into putting it together with the FaaS platform, instead of worrying about how to put Wasm into the enclave.

We also have another good choice of Wasm runtime to work with, the WebAssembly Micro Runtime (WAMR). This is a lightweight standalone WebAssembly runtime with a small footprint, high performance, and highly configurable features for applications. It targets embedded, edge, and cloud scenarios, and it supports trusted execution environments natively. WAMR is a good choice for the edge cloud to further reduce footprint and size. WAMR is a C-based implementation, but that's not a problem with the help of EGo: WAMR itself can be used as a simple library and become part of a Go version of WAMR. Thanks, everyone. That's all I present today. Thanks for watching.