Hello everyone. I'm happy to be here today to share our vision for confidential serverless computing. My name is Xinzhan, I'm from Intel, and my colleague Liang will present with me. Today we want to introduce a new solution for making FaaS functions more secure with the help of confidential computing and WebAssembly (Wasm). If you have any questions, please contact either of us via email. Before I get into the details of our proposed solution, I want to take some time to go over the concepts we will mention later in the talk, to make sure we are on the same page and have a clear understanding of the topics. Then we will discuss the challenges and potential solutions in the market: we will list some typical use cases and the challenges they face, and we will discuss solutions to those challenges to give you an insight into our proposal. Finally, we will show you our reference architecture, which includes the architecture diagram and all the components we use in our work. This will give you an understanding of the design choices we made and how they affect confidential computing. First, some background notions that will help you follow today's topic. The first one is serverless. What is serverless? It is a model that allows developers to deploy their functions without any knowledge of the servers running in the backend. All the developers need to do is provide their function code to the CSP, and the CSP is responsible for running the function on its infrastructure. This also brings some security concerns, especially for highly sensitive applications; the goal of our solution is to address exactly these concerns. The second item is WebAssembly, or Wasm. It is a platform-independent format for users to write and execute their functions. It is portable and can be deployed in different environments.
As I mentioned, serverless is a cloud computing model where the CSP manages the cloud infrastructure and provides resources according to customers' needs. Function as a Service, or FaaS, is a serverless way to run functions in a cloud environment. With FaaS, developers can focus on writing function code without the need to build a full application or maintain the underlying infrastructure. In our work, we chose Knative as the FaaS orchestration layer because it is Kubernetes-based and open source, so you can easily take our solution as a reference and bring it into your own serverless environment. Like other FaaS platforms, Knative uses an event-driven computing model. Functions are triggered by a specified event, such as a message queue, an HTTP request, or any event source in the cloud. Once an event is received, the FaaS function is triggered, and the FaaS platform automatically builds the execution environment and runs the function in it. A function can also invoke another function, as well as other cloud services such as a storage service. As I mentioned on the previous slide, we chose Knative as our serverless platform. Knative provides autoscaling so that the number of function instances matches the number of incoming requests. As shown in this figure, the client invokes the function via the ingress gateway, and the activator is the Knative component responsible for starting the first function instance. The Knative service is created as a container running in a Kubernetes pod; this container is called the user-container. In the same pod there is another container called the queue-proxy, a sidecar container that queues incoming requests and forwards them to the user-container. It also collects metrics and reports them to the autoscaler component. Based on the metrics it receives, the autoscaler computes the replica number with its scaling algorithm and decides whether the pod count needs to increase or decrease.
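As a rough illustration, the concurrency-based scaling decision just described can be sketched as follows. This is my own simplified illustration of the idea (keep roughly a target number of in-flight requests per pod), not Knative's actual implementation:

```python
import math

# Simplified sketch of a concurrency-based scaling decision, in the spirit of
# the Knative autoscaler: keep roughly `target_concurrency` in-flight requests
# per pod. Illustration only, not the real algorithm.

def desired_replicas(observed_concurrency: float,
                     target_concurrency: float,
                     max_scale: int = 10) -> int:
    if observed_concurrency <= 0:
        return 0  # no traffic observed: scale to zero
    wanted = math.ceil(observed_concurrency / target_concurrency)
    return min(wanted, max_scale)

print(desired_replicas(25, 10))  # 25 concurrent requests, target 10/pod -> 3
print(desired_replicas(0, 10))   # idle -> 0 (scale to zero)
```

This captures both behaviors mentioned in the talk: scale-to-zero when there is no traffic, and growing the replica count as concurrent requests increase.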
Then the autoscaler asks the deployment controller to adjust the number of pods. So we can see that the Knative platform provides zero-to-one pod creation as well as one-to-N replication. The next items are Intel SGX and Intel Amber. Intel SGX is a technology developed by Intel that provides a secure execution environment for applications, protecting them from both software and hardware attacks. It enables the creation of secure enclaves, which are isolated areas of memory where code and data can be processed without being visible to other parts of the system. This helps ensure that sensitive data remains confidential and that the integrity of the code is preserved. Intel Amber is a service that aims to provide a framework for developers to build confidential computing applications using Intel SGX technology. It provides remote attestation, which allows applications to verify the identity of the remote party they are communicating with, helping to prevent attacks by malicious actors. Overall, Intel Amber is a valuable resource for developers looking to build secure confidential computing applications on Intel SGX. Here is a short introduction to HTTPA, which we use as the communication protocol in our solution. HTTPA stands for HTTP-Attestable: it defines an HTTP extension for remote attestation, secret provisioning, and private data transmission. Because it is HTTP-based, it works at layer 7. The difference between HTTPS and HTTPA is that HTTPA requires attestation before establishing the secure tunnel: the caller must attest that the callee is running in an SGX enclave before sending data. In addition, mutual attestation is also supported. Here is an overview of the whole solution. There are several projects involved. At the bottom is Kubernetes, which provides the basic infrastructure.
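Before walking through the architecture, here is a toy model of the HTTPA idea just described: no application data is sent until the peer has produced a verifiable quote. Real HTTPA uses SGX quotes checked by an attestation service such as Amber; in this sketch the quote is just an HMAC over a fresh nonce, standing in for the hardware evidence, and all names are placeholders rather than a real API.

```python
import hashlib, hmac, os

# Toy model of HTTPA's attest-before-send rule. The HMAC "quote" below is a
# placeholder for a real SGX quote verified by an attestation service.

ATTESTATION_KEY = os.urandom(32)  # stands in for the attestation root of trust

class EnclaveServer:
    def get_quote(self, nonce: bytes) -> bytes:
        # A real enclave would return an SGX quote binding its measurement
        # to the nonce; the HMAC here is only a stand-in.
        return hmac.new(ATTESTATION_KEY, nonce, hashlib.sha256).digest()

    def handle(self, payload: bytes) -> bytes:
        return b"processed:" + payload

class Client:
    def call(self, server: EnclaveServer, payload: bytes) -> bytes:
        nonce = os.urandom(16)
        quote = server.get_quote(nonce)                      # 1. request quote
        expected = hmac.new(ATTESTATION_KEY, nonce, hashlib.sha256).digest()
        if not hmac.compare_digest(quote, expected):         # 2. verify quote
            raise RuntimeError("attestation failed: peer is not trusted")
        return server.handle(payload)                        # 3. then send data

print(Client().call(EnclaveServer(), b"hello"))  # b'processed:hello'
```

The key point the sketch shows is the ordering: verification of the execution environment happens strictly before any private data crosses the wire, which is exactly what distinguishes HTTPA from plain HTTPS.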
On the bottom right is the Intel SGX device plugin, one of the Intel device plugins implemented for Kubernetes. It manages the SGX resources on the host and reports them to Kubernetes. On top of Kubernetes there is Knative, and our secure function runs as a Knative service at this level. In addition, the function runs inside an Intel SGX enclave to provide security, and the user's handler function is built and distributed as a Wasm module. All the traffic is based on the HTTPA protocol, where the Amber service provides remote attestation to ensure that the Knative service is running in a trusted execution environment. You may feel a little confused now; we will explain the purpose of each project and component and how they work together to realize the whole solution. With that, I'd like to hand over to my colleague Liang to introduce the remaining content. Thank you. Hi, this is Liang from Intel. I'm going to continue the presentation with the solution details and the security analysis. The next part is about challenges and solutions, and I'm going to start with three use cases. The first use case involves a company that has set up its own FaaS platform to run its proprietary algorithms on sensitive data. To ensure the security and privacy of this data, they have implemented SGX enclaves to protect the code and the data being processed. One of their main concerns is the risk of attack from malicious insiders who may have access to the platform. Another concern is the risk of operators who may inadvertently or intentionally cause damage to the functions or to the whole platform. The second use case involves a startup that is using a public FaaS platform to run its private algorithms on sensitive data. To ensure the security and privacy of their data, they can implement SGX enclaves to protect the code and data being processed.
Still, their main concern is the possibility that the cloud service provider (CSP) may have access to the algorithm and the data, which could compromise their IP and competitive advantage. Another concern is other users of the same FaaS platform, who may try to access their data or disrupt their functions. The third use case is a developer who wants to use a third-party image-processing function to handle personal images, again on the same public platform. To ensure the security and privacy of the personal data, they can run the algorithm in SGX enclaves that protect the code and the data being processed. One of this developer's main concerns is the risk of data leakage, either from the third-party function or from the CSP hosting it. So, overall, all three use cases have additional security concerns beyond using SGX or a TEE as the execution environment. First, they have to trust the CSP and the other users or services on the same platform. Second, they have to trust the algorithm authors. If we want to achieve full protection, we must attest more than we can today. To summarize, we itemized the key trust and confidentiality requirements raised in these use cases and classified them into two categories. "Function provider" means the function's author; on this side we need the privacy of the source code, the key stored in the enclave, and the function running in the enclave. The second category is the function consumer: the privacy of the request data, the data being processed in the enclave, the function executing without any changes, and, last, the privacy of the response data. And this is something I want you to notice: the key here is not just for attestation of the enclave; we also use it to decrypt the private data and the functions. I will explain this later.
And the response data should be encrypted end to end; that's what we want. This image maps all the security-related requirements onto the FaaS workflow, together with the other items we want to protect from the previous slides; the green parts are the secured parts. It's worth noting that invoking and creating a function are our main targets to protect; we don't cover Knative Eventing for now. One of the challenges facing FaaS platforms today is the requirement for multiple compilers and runtimes to support a variety of programming languages. This can pose a significant burden on platform operators. To address this challenge, we propose a cross-platform, cross-language distribution format for FaaS platforms. By providing a standard format for code distribution, it reduces the need for multiple compilers and runtimes and makes it easier for FaaS platforms to support a wide range of programming languages. The new distribution format will not only simplify the maintenance of FaaS platforms but also improve the developer experience, enabling developers to use their preferred programming language without worrying about compatibility issues. This is why, in this part, we propose Wasm. Another concern: HTTPS is currently the most widely used security protocol for communication over the internet, but we have identified a flaw. In the current deployment of HTTPS on gateway systems, specifically on ingress, the messages are decrypted at the gateways. This means that services behind the gateways can see all the information that is supposed to be protected, which poses a significant risk to the privacy and security of sensitive data transmitted over the internet.
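The gateway problem just described can be illustrated with a toy model. XOR with a hash-derived pad stands in for real TLS record encryption here; the point is only the trust boundary, not the cryptography:

```python
import hashlib

# Toy illustration of TLS termination at a gateway: the gateway decrypts the
# client's traffic before forwarding it, so it observes the plaintext.
# (XOR with a hash-derived pad is a stand-in for real TLS encryption.)

def xor_pad(key: bytes, data: bytes) -> bytes:
    pad = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, pad))

client_gateway_key = b"client<->gateway"    # first TLS hop
gateway_backend_key = b"gateway<->backend"  # second TLS hop

request = b"ssn=123-45-6789"
on_the_wire = xor_pad(client_gateway_key, request)

# The gateway terminates the first hop...
seen_by_gateway = xor_pad(client_gateway_key, on_the_wire)
print(seen_by_gateway)  # b'ssn=123-45-6789' -- plaintext visible at the gateway

# ...then re-encrypts toward the backend service.
forwarded = xor_pad(gateway_backend_key, seen_by_gateway)
```

With end-to-end protection as in HTTPA, the payload would instead be encrypted under a key known only to the enclave, so `seen_by_gateway` would remain ciphertext.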
To address this challenge, we propose a new approach to secure communication that does not rely on decryption of the message at the gateways. By using end-to-end encryption, we can ensure that sensitive data remains protected throughout the transmission process and is only decrypted at the final destination. This approach will not only improve the privacy and security of sensitive data but also provide a more efficient and scalable solution for secure communication over the internet. The next section introduces the architecture and components of the solution. In this slide, I would like to highlight a challenge specific to serverless platforms. As we explained on the previous slides, our solution requires an encrypted Wasm module as the user's handler function, and the serverless platform may create more Knative service instances in order to do autoscaling. However, the function provider and the consumer are not aware of the autoscaling, so the platform needs a place to store and distribute the user-provided key by itself. This is why we add a new service, called the secure store, to the Knative framework. It aims to store the user's private key in SGX and distribute the key via the HTTPA protocol whenever a new Knative service instance is created. This is how the secure store is located in our design: we added it as a control-plane service of the serverless platform. The service monitors Knative service resource updates. Once a new secure Knative service is created, the secure store finds the corresponding private key and prepares to send it to the Knative service. Before sending the key, the secure store first attests the Knative service via HTTPA to make sure it is running inside an SGX enclave. If the attestation succeeds, the secure store sends the key to the Knative service. In this way, we solve the serverless-specific challenge here.
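The secure store's key-release logic can be sketched as follows. In the real design it watches Kubernetes for new service instances and attests them over HTTPA via Amber; in this hypothetical sketch those mechanisms are reduced to an `attest_ok` flag, and all names are placeholders rather than a real API:

```python
from typing import Optional

# Hypothetical sketch of the secure store: hold provider keys and release a
# key to a new function instance only after its attestation has succeeded.

class SecureStore:
    def __init__(self) -> None:
        self.keys: dict = {}  # function name -> provider-supplied private key

    def register_key(self, function: str, key: bytes) -> None:
        # Provider step: deposit the module-decryption key (kept inside the
        # secure store's own enclave in the real design).
        self.keys[function] = key

    def on_new_instance(self, function: str, attest_ok: bool) -> Optional[bytes]:
        # Called when a new secure function instance appears in the cluster.
        if function not in self.keys or not attest_ok:
            return None  # unknown function, or attestation failed: no key
        return self.keys[function]  # pushed over the HTTPA tunnel

store = SecureStore()
store.register_key("image-resize", b"k" * 32)
print(store.on_new_instance("image-resize", attest_ok=True) == b"k" * 32)  # True
print(store.on_new_instance("image-resize", attest_ok=False))              # None
```

The important property is the second branch: a replica that fails attestation never receives the key, so autoscaled instances are only usable if they are genuinely running inside an enclave.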
As a summary, our design proposes the following three points. The first is to provide a secure store to hold the private key. The second is to create a secure user-defined handler function. The last is to access the secure function and the secure store over an end-to-end HTTPA secure tunnel. Here is a detailed view of our solution, and we can go through it step by step. First, the function provider, also called the function author, writes its own secure user-defined handler function, packages it into a Wasm module, signs and encrypts it, and uploads it to the Knative platform. In the second step, the function provider attests the secure store service via the Amber service, to make sure the secure store itself is running in a trusted environment. In the third step, the function provider stores the private key in the secure store; this key will be used later to decrypt the Wasm module. With that, the function provider's three steps are done. The next steps are on the function consumer's side: the consumer invokes the function via HTTPA. In this step, Knative creates at least one instance of the secure user-defined function; if the traffic is heavy, Knative's autoscaler handles the creation of new replicas to deal with it. Once traffic arrives at the secure function, the consumer attests the secure function with the help of Amber, to make sure the user-defined function is the authorized one. Then the secure store notices that new secure function instances have been created in the cluster, attests them, and pushes the private key. In the very last step, the secure function decrypts the Wasm module and starts serving. From now on, user requests are sent to the Wasm module and processed there directly.
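The provider's first step (sign and encrypt the Wasm module) and the enclave's last step (verify and decrypt it) can be sketched like this. Real deployments would use an authenticated cipher such as AES-GCM and a proper signature scheme; the XOR keystream and HMAC below are dependency-free stand-ins for illustration only:

```python
import hashlib, hmac, os
from itertools import count

# Toy sketch of module packaging: encrypt the Wasm bytes, sign the ciphertext,
# upload both; the enclave later verifies the signature and decrypts once the
# secure store has pushed the key. Stand-in crypto, not production-grade.

def keystream(key: bytes, n: int) -> bytes:
    out = b""
    for i in count():
        if len(out) >= n:
            return out[:n]
        out += hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR stream cipher: applying it twice with the same key decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def package_module(wasm: bytes, enc_key: bytes, sign_key: bytes):
    ciphertext = encrypt(enc_key, wasm)
    signature = hmac.new(sign_key, ciphertext, hashlib.sha256).digest()
    return ciphertext, signature  # what gets uploaded to the platform

def unpack_module(ciphertext, signature, enc_key, sign_key):
    # Runs inside the enclave after the secure store pushes enc_key.
    if not hmac.compare_digest(
            signature, hmac.new(sign_key, ciphertext, hashlib.sha256).digest()):
        raise RuntimeError("module signature check failed")
    return encrypt(enc_key, ciphertext)

wasm = b"\x00asm...module bytes..."
ek, sk = os.urandom(32), os.urandom(32)
ct, sig = package_module(wasm, ek, sk)
assert unpack_module(ct, sig, ek, sk) == wasm  # round-trips inside the enclave
```

Note that the plaintext module exists only before upload (at the provider) and after decryption (inside the enclave); the platform itself only ever stores ciphertext plus a signature.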
Then the consumer receives the response. This is the whole flow of how we make a serverless function secure and how we make all the data transmission secure end to end. Now let's take a closer look at what happens inside the user-defined function, which we call the secure function or secure service. This architecture has two main components: first, an SGX enclave; second, a Wasm module, which contains the user-defined handler function. Besides those two parts, there is an HTTPA server that handles all the communication. The function runs within an SGX enclave, a hardware-based secure execution environment that provides confidentiality and integrity. The function code is compiled into a Wasm module, which is cross-platform, cross-language, and widely used. The module is first encrypted and signed for security purposes and uploaded, with the key stored in the secure store as we mentioned before. When the function is invoked, the server receives the request and first establishes a secure connection with the gateway. The HTTPA server is responsible for generating quotes, authorizing requests, and receiving the secret key, which is used to decrypt the Wasm module. The server then dispatches incoming requests to the Wasm runtime, which executes the Wasm module and returns the response. Overall, this FaaS architecture provides a secure and efficient way to protect user-defined functions using a combination of hardware-based security, a low-level binary format that is cross-platform and cross-language, and a secure communication protocol. This is all for today.