Hello everyone. My name is Mingxuan Sun from Baidu Security. On behalf of the Teaclave community, today I will talk about our open-source universal secure computing platform written in Rust, Teaclave. I will start with the background and then briefly introduce the current status and some highlights of the Teaclave project. Then I will provide some details of the Teaclave internals. Lastly, we will talk about how to get involved and present the Teaclave community. Okay, let's get started.

Emerging technologies like big data analytics, machine learning, cloud and edge computing, and blockchain are driving significant progress in our society, but they are also bringing confidentiality and security issues. On public clouds and blockchains, sensitive data like health and financial records may be exploited at runtime by untrusted computing processes running on compromised platforms. During in-house data exchange, confidential information may cross different clearance boundaries and possibly fall into the wrong hands, not to mention the privacy issues arising in offshore data supply chains. And beyond data privacy, the models and algorithms themselves also need to be well protected: once they are leaked, attackers can steal intellectual property or launch white-box attacks that easily exploit the weaknesses of the models. Facing all these risky scenarios, we are in desperate need of a trusted and secure mechanism that enables us to protect both private data and computing models throughout execution in potentially unsafe environments, while preserving functionality, performance, compatibility, and flexibility. As illustrated in this figure, secure computing provides a trusted and secure execution environment that redefines the big data business model: even if the data and the model originate from different parties with no mutual trust, confidentiality and integrity can still be effectively protected.
Moreover, it significantly reduces the trusted computing base and makes the whole stack easily auditable and verifiable. Secure computing, also called confidential computing, provides a secure and safe place for multiple parties to compute on sensitive data. A trusted execution environment, or TEE, is one of the technologies for secure computing. It provides hardware-based isolation, memory encryption, and attestation. For example, Intel SGX, ARM TrustZone, and AMD SEV are TEE implementations by different vendors. In this model, developers need to separate the program into two parts, one in the untrusted world and another in the secure world, or trusted world, and process sensitive data within the TEE with all the security guarantees the TEE provides. Right now, service providers like Microsoft Azure, Google Cloud, and IBM Cloud have already provided TEE VM products on their clouds.

The goal of Teaclave is to create a framework or platform that allows programmers to concentrate on business logic and protects their code and data, without worrying about technical details or TEE development. Programmers or users only need to focus on sensitive data, business logic, and the interface between users and the platform. The platform manages the data and executes the business logic in TEE computing units, deployed as a distributed system.

So, when implementing Teaclave, we had several requirements for the programming language in mind. The first one is memory safety. As we all know, a memory safety issue in ordinary application development can cause dangerous damage, but a memory safety issue in a trusted execution environment can break all the security guarantees provided by the hardware. Historically, we have seen that memory safety vulnerabilities in TEEs written in C and C++ can lead to sensitive data leakage. Another property we want to achieve is efficiency.
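The two-world split described above can be sketched in plain Rust. This is an illustrative toy only, not real SGX or TrustZone code (which would go through a vendor SDK); the `trusted` module and `ecall_sum` name are hypothetical, but they show the key idea: secret data lives only behind a narrow trusted interface, and the untrusted world can only obtain results, never raw data.

```rust
// Toy sketch of a trusted/untrusted world boundary (hypothetical names,
// not the SGX SDK): secrets are private to the "trusted" module and the
// untrusted caller only sees the narrow entry point's result.
mod trusted {
    pub struct Enclave {
        // Private field: the untrusted world cannot reach this directly.
        secret_records: Vec<u64>,
    }

    impl Enclave {
        pub fn provision(records: Vec<u64>) -> Self {
            Enclave { secret_records: records }
        }

        // The only entry point, analogous to an ECALL: it returns an
        // aggregate, never the raw records.
        pub fn ecall_sum(&self) -> u64 {
            self.secret_records.iter().sum()
        }
    }
}

fn main() {
    // Untrusted world: schedules the computation, sees only the result.
    let enclave = trusted::Enclave::provision(vec![3, 5, 7]);
    println!("{}", enclave.ecall_sum()); // prints 15
}
```

In a real TEE the boundary is enforced by hardware isolation and memory encryption rather than module privacy, but the programming discipline of a minimal, auditable interface is the same.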
Because a TEE is a resource-constrained environment with limited memory and I/O capabilities, we need a minimal runtime, and for security we also want a deterministic runtime, to ensure the confidentiality of private data and the integrity of code. These security properties can also be remotely attested by any user. That's why we chose Rust. Rust has a strong type system to guarantee the memory safety of a program. It can be statically compiled and has a small runtime. The Rust ecosystem is ready for cloud computing and has many third-party libraries for RPC and other cloud computing capabilities. The Rust community is also very healthy and strong, which supports our development.

Then let me summarize Apache Teaclave. It is an open-source universal secure computing platform written in Rust that makes computation on privacy-sensitive data safe and simple. The project was originally developed at Baidu and was known as MesaTEE. It was open sourced in July 2019. We then donated the project, along with the Rust SGX SDK, to the Apache Software Foundation in August 2019 and changed the project name to Teaclave. In 2021, this year, we also donated the Rust OP-TEE TrustZone SDK to Teaclave as a sub-project. Right now, with Teaclave you can write TEE applications for both Intel SGX and ARM TrustZone. Currently, Teaclave is under the Apache Incubator and is developed as open source in the Apache way. I will introduce some highlights next; you can visit our homepage and the repositories to learn more.

Teaclave has four basic highlights. The first one is functionality, to give a convenient interface to any user. Teaclave is provided as a function-as-a-service platform with many built-in functions. It supports tasks like machine learning, private set intersection, crypto computation, et cetera. In addition, developers can also deploy and execute Python scripts in Teaclave.
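One concrete reason Rust fits a TEE, beyond the absence of use-after-free and buffer overflows, is deterministic destruction: ownership guarantees that destructors run at a known point, so key material can be scrubbed the moment it goes out of scope. The sketch below is a minimal, generic illustration of that pattern (not Teaclave code); the `SessionKey` type and `scrub` method are hypothetical names.

```rust
// Illustration of deterministic cleanup via ownership (hypothetical type,
// not Teaclave code): the Drop impl scrubs key bytes exactly when the
// value's ownership ends, with no garbage-collector uncertainty.
struct SessionKey {
    bytes: Vec<u8>,
}

impl SessionKey {
    // Overwrite the key material in place.
    fn scrub(&mut self) {
        for b in self.bytes.iter_mut() {
            *b = 0;
        }
    }
}

impl Drop for SessionKey {
    fn drop(&mut self) {
        // Runs deterministically at end of scope.
        self.scrub();
    }
}

fn main() {
    let mut key = SessionKey { bytes: vec![0xAB; 16] };
    key.scrub();
    assert!(key.bytes.iter().all(|&b| b == 0));
    println!("key scrubbed");
} // `key` is dropped here; Drop would scrub it again automatically.
```

(Production code would use a vetted zeroization crate to defeat compiler optimizations; this sketch only shows the ownership mechanism.)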
More importantly, unlike traditional function-as-a-service platforms, Teaclave supports both general secure computing tasks and flexible single- and multi-party secure computation. For security, we adopt multiple security technologies to enable secure computing. In particular, Teaclave uses Intel SGX and ARM TrustZone to serve the most security-sensitive tasks, with hardware-based isolation, memory encryption, and attestation. Also, Teaclave is written in Rust to prevent memory safety issues. For usability, Teaclave builds its components in containers; therefore, it can be deployed both locally and within cloud infrastructures. Teaclave also provides convenient endpoint APIs, client SDKs in a lot of languages, and command-line tools. And last, modularity. Components in Teaclave are designed to be modular, and some components, like remote attestation, can be easily embedded in other projects. In addition, the Teaclave SGX SDK and TrustZone SDK can be used separately to write standalone SGX enclaves and TrustZone applications for other purposes.

Since Teaclave is a function-as-a-service platform, you certainly need to think about functions, business logic, and participants. Once a client or user has determined these three factors, they can follow these steps to execute tasks on sensitive data in SGX, just like on a normal function-as-a-service platform. First, you register data and functions with the platform, and then create and run a task. At last, you can get the execution results from the platform. The APIs are pretty easy; we provide C, Rust, Python, and even Swift client SDKs. Currently, the services are implemented in SGX enclaves and written in Rust. We have several services, divided into front-end, core services, and workers: the authentication service, front-end service, storage service, management service, scheduler service, access control service, and execution services.
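The three-step client flow above (register data and functions, create and run a task, fetch the result) can be mimicked with a toy in-memory stand-in for the platform. All names here (`Platform`, `register_data`, `create_task`, and so on) are hypothetical illustrations, not the real Teaclave client SDK; the real platform runs the function inside an SGX enclave and talks to clients over RPC.

```rust
use std::collections::HashMap;

// Toy in-memory stand-in for the function-as-a-service flow
// (hypothetical API, not the Teaclave SDK).
struct Platform {
    data: HashMap<String, Vec<i64>>,
    functions: HashMap<String, fn(&[i64]) -> i64>,
    results: HashMap<u32, i64>,
    next_task: u32,
}

impl Platform {
    fn new() -> Self {
        Platform {
            data: HashMap::new(),
            functions: HashMap::new(),
            results: HashMap::new(),
            next_task: 0,
        }
    }
    // Step 1: register data and functions with the platform.
    fn register_data(&mut self, id: &str, d: Vec<i64>) {
        self.data.insert(id.into(), d);
    }
    fn register_function(&mut self, id: &str, f: fn(&[i64]) -> i64) {
        self.functions.insert(id.into(), f);
    }
    // Step 2: create and run a task. (The real platform authorizes the
    // task and schedules it into an enclave worker instead.)
    fn create_task(&mut self, func_id: &str, data_id: &str) -> u32 {
        let f = self.functions[func_id];
        let out = f(&self.data[data_id]);
        let tid = self.next_task;
        self.next_task += 1;
        self.results.insert(tid, out);
        tid
    }
    // Step 3: fetch the execution result.
    fn get_result(&self, task: u32) -> i64 {
        self.results[&task]
    }
}

fn main() {
    let mut p = Platform::new();
    p.register_data("salaries", vec![100, 200, 300]);
    p.register_function("sum", |xs| xs.iter().sum());
    let task = p.create_task("sum", "salaries");
    println!("{}", p.get_result(task)); // prints 600
}
```

The point of the sketch is the shape of the workflow: data and functions are first-class registered objects with identifiers, and a task binds a function to data and yields a result handle.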
We separate the platform into three domains to manage the data and make sure sensitive data can only move around inside one domain. Services communicate with RPCs, and here are some interfaces defined in protobuf. I will skip the details here, but if you are interested in the communication interfaces, you can see the protobuf definitions.

So here is a brief introduction of the interfaces between the services. Basically, clients first authenticate with their ID and credential to get a session key. This session key will be used later to communicate with the front-end service. Clients then register data and functions if needed, and approve or invoke tasks. Clients can also get information about functions and tasks. The front-end service will forward all valid requests to the management service. The management service gets authorization for data and function usage and task invocation, and then persists functions, data, and tasks into the database in the storage service. The scheduler service will fetch any functions, data, and tasks that need to be executed from the queue and dispatch tasks to the execution services. The execution services and the scheduler service use a subscribe-and-pull model, so an execution service can get a task and execute it. After executing the function, the result will be updated and persisted in the storage service, so clients can get the result later. That is the simple set of interfaces between the different services. As you can see, the interfaces between services are pretty simple, and they are designed to be cloud-friendly: you can deploy these services in Docker containers on your cloud infrastructure.

To get started with Teaclave, we provide extensive documentation, including how to try the built-in functions, how to write functions in Python, and how to add built-in functions written in Rust.
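The scheduler-to-executor hand-off described above can be sketched with a shared queue. This is a deliberately simplified, hypothetical model: the real services subscribe and pull over RPC with persistence in the storage service, whereas here a worker thread simply drains an in-process queue of task IDs.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;

// Simplified sketch of the subscribe-and-pull model (hypothetical, not the
// real RPC-based services): the scheduler pushes staged task IDs into a
// shared queue, and an execution worker pulls and runs them.
fn main() {
    let queue: Arc<Mutex<VecDeque<u32>>> = Arc::new(Mutex::new(VecDeque::new()));

    // Scheduler side: enqueue tasks that are ready to run.
    for task_id in 0..3 {
        queue.lock().unwrap().push_back(task_id);
    }

    // Executor side: a worker thread pulls tasks until the queue drains.
    let worker_queue = Arc::clone(&queue);
    let worker = thread::spawn(move || {
        let mut executed = Vec::new();
        while let Some(task_id) = worker_queue.lock().unwrap().pop_front() {
            // Here the real worker would run the function in an enclave
            // and persist the result via the storage service.
            executed.push(task_id);
        }
        executed
    });

    let executed = worker.join().unwrap();
    println!("{:?}", executed); // prints [0, 1, 2]
}
```

Pulling (rather than pushing into workers) lets each execution service take work only when it has capacity, which is why the talk describes the scheduler and executors as using a subscribe-and-pull model.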
We also provide documentation that describes our designs and implementations in detail, like our threat model, how mutual attestation works in our platform, the access control module, our build system for Rust, and some other internals of the implementation of the Teaclave services. And if you want to read the code, each directory in the codebase also has a README to help get you through the code.

So at last, let me give an overview of the Teaclave community. Since Teaclave is a huge project with multiple layers, many users can get involved in the community. Platform users can use the Teaclave platform directly, for example by deploying the system in their private infrastructure. Some other users may prefer to use only one standalone service, for example the storage service, the execution service, and so on. Also, a lot of projects use our attestation implementation. And of course, some users directly use the Rust SGX SDK and TrustZone SDK to build their own applications. We are pretty open to different users and their various needs. The Teaclave community has also supported many other projects, like commercial products, academic research projects, and some other open-source projects. On our homepage, there is a page called "Powered By" in the community section, where you can see organizations like Baidu, and projects like Advanca, Anonify, Crypto.com Chain, Occlum, Enigma, and so on; I will not try to list them all. They are all using the Teaclave platform or the Rust SGX SDK and TrustZone SDK libraries.

Overall, we encourage all people in the Rust community and in the SGX and TEE community to come and get involved. At last, I want to share more information on Teaclave. If you want to follow our latest news, please join us on the mailing list. Right now, we have a monthly virtual meetup on Zoom, so follow our mailing list to see the schedule. We also invite speakers to talk about topics in Rust and Teaclave.
You can also visit our homepage to see the documentation, the projects powered by Teaclave, and some tutorials. You can also follow us on Twitter and check out our code. And at last, we always call for contributions and contributors. So thanks. One last thing: we just announced the Teaclave TrustZone SDK, so besides the SGX SDK, you can also write TrustZone applications. Please check it out on our homepage. Okay. Thank you so much.