Hi, good morning. This is Fupan Li from Ant Group. Actually, my co-speaker, Jiyuan Ma, should be here as well. No, Jiyuan Ma is not Alibaba's Jack Ma; he has some issues today, so he couldn't come, and I will give the talk alone. Okay, let's begin.

First, the outline of this talk. I will start with what Kata Containers is. Yesterday, Peng Tao already gave many more details about Kata Containers, so today I will only give a short description. Second, I will talk about what a service mesh is. Let me do a simple poll: how many people are familiar with service mesh? Please raise your hands. Oh, thank you, great. Third, I will talk about the Kata Containers security threat model and where the service mesh causes problems for that threat model. Then I will talk about the direction in which service mesh development is heading, the sidecar-less service mesh, and how Kata Containers can work with it. Last, I will show a simple demo.

So, what is Kata Containers? As we know, a traditional container, the classic container, uses the Linux kernel's isolation features: namespaces such as the PID namespace, the IPC namespace, the network namespace, the mount namespace, and so on, to isolate resources. It then uses cgroups to constrain the resources a workload uses, such as CPU and memory. But in this case, all of the workloads in traditional containers share the same kernel on the node. So if one workload triggers some critical kernel syscall that takes a kernel lock, it can block the other workloads. Traditional containers therefore have the problem that workloads may influence each other. Kata Containers uses a VM to reduce this influence between workloads: it puts each workload inside a VM.
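To make the isolation building blocks above concrete, here is a minimal Python sketch of my own (not Kata code; it assumes a Linux host) that lists the namespaces the current process belongs to. Two processes are isolated for a given resource exactly when the corresponding ids differ, yet all of them still share the one kernel that exposes these files:

```python
import os

def current_namespaces(proc="/proc/self/ns"):
    """Map namespace type -> namespace id for this process.

    Each entry under /proc/self/ns is a symlink like "net:[4026531840]";
    processes that show the same id share that namespace.
    """
    return {name: os.readlink(os.path.join(proc, name))
            for name in os.listdir(proc)}

if __name__ == "__main__":
    for ns_type, ns_id in sorted(current_namespaces().items()):
        print(f"{ns_type:18s} {ns_id}")
```

Running this inside a container and on the host side by side shows different ids for net, mnt, and so on, while the kernel version stays identical, which is exactly the shared-kernel property Kata Containers removes.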
Thus the workload runs inside the VM and does not share the kernel with the node. Even if some untrusted workload misbehaves, triggers an OOM event, or causes some other critical issue, it won't influence its neighbors. So the big difference between Kata Containers and traditional containers is that Kata Containers can embrace untrusted workloads.

Now, let me talk about what a service mesh is, with only a brief description. As we all know, before the service mesh, developers writing their programs had to deal with the network themselves: data encryption and decryption, service discovery, network failure retries, and so on. All of those functions were combined with the business logic, all in one process. But there is a problem with this approach: if there is a bug in the network handling, the program has to be rebuilt. So the first service mesh generation split that logic out of the business code and separated it into a library, so that most programs could share the same library. But there is also an issue with this: the library is language-specific. Maybe one program is written in C, another in Java, another in Go, so a specific SDK has to be prepared for every language, and maintaining SDKs for so many languages takes a lot of effort.

So the service mesh evolved into the next generation, the sidecar generation. To avoid these issues, the SDK code was split into a separate, dedicated process, and the workload communicates with the mesh sidecar through the traditional network socket. The program behaves just as it did before, accessing its services through an AF_INET socket.
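The transparency of the sidecar arrangement can be pictured with a toy Python sketch of my own (localhost sockets stand in for the pod network; a real mesh uses iptables redirection rather than an explicit proxy address): a tiny "sidecar" relays bytes between a client and the real backend, and the client still gets the backend's answer.

```python
import socket
import threading

def echo_server(listen_sock):
    """A stand-in 'application' that echoes whatever it receives."""
    conn, _ = listen_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

def sidecar_proxy(listen_sock, backend_addr):
    """A minimal relay: forward the request to the backend and relay the
    response back, the way a mesh sidecar relays traffic after iptables
    has redirected the connection to it."""
    conn, _ = listen_sock.accept()
    with conn, socket.create_connection(backend_addr) as upstream:
        upstream.sendall(conn.recv(1024))
        conn.sendall(upstream.recv(1024))

def roundtrip(payload=b"hello mesh"):
    backend = socket.socket()
    backend.bind(("127.0.0.1", 0))
    backend.listen(1)
    proxy = socket.socket()
    proxy.bind(("127.0.0.1", 0))
    proxy.listen(1)
    threading.Thread(target=echo_server, args=(backend,), daemon=True).start()
    threading.Thread(target=sidecar_proxy,
                     args=(proxy, backend.getsockname()), daemon=True).start()
    # The client only talks to the proxy, yet receives the backend's reply.
    with socket.create_connection(proxy.getsockname()) as client:
        client.sendall(payload)
        return client.recv(1024)

if __name__ == "__main__":
    print(roundtrip())
```

In a real mesh the application keeps dialing the original service address and the redirect rules deliver the connection to the proxy; this sketch only shows the relay half of that story.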
The service mesh sidecar injects some iptables rules to hijack the network flow into the service mesh, so this sidecar service mesh is transparent to the workloads. In addition to service discovery, resilient connectivity, and identity-based security, the communities also added other features to the sidecar service mesh, such as observability, tracing of events, and even some layer 7 traffic management. This is how the service mesh was enriched.

So much for the service mesh itself; let's talk about where the sidecar service mesh brings problems to Kata Containers. As I said before, Kata Containers embraces untrusted workloads inside a guest. But the service mesh sidecar is also put into the guest. The service mesh is infrastructure; it does not belong to the workloads. The sidecar often holds security keys and certificates in its memory, and this is secret data. An untrusted workload could break out of the traditional container boundary and escape, and then it could watch, steal, or carry off the certificates and secret keys from the sidecar. If malicious code or a malicious user obtains those certificates and keys, they can attack the entire service mesh. So the sidecar breaks the Kata Containers security threat model.

To fix this, we want to move the sidecar out of the sandbox, out of the guest. Then even if malicious code or malicious users break out of the traditional container boundary, they cannot steal the mesh sidecar's private keys or certificates, because we consider the VM boundary to be strong and assume an ordinary tenant cannot break out of the VM. So if we move the sidecar out of the guest, how do the application and the service mesh communicate?
The left side shows the traditional mode, where the sidecar is in the guest. The application container works as normal: it performs ordinary AF_INET socket operations and sends data through the socket into the network stack in the guest. The service mesh sidecar has injected iptables rules which hijack this network flow and redirect the data from the container to the service mesh. The sidecar picks up the data, does its processing, such as service discovery, load balancing, encryption and decryption, and so on, and then sends the data out over the network.

But if we move the service mesh out of the guest, how does the data travel from the container to the service mesh sidecar? Since we moved the sidecar out of the sandbox, out of the guest, we also moved the network out of the guest; the network now lives only in the pod network namespace on the host. We proposed a solution to fix this: we use TSI, transparent socket impersonation, which was developed by the libkrun project. What is TSI? TSI is embedded in the guest kernel and hijacks the standard AF_INET socket operations, such as creating a new socket, binding a network address, listen, accept, and so on. TSI hijacks these syscalls and transforms the traditional socket operations into vsock operations. The application container behaves just as before and simply calls the traditional socket syscalls, but TSI in the guest kernel transforms those socket operations into vsock. The vsock backend in the VMM accepts the vsock operations, for example receiving data or accepting connections, and then directs the data into the host kernel's network stack.
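The translation step TSI performs can be modeled with a small Python sketch (my own illustration; the real implementation lives in the guest kernel and the VMM's vsock backend, and the control-message shape here is an assumption): an AF_INET destination is rewritten into a vsock endpoint, with the original destination carried along so the host side can complete the real connection.

```python
# Toy model of TSI's translation step (illustrative only): the guest
# application asks for an AF_INET connection, and the in-kernel shim
# rewrites it into a vsock endpoint so the request can leave a guest
# that has no network interface of its own.

VMADDR_CID_HOST = 2  # well-known vsock CID of the host, per the vsock ABI

def tsi_translate(inet_addr):
    """Map an (ip, port) endpoint to the (cid, port) vsock endpoint the
    shim would connect to; the original destination travels alongside so
    the host-side backend can dial the real target."""
    ip, port = inet_addr
    vsock_addr = (VMADDR_CID_HOST, port)
    control_msg = {"op": "connect", "orig_dst": ip, "orig_port": port}
    return vsock_addr, control_msg

if __name__ == "__main__":
    print(tsi_translate(("10.0.0.7", 8080)))
```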
At the same time, the service mesh sidecar on the host, which is no longer in the guest, also injects some iptables rules into the host network namespace. So the vsock backend sends the data into the host network stack, the iptables rules hijack the stream and redirect the data to the service mesh sidecar, and the sidecar processes the data and forwards it to the peer endpoint.

So is this the end of the story? Of course not. As I said before, the service mesh evolved from the single process to the SDK or library model, and then into the sidecar model. So in which direction will the service mesh evolve next? The service mesh community has realized that the sidecar model has many drawbacks: for every pod there is a sidecar accompanying it, and those sidecars cost a lot of resources, such as CPU, memory, and so on. So the community tried to cut down the number of sidecars, and the result is the sidecar-less service mesh.

There are two mainstream service mesh communities here: one is the Cilium service mesh and the other is Istio, and both have put forward sidecar-less solutions. The Cilium service mesh uses eBPF in the kernel to hijack the network data. They removed the sidecar and put eBPF programs into the kernel, and those eBPF programs hijack the traffic of all the workloads on the node, which greatly reduces the number of sidecars. The eBPF programs handle the L3/L4 network functions, such as load balancing and even some encryption, but for L7 operations, such as HTTP, HTTPS, and even the WebSocket protocol, they still use a per-node Envoy proxy.

The Istio community put forward another solution: they split the sidecar into two parts. One is the node-level ztunnel proxy; there is only one ztunnel proxy per node, the node "sidecar".
The traffic of all the workloads on the node is hijacked by the ztunnel proxy's iptables rules, or even by eBPF programs, which redirect the data flow from the applications to the ztunnel. The ztunnel does the L3/L4-level work, such as service discovery and even some encryption and decryption. If a service does not need L7 handling, the waypoint proxy is not involved; if a service wants L7 operations, the ztunnel sends the data on to the waypoint proxy, which itself runs as a service.

So if the service mesh has evolved into the sidecar-less service mesh, how does Kata work with this new model? As I said before, Kata uses TSI for the communication between the application and the sidecar, but now there is no sidecar, so how does the data flow? Take the Cilium sidecar-less service mesh, for example: there is no sidecar, but there are eBPF programs in the host kernel. The vsock backend receives the data and, just as before, sends it into the host kernel's network stack. This time the network flow is not hijacked by a sidecar's iptables rules; instead, it is hijacked by the eBPF programs. So this model is naturally compatible with the Cilium sidecar-less service mesh.

That covers Cilium; what about Istio's ambient sidecar-less service mesh? As I just said, the Istio sidecar-less mesh splits the sidecar into two parts, one of which is the ztunnel. How does the data flow from the application container to the ztunnel? If we still use TSI plus vsock, the data flow looks like this: the data leaves the container, TSI hijacks it into vsock, the vsock backend sends it into the host network stack, and at the same time the ztunnel process has installed its own iptables rules to hijack that traffic. So the data flow is hijacked by the ztunnel's iptables rules.
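The ambient split can be summarized as a per-flow routing decision, sketched here in Python (a toy model of my own, not ztunnel code; the policy table and service names are made up): the node proxy handles every flow at L4, and only destinations that need L7 policy detour through a waypoint.

```python
# Toy model of the ambient-mesh split (illustrative, not Istio code):
# the node-level ztunnel handles every flow at L4, and only flows whose
# destination needs L7 policy detour via a waypoint proxy.

# Hypothetical per-service policy table for this sketch.
NEEDS_L7 = {"reviews": True, "ratings": False}

def next_hops(dst_service):
    """Return the chain of hops a request takes toward its destination."""
    hops = ["ztunnel"]                  # always: mTLS + L4 load balancing
    if NEEDS_L7.get(dst_service, False):
        hops.append("waypoint")         # HTTP routing, retries, authz
    hops.append(dst_service)
    return hops

if __name__ == "__main__":
    print(next_hops("reviews"))   # detours through the waypoint
    print(next_hops("ratings"))   # goes straight to the backend
```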
The data can then reach the ztunnel process, but it passes through several hops: the guest kernel with TSI, then the VMM, then the host network stack, and finally the ztunnel. This adds network latency. So how can we reduce this latency? We can let the ztunnel talk over vsock directly, instead of going through the host network stack. The application data is then sent directly from the guest to the ztunnel without passing through the host network, which reduces the response latency drastically.

Let's do a summary of this topic. Kata Containers needs to remove the risk brought in by the service mesh sidecar, so we want to move the sidecar out of the guest; that is the Kata Containers side. At the same time, the service mesh has evolved its model from the SDK to the sidecar and now toward the sidecar-less trend. These two trends match each other nicely. By now, we have implemented the Kata plus TSI plus vsock model. The direct path, where the host-side ztunnel talks to the guest over vsock, still has to be worked out together with the Istio ambient community and is still under development; that is our future work.

So, next I will show a simple demo. Okay, I have logged into my test machine; this machine is in a Kubernetes cluster. I have prepared a service here, and the manifest is split into two parts. The upper one is the service definition, where we define a Kubernetes Service that listens on port 8888 and directs traffic to target port 80, selecting the nginx application. Next, the manifest defines the workload, the nginx server. This definition has some keywords: first, we specify that a sidecar should be injected.
So a sidecar will be injected into this service by Istio. Then we define the sidecar's runtime: it is runc; yes, it is not a Kata container, it is runc, so the sidecar will not run inside the guest. And we also define the workload's runtime class: it is rund, yes, Kata Containers' runtime. Let's launch it.

Okay, we have launched this service. Let me check. Okay, the service has been launched, and let me show how many containers are in this pod. Yes, there are two containers in the pod: one is nginx, and the other is the proxy injected by Istio. So how are those containers different? Let's first exec into the sidecar. Okay, now we are in the sidecar; let me show its kernel version. Yes, its kernel version is this one. And now let me show the nginx container's kernel version. Yes, its kernel version is this one. They are different, which means they are not running on the same kernel.

Okay, let me launch another container as an nginx client, and let's use the client to access the nginx server. Okay, the nginx client has been launched successfully, and let's exec into it. Now we are in the nginx client, so let's access the service. First, before we access the service, we have to find the service IP. Let me look it up. Okay, we can see the service IP. From inside the nginx client, let's access the service IP and its port. Yes, we can access nginx successfully through the service mesh.

Okay, that's all. Thank you. Are there any questions? No questions? Okay, thank you.
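For readers following along, the manifest described in the demo would look roughly like this (reconstructed from the narration; the label names, the sidecar-runtime annotation key, and the runtime class name are my assumptions and may differ from the exact file used in the talk):

```yaml
# Reconstructed sketch of the demo manifest; field names marked below
# as hypothetical are assumptions, not verified against the demo file.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 8888        # Service port exposed to clients
    targetPort: 80    # nginx container port
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        sidecar.istio.io/inject: "true"   # ask Istio to inject the sidecar
      annotations:
        # Hypothetical annotation: run the injected sidecar with runc,
        # outside the Kata guest, as described in the talk.
        io.kata-containers.sidecar-runtime: runc
    spec:
      runtimeClassName: rund              # Kata Containers runtime
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```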