Hi everyone. Welcome to the cloud native journey of shuttling applications back and forth across edges. This is Ruoyu and Qiang from Intel, and we are going to introduce to you a brilliant project on how to establish secure and reliable connections between applications that reside on different edges. I, Ruoyu, will first talk about the background and give a brief on the project architecture as well as three significant merits of the project. Then Qiang will show us a very concrete demo of how we use the project in an actual scenario.

We all know that applications and servers have now moved to the cloud. They are usually packaged in a cloud native way, as containers or services running on Kubernetes clusters. Benefiting from the characteristics of the cloud, applications can be distributed anywhere depending on the user's needs. They could be located in a traditional data center, on the public cloud, on edge clusters residing in a telecom branch office, or even on a local device. This dynamic deployment gives a lot of good to users, letting them reduce latency, leverage hardware from the cloud, or lower cost, but it also brings huge problems for operators.

So here are several challenges we may meet in networking. First, management complexity: since the applications are now distributed across different networks, managing them becomes a huge problem. Second, security risks: data now flows over many different paths, so there are more chances to face security threats such as data breach or data loss. Third, application performance and predictability: a client may need to go several hops to finally reach the application, and any trouble along the route will cause inconsistent application performance. Fourth, configuration conflicts: this is usually seen in edge scenarios such as private clouds, which are likely to set up identical subnets or IP addresses, making it hard for them to talk to each other. Fifth, monitoring and maintenance: since the applications are distributed everywhere, it is very hard for operators to maintain the whole flow.

Here I also list several solutions for addressing these challenges, such as SD-WAN, secure internet gateway, zero trust network access, and our project, SDEWAN. We can see that our project could be the most suitable solution.

So what is SDEWAN? It is an enhanced version of SD-WAN, and it is short for Software Defined Edge WAN. It is a project targeting reliable, secure, and fully functional network connections for applications that reside on different edges. The project itself consists of three components: the overlay controller, which manages configuration generation and distribution in the central cloud; the CRD controller, which utilizes Kubernetes custom resources to do the configuration setup; and the CNF, which includes multiple network functions, such as a firewall, to actually bring up the tunnels. These three components help set up the network connections that allow applications to talk to each other smoothly and securely, even when they reside on different edges.

I will then use three of its critical merits to elaborate more. First, ease of use. SDEWAN manages network configurations in an automated manner. Utilizing the overlay controller in the cloud, clusters residing in the same overlay can easily establish network connections with a simple registration to the overlay API; a sketch of the CRD-based configuration this drives is shown below.
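To make the CRD-based configuration concrete, here is a minimal sketch of what an SDEWAN custom resource could look like for setting up an IPsec tunnel toward a hub. This is illustrative only: the group, kind, and field names are assumptions modeled loosely on the upstream SDEWAN CRD controller samples, so consult the project repository for the exact schema. The idea is that the CRD controller watches resources like this and renders them into CNF configuration.

```yaml
# Illustrative sketch only: apiVersion, kind, and field names are
# assumptions based on SDEWAN CRD controller samples, not a verified schema.
apiVersion: batch.sdewan.akraino.org/v1alpha1
kind: IpsecSite
metadata:
  name: edge1-to-hub
  labels:
    sdewanPurpose: base            # label used to select the target CNF
spec:
  remote: 203.0.113.10             # hub CNF public IP (example address)
  authentication_method: psk       # pre-shared-key authentication
  pre_shared_key: <shared-secret>  # placeholder, supply a real secret
  crypto_proposal:
    - proposal-aes256-sha256       # hypothetical IpsecProposal CR name
  connections:
    - name: edge1-conn
      conn_type: tunnel
      mode: start
      remote_subnet: 10.96.0.0/12  # hub-side subnet reachable via tunnel
```

Because the configuration is just a Kubernetes resource, applying or removing it is a plain kubectl apply or delete, which is what makes the automated, registration-driven workflow possible.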
This greatly reduces the burden on operators when managing the connections. The built-in SLAs also help guarantee stable performance for the applications.

Second, security. As I mentioned before, security risks are a big concern for both users and operators. Using IPsec to set up tunnels in between guarantees secure data transfer. Key protection and crypto support are integrated inside to address physically insecure edges. The CNF also provides a fully functional firewall to do content filtering and meet common security needs.

Third, flexibility. All three components are atomic. They can be used together as an overall solution, or they can be used separately with other frameworks. This is mainly because they are based on Kubernetes custom resources: any framework that can adopt this kind of mechanism is able to use the CNF or the CRD controller separately. So speaking of flexibility, we do have an example. Let me hand over to Qiang for the demo.

Okay, let me show a simple case. In this case, the SDEWAN project is used to extend the Istio service mesh to provide load balancing across edges. This will demonstrate the details of cross-edge collaboration: the request comes from the client, traverses the hub, and reaches the application residing on the edge with proper load balancing strategies.

In this diagram, edge one and edge two are isolated Kubernetes clusters. The difference here is that the hub cluster comes with a public IP, while edge one and edge two reside in a private network with no public IP address. Every cluster contains at least one SDEWAN CNF to build up the external connection. The SDEWAN overlay controller generates and manages the configuration to set up IPsec tunnels between the CNFs in the edges and the hub for data transmission. So the client's request to the application will pass through the hub to the edge properly.

In this scenario, we integrated SDEWAN with the Istio service mesh. In our case, Istio is used to accomplish load balancing and to use mutual TLS for authentication. The sample hello application is used to expose a service hardened by mutual TLS, which returns a response including the cluster name. On the hub side, service entries are set up to have the edge applications involved in the hub mesh, and load balancing is done in the virtual service (see the configuration sketch after this section). An extra SDEWAN CNF is used on the hub side to expose the service.

The client request first comes into the SDEWAN CNF on the hub and is redirected to the Istio ingress gateway through DNAT. Then the request hits the configuration of the virtual service and selects one of the service entries; let's assume it chooses edge one. Next, the data is redirected to the CNF pod via the host route and passed to the edge one CNF through the IPsec tunnel. Then it hits the ingress gateway inside the edge, finally arrives at the application pod, and the response is returned. This flow involves two phases of configured mutual TLS authentication: the first is from the client to the hub Istio ingress gateway, and the second is from the hub service entry to the edge ingress gateway.

I have now described the whole architecture, so next let me show the demo. The left pane is a machine simulating the client that would like to access the application through the hub; the top right is the edge one cluster, and below it is the edge two cluster. We send several test requests from the client side. Now we can see the requests are almost equally distributed to the two edges.

That's all. Thanks for your attention. Hopefully you enjoyed our journey and can utilize our work in more scenarios.
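To make the hub-side setup concrete, here is a minimal sketch of the kind of Istio configuration the demo describes: a ServiceEntry that brings an edge application into the hub mesh, a DestinationRule that originates the hub-to-edge mutual TLS leg, and a VirtualService that load balances across the two edges. All hostnames, addresses, ports, and certificate paths below are illustrative assumptions, not the demo's actual values.

```yaml
# Illustrative sketch of the hub-side mesh configuration; hostnames,
# addresses, ports, and cert paths are assumptions, not the demo's values.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: helloworld-edge1
spec:
  hosts:
  - helloworld.edge1.example.com   # hypothetical host for the edge1 service
  location: MESH_EXTERNAL          # the edge app lives outside the hub mesh
  resolution: STATIC
  ports:
  - number: 443
    name: tls
    protocol: TLS
  endpoints:
  - address: 192.168.1.10          # edge1 ingress, reachable via the IPsec tunnel
---
# An analogous ServiceEntry (helloworld-edge2) points at the edge2 ingress.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloworld-edge1
spec:
  host: helloworld.edge1.example.com
  trafficPolicy:
    tls:
      mode: MUTUAL                 # second mutual TLS phase: hub -> edge
      clientCertificate: /etc/certs/client.pem
      privateKey: /etc/certs/client-key.pem
      caCertificates: /etc/certs/ca.pem
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld.example.com         # host the client requests through the hub
  gateways:
  - helloworld-gateway             # hub Istio ingress gateway (hypothetical name)
  http:
  - route:
    - destination:
        host: helloworld.edge1.example.com
      weight: 50                   # split traffic evenly across the two edges
    - destination:
        host: helloworld.edge2.example.com
      weight: 50
```

With equal weights, repeated client requests through the hub gateway should be split roughly 50/50 between the two edges, which matches the behavior seen in the demo.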