Hi everyone, I'm Yachen from Tencent. As a VP of Tencent Cloud and the general manager for the Tencent network, I'm mainly leading Tencent's network infrastructure, covering R&D, network construction, and network operations. So today I'm very happy to be here, and it's my honor to give you a very brief introduction to Tencent's network considerations for our distributed cloud and for edge computing.

First, a very quick introduction to Tencent. Tencent is an internet-based technology and culture enterprise. We started 20 years ago with social networks. Now we provide more than 200 applications in the app stores, covering games, social networks, music, video, news, esports, almost everything. We are also a cloud service provider. We started the cloud services a couple of years ago, first as a basis for our own social networks and then for outside business customers. Now Tencent Cloud is moving very fast, and we support not only WeChat, Tencent Video, QQ, Tencent News, and applications like these, but also our business partners. We provide IaaS, SaaS, and PaaS services to our business partners based on Tencent Cloud.

Here is a very high-level view of the Tencent Cloud infrastructure. We now have 27 regions and 70 availability zones globally, and I think more than 3,000 CDN and POP nodes around the world. On the server side, we already run more than 2 million servers to support our applications and our business partners. Our internet bandwidth is also growing very fast, because we are not only a social network service provider; we also carry a lot of content and gaming services. It's more than 300 Tbps of internet bandwidth at Tencent.
So, about the network, because today I would like to talk about Tencent's network infrastructure, especially for edge computing and the distributed cloud. In this picture, from the right side to the left side, you can see Tencent's very high-level network architecture. We have four different network layers: DCN, DCI, ECN, and EIN. DCN is our data center network; we use white-box switches based on SDN technologies, and it connects all the resources inside a data center. DCI is the overlay and underlay network connecting the data centers in different regions; we call it the data center interconnection network. Another one is ECN, the external connection network. We use this network to connect our enterprise customers' IDCs and our business partners' branch offices to our cloud, to ensure customers can access Tencent Cloud efficiently. And today I would like to talk about EIN, the edge interconnection network. We initiated this network solution two years ago. It provides edge-to-edge and edge-to-cloud interconnection and networking for the distributed cloud, and cloud-based 5G, for example a cloud-hosted 5G core or 5G UPF, can also be supported using the EIN.

So this is the Tencent distributed cloud solution, or infrastructure. Why do we call it distributed cloud? Two years ago we didn't have edge nodes; we only had data centers, which we call DCs. They are mainly located in big cities in China, like Beijing, Shanghai, and Shenzhen: huge cities with large data centers, roughly more than 10 of them, I think. Then three years ago we added the edge zones. An edge zone can provide network latency of less than 10 milliseconds, and they are located in the provincial capitals in China, roughly 30 edge zones, I think.
And two years ago, because of cloud gaming and new AI applications coming out, we needed more distributed edge nodes close to our customers, so we added the edge node. Now we have roughly more than 300 edge nodes located in almost all the cities in China. And there is another one we call the edge box. It's a very small, tiny device on the edge side, located in our customers' IDCs, so we can provide very low latency to our customers. So with this distributed cloud, we have computing resources almost everywhere.

For the network, this brings requirements. For example, we need a highly reliable network: a secure, reliable network to support edge-to-cloud interconnection for hundreds or even thousands of edge nodes and edge boxes. We also need very flexible traffic control, or flow scheduling, to control the traffic between the different nodes. And the network functions, our network solution, need to be affordable; that means cost efficiency. So those are the requirements, and they are very demanding requirements for the network.

We came up with two technologies, or two network solutions, for our distributed cloud. The first one, which we call version one or phase one, is the edge interconnection and acceleration network. The other one, version two or phase two, is a programmable, high-performance network for the distributed cloud. In phase one, the edge interconnection network uses the edge gateway: a distributed, software-based gateway deployed in the different edge nodes and edge boxes to support interconnection, encryption and decryption, and secure tunnels. Flows can also be scheduled very smartly across the different network links, because among these 300 Tencent Cloud edge nodes, not every node has a private link; some edge nodes are connected over the public internet.
We use different links from different service providers, or the public internet, so we need very flexible traffic control and scheduling among these 300 Tencent Cloud edge nodes.

As I mentioned, we deploy this edge gateway based on our software-defined router. It's a software-based NFV and SDN routing system. In this picture you can see we have the control plane, the routing plane, and the forwarding plane. In the forwarding plane, we are very happy to use VPP and DPDK from open source to build our forwarding plane on top of servers. In the control plane, a couple of years ago we used ODL (OpenDaylight); now we use a microservices-based design, still a software-based SDN architecture for our control plane. The advantage is very clear: this kind of software gateway is very flexible and easy to scale out, and we can support more than 10,000 BGP and BFD neighbors in this architecture.

But a software gateway also has a lot of limitations. For example, we need a lot of servers in our edge cloud to support this kind of software-based gateway: roughly about 30% of the servers at each edge site are consumed by the gateways, not by computing services for our customers. And as I mentioned, we need encryption and decryption, with many high-throughput secure connections at each edge site, and this encryption capacity is also limited by the servers' capability. So we needed a new architecture to support a high-performance and cost-effective solution for our edge nodes. That's version two, phase two: now we are using a smart switch plus FPGA, instead of servers, to support the edge gateway.
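The smart flow scheduling described above, choosing among private lines, carrier links, and the public internet between edge nodes, can be illustrated with a minimal sketch. The link names, metrics, and selection policy below are hypothetical and not Tencent's actual algorithm: it simply prefers the cheapest link whose measured latency and loss still meet the flow's requirements.

```python
# Illustrative per-flow link selection between two edge nodes.
# Link data and the policy are invented for explanation only.

from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float   # measured RTT on this link
    loss_pct: float     # measured packet loss
    cost: float         # relative cost per Gbps

def pick_link(links, max_latency_ms, max_loss_pct):
    """Return the cheapest link meeting the flow's quality needs,
    falling back to the lowest-latency link if none qualifies."""
    ok = [l for l in links
          if l.latency_ms <= max_latency_ms and l.loss_pct <= max_loss_pct]
    if ok:
        return min(ok, key=lambda l: l.cost)
    return min(links, key=lambda l: l.latency_ms)

links = [
    Link("private-line", latency_ms=4.0,  loss_pct=0.0, cost=10.0),
    Link("carrier-a",    latency_ms=9.0,  loss_pct=0.1, cost=3.0),
    Link("internet",     latency_ms=25.0, loss_pct=0.8, cost=1.0),
]

# A latency-sensitive cloud-gaming flow tolerating 10 ms / 0.2% loss:
best = pick_link(links, max_latency_ms=10, max_loss_pct=0.2)
print(best.name)  # carrier-a: cheapest link that still meets the SLA
```

A real scheduler would refresh the link metrics continuously from probes and telemetry, but the cost-versus-quality trade-off it makes per flow is the same idea.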
You can see this architecture is a hardware architecture. We use a CPU, of course, plus an FPGA and a P4 switch to build the smart switch as hardware. And we can improve the throughput dramatically, because it's a switch, not a server; we can support very high throughput with this kind of solution. We also reduce the latency dramatically. And because the smart switch also has a CPU, we can run applications on the smart switch as Docker or container-based applications. You can also see that all the flows are offloaded from the server to the P4 switch, which is how we get such high throughput with this architecture. And because the applications run in Docker, they can easily be upgraded and scaled out; it's also very flexible.

We also have our own software solution for this smart switch. The key point is the layered architecture: we have our own SONiC build based on Tencent Linux, which supports very flexible, very powerful functions. For example, in the management plane we support gRPC, hot fixes, and NDU, the non-disruptive upgrade function. In the monitoring plane we support telemetry and failure detection, so failures can be reported quickly to our network management system. In the control plane we support BGP, OSPF, BFD, and so on. And in the database layer we have Redis, which we use to save the configuration and the state data. So this whole software platform is based on SONiC on Tencent Linux, and the high-level applications can run on the Docker side.
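The Redis-backed configuration and state store mentioned above follows the SONiC convention of keying table entries as "TABLE|object_id" hashes in its databases. The sketch below imitates that keying scheme with an in-memory dict standing in for a real Redis server; the table and field names are illustrative, not Tencent's actual schema.

```python
# Sketch of SONiC-style CONFIG_DB access: entries live in hashes keyed
# as "TABLE|object_id". A plain dict stands in for the Redis server
# here; against real Redis you would use hset()/hgetall() instead.

class ConfigDB:
    def __init__(self):
        self._db = {}  # key "TABLE|id" -> dict of field/value pairs

    def set_entry(self, table, key, fields):
        self._db[f"{table}|{key}"] = dict(fields)

    def get_entry(self, table, key):
        return self._db.get(f"{table}|{key}", {})

cfg = ConfigDB()
# Persist a BGP neighbor and a BFD session the way the control plane
# described in the talk might (field names are hypothetical):
cfg.set_entry("BGP_NEIGHBOR", "10.0.0.1", {"asn": "65001", "holdtime": "180"})
cfg.set_entry("BFD_SESSION", "10.0.0.1", {"tx_interval_ms": "300"})

print(cfg.get_entry("BGP_NEIGHBOR", "10.0.0.1")["asn"])  # 65001
```

Keeping configuration and state in Redis this way is what lets separate daemons (BGP, telemetry, the management agent) share one consistent view of the switch without talking to each other directly.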
So more flexible deployment with Docker is also supported in this software design. That was a very quick introduction to our networking phase one and phase two: we started with the software gateway and then moved to the smart-switch-based gateway.

Here is a use case. Tencent is not only a cloud provider; we also provide social networks and gaming, and mobile gaming and PC gaming are very popular now. A couple of years ago we launched cloud gaming as well, using a thin client: everything runs in the cloud, like the rendering, the game streaming, and the game servers. You don't need to download a big client on the mobile or PC side; it's click-to-play, no need to wait, and there's no high performance requirement on the terminal side. But two years ago we realized a problem: cloud gaming is very sensitive to network latency and sometimes suffers from jitter, and the network quality affects the cloud gaming user experience. So we moved some functions of the cloud server from the data center to the edge node, for example the game streaming and the rendering. By using these distributed cloud functions, we reduced the RTT by roughly 50%, and we also reduced OPEX by 30 to 40%, mainly on bandwidth; we save a lot of bandwidth with this distributed cloud. So now we have 300 distributed cloud nodes in China, and 100 of those distributed cloud nodes are running cloud gaming. It's very, very popular now.

OK, that was a very quick introduction to Tencent's networking for the distributed cloud. We also use a lot of open-source products from the Linux Foundation, so Tencent is very happy to see more of this technology.
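The roughly 50% RTT reduction quoted above follows directly from shortening the path: when streaming and rendering move from a regional data center to an in-city edge node, the propagation component of the RTT shrinks with the distance, while the last-mile component stays fixed. The distances and delay constants below are made-up illustrative values; only the rough magnitude of the improvement comes from the talk.

```python
# Back-of-envelope model of the RTT gain from moving game streaming to
# the edge. Distances and per-km delay are illustrative assumptions,
# not Tencent's measurements.

PROP_MS_PER_KM = 0.01   # ~5 us/km one-way in fiber -> 0.01 ms/km of RTT
ACCESS_MS = 10.0        # fixed last-mile + processing RTT component

def rtt_ms(distance_km):
    return ACCESS_MS + distance_km * PROP_MS_PER_KM

central = rtt_ms(1000)  # player to a distant regional data center
edge = rtt_ms(50)       # player to an in-city edge node
print(f"central={central:.1f} ms, edge={edge:.1f} ms, "
      f"saved={1 - edge / central:.0%}")
```

The bandwidth side of the OPEX saving has a similar shape: streaming traffic that used to traverse long-haul backbone links now stays within the metro network.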
We would like to explore more new technologies with the Linux Foundation and all the partners in the Linux Foundation. OK, that's all of my presentation. Thank you very much.

Excellent, thank you. If you can stop sharing, then we can be side by side. Very good, thank you. So that's fantastic. I mean, yes, there was already gaming, but with edge computing you have really enhanced the experience of the game. A couple of questions have come in, related to gaming and one related to the smart gateway. One question is: what makes the gateway smart? Simple question, but can you answer it quickly?

Yeah, it's not that the gateway is smart; I think it's the switch that is smart. The switches we currently use are very popular in data center networks; they are white-box switches, and the CPU is very low-power. Now we have a smart switch. We use P4, a programmable language, so the pipeline in the switch can be programmed according to service demand. That's why we call it smart. We have the CPU and the FPGA in the switch, and we also use P4 as a high-level programming language, so we can design our own pipeline and our own forwarding policies in the switch. That's what makes it smart.

Yeah, that's great. There is also a question: would Tencent explore AR/VR in the gaming industry?

Yeah, sure. Right now it's just a PoC, with some trials running at the edge. Cloud gaming is commercial now, but I think AR/VR is still just a PoC running at the edge.

OK, very good. One more question: what does the SDN technology stack look like in the control plane and management plane, and are there any open-source projects you have used?

As I just mentioned, two years ago we used OpenDaylight as the controller in our control plane. But now we have upgraded our OpenDaylight-based controller to a microservices design.

OK, got it. I also had one final question.
Edge computing has opened up new use cases and new applications. So a year from now, what do you see as the killer use cases for edge that your teams are either trialing or working on? What should the audience dream about?

Yeah, I think cloud gaming is our first key application at Tencent. Now we are also looking at AR/VR, as I mentioned, and also live broadcasting, which is very popular in China, and Tencent Meeting, similar to Zoom. We also have audio and video functions that can be distributed to our edge nodes.

OK, so much better latency.

Yeah, much better latency than serving from the central cloud, and it also saves bandwidth.

Excellent. OK, thank you for answering so many questions, and I appreciate your insights into your network. So with that, thanks a lot.

And I just realized this is my first time giving a presentation on behalf of Tencent.

Yeah, exactly. That's good. I'm very happy to see you. Thank you.

Thank you.