Let's get started. Today's presentation consists of four parts. The first is an overview of ZTE's PaaS solution: in this part we talk about the limitations of the original PaaS solution, the use cases of ZTE's PaaS platform, and a little more about the two components that have been re-implemented on top of VPP. The second part is detailed information on the VPP-based fast MQ architecture. The third part is detailed information on the NOF service, networking output forwarding as a service, which is the boundary of the PaaS cloud. And at the last, a few optimizations we made to VPP.

OK, let's start with the first part, the overview of ZTE's PaaS solution. First, it works as a public cloud: as a cloud service provider, it offers PaaS services to governments, companies, and individuals. The PaaS platform supports third-party service aggregation: it supports application stores, gathering developers, third-party service providers, and users, so it can finally become the ecological chain controller of cloud computing. A telecom provider or IT vendor acting as the PaaS operator can make its core capabilities more open and promote the sales and development of its main business.

Second, for the private cloud or IT cloud: telecom providers, governments, banks, companies, and other customers can build a private PaaS cloud platform. It also provides a unified platform for application development and operation, to achieve middleware integration and to support automatic development and deployment of IT applications.

The third use case is NFV. With the help of automatic orchestration and deployment, telecom providers can build NFV much more easily. It helps them realize the transformation of the telecommunication network architecture from special-purpose equipment to cloud, general-purpose equipment.

Next are the features of the IT PaaS platform. First, every application and service is packaged as a container image; it runs in Docker and is deployed in a VM or on bare metal via Ironic. Second, the IT PaaS platform implements a service discovery mechanism. Third, it also supports ICT applications. Last, it supports multi-tenancy and multiple networking planes, for example the control plane, the data-path plane, and also the API plane.

Now let's have a look at the packet flow inside and outside the PaaS cloud. This is the logical diagram of the IT PaaS platform: in the middle is the fast message queue, which is based on VPP, and at the bottom is the NOF service, also based on VPP. A packet arriving from outside the PaaS cloud first passes through the VPP-based NOF service and is then forwarded to the microservice containers in Docker. All traffic between microservices goes through the VPP-based message queue. After the microservices have processed it, the packet is finally transferred out of the cloud, again through the NOF service.
Now let's look at the VPP-based fast MQ architecture. First, let's talk about the publish/subscribe pattern and why a discovery mechanism is needed in the PaaS platform. All the microservices are deployed dynamically, and with the needs of expansion and shrinking, the set of service instances changes dynamically too, so the topology of the network keeps changing. For this reason, the PaaS platform needs a unified resolution mechanism, and that is publish/subscribe.

The traditional publish/subscribe pattern is shown in the picture: here is the publisher, and these three are the subscribers. They communicate based on a subject: the messages go through to this subscriber, or this one, or this one. The publisher publishes messages into classes without knowledge of the subscribers; similarly, the subscribers express interest in classes without knowledge of the publisher. Publish/subscribe is a sibling of the message queue paradigm, and it is typically one part of a larger message-oriented middleware system; most messaging systems support both the publish/subscribe and the message queue model.

For example, there is Redis, an open-source implementation: a networked, in-memory key-value store with optional durability. There is another open-source implementation, RabbitMQ: open-source message broker software that implements the Advanced Message Queuing Protocol, AMQP. And we also have ZeroMQ, another open-source implementation; it is an asynchronous messaging library, a little different from RabbitMQ. But all these implementations share a weak point: they are all based on the Linux socket. So the performance of all of them, Redis, RabbitMQ, ZeroMQ, can be poor; it cannot meet the requirements of CT applications.

So let's look more closely at how applications in containers communicate. Take two containers on the same host, host 1: container 1 and container 2 want to communicate. Container 1 invokes the MQ lib to send a message to container 2, and underneath it uses the Linux socket. That means when the application in container 1 talks to the application in container 2, the message has to switch from user space into kernel space, and then from kernel space back into user space on the receiving side. Similarly, for a container on host 1 talking to one on host 2, there are the same user-space-to-kernel-space switches. So the performance of container-to-container communication based on the Linux socket can be poor, and it cannot meet CT requirements. We therefore decided to re-implement the MQ lib on top of VPP rather than on the Linux socket.

Here is an example. There are four pods: pod 1, pod 2, pod 3, and a base pod. In the base pod we have an HM node, a health monitor; this node monitors all the containers in pods 1, 2, and 3 using a heartbeat mechanism. The pods have different IP addresses: .1, .2, and .3. If container 3 wants to receive some kind of message, it must first bind to a session, say session 1; in other words, C3 is a subscriber. When C3 subscribes, its information goes into the database as a record: session 1, the IP address .2, and an L4 UDP port. Next, container 6 also wants to receive messages, so it also binds to session 1, and we get another record in the database: session 1, IP address .3, and its L4 UDP port.

Now container 1 wants to send a message, so it publishes to the session. It queries the database with the keyword session 1, finds the two records, and chooses one destination to send the message to, for example the record with .3, which means the message is sent to container 6. This covers both messages and data. The HM node in the base pod keeps monitoring the links to all the containers. If a link breaks down, for example the link to C6, the HM node updates the database and that record is deleted. Then only one record is left for this session; the publisher C1 is updated and changes the destination, and the message is sent to container 3 instead, because the link to the former destination C6 is broken. For the communication between C1 and C3, or C1 and C6, they all use the MQ lib mentioned before to send and receive messages. In the native PaaS implementation the MQ lib is based on the Linux socket; in the ZTE solution it is based on VPP.
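To make the session database concrete, here is a minimal C sketch of the bind/publish flow just described, with the HM node modeled as a callback that removes dead records. Everything here is illustrative: the names (mq_record_t, mq_bind, mq_publish, mq_hm_link_down) are hypothetical, not the actual MQ lib API, and the printf stands in for the real VPP transport.

```c
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

#define MAX_RECORDS 64

/* One subscription record: (session id, subscriber IP, L4 UDP port).
 * Hypothetical layout; the real database schema is not public. */
typedef struct {
    uint32_t session_id;
    uint32_t ip;         /* subscriber IPv4 address, e.g. the .2 pod */
    uint16_t udp_port;   /* subscriber L4 UDP port */
    int      alive;      /* cleared by the HM node on heartbeat loss */
} mq_record_t;

static mq_record_t db[MAX_RECORDS];
static int n_records;

/* Subscriber side (C3, C6): binding to a session inserts a record. */
int mq_bind(uint32_t session_id, uint32_t ip, uint16_t udp_port)
{
    if (n_records == MAX_RECORDS)
        return -1;
    db[n_records++] = (mq_record_t){ session_id, ip, udp_port, 1 };
    return 0;
}

/* HM node: a broken heartbeat link deletes the subscriber's records. */
void mq_hm_link_down(uint32_t ip)
{
    for (int i = 0; i < n_records; i++)
        if (db[i].ip == ip)
            db[i].alive = 0;
}

/* Publisher side (C1): look up the session, pick one live destination. */
int mq_publish(uint32_t session_id, const void *msg, size_t len)
{
    (void)msg;
    for (int i = 0; i < n_records; i++) {
        if (db[i].session_id == session_id && db[i].alive) {
            /* Stand-in for the VPP/ring transport described next. */
            printf("send %zu bytes to ip .%u port %u\n",
                   len, db[i].ip & 0xff, db[i].udp_port);
            return 0;
        }
    }
    return -1;  /* no live subscriber bound to this session */
}

int main(void)
{
    mq_bind(1, 2, 5000);              /* C3 subscribes: session 1 at .2 */
    mq_bind(1, 3, 5001);              /* C6 subscribes: session 1 at .3 */
    mq_hm_link_down(3);               /* HM: the link to C6 broke down  */
    return mq_publish(1, "hello", 5); /* delivered to C3 instead        */
}
```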
Here is the detail of our implementation. The MQ lib is built on the DPDK rte_ring, over hugepage shared memory. We have a DPDK master container, which allocates the rings for all the containers on the host, and DPDK slave containers. In each slave we have a ring-IO node, which maps the shared memory of the rings, and an MQ node, which does the logic processing and exposes the MQ lib to the applications in the container, the send and receive calls.

For communication between two hosts we use a vSwitch, specifically the user-space Open vSwitch with DPDK. So when an application in a container sends a message, it invokes the MQ lib, the message goes over the ring, then through OVS-DPDK, and is finally delivered to the destination container. All packets are transmitted in user space; nothing ever crosses from user space into the Linux kernel space.

Here is the performance data comparing the original Linux-socket solution and the DPDK/VPP-based solution. All the data was measured with a 10 Gbps adapter, on an Intel Xeon E5 CPU running at 2.3 GHz. At 128 bytes, the Linux-socket throughput is poor, while the VPP-based solution reaches almost 4 Gbps. At 512 bytes the VPP solution already achieves 10 Gbps, and at 1,024 bytes it stays at 10 Gbps, line rate. So the performance based on VPP is much better.

Besides the rte_ring-based fast MQ just described, we also have another variant based on virtio and vhost pairs. The diagram looks like this: one container has the vhost-user side, and the other container uses virtio-user; they form a pair. The messages and data going to or from the containers stay in user space; they never switch from user space into kernel space. It is similar to the ring solution.
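As a concrete illustration of the master/slave ring mechanism, here is a minimal DPDK sketch: a primary process (the "DPDK master" role) creates an rte_ring in hugepage shared memory, and a secondary process (a slave container's ring-IO role) looks it up by name. This is plain DPDK multi-process usage under my own assumptions, not ZTE's container code; the ring name "fastmq_ring" is made up, and in a real multi-process setup the enqueued objects must themselves live in shared memory.

```c
#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_ring.h>

/* Run one instance with --proc-type=primary (the DPDK master
 * container) and others with --proc-type=secondary (slave
 * containers); they share the ring through hugepage memory. */
int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    struct rte_ring *ring;
    if (rte_eal_process_type() == RTE_PROC_PRIMARY)
        /* Master: allocate the ring for the containers on this host. */
        ring = rte_ring_create("fastmq_ring", 1024, rte_socket_id(),
                               RING_F_SP_ENQ | RING_F_SC_DEQ);
    else
        /* Slave: the ring-IO node only maps the existing ring. */
        ring = rte_ring_lookup("fastmq_ring");

    if (ring == NULL)
        rte_exit(EXIT_FAILURE, "ring unavailable: %s\n",
                 rte_strerror(rte_errno));

    /* Message passing is a pointer enqueue/dequeue entirely in user
     * space; no Linux socket or kernel crossing is involved. (A real
     * deployment would enqueue pointers into shared memory, e.g.
     * rte_mempool elements, not a local static variable.) */
    static int payload = 42;
    void *obj = &payload;
    rte_ring_enqueue(ring, obj);
    if (rte_ring_dequeue(ring, &obj) == 0)
        printf("dequeued message: %d\n", *(int *)obj);
    return 0;
}
```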
OK, the third part: the VPP-based NOF service. NOF means networking output forwarding service; the NOF is the boundary of the PaaS cloud. Every packet that goes into the PaaS cloud or comes out of it passes through the NOF first.

The requirements for the NOF service are these. First, it must run in Docker, in a container. Second, the NOF must classify packets: it determines which microservice node a packet should go to for processing. Third, it needs high performance and throughput. Fourth, it also needs security. And last, because packets come from outside the PaaS cloud, it needs NAT, and it also needs load balancing. Native VPP already implements some of this, such as IPsec, NAT, and ARP; in the ZTE solution we implemented the packet classifier and the firewall.

Here is an example. This is the PaaS cloud, and the red one is the NOF node, also based on VPP. Inside it there are a number of graph nodes to classify packets and process them. A packet comes in from here, and over here we have some microservices. In the NOF node, the DPDK I/O node handles packet input and output, and the message queue serves this pod; NAT, ARP, and IPv4 are native VPP nodes, while the flow classifier, the firewall, and the message queue are all implemented by us on VPP. When a packet enters the NOF node, it is first processed by the flow classifier node, which determines which node the packet goes to next, for example the NAT node or the IPv4 node. Then it goes to the fast MQ, which, as mentioned before, is based on VPP, and the packet is delivered to the appropriate microservice in another container.

Here is another example, for SFC, service function chaining. This is again the NOF node. We have the classifier node, which might be in a container, another node for NAT, another for the firewall, and another for ARP. All these services or containers might be on different compute nodes, so the communication between them, the packets transmitted between all the containers, goes through the OVS-DPDK/VPP switch. A packet first comes into the classifier node; the classifier determines which microservice nodes the packet will visit and encapsulates it with an NSH header describing the service chain. When the packet has finally been processed, it goes out through the fast MQ.

And the last part: we made some optimizations to VPP. The first one concerns ARP. In the native VPP implementation, when a packet is about to be sent out, the final processing happens in the IPv4 ARP node: it looks up the ARP table by destination IP to get the destination MAC address. Unfortunately, if the MAC address cannot be found, the native behavior is to drop the packet. In some use cases this is unacceptable, especially in CT use cases. So we did an optimization. When the MAC address cannot be found, the packet is handed over to the main thread in VPP. The control process there stores the cache index of the packet, constructs an ARP request for the destination IP that missed the lookup, and sends the ARP request out through interface-output. When the ARP reply is received in the IPv4 ARP node, we walk through all the cached packets and determine which of them can now be sent out. We also added another mechanism, a timer process, because the ARP reply doesn't always arrive in time: it scans all the cached packets at a certain interval, reconstructs the ARP requests, and sends them out again to try to get a reply. Once the ARP reply arrives, the cached packets are sent out. That is the first optimization.
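To show the shape of that logic, here is a schematic C sketch of the ARP-miss path: cache the packet's buffer index on a miss, emit an ARP request, flush on reply, and retry from a timer. The structures and helpers here are hypothetical stand-ins invented for illustration; the real code lives inside VPP's graph nodes and uses VPP buffers and its ARP table, not these toy tables.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

#define MAX_CACHED 256

/* One cached packet: buffer index plus the unresolved next hop. */
typedef struct {
    uint32_t buffer_index;  /* index of the packet buffer being held */
    uint32_t next_hop_ip;   /* destination IP that missed the ARP table */
    time_t   last_request;  /* when we last sent an ARP request for it */
} arp_pending_t;

static arp_pending_t pending[MAX_CACHED];
static int n_pending;

/* Toy ARP table and I/O stubs standing in for VPP internals. */
static struct { uint32_t ip; uint8_t mac[6]; } arp_table[16];
static int n_arp;

static int arp_lookup(uint32_t ip, uint8_t mac[6])
{
    for (int i = 0; i < n_arp; i++)
        if (arp_table[i].ip == ip) {
            memcpy(mac, arp_table[i].mac, 6);
            return 0;
        }
    return -1;
}

static void send_arp_request(uint32_t ip)
{
    printf("arp who-has 0x%08x (via interface-output)\n", ip);
}

static void send_cached(uint32_t buffer_index)
{
    printf("tx cached buffer %u\n", buffer_index);
}

/* Output path: instead of dropping on an ARP miss, cache and ask. */
void on_arp_miss(uint32_t buffer_index, uint32_t next_hop_ip)
{
    if (n_pending == MAX_CACHED)
        return;  /* cache full: fall back to the native drop */
    pending[n_pending++] =
        (arp_pending_t){ buffer_index, next_hop_ip, time(NULL) };
    send_arp_request(next_hop_ip);
}

/* Called after an ARP reply updates the table: flush what resolved. */
void on_arp_reply(void)
{
    uint8_t mac[6];
    for (int i = 0; i < n_pending; ) {
        if (arp_lookup(pending[i].next_hop_ip, mac) == 0) {
            send_cached(pending[i].buffer_index);
            pending[i] = pending[--n_pending];  /* remove the slot */
        } else {
            i++;
        }
    }
}

/* Timer process: re-send requests for entries still unresolved. */
void arp_timer_scan(time_t now, time_t interval)
{
    for (int i = 0; i < n_pending; i++)
        if (now - pending[i].last_request >= interval) {
            send_arp_request(pending[i].next_hop_ip);
            pending[i].last_request = now;
        }
}
```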
And the second optimization: pseudo-recombination. In our use cases we often have very large packets that get fragmented into many pieces. Usually the first piece carries the IP header and the UDP information, but the following pieces carry only IP information, no UDP information. That is the problem: in the PaaS platform, packets are dispatched based on L2 information, L3 information, and also L4 information, and in this situation the later pieces have no L4 information, so those fragments cannot be forwarded correctly.

So we made an optimization (a small sketch of the idea follows at the end). For the first fragment we receive, we compute a hash value from the L2 information, the L3 information, and the L4 UDP information; then for the following fragments of the same packet, we reuse that hash value to send them out. With this optimization we never reassemble the fragments into one packet; we only take the first fragment's IP and UDP L4 information to compute a hash, and that value is used to dispatch the other fragments. So we call it pseudo-recombination; it is not a real recombination. Those are the two optimizations for VPP.

Now let's go to the summary. Why did ZTE select VPP and DPDK? First, because VPP and DPDK are a fantastic development platform. Second, in the VPP implementation, the graph node and plugin mechanisms provide great flexibility for different network applications, which means it is easier for developers to add new functionality. And third, new capabilities can be implemented to rearrange the network functions in the data center. That's all. Thank you.
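Here is the minimal C sketch of the pseudo-recombination hashing mentioned in the second optimization: the first fragment, which still has its UDP header, yields a flow hash that is remembered under (source IP, destination IP, IP ID) and reused for the later fragments of the same datagram. The names and the toy hash are my own illustration under those assumptions, not ZTE's implementation; the L2 fields from the talk are omitted for brevity.

```c
#include <stdint.h>

#define TABLE_SIZE 1024

/* Key identifying one fragmented IP datagram: src/dst plus the IP ID
 * field, which is shared by all fragments of the same datagram. */
typedef struct {
    uint32_t src_ip, dst_ip;
    uint16_t ip_id;
    uint32_t hash;     /* flow hash computed on the first fragment */
    int      valid;
} frag_entry_t;

static frag_entry_t table[TABLE_SIZE];

static uint32_t slot_of(uint32_t src, uint32_t dst, uint16_t id)
{
    return (src ^ dst ^ id) % TABLE_SIZE;   /* toy slot function */
}

/* Toy flow hash; a real implementation would use e.g. a jhash. */
static uint32_t tuple_hash(uint32_t src, uint32_t dst,
                           uint16_t sport, uint16_t dport)
{
    return (src * 2654435761u) ^ dst ^ (((uint32_t)sport << 16) | dport);
}

/* First fragment (offset 0): UDP header present, compute and store. */
uint32_t frag_first(uint32_t src, uint32_t dst, uint16_t ip_id,
                    uint16_t sport, uint16_t dport)
{
    frag_entry_t *e = &table[slot_of(src, dst, ip_id)];
    *e = (frag_entry_t){ src, dst, ip_id,
                         tuple_hash(src, dst, sport, dport), 1 };
    return e->hash;  /* used to pick the destination for this fragment */
}

/* Later fragments: no L4 header, so reuse the stored hash and every
 * fragment of the datagram is dispatched to the same place. */
int frag_next(uint32_t src, uint32_t dst, uint16_t ip_id, uint32_t *hash)
{
    frag_entry_t *e = &table[slot_of(src, dst, ip_id)];
    if (!e->valid || e->src_ip != src || e->dst_ip != dst ||
        e->ip_id != ip_id)
        return -1;   /* first fragment not seen yet, e.g. reordering */
    *hash = e->hash;
    return 0;
}
```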