Hello, everyone. I'm Chen Jiexu from Intel, and the other speaker is Ying Nanchen from Inspur. Today, I'm going to introduce a way to offload network policy to hardware.

First of all, what is a network policy? Network policy is a concept in Kubernetes; if you are familiar with OpenStack, you can think of it as the OpenStack security group. A network policy basically defines which kinds of traffic can be accepted or dropped. Let's look at an example; this example contains almost everything about network policy you need to know. A network policy is defined in four parts. First, you need a pod selector. The pod selector selects the pods the network policy should be applied to; here, it selects the pods that have the label role: db. Second, you need the policy types. There are two kinds of policy types, ingress and egress, which describe the direction of the traffic: ingress means traffic going into the pod, and egress means traffic going out of the pod. The third part is the ingress rules. In an ingress rule you can define an IP block, a namespace selector, a pod selector, and ports, and these match the traffic. For example, if your traffic is TCP and the port is 6379, it will be accepted by this network policy. The last part is the egress rules, which specify which kinds of traffic can go out of the selected pods.

Usually, network policy is implemented with iptables, which means the traffic goes through the kernel space and consumes some CPU resource. Today, I'm going to introduce a way to offload network policy to the hardware, so some traffic can be dropped directly in the hardware, which is the NIC. Then the CPU won't take part, and we can save the CPU for our customers.

Next, let's look at the problem. The problem comes with the SR-IOV CNI plugin. For now, the SR-IOV CNI plugin is supposed to be used with Multus, which lets pods have multiple interfaces. In this example, we have Calico as the default plugin: the interface eth0 is created by Calico, and net1 is created by the SR-IOV CNI plugin. Suppose we have a network policy that forbids pod1 from accessing pod2. Then pod1 can't access pod2 through eth0, but pod1 can still access pod2 through net1, because the SR-IOV CNI plugin doesn't provide support for network policy.

This is the topology of the SR-IOV NIC. There is an internal switch in the NIC. SR-IOV provides one PF and multiple VFs, and the VFs are given to containers; then a container or a VM can use a VF for its network. The traffic between VF1 and VF2 goes like this: pod1, VF1, the internal switch, VF2, pod2. So if we offload the network policy to the internal switch, we can match the traffic there and drop or accept it, and the traffic won't go through the kernel space.

To offload the network policy to the hardware, we can use the DPDK rte_flow library. This library can configure the hardware to match specific ingress or egress traffic. The flow command below is from testpmd, and testpmd finally calls the rte_flow library. In this command, we can see there are ingress, egress, and transfer attributes; transfer is the one we use. With a pattern, we can match specific traffic, and with the actions, we can accept or drop the packet.
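The exact command from the slide isn't in the transcript; a testpmd flow command of this shape would match all IPv4 traffic between two addresses and drop it in the embedded switch (the port ID and addresses here are made-up placeholders, not the ones from the talk):

    testpmd> flow create 0 transfer pattern eth / ipv4 src is 10.56.217.2 dst is 10.56.217.3 / end actions drop / end

The transfer attribute applies the rule in the NIC's internal switch rather than on a receive queue; appending an item such as tcp dst is 6379 to the pattern would narrow the match to a specific protocol and port, the way a network policy rule does.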
A network policy has many fields, so we need to figure out whether every part of a network policy can be implemented with the DPDK rte_flow library or not, and I compared them here. There are ingress and egress in network policy, and rte_flow also has ingress and egress. For a CIDR in a network policy, we can use the address and mask in rte_flow to do the same thing, like in this example. For the except field in a network policy, we can calculate which IP blocks should be accepted or blocked, then construct the flow commands and offload the policy to the physical NIC. For the namespace selector and pod selector, we can just use the Kubernetes API: we query the Kubernetes API for the pods that have a label like role: db. And for the ports, DPDK has UDP, TCP, SCTP, and so on, which covers everything in network policy.

So let's look at my architecture. I have three components: the network policy agent, the SR-IOV CNI plugin, and the DPDK agent. First, the network policy agent watches the network policies and the pods. If a new network policy is created, the network policy agent is notified; it constructs the flow command and finally calls the DPDK agent through a socket. The DPDK agent can then call the rte_flow library to set the network policy into the hardware. The SR-IOV CNI plugin also needs to call the DPDK agent through the socket to configure the VF. Currently, I have posted this to GitHub as a proposal. We have had several discussions over it, and hopefully I will keep pushing this feature into the SR-IOV CNI plugin.

Finally, I'm going to show a demo. In this video, we can see the DPDK agent is already running. Next, I create the network policy agent, which runs on the Kubernetes node as a DaemonSet. Then I create a network attachment definition, which is used by Multus to create the multiple interfaces, and I create two pods, pod1 and pod2. Both pods have two interfaces: eth0, created by Calico, and net1, created by the SR-IOV CNI plugin. We can see the IPs assigned to the interfaces. We ping from pod2 to pod1 to verify that the traffic can go through; for now, no network policy is created. We are in pod2 now, trying to ping net1 in pod1, and we can see the ping succeeds. Then we exit pod2 and apply a network policy. This network policy drops the traffic between pod1 and pod2, and it is offloaded to the hardware, that is, the SR-IOV NIC. We log into pod2 again and try to ping the IP in pod1, and we can see the traffic is blocked. That's all. Thank you.
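As a concrete illustration of the offload step described above, here is a minimal sketch in C, against the DPDK rte_flow API, of the kind of rule the DPDK agent would program for the demo's policy: drop all IPv4 traffic from pod2's address to pod1's address in the NIC's embedded switch. The function name, addresses, and error handling are illustrative assumptions, not the talk's actual agent code; only the rte_flow structures and calls come from DPDK.

    #include <stdint.h>
    #include <rte_byteorder.h>
    #include <rte_flow.h>

    /* Sketch: install a hardware rule that drops all IPv4 traffic from
     * src_ip to dst_ip on the given port. Illustrative, not the agent's
     * real code. */
    static struct rte_flow *
    install_drop_rule(uint16_t port_id, rte_be32_t src_ip, rte_be32_t dst_ip,
                      struct rte_flow_error *err)
    {
        /* transfer = 1 asks the NIC's embedded switch to apply the rule,
         * so matching packets never reach the kernel or the VF. */
        struct rte_flow_attr attr = { .transfer = 1 };

        struct rte_flow_item_ipv4 ip_spec = {
            .hdr = { .src_addr = src_ip, .dst_addr = dst_ip },
        };
        /* Full /32 masks: match exactly these two addresses. A CIDR from
         * a NetworkPolicy ipBlock would become a shorter prefix mask. */
        struct rte_flow_item_ipv4 ip_mask = {
            .hdr = {
                .src_addr = RTE_BE32(0xffffffff),
                .dst_addr = RTE_BE32(0xffffffff),
            },
        };

        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4,
              .spec = &ip_spec, .mask = &ip_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_DROP },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        /* Ask the driver whether it can offload this rule, then create it. */
        if (rte_flow_validate(port_id, &attr, pattern, actions, err) != 0)
            return NULL;
        return rte_flow_create(port_id, &attr, pattern, actions, err);
    }

A real agent would install the rule in both directions and keep the returned handle, so the rule can be removed with rte_flow_destroy() when the network policy is deleted.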