So, good afternoon, and thank you for your time. I know it's quite a heavy afternoon; we have a 50-minute talk. Today we're going to talk about Kubernetes networking with SRv6 and Contiv-VPP. It's another way to think about Kubernetes networking. My name is Ahmed Abdelsalam, and I work for Cisco Systems. I have with me Rasto, who works for Pantheon, and Marek, who works for Intel, and this is going to be a joint talk. Our agenda for today: we will go very briefly over Kubernetes networking. Fortunately, there were two talks before this one, so that part is mostly covered. Then I will talk about SRv6, with some introduction to the technology and how it can work with Kubernetes networking. Rasto will cover the Contiv-VPP part, and Marek will go into the acceleration of SRv6 using the Intel SmartNIC.

So, Kubernetes networking, in just one slide. Kubernetes doesn't do anything for networking itself; it offloads this function to CNI plugins, and those CNI plugins are supposed to do two main things. The first one is connectivity: when you create a new Kubernetes pod, the job of the CNI plugin is to create an interface inside the pod, connect this interface to the network fabric, which can be a vSwitch or whatever, and allocate an IP address to the pod. The second function is to make this pod reachable across the cluster. So a CNI plugin needs to do just these two things, and there are different technologies and different ways to do it.
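To make that CNI contract concrete, here is a minimal toy sketch in Python of a plugin's ADD operation: the runtime invokes the plugin with environment variables and the network config on stdin, and the plugin answers with the interface and IP it set up. The config values and the static IPAM choice are hypothetical, not the actual code of any real plugin.

```python
#!/usr/bin/env python3
"""Toy sketch of the CNI contract: connectivity (interface + IP) for a new pod.

Illustrative only -- real plugins (Contiv-VPP, Calico, ...) also wire the
interface into their data plane and program cluster-wide reachability.
"""
import json
import os
import sys

def cni_add():
    conf = json.load(sys.stdin)              # network config pushed by the runtime
    netns = os.environ["CNI_NETNS"]          # pod's network namespace path
    ifname = os.environ["CNI_IFNAME"]        # usually "eth0"

    # 1) Connectivity: create an interface inside the pod's netns and attach
    #    it to the node's fabric (vSwitch, bridge, VPP, ...). Elided here.
    # 2) Addressing: allocate an IP for the pod (hypothetical static IPAM).
    pod_ip = "2001:db8:1::42/64"

    # The plugin reports the result back to the runtime on stdout.
    print(json.dumps({
        "cniVersion": conf.get("cniVersion", "0.4.0"),
        "interfaces": [{"name": ifname, "sandbox": netns}],
        "ips": [{"version": "6", "address": pod_ip}],
    }))

if __name__ == "__main__":
    if os.environ.get("CNI_COMMAND") == "ADD":
        cni_add()
```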
As for the IP addressing: basically all your Kubernetes pods need IP addresses, and unfortunately we don't have enough IPv4 addresses anymore in Europe. There was a talk at RIPE 78 projecting when the IPv4 addresses were expected to be exhausted. In Europe, they were supposed to run out by January 2020, but three months before that, RIPE announced they had already assigned the last prefix. So we don't have any IPv4 left in Europe; some service providers still do, but RIPE, who assigns the IPv4 addresses, doesn't have any more. The solution for this is to use IPv6.

So we are covering three main problem statements. Some people might not agree with the problem statements, some people might not agree with our solutions, but these are at least the problem statements and the solutions from our point of view, and this addressing issue is the first of them. The second problem: once we use IPv6 for addressing, for giving addresses to Kubernetes pods, we will need to implement all of the Kubernetes use cases, like pod-to-pod communication, network policy, services, service chaining, ingress, and communication between clusters. The question is: do we need to do these use cases in IPv6 the same way we do them in IPv4, or should we think about them in a more IPv6-native way? From our point of view, SRv6, and I will explain what SRv6 is later, can provide a solution for this problem statement.

The third one is the I/O for the pods. Some kinds of workloads need very fast I/O: you have pods required to process a lot of packets per second. How would you provide this I/O to the Kubernetes pods? You have several options in the data plane: you can use kernel forwarding, you can use something like eBPF with XDP, or you can use something like VPP with DPDK. From our point of view, VPP is a solution for that, or, depending on your use case, you can have an accelerated way to use VPP. On the right I show some numbers comparing the forwarding of the Linux kernel with VPP, and the second diagram compares VPP in software with VPP offloaded to a SmartNIC.

So what is SRv6? SRv6 is basically a source-routing mechanism: you define the forwarding path of the packet at the source. How do you do it? You attach to the packet a list of instructions, a list of endpoints that need to process this packet. These endpoints we call segments, or SRv6 segments, and each segment has a segment ID, or SID. Segments can have two different meanings. One is topological: go through node one, two, and three before reaching the destination. The other is a service or NFV meaning: go through network function one, two, and three before reaching the destination.

What can SRv6 provide you? First, scalability: since you push the packet's path at the source, you basically remove all of the state from your network fabric. With SRv6 you can also eliminate some of the protocols you would otherwise use for traffic engineering inside your network fabric, for use cases like service chaining, and for overlays, and it gives you end-to-end connectivity. Segment routing has two instantiations: one for the MPLS data plane, which basically maps each segment identifier to an MPLS label, and we will not go in that direction; the second one is for the IPv6 data plane, where each segment is mapped to an IPv6 address and you add to the packet a routing extension header that carries the list of these IPv6 addresses.

Before explaining how it works, a word on the ecosystem. SRv6 has a very rich ecosystem, so when you decide to implement SRv6, you are not on your own: you have support from network vendors, from network equipment manufacturers, support in SmartNICs, in merchant silicon, in open source, and in some NFV software as well. SRv6 is defined in the IETF, as the SRv6 network programming draft. SRv6 network programming is a model where you encode the processing path in the packet, and as we said, this is implemented as a list of segments.

To give more detail on what an SRv6 segment is: as I said, it is basically an IPv6 address, divided in two parts. The first is the locator: which node should process the packet. The second is the function: which function that node should execute on the packet. And here is how your packet looks. You take your normal payload, which can be IPv4, IPv6, TCP, UDP, whatever, and you encapsulate this packet in an outer IPv6 header. In this IPv6 encapsulation you carry a list of segments that defines the path of the packet, and each node that processes the packet updates the destination address to the next node that should process it. So here I need the packet, before reaching the destination, to go through three nodes inside the network, or inside my Kubernetes cluster. I encode the IDs of these nodes, and every time a node processes the packet, it just updates the active segment pointer to the next segment, which is then carried in the destination address of the packet. And this is how the segment routing header is defined in the IETF: it has the common fields of any routing extension header, plus the SID list, the path of the packet, encoded as IPv6 addresses.
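As a concrete illustration of that encapsulation, here is a short sketch with Python and Scapy that builds an SRv6 packet with three segments, then mimics what an SRv6 endpoint does: decrement Segments Left and copy the new active segment into the destination address. The addresses are made-up documentation prefixes, not anything from the slides.

```python
from scapy.layers.inet import UDP
from scapy.layers.inet6 import IPv6, IPv6ExtHdrSegmentRouting

# Inner packet as emitted by the pod (could equally be an L2 frame or IPv4).
inner = IPv6(src="2001:db8:a::1", dst="2001:db8:b::1") / UDP(dport=8080)

# SID list: three nodes/functions that must process the packet, in order.
# Scapy stores the list reversed (last segment first), matching the wire format.
sids = ["fc00:3::1", "fc00:2::1", "fc00:1::1"]

# Headend encapsulation: outer IPv6 + SRH; destination = first active segment.
pkt = (IPv6(src="fc00:0::1", dst=sids[-1])
       / IPv6ExtHdrSegmentRouting(addrs=sids, segleft=len(sids) - 1)
       / inner)

# What an endpoint (End behavior) does at each segment along the path:
srh = pkt[IPv6ExtHdrSegmentRouting]
if srh.segleft > 0:
    srh.segleft -= 1
    pkt[IPv6].dst = srh.addrs[srh.segleft]   # next active segment

pkt.show()
```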
And there are two main types of behavior that you can execute on the packet. The first is what we call the headend: this is where you do the encapsulation, and you have two flavors. Either you do layer 2 networking, so you encode the layer 2 frame inside the SRv6 encapsulation, or you do layer 3, by encapsulating the IPv4 or IPv6 traffic. The other behaviors are the ones you execute on the nodes named in the packet header, for several use cases: some for traffic engineering, some for a plain overlay when you need to connect two Kubernetes worker nodes, and some others for service chaining.

I will give two examples; one is for overlay. Consider this green overlay as two Kubernetes pods that need to speak to each other across the network fabric; one is on this Kubernetes node and the other is on another Kubernetes node. You simply encapsulate the packet in an SRv6 encapsulation. But for some use cases, someone might need to do traffic engineering inside the fabric. Your Kubernetes nodes are connected to a data center fabric, and in this fabric I may need to take a faster path, or a low-latency path, to the other node. You can enforce this kind of path by adding an extra segment. Here the low-latency path is not the direct one from node 1 to node 2; if you go 1, 3, 2, that is the low-latency path, and you can enforce this inside the fabric as well.

I will do just two more slides on Kubernetes with SRv6, and then I will hand over to Rasto to speak about the support of SRv6 in Contiv-VPP. What you have today in Kubernetes networking is this: when you want to implement load balancing, i.e., Kubernetes services, you rely on some feature of the Linux kernel like iptables NAT, or, if you use a CNI plugin with VPP in the data plane, you use the VPP NAT engine. The same goes for port forwarding. For network policy, as mentioned in the previous talks, people use the Linux iptables firewall, or, with VPP, the VPP ACLs. For the overlay there are different protocols like VXLAN or IP-in-IP, and each CNI plugin supports one protocol or another. And for some use cases, for example when service providers want to do service chaining between pods, the way they do it is to create different tunnels and try to stitch these tunnels together.

The result of this model is that you get NAT everywhere. You also get a quite complex network policy model, which relies on the container IPs, and containers in Kubernetes, the pods, come and go very fast: every time you create a new pod, you need to update the iptables rules across all nodes. Plus iptables, as people mentioned before, was not designed for very fast forwarding. Service chaining is complex, I would say almost impossible to do currently. And for some use cases, like inter-cluster communication, hybrid cloud, multi-cloud, or a network-wide policy, I myself don't know the solution, so if someone knows, please tell me.

So how do we think SRv6 can simplify this? The way we think about it is as one technology that you can use to implement at least most of your use cases, because it has instructions for them. You want to implement an overlay? SRv6 provides you the instructions for the encapsulation and the decapsulation.
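On a Linux node, for instance, this kind of headend steering is a one-route affair. Here is a hedged sketch using pyroute2, assuming a kernel with SRv6 (seg6) support; the prefixes, SIDs, and the interface name are invented. The plain overlay carries a single segment for the remote node, while the traffic-engineered variant simply lists the waypoint node 3 first.

```python
from pyroute2 import IPRoute

# Hypothetical SIDs: fc00:2::1 = remote Kubernetes node 2, fc00:3::1 = TE waypoint.
ipr = IPRoute()
idx = ipr.link_lookup(ifname="eth0")[0]

# Plain overlay: one segment, just reach node 2 (headend encapsulation).
ipr.route("add", dst="2001:db8:b::/64", oif=idx,
          encap={"type": "seg6", "mode": "encap", "segs": "fc00:2::1"})

# Low-latency path: same kind of destination, but go 1 -> 3 -> 2
# by adding an extra segment in front.
ipr.route("add", dst="2001:db8:c::/64", oif=idx,
          encap={"type": "seg6", "mode": "encap",
                 "segs": "fc00:3::1,fc00:2::1"})
```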
If you want to implement a policy model that doesn't rely on the container IPs, you can leverage some of the metadata in the encapsulation to implement the policy using labels. Some use cases like port forwarding can be done by assigning an IPv6 address to each application; there were some good slides from Facebook on this, for which I forgot to add the link. For load balancing, you get what we call a segment routing policy, which can have multiple backends, and this basically does the load balancing without needing NAT. And you get service chaining out of the box, because you have an extension header where you can encode the path of the packet: you don't need to create several tunnels and try to stitch them, you can encode the path of the packet from the beginning. For the other use cases, you can leverage SRv6 together with some control plane like Network Service Mesh.

I will cover just one use case: network policy. Say you want to implement network policy at scale, in a way that does not rely on container IPs. Here, the blue part is within your Kubernetes node and the green part is across your data center fabric. I have two containers; my Kubernetes cluster runs workloads from several tenants, or from several application tiers, and I want to implement a policy between those groups. I don't want to implement it based on the container IPs, because then, every time I create a new container on this node, I would have to go to all of the other nodes and update the iptables rules or the ACLs there in order to block the traffic. Instead, I want a common identity for each container, or for each group, and based on this identity I can filter the traffic. The way we do it here: when you do the SRv6 encapsulation to send the packet across the fabric from one Kubernetes node to another, you encode the source group of the packet. So this packet coming from pod R1 is marked as coming from the group red. And when you remove the encapsulation at the other node, before handing the packet to the destination Kubernetes pod, you compare the source group of the packet with the group of the destination.

What does this give you that iptables rules or normal firewall rules don't? First, you get a scalable network policy, because your policy no longer relies on the container IPs. My policy table only has rules based on the groups: when group red wants to speak to group blue, I accept the traffic; when group red wants to speak to group green, I drop it. Say I add a different Kubernetes node with new pods from group red or group green: my policy table does not change, because each packet arrives already labeled with its group, and I just filter based on that, as the sketch below shows. Second, it is integrated into your overlay: you don't need to implement a new technology or deploy a new firewall to do policy, because the policy is implemented inside your overlay itself. And third, it is independent of the container IPs, because it relies on the group of the containers.
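A minimal sketch of that group-based filtering, with hypothetical group identifiers: the point is that the policy table is keyed on the (source group, destination group) pair, so it stays the same size no matter how many pods come and go.

```python
# Hypothetical group identifiers carried as metadata in the SRv6 encapsulation.
RED, BLUE, GREEN = "red", "blue", "green"

# Policy is per group pair, not per container IP: adding pods changes nothing here.
POLICY = {
    (RED, BLUE): "accept",
    (RED, GREEN): "drop",
}

def filter_at_decap(src_group: str, dst_group: str) -> str:
    """Decision taken when removing the SRv6 encapsulation, before the pod."""
    return POLICY.get((src_group, dst_group), "drop")  # default-deny

assert filter_at_decap(RED, BLUE) == "accept"
assert filter_at_decap(RED, GREEN) == "drop"
```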
And with this, I will hand over to Rasto to speak about SRv6 in Contiv-VPP.

Thanks, Ahmed. Before I go into SRv6 in the Contiv-VPP CNI, I will tell you some details about Contiv-VPP. Contiv-VPP is yet another CNI, but this one uses VPP as its data plane, leverages DPDK to access the network interface, and has the kube-proxy functionality fully implemented in VPP, for network policies and for services as well. It is production ready and passes all Kubernetes conformance tests, so you can use it as any other CNI. But apart from that, it is really good for different cloud-native networking deployments, meaning deploying some network functionality as a set of microservices running in Kubernetes. For those, we have features like multiple network interfaces per pod, and actually different kinds of interfaces; we have support for multiple isolated networks, L2 or L3; we have support for service chaining between the pods for CNF deployments; and of course, since we are here because of SRv6, we support IPv6 and have some SRv6 features implemented.

Very briefly, a look at the data plane. On each Kubernetes node we run one vSwitch pod, which executes VPP. VPP uses DPDK to access the network interface that interconnects the node with the other nodes in the cluster, and between VPP and the pods we have the pod interfaces: by default you get tap interfaces from VPP into the network namespaces of the individual pods. Between the nodes we use VXLAN tunnels by default, and we'll see what we can do with SRv6 instead.

This is one of the features that Contiv-VPP implements for CNF deployments. Let's say we have two CNF pods which need to talk to each other, say at the L2 level. They can have a first network interface connected to the default pod network, and additional network interfaces connected to additional networks. The way you define this in Contiv-VPP is with a custom Contiv-VPP annotation, where you define the name of the interface in the pod, the type of the interface, and the network where you want it connected. The types we currently support are the tap interface between VPP and the pod, the veth interface, and memif. Memif is a shared-memory interface which can be used when the CNF supports it, and it allows forwarding packets between the vSwitch VPP and the CNF through shared memory, bypassing the kernel.

This is another use case for CNF deployments. Again we have pods with multiple interfaces, in this case memif interfaces, and we want them connected in a chain: a chain which starts in pod CNF1, goes through pod CNF2, and ends in pod CNF3. Those pods can be running on any node in the Kubernetes cluster. You define this kind of service function chain, as shown on the right side of the slide, by referring to pod labels, as in Kubernetes services; then you refer to the name of the interface which you have given in the pod spec, and you basically define the service chain as an ordered list of these pods and their interfaces, rendered as sketched below. This implementation uses L2 cross-connects to create the chain: the chain is a cross-connect on the same VPP vSwitch instance, or a cross-connect between a VXLAN tunnel and the interfaces when we need to go multi-node. We'll see again later what we can do for the same use case with SRv6. And this is just an extension of the previous case, to show that you can chain not only the interfaces between the pods, but also chain in some external DPDK interfaces or sub-interfaces.
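As a sketch of that rendering step (all pod, interface, and node names are invented, and the output strings are only illustrative, not VPP CLI): the SFC definition is an ordered list, and the non-SRv6 renderer just turns consecutive entries into L2 cross-connects, going through a VXLAN tunnel when consecutive hops sit on different nodes.

```python
# Sketch of rendering a snake service chain into L2 cross-connects,
# in the spirit of Contiv-VPP's SFC rendering. All names are hypothetical.
chain = [
    {"pod": "cnf1", "iface": "memif1", "node": "node1"},
    {"pod": "cnf2", "iface": "memif1", "node": "node1"},
    {"pod": "cnf3", "iface": "memif1", "node": "node2"},
]

def render(chain):
    for a, b in zip(chain, chain[1:]):
        if a["node"] == b["node"]:
            # Same vSwitch instance: direct cross-connect between the memifs.
            yield f"xconnect {a['pod']}.{a['iface']} <-> {b['pod']}.{b['iface']}"
        else:
            # Multi-node: cross-connect each side into a VXLAN tunnel.
            tun = f"vxlan({a['node']},{b['node']})"
            yield f"xconnect {a['pod']}.{a['iface']} <-> {tun}"
            yield f"xconnect {tun} <-> {b['pod']}.{b['iface']}"

for xc in render(chain):
    print(xc)
```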
Okay, let's go on to SRv6. By default, if you deploy the Contiv-VPP CNI, you have IPv4 networking. You can switch that to IPv6, and then you can optionally enable SRv6. Once you enable SRv6 in an IPv6 deployment, what you get by default, automatically, is that instead of the VXLAN tunnel overlay between the nodes, you get an overlay with SRv6. Whenever a pod on node 1 needs to communicate with a pod on node 2, the packet is steered into an SRv6 policy based on the destination IP subnet of that node. The SRv6 policy has a segment list containing just one segment, which identifies the other node that the packet needs to traverse to. Between the nodes, the packet is encapsulated with the SRv6 header. When it arrives at the proper node, it hits the End.DT6 local SID function, which does the SRv6 decapsulation and a lookup in an IPv6 table, and then forwards the packet towards the destination pod.

The other thing that you can optionally enable in Contiv-VPP, if you have an SRv6 deployment, is a Kubernetes services implementation with SRv6. For IPv4, we implement Kubernetes services like most of the CNIs, using network address translation plus load balancing. With SRv6, we can actually get rid of the network address translation. The way we do it: whenever a pod needs to talk to a service, which is actually a set of backend pods, we want to somehow load balance between those pods. So let's say pod1 wants to communicate with some service endpoint, and in this case the endpoint can be either pod2 on the same node or pod2 on the other node, behind a cluster IP Kubernetes service, so a virtual IP address. When the packet from pod1 reaches VPP, we steer it into an SRv6 policy based on the destination IP address, the cluster IP. The SRv6 policy in this particular case has two segment lists: one is the path towards pod2 on the same node, the other is the path towards pod2 on the other node, and we basically load balance between those two segment lists. In the same-node case, the segment list has just one segment: the local SID that does the SRv6 decapsulation and a cross-connect towards the pod2 interface. If the packet is load balanced to the other segment list, that one has two segments: the first is an End local SID on node 2, and the other one forwards the packet towards the pod2 interface.
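A sketch of that service case in Python, with hypothetical SIDs: the cluster IP steers into one SRv6 policy that holds a segment list per backend, and the load balancing is just a choice among the lists (typically flow-hash based in a real data plane), so no NAT is involved.

```python
import random

# Hypothetical SIDs for a cluster-IP service with two backends.
LOCAL_DX = "fc00:1::d2"          # same node: decap + cross-connect to pod2
NODE2_END = "fc00:2::1"          # other node: End SID on node 2
NODE2_DX = "fc00:2::d2"          # then forward to pod2's interface there

# One SRv6 policy per cluster IP; one segment list per service backend.
policy = {
    "cluster_ip": "2001:db8:svc::10",
    "segment_lists": [
        [LOCAL_DX],              # backend on the same node: a single segment
        [NODE2_END, NODE2_DX],   # backend on node 2: traverse node 2 first
    ],
}

def pick_segment_list(policy, flow_hash=None):
    """Load balance across backends; real data planes hash the flow 5-tuple."""
    lists = policy["segment_lists"]
    i = flow_hash % len(lists) if flow_hash is not None else random.randrange(len(lists))
    return lists[i]

print(pick_segment_list(policy, flow_hash=0xBEEF))
```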
The third thing that we can do with SRv6 is service chaining. I have shown the service chaining based on cross-connects, and this is another way of doing it, with SRv6. We implement only snake-based service chains: whenever traffic needs to traverse multiple CNFs, we always go back to the VPP vSwitch and then to the next CNF, never directly from CNF to CNF. That is a current limitation of the Contiv-VPP CNI. And when you want to use SRv6 for service chaining, you use exactly the same APIs as I have shown for the L2 cross-connect service chain. So how does it work with SRv6? Let's say this is the chain we want to achieve: we have one CNF pod which acts as input, and whenever a packet comes out of that CNF, we want that packet to go to the CNF1 pod, from there to the CNF2 pod, and from there to the CNF-output pod. This is how it is rendered into network configuration with SRv6 in Contiv-VPP.

First we steer the packet, say based on destination IP, into an SRv6 policy, which in this case is a little more complex: it contains a single segment list with multiple segments, and each segment does its job in this path. The first one is the End.AD local SID, which pretty much takes the SRv6 header off the packet and forwards it to the CNF1 pod. When the packet comes back from CNF1, it puts the SRv6 header back on the packet, and then the packet goes to the next segment, which lets it traverse to the proper node: that is the End local SID on node 2. From there it goes to another End.AD: again there is decapsulation from SRv6, the packet goes to CNF2, and when it comes back, it is encapsulated again and goes to the next segment in the list, which is eventually the End.DX6 local SID, which just forwards the decapsulated packet to the CNF-output pod.

One nice thing that we can do with the SRv6 rendering of service chains, and cannot do with cross-connects, is a multi-path rendering of the chain. In case we have multiple pods that match the pod selection criteria defined in the SFC API, we can create multiple chains and load balance between them. And of course, all of this works even multi-node, even if none of the CNFs runs on that particular node: if the input pod sits on node 3, the traffic is steered into an SRv6 policy there, goes to the proper node with CNF1, or CNF1-1 or CNF1-2, and then through the rest of the chain. Everything works dynamically: if you, let's say, shut down one of the nodes, the CNF is rescheduled on a different node and the forwarding through the chain still works.
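Conceptually, the End.AD dynamic-proxy behavior that makes this work for SR-unaware CNFs can be sketched like this; the data structures are invented for illustration, and the real implementations live in VPP and in a Linux kernel module.

```python
# Conceptual sketch of the End.AD (dynamic SRv6 proxy) behavior used for
# SR-unaware CNFs: cache the outer headers, hand the inner packet to the
# CNF, and restore the headers on the way back. Types are invented here.
from dataclasses import dataclass

@dataclass
class SRv6Packet:
    outer: dict          # outer IPv6 header + SRH (sid_list, segments_left, dst)
    inner: bytes         # the original packet the CNF understands

cache = {}               # per-interface cache of the stripped encapsulation

def end_ad_towards_cnf(pkt: SRv6Packet, cnf_if: str) -> bytes:
    """Strip the encapsulation and remember it, then hand the inner packet over."""
    cache[cnf_if] = pkt.outer
    return pkt.inner

def end_ad_from_cnf(inner: bytes, cnf_if: str) -> SRv6Packet:
    """Re-attach the cached encapsulation and advance to the next segment."""
    outer = dict(cache[cnf_if])
    outer["segments_left"] -= 1
    outer["dst"] = outer["sid_list"][outer["segments_left"]]
    return SRv6Packet(outer=outer, inner=inner)
```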
Okay, this would be it, so thanks. And now we'll take a look at how we can accelerate SRv6 with SmartNICs.

Okay, can you hear me? The microphone works. Okay, thank you. I want to talk to you about the accelerator. This conference is mostly about software, but I want to talk about how we can use hardware to make the software work faster, to make our implementations faster. This January I visited our spiritual home, the Computer History Museum in Mountain View, where I found the first network card ever created; it is like on this picture, though maybe it is not very clear. And I use this one to explain what our accelerator with the FPGA is: it is hardware doing essentially the same job as this first card. We connect the network interfaces, and we try to get some traffic to the host. Our accelerator can perform multiple functions; by default it behaves like that first card, it does nothing special, but with this accelerator we can do some more complex things. Why should we do more complex things? Because, for example, our colleagues from Cisco are implementing a lot of new protocols, which create bottlenecks in packet processing. Ahmed explained to us what SRv6 looks like, and normally, with a normal card without any acceleration, the host has to do all the packet processing: parsing the headers, making decisions on the headers. With every header we have to make a forwarding decision, we have to do a lookup, then the next lookup, and the next lookup.

Doing this work on the server is quite complex, even for very good software like VPP, and that is the reason to bring in accelerators. In our model, we connect two VNFs, or two containers, through VPP, and we want to accelerate the processing of the SRv6 segment routing traffic. The problem with packet processing, because of the number of lookups we do in software, is that shorter packets mean more of a problem and a bigger impact on performance. Of course, when we only carry video traffic, for example streaming YouTube, it is not a big problem. But when we go to the world of IoT, or the short packets of voice, this packet processing problem becomes bigger and bigger.

So what do we do in this operation? This picture presents how we work without the accelerators: there is no acceleration inside the card, the SRv6 processing is done in the VPP router, and nothing more happens; our accelerator card works like a normal NIC. In the accelerated model we have exactly the same setup, but we push down part of the processing. I don't say we push everything: the FPGA and the hardware accelerator in this model are not something that can do everything. The accelerator still works with the VPP router; the point of the acceleration is to make the job of the software VPP router faster, to limit the number of lookups by doing them in the hardware. An additional problem that we solve with this model, by not connecting the VM or the VNF or the container directly to the FPGA, is management: our card, our FPGA, is hidden inside VPP in this model. So from the user's point of view, whatever software you use for management, like Contiv-VPP, which is a very good example, the FPGA is invisible to the software and to the user. We accelerate VPP, or some functions of VPP, not the overall data path. Also, our FPGAs today are quite limited in space; of course we couldn't put the whole Internet into the FPGA, and this also helps us limit the acceleration to where it is really necessary, to the functionality that is really necessary and most useful.

What do we achieve that way? Here are some charts showing what we can achieve. The gray line is what we can achieve with pure software, without the acceleration. You can see that, because of the number of lookups and the complexity of the packet processing, to make the processing efficient we have to use many, many cores. You see that, for example, to achieve 36 Gbps with 192-byte packets we need to use 12 cores. Intel CPUs have, for example, 28 cores, and it is not a very efficient way to work, using 12 cores out of 28 just to do packet processing and nothing more; that is just for the infrastructure.
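For a rough sense of why short packets hurt, a back-of-the-envelope calculation (my own numbers, not from the slides): at a fixed bit rate, shrinking the packet size proportionally increases the number of packets, and thus header lookups, per second.

```python
def packets_per_second(gbps: float, packet_bytes: int) -> float:
    """Packets (and thus header lookups) per second a link rate implies.

    Ignores Ethernet framing overhead; good enough for an estimate.
    """
    return gbps * 1e9 / (packet_bytes * 8)

# 36 Gbps of 192-byte packets vs. the same rate of 1500-byte packets:
print(f"{packets_per_second(36, 192) / 1e6:.1f} Mpps")   # ~23.4 Mpps
print(f"{packets_per_second(36, 1500) / 1e6:.1f} Mpps")  # ~3.0 Mpps
```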
With our accelerator, we can achieve 44 Gbps using four cores, or 50 Gbps with six cores. All the other cores are freed for the users, so the users can use those CPU cores efficiently to make their own processing faster, or much more efficient, while keeping the same flexibility that SRv6 and VPP deliver. This chart presents one more example of what we are doing: we can achieve a saving of eight to ten cores this way. Intel, unfortunately, does not provide this solution directly. We are working with our partner HCL, a company from India, who delivers and tests the solution for our end customers. What Intel really does is deliver the hardware, with which you can do this SRv6 operation and many, many other examples of acceleration. Okay.

Again, thanks Rasto and thanks Marek. Just to conclude: as we said, Kubernetes does not provide any solution for networking. You need to pick the CNI plugin that you will use to do the networking, and CNI plugins, as we said, provide two main functions in Kubernetes: connectivity and reachability. We need IPv6 to provide IPs for these containers, the Kubernetes pods, and we believe that SRv6, by leveraging the IPv6 data plane, can handle various Kubernetes use cases in a simpler and more scalable way. We gave in this talk two examples of SRv6 support: in Contiv-VPP, as a CNI plugin, and in accelerating the SRv6 processing using SmartNICs, in order to free, let's say, more cores in your servers for application workloads. And with this, thank you, and we are ready for any questions.

How much does SRv6 add to the packet header? Because I saw some notation, I don't know if it was in this slide or a few slides ago, a 96-byte header. Is that accurate?

No, actually that was this one, 192; that was the size of the packet. So if you're asking about the SRv6 encapsulation, it basically depends on the use case. So if I go back to the overlay... You actually need to repeat the questions. Okay, so the question was how much overhead SRv6 adds to the packet, how many bytes SRv6 needs as overhead added to the packet. For most of the use cases, when you want to do just overlay, it is basically just an outer IPv6 header, so it's the same as IP-in-IP encapsulation: you add just an IPv6 header. If you want to implement some more advanced use cases like traffic engineering, where you need to send the packet through several nodes, then you will also need an SRv6 header, sized by how many nodes need to process the packet.

How many bytes does each node add? Well, actually it's not per node; it's how many nodes you need to address in the packet. If I'm sending traffic from X to Y through 1, 2, 3, you need to add a segment for each node, and each segment is an IPv6 address, because the nodes are addressed by IPv6 addresses. So each segment adds 16 bytes to the extension header.
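As a worked example of that overhead answer, using the standard header sizes rather than figures from the talk: the outer IPv6 header is 40 bytes, and when a segment routing header is present it adds a fixed 8 bytes plus 16 bytes per segment.

```python
def srv6_overhead(num_segments: int, with_srh: bool = True) -> int:
    """Encapsulation overhead in bytes: outer IPv6 + optional SRH."""
    outer_ipv6 = 40                      # fixed IPv6 header size
    if not with_srh or num_segments <= 1:
        return outer_ipv6                # plain overlay: same as IP-in-IP
    srh_fixed = 8                        # SRH fixed fields
    return outer_ipv6 + srh_fixed + 16 * num_segments

print(srv6_overhead(1, with_srh=False))  # 40 bytes: overlay only
print(srv6_overhead(3))                  # 40 + 8 + 48 = 96 bytes: TE via 3 nodes
```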
Okay, the question was more about how much of the original packet you preserve if it goes out of a container: do you preserve the Ethernet header and the original IP header? Well, it also depends on the use case. There are two ways to encapsulate the traffic: if you need to implement some layer 2 services, you can encapsulate the whole layer 2 frame inside the packet; but if you need just layer 3 services, a kind of layer 3 VPN, you basically encapsulate just the layer 3 traffic.

And is that IPv4 or IPv6? SRv6 defines different models for the application in the container: the application can be SR-aware or SR-unaware. When the application is SR-unaware, the VPP just strips the SRv6 header, and when the packet is sent out, it adds it back, so the SRv6 tunnel is not visible to the application. But when the application is SR-aware, it can actually receive the packets with all the encapsulation. It depends on which option is added here; it is configured in the routing table of the VPP. For these proxy operations, basically, when you implement service chains, your network function, your CNF, can be SR-aware, so it can process or skip this encapsulation and process the packet. Say you have a firewall that needs to apply some rules to the original packet: either the firewall is aware of the encapsulation and can apply the rules directly, or it is unaware, and in that case you need to strip off the encapsulation, and this is where the Intel card can do it at a higher processing rate.

Time is out, so I will take it offline, but I can repeat the question. So basically the question was: if the application is not SRv6-aware, will there be a way, a layer between the application and the network, that handles the encapsulation. That's true: if your application is not aware of the SRv6 encapsulation, there are three different ways of doing this kind of proxy between the application and the infrastructure. But if your application is aware, it will process the encapsulation directly.

There was some discussion recently that Cisco had proposed a bunch of non-standard SRv6 extensions that Linux has not yet implemented. Does this all work with the standard behaviors, or does it require some of the special extensions? Well, I'm not sure which part exactly you are referring to, but from the slide here, all of these behaviors, at least up to here, are supported in the Linux kernel. The proxy behaviors are currently implemented as a Linux kernel module.