Good afternoon. Mic check, thank you. Good afternoon, and thank you for joining this session. We should get started because we've got a full agenda. My name is Azhar Saeed. I'm one of the chief architects in the Red Hat telco team, and joining me today in this presentation is Doug Smith. Doug and I are going to talk about scaling NFV: are containers the answer? Let's explore that particular question.

But before we get into the presentation, there are a lot of people we have to acknowledge for their work. Let's start with Dan Williams. Dan, is he in the room by any chance? No? Rashid, is he in the room by any chance? No? OK, both of them were incredibly helpful in creating a POC for what we are going to talk about today, and I will share that POC with you. Then there's another gentleman, Tomo, who is right here in the room; he's done some work with container connectors. And Ajay Simha, who helped me with some of these slides. I don't know if he's in the room either. And there he is, raising his hand. So thank you very much. Thank you, Tomo. And of course Doug, who has done the actual demo itself, and we're going to try to show you that demo today.

Let's quickly go into the agenda. We'll do a quick background on how VNFs are deployed and what the typical use cases are. It's a very short introduction, because this is part of the telecom NFV operations track and I want to make sure everybody is on the same page. Then we dive straight into containers: what do containers do, how do they help, how well do they scale? Then we'll ask scale questions from several different angles and see how much of the problem containers solve and what open questions remain. Then we'll show you a POC and a demo of how we are solving some of those problems. As a community, we'll need more people to participate to drive this in a certain direction, and we'll wrap up with a quick summary.

You have all seen the virtualization progression; these introductory slides are just to set the conversation up. You run applications, or VNFs, on bare metal or in virtual machines, and you can also run some applications in containers. The question we are going to explore through this presentation is: for NFV, are containers the right answer?

Let's take a set of different use cases: virtual CPE residential, virtual CPE business, SD-WAN, and on the mobile side voice over LTE, IMS, virtual EPC, or Gi-LAN. You can segment them from a wireline versus mobile perspective, and also from a consumer versus business perspective. You can look at the four quadrants and ask: where do I need the most scale? Where do I need the most sessions tracked? Where do I need which network-function capability to actually deploy my service as a telco? Those are the angles to look at for each of these conversations. For our practical purposes, we'll take the vCPE use case and look at the residential side of it. In the residential use case, you're aggregating thousands of subscribers, and you're looking at the different functionality being provided to those subscribers, whether it's an incremental set of services or a standard set of services, through virtualizing the BNG or virtualizing the CPE.
Now, what do you need to look at from a scale point of view? You need to look at the orchestrator, the SDN controller, the networking components, and the VNF managers, and all of these need to provide the capability and the service required to deliver thousands and thousands of sessions, with instantiation of these VNFs in the data center infrastructure or at the remote site. We are not going to deal specifically with that remote-site use case; there is another presentation on Thursday where we look at multi-site OpenStack. So this presentation won't explore multi-site OpenStack specifically; we'll look specifically at the scale conversation.

Let's take a step back and look at the NFV requirements. Those requirements are substantial. We need a really good understanding of what they are, how they map to virtual machines and the current deployment models, and then, more importantly, how they map to the container models going forward.

We need flexibility in IP addressing. This is an important topic. Everybody takes it for granted: yes, I can do DHCP, I have IPv6 SLAAC, I can do private IPs, public IPs, overlapping IPs, all those kinds of things. They take it for granted because it's available today; that's how they deploy these types of services for virtual CPE, for residential, for business, and so on. DHCP-based address assignment and management is likewise something you'll say is available, done, and working. That is a requirement for NFV. Assignment of multiple interfaces to a virtual machine: whether you're doing SR-IOV or DPDK, it doesn't matter, you need the ability to emulate that hardware and provide that capability inside a virtual machine. You need the same thing in the context of containers, but we'll talk about that when we get to containers. Multitenancy is important because you're sharing so many different services across thousands of subscribers, whether business or residential, and you need multitenancy capability. From a packet-forwarding perspective, you need NIC bonding, NUMA affinity, huge-page support for better latency, jumbo frames, and CPU pinning, because all VNFs are not equal. You have hybrid VNFs that run some portions in containers and some portions in VMs, you have mixed topologies, you have to do load sharing, and you have to have elasticity in configuration. So there are lots and lots of requirements for network function virtualization to work in an OpenStack and virtual-machine environment. And people have deployed this, people have made it work; people have gone through a lot of pain to get it working with the right level of performance. (A sketch of how some of these requirements are expressed in OpenStack today follows below.)
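For concreteness, here is a minimal sketch of how several of these packet-forwarding requirements (CPU pinning, huge pages, NUMA placement) are typically expressed for a VM-based VNF today, via Nova flavor extra specs. The flavor name and sizes are illustrative, not from the talk:

```bash
# Sketch: expressing NFV packet-forwarding requirements for a VM-based VNF
# with Nova flavor extra specs (flavor name and sizes are illustrative).
openstack flavor create vnf.large --vcpus 8 --ram 16384 --disk 40
openstack flavor set vnf.large \
  --property hw:cpu_policy=dedicated \
  --property hw:mem_page_size=1GB \
  --property hw:numa_nodes=1
# hw:cpu_policy=dedicated pins vCPUs to host cores; hw:mem_page_size backs
# guest RAM with 1 GiB huge pages; hw:numa_nodes keeps the guest on one node.
```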
Now let's look at some of the scale factors for those telcos. Scaling is a multidimensional problem; it's not just about the number of sessions. You can't say "I'm going to scale to thousands of sessions" and leave it at that. With that comes performance, QoS, service density, and orchestration scale. Management and troubleshooting scale is also important: how well will you be able to operate this? Are these VNFs visible to us? Do we have traceability for those particular VNFs? You also have to scale the whole deployment environment itself: how do you build a CI/CD pipeline to add new functionality and new capability for your existing subscribers? Those existing subscribers are your residential customers if you're a telco, or your business customers, or they could be your internal IT departments trying to deploy new capabilities on that same data center infrastructure. So it's not just about the size of scale, meaning how many; it's also the speed of scale, meaning how fast you can do these things.

Let's take our residential-services example, the vCPE, and see what happens. Now, we all know the virtual BNG. If you're a service provider, a telco, you've deployed BNGs, you've provided residential services to those customers, and the number of IP sessions a BNG can hold with QoS to terminate those residential subscribers has been tuned over a long period of time. You can now get dedicated hardware boxes in half-rack heights that provide half a million sessions, or 300,000 sessions, easily. When you enable all the bells and whistles with respect to QoS and the rest of the functionality, maybe you cut that number down to about 100,000 to 200,000. Again, it varies by vendor; some vendors will say they can support half a million with QoS and so on. Sure. What you also have to look at is: what's the size of that box? What's its footprint? What bandwidth can you serve across that number of subscribers? And what is happening at the edge? If you look at the access networks, with GPON the subscriber bandwidth is rising: you're going from 100-megabit or single-gigabit connections to tens of gigabits.

Then you do the averaging and ask: how many connections are active, and how many subs per connection? For those of you who have designed mobile networks: even when you designed for LTE, the average bandwidth per sub you assumed from a design perspective was much smaller than what you typically advertise for LTE. You do the same math here. You apply oversubscription, design the network, and ask: how many sessions do I need to serve simultaneously? From that, you translate into how many virtual machines you need, and from the number of virtual machines into how many servers you need. And you can very quickly see that if you have just 50,000 active subs, with all of the QoS and all of the capabilities, today you can easily fit those 50,000 subs in a quarter- or half-rack dedicated box. If you want to do that in virtual machines, how many vCPUs do you need? How much memory? How many machines do you need to scale that virtual BNG up? BT ran an interesting test and found they could not support more than about 3,000 subscribers on a server with all the bells and whistles enabled. Now, if you want to do 50,000, 100,000, or 150,000, you're talking about a footprint that increases very rapidly.
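To make that footprint math concrete, a back-of-envelope sketch using the roughly 3,000-subscribers-per-server figure quoted above and the 50,000-subscriber target from the talk; everything else is simple arithmetic:

```bash
# Back-of-envelope vBNG sizing from the figures above.
ACTIVE_SUBS=50000        # simultaneous sessions to serve
SUBS_PER_SERVER=3000     # per-server capacity with QoS and features enabled
SERVERS=$(( (ACTIVE_SUBS + SUBS_PER_SERVER - 1) / SUBS_PER_SERVER ))
echo "servers needed: ${SERVERS}"
# -> servers needed: 17, versus a quarter- to half-rack dedicated box today.
```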
So then what do you do? Can you scale that across many VMs, keep scaling horizontally, and throw more hardware at the problem? Well, no. What happens is that pretty quickly your cost efficiency, and all the economics we were expecting from virtualization and from NFV, go out the window.

So, enter containers. The question becomes: can containers help address that scale problem? First, let's understand what containers are. A container is basically a software packaging concept that includes an application and all its runtime dependencies. You get lower virtualization overhead, a lower memory footprint, and near-instantaneous restart times, potentially much faster than a virtual machine would give you. You get perhaps even lower latency for inter-container communication through IPC, because of the shared memory model. You can potentially pack higher density, because a container runs as a process on that hardware. You can encapsulate various microservices in containers. Portability is available: if you use a standard container format, you can use standard container tooling, like Docker containers orchestrated via Kubernetes. You get a deterministic packaging model, and you can accomplish reasonable isolation with namespaces and cgroups.

Let's do a quick comparison of containers and VMs before we get into NFV with containers. As we all know, VMs require their own guest operating system and run on top of a hypervisor, so you carry the overhead of the guest operating system inside each virtual machine. A container, on the other hand, runs on the host itself. You can have something like the Docker engine that behaves, quote-unquote, like a hypervisor, but containers actually run natively as processes on the host; the Docker engine provides certain capabilities and separation, and there is no hypervisor there. You have isolation through namespaces. Containers are considered much lighter weight, and I have some specific examples to show you the difference between a VM and a container for exactly the same metrics. You can orchestrate these containers with Kubernetes, and the typical expectation is that containers scale better than 10x relative to VMs. (A quick way to see the "a container is just a process" point for yourself follows below.)
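If you want to verify the "a container is just a process" point yourself, a quick sketch (image and container names are illustrative):

```bash
# A container is a plain process on the host: no hypervisor, no guest OS.
docker run -d --name vnf-demo alpine sleep 3600
PID=$(docker inspect --format '{{.State.Pid}}' vnf-demo)
ps -fp "$PID"                 # the container's PID 1, in the host process table
sudo ls -l /proc/"$PID"/ns    # the kernel namespaces providing its isolation
```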
Then the question becomes: do you run containers natively on the host? Do you run them inside a virtual machine? Do you run containers and virtual machines together? Well, the answer is that it depends on your environment and what you are migrating from. Do you have those network functions natively available to run as containers? Have the network functions been re-architected so you can run them as containers appropriately, and do you have all the capabilities to run them? Maybe you go to a hybrid model. I was talking to a group of service providers in a room, and they turned around and told me: look, it's very hard for us to tell our VNF vendors to provide containers for those VNFs. I asked why. Because the vendors have done so much tuning of their VNFs and the guest OS that ships with them as part of the virtual machine that they're having a hard time standardizing on a common platform and carrying all those tunings across. I said: well, you know what, please introduce us to them. We would like to talk to them. We'd like to figure out a way to put those optimizations in the host OS and provide that capability there. But it was sad, in a way, to see a service provider resigned to what a VNF vendor was offering, rather than turning around and saying: hey, if you don't do it, I'll go to somebody else who will. So VNF re-architecting becomes important.

So now let's explore the idea of containers and VNFs: where the problem set is, what we are doing, and how we are doing it. First, a lot of people look at containers as if they were VMs; they use a container like a virtual machine. The question is: does that really work? What are the challenges in making it work, and what are we going to do about them? You can leverage the dockerization of functions: some network functions are already available as Docker images, and you can pull them from a Docker repository or library and start using them right away. How well do they scale? Do they perform as well? We don't know; you may have to do some of that testing and some of that work. What you're really doing with a VNF that has already been dockerized or containerized is using it like a VM, lock, stock, and barrel. In other words, the VNF has not been re-architected to a microservices or container model, so you're limited to whatever capabilities it exposes, based on how somebody wrapped the existing application into a container. It's intuitive to apply, so people always assume: yeah, sure, it'll work.

Let's now revisit the requirements we put out and ask: what happens when you run them with containers? What works and what doesn't? I've put check marks and cross marks against them. Flexibility of IP address management: anybody who has played with Docker and Kubernetes will tell you that their IP addressing model is very rigid. What does it do? It creates a default gateway and does automatic address assignment every time you restart a container. You can run in host mode, where you're exposing all your host interfaces to the containers; in bridge mode, where things are isolated behind a bridge; or in macvlan mode. Whichever of those modes you choose, it's one interface per container, the private addressing is all predetermined regardless of which bridge you create, and it's always NATed. So the flexibility of IP addressing, private IPs, overlapping IPs, multiple interfaces: that doesn't work. You have to get into the code to make it work, and part of what Doug is going to show you today is how we do that. Multitenancy and management overlays you can perhaps accomplish using Kubernetes. Then there are things like NIC bonding, NUMA affinity, and vCPU allocation. Because remember, all of these VNFs are not created equal; all workloads are not created equal in this context. So what do you have to do? You need the ability to assign CPUs to certain VNFs, and if that VNF is running in containers, you now need those containers pinned to certain CPUs. There are VNFs out there that require eight cores or sixteen cores, and if you have containers running those VNFs, you need to be able to assign multiple CPUs to a single container for the processing power that set of VNFs needs. (Both the rigid default networking and the per-container CPU controls are easy to see from the Docker CLI; see the sketch below.)
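Both points, the rigid single-interface default and the per-container CPU controls that do already exist, can be seen from the Docker CLI. A sketch (image and container names are illustrative):

```bash
# Default bridge mode: exactly one eth0, with an address Docker chooses,
# NATed behind the docker0 bridge. There is no stock flag that adds a second,
# administrator-addressed interface.
docker run --rm alpine ip addr show

# CPU pinning does exist per container: restrict this one to host cores 2-3.
docker run -d --cpuset-cpus="2,3" --name pinned-vnf alpine sleep 3600
```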
Hybrid VNFs you can probably make work in a container model, again, if you're not too worried about the networking model itself. Mixed topologies you can also make work. Then there's load sharing and scale: how do you orchestrate that set of containers, and how do you specify all of this? See, Docker and Kubernetes do a great job of masking that complexity, because for deploying a web app that masking is exactly what you want; it's awesome for that type of application. But when we go to network function virtualization, it's quite the opposite. You need traceability, packet traceability, the ability to understand what's happening and where things are getting dropped.

One of the telcos I was talking to put it this way: maybe what I can do is not worry too much about all of these data-plane issues and just take some of the control functions and containerize them. Sure, you can do that. They pointed out that some VNFs are control-plane heavy and some are data-plane heavy. Wherever there's no requirement for multiple interfaces and multiple addresses, and all you're worried about is signal processing and some CPU allocation, maybe those VNFs can be led to containerization a lot faster and more easily. Yes, that's an approach. But remember, we started this conversation with scaling VNFs. If you're not going to look at VNFs holistically, re-architect them, and containerize them appropriately in a cloud-native manner, then it's not going to work; it's not going to help you. That's an important point.

Let's do a quick sizing comparison. Here's an anecdotal example from our lab. We ran a simple stock image of the VyOS distribution that's available online: you download it, give it a minimal configuration, and run it on a hypervisor. The amount of memory it consumes is about 387 MB per instance. I also have a containerized version of the same VyOS, and it consumes about 34 MB per instance. I can fire up 10 or 20 VyOS instances with a simple script, and they come up in less than 10 seconds. So from that point of view it's definitely a much smaller footprint and much faster, and I can restart one in a matter of two to four seconds and have the whole thing running. I haven't measured, for example, BGP reconnect time when I kill a VyOS container and bring it back up, but I'm sure people have, and if you go and measure it, it is again much, much faster. The size of the container does change a little depending on how much you configure: when I configured a whole bunch of BGP and IGP sessions, I could see the container size growing a little, but it was still much, much smaller than what you typically see. So you can easily achieve a 6x to 10x sort of density in the same memory and CPU footprint with containers; at least, that's what we saw. (A sketch of the kind of launch script we used follows below.)
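The kind of script we used is trivial. A minimal sketch, assuming a locally built containerized router image tagged vyos-container; that image name is an assumption, not something published:

```bash
# Sketch: launch 10 router containers and compare per-instance memory
# (the ~34 MB/instance figure quoted above). Image name is hypothetical.
for i in $(seq 1 10); do
  docker run -d --name "router-$i" vyos-container
done
docker stats --no-stream --format 'table {{.Name}}\t{{.MemUsage}}'
```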
Let's take a look at forwarding performance. For forwarding performance with containers, you can use namespaces to isolate functions, network namespaces so containers see their own individual set of resources, but kernel forwarding, the kernel stack, becomes incredibly important in this situation, and you can use software such as macvlan. These days there's also work on making OVS work with container networking, and there's an interesting conversation going on around DPDK acceleration for it. (A hand-rolled sketch of the macvlan approach appears at the end of this section.)

For orchestrating that set of containers, you can use Kubernetes. Kubernetes has proven it scales from an orchestration perspective for different apps, in enterprise IT as well as at the over-the-top providers, whether that's Google, Facebook, or somebody else; Red Hat also has a lot of exposure to Kubernetes orchestrating thousands and thousands of containers. Scaling the number of pods is, again, something Kubernetes handles fairly well: you can see hundreds of nodes and 3,000 pods for a VNF deployment. And then, of course, you can use something like Kolla-Ansible to containerize the OpenStack control processes as well.

In the interest of time, I'm just going to touch briefly on the control functions of OpenStack and how they can be containerized. You can run OpenShift or Kubernetes on top of OpenStack. You can use something like Kuryr to connect to the Neutron interfaces and manage those containers through Neutron as part of that API. Magnum will let you manage container orchestration engines as an OpenStack-native service. And Kolla will let you containerize the OpenStack control plane itself. So with this container orchestration model, you can not only containerize the OpenStack control functions, you can use the same Kubernetes model to manage the containers running virtual network functions.

And then the last point before we get into the POC and the demo is subscriber service chaining. For residential, business, or mobile services, you need to take these VNFs and stitch them together. How does service function chaining work in the container context? If the VNFs are on the same host, you can use IPC to stitch them together, and you can build an orchestrated service chain through Kubernetes itself; that's what we're going to show you in one of the POCs a few slides from here. When you look at different hosts, you can certainly use the same standard model of VLANs and VXLANs and map the chain over those.

Let me quickly walk you through this. Remember the whole set of NFV requirements I showed you in the beginning; now we can map those requirements onto containers. We need multiple interfaces; that's important, and the Docker and Kubernetes model today does not provide it, so we need to make that happen. You need physical NICs and SR-IOV interfaces associated directly with containers. You need DPDK-enabled applications to work in that environment. We need flexible IP addressing to work: a reasonably flat architecture with deterministic memory and CPU allocation models, and IPv6 support. And again, the list goes on.
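Coming back to the kernel-forwarding options mentioned above: attaching a macvlan interface to a running container's network namespace by hand looks roughly like this. The address and device names are illustrative:

```bash
# Sketch: give a running container an extra macvlan interface, bypassing
# Docker's bridge/NAT model. $PID is the container's host PID, e.g. from
# `docker inspect --format '{{.State.Pid}}' <container>`.
ip link add macvlan0 link eth0 type macvlan mode bridge
ip link set macvlan0 netns "$PID"
nsenter -t "$PID" -n ip addr add 192.0.2.10/24 dev macvlan0
nsenter -t "$PID" -n ip link set macvlan0 up
```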
With the help of one of our product managers, I put together an interesting epic to define all those requirements, prioritize them, and then see how much of that capability we can develop and commit upstream.

So, this is what we did as a POC. We took a set of containers, modified the code so that you can assign multiple different interfaces to a container, stitched the containers together into a service chain through orchestration, and then ran traffic end to end. This is, again, to prove out the virtual CPE architecture I started with. Internally it works like this: there's a firewall or virtual router sitting inside a container, using the kernel networking stack; DPDK is optional at the moment, and I don't think we've done that work yet. We assigned multiple interfaces to that container, with administrator-defined IP addresses instead of the default addressing model. Then there's service chaining through a veth pair to an SFC endpoint, again with an administrator-defined IP address, and of course another interface that talks to a management SDN plane.

These are the features it supports today: static IPv6 support, simple service function chains, and flexible addressing. There's no bridging or SDN involved, and no default Docker networking models. We do multiple interfaces per container, and we can assign a container to a particular CPU, the CPU pinning capability I described to you earlier. There's a lot of work that still needs to be done: NUMA affinity needs to work; IPv6 SLAAC, the automatic addressing capability for IPv6; dynamic service function chains; Network Service Header (NSH) work; and additional SR-IOV enhancements. Our goal is to get this upstream as part of one of the SDN plugins for Kubernetes, and that work is underway.

Now I'm going to hand over quickly to Doug. Doug and Ajay have been doing some work internally to prove out this multiple-interface model. Over to you, Doug.

Thank you, Azhar, appreciate it. All right, so here we're looking at an architecture diagram of what I'm about to show you in the terminal. So what is so important about what we're going to show here? Azhar touched on it: yes, you can use plain Docker, do a docker run, and assign multiple networks to a container, but you're not actually going to have isolation between these containers. So what we've set up as a demo is two container hosts, and on these hosts we're going to run containers, with Quagga as a cloud router between them. One side is OpenStack; if you're curious, this one in particular is a bare-metal host provisioned by Ironic. And then on AWS we're going to have an OpenShift instance running containers. Between these two instances we have a VXLAN interface that connects them, so we can route this over the WAN. In each of these containers you'll see interfaces with RFC 1918-style addresses between them, but we've also given them aliases to represent what might be real-world WAN IP addresses, so you'll see, from left to right, 1.1.1.1 and 2.2.2.2 going down to 4.4.4.4.
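Before the terminal walkthrough: conceptually, the stitching that the POC (and CoCo, described next) automates looks like this when done by hand. PIDs, interface names, and addresses here are illustrative:

```bash
# Sketch of what CoCo automates: a veth pair stitched between the network
# namespaces of two running containers, with administrator-defined addresses.
ip link add in1 type veth peer name in2
ip link set in1 netns "$PID_A"        # host PID of the first container
ip link set in2 netns "$PID_B"        # host PID of the second container
nsenter -t "$PID_A" -n ip addr add 10.1.1.1/30 dev in1
nsenter -t "$PID_B" -n ip addr add 10.1.1.2/30 dev in2
nsenter -t "$PID_A" -n ip link set in1 up
nsenter -t "$PID_B" -n ip link set in2 up
```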
So CentOS A and CentOS B in this diagram represent the endpoints we're going to route traffic between. For a simple example, think of them as some type of service that's going to consume, say, HTTP from nginx on CentOS B. We'll take traffic from CentOS A, hop across the two routers, which run OSPF between them, and route it to CentOS B.

So how are we accomplishing this, if it isn't technology that's available natively in, say, Docker today? We're primarily using an application called CoCo, written by my associate Tomofumi Hayashi, who's here with us today. What CoCo lets us do is connect either veth or VXLAN interfaces into the network namespaces the containers are using. We've also implemented this preliminarily as a CNI plugin; CNI is the Container Network Interface used by Kubernetes and other systems. So what does CoCo do? It takes your two containers and connects them. CoCo, for "container connector," is how we got the name.

Moving on to the demo, I'm going to switch over here. Looking at the terminal, I've got two tabs open. This one, the one that says "doctor octagon 4," is the CentOS host; it's bare metal provisioned by Ironic, just a Docker host. If I run docker ps, I see two containers running here, a CentOS A and a Quagga A, wrapping at the very far right of the line; not exactly the best display. Those represent the first two containers on the left of the architecture diagram. And this host here, with the funny hostname, is our OpenShift host. If I run oc get pods, which is an OpenShift command, I can see that I have a CentOS B and a Quagga B instance.

So let's look first at CentOS A and its interfaces. You'll see there are two: a loopback and an interface called in1. That in1 is a veth connection to Quagga A, paired with this interface over here inside Quagga A. Then we have the mid1 interface; ip link show mid1 shows it's a VXLAN interface, and you can see its remote is the WAN IP address going out to AWS. Reciprocally, over here on OpenShift, if I look at Quagga B, we have the VXLAN pointing back at the other end, and we also have the veth connection between Quagga B and CentOS B. So we can stitch either locally on the machine or over the network, as needed.

To show it in action, we ping from CentOS A, which is numbered 1.1.1.1, routed over to CentOS B, which we've numbered 4.4.4.4, and there we go: we've got traffic coming across. And here's a traceroute as well, so you can see it hopping across the two Quagga instances.

This is also available on GitHub: the CoCo container connector; Ratchet, the CNI implementation of this; and an Ansible playbook we call ZebraPen for the containerized Quagga (a quagga being a type of zebra; I believe it's extinct). So you can spin this up yourself and take a deeper look, should you so please. So that's what I've got. Thank you, guys.

Thank you, Doug. Thank you, Doug.
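The cross-host leg Doug described, the VXLAN interface whose remote points at the other host's WAN address, can be sketched the same way by hand. The VNI, addresses, and device names here are illustrative, not the demo's actual values:

```bash
# Sketch of the cross-host leg: a VXLAN tunnel endpoint pushed into a
# container's network namespace, its remote set to the far host's WAN IP.
ip link add mid1 type vxlan id 11 remote 198.51.100.7 dstport 4789 dev eth0
ip link set mid1 netns "$PID"
nsenter -t "$PID" -n ip addr add 10.2.2.1/30 dev mid1
nsenter -t "$PID" -n ip link set mid1 up
```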
So what you saw there were containers sitting in AWS and containers sitting here on a local host, in a live demo, and each container had multiple interfaces assigned to it. We are not using the typical Docker networking model.

So, to summarize the challenges you see in using containers for VNFs: we didn't talk about security at all; that's probably a topic for another session at the next OpenStack Summit, if that's of interest to people. What's the risk of running on the kernel stack? What optimizations are needed? How do you do namespace isolation and cgroups isolation? How do you achieve multitenancy? How do you achieve the network-specific functions when you're now intruding into the host TCP stack? And in terms of OAM, do we need to redefine some of those architectures? These are questions we need to explore further. But overall, we do believe containers can achieve much better scalability than typical VMs. They need to be designed for it in order to reach that level of scalability; you can't take a default model, say "I have a container image, I'll make it run," and expect it to run optimally. They do provide a much smaller footprint. Some VNFs may need to be rewritten to take account of the microservices architecture and model to run well in containers. Verizon, for example, gave an interesting presentation yesterday where they containerized a whole set of services on the micro-CPE that was part of the keynote and also part of a session. That's interesting, and I think that's the model a lot of people will move toward in the future. You can use Kuryr to interact with OpenStack's native services. And we still need more work on things like NUMA, dynamic service chains, and more flexibility in IPv6 address assignment.

So that's what we had for you; I hope it was interesting. We have, I think, five minutes for questions, and we'd be happy to take any.

I wanted to ask Doug about the demo, about the containers CentOS A and CentOS B: who was placing the routes toward the service for the chaining? Was it CoCo?

The routing itself was determined by OSPF in Quagga, but the actual chaining of interfaces between one another is CoCo.

So the veth is put in place by CoCo itself?

Yeah, that's correct. And in this case, where we provisioned it with Ansible, we run CoCo from the CLI and say: hey, CoCo, I want to create a veth pair between these two network namespaces, with this IP address assignment, and we state that statically. Otherwise, if we were doing it with CNI, during the interface add or interface delete calls, you could potentially do it in a more dynamic way.

Right, that was the follow-up question I had: there's a lot of discussion now about the multi-network proposal in Kubernetes, to be able to define those things in a declarative way so that the CNI plugin could pick them up and set that up, because if I understood correctly, you're currently calling CoCo out of band.

That is correct, that is correct. So one of the things we're also looking into is enhancing the SDN plugin itself.
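For reference, the declarative form a CNI plugin consumes is a small JSON network configuration dropped into /etc/cni/net.d/. A minimal macvlan example; the name, master device, and subnet are illustrative:

```bash
# Sketch: a declarative CNI network definition (names and subnet illustrative).
cat > /etc/cni/net.d/10-wan-net.conf <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "wan-net",
  "type": "macvlan",
  "master": "eth0",
  "ipam": {
    "type": "host-local",
    "subnet": "192.0.2.0/24"
  }
}
EOF
```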
So the POC that I showed you earlier, before Doug jumped into the CoCo-specific demo, takes an approach that is much more in line with what is being discussed in Kubernetes. Unfortunately, I didn't have something to show you live, but it is available. If anybody's interested, come and talk to us, and we'll be more than happy to work with you closely.

Yeah, well, thanks. Thank you. Yep.

I hope we still have a few minutes.

Sure, two minutes according to my clock.

Okay, so my question is: as you know, in Kubernetes you cannot spin up a container with multiple interfaces.

That's right.

We probably need to use, at least as far as I'm aware, something like Multus CNI.

That's correct.

But again, there are limitations, like how do you make it work with SR-IOV and things like that?

That's absolutely right.

So in this demo, what was used? These containers, especially your Quagga, which had multiple interfaces: were they spun up using Kubernetes and some special CNI, or was it done manually, running scripts after creating the containers, outside of Kubernetes, out of band?

Two things. One, let me just explain: I showed you three slides on the POC, the proof of concept. That is primarily via Kubernetes, and it was not shown today; that's the approach we will take and move forward with. What was shown today used CoCo. Doug, maybe you can just walk through it.

Absolutely. In this particular demo, we spun up the containers and then ran CoCo essentially manually, with Ansible. However, the Ratchet CNI plugin that I mentioned uses the same exact concept as Multus CNI, which is calling the delegate add functionality, allowing you to specify multiple plugins. Essentially, it's sort of a meta plugin.

Yeah, it's a meta CNI, and it just chains them.

Right. Okay, so I understand that part. The second thing, which there is also a lot of debate about in the Kubernetes community, is CPU pinning and those aspects. As I saw in your presentation, is there any work you're planning to do on that?

The POC actually does have the ability to pin a container to a given CPU. That's the POC I spoke about, yes.

When will it make it upstream?

I don't know. I'm working with the developers to get it there, and I think it's part of the Kubernetes conversation. So yes.

And one final question from my side.

Okay, we're out of time, but go ahead.

For SR-IOV, again, as you know, we need to do some kind of bookkeeping for the VFs: how many VFs per node are allocated, how many are available.

That's right.

And retrieving them back. The SR-IOV CNI currently available in the community doesn't do that very well; at least it doesn't let you see how many VFs are allocated and how many are available. So is there work going on for that?

That's an enhancement we still need to do. That was part of my list of work that still needs to be done.

Okay. Sure. Thank you.

I may add that in Kubernetes somebody sent a patch for SR-IOV, and it will end up doing the management of the SR-IOV devices if you want it to.

Okay. Just to add, we are looking at the opaque integer resource feature to be able to do that, and we're doing some POC work with it. Maybe we can collaborate and work upstream on that.

Absolutely. Thank you. Thank you very much.