All right, good evening, everyone. Thanks for making it to the last session of the day. Hopefully we will make it interesting and informative. Let's go with the introductions. I'm Ganesh Narrajan, working for the AT&T Integrated Cloud design team. I have my colleagues with me. Please. Hi, this is Munish Mahan; I work for AT&T Cloud in development. My name is Trevor McKaslin, and I'm an upstream developer at AT&T. Thank you.

All right, so we've got a power-packed agenda today. We want to start with why there is a need for speed to begin with, and then how we are going to achieve it with the help of technologies like SR-IOV and DPDK. Within AT&T we have written a new service called VFD, which stands for Virtual Function Daemon, and we are going to talk about its architecture. We also wanted to give you an end-to-end view of how this whole orchestration can happen with the help of ONAP and OpenStack. After that, my colleague is going to talk about some of the internal details that need to happen on the compute node. We also want to show a demo where we were able to achieve speeds close to line rate, and we'll talk about some of the limitations and recovery options. Then we'll go through the implementation details and, finally, how we are working with the community to contribute this upstream.

So, the need for speed. We all have Wi-Fi routers at home, and typically when the speed slows down, especially in my house, the first person to get impatient is my five-year-old kid. He has now figured out how to switch over to the data plan, thereby giving me bigger bills. But if this happens in a house, imagine how critical the speed and latency requirements are for business customers. Traditionally, service providers built all their infrastructure with big monolithic black-box devices; we call those physical network functions. If a customer premise needs something like a router, we have to build a specific router, put the vendor software on it, and ship it all the way to the customer premise. Everybody knows this is time-consuming, very costly, and not scalable. So thanks to the open source community, with the help of the Linux Foundation and, more importantly, OpenStack and ONAP, we were able to move from the Domain 1.0 physical world to the Domain 2.0 virtual world. We were able to virtualize all our physical network functions into virtual network functions, aka VNFs. The real need for speed is for those VNFs, which are going to run on our cloud, the AT&T Integrated Cloud. We have transformed ourselves with the help of software-defined networking, and we have built the infrastructure with the help of OpenStack.

Now, the requirements for a VNF can be broadly categorized into three areas. You have the requirements coming from the network side: for example, they need high bandwidth, high PPS, quality of service, service chaining, port mirroring — you name it, they need it today. If you talk about storage, typically they need IOPS and potentially some locally attached storage with SSD or SAS. And if you talk about compute, they need CPU pinning. Why CPU pinning? They want to avoid the context switching that would happen if the VM's processes moved from one core to a different core.
They also want huge pages, to reduce the number of page-table lookups and make better use of the TLB (translation lookaside buffer). And then, of course, they want NUMA. NUMA is non-uniform memory access; the whole idea is that they want to access memory that is in close proximity to the core and avoid cross-NUMA-node access, which incurs overhead. They might need affinity to run a couple of virtual machines on the same host. And they also need migration — offline as well as live migration.

To satisfy all these requirements, we had to extend the default flavors that come out of the box in OpenStack and define a much more informative and meaningful flavor series for our customer needs. We identified three categories, which we call the network-optimized flavor series. NS is network-optimized SR-IOV: anybody who needs SR-IOV can create something out of this flavor series. The second one is network-optimized DPDK. And the third one is the regular, plain kernel vRouter — or, in some cases, OVS. So these are the three different types of networking a VNF would typically need, and they can achieve them using these flavors.

All right. In this slide, I'm just going to show you how the packet traverses the host for each of these three types of networking. On the left, the packet goes via the kernel vRouter — or it could be OVS — before the packet is sent out of the network card. With the kernel path you get a lot of overhead: there is an interrupt on the host hypervisor when the packet needs to be copied from kernel space to user space, and there is another interrupt at the VM level to read the packet. With all these overheads, you would probably only reach a speed of about 1 Gbps on a 10 GbE card. So definitely we can optimize this. The second type is the DPDK vRouter. We run the entire vRouter, or OVS, in user space and completely move away from interrupts; the drivers pick up packets in poll mode. The host interrupts are eliminated, but there is still an interrupt on the VM side to read the packets. The third one is SR-IOV, which we are going to deep-dive into in this session. The idea is you take a network card, create multiple virtual functions out of it, and directly attach the VF device all the way to the virtual machine. There is no interrupt and no overhead, and that's where we were able to achieve close to 9 to 10 Gbps over a 10 GbE card. In fact, we are also going to show you a demo of how we achieved it.

In this slide, I wanted to show you how these bare-metal servers need to be configured. If you look at the picture, all three servers are the same: each has 24 cores — 48 if you enable hyper-threading — and three NICs, each NIC with two ports, wired to your switches. On the left you see the SR-IOV host profile, where NIC 1 and NIC 3 are used for the SR-IOV workloads, and NIC 2 is used for your regular vRouter. For example, you might need to attach an operations and management interface to the VM for all your operational needs.
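As an aside on the network-optimized flavor series just mentioned: here is a minimal sketch of how such a flavor could be defined using standard Nova extra specs for CPU pinning, 1 GB huge pages, and NUMA placement. The flavor name and sizes are made up for illustration; they are not AT&T's actual flavor definitions.

```python
# Minimal sketch: a hypothetical "network-optimized" flavor with the standard
# Nova extra specs for CPU pinning, 1 GB huge pages, and NUMA placement.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client as nova_client

auth = v3.Password(auth_url="http://controller:5000/v3",
                   username="admin", password="secret", project_name="admin",
                   user_domain_id="default", project_domain_id="default")
sess = session.Session(auth=auth)
nova = nova_client.Client("2.1", session=sess)

# Illustrative name/sizes for an "NS" (network-optimized SR-IOV) series flavor.
flavor = nova.flavors.create(name="ns.example.8c32g", ram=32768, vcpus=8, disk=80)

flavor.set_keys({
    "hw:cpu_policy": "dedicated",   # CPU pinning: avoid context switching across cores
    "hw:mem_page_size": "1GB",      # 1 GB huge pages: fewer page-table lookups / TLB misses
    "hw:numa_nodes": "1",           # keep vCPUs and memory on a single NUMA node
})
```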
Coming back to the host profiles: the other interfaces can purely carry traffic. In the middle is the DPDK host profile, where NIC 1 and NIC 3 are bonded, and the middle NIC can be used for your PXE, storage, and other OAM traffic. On the right is the regular vRouter or OVS-based profile, where everything runs in the kernel. You might ask why you should care about this. The idea is that it is important to define all these host profiles so that you can do complete automation. We want to avoid any manual intervention: once a profile has been defined, your automation framework can take it and deploy it in your data center.

All right, so this is an interesting slide. The idea is that we are achieving SR-IOV, in fact, with DPDK. What does that mean? You need to virtualize the physical NICs into virtual functions, and you can set many filters and parameters on them. We have written a completely new daemon — you can think of it as a service — called the Virtual Function Daemon, which you see in the middle of this picture. It is a DPDK-based application that configures all these VFs to provide an SR-IOV network. A research team within AT&T developed it, and they have also open-sourced the project; I encourage you all to go and check it out on GitHub.

So what can we do with VFD? There are a lot of things you can configure on a virtual function. For example, you can set quality of service. You can enable anti-spoofing checks for VLANs and MACs. And since this is SR-IOV, it's all layer 2, it's all VLANs, so you should be able to set VLAN filters. You should be able to support QinQ tags — strip the outer tag, insert an inner tag. And you should be able to control all types of BUM traffic, which is broadcast, unknown unicast, and multicast. All of these things are handled by VFD. And just like any other OpenStack service, there should be a command-line interface talking to this VFD service. In today's Linux you have the ip link command to configure your physical NICs, but what we are talking about here is a DPDK-based application, so we created a new command-line interface called iplex, which can go and talk to VFD to configure all your parameters.

This is the high-level architecture of VFD. Typically all these parameters are sent from the Heat template, and those parameters are carried all the way to nova-compute. Nova-compute puts all the port configuration information into a config JSON file and then invokes iplex. The iplex command notifies VFD: hey, there is a port I need you to configure, please go and do it. VFD picks up that config JSON in step 5, then uses DPDK APIs to configure the virtual functions and sends the feedback back through iplex all the way to Nova. If you really look at it, we have leveraged an architecture very similar to what already exists in OpenStack: when you create a virtual machine through Nova, Nova generates the libvirt XML and hands it to your KVM hypervisor to create the virtual machine. That's exactly what's happening here — you are writing out the port information for VFD, and VFD goes and configures it.
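To make that hand-off concrete, here is a hypothetical sketch of what nova-compute's side could look like: write a per-port VF config file and notify VFD through iplex. The file location, JSON field names, and iplex arguments below are assumptions for illustration only; the real formats are defined by the VFD project on GitHub.

```python
# Hypothetical sketch of the nova-compute -> iplex -> VFD hand-off described above.
# Paths, JSON keys, and iplex arguments are illustrative, not VFD's actual schema.
import json
import subprocess

def configure_vf_port(port_id, pciid, vlans, macs, strip_stag=False):
    """Write a per-port VF config file and ask VFD (via iplex) to apply it."""
    config = {
        "name": port_id,
        "pciid": pciid,            # PCI address of the virtual function
        "vlans": vlans,            # VLAN filters applied on the NIC
        "macs": macs,              # MAC anti-spoofing filters
        "strip_stag": strip_stag,  # QinQ: strip the outer (service) tag
        "allow_bcast": True,       # BUM traffic controls
        "allow_mcast": True,
        "allow_un_ucast": False,
    }
    path = "/var/lib/vfd/config/%s.json" % port_id  # assumed config directory
    with open(path, "w") as f:
        json.dump(config, f, indent=2)

    # Notify VFD that a new port config is ready (assumed CLI syntax).
    subprocess.check_call(["iplex", "add", port_id])

# Example (illustrative values):
# configure_vf_port("port-1234", "0000:05:10.1", vlans=[100, 200],
#                   macs=["fa:16:3e:aa:bb:cc"], strip_stag=True)
```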
All right, this is the slide where I wanted to show you how everything fits together. Like I said, we are leveraging ONAP, which stands for Open Network Automation Platform. It's a topic of its own, and several of my AT&T colleagues have given good presentations on it at this summit; I encourage you to go and look at those. But I'll give you a quick introduction to ONAP. ONAP helps you with all of this VNF orchestration. It helps you design the VNF: for example, if you want to run a vNAT, you need to know what flavor to use, what image to use, how many interfaces the virtual machine needs, and what VLAN parameter settings need to be applied. All of this is done at the design level. The next step is orchestration: ONAP has an orchestrator, MSO, which sends this information via a Heat template to the OpenStack region. The third one is configuration — it's not all about creating virtual machines; you then need to go and configure the VM, and there is a global SDN controller, along with other controllers within ONAP, that helps you do that. Of course, we also need to inventory all these VNFs; you need a global view of how many services are running across your different data centers. There is a component called A&AI, which stands for Active and Available Inventory, that does all of the inventory. And finally there is DCAE, which stands for Data Collection, Analytics and Events. It lets you watch for events, recover in case of failures, and scale a VNF when that is needed.

So in the middle, all of this ONAP machinery runs in a global, centralized region, and it manages multiple OpenStack regions. What you see in the middle is what we call the local control plane — this is where all your OpenStack services run: Nova, Neutron, and everything else. And the third tier is your compute servers, which is where the virtual machines actually get created. I hope this has been informative. I will now hand over the next section to Munish. Munish, please go ahead. Thanks, Ganesh.

So I'll walk you through the details of the implementation. This is the first implementation, and it is our attempt to show you the advantages the NIC vendors are providing by exposing these features. Some NIC vendors provide a mailbox on the NIC, and there is a switch on the NIC card that you can program with these features. You can send a message to the PF through the mailbox and set those parameters on the NIC. You can offload your VLAN filtering and MAC filtering; you can control your BUM traffic — you can say whether you want to allow multicast, broadcast, and so on. All of that can be offloaded to the NIC, so you have more CPU cycles for processing packets at the application level.

For the setup of the compute node, first you have to have the VFD daemon, which Ganesh talked about, on the compute. To set it up, you need to enable the IOMMU, have VT-d enabled in the BIOS, and configure huge pages. In the VNF world we use 1 GB huge pages so that we can reduce TLB misses. And then you need the DPDK igb_uio driver module loaded in the kernel so that you can talk to the NIC.
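A rough sketch, under the assumptions just described, of how one might sanity-check those host prerequisites — IOMMU on the kernel command line, 1 GB huge pages reserved, and igb_uio loaded — before installing VFD. These checks are illustrative and are not part of VFD itself.

```python
# Illustrative host prerequisite checks for an SR-IOV/DPDK compute node.
# Not part of VFD; just a quick sanity-check script.

def check_iommu():
    with open("/proc/cmdline") as f:
        cmdline = f.read()
    return "intel_iommu=on" in cmdline or "iommu=pt" in cmdline

def check_hugepages_1g():
    # 1 GB huge pages show up under this sysfs path once reserved.
    path = "/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages"
    try:
        with open(path) as f:
            return int(f.read().strip()) > 0
    except IOError:
        return False

def check_igb_uio():
    with open("/proc/modules") as f:
        return any(line.split()[0] == "igb_uio" for line in f)

if __name__ == "__main__":
    print("IOMMU/VT-d enabled:", check_iommu())
    print("1GB huge pages reserved:", check_hugepages_1g())
    print("igb_uio module loaded:", check_igb_uio())
```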
Continuing with the setup: you can also configure how many VFs you need on the host. Your team can decide how many queues per VF you need and tune that parameter. And in the end, all you need for OpenStack is the PCI whitelist, where you specify which PCI device is mapped to which physical network. The physical network is the same parameter you use today for provider networking — you specify that field when you create a Neutron network. So Ganesh walked you through the hardware part of it, NUMA and all that; I'll take you through the implementation — how OpenStack sees it, how OpenStack is set up, which part the administrator does, and which part the tenant does.

As I said, once the PCI devices are whitelisted and mapped to physical networks, the admin can define provider networks using those physnets. And once the networks are there, in the tenant space the tenant can start creating the SR-IOV ports. It's the same concept as creating a direct port in OpenStack today, but with some customization. The key change we made is that we put more fields into the binding profile of the Neutron port — I'll show you that. Once you have the ports, you call the Nova API to instantiate your VNF. When your VNF is instantiated with those Neutron ports, Nova calls plug — everybody who knows Nova knows plug. The plug-interface call invokes the iplex interface of VFD: it generates the configuration file and also calls iplex to add this configuration to the NIC. So that's the flow.

Going to the next slide: we have modified not only Neutron and Nova, we have also exposed all these parameters in Heat, so that any kind of orchestrator can pick up the Heat definition and orchestrate the workloads. We have VLAN filters, which are applied to the incoming traffic on the NIC. The filtering is done on the NIC, and the NIC decides which VF to hand the packet to based on the VLAN. You can do anti-spoofing using MAC filters, right on the NIC. You can also do QinQ, where you have an advantage as a service provider: you can strip the service tag and then pass the customer VLANs — the whole trunk — back to the VM, and the VM will process it. Then you have the BUM traffic booleans that you can set based on what your VNF is looking for: whether it supports multicast, whether it doesn't, whether you want it sent back to the VNF. Your VNF doesn't have to process all of this — it's all done on the card. That's the advantage right there. Then there are some more booleans. This is the first attempt; the NIC vendors — any NIC vendors here? — know that new features are coming in the new NICs, so this list is going to grow.

As we said, this integration is with the first version of VFD. VFD is in active development; we have already enhanced it to do QoS, and mirroring is under trial — that's another requirement for most of the telcos. All of that will be offloaded to the NIC, which is pretty impressive for all the services: you don't have to worry about mirroring overhead and so on on the host.

Next slide. This is my demo setup. I have a recorded demo, and I'll show you how we spin up the vNAT. We generated traffic from one location, and we connected it to the other location — you can assume the other location is a cloud provider or something.
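Before the demo, here is a minimal sketch of the tenant flow just described: an SR-IOV "direct" port whose binding profile carries the extra VFD parameters, created with python-neutronclient. Only vnic_type=direct and binding:profile itself are standard Neutron attributes; the custom profile keys shown are hypothetical stand-ins for the fields added in this work, and the network UUID is a placeholder.

```python
# Minimal sketch: an SR-IOV "direct" port whose binding:profile carries the
# extra VFD parameters. The custom profile keys are illustrative names only.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from neutronclient.v2_0 import client as neutron_client

sess = session.Session(auth=v3.Password(
    auth_url="http://controller:5000/v3", username="admin", password="secret",
    project_name="admin", user_domain_id="default", project_domain_id="default"))
neutron = neutron_client.Client(session=sess)

port = neutron.create_port({
    "port": {
        "name": "vnf-sriov-port-1",
        "network_id": "<provider-network-uuid>",     # placeholder
        "binding:vnic_type": "direct",               # standard attribute for SR-IOV ports
        "binding:profile": {                         # extended with VFD fields (illustrative)
            "vlan_filter": [100, 200],               # VLANs the NIC accepts for this VF
            "insert_stag": True,                     # QinQ: push the service tag on transmit
            "spoof_check": True,                     # MAC/VLAN anti-spoofing on the NIC
            "allow_bcast": True,                     # BUM traffic booleans
            "allow_mcast": True,
            "allow_un_ucast": False,
        },
    }
})["port"]

# The VNF is then booted with this port, e.g.:
#   nova boot --flavor ns.example.8c32g --image <image> --nic port-id=<port id> my-vnf
```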
Coming back to the demo setup: this is how the traffic flows — duplex traffic, bidirectional. I'll now switch to the recording. Sorry about that glitch. OK, let's go. So I have this VNF that I showed you; it has two SR-IOV ports. The rest of it is taken care of in another Heat template, where we have the Cinder boot volume for it. This is where the customization is. If you look at the binding profile field on the screen — I'll pause it here — the binding profile for the Neutron port I am spinning up for this VNF has the VLAN filter. I can specify whether I want to insert a tag on the transmit traffic as it exits, and I can specify whether I want to do spoof checking. This snapshot is taken after the VM is created, so I know which PCI device is allocated to my port and which physnet it is mapped to. That gives the operator a view of which physical port this VNF port sits on.

Going a little further — can you press play? Yeah, press play. So this is the next port. As you see, there are both PCI devices: one from physnet one and one from physnet two. Inside the VM you can bond them for resiliency — that's another requirement for the telcos. This is the VM definition: we modified the Nova-generated definition of the VM. In the libvirt config for the VM, instead of interfaces, when you use the DPDK driver you have to provide the host device, so your VM is actually accessing the PCI address that's highlighted on the screen. These are the basic changes we need to make to achieve this implementation.

And on the host we have iplex. This is VFD running on the compute, and this is iplex — the interface we call from the Nova plug when the VM is instantiated, and we call iplex delete when the instance is deleted. The integration is pretty easy and seamless. You can even do updates: when you want to change QoS, you can call iplex update, and it will update the configuration on the card at runtime. So it's pretty easy.

Going a little further, I'll show you the traffic. As an operator, you'll be interested in knowing how the traffic is doing, and this view provides those details. In this picture I have a couple of VNFs spun up and traffic passing through; you can see transmit and receive, and if there are spoofed packets you can see that too. It's pretty useful — you can make a judgment on whether the link is up or down — so this is very valuable for the operator.

We did some performance tests during our demo recording, and we were able to get close to line speed. This is an Ixia sending bidirectional traffic on both streams, one in and one out, and we were able to reach close to line speed. That, I think, concludes the demo, and I'll switch back to the presentation. So the results: we are able to do line rate and high PPS with the traffic profile we are using. Sorry, I just had this full-screen. The profile we used for this traffic is iMix, with frames ranging from 64 bytes to 5,000 bytes, and we are getting close to 9.7 Gbps, which is effectively 9.9 on the NIC.
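As a rough illustration of the modified VM definition shown in the demo: instead of a normal interface element, the guest gets a PCI hostdev entry pointing at the allocated virtual function. The snippet below simply renders that standard libvirt element for a given VF address; the PCI address is an example value, not the one from the recording.

```python
# Renders the standard libvirt <hostdev> element used to pass a VF straight
# into the guest, as shown in the demo. The PCI address is an example value.

HOSTDEV_TEMPLATE = """\
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x{domain}' bus='0x{bus}' slot='0x{slot}' function='0x{function}'/>
  </source>
</hostdev>"""

def hostdev_xml(pci_address):
    """pci_address like '0000:05:10.1' -> libvirt hostdev XML."""
    domain, bus, rest = pci_address.split(":")
    slot, function = rest.split(".")
    return HOSTDEV_TEMPLATE.format(domain=domain, bus=bus,
                                   slot=slot, function=function)

print(hostdev_xml("0000:05:10.1"))
```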
So let's talk about the limitations. What are the limitations of this solution? Number one: with regular SR-IOV, you don't have any security features. With this SR-IOV you can at least control the BUM traffic, as you have seen — you can control what's hitting your VNF, you have the VLAN filters, you have the MAC filters. So this is better than regular SR-IOV. That's one. Talking about live migration, that's something that will evolve in the coming days. Right now there are limitations in libvirt, and we cannot live-migrate something that has a host device attached. We are working with the vendors; as the support evolves, we would take a snapshot of the memory and a snapshot of the registers and pass them to the destination host during the migration. That will take some time. But in the meantime, we have ONAP. ONAP can detect failures — as Ganesh said, it's a closed loop, it has analytics, it is monitoring your VNFs — so you can build resiliency into your application design, and you can also re-spin your VM if need be. That's our recovery strategy for now. And we are getting a lot of performance out of this; there is always a trade-off in your first shot. We are not there yet, but we'll get there. I'll pass it on to Trevor next; he'll take it from here.

OK, so I'm going to explain the implementation details of the demo you just saw. At the beginning of it, you create a port with these API flags for the NIC, and those were put into the binding profile field. Then everything proceeds pretty much as normal. The only thing that's different with VFD is that at step seven you have to generate the virtual machine configuration and add the hostdev devices — and, with the DPDK driver, you can include the PCI virtual functions as well. Then iplex, given the virtual machine configuration, will bind the port, and that will also be passed to VFD to carry out the rest of the operations.

When I was trying to upstream this to the community, this was my first proposition. It doesn't look very pretty, because you just stuff a whole bunch of flags into the binding profile field, so I wanted to offer some alternatives. I also proposed using the VIF details field — not that much of an improvement. I was trying to think of other options, and I came up with another proposition, which mostly amounts to making a dedicated API path for this; that would also make it easier to extend for features to come. This is what the database API would look like: you create the ID and associate it with a port using the synthetic fields that are already in place in Neutron. But when I took this to the Neutron drivers meeting, they said many of these already sound like they can be derived from existing APIs. After some research, that was mostly true, but not all of them exactly support SR-IOV. I can map some of them, but the ones that weren't implemented were the broadcast, unicast, and multicast allow flags being enforced through security groups and the firewall. So when I came to propose another API, I wanted to make it more abstract, so that more community members and operators can use the feature to its full capability. Whenever the port is created, you could have an underlying Linux bridge implementation, or OVS, or SR-IOV.
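To make the abstract proposal concrete, here is a hypothetical sketch of what the security-group variant described in a moment could carry: three extra booleans, defaulting to false, that each backend would enforce in its own way. The resource shape and field names are illustrative only; this is not an accepted Neutron API.

```python
# Hypothetical payload for the backend-agnostic proposal discussed here.
# Field names are illustrative, not an accepted Neutron API.
import json

proposed_security_group = {
    "security_group": {
        "name": "vnf-sriov-sg",
        # Booleans the backend (iptables, OVS firewall, or NIC offload for
        # SR-IOV) would enforce however it can; defaults are False.
        "allow_broadcast": False,
        "allow_multicast": False,
        "allow_unknown_unicast": False,
    }
}

print(json.dumps(proposed_security_group, indent=2))
```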
And the underlying backend wouldn't matter, because each one delegates to its own controller and enforces the rules with iptables or the OVS firewall — and in our case, it would be the offload onto the NIC for the firewall. Here is roughly what that would look like if you added it to security groups: it's simply adding those three fields at the bottom, defaulting to false, and you can use them to pass down to the NIC.

So this is already upstream: VFD is part of the DPDK ecosystem, and this is part of the DPDK documentation and the use cases it describes. You can see here that you can have two different kinds of services running: latency-sensitive services, and you can also use a DPDK application to offload your computation-intensive services. Not all of the flags are going to be required for this implementation; we're only going to push the ones that come from our customer requirements for our VNFs. This is the fast host-based packet-processing use case you can find there — the documentation is relatively new, it came out this year, but it's on dpdk.org. And here's another use case, about inter-VM communication. This allows you to do really fast communication between VMs on the same host, because the NIC has a switch built into it that makes that possible. This example shows what the MAC address lookup table would look like; if you wanted to update the MACs, you would go through this flow, and the steps are described in the documentation.

Coming to future work, I already have a few patches proposed for enhancing the Nova and Neutron capabilities. Right now there's actually no QinQ network type in Neutron, so I have a patch for that. It's VLAN-based, so I also refactored the VLAN type to make the two work nicely together. Then we're going to have to add some kind of versioned object to pass these parameters and make a proper negotiation for the port binding between Nova and Neutron. And whenever these NICs are released, Nova, when scheduling, is going to have to know what capabilities the NIC has; there's a patch that's already been merged for enabling SR-IOV NIC offload feature discovery.

After that, you have to integrate VFD with Neutron. Now that the Nova and Neutron prerequisites have been met, we can start implementing the rest of VFD. First of all, you have to test it somehow, so we're going to have to support SR-IOV third-party CI with VFD installed. Then we need to implement the iplex interface in Neutron — it's like ip link, but with a bit more extended capability, so it will look a lot like that class. We're going to have to add options to the client; that's pretty standard. We'll be adding and modifying database and API models for VFD support, which I've been showing through this presentation, and there might be even more to come. Then, when it comes down to the agent implementation, you could come up with your own agent, you could make an agent extension for SR-IOV, or you could modify the existing SR-IOV NIC agent to just use the new iplex tool rather than ip link. And lastly, depending on where the API lands — because, as I discussed, you would want to make it abstract across Neutron — there might be more modifications to come.
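A minimal, hypothetical sketch of the iplex wrapper described above — the analogue of Neutron's ip link wrapper, but shelling out to iplex instead. Class name, methods, paths, and CLI arguments are made up for illustration; the real interface will be defined by the upstream patches.

```python
# Hypothetical wrapper a Neutron agent (or agent extension) could use to drive
# VFD through iplex, analogous to how the SR-IOV NIC agent uses "ip link".
# Names, paths, and CLI arguments are illustrative only.
import json
import subprocess

class IplexWrapper(object):
    def __init__(self, config_dir="/var/lib/vfd/config"):
        self.config_dir = config_dir

    def _run(self, *args):
        return subprocess.check_output(("iplex",) + args)

    def add_port(self, port_id, vf_config):
        """Write the VF config for a port and tell VFD to apply it."""
        with open("%s/%s.json" % (self.config_dir, port_id), "w") as f:
            json.dump(vf_config, f)
        return self._run("add", port_id)

    def update_port(self, port_id, vf_config):
        """Re-apply changed parameters (e.g. QoS) at runtime."""
        with open("%s/%s.json" % (self.config_dir, port_id), "w") as f:
            json.dump(vf_config, f)
        return self._run("update", port_id)

    def delete_port(self, port_id):
        """Remove the VF configuration when the instance is deleted."""
        return self._run("delete", port_id)

    def show_all(self):
        """Return the operator-facing view of all VFs (traffic, spoof counters)."""
        return self._run("show", "all")
```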
Finally, if one of the changes lands in security groups, you'd also have to modify the OVS agent and the firewall, because SR-IOV currently uses the noop firewall driver. And that is all. Thank you all for listening to the presentation. If you have any questions, please come up to the microphone. I can answer the upstream questions, and Munish and Ganesh can answer everything else.

Hi, thank you for the presentation. What I understood is that what we saw in the demo is a bit of a hack, but the future direction — integrating VFD properly with Neutron, with the agent — is the holistic way to go, to make sure we are not overloading Nova with networking requirements. Correct? Yes, that's the intent. For every implementation — and this is a pretty new technology — you need to first prove yourself to the community by showing them some working code. Now we are working with the community: Trevor has already submitted two proposed changes. Along with that, Intel also submitted a change in which they are exposing the NIC features to the Nova scheduler, so in the future Nova will know which NIC is capable of doing what, and your VM will land on a particular node based on that. You may have a NIC from one vendor with capability X and another with capability Y; the Nova scheduler will match what you need and place your VM accordingly. Thank you. No problem.

Can you go back to the side-by-side-by-side comparison early in Ganesh's slides? Yeah, escape out. There's a bit of messaging here — part of the brilliance that I hope folks can see. As you go from conference to conference, from session to session, the message about coexistence gets lost as you hear one person's experiences and another vendor's preferences. One of the brilliant things these guys have demonstrated here is coexistence, using the exact same compute nodes and the exact same software, of three different forms of networking, all using Neutron — with, of course, the additions they're working on upstreaming. All of these options work for different workloads in production simultaneously. You don't have to pick just one type of network virtualization; all of these work in production, in parallel, on the same OpenStack compute nodes. So I applaud you guys for that — this messaging is very clear. Thank you. Thank you, Jeff.

Also a question about these slides, the part on the right: from the VNF perspective, what drivers do you need to take advantage of this implementation? Is a VF driver sufficient, or do you need DPDK, or is DPDK hidden in the infrastructure from the VNF's perspective? Yeah — do you want to take that? VFD is built with its own statically linked DPDK library, so the VNF doesn't need to be aware of it. But on the other side, the VNF could also be a DPDK-enabled application so that it can read directly from the VFs. So it doesn't matter whether your VNF uses the ixgbevf driver or we hook it to the DPDK driver — we have VNFs that work both ways, and we have certified those. I think that addresses your question.

My question is: once you configure the number of VFs and allocate bandwidth to each VF, are you able to change it dynamically, or is that future work? We have headroom — I think we can saturate the line card with just a very few VFs. But the advantage we get from spinning up more VFs is that you can spread the load; you may not have the peak all the time.
So you can spread your workloads on the host and take advantage of more VFs. We know you can optimize based on the queues on the NIC, but right now it's pretty much a static allocation — you get a percentage for each VF. I think that space could evolve to become more dynamic. We are beginning to see a need for changing it — not instantaneously, not on a millisecond boundary, but say every few days you want to repurpose a running compute node for different VNFs, move things around, spin up new VNFs. So we are beginning to see the need for that. Sure. Would the VF agent that you are proposing do that work? It should be able to — as long as you put in the right parameters for what to configure, I think we can always do it. Okay, thanks.

To add to that last question: one thing that iplex gives you that in years past was a real pain is visibility. You'd put, say, a vendor's virtual router or firewall or load balancer on a link, and with SR-IOV you saturate that thing fast. That table view of all the VFs' state and all of those values is golden — it's wonderful from an operations standpoint, because before you had to scrape through all the individual VF outputs and compile them somewhere. So having that visibility when you have a bunch of DPDK-native VNFs — routers, firewalls — that's huge. And as I mentioned, VFD is very actively developed and we are working on multi-NIC support, so you can go to the GitHub link. In fact, there was a question in another session about DPDK: how would we debug all this traffic? Can I use ping and tcpdump? Of course that's not going to work, because those run in the kernel. The idea is you can either run them inside the VM, or you can always do port mirroring, take it to the probes, and analyze your traffic there. I just want to add something here: if we have any upstream developers in the room, feel free to join in on the effort and get in contact with me. Sure.

The inter-VM traffic — does it work out of the box, or do you need to do something special? You need VEB for that — virtual Ethernet bridging. Sorry, can you say that again? VEB, virtual Ethernet bridging. The switch the compute node is connected to should be VEB-capable. Thank you.

Just a question: for projects that are coming up, like Project Calico, are you working with that project as well to make sure everything works together? I think Calico is for the container network interface. We do have a plan — right now all our OpenStack services run as virtual machines, and we are planning to containerize all our OpenStack services in Kubernetes pods, and we are also thinking about leveraging Calico as our container network interface for that communication. But containers with SR-IOV, I think, are still a long way off; probably something we can think about for the future. Actually, there is a plugin effort going on for SR-IOV — an SR-IOV plugin for CNI — and if you go to dpdk.org, they have already started on DPDK for containerized workloads. They are just getting started, so we'll see how it goes. Yeah, hi, I had two questions.
Could you comment a little on your approach to resilience? In particular, have you contemplated architectures where you have redundant NICs that are both capable of maintaining the same state information and processing, without failure, if something drops or breaks? Yeah, that's an interesting question you've brought to the table. It's really about live migration capabilities with SR-IOV — can we use something like macvtap and those sorts of approaches? Definitely, we are considering that option. But right now our strategy is to at least seamlessly recreate the VM, or do an offline migration, when we detect a failure. There are a lot of issues we need to solve before even talking about live migration. In my slides I went over CPU pinning and huge pages; if we enable huge pages, there are a lot of dirty pages to handle, and we are talking about line-card speeds here, so migrating that VM live is going to be painful. So either the VNF itself has to be cloud native — for example, some of our VNFs run as primary/backup — so we don't really have an option to address the live migration issue right now. But we are actively working on it; as Munish also pointed out, things like snapshotting the registers are something we are researching. To add to that, it's not only the host's responsibility to provide resiliency. We have a VNF we are working on with a vendor where we have an A side and a B side of the VNF, and the two sides talk to each other over another interface, so we have the state on both: if you add a route, the other side knows about that route. If this host goes down, our service will still be up. So you're pushing it up to the application level then? Yes. And as I said, if one side goes down, I'll re-spin it, because my service is not impacted.

Also, I think you mentioned that you're actually pushing some of the VNF functionality all the way down to the NIC — are you actually programming the NIC to perform certain kinds of functions? Yes. The modern NICs expose certain APIs, and we program them to apply the filters and all those parameters I've shown you. Are you using something like P4, or how do you do that? We use DPDK APIs to directly manipulate the NICs, but that doesn't interfere with the traffic — the traffic flows directly between the virtual function and the VNF; the DPDK library doesn't sit in that path. Thanks very much. Thank you.

All right, last question. I have a question regarding the mirror interface that you mentioned briefly — is that mirror interface also implemented in the same way, utilizing VFs, virtual NICs, and DPDK? Yes. It has to be a VF, and it has to be on the same physical network card, because that's how we do the port mirroring. All right, I think we are at the top of the hour. Thank you all for attending this session. Thank you. Thank you.