And I think the microphone is live. So OK, well, welcome to a session on supporting virtualized telco functions with OpenStack. I'm Bruce Davie from VMware. I work in our networking and security business unit, which is the group that brought NSX to the world. Some of you may have heard of that product. And I've been working at VMware for a couple of years now, since I came in when VMware acquired Nicira. Hopefully some of you have heard of Nicira as an early startup company in the space of software-defined networking. And before that, I was a networking guy at Cisco for a long time. So I've been doing networking in the sort of telco space for quite a while. And so what I want to talk about today is how we're attacking the telco space from a perspective of virtualization and how that fits into the bigger picture with OpenStack. Maybe I'll ask a few questions to the audience. So how many people here have been sort of following NFV for some time already? OK, so maybe two-thirds of people have some NFV familiarity. So the first part of this presentation will be maybe a little bit of recap for you folks. What I mostly want to focus on today is the networking aspects of NFV. And it's a little bit confusing that NFV has network in the name, but in fact, when a lot of people think about NFV, the first thing they think about is actually compute. They say, I'm going to take some function that was previously implemented in some box, and I'll move it into a virtual machine, and hey presto, I've done network function virtualization. And so I guess if I only convince you of one thing today, I want you to think about NFV as being bigger than just taking things out of a box and putting them into a VM, that there's a larger component around virtualizing the actual networking components. And that's what I'll spend a lot of time talking about today. So bullet two here is talking about how network virtualization fits into the NFV landscape.
And then as we start getting into some of the slightly more detailed parts of the presentation, I want to talk about service chaining specifically. Service chaining turns out to be something that has a lot of application in the telco environment. You want to be able to create some customized collection of services for a customer. And that entails functions that might be implemented in different virtual machines or different boxes being connected together into some kind of chain. So I'll talk about sort of what the problems are in that space and how we're tackling them. And I'll at least make the assertion that some of the things that you would like to be able to do with service chaining are actually not that well supported by Neutron today. So some things work well, and for other things there's probably room for improvement. I'm not a Neutron developer or even particularly a Neutron expert, but I've at least spent enough time looking at what you can do in Neutron to sort of see where it does and does not line up with objectives for service chaining. And then I'm going to spend the last couple of minutes just talking a little bit about OVN, just because it sort of fits into this space and it's a super exciting development in the space of virtualized networking. It's much more broadly applicable than just the telco space, but it definitely fits in here. And I just wanted to mention that it's something that some of my coworkers are driving the charge on. How many people went to the OVN talk on Tuesday? Just a handful of people, good. So you people won't learn anything new from that last bit, but for the rest of you, I'll just cover it real quick. Okay, so let's launch in. This is one of my favorite pictures about NFV. This actually comes from a pretty old white paper produced by ETSI, where a bunch of people got together and started fleshing out this NFV architecture. And I think I first saw this paper sometime in 2012.
One reason I like this picture is because on the left-hand side there's a whole bunch of telco boxes. And I actually know what almost all of them do, which is not always the case when I look at a picture of telco equipment. And you notice all of those boxes come in different shapes and sizes and they're sort of vertically integrated. They're kind of like the mainframes of the telco industry. And then if you look at the right-hand side, this is where we want NFV to go: taking all of those siloed, vertically integrated boxes and moving them into a much more cloud-like architecture, where we use standard high-volume servers, storage and networking, and run all of these functions as virtual appliances or virtual machines or maybe even containers, but some kind of virtualized form factor on top of standard commodity stuff. And so for people in the OpenStack community, the right-hand side should hopefully look pretty comfortable and familiar. And what we're trying to do in some respects is help the telco industry move from what is really a very 20th-century architecture to something that is much more contemporary and leverages the technologies that we have today. I have learned over years of talking to telco people that you should never say to a telco person, oh, your problem is just like this enterprise problem that we already solved. But there are a lot of similarities and we are definitely trying to leverage some of the capabilities that we've developed over the years for enterprises and public clouds. Now, there are a lot of reasons why telcos are pretty excited about this kind of shift. By the way, how many people here actually work for a telco? Okay, quite a few. So hopefully some of this will ring true. Obviously telcos are under lots of competitive pressure, and there are a lot of reasons why moving to a more cloud-based architecture would be a good thing.
So in maybe no particular order: if you think about rolling out services as a telco, you really don't wanna have to go and dedicate a box to a service, because over time that service is gonna become either more or less popular. If it becomes more popular, you're gonna have to buy more boxes, and then at some point it becomes less popular and now you've got boxes that you can't do anything with. It's much better to have a pool of capacity, deploy services on top of that capacity, and increase the amount of capacity you allocate to a particular service in an elastic way. And so you can now think of your services as being decoupled from your hardware, and maybe the demand for a service even fluctuates on a minute-by-minute basis, so you can allocate more of your generic compute, storage and networking resources to the services that need it. Another big issue is how long it takes to deploy a new service. When I first started working on network virtualization, the number one reason that we found ourselves getting into conversations with customers was that they wanted to be able to virtualize their networking and deploy network services as quickly as they could deploy virtual machines. This is very much the case for telcos. The time to deploy a new service in the telco environment is often measured in years. So to be able to get it down to more of the kind of, I'm just gonna install a bunch of new software, run virtual appliances and spin them up in a highly automated way, is obviously very attractive. And then finally, in order to provide more differentiation, it'd be really great if you could customize the services to different customers. So then instead of saying, okay, here's my service, take it or leave it, you can say, okay, you know, Mr. Customer type A, you get this very premium bells-and-whistles service, and Mr.
Economy-conscious customer, you'll get the more basic, less high-frills kind of service, and be able to do that in a way that doesn't require me to have a lot of different processes for those different services. So that's kind of all of the aspiration. And the thing I didn't mention on this slide is of course cost. And I kind of left that off because, A, it's kind of obvious, and B, I think it's a mistake to focus only on cost. If all you're gonna do is drive cost out of your system, it's kind of a race to the bottom. And so while it is important to control cost, I think it's important to focus on the benefits in addition to just driving down cost. But obviously, you can't afford to ignore cost in any case. So this is another slide that's pretty much stolen from another ETSI white paper, of the sort of overall NFV architecture. And again, I kind of liked this picture for a couple of reasons. One reason being that even though a lot of people kind of over-fixate on taking what used to be in a box, putting it in a virtual machine and declaring success, what we actually see in this picture is that this virtualization layer, what's labeled NFVI, for Network Function Virtualization Infrastructure, has virtual compute, but it also has virtual storage and virtual networking. And then sitting underneath that, you have the physical infrastructure: compute, storage and networking. Above that, you have the virtualized functions. And so a virtualized function would be something like a virtualized firewall or a virtualized IMS, the IP Multimedia Subsystem. Any of those things could run as a VNF. And you can think of those as basically applications running in VMs. And then you have to manage those things, so you have VNF managers. You also have to manage the infrastructure, and you have to orchestrate the whole thing. So it shouldn't take too much of a stretch for you to see how OpenStack fits into this picture.
OpenStack actually plays a lot over on the right-hand side here in terms of orchestration and also infrastructure management. And then if you kind of zoom in on the virtual infrastructure layer, you see a pretty good mapping of some of the components of OpenStack, with Nova for compute, Cinder or Swift for storage, and Neutron for networking. And it's that third piece, the networking piece, that I'm gonna focus on in the rest of this talk. So I wanna stress again that network virtualization is a different thing from network function virtualization. It's an unfortunate collision of terminology, but network virtualization is what we do in the product that I work on, NSX. It's what Neutron provides in OpenStack. It's essentially about providing virtualized networking capabilities in a cloud data center. Network function virtualization is taking things like firewalls, voice over LTE systems, all kinds of telco functions, and virtualizing those comprehensively. So NFV is more of an architectural shift from a siloed architecture to a cloud architecture. Network virtualization is a component that lets us provide virtual network services in the way that Neutron does. So at least for the purposes of this talk, you can think of NFV as the big picture and network virtualization as one component within that picture. And the reason that we need network virtualization, first and foremost, is that this whole system has to be agile. If you're going to make this successful, it can't take you as long to provision a new service in the new architecture as it takes to provision it in the old architecture. And a good example of this is when we first started working on network virtualization back in, I guess, the late 2000s, like 2008, 2009, lots of people thought they were going to go and stand up a public cloud service and compete against Amazon.
And so they would figure out some way to virtualize compute, some way to manage the virtual compute, and then they would find that they couldn't actually automate the provisioning of virtual networks. That was actually the problem that got Neutron started, and it was the ability to fully automate the provisioning of virtual networks that led to our network virtualization product and also, ultimately, to the creation of Quantum and then Neutron within OpenStack. So making the whole thing agile and easy to automate is really critical. Another really important point is that you need multi-tenancy in these environments. You're carrying the traffic for potentially hundreds of millions of customers, and they're not all getting the same services. So you need to have some mechanism for providing multi-tenancy. You might also have different organizations within a single telco having control of different resources. So multi-tenancy comes up a lot, and again, that's something that's really hard to do if you haven't virtualized the network. You want to have network services that are independent of the underlying infrastructure. So for example, if you want, say, logical routing between certain parts of a service, you don't necessarily want to have to go through a physical router, and you don't even necessarily want to have to go to a special virtual machine to get that routing function. It would be better if that was built in to the network virtualization layer. And then finally, service chaining: the idea of taking multiple different services, sorry, multiple different functions, and composing them into an actual, useful end-to-end service. You want to be able to do that in a fairly dynamic way. So to be able to say customer A is gonna get a firewall and a NAT and a load balancer, and customer B is gonna get a video optimizer and some other function, and we're gonna put those different classes of traffic into different service chains.
That's gonna be much easier to do in an agile way if you have some software layer to stitch these functions together. So here's a fairly standard picture of what Neutron looks like, hopefully not too many surprises there. Neutron provides you these APIs that let you do various things like create virtual networks, connect a virtual network to a virtual machine, create logical routers and so on. There's always a set of core APIs that let you access the kind of core Neutron services. And then you tend to see these API extensions, because there are all these different pluggable backends that offer different types of service. So the product that I work on, NSX, provides its own pluggable backend, which replaces the default plugin for Neutron and lets you get access to the networking features of NSX. Some of those are core Neutron features; some of them are accessed through the API extensions. And so anyway, Neutron essentially lets you perform all these network functions by calling APIs, and then there are various different consumers of those APIs that basically let the overall OpenStack system create services that contain networking components. Okay, so what are people actually doing with NFV? Part of my job is that I go around and talk to our telco customers and try to figure out what they really want to do with NFV. And while there is a lot of aspiration around NFV, there are two use cases that seem to come up again and again that we're actually seeing people put into pilots and early production. And essentially these divide between two different types of operators: the mobile operators are mostly focusing today on how to virtualize their Evolved Packet Core.
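To make the Neutron workflow just described a bit more concrete, here's a minimal, purely illustrative Python sketch of the kinds of topology-building calls involved: create virtual networks, create a logical router, and wire them together. All class and method names here are invented for illustration; a real deployment would make these calls through the Neutron API (for example via python-openstacksdk), not this in-memory model.

```python
# Toy in-memory model of Neutron-style topology building. Names and method
# signatures are hypothetical; this only mirrors the shape of the API calls.

class VirtualTopology:
    def __init__(self):
        self.networks = {}
        self.routers = {}

    def create_network(self, name, cidr):
        # A virtual network with its own address space and a list of ports.
        self.networks[name] = {"cidr": cidr, "ports": []}
        return name

    def create_router(self, name):
        # A logical router; no physical router is touched.
        self.routers[name] = {"interfaces": []}
        return name

    def add_router_interface(self, router, network):
        # Attach a network to the router, enabling logical routing between tiers.
        self.routers[router]["interfaces"].append(network)

    def attach_vm(self, network, vm):
        # Connect a virtual machine to a virtual network.
        self.networks[network]["ports"].append(vm)

topo = VirtualTopology()
web = topo.create_network("web", "10.0.1.0/24")
app = topo.create_network("app", "10.0.2.0/24")
r1 = topo.create_router("tenant-router")
topo.add_router_interface(r1, web)
topo.add_router_interface(r1, app)
topo.attach_vm(web, "web-vm-1")
print(topo.routers["tenant-router"]["interfaces"])  # ['web', 'app']
```

The point of the sketch is simply that the whole tenant topology is a sequence of API calls that can be fully automated, which is what makes the agility story possible.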
And the Evolved Packet Core, I've got a picture coming up in a minute, but you can basically think of it as a set of functions that are responsible for getting packets from your mobile handset off to the internet and doing a bunch of things like making sure you get the right services based on your data plan. So we see that as one very popular use case, and there are some pretty big trials going on around the Evolved Packet Core. And then the other one that we see a lot, for the operators who are fixed or wireline, is doing some kind of virtualized CPE. So taking a set of functions that would normally have lived in a customer premises device, functions like firewalling, routing, maybe some encryption, and instead of doing them in a CPE device, moving them into a virtual machine that is actually operated by the operator, and potentially not even on the customer premises. So just to go into these in a bit more detail, here's a picture that I borrowed from somebody who actually understands mobile networks. And the main thing to see here is that on the left-hand side you've got the radio network. There is actually some work on doing virtualization out there, but it turns out to be fairly difficult to virtualize things out in the radio. So typically you have the radio network, and then coming out of the radio network there are a couple of interfaces that deliver, effectively, packets into this Evolved Packet Core. And a bunch of stuff goes on in the Evolved Packet Core. I don't need to belabor that, but it's things like figuring out who you are, figuring out if you're authorized to get certain types of service, and then applying the correct services based on that information. And then ultimately, you wanna get a connection to the internet or to some other data service, potentially a corporate VPN.
So there's a bunch of stuff that goes on in that Evolved Packet Core. That's a standard telco architecture, and the thing you see there is that quite a few different functions are traditionally implemented in different boxes. And then I mentioned the virtual CPE. So virtualized CPE is taking functions that your customer might expect to have running on their premises, but actually hosting those in a virtual environment. And somewhat confusingly, it's not just a matter of taking things that lived on the customer premises and sticking them all inside a virtual machine. It's actually taking those functions and moving them into the cloud somewhere, typically into a data center that's run by the telco operator, and then you can run them inside virtual machines. So you still have to get the traffic from the customer premises into the data center somehow. Maybe you get it in over an L2 connection, maybe you bring it in over an IPsec tunnel; somehow you get the traffic from the customer premises to the data center. But now all of the complicated services that are somewhat custom, and that would traditionally have required configuring a box on the customer premises, get moved into a data center where they can be provisioned automatically by software. So this is actually a very compelling use case for how NFV can make an operator more agile, because now they can change the services that a customer gets very, very rapidly, without having to go and visit the customer and reconfigure their equipment. And in this environment, network virtualization plays a pretty important role. So if you look at network virtualization solutions, they come in a few different flavors. Quite a few vendors now have a network virtualization solution: some of them have just L2, some have just L3, some have a mix, some have firewalling.
So those are what I call the native network services that come as part of your network virtualization solution. So you're gonna get some of those services, which means, for example, if you have a distributed firewall as part of your network virtualization solution, then you could offer firewalling services to the customer directly, whereas the picture up on the top there kind of suggests maybe running a firewall inside a virtual machine. And there's maybe a bit of subtlety here: you can always run stuff in a virtual machine and make that a service available to a customer, but you don't necessarily always want to do it that way because of scaling issues; it's often quite a bit more efficient to actually make it part of the virtual infrastructure layer. Clearly, I've mentioned that you wanna be able to do this in a very agile way. And so if I bring up a customer and that customer needs five different functions, I'm gonna need to spin up a bunch of VMs and connect them together into a service. That means providing networking connectivity between those. And I wanna do all of that without actually going and touching a networking device. I don't wanna have to go and configure VLANs or access control lists or any number of bits of hardware. I'd like to do this entirely in software. I think I mentioned multi-tenancy already. And this is now starting to get us into the topic of service chaining. So that picture on the top there is a very simple service chain, and in the next slide I'll go into some more detail on service chaining. And then the other thing is, this is all pretty much independent of my underlying physical network topology, and it's also independent of location. So I can now offer these services to a customer pretty much anywhere I want, subject perhaps to latency concerns. So we're moving away from the old model where I would have had to say, I have a certain set of functions that are actually located in boxes that are sitting somewhere.
Now I can actually move these functions around very freely. And anytime I move a function, I want it to stay connected to the other functions. So that means I need the networking to be agile as well. So I know there's, I guess, a decent amount of information in this presentation, I hope there is. But I just wanna stress again that if you only take away one or two things from today, a good part of what I'm trying to get you to understand is how central the Neutron and virtual networking piece is to this whole landscape. So now I'm gonna go into part two of the talk, where I'll go into a bit more detail on service chaining. So that picture at the top you've already seen; it's a chain of services. And it's actually what I consider a kind of trivial service chain. A better example of a service chain would be something like this one down the bottom, which you'll notice is actually not a chain, it's more of a general graph. And that's really a better way to think about service chains: they're really graphs of services. And this is an example where the first box in the chain is doing classification. So I'm gonna figure out what kind of user this traffic belongs to, and based on that classification, what set of services I will give them. And based on that, I'll either take the upper or the lower path through those services, to try to give something custom to this class of customer. Creating that kind of topology of services is basically bread and butter for network virtualization. If you think about what Neutron does, what network virtualization does, it basically lets you create topologies amongst virtual machines. So if you think of each of those VNFs as being a function implemented in a virtual machine, a firewall, a NAT, a load balancer, a WAN optimizer, then all I'm really doing with network virtualization is creating this topology. But I can now do that in an automated way and with high agility.
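The idea that a service chain is really a graph, with a classifier at the head choosing which branch a packet traverses, can be sketched in a few lines of Python. Everything here (class names, traffic classes, VNF names) is hypothetical, just to illustrate the classify-then-branch structure described above.

```python
# Illustrative model of a service "chain" that is actually a graph: a
# classifier at the head picks which ordered branch of VNFs a packet visits.

class ServiceGraph:
    def __init__(self):
        self.branches = {}  # traffic class -> ordered list of VNF names

    def add_branch(self, traffic_class, vnfs):
        self.branches[traffic_class] = vnfs

    def classify(self, packet):
        # Toy classifier: premium subscribers get the full-service branch.
        return "premium" if packet.get("subscriber") == "premium" else "basic"

    def path(self, packet):
        # Classify once at the head of the graph, then return the branch.
        return self.branches[self.classify(packet)]

graph = ServiceGraph()
graph.add_branch("premium", ["firewall", "nat", "load_balancer"])
graph.add_branch("basic", ["video_optimizer"])

print(graph.path({"subscriber": "premium"}))  # ['firewall', 'nat', 'load_balancer']
print(graph.path({"subscriber": "economy"}))  # ['video_optimizer']
```

Note that classification happens once, at the head; the next section of the talk is about how the result of that classification travels with the packet.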
So I was chatting with one of my colleagues just during the break and he said, so what's actually hard about service chaining? What's the big deal? And I think what turns out to be difficult about it, I mean, none of it's rocket science, but the hard part is ultimately about efficiently providing customized services and making sure that the packets go where they need to go. Traditionally, that was done by manually stitching things together, using things like VLANs and routing to steer the packets between a set of boxes. So if you think about what we need to do in this fairly simple example, we need to classify the packet, and the classification could be drawing on quite a lot of data, some of which may not even be present in the packet. And at some point we've classified the packet and we've figured out that it needs to get some sort of service. So what we really wanna do is store the result of that classification with the packet so that we don't ever have to classify it again. That's a classic example of metadata: I wanna carry something with the packet that tells me something about the packet that I can't actually figure out by looking at the packet. So if you look at where the arguments are heating up about how to implement service chaining, they tend to focus on how we carry metadata. And if you look at VXLAN, which is kind of the de facto standard for building network virtualization overlays, it actually doesn't have a good place to carry metadata. There are some sort of hacky ways that you can carry it, but there's no good general-purpose way to carry metadata in a VXLAN header. And so we've done some work on suggesting other ways of carrying this metadata. There's another packet format called Geneve that we're working on. Again, I don't need to go into details, but just to say that's one of the core problems: I wanna carry information about the packet all the way through the service chain.
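As a rough illustration of why a TLV-based encapsulation helps here, below is a sketch of building a Geneve header (RFC 8926) that carries classification metadata in an option, something the fixed VXLAN header has no clean slot for. The option class, option type, and metadata payload are made-up values for illustration; a real implementation would use registered option classes and handle the reserved bits properly.

```python
import struct

# Sketch of a Geneve header (RFC 8926) with one TLV option carrying per-packet
# metadata, e.g. the result of an upstream classification. Values are invented.

def geneve_header(vni, opt_class, opt_type, metadata):
    # Options must be a multiple of 4 bytes, so pad the metadata.
    data = metadata + b"\x00" * (-len(metadata) % 4)
    opt_len_words = len(data) // 4  # length excludes the 4-byte option header
    option = struct.pack("!HBB", opt_class, opt_type, opt_len_words) + data
    total_opt_words = len(option) // 4
    ver_optlen = (0 << 6) | total_opt_words  # version 0, opt len in 4-byte words
    flags = 0                                # O (control) and C (critical) clear
    base = struct.pack("!BBH", ver_optlen, flags, 0x6558)  # Ethernet payload
    base += struct.pack("!I", vni << 8)      # 24-bit VNI plus 8 reserved bits
    return base + option

hdr = geneve_header(vni=5001, opt_class=0x0102, opt_type=1, metadata=b"\x00\x2a")
print(len(hdr))  # 8-byte base header + 8-byte option = 16
```

The variable-length option space is the point: any function along the chain can read (or add) metadata without re-classifying the packet, which is exactly what the fixed-format VXLAN header cannot express cleanly.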
And you could even imagine that after the packet goes through VNF3, the chain could diverge again based on the class of the packet. So I might need to carry that metadata all the way through a very long and complicated chain. So that's pretty critical. There's another piece to this, which is that even though I'm drawing these chains as pretty simple things, if you're gonna do this at scale, then you're probably gonna have to have multiple instantiations of these virtualized functions. So that firewall function up the top might actually be implemented by 10 or 50 different virtual machines. So I'd better have a way to load balance the traffic across those, and I'd better have some kind of high availability story about how I deal with the failure of an instance of a service. So those are the kinds of things which I think need to get tackled in service chaining. And as I've kind of alluded to, some of them are pretty straightforward today. Building topology is not that hard, but carrying metadata and dealing with things like load balancing are not entirely cooked today. Here's a pretty useful reference if you wanna read more. That's from the IETF. It's written by a group of operators and vendors talking about how to do service chaining in a mobility environment, like the Evolved Packet Core. Here's another service chaining example. This is taken out of something that we do in the NSX product. So the top picture is what you'd like things to look like logically: things going between a couple of different virtual machines, going through a router and also getting firewalled along the way. And then down the bottom is the physical picture of how we'd like things to look. And, I'll just back up a second, the interesting thing here is that when traffic goes from, say, the web tier to the app tier, you don't just want it to follow the shortest path. You actually want it to follow a kind of non-obvious path through the firewall.
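The load-balancing problem mentioned above, spreading traffic across many instances of one virtualized function while keeping each flow on a single instance, can be sketched with a simple hash over the flow's 5-tuple. The instance names and the flow tuple below are illustrative placeholders.

```python
import hashlib

# Sketch: map each flow to one of N instances of a virtualized function
# (say, 10 firewall VMs). Hashing the 5-tuple keeps a flow "sticky" to one
# instance, so any per-flow state that instance holds stays valid.

def pick_instance(five_tuple, instances):
    # Stable hash of the flow identifier, mapped onto the instance pool.
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    index = int.from_bytes(digest[:4], "big") % len(instances)
    return instances[index]

firewalls = [f"fw-vm-{i}" for i in range(10)]
flow = ("10.0.0.5", "198.51.100.7", 6, 40000, 443)  # src, dst, proto, sport, dport

# The same flow always maps to the same instance.
assert pick_instance(flow, firewalls) == pick_instance(flow, firewalls)
```

For the high-availability part you would remove a failed instance from the pool; note that a plain modulo like this remaps most flows when the pool changes, which is why real systems tend to use consistent hashing instead.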
So that's another example of something that's kind of critical in service chaining: you need to be able to redirect packets into functions that might not actually sit on the data path. And that again can be done pretty straightforwardly by using overlay encapsulations, but it can also be helpful to have some metadata to make sure that packets that need to go on this detour get there, and others that don't need to get there don't. And then one of the things that we've implemented is a firewall that sits inside the hypervisor, kind of as part of the vSwitch. So we do firewalling on traffic as it goes directly from the web tier to the app tier, but we also have the ability to selectively redirect traffic to a third-party firewall. So we can do kind of basic firewalling in the vSwitch and then more advanced firewalling in a virtual machine. And we don't send all the traffic to this third-party firewall, but only a subset of it. And so this is another example of non-trivial service chaining, where even packets from a given customer might take different paths based on something that's either in the packet or in the metadata surrounding that packet. So that's kind of service chaining. And I notice we're getting close to Q&A time here, so let me speed up a little bit. I just want to point out that some of this stuff is just very basic Neutron, like building topologies. Neutron's really good at building virtual topologies; it gives you all the ability you need to do that, and it does some amount of service insertion for things like load balancing and firewalling. This is not really a complaint about Neutron, but there's not a general-purpose way to express that I want to attach metadata to packets. Part of the reason is because there's not even general agreement on how to do that in the networking community.
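The selective-redirect behavior just described, basic firewalling inline in the vSwitch with only some flows detoured to a third-party firewall VM, can be sketched as a tiny rule table. The rule fields and action strings here are invented for illustration and don't correspond to any real NSX or Neutron API.

```python
# Sketch: decide per packet whether to handle it with the built-in
# distributed firewall in the vSwitch, or detour it to a third-party
# firewall VM that is not on the shortest path. Rules are illustrative.

REDIRECT_RULES = [
    # (dst_port, action): send inbound HTTP through the advanced firewall VM;
    # everything else stays on the inline vSwitch firewall.
    (80, "redirect:advanced-fw-vm"),
]

def next_hop(packet):
    for dst_port, action in REDIRECT_RULES:
        if packet["dst_port"] == dst_port:
            return action
    return "inline:vswitch-firewall"

print(next_hop({"dst_port": 80}))   # redirect:advanced-fw-vm
print(next_hop({"dst_port": 443}))  # inline:vswitch-firewall
```

The key property is that only the matching subset of traffic takes the detour; the rest follows the normal path, which is the scaling argument made above for keeping basic functions in the virtual infrastructure layer.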
And then some of the ways of inserting services, like inserting a service that's actually not in the data path for a packet: as far as I know, that's not supported in Neutron today, and nor is the idea of selectively directing packets to a service. So that was my quick summary of, if we're gonna actually make this stuff really work with OpenStack, there's some opportunity here to extend the sorts of things we do in the virtual networking layer of OpenStack. So the very last thing I want to do is just my quick advertisement for OVN. So OVN is pronounced "oven", hence the logo kind of looks like an oven, and it's built on top of the Open vSwitch project, which hopefully you've all heard of. So what do you need to know about OVN? From this presentation, not very much, except that OVN provides an open source virtual networking layer that is basically targeting OpenStack environments. It's targeting a few other environments too, but you can think of OVN as a new option for providing open source virtual networking in OpenStack environments, which is clearly applicable to everything I've talked about. It's gonna support a whole lot of different capabilities like security groups, ACLs, and logical switching and routing; it can be mapped onto different types of overlay; and it will support all the kinds of environments that OVS works on today. And so if you were one of the fortunate few who put their hand up, you could have seen a demo on Tuesday, and for the many of you who didn't put your hand up, that presentation has been videoed, and it's listed down there, "OVN: Native Virtual Networking for Open vSwitch". You can also, if you're that way inclined, get involved in the development of OVN. It's being done as a sub-project within Open vSwitch, and you can download the code and all kinds of stuff. You can read a blog about it.
So that's all I wanted to say about OVN, but I think it's actually a really exciting development, given that we hear a lot of negative things about Neutron networking, at least I've heard quite a few this week, and I think often people conflate all these different components of Neutron. We actually have tremendously successful customers running Neutron on top of NSX, but most people probably don't run Neutron on top of NSX; they run Neutron with, say, the standard Open vSwitch plugin. This is another attempt at providing an open-source plugin that could be very scalable and robust for Neutron. So to sum up, there's a lot of thrust behind NFV. This is the old joke that with enough thrust a pig will fly. That may or may not be appropriate in this environment, but there's definitely a lot of people who really want to see NFV succeed. I would count myself among them, and certainly our largest telco customers really are betting very big on this. I'm actually pleasantly surprised at how well NFV has taken off, given that there was not that much incentive for traditional vendors to jump on the NFV bandwagon, but they seem to be responding to the wishes of their customers in a fairly encouraging way. So the big thing here, I think, is all about agility: about getting new services out quickly and also making it easier for Operator A to be differentiated from Operator B. There is some amount of cost driving this, but I don't think it's the only thing to focus on by any means. There's actually a lot of overlap, or certainly a lot of interest in OpenStack, from the NFV community.
That's kind of interesting because, you know, I think most of you would probably agree that OpenStack is not for the faint of heart; it requires a fairly high level of sophistication. I think a lot of the telcos are effectively saying, we're going to make the investment to become OpenStack experts, because we see it as the best thing out there for managing these kinds of cloudy environments. I didn't really talk much about this, but NFV is really about moving away from siloed architectures to more open architectures, and one of the things we need to be very careful of is that we don't end up in just a different set of silos. If today I sell traditional telco equipment, and tomorrow I stick it in a virtual machine, put a hypervisor underneath it, bundle the whole thing up, and sell that on my own hardware, I've only replaced one silo with another and inserted the extra cost of a hypervisor. That's not a good outcome. What we really need is more open, horizontal architectures. That's the approach we're taking in the work that I do at VMware, and it's also the approach that the OpenStack community broadly takes. So I think that's another reason why OpenStack is so interesting for NFV. And I think I've hammered this point enough by now: doing NFV is not just about virtualizing compute. It's about virtualizing everything, and that means networking and storage. So hopefully I've made that pretty obvious, and you can get a sense of what it's going to take to make NFV successful. With that, I'm losing my voice, but thank you for your time, and I'm happy to take a few questions, or applause if you prefer. Sorry, that was horrible to have to beg for it, but anyway, question there.

So, quick question. You talked a lot about service chaining and the need for metadata. I think there's been a lot of work that's been done on a network services header.
I wondered if you wanted to comment on that, and how your system, or the way you approach things, relates to it. Yeah, so the network services header is, I think, a good example of one of the approaches to carrying metadata. I think it's basically work going in the right direction. I spent many years going to the IETF, and I don't go as much as I used to, but essentially we're now at the stage of arguing about exactly where the bits go in the packet, as opposed to the overall structure of the information. So I'd say that's a good thing. Great, okay, I'd like to see that happen.

Other questions? I was told to allow time, but this is a super quiet audience. Was it all crystal clear? Awesome, awesome. All right, here's a question, okay. Yeah, so I'll just repeat the question for those who didn't hear it. The question was, given that there are already some open approaches to doing virtual networking, why do we need another one? And I think there was a sense that, you know, Open vSwitch is still the most popular vSwitch in OpenStack environments, and we wanted to make Open vSwitch a first-class candidate for virtual networking in OpenStack environments. So we looked at this not so much as "let's go and build another virtual networking solution," but as "let's incrementally grow the capabilities of OVS," because one of the reasons people have had mixed experiences with Neutron is that the default OVS plugin is kind of not that great. And so having something more robust as the default would, I think, be a big win.

Other questions? Okay, we're going to take a follow-up from the front row here. The question was about whether you can pass VLAN tags all the way up to the guest with OVN. I don't know the answer to that off the top of my head, but I know that adding such a capability would be pretty straightforward.
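As a concrete aside on the network services header question above: the part of NSH whose overall structure was fairly settled, even while the exact bit positions were still being argued over, is the service path header, a 24-bit Service Path Identifier naming the chain plus an 8-bit Service Index tracking the packet's position in it (this is the layout that was eventually standardized in RFC 8300). A minimal sketch of packing and updating that metadata, purely for illustration:

```python
import struct

def pack_service_path(spi: int, si: int) -> bytes:
    """Pack the NSH service path header: a 24-bit Service Path
    Identifier (which chain the packet is on) and an 8-bit
    Service Index (how far along the chain it has travelled)."""
    assert 0 <= spi < (1 << 24) and 0 <= si < (1 << 8)
    return struct.pack("!I", (spi << 8) | si)

def unpack_service_path(header: bytes) -> tuple[int, int]:
    """Split the 32-bit word back into (SPI, SI)."""
    (word,) = struct.unpack("!I", header)
    return word >> 8, word & 0xFF

def traverse_service(header: bytes) -> bytes:
    """Each service function decrements the Service Index before
    forwarding, so the next hop can be looked up from (SPI, SI)
    without the service keeping per-flow state."""
    spi, si = unpack_service_path(header)
    return pack_service_path(spi, si - 1)
```

For example, a packet entering chain 42 at index 255 carries `b"\x00\x00\x2a\xff"`, and after passing through one service the header becomes `b"\x00\x00\x2a\xfe"`.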
Whether it gets done soon, I don't know; there's a long roadmap of things to do with OVN, but that could definitely be done. Okay, if there's no one else... here we go, one last one. Yeah, I was wondering, any estimate on when the optimized version of OVS will be out? Sorry, could you repeat that question? Any estimate on when the optimized version of OVS will be out? So the question is, when will the optimized version of OVS be out? Are you talking about any particular optimization? Because we're kind of optimizing it all the time. It would be the DPDK version. So the DPDK version of OVS, as far as I know, is available today. I'm not sure exactly what state of maturity the DPDK version of OVS is in, but there is definitely a version today. There's a long discussion to be had about all the pluses and minuses of DPDK, but in general, if you have an environment where you need a DPDK-optimized version of OVS, I'm pretty sure it's available today. Okay, well, the questions are getting further and further from my expertise, so it's a good time to stop. Thank you so much for your time.