Alrighty. Welcome this morning. I guess it's going to be a little cozy in here with all the people coming in. So my name is Mark McClain and this is Kyle Mestery. Just a little bit about me: I'm a member of the Technical Committee, a core reviewer for Neutron, and a former PTL of Neutron, currently CTO at Akanda Inc. And I'll let you introduce yourself. And I'm Kyle Mestery. I'm currently the Neutron PTL, and I'm also the Chief Technologist for Open Source Networking at HP. Alright, so let's dive into Neutron 101: where are we headed today? Basically, we're going to cover OpenStack Neutron. What is it? It is the OpenStack networking project, and underneath it has a constellation of projects, which is kind of cool; we'll touch on some of those. What are the open source implementations and the reference implementation? There are multiple open source backends, but we'll cover what the reference architecture looks like. Some of the community initiatives regarding Neutron. And what's in the most recent release, Kilo, plus a little peek ahead to what's in the next release, Liberty.

So, OpenStack. I thought we'd just level set; I know everyone probably came to the keynotes, but just to level set: OpenStack is a project founded in 2010. As of the Kilo release, we have 3,654 contributors contributing upstream into OpenStack projects. There are ten projects in the integrated release, plus a larger ecosystem on StackForge; actually, a pretty significantly large ecosystem on StackForge. It's production-ready cloud software, as you saw during the keynotes today, being deployed by a lot of different enterprises and a lot of different groups, with a lot of people building applications on top. The latest release, the 11th release, was Kilo, which was just recently released. And it's licensed under the Apache 2 license.

So, I think we're obligated to show this type of slide, just to level set everyone on what OpenStack is. I'm sure everyone's seen this before. This is the high-level overview you can show people so that, conceptually, they can see what it is: compute, networking, storage, all the shared services underneath, the dashboard, things like that. We're obviously going to talk about OpenStack Neutron, the networking component, today. And as we go through this, it's interesting to know what the user sees, right? Through the GUI, through the CLI, through the APIs. The user, the tenants in this case, sees the APIs they can use to interact with the system, whether it's compute, network, or storage. And of course, these could be backed by something like KVM on the compute side, the ML2 plugin on the networking side, or something like Ceph on the storage side. But the point is, the user doesn't see those technologies; they see the abstractions through the APIs. So, once we take those basic APIs and dive into what exactly OpenStack networking in Neutron is, we've talked about the basics, where you see KVM and ML2. Also, just to level set: you take networking, you talk to non-networking people, and all the networking people just start talking about L2, L3, L7. What are the network layers?
Just real quick, I'm not going to go into a full discussion of the OSI model, but basically: layer 2 is the link layer, the wires that connect everything; layer 3 is IP, v4 and v6; and layer 7 is where we start getting into the application layers. Those are the layers we mean when we talk about the numbers; if you want to read more, there's lots of information on the web.

So, when we talk about the abstractions in Neutron: if you take a look at the orange boxes at the top, those are essentially the VMs controlled by Nova. Each of the virtual servers has a virtual interface, a VIF. That VIF is managed by Nova, so Nova creates the VIF, but everything below that, going to the bottom of the slide, is managed by Neutron. At layer 2 you have a virtual network in Neutron; a network is one of the core resources in Neutron, and it provides layer 2 connectivity. You have the virtual subnet, in this case 10.0.0.0/24; that's the layer 3 service, and you can also choose IPv6. And attached to that virtual network you have a virtual port. So network, subnet, and port: those are the three core resources you will find in all Neutron deployments, regardless of which extensions are enabled. From there, Neutron is responsible for the wiring: you take a port, the VM has its VIF, you plug the VIF into the port, and now your VM has connectivity. Those are the basic abstractions we have.

Fortunately, we can use the Neutron API to do elaborate setups like this, where you have multiple tenants: you have the orange tenant and the green tenant, and if you notice, they're using overlapping IP space. One of the features of Neutron is that you can have overlapping IP space, which is really helpful if, say, you're doing CI/CD pipelines: you want to do testing, configuration testing, and you can basically replicate your entire environment, down to the IPs, virtually within Neutron.

Some of the design goals when we set out to create Neutron: to have a unified API and deliver network services through it, and to provide a small core. One of the benefits of having a small core is that it's easy to have very compliant implementations, because you have a very small surface area you have to match. We also wanted a pluggable, open architecture. We didn't want to dictate which technologies were used, because there are multiple ways to provide layer 2 connectivity, so we wanted it to be pluggable and to really empower deployers to have their choice of back end. And extensible: if we have a very small core API of networks, subnets, and ports, how else do we expose higher-level services? Common extensions you'll find are, say, the routing extension, the security groups extension, load balancing, and VPN; these are services we've layered on top of that very small core.

Like I mentioned a little earlier, some of the common features you'll see are support for overlapping IPs, which is one of the differences you'll find, as well as built-in support for DHCP and metadata services. Floating IPs are a common feature you'll find across all implementations. Floating IPs, if you're not familiar, basically give you the ability to map an address from the public range onto a particular instance and have it float around. Some people will also refer to them as VIPs, but floating IPs are specifically different from VIPs in Neutron, so sometimes the terminology gets confused.
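To make those abstractions concrete, here is a minimal sketch of the workflow using the Kilo-era python-neutronclient. The credentials, endpoint, and external network ID are placeholders, so treat this as an illustration rather than a recipe:

```python
from neutronclient.v2_0 import client

# Placeholder credentials and endpoint; any Kilo-era Keystone auth works.
neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# The three core resources present in every Neutron deployment.
net = neutron.create_network({'network': {'name': 'demo-net'}})['network']
subnet = neutron.create_subnet({'subnet': {
    'network_id': net['id'],
    'ip_version': 4,
    'cidr': '10.0.0.0/24'}})['subnet']
port = neutron.create_port({'port': {'network_id': net['id']}})['port']
# Nova plugs the instance's VIF into this port to give the VM connectivity.

# Map a public address onto the port; EXT_NET_ID is a placeholder for
# the UUID of your external (public) network.
fip = neutron.create_floatingip({'floatingip': {
    'floating_network_id': 'EXT_NET_ID',
    'port_id': port['id']}})['floatingip']
print(fip['floating_ip_address'])
```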
Security groups: security groups basically protect the VIF on the hypervisor. And because we have support for overlapping IPs, different tenants can have rules that don't conflict with each other, even if their guests are running on the same hypervisor. You can apply security group rules to both egress and ingress traffic. Egress is a little bit different: Nova doesn't support egress rules, Neutron does. They're fully IPv6 capable, and you can have different security groups on different VIFs. So in a multi-tier architecture, a VM on your public network may have one set of security group rules, and the tier accessing the database layer may have a different set. One of the interesting things about security groups is that they're a logical concept, so the actual implementation is left up to the back end; some of the very smart back ends are able to offload the processing of security group rules and make it very efficient.
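Here's a hedged sketch of those security group calls, reusing the client from the earlier sketch; the remote group ID is a placeholder:

```python
# Reusing the `neutron` client from the earlier sketch.
sg = neutron.create_security_group(
    {'security_group': {'name': 'db-tier'}})['security_group']

# Ingress: allow MySQL only from instances in the web-tier group.
neutron.create_security_group_rule({'security_group_rule': {
    'security_group_id': sg['id'],
    'direction': 'ingress',
    'ethertype': 'IPv4',
    'protocol': 'tcp',
    'port_range_min': 3306,
    'port_range_max': 3306,
    'remote_group_id': 'WEB_SG_ID'}})   # placeholder UUID

# Egress rules work the same way (something Nova's security groups
# don't offer), and 'ethertype': 'IPv6' gives the same controls for v6.
neutron.create_security_group_rule({'security_group_rule': {
    'security_group_id': sg['id'],
    'direction': 'egress',
    'ethertype': 'IPv4',
    'protocol': 'tcp',
    'port_range_min': 443,
    'port_range_max': 443}})
```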
Okay, so next we're going to talk a bit about the architecture of Neutron. This is a pretty easy-to-read, nice diagram of what a typical OpenStack architecture looks like. So which piece are we going to talk about? We're going to talk about that piece right there; that's the networking piece. You can't really tell, but that's it. So that's this.

Okay, so what does a basic deployment of Neutron look like? It starts with the Neutron server. The Neutron server is deployed, and this is where the API layer lives, and where the database layer lives and interacts with the database; you can see it right there. The next piece is the message queue: this is how the Neutron server interacts with the different components. In this particular case, if you're using the current built-in reference implementation, you're going to end up with an L2 agent, which at this point can be either the OVS L2 agent or the Linux bridge L2 agent. You're probably also going to need a DHCP agent, so that agent will exist somewhere; you can have multiple of these running, and you can schedule networks and subnets to different DHCP agents. You're going to have an L3 agent if you're doing L3. If you're not using distributed virtual routing, the L3 agent will handle routing, and it will also handle floating IPs; you can have multiple of these as well, deployed in different configurations. It's worth noting that with the L3 agent we actually support HA using VRRP, so you can have HA functionality there, and if you're using DVR, we can offload the east-west routing down onto the hosts using OVS, and we can also handle the DNAT down on the hosts; we still need the L3 agent for SNAT. And then there are the advanced services as well: things like firewall, load balancing, and VPN at this point.

So what does the Neutron server itself look like? This is what the Neutron server is composed of: you have a plug-in on the bottom, and then you have the REST API service and the RPC service. It's really that simple. The plug-in can either be a monolithic core plug-in, or it can be the ML2 plug-in, which itself has mechanism drivers that can host multiple different technologies at the same time. So what does a monolithic plug-in look like? A monolithic plug-in has a full implementation of all the core resources inside itself. You can either do a proxy with this or direct control; if you're doing a proxy, you maybe aren't even using the Neutron database, you're just proxying API calls across to something else. So that's what that looks like. And what does the ML2 plug-in look like? The ML2 plug-in is a full v2 plug-in implementation, and it segregates the mechanisms from the types; by types, in this case, we mean types of segmentation, whether that's VLAN or different tunnel types like VXLAN or GRE. ML2 delegates all of the calls to the proper L2 drivers, so with this type of setup you can actually run Linux bridge and OVS at the same time. You can run drivers for physical switches with this as well, and ML2 will take care of working with those.

So, plug-in extensions. I think Mark alluded to this in the architecture slide as well. We do allow for extensions to the API, so you can add logical resources to extend it; things like security groups are actually extensions of the core API. Like it says right there, other things such as port binding, DHCP, L3, and quotas are all implemented as extensions, and we also have allowed address pairs, extra routes, and the metering API as extensions as well.

So, the L2 agent; let's go into a little more detail on the L2 agent, as we said. The L2 agent actually runs on the hypervisor, and it communicates back to Neutron via an RPC layer at this point. The L2 agent's main job is to watch for and notify when devices are added and removed, and to configure the networking on the host for each device. Whether that's Linux bridge, in which case it uses the brctl commands to set things up, or OVS, in which case it uses things like ovs-vsctl to configure the networking on the host. The L2 agent also handles setting up security group rules on those hosts.

So, the Open vSwitch L2 agent works with Open vSwitch, as you can see there, and it supports VLAN, GRE, and VXLAN networks. It uses OVSDB to talk to OVS and wire these things up. It's worth noting how the current OVS agent works with tunnel networks: it configures two bridges on the host, and the tunnel networks are configured between the hosts. On the host itself, it uses local VLANs to segregate traffic, and those VLANs only have meaning on the host they live on. Then it uses RPC to talk northbound back to the Neutron server. So like we said, these provide isolation locally; you can use VLANs, you can use GRE, it's kind of whatever you want. We provide the capability, and you can look at the trade-offs between the different types of isolation and what might work with your infrastructure.
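To illustrate why those local VLAN tags only mean anything on their own host, here's a purely conceptual Python sketch; this is not the agent's actual code, just the allocation idea:

```python
class LocalVlanMap(object):
    """Conceptual stand-in for per-host local VLAN tag allocation."""

    def __init__(self):
        self._by_network = {}
        self._next_tag = 1

    def tag_for(self, network_id):
        # Hand out the next free local tag the first time a network
        # shows up on this host; reuse it afterwards.
        if network_id not in self._by_network:
            self._by_network[network_id] = self._next_tag
            self._next_tag += 1
        return self._by_network[network_id]

host_a, host_b = LocalVlanMap(), LocalVlanMap()
host_b.tag_for('net-green')                # host B saw "green" first
assert host_a.tag_for('net-orange') == 1   # orange is tag 1 on host A...
assert host_b.tag_for('net-orange') == 2   # ...but tag 2 on host B
```

Roughly speaking, the real agent pairs an allocation like this with OVS flow rules that translate between the local tag and the tunnel or provider segmentation ID at the edge of the host.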
So how does tunneling work? If you assume all of those nice lettered boxes are hosts, we'll build a mesh of tunnels like this with the OVS L2 agent. And then we have these VMs. This is showing you that, previous to L2 population, the way this worked was that we would essentially end up flooding broadcast traffic across the mesh to figure out where the other VM was. L2 population is a feature that was implemented a couple of releases ago, and it's still supported and used; it's a little bit more intelligent about building the peer-to-peer tunnels.

So, the L3 agent. We talked about connectivity with the L2 agents; the L3 agent implements an extension we added to Neutron. One of the primary building blocks of the L3 agent in the reference implementation is Linux network namespaces. We use these whether you're running the plain agent on the network node or running it within DVR. Basically, a Linux network namespace provides an isolated copy of the network stack. The nice thing about it is that you get your own private loopback, because the IP stack is monolithic if you don't do this, and that way you can have overlapping IP addresses, which is very important for reusing them; the scopes are limited to the namespaces. So if you look at our host, we have the root namespace, which has devices like eth0, eth1, and a bridge, and then you also have namespaces A and B, which have devices with the same names, but they're actually different devices. A lot of times what you end up doing is creating a device like eth0 in a namespace and plugging it into a bridge or a switch in another namespace; in this case we wired them together, but it requires explicit configuration to connect them. Another benefit is that you can spawn processes within those namespaces. So if you're providing routing or forwarding functionality, the forwarding is separate from one namespace to another, and if you want to have multiple DHCP servers that support overlapping IPs, you can spawn a process per namespace; the process context is restricted to its namespace. It's a really cool feature of the Linux kernel.
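If you want to see that isolation first-hand, here's a small sketch you can run as root on a Linux host; the namespace and device names are purely illustrative. Both namespaces carry the exact same address without conflicting, which is the trick behind overlapping tenant IPs and per-router namespaces:

```python
import subprocess

def sh(cmd):
    # Thin wrapper; raises if the command fails. Requires root.
    subprocess.check_call(cmd.split())

for ns in ('tenant-a', 'tenant-b'):
    sh('ip netns add %s' % ns)
    # Give each namespace its own interface carrying the SAME address.
    sh('ip netns exec %s ip link add eth0 type dummy' % ns)
    sh('ip netns exec %s ip addr add 10.0.0.1/24 dev eth0' % ns)
    sh('ip netns exec %s ip link set eth0 up' % ns)
    # Each namespace sees only its own stack.
    sh('ip netns exec %s ip addr show eth0' % ns)
```

The reference agents do the same thing at a larger scale: one namespace per network or per router, with the relevant processes spawned inside it.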
So, the L3 agent is something you typically run on a network node. It uses namespaces, it will typically have the metadata agent enabled to provide metadata services if you're deploying that, and typically you're going to provision one or more network nodes. Mainly what we do is place the logical router on one network node, and if you have HA enabled, we create a second namespace on another network node and use VRRP between them, so you get HA support. So typically, how we implement it: like I said, we isolate the IP stacks and we enable forwarding, which here is just enabling IPv4 forwarding. One of the cool features in Kilo is that IPv6 is fully supported as well. You basically get static routing; the dynamic routing you might expect to find, like BGP or OSPF, is not currently supported in the reference implementation. The agent also provides a proxy to the metadata service. So, sitting a level above that, that's how the L3 agent works. And if you're running DVR, distributed virtual routing, we actually spawn a little mini layer 3 agent on each of the hypervisors, which, if you have a floating IP mapped, will handle the NAT for that floating IP for a specific instance. That gives you higher north-south bandwidth out of a particular host, and it also improves east-west routing, because you're basically running the routing in the host.

Higher-level services: this is where we start talking about layer 4 through 7 services. One of the extensions we have available is Load Balancing as a Service v2. LBaaS v2 is actually new in Kilo, so if you've looked at it before, we've changed up some of the models; they have slightly different attributes. It's really a cool community initiative we've had over the last year, where multiple members of the community came together and said, hey, there are some things we need to fix in load balancing. The community worked together, came up with a revised design, worked through it, and delivered it, so it's really cool. Basically, in the reference implementation you're again going to have a network namespace. It's driver-based; the agent talks to the driver via RPC. The blue box is the namespace you have running, and in the open source version, HAProxy is the process we're using, which lets you provide basic load balancing services.
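For a flavor of the v2 API, here's a hedged sketch against the Kilo LBaaS v2 REST endpoints; the endpoint, token, and subnet ID are placeholders:

```python
import json
import requests

NEUTRON = 'http://controller:9696'            # placeholder endpoint
HEADERS = {'X-Auth-Token': 'TOKEN',           # placeholder Keystone token
           'Content-Type': 'application/json'}

def post(path, body):
    resp = requests.post(NEUTRON + path, headers=HEADERS,
                         data=json.dumps(body))
    resp.raise_for_status()
    return resp.json()

# A v2 load balancer gets a VIP on an existing subnet...
lb = post('/v2.0/lbaas/loadbalancers',
          {'loadbalancer': {'name': 'web-lb',
                            'vip_subnet_id': 'SUBNET_ID'}})['loadbalancer']

# ...and a listener binds a protocol and port to it. Pools and members
# hang off the listener in the same style.
listener = post('/v2.0/lbaas/listeners',
                {'listener': {'name': 'http',
                              'loadbalancer_id': lb['id'],
                              'protocol': 'HTTP',
                              'protocol_port': 80}})['listener']
```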
VPN as a Service is very similar, except this time, within the router namespace on the network node, you have the VPN software running; it uses Openswan. Firewall as a Service is an experimental service that we have. It provides edge firewall services to the logical network: essentially, the firewall rules and policies are applied at the router level on the agent node. Again, this is experimental, so we need deployers to test it, provide feedback, and assist the team in making sure it's ready for production.

Okay. So we just went through a high-level overview, and as you know, we just released Kilo, so I thought we'd take a look at what we added in Kilo. As we said, there are a lot of plugins and drivers that can back Neutron; we explained the default reference implementation, but there are a lot of other ones. So this is a count, and that's a lot of drivers; it's really hard to see, but with what we added in Kilo we actually pushed past 50 plugins and drivers, both vendor-based and open source. It's a growing ecosystem, and as you can see, we're adding a lot of different stuff. In this case, a lot of these were service plugins, whether it's VPN or LBaaS or firewall. So we're adding a lot of advanced service plugins now. I think that's because a lot of people have implemented L2 and L3 already, so now vendors and open source projects that have the capability to do these advanced services are integrating with Neutron as well.

It's also worth noting, and I'd like to spend a second on this, the plugin decomposition we did during the Kilo cycle. Previously, before Kilo, all of these vendors, all of these drivers and plugins, had all of their code inside the Neutron source tree. That's a lot of code and a lot of drivers to support. So we came up with this decomposition process, which moved the bulk of the back-end code for a lot of these out to a StackForge repository and kept a small shim layer inside, so they can still effectively ship with Neutron in the OpenStack releases. But the back-end logic, which is specific to all of these, and not just these but all 50-plus drivers and plugins, lives outside. There's actually a panel discussion later today where we'll go into this in detail, but it's been a pretty successful thing for Neutron as a whole. And in fact, one other change we're making now in Liberty is that we're bringing those back ends back under the Neutron tent. There's a talk on the whole change in the OpenStack governance model; we like to think of this as the Neutron tent, where we're bringing these back ends in as separate Git repositories, released separately, still owned by whatever project, but they'll fall under the Neutron tent as well. So we're growing the ecosystem that way.

So, let's see. In layer 3, we added some interesting new features. As Mark said, we have full IPv6 support now; the team working on that got it done, and that's pretty exciting, because I think a lot of people were interested in IPv6 support. DVR, the distributed virtual router functionality, now supports VLANs as well; in Juno it only supported tunnel network types. And subnet pools. Subnet pools is actually an interesting API addition, and I think we have a nice graphic to explain it. Previously, when you allocated a subnet, you had to actually pick the subnet addressing, the CIDR that you wanted. So we added this functionality called subnet pools, which lets the admin create a pool like this, and then when someone allocates a subnet, they can just specify the subnet pool and it will automatically allocate a subnet from it. So your users can allocate subnets without having to worry about specific addressing; they can just keep allocating until the pool is full. That's actually a really handy, really useful feature, because it removes a bit of the friction of having to know exactly the addressing you want, and you can take advantage of it in Kilo.
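Assuming a Kilo-era client that exposes the subnet pool calls (the names here are placeholders), the flow looks roughly like this:

```python
# Admin defines the pool once; the prefixes and default prefix length
# are up to you.
pool = neutron.create_subnetpool({'subnetpool': {
    'name': 'tenant-pool',
    'prefixes': ['10.10.0.0/16'],
    'default_prefixlen': 24}})['subnetpool']

# Users then allocate from the pool without choosing a CIDR themselves;
# Neutron carves out the next free /24.
subnet = neutron.create_subnet({'subnet': {
    'network_id': net['id'],            # network from the first sketch
    'ip_version': 4,
    'subnetpool_id': pool['id']}})['subnet']
print(subnet['cidr'])                   # e.g. 10.10.0.0/24
```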
We also had a new feature in Kilo called Hierarchical Port Binding, which is what that diagram shows. There's actually a talk on Hierarchical Port Binding tomorrow, same time, so you can learn about it there. Port security for OVS: this was an important one, and the NFV people like it, because it allows you to disable the security group support for OVS. Neutron installs a bunch of default security group rules, and this allows you to disable that, so that's handy. We have some new API extensions, also NFV-related: MTU and VLAN transparency. The MTU API extension allows you to specify the underlying MTU for the network. So if you know that your network supports jumbo frames, or if you want to support a smaller MTU because you're using tunnel networks, you can do that, and if you're using the built-in DHCP agent, it will actually advertise that via DHCP, so your VMs, if they honor it, will actually pick up the lower MTU. And then we have VLAN transparency as well. This API extension allows you to specify that the underlying technology you're using can pass VLAN-tagged frames. That's all it does; it's not actually trunk ports to VMs, but it lets you at least get partway there. Oh, I guess we like this slide; it's a lot of drivers and plugins.

So, okay, a look ahead to Liberty. Where are we going? What are we looking at in Liberty? Of course, we're going to be discussing that this week at the design summit, but we've got a rough high-level overview of what we'd like. And that's Liberty, Saskatchewan; the first thing you hit when you Google for Liberty in Canada is Liberty, Saskatchewan. So, IPAM: pluggable IP address management. This was work started during the Kilo cycle that never merged at the end; it's definitely something we're going to merge in Liberty. Right now, Neutron provides its own default IPAM implementation; this will make it pluggable, so different IPAM management systems can plug in. I think this is a much-requested feature; a lot of enterprises have existing IPAM systems, so I think it will be a nice addition. BGP speaker support: this has been looked at off and on over the last year, and I think we're going to look at merging it this time as well. It was reliant on a bunch of refactoring work in the L3 agent that went on during the Kilo cycle, and that work is all done. We're also looking at NFV enhancements, which are things like service function chaining and possibly enhanced security groups; there are actually a lot of proposed blueprints at this point around enhancing security groups in various ways, so I think those are some interesting things. And then, paying down technical debt; I think there are still some things there. For instance, we're going to look at doing API microversioning, which is what Nova recently did. We're looking at possibly pulling all the extensions in, so we don't have all of these extensions and can make them core API attributes; I mean, you can't really do things without security groups, and it's an extension, so it makes sense to pull that in. Also, we're looking at the WSGI layer again to see what we can do there. So that's a lot of what we're looking ahead to.

If you need more information, there's the documentation, the great docs that our documentation team works on, and there's the API reference as well, which covers the core attributes as well as all of the documented extensions upstream. So I think that's it; I think we left a little bit of time for questions. Thank you. Does anybody have any questions? I think there's a microphone; if you can't get to it, we'll repeat the question. We'll repeat. Yes, we'll post the slides up on SlideShare, definitely. Any other questions? So the question was about the L3 agent with the HA functionality using VRRP.
So the best place to look is actually in the devref guide, because I think there's a detailed description of how that works. But at a high level, yes, it's using VRRP: if you lose a node that has an L3 agent, it'll automatically fail over to the other one, so it provides some redundancy. And I think it's an important feature, because even if you're using DVR, which distributes the routing functionality down, you're still using an L3 agent for the source NAT, so combining that with L3 HA works well. Additionally, conntrackd is running to sync state between the two namespaces, so that you have consistent connection state.

So the question was around the advanced services and the different vendors with their different support and features. Right. That's one of the things when you're coming up with a common API like this: initially, you might have to work it down to what's common between everything. So we allow the extensions, so you can add extended features if you have either vendor or open source features you want to add. Now, the downside is that if you're writing against the APIs, you have to check to make sure those extensions are present, so it's a trade-off. And also, building on the small kernel over time, you're seeing the community itself do this: with the new v2 load balancing service, they're able to build on it, and this cycle they're going to focus on higher-level features, so they'll finish TLS termination and layer 7 load balancing, and you'll see a richer feature set. So from the community side, you're going to start seeing common APIs developed for that, as well as the extensions you mentioned. Yeah, definitely. And I think the other thing with the extensions is that if you're using an extension and it turns out to be really useful, you might see other drivers or plugins adopting that extension as well. Exactly; it kind of gives the users a chance to try it out. And over time with Neutron, that's what we've seen even with the layer 2 and layer 3 services: you typically have a vendor who helps pioneer and fill out the space and figure out what the API looks like, and then over time you see more broad adoption. That happened with security groups, port security, and some of the other features.

Right. So the question was the latency of OVS; I guess that would depend on what version. Ah, who's maintaining OVS? So, who's actually working on Open vSwitch itself. Well, that's actually a pretty vibrant community, and it's a separate open source project with its own steering team. They have IRC channels and mailing lists, and they're constantly working on it. The one common thing we see with OVS, though, as Kyle mentioned, is the version people run: some of the distros package specific versions of OVS, and if you have a particularly old version, you may not get some of the newer enhancements, such as the caching of microflows and, later, megaflows. They keep making Open vSwitch more performant, so they're working on that. And coming this summer, one of the things they're working on that we'll hopefully get is integration with connection tracking.
With that, we'll be able to get rid of the extra bridge, basically the virtual hops we're putting in the hypervisor to apply security groups, and apply those rules directly in OVS, which will again improve the data path and get rid of some of the bumps we have along the way. But as far as that goes, it's a separately managed, very active community; Open vSwitch is really cool in what they're able to do and what they've been able to deliver over time. Yeah, absolutely, and they have a really receptive community, for sure. And I was going to say two things: there's actually a talk on that connection tracking feature on Thursday, I think, as well, so we're pretty excited about that. Oh, sorry, the gentleman back here had his hand up for a while, so I'll repeat the question and let you answer.

Okay. So the question was that, right now, if you take a look at DNS in OpenStack, you have an overlap between Nova networking, Designate, and Neutron in terms of DHCP: how do you get a host name, how does resolving work, how does reverse resolving work, and which service provides that? Now Kyle's going to give you a nice answer. So, yeah, I'll say that a lot of that is being discussed now, right? The Designate project owns DNS as a Service right now, and they're doing a great job with that. But like you said, we can work to see how we can integrate that more, and I think the IPAM work can help with that a bit as well. Again, it's going to take Nova, Neutron, and Designate working together to really make that a seamless experience for you.

Oh yes, then we'll go back. So the question was around tooling: monitoring, detecting failure, that sort of thing. That's actually a good question. For monitoring, we do support things like Ceilometer; we have integration with that to provide that capability. As far as the underlying tooling: are you interested in the underlying implementations like OVS or Linux bridge as well? So, in terms of the monitoring tooling available for Neutron, the tooling that exists now for your underlay is the traditional tooling you would use for monitoring an underlay and monitoring your switches. As far as dynamically keeping track of how Neutron is performing, right now the community hasn't written a standard plugin for monitoring. That's obviously an area where people are very interested, and we'd love to see contributions there. So having the ability to monitor and look into that is something, but it's not in the basic reference implementation; some of the other implementations, either open source or proprietary, do have add-ons for monitoring and failover, and there are various pieces. As far as configuration drift, the agents themselves are constantly checking the database and validating that the logical config and the instantiated config actually match each other, so if an operator goes in and changes the config under the hood, it's likely to get overwritten fairly quickly by the agents themselves. So we have time for two more questions: blue shirt, and then one over here. So the question was around VLAN transparency and whether there is support for Neutron understanding trunk ports to VMs.
The answer is no, there isn't right now. That's something we're going to be discussing at the design summit, how we can maybe look at doing that in a way that works well. So not right now. And then the question over here. So the question was around the shims for the drivers and plug-ins. There's no requirement to have that upstream; you can completely add your driver and plug-in wherever you want outside as well. It's just a matter of installing it and configuring the server to use that plug-in or that ML2 driver. I think the advantage of the shims is that, if you have the shim in, your driver is released with the OpenStack releases, and it'll just pull in, through a requirement, whatever back-end logic you have. So I think that's about it. Thank you very much; thanks for coming, everyone. Yes.