Thanks, Allison. Yeah, so as Allison said, I'm Kyle Mestery. I'm the PTL for the Neutron project, which encompasses networking in OpenStack. These slides will go over some high-level details of what the Neutron team has planned for Kilo and what we're working on there. So go ahead and flip slides. Okay, there we go.

I thought I'd start off like I did last time with a high-level overview of the networking program's mission. As you can see, it's to implement services and associated libraries to provide on-demand, scalable, and technology-agnostic network abstractions. One thing worth noting here, and one thing I'll come back to later, is the services portion of this: how we're developing it and how that may affect operators and deployers.

So I wanted to start by highlighting Kilo priorities. One of the things we did this time, which came out of the Design Summit in Paris and even before that, when we were talking about coming up with priorities for Neutron in Kilo, was that we wanted a way to track the things the community considers important. In Juno, we tried using a wiki page and some other approaches for tracking upstream community work, which was really important for everybody. We've evolved this a bit: at this point we're tracking it as an actual file in neutron-specs. That file was committed and reviewed, and we're tracking the high-level features there as well. This was important for us because it allows us to prioritize things, and it's transparent to all of the developers, users, and distributions: these are the high-level things we're all working towards, the important things we're really trying to prioritize, line up, and make sure land in Kilo.
And the other important thing is that we've had a lot of input from the people who make distributions of OpenStack. This allows them to plan and understand what features are going to be in Kilo so they can start making plans around that as well. So we'll give this a try. It's an evolution of what we were doing in Juno, but overall I think it's working out pretty well so far. Next slide, please.

One of the first things I wanted to highlight is parity with Nova Network. This work has been ongoing for a couple of cycles now. It started in Icehouse, where we laid the groundwork; it continued in Juno, where features like DVR really closed the functionality and feature gap between Neutron and Nova Network; and during Kilo we're going to work on migrating Nova Network installations to Neutron. We've done a lot of work talking with developers, users, operators, and deployers around this, and we'd certainly love to hear from more of them about their expectations and requirements here as well. There's ongoing work between the Nova and Neutron teams around this migration effort, but the plan is to get something in place for Kilo that covers a certain set of use cases for migrating from Nova Network to Neutron. There's also some work around the edges of the functionality gap. For instance, DVR is likely to gain VLAN support, whereas in Juno it only supported tunnel networks. That will help close some of the feature gap as well. Next slide.

This is one of the big items we've been talking about for a while, and it's really focused on stability and scalability: making the core of Neutron more of an evolvable, scalable project. This work was discussed in Paris in multiple sessions.
It's something we're going to focus on at our mid-cycle coding sprint next week as well. This work is going to make the core of Neutron much more stable. Two of the big things we're looking to get out of it are the ability to better support out-of-tree extensions, so people can build add-ons for Neutron much more easily outside of the Neutron tree, and a switch from our homegrown WSGI framework to Pecan, which I think is going to be a huge improvement for us as well. Next slide.

Plugin decomposition. This is something I really wanted to highlight, because it was discussed for months before the summit. We also had multiple sessions on it at the summit, and we've continued discussing it on the spec review in the neutron-specs repository. What plugin decomposition is about is thinning the in-tree plugins and drivers in the Neutron core project, allowing a lot of that functionality to be moved out to wherever the plugin and driver maintainers choose to keep it. This is going to address a lot of the pain points that have frustrated everybody around this process: review time, iteration speed, making it easier for vendors to work on their specific modules, things like that. The hope and the plan is to make it so that everyone gets a win out of this situation. It's going to allow fast iteration of both core Neutron and the plugins and drivers. This is something I personally plan to keep advertising on the mailing list and talking about with people, and I've already been working with some plugin and driver maintainers on how we can help them do this. I think this is going to be a big win for everybody.
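To make the decomposition idea a little more concrete, here's a toy Python sketch of a thin in-tree shim that only looks up and dispatches to a driver registered by an out-of-tree vendor package. All the names here (`register_driver`, `ExampleVendorDriver`, `load_driver`) are hypothetical illustrations, not Neutron's actual loading mechanism:

```python
# Toy sketch of plugin decomposition: the heavy vendor logic lives in
# an external package; the in-tree side keeps only a small registry
# and a lookup. Hypothetical names, not Neutron's real code.
DRIVER_REGISTRY = {}


def register_driver(name):
    """Decorator an out-of-tree package would use to register its driver."""
    def decorator(cls):
        DRIVER_REGISTRY[name] = cls
        return cls
    return decorator


@register_driver("examplevendor")
class ExampleVendorDriver:
    # In reality this class would ship in the vendor's own repository.
    def create_port(self, port):
        return {"id": port["id"], "status": "ACTIVE"}


def load_driver(name):
    """The thin in-tree shim: look up the registered class and instantiate it."""
    return DRIVER_REGISTRY[name]()


driver = load_driver("examplevendor")
print(driver.create_port({"id": "p1"}))  # -> {'id': 'p1', 'status': 'ACTIVE'}
```

The point is that the in-tree surface area shrinks to the registry and dispatch, so vendor drivers can iterate on their own review and release cadence.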
So that's why I wanted to highlight this near the front of the presentation. Next slide.

Testing. Testing is definitely something the Neutron team has spent a lot of time on, really since Icehouse, where we ramped up; in Juno we got full API coverage for all of the Neutron APIs in Tempest. Now we're expanding our testing to include full-stack testing in the tree; we have a spec out for review on that which looks very close to landing. We're going to get increased functional testing of all of the agents: the Open vSwitch, Linux Bridge, DHCP, and metadata agents. And we're going to finish off the work around retargetable functional testing as well. Next slide.

Agent refactoring. This also works toward scalability and stability improvements. Neutron has a lot of agents that implement much of the functionality of the in-tree implementation, whether you're using Open vSwitch or Linux Bridge for your L2 connectivity, the L3 agent to handle floating IPs and routing, or the DHCP agent to handle that portion. The number one thing we're trying to do here is scalability: making these agents more scalable. We're going to add functional testing to all of them. For the L2 agent, we're going to improve the RPC communication between it and the server. We're also looking at ways to improve the performance of the Open vSwitch agent around how we interface with OVS through OVSDB; right now we execute a lot of CLI commands, which have a high cost, so we're looking at more programmatic ways to do that. On the L3 agent, we're looking at how we can abstract out some of the service agents, which plays into something I'll talk about in a slide down the road. That's going to be a big win.
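As a rough illustration of the OVSDB point above, here's a toy Python sketch of the batching idea: instead of forking an `ovs-vsctl` process per command, operations are queued and committed together, the way a programmatic interface can issue one transaction over a persistent connection to ovsdb-server. The class and method names here are hypothetical, not the agent's real interface:

```python
class OvsdbTransaction:
    """Toy model of batching OVSDB operations (hypothetical API).

    A real implementation would send a single JSON-RPC 'transact'
    request to ovsdb-server over a persistent socket, instead of
    paying the fork/exec cost of ovs-vsctl for every command.
    """

    def __init__(self):
        self.commands = []

    def add_port(self, bridge, port):
        # Queue the operation instead of executing it immediately.
        self.commands.append(("add-port", bridge, port))
        return self  # allow chaining

    def commit(self):
        # Here we just return the queued batch; a real driver would
        # apply it in one round trip to the database server.
        return list(self.commands)


txn = OvsdbTransaction()
txn.add_port("br-int", "tap0").add_port("br-int", "tap1")
print(txn.commit())
```

The win is amortization: N port operations cost one round trip rather than N process spawns, which matters a lot at the scale the agents run at.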
And on the DHCP agent side, we're looking at some restartability improvements and some different scheduling mechanisms, both to handle load-based scheduling if you want to run multiple DHCP agents, and also to handle what happens if a DHCP agent dies: how do we move the work it was doing to another agent. All of these things are really going to be important for operators, and we're pretty excited about the work we're doing here. Next slide.

Advanced services split. This is what I was alluding to earlier, and this work is going on now. These services currently sit within the Neutron core project itself, but the team took a look at the services we have, load balancing, VPN, and firewall, and decided it made sense to split them out into separate repositories in the networking program. Work's already underway to do this; we have a spec upstream, and we're going to proceed down this path. Ultimately, the hope is we can give operators the flexibility to run whatever services they want to offer their tenants: if they only want to offer load balancing, they can do that; if they want to offer all three, they can. It's also going to allow the teams working on these services to iterate much more quickly outside of the scope of core Neutron. The load balancer, firewall, and VPN people can each focus on their own piece of the networking puzzle and hopefully iterate much faster. We hope to also reduce some of the gate testing complexity, because as the project has grown, the complexity of the testing has grown as well. And down the road this may also allow us to factor some parts of Neutron into core libraries shared across all of these services. Next slide.

Pluggable IPAM. This has been talked about at various design summits, off and on, for almost two years now. We're really hoping this is the cycle where this work lands, and we've made it one of the priorities as well.
The idea is just like it says: we want to create a pluggable IP address management scheme so that third-party and vendor IPAM systems can integrate with Neutron as well. There's a spec out for review on this, and we've had a lot of comments on it. We should be able to get it approved and get this work into Kilo, which will provide more options for deployers and operators around how they want to handle IPAM. Next slide.

Speed and reliability improvements. These are both obviously really important to deployers and operators, and these two particular features fall into this category. The first one is agent child process status. That's a long way of saying we have someone writing code to monitor all of the agents that run when you're using either the OVS or Linux Bridge setup, along with the L3, DHCP, and metadata agents. This code will monitor and restart these agents should they exit for whatever reason. It's a nice way to provide a little more resiliency and a little more peace of mind for operators running all of these agents. The other one is rootwrap daemon mode. This feature didn't quite make it into Juno, so we have someone working on it for Kilo, and it should go in. This is really about giving high-performance access to the root commands run by the Neutron agents. So both of these are about speed and reliability, improving things for deployers and operators, and they're definitely going to make it into Kilo. Next slide.

Flavor framework. This is another item we spent a lot of time discussing during Juno, but which just didn't make the Juno cut, partly because there was so much discussion around it. Trying to reach consensus was somewhat tough.
But near the end of Juno we finally reached consensus; unfortunately, it was a little too late to implement it then. So we revised the blueprint for Kilo, and the hope is we can get it in. So what is the flavor framework? It's a nice way for operators to offer network services to their clients at different service levels. You can envision an operator offering something like load balancing, for example: maybe they have a bunch of really expensive, super-fast hardware load balancers, plus some software-based ones as well, which aren't quite as fast but are much cheaper to deploy. The flavor framework would let the operator offer these to their clients at different service levels, and ultimately charge different amounts for them. It's a nice way to provide this functionality to all of their tenants, with different service levels around these network services. This is something we're really excited about as well. Next slide.

Neutron NFV work. We've been working with the NFV team in OpenStack around this. The main thing we'd really like to see happen in Neutron around NFV is trunk ports. This has been discussed for a while; there are multiple use cases around offering trunked VLAN ports to virtual machines. We're converging on a couple of those use cases, and the hope is we can get them approved for Kilo and get that in. The other item is around seamlessly connecting hardware and Neutron L2 segments. There are various blueprints around that, things like L2 gateways. Some of this work is still in discussion, but I think it's likely we'll come to a consensus and be able to get that into Kilo as well. Next slide.

New plugins proposed. This is the current list, as you can see.
There are specs proposed for all of these. They range from service plugins around load balancing, firewall, VPN, and L3, down to pure L2 plugins. There are some really interesting things on here, like a Neutron OVS agent for Windows for running Hyper-V with OpenStack, which I think is pretty interesting for operators doing that. This work will be affected by the plugin decomposition work I mentioned at the front, so the core team is committed to working with the proposers of these blueprints to make sure we refactor them to match the plugin decomposition spec, which we approved this week. But this shows that we still see an increasing number of plugins being proposed each cycle, across all of the different services in Neutron. Next slide.

So far, the only plugin that a vendor or third party has actually deprecated is the Ryu plugin, and it will be removed because it was deprecated in Juno. The team behind the Ryu plugin has a replacement that's been in tree for a while now: ofagent running with ML2, which subsumes all the functionality the Ryu plugin had. It's possible someone else may deprecate a plugin later; I know last cycle Mellanox deprecated something towards the end. But right now, at this point in Kilo, we just have the one thing disappearing. Next slide.

And that's really an overview of Neutron. Thanks for letting me spend some time talking about this. Mostly we're focused on stability, scalability, and refactoring, which hopefully will bring a better, more stable experience for all the operators and deployers. Thank you.