Okay. Well, thanks, everybody. Thanks for joining the webinar today. My name is Kyle Mestery. I'm the PTL for the networking team project in OpenStack, known as Neutron. I've got my Twitter handle and my IRC handle down there if you need to get hold of me. So let's go ahead and talk about what's up in networking for Liberty.

Before we jump into what we're doing, I find it's always good to level-set on what exactly the networking project in OpenStack is doing. The mission statement, as defined in the governance documents, is to implement services and associated libraries to provide on-demand, scalable, and technology-agnostic network abstraction. And I think one thing you're going to note, both in some of the work we did in Kilo and definitely in what we're doing in Liberty, is that we're going down exactly that path.

So let's take a quick peek back at Kilo as far as Neutron is concerned. These are the raw statistics from Launchpad, so what we actually completed in Kilo: we completed 45 blueprints and closed 544 bugs. Those are pretty good numbers, and they're pretty much in line with what we did in the Juno release as well.

One of the big things we did in Kilo, which is already paying off in development velocity, was the advanced services split. We split the advanced services, which are firewall, load balancing, and VPN, into separate Git repositories, and they now release as their own tarballs. They're still aligned with the main Neutron release, and they still require the Neutron server to run, but it's possible that in a future release these advanced services, in some form, will end up with their own API endpoints as well.

The other big effort was the plugin decomposition work in Neutron. When we entered Kilo, we had over 15 plugins and drivers in tree, and we started decomposing the back ends of them; that was the first phase of this. Ten or more of them started this decomposition effort, and again, this was all about increasing feature velocity and review velocity on both sides.

As we look at Liberty, Neutron is definitely going to continue with phase two of the plugin decomposition, and we'll talk a little more about that on a later slide. That is one item the core team is focused on completing, following through on the mission and the vision we had when we started it in Kilo. As part of that, the plan is also to decompose the reference plugin, which is ML2, along with the OVS and Linux bridge agents. We've also started working on API microversioning and on improving our quota support.

So let's start talking about some of the things we're doing in Liberty now. In regards to Neutron and Nova Network: I think most folks are aware that when Neutron was originally spun up, it actually ran as Quantum, and it did not start from the Nova Network code base. It started from a fresh code base, and with a different API as well. Now, three or four years down the road, we're getting close to the point where, as I think everyone knows, we'd like to have one networking stack, and we've been working hard for over a year on ensuring we cover a lot of the use cases of Nova Network.
We've been looking at migration options and things like that. So let's see where we're at in Liberty now. I think most people know that Icehouse is where we started to lay the groundwork for achieving this, and during Juno and Kilo the team worked really hard on closing the functionality gap, especially with things like DVR. During Liberty, the focus around Neutron and Nova Network is going to be on the following items.

One of the things that came out of the Ops Meetup in Vancouver was the fact that a lot of the existing people using Nova Network are using Linux bridge. We have Linux bridge support; it just was not tested in the gate. We have some people working on enabling that now, so we can ensure that support is tested as well as the Open vSwitch agent is at this point.

We also have some work on a specification that we've colloquially called "Get Me a Network." This specification is specifically about providing a simple wrapper API for Neutron where you can effectively say "get me a network," and it will either utilize pre-existing Neutron resources or create the resources and return them. When you'd like to boot a VM, you need to have a network, you need a subnet, and then you need a port on that network from Neutron as well. And if you want external connectivity, you need a router on top of that. So this is about simplifying all of that for this type of use case; there's a sketch of the manual steps below. The API would otherwise remain as it is, so those people who are using it to build rich network topologies will obviously still be able to do that.

Another item that came up during the Vancouver summit when we were talking with operators: we're going to do a much better job of documenting shared provider networks, because this is the operational mode of deploying Neutron that maps closest to Nova Network, and especially to the Nova Network installs where people are running at any sort of scale. We support this, and we have for a while; we just need to do a better job of documenting it, and that's what we're doing at this point. There's a small example of this below as well.

The next item I wanted to bring up is what we've termed the "Neutron Stadium." As most people are aware, OpenStack has undergone some big governance changes, which have been termed the Big Tent. Neutron has changed along with this governance model, and we've termed ours the Neutron Stadium; we didn't want to steal the Big Tent term from the broader OpenStack project. What this effectively means is that we're allowing many more Git repositories into the networking team project, into the Neutron Stadium, starting in Liberty. Initially, what we've seen is that a lot of the plugins and drivers that decomposed and moved their back-end code out onto StackForge have proposed themselves to move back under the OpenStack namespace, under the Neutron program. We've also seen some potential new APIs do this as well, where they started off on StackForge and are now coming back. The prime example is the L2 Gateway work, which handles translating, typically, between an overlay network like VXLAN or GRE and a VLAN network that physical devices can understand.
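To make the "Get Me a Network" motivation above concrete, here's a minimal sketch of the steps a user has to script today with python-neutronclient. The credentials, endpoint, and addresses are hypothetical, and the exact calls may differ slightly by client version; treat this as a sketch, not a recipe.

```python
from neutronclient.v2_0 import client

# Hypothetical credentials and endpoint, for illustration only.
neutron = client.Client(username='demo', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# 1. A network for the VM to attach to.
net = neutron.create_network({'network': {'name': 'net1'}})
net_id = net['network']['id']

# 2. A subnet so ports on that network can get IP addresses.
subnet = neutron.create_subnet({'subnet': {'network_id': net_id,
                                           'ip_version': 4,
                                           'cidr': '10.0.0.0/24'}})

# 3. A port for the VM (Nova can also create one implicitly at boot).
port = neutron.create_port({'port': {'network_id': net_id}})

# 4. For external connectivity, a router with an interface on the subnet.
router = neutron.create_router({'router': {'name': 'router1'}})
neutron.add_interface_router(router['router']['id'],
                             {'subnet_id': subnet['subnet']['id']})
```

The idea behind the spec is that a single "get me a network" request would collapse these round trips into one, reusing resources where they already exist.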
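And since shared provider networks came up above, here's what that deployment mode looks like, reusing the `neutron` client handle from the previous sketch. The provider attribute values (physical network name, VLAN ID) are illustrative, not prescriptive.

```python
# An admin-created provider network, shared with all tenants --
# the Neutron mode that maps closest to a Nova Network deployment.
provider_net = neutron.create_network({'network': {
    'name': 'provider-net',
    'shared': True,
    'provider:network_type': 'vlan',           # could also be 'flat'
    'provider:physical_network': 'physnet1',   # illustrative mapping name
    'provider:segmentation_id': 101,           # illustrative VLAN ID
}})
```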
Another Neutron Stadium example is the service function chaining project, which started as a separate Git repository, where a bunch of interested parties are all working together to come up with an SFC API for Neutron. Really, this is all about continuing to grow the ecosystem under Neutron as a platform for networking in OpenStack.

Some other governance changes we made internally: one of the things we wanted to do was continue trying to scale core reviewers inside of Neutron. One of the problems we've had is that Neutron is a fairly large project at this point, composed of a lot of different things, and it's incredibly hard for people to grasp all the different pieces in any level of detail. So we ended up with a lot of people who are specialists in certain areas. We've changed the model to account for this, and we now have a new Lieutenant model, which will hopefully allow us to scale core reviewers. We still have the PTL, which is myself, and then we have seven lieutenants for different areas underneath, for things like the reference implementation, API and database, testing, and other areas like that. We've also allowed the lieutenants to propose new core reviewers for their areas, so we can add additional core reviewers. So far we've had four new core reviewers proposed in specific areas, and this is actually working out really well. It's really about scaling the development and the leadership of the project, similar to how the Linux kernel scales itself with a pyramid structure.

The other big item was around specs. We had a specs repository for the last two cycles where people would submit specifications, and everyone would review them and things like that. It turns out that what we typically saw happening was the specs repository became a pipeline, and it ended up just another pipeline that got filled up. Initially it was great, but once the pipeline got full, we were left with a lot of things clogged up on the back end. So we defined a new process, which we call RFE, or Request for Enhancement, and it basically allows us to streamline the way work is proposed into Neutron. The new process says you file a bug for your feature, and you indicate the "what," what you would like, not the "how" of how it's going to be implemented. You file the RFE bug, everyone is encouraged to review these, and the Neutron drivers team will review them as well and move them into a Confirmed or Triaged state as they're looked at. Then, once people have agreed that yes, this work is relevant, the lieutenants, who are mostly the ones responsible for assigning the work to the people underneath them, will move the bug along. We still expect some sort of documentation, but the idea is that when you're implementing the feature, the documentation becomes part of the developer documentation and merges with the code. So far this has proven to be pretty useful; we'll obviously have a lot more information on how it has worked out once Liberty is done.

I'll really quickly run through decomposition phase two. Again, we completed phase one during Kilo; phase two is effectively going to allow all third-party code to be removed from the main Neutron repository.
We're basically dealing with things like database migrations at this point, which were still left in tree, and the shim plugins as well, and the goal is to split all of that out. The reference implementation will also be split out. The advanced services are starting to decompose some of their back-end drivers too. The nice thing for the advanced services was that people were proposing new drivers with the benefit of having seen the plugin decomposition happen for the drivers and plugins of the core Neutron resources, so some of the new services' drivers were actually decomposed from the start. This has worked out pretty well for us.

We're also continuing a bunch of work around our REST, RPC, and plugin layers. We have a feature branch that's looking to move us from our homegrown WSGI layer to the Pecan library. That work is ongoing right now and going pretty well, so we're hopeful to merge it during Liberty. We have an API microversioning effort underway as well, which I think is going to benefit a lot of people. And we did a lot of work around RPC versioning in Kilo, and now we're adding some support for upgrade checking to ensure that it actually stays consistent.

Quality of service, or QoS. This is a big feature that has been talked about for probably a year and a half in Neutron, but this time we've actually set up a subteam to focus on it. The QoS subteam is actually having a midcycle this week, the first week of July, where they're focused and heads down writing code for QoS. The Liberty focus for QoS is going to be bandwidth limiting, and we're also going to make sure we lay out the QoS models for the future, so we can extend the API and the models to introduce the additional QoS concepts a lot of people have been interested in. We decided to focus on bandwidth management initially so we could actually deliver something in the Liberty timeframe and leave things set up for future additions in the next cycle. Currently we're targeting QoS policies either per port or per network.

Another feature we're looking at is role-based access control for networks. We believe this is a really big feature as well, because currently our shared network concept is not granular: we have the concept of a shared network, but it's all or nothing at this point. RBAC is going to allow tenants to create and share network resources with other specific tenants if they want; that's one use case that's been bandied about and that people have been interested in. The other one is that it will allow an operator to define networks and limit access to them. Right now an operator can define a shared network, but it's shared between all tenants, so this would allow them to create, say, a network shared with only a couple of tenants. I think this will be a useful capability for operators to have.
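As a rough illustration of the network RBAC model (the API was still going through review at the time of this talk, so treat the exact call and the tenant ID as assumptions), sharing one network with one specific tenant would look something like this, reusing the `neutron` handle and the `provider_net` from the earlier sketches:

```python
# Instead of flipping the all-or-nothing 'shared' flag, create an
# explicit policy entry granting one tenant shared access.
policy = neutron.create_rbac_policy({'rbac_policy': {
    'object_type': 'network',
    'object_id': provider_net['network']['id'],  # network to share
    'action': 'access_as_shared',
    'target_tenant': 'e28769db97d94a1f...',      # hypothetical tenant ID
}})
```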
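And going back to the QoS item for a moment: the models were still being laid out at the time, so this is only a sketch of the general shape being discussed, a policy holding a bandwidth-limit rule, attached per port or per network. Every call and value here is an assumption, reusing the `neutron` handle and the `port` from the first sketch.

```python
# Sketch only: QoS API shape under discussion, not a settled interface.
qos = neutron.create_qos_policy({'policy': {'name': 'limit-10mbit'}})
policy_id = qos['policy']['id']

# A bandwidth-limit rule inside the policy (values illustrative).
neutron.create_bandwidth_limit_rule(policy_id, {
    'bandwidth_limit_rule': {'max_kbps': 10000, 'max_burst_kbps': 1000}})

# Attach the policy to a single port (could also be per network).
neutron.update_port(port['port']['id'],
                    {'port': {'qos_policy_id': policy_id}})
```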
Pluggable IPAM. Those of you who attended my Kilo webinar will know this was an item for Kilo, and we didn't get it merged by the end of Kilo for a variety of reasons. The good news is we've already started merging the patches for this in Liberty, so it will land in Liberty. This has been a heavily requested feature from operators who already have an existing IP address management system and would like to integrate it with their OpenStack cloud, with Neutron, and that's exactly what this will allow you to do. This will be a nice feature for operators, I think.

Address scopes, BGP, and routed networks. This is a lot of stuff, and I threw it on here because I think it's important both from a user perspective and mostly from an operator perspective. In Kilo we added the concept of subnet pools. A subnet pool allows an operator or a user to create a pool of address space; then, when booting a VM, they don't have to specify specific IP addresses or anything, they can just point at the pool and the code will allocate addresses from it. You can even allocate actual subnets: when you create a subnet, you can allocate it from the subnet pool. This Liberty work continues expanding on that, and it's going to allow us to keep evolving Neutron. BGP will allow us to advertise Neutron routes externally into an existing operator network; that work is ongoing right now. Address scopes, which is also ongoing work, will make address scopes first-class citizens on the Neutron API. And routed networks: the original goal was to allow routers to connect to a network without consuming IP addresses, and this work also lets us satisfy some use cases around things like L3-only networks, which a lot of operators have been interested in. There's been a lot of discussion on these topics, and I think they'll be pretty useful to operators.

The flavor framework. This is something we started working on during Kilo, and we're hopeful to merge it in Liberty as well; the patches are up and going through review. It would be great for operators, because it's going to be a way to hand out network services to their clients at different levels. It also allows operators to hide the driver functionality and configuration from the consumer of the service. An example of how this would work: say you're an operator with a cloud where you're deploying something like load balancing, and you're offering load balancing to your tenants. Let's say you have some really fancy hardware-based load balancers, maybe from A10 Networks or F5 Networks, and you can offer those to your tenants, but you only have a few of them and you have a lot of tenants. You may also want to offer software-based HAProxy, or something using Octavia, which also uses HAProxy. The flavor framework would allow you to assign classes of service to those: you could assign your fancy A10 and F5 load balancers a gold level of service, assign the plain HAProxy one a bronze level, and maybe give Octavia, which is a service-VM-based load balancer, a silver level. This would allow all of your tenants to select their functionality this way; it would be up to the operator to expose these levels to tenants, and likely you may end up charging tenants based on the level of service they pick. So it really is a nice thing for operators, and we're pretty excited to be offering it.
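To make the gold/silver/bronze idea concrete, here's a purely hypothetical sketch of how an operator might define a flavor once the framework merges. Since the patches were still in review at the time, the call names, the service type string, and the driver path are all assumptions.

```python
# Hypothetical sketch of the flavor framework; all names illustrative.
# Reuses the `neutron` client handle from the earlier sketches.
gold = neutron.create_flavor({'flavor': {
    'name': 'gold',
    'service_type': 'LOADBALANCERV2',
    'description': 'hardware load balancers (A10/F5)'}})

# A service profile binds a backend driver to the flavor.
profile = neutron.create_service_profile({'service_profile': {
    'driver': 'example.lbaas.drivers.A10Driver'}})  # hypothetical path

neutron.associate_flavor(
    gold['flavor']['id'],
    {'service_profile': {'id': profile['service_profile']['id']}})
```

Tenants would then pick "gold," "silver," or "bronze" when creating a load balancer, without ever seeing which driver backs it.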
On the NFV front, we're continuing to work on NFV and related enabling functionality, and there is still a subteam working on this in OpenStack. Some of the things we're working on in this area in the Liberty timeframe include work around seamlessly connecting hardware into Neutron ML2 segments. There's some work going on with Ironic, which is the bare-metal provisioning platform, on a bare-metal driver, and we'll be working closely with the Ironic team on that. We also have some work on unaddressed ports: this would allow you to have a port without an L3 address on top of it, which has been important from an NFV perspective. And of course we approved the spec for trunk ports to virtual machines, which is pretty critical to the NFV work; it's been a requested feature for a while.

Load balancing: the LBaaS team has really picked up steam, and they're doing a lot of interesting work during the Liberty cycle. They're going to add support for layer 7 switching, which is typically known as content-based routing. They're also going to move to Octavia as the default reference implementation for LBaaS; Octavia is a service-VM-based LBaaS that also builds on HAProxy. So they're doing a lot of work and a lot of things to make LBaaS pretty great. I suspect that at some point they may even end up with their own separate load balancer API endpoint as well.

Just a quick list of the new plugins and drivers that have been proposed. We have the Dragonflow L3 plugin, which was proposed as a separate repository and has come in; it implements L3 routing and DVR functionality as well. Infoblox has proposed an IPAM and DHCP plugin, which will come upstream once the pluggable IPAM work lands. Kemp Technologies has proposed an LBaaS V2 driver. We have folks who have proposed Libreswan as a VPN driver. And of course we have the Octavia LBaaS V2 driver as well. At this point in the cycle we actually have one plugin disappearing: the meta plugin is going to be deprecated. I don't believe anyone other than NTT, who was maintaining it, was using the meta plugin, and they've chosen to deprecate it at this point. So that's the only thing we have disappearing.

And I think that's it for the updates from Neutron. I'd just like to say, if you have any questions around the development aspects, feel free to send an email to the openstack-dev list, and make sure you tag the subject with [Neutron]. We also have an IRC channel, #openstack-neutron, where a bunch of the community congregates, so feel free to drop in there with any questions as well. Thanks for taking the time. I was happy to present on this.