All right, welcome to the PTL Overviews for Kilo. Today we are joined by three PTLs who will be going over their projects and what's upcoming for Kilo, as well as anything that's worth noting for users and operators. Today we have Kyle Mestery with Neutron, Anne Gentle with Documentation, and Thierry Carrez with Release Cycle Management. So to get us kicked off, we'll start with Kyle Mestery.

Thanks, Allison. Yeah, so as Allison said, I'm Kyle Mestery, the PTL for the Neutron project, which encompasses networking in OpenStack. These slides will go over some high-level details of what the Neutron team has planned for Kilo and what we're working on there. So go ahead and flip slides. Okay, there we go.

I thought I would start off like I did last time with a high-level overview of the networking program's mission. As you see, it's to implement services and associated libraries to provide on-demand, scalable, and technology-agnostic network abstractions. One thing worth noting here is the services portion of that, how we're developing it, and how that may affect operators and deployers. I'll come back to that later on.

So I wanted to start off by highlighting the Kilo priorities. One of the things we did this time, which came out of the design summit in Paris and even before that when we were talking about coming up with priorities for Neutron in Kilo, was to have a way to track the things the community considers important. In Juno, we tried using a wiki page and some other approaches for tracking upstream community work that was really important for everybody. We've evolved this a bit: at this point we're tracking it as an actual file in neutron-specs. That file was committed and reviewed, and we're tracking the high-level features there as well. This was important for us because it allows us to prioritize, and it's transparent to all of the developers, users, and distributions: these are the high-level things we're all working toward, the important things we're really trying to prioritize and line up to make sure they land in Kilo. The other thing that's important is that we've had a lot of input from the people who make distributions of OpenStack, and this allows them to understand what features are going to be in Kilo so they can start to plan around that as well. So we'll give this a try. It's an evolution of what we were doing in Juno, but so far I think it's working out pretty well. Next slide, please.

So one of the first things I wanted to highlight is parity with nova-network. This work has been ongoing for a couple of cycles now. It started in Icehouse, where we laid the groundwork, and continued in Juno, where features like DVR really closed the functionality and feature gap between Neutron and nova-network. During Kilo, we're going to work on migrating nova-network installs to Neutron. We've done a lot of work talking with developers, users, operators, and deployers about this, and we'd certainly love to hear more about their expectations and requirements here as well. We have some work ongoing between the Nova and Neutron teams around this migration effort.
But the plan is to get something in place for Kilo that covers a certain set of use cases for the migration from nova-network to Neutron. There's also some work around the edges of the functionality gap. For instance, DVR is likely to gain VLAN support, whereas in Juno it only supported tunnel networks, so that will help close some of the feature gap as well. Next slide.

So this is one of the big items we've been talking about for a while, and it's really focused on stability and scalability, making the core of Neutron more of an evolvable, scalable project. This work was discussed in Paris in multiple sessions, and it's something we're going to focus on at our mid-cycle coding sprint next week as well. This work is going to make the core of Neutron much more stable. Two of the big things we're looking to get out of it are, first, the ability to better support out-of-tree extensions, so people can build add-ons to Neutron much more easily outside the Neutron tree; and second, switching our homegrown WSGI layer over to Pecan, which I think is going to be a huge improvement for us as well. Next slide.

Plugin decomposition. This is something I really wanted to highlight, because it was discussed for months before the summit, we had multiple sessions at the summit about it, and we've continued to discuss it on the review for the spec in the neutron-specs repository as well. What plugin decomposition is about is thinning the in-tree plugins and drivers in the Neutron core project, allowing a lot of that functionality to be moved out to wherever the plugin and driver maintainers choose. This is going to address a lot of the pain points that have frustrated everybody around this process: review time, iteration speed, how we make it easier for vendors to maintain their specific modules, things like that. The hope, and the plan, is to make it so that everyone gets a win out of this. It's going to allow for faster iteration of both core Neutron and the plugins and drivers. This is something I personally plan to continue to advertise on the mailing list and talk about with people, and I've already been working with some plugin and driver maintainers on how we can help them do this. I think this is going to be a big win for everybody, which is why I wanted to highlight it at the front of this presentation. Next slide.

Testing. Testing is definitely something the Neutron team has invested in heavily; we've spent a lot of time on this since we ramped up in Icehouse. In Juno, we got full API coverage for all of the Neutron APIs in Tempest. Now we're expanding our testing to include full-stack testing in the tree; we've got a spec out for review on that which looks very close to landing. We're going to get increased functional testing of all the agents you see here: the Open vSwitch agent, Linux Bridge, and the DHCP and metadata agents. And we're going to finish off the work around retargetable functional testing as well.
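As a rough illustration of the WSGI-to-Pecan switch mentioned above, here is a minimal sketch of a Pecan-style controller. This is purely illustrative and says nothing about Neutron's actual controller layout; the `NetworksController` name and the JSON payload are hypothetical.

```python
# Minimal sketch of a Pecan-style REST controller (illustrative only;
# this is not Neutron's actual controller code).
from wsgiref.simple_server import make_server

from pecan import Pecan, expose


class NetworksController(object):
    # Hypothetical resource controller; the URL layout is invented.
    @expose('json')
    def index(self):
        # Pecan provides the routing and JSON serialization that a
        # homegrown WSGI layer would otherwise implement by hand.
        return {'networks': []}


class RootController(object):
    networks = NetworksController()


if __name__ == '__main__':
    # Serves GET /networks as JSON on port 8080.
    app = Pecan(RootController())
    make_server('127.0.0.1', 8080, app).serve_forever()
```

The appeal over a homegrown layer is that routing, content negotiation, and hooks come from a maintained framework instead of project-specific code.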
Next slide. Agent refactoring. This again works toward scalability and stability improvements. Neutron has a lot of agents that implement much of the functionality of the reference implementation, whether you're using Open vSwitch or Linux Bridge for your L2 connectivity, the L3 agent to handle floating IPs and routing, or the DHCP agent to handle that portion. As you can see here, the number one thing we're trying to do with this is scalability: making these agents more scalable. We're going to add functional testing to all of these as well. For the L2 agent, we're going to improve the RPC communication between it and the server. We're also looking at various ways to improve the performance of the Open vSwitch agent in how we interface with OVS, through OVSDB. Right now we execute a lot of CLI commands, which have a high cost, so we're looking at more programmatic ways to do that. On the L3 agent, we're also looking at how we can abstract out some of the service agents, which plays into something I'll talk about a few slides down the road, but that's going to be a big win. And on the DHCP agent side, we're looking at some restartability improvements and some different scheduling mechanisms, both to handle load-based scheduling if you want to run multiple DHCP agents, and to handle what happens if a DHCP agent dies: how do we move the work it was doing to another agent? All of these things are really going to be important for operators, and we're pretty excited about the work we're doing here.

So, the advanced services split. This is what I was alluding to earlier, and this work is going on now as well. These services have been sitting within the Neutron core project itself, but the team took a look at the services we have, load balancing, VPN, and firewall, and decided it made sense to split them out into separate repositories in the networking program. Work is already underway to do this; we have a spec upstream, and we're going to proceed down this path at this point. Ultimately, the hope is we can give operators the flexibility of running whatever services they want to offer their tenants. If they only want to offer load balancing, they can do that; if they want to offer all three, they can. It's also going to allow the teams working on these services to iterate much more quickly outside the scope of core Neutron. The load balancer, firewall, and VPN people can all focus on their own piece of the networking puzzle and hopefully iterate much faster. We hope to also reduce some of the gate testing complexity; as the project has grown, the complexity of the testing has grown as well. And down the road this may also allow us to factor some parts of Neutron into core libraries shared across all of these different services. Next slide.

Pluggable IPAM. This has been talked about at various design summits, off and on, for almost two years now. We're really hoping this is the cycle where it lands, and we've made it one of the priorities as well. The idea is, just like it says, to create a pluggable IP address management scheme so that third-party and vendor IPAM systems can integrate with Neutron. There's a spec out for review on this, and we've had a lot of comments on it. We should be able to get it approved and get this work into Kilo, which will provide some more options for deployers and operators around how they want to handle IPAM.
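To make the pluggable IPAM idea concrete, here is a minimal sketch of what a pluggable driver abstraction can look like. The class and method names below are invented for illustration and are not the interface from the Neutron spec; the point is only that a third-party IPAM backend can be swapped in behind a common abstraction.

```python
# Hypothetical pluggable-IPAM sketch (illustrative only; the class and
# method names are invented, not Neutron's actual driver interface).
import abc
import ipaddress


class IPAMDriver(abc.ABC):
    """Abstract interface a third-party IPAM backend would implement."""

    @abc.abstractmethod
    def allocate(self, subnet_cidr):
        """Return a free IP address from the given subnet."""

    @abc.abstractmethod
    def release(self, address):
        """Return an address to the pool."""


class SimpleInMemoryDriver(IPAMDriver):
    """Toy reference backend; a vendor driver would call out to an
    external IPAM system here instead."""

    def __init__(self):
        self._allocated = set()

    def allocate(self, subnet_cidr):
        for host in ipaddress.ip_network(subnet_cidr).hosts():
            if host not in self._allocated:
                self._allocated.add(host)
                return str(host)
        raise RuntimeError("subnet exhausted")

    def release(self, address):
        self._allocated.discard(ipaddress.ip_address(address))


# Usage: the server would load a driver by name from configuration.
driver = SimpleInMemoryDriver()
print(driver.allocate("10.0.0.0/29"))  # e.g. 10.0.0.1
```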
Next slide. Speed and reliability improvements. These are both obviously really important to deployers and operators, and these two particular features fall into that category. The first one is agent child process monitoring. That's a long way of saying we have someone writing code to monitor all of the child processes that run when you're using the OVS or Linux Bridge L2 agents, plus the L3, DHCP, and metadata agents. This code will monitor all of these processes and restart them should they exit for whatever reason. It's a nice way to provide a little more resiliency and a little more peace of mind for operators running these agents. The other one is rootwrap daemon mode. This feature didn't quite make it into Juno, so we have someone working on it for Kilo and it should go in. This is really about giving high-performance access to the root commands run by the Neutron agents. So both of these are about speed and reliability, improving things for deployers and operators, and they are definitely going to make it into Kilo. Next slide.

Flavor framework. This is another item we spent a lot of time discussing during Juno but which just didn't make the Juno cut, partly because there was so much discussion around it that reaching consensus was tough. Near the end of Juno we finally reached consensus, but unfortunately it was a little late to implement. So we've revised this blueprint for Kilo, and the hope is we can get it in. So what is the flavor framework? It's a way for network operators to offer tiers of network services to their clients. You can envision an operator offering something like load balancing, with a bunch of really expensive, super-fast hardware load balancers, and also some software-based ones that maybe aren't quite as fast but are much cheaper to deploy. The flavor framework would let the operator offer these as different service levels, and ultimately charge different amounts for them as well. So it's a nice way for them to provide this functionality to all of their tenants with different service levels around these network services. This is something we're really excited about as well. Next slide.

Neutron NFV work. We've been working with the NFV team in OpenStack on this. I think the main thing we'd like to see happen in Neutron around NFV is trunk ports. This has been discussed for a while; there are multiple use cases around offering trunked VLAN ports to virtual machines. We're converging on a couple of those use cases, and the hope is we can get those approved and into Kilo as well. The other item is connecting hardware to Neutron L2 segments. There are various blueprints around that, things like L2 gateways. Some of this work is still under discussion, but I think it's likely we'll reach consensus and be able to get that into Kilo too.
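As a back-of-the-envelope illustration of the child-process monitoring idea described above, a minimal supervisor loop might look like the sketch below. This is not Neutron's implementation; the command list and polling interval are made up.

```python
# Minimal child-process supervisor sketch (illustrative only; not
# Neutron's actual external-process monitor).
import subprocess
import time

# Hypothetical commands standing in for agent child processes such as
# dnsmasq or keepalived instances.
COMMANDS = [
    ["sleep", "300"],
    ["sleep", "300"],
]


def main():
    procs = [subprocess.Popen(cmd) for cmd in COMMANDS]
    while True:
        for i, proc in enumerate(procs):
            if proc.poll() is not None:  # process has exited
                print(f"process {i} died with {proc.returncode}; restarting")
                procs[i] = subprocess.Popen(COMMANDS[i])
        time.sleep(5)  # polling interval is arbitrary here


if __name__ == "__main__":
    main()
```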
Next slide. So, new plugins proposed. This is the current list, as you can see. There are specs proposed for all of these, ranging from service plugins around load balancing, firewall, VPN, and L3, down to pure L2 plugins. There are some really interesting things on here. A Neutron OVS agent for Windows, for running Hyper-V with OpenStack, is pretty interesting for operators doing that. This work will be affected by the plugin decomposition work I mentioned at the front, so the core team is committed to working with the proposers of these blueprints to make sure we refactor them to match the plugin decomposition spec, which is likely to be approved this week. But this shows that we still see an increasing number of plugins proposed across all of the different services of Neutron with each cycle. Next slide.

So far, the only plugin that a vendor or third party has actually deprecated is the Ryu plugin. The Ryu plugin will be removed because it was deprecated in Juno, and the team behind it has had a replacement in tree for a while now: the ofagent, running with ML2. It subsumes all the functionality the Ryu plugin had. It's possible someone else may deprecate a plugin later; I know last cycle Mellanox deprecated something towards the end. But right now in Kilo we just have the one thing disappearing. Next slide.

And really, that's an overview of Neutron. Thanks for letting me spend some time talking about this. Mostly we're focused on stability, scalability, and refactoring, which hopefully will bring a better, more stable experience for all the operators and deployers. Thank you.

Thanks, Kyle. We will include his IRC information and a link to the slides in the description of the YouTube video, so please take a look and let him know if you have any questions regarding what updates will be available for Neutron in Kilo. Next we have Anne Gentle with Documentation. So Anne, if you're ready.

Hi, yeah. Thanks, Allison. Great job, Kyle. Okay, we've got a follow-up on one of the larger projects in OpenStack: the Documentation program. You know, I've been working on the documentation program for over four years now, so I feel like we're hitting our stride, but the growth of the project has really made us question resourcing and ask how we can do this in more innovative ways. So what I'm going to do today is talk through what the docs team is, how we're going to shift as the scale of all the projects keeps growing and growing, and talk about our accomplishments last release as well as our goals for Kilo. Let's go ahead and hit the next slide for the composition of the team.

So on the OpenStack docs team, we have about 20 core reviewers, and I'll talk a little more about some of the contribution stats on another slide, but these are the kinds of things we can offer to teams. We are information architects: a lot of times we're figuring out where the information best sits. We're very good at audience analysis, and we figure out, okay, when does a cloud administrator even need to know this? Is this somebody who actually wants high availability, or is this an audience that's concerned more with running applications on top of OpenStack? And so we're writing community docs but also providing that overall view, the 10,000-foot view, of what needs to go on docs.openstack.org.
We also have tool builders in our community, people who have done a great job of making sure that docs get continuous integration. Docs are reviewed just like code; we have tox tests that run against the documentation, and I feel like we're really leading the field in technical documentation and automation. We have ways to scrape the code to get the latest and greatest, and I'm constantly impressed with some of the ways we're making this truly technical documentation.

We also have reviewers, and I have to brag a little: on the openstack-manuals repo we have the quickest review turnaround of any OpenStack project. So we review very quickly. We might review a lot, and we'll definitely be the consistency police, making sure that quality and accuracy are the highest priorities when getting a doc through our review queue.

And then lastly, and I have this last on purpose, we provide writers: people working at different companies to maintain upstream documentation in addition to documentation for OpenStack products. This is where we're working on building out the team, and also trying to find ways to coach the individual project teams to bring writers to the OpenStack upstream work. Let's go ahead to the next slide, about accomplishments.

I love looking at the stats. It's pretty telling that we had 238 individual doc contributors, and I think this speaks to the long tail of knowledge that you need for OpenStack itself. There are a lot of very detail-oriented things, especially when you get into driver documentation and all the plugin architecture we have. So even though we may have fewer than 20 core reviewers, we need all the doc contributors we can get, especially as we've gone from two projects to 17 that need documentation. We have a lot of bugs in our backlog, and honestly we should probably have even more, considering the use of something like the DocImpact flag, where a developer can mark in a commit message: yes, this change will affect end-user docs, will affect configuration, will affect the deployer who wants to put this into production. That's actually one of the things I'll talk about that we want to do for Kilo. I tell you what, though, over 2,200 commits and 8,400 reviews is pretty awesome. And the docs.openstack.org site itself is really a place people come to get information, which shows in our page views and unique page views, and the Icehouse documentation led the way with the most page views. So we know that people want release documentation that is in sync with the code and keeping up with the huge scale that OpenStack has become.

So what were the major things we got done? I think it's great that the documentation program has a section in the release notes for what we accomplished in any given release. One of the big ones was the Architecture Design Guide. It's aimed at people who want to figure out what to do with OpenStack, and it provides lots of use cases, from compute-only to storage-only all the way up to hybrid clouds and massive-scale clouds. That was done in a five-day book sprint funded by the Foundation; thank you guys very much. These sprints are a big way for us to get experts in the community together, and their willingness to give their knowledge back to the community is what is going to sustain the docs over time.
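For readers unfamiliar with the DocImpact flag Anne mentions: it is a tag a developer adds to a Gerrit commit message so the docs team gets notified that a change affects documentation. A sketch of what such a commit message looks like follows; the change summary, bug number, and Change-Id here are invented for the example.

```
Add quota configuration option for widgets

This change introduces a new configuration option that deployers
will need to know about when tuning quotas.

DocImpact
Closes-Bug: #1234567
Change-Id: I0123456789abcdef0123456789abcdef01234567
```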
We also did a lot of standardization across the OpenStack installation guides. I don't know if everyone knows this, but we write for Ubuntu, Debian, SUSE, and for Red Hat, Fedora, and CentOS. So there are little variations, and what we did was sit down and ask what we could make exactly the same across all these distributions, which made a lot of the maintenance easier. And the testing on that is also a huge undertaking. So that's a huge bit of work that we did last release.

We also started splitting out documents that are very specialized, like the High Availability Guide. You probably have to know a lot about Pacemaker and a lot about MySQL, and not necessarily OpenStack, so what we've done is find teams that can specifically review that kind of content. The same goes for the Security Guide: that team pushed hard this past release, doing reviews, logging doc bugs, fixing doc bugs, and trying to gel the team around this specialty knowledge.

We've also been moving the long-form API reference documents into the project repositories. We're about halfway done; the larger projects still need to get theirs in, which also speaks to the complexity of their APIs. Honestly, one of the interesting stats I didn't put on the slide, but that I found as I kept digging, is that we document almost 750 API calls, GET, POST, PUT, and so on, and once you start counting things like the headers, the numbers get really big. So it's important that we find ways to streamline the maintenance of API reference information and still give people the information they need: what should I be doing around authentication, what should I be doing for rate limiting, what should I be doing for these kinds of calls. And so I see myself working a lot with the API working group and the application ecosystem group as we continue to work on useful API documentation for the people running apps on top of OpenStack and for our deployers who are supporting their own users.

And then, honestly, among the things we just do for the documentation: we've added more information to the User Guide about Trove, the database service, both for end users and for admin users running it, and also updated the Command-Line Reference. That's another guide that we automate, using the strings out of the help output. There were lots of updates to the Cloud Administrator Guide, and then another automated guide, the Configuration Reference. It's that automation that lets us keep up with the code; it's the way we can work within the community and apply a lot of collaborative techniques to docs. Let's go ahead to the next slide, where I'll talk about the Kilo goals.

So this is basically our mission, and I try to really think about our users, really think about our deployers, think about the audience, and at the same time try to provide quality, accurate documentation that keeps up with the code. One of the things we're going to do is minimize the driver documentation. Kyle also had a long bulleted list, and each of those items needs documentation, but what we're thinking is that we don't actually need all of the real step-by-step material upstream. So we're going to focus the upstream docs on documenting the open source drivers on docs.openstack.org, and vendor plugins will certainly be documented to the point of what you have to do on the OpenStack side, but for the real step-by-step configuration the vendors can maintain that on their own docs domain. And I think that lifts a lot of the burden from the driver documentation folks as well.
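Anne mentions that the Command-Line Reference is generated automatically from help strings. As a rough sketch of that general technique, not the actual OpenStack doc tooling, here is how one might dump an argparse-based CLI's options into RST-style output; the `example-client` program and its flags are invented.

```python
# Sketch of auto-generating CLI reference docs from help strings
# (illustrative only; not the actual OpenStack documentation tooling).
import argparse


def build_parser():
    # Hypothetical CLI standing in for an OpenStack client.
    parser = argparse.ArgumentParser(prog="example-client")
    parser.add_argument("--debug", action="store_true",
                        help="Print debugging output.")
    parser.add_argument("--timeout", type=int, default=600,
                        help="Request timeout in seconds.")
    return parser


def emit_rst(parser):
    # Walk the parser's registered actions (a private attribute, but a
    # common trick) and emit an RST-style option list.
    lines = [parser.prog, "=" * len(parser.prog), ""]
    for action in parser._actions:  # includes the built-in --help
        flags = ", ".join(action.option_strings) or action.dest
        lines.append(f"``{flags}``")
        lines.append(f"    {action.help or ''}")
        lines.append("")
    return "\n".join(lines)


if __name__ == "__main__":
    print(emit_rst(build_parser()))
```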
Something really exciting coming up, with a couple of screenshots in the last couple of slides, is a new web design. The Foundation gave us some of their awesome web designers, shout out to Wes and Todd, for a new web design built around requirements like wanting a more page-based design instead of always having to have a book. That's really exciting, and we'll have a blueprint up about that this week that I want to share. We'll also keep optimizing so that we have automatic builds and scrape the code as much as possible, especially for reference information. And then some of the other things we just do as a docs team: support the project teams, keep the API reference up to date, review things as they come in, and bring conventions across all of OpenStack. One thing the infrastructure team has been doing recently is a doc sprint around their new self-service guide to the infrastructure inside OpenStack, and so we were able to write conventions. Andreas Jaeger is an amazing contributor and can look at these things across projects to enable other teams to get the docs done that they need. So let's look at the roadmap on the next slide.

I look at some categories: quality, tools, experience. We always have to maintain high-quality documentation, and part of our initiative is just maintaining what's there. Sometimes we trim it out, sometimes we scope it a little differently, but we're always doing this, monthly, daily. We can get 50 patches in in a day some days, and so that's my call to action to everyone: understand that this is ongoing work that always has to be done, and we absolutely need people to bring writers, communicators, people who can bridge that gap to deployers and end users, and work on our bug backlog. We actually held a successful bug squash day last week, and we changed the whole idea of a bug squash a little by making it a bug triage day. So instead of fixing the bugs, we went in and wrote down the exact steps you can take to fix each doc bug, and I think that will help people who just want to do a walk-up contribution. You know, they're sitting at home, bellied up after Thanksgiving dinner, and just want something kind of mindless to work on. So that's the goal of that kind of thing, and I think it will help with our backlog. Our bug backlog is large because of the growth of the projects, and the projects are going to have to bring the resources to keep up with their documentation impact as they add features.

We are working very hard to get many, many of the source docs to RST format instead of DocBook, and I can hear the cheering already in the background. The idea is that we want to make it fast and easy for people to contribute: no specialized knowledge, no white-coat understanding of documentation; make it as accessible as possible and make it easy to submit a doc bug, and that's part of this new redesign as well. We've always had a doc bug link on every page, but we want to make it very easy to get to. And then, under experience, I feel like our site has shown its age. It's about a three-year-old design, so it really is time to bring it into a more page-based layout, and what we're going to do is take a phased approach to migrate certain documentation to RST using a new Sphinx template.
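For those who haven't worked with it, the move from DocBook XML to reStructuredText is largely about lowering the markup barrier. Here is a small example of what contributor-facing RST source looks like; the section content and package name are invented for illustration.

```
Install the client
==================

#. Install the package. For example, on Ubuntu:

   .. code-block:: console

      $ sudo apt-get install python-exampleclient

.. note::

   Package names vary between distributions.
```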
And so that leads into the migration. I hope to have much of it done by the Kilo release, but it's a ton of work. So let's look at the new landing page on the next slide.

This is only about half a screenshot, but the idea is to make sure that we match the www.openstack.org header and then offer these documents targeted at specific releases. We have documentation in different languages, so we offer this listing so that people can find out: oh, this is the doc for me; this is relevant to what I'm trying to get done here. And we want to do a better job of showing the relationships between pages, and make sure the output itself is easy to read on your iPad, on your mobile device, on your tablet, on your Nexus 6 that we're all getting for Christmas. Just make sure that the entire docs.openstack.org site is very accessible.

And so on the next slide I show the actual page redesign, with a lot of the features we already had, but also this idea that here are all the things you can get to once you're on a landing page, and the things you can do. Make sure people know whether a page was updated in the last month or in the last six months; that's pretty telling for how up to date a page is. Make sure people know every bit of docs.openstack.org can be edited using the review system we have in place. And then I'm trying to get towards "every page is page one": get away from the idea of docs being a very linear thing and let people land anywhere, because I totally believe that everyone's front page is Google.com and that is the entry to the site. We're keeping a lot of the features we had before: the RSS feeds on updates, the report-a-bug link, and a lot of the syntax highlighting we had previously. So we're just trying to get this really accessible, keep it in a source format that people are really excited to just walk up to and do minor edits and quick typo fixes, and also work on content that's relevant to the audiences we're trying to reach.

So on the next slide, I want to just end with: you know what, let's do this. Contact me. If you want to get involved with documentation, we need people, from doc reviewers who are subject matter experts who know some certain part of OpenStack and its features, to people interested in doc tooling. Let me know if you're interested in Sphinx templates. Let me know if you're interested in doing reviews and quality checking. All of these things have a place in OpenStack documentation, and I'm totally available on IRC. I'm happy to set up phone calls with people as well, and even over the break last week we had a person trying to get through the installation who wanted to work on the guide if it needed work, and we worked with him on Twitter, of all things. So feel free to reach out to me, and just hear my call to the community: we need documentation, and there are lots of ways to contribute. So thanks for having me.

Awesome. Thank you, Anne, and thank you again for your time; really appreciate you jumping in and providing your updates. Like with Kyle, we'll have a link to Anne's slides in the description on the YouTube video, so please check out her slides and contact her if you have any questions or want to get involved with documentation. So last but not least, we have Thierry Carrez on the line, and he will be going over Release Cycle Management from Juno to Kilo. So Thierry, if you're ready.

Thanks, Allison. So like documentation, release cycle management is what we call a horizontal project in OpenStack.
That means it serves the needs of all the other OpenStack projects, and so our main challenge, as Anne mentioned, is to accompany the constant growth of the OpenStack project and reinvent ways of doing what we do at the scale of software development that we see inside OpenStack. So my own program is called release cycle management, and it's really about all the coordination around releases: organizing the development and making sure that whatever we produce is consumable downstream.

We have three different sub-teams, on the next slide. We have integrated release management; that's what the current release cycle is about. We make sure that all the different pieces of the integrated release are on track and communicate what they are working on, so that we can predict what will be in the next release, hit the deadlines, and release on time, on the date we announced at the beginning of the cycle. We have stable branch maintenance after the release; that's about backporting important bug fixes and security fixes, and then issuing point releases for our downstream consumers to use. And finally we have a team working on vulnerability management: receiving vulnerability reports from various actors on the internet, following up on them, and making sure that we address the vulnerabilities reported in OpenStack software as fast as we can, with the best disclosure mechanism we can organize.

So we'll start with what we achieved during the Juno cycle. During the Juno cycle we hit all our deadlines again, so we released on the date we announced we would release. We simplified the process for publishing the development milestones. We have three development milestones in the middle of the cycle, and we used to cut a branch for a few days and let it cool off; we replaced that with a more lightweight system where we just tag: when the project is technically happy with the current state of the branch, we apply the tag and move forward. So we simplified that development milestone publication process. On the stable branch side, we extended Icehouse support to 15 months, so we will be supporting Icehouse for a 15-month period. On the vulnerability management team front, we addressed 90 vulnerability reports and issued 24 security advisories. Out of those 90 reports, some were not really vulnerabilities, and some others were not considered significant or exploitable enough to justify an advisory, so we issued only 24 advisories during the last cycle. And finally, we created a taxonomy for incident reports, which means that now when we receive a report we classify it, and there is a given process to apply for each type of incident report in that taxonomy, so we have a much more transparent process in the way we handle those reports.
For the Kilo cycle, first we'll start with the stable branches, because that's where most of the changes will be coming. The stable branches started to hit a scalability issue with the number of projects we've added to OpenStack: reviews were really slow, and there was a very small team driving the stable branches for all the projects, so we decided to decentralize the structure. Now we have a stable liaison named in every project that has a stable branch, a designated person who lives in the upstream project and works in collaboration with the stable branch maintenance team. We have per-project stable maintenance teams, so around the stable liaison we have a number of people who can directly approve backports to the stable branch. That's a strong departure from the current system, where a single team was reviewing all the patches for all the projects; now we're decentralizing this organization so that we have project-specific maintenance teams with more domain expertise, but we still have the stable branch maintenance team looking over them to make sure they follow the stable branch rules we created.

We will have what we call stable champions; those are people responsible for one branch in particular, making sure that the branch stays usable from a continuous integration perspective, because from time to time a stable branch is not exercised as much as the master branch, so it can go stale really, really fast. When we need it, like when we need to merge a security fix, we need it to be working on the right date, and the stable champion will watch the branch and make sure we can land patches in it at all times. We'll have stable release managers; we already had them, but we will formalize more how they intervene in the structure. Stable release managers are responsible for point releases, so they bang the drum every month or every two months for issuing a specific point release on one of the stable branches, and they make sure all the important bug fixes are backported in time. And finally, in addition to the per-project stable maintenance teams, we'll have a stable core team that is responsible for ensuring respect of the stable policy. We have a stable policy that says only high-impact bug fixes that are obviously backward compatible and will result in no behavior changes for users of the stable branch can be backported, so we have a very conservative stable branch policy, and the stable core team is responsible for making sure this continues to be respected with this more decentralized structure.

So that's it for the stable branches. Oh no, we have more. We're also trying to introduce dependency capping on the stable branches. Currently we have new dependencies coming into the stable branch, and this is the main reason why the stable branch breaks from time to time: a non-backward-compatible dependency gets pulled in from the rest of our ecosystem. So we are looking into ways to cap those dependencies to make sure we don't get continuously broken by upgrades to our dependencies. We will also work on opening up the team. The team was operating on a separate mailing list, which made it really difficult for people outside the team to see the work the stable branch team was doing, so we moved all our discussions to the development mailing list and abandoned our specific mailing list to make our work visible, so that we can recruit new people who would be interested in joining the stable branch team.
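To illustrate the dependency-capping idea in pip requirement-specifier terms (the package name and version numbers below are invented for the example), a stable branch can pin an upper bound so a new, incompatible release doesn't get pulled in automatically:

```
# Hypothetical stable-branch requirements entries (illustrative only).
# Uncapped: any future 2.x release, compatible or not, gets installed.
examplelib>=1.4.0

# Capped: bug-fix releases of the 1.4 series are allowed, but a
# behavior-changing 1.5 or 2.0 release is excluded.
examplelib>=1.4.0,<1.5.0
```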
Next slide, please: release management. We will have feature freeze in the Kilo cycle on March 19. That will probably be the peak of feature landing and gate activity within OpenStack, so don't go on vacation around that date. The final release is planned for April 30, at the very end of the month, and we'll have our next design summit in Vancouver in May. The challenge for the release management team is to handle one more project: we have Ironic, which was integrated at the end of the last cycle, so we need to support it, and it will be the first release of Ironic in the OpenStack integrated release. We'll also make a number of changes to the release management processes. We'll switch to release liaison sync points: we used to have PTLs coming every Tuesday to an IRC channel to synchronize with release management, and we'll now more formally allow them to delegate to specific people within their team, called the release liaisons. We also already changed the release meeting, every Tuesday at 21:00 UTC, to become a true cross-project meeting: any issue that is truly cross-project within OpenStack can be raised at that meeting and discussed in an open forum, not just release management issues, so that's a nice improvement. We'll continue to evolve the design summit format to make sure we can sustain more projects within the OpenStack community being discussed at the design summit. And finally, we'll also evolve the tooling to support new practices. Lots of projects, for example, have started to use specs to pre-approve designs before implementation, and we need to make sure that the spec process is as integrated as possible with the rest of our task-tracking tools, Launchpad and the upcoming StoryBoard tool.

On the vulnerability management front, we'll finally publish the security advisories on a dedicated website. Currently the security advisories are only published on a mailing list, an announce mailing list. We now have a repository for all the security advisories, and we'll publish those directly on the security website on openstack.org, which will be more official than the copies you'd currently find on the wiki. And we plan to adopt a new vulnerability metric based on the DREAD framework, so we'll have a scale to rate the importance of every vulnerability; we'll try to provide a score that helps people determine how fast they need to react to a given one. And I think that's it. Yes, thank you for listening, and if you have any questions you can reach out to me on IRC, on Twitter, or via email with any question on release cycle management.

Awesome, thank you, Thierry, and thank you again to Kyle and Anne for joining us with updates on your projects today. The links to all of their presentations are below in the description, and of course you can reach out to any of them with questions on IRC; I believe they all provided email addresses as well. So please let them know if you have any questions or want to get further involved. Thank you.