 Hello, I'm Sean McGinnis. I run the open source office at Dell, but I'm here talking about things that I've been doing in the OpenStack community for the last several years. And it's not all work that I've done recently; a lot of this is stuff that's built up over years of people making improvements and making things better, so I just get to present all the great work that's been done. So in the OpenStack community, we need to release all the work that is being done. If you're not familiar with it, within OpenStack there are quite a few different projects. They each have different deliverables, usually several deliverables per project, and we need a way to package that up and make it available to downstream consumers or whoever needs it. So, like typical release steps: we need to be able to compile or validate, to make sure that source code is good. We want to make sure whatever we're delivering and releasing is actually something that someone can pick up and use. We need to tag the repos, so we know exactly at what point any given deliverable was released and can go back later. We need to create the artifacts, and this varies depending on what the deliverable is. For us, most of OpenStack is Python, so that's packages that pip can install, tarballs of the source itself, some Node NPM packages, things like that. We need to publish those somewhere. It doesn't do much good just to create them; we need to make them available, so they need to get published on a website and pushed up to a package repository where they can be pulled down. And typically, even if you're just doing this as something within your own company, you need to send some kind of announcement to let people know there's something new out there that they can pick up and use. So we've tried to automate all these steps. This is something that's evolved over time. 
Initially, we tried to just trigger on new tags being pushed, which was great. That does take care of a lot of manual steps, but there were some issues with that approach. There was inconsistent use of versioning; we didn't actually follow SemVer for a while. Semantic versioning, if you're not aware of it (semver.org), is a really great way to make sure that your version number communicates whether that version contains breaking changes or just bug fixes. And if you leave that up to individual teams, you just kind of have to hope that they're aware of what those rules are and that they actually follow them. The packages that were being published weren't always what you wanted. They could maybe grab the wrong commit, miss something important. There were times where the wrong commit got tagged and you ended up releasing something that, well, now it's out there and you can't really easily go back and take it down. And for stable branches, it's easy to go back and create branches, but if you know you're doing something for a past cycle, you want to make sure you have a branch there and that everything's in place, which makes it easy to make sure that the right changes are going to the right place. And for publishing, for making that information available, we have releases.openstack.org, and we want to list everything on there. If that information can't easily be extracted, then it's a manual process of going through and finding which versions are where and getting that published. So we try to automate more of the things, and not only that, but make it a process that really fits into the processes the developers are already comfortable with: doing code reviews, pushing changes up just like they push code changes. So the process evolved to where we're actually able to leverage the tooling we have in place for the development workflow. 
We use Gerrit for code review, which makes it really easy for anybody to propose something and then review the changes. And we have Zuul CI, which is great at testing these changes and handling dependencies between things, making sure that if you have a dependency, one change merges before the next. And the nice thing about Zuul for us, being a mostly Python project, is that it's built on Ansible, so it's very easy to extend and work with from Python. So our typical workflow for code changes is that we have a check queue. When someone proposes a change, that goes through check; tests are run like in any kind of CI, and those tests need to pass before you can even get beyond that step. And then once it's approved, it goes through the gate, which runs those tests again to make sure that nothing has changed in the meantime between when you initially tested it and right before you merge it into the code base. And then it gets merged into the final repo, where everybody else can pull it down. So we tried to do the same thing with the release process, and the way we do that is we have a YAML file for each deliverable. Each deliverable out of each team has a YAML file per cycle, so there's a nice folder structure. You can very easily go in and see, for each cycle, exactly what was released, which commit hash it was, all the information. It contains release information, and it contains any branching information. And then as a release team, we were able to kind of spread the load of doing releases, because anybody can propose changes to this YAML file and the release team just needs to review that, just like reviewing a code change. And then Zuul really does a lot in there; it's what triggers and enables that automation of being able to run tests against these changes and go out and use the information we capture in the YAML file to do a lot of validation. 
So what it really looks like is this: it's just YAML. We capture information like the name of the team that is responsible for that deliverable. We capture information about the deliverable itself that lets us programmatically evaluate certain things, knowing what type of thing it is, like knowing that this is a library; there are certain things we want to enforce for something that's a library. We have information if there's anything unique about the repo that we can capture, like this specific repo maybe needs to be called something else when we package it up and provide it as a deliverable. And then the actual release information itself, where we can say on this commit hash I need release version X.Y.Z. And then if there's anything to branch, we can also include that and what to call it. So it's all captured. Basically, the idea here is that everything we need to know to make decisions about the proposed release, to know what it is and what the release version is going to be, is all captured within the YAML file. And I know everyone loves to edit YAML, but in that same spirit of trying to automate things, that also allows us to provide some tooling on the proposer's side. They can just run this new-release command and tell us what series, what deliverable, and what type, and it is able to go and add that release information into the YAML file so they don't need to manually edit it. The type argument is probably the most useful part: is it a major release? Is it a minor feature change? Is it a bug fix? Just by telling the tool this is a bug fix, it is smart enough to go and look, because we have all of that history already in the releases repo. It can go back and read the existing deliverable file and see, okay, the last version was 1.5.0, they're requesting a bug fix release, therefore I know this is 1.5.1. 
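To make the description above concrete, here is a sketch of what such a deliverable file might look like. The field names are illustrative, loosely modeled on the openstack/releases repo; the real schema may differ:

```yaml
# deliverables/train/cinder.yaml -- illustrative sketch, not the exact schema
team: cinder                 # team responsible for this deliverable
type: service                # lets tooling enforce type-specific rules
repository-settings:
  openstack/cinder: {}       # per-repo overrides would go here
releases:
  - version: 15.0.0          # tag to create
    projects:
      - repo: openstack/cinder
        hash: 69a55b700000000000000000000000000000cafe   # commit to tag
branches:
  - name: stable/train       # branch to create, if any
    location: 15.0.0         # where to branch from
```

Everything the automation needs, the team, the deliverable type, the exact commit, the version, and any branch, lives in this one reviewable file.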
So it encapsulates a lot of that logic into the tooling, so anybody within the community who needs to do a release can run the command, and they don't need to know all the little details and decisions about what version number to use, which file to edit, and things like that. So here's an example: new-release for the train cycle, for the cinder deliverable, and it's a feature release. That last little part there, if you can see it, just added another version. Because I told it it's the train cycle, by default it'll look at the train branch and pick the most recent commit, so it gets the commit hash; and because I told it it's a feature release and the last version was 1.1.0, it knows, okay, now I'm going to bump that up and call it 1.2.0. Makes it nice and easy. So now that they've made that change, they're able to commit it and submit it as a code review, just like they would do if they were adding a Python file or what have you. And as the release team, that makes it really easy: we can go in and see the specific thing being added to this file, which, honestly, because of the automation in place and the jobs we're able to run in CI, we don't really need to worry too much about, because usually if the YAML is messed up those jobs will find it. But it's still easy, if we want to, to look at exactly what's happening here, and because it's done as a code review, it highlights exactly the differences in the file that are being proposed. But we do leverage those CI jobs. 
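The version-bumping logic described above can be sketched in a few lines of Python. This is an assumption about how such a tool might work, not the actual new-release implementation; the function name and the release-type labels are hypothetical:

```python
def bump_version(last_version: str, release_type: str) -> str:
    """Compute the next semantic version from the previous one.

    release_type mirrors the kind of choice the tool asks for:
    "major" (breaking change), "feature" (minor), or "bugfix" (patch).
    """
    major, minor, patch = (int(part) for part in last_version.split("."))
    if release_type == "major":
        return f"{major + 1}.0.0"
    if release_type == "feature":
        return f"{major}.{minor + 1}.0"
    if release_type == "bugfix":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown release type: {release_type!r}")

# The two examples from the talk:
print(bump_version("1.5.0", "bugfix"))   # 1.5.1
print(bump_version("1.1.0", "feature"))  # 1.2.0
```

The point is that the proposer never does this arithmetic; the tool reads the last version out of the deliverable file and applies the SemVer rule for them.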
Just like our development workflow, we have the check and gate queues that run tests, and because of all the information we capture in the YAML file, there are quite a few things we're able to automate in a validation job that looks at the change being made, so that if those checks pass and we approve the change, we know everything's good for the most part. I'll go into some detail later about some of the things we can't fully automate. Once it gets through check and passes, we do a review, make sure everything's good, and approve it; it runs those tests one more time through the gate queue, and when that actually merges, it kicks off the automation that does all those tasks I mentioned in the beginning of packaging things up and doing the release. So I can dig a little more into that. This is what it looks like in the Gerrit UI: it runs these jobs and reports them back. One of the useful things is building the docs. As I briefly mentioned, because we have this information in the YAML file, we're able to extract it and automatically use it to publish to the releases site, so that we have all of the release details on the site. Because those docs are really integral to our overall release process, we want to make sure we can build them. We use RST and Sphinx to generate them; there's a Sphinx extension we have that will read in that YAML file, extract all the information it needs, plug it into the documentation itself, and then that's what gets published on the website, all automatically. So we want to make sure those docs actually build; if there's some error in how information was entered in the YAML file, the docs will likely fail, or there's some issue we need to take a look at. We don't want anything to be passing tests and looking happy if there are going to be issues after we actually approve it. 
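The extraction step the Sphinx extension performs can be sketched roughly like this. This is a simplified stand-in, not the real extension: it takes already-parsed deliverable data (a plain dict here, in place of the loaded YAML) and emits simple list items of the kind a docs build might render:

```python
def render_release_rows(deliverable: dict) -> list[str]:
    """Turn parsed deliverable data into RST-style list items,
    roughly what a docs-building step might emit per release."""
    rows = []
    for release in deliverable.get("releases", []):
        for project in release.get("projects", []):
            # Show the version, the repo, and an abbreviated commit hash.
            rows.append(
                f"* {release['version']} -- {project['repo']} @ {project['hash'][:7]}"
            )
    return rows

data = {
    "team": "cinder",
    "releases": [
        {"version": "1.2.0",
         "projects": [{"repo": "openstack/cinder", "hash": "a" * 40}]},
    ],
}
for row in render_release_rows(data):
    print(row)
```

Because the docs are generated from the same file that drives the release, a malformed file tends to break the docs build, which is exactly the early failure you want.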
It's all about catching the errors earlier in the process and making sure things are right. There are a bunch of validation checks, and this is where there's a lot of benefit. This is an area where we've been able to build up checks over time, because there are certain things we know we need to look for. We don't have to rely on the release team, as code reviewers, to know or remember to go and look at all these things. So there are validation jobs just to make sure all the information is good. The job will actually clone the repo of the deliverable and make sure things like the commit hash exists, which catches what has actually been a very common issue: I typoed something because I didn't use that tool, or something's not quite right, especially on stable branches. Maybe they want to make a bug fix in a stable branch, but they merged it into the master branch, which will be the next release, and forgot they'd need to take that extra step and actually backport it into the old stable branch. So this is a great way to see, right away, oh no, this commit hash is not here. That really prevents some errors. We make sure that the package can be built and that README files don't have any errors, because that's used in the published package information. We make sure that we have permissions. This was a common one: everything's built, everything packages fine, and then we go to publish the package and, oh, we actually don't have permissions to do that, and by that point everything else is all done. So this just makes it easier: right away we can validate, yep, we can do this, and version numbers, things like that. This has been an area where, as we've run across different corner cases and things where we go, oh, I never thought of that, we can add it into our validation, and then it's checked automatically, and we as humans don't have to remember, oh, I should check this, make sure this weird case isn't going to happen. It's all built in. 
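A minimal sketch of the cheap, structural end of that validation might look like this. This is an illustration only, with hypothetical function names; the real jobs go much further, actually cloning the repo to confirm the hash exists on the expected branch:

```python
import re

# A full git SHA-1 is 40 lowercase hex characters.
HASH_RE = re.compile(r"^[0-9a-f]{40}$")
# SemVer-style major.minor.patch.
VERSION_RE = re.compile(r"^\d+\.\d+\.\d+$")

def validate_release(release: dict) -> list[str]:
    """Return a list of problems found in one release entry.

    These are the checks that can run before touching the network;
    an empty list means this layer of validation passed.
    """
    errors = []
    if not VERSION_RE.match(release.get("version", "")):
        errors.append(f"bad version: {release.get('version')!r}")
    for project in release.get("projects", []):
        if not HASH_RE.match(project.get("hash", "")):
            errors.append(f"bad hash for {project.get('repo')}")
    return errors

# 'g' is not a hex digit, so this hash is rejected:
print(validate_release({"version": "1.2.0",
                        "projects": [{"repo": "openstack/cinder",
                                      "hash": "g" * 40}]}))
# ['bad hash for openstack/cinder']
```

Each corner case the team hits in practice becomes one more check appended to a job like this, so the knowledge accumulates in code instead of in reviewers' heads.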
And this is, I think, one of the most useful things. This is all about automating how the process is done, but there are certain subjective things that need an actual human to take a look. So rather than just leaving it up to us as a release team to go out and check these things, at least we can use the automation to pull everything we need to look at into one place, so that when it comes time to evaluate a release to see if everything is good, it's not a matter of going to five different places and pulling together information on our own. The automation has already pulled that into the output of the job, which makes it much easier. We just pull up the results of that job and we're able to see all of the commits that are included and any requirements changes. Those two are really useful, because one of the most common things we might have to go back and say, no, I think you need to change this, is when they've called it a bug fix release, but we look at some of the changes and they've added a new dependency, or we see that one of the commits merged into the repo really changes or adds a new feature, or even removes something in a backwards-incompatible way. We have that information, and then we can say, with semantic versioning, you've requested this version number, but with what you have in here I think it makes more sense to go with 2.0 or something. If we didn't have the automation pulling all this together for us, it would really be time consuming to go out and look at the repo, look at the git commit history, and pull all that information. 
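One of those reviewer signals, a new dependency appearing in a supposedly bug-fix release, is easy to surface mechanically. A sketch of that idea, under the assumption of simple pip-style requirement lines (the function name is hypothetical):

```python
def new_dependencies(old_reqs: list[str], new_reqs: list[str]) -> set[str]:
    """Report package names present in new_reqs but not old_reqs.

    A non-empty result on a 'bugfix' release request is the kind of
    thing a reviewer uses to push back and suggest a bigger version bump.
    """
    def names(reqs: list[str]) -> set[str]:
        # Strip version specifiers like >=, ==, < to get bare names.
        return {line.split(">")[0].split("=")[0].split("<")[0].strip()
                for line in reqs if line.strip()}
    return names(new_reqs) - names(old_reqs)

added = new_dependencies(
    ["oslo.config>=5.2.0", "requests>=2.14.2"],
    ["oslo.config>=5.2.0", "requests>=2.14.2", "tenacity>=4.4.0"],
)
print(added)  # {'tenacity'}
```

The human still makes the SemVer judgment call; the automation just makes sure the evidence is sitting in the job output instead of scattered across git history.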
It also adds the lead of that project as a reviewer and, if there is one, the release liaison. That's part of decentralizing this process: anybody in the community can propose a release, but if somebody just wants their bug fix available right away, we don't want to just do releases all the time. We want to make sure that the people responsible for these deliverables are actually aware that a release is being requested, and they need to be the ones to say plus one on the review, because we don't want to release something and then have that team come back later and say, hey, why did you do this? Why is this out here? This way we can make sure we know who we need to get feedback from to confirm, yeah, okay, this is good, we're ready to go. And then there are things like generating a release announcement: we can see exactly what is going to be sent out to the community on the mailing list before it's actually sent to the world, because sometimes there are errors or things like that we want to make sure get caught. So after we approve the release request, that YAML file change actually gets committed into the repo, and that triggers some post jobs that publish the generated HTML docs to the release site. Then it'll be available out there and show that this new release is available, and that's what this looks like here, if you can see it: a list of all of the deliverables, all the services, with the first release of this release cycle and the most recent release of this release cycle. There are links to release notes that we're able to capture in that YAML file. All this information gets published out there, very extensive information, and we don't need someone to sit down and actually fill it all out, because it can just be pulled right out of the YAML files. 
The tagging process will get details from the commit, and then it's able to add the tag to the repo and include all that metadata, so there's a lot of really good tracking in there. And because we know where bugs are being tracked for that individual project and what commits are included in that release, it's able to go and actually update the bug tracker, based on information in the commit messages, to say this is now released. And if it needs to, it'll create the branch, so no human is actually creating the branch; the automation does it all. We also manage our requirements: if there are any libraries that we release, before we have everybody start using that new release we run some jobs to make sure it doesn't break anything, no unintended consequences, so it'll propose a change to our requirements repo. It'll publish the package, whichever kind it is, and it'll send the release announcement, basically what I went through there. So to go back to what that looks like: the check and gate queues are just the validation on the change to the deliverable YAML file itself, where we're able to get that information and decide whether this release is correct and using the right version number. After it's committed, the post-release process runs; that is what actually creates the tag in the repo, and that then publishes the package and gets all the information out there. So those are the overall ideas of what we do, and some ideas you can maybe think about for how you manage your own releases, whether it's a community open source project or something internal at your company. 
Automate those repetitive tasks: if you're doing releases for whatever your deliverable is, see what pieces of that process you can script, whether you can fully automate something and just have it taken care of, or, like our job that checks those changes, have a job that can at least pull together information and minimize the amount of brain power you need to expend on evaluating these releases. Automate security tasks: what I mean by this is things like creating those branches, pushing those tags in the Git repo, signing tarballs; all of that we're able to offload to the tooling. So it's not a human that's accessing these secret keys and doing these things. The tool does it, and by the time the tool runs, we know we've vetted everything, and then the tool can access the secrets it needs. No human needs to do it. Automate even the things that are manual as much as you can, and then keep looking for things that'll make it better. The validation job that we have, we've just kept improving, adding new checks, adding new things that really get rid of all those surprises for us. And I think using the same workflow, or a very similar workflow, to the one we use for changes to the project's code makes it a lot easier for anyone to be able to step in and also do releases. So that's it. You can always follow up afterwards and ask me any questions. Here's my Twitter handle. I think we have a couple minutes, five minutes, so if anybody has any questions I can try to answer them. Yes. It should always be idempotent. So the question was, is it always idempotent? GitOps should be something that you can do over and over and not have different results every time. 
Yeah, so if someone has put in a release and a version number and that YAML file gets merged, we can make other changes to that YAML file, and when that merges, the automation will see what has changed. It will not attempt to tag that commit again. It'll check: okay, I have hash ABC, version 1.2.3; okay, hash ABC already has tag 1.2.3, and then it'll move on. Same thing with branches: if a branch exists, it won't try to create it again. That's actually kind of bitten us a couple of times, where for whatever reason someone had to manually delete a branch, and then we made a change to that YAML file and it said, oh, I need to create this branch. So it'll always make sure that whatever is requested in the YAML file actually exists out there; if it already exists, it won't attempt any other operations on it. Yes. Oh, good. Yeah. So the question is about where these YAML files exist, whether it's in the repo with the project or elsewhere. We have a releases repo. It is a separate repo by itself, which is nice because it contains a lot of the automation, not all of it but a lot of it, right there, and within that repo there is a directory structure of deliverables, then the name of the cycle, and then the deliverable.yaml file, which makes it a really nice reference. Everything's self-contained in this releases repo. If you're someone out in the community and you don't care about releases, you don't have all these files in your repo; you just need to know to go to this releases repo to make the change and propose the release. Yeah. Any other questions? Yes. Rather a comment than a question. 
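The idempotent behavior described in that first answer, skip anything that already exists, act only on what's missing, can be sketched like this. The function name and data shapes are hypothetical; this just illustrates the pattern of diffing desired state against actual state:

```python
def plan_operations(requested: dict,
                    existing_tags: set[str],
                    existing_branches: set[str]) -> list[tuple[str, str]]:
    """Decide which operations are still needed.

    Tags and branches that already exist are skipped, so re-running
    the automation over the same YAML produces no duplicate work.
    """
    ops = []
    for release in requested.get("releases", []):
        if release["version"] not in existing_tags:
            ops.append(("tag", release["version"]))
    for branch in requested.get("branches", []):
        if branch["name"] not in existing_branches:
            ops.append(("branch", branch["name"]))
    return ops

requested = {"releases": [{"version": "1.2.3"}],
             "branches": [{"name": "stable/train"}]}
# The tag already exists but the branch does not, so only the
# branch creation is planned -- including the deleted-branch case
# mentioned above, where the branch gets recreated.
print(plan_operations(requested, {"1.2.3"}, set()))
# [('branch', 'stable/train')]
```

This is also why the YAML file works as declarative state: the automation converges reality toward the file rather than replaying every entry.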
As a consumer of this process for the past few years: I still remember, five years ago, being involved in this myself, and the release lead at the time and I spent about an hour sitting together on IRC figuring out every single step to get the package out there. And the last release we did, in November I think, took me about five minutes of total time, because the release team had the patch ready. The only thing I had to go and do was check the commit hash; I went, oh, this is not the right one, I know we're still pending merging one more change. Once we got that into the repo, we changed the hash in the review tooling, and my team was done. I didn't need to worry about anything after that, and I knew that everything was published and out there by the time it went online. So it's just amazing how much a bunch of automation and verification can help with the actual man-hours spent on so-called simple tasks like a release. Yeah. Which is nice, but it definitely wasn't like that five years ago. Great to hear. Well, for the sake of the recording, I'll just summarize that: a few years ago it took about an hour of the project lead and the release manager working together to get everything in place to do a release, and now this last one took maybe five minutes. So that's great to hear, and that's exactly the benefit we're trying to accomplish by doing something like this. Good. 
What's the learning curve to request a release, or to review one? Okay, well, I'll go through both sides. From the consuming side, on our releases.openstack.org site there's reference documentation that goes through the steps, as a consumer of this automation, of here are the things I need to do to edit the YAML file and request the release through a code review. And for the release team, getting people on board to participate as release managers, there's a bit of a learning curve, because, like I said, this is something where we've added more and more validation over time, every time we found something where, hey, we could have checked for this ahead of time. The overall concept is pretty easy to grasp, and then if anything goes wrong, or if you need to dig into some kind of change, it's just a matter of knowing where in the automation, in the scripts, you need to go to look. I wouldn't say anything's too overly complex; it just does things like check out the Git repo and run some Git commands, and the rest is fairly basic Python code. Reviewing those release requests takes a little time, just knowing, for that semi-manual step, all of the information that's included in the job output, what to look at and what it means, and understanding things like semantic versioning. Good question. Thank you very much. Thanks. Thank you.