Hello, welcome everybody to our Kubernetes SIG Release Maintainers' Talk. My name is Carlos, and I'm going to let my colleagues introduce themselves. Hi, I'm Sascha, one of the co-chairs of SIG Release. It's the first time that I have the opportunity to be at KubeCon in person since 2019. And I'm here with Jeremy. Jeremy, also here in person for the first time since 2019, I'm super excited to be here and share what we're doing in SIG Release. Yeah, and my name is Adolfo García. I am one of the technical leads for release engineering with Kubernetes SIG Release, excited to be here. We will also cover what we will do in the next couple of months. After that, we will speak about how we turn source code into artifacts, how we actually build the release, and which user personas are involved in that. And then, once we have those artifacts, we will speak about how we can trust them, or even whether we can trust them, and how end users can verify that trust. Then we will speak about community-owned infrastructure, what it means for the community to own the infrastructure, and what we plan for the future. And last but not least, we will speak about how to get involved in SIG Release and how to help us in our efforts. So Jeremy, tell us what's new in SIG Release since last KubeCon. Over the last year, the Kubernetes 1.25 release came out. There were more than 40 enhancements. As we've seen over the last few releases, things just keep getting bigger and bigger. Too many things to call out, but there are some interesting things in 1.25. One, the default registry location has changed from k8s.gcr.io to registry.k8s.io. Everybody should use that when they're pulling images now. It's going to help us with infrastructure costs and the maintenance of the project going forward. kube-proxy is also distroless now, which is really cool. That release came out on the 23rd of August, 2022, so a little bit after KubeCon.
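The registry change mentioned here is a simple reference rewrite; a minimal sketch of migrating an image reference from the old default registry to the new one (the image tag is just an example):

```shell
# Rewrite an image reference from the old default registry (k8s.gcr.io)
# to the community-owned registry.k8s.io.
old_registry="k8s.gcr.io"
new_registry="registry.k8s.io"

image="k8s.gcr.io/kube-apiserver:v1.25.0"
migrated=$(printf '%s' "$image" | sed "s#^${old_registry}/#${new_registry}/#")
echo "$migrated"   # registry.k8s.io/kube-apiserver:v1.25.0
```

The same substitution applies to manifests, Helm values, and kubelet flags that still reference the old registry.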
Currently, next slide, we are in the middle of the 1.26 release. So we just keep going. This is going to come out on December 6th, hopefully with no problems coming down the pipe. The lead for this one is Leo, and the Emeritus Advisor is Nabarun. And I think this is super cool: this is the first release that's been entirely led from outside of North America, both the release lead and the Emeritus Advisor. You can see we've had this really great international presence, lots and lots of countries highlighted here over the last few years, but until now we have never had a release that was entirely led from outside of the U.S. So I think that's a really cool milestone for us. This year, we've also been focusing on a few things in our roadmap. When you get these slides, you can also find the roadmap on GitHub. We're really focusing on three areas. We want to make releases more consumable: easier to pull down, easier to get the artifacts for. More introspectable, so you can come and understand what's happening and what's in each release. And also more secure: we want to make sure that the artifacts you're getting are non-falsifiable, that they have signatures, that they have software bills of materials, and all those things, so that you have a better way to judge the risk of consuming those artifacts and know what you're deploying. On the introspectable side, we wanted to look at the cadence and follow up on our change. We used to do four releases a year; we've moved to three. We wanted to know how folks felt about this. So, some data that we gathered: most folks seem to strongly prefer three releases a year. How many people here miss four releases? Absolutely nobody. Who deploys Kubernetes and is really happy that there are fewer things to keep up with? I know I was at my last job. Next slide.
Going back to registry.k8s.io, there's a lot of work going on with SIG K8s Infra to move away from that original GCR instance. There's a lot of work to bring community-owned infrastructure into play, but also to really work on distributing costs. There's an insane budget for Kubernetes infrastructure, and we spend a lot of it hosting container images. There's work being done right now to spread the load. So if you're coming from Amazon, you're going to start pulling images from S3; it goes through a really transparent proxy. There's a lot of great information you can find in the KEP. We're going to turn source code into artifacts now, so Sascha is going to take over and tell us how we do that. Yeah, all right. So the first thing we would like to outline is what is necessary to actually cut a release, and the first thing really is that we need some personas who actually do the release cut. And we have two approaches here. The first one is the actual release cycle, which runs roughly three to four months, and we tend to release three times per year. And then we also have the patch releases: each minor version has a yearly support period, and we cut patch releases mostly every month, with the second week of the month as the main goal. So we actually need someone who takes over the responsibility for cutting a release, right? We need people who have the time and the dedication, and who are also prepared to do the manual steps for the documentation and verification of the actual release. This can take up to multiple hours per release, and release managers also tend to cut multiple patch releases in parallel, for example. So who do we have on board? The first role is the branch managers and their shadows, and those folks get selected for each release cycle.
And this is a highly privileged role, because they have full access to the main Kubernetes repository, and they not only have to cut the releases, they also have to pre-plan them: the alphas, the betas, and the release candidates. This should be done together with the release team leads. They also have to coordinate with other parts of the release; for example, they have to coordinate with CI Signal before a release is actually cut, so that we can ensure that the CI signal, for example, is green. On the other side, we have the release managers and the release manager associates. That role is a more permanent role in the release engineering subproject. They coordinate cutting the patch releases, right? So we have to announce when we do the next upcoming patch releases, and we also have to take care of defining when a release goes end of life, and things like that. They also have to maintain the release branches, which means they have to review the cherry-picks: whether they got approved correctly, and that we don't sneak a feature into a release, for example. Other than that, they also actively develop features and maintain our code, which gets larger and larger over time, so this is an ongoing topic for us. They also work closely together with the Security Response Committee: if a security vulnerability comes up in Kubernetes, we may have to schedule out-of-band releases or things like that, and that's also part of their job. And the release managers, on top of that, also mentor our release manager associates group. So, which steps are involved in cutting a release? The first thing we have to do is create a GitHub issue, which is our foundation for documenting how the release will be cut. It contains all the information we need to actually build and cut the release.
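The cherry-pick review mentioned here starts with contributors proposing cherry-picks through the helper script shipped in kubernetes/kubernetes; a hedged sketch (the PR number and branch are placeholder examples):

```shell
# Propose a cherry-pick of an already-merged PR onto a release branch.
# hack/cherry_pick_pull.sh is the helper in the kubernetes/kubernetes repo;
# "12345" and "release-1.25" are placeholders.
git fetch upstream
hack/cherry_pick_pull.sh upstream/release-1.25 12345
```

The resulting cherry-pick PR is what release managers then review for correct approvals and for features sneaking into a patch release.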
It links any blockers or follow-up work, and it also references under which conditions, for example from CI signal, the release was cut. And it lists which Google Cloud Build jobs run, so that we can see if we have any build issues or things like that. After that, we create a corresponding Slack thread to track the release in real time. We also require some help from the Google Build Admins, and have to check for their availability, because they help us to build, push, and sign the Debian- and RPM-based packages on apt.k8s.io and yum.k8s.io. And once we have all of that, then we can actually stage a release. And I think it's the first time that we will now do a real-world release stage live at KubeCon, and Carlos can help me with that. So what we do now: we use our krel release toolbox, and Carlos will just run krel stage. Usually, we would have to define which type of release we want to cut, so an alpha, beta, or a release candidate, and also from which branch we want to cut the release. But, per default, everything should go well now. Or not. Yeah, Carlos has to authenticate against the repository again. Live demos? Yeah, so we can see that we already gather some information. If you go a little bit up in the logs, you can see that we have already found out which version we want to cut. There are some automated things; for example, we also have the kube-cross version and things like that. Those are all variables which have influence on the release cut, and we can define them or change them as we want. As Carlos shows, there is a link to the Google Cloud Build console there, and this brings us to the release. Usually, nobody other than the release managers or their associates can see those logs. And you probably have to refresh this, because it's still queued, yeah? So we will not wait for the release to be finished now.
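The demoed invocation can be sketched roughly like this; the flag names follow krel's documented usage at the time, but treat the exact values as assumptions and check `krel stage --help` for your version:

```shell
# Stage a release build (mock mode by default; a --nomock flag runs it for real).
# Release type and branch are placeholder examples.
krel stage --type=alpha --branch=release-1.26

# Later, release managers publish the staged build, referencing it by its
# unique build version (elided here; krel prints it during staging).
krel release --type=alpha --branch=release-1.26 --build-version=<staged-build-version>
```

Both commands hand off to the predefined Google Cloud Build job described next, so release managers mostly watch logs rather than build locally.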
But if you look at the execution details, then you can see that the Cloud Build variables have been set successfully. So we have created this job; this runs in a virtual machine, mainly. And the logs should now just print out everything that we have to do for krel stage. Yeah. So it pulls container images, starts building the release, and things like that. If we go back to the slides, you can see an overview I created of what krel stage actually does. It runs this predefined Google Cloud Build job, and it also checks the prerequisites for the actual release. For example, determining the release version from the repository is kind of tricky, because we have to find out which release we want to cut; we could also imagine that we want to create a release branch, or not, something like that. Then it actually builds the release: it creates the binary artifacts and container images, and it also generates the bill of materials, which is kind of new for us. Then we also verify those build artifacts; for example, we double-check the files produced by this build step. And after that, we stage everything we have built into a Google Cloud Storage bucket and leave it there. Because now release managers have to actually publish the release, and for that we have krel release, and the build version is the unique reference to an actual release, which will then be released and pushed into the final locations. So what does krel release do? It verifies the artifact provenance from krel stage. It pushes the artifacts into their final destinations, like container registries or Google Cloud Storage buckets for binary artifacts. And it also pushes the staged objects; for example, we use the staged Kubernetes repository to generate a changelog that we commit to the repository, and on krel release we actually push those git changes to k/k.
Then we also create a release announcement, which is just a plain-text file, and update the GitHub release page, for example. Then we build the provenance for krel release, and after that we archive the release in our Google Cloud Storage bucket. If everything goes well, then we can do the same again, but now in production mode. Production mode mainly means that we use different locations for the container images and binary files. If that's successful, then we can do the image promotion after our krel stage. For that we have a different set of tools, like kpromo, part of our promo-tools, which is able to create a GitHub pull request against a repository that outlines which container images we want to promote to production. And if this pull request gets merged, then the job running after it will actually push the container images from the staging bucket to the production bucket, and then those container images are live. Once this is done, we can also run krel release in production mode. And that's the point in time where we need the help of the Google Build Admins to cut the deb and RPM packages for us. They will use the pre-built binary artifacts to create the packages, and then we can also notify Slack that our release is now finally done. What we created can then be used to push the announcement to the official Kubernetes mailing lists, and we have a dedicated subcommand for that, for example. So krel is just a toolbox for those use cases; we can just use krel, and it announces the release. But what's inside of a release? We have binary artifacts for the Kubernetes clients and servers, and also for the test binaries, which is useful for integration testing, and for every supported platform; I think it's five or six different supported platforms we have right now. We also do the same for container images, which basically reuse the binary artifacts.
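The promotion step described above can be sketched like this; the flag names follow kpromo's documented usage and may differ by version, and `<github-user>` is a placeholder for the release manager's fork:

```shell
# Open the image-promotion pull request against the community repo that
# holds the promoter manifests (hedged sketch; check `kpromo pr --help`).
kpromo pr --fork=<github-user> --interactive

# Once the PR merges, CI runs the promoter, copying images from the
# staging registry into production and verifying signatures on the way.
```

The key design point is that the human-reviewed PR, not the tool invocation, is the trust boundary: nothing reaches production registries without a second release manager approving the enumerated image list.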
And those container images are signed, which is new right now, and we are also working on getting the binary artifacts signed. Then we also have the SLSA provenance metadata for the stage and release steps, which can be used to verify what has been done in krel stage and release. The updated Kubernetes repository is part of the release as well, as are the staged sources. And then we also have the software bills of materials for all of that. But how can we trust those artifacts? So, as Sascha mentioned, when we do the release process we stage things, and we produce a set of images and a set of binaries, things that land in the staging environment. We want to promote those things to the production places they're going to go to. We sign them as well, so we publish the images and signatures to a staging GCR. Those are the places where we can verify things, and where they live until they go into the production regions. Then, as Sascha mentioned, we do a pull request, generated by kpromo, that lists out all of the images that are going to come across. So the things that need to be promoted from the staging environment into a real release are enumerated in that pull request. The first layer of trust that you get here is really that we've got a small set of people that are able to generate these releases; they are a trusted set of folks in the community, and they're generating a PR for a review of what's going to be promoted into the release environment. Then other release managers, so again the trusted subset of folks, but not the person doing the release themselves, review that pull request to make sure that the things we're bringing across look good, and that we're bringing things that are valid and trustworthy inside of the project. Once that merges, we kick off some automation to do the actual promotion process.
That's going to take the images from that staging GCR and put them into the production GCR. Before, that's kind of where things stopped, but now, since we are signing those images, we're able to verify all the signatures before we copy them across. So we get that extra level of verifiability as part of the process, that two-phase kind of security guarantee that these images are the things that the release team has built and is ready for you to consume. Then finally, those things are pushed to the production GCR and signed again with the production service account, so you're able to verify all of those things and see the chain of custody. You'll see the signatures from the staging environment, and then you'll also see the signatures from the actual production push. All right, Adolfo, do you want to tell us how end users can verify these things? Sure. Well, once we sign the images, they get their signatures attached in the container registry. So the way you can do the verification is by using the cosign tool from Sigstore, and the command is really easy: you just have to run cosign verify and pass the tag of the image, and that should get you an actual verification of the image. Once you verify them, if you inspect the signature output from cosign, you'll see those two signatures attached to the images, which Jeremy was just talking about. You'll see the one signature done by our staging account, which tells you that the image was actually released by SIG Release and a release manager, and then you'll see the second signature, which all of the images across the Kubernetes project get. Do you want to continue? All right.
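A hedged sketch of that verification; the image tag is an example, and the identity and issuer values follow the Kubernetes documentation on verifying signed artifacts, so double-check them against the current docs for your release:

```shell
# Verify a Kubernetes release image signature with cosign (keyless signing).
# Identity/issuer values follow the Kubernetes signing docs; older cosign
# versions used COSIGN_EXPERIMENTAL=1 instead of the identity flags.
cosign verify registry.k8s.io/kube-apiserver:v1.25.0 \
  --certificate-identity krel-trust@k8s-releng-prod.iam.gserviceaccount.com \
  --certificate-oidc-issuer https://accounts.google.com
```

On success, cosign prints the verified signature payloads as JSON, which is where the staging and production signatures Adolfo mentions can be inspected.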
And then the other one: we are not currently attaching the SBOMs to the images; there's some work that has to be done yet in the image promoter to enable us to do that. But we also release a complete SBOM covering all of the images, and we have a tool, derived from all the work we did to get the Kubernetes SBOM going, which you can download and use to generate an SBOM of your own project if you want. If you inspect the SBOM of an image, the tool lets you look inside the SBOM and see its structure, all of the components inside of it, and the rest of it. We mentioned having our own infrastructure, and what that actually means. In the past, everything was managed by the Googlers in a Google GCP project, and over the past years we started moving to a community-owned one. What we have right now: all the containers, all the binaries, and the SBOMs and provenance, in a GCS bucket and in the registry. As Sascha and Jeremy mentioned, when we cut a release, all those things go to the community bucket and the community registry. But there is one piece that is still on the Google-owned infrastructure, which is the Debian and RPM packages: in the last mile of the release, when everything is almost ready, we need support from the Googlers to build and publish the packages for us. They build, sign, and push, and then those parts are ready. For these Debian and RPM packages we are working to move away from that: there is a POC currently in progress for building them with the Open Build Service, and we are working to remove this dependency on Google as soon as possible and make it community-owned. If you want to participate and help work on this, you can join the Slack channel as well.
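The tool Adolfo describes is published as bom (sigs.k8s.io/bom); a hedged sketch of the two uses mentioned, generating an SBOM for your own project and inspecting one, with flags taken from the bom README that may differ by version:

```shell
# Generate an SPDX SBOM for a local project directory
# (output filename is a placeholder example).
bom generate --dirs=. --output=my-project.spdx

# Render the structure of an SBOM as a tree: packages, files,
# and their relationships.
bom document outline my-project.spdx
```

The outline view is what makes the release SBOMs introspectable: you can walk from the top-level release down to individual files and dependencies.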
The next steps: obviously, as Sascha mentioned, we have a team of release managers, but we would like to invest our time in making the lives of these managers as painless as possible. Sorry. So, there is always a ton of work to do, and if you would like to join SIG Release, you are more than welcome. This is a brief review of the efforts that we have going on and how they align with our roadmap. As Carlos said, we are working on the system packages that we release. We are also in the middle of that move, trying to mature that POC and find its final shape. We are also trying to come up with new ways of reimagining how the image promotion should work, mainly because of the new infrastructure requirements that come from the infra changes that are happening: moving from Google-owned things to SIG Release and community-managed infra. So we are trying to optimize how the process runs, but also maybe coming up with new ideas on how to make that process more seamless. Then we are trying to come up with tiers of recommendations on how to handle, first, our internal repositories. We have huge repositories that we release, like Kubernetes; then we have middle-tier ones; and then we have the smaller ones, which sometimes are just a library. So we are trying to set guidelines on how we should handle and build the releases for those. Why are we trying to do that? Because, one, we want to make our own lives easier, but we would also like to be a bit of a guiding light for the smaller projects across the org, since maybe some of the tooling that we are releasing can help them. And this is why one of the things that we are about to release in the next few months, before the next cycle ends, is a new GitHub Actions repo capturing the tooling that we are building inside of SIG Release, so that anybody can consume it inside their Actions workflows more easily.
Now, we are also working on improving the way some of the provenance metadata is built inside our release process. Currently we are building that metadata in the wrong places, but we have a new utility out that will enable us to query the build from the outside, observe it, and collect the necessary information from a safer point of view; this, of course, is aiming to finish our SLSA KEP that we still have going. Another thing we want to do is start creating attestations from the image promotion process. You can verify the signatures, but we want to also let you know what happened there and who did it. And, as Sascha mentioned earlier, we are working on signing the rest of the artifacts: there is an effort going on to sign the binaries, the SBOMs, and the provenance metadata. That one is already done, but it still needs to be put into production. SLSA itself is undergoing a few changes; the framework is getting split to make it easier for organizations to achieve compliance, so we need to rework the SBOMs and the provenance to be more aligned with that. We have a utility called publish-release, which should verify the artifacts it is publishing. And we are working together with what used to be the official SPDX SBOM generator to bring multi-language support to the SBOMs. Our SBOM tooling currently works with Go projects, but we have organized with a couple of other SBOM projects to create one single repository of language parsers, so that we can extract language dependencies in the same way in all of them, and that multi-language support is coming: about ten languages. And finally, one thing that I'm really happy to announce: with all of that work that we are putting into the Kubernetes releases, we always try to get it out and expose it as public, general-purpose libraries and tools.
But until now we didn't have a repo for that, which we are now building. We are also publishing a SIG Release guide on how to secure your release; basically, this takes all of our tools, makes them play well together, and improves your release security stance quite a bit. My plan is to publish an early draft so that other people can read and comment on the document. So if you're interested in that, find SIG Release in the SIG Release channel on the Kubernetes Slack, and we can chat about the document and you can add your suggestions to it. All right, now we want to speak about how to get involved and how to help us with all of those efforts. The first thing we would really like to recommend is to apply for the release team shadowing experience for the 1.27 cycle. It should happen this year, maybe in December, so there should be an announcement on the k-dev mailing list. It's worth getting to know which roles exist, and we linked them here as well. For example, there are technical roles like CI Signal, and the release notes team is also kind of a technical team, but we also have non-technical roles, for example comms or docs and others. So if you're really interested in one particular area of the whole release, then the release team shadowing program is the best way to get started with Kubernetes, and it also provides a good way to maintain an overview of the project as a whole and what is going on, because you indirectly get the knowledge from the others as well. But if you're interested in even more technical aspects of a release, this can be done by shadowing more, and by becoming a release manager associate, for example. So you can shadow on the release team, then later shadow for a more technical role, and then you can just express your interest in becoming a release manager associate.
This also means that you can start contributing to our repositories, getting involved in our discussions as I've already mentioned, and staying up to date on release management and SIG Release matters. We also have a huge bunch of recurring work, like updating Go versions, and we also have a bunch of work related to releases; for example, if a new CNI release gets published, we have to update the manifests for the packages which use it, and things like that. So there's an amount of recurring work where everyone can express their interest and start contributing as well. And, for sure, last but not least, it is always encouraged to connect with existing release managers to find more areas to contribute to. With that, I would like to say that we are always happy to help. From my point of view, the release team shadowing program is the best onboarding experience in the community, and I started with that as well. SIG Release is always welcoming to discuss any process or technical changes in any direction. For example, we have a retrospective for each release, which is now split into two parts, and in the mid-cycle retrospective you can already express that you think something should be changed; that's always a good thing, and we totally welcome such changes. Our overall goal is to be as inclusive as possible, to lower the barrier for new and also for existing contributors. So we are always looking for new contributors, and it would be really cool if we achieved that at this KubeCon. With that, I would like to thank you for listening to our talk, and we are now open for questions. Or do we have any time left?
Actually, we have quite a big investment in the way we have been doing things in Cloud Build, so for now it's going to stay there. Mostly because, even if we were to consider a change to move things into Tekton, we are spread pretty thin, so the main bottleneck there would be people. It could be done; I'm just not sure if we would gain a lot from it, because we've built tooling around GCB to make all of this happen. So since we don't have the people, we can't really consider a change like that. There is a lot of automation already; there are just a lot of pieces. Some of it requires manual review, like waiting for PR reviews to get the image promotion PR merged. Some of it takes time, like getting Google people to do that last mile; we're working on fixing that part, but right now that's a pretty manual process for them, so there's just coordination that happens. But there is a lot of automation already: the tooling that's built for krel, the actual image promotion process, a lot of that is automation. It just comes down to the scope of how many things we're building, how many places we have to push them to now, and really just overseeing that process. On the release team side there is a lot of work too; if you went to the keynote this morning, I think Priyanka mentioned
Taylor, one of the previous release leads, about how much time he spent doing things. A lot of that is people work: it's working with the different SIGs to coordinate which KEPs are going to land, making sure that PRs are merged. Things are kind of disjoint too, so there's just that work of getting people to the right place; that's a lot of what goes into it. Yeah, go for it. So, as Jeremy was saying, SIG Release is really split into two: there's the release engineering subproject and the release team. In release engineering we put all of our efforts into automation and building the tooling, all of that. But there are also the human parts of the release that cannot be automated, like writing blog posts, like engaging with people and ensuring they get their enhancements in on time, and all of that. So even if we were to fully automate the release engineering part, the human part is not automatable. But it's also one of the most rewarding parts, because that's where you get to know all the community members and help people grow. So that's the best part of the release to be in, in my opinion at least. I think on the release team side, though, there are opportunities to improve, and Grace and some other folks working on enhancements have done a fantastic job spinning up this new GitHub board. We're using the new GitHub Projects beta to really streamline and make the whole process of tracking enhancements easier. Before, it was a Google Sheet that had the most complex automation I've ever seen to track things. Things are in the k/k repo, you've got the docs repo, the website repo; you've got to keep all of these things in line. And before, it was a sheet, mostly automated through crazy scripts, with somebody on the release team or somebody from a SIG going through and entering data, and it was not super sustainable. It's so much better now. How portable is the tooling, in the sense of: can it be reused?
You can spin it up and use it for your own purposes, as long as you are running on either GCR or Artifact Registry. The technical reason behind that is that it makes some Google-specific calls to the registry in order to list which images are there. I've been trying to get it to work with others, and, well, if you take a look you'll understand: it's not easy. But if you are running in that setup, you could use it for your own organization, and that means you get pull requests and free signing with it. If you want to enhance it to make it more versatile, please do. The way that it works is a pretty good model, I think; you could build your own for whatever environment you're in, and take inspiration from the code that's there.