Okay, thanks everyone for joining us today. Welcome to a special edition of the CNCF live webinar: the Kubernetes version 1.27 release. I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our code of conduct and then hand over to Xander Grzywinski and Mark Rossetti, both from the Kubernetes 1.27 release team.

A few housekeeping items before we get started. During the webinar you're not able to speak as an attendee, but you can definitely pop questions and comments into the chat. Feel free to do so and we'll get to as many as we can at the end. This is an official webinar of the CNCF, subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io under online programs. They're also available via the registration link you used to join today, and they will be on our online programs YouTube playlist on the CNCF channel. With that, I'm going to hand it over to Xander and Mark to kick things off.

All right, let me share my screen. How does that look? Looks good to me. All right, you want to start, Xander? Yeah, thank you for joining us, everyone. We're going to do a little bit of an overview of some of the top-level features that are part of the Kubernetes 1.27 release. Release day was Tuesday the 11th, so it's been out there a few days now. Quick intro: I'm Xander, the 1.27 release lead. I've been on the release team for about three years now and have been on every release since I joined. I've led enhancements and worked on the comms team as well. Yeah, and I'm Mark Rossetti.
I've been contributing to Kubernetes since early 2019, and I've been part of the release team since the 1.25 release, where I was an enhancements shadow twice and then enhancements lead this last release.

All right. I'm going to do this like one of the old movies where they do the credits first, and just throw out a big thank you to the entire release team. It's just the two of us here right now, but you can see that the whole team is actually a huge group of people who all contribute a lot of work to making this happen, broken down by role here. We've got folks across enhancements, CI signal, release notes, communications, bug triage, and docs, and on this slide I have the names of the leads bolded. Really just to illustrate that it is a huge group effort to make this happen. So a big, big thank you to everyone who participated.

Then I want to talk a little bit about the theme. Some of you, if you read the release blog, may have seen this: every release has a theme. This one is Chill Vibes, and we've got our sloth logo here created by Britnee Laverack; thank you so much to her for doing that for us, I really love it. We landed on this theme because the release process can be kind of chaotic, and I've done a few of them now. There are several milestones throughout the release that features need to meet, and this was the first time that, at least for the first milestone, which is enhancements freeze, we didn't have to deal with a single exception request. And the release, as it continued, really just stayed pretty calm and chill the whole time, which is unusual compared to previous releases. It really ended up being that way because there's been a ton of work going on behind the scenes to tighten the process and make things more smooth.
And so I think this was probably the first release where we really got to feel the benefit of all the work that's been happening behind the scenes to make things better. The theme is really a celebration of the community and the work that people have put into making things as smooth as they can be. Do you want to speak to this one, Mark? You can go ahead.

Yeah, I can speak to this. For anybody who's not really familiar with how the enhancements process works in Kubernetes: most major changes, especially user-facing changes, are required to go through an enhancement process. As part of that process, a Kubernetes Enhancement Proposal gets written up and reviewed by stakeholders, and that outlines the work and how it's going to be tested for stability and production readiness and all of that. In the 1.27 release there were 60 enhancements that graduated or moved from one phase to the next, which is a very large amount, and makes it even more surprising that it was such a chill release. For reference, in the last release, 1.26, only 39 enhancements progressed or graduated. So here's a little bit of a breakdown: 18 new enhancements went to alpha; 29 enhancements went to beta, and beta is a big milestone because that's when the functionality behind those enhancements is usually on by default and available for a lot of users; and another 13 went to stable. There are also a number of deprecations and removals. We're not really going to go over those; they're all outlined in a blog post that's linked below. Do you have anything else you want to add, Xander? Nope, sounds good.

All right, so this time around we're going to do things a little bit differently than this webinar has been done in the past.
You know, usually when we've done the release webinar, we've gone through every enhancement that made it into the release with a little one-sentence blurb. As Mark mentioned, this is a huge release with a lot of enhancements that made it in, so we've pivoted the format to focus on the major themes of the release that are outlined in the release blog. For the full breakdown, you can see the release notes.

So we're going to start by mentioning, yet again, the registry change. I know we've been talking a lot about this throughout the whole release cycle, and I'm sure some folks are tired of hearing about it, but it really bears mentioning again. The image registry that hosts the Kubernetes images has changed from the old k8s.gcr.io to registry.k8s.io. It's really about balancing the ingress and egress load for these images across providers, and it's going to provide a better experience for everybody. The 1.27 images are not going to be published to the old registry. Right now, requests to the old registry will redirect to the new one, but if you're behind a proxy or something similar, there is some work that may need to be done to make things function as expected. This is covered in detail in the deprecations blog, which you can find on the Kubernetes blog. We wanted to start with this one because it's an important change that may require some action depending on your environment.

All right. The next enhancement we wanted to highlight is that seccomp defaults have graduated to stable. Once this is enabled, it basically defers the default seccomp profile to the container runtime, which allows more flexibility in configuration. Enabling this does require a kubelet flag to be passed.
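As a rough sketch of what enabling this looks like (under the assumption you manage the kubelet config file yourself), the kubelet side is a single field in the KubeletConfiguration, or the equivalent --seccomp-default command-line flag:

```yaml
# KubeletConfiguration snippet: opt in to the runtime's default
# seccomp profile for pods that don't specify one themselves.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
seccompDefault: true
```

With that set, a pod that declares no seccomp profile behaves as if its security context contained `seccompProfile: {type: RuntimeDefault}` instead of running unconfined.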
And just please be aware that the default seccomp profile will differ depending on your container runtime, can vary between releases of that runtime, and can also be modified by infrastructure providers. But this is a good way to get more security for your workloads without sacrificing too much flexibility or adjusting your workload deployments.

All right, next we have mutable scheduling directives for Jobs, which is going to stable. The gist of this is the bottom section there: fields like node affinity, node selectors, tolerations, scheduling gates, annotations, and labels can now be updated before a Job is unsuspended.

And here's another one that's graduating to stable: downward API support for huge pages. Now you can let the workloads know what the limits and requests are for huge pages of the different sizes and quantities, so the workloads can understand that and react accordingly. This brings parity with other resources like memory and CPU limits and requests, really just to help the workloads out.

All right, and then we've got pod scheduling readiness graduating to beta. This is a pretty cool one: being able to add scheduling gates to pods before they're scheduled. So if you've got labels or annotations or any kind of mutations that need to be applied to a pod, you can set a gate on the pod to do that work before scheduling, so you don't have to deal with the churn of the scheduler and can have those pods in their fully mutated state before they make it to that stage. I think this one will be useful to a lot of people. Yep.

Okay, next is node log access through the Kubernetes API.
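To illustrate the huge pages downward API support mentioned above, hugepage requests and limits can now be exposed to the container the same way CPU and memory already could. The pod name, container image, and environment variable name here are hypothetical placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-demo        # hypothetical example pod
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        hugepages-2Mi: 100Mi  # hugepages require requests == limits
        memory: 128Mi
      limits:
        hugepages-2Mi: 100Mi
        memory: 128Mi
    env:
    # Expose the 2Mi hugepage limit to the workload, mirroring what was
    # already possible for limits.cpu and limits.memory.
    - name: HUGEPAGES_2MI_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: app
          resource: limits.hugepages-2Mi
```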
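The pod scheduling readiness feature described above works through a schedulingGates list on the pod spec; the pod stays unscheduled (in a SchedulingGated state) until every gate has been removed, typically by the controller that added it. The gate name below is a hypothetical example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gated-pod             # hypothetical example
spec:
  schedulingGates:
  # The scheduler won't consider this pod until a controller removes this
  # gate, e.g. after applying the labels or annotations the workload needs.
  - name: example.com/wait-for-mutation
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
```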
So previously, you could stream some files from the kubelet, like the kubelet's log files, but this enhancement adds APIs to do deeper introspection of your system in a Kubernetes-native way. On Linux you'll be able to query journald and see system events through this, and on Windows you'll be able to query the Windows application event logs too. This is going in as alpha, so it will need to be enabled, and there's a little bit of information here on how to enable it for anybody who's interested in taking advantage of it.

All right, and then we've got the ReadWriteOncePod persistent volume access mode going to beta. What this access mode does is restrict a volume to being accessed by only one pod at a time, which will be particularly helpful for stateful workloads that require single-writer access. Yeah, so going into beta, it will be enabled by default. Yeah.

All right, here are some improvements to how pod topology spread is calculated after rolling updates. New and going to beta, there's a field called matchLabelKeys where you can specify the names of labels, and the scheduler will look at the values of those keys in addition to the label selectors you've set when it makes spreading decisions during rolling updates of Deployments. One of the big use cases highlighted here is that you can use the pod-template-hash label, which helps the scheduler distinguish between revisions within your Deployments and spread each revision across whatever your pod topology spread defines. So this is in beta and should be ready for use in a lot of cases too.

All right, faster SELinux volume relabeling.
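The ReadWriteOncePod access mode covered above is requested on the claim just like the older access modes; a minimal sketch, with a hypothetical claim name:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-claim   # hypothetical example
spec:
  accessModes:
  # Unlike ReadWriteOnce (one *node* at a time), ReadWriteOncePod
  # restricts the volume to a single *pod* across the whole cluster.
  - ReadWriteOncePod
  resources:
    requests:
      storage: 1Gi
```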
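To sketch the matchLabelKeys addition to pod topology spread, a Deployment's pod template might carry a constraint like the following, so that each rollout revision is spread independently. The `app: web` label is a placeholder:

```yaml
# Inside a Deployment's pod template spec
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule
  labelSelector:
    matchLabels:
      app: web              # placeholder label
  matchLabelKeys:
  # The scheduler additionally groups pods by pod-template-hash, so the
  # old and new ReplicaSets of a rolling update are spread separately.
  - pod-template-hash
```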
I can't really pretend to know much about SELinux, but this does speed up container startup time by mounting volumes with the correct label, and there's an seLinuxMount field being added to the CSIDriver objects too. A little more history: normally, when a volume is mounted, it's mounted without the SELinux label or context set on the files, so something then has to go and recursively update the permissions on all the files. This enhancement adds mechanisms to skip all of that by mounting with the right context up front, so less disk I/O and better performance.

In a similar vein, the robust volume manager reconstruction feature has graduated to beta. The volume management code within the kubelet has been refactored to retain more state about which volumes are already mounted on the node. This will help make node restarts more stable and reliable, and brings better stability with respect to volume mounts generally.

All right, and then we've got mutable pod scheduling directives; this is going to beta, so it will be enabled by default. Before a pod is scheduled, you're able to mutate its scheduling directives, giving external resource controllers the ability to influence pod placement while still offloading the actual scheduling work to the scheduler itself. It allows building lighter-touch scheduler direction without having to implement a full scheduler plugin.

And this one is one that I think a lot of people have been waiting for; it's the only alpha feature we have on the list, and it's been in the works for quite a while: in-place update of pod resources. With this functionality, you can add and subtract resources on containers that are running without needing to restart them.
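As a sketch of the alpha in-place resize feature (behind the InPlacePodVerticalScaling feature gate), containers can declare per-resource resize policies stating whether a restart is needed when that resource changes. The pod name and image are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resize-demo           # hypothetical example
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired      # CPU can change without a restart
    - resourceName: memory
      restartPolicy: RestartContainer # memory changes restart this container
    resources:
      requests:
        cpu: 250m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 256Mi
```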
So think of this as in-place vertical pod autoscaling: whereas the normal vertical pod autoscaler would reschedule pods with different resource requests, here they'll just be updated in place. This will definitely help a lot of workloads scale faster and with fewer interruptions. There's been a lot of excitement around this and a long effort to get it over the line; I know people are really excited. So make sure to enable that feature gate and give it a shot. And I'd also say read the docs for this one: it does require specific versions of the container runtime in conjunction with the feature being enabled in the kubelet.

We're going to talk a little bit about the shadow program too, an opportunity to participate in the release team and get involved in the Kubernetes project. There's a little bit of a breakdown here of how this works. I mentioned this before, but there are various sub-teams that are part of the release team, and each release cycle, each of these teams takes roughly four shadows who participate as part of the team and learn from the lead how Kubernetes is released. I don't believe we have the dates lined up for the next release cycle yet, but the application is currently open for those who would like to shadow and participate in 1.28, and you can see the link right down at the bottom there. If you go to the SIG Release repo on GitHub, there are handbooks and breakdowns of each of the roles and what the shadow program looks like overall. And I believe a mail went out to the Kubernetes dev mailing list yesterday with more links and a little more information; that's available to look at even if you're not subscribed, so I recommend trying to get involved if you're interested.

And then we also wanted to mention KubeCon next week; I think a lot of folks are probably aware of that.
Yeah, myself and Mark will both be there, as well as some other members of the release team, and I think we're all happy to chat about the release, the release team, the shadow program, any of these things. It should be a really exciting event. And you can see more details for that and other upcoming events on the CNCF events page. And that's what we have. I know this one was a little shorter than previous webinars, but like I said, we wanted to focus on the top-level major themes. I appreciate everyone dropping in to listen, and I hope you get the chance to try out 1.27 and enjoy some of the new features. Thanks, y'all.

Does anyone have any questions they want to drop into the chat? Now's your chance. Just a second. If anything comes up later, we're always happy to field questions in the community Slack; the SIG Release channel would be a good place to post questions, or you can find Xander or myself by our names, we're pretty easy to find in there.

Perfect. Well, it looks like we're good. Thank you both for your time today, and thanks for sharing; this is exciting stuff, and I hope we see most of you live and in person at KubeCon next week. We will be taking a pause from online programs next week for KubeCon, and then also the week after for everyone to recover, and we'll be back the first week of May. Stay tuned, and this will be online in just a little bit this afternoon. Thank you, Xander and Mark, for your time today, and thanks everyone for joining us. Thank you for hosting. Yes, and we will see y'all next time. Thanks so much. Thanks everyone.