and we'll kick things off. Welcome, everyone. Thank you for joining us. Welcome back after KubeCon + CloudNativeCon, if you were there. We're excited to get back into things, and welcome to today's CNCF Live webinar, Kubernetes 1.24 Release. I'm Libby Schultz, and I'll be moderating today's webinar. I'm going to read our code of conduct, and then hand things over to James Laverack, Staff Solutions Engineer at Jetstack, Mickey Boxell, Product Manager at Oracle, and Grace Nguyen, an engineering student at the University of Waterloo, all members of the Kubernetes 1.24 release team. A few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee. There's a Q&A box on the right-hand side of your screen. Please feel free to drop your questions there. Say hello to us. Let us know where you're calling from. We'll get to as many of your questions as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the online programs page at community.cncf.io. They are also available via your registration link, and the recording will be available on our online programs YouTube playlist. With that, I will hand it over to our release team to kick things off. Take it away. Hey folks, we are excited to share with you a lot about what's new in Kubernetes 1.24. I know Libby just did a great introduction of all of us, but just to reiterate, my name is Mickey. I was the comms lead for Kubernetes 1.24. We also have with us today James, our release lead, and Grace, our enhancements lead. Is there anything you would like to add, or should we dive in? Let's go right into it. Thanks, Grace. All right, cool. Our agenda for today: providing some updates about the 1.25 release, which started earlier this week; sharing highlights from 1.24; diving deeply into all of the updates from the SIGs; and finally having a Q&A at the end for everyone to ask questions. So, on to 1.25 release updates. The 1.25 release timeline (and of course this is all subject to change) started just yesterday, May 23rd. The enhancements freeze is coming up on the 16th of June. Code freeze follows on August 2nd, and our target release date is August 23rd, 2022. So everyone, make sure that's in your calendars; it's going to be an exciting day. Now, quickly segueing into 1.24 highlights, I will turn it over to James to talk about our release theme. Thanks, Mickey. So Kubernetes 1.24, as with all Kubernetes releases over the past few years, has a release theme, and the release theme for 1.24 is Stargazer. This is a theme that I picked really to encapsulate the idea that everyone in the community can work together and look forward to try to find new ways of solving problems, and really interesting solutions are out there. The logo was made by my wonderful wife, Brittany, and I think it looks beautiful. So yeah, I think that's really all I have to say about it. All righty, thanks James. Next up, we have Grace to talk about our enhancements. Right, so for 1.24, we have 46 total enhancements after code freeze and everything. Within that, we have 14 graduating to stable, 15 to beta, 13 in alpha, two deprecations, and two removals.
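Since thirteen of this release's features are alpha, they're off by default behind feature gates. As a minimal sketch of turning one on in a local test cluster, assuming you use kind (ContextualLogging is one of the real 1.24 alpha gates; substitute whichever gate you want to try):

```yaml
# kind cluster config enabling an alpha feature gate.
# Create the cluster with: kind create cluster --config this-file.yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
featureGates:
  ContextualLogging: true   # 1.24 alpha gate; swap in the gate you need
nodes:
- role: control-plane
```

On a self-managed control plane, the equivalent is the --feature-gates flag on the relevant components.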
Interestingly, since 1.17, we've always delivered more than 10 stable features. Also, the alpha features are obviously new, so if anyone's interested in using them, make sure to turn on the feature flag, and we're always open for feedback on features as well. Awesome. Thanks, Grace. So next up, we have some major themes. I might turn it over to James to talk specifically about our first theme, and then we can all dive in and chat about the ones afterwards. Oh, yeah. So the Dockershim removal has been a topic that we've been talking about and communicating about within the release team for some time now. Just to give a quick overview, for those that might not be aware, Dockershim is a compatibility layer for Docker Engine that was deprecated back in Kubernetes 1.20 and has now been removed as of Kubernetes 1.24. We have posted a large amount of documentation, primarily aimed at platform teams and administrators, with questions to check whether they are affected by this change, lots of migration instructions, and things to do next. I think the short version of this really is that most people who are working as application developers on Kubernetes or deploying day to day will not need to change their workflows. So if you use Docker as a local CLI on your computer when you're developing and packaging things, or when you use CI pipelines to create containers, then that workflow will almost certainly not change. This really only affects a subset of people running Kubernetes. There's a lot more information about this out there. We highlighted the exact technical nature of this change in our release blog, and there have been a number of blog posts over the past few months and years on the Kubernetes blog around this change and about what you can do to find out. So if you have any concerns about this, as I know that some members of the community have been rather concerned, then do read our release blog and it'll give you all the information you need. Cool. Thank you, James. Also, one other thing to call out is Grace brought up the need to turn on alpha flags to check out alpha features. One change happening in 1.24 is that new beta APIs are now not enabled in clusters by default. This doesn't impact existing beta APIs, and basically it just means that moving forward, beta APIs are something that you'll also have to turn on via a flag. Release artifacts are signed. This is a big one for the release team. I'm not sure what external impact this has, but essentially we're rethinking how to guard the Kubernetes release process against supply chain attacks. So do you want to talk a little bit about Sigstore, or should we just dive into that later, James? I think we can dive into that later. I think there's a KEP for it later under SIG Release. I think it deserves its own discussion, but it's a very interesting one for the supply chain security and foundations of Kubernetes altogether. Cool. And finally, Kubernetes 1.24 offers beta support for publishing its APIs in the OpenAPI v3 format. So you can actually request the OpenAPI v3 spec for all Kubernetes types. All right, on to the next slide. More major themes. So storage capacity tracking is something that's been introduced. It supports exposing currently available storage capacity via CSIStorageCapacity objects, and also enhances the scheduling of pods that use CSI volumes with late binding.
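To make the storage capacity tracking theme concrete, here is a hedged sketch of a CSIStorageCapacity object. In practice these are produced by a CSI driver rather than written by hand, and the name, topology label, and storage class below are illustrative:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIStorageCapacity
metadata:
  name: example-capacity        # hypothetical; drivers generate these names
  namespace: kube-system
storageClassName: fast-ssd      # hypothetical StorageClass
capacity: 100Gi                 # space the driver reports as provisionable
nodeTopology:                   # which nodes this capacity applies to
  matchLabels:
    topology.example.com/zone: zone-a   # hypothetical topology label
```

The scheduler consults these objects for PVCs whose StorageClass uses volumeBindingMode: WaitForFirstConsumer, which is the late binding being referred to here.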
Another CSI-related change to call out is that volume expansion is stable, and it adds support for resizing existing persistent volumes. Yeah, those two are really a culmination of a lot of work from SIG Storage, and we're going to come back to them in a little bit more detail later. But these are features that have been in beta for a while, and to see them season and go stable is really exciting for the stability of the platform as a whole. Next, non-preempting priority is going to stable. That's not just a long name there. The feature adds a new option to priority classes, which can enable or disable pod preemption. Cool. And then finally, one other CSI-related feature. We're moving away from having in-tree storage plugins to having everything be out of tree. This helps a lot with releases, and it also reduces the support boundary. So in this release, the Azure Disk and OpenStack Cinder plugins have both been migrated. I believe Azure File has as well. Yeah, I believe it has. And even more major themes. James, you want to talk about the gRPC probes? Oh, I love the gRPC probes one. So of course, liveness and readiness probes have been a feature in Kubernetes for a long time: the ability to probe a container and ask it for its current status, either liveness or readiness, with different behavior within Kubernetes for what that means. But up until now, there was no native gRPC option. So if your application did not otherwise ship an HTTP server and primarily communicated over gRPC, that meant you had to bundle an entire HTTP server just for this probe. Whereas now we also support gRPC probes, which means if you're working on a microservices application that is using a lot of gRPC and doesn't use HTTP internally at all, it's going to really streamline the deployment of your internal services that still need to have these probes but don't otherwise need to speak HTTP. Cool. Thank you. That was a lot of enthusiasm. Another one that's come up: originally released as alpha in Kubernetes 1.20, kubelet support for external image credential providers is now beta. And what's cool about this is that the kubelet can now dynamically retrieve credentials from a container image registry using exec plugins, rather than having to store the credentials on the node's file system. That's pretty neat. Next we have contextual logging in alpha. So essentially this enables the caller of a function to control all aspects of logging, which allows you to attach key-value pairs, names, and verbosity to log calls. Do you want to take the IP collision one, James? Yeah. Sorry, I was muted. So the IP collision one is another really interesting one for me. This is the idea that a ClusterIP service can receive a dynamically assigned IP address, or you can hard-code an IP address in your service definition. Before this feature, there was no way to guarantee that the hard-coded address you chose was not already dynamically allocated to another service. So this allows you a little bit greater control: a subset of the IP range for service IPs will be used for dynamic allocation, which means that you can use the rest for static allocations. So you can avoid this problem and have a mix of dynamically and statically allocated service IPs without having to worry about contention.
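As a small sketch of that mix of static and dynamic service IPs (the name and address are illustrative, and the address must fall within your cluster's service CIDR):

```yaml
# A Service that pins a specific cluster IP instead of taking a dynamic one.
apiVersion: v1
kind: Service
metadata:
  name: cluster-dns            # hypothetical name
spec:
  clusterIP: 10.96.0.53        # hard-coded; must be inside the service CIDR
  selector:
    app: dns
  ports:
  - port: 53
    protocol: UDP
```

With the new alpha behavior, dynamic allocation prefers the upper band of the service CIDR, leaving the lower band for manual assignments like this one, which shrinks the collision window.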
Next we have the dynamic kubelet configuration removal. What this feature did was allow live rollout of kubelet configuration in a live cluster. But I think there was a lack of motivation or interest within the community to promote this to stable. So it has been removed from the kubelet in 1.24 and will be removed from the API server in 1.26. Perfect. Thank you, Grace. And now we are going to segue into the more in-depth updates from all of the SIGs, beginning with API Machinery. So SIG API Machinery covers all aspects of the API server: API registration, discovery, API CRUD semantics, admission control, encoding, decoding, conversion, defaulting, the etcd persistence layer, OpenAPI, custom resource definitions, garbage collection, and also client libraries. And after that mouthful, I can start talking about some of the specific changes that have come out. One of those changes is the deprecation and removal of the SelfLink field. SelfLink is a URL that represents a given object. It's part of ObjectMeta and also ListMeta, which means that it's part of every single Kubernetes object. With this, we are deprecating the field and removing it at a later release, according to the deprecation policy. And just because we haven't talked about the deprecation policy much yet: the way that Kubernetes does things is, if something is not looking like it's going to graduate to stable, or if it looks like it's been replaced by a more capable feature, it will enter the deprecation process, which means rather than completely removing an API feature immediately, there is a process where it'll be removed in a later version. So in this case, we've just begun the deprecation process, and it will be fully removed from the API server at a later date. Next up, we have efficient watch resumption. The kube-apiserver watch cache is initialized from etcd at the moment when it starts, with an empty change history. What that means is that clients that want to resume a watch immediately after the API server reboots almost always have a resource version that is outside the history window. And this is a way of addressing that problem. Basically what we're doing here is ensuring that a watch can be efficiently resumed after the API server is rebooted. This feature has graduated through alpha and beta and is finally stable. For more info, you can check out the KEP. Next up, we have the beta graduation of field validation. In this case, what we're doing is adding the ability to optionally trigger schema validation on the API server that errors when an unknown field is detected. So if I were a client sending a create, update, or patch request to the server, I want to be able to instruct the server to fail when the object I send has fields that are not valid for the Kubernetes resource. And what this does is allow us to remove client-side validation from kubectl while maintaining the same core functionality of erroring out on requests that contain unknown or invalid fields. And also, I should say, James and Grace, please feel free to hop in if there's anything you would like to add. One thing I want to call out is that you said "Kube Cuddle" instead of the correct "kubectl." Oh dear, yeah. Well, we'll save that for offline. But thank you, Grace. Next up, we have OpenAPI enum types. This detects enum types in resource types and generates a definition in the OpenAPI spec. What else can I say about this? Currently, types in the API have fields that are actually enums but are represented as plain strings.
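Before moving on, a quick sketch of the kind of mistake that field validation catches. The Deployment is illustrative, and the strict behavior applies only when a client requests it (for example via the fieldValidation query parameter) on a cluster where the feature is enabled:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo                 # hypothetical
spec:
  replica: 3                 # typo: should be "replicas"; silently dropped
                             # by default, rejected under strict validation
  selector:
    matchLabels: {app: demo}
  template:
    metadata:
      labels: {app: demo}
    spec:
      containers:
      - name: app
        image: nginx:1.21
```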
And what this does is propose an enum marker for type aliases that represent enums. So now the OpenAPI generator will have the capability to recognize the marker and auto-detect the possible values of an enum. And next up, and the last one for API Machinery, is OpenAPI v3, which we already called out in major themes. So essentially what we're talking about here is just publishing the OpenAPI v3 spec for built-in types, CRDs, and aggregated API types. Yeah, I don't think there's too much else to add on this one. I just want to call out that this feature doesn't include fields outside of the schema object. They might be nice to add on later, but that's not part of the enhancement. Cool. Thank you, Grace. Next up, we have SIG Apps updates. SIG Apps covers deploying and operating applications in Kubernetes, and focuses on the developer and DevOps experiences of deploying those apps. First we have maxUnavailable for StatefulSets entering alpha. This implements maxUnavailable for StatefulSets during a rolling upgrade. When a StatefulSet's update strategy is set to RollingUpdate, the StatefulSet controller will delete and recreate each pod in the StatefulSet. Now, if any of you are familiar with doing application updates, being able to do rolling updates is incredibly critical, and so having more control over them is really helpful for the process of updating your applications. Also worth calling out: updating each pod currently happens one at a time. And now what you have is support for a maxUnavailable variable as well. What that means is you can say you have X number of pods that you are comfortable with being unavailable simultaneously, which facilitates a speedier rollout than doing something like waiting for all of those pods to be updated serially. Next up, we have indexed job semantics in the Job API. What this does is add a completion index to the pods of jobs with a fixed completion count. We are essentially adding user-friendly support for running parallel jobs, and here what we mean by parallel is multiple pods per job. Jobs can be parallel, where the pods have no dependencies between each other, or tightly coupled, where the pods communicate amongst themselves to make progress. This essentially adds the fixed completion count mode, which supports running parallel programs with a focus on the ease of workload partitioning. Next up we have batch/v1, which adds a suspend field to the Jobs API. Another Jobs API update: this adds a suspend field that allows orchestrators to create jobs with more control over when pods are created. What you can do here is allow the creation of pods using the job controller with specific requirements, such as pod-level parallelism, completing a particular number of executions, and restart policies as well. Just a quick call out: one of the goals this feature is trying to achieve is to indefinitely delay the creation of pods owned by a job. And it's also worth calling out that this has entered stable. Next up we have tracking ready pods in the job status. We are hitting this Jobs API really hard with updates. What this does is add a ready field to the job status that tracks the number of pods with the Ready condition. So along with all the other jobs-related updates, this gives users a lot more visibility into the number of job pods that are running or in pending phases. It adds a ready count, the number of job pods with the Ready condition, with the same best-effort guarantees as the existing fields.
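Since several of these job changes compose nicely, here's a hedged sketch of a Job using both the Indexed completion mode and the suspend field (the name and image are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo           # hypothetical
spec:
  completions: 5               # fixed completion count
  parallelism: 2               # at most two pods at a time
  completionMode: Indexed      # each pod gets a distinct completion index
  suspend: true                # flip to false when the orchestrator is ready
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.35
        # the index is exposed to the pod as JOB_COMPLETION_INDEX
        command: ["sh", "-c", "echo processing shard $JOB_COMPLETION_INDEX"]
```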
I think it's worth calling out that job pods can remain in pending phases for a long time. This can happen in clusters with tight resources or with long-lasting image pull processes. And the problem is that pending kind of gives a false impression of progress or hope to end users. So in this case, adding a new field gives folks more information about the actual status of their jobs. And last up we have time zone support in CronJob. A CronJob creates jobs based on the schedule specified by the author, and the time zone used has been dependent on where the kube-controller-manager is running. This extends the CronJob resource with the ability for a user to define the time zone a job should be created in. Next up we have SIG Architecture, and I will dive right into this. This is again one that we called out in major themes, which is simply that new beta APIs are not enabled in clusters by default. Existing beta APIs, and new versions of existing beta APIs, will continue to be enabled by default, so there won't be any changes required there. But this is definitely worth noting, because previously there were a lot of beta APIs available in clusters by default, and so you might expect that when a new feature graduates from alpha to beta you would see it already enabled. In this case, you're going to want to check to make sure that the API is in fact enabled. I don't think there are any in 1.24 actually affected by this. I think all of the beta changes are either things that have already been in beta or they don't introduce new APIs. So if it's just a beta change of something else, and not an API change, I don't think it's affected. Correct me if I'm wrong, but I think that's the case. I can't think of an example either, but it is worth double-checking. So I think this will affect future releases more, 1.25 probably and beyond. Yes. Next up we have SIG Auth, which covers improvements to Kubernetes authorization, authentication, and cluster security policy. The first one is CSR duration, and what this does is extend the certificates API with a mechanism that allows clients to request a specific duration for an issued certificate. It's a little bit self-explanatory, but it's worth calling out that today, certificates issued through the CertificateSigningRequest API are not revocable, and you also do not have the ability to control the duration of an issued certificate, and there might be a reason to have trust distinctions for different clients. So in this case, what we're doing is enabling users to request a specific duration, again to support things like trust distinctions. The great thing here is that this, and also of course all of the work on signing release artifacts with Sigstore, helps with the process of increasing the overall security of the Kubernetes ecosystem. My colleagues on the cert-manager team were very happy about this one. Okay. And then the last one for SIG Auth is the reduction of secret-based service account tokens. This very simply reduces the surface area of secret-based service account tokens. Yeah, that's all I got. Anything to add? Okay. Okay.
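Circling back to the CSR duration feature for a moment, here's a sketch of how a client asks for a bounded certificate lifetime. The name is illustrative, the request payload is elided, and signers may still issue a shorter duration than requested:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-client-cert    # hypothetical
spec:
  request: <base64-encoded PEM certificate request>   # elided here
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400     # the new field: ask for a 24-hour certificate
  usages:
  - client auth
```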
And I realized I've spoken a lot. I promise I will turn it over to my colleagues momentarily and they will handle the next few. But I think this is the last SIG that I'll be covering, and it's SIG CLI. SIG CLI covers kubectl. Grace, is that right? Yeah, you got it. Thank you. kubectl and related tools. Oh, it's "kube-c-t-l." SIG CLI focuses on the development and standardization of the CLI framework and its dependencies, and simply improving the command-line experience for developer and DevOps personas. So one of the changes is a new alpha feature related to kubectl return code normalization, which simply normalizes the return codes issued by kubectl, so they're consistent across the board when an error occurs and also when commands succeed. Yes. And we are also adding a new alpha feature, which is adding subresource support to kubectl. It adds a new subresource flag for commands like get, patch, edit, and replace, to allow fetching and updating subresources like status, scale, et cetera. Today, when you're testing or debugging or fetching subresources like the status of API objects via kubectl, it involves using kubectl with the raw flag, and patching subresources isn't possible at all and requires using curl directly. And, you know, that kind of violates the SIG CLI principle of making things user friendly, because this method is very cumbersome. With this, we're adding subresources as a first-class option for kubectl that allows you to work with the API in a very generic fashion. All right. And I'm going to bring it over to Grace. Yes, I will take it from here. So first, we have SIG Cloud Provider. The first enhancement they have is to add a new LoadBalancerClass field for Services of type LoadBalancer. And what this allows you to do is to have multiple load balancers, for the use case in which you have multiple workloads and each of them requires a different type of load balancer. Now you can use this field, which is service.spec.loadBalancerClass. And it's also now stable. Next up, we have leader migration for controller managers. And the name doesn't tell me anything about the enhancement, but what it does is migrate cloud-specific code inside the kube-controller-manager to its out-of-tree equivalent. And this is also a stable enhancement. Next, we have SIG Instrumentation. They have three enhancements. The first one is the deprecation of Kubernetes system components log sanitization. This feature came out of a security audit, but essentially we are deprecating it. It allowed dynamic log sanitization, which essentially is a filter for logs. Okay. And then next up, we have deprecating specific klog flags in Kubernetes components. This came out due to complexity as well as performance issues. We adopted klog back when the Go logging ecosystem was not as developed as it is now. And so this deprecation, which is in beta, will remove some, not all, of the klog flags in the components. Relating to this, next up we have contextual logging, which is in alpha. This was one of the major themes and relates to the klog one I just mentioned: it allows you to log better, such as attaching key-value pairs and names and controlling verbosity. So this is in alpha currently. All righty. If no one has anything to add, I'll move on to SIG Network. They have four enhancements. The first one is support for mixed protocols in Services with type=LoadBalancer. So essentially, this allows you to use different protocols via the same load balancer IP address, so different Layer 4 protocols.
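Here's a minimal sketch of what such a mixed-protocol Service looks like, assuming your cloud load balancer implementation supports it (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: game-server            # hypothetical
spec:
  type: LoadBalancer
  selector:
    app: game
  ports:
  - name: control              # TCP and UDP behind the same load balancer IP
    port: 8443
    protocol: TCP
  - name: gameplay
    port: 8443
    protocol: UDP
```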
So the feature is currently in beta, and the goal is to analyze its impact and see how cloud provider load balancer implementations can use this in the future. Okay. And then next up, a feature in alpha: we have reserving service IP ranges for dynamic and static IP allocation. The goal of this is to essentially reduce the risk of IP collisions by subdividing the cluster IP range into bands for dynamic and static IP allocation. Next up. Sorry, I might have missed that one. No, I think we skipped ahead a little bit. Okay. I'm only looking at the slides because I don't have a monitor, so I can't tell. No worries. Jump back to service internal traffic policy just very quickly. Okay. Service internal... which one is kind of lost now? You said the second one in SIG Network. Oh, okay. Okay. Maybe I missed this one. Yeah. Service internal traffic policy. So this is a new field that allows routing to the local node instead of randomizing it across nodes. The field is spec.internalTrafficPolicy, pretty straightforward. Okay. Where are we now? Network policy status. Network policy status. Okay. So network policy status provides feedback to users when they use network policies. Currently, or not currently, but with this feature, which is in beta: if you use a network policy and something goes wrong, there is no immediate feedback today, and this is a feature that is going to tell the user whether their policy has been properly parsed or not. Okay. And then next up, I think I covered this one already. Yep. Thank you. SIG Node: dynamic kubelet configuration and its removal. This was one of the major themes; the feature essentially allowed live rollout of kubelet configuration, but due to lack of motivation and interest in promoting it to stable, this feature is being deprecated. Or removed at this point. Yes. Oh, yeah. Okay. Yeah. Removed. Gone. Next up, we have pod overhead. This feature is going to stable, and it's a mechanism to account for the pod overhead required when scheduling, so accounting for what the pod's overhead is in the runtime solution. Next up, we have the kubelet credential provider. This is a plugin mechanism to allow the kubelet to dynamically fetch image registry credentials for any cloud provider, on top of the three we currently have built in, which are Azure Container Registry, Elastic Container Registry, and Google Container Registry. So this plugin allows the kubelet to dynamically fetch credentials for images from any cloud provider, and it is currently in beta. And then this is a big one. As James mentioned before, the Dockershim removal, due to incompatibility as well as maintenance burden. The dockershim container runtime shim has been removed from the kubelet code base, and there is loads and loads of documentation. I think there's an FAQ on the Kubernetes website as well, if anyone has questions. Yeah. And just to add on, what Grace said is correct. There's loads and loads of documentation out there about this process. On the day of launch, there was a post published about the history of the Dockershim removal process. There was one launched around the same time as our removals and deprecations blog for 1.24 that included the exact steps you need to ensure that your cluster is ready for the removal of Dockershim. Prior to that, there were other blog posts about why not to worry about Docker going away. And I think there's just been a lot done by the community to reassure people that we were all set for this deprecation, and there's really truly nothing to worry about. Awesome. Awesome.
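To put a shape on the kubelet credential provider Grace just mentioned, here is a hedged sketch of the config file the kubelet reads; the provider binary name and registry pattern are made up:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: CredentialProviderConfig
providers:
- name: my-registry-helper            # hypothetical exec plugin binary name
  matchImages:
  - "*.registry.example.com"          # registries this plugin handles
  defaultCacheDuration: "12h"         # how long the kubelet caches credentials
  apiVersion: credentialprovider.kubelet.k8s.io/v1beta1
```

The kubelet is pointed at this file, and at the directory holding the plugin binary, via its --image-credential-provider-config and --image-credential-provider-bin-dir flags.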
Next up, we have pod-priority-based graceful node shutdown. Once again, the name is quite a mouthful, but essentially what this allows the node to do is detect pod priorities when the node is being shut down and shut those pods down gracefully, in priority order. The feature is in beta. And I think this might be the last one for SIG Node, but one of our major themes, gRPC probes, has been added as native configuration, so users are no longer required to package a health check binary; you can do it natively now. SIG Release, my favorite SIG, only has one feature, which is rare; we often don't have any enhancements at all, but this is a big one: signing release artifacts, currently in alpha. So SIG Release is trying to rethink, or kind of create, a framework around tooling and how we want the release artifacts to be signed. And the tooling is going to be the Linux Foundation's Sigstore project. I think there's loads of documentation on that, and we talked about that at KubeCon as well. But the goal of this is to support a secure software supply chain. Yeah, and just to add on, this provides end users with a chance to verify the integrity of everything that you're downloading, all of the Kubernetes artifacts. So this is a pretty big change. It's also worth calling out that this helps the Kubernetes project achieve greater compliance, specifically with the SLSA standard. All right, we're running over to James. SIG Scheduling. So SIG Scheduling looks after anything to do with, well, the act of scheduling, which in Kubernetes terms is anything that chooses where to put a pod, more or less. We have a few enhancements here. The first one is non-preempting options for priority classes. Priority classes are a feature that has existed for a while that allows you to say that some pods are more important, or a higher priority, than others. And then when you hit resource contention, you can evict some other pods to make room for the more important, higher-priority ones. This has now been changed to add a non-preempting option. And if you use a non-preempting priority class, then your priority class won't evict things; it will instead only be used to make scheduling decisions. And the real driver behind this is batch workloads, where you want to use priority classes to prioritize upcoming scheduled work on a full cluster, but you don't want to evict pods that are partway through a computation. So this is really an improvement for batch workloads. Someone just said they had no sound. Can people hear me? I can hear you. Okay, cool. That person might want to check their audio settings, I'm afraid. Shall we move on? Yeah, if anyone else wants to ping in the chat from the audience, if they can't hear James, then we can validate if that's true. Cool. Thank you, Patrick. Okay. Shall we move on? Namespace selector pod affinity. So again, pod affinity. Oh, the last feature was stable, by the way, as is this one. Pod affinity is a feature, again, that's been around in Kubernetes for a while. It allows you to say that you want pods to exist alongside other pods, which is affinity, or not with other pods, which is anti-affinity. But that was always computed against pods in the same namespace. Now you can give a namespace selector, so you can do this with pods across namespaces, which is a nice little enhancement there.
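A quick sketch of that namespace selector inside a pod's affinity rules (names and labels are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                     # hypothetical
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: cache
        namespaceSelector:      # match pods in any namespace labeled team=web
          matchLabels:
            team: web
        topologyKey: kubernetes.io/hostname
  containers:
  - name: web
    image: nginx:1.21
```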
MinDomains in Pod Topology Spread. So this is around tuning how the Pod Topology Spread feature works, which, I believe, is a feature that allows you to give Kubernetes some awareness of which nodes are next to other nodes, for example, in a data center. So this will allow you to make intelligent scheduling decisions for things like DR purposes, all that sort of thing. So there's another enhancement knob coming through in alpha. Next, we move on to SIG Storage. SIG Storage has a lot of KEPs, so we will try to go through these relatively quickly, but there's a lot of good features coming out here. SIG Storage is responsible for anything to do with storage. Predominantly, of course, this will be your persistent volumes and persistent volume claims, but this includes all sorts of features around block storage, file storage, even object storage in some cases. So there's a lot going on. Right. What's our first one? CSI volume expansion is now stable. We spoke about this one in our major themes. This is something that's been around for a while. Again, it depends on support from your CSI driver whether it can actually expand volumes at all, but this means you can use the Kubernetes API to manage that expansion and the size of your storage for persistent volumes, which is really exciting. Next, volume health monitoring. This is a thing coming into alpha to provide some health information back from the underlying storage driver into Kubernetes, again to help administrators really use the Kubernetes API as a first-class way of managing their storage. So this is something that you need a feature flag to enable, but it is coming in, and it's great to see this coming through. Next, we have... I don't think the slide has updated for me, Mickey. Oh, no, it has. I just can't read. Storage capacity tracking. Like the first one, this is another one that's coming through into stable. This is reporting information about how much capacity is remaining in a particular piece of storage and, again, is used to help administrators understand the state of their clusters. And this is going into stable again. Next, we have a handful of enhancements, three of them, that are all about in-tree storage plugin migration. So we have a whole bunch of in-tree drivers; when we say in-tree, we mean that the code is in github.com/kubernetes/kubernetes. That's what we generally consider to be in-tree. So we had a whole bunch of storage drivers, and now that we have the Container Storage Interface, all of those are being pushed out into CSI plugins. For now, the APIs are remaining the same, but the logic is being pushed out into a driver. And there may be future changes around that coming from SIG Storage, but that's not now. We've seen this being done for a handful, but not all of them so far. So we're doing it for OpenStack Cinder. And I believe if you look at the next one, it's Azure Disk. And the one after that is Azure File. So these are at various levels: stable, stable, and beta. This is part of an ongoing effort within SIG Storage to move everything to CSI drivers and reduce the support burden of things in-tree.
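Going back to volume expansion for a second: in practice it's just an edit to the PVC's requested size, roughly like this (the name, class, and sizes are illustrative, and the underlying CSI driver has to support expansion):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data                   # hypothetical
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-ssd   # hypothetical class with allowVolumeExpansion: true
  resources:
    requests:
      storage: 20Gi            # bumped from 10Gi; the driver grows the volume
```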
Next, we have volume populators. So this is the idea that when you create a volume in Kubernetes, you can populate it from somewhere. Before this feature, that was primarily targeted at restoring snapshots. So you could point at a volume snapshot and restore it as the data source, or potentially, if your driver supported it, clone another volume that was already around. This is expanding that idea to make a generic volume populator, so that you can provide other ways of populating volumes into your Kubernetes cluster. This is a brand new beta feature, I believe. So it's going to be really interesting to see where this goes, and it's really interesting work coming from SIG Storage. Next, we have non-graceful node shutdown. This is really a reliability improvement around what happens if your node just dies. I think non-graceful node shutdown is a euphemism for your node crashed or someone unplugged it. So again, this is really just a reliability improvement. We have some behavioral options to ensure that things don't get stuck unscheduled, unable to rerun, and requiring someone to manually poke at them to fix them. Again, this is coming in alpha, so you will need a feature flag to enable this, but it is an interesting one coming up. Next, we have honor persistent volume reclaim policy. This is really just a standardization of behavior. There are certain circumstances in which the volume reclaim policy won't be honored, and this just makes it so that it will. This requires some behavioral changes, which is why it came through as an enhancement rather than a bug fix. This is, again, a pretty interesting one going through. And then finally, finally for SIG Storage, we have control volume mode conversion between source and target PVC. This is actually, you could argue, a security fix: if you had a kernel bug of a certain type, of which none currently exist but which have existed in the past, then you could use this to trick the kernel into doing something incorrect and cause a kernel crash. And if you can cause a kernel crash and you're creative, you can do all kinds of fun things. So this just really closes off something that no user would ever actually want to do, which is, for example, taking a snapshot of a volume-mode Block PVC and restoring it as volume-mode Filesystem. You're never really going to want to do that. So this is just an alpha feature to stop you trying to do that, in anticipation that eventually there could be a bug that this would be part of the exploit chain for. So yeah, this is pretty interesting stuff. Most users probably won't have to worry about it, but it's nice it's there. SIG Windows. SIG Windows, as the name implies... oh, this is the last SIG, by the way, only two more enhancements to go. SIG Windows, as the name implies, deals with Kubernetes on Windows, so Microsoft Windows. They have a couple of enhancements in. The first is the operational readiness specification. This is really a certification thing, to do with an end-to-end test suite. So this allows you to get greater... well, it will ultimately lead to having greater confidence in running production workloads on Windows clusters and Windows nodes. So that's a big improvement that we're happy to see. And then finally, the last enhancement for Kubernetes 1.24 is identify pod OS during API server admission, which is just about expressing which operating system a pod intends to run on, so that you can make scheduling decisions much more effectively if you have a cluster with mixed Windows and Linux nodes, which is entirely possible. Yeah, I think that is all of our enhancements.
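And to illustrate that last one before we wrap the enhancements: the pod OS field is a one-liner in the pod spec (beta in 1.24; the pod name and image here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: win-workload           # hypothetical
spec:
  os:
    name: windows              # declared up front so admission and scheduling can act on it
  containers:
  - name: app
    image: mcr.microsoft.com/windows/nanoserver:ltsc2022
```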
Cool. Yeah, 46 enhancements later, and it's time to segue into the release team shadow program. So, essentially, the three of us met and became great friends on the Kubernetes release team. The Kubernetes release team is a group of folks, many of whom have had experience from past releases, others of whom are shadows joining for the first time. Maybe they're existing Kubernetes contributors, maybe they're folks who are interested in collaborating on a big open source project. But whoever they are, they're a group of dedicated individuals who come to weekly meetings and take roles on a number of different teams. You know, enhancements, where Grace was the lead (lead, lead, lead enhancements, there we go), or the team lead, which is people who have come from previous releases and are one of the leaders of an upcoming release, just like James. You can also join me on the communications team. There's also a release notes team, a docs team, bug triage, and CI signal. Typically, the way that it works is that there is a lead for each one of those teams, with four to five shadows per role. The release cycles are typically 15 weeks. We ask folks to usually stick around a little bit after, just in case they're called on for a couple of additional weeks' worth of work. And we would encourage everyone watching, or everyone who works with Kubernetes, to consider joining the release team. You know, try to throw your hat into the mix if you want to be part of the 1.26 release, because 1.25 has already kicked off and the shadows have been selected. But we're always looking for new collaborators who can help out with the project, and we'd love to see your name on the next application. It's also worth calling out that there is a release team shadows GitHub repo that has more information about how to join the release team. James, do you have anything to add? Yeah, if people are interested in hearing that, 1.26 will probably start looking for shadows around about August, I would have thought. Yeah, mid August. Mid August, give or take. That may change, but if you pay attention to the SIG Dev, sorry, to the k-dev mailing list, dev@kubernetes.io, if you subscribe to that, then you will get information and updates posted to you that way. Or if you're interested in what, in general, SIG Release does, you can come along to our channel on the Kubernetes Slack, and we have bi-weekly main release team meetings on Wednesdays, sorry, on Tuesdays. So yeah, we're always happy to see people and talk to people. I have two things, and I'll be really quick. The first one is you don't need experience to join. I joined when I was in first year, skipping school to be in this webinar right now, so no experience required. And also, on top of the teams listed here, our release engineering team, or the branch managers, the people who are running the artifact signing initiatives, are always looking for people to help. It's a little bit different than the release team because the commitment is more long-term, but if you're interested, that's there as well. Cool, thank you James and Grace. And last up, we have time for questions. So I think we can look over in the Q&A to see if there are any questions asked by folks in the audience. Patrick wanted the link. I'm not sure of the link to what; I presume the shadow repo. I think the link will be available in the PowerPoint presentation. Yeah, James, if you want to track that down, we can also share it right now. Yeah, there you go. There you go, very fast. Whose street has alarms going on? That would be me.
Is there any place to validate and test features like volume health monitoring and storage capacity? Depending on what you mean by validate and test: do you mean by yourself, in your own clusters? Or do you mean, does the Kubernetes project as a whole perform testing on these features? Yeah, so, I mean, those features are enabled in Kubernetes 1.24. I think for volume health monitoring you might need an alpha flag, but storage capacity is stable. So if you spin up any Kubernetes 1.24 cluster, then you can do it. I'm not sure if any of the cloud providers have 1.24 yet, but you can use kind in order to spin up a local cluster with host path storage. Oh, I see what you mean. Do any of the locally supported CSI drivers support those two features? I don't know, actually. Sorry, I'm missing some of your question. SIG Storage might be a good place to reach out to. I would go to SIG Storage and ask them which of the storage drivers implement that. I'm not sure if any of the local ones do. I imagine they must, but I do not know. Netpol status. I don't think I understand that question, I'm afraid. What's the network policy status? So the goal of this is for the network policy provider to add feedback, or a status, for the user to see whether the network policy was properly parsed. So, feedback back to the user on what they requested, is my understanding. Could be wrong? I believe that's the case. Whether it's invalid or not, yeah. Yeah, it's worth calling out that for additional details about any of the things we covered today, for example the network policy status, we will be sending out the deck, and the deck will have links to all of the Kubernetes Enhancement Proposals and tracking issues that we discussed today. So if you want to dive just ridiculously deep into any of these new enhancements, you'll be able to do that afterwards. And thank you, Grace, for sending that out already. Oh, Grace, you're ahead of me there. Any other questions? About five more minutes, if anybody has anything else they want to ask. Is there anything else you want to add from a release team perspective? Just that you're only really seeing three of us here, but the release team is 30 people. Like, the release team itself, of course, only handles, I say only, only handles kind of the release mechanics, and we're 30 people. And all of the enhancements you've seen us talk about today were implemented by other teams of people in other SIGs. So there's hundreds, if not thousands, of individuals who have contributed to Kubernetes 1.24. So despite it just being the three of us here, there's a lot of work that has gone into this. I would love to say that we were responsible for 1.24, just the three of us, but it was truly a huge effort by everyone. All right. Well, if no one else has questions, we can wrap up a little bit early. Thank you, three of the 30, for coming and giving us this great webinar and teaching us about the updates and what's going on. I think everybody knows where to reach you, and if anyone has any other questions, definitely reach out. You can always hit us up on the Slack channels. Again, this recording will be online later today. And thanks everybody for joining us. It's been a great chat and we will see you next time. Thank you, everyone. And thank you, Libby. Yes, thank you all. We'll see you again soon.