All right. Welcome everyone to a special edition of CNCF live webinar, the Kubernetes 1.25 release. I'm Libby Schultz and I'm excited to be moderating today's webinar. I'm going to read our code of conduct and then I'll hand over to Kat Cosgrove, Kubernetes 1.25 release team communications lead, also of Pulumi Corporation; CC Huang, Kubernetes 1.25 release team lead, with Google; and Priyanka Sagu, Kubernetes 1.25 release team enhancements lead, with VMware. A few housekeeping items before we get started: during the webinar, you are not able to speak as an attendee, but there is a Q&A chat box on the right-hand side of the screen. Please excuse the dog. Feel free to drop your questions there and we'll get to as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io under online programs. They will also be available via your registration link and on our online programs YouTube playlist. With that, I will hand it over to Kat, CC and Priyanka to kick things off. Take it away. Hello. Thank you, Libby. Let me get these slides going. Okay, today we are going to talk about what's new in Kubernetes 1.25, which came out just a little bit ago. As already introduced, we are Kat Cosgrove, CC, and Priyanka from the release team. I was the communications lead, CC was our fearless release lead, and Priyanka was our equally fearless enhancements lead, because the enhancements team sounds very dramatic. It's a ton of work. So much work goes into getting a Kubernetes release out, and we're always very proud to execute it well, and this one went flawlessly.
So our agenda for the day: we're going to talk a little bit about a sneak peek of release 1.26 first, then go into the highlights from 1.25 and the specific updates from each SIG, followed by a Q&A. So if you have questions while we're going through this, feel free to drop them in the chat. If I can make it work within the flow of the talk, I will ask it of the presenters while we're doing this, but otherwise we will get to it at the end as time allows. So, starting with the 1.26 release sneak peek. The 1.26 release has started. The start date was Monday, September 5th. Enhancements freeze is coming up soon-ish, Friday, the 7th of October. So if you are in a SIG working on one of these enhancements, just know that it is coming up. Code freeze will be the 9th of November, and our target release date is the 6th of December. It's always worth highlighting that release dates are kind of a moving target. A lot of things go into getting a Kubernetes release out, so we aim for that, but things can happen, and we are doing our absolute best. And the Kubernetes release cadence has changed to three releases per calendar year, so 1.26 will be the last release of 2022, because there's no way we could get another one out in December. Moving on to the 1.25 highlights, which I will kick over to CC first to talk about our release theme, Combiner. Hi, everyone. So as release lead, I got the special benefit of picking the theme for the current release, and Combiner it is for Kubernetes 1.25. This comes from Transformers, obviously, and my son was really generous to share his favorite toy with me so that I could borrow it as the theme. When I think about Combiner, it kind of represents that the Kubernetes project itself is made up of many individual components, and also that the community is built and maintained by many individuals as well, which join forces to give us this awesome project. And here it comes, the Combiner. Hope you guys will love it. Yep.
And we have a lot of enhancements to talk about in this release. We're going to get through all of them, but first, Priyanka is going to give us a little bit of an overview of what happened this release. Hey, hello, everyone. So this release, we had a total of 40 enhancements during the cycle. Out of them, 13 graduated to GA, or went stable; 10 graduated to beta; and we had 15 newly introduced alpha features. We also had two deprecations this cycle. Just as a note, alpha features are the new features, and if you want to try them out, you would need to enable the feature flag. Also, since 1.17, every Kubernetes release has consistently delivered 10-plus stable features, and we are doing that this release too. That's a good one. Oh, that's cool. I didn't know that. That's a fun little thing. Oh, cute. We're so good at our jobs. And we've got some major themes to talk about, quite a lot of them. This release, we have three slides of major themes, but there were a lot of big, important, impactful, and very cool changes here. So CC, did you want to take these? Sure. Let's start with the major themes. I know this release we have so many major themes, and that's because we have so many amazing features we want to deliver to the users. Let's begin with the first one, the PodSecurityPolicy removal. I'll give a brief introduction on each of the major themes, and we'll definitely talk about the details later, so please be patient. The first one is PodSecurityPolicy. PodSecurityPolicy was initially deprecated back in 1.21, and in this release it is being removed. Together with that, the replacement, Pod Security admission, graduates to stable in this release as well. Pod Security admission is a built-in admission controller that evaluates pod specifications against the predefined Pod Security Standards, by simply just adding a label to the namespace. The next one, excuse me.
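As a quick sketch of the namespace-label mechanism just described (the namespace name below is hypothetical), opting a namespace into a Pod Security Standard with the built-in Pod Security admission controller looks roughly like this:

```yaml
# Illustrative namespace using the built-in Pod Security admission controller:
# enforce the "restricted" Pod Security Standard for every pod in this namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                                      # hypothetical namespace name
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.25
    # "audit" and "warn" modes can be set the same way:
    pod-security.kubernetes.io/warn: restricted
```

No separate policy object is needed; the labels alone select which standard (privileged, baseline, or restricted) the controller applies.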
The next one would be ephemeral containers, which also graduate to stable in this release. As everyone knows, an ephemeral container is a special type of container that runs temporarily in an existing pod. This is particularly useful for troubleshooting when you need to examine another container but cannot use kubectl exec because that container has crashed or its image lacks debugging utilities. It's especially useful for distroless images, obviously. This feature graduated to beta in 1.23, I believe, so now it's going stable. And the next one I'm going to talk about is the support for cgroups v2, which also graduates to stable in 1.25. As everyone knows, it's a Linux kernel feature, and it was declared stable in the Linux kernel a couple of years ago, and with some distributions now defaulting to this API, Kubernetes must support it to continue operating on those distributions. But no worries, cgroups v1 continues to be supported, and this enhancement puts us in a position to be ready for its eventual deprecation or replacement. And the next one I'm going to talk about is Windows support. As everyone knows, Kubernetes keeps putting continuous effort into Windows support. In this release, we added support for performance dashboards for Windows, we added unit test support for Windows, we added conformance test support for Windows, and there is also a new GitHub repo for Windows operational readiness, which we will maybe introduce later. And the next one is the container registry movement. In this release, we formally moved our container registry service from k8s.gcr.io to registry.k8s.io. This is an effort to spread the load and cost across cloud providers, and users who have the older registry in their configurations need to make the necessary switch.
But no worry, we will continue to support the older registry for quite some time, but you should think about migrating if you need to. And we have more coming on the next slides. We have SeccompDefault promoted to beta in this release. This one provides a native way to specify seccomp profiles for workloads, which is enabled by default now. Seccomp adds a layer of security that restricts the allowed set of syscalls to a smaller set, which can help make Kubernetes more secure. And the next one is endPort in NetworkPolicy, which also graduated to stable in this release. This one provides support for the endPort field, which can be used to specify a range of ports to which a network policy applies, instead of targeting a single port as previously. And the coming one would be local ephemeral storage capacity isolation, which also moved to GA in 1.25. This was introduced as alpha in 1.8 and beta in 1.10, and is now a stable feature. It provides support for capacity isolation of local ephemeral storage between pods, so that a pod can be hard limited in its consumption of shared resources. And the next one is CSI migration. The migration is an ongoing effort, we've been talking about it for a couple of releases now, which is led by SIG Storage. The goal is to move the in-tree volume plugins out to out-of-tree CSI drivers and eventually remove the in-tree volume plugins. The core CSI migration feature moved to GA in this release; the CSI migration for GCE PD and AWS EBS also moved to GA in this release. The CSI migration for vSphere remains in beta, but it's on by default, and CSI migration for Portworx moved to beta, but it's off by default for now. And the next one would be CSI ephemeral volumes, which allow CSI volumes to be specified directly in the pod specification for ephemeral use cases. This feature was initially introduced in 1.15 as an alpha feature, and now it has moved to stable.
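A minimal sketch of the CSI ephemeral (inline) volume idea just mentioned: the CSI volume is declared directly in the pod spec rather than through a PVC. The driver name and attributes below are hypothetical and depend on which ephemeral-capable CSI driver you actually run.

```yaml
# Sketch: a CSI volume declared inline in the pod spec (ephemeral use case).
apiVersion: v1
kind: Pod
metadata:
  name: inline-volume-demo
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.8
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    csi:
      driver: inline.storage.example.com   # hypothetical ephemeral CSI driver
      volumeAttributes:
        size: 1Gi                          # attributes are driver-specific
```

The volume shares the pod's lifecycle: it is created with the pod and cleaned up when the pod goes away.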
We have the final slide of major themes, which is exciting. The CRD validation expression language is also promoted to beta. This one introduces an expression language, the Common Expression Language, called CEL, which makes it possible to declare how custom resources are validated using CEL, validating the custom resource based on the validation rules you specify. And the next one is server-side unknown field validation, going to beta. This feature is now turned on by default, and it allows optionally triggering schema validation on the API server that errors when unknown fields are detected. This is the last puzzle piece for the removal of client-side validation, so hopefully after this feature we can safely remove the client-side validation. And the next one is the KMS v2 API. So in the 1.25 release, we introduced the KMS v2alpha1 API, which is targeted to address all the shortcomings of the v1 API; they are trying to add performance, rotation, and observability improvements. No user action is required now, and the previous encryption v1 API continues to be supported and allowed. And the last one is that the kube-proxy images are now based on distroless images. In previous releases, as we all know, the kube-proxy container image was built using Debian as the base image. Starting with this release, it switched to distroless, and this change reduced the image size by almost 50% and decreased the number of installed packages and files to only those strictly required, so now the image contains just what kube-proxy needs to do its job. Yeah, that's it. Hope you enjoy all the features. It's a lot of major themes. That is the end of the major themes, but now we are diving into the SIG updates, of which there are also many, because this is all of the actual enhancements. We're going to take these in chunks because there are quite a lot of them and we don't want anybody to get too tired.
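To make the CEL validation theme above concrete, here is a sketch of a fragment of a CRD's openAPIV3Schema using the new validation rules (the field names minReplicas and replicas are illustrative, not from any real CRD):

```yaml
# Fragment of a hypothetical CRD schema showing a CEL validation rule.
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      # CEL rules are declared right in the schema via x-kubernetes-validations;
      # "self" refers to the object (here, spec) the rule is attached to.
      x-kubernetes-validations:
      - rule: "self.minReplicas <= self.replicas"
        message: "minReplicas must not exceed replicas"
      properties:
        minReplicas:
          type: integer
        replicas:
          type: integer
```

The API server evaluates the rule on create and update, which is how this can replace a validating webhook for simple cross-field checks.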
I will introduce each SIG as we go, though. First up, we are going to be talking about the enhancements from SIG API Machinery. API Machinery covers all aspects of the API server: API registration and discovery, generic API CRUD semantics, admission control, encoding, decoding, on and on and on. It's a fairly large SIG. So we're going to try to go through these pretty quickly, by the way, because we only have another like 43 minutes and there are like 40 enhancements. So here we go. API Machinery — Priyanka, were you doing these or CC? I can take this one, since SIG API Machinery is my home SIG. Okay. So thank you for your patience. I'll make sure I quickly go through this. The first one I mentioned in the major themes as well, which is the CRD validation expression language — we are using the Common Expression Language, called CEL, for CRD validation. By introducing a new field called x-kubernetes-validations, you can specify the validation rules, and your custom resource will get validated, which is simple and easy. Hopefully people will get rid of validating webhooks for those kinds of jobs. Yeah, that's our hope. And the next one is server-side unknown field validation, which I also mentioned in the major themes. It allows the removal of client-side validation from kubectl, and it triggers schema validation on the API server that errors when unknown fields are detected. So now whenever a client sends a request to create, update, or patch an object, the server will validate that no extra or invalid fields are present. That's great. And just to note, it's in beta, so it's turned on by default now. Next up is SIG Apps, covering deploying and operating applications in Kubernetes; it focuses on the developer and DevOps experience of running applications in Kubernetes. Yes, we can go through the details. The first feature brought by SIG Apps is DaemonSet support for maxSurge.
As we all know, DaemonSets allow the update strategies OnDelete and RollingUpdate. With this feature, DaemonSets now support surge during a rolling update, which is part of the effort to minimize downtime. This maxSurge field allows a DaemonSet workload to run more than one pod on a node during a rolling update, which hopefully will minimize the downtime as it is supposed to. Thank you. The next one would be minReadySeconds for StatefulSets. The minReadySeconds field now ensures that a StatefulSet workload is ready for a given number of seconds before calling the pod available. The goal of adding this optional field to StatefulSets is that it hopefully provides buffer time to prevent killing pods in rotation before new pods show up. And the next one, thank you so much, is time zone support in CronJob. As we all know, a CronJob creates Jobs based on the schedule specified by the author, but the time zone used during creation depends on where the kube-controller-manager is running. This feature aims to extend the CronJob resource with the ability for the user to define the time zone in which a Job should be created. Yeah, there is a timeZone field in the specification. And the next one, thank you so much, would be retriable and non-retriable pod failures for Jobs. This feature extends Kubernetes to configure a Job policy for handling pod failures. In particular, the extension allows determining whether some pod failures are caused by infrastructure errors or by software bugs, and so on and so forth. It's an alpha feature. Yeah. Next up we have SIG Auth. It covers improvements to Kubernetes authorization, authentication, and cluster security policy. Is this CC or Priyanka talking about the big one?
Yeah, I can go through it, since I already mentioned it in the major themes. The first thing we mentioned in the major themes is the PodSecurityPolicy removal, and the replacement, Pod Security admission, graduated to stable as well. The new Pod Security admission is a built-in admission controller which helps you evaluate pod specifications against the predefined Pod Security Standards, and you can do so by simply adding labels to the namespace. That's it. The next one would be the KMS v2 API, which was also mentioned in the major themes. So, yeah, we don't need to go too much into the ones that were major themed, which is a lot of them. Hopefully we'll have time for questions. I think we will. Sure. So, next up, SIG Network. SIG Network is responsible for the components, interfaces, and APIs which expose networking capabilities to Kubernetes users and workloads. SIG Network also provides some of the reference implementations of these APIs, for example kube-proxy as a reference implementation of the Service API. And this is where CC takes over, right? I will cover this one. So, yeah, there are four exciting features coming from SIG Network as well. The first one is NetworkPolicy support for port ranges, which is graduating to stable. The NetworkPolicy API now provides the endPort field, which can be used to specify a range of ports instead of a single port. The next one would be enhanced node IPAM to support discontiguous cluster CIDRs. This one is interesting, because previously, when the Kubernetes node IPAM controller allocated IP ranges for pod CIDRs for nodes, it used a single range allocated to the cluster, and each node got a range of a fixed size from the overall cluster CIDR. But now, with this feature, users can dynamically allocate more IP ranges for pods, which is great.
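The endPort support just described can be sketched as follows; the selector labels and CIDR are illustrative, but the port/endPort pairing is the shape of the stable API:

```yaml
# Sketch: a NetworkPolicy using endPort to allow a whole TCP port range
# for egress, instead of listing each port individually.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-range
spec:
  podSelector:
    matchLabels:
      app: my-app          # hypothetical workload label
  policyTypes: ["Egress"]
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24  # hypothetical destination range
    ports:
    - protocol: TCP
      port: 32000
      endPort: 32768       # applies to ports 32000-32768 inclusive
```

Note that endPort must be greater than or equal to port, and the CNI plugin enforcing the policy must support ranges.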
The next one is reserved service IP ranges for dynamic and static IP allocation, which is graduating to beta. I remember a lot of people were excited about this feature when it was offered in the previous release as well. As we all know, a Service's cluster IP can be assigned in two ways, either dynamically or statically. This feature will allow you to use a different IP allocation strategy for Services, which hopefully will reduce the risk of collision. And the last one from SIG Network would be cleaning up iptables chain ownership. We know that some Kubernetes components create iptables chains and rules as part of their operations. These chains were never intended to be part of any Kubernetes API guarantees, but some external components nonetheless make use of some of them. So as part of the 1.25 release, SIG Network makes this declaration explicitly: the iptables chains that Kubernetes creates are intended only for Kubernetes' own internal use, and third-party components should not assume that Kubernetes will create any specific iptables chains, or that those chains will contain any specific rules if they do exist. As a result, if you have components which do use those kinds of iptables chains, please start thinking about migration. And as a result of the cleanup, the kubelet no longer unnecessarily creates iptables chains after the dockershim removal, and kube-proxy creates all the iptables chains it needs. Next up, we have the absolute largest section of enhancements from a single SIG: SIG Node. SIG Node is responsible for the components that support the controlled interaction between pods and host resources. They focus on the lifecycle of pods that are scheduled to a node, enabling a broad set of workload types, including workloads with hardware-specific or performance-sensitive requirements, and they maintain isolation boundaries between pods on a node, as well as between the pod and the host.
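As a sketch of the static side of the service IP allocation feature mentioned above: a static ClusterIP is simply written into the Service spec. With the reserved-ranges behavior, dynamic allocation prefers one band of the service CIDR, so static addresses chosen from the other band are less likely to collide. The IP, selector, and port below are illustrative and must fit your cluster's actual service CIDR.

```yaml
# Sketch: a Service with a statically assigned ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: cluster-dns
spec:
  clusterIP: 10.96.0.10    # hypothetical static IP inside the service CIDR
  selector:
    k8s-app: kube-dns      # hypothetical selector
  ports:
  - port: 53
    protocol: UDP
```

If the requested IP is already taken, the API server rejects the Service, which is exactly the collision risk this feature reduces.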
Priyanka has the dubious honor of taking on this long list of enhancements. Hey, thank you, Kat. So let's start with ephemeral containers, which this cycle are going stable. As Kubernetes gains in popularity, it's becoming the case that the person troubleshooting an application is not necessarily the person who built it. So operations staff and support organizations want the ability to attach a debugging environment to the pod. This feature adds to Kubernetes a mechanism to run a container with a temporary duration that executes within the namespaces of an existing pod. These are initiated by a user and intended to observe the state of other pods for troubleshooting purposes. Next up, we have support for user namespaces. This is a new feature introduced as alpha. This feature adds support for user namespaces in pods. The goal is to support user namespaces in Kubernetes to be able to run processes in pods with different user and group IDs than on the host. Specifically, it is helpful that a process running as a privileged process in the pod can run as an unprivileged process on the host. This feature will allow us to do that. Next, we have quotas for ephemeral storage. This one is beta in this release. This feature applies the use of quotas to ephemeral storage metrics gathering. The mechanism proposed as part of this feature is to utilize filesystem project quotas to provide monitoring of resource consumption and optionally enforcement of limits. Project quotas, initially in XFS and more recently ported to ext4, offer a kernel-based means of monitoring and restricting filesystem consumption, and can be applied to one or more directories. Next up, we have forensic container checkpointing. This is another alpha from SIG Node. The goal of this feature is to provide an interface to trigger a container checkpoint for forensic analysis. So what is container checkpointing?
It's a means to provide the functionality to take a snapshot of a running container. Then we can move or transfer the checkpointed container to another node, and the original container will never know that it was checkpointed. Next up, we have liveness probe grace periods; it's beta. Liveness probes currently use the terminationGracePeriodSeconds field for both normal shutdown and when probes fail. Hence, if a long termination period is set and a liveness probe fails, a workload will never be promptly restarted, because it will wait for the full termination period. This feature proposes adding a new field to probes, terminationGracePeriodSeconds. When this field is set, it will override the pod's terminationGracePeriodSeconds for liveness or startup probe terminations, and it will be ignored for readiness probes. It also maintains the current behavior if desired, while providing configuration to address this unintended behavior. Next up, we have cgroups v2. This feature adds support for cgroups v2 to the kubelet, finally. The new kernel cgroups v2 API was declared stable more than two years ago, and new features in the kernel, such as PSI, already depend upon cgroups v2. Some distros are already using cgroups v2 by default, and that prevented Kubernetes from working on those distros, as Kubernetes required running with cgroups v1. The introduction of this feature helps us with that. Next one, we have enable seccomp by default. This is a beta feature. Kubernetes provides a native way now to specify seccomp profiles for workloads, which is disabled by default today. Seccomp adds a layer of security that could help prevent CVEs or zero-days if enabled by default. So if we are enabling seccomp by default, we implicitly make Kubernetes more secure. Next up, we have pod conditions for the start and completion of sandbox creation. It's an alpha in this release.
So, completion of the creation of any pod sandbox is marked by the presence of a sandbox with networking configured. This feature proposes surfacing a new pod condition called PodHasNetwork. It's introduced as a field that indicates the successful completion of pod sandbox creation, concluding with the configuration of networking for the pod, from the kubelet. It will benefit cluster operators, especially of multi-tenant clusters, who are responsible for configuration and operational aspects of the various components that play a role in pod sandbox creation, such as CSI plugins, the container runtime, etc. Next up, we have kubelet OpenTelemetry tracing. This is another alpha. This is to enhance the kubelet to allow tracing gRPC and HTTP API requests. The kubelet is the integration point of a node's operating system and Kubernetes, and can make use of distributed tracing to improve the ease of use and enable easier analysis of trace data, providing the detail necessary to debug requests across service boundaries. Next up, we have add CPU manager policy option to align CPUs by socket, another alpha from SIG Node. Starting with Kubernetes 1.22, a new CPU manager flag has facilitated the use of CPU manager policy options. These policy options allow users to customize behavior based on workload requirements without having to introduce an entirely new policy. With this feature, a new CPU manager policy option is introduced, which ensures that all CPUs in an allocation are aligned at the socket boundary. Moving on to scheduling. SIG Scheduling is responsible for the components that make pod placement decisions. Do you want me to take these for you, Priyanka? So the first one in SIG Scheduling is the scheduler ComponentConfig API. It's a stable one. The kube-scheduler configuration API was actually in alpha and beta stages for several releases, and it finally graduated to GA this release cycle.
So with this feature, you can customize the behavior of the kube-scheduler by writing a configuration file and passing its path as a command-line argument; for example, you run kube-scheduler followed by the --config flag and the path of the configuration file. A scheduling profile allows you to configure the different stages of scheduling in the kube-scheduler, and each stage is exposed in an extension point. Plugins provide scheduling behaviors by implementing one or more of those extension points. Next up from SIG Scheduling, we have minDomains in pod topology spread. It's a beta feature. With this KEP, a new field, minDomains, is introduced in the pod spec's topologySpreadConstraints to limit the minimum number of topology domains. minDomains can be used only when whenUnsatisfiable equals DoNotSchedule. Next up, we have taking taints and tolerations into consideration when calculating pod topology spread skew. Currently, when calculating pod topology spread skew, tainted nodes are treated the same as any other regular nodes, which may lead to unexpectedly pending pods, as the skew constraint can only be satisfied on the tainted node. So this feature introduces two new fields for us: one of them is nodeAffinityPolicy, and another one is nodeTaintsPolicy. They provide an option for us as end users to specify whether or not to respect taints and tolerations when calculating pod topology spread skew. This was an alpha. Next up, we have respect pod topology spread after rolling upgrades. This is another alpha. The pod topology spread feature allows users to define the group of pods over which spreading is applied, using a labelSelector field. This means users should know the exact label key and value when defining the pod spec. With this feature, a complementary new field, called matchLabelKeys, is attached alongside labelSelector in topologySpreadConstraints, which represents a set of label keys only.
So the scheduler will use those keys to look up label values from the incoming pod, and those key-value labels will be ANDed with the labelSelector to identify the group of existing pods over which the spreading skew will be calculated. Over to you, Kat. All right. Next up is SIG Security. SIG Security covers horizontal security initiatives for the Kubernetes project, including regular security audits, the vulnerability management process, cross-cutting security documentation, and security community management. SIG Security has a lot on their plates, folks. We have only one feature coming from SIG Security this cycle, and it's a very important one: an auto-refreshing official CVE feed. It's an alpha this cycle. Currently it's not possible to filter for issues or PRs that are related to CVEs announced by Kubernetes. But this KEP addresses that concern by labeling these issues or PRs with a new label called official-cve-feed and, using automation, creating a periodically auto-refreshing, machine-readable list of official Kubernetes CVEs. The CVE feed will allow end users to programmatically fetch the list of CVEs and allow them to get the latest information from the Kubernetes community. Pretty cool. All right. Now it's time for SIG Storage. SIG Storage is responsible for ensuring that different types of file and block storage are available wherever a container is scheduled, for storage capacity management, for influencing scheduling of containers based on storage, and for generic operations on storage like snapshotting, etc. I will take this one so Priyanka can have some water and rest her voice. All right, first up is local ephemeral storage capacity isolation. In addition to persistent storage, pods and containers may require ephemeral or transient local storage for scratch space, caching, and logs.
Ephemeral storage is unstructured and shares space, not data, between all pods running on a node, in addition to other uses by the system. Local storage capacity isolation as a feature provides support for capacity isolation of shared storage between pods, such that a pod can be hard limited in its consumption of shared resources, by evicting pods if their consumption of shared storage exceeds the limit set. And this is a graduation to stable. Another graduation to stable: ephemeral inline CSI volumes. Previously, volumes backed by CSI drivers could only be used with the PersistentVolume and PersistentVolumeClaim objects. This feature implements support for the ability to nest CSI volume declarations within pod specs for ephemeral-style drivers. It allows driver developers to create new types of CSI drivers, such as ephemeral volume drivers, which can be used to inject arbitrary state, such as configuration, secrets, or similar information, directly inside pods using a mounted volume. Yet another stable — we have so many stable graduations this time, it's really rad. This is CSI migration for the core, AWS, and GCE plugins. CSI defines a standard interface for communication between the container orchestrator and the storage plugins. This migrates the internals of the in-tree plugins to call out to CSI plugins, because we will not be able to deprecate the current internal plugin APIs due to Kubernetes' own API deprecation policies. The CSI migration for vSphere, by the way, remains in beta, but it is on by default. And the CSI migration for Portworx has moved up to beta, but is off by default, as we mentioned in the major themes section. If the CSI migration is working properly, by the way, Kubernetes end users should not notice a difference at all here. A new alpha feature: speed up SELinux volume relabeling using mounts.
This feature tries to speed up the way that volumes, inclusive of persistent volumes, are made available to pods on systems with SELinux in enforcing mode. Currently, this includes recursive relabeling of all files on a volume before a container can be started, which is pretty slow if the volume is large. So this feature uses the mount option -o context=<value> to set the SELinux context of all files on a volume without recursively walking through the volume. Yet another alpha: node expand secret for CSI drivers. This feature adds a way to add a node expand secret to the CSI PersistentVolume source, enabling the CSI client to send it as part of the NodeExpandVolume request to the CSI drivers, for making use of it in various node operations. And a deprecation. We have deprecated the GlusterFS in-tree driver. GlusterFS was one of the first dynamic provisioners, which made it into the Kubernetes release back in 1.4. Then, when CSI plugins and drivers started to appear, a GlusterFS CSI driver came with them. However, that project isn't maintained at present — it hasn't been maintained in years. We did discuss the possibility of migration to a compatible CSI driver; the discussion should be linked from the KEP link on the slide, but ultimately we decided that it was best to deprecate it. This enhancement begins the deprecation process of the GlusterFS plugin from the in-tree drivers. And lastly — I think this might be the last one — is SIG Windows. SIG Windows focuses on supporting Windows nodes and scheduling Windows containers on Kubernetes. Priyanka, did you want to take this one? If you did, you are muted. You're muted in the webinar window. I can take it if not. I think you're unmuted now. I'm on a delay now. So if you can hear me well, I can take this. Oh yeah, I can hear you. Okay. So we have one feature coming from SIG Windows: that's identify pod OS during API server admission.
So identifying the OS of the pods during API server admission is very crucial, so that we can apply appropriate security constraints to the pod. In its absence, some admission plugins may apply unnecessary security constraints to the pod, or in the worst case, not apply them at all. This feature adds a new field to the pod spec, called os, to identify the OS of the containers specified in the pod. With this new field, all the admission plugins which validate or mutate the pod can identify the pod's OS authoritatively and act accordingly. This one is stable. Thank you. Yeah, so that's a lot of enhancements to go through. Thank the both of you for your help with that. I will talk about the release team shadow program some here, and again, if y'all have questions, drop them in chat. I think we are going to have a few minutes, although not too terribly much time because we do only have an hour, but first we're going to talk about the release team shadow program. So with every Kubernetes release, there is a new Kubernetes release team made up of community members who handle the day-to-day logistics of the release itself, and it is quite a lot of work. So it's broken up into seven different roles. Each role has one lead and usually four shadows, but sometimes that number varies a little bit. And the point of the shadow program is to train new leads, cover for leads (because we can't be there every single minute of every single day), share knowledge about the release process, help contributors broaden their areas of knowledge and participation, and, over time, throughout each release cycle, gradually improve the state of the release team. Most release leads do make changes after they serve their time, and some of those changes are solicited from their shadows. It's very cool. It's very fun.
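A minimal sketch of the stable pod os field described above — the pod name is hypothetical, and the nodeSelector is a common companion rather than part of the feature itself:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: windows-pod          # hypothetical name
spec:
  os:
    name: windows            # authoritative pod OS ("linux" or "windows"); admission
                             # plugins key off this instead of guessing
  nodeSelector:
    kubernetes.io/os: windows   # still needed to schedule onto a Windows node
  containers:
  - name: app
    image: mcr.microsoft.com/windows/nanoserver:ltsc2022
```

With os.name set to windows, Linux-only admission checks (for example, Linux securityContext validation) can be skipped or adjusted authoritatively.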
So the application process usually goes out towards the end of one release, in this kind of gap where there are actually only about two weeks between releases when we're not doing anything other than putting together the shadow program. We've already selected the shadows for the upcoming release, version 1.26. Both Priyanka and I are release lead shadows for 1.26, and CC, you are a branch manager associate, right? So we're all on the next release team too. The release cycle generally lasts around four months. The one at the end of the year can be a little bit shorter because of the holidays; we can compress things a little bit. And the workloads do kind of ebb and flow; some teams are more busy than others at certain times. Like, I know on the comms team, we always had very little to do at the beginning of the release cycle, but at the end it was very, very busy, and for me as the lead it was maybe, you know, like 10, 15, 20 hours of work a week at the end of the release cycle. And enhancements, I assume, is considerably busier at the beginning of the release — is that true, Priyanka? Yeah. So it depends on the team, but it's like that a lot; it ebbs and flows based on which team you're on. But regardless, there are enough people on each team that no one person is particularly overloaded; the lead does take the bulk of the work. But if you are interested, we would love to see you apply for a shadow position in a future release. You go to the release team shadows GitHub repo. There will be a bunch of information on the different roles there — handbooks for all of the different sections that make up the release team — so you can get an idea of what the responsibilities are before you apply. But like we said earlier at the beginning of this webinar, the target release date for Kubernetes version 1.26 is December 6.
So you can expect sometime mid-December to see the application for the shadow program for 1.27 go out, so we would love to have you. And you can request to shadow more than one thing. You're only going to get picked for one, but you can try for several teams in the application process. And that's all we have for you out of the slides. Does anybody have any questions for us? We've got about 10 minutes, so we've got room for a couple. Big release. If not, we can just sit here and dance for 10 minutes, or we can... Pop them in the chat if you have your questions. Here we go: "I'm interested in cgroups v2 and rootless Kubernetes for Docker builds in Kubernetes." No problem, Richard, what do you want to know? "Do we need to do a major change in a Kubernetes install for cgroups?" It shouldn't change anything, it should just... Yeah, for cgroups, yeah, this is needed. "For rootless Kubernetes for Docker builds in Kubernetes, do we need to run the kubelet as non-root?" Priyanka, do you know? Like, can you run the kubelet as non-root? I was trying to find the link to the Kubernetes blog that will have more details about this feature; just give me a minute, I'll put in a link. Yeah, yeah, so it's also important to know for this release that we had more feature blogs — which are deep dives on the specifics of a feature — than any other Kubernetes release before, by far. We ended up with like 19 or something, because some came in after. So, how long ago did you look at the blog, Richard? Because there are so many blogs that we rolled them out, like, three days a week. I think there is a reference for that kind of question — is that regarding the user namespaces feature, which is still alpha, right? So if you run... yeah, that is a pretty good reference if you want to run in a rootless mode. But remember, this feature is in alpha, so you have to turn on the feature gates.
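For context on the alpha user namespaces feature mentioned in the Q&A, a minimal sketch of how it is enabled and used in 1.25 — the pod name and image are hypothetical; the feature gate must be enabled on the API server and kubelets, and only stateless pods are supported at this stage:

```yaml
# Assumes the 1.25 alpha feature gate is enabled on the control plane
# and kubelets, e.g.:
#   --feature-gates=UserNamespacesStatelessPodsSupport=true
apiVersion: v1
kind: Pod
metadata:
  name: userns-pod       # hypothetical name
spec:
  hostUsers: false       # run this pod in a user namespace instead of the host's
  containers:
  - name: app
    image: registry.k8s.io/pause:3.8
```

With hostUsers set to false, UIDs inside the container are mapped to unprivileged UIDs on the host, which is what makes the rootless-style isolation possible.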
Kubernetes will not guarantee it's bug-free or anything. Yeah, it's basically run at your own risk. Yeah, always run an alpha feature at your own peril. And yeah, I'm not sure, for cgroup v2, do you have any specific questions about that? Because cgroup v2 is just low-level Linux kernel capabilities. Yeah, I don't think that... yeah, resource management. So thank you. Any more questions, folks? Let's see, there are 34 attendees in here. How y'all doing? How y'all doing? Somebody says: a thing to note when using cgroups v2 is that JDK versions less than 15 do not properly detect available resources anymore. Noted, yes. Yeah, in order to use cgroup v2, there are specific requirements, such as OS distribution requirements, for sure, and the kernel version has to be a certain minimum and later. And also, I think, there are requirements for the container runtime — if you use any of them, there's a specific version requirement as well. Yeah. Big release, fun release. I think we did great. It went so smoothly. And we have the most feature blogs in this release; I guess it's more than the last three releases. I think it's going to take a while to beat us, because there were like three more that got added after release that I had to add to the spreadsheet retroactively. So now we're publishing release blogs, like, halfway through September. No, no, we're all the way through September; we're still publishing them in October. Wow. Seriously, there's one coming out in October. It's a lot. It is, it is a lot. I don't think people are going to beat us, because we have so many amazing features once you go through them. We did. We have so many amazing features, and I was lucky to have a team of comms shadows who really, really hustled with the SIGs to get people to write stuff about their features. So it was a very pleasant cycle. We had no major delays. Oh yeah, it's the smoothest release ever. Yeah, it was, it was tight.
Well, if there are no more questions, I guess we can wrap it up. We'll give everybody one more minute and see if any flow in, but in the meantime, thank y'all so much. I really enjoyed your talk today. Like I mentioned, the recording will be available on our online programs YouTube playlist as well as through your registration link. And you can also find it on the CNCF website, so get out there and start working with all the tools. And I think y'all can let them know where to reach you if anyone has questions and how to get a hold of y'all — you've probably got all your channels already out there. So it doesn't look like any other questions are popping up. With that, thank y'all so much, and everybody, thanks for joining us, and we'll see y'all again next week. Bye. Bye.