Welcome everyone. Thanks for joining us for a special edition of today's CNCF live webinar: the Kubernetes 1.26 release. Some exciting stuff coming your way. I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our code of conduct and then hand over to Leonard Pahlke, Frederico Muñoz, and Mark Rossetti from the Kubernetes 1.26 release team.

A few housekeeping items before we get started. During the webinar you are not able to talk as an attendee, but there is this lovely chat box that everybody is adding their hellos and locations to — so thank you. That is where you can leave your questions for the team. We'll get to as many as we can at the end, or, if it so pertains, we can interject in the middle. We're going to have a full webinar today, so be sure to leave your questions there for us. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io under online programs. They're also available via the registration link you used to get here today, and will be on our online programs YouTube playlist. With that, I will hand it over to our release team — Mark, Leo and Fred — to take it away.

Thank you, Libby. I'll take it from here. Well, let me start by saying that it's a huge honor to be here and be able to present the major updates of Kubernetes 1.26 — in our independent opinion, one of the best releases of Kubernetes so far. And let me start by presenting the team. We have our 1.26 release team lead, Leonard Pahlke from Liquid Reply, who led the entire team in terms of the 1.26 release efforts; Mark Rossetti from Microsoft, who was part of the 1.26 enhancements team and is currently the enhancements lead for Kubernetes 1.27; and I, Frederico Muñoz from SAS, was the communications lead for Kubernetes 1.26.

And we have an interesting, although not completely original, agenda to share with you today — we tend to follow the same major approach in each Kubernetes release. We will start by providing a status update on the 1.27 release cycle, which started very recently. We will then focus on some of the major highlights, removals and deprecations of Kubernetes 1.26, including the major themes. Then we will go through a per-SIG list with a quick description — quick given the time constraints we have — of the enhancements that were not only tracked but implemented during this release. At the end, we will give a quick description of the release team shadow program, and we'll hopefully finish with plenty of time for questions and answers.

Oh, sorry — can you hear me? Leo and I can hear you, but I wonder if other attendees cannot. If you can't hear, maybe leave and rejoin. Okay, so — excellent. Glad it's not on my end, because sometimes that happens. I'll proceed, then, with some updates on the Kubernetes 1.27 release cycle. As you are likely aware, Kubernetes has several releases a year, and just when we end one, another one starts and the cycle continues.
Currently, we are on the Kubernetes 1.27 release, which has the following timeline: it started some weeks ago, on the 9th of January; it will have the enhancements freeze on the 10th of February and code freeze on the 15th of March; and it is scheduled for release on the 11th of April. And it will be released just before KubeCon, because KubeCon + CloudNativeCon Europe starts very shortly after the release, running from the 17th to the 21st of April. So this KubeCon will be made in the footsteps of the 1.27 release. Currently there are no changes to the planned timeline. Changes do happen — in the 1.26 release cycle, for example, we did have some slight adjustments to accommodate the security updates in Go — but the process itself is made with this in mind, so there is the necessary leeway in place to avoid making any potential rescheduling an issue.

And with that, we will now go into the Kubernetes 1.26 highlights, major themes and changes, and then over to the deprecations. And there's nobody better to provide this than Leonard, our release team lead.

Hello everyone — you can go to the next slide. So, every release cycle we have a theme: the release has a name, a logo and a theme. It does not need to have a special meaning; it can also just be a joke or anything like that. For this cycle we had a little bit more thought in mind with "Electrifying", in terms of the environment. We want to raise awareness that Kubernetes clusters are deployed everywhere, powering huge systems, and therefore also requiring a lot of energy — and at the moment this is not reflected as much in the KEPs. So this might be a big or major theme in the future; for this cycle, unfortunately, not, but we want to raise awareness. And I hope you also like the logo — there will be some stickers at KubeCon if you catch me.

All right, for the enhancements: this cycle we tracked 49 enhancements — depending on what you count as an enhancement; we also count removals and deprecations. If you break it down, we have 11 stable ones, 10 beta ones, 16 alpha KEPs, nine removals and three deprecations. The stable, beta and alpha features we will talk about in the updates in a bit; the removals and deprecations, I believe, are on the next slide — no, the one after that. Okay.

First, the major themes. Because we will discuss every one of them in a little more detail on its own slide, I will just run through them so you have an overview — a teaser of what we will discuss in the rest of the session. First, we now use exclusively registry.k8s.io. This was broadly discussed: in the past we used the GCR-based registry, and now we only publish new artifacts to registry.k8s.io, so if you pull the artifacts manually, you now have to use the new endpoint. We graduated the signing of release artifacts to beta. We graduated privileged containers on Windows to stable. We have some changes in hand in the CSI migration, which is a larger effort — it now also includes Azure File and vSphere being handed over to external plugins. We delegated fsGroup handling to CSI drivers, which is now stable. There are some changes in the metrics framework, and we added component SLIs and some feature metrics.
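To make the registry change concrete — a minimal sketch of the kind of manifest update involved; the pod name is just illustrative:

```yaml
# Images previously pulled from the GCR-based registry (k8s.gcr.io)
# should now reference the new endpoint, registry.k8s.io.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pause
spec:
  containers:
  - name: pause
    # old: k8s.gcr.io/pause:3.9
    image: registry.k8s.io/pause:3.9
```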
We have a very big change with dynamic resource allocation, which introduces a new API — very exciting; it's alpha and will graduate to beta and so on later. We have a KEP focusing on admission control. We have one alpha feature for pod scheduling readiness. We made changes to the node inclusion policy for topology spread, which graduated to beta, and non-graceful node shutdown graduates to beta as well. As I said, we will discuss them separately, so I just want to go over them quickly so we have more time.

For the removals and deprecations, the first one is the removal of the kube-proxy userspace proxy mode. We had deprecated the userspace mode for over a year, and now it's no longer supported on either Linux or Windows, so we removed it. The next one is a deprecation of a kube-apiserver command line argument: the --master-service-namespace flag does not have any effect anymore and was basically already deprecated, just not officially. So this cycle we officially deprecated it, so that in the future we can remove it — which is the policy in Kubernetes. The next one is the removal of the v1beta1 flow control API: we removed flowcontrol.apiserver.k8s.io/v1beta1, and you now need to migrate to v1beta2 if you want to use it. This has also been deprecated before, and v1beta2 has been available since 1.23, so there's a good chance you're already using that one. For the next one, under SIG Auth, the removal of the in-tree credential management code. So this cycle, the legacy vendor—

Did we lose you? Suspenseful. It looks like it. Yeah, I think a couple of seconds. Unless you didn't notice and we will only catch him in 20 minutes — but I think he will rejoin. Otherwise I can pick it up, or Mark, do you want to? Oh, you can pick it up. Okay. So, we were on the in-tree credential management code. Essentially, this is not completely unlike the CSI migrations: there was internal logic contained within the Kubernetes code that dealt with specific cloud providers — they had the credential management — and this has been externalized into plug-ins, so it's being removed from the in-tree source. The removal of dynamic kubelet configuration is there for a slightly different reason: the removal follows the policy that when there's not enough uptake and an enhancement doesn't progress to stable, after a while it gets dropped, because it isn't used as much as initially thought. In terms of SIG Autoscaling, we have the removal of the autoscaling/v2beta2 HorizontalPodAutoscaler API, which, not unlike the flow control one, means there's a need to update to the current version of the published HorizontalPodAutoscaler API.

Hello! — You are back. — Sorry, I don't know what happened; my internet crashed or something. — No problem, I picked it up exactly where you were, and now you can start at the top. — Okay, at the top again. All right, sorry about that. We are like a single deployment with several pods. — Yeah. All right, thanks for catching that thread, Fred. Okay, so the next one: the removal of legacy command line arguments related to logging. There are a couple of flags which we removed — I don't have them listed in my notes, so I'll need to get them later. Mark or Fred, maybe you can jump in on this; otherwise I'll move to the next one. — I had the list, but I don't have it at hand right now. — Yeah. Right. Okay.
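As an aside on the autoscaling removal mentioned above — a minimal sketch of a HorizontalPodAutoscaler already on the replacement API version; all names below are placeholders:

```yaml
# Manifests still using autoscaling/v2beta2 need to move to autoscaling/v2,
# which has been available since 1.23.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app          # hypothetical workload
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```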
So if you have any questions about the logging flags, I can get the list in the meantime, behind the scenes. The next one: deprecation of non-inclusive kubectl flags. This is part of an initiative in Kubernetes to remove flags that are not inclusive — we don't want whitelist things, blacklist things, stuff like this in flags or in the community project. So we have now, for example, deprecated the --prune-whitelist flag and replaced it with --prune-allowlist, which has the same meaning; it's just a different name. The next one: deprecation of kubectl run command line arguments. There are several options and arguments to kubectl run which are marked as deprecated: --cascade, --filename, --force, --grace-period, --kustomize, --recursive, --timeout and --wait. These arguments were already ignored, so we don't expect any problems there.

For the next one, the CRI v1alpha2 API was removed — and there might be some problems in your clusters from this one. Some context: after we removed dockershim, we added the CRI v1 API spec and deprecated v1alpha2, and we have now removed the v1alpha2 spec. As I said, this can have implications for your CRI. For example, if you use containerd and you run version 1.5 or older, this will not work. If you run something else, you need to consult the documentation or the vendor. Likely this is simply handled by your platform or public cloud provider, which upgrades it for you; but if you run everything on your own, then you need to watch out for that.

For the next one, the in-tree GlusterFS plugin was removed; equivalent functionality is only available through out-of-tree drivers. There's not much more to say about that — we deprecated it last cycle. The in-tree OpenStack cloud provider is removed as well — this is the Cinder volume type. There's also not too much to say about that; if you have any questions, drop them in the chat. Otherwise we can move on.

Just a note on the logging flags: it's essentially the realization of an overhaul of the klog component, and just about every single flag was deprecated — only -v and a few others were kept. The rationale for this is in the KEP, but it's aligned with the restructuring of the logging infrastructure in Kubernetes. So the functionality is there; it's just that klog itself will not have the flags, and the information will be accessible in other ways. — Awesome, thank you.

And so now we will do the per-SIG updates, in which we will pick some of the ones we briefly discussed before, plus a lot of new ones, and give a one-page overview of each, with links to the enhancement and to the KEP, and in some cases also a link to the feature blog. We have 15 feature blogs in this release, which is quite a high number, and these feature blogs are a very good way to get more direct knowledge of those specific features. If not, you always have the KEP — the Kubernetes Enhancement Proposal that describes the enhancement itself. We will start with SIG API Machinery, and I'll pass the baton to Mark.

Hi, everybody. There are a lot of enhancements and we only have about an hour here, so a lot of these will be pretty quick — just giving that warning once.
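Circling back to the renamed kubectl flag for a moment — a sketch of it in use; the manifest, label and allowlist entry are illustrative, not from the webinar:

```yaml
# Hypothetical usage of the new inclusive flag name (the old
# --prune-whitelist spelling is deprecated in 1.26):
#   kubectl apply -f ./manifests --prune -l app=demo \
#     --prune-allowlist=core/v1/ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
  labels:
    app: demo
data:
  greeting: hello
```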
First up are the enhancements from SIG API Machinery, the special interest group responsible for pretty much everything related to the APIs and the API server. The first one is validating admission policies — this enhancement went to alpha status. It introduces the Common Expression Language (CEL) for doing basic validations as an admission controller. This is an alternative to setting up a mutating or validating admission webhook, which can be very burdensome to maintain and deploy; this is a much lighter-weight option for simple validations. Next is the aggregated API discovery feature, which is also going to alpha. This centralizes the discovery of all the supported API endpoints that the Kubernetes API server knows about: there are two new endpoints, one that gives more information about each endpoint and one that is the aggregation of them. Using this will help reduce load on your API server, because clients no longer need to spam the API server to see what all the available endpoints are. And last is the kube-apiserver identity enhancement, which is graduating to beta. This just gives a unique identity to each API server instance, so you can identify if there are problems or things like that.

Okay, next is SIG Apps. SIG Apps is responsible for basically defining how applications and workloads are defined and managed in clusters. First, job tracking without lingering pods is graduating to stable. Previously, before this enhancement, if you were scheduling batch jobs, the job controller needed to keep completed pods around in order to maintain state, which caused a lot of extra stress in your cluster. Now that the job controller has been re-architected, it no longer needs that, and one result is that the job controller can now scale up to 100,000 concurrent pods, which is much more parallel and scalable than previously. Next is allowing StatefulSets to control the replica numbering — this graduated to alpha in this release. With this, for your StatefulSets you can specify where you want the numbering for replicas to start, and this is useful if you need to restart your workload, or you want to migrate your workload across namespaces or across clusters, and all of that. Next is the pod healthy policy for PodDisruptionBudgets, which has graduated to alpha. This enables specifying in a PodDisruptionBudget how to treat pods that are not ready. This can help prevent data loss, by preventing not-ready pods from being evicted until somebody has a chance to either automatically or manually go and recover that data. It also resolves some deadlocks in the system: where you have a lot of pods that aren't ready, you can now request that they do get evicted. And the last one for SIG Apps is retriable and non-retriable pod failures for jobs, which is graduating to beta. This provides a mechanism that enables workloads to differentiate between retriable and non-retriable failures — it can help your cluster retry on things like transient infrastructure failures, and not retry if it's a workload failure.

And for SIG Auth, and then SIG CLI and SIG Instrumentation, I'll pass it back again to Leonard. — All right, so, on my side.
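To make the validating admission policies above a bit more concrete, a minimal sketch of what such a policy can look like on the 1.26 alpha API. The names and the namespace label are placeholders, and the ValidatingAdmissionPolicy feature gate has to be enabled for this to work:

```yaml
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: demo-replica-limit
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
  # CEL expression evaluated in-process by the API server -- no webhook needed
  - expression: "object.spec.replicas <= 5"
---
# A binding ties the policy to a subset of the cluster
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: demo-replica-limit-binding
spec:
  policyName: demo-replica-limit
  matchResources:
    namespaceSelector:
      matchLabels:
        environment: test   # hypothetical namespace label
```

Because the expression runs in-process in the API server, there is no separate webhook deployment to operate, which is what makes this the lighter-weight option described above.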
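And a sketch of the retriable versus non-retriable distinction for Jobs — the exit code, image and names below are purely illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: demo-job
spec:
  completions: 4
  parallelism: 2
  backoffLimit: 6
  podFailurePolicy:
    rules:
    - action: FailJob          # non-retriable: a real workload bug, stop retrying
      onExitCodes:
        containerName: main
        operator: In
        values: [42]
    - action: Ignore           # retriable: node drain or other disruption
      onPodConditions:
      - type: DisruptionTarget
  template:
    spec:
      restartPolicy: Never     # required when using podFailurePolicy
      containers:
      - name: main
        image: busybox:1.36    # placeholder image
        command: ["sh", "-c", "exit 0"]
```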
I don't know if I have the next slide — I still see the validating admission policy slide, so it's kind of stuck. But I can just pull up the slides on my side. — I think everyone else sees "Reduction of secret-based service account tokens". — Yeah, so this is some bug on my side. Okay, but it doesn't matter, because I have the slides as well, so I can just pull them up — one second. Okay. Right.

So for this one, we are in SIG Auth: reduction of secret-based service account tokens. This KEP graduates to the beta stage, and it introduces actions to reduce the surface area of secret-based service account tokens. The goals are: no auto-generation of secret-based service account tokens by default, and removal of unused auto-generated secret-based service account tokens. So these are some tweaks, I would say. For the next one — all right, I don't see when the slide changes. — Okay, I will tell you when I change it. — Okay.

The API to get self user attributes — this one graduates to alpha, as you can see on the right side. It adds a new API endpoint, which you can then use to run "who am I", basically, from kubectl — which is a nice addition, I think. So it's a new API endpoint that gets added, and also a new kubectl command. A pretty neat feature.

SIG CLI. — Right, that one. Or should I introduce the SIG? I mean, it's the SIG about the CLI, basically, so it should be self-explanatory, I guess. SIG CLI has one KEP this cycle, at stage beta: kubectl events. In the past — and still — we have been using kubectl get events; now there's a new top-level command, kubectl events. There are some internal reasons why this is needed, some limitations of the old command. As far as I know there are, at the moment, no new features, but this enables us to add new features to it which the community has requested.

Okay, I've changed to SIG Instrumentation. — Okay. SIG Instrumentation — I can just read it out — covers best practices for cluster observability through metrics, logging, events and traces across all Kubernetes components, development of relevant components such as klog and kube-state-metrics, and coordination of metrics requirements of the different SIGs and other components, including finding common APIs. Starting with the OpenAPI v3 one: this KEP is at stage alpha, and it basically updates kubectl explain to use the OpenAPI v3 spec and no longer the OpenAPI v2 spec. This just enriches the data which you get when you run this command. If there is no OpenAPI v3 spec available, it will fall back to the v2 spec, so no issues are expected on this side.

Right, for the next one: Kubernetes component health SLIs is a new KEP at stage alpha. It exposes new health-check endpoints, which should allow creating new SLIs, and then related services and agents and so on can create new SLOs based on that. And for the next one: extend metrics stability, which graduates to alpha. This is an addition to the metrics stability framework, which we introduced a couple of cycles ago — there's also a very nice blog post if you're interested in that. This cycle we add two new classes to it: internal and beta — which are, for the most part, internal.
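For reference, a sketch of what the new "who am I" command can return. This is a hedged example: the feature is alpha in 1.26 and needs the APISelfSubjectReview feature gate enabled, and the identity shown is a placeholder:

```yaml
# kubectl auth whoami -o yaml
apiVersion: authentication.k8s.io/v1alpha1
kind: SelfSubjectReview
status:
  userInfo:
    username: jane@example.com     # placeholder identity
    groups:
    - developers
    - system:authenticated
```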
I don't actually know how much this affects the end user, but I think the metrics stability framework was very, very well received, so adding to it is a nice addition. And I'm done — for now, at least.

Okay, now we're entering SIG Network, which covers networking in Kubernetes, and, as is often the case with networking, there are quite a few enhancements. I will actually start by saying something that I think is important. We've been talking about alpha, beta and stable — just a very quick overview of what this means. An enhancement starts as alpha, proceeds to beta, and then reaches stable. If it does not reach stable after a while, it can be deprecated, as we showed before. And the major difference is that alpha features, in general, are off by default but can be enabled by using a feature gate, while stable features are considered a maintained, supported, long-term part of Kubernetes.

With that, we have the enhancement of service internal traffic policy, which graduates to stable, and which essentially allows defining a specific policy on a service so that routing is limited to endpoints running on that same node — to limit the forwarding of the service's traffic to that specific target. We have — and these are all relatively connected enhancements — proxy terminating endpoints, which enables zero-downtime deployments for services with externalTrafficPolicy: Local. In some use cases, depending on many different configuration choices, it was possible that some downtime would happen. This enhancement reduces the potential traffic loss from kube-proxy during rolling updates, which occurred exactly because traffic was being sent to pods that were terminating and thus unable to serve the request. Tracking terminating endpoints graduates to stable as well. This one adds a way to track the terminating state of an endpoint through the EndpointSlice API — not by getting the information per pod, but specifically using the EndpointSlice API, which is obviously much more scalable and enables consumers of the API to make smarter decisions when it comes to handling terminating endpoints.

Minimizing iptables-restore input size: this is a new alpha feature, which is actually quite important for performance on very large clusters, because the iptables-restore command can take quite a long time to run due to the sheer number of rules that end up being created by the various Kubernetes networking objects and policies. This enhancement drastically improves the performance of the iptables-restore step.

Support for mixed protocols in Services of type LoadBalancer: this is an enhancement that graduates to stable, and it enables the creation of a LoadBalancer Service that has different port definitions with different protocols, allowing users to expose their applications through a single IP address but different layer-4 protocols with a cloud provider load balancer. In some cases this was already possible with TCP or UDP, but this generalizes it and removes the corner cases — which were not really well documented — that could result in it not working.
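A sketch of such a Service follows — names are placeholders, and whether the cloud provider's load balancer can actually serve both protocols behind one IP still depends on the provider:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-dns
spec:
  type: LoadBalancer
  selector:
    app: demo-dns           # placeholder selector
  ports:
  - name: dns-tcp
    protocol: TCP
    port: 53
    targetPort: 53
  - name: dns-udp           # second protocol behind the same LB IP
    protocol: UDP
    port: 53
    targetPort: 53
```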
This enhancement adds that support explicitly, makes it both possible and consistent, and defines the expected behavior.

Reserving service IP ranges for dynamic and static IP allocation graduates to stable. This is essentially about the following: cluster IPs can be assigned dynamically or statically, and what this enhancement does is split the range used for static and dynamic allocation, so that a specific situation can be avoided — someone specifies a static IP because it was free at the time, but by the time it gets used, dynamic allocation has already taken it, because it was free in the meantime. By splitting the range between these two uses, it's clear that part of the range will be manually consumed and the other part dynamically.

Expanded DNS configuration allows Kubernetes to have an expanded DNS configuration: more DNS search paths and a longer list of DNS search paths. This graduates to beta, and it essentially brings to Kubernetes the evolution that has happened in DNS support in newer libc versions and other underlying Linux components, which added more configuration options and removed some limitations that existed at a certain point in time but do not make sense to keep right now. And with that, we end the list of enhancements from SIG Network, and we have the enhancements from SIG Node — and if I'm not mistaken, this is picked up by Mark, right? — Yep, thank you, Fred.

So, SIG Node is responsible for all of the components that run on the nodes and that control the interactions between the pods and the hosts — most notably the kubelet. Okay. So, this enhancement, dynamic resource allocation, has graduated to alpha. It adds a new set of APIs that allows workloads to specify resources other than memory or CPU, and it also allows sharing of resources between multiple containers or pods. This is quite a big enhancement over the previous way of allocating resources, so for anybody who's interested, I'd recommend going to read the blog post and reading up on the KEP.

Next, the device manager is graduating to stable. This enhancement has actually been in beta since 1.10, and there was just finally time to graduate it. The device manager is a plug-in API — a way of advertising and allocating different external devices which you can use and assign into your containers, such as GPU devices and FPGA devices. Next is the CPU manager, which has also been in beta since 1.10 and is graduating to stable. The CPU manager is part of the kubelet, and it's responsible for assigning the CPUs to containers. Prior to this, there were potentially a lot of issues: if you specified more than one CPU in your resource limits or requests, your workload could get split across multiple CPUs and you would spend a lot of time just bouncing between them. And next is the kubelet credential provider, which has graduated to stable. This provides a plug-in model for authenticating to different container registries. Previously, every time the kubelet wanted to authenticate to a new container registry, code needed to be added to the kubelet to understand how to talk with it. The kubelet credential provider provides a plug-in model to eliminate that, which simplifies maintenance and makes it easier to support more registries. Next is improved multi-NUMA alignment in the topology manager; this enhancement has graduated to alpha.
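Stepping back to dynamic resource allocation for a moment — a rough sketch of the new alpha API (resource.k8s.io/v1alpha1). Everything named here is hypothetical: "example-gpu" stands in for a resource class that a DRA driver would provide, and since the API is alpha, the details may change:

```yaml
apiVersion: resource.k8s.io/v1alpha1
kind: ResourceClaim
metadata:
  name: demo-gpu-claim
spec:
  resourceClassName: example-gpu   # provided by a hypothetical DRA driver
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: main
    image: busybox:1.36            # placeholder image
    resources:
      claims:                      # the container references the claim by name
      - name: gpu
  resourceClaims:                  # new pod-level field
  - name: gpu
    source:
      resourceClaimName: demo-gpu-claim
```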
Previously, with the topology manager you could assign pods to specific NUMA nodes, or ensure that all of your containers land on a specific NUMA node, but it didn't have any awareness of the distance between the NUMA nodes, which is becoming more of an issue with multi-socket processors. So this enhancement allows you to basically specify a minimum distance between your NUMA nodes, to get these workloads to land in proximity to each other.

Next is the kubelet evented PLEG — the Pod Lifecycle Event Generator — for better performance. This enhancement is graduating to alpha, and it outlines changes to the kubelet and the container runtime interface to move to an evented model instead of a continuous polling model. The goal of this is to reduce the steady-state CPU usage of the kubelet and the container runtime. And last is the cAdvisor-less, CRI-full container and pod stats enhancement, which has graduated to alpha. This allows the kubelet to get all of the container and pod stats over the CRI — the container runtime interface — from the container runtime, instead of also having the kubelet query cAdvisor to fill in missing stats. So this is just removing some duplicated work and putting more responsibility onto the container runtime, which is where it belongs. Thank you.

All right, back to my slides. For SIG Release, which handles all the releases — the release team, for example, is part of SIG Release — we have one KEP: signing release artifacts, which graduates to beta. With this KEP we now sign all the release artifacts, so users can verify the integrity of the downloaded resources. This includes all the artifacts listed here — the tarballs, the binary artifacts, the software bills of materials (the SBOMs) — and we use cosign for it. If you're interested in this whole SBOM theme and topic, the Kubernetes release team is quite a forerunner — a pioneer, I would say, a little bit — so there are also a lot of good resources and blog posts about these topics.

All right, on to the next one: SIG Scheduling, which is responsible for the components that make pod placement decisions. We have two KEPs. The first one is pod scheduling readiness. Currently, pods are considered ready for scheduling as soon as they are created. In general this is fine, but in some scenarios pods are not actually ready to be scheduled as soon as they are created, and until now there has not been any option to signal that. With this KEP, we introduce a new field in the Pod API spec which allows you to control this behavior: you can set scheduling gates, which mark a pod as not yet schedulable, and there is a corresponding pod condition you can observe. For the next one: take taints and tolerations into consideration when calculating pod topology spread skew, which graduates to beta. This defines node inclusion policies in topology spread constraints, which you can set to control how nodes are taken into account when deciding where pods are scheduled.

So, I will pick it back up to describe several enhancements from SIG Storage, which is responsible for ensuring that different types of file and block storage are available wherever a container is scheduled.
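For illustration, a minimal sketch of the scheduling gates just described — the gate name and image are placeholders, and the PodSchedulingReadiness feature gate must be enabled in 1.26:

```yaml
# The pod stays unscheduled (condition SchedulingGated) until all gates
# are removed, e.g. by an external quota or provisioning controller.
apiVersion: v1
kind: Pod
metadata:
  name: demo-gated-pod
spec:
  schedulingGates:
  - name: example.com/quota-check   # hypothetical gate name
  containers:
  - name: main
    image: busybox:1.36             # placeholder image
```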
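And a sketch of the node inclusion policies in a topology spread constraint — again, names and labels are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-spread-pod
  labels:
    app: demo
spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: DoNotSchedule
    nodeAffinityPolicy: Honor   # respect nodeAffinity/nodeSelector when computing skew
    nodeTaintsPolicy: Honor     # new: skip nodes whose taints the pod doesn't tolerate
    labelSelector:
      matchLabels:
        app: demo
  containers:
  - name: main
    image: busybox:1.36         # placeholder image
```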
So it tackles everything storage-related in Kubernetes. Trying to speed things up a bit, I'll go through these relatively briefly. Non-graceful node shutdown: this is a beta feature that allows stateful workloads to fail over to a different node after the original node is shut down or is in a non-recoverable state, such as a hardware failure or a broken OS. Allowing Kubernetes to supply the pod's fsGroup to the CSI driver on mount graduates to stable. This essentially gives the CSI driver the option to apply the fsGroup setting during volume mount — in contrast with before, when this was defined in one place and was not settable by the CSI driver.

Provisioning volumes from cross-namespace snapshots: this is an alpha feature that allows specifying the data source for a PersistentVolumeClaim even when the source data belongs to a different namespace. With this new feature enabled, you specify the dataSourceRef field, and once Kubernetes checks that access is okay, the new persistent volume can populate its data from the storage source specified in that separate namespace. Retroactive default StorageClass assignment graduates to beta. With this enhancement, there's no need to create a default storage class first and only after that the PVC to assign to it; any PVC without a storage class can be retroactively updated to the default storage class, even if the PVC was created before that storage class was defined.

The vSphere in-tree to CSI driver migration is part of the overall movement of removing things from the in-tree Kubernetes source in favor of external drivers. This one graduates to stable and migrates the internals of the vSphere plugin to the vSphere CSI driver while maintaining the original API. And exactly the same is done for Azure File, which also graduates to stable, with the same motivation and the overall same approach as the one before. And with this, SIG Windows, which Mark will cover.

So, SIG Windows is kind of a horizontal SIG, focused on supporting Windows functionality across all of the different components like networking, storage and node. This enhancement, support for Windows privileged containers, has graduated to stable. Privileged containers on Windows allow the containers to access host resources and are very useful for many operational-type workloads. Now that they have access to all of the host resources, this enables running things like your CNI solutions, node exporter, node problem detector and all that, and managing them as DaemonSets. And next is host network support for Windows pods, which has graduated to alpha. This adds support for the kubelet to request that pods getting scheduled on Windows nodes get added to the host's network namespace. This is partly a parity feature with Linux, but it also helps with port exhaustion issues on large clusters. And this ends our SIG updates.

I think we are actually on time — just some final words around the release team shadow program and KubeCon. The release team shadow program is a Kubernetes release team apprenticeship-style program: in each release, it recruits a number of apprentices who get involved in a specific release sub-team. There are several different release sub-teams — enhancements, CI signal, comms, docs, release notes, et cetera — and these teams take on shadows so that they get hands-on experience with producing a release.
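Before moving on — a sketch of what the Windows privileged ("HostProcess") pods described above look like; the image and command are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-hostprocess
spec:
  nodeSelector:
    kubernetes.io/os: windows
  hostNetwork: true                    # required for HostProcess pods
  securityContext:
    windowsOptions:
      hostProcess: true                # run the containers as host processes
      runAsUserName: "NT AUTHORITY\\SYSTEM"
  containers:
  - name: main
    # placeholder; a common base image for HostProcess containers
    image: mcr.microsoft.com/oss/kubernetes/windows-host-process-containers-base-image:v1.0.0
    command: ["powershell.exe", "-Command", "Get-Service"]   # illustrative command
```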
And they participate in the release cycle, and hopefully through this they get involved not only in other Kubernetes community efforts but can also potentially lead the teams in the future. And this is the program. There are several different teams, as I mentioned: release lead, enhancements, CI signal, bug triage, docs, release notes and communications. Each team has one lead, who selects between three and five shadows, and each release takes four months. What I can say right now is that 1.27 is already ongoing, so the shadow program is not accepting people for the 1.27 release right now. But for anyone listening: please pay extra attention after the 1.27 release is published, because in the following weeks an update is sent to the Kubernetes mailing lists with a form through which anyone can volunteer, and hopefully be integrated into a team and experience being part of a Kubernetes release team.

A final word about Kubernetes 1.26 at KubeCon Europe. As I mentioned, KubeCon Europe will happen in Amsterdam — it starts on the 17th of April with the contributor summit, and then on the 18th, KubeCon Europe itself. Several of the release team members will be there, so we will be more than happy to share some additional thoughts and ideas with anyone attending. We share here some transparency reports for both KubeCon Europe and North America, but suffice it to say that it is one of the biggest Kubernetes and cloud native related gatherings. So, either physically or virtually, we would love to see any of you there — and, as I mentioned, share some thoughts, discuss Kubernetes present and future, or just any other topic. And with that, we're done. We still have five minutes and we are more than open for any comments and questions. Thanks everyone.

All right, let's see if we've got any questions in the chat — about five minutes left, so ask those burning questions now. Do y'all have somewhere people can reach you if they think of something after we end? — I think the Kubernetes Slack would perhaps be a good place. So, in the community Slack we have a channel for each SIG — SIG Release and all the other SIGs — so if you want to reach the SIG in general, that would be the best place to go. If you have something very specific, maybe about internal processes — I don't think there will be many questions about this, but there are also other channels for those. For example, if you have questions about the enhancements team, there's a release enhancements channel as well. And you can also find us, or anybody else from the team, and send a direct message if you don't want to ask publicly — that's totally fine too. — I just shared the Kubernetes Slack address in the chat. And I think all of our names are easily findable on Google, so feel free to reach us individually on Slack if needed. — Perfect. And yes, Fred, if somebody sends me the slide deck you used, I will upload that to the website as well, so you'll be able to get both the recording and the slides. So yes, Victor, you can. All right, I don't see any more questions coming through, so we will wrap this up. Thank you so much, release team — y'all are wonderful — and thank you everyone for joining us today. We will have this online by this afternoon, so if anyone missed it, it will be ready to go. And everyone have a great day or evening, depending on where you're calling from. Thank you. — Thank you all.
Yeah, thank you everybody. Bye-bye. Cheers.