Hello, everyone. Thank you for joining us. We're going to go ahead and get started. Welcome to today's CNCF live webinar, the Kubernetes 1.22 release. I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our code of conduct and then hand it over to Savitha Raghunathan, James Laverack, and Jesse Butler of the Kubernetes 1.22 release team. A few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee. There's a Q&A box on the right-hand side of your screen — please feel free to drop your questions there, and we'll get to as many as we can at the end. Also join our public Slack channel, which I posted in the chat, to keep the questions going. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io under online programs. They are also available via your registration link, and the recording will be available on our online programs YouTube playlist on the CNCF channel. With that, I will hand it over to Savitha, James, and Jesse. Hey, thank you so much, Libby, and thanks for having us. We're really happy to be here to talk about the "Reaching New Peaks" release, Kubernetes 1.22. My name is Jesse Butler. I served as the comms role lead for 1.22. I'm joined by James, who was the enhancements lead, and Savitha, who was our release team lead. This was my first experience on the release team. It was pretty fantastic, I've got to admit, and I loved it so much I jumped into 1.23 working on the docs team. So let's take a look at the agenda. This webinar follows the same structure as previous ones.
So for those of you joining us who have watched previous release webinars, this will be familiar. We're going to start with a sneak peek at what's coming in 1.23. Then we're going to talk about some of the 1.22 highlights — the theme and some of the bigger feature releases. The bulk of the session is going to be going through the feature and enhancement updates for each of the SIGs, and we will leave some time for Q&A. But if you have any questions that come up during the session, we'll be in the chat and we'll be happy to talk with you. So with that, I'd like to pass it to James to take over and start talking about the 1.23 release updates. Thank you, Jesse. Much like yourself, I'm actually on the 1.23 release team as well — I enjoyed the process so much, I went back and did it again, so it's going to be great fun going through this process once more. The 1.23 release is currently in progress. We are about halfway through at this point. We started back at the end of August, and we've already passed the first big milestone, which is enhancements freeze, where we confirm which enhancements are actually going to make it into the release, subject to further deadlines. Code freeze is approaching pretty soon — it's not far off — so we're going to see activity picking up then, and we are presently aiming for a release on Tuesday, December 7. One important thing to note with 1.23, and with 1.22 as well, actually, is that the Kubernetes project has adopted a slightly longer release cycle. The release cadence has changed from four releases per year to three releases per year. As such, Kubernetes 1.23 is scheduled to be the last release of 2021, with 1.24 scheduled to begin in January next year. And with that, I think that's all I had to say about the upcoming release. Of course, look out for this webinar again for 1.23 sometime in January or February next year.
And with that, I'll hand over to Savitha to talk about 1.22. Thank you, James, and thank you, Jesse. Next slide, please. Thank you. A little bit about the theme and the logo for the release. The theme of the 1.22 release is "Reaching New Peaks," and it relates to the entire release team and the contributors of the Kubernetes community — in whatever form you might have contributed, by reporting an issue, fixing something, creating features, providing reviews, marketing, anything. This logo is dedicated to you. It's a remembrance of, and a motivation through, the tough times of the pandemic — the constant burnout, everyone moving and taking up new opportunities, and lots and lots of struggle over the past two years. But still the community came together, and we did deliver something really awesome. There was a lot of chaos, but this is to date the biggest release — we had 56 enhancements ship in this release. Kudos to everyone who worked on it. To talk about the logo a little: the logo was designed by my good friend Boris Zotkin. I just told him the vision — Mount Rainier against the backdrop of the Kubernetes flag in the Milky Way — and he brought everything to life. It looks really beautiful. Personally, whenever I visit Seattle, Mount Rainier gives me hope and motivation, and I hope that whenever people look at this logo they get a little motivation, hope, joy, and a sense of adventure — and that they remember the achievements, all those tiny little things, and keep moving on. That's all about the logo and the team. Can we move on to the next one, please? All right, like I mentioned, this release is by far the biggest one. We had 51 enhancements ship in 1.21, and that was the biggest then — and now we have 56, including three deprecations.
We have 13 stable enhancements, and 24 of the enhancements graduated to beta. We also have a lot of new features that got introduced in the 1.22 release, and we have three deprecations. The alpha features are brand new, and in order to use them you need to enable their feature gates — and there's a request to the community that I want to put forward here. If you are using them, the contributors in the community are always looking for feedback, so that we can incorporate it and improve further versions — include all those things in the beta, make the feature stable, and then graduate it. So that's a little request from my side. And from the release team's side: so far we have shipped ten-plus stable enhancements in each of the past two releases, and I'm hoping that will continue in future releases as well. We do see a healthy backlog of enhancements and a good flow — we see things graduating, things going up the ladder, and there is also a cycle of deprecations — which, to me, feels like Kubernetes as a project is becoming more and more mature. Can we move on to the next one, please? Thank you. We do have a lot of features, and we have bubbled up a few of them, grouped together as the major themes. It's not limited to these — we have a whole lot more which we'll talk about in the later slides — but to give a sneak peek of what we will be talking about: the first one is server-side apply. This feature was beta in 1.21, and recently it went stable in 1.22. Before this feature, apply did part of its work on the client side and part on the server side, and that caused a lot of confusion — it was not the most declarative approach you would want, and the state of the configured resources was not maintained properly.
With this feature, it's all server side: it uses a new object merge algorithm, it keeps track of field ownership, and it runs in the Kubernetes API server. This is the most declarative way to use apply, and it also lets non-Go clients and languages — non-kubectl clients — use apply. So that feels like a huge feature boost to me. Moving on to the next one. This is about quality of service, especially for memory resources. Anyone who has been an administrator, or who has worked on benchmarking apps and trying to run them on Kubernetes, knows the pain of a workload suddenly getting killed — the infamous OOMKilled. It's always one way or the other; that's been my experience, and I worked as an administrator for about five years. (OOM is nothing but out-of-memory — I've used "OOM" so much I almost forget what it stands for.) Kubernetes initially used the cgroups v1 implementation, and that didn't provide enough ways to manage memory resources properly: it used the memory limit in bytes and OOM scores, and killed pods off whenever an out-of-memory issue occurred. With cgroups v2, two things have been made better: memory requests can actually be guaranteed — which was an issue before — and there is a way to provision and throttle memory allocation, among many other new features that went into the 1.22 release. Moving on to the next one. It's about security. This has been a long-requested feature, one might say: running the kubeadm control plane as a non-root user.
It improves the overall security posture, and this feature is alpha right now. If you want to use it, you have to remember to turn the kubeadm-specific rootless control plane feature gate on. And if you have any feedback, do reach out — I think this comes from SIG Cluster Lifecycle, and we'll get to that later — reach out to the kubeadm owners or just drop a message in the Kubernetes Slack, and people will be able to point you in the right direction. Next slide, please. Thank you. Continuing with three more of the major themes. The first one is node swap support. This was also a problem I have often seen: there's a Java application that got containerized, and then it has to run on Kubernetes, and it takes a lot of startup time and startup memory resources. Sometimes containers don't even start because the platform wasn't configured with enough resources, or you have to over-provision just because the JVM settings — the heap memory — take a lot during startup. With this feature, if configured right by the administrators, Kubernetes can take advantage of swap support on the underlying Linux machines. That is a really big deal for administrators, or for whoever wants to pack their clusters well and make sure there is no resource wastage. This is alpha, too — it's a new feature — so if you want to turn it on, you have to remember to enable the feature gate. Moving on to the next one: I want to give a special shout-out to the SIG Windows folks. They have worked really, really, really hard to make sure that all the good features of Kubernetes are also available on Windows. They have also released a tool called sig-windows-dev-tools, and that repo is in kubernetes-sigs.
It supports multiple CNIs and can run on multiple platforms — by platforms they mean Hyper-V, VirtualBox, VMware, or any Vagrant-compatible provider — and it provides a sandbox for running the cutting-edge Windows features from scratch. You can do that by building the Windows kubelet; I think there are more instructions in the repo, and I will post a link later in the chat so folks can take a look. So a special shout-out to the SIG Windows folks, and thanks for making Kubernetes available on the Windows side as well. Moving on to the next one. It's again about security. This is also an alpha feature, and it's basically about making the cluster, when you deploy it, more secure than how it is deployed by default right now. In order to enable this feature you have to turn on the SeccompDefault flag in the kubelet configuration, and once that's enabled the kubelet applies a default runtime seccomp profile. It also assists in preventing some zero-days — I don't want to go out on a limb and say all zero-days, so I'm going to be safe and say some zero-days. So do give this feature a try, and if you like it, let us know. Feedback is welcome. Moving on to the SIG updates. So what is a SIG? It's a special interest group within the Kubernetes ecosystem of projects. There are multiple core components — think of them as verticals inside Kubernetes, like storage, network, node, auth, documentation. Each of them is a unit of its own, and in a special interest group people come together to work on improving that little unit, and to make sure it works well with the other units too. There are also different groups called working groups, which can span across the SIGs — when work needs to involve multiple SIGs, it becomes a working group, and so on.
So that's a little bit of background about SIGs, because so many people hear "SIG" and don't know what it is — it's nothing but a special interest group focusing on one core area of Kubernetes. That might not be perfectly accurate, but it's the easiest way I could come up with for folks to relate to it. James, can you move on to the next slide, please? Thank you. The first one we'll see today is SIG API Machinery. Next slide, please. And the first feature is server-side apply — we have discussed it a little bit already. The goal was to provide a comprehensive declarative approach to apply, and this feature delivers it. The highlight, like I mentioned before, is that it can be used from non-Go languages and by non-kubectl clients — you can use anything, not just native kubectl clients. Next one, please. Thank you. The next one is warning headers when using deprecated APIs. If you have ever been a cluster admin, and your company has a product that is not Kubernetes — Kubernetes is just an additional thing supporting the major product — you didn't have time to update Kubernetes every release cycle. As much as you would have tried, there were other objectives. I have been a victim of that: I never got to update Kubernetes versions on time. I was always two or three versions behind, or jumping between versions — I'd be on 1.14 and jump to 1.18. And there was no way to know which APIs got deprecated, what the way to audit was, how to communicate it to the users, or whether the platform's users were using any of the deprecated APIs without anyone catching it. So this feature is for that.
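Mechanically, these deprecation warnings reach clients as standard HTTP `Warning` response headers with warn-code 299. As a rough, hypothetical sketch — this is illustrative parsing code, not anything from the Kubernetes client libraries — the header has this general shape, and pulling the message out is a one-liner:

```python
import re

# A Kubernetes API deprecation warning arrives as a standard HTTP
# Warning header, warn-code 299, with the message in quotes, e.g.:
raw = ('Warning: 299 - "batch/v1beta1 CronJob is deprecated in v1.21+, '
       'unavailable in v1.25+; use batch/v1 CronJob"')

def parse_warning(header: str) -> str:
    """Extract the quoted warn-text from a 299 Warning header (illustrative only)."""
    match = re.match(r'Warning:\s*299\s+\S+\s+"(.*)"', header)
    if not match:
        raise ValueError("not a 299 warning header")
    return match.group(1)

print(parse_warning(raw))
```

Recent kubectl versions surface these messages automatically on stderr, so most users see the warning without doing anything like this by hand.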
When enabled, it lets users and cluster administrators recognize and remedy the problematic, deprecated APIs they are using. It can also provide auditing information that can be processed later — so you can reach out to a group and say, hey, these are all the deprecated things, and you are using them. It's really, really useful for that case. Next slide, please. The next one is an immutable label on all namespaces. Basically, before this feature was available, a namespace didn't have any identifying label per se, and you needed write access to add a label to a namespace. Without such a label it could be very hard to apply network policies — you couldn't just group policies together by saying "exclude these namespaces" or "include these namespaces," because there was no identifying label before. With this feature, every namespace by default gets an immutable label carrying its name, and that can be used everywhere — network policies, RBAC, anywhere and everywhere. It's super easy, and users don't have to have write access to create labels. Moving on to the next one: priority and fairness for API server requests. The API server has a mechanism of its own to protect itself against CPU and memory overload. It has a knob called max-in-flight limits, applied to mutating and read-only requests, and there is no distinction between requests other than mutating versus read-only — consequently, one type of request can overload the system while the other is just waiting to be served.
This feature, which is beta in 1.22, aims at providing protection against that overload and also at ensuring there is fairness among tenants — that no one tenant gets prioritized over the others — and, as a bonus, it also works on optimizing throughput. So I think it's a great feature overall, and a much-needed one. Moving on to the next one: SIG Apps. James is going to take over and talk about SIG Apps and its enhancements. So SIG Apps primarily focuses on changes — on features, more generally — that help people deploy applications to Kubernetes itself; that's their primary focus, so a lot of the things we're going to see are related to that. Just a note: the link at the bottom of the slide will take you to a GitHub issue which has a lot more detailed information, so if anyone following along at home wants more, you can follow that link with the issue number. The first one we're talking about is CronJobs. CronJobs have, of course, been in Kubernetes for quite a while, so this isn't adding any major functionality — it's upgrading an API. We're taking it from v1beta1 to v1, promoting it to general availability. It's really encouraging to see features that have been around for a while entering this period of stability — quite exciting for someone using Kubernetes quite a lot. Pod disruption budgets and eviction are pretty similar: behavior that's been around and stable for a while, now moving to policy/v1. A pod disruption budget, if you're less familiar, allows you to specify a policy saying that too many replicas of a pod shouldn't be taken down under certain circumstances, such as eviction — so you can evict a pod instead of deleting it. So again, it's really nice to see this stabilize.
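For reference, a minimal PodDisruptionBudget at the promoted API version might look roughly like this — the names and numbers are illustrative, not from the talk:

```yaml
apiVersion: policy/v1        # promoted from policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: web-pdb              # illustrative name
spec:
  minAvailable: 2            # refuse voluntary evictions below 2 ready pods
  selector:
    matchLabels:
      app: web               # pods this budget protects
```

The only change needed for most existing manifests is the `apiVersion` bump; the spec fields carry over.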
Moving on: DaemonSet maxSurge — this one's coming in at beta. DaemonSets are a way of deploying applications that gives you one pod per node, as opposed to a Deployment, ReplicaSet, or StatefulSet. This actually brings them to parity with those other deployment mechanisms, in that they get this maxSurge feature. The idea is that you can say that, under certain circumstances, you want Kubernetes to temporarily have more than one pod per node. This is primarily useful during upgrades, to avoid downtime. So this is coming in at beta, which is really interesting to see. Next we have logarithmic scale-down, coming in at beta again. This one affects ReplicaSets, which is what underlies Deployments in most cases. The idea is that when a ReplicaSet scales down, it needs to decide which pods to remove. In doing so, it has always picked the eldest, but there is a change coming to pick with some randomness, in order to improve the way this is done. Logarithmic scale-down changes it so that the controller takes into account how long the pods have been running relative to each other, on a logarithmic scale, when choosing which to remove. That's a change that's coming in, and it can be used via a feature gate. Next we have Indexed Job semantics. This is an enhancement to the Jobs API. When you create a Job, it creates a number of pods to fulfill that Job. This allows a user to get an index for each pod that is created, and it's mostly used to solve embarrassingly parallel problems. If you have a thousand things to process, and each thing can be processed completely independently, then you could launch 100 pods and process 10 things each — or whatever you need to do. This indexing mechanism allows a pod to know which number it is.
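The completion-index idea can be sketched in a few lines of plain Python. In an Indexed Job, each pod receives its index through the `JOB_COMPLETION_INDEX` environment variable; the sketch below — illustrative worker code, not anything from the Kubernetes project — shows how a pod might map that index onto its slice of 1000 independent work items:

```python
import os

WORK_ITEMS = list(range(1000))   # 1000 independent things to process
COMPLETIONS = 100                # spec.completions on the Job

def my_slice(index, items, completions):
    """Return the contiguous chunk of work owned by pod number `index`."""
    per_pod = len(items) // completions
    return items[index * per_pod:(index + 1) * per_pod]

# In a real Indexed Job pod this env var is injected by Kubernetes;
# the "0" default here just makes the sketch runnable anywhere.
index = int(os.environ.get("JOB_COMPLETION_INDEX", "0"))
chunk = my_slice(index, WORK_ITEMS, COMPLETIONS)
print(f"pod {index} processes items {chunk[0]}..{chunk[-1]}")
```

Because every pod derives its work purely from its own index, no coordination service is needed between the workers.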
A pod knows it's pod 50, for example, and therefore it should grab its items from the middle. Another feature we have at beta, staying on the topic of Jobs: the suspend field. This has the delightful idea that, while a Job is running, you can suspend it using the API — any pods that are currently running will be stopped — and then at some later date you can come back and resume it. This is intended to give good control to things built on the Jobs API, to manage how work is executed. Again, coming in at beta. Pod deletion cost: this is another change to make scale-down for ReplicaSets more flexible. In particular, this allows an annotation to be used to specify, in some generic terms, the cost of deleting a pod. As I said before, scale-down would typically delete the oldest pod, but under some circumstances that might not be the cheapest — or the most correct — pod to remove. So this allows application owners to give some hints to the Kubernetes control plane about which pods they would prefer to keep if it has to scale down. This, again, is coming in at beta. The next one is coming in at alpha, so again it's behind an alpha feature gate. It solves a limitation with how Jobs run in current versions of Kubernetes, where, in order for a Job to be marked as complete, all of the pods which comprise the Job must finish and must stick around. At worst this is merely messy in not-particularly-active clusters. But if you had a particularly long-running Job that required lots of pod executions, and a very busy cluster with a large amount of pod churn, you might hit the circumstance where the earliest pods in the Job are removed and cleaned up from the cluster before the last ones execute — in which case the Job will not be able to mark itself as completed.
This enhancement introduces a new way of tracking completion without relying on those pods lying around in order to know that something has completed. So this is really an optimization for high-scale use cases. Next we have minReadySeconds in StatefulSets. This is another case of the different ways of deploying applications being brought into parity. This is already something you can do on Deployments, DaemonSets, and ReplicaSets: saying you want a pod — or the containers in the pod — to be marked as ready for a period of time before you consider the whole thing ready. This is coming in at alpha, and it's another step in unifying how you can think about applications within Kubernetes. Then we have SIG Auth, and I'd like to pass back over to Savitha. Thanks, James. Next slide, please. Thank you. The first one is exec credential providers for external clients. This feature had been in beta for a really, really long time — since 1.11 — and it finally graduated in 1.22. Before this feature existed, all certificate rotation needed a client restart, and there was no support for standard key-management solutions. With this feature, in addition to supporting standard key-management solutions, it also supports token-based protocols. More than that, it has helped make Kubernetes vendor-neutral: recently the in-tree Azure and GCP auth plugins have been deprecated. It also provides a template — an interface — for external authentication providers, where they can create their own package separately, and that can be easily integrated with Kubernetes. So this feature has decoupled a lot of things, and it's a win for keeping the Kubernetes interface vendor-neutral. Moving on to the next one: bound service account token volumes.
Kubernetes provisions tokens for workloads, and this functionality is on by default and widely used. The previous JWT system had a lot of security issues and also scalability issues: it required one Kubernetes Secret per service account, and that was not a scalable option. Security issues, too — someone could get hold of the JWT and just impersonate someone else; the tokens were not time-bound; and so many other things. With this feature on, the API can create tokens that are audience-bound and time-bound, and it also introduces a new mechanism to distribute tokens, while providing backward compatibility. And it's stable now. Moving on to the next one: certificate signing request duration. Basically, this feature provides a new optional field where you can specify a specific duration for the certificate to be active. Before this, I think it was something like a year, and you had to rotate it every year or so. Now, with this feature in beta, it provides time-bound certificates. Moving on to the next one: Pod Security admission. This is the most awaited feature after the PSP deprecation last year. This is alpha. The main motivation of this feature is to avoid the pitfalls created by PSP originally, and it does that by supporting multiple modes: one is called enforce mode, one is audit mode, and the other is warn mode. You can have multiple modes on the same namespace, and this is all enforced through namespace labels. The highlight of the feature is that you can also pass a dry-run flag before applying the policy, so you will know how the changes would affect the existing pods without hurting them in any way. This is an alpha feature, so it's behind a feature gate, and if you want to use it, you need to turn the feature gate on.
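As an illustrative sketch of those namespace labels: Pod Security Admission modes follow a `pod-security.kubernetes.io/<mode>` label pattern, so mixing modes on one namespace looks roughly like this (the namespace name is made up; verify the exact label syntax against the docs for your version):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                # illustrative name
  labels:
    # hard-reject pods that violate the baseline profile
    pod-security.kubernetes.io/enforce: baseline
    # only warn users, and write audit events, for violations
    # of the stricter restricted profile
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```

Running enforce at a looser level than warn/audit, as here, is a common way to see what a stricter policy would break before actually enforcing it.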
Moving on to SIG CLI, with James. So SIG CLI — CLI stands for command-line interface — deals with all of the command-line tools which power Kubernetes and which help you use Kubernetes, principally kubectl but other tools as well. The one I want to talk about here is kubectl command headers. This is an enhancement to kubectl where, when it is used, it will send headers to the API server to inform it what the original command was. This is designed to help cluster administrators understand how their clusters are being used, because this information is collatable and exposable from Kubernetes metrics, logs, and other things like that. So it can start to help with debugging, and it can help with understanding usage patterns. That's a really interesting one to see come in, and again, that's coming in at beta. Handing back over to Savitha for SIG Cloud Provider. Next slide, please. So the cloud provider plugins have always been in-tree, and we are actually working on moving them out of tree. In-tree means it's part of the core Kubernetes code base, in k/k; when it moves out of tree, it can live in the Kubernetes ecosystem, or it can live as an external package. SIG Cloud Provider has been working on this, and this feature actually helps in migrating the workloads from the older in-tree providers to the new out-of-tree equivalents without having downtime. If your system needs to be HA — if your cluster cannot afford any downtime at all — this feature can be used. They do recommend that the in-tree cloud providers be disabled, and that the respective out-of-tree cloud controller manager be deployed, when you bring this on. So this is in beta, and they are working as much as possible to make it compatible and to make sure the move is smooth for users. Moving on to cluster lifecycle, with James. Thank you.
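Going back to the kubectl command headers mentioned a moment ago, the idea is simply extra HTTP headers on each API request. A tiny hypothetical sketch of what such headers might look like — the header names here are from memory and should be treated as illustrative, not authoritative; check the KEP for the exact names your kubectl version sends:

```python
import uuid

def command_headers(subcommand, session=None):
    """Build the extra request headers kubectl-style tooling could attach.

    Header names are illustrative assumptions, not a guaranteed API.
    """
    return {
        "Kubectl-Command": f"kubectl {subcommand}",       # e.g. "kubectl apply"
        "Kubectl-Session": session or str(uuid.uuid4()),  # one ID per invocation
    }

print(command_headers("apply")["Kubectl-Command"])
```

The subcommand name is sent rather than the full argument list, so sensitive values on the command line don't leak into server-side logs.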
So SIG Cluster Lifecycle, as the name implies, thinks about clusters: how to manage them and how to administer them, across the entire lifecycle of a single cluster. The big change this time is the one we spoke about during the major themes: the ability to run the control plane as a non-root user, to increase security. We've already touched on this a little, so I'm not going to talk about it too hard, but it's coming in as alpha and, again, requires that feature flag. It's really exciting to see this from a security point of view. Moving back to SIG API Machinery. So the next one up is API server tracing. Before this, in order to support large Kubernetes clusters without creating a lot of load, the kube-apiserver keeps a watch cache — an in-memory cache, basically — holding the watch requests. If there is an inability to re-establish a watch — for example, when the kube-apiserver restarts — that causes a couple of issues: there would be an empty change history, and the requested resource version could be outside the history window. This new feature, which is alpha, aims at avoiding those two major pitfalls, and at making sure watches keep working even across kube-apiserver restarts. Moving on to the next one: network, with James. Sure. So SIG Network deals with everything from the Service implementation to DNS to CNI plugins, and really covers a wide gamut of what Kubernetes' feature set does. One of the big ones we've seen is EndpointSlices. This, I believe, entered either stable or beta last time, in 1.21, so seeing this come in and approach stability is really interesting and really cool. The core idea of the feature is to take load and other issues off the existing Endpoints API.
It also carries more information, allowing you to express things at a more granular level. So again, seeing this go stable is really interesting. Next: disabling load balancer node ports for Services. Normally, if you create a Kubernetes Service, it can be ClusterIP, NodePort, or LoadBalancer — and if you create a LoadBalancer, you still get a node port assigned. This lets you opt out of getting one if you don't need it, for whatever reason, depending on the implementation of your load balancer. This is coming in at beta. Next is the load balancer class. This is a lighter-weight approach to dealing with load balancers in Kubernetes — it's very similar to an API elsewhere that has a gateway class resource and things like that. So again, this is coming up at beta, part of SIG Network's ongoing work in this area. Network policy port ranges: pretty simple. You can have a NetworkPolicy which locks down communication between pods, and this allows you to put a port range on that rather than a single port — which, for certain classes of applications, is really quite useful and really quite interesting to see. Then the internal traffic policy. This addresses certain topologies you might see in networking, and allows you to do things like node-local topologies. This is again about making traffic policy a more powerful feature set in Kubernetes, and addressing a few more use cases. Namespace-scoped IngressClass parameters: this allows you to specify more information about how an Ingress works at the namespace level. It's a new beta field, really just making this more powerful and easier to use. Moving on: graceful termination for local external traffic policy.
This, again, is a very simple improvement. It is only at alpha, so you will need a feature gate in order to activate it. But it's part of the big, big body of work SIG Network is doing, and it's really exciting to see them bring so much change to the table. And then finally from SIG Network we have expanded DNS configuration. This allows more fine-grained DNS control: configuring resolvers in more detail, allowing more search paths, things like this. So it's really great to see this coming in, at the alpha stage at least. Moving back to Savita for SIG Node. Starting with huge pages: huge pages are memory pages larger than the default size of four kilobytes, and there are certain use cases and applications in Kubernetes that need more than that; they request more than the normal, usual size. This feature supports pre-allocated huge pages configured on the node by the administrator at boot time, and whenever a pod requests a number of huge pages, the scheduler can take that into account, place the pod on a node that has them, and let it take advantage of the available huge pages. Moving on to the next one. This feature is about configuring the fully qualified domain name, FQDN, as the hostname for pods. People who used servers before Kubernetes, predominantly CentOS and RHEL back when I was a platform administrator, will know those systems take the hostname as an FQDN. This feature enables portability for those older, legacy applications, hosted on servers before Kubernetes, onto this platform without having to change a whole bunch of things, because the pod now supports an FQDN as its hostname.
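A hedged sketch of a pod using both SIG Node features just mentioned, pre-allocated huge pages and the FQDN-as-hostname option; the image and names are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-demo
spec:
  # Stable in 1.22: set the pod's hostname to its fully qualified
  # domain name, matching legacy CentOS/RHEL application expectations.
  setHostnameAsFQDN: true
  hostname: hugepages-demo
  subdomain: demo-svc
  containers:
  - name: app
    image: registry.example.com/legacy-app:1.0  # placeholder image
    resources:
      limits:
        # Only schedulable onto nodes where the administrator has
        # pre-allocated 2Mi huge pages at boot time.
        hugepages-2Mi: 128Mi
        memory: 256Mi
    volumeMounts:
    - name: hugepage
      mountPath: /hugepages
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```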
It supports it now, and it's a stable feature as of this release. Moving on to the next one, sizable memory-backed volumes. Kubernetes supports emptyDir volumes whose backing storage is memory, and that defaults to 50% of the memory on the Linux host. This prevented pods from being moved from one node to another when they had to move, because they were depending on the memory size of the particular host they were on. With this feature, the portability of pods between nodes has been increased: it allows sizing the memory-backed volume from the pod's own configuration, in addition to an explicit, optional, user-provided value. This helps with scheduling, and it also means that when a node has to be drained and cordoned, the pods can easily move to a new one. Moving on to the next one, ephemeral containers. Everyone who has been working on applications in Kubernetes will have come across a point where you wanted to debug, and the one way to do it was kubectl exec: you get into the pod and you run a process within the pod. This feature instead lets you create an ephemeral container that attaches to the pod, and you can do all the troubleshooting and debugging from within that container. Moving on to the next one, liveness probe grace seconds. Before this feature existed, liveness probes used the pod's terminationGracePeriodSeconds both for normal shutdown and whenever the probe failed. So if the termination period was set long and the liveness probe failed, the workload was not restarted promptly, because it was actually waiting out the full termination period. In some cases this caused delays in things; there are different use cases, which can be seen in the KEP linked in the slide.
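For the memory-backed volume sizing, a sketch of what this looks like, assuming the `SizeMemoryBackedVolumes` feature gate is on:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: memory-volume-demo
spec:
  containers:
  - name: app
    image: busybox:1.34
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
      # With the feature enabled, the tmpfs mount is sized to this
      # limit rather than 50% of node memory, so the pod behaves the
      # same wherever it is scheduled.
      sizeLimit: 64Mi
```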
To serve those use cases, this feature supports setting an optional terminationGracePeriodSeconds on the probe itself; when set, a probe-triggered shutdown no longer waits out the pod-level grace period, so the pod can be terminated without going through the full wait time. This feature is in beta right now, and if you're interested in more details, please click on the links in the slide, which will be available later. Moving on to rootless mode containers. This is one of the most highly wanted features: avoiding a container breakout handing out root access and thereby exposing everything. This feature is in alpha, and when the feature gate is enabled it lets you run the node components in a user namespace. This is a really important security feature that is available right now, so take advantage of it if you'd like to keep your cluster secure. Moving on to the next one, cgroups v2. Kubernetes was initially implemented against the cgroups v1 API, and more recently cgroups v2 in the kernel went stable, a couple of years ago, I think. Since then most of the distros have started shipping cgroups v2 as the default, but Kubernetes was still using cgroups v1 through a compatibility layer. So this is basically adding support to the Kubernetes platform for cgroups v2, which is already supported in the kernel. It's in alpha, and if you want to use it, please take advantage of enabling the feature gate. The next one is memory QoS with cgroups v2, which we discussed a little in the major themes. This is nothing but additional support for memory QoS, taking advantage of cgroups v2. Moving on to the next one, node system swap support. We talked a bit about it already; it's mainly for certain types of applications, which can take advantage of the Linux system swap.
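The probe-level grace period described above can be sketched like this; with it set, a failed liveness probe restarts the container after 10 seconds instead of waiting out the pod-level 5 minutes (image is a placeholder):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-grace-demo
spec:
  # Pod-level grace period: generous, to allow clean shutdown on
  # normal termination (e.g. when draining a node).
  terminationGracePeriodSeconds: 300
  containers:
  - name: app
    image: registry.example.com/app:1.0  # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 3
      periodSeconds: 10
      # Beta in 1.22: on liveness failure, wait only this long
      # instead of the pod-level 300 seconds.
      terminationGracePeriodSeconds: 10
```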
This feature is now alpha, and when enabled, pods and containers can take advantage of swap too. Moving on to the next one, enabling seccomp by default. This is something we talked about earlier as well. It's another security feature, in the alpha state, and when enabled it makes Kubernetes more secure by default by applying a default seccomp profile to containers. Moving on to the next one, CPU manager policy options. What this feature is about: CPU isolation is already available, but some applications might run on a system with simultaneous multithreading (SMT) enabled, and they want to take advantage of thread-level allocation as well. That is what this feature supports, and it is in the alpha state. Again, if you want to use it, the feature gate needs to be enabled, and more details can be found in the links provided in the slide if you're interested. Moving on to scheduling with James. So with SIG Scheduling, the first enhancement is the scheduler ComponentConfig API. This allows cluster administrators to more fluently express the configuration for the various scheduler components. This is a beta iteration, plus some additional plugin functionality, so again it's going to be interesting to see where we go with this feature. Next we have preferred nominated node. This can speed up scheduling and eviction, principally if you're scheduling a lot of pods at the same time, by allowing pods to nominate a node that they particularly prefer; the scheduler will then try its best to get you there, but no guarantees, of course. Next we have the namespace selector for pod affinity. This allows pod affinity and anti-affinity to work without knowing the namespace's name ahead of time: you can use labels on namespaces in order to select them for affinity and anti-affinity. And then finally from this SIG we have the single scoring plugin for node resources.
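A hedged sketch of the scheduler ComponentConfig API mentioned above, using the v1beta2 config group; the exact plugin names and strategy fields should be checked against the 1.22 scheduler configuration reference:

```yaml
# Sketch of a KubeSchedulerConfiguration passed to kube-scheduler
# via --config. The profile tunes the consolidated node-resources
# scoring plugin rather than several separate scoring plugins.
apiVersion: kubescheduler.config.k8s.io/v1beta2
kind: KubeSchedulerConfiguration
profiles:
- schedulerName: default-scheduler
  pluginConfig:
  - name: NodeResourcesFit
    args:
      scoringStrategy:
        type: LeastAllocated
        resources:
        - name: cpu
          weight: 1
        - name: memory
          weight: 1
```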
This is a change which reduces the complexity of the Kubernetes source code by compressing a couple of different ways of scoring node resources into one format. Moving back to Savita, we have SIG Storage. First up with SIG Storage is the CSI service account token. This feature is stable right now. It provides a service account token for the pods that the CSI drivers are mounting volumes for, and the tokens are valid only for a limited period of time. This feature also gives the CSI drivers an option to re-execute NodePublishVolume, to re-mount the volumes again. Moving on to the next one, the volume populator data source redesign. This feature is in alpha. It is a major redesign of the data source field. Previously it only allowed two types of data source references: one for existing PVCs, to take a clone of the volume, and another for snapshots, for the intent of restoring, basically. It didn't have any option to add other data sources. With this redesign it provides expanded semantics, and it also adds a new dataSourceRef field, which provides options to add more data sources beyond the existing PVCs and snapshots. Moving on to the next one, delegating fsGroup to the CSI driver instead of the kubelet. Currently, for most volume plugins, the kubelet applies fsGroup ownership, which means permission changes: it recursively changes the ownership and mode of the files and directories inside the volume. This is not applicable for all CSI drivers; for example, Azure File does not support chmod or chown. So when this feature, which is in the alpha state, is enabled, there is an explicit field that can be added to the CSIDriver object, and the fsGroup can be applied by the driver during mount time.
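A sketch of the new dataSourceRef field from the volume populator redesign; with the relevant alpha gate enabled, it can point at custom resources beyond PVCs and VolumeSnapshots. The `BackupArchive` kind and its API group here are made up for illustration:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: restored-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # Alpha in 1.22: dataSourceRef generalizes dataSource. A custom
  # "volume populator" controller watching this kind is expected to
  # fill the new volume with data.
  dataSourceRef:
    apiGroup: backup.example.com   # hypothetical CRD group
    kind: BackupArchive            # hypothetical populator kind
    name: nightly-backup
```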
Moving on to the ReadWriteOncePod access mode. This is a new mode in addition to the existing ReadWriteOnce mode. ReadWriteOnce says a single node can mount the volume: the volume is attached to one particular node, but all the pods on that node have access to it. With the new ReadWriteOncePod access mode, it's a strict one-to-one relationship: only a single pod, on whichever single node it lands on, has access to that particular volume, and nothing else has read-write access to it. This feature is alpha, again. Moving on to Windows with James. Hi, so with SIG Windows, first of all we have CSI plugins for Windows. CSI is the Container Storage Interface, and these plugins are going stable in 1.22. So again, this is part of the theme we were talking about: Windows support reaching a lot of parity with Linux support in Kubernetes, so it's exciting to see this stabilize. Next, in alpha, and our final enhancement that we're talking about today, is Windows privileged containers. The idea is that you can launch a privileged container on Windows, which previously you were unable to do, typically for configuring networking or storage or something else that requires privileged access. This is coming in as alpha. And with that, I think we are done talking about just about every enhancement in Kubernetes 1.22. I'd like to pass back over to Savita to talk about the release team shadow program. Thank you, James. Oh, that was a huge list of enhancements that we went through, and at that speed, I feel like we just ran through it. Moving on to the release team shadow program. We wanted to talk about what it is to be a part of the release team: what is the release team, what do we do there, and how to apply or how to be a part of it.
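Before the shadow program, the last two enhancements covered, ReadWriteOncePod volumes and Windows privileged (HostProcess) containers, can be sketched as manifests; both are alpha in 1.22 and gated, and the images and names are placeholders:

```yaml
# Alpha: only one pod, cluster-wide, may use this volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: single-writer-claim
spec:
  accessModes:
  - ReadWriteOncePod
  resources:
    requests:
      storage: 1Gi
---
# Alpha: a privileged "HostProcess" container on a Windows node.
apiVersion: v1
kind: Pod
metadata:
  name: windows-hostprocess-demo
spec:
  nodeSelector:
    kubernetes.io/os: windows
  hostNetwork: true  # required for HostProcess pods
  securityContext:
    windowsOptions:
      hostProcess: true
      runAsUserName: "NT AUTHORITY\\SYSTEM"
  containers:
  - name: admin-task
    image: mcr.microsoft.com/windows/nanoserver:ltsc2022  # example image
    command: ["powershell.exe", "-Command", "Get-Service"]
```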
Basically, the release team is there to facilitate all the enhancements and features that go into the release. We are there to help the contributors, through the process and everything, so that whatever they are working on gets into the release. To do that, we have various sub-teams within the release team itself. Starting with enhancements: that's my team, which keeps track of every feature that is going into the release. The CI Signal team keeps track of the stability of the release: they watch for test flakes, make sure all the tests pass and stay healthy, and give the signal for the release. Then there is bug triage: they keep track of the bugs that get opened against the release and help coordinate with the contributors and the SIGs so those can be closed, making the release more stable and reliable. The documentation team helps coordinate with the enhancement owners: if their feature needs new docs, or updates to existing docs, they help with that coordination. Then comes the release notes team. They help with the automated generation of the release notes, and not just for features: bug fixes, anything and everything that goes into the release and needs a release note, they keep track of it and help coordinate. Then comes communications. Communications is responsible for a lot of things: coordinating the release within the community and with the CNCF, setting up this webinar, and keeping track of all the feature blogs that go out after the release. They work with the owners of those blogs and make sure they get published. So they help with everything. And the release lead team helps all the sub-teams.
It also works with the contributors and the various SIGs wherever some kind of help is needed, making sure that the release is on track and goes out the door successfully. So what's in it for you? If you're a newbie, and you don't know anything about Kubernetes, and you want to learn what to do, where to start, and how things happen in a big project like Kubernetes, it's a great place to learn, and you can apply to be a shadow. If you're a beginner already participating in a SIG but you want to expand your knowledge, it's again a great opportunity. And if you are a really advanced contributor, and you want to act as a liaison between your SIG and the release team, there are opportunities for that as well. You can ensure that whatever feature your SIG is working on gets in: you can be the person who helps people understand what's going on, and also who helps the SIG understand, oh, these are the deadlines and we need to get things done by this time. So the shadow program is for everyone who wants to learn and develop their knowledge around the ecosystem, get started, or even try something new. One thing to keep in mind is that it has gotten very competitive over the years, so don't be disheartened. Last time, I think, for 1.23, which James is on, we had over 180 applications for, what, 35 or 40 roles. Yeah, so I was the release lead shadow for 1.23. We had, I believe, 185-ish applications, and we were able to take on, I think, about 26 shadows, something like that. So it's getting very competitive out there, but that is not the only way to help out. If you are interested in helping out, there are a lot of ways: come say hi and reach out to folks, start contributing to things like good first issues, work on an area and improve your skills, and reapply.
You can learn more about the release team through the links mentioned in the slide. There is a role handbook covering the roles that we talked about, so you can go take a look at it, and let us know if you have any questions about that. We are super sorry that we are way over time; apologies for the inconvenience, and thank you for staying with us. Thank you, James, JC, Libby, and all of the audience who are here with us listening to our session. Please feel free to ping us in the Slack if you have any questions; I think Libby already shared the Slack channel. If not, please correct me. Yeah, I'll share it again right now. Thank you, Libby. Thank you so much. What a great presentation, and thank you for your time, and everyone else as well. And here is the Slack channel once again. All right, well, with that, we will let everyone go and follow up on the Slack channel, and all of this will be online in just about an hour or so, so you can rewatch and get all your questions answered. Thank you all so much, and we will see you all again next week for another live webinar. Thank you, everyone. Thank you for coming.