All right, we're going to go ahead and kick it off so that we can get moving, because we have a lot to cover. I want to welcome everyone to today's CNCF Live Kubernetes 1.21 release webinar. I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our Code of Conduct real quick and then I'll hand over to Divya Mohan. A few housekeeping items before we get started. During the webinar you are not able to speak as an attendee. There's a chat box at the top right of your screen; please feel free to drop questions there and we'll get to as many as we can at the end. If you join our public CNCF Slack channel, CNCF Online Programs (I will put that in the chat), we can continue all the questions and discussion post-event as well and get to anything we didn't answer. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today on the CNCF online programs page at community.cncf.io, under online programs; they're also available via the registration link that you used to sign in today. And with that, I will hand off to Divya to start today's presentation.

Divya, I'm so sorry to interrupt you. I think something is going on with your audio; it's been cutting in and out and crackling. Okay, I'm really sorry about that. Is this better? There we go, that's perfect. Sounds great.

Yeah, so, moving on to the agenda for today. Anna will first walk us through a sneak peek of what you can expect from the 1.22 release, and after that we will walk through some highlights of 1.21. That is where Nabarun and Anna will speak about how the 1.21 logo and theme came to be, and we'll also go through some stats for the 1.21 release. Next up, obviously, is the meat of the presentation, which is the updates from the various SIGs in terms of feature enhancements. A little bit of time will be reserved for Q&A towards the end, as Libby mentioned. And if we are not able to get through your questions at the end, we will be answering them on the CNCF Slack channel, CNCF Online Programs, so I request you all to join in there and post directly. We will take note of all the questions and answer there anything we cannot get to by the end of the webinar. With that being said, I will now hand it over to Anna. Over to you, Anna.

Cool, thanks, Divya. Hi, everyone. My name is Anna and I was an enhancements lead for 1.21. Let me give you a sneak peek of the next release. We started the 1.22 release on April 26, and it is targeted to release on August 4. Last Thursday was actually the enhancements freeze deadline, where all enhancements wishing to be included in the 1.22 release must have their KEPs updated and merged. As of today we are tracking 67 enhancements for the upcoming release, but that number will change: we have a few exceptions coming in, and with code freeze the numbers may go down. But this does look like another big release for us.
So expect a lot of things coming on August 4. Also, if you're following along with the release, you'll notice that the 1.22 release cycle is a little longer than usual, because the Kubernetes release cadence has changed to three releases per year. 1.22 will be a 15-week release cycle, compared to 1.21, which was only 13. This was changed to give people more time to develop, among other things. So now I will pass it to Nabarun for the 1.21 highlights.

Yeah, I knew I was on mute; I'm back. Thanks, Anna and Divya, for starting off the session really well, and thank you, Anna, for the 1.22 updates. So I will go through the 1.21 release highlights. The first thing I want to emphasize is our release theme. This time we chose the release to be titled Power to the Community. Now you may ask what that means. A few of the things we have been doing over the past few release cycles have been aimed at making the release team more accessible and more inclusive to every nook of the globe: facilitating people to participate in the discussions of the release through alternative meetings, that is, meetings in more Asia- and Europe-friendly time zones, so that people don't have to stay awake until late at night to attend. We do recognize that synchronous meetings, even multiple of them, may not reach each and every person on earth, so we have also transformed a lot of our processes into asynchronous ones. You really don't need to come to a meeting to discuss things; you can just post something on the mailing list or the release Slack channels and it will be discussed there. We also started applying more lazy consensus to decisions, and letting people see and review things in the open. All in all, we have been moving steadily towards our goals of inclusion and more sustainability in the release teams, which will continue in future releases.

Now, what did we even ship in Kubernetes 1.21? Every Kubernetes release we have been breaking records for the number of features that we ship. Kubernetes 1.20, right before 1.21, shipped the highest number of features in recent history; in 1.21 we upped the game again by shipping 51 enhancements. An enhancement is basically the term in the Kubernetes community for how we track a feature from its inception to its stability, or deprecation in certain cases. Throughout that journey we sort each of those enhancements into several buckets: alpha, beta, stable, and deprecations. What this means for end users is that alpha features are usually very new features, introduced by the code owners of that area. In the Kubernetes project we give every feature a little bit of time to mature and then graduate through the several stages. Alpha features are disabled by default on every conformant, shipped Kubernetes cluster; you can enable them using a feature flag. So all of the alpha and beta features are gated; it's just that alpha features are disabled by default and you have to explicitly enable them. A minimal sketch of enabling a feature gate follows.
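For instance, here is a minimal sketch of opting into an alpha feature through a kubelet configuration file. The MemoryManager gate is a real alpha gate in 1.21 (it comes up again in the SIG Node section); picking it as the example here is just for illustration.

```yaml
# Sketch: a KubeletConfiguration that explicitly enables an alpha
# feature gate. Alpha gates like MemoryManager are off by default,
# so nothing changes unless you opt in like this.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  MemoryManager: true
```

The kubelet also accepts the equivalent --feature-gates=MemoryManager=true command-line flag, and the other components take the same --feature-gates flag for the gates they own.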
Coming on to beta features: when contributors or feature owners think that an alpha feature has been in the project for some time and has gained enough maturity to graduate to beta, it gets enabled by default, that is, the feature flag is set to true by default. Although, if you feel it's buggy or there are things not suited to your use case, you can disable it when you are bootstrapping the cluster. Alpha enhancements see a lot of changes along their journey, but once a feature graduates to beta, making changes becomes more difficult, because end users do use them a lot and there are guarantees established. Then, once beta enhancements have stayed beta for some time, they eventually graduate to stable, where there are strong guarantees that the feature won't change in future releases. It requires a lot of maturity for a feature to graduate to stable.

So that's how we categorize things. Coming to the numbers: of the 51 enhancements, the community graduated 13 to stable, which means they will be in the project for the long term. 15 enhancements graduated from alpha to beta, which means we have a lot of confidence that those features are heading towards stable and are consumable by end users. We also introduced 21 new alpha features in Kubernetes 1.21 that you can check out by enabling the feature flag when you bootstrap a cluster. Apart from that, we deprecated two features, which we will discuss in detail when we go through the individual SIG updates.

Moving on, we have certain major themes for Kubernetes 1.21; we are emphasizing a few of them on this slide and the next. Number one, the CronJob resource has graduated to stable. CronJobs had been beta for a long time, and back in, if I remember correctly, 1.19 there was an effort to revamp the CronJob controller to newer controller standards and make it faster. That happened in 1.19, and in 1.21 the feature graduated to stable; the old controller code has been removed, and the feature gates have been removed as well. The next feature that graduated to stable, and one of our major themes, is immutable Secrets and ConfigMaps. What that means is that whenever you create a Secret or a ConfigMap, you can mark it as immutable, and any further update requests to it will be rejected (there is a small sketch of this below). We will also look at it when we talk about the storage updates, because this is a really cool feature. Next up is IPv4/IPv6 dual-stack support, which has graduated to beta. This is a revolution, and a lot of work went into making it happen; kudos to all the people involved. Graceful node shutdown has also graduated to beta; we will see what it means for you, in a bit more detail, in the SIG Node updates. We have more major themes in this release. One of them: whenever you create a persistent volume, there used to be no mechanism by which the Kubernetes API server could check whether the underlying resource at your infrastructure provider was healthy or not. Now there is one. It has landed as alpha, so you can check out this feature by enabling the feature flag. Along with that, we have reduced a lot of the Kubernetes build maintenance burden.
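Picking up the immutable Secrets theme for a second, here is a minimal sketch; the name and payload are made up, but the immutable: true field is the actual API surface that graduated in 1.21.

```yaml
# Sketch: an immutable Secret. Once created, attempts to change its
# data are rejected by the API server; to "update" it you create a
# new Secret and repoint your workloads at it.
apiVersion: v1
kind: Secret
metadata:
  name: demo-credentials   # hypothetical name
type: Opaque
stringData:
  password: s3cr3t         # placeholder value
immutable: true
```

The same field exists on ConfigMaps, and because the kubelet no longer has to watch or poll immutable objects, large clusters get a real performance benefit, as we'll see in the storage section.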
Coming back to the build maintenance point: this is mostly for the upstream contributors. Before this happened, we had two kinds of build tooling: the native Go toolchain-based tooling, and Bazel. With this enhancement, the Bazel-based tooling has been removed from the core Kubernetes repository, and all of the processes have been moved over to the Go-based tooling. Now, remember I talked about two deprecations. We deprecated PodSecurityPolicy, which was a massive change; it created a bit of an uproar in the end-user community as well. We will go into more detail later about what it means for end users, how they can mitigate around it, and what comes next. On top of that, the code owners of service topologyKeys have deprecated it in favor of better options.

Going ahead, Anna and I will walk through the special interest groups that shipped features in Kubernetes 1.21, and give you a short overview of each feature. Why do I say short? In the interest of time: we shipped a lot of features this release cycle, and it is a long list. But before going through all of that, I want to say two things. First, special interest groups, or SIGs, are the units of people inside the Kubernetes community, community groups that own specific areas of code; each SIG is delegated one specific area of the core Kubernetes repository. For example, you have API Machinery, Node, and CLI: API Machinery handles everything related to the Kubernetes API server, the API types and conventions; Node maintains the kubelet and any other code relevant to the operation of a node and its surroundings; CLI handles kubectl and anything you do with kubectl. These are just examples, and we will go through more SIGs. Second, since we will only brief you on things, you can still go and read about them in more detail: each of the slides links to a tracking issue and an enhancement proposal. An enhancement proposal, or KEP, is a feature proposal where the feature owners have written down their motivations, goals, non-goals, and some implementation details, so you can skim through the KEP to understand what each feature means.

Having said all that, the first SIG we will go through is API Machinery. One of the first enhancements they shipped, which graduated to beta, is efficient watch resumption after kube-apiserver reboot. Whenever you restart the API server, it needs to refresh its watch cache from etcd, and many times the resource version may be out of sync. So if you have a lot of watches against the API server, clients can end up doing a ton of relists, which creates unnecessary load on etcd and the API server. This has been resolved, which avoids those tons of relists during API server rolling upgrades, basically at the moment you stop an old-version kube-apiserver and start a new one. It also avoids different instances of the API server being stuck with watch caches synced to different resource versions for a long period of time. You can go through the enhancement proposal and read about it in more detail.
Next up, you might have heard about this feature called server-side apply. Previously, whenever you applied a Kubernetes resource, the diff was calculated on the client side; with server-side apply, that calculation happens on the server. But what if you want to do it programmatically in client-go? Earlier, you needed to use a patch type called ApplyPatchType and hand the API server raw bytes of YAML or JSON, so that the API server could take them in and perform the server-side apply operation. With client-go shipping ApplyConfigurations, you don't need to do that anymore: there are now typed helpers for doing server-side apply from client-go. With that, server-side apply can go GA, which I think is slated to happen in Kubernetes 1.22, the current release cycle, so do track that. The third thing API Machinery shipped: oftentimes you want to select namespaces reliably using the traditional label selector methods, and a small change makes that work. It may not be that small, but here it is: whenever you create a namespace, a reserved label called kubernetes.io/metadata.name is added to the namespace metadata, set to the namespace's name, so that you can reliably select that namespace with a label selector. Those are the three enhancements SIG API Machinery shipped, and they did a great job; a lot of those enhancements are going into beta and becoming available for users by default.

Moving over to the next SIG, Apps, which also shipped a lot of interesting things. First, CronJobs graduating to stable: as I mentioned, the old controller is now removed and the feature flags are gone, so the new controller is the way to go for your CronJobs. If you have been using CronJobs since Kubernetes 1.19, you may already have been using the new controller by default; it's just that the old controller is not there anymore, which is transparent to you. Next up, PodDisruptionBudget has graduated to stable, which also makes PodDisruptionBudgets mutable, so you can change them even after you create them. Along with that, the team has addressed a lot of performance issues in the PodDisruptionBudget controller. Next up, and this one is a bit interesting: suppose you're a cluster admin and your cluster users are creating a lot of Jobs. If those Jobs have highish parallelism and highish completion counts, you will end up with a lot of finished Pod and Job resources sitting in the cluster, which are not cleaned up automatically by default; you'd have to run a cleanup operation yourself. This feature makes it easy: users specify a TTL, a time to live, on those resources, and the TTL controller reads it and keeps deleting Jobs, and their Pods, once they have finished. A minimal sketch of this follows.
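Here is a minimal sketch of that TTL field on a Job. The Job itself (name, image, command) is a placeholder, but spec.ttlSecondsAfterFinished is the real field the TTL controller acts on.

```yaml
# Sketch: this Job, and the Pods it created, are deleted
# automatically 300 seconds after the Job finishes.
apiVersion: batch/v1
kind: Job
metadata:
  name: ttl-demo            # hypothetical name
spec:
  ttlSecondsAfterFinished: 300
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox      # placeholder workload
        command: ["sh", "-c", "echo done"]
```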
Next up is random Pod selection on ReplicaSet downscale. If you have been using ReplicaSets and constantly upscaling or downscaling them, you might have noticed that the Pod killed on a downscale event is usually the last Pod that was created. Now, a Pod created recently may be doing real work, handling a workload that has just started, and it may be detrimental for your use case to kill a Pod that started very recently. So this introduces a randomized heuristic: any of the Pods in the ReplicaSet may be selected and killed, so that the old behavior can't single out your newest workload. I have to mention that this feature is in alpha, so you need to enable the feature flag to get this logic working in your cluster.

Next up is Indexed Jobs. People run, for example, machine learning workloads, and those may be one case, though there are others, where your workload requires some kind of index for the completion of the job. With this change, you can specify that a Job is Indexed, and a job completion index environment variable will be present in the containers of the Pods created by the Job. In this example, an image processing task takes the index by reading the environment variable, and it can also use a hostname pattern to talk to another Pod created by the Job. So talking to the other processes becomes deterministic if you want it to be, and this can be handled easily by adding a headless Service that points to each of the Pods in that Job using a label selector.

The next feature is the ability to suspend Jobs. If you have been a user of the Job resource, you might have noticed that to halt a Job, you could simply delete it; but when you delete a Job, metadata like how many times the Job has completed or failed is lost. Now there is a suspend field in the Job specification, which you can set to true to suspend the Job's execution: the controller stops creating new Pods (and terminates the Job's running Pods) while keeping the Job object and its history. If you want to resume it, you just unsuspend the Job. This is also alpha, so you need to enable it through the feature gate.

Next up: we have been talking about a lot of changes to ReplicaSets, and this is another one, where you can influence the order of Pod deletion on downscale events. You might think it is intuitively the opposite of the randomized one, but it's really adding features layer over layer. Number one, ReplicaSet downscale events will randomly delete a Pod; then, if you want to control the heuristic a bit, you can set an annotation called controller.kubernetes.io/pod-deletion-cost and give it a value, and the Pods with lower values will be deleted first. That is how you can control, a little, the heuristic of how your Pods get deleted on a ReplicaSet downscale event. This is also alpha, so you need to enable the feature flag; there is a small sketch of the annotation below. Having said that, I just want to give a shout-out to SIG Apps: they have been doing an awesome job enhancing the user experience for Jobs, ReplicaSets, and CronJobs, so thank you to them. A lot of these features are in alpha, so please feel free to enable them intentionally, use them, and give feedback; the community would be really grateful for that.
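Before we move on to the next SIG, here is a minimal sketch of that deletion-cost annotation. The Pod itself is a bare placeholder, but controller.kubernetes.io/pod-deletion-cost is the real annotation the ReplicaSet controller consults when the PodDeletionCost feature gate is on.

```yaml
# Sketch: among a ReplicaSet's Pods, the one with the lowest
# deletion cost is preferred for removal on downscale.
apiVersion: v1
kind: Pod
metadata:
  name: worker-idle          # hypothetical name
  labels:
    app: worker
  annotations:
    controller.kubernetes.io/pod-deletion-cost: "-100"  # delete this one first
spec:
  containers:
  - name: app
    image: busybox           # placeholder workload
    command: ["sleep", "infinity"]
```

In practice you would usually patch this annotation onto live Pods, for example from the application itself as it picks up or finishes work, rather than baking it into a static manifest.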
Next up, we have SIG Auth, who handle the authentication and authorization mechanisms in Kubernetes, and we come to a very interesting enhancement: PodSecurityPolicy. There have been a lot of discussions about it on social media and on several channels, but one thing I would like to mention here is that PodSecurityPolicy has been deprecated and is slated to be removed in 1.25. That does not mean you can't use PodSecurityPolicy now; you still can, but we would highly recommend looking at the replacements that are there, so that your cluster's transition away from PSP is smooth. You can read the deprecation blog by the SIG Auth folks, who are the primary drivers here; Kubernetes is community work, so a lot of people have been driving it, and I want to give them a shout-out. A replacement is also being worked on, and the link is in the slide. So please, please, please: if you are a user of PodSecurityPolicy, I urge you to go look at the replacement and give your feedback. The upstream contributor community thrives on such feedback.

Moving on to the next enhancement from SIG Auth. Clients using client-go need some way to authenticate requests with external providers, and with this enhancement, client-go provides a mechanism for you to implement out-of-tree providers. Now, what do I mean by out-of-tree? In the Kubernetes community's context, out-of-tree means the code does not reside in the kubernetes/kubernetes code base. What you can do is implement a credential provider externally and then, when you use client-go, specify it as the provider. You may already have used the GCP and Azure providers that are built into client-go; they will eventually be deprecated in favor of out-of-tree providers. This is still in beta, so it will need to graduate to GA before things progress further. One thing to note is that this also means credentials can be rotated without restarting client processes: since the credentials live out-of-tree, outside the process you are running to talk to the API server, you don't need to restart that process, and your workloads are not hampered.

Next, bound service account tokens. This is a cluster of features, if I can phrase it that way, involving several enhancements together. One of them is separating the root CA ConfigMap from the bound service account token volume. With this, the audience of issued JSON Web Tokens is bound, and the auto-configured service account tokens in Pods can use those projected tokens, which makes using all of this more efficient. And remember, bound service account tokens are a cluster of different things: the root CA ConfigMap piece goes to GA, paving the path for the other parts of bound service account tokens to graduate and evolve in their functionality. With this, a ConfigMap called kube-root-ca.crt is published into every namespace, so that it can be used by any workload to verify its connections to the API server. This will be helpful when you are designing workloads that run in-cluster. A minimal sketch of a projected, audience-bound token follows.
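Here is a minimal sketch of the projected-volume shape that bound tokens build on. The Pod, the audience string, and the mount paths are illustrative, but serviceAccountToken projections and the per-namespace kube-root-ca.crt ConfigMap are the real mechanisms.

```yaml
# Sketch: mount a short-lived, audience-bound token plus the
# cluster root CA bundle that is now published in every namespace.
apiVersion: v1
kind: Pod
metadata:
  name: token-demo              # hypothetical name
spec:
  serviceAccountName: default
  containers:
  - name: app
    image: busybox              # placeholder workload
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: bound-token
      mountPath: /var/run/demo
  volumes:
  - name: bound-token
    projected:
      sources:
      - serviceAccountToken:
          audience: demo-audience     # hypothetical audience
          expirationSeconds: 3600
          path: token
      - configMap:
          name: kube-root-ca.crt
          items:
          - key: ca.crt
            path: ca.crt
```

The kubelet keeps the projected token fresh, so the workload can simply re-read the file instead of handling rotation itself.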
Next up: service account signing key retrieval. When you have service account tokens inside a cluster, this graduating to stable allows authorized systems to discover the information they need to authenticate those tokens. One of the important goals of this feature is that the Kubernetes API server should be OpenID Connect compatible, rather than having API structures and interfaces of our own. This is also going to stable. With that, there has been a lot of work on the SIG Auth side of things, specifically around PSP, so do give feedback on anything you feel is necessary.

Having said that, moving over to SIG CLI: there have been two improvements to kubectl, and both of them are alpha. The first caters more to cluster admins who want metrics to understand the behavior of users calling the Kubernetes API server. In specific cases, kubectl will now send, along with the request to the API server, headers like Kubectl-Command, Kubectl-Flags, and Kubectl-Session, which help you build telemetry for use cases where you want to know what kinds of operations your engineers or cluster users are performing. The Kubectl-Session value is an ID that is unique to each session. Next up, and this is also one of the interesting user-experience items: whenever you do kubectl logs or kubectl exec, you specify the Pod name, and if your Pod has multiple containers, you have to pass the -c flag with the name of the container you want to operate on, whether it's an exec or a logs operation. With this feature, you can set an annotation on the Pod declaring its default container; if you don't pass -c, kubectl assumes from the annotation which container you mean. -c still takes precedence; it's just that if you don't specify the flag, kubectl takes the default one. There is a small sketch of the annotation below. Those are the two things shipped by SIG CLI, and as a cluster user I think these two enhancements will be really awesome to use and to gather more insights from.
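A minimal sketch of that default-container annotation. The Pod and container names are made up, but kubectl.kubernetes.io/default-container is, to the best of my reading of the 1.21 notes, the annotation the alpha behavior looks at.

```yaml
# Sketch: with this annotation, `kubectl logs multi-demo` targets
# the "app" container by default, while `-c sidecar` still works
# and takes precedence over the annotation.
apiVersion: v1
kind: Pod
metadata:
  name: multi-demo              # hypothetical name
  annotations:
    kubectl.kubernetes.io/default-container: app
spec:
  containers:
  - name: app
    image: nginx                # placeholder main container
  - name: sidecar
    image: busybox              # placeholder sidecar
    command: ["sleep", "infinity"]
```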
Moving on to SIG Cloud Provider: they shipped a leader migration mechanism for controller managers as alpha. What this helps with is migration between deployment models: if you want to migrate a control plane using the in-tree cloud provider to a version using the out-of-tree cloud provider, there is now a mechanism to do it in a highly available way. This enhancement defines the guidelines you need to follow, such as the locking mechanism and the resource locks you put on the Kubernetes API, and then you perform the migration. Kudos to them for shipping such a useful thing. With that, I will hand the baton over to Anna, who will go through a few more SIGs.

Cool. Thanks, Nabarun. Let's take a look at the updates from SIG Instrumentation, which had five enhancements in 1.21. First up, the metrics stability enhancement graduates to stable. Metrics are categorized as either alpha or stable: alpha metrics can be deleted at any time, while stable metrics are guaranteed not to change. But when a stable metric does need to go away, this enhancement gives a better ability to deprecate it: the metric is first marked deprecated, with a deprecation notice in its description text and in the warning logs, and then eventually the metric is hidden and then removed. Next, we have structured logging, which actually still remains alpha. Structured logging defines a uniform structure for Kubernetes log messages, and in 1.21 it landed in the kubelet as well. Even though it didn't graduate to the next stage, there was a lot of effort put into this during 1.21. Next: expose metrics about resource requests and limits that represent the pod model; this graduates to beta. It allows kube-scheduler to expose optional metrics reporting the requested resources and the desired limits of all running Pods. Next: defend against logging secrets via static analysis, which graduates to beta. This is the one where static analysis is used during testing to prevent leaking various types of sensitive information. Next: metric cardinality enforcement is a new enhancement, and it mitigates memory leaks that had been identified with unbounded metrics. It introduces the ability to turn a metric off and to set a list of allowed values for a metric's labels. So, a lot of great metrics-related changes from SIG Instrumentation in this release; shout-out to them for everything, and specifically the structured logging efforts.

Now we can take a look at SIG Network, which had nine enhancements. First up is IPv4/IPv6 dual-stack support, which graduates to beta. Dual-stack support in Kubernetes means that Pods and Services can get both IPv4 and IPv6 addresses, and it is now enabled by default. Next, we have the EndpointSlice API graduating to stable. The EndpointSlice API was introduced to solve existing performance problems with the Endpoints API, and it does that by splitting the endpoints into several EndpointSlice resources. With v1, the topology field has been removed in favor of dedicated fields like nodeName and zone, and there is now an annotation to indicate over-capacity for an Endpoints resource with more than 1000 endpoints. Next is service type LoadBalancer class, one of the new enhancements. It adds the option to specify the class of the load balancer implementation for Services of type LoadBalancer, so users can leverage multiple load balancer implementations in a cluster; this is the lightweight approach until the Gateway API becomes mature. Next, we have NetworkPolicy port ranges, another new enhancement from SIG Network, and I think this one will make a lot of people happy. It allows you to write one NetworkPolicy rule that targets a range of ports instead of writing one rule for every port; there's a new field called endPort to express the range. Next is service internal traffic policy, also a new enhancement. It introduces a new Service field called internalTrafficPolicy that kube-proxy uses to filter the endpoints it routes to. When it's set to Cluster, all endpoints are considered, which means it behaves as usual; when it's set to Local, only node-local endpoints are considered, meaning traffic is only sent to endpoints on the same node. A minimal sketch of this field follows.
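Here is a minimal sketch of that Service field. The names and ports are placeholders, but spec.internalTrafficPolicy is the real field (alpha in 1.21, behind the ServiceInternalTrafficPolicy gate) that kube-proxy consults.

```yaml
# Sketch: with internalTrafficPolicy: Local, in-cluster clients are
# only routed to endpoints running on their own node.
apiVersion: v1
kind: Service
metadata:
  name: node-local-demo         # hypothetical name
spec:
  selector:
    app: demo                   # hypothetical selector
  internalTrafficPolicy: Local  # default is Cluster
  ports:
  - port: 80
    targetPort: 8080
```

A typical use case is a per-node agent, such as a DaemonSet, where sending traffic across nodes would only add latency.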
Next: blocking Service external IPs via admission. This is a new enhancement that graduated straight to stable, in response to a vulnerability that was identified which allows an unprivileged user to hijack an IP address via a Service. The enhancement blocks the use of external IPs: it allows admins to disable external IPs entirely and block the deployment of any resource that uses the externalIPs field. Next: namespace-scoped IngressClass parameters. This enhancement was created to support the many use cases that need the ability to reference namespace-scoped parameters; now you can do that just by specifying parameters for the IngressClass with a namespace scope. Topology-aware hints is a new enhancement that provides hints to cluster components to influence how traffic is routed, so that components like kube-proxy can be more efficient and keep service traffic within the same zone. Next is the deprecation of topology-aware service routing: specifically, the topologyKeys API is now deprecated in favor of the topology-aware hints mentioned on the previous slide. To summarize SIG Network: they introduced many new alpha enhancements and focused on scalability improvements, so shout-out to them for all their hard work getting a total of nine enhancements into 1.21.

Now let's take a look at SIG Node, another big one with nine enhancements. The first is sysctl support. This one has actually been around since 1.4; it allows interaction with the Linux sysctl interface to tune OS parameters. It had been beta since 1.11 and is now, in 1.21, stable. Next: provide a runAsGroup feature for containers in a Pod, which graduates to stable. This is another old one, around since 1.10; it supports the runAsGroup field inside the securityContext field of a Pod, and it had been beta since 1.14. Next, we have memory manager, a new enhancement from SIG Node. It's a new component in the kubelet ecosystem that guarantees memory allocation for Pods in the Guaranteed quality-of-service class, using a single- or multi-NUMA allocation strategy. This will be useful for any app that requires memory optimization, like packet processing or databases. Next, graceful node shutdown graduates to beta and is now enabled by default. With this enhancement enabled, the kubelet detects a node system shutdown and tries to gracefully terminate the Pods running on the node. Next, downward API support for huge pages graduates to beta; this allows a Pod to fetch information about its huge page requests and limits through the downward API. Next: remove cAdvisor JSON metrics from the kubelet. These had been deprecated since 1.18, and with this enhancement graduating to stable they have been removed permanently. Next: add a configurable grace period to probes. This introduces a probe-level terminationGracePeriodSeconds, in addition to the pod-level terminationGracePeriodSeconds that was already available, as a solution to an edge case where liveness probes are used together with a long grace period; see the sketch at the end of this section. Next: extend the pod resources API to report allocatable resources. This is another new enhancement; it extends the kubelet's pod resources endpoint so that third-party consumers can learn about the compute resources allocated to Pods, using a GetAllocatableResources endpoint. This also means it becomes much easier to evaluate node capacity.
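As promised, a minimal sketch of the probe-level grace period. The Pod and its health endpoint are placeholders, but the terminationGracePeriodSeconds field nested inside the probe is the real knob (alpha in 1.21, behind the ProbeTerminationGracePeriod gate).

```yaml
# Sketch: the Pod keeps a long grace period for normal shutdown,
# but a container killed because its liveness probe failed is only
# given 10 seconds before being force-terminated.
apiVersion: v1
kind: Pod
metadata:
  name: probe-grace-demo                # hypothetical name
spec:
  terminationGracePeriodSeconds: 3600   # pod-level: slow, graceful drain
  containers:
  - name: app
    image: nginx                        # placeholder workload
    livenessProbe:
      httpGet:
        path: /healthz                  # hypothetical health endpoint
        port: 80
      failureThreshold: 3
      periodSeconds: 10
      terminationGracePeriodSeconds: 10 # probe-level override
```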
Finally for SIG Node, CRI container log rotation is another enhancement that has been around for a long time and at last graduates to stable. It enables container log rotation for the container runtime interface, and like I said, it's been around since 1.10. So it's really nice to see a lot of old features finally graduating to stable from SIG Node, along with new enhancements like memory manager making their first appearance; a huge shout-out to SIG Node.

Now let's look at SIG Scheduling, which had two enhancements. The first is honoring the nominated node during a scheduling cycle. This allows a user to define a preferred node to speed up scheduling a Pod: instead of evaluating all the nodes to find the best candidate, you can now define the preferred node in the nominatedNodeName field of a Pod. Next: namespace selector for pod affinity, another new enhancement. It introduces a namespaceSelector, allowing the namespaces of an affinity term to be set dynamically, so namespaces can be specified by labels instead of by name. In addition, it introduces cross-namespace affinity controls that limit which namespaces are allowed to have Pods with affinity terms that cross namespaces. Shout-out to SIG Scheduling for two awesome new enhancements. Now I'm going to pass it to Nabarun to go over SIG Storage and Testing.

Thank you, and thank you for all the updates on Instrumentation, Network, Node, and Scheduling; it was really awesome to hear about them. I'll go over the final bits of this webinar: Storage, Testing, and a little about our release team shadow program. As I mentioned earlier, immutable Secrets and ConfigMaps have graduated to stable. You can specify that Secrets and ConfigMaps are immutable, which protects against unnecessary or accidental updates. The kubelet also does not poll for such Secrets and ConfigMaps, which results in much better performance. The other thing I mentioned as a major theme is the PV health monitor. If you enable this feature gate, it will drastically improve the experience of handling issues with the underlying storage: you will learn about problems in a better way, and you get a really early signal of any storage failures that may happen, potentially preventing your workloads from going down later. Next up is storage capacity constraints for pod scheduling. With this feature, when the scheduler tries to schedule a Pod to a node, it checks whether the requested storage capacity is actually available; for example, if you say "I need a 10 GB volume along with this Pod" and the node does not have the backing capacity for that storage, Pod placement on that node is blocked. Next: generic ephemeral inline volumes. With this change, you can have lightweight per-Pod volumes that are provisioned on demand by a storage driver. The Pod becomes the owner of the volume claim: ephemeral volume claims that exist because the Pod exists are created along with the Pod and cleaned up with it, through the ownerReference mechanism. A minimal sketch of this follows.
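Here is a minimal sketch of a generic ephemeral volume. The names and size are placeholders, but the ephemeral.volumeClaimTemplate stanza is the real API shape, and the resulting PVC is owned by, and garbage-collected with, the Pod.

```yaml
# Sketch: a per-Pod scratch volume. A PVC is created alongside the
# Pod and deleted automatically when the Pod goes away.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo            # hypothetical name
spec:
  containers:
  - name: app
    image: busybox              # placeholder workload
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    ephemeral:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi      # placeholder size
```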
Next up is prioritizing nodes based on volume capacity, which results in Pods being scheduled on nodes where the available capacity is actually close to the requested capacity. For example, say you have 100 GB free on one node and your volume claim requires 10 GB; the scheduler may place the Pod there if there is no alternative, but if there is a node with 20 GB of storage available, it will prefer that one. So it does a kind of heuristic-based scheduling that tries to optimize your volume resource usage. Again, this is alpha; it sounds really cool, so you should probably enable it and try it out.

One thing that graduated from alpha to beta is the Azure File CSI driver migration to an out-of-tree CSI driver. In one of the previous releases, Azure Disk moved out-of-tree, and this has now been done for the Azure File provider as well. One interesting bit to note is that the feature flag here is set to false by default, even though this is beta, because of the nature of in-tree to out-of-tree driver migration. For more details, look at the tracking issue and the KEP, which have more discussion of why this is necessary. Next: service account tokens for CSI drivers. CSI drivers can now request audience-bound service account tokens for the specific Pods from the kubelet in NodePublishVolume, and with that, drivers can re-execute NodePublishVolume in a best-effort manner to keep those tokens fresh. That was all for SIG Storage; you can see there have been lots of improvements on the CSI driver side of things and in evolving the experience of cluster admins and end users running workloads that attach storage volumes.

Moving over to Testing: SIG Testing shipped the Bazel removal KEP, which means the Bazel-based build and related release tooling are now removed, and the CI processes that used Bazel now use the native tooling, like make build, which relies on the Go toolchain. This also reduces stress on the community, since people no longer need to think about and maintain multiple build systems; they can just use the native toolchain. That's all for the updates from each SIG. You have seen how 51 features were shipped by the community as a whole, lots of interesting things you should try out, and lots of interesting ways the features have evolved over time.

Having discussed the Kubernetes release itself, I'll talk a little bit about the release team shadow program. It is basically an apprenticeship program through which any new contributor, or any contributor interested in participating in Kubernetes releases, can get started with the Kubernetes release sustainably. Shadows are mentored by the role leads, and each role lead signs up three to five shadows, depending on the role and the workload involved. The program runs about four months, because the Kubernetes release cycle is now about four months, and throughout the release cycle the shadows are mentored so that they can take on the lead role next time. With that, that is the end of the session. You might have already asked questions, so we will try to answer a few of them; otherwise, we'll take it to Slack. I'm going to stop sharing now.
One thing I would like to ask the two of you: maybe mention when you started with the Kubernetes release team, and in which cycle, so that people can get motivated hearing about your journeys.

Yeah, so I started with 1.17, I believe. I actually was an enhancements shadow with Nabarun, and I've been part of it since then. I shadowed multiple roles, like enhancements, bug triage, and docs, and then I led multiple roles as well; I was a docs lead and an enhancements lead, and now I'm participating in the 1.22 release as a release lead shadow. So yeah, I've been here for a really long time and really enjoy it. What about you?

Well, I think I can relate to both of you here. I joined last year as a release shadow in the 1.19 release cycle, as a docs shadow, and I worked alongside Anna on that. After that, I shadowed the comms role and led the comms role last cycle, as you probably already know from the introduction. And along with Anna, I am one of the release lead shadows for this cycle, 1.22. It has been an amazing experience, and it's one I recommend to every student, every aspirant wanting to get into open source, because it's a different thing to contribute to something that's larger than yourself. So yeah, that's about it from me.

Awesome, thank you both for sharing your experiences. I also started around Kubernetes 1.17: I started out as an enhancements shadow, shadowed enhancements again, led enhancements in 1.19, and eventually became the release lead shadow and then led the release in 1.21. It's a fun journey, and if you want to learn about how the Kubernetes community works, this is one of the programs you should look at. The program has been really competitive in the past few cycles, so don't worry if you are not selected; you can still contribute to the community in a lot of different ways. We hang out in the Kubernetes Slack, and I'm leaving the link in the chat, so if you want to join, please do. We are in the channel called #sig-release, where most of the release team hangs out. Thank you all for joining today, and thank you, Divya and Anna, for hosting the session with me; it was really great to enumerate all the features in Kubernetes 1.21. I'll hand it back over to Libby.

Thank you all so much. Thank you for joining us at CNCF for our live webinar, and thank you, Divya, Anna, and Nabarun, for leading us through this. Check the website later today; the recording and slides will all be up and ready to go. Thank you all so much, keep the conversation moving on the Slack channels, and we'll see y'all next time.