Okay, let's get started. Welcome, everyone, and thank you very much for joining us today for this CNCF webinar, What's New in Kubernetes 1.19. I'm Jerry Fallon and I'll be moderating today's webinar. We'd like to welcome our presenters: Nabarun Pal, Infrastructure Engineer at ClearSites; Taylor Dolezal, Senior Developer Advocate at HashiCorp; and Max Körbächer, Manager of Cloud Native Engineering at Storm Reply. A few quick housekeeping items before we get started. As an attendee you are not able to talk during the webinar, but there is a Q&A box at the bottom of your screen; please put your questions in there and we'll get to as many as we can at the end. Please note that this is an official CNCF webinar and as such is subject to the CNCF Code of Conduct, so please don't add anything to the chat or questions that would violate it, and be respectful of your fellow participants and presenters. The recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. And with that, I'll hand it over to our presenters for today's webinar.

Thank you very much, Jerry. Welcome to this webinar about the Kubernetes 1.19 release. I'm Max, the Kubernetes 1.19 communications lead, and I'll be the moderator for today's session. With me are Taylor and Nabarun, who were the key figures moving this release forward: Taylor as the release lead and Nabarun as the enhancements lead. For today's agenda, we'll first look at the 1.20 release and what's coming up there, then move on to the 1.19 stats and the 1.19 highlights, including how we arrived at the great name for the Kubernetes 1.19 release and some of the more interesting changes in it. Then we'll walk through the updates from the different SIGs, and at the end we'll discuss a few of your questions. Remember that you can ask questions at any time; I'll try to answer some of them during the session, and anything that needs a broader discussion we'll move to the end to give it more room. With that said, please go ahead.

Sure. I'm going to cover some of the 1.20 release dates that we have coming up. Jeremy Rickard is leading the 1.20 release, and I spoke with him just yesterday. We talked a little about how 1.20 will likely be the last release of 2020; I don't anticipate anything else jumping in there and surprising anyone. They're working on defining a test freeze, and I believe a PR for that went in last week. The release is targeted for Tuesday, December 8th, the enhancements freeze is Tuesday, October 6th, and all of the shadows for that release have been onboarded. The cycle kicked off on September 14th, and the original target of December 8th is still what we're looking at for the release date. But I'm happy to have you all here today, and really excited to cover what came out in 1.19 and answer all your questions on that. With that, I'm going to turn it over to Nabarun, who is going to kick us off with enhancements.

Hi, everyone. I'll give you a brief overview of the enhancements we shipped in the last release. We shipped a total of 34 enhancements in 1.19. By enhancements, we mean features in Kubernetes.
They can be API changes, usability fixes, test changes, or conformance changes. We usually categorize enhancements into three broad stages: alpha, beta, and stable. We had 10 stable enhancements, which means there is close to 100% confidence that these features are here to stay; they may still be improved, but not in ways that would substantially affect users. We have 15 enhancements graduating to beta in 1.19. Features are tagged beta when their owners are confident enough that they may go to stable, or generally available, in the next few releases, so the API remains more or less constant between beta and stable. And then we have nine alpha features. These are mostly brand-new additions to the Kubernetes project, and we'll go through them later when we do the SIG updates. With that, I'll hand over to Taylor for some highlights of the 1.19 release.

Thank you so much, Nabarun. The release name chosen for 1.19 is Accentuate the Paw-sitive. I have to give a huge shout-out to Hannabeth Lagerlof for designing the logo and truly capturing what it felt like to be on the release team during this time. We faced a lot of uncertainty, a word I'm sure you've heard numerous times over the course of this year, but it was really true: we started the 1.19 release in one world and ended it in a completely different one. It was also quite a marathon release, and I believe it was the longest release we've had to date. We really wanted to focus on the community; that's why it got stretched out so long. We wanted to give people time to work through their enhancement proposals and features, because if the community isn't behind us, working well together and communicating, we don't really have an open source project. The community makes the project, not the other way around. So I was really happy to see everyone in good spirits while working on this release and while being sensitive to all that was going on around the world. Within the logo there are some fun little Easter eggs: if you look closely you can see the Kubernetes logo, the hat, and hints at things that came out during the 1.19 release that a lot of people enjoyed. And I'm pretty sure, though I haven't asked these characters, that they're using a green screen, because I don't think I've ever seen that part of the beach just yet, but I'll have to ask them later.

In terms of new things, we'll cover each of the features individually, but at a glance: first, structured logging, which I'm quite excited about. JSON logging on standard out and standard error is going to make ingestion a lot easier, and you won't have to write any wild regex rules on that front anymore as of 1.19. Then storage capacity tracking for capacity management: storage got a big uplift in 1.19, and there are a lot of new ways to deal with it. It's no longer treated as nebulous, infinite storage; there are ways to much more tightly control how it is used and utilized within Kubernetes 1.19.
So again, we'll talk more about that, but it's also something I'm quite excited about. Next: allow users to set a pod's hostname to its FQDN. This will help with legacy systems or other things that need a fully qualified domain name to transition over. So if you have a service called foo, you can get the fully qualified domain name when you set this and you call it from within the pod itself, rather than just getting the short service name. Then: allow CSI drivers to opt in to volume ownership changes. That again is a container storage interface improvement in the same vein of having a bit more control over storage and how it's defined. And the same goes for generic ephemeral inline volumes. We'll cover more of these as we proceed.

1.19 also marks a brand new support model for us. Previous releases were only supported for nine months, which worked well with the three-release cadence: at any given point in time we were working on one release, had one out, and had one behind. This is the first time we're moving to one year of support, and it's in reaction to what the community has expressed: for some organizations, even with that window, it's difficult to uplift all of these workloads and get them ready for the next version, or N-plus versions, of Kubernetes. We heard that and wanted to react. So I'm very excited to announce that with 1.19 we'll have a year of support, to allow people a little more time to shift their workloads over and deal with things like the deprecations in 1.16. Hopefully this makes things easier for most teams. With that, let's jump into the SIG updates, and to kick us off I'm going to turn it back over to Nabarun.

Thank you, Taylor, for all the highlights from 1.19; that was really awesome. I'll now go through the SIG updates. We've categorized all the enhancements by SIG: whenever a feature is added to a Kubernetes release, it has to be driven by a group of people, and the Kubernetes community is structured into logical groups called special interest groups, which own the code in particular areas. That's why we're doing this SIG by SIG, and the first SIG is API Machinery. The first feature it shipped in 1.19 relates to conditions: when you look at Kubernetes resources, you have a status field, and inside the status field there's a list called conditions. The schema of those conditions has varied quite a bit depending on the resource. With this release, a feature shipped that specifies guidelines and a default condition type that any API can use, so API designers have a standard shape for conditions in status objects and can build further attributes on top of it (there's a small sketch of that shape below). This is graduating to stable, so it is available in 1.19 by default.
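As a rough illustration of that standardized condition shape, here is what a status block can look like; the resource and values are hypothetical, but the field names follow the new convention:

```yaml
status:
  conditions:
    - type: Ready                        # which aspect of the resource is being reported
      status: "True"                     # "True", "False", or "Unknown"
      observedGeneration: 3              # the generation this condition was computed against
      lastTransitionTime: "2020-08-26T10:00:00Z"
      reason: MinimumReplicasAvailable   # short, machine-readable, CamelCase
      message: Deployment has minimum availability.
```

Having the same handful of fields everywhere means tooling can reason about conditions without knowing each resource's quirks.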
The next feature in the API Machinery SIG is a warning mechanism for deprecated APIs. Kubernetes follows a graduation process for its APIs as well, from alpha to beta to stable, even for the REST APIs. Take the Ingress API: it currently lives in two different API groups, one version being v1beta1 and the other the stable v1. If you try to access the v1beta1 Ingress resource, you'll get a deprecation warning saying, hey, this resource will be phased out in 1.22, can you please use the new one? So when you use networking.k8s.io/v1beta1 Ingress, it prompts you to use the stable one, networking.k8s.io/v1 Ingress. We have a nice feature blog on this, so when we post the slides you can click through where we wrote "feature blog" and read the post on the Kubernetes website.

Moving on, the next SIG in focus is Architecture. The first item there clarifies the use of node-role labels within Kubernetes and also attempts to migrate the components that currently rely on them. Traditionally there has been a label of the form node-role.kubernetes.io/something, and several components, even inside the Kubernetes codebase itself, were using it to change behavior. But the purpose of the label was to be an attribute for consumers of Kubernetes to inspect and build on, not something core components should key their behavior off. In this release it's a beta rollout: along with clarifying the usage of the label, the components that consume it have been identified and an effort has started to migrate them away from that behavior. The next SIG Architecture feature is enabling the conformance tests to run without beta REST APIs or features. This doesn't quite follow our alpha/beta/stable staging because it concerns internal plumbing, so it's more of a staged effort. The idea is this: projects inside and outside the Kubernetes ecosystem that build on Kubernetes as a product need a stable and reliable foundation, so when you run the Kubernetes conformance tests, which verify whether a distribution conforms to the spec, the tests no longer depend on beta features being enabled. The next feature is important in a sense, and it's related to the deprecation warnings. There have been instances in the past where certain Kubernetes resources, say CronJobs, Ingresses, or PodDisruptionBudget policies, have been stuck in beta for a long time. It's a double-edged sword: when people ship something to beta, it gets enabled by default in Kubernetes distributions, so there is little incentive to actually take it to GA, but leaving it in beta can lead to instability and user friction. With this release it has been mandated as policy that, along with any new beta features coming in 1.19, existing beta APIs have to either reach GA and deprecate the beta, or ship a new beta version; for example, they can go from v1beta1 to v1beta2, or they have to go to v1. Some resources have already transitioned in 1.19, like Ingress, which had been in beta for many, many releases; I'll come back to that in the SIG Network updates. With that, I'll hand over again to Taylor for more updates on the SIG front. Thank you very much, Nabarun. We needed a baton to keep passing back and forth.
So I'm glad that these authentication callouts will no longer be a secret, haha. Looking at the first one in authentication: kubelet client TLS certificate rotation. Before, with the kubelet client certificates, there was an out-of-cluster system set up to handle that rotation. While it was done automatically, it was not as efficient as it could be and it didn't operate within the cluster. Even though it was secure, we get even more security by having this happen inside the cluster. This feature has now moved to stable, and as the expiration date comes up, the certificate automatically gets rotated. You'll notice that each of these slides links the stable tracking issue and the enhancement proposal, so as the slides get distributed you can click on those and dig into the details. The next one is limiting node access to the API. This is a security-conscious enhancement: previously, nodes were able to set labels on themselves, including some within the k8s.io and kubernetes.io namespaces. That is now protected and no longer the case. This has moved to stable and overall makes your workloads a little more secure; you can operate with a bit more confidence that things aren't changing underneath you on your actual nodes. Then the certificate signing request API. The certificates API handles the certificate authority used to encrypt traffic between a lot of the core components within Kubernetes, and this adds a registration-authority-style flow so that the signing process is more controlled. You now have that endpoint available if you want to include it in your operators or other machinery around core Kubernetes (there's a small sketch of a request below).

Cluster Lifecycle is mine as well. The first new feature here is the new kubeadm component config scheme. The way kubeadm manages component configuration is getting a big refresh; some of those changes include no longer defaulting component configs and delegating config validation. This is a new feature, and kubeadm is getting a lot of work done to it in the releases to come, which I'm quite excited about in terms of more granular configuration and smoothing some of the rough edges the community has run into. The next one is customization with patches. I also found this quite interesting: a new flag, --experimental-patches, has been added, very similar to the kubectl style of declaration. So if you want to set different values for dev, test, prod, or other environments, you can do so. Once this moves out of alpha and into beta, that flag becomes --patches. And with that, I'd like to hand it back over to Nabarun to talk about instrumentation.
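Going back to the certificate signing request API mentioned above, here is a minimal, hedged sketch of what a request against the GA API can look like; the name and the truncated request payload are purely illustrative:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-user-csr              # hypothetical name
spec:
  request: LS0tLS1CRUdJTi4uLg==       # base64-encoded PEM CSR (truncated, illustrative)
  signerName: kubernetes.io/kube-apiserver-client   # which signer should handle it
  usages:
    - client auth
```

Once created, an approver can approve or deny the request, and the named signer issues the certificate into the object's status.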
Thank you, Taylor. With the Instrumentation special interest group, we have events. If you look at Kubernetes, there are a lot of resources and components that generate events, but one thing workloads need to ensure is that the rate at which they churn out events doesn't impact other parts of the cluster. Users should also be able to track what changes are happening, whether that's the verbosity of the events or, say, finding out which component or which controller of the cluster actually generated a given event. With this release, a redesign of the event API happened. You can go ahead and read the enhancement proposal; it has a lot of detail on how the structure changed. And there is one more really good enhancement along similar lines coming up next, which is structured logging, as Taylor mentioned in the highlights. If you look at Kubernetes logs from controllers or any other component, they have traditionally been plain strings. Now there's a backward-compatible change in 1.19; it's alpha, by the way, so if you want to use structured logging you have to enable the feature flag that turns it on. What it gives you is an additional function called InfoS in klog, where you specify key/value pairs whose object references get parsed out later when you look at the logs. As in the example we added: if you call klog.InfoS("pod status updated", "pod", klog.KObj(pod), "status", status), it comes out nicely structured in the logs. One more thing: you can also get all the output in JSON format. You just pass a flag, --logging-format=json, which turns out the logs in JSON wherever applicable, whenever InfoS or ErrorS was used. What this enables the end user to do is ship the logs to any log warehouse and then filter or index them based on the keys, which makes debugging a lot easier when you want to see what happened at a certain point in time.

Moving ahead, we have the SIG Network related enhancements. The first is SCTP support for services, pods, endpoints, and network policies. It was added as alpha, I think a few releases back, and it has graduated to beta, so the feature gate is enabled by default and you don't need to make any changes when bootstrapping the cluster in order to use SCTP protocol ports. On the slide I put a screenshot of a Service resource where the port's protocol is SCTP; a similar sketch follows below. This is very useful in the telecommunications world, where SCTP is used a lot for switching. And one interesting thing: this feature is also slated to go to GA in the current 1.20 release, which is a great win.
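Here's a small sketch along the lines of that slide, a Service exposing an SCTP port; the names are made up:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sctp-demo                 # hypothetical
spec:
  selector:
    app: sctp-app                 # hypothetical pod label
  ports:
    - protocol: SCTP              # beta in 1.19, enabled by default
      port: 9999
      targetPort: 9999
```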
Moving ahead, we have the EndpointSlice API. This is a very substantial change. Traditionally, every Service resource has a way to track the pods to which it has to direct incoming traffic, and it does that using Endpoints objects. You can think of an Endpoints object as an array of references to pods, to give a thousand-foot view of it. Now, say you have around a thousand pods pointed to by the same service: when you want to update that list of references, it becomes a bulky data transfer across the network; you have to fetch around a megabyte of endpoints blob, modify it, and patch it back to the resource. Instead of Endpoints, you can now use EndpointSlices, which chunk the endpoints out into slices, as simple as that. They are enabled in kube-proxy by default with this release. There's a nice blog on the website which walks through a real-life scenario explaining why this was needed; I urge everyone to go read it, and obviously you can also go to the enhancement proposal for the intricate details.

Going ahead, we have graduated Ingress to v1. As I was saying earlier, Ingress has been in beta almost since the beginning; I think it appeared around Kubernetes 1.1, in fall 2015, and it has been beta since then. With this release it has reached GA. A very important change is that earlier, in backends, you put serviceName and servicePort directly as attributes of the structure; now they have been moved under a service structure, inside which you have name and port attributes (there's a small sketch below). It more or less remains the same, it's just cleaner. Going ahead: adding appProtocol to services and endpoints. This is a feature which I think went alpha a few releases back, and I think in 1.17 it was added to ServicePort and EndpointPort as beta, and now it has also been added to Services and Endpoints. So you no longer need those arbitrary resource annotations, say, for some controller that looks at them and acts upon them; now you have a field called appProtocol which makes things much easier. There have been instances where users reported that the annotation approach creates incoherences; there's an issue I hyperlinked as "user frustration" on the kubernetes/kubernetes repo that you can go and read. Going forward, we have the SIG Node updates, and with that I'll hand over back to Taylor.

Thank you, Nabarun. So, quite a few SIG Node enhancements. The first one is seccomp, which has excited a lot of people; I've seen a lot of demos on this and worked with David Racco on one of these on a different stream as well. What this does is give you the ability to set a seccomp profile for a pod and control the privileges given to it: you can put a seccomp profile onto the pod, either referencing the runtime's default profile or setting and configuring your own (see the sketch below). I wish you all lots of luck in configuring seccomp profiles; that's something I usually do, but typically in my nightmares. It's very important and something you should do, but it is quite an effort. Moving on to the node topology manager. For this feature, which has moved into beta, the use case is teams that have to spin up a lot of compute and need low-latency response times. They need that low level of latency and prefer to stay on one core of the CPU rather than breaking work up across multiple cores and risking that overhead. So this is really about giving more control over how to burst out and work within those clusters, providing preferred allocation and pod-level topology hints.
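To make the graduated Ingress shape described above concrete, here's a rough v1 manifest; the host, service name, and port are placeholders:

```yaml
apiVersion: networking.k8s.io/v1      # the GA API group/version
kind: Ingress
metadata:
  name: example-ingress               # hypothetical
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:                # serviceName/servicePort now live under a service block
                name: my-service
                port:
                  number: 80
```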
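And for the seccomp enhancement, a minimal sketch of attaching a profile through the pod's security context, assuming the first-class seccompProfile field the GA work introduces; the pod name and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo                  # hypothetical
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault            # use the container runtime's default profile
  containers:
    - name: app
      image: nginx                    # illustrative image
      securityContext:
        allowPrivilegeEscalation: false
```

A Localhost type with a localhostProfile path can point at a custom profile file on the node instead.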
The next one is building the kubelet without Docker, and that's exactly what it sounds like: removing the dependency on the Docker Golang package and allowing the kubelet to compile and work without that Docker dependency. Note that this isn't about removing the dockershim code in-tree at the current point in time. The next one is allowing users to set a pod's hostname to its FQDN. We talked about this a little in the highlights; it's very much there to help with interoperability with legacy systems, and it's very easy to set for your pods, just setHostnameAsFQDN: true (see the sketch below). Those are my favorite types of features, the ones that are easy to enable. Moving on to the kubelet feature, disable accelerator usage metrics. With third-party device monitoring plugins and the PodResources API about to go GA, the kubelet isn't expected to gather these metrics anymore, so this enhancement is really about deprecating the kubelet's collection of accelerator metrics. And with that, I'm going to kick it over to Nabarun to close out these SIG enhancements. You're up, Nabarun.

Thanks, Taylor. I assure you I won't switch back to you again for the rest of the updates. With scheduling, we have, I think, five features shipped. The first of them is graduating the kube-scheduler component config to v1beta1. To give you a little context on what component config is: as you saw in the kubeadm updates too, the idea behind component configs is, what if you could configure the Kubernetes components themselves with Kubernetes-style resource manifests? This effort has been going on for about the past year under Cluster Lifecycle and the Component Standard working group, and this specific enhancement focuses on kube-scheduler. I've put a small, very basic snippet on the slide of how a component config looks for kube-scheduler. This went beta in 1.19, the KEP owner wants to let it soak for at least two releases, and hopefully it will go GA in 1.21 or thereabouts. Next up is running multiple scheduling profiles. This is a very interesting enhancement, I would say, from a personal point of view; we face a lot of problems when scheduling workloads in a Kubernetes cluster. The problem comes up when you have heterogeneous workloads: say you have long-running jobs, batch workloads which you can't really interrupt, and also very ephemeral jobs like stateless web servers that you can basically kill at will. You could solve this with multiple schedulers, but there's a big issue with that: race conditions and scalability concerns. What multiple scheduling profiles does is introduce profiles within a single scheduler, so you can have different algorithms for different kinds of workloads (the sketch below shows a component config with a second profile). It went alpha in 1.18, graduated to beta in 1.19, and is eventually slated for GA in 1.21.
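Since the slide snippet isn't reproduced in this text, here is a rough sketch of what a kube-scheduler component config with a second profile can look like; the profile name is hypothetical:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler        # unchanged default behavior
  - schedulerName: no-scoring-scheduler     # hypothetical profile, e.g. for throwaway batch pods
    plugins:
      score:
        disabled:
          - name: '*'                       # skip all scoring plugins in this profile
```

A pod then opts into a profile by setting spec.schedulerName to that profile's name.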
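And here's the setHostnameAsFQDN example referenced a bit earlier, as a hedged sketch; the hostname and subdomain are placeholders, and the subdomain assumes a matching headless Service exists:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fqdn-demo                   # hypothetical
spec:
  hostname: foo
  subdomain: demo-subdomain         # assumes a headless Service named demo-subdomain
  setHostnameAsFQDN: true           # alpha in 1.19, behind the SetHostnameAsFQDN feature gate
  containers:
    - name: app
      image: busybox                # illustrative
      command: ["sleep", "3600"]
```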
Moving ahead: even pod spreading across failure domains. Today you can set up affinities and anti-affinities and, to a certain extent, model the spreading criteria of your workloads across a cluster. Say you have a web server and you don't want its replicas to run on the same node: you can say that no two pods should run on the same node, and they will get spread over your cluster as much as possible, within other restrictions; if you have more replicas than nodes, obviously this can't be satisfied. This feature adds more controls for the end user to express scheduling heuristics like that and achieve both high availability and good resource utilization. A very important thing here is that you can say whether a heuristic is a hard requirement or a soft requirement, basically differentiating between a predicate and a priority; whatever you configure your workload to, your pods will get scheduled in that manner (there's a sketch below). It went stable this release cycle, so it is here to stay. Going ahead: adding configurable default constraints for pod topology spread. This is related to the previous feature, which went GA this release, but this one was added as a new feature in this release. What can happen is that if you have, say, thousands of different kinds of workloads, it gets tedious to specify topology spread constraints in every pod specification. This enhancement lets you set default spreading constraints, so you don't need to set them on every workload unless you want to override the default. It went alpha in this release and hopefully goes to beta in the next release or the one after. I'm really excited about the scheduling features; they're a real boon. Coming to the next feature: adding a non-preempting option to priority classes. Priority classes have been a GA feature since, I think, 1.14, and they impact the scheduling and eviction of pods. Pods are effectively scheduled in descending priority, and lower-priority pods are preempted, killed, if a higher-priority pod comes in and there is resource exhaustion in your cluster. This enhancement adds a non-preempting option: you can say that this priority class may or may not trigger preemption, that is, you can disable the preemption behavior for it. It's an attribute on the priority class; the exact name is written in the enhancement proposal, and there's a sketch below. The default keeps the previous preemption behavior, but if you want, you can opt a priority class out of preempting other pods.

Going ahead to the next area, storage. Storage also has a lot of updates coming in this cycle, the first of which is immutable secrets and config maps. This went alpha last release and has graduated to beta in this release, which means you can use the feature without switching on any feature flag. To give a little context on what it does: the default behavior is that if any secret or config map changes, it gets watched by the kubelet and then updated on the pod. Now, say you have thousands of those objects; watching all of them becomes an expensive job. Marking them immutable tries to solve that problem (see the sketch below).
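A sketch of the immutable flag on a ConfigMap (the same field works on Secrets); the name and data are made up:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                  # hypothetical
data:
  LOG_LEVEL: info
immutable: true                     # beta in 1.19: further edits are rejected and kubelets stop watching the object
```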
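Circling back to pod topology spread, which went stable this release, here's a rough example of the hard-versus-soft choice; labels and keys are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spread-demo                        # hypothetical
  labels:
    app: web
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: kubernetes.io/hostname  # spread across nodes
      whenUnsatisfiable: DoNotSchedule     # hard requirement; ScheduleAnyway would make it a soft preference
      labelSelector:
        matchLabels:
          app: web
  containers:
    - name: web
      image: nginx                         # illustrative
```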
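And for the non-preempting option on priority classes, a hedged sketch assuming the preemptionPolicy field described in the enhancement proposal:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting   # hypothetical
value: 100000
preemptionPolicy: Never               # this class won't evict lower-priority pods; the default keeps preemption on
globalDefault: false
description: High priority without triggering preemption.
```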
Going ahead, we have a couple of CSI driver migrations in this release, two to be exact. First is the Azure Disk in-tree to CSI driver migration: if you are using Azure Disk for storage in your cluster and you have the CSI driver installed, you can just turn on the feature gate called CSIMigrationAzureDisk to use the out-of-tree code. Same case for vSphere: there's a feature flag called CSIMigrationvSphere which enables the out-of-tree code path. Going ahead, we have two related enhancements. One is storage capacity tracking. Often the Kubernetes scheduler has literally no information about whether a CSI driver can actually create a volume on a specific node. What this feature does is expose storage capacity information that the scheduler looks at to determine whether a pod can be scheduled there or not. There's a feature blog on this which describes it as well. It's alpha right now, so you have to enable the feature flag to use it. Next up is generic ephemeral inline volumes, which is related to the previous one. It gives you a nice way to extend Kubernetes with CSI drivers that provide lightweight, local volumes. There's a new volume source, the ephemeral volume source, which contains the fields needed to create the volume claim (see the sketch below). Another thing to note is that the pod that creates the resource gets tagged as its owner, so if you delete the pod, the claim gets deleted automatically through the default Kubernetes garbage collection. Again, reiterating: all the alpha features need to be enabled explicitly with their feature flags. The last enhancement in storage is allowing CSI drivers to opt in to volume ownership change. To give a short background: if you specify an fsGroup in your pod security context, any volume that you mount gets its ownership changed to that fsGroup. But it's not necessarily the case that the storage backend behind the CSI driver you're using supports ownership modification via fsGroup; NFS, for example, does not. So the CSI driver can now declare whether it supports fsGroup-based ownership changes, and based on that, the fsGroup you set in the pod is honored or not (there's a sketch below). This is again an alpha feature that has to be enabled in order to use it.

Going ahead, the last item on the roster for us: Windows. We have a single enhancement in Windows, which is supporting CRI-containerd on Windows. What it tries to do is broaden the set of Kubernetes features you can use on Windows by going through the containerd CRI implementation, so users can now choose to run the containerd CRI instead of Docker EE. This change going to stable gives implementers a path to deliver Kubernetes-specific features that are not available in the Docker API but are available through the containerd API.
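Here's a hedged sketch of a generic ephemeral inline volume; the storage class name is an assumption for the example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-demo                       # hypothetical
spec:
  containers:
    - name: app
      image: busybox                       # illustrative
      command: ["sleep", "3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      ephemeral:                           # alpha in 1.19, behind the GenericEphemeralVolume feature gate
        volumeClaimTemplate:
          spec:
            accessModes: ["ReadWriteOnce"]
            storageClassName: example-csi-sc   # assumed storage class backed by a CSI driver
            resources:
              requests:
                storage: 1Gi
```

The generated claim is owned by the pod, so deleting the pod garbage-collects the volume, as described above.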
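And a sketch of the CSI ownership-change opt-in, assuming the fsGroupPolicy field that the proposal adds to the CSIDriver object; the driver name is hypothetical:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: nfs.csi.example.com          # hypothetical driver
spec:
  attachRequired: false
  fsGroupPolicy: None                # driver declares it does not support fsGroup-based ownership changes
```

With None, the fsGroup set in a pod's security context is simply not applied to volumes from this driver.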
And with that, that's all on the announcements, and I will hand it back to Taylor to talk a bit about the release team shadow program. Hey, so, release team shadows. With each new Kubernetes release there is a new release team, comprised of Kubernetes members who handle the day-to-day logistics of the release itself. The team is broken down into seven different roles, and for those interested, I definitely recommend checking out the Kubernetes SIG Release repository. Inside there are role handbooks that describe each of those different roles, so you can see what each one involves, depending on where your interests lie. Personally, I started in similar shoes to Max and worked through the communications roles: I was the communications lead for the Kubernetes 1.16 release, took 1.17 off, came back in 1.18 as a release lead shadow, and then led the 1.19 release. The program is really fantastic in that you just show up, you learn, and you gain a lot of information from being part of it. And for those of you nervous about having the world see your contributions, whether code, documentation, or anything else: no worries, the Kubernetes community is really fun, friendly, and engaging, and if you have questions, that's the best place to ask them. The team is fantastic and very easy to work with. As you read through the role handbooks, if you're interested, you can see what might be a good fit for you or something you want to learn; maybe something you don't do in your nine-to-five job, or just something you have an innate curiosity about. We walk you through the shadow program and try to get everyone set up to potentially be a lead in some capacity. The goal is to train new leads so that when a lead isn't able to make it, they have a conflict, a life event happens, someone is stuck at the motor vehicle registration place, we can call on shadows to help out. I leaned very heavily on Jeremy and Bob, my release lead shadows; I went through a job transition, which is yet another example of a life event coming up, and Bob and Jeremy were always really keen to help out, and I really appreciate that from both of them. For each role there is one lead and typically three to four shadows, selected via the shadow application process. The application usually gets advertised near the end of a release or just before a release starts, and you'll typically see it shared on LinkedIn, Twitter, and several other venues and avenues. Release cycles generally last around three months, though 1.19 is a prime example of going quite a bit beyond that. Weekly workloads ebb and flow, as some teams are busier than others at different points: Enhancements, as Nabarun and Max can attest, is very busy during the beginning of the release cycle, and Communications is very busy near the end, as examples. That is also covered in the role handbooks with weekly breakdowns, which give you a really good understanding of what you're in for and what time commitment you'd be making if you're willing to put it into the shadow process. For more information, please click on the release team shadows GitHub link once the slides are shared out. With that, I'd like to open it up for questions and turn it back to Max and Nabarun, in case you wanted to say anything about the shadow program or your experiences on that front while we wait for questions to come in. I would like to say a few words about the program: it's a great program. Many people ask me how to get started contributing to Kubernetes, and there's this beautiful program, right?
With the release team shadow program, you can just come in and say, hey, I'm interested in working in this vertical, this really interests me, can I work on it? And then you get to meet so many awesome people who help you get started and grow into the community. I started nearly one year back, with Kubernetes 1.17, as an enhancements shadow. Then I shadowed again in 1.18 because I really liked the role, then I led the 1.19 enhancements, and now I'm shadowing the release lead this cycle. It's really fun, and you get to feel the responsibility that is bestowed upon you: you have an important role in the community, you are looking after a release. The team is also very diverse, with folks from all over the community; there's a huge emphasis on diversity. As Taylor and Max can confirm, right now on this call we span about twelve and a half hours of time zones: I'm on IST, Max is on Central European time, and Taylor is on US Pacific time. So there are literally no barriers other than going ahead and applying to the program. And even if you don't apply, just come to the Slack channel and say, hi, I want to work on this. Literally nobody would say no, I assure you that. Absolutely. And it's really great to see how all the massive contributions from this huge, fantastic community get brought together step by step: you see the version getting more and more complete, the quality going up, then it gets ready for release, everyone sits there getting nervous, you see the countdown, and in the last moments someone writes, now we are going to release it. Seeing everything that goes on behind the scenes is really great.

Jerry, how much time do we have left to potentially go into some of the questions? We have no open ones right now, but we can at least highlight one or two of the questions we already answered. We have about four minutes left. Okay, that sounds like we can at least look at one or two things we got so far. One question I really liked was about EndpointSlices: there are some issues with DaemonSets in really huge clusters during the DaemonSet update cycle. The answer is yes, with EndpointSlices this problem is also going to be addressed in the future. There's a really great blog post about it from Rob Scott, who drafted this and worked mainly on it. Why? Every piece of communication within Kubernetes goes through the Kubernetes API server, not just requests coming in from the outside world, literally every communication. And that's why, in a really large cluster, you sometimes see not exactly slower, but somewhat inconsistent performance through the API server. That's exactly what EndpointSlices will help with in the future: as things grow, you have smaller chunks, and the update cycles shouldn't be affected as much. Then there was another question, from earlier on, about alpha and beta features ending up in the CKA and CKAD. Officially we can't give the definitive answer, because we don't write the curriculum and decide what should or shouldn't be included, but it was actually already answered well in the Q&A. Ingress is a really specific case because it's so old.
I mean, it should almost go in the retirement section already. But no, it's a stable thing now: even though it spent such a long time in beta, it has become officially stable, and that's why some alpha and beta features end up there. Normally that shouldn't happen, unless we find something else that is super old and stuck in a beta or alpha version. Also really interesting was the question about the kubelet and client certificate rotation. With client certificate rotation configured, it does this on its own; yes, you can also force it if needed, but normally it just takes care of it. And then maybe the last comment: "This release is cool, thank you. It's matching with our mandate of resilience." Thank you very much for this comment. As I said, it's mainly about the contributors out there, who spend most of their free time contributing to the Kubernetes release, giving their heart, blood, and sometimes sweat, even in these difficult times, to keep contributing and support this really great open source project. That's just about all the time we have for today. We'd like to thank our presenters for the presentation and the slides, and thank everyone again for joining us today. Have a wonderful and safe weekend, and we'll see you next time. Thank you so much, everyone. Thank you. See you around the cluster. See you around the cluster.