There we go. Thank you for joining us today, everyone. Welcome to today's CNCF live webinar, Kubernetes 1.20. Welcome to our very first live webinar of 2021. Thanks for kicking it off with us. I'm Libby Schultz, and I'll be moderating today's webinar. We'd like to welcome our presenters, Jeremy Rickard, Software Engineer at VMware, and Kirsten Garrison, Software Engineer at Red Hat. A few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee. There is a Q&A box that I will activate right now, so you should be able to see that next to your chat. Leave your questions there. Feel free to pop them in now or toward the end, whenever you think of them, and we will get to as many as we can at the end. This is an official webinar of the CNCF, and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF website, as well as back through this registration link at community.cncf.io under online programs. With that, I will hand it over to Jeremy and Kirsten to kick it off.

Hey, thanks Libby. Just before we get started, are the slides coming through okay? Do they look like a good size? Is it pretty readable? All right. So like Libby mentioned, my name is Jeremy, and I was the release lead for 1.20, and joining me today is Kirsten — she was the enhancements lead for 1.20. I work at VMware on an internal platform, so my team runs Kubernetes, I build things on top of Kubernetes, I do a lot of things like that. I participate in the release, and I think there's a couple of really awesome things that I'm looking forward to deploying in 1.20 whenever we get around to upgrading to it. Kirsten, do you want to introduce yourself?

Sure. I'm Kirsten. I'm a software engineer at Red Hat.
I work on the Machine Config Operator, which is an operator in OpenShift. I've been on the enhancements team, I think, since 1.17 — I've been a bug triage shadow and an enhancements shadow a couple of times, and the enhancements lead last release. That's been a great experience, and I think we'll talk about it a little bit at the end of the presentation as well. So if you have any questions about that process, we're happy to talk.

Yeah. I didn't realize that you had been a bug triage shadow in 1.17. So we started around the same time — I was an enhancements shadow in 1.17. It's a pretty cool coincidence. Time flies when you're having fun, I guess.

Yeah. It's been quite a journey, all the things happening.

So we're here today to talk about the Kubernetes 1.20 release, which we lovingly called the Raddest Release, for a whole bunch of reasons. First and foremost, this is a big, big release. We'll see some numbers in a little bit, but really this was one of the biggest, if not the biggest, Kubernetes release in quite some time. One of the fun things that a release lead gets to do is pick the theme — pick a mascot or a logo. In this case, I wanted to pay homage a little bit to Kubernetes 1.14. That release was known as Caturnetes, and Caturnetes, if you hadn't seen it, was this really great picture with the Kubernetes logo and a bunch of cats. So I wanted to pay tribute to that and just have a little bit of fun. 2020 was kind of a rough year in a lot of ways, and I just wanted to end the year with a little bit of fun. So here's my cat, and we styled him up in kind of a 1990s school picture, with lasers in the background. He's a very fun, happy, happy guy, so he kind of captured my feelings. The release was really fun; everybody was super positive the whole time, despite all the things going on. So it just kind of set the stage for me.

Yeah, it was definitely a great experience.
And it was the thing I looked forward to a lot last year, out of all of my meetings and participation. It was just a really nice thing.

Yeah, definitely a highlight of the year for me. Okay, so today we are going to start off by giving you a little bit of an update on 1.21 and what you can expect, and what you can expect just generally around releases going forward. We'll talk a little bit about 1.20 in numbers — just kind of look at it compared to other releases. Then we'll run through some highlights and show you what's new from each one of the SIGs. We'll go through that in kind of a rapid fashion, because there are so many of them this time around, and we'll leave a little bit of time at the end for Q&A.

So first up, the 1.21 release updates. 1.21 is actually going on right now; it started a few weeks ago. The next major milestone you really need to be aware of, if you're kind of following along, is enhancements freeze, which is going to come on February 9th — so coming up pretty quickly. And that'll set the stage for the release on April 8th. In between, you'll have a bunch of milestones; a really important one will be code freeze. Around those two dates, you'll get a pretty good understanding of what's going on, and we'll talk about how you can dig into those things as we walk through some of these issues.

One thing to be aware of is that if you look back at 2020, there were only three Kubernetes releases. 1.20 was the third release of the year, and it was the last release of the year. In previous years, there have been four releases — so generally one a quarter. But because of all of the uncertainty and kind of the turmoil of 2020, the 1.19 release became pretty long and extended, and that really ate up most of the year. So when we got to the 1.20 release, we didn't have enough time to do another release after that; it ended up being at the end of the year.
There's been some ongoing discussion about whether that's the right cadence or not. Do we go back to four releases? Do we stay at three releases? There are pros and cons in both directions. I think the decision on this is hopefully going to be made around enhancements freeze, so in a few weeks we'll have a better understanding of what it's going to look like going forward for the 1.22, 1.23, and 1.24 releases and when they'll land. But if you're interested in this at all and would like to provide feedback, we've linked to a discussion on GitHub that you can go to. This is using the new GitHub Discussions feature, where you can provide your feedback and your thoughts on the release cadence. We would really, really recommend and encourage you to go and add any feedback, positive or negative, on three releases versus four releases. When SIG Release and the community are building these releases out and pushing forward with new Kubernetes versions, it's really for the community and the people that are going to consume the releases. So we really want to make sure that we're satisfying the desires of the community and balancing that with the needs of the contributors.

Okay. So now the super exciting part of the presentation, where we're going to talk about new things that are in 1.20 — and there are so many. So we'll start out — Kirsten, do you want to give us a little bit of background on the numbers?

Sure. So as Jeremy said, it was actually quite a large release. It was a bit hectic, but there was also a lot of pent-up demand for enhancements. A good number of them were in a really great state by the time we started the release, and then we had the normal amount of enhancements that also went through the process like usual, one step at a time.
So we had 44 total enhancements in 1.20. To compare that to 1.17, which I believe was the same sort of time period, there were 22 in that release — which seems a little low, but you can see that we had a lot more than the prior year. We had 16 stable enhancements, which are basically GA — I think that's up from probably eight in the previous release. We had 15 graduating to beta, 11 new alpha features, and three deprecations that we started tracking. And then something that I think is really cool to think about: there were at least five new authors — people who are new to the enhancements process and really getting involved with KEPs and adding features. I think that as we get more new authors participating, it's going to decrease the load on everyone else, but also bring in some great contributors and great ideas. So I just wanted to highlight that in case anybody in the audience is interested in doing an enhancement — this is open to new people, by collaborating with the SIGs and really just working to get your features in. And it is possible. So yeah, these are pretty huge numbers, and I think everybody should be really proud of what they accomplished last release.

I totally agree. I think there are two really important things to hammer in on there. One: this was an end-of-year release, and typically, if you go back and look at 1.17 and previous end-of-year releases, they overlap with holiday seasons, they overlap with KubeCon. There are so many other pressing concerns that come in that the bandwidth for getting these things worked on and hitting code freeze on time has generally limited the number of things in the past. I think this shows that the view that the end-of-year release is a bug fix release, or a stability release, or kind of a waste, isn't really true.
I think this showed that with proper planning — and I think the extended 1.19 timeframe gave all the SIGs more opportunity to get things planned out and ready to go — it doesn't really have to be a waste. And then, just generally, on your point that at least five new authors were responsible for enhancements: enhancements are how we track new things coming into Kubernetes. So if you have an idea for something, the way to get that done is by writing a KEP — a Kubernetes Enhancement Proposal — and that tracks through this process: the new features, the beta things, and finally the things that have graduated to stable. So I think it's just really cool to see that of that number, a good chunk were brand new people. It's not just the same old people contributing to the release.

Yeah, I think it's really exciting, and hopefully that number increases as well. I think that may be something that we track in future releases. I think that's really nice.

Yeah, I think that's a great point. All right. So we're going to go through all of the various SIGs and show you the new things — the things that have gone to stable that you can start counting on. Just one quick thing to mention: we mentioned alpha, beta, and stable. The real difference between those is that alpha is not turned on by default. So if you want to use these features, there are generally feature gates that you have to turn on on the API server, or configuration options you need to pass to the kubelet. Beta features are turned on by default, so you can turn them off if you need to, but by default they're on. And for alpha and beta things, there's no guarantee of backwards compatibility — those things can go away. There's actually a policy put in place starting with 1.20 that things have to promote from beta to stable, or they need to go away.
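As a concrete sketch of what that feature-gate mechanism looks like in practice — the gate names here are just hypothetical placeholders, not real 1.20 gates:

```shell
# Alpha features are off by default; you opt in with a feature gate
# on the relevant component (API server, kubelet, controller manager, ...):
kube-apiserver --feature-gates=SomeAlphaFeature=true ...

# Beta features are on by default; you can explicitly opt out:
kubelet --feature-gates=SomeBetaFeature=false ...
```

Stable features have no gate at all — they are simply part of the API you can rely on.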
I think we'll see some more deprecations down the line falling out of that. But once things get to stable, you have some guarantee that they will be there for a much longer period of time.

And we're also starting to include the production readiness review in the KEP, which I believe also includes considerations about upgrades and downgrades. So we're trying to add more safety measures before features are merged upstream.

Yeah, definitely. So let's jump into a couple of highlights — things that you should really, really be aware of from this release — before we dive into the specifics from each SIG. The first one, which I think everybody is aware of, is that Dockershim has been deprecated in this release. That sounds really scary, and there was a lot of traffic on the internet and a lot of effort from contributors to write blogs to set some of those fears aside. But it's not as scary as it sounds. This is just another example of things that have been beta, or have been not-stable, existing for a long time, and the pressure to move those things along or get rid of them. Dockershim in particular is pretty old and has been in Kubernetes for quite a long time, predating the addition of the Container Runtime Interface and things like containerd and CRI-O. Dockershim was a separate code path that existed in the kubelet, and it just introduced another area that had to be maintained — it's kind of retrofitting the Docker Engine into the kubelet. For anybody that wants to continue using the Docker Engine like that, Mirantis is going to work on a CRI implementation around it, so you'll be able to use the same kind of functionality. You can find a ton of information on the Kubernetes blog that points you in the right direction, and you'll see more of this coming along.
But the big takeaway here is that Docker is not going away, and even the support for the Dockershim isn't going away yet. Starting in 1.20, when you start the kubelet on a node and you're using this feature, you'll just see a deprecation warning. Things will continue to work as-is for a few more releases, so you have time to get ahead of this.

Okay. So there are two other areas that we wanted to highlight real quick: stability work, and then some cool new things. Playing off of that Dockershim deprecation, we see some foundational work on CRI to move it towards beta. It's been alpha for so long — it was introduced in the 1.5 release, and we're on 1.20 now. If we were doing four releases a year normally, think about how long that's been there. The same thing for CronJobs. That was actually introduced as ScheduledJobs before it was called CronJobs, but in 1.8 it became CronJobs and went beta. So that's another one that's been there for a pretty decent amount of time. Another stability sort of thing: exec probes. If you've ever set up an exec probe for a pod, there is a field for the timeout — but that timeout was never honored. So this is a kind of longstanding bug that's been fixed. We'll hit that one in a little bit. And then, just generally, SIG Node has had a lot of things in this release. There were something like 13 or 14 enhancement issues that were owned just by SIG Node, and of those, five graduated to stable. So there was a big push in this release, and I think we're going to see that in upcoming releases too — pushing some of these things that have been beta for a long time into the stable camp.

And then, of course, there are some new things. I think this is where it's really exciting for me as a cluster operator, because some of these new things are really great in terms of just making my life as a cluster operator better.
And my life as a person deploying resources and workloads better. Graceful node shutdown: when the node is going to shut down, the kubelet can become aware of that and properly send signals to the workloads, instead of the node just kind of going away. Better metrics for what resources are being consumed in the cluster, rather than having to cobble this together yourself: starting in 1.20 there will be a nice metrics endpoint where you'll be able to get a good view of requests versus limits and make better planning decisions, from the scheduling point of view. Another really cool one, I think, is the ability to autoscale based on container resources instead of pod resources. Generally, if you're using the HPA, it looks at the pod metrics — so if you have a multi-container pod and one of those containers is maybe skewing that result, either positively or negatively, you couldn't really scale based off of the individual containers. Starting in 1.20, you'll be able to do that. And then finally, there's a bunch of security-related improvements that have come along. Not really new-new, but they're new in the sense that they're fixing some problems and making things a little bit better all around.

Okay, so let's jump into the SIG updates. For this one, Kirsten and I are going to go back and forth and give you a little bit of an overview of these things. Kirsten, do you want to start with API Machinery?

Oh, sure. API Machinery had, I think, four enhancements — two beta, one alpha, and one stable, I believe. We have priority and fairness for API server requests, which is now beta, and there's a ton of work that's been going into that. It's been really great. We also have the deprecation of the selfLink field, which was alpha in 1.16. They've been waiting a year between each step, so four releases from now — I think they're aiming to finish that in 1.24.
So they're spacing that out and really communicating it through their KEPs and other communications, which is pretty great to see. And for any questions that you have on any of this, there are the tracking issues and enhancement proposals, as well as the other communications that come out of the release. But that's been pretty great. We do have defaulting for built-in API types, which is going to go into the Go IDL, be transformed into an OpenAPI default field, and then be routed to defaulting functions so that it can be done declaratively. That's stable in 1.20. And then we have kube-apiserver identity, which is for HA clusters and is also a prerequisite for other HA work. So it's semi-foundational work, and I think it's going to be really interesting: each kube-apiserver self-assigns a unique ID during bootstrap and stores it in a Lease object, and then controllers will have access to the list of the living kube-apiservers in the cluster. So that's some pretty great work that's going on as well.

All right, next up, Apps. I think we mentioned this already — this was previously ScheduledJobs, now CronJobs. We're trying to not have things sit in any stage forever and actually move them through the process, and this is one of those. This is for all time-related actions, like backups and report generation, so that each of the tasks can run repeatedly or at any given point in time. This is moving to beta, and I think it's going to be dual support, so the v1 of the controllers is still available.

Yeah, that's exactly true.

All right, and Architecture. So this is conformance tests without beta REST APIs or features, and I guess we would call this a stability route. Would you put this under stability, or...?

Yeah, I think so. I think an interesting thing is that we don't really track everything as a KEP, right? So there are things that are kind of process-related, and this one to me feels kind of process-related, kind of internal.
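For anyone who hasn't used the CronJob resource the speakers mention — a minimal sketch of a manifest that runs a nightly task (the name, schedule, and image here are just placeholders):

```yaml
apiVersion: batch/v1beta1      # the beta API group for CronJob in 1.20
kind: CronJob
metadata:
  name: nightly-report         # hypothetical name
spec:
  schedule: "0 2 * * *"        # standard cron syntax: 02:00 every day
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: report
            image: example.com/report-generator:latest  # placeholder image
          restartPolicy: OnFailure
```

Each firing of the schedule creates a regular Job, which runs to completion like any other Job in the cluster.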
That's true for some of the security things that came in the release as well. But it's interesting to see — the work is happening kind of behind the scenes, and you may not directly use it, but I think this conformance testing stuff is super cool, because they're making sure that these releases are adhering to the contract that they say they're adhering to, and that as the release goes forward, people can still meet that conformance requirement. It was really cool, just as an aside, to see all of the work that the conformance group has been doing — identifying things that hadn't been covered by any testing and working to get that done during the 1.20 cycle. They have a website called APISnoop that is really cool: it'll show you when a test was introduced, and what things are covered and what things aren't. And if you're on the Kubernetes Slack, there's a really good conformance channel with a lot of really great people working out of it. If you're interested in any of that stuff, it's a great place to go.

And I believe APISnoop actually came in handy during one sort of critical feature that we were trying to merge. So this extra tooling is so valuable to the community, and all of the hard work that goes into it — if people don't know about it... well, if you knew about it, you would greatly appreciate it. Thanks to everybody working on that.

All right, and then Auth. This one was one of our late-breaking issues — it kind of came in towards the end. I think it's pretty cool because it's breaking out credential providers. There's a similar issue that we'll see for Node, but really this is allowing you to specify different ways of doing authentication, and allowing you to do these things out of tree.
If you look back at the history of the Kubernetes repo, there were a lot of things that were built in-tree, and there's work right now to move those things out, because they don't necessarily need to be released and versioned with each Kubernetes release — some of the cloud provider stuff, for example. This was some of the work that was necessary to unblock some of that other work. Here we have a security-related one, improving the security of service accounts. I think that's always a nice benefit. And again, this is beta — anything that you see here that says beta will be available to use out of the box with 1.20. There's more service account stuff: the ability to provide OIDC discovery endpoints. That's pretty useful.

Okay, and then on to Autoscaling. I mentioned this one in the overview highlights section, but this is the ability to use the HPA to scale based off of an individual container instead of the aggregated pod usage. It does this by adding a new ContainerResource metric type. So if you're familiar with the YAML for defining an HPA, then under metrics you'll be able to define new metrics that are at the container level, and make those scaling decisions based off of them. I think that's really cool as you start getting these kind of complicated multi-container pods — things with sidecars that may or may not help you with that aggregated view of resource consumption.

All right, CLI. There are a few in CLI that are interesting to look at. This one is kubectl debug, and it's going to beta in this release. This is cool for me because I was the enhancements lead for 1.18, and this came in as an alpha feature then. I think it's really cool to see these kinds of useful things come to kubectl. This one particularly pairs with the ephemeral containers work that's happening.
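Going back to the autoscaling change for a moment — a hedged sketch of what that container-level metric could look like in an HPA spec, assuming the alpha feature gate is enabled (the deployment and container names are placeholders):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                  # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                    # placeholder target
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: ContainerResource      # new container-level metric type in 1.20
    containerResource:
      name: cpu
      container: app             # scale on this container only, ignoring sidecars
      target:
        type: Utilization
        averageUtilization: 70
```

The point is the `container` field: with a plain `Resource` metric, a chatty sidecar would be averaged into the scaling signal; here only the named container counts.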
And if you think about when you're deploying workloads to Kubernetes, you try to shrink that image down as much as possible to reduce the attack surface, right? So maybe you're using a distroless image, maybe you're using a scratch image. It doesn't necessarily have the tools you might need to debug a production problem. That's where kubectl debug and the ephemeral containers work come together: they allow you to add another container to that pod so you can do some more debugging there, or maybe make a copy of the pod if you want to go and look at it after the fact. To use this previously, you would have had to use that extra keyword: kubectl alpha debug. Because it's graduating to beta on the road to stable, you no longer need to do that — it becomes kind of a first-class citizen that you can use.

All right, next up is Cloud Provider. Kirsten, you want to take this one?

Sure. So support for the out-of-tree Azure cloud provider is part of what Jeremy talked about before, where we're moving certain things out of the k/k repo. I think it's going to be really helpful. Different teams have trouble keeping up with the releases, or that cadence isn't the best for them. So even just from a development standpoint, I think that being in your own repo and then running the cloud controller manager is going to really clean up the interface, and also clean up the process of developing some of this. So that's pretty cool.

All right, Cluster Lifecycle. This one is marked as a new feature, but it's really a deprecation, and I think it's a really cool one to see in 1.20: it's starting to address the use of non-inclusive language.
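Circling back to kubectl debug for a second — a sketch of the workflow described above (pod, container, and image names are all placeholders):

```shell
# Attach an ephemeral debug container to a running pod;
# as of 1.20 you no longer need the "alpha" keyword:
kubectl debug -it my-pod --image=busybox --target=my-app-container

# Or debug a copy of the pod, leaving the original untouched:
kubectl debug my-pod -it --copy-to=my-pod-debug --image=busybox
```

The first form is what pairs with ephemeral containers; the second is the "make a copy and look at it after the fact" case the speakers mention.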
There's a community-wide effort, with the inclusive language initiative and Working Group Naming inside of Kubernetes, to look at terms we use and find better ones — things that are more inclusive, things that don't have bad connotations associated with the words. In this case, kubeadm is starting to replace some of the taints and labels that would previously have been applied to the master nodes, and that's becoming "control-plane" node. This is marked as a deprecation because the existing master taint and label is being removed and replaced with the control-plane one instead. So, again, because it's a deprecation, you'll be able to continue using the existing word, but you should really start to migrate towards the new one starting with 1.20.

I think this one was really... and this isn't a trivial amount of work, either. I think people put a lot of thought into it — naming anything is hard, and renaming things is even harder. So this is a lot of effort, and I think it's really appreciated by a lot of people.

100% agree. So that was the only one for that SIG. Let's move on to Instrumentation now, and there are a few in here that I think are really cool. I think we mentioned this one earlier as well — again, just my bias towards the things that I think are cool — but as a cluster operator, one of the challenges we have is just getting a really good view of who's using what. We have to go do specific queries, we have to look at a lot of the things that are running. Does this deployment, this single pod, actually need 16 CPUs, or could it be reduced? There's right-sizing all that stuff in the cluster, and also figuring out what capacity we're going to need down the road. It's not a unified single picture right now. But what's going to be cool in 1.20 is that there's a new feature that will enable a new metrics endpoint to be scraped.
So you'll be able to use Prometheus, or whatever, to scrape this endpoint and get a view of all of the resources that are being consumed from a scheduling standpoint. The decisions that the scheduler would make are reflected by requests and limits and what the node has available — is the node overcommitted or not — and all of these things are bubbled up to a single endpoint that you can scrape to get a much better view of what's happening in the cluster. I'm really looking forward to using this in our environment when we get to 1.20. A security-related one here: this is related to the security audit that happened in Kubernetes, where two of the findings were related to logging of sensitive information. So this is work that went into a logging filter that can be applied to all Kubernetes logging components, to make sure that sensitive information doesn't end up in logs. This is another really great one if you're running Kubernetes in production, and especially if you're in any kind of environment that has compliance or security concerns — which is probably everybody. This will be a great feature to have. And then another one that is really more of a process sort of thing: this is defending against logging of secrets in the infrastructure. When a job runs in Prow — when you make a PR to Kubernetes, Prow is responsible for running all of the tests and whatnot — what's cool here is the ability to use static analysis to figure out: is any of this stuff likely to leak information? If it is, then that PR will fail. So this is adding some more upfront guards to make sure that we're shipping secure software by default.

All right, next up is SIG Network.

And I think, just as a note on SIG Instrumentation: that's not a huge SIG, and it landed three substantive enhancements in one release. I think that sometimes, from the outside looking in, we overestimate how many people are working on things.
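As a hedged sketch of what scraping that new resource-metrics endpoint might look like — assuming, per the KEP, that it lands on the kube-scheduler at `/metrics/resources` (the port and metric names below are what the proposal describes, so treat them as assumptions):

```shell
# The scheduling-level resource metrics are served on a dedicated path,
# alongside the scheduler's regular /metrics endpoint:
curl -k https://localhost:10259/metrics/resources

# Expect Prometheus-style series such as kube_pod_resource_request and
# kube_pod_resource_limit, broken down by namespace, pod, and resource.
```

From those series you can compute requests versus limits versus node allocatable in one place, instead of joining several sources.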
So, just as a reminder, some of these SIGs are a few people doing this. I just wanted to call that out, because they've been doing a lot of hustling on these things, and it's pretty great.

Yeah, that's a totally great shout. I 100% agree with that.

So for Network, we have a couple of things — well, more than a couple. We have IPv4/IPv6 dual-stack support, which I think a lot of people have been looking forward to. We have SCTP support for services, pod endpoints, and NetworkPolicy graduating to GA. Let's see — we also have the EndpointSlice API, which is beta. This is a case where it didn't graduate — it's staying in beta — but they did some substantive work in this release, and so we tracked that work to make sure it could get in. I think they added the node name to the EndpointSlice API. So it's not always that everything has just graduated; sometimes there are also significant amounts of work going in between the status changes. So you might see something that is alpha and think, well, why didn't it go to beta? Why is it still alpha? Well, they might still be landing some things that they think are important to have in alpha; they might be doing some foundational work. And that's, I think, the case for this. There's a lot of work going on. I think that was the case for dual-stack as well — there was a huge, huge rewrite of it that they thankfully landed in 1.20 and not 1.19. And then we have another alpha, which is support for mixed protocols in Services of type LoadBalancer. This is alpha in 1.20, and it's behind the MixedProtocolLBService feature gate, like Jeremy was mentioning before for alpha features. We have adding appProtocol to Services and Endpoints, which is stable. With the EndpointSlice beta release in 1.17, appProtocol was added, allowing an application protocol to be supplied per port.
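To make a couple of those Network items concrete — a sketch of a Service using the rewritten dual-stack fields plus a per-port appProtocol (names are placeholders, and in 1.20 dual-stack is still gated behind the IPv6DualStack feature gate):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                         # hypothetical name
spec:
  ipFamilyPolicy: PreferDualStack   # dual-stack API field from the 1.20 rewrite
  ipFamilies:
  - IPv4
  - IPv6
  selector:
    app: web
  ports:
  - name: https
    port: 443
    appProtocol: https              # per-port application protocol
```

With `PreferDualStack`, the Service gets addresses from both families when the cluster supports it, and falls back to single-stack otherwise.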
And this KEP is basically adding support for that same attribute — appProtocol — to Services and Endpoints. The feature gate is supposed to be removed in 1.21. And then we have tracking of terminating endpoints, so that we can handle terminating endpoints gracefully. This is in the EndpointSlice API and includes a terminating condition. Again, it's behind a feature gate — this is feature-gated and alpha. And then we have disabling node ports for Services of type LoadBalancer. This is good for bare-metal and on-prem environments that rely on VIP-based load balancing implementations, which is, I think, probably a lot of users. So I think they're going to be happy with this as well. This is alpha.

Oh — Node. The big one. This is the big one, and I think it's really cool to see all of the work that Node has done this cycle. There were actually a few more that didn't make it into this release that you'll see in 1.21. Just kudos to the Node team for all of the stuff they've been doing, and kudos to Elana, who's been working on prioritization for 1.21 and really making sure that they have a great story and some planning going forward.

And they're planning while they're doing enhancements. These are basically the same people doing the implementation work and the planning work and the architecture work and all of the review. So, I mean, this is a tremendous amount of work.

Yeah. I think it's really impressive to see, and I'm really excited to see how much lands in 1.21.

Yeah. So here is another deprecation — we had a deprecation a little bit ago, and we also had the Dockershim deprecation. This one is simplifying down the number of streaming requests that can happen to a node. Again, this is an area where there are multiple code paths, and it's complicated — configuration is hard on end users, and it just opens you up to more security problems. So this is condensing things down.
You can read the tracking issue. We'll make these slides available after the fact, and you can dig into this to see exactly what you need to be aware of going forward. And then we'll start with the stable things and move to beta and alpha. There were 14 things in this release for node, which is pretty impressive. This one is RuntimeClass. This allows you to have multiple runtimes in a cluster and specify which one you want to use for different workloads in the pod spec. That's going to stable. It's been around for a little bit; now that it's stable, you can count on it going forward. PID limiting. This one's really cool, another security-related thing graduating to stable. This allows you to do PID isolation between pods, as well as between the node and pods. Pod startup probes. Another one graduating to stable. If you haven't used it, this allows you to define a startup delay before any other probes run in your pod's lifecycle. This next one is going directly to stable. It's really a bug fix, but it's a bug fix with pretty deep implications, and we actually saw this towards the end of the release. A report came from some of our friends at Azure. They had some exec probes in their pipelines, and those probes took more than a minute. Previously, the timeout was never honored: if you defined an exec probe and something took five or ten minutes, it would just continue running. Now, starting in 1.20, the default is respected, and the default is one second. So if you don't specify a timeout for an exec probe, it will default to that one-second timeout, and things that previously worked may no longer work. There is a feature gate you can turn on, called ExecProbeTimeout, that will go away in the future.
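To make that concrete, here's a sketch of a pod that uses both a startup probe and an exec liveness probe with an explicit timeoutSeconds, which is the fix if your probe legitimately needs longer than the one-second default (all names and images here are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo                  # hypothetical
spec:
  containers:
  - name: app
    image: example.com/app:latest   # placeholder image
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
      failureThreshold: 30          # allow up to 30 * 10s for slow startup
    livenessProbe:
      exec:
        command: ["/bin/sh", "-c", "./healthcheck.sh"]  # hypothetical script
      timeoutSeconds: 60            # without this, 1.20 enforces the 1s default
```
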
So this is really just helping you get over the hump of fixing workloads that may be impacted by that. Third-party device monitoring plugins. Again, this is another case of moving things out of tree and supporting things that are out of tree. This is finally going to stable as well. Next up, the beta ones. This one is the node Topology Manager. This coordinates how the kubelet aligns different kinds of hardware resources on the node. And this is going to beta now, so it's turned on by default. You'll be able to use it out of the box, and that's pretty cool. Another one going to beta is allowing you to set a pod's FQDN as its hostname. Generally, this is another field: you could set hostname and subdomain before; now the setHostnameAsFQDN field is available in the pod spec, and you can just use it going forward. This next one is kind of a deprecation. It's removing some metrics that are really for GPUs. There are three of them being deprecated, so they're turned off by default in this release: memory total, memory used, and duty cycle. This really only impacts GPUs, so if you're using GPUs, this is a good one to be aware of. Support to size memory-backed volumes. If you've used emptyDir volumes in the past, the size limit was not actually used to bound the volume; it was only used for eviction purposes. Now it's going to be used to create a resource of that size, so that it's portable between cluster environments; previously, different cluster environments might have given you different behaviors. It's just simplifying that down. This one's an alpha feature, so to use it, you have to turn on the feature gate. But in subsequent releases, you'll be able to just take advantage of it. Graceful node shutdown. Another really useful one that I'm looking forward to.
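One way to turn graceful node shutdown on, sketched as a kubelet configuration fragment (this assumes the 1.20 alpha field names, behind the GracefulNodeShutdown feature gate):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  GracefulNodeShutdown: true
shutdownGracePeriod: 30s              # total time the kubelet delays node shutdown
shutdownGracePeriodCriticalPods: 10s  # portion of that reserved for critical pods
```
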
So this one is basically just making the kubelet aware that the node is going to shut down, and propagating that signal down to the pods so they can shut down gracefully instead of just being killed and unexpectedly going away. Next, CRI support. This was introduced, I think, in Kubernetes 1.5, way back when. And it's going to go to beta, probably in 1.21. There was some work that needed to happen before that could happen; part of that was deprecating dockershim, and a few other things had to happen too. So again, it's marked as alpha here, it's staying in alpha, but there was a lot of work that starts that train down the road. And then another alpha feature: adding huge page support to the downward API. The downward API allows you to project things into the pod, and you previously could not use huge pages with it. This exposes huge page requests and limits through the downward API, which was previously not available. Going back to the exec credential plugins we mentioned earlier with client-go, this allows the kubelet to use exec plugins to fetch image pull credentials. Two new flags come to the kubelet, and there's a YAML resource where you can define how these plugins should work. Basically, the kubelet will run an external executable to get registry credentials and make them available for image pulls. So SIG Node had five alpha and five GA, with three beta, which is pretty tremendous. That's a lot of work for one of the busiest SIGs, one of the SIGs with the most requests. Yeah. And when you think about it, that touches so many different things. So when those things go through, it may not just be SIG Node that has to review things. There may be storage things that have to be reviewed. It's really a team effort. A lot of people depend on that work, and it's not a lot of people doing the work. So kudos to them. All right, scheduling.
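As a quick aside, the kubelet credential provider config mentioned a moment ago is pointed at via the new --image-credential-provider-config and --image-credential-provider-bin-dir flags, and might look roughly like this (a sketch of the 1.20 alpha shape; the provider name and registry pattern are hypothetical):

```yaml
apiVersion: kubelet.config.k8s.io/v1alpha1
kind: CredentialProviderConfig
providers:
- name: example-provider          # hypothetical binary found in the bin-dir
  matchImages:
  - "*.example-registry.io"       # images this provider supplies credentials for
  defaultCacheDuration: 12h
  apiVersion: credentialprovider.kubelet.k8s.io/v1alpha1
```
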
Kirsten, do you want to take this one? Sure. This one is adding a configurable default constraint for pod topology spread. The spreading rules are defined in the pod spec and tied to the pod, and this adds defaults, allowing cluster operators to define default spreading. This is beta, so it's available by default. And we're moving into storage. Just a note for storage: they've put in a huge amount of effort to really refine their KEP handling process. Xing Yang and Michelle Au have just been doing so much work that, as an enhancements lead, I have to do the shout-out, because over the past couple of releases they've made it really easy to get a KEP through, get the code reviewed, and get everything merged. So there's a lot of organizational work going on behind the scenes that doesn't get acknowledged, but we really appreciate you. So this one is the GA for volume snapshot and restore support in Kubernetes. This provides the standard API design for PV snapshot and restore for CSI volume drivers. Next is beta: skip volume ownership change. This allows a user to optionally skip the recursive ownership and permission change on a volume if the volume already has the right permissions. CSI drivers opt in to this via a new field, CSIDriver.Spec.FSGroupPolicy, and it's going to beta. And then there's the service account token support for CSI drivers. This allows CSI drivers to obtain service account tokens for the pods they are mounting volumes for. All right. The last one: Windows. This one's pretty cool. I remember this one from 1.18 pretty distinctly, having been at Microsoft for a while.
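Going back to the storage items for a second, that fsGroupPolicy opt-in is a single field on the CSIDriver object; a minimal sketch (the driver name is hypothetical):

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi.example.com        # hypothetical driver name
spec:
  fsGroupPolicy: None          # skip recursive ownership/permission changes on mount
```
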
It's really cool to see the things that have been happening with Windows support in Kubernetes. When you think Kubernetes, you may or may not think Windows containers, and it's really cool to see this work happening. There were some more things that SIG Windows was trying to land towards the end of the cycle that unfortunately didn't make it in. But this one, I think, is a pretty huge win: you're getting CSI support for Windows, and that's stable in Kubernetes 1.20 now. I think it opens up a lot of possibilities for people that, for whatever reason, can't migrate off of Windows, or are just Windows-based shops whose workloads depend on that. This is just making it more inclusive for them to be able to take advantage of all of the benefits of using Kubernetes. So we quickly rolled through all of those enhancements. In the slides, which Libby will provide afterwards, we've included all of the links to the KEPs and to the issue trackers, so you can dig into those and get a little more detail, or ask questions. But also, these are obviously substantive KEPs. These aren't just bug fixes going in at the end of the year. This represents a ton of work that people have done throughout those months, especially in a really tough year. So it's pretty amazing. Yeah. I wasn't sure what to expect coming into the lead role this release. I was the lead shadow for 1.19, and seeing all the turmoil and the changes that were happening, and how we were responding to concerns from contributors... at one point, we asked, should we even do a 1.19 release, with how the year was going? So I was super unsure how 1.20 was going to go. But in the end, I think it was super exciting. There were so many things done by so many people, just so much good work, that I'm really proud of 1.20. Yeah. Proud of the whole release team, proud of all the contributors.
I think it was just a great experience for me. And along those lines of it being a great experience: this is a volunteer job, right? We are not paid by Kubernetes to be the release lead or the enhancements lead. Everything in Kubernetes is really community-driven. Some people are paid, full disclosure; my job allows me to do some of this work, but it's not my full-time job. And I volunteered back in 1.17 to shadow enhancements, and that's really how you get started with this. So if you're interested in being on the release team for 1.22, or any of the releases after that, the way to start is with the release team shadow program. I've been through that. Kirsten's been through that. Nabarun, who's leading 1.21 right now, shadowed with us in 1.17. There's lots of opportunity. There's lots of demand too, just full disclosure there. But we wanted to give you a little bit of information about the release team and the shadow program, and let you know where to look, when to look, and just share more information. And if you have any questions about it, definitely feel free to reach out. As Jeremy was saying, the workload really varies across the team. So if you're interested in the program, trying to talk to people during a quiet time about what those expectations might be is really helpful. But I'd also just reiterate what Jeremy was saying: people forget that the release doesn't just happen by itself. It's not just people doing random pull requests and that's it. There's a lot of work that the SIGs have to do outside of just coding. There's a lot of architectural and organizational work they have to put in that I think we don't appreciate enough. And the release itself doesn't really come to fruition without the entire release team, with all of its shadows and all of its sub-teams.
And all of these sub-teams that you might not always think about, like docs, or release notes, or bug triage. We have a huge CI signal effort underway as well to get CI stabilized. All of these are really important parts of getting a great release. It's not just "can this PR go in" or "can I get this feature"; there's a ton of work to get done. And I think any help that anybody wants to provide would be welcome, because it takes a village, right? It's not just the one thing that you're looking for; there's a ton of people doing a lot of work behind the scenes. Yeah, I'd definitely recommend this to anybody that's interested. One of the comments that I remember from the cycle was from Rob, who was the CI signal lead for 1.20. He likened the release team to a Montessori school for Kubernetes. Montessori school is a method of educating kids where you figure out what the kid's good at and what the kid wants to do, and they can go from thing to thing; it gives them exposure to a lot of different things. And the release team is definitely that way. You might think that the release team is only experienced contributors, and that's not true. I think when we pick shadows, we definitely look for a mix. When I was the enhancements lead, I picked a mix of people. I picked John Belamaric, one of the leads from SIG Architecture. I picked Kirsten, who had a lot of experience with OpenShift and Kubernetes. And I picked people that were brand new to the project, because those people get great experience, but you also get a lot of great insights that you might not have otherwise gotten. That kind of diversity in people really helps build up really solid teams. So the way you do this is by applying. If you are not subscribed to the kubernetes-dev mailing list, we definitely recommend that. Towards the end of each release, an application is sent out, and you can select a few roles.
If you go to the SIG Release repo on GitHub, you can find the handbooks for each one of these roles. They will give you a much better idea of the time commitment and what the actual job of that role is. You think: what does enhancements do? What does CI signal do? Each one of those handbooks gives you a really good idea of what that team does. I think these are great because they can springboard you into doing more of that stuff. If you are interested in CI signal, it can set you up to do a lot of really great things with SIG Testing, because there's a really tight integration between what the release team's CI signal team does and what SIG Testing is doing: tracking down flaky tests, figuring out, is this really a problem or not? That team was super critical to us at the end of the release. And again, that three-month cycle may change depending on what happens with the release cadence discussion going forward. All right. I guess with that, we can open it up to questions now. We can scroll back to the chat and see if there's anything. Go to the top where it says General and click Q&A. All right. Thank you. Hop right in there. Yeah. So the first question: could the presenters opine on the fact that most enterprises are still trailing on adoption, around 1.16? My team specifically just upgraded to 1.16, so I definitely feel that. And I think that really is an important consideration for the release cadence. There was some work by a working group, Working Group LTS, looking at supportability: how long does a given Kubernetes release stay in support? Right now it's three releases, so that's not that long. Their work was looking at how we shift that towards a year. And there are a lot of things that go into that; maintaining old branches is extra work.
If fixes come along to, say, Go, and you need to rebuild all of the components, that's extra work, and a lot of that's done by SIG Release. But just in general, as this train continues, it's really hard to keep up, and I definitely feel that. I think that going to three releases a year, from a consumer standpoint, makes it a little easier for us; there's less of a train to have to keep up with. That 1.16 upgrade, if you haven't done it yet, is a challenge. There were a lot of breaking changes in there; APIs that had been deprecated for quite a long time were finally removed. So it was quite impactful. We tried not to do that in 1.19; we were really mindful of the time of year. But definitely, if you're one of those people, you should go and leave comments on the discussion issue we linked. Yeah, I would say I'm a developer, so I don't necessarily have the same pain points, but if you have an opinion, I think you have to share it with the community. So I would definitely follow the link in the slides to make your voice heard. And if you feel like there's something that's not being considered in the decision-making, then really articulate your concerns so that people can discuss them. I see another question at the bottom here that we can answer real quick: is Istio planned to be part of Kubernetes in 1.22? The answer to that is no. Kubernetes and Istio are separate projects; their life cycles are separate. Istio is something you can install onto Kubernetes, but they are not intrinsically tied together. I mean, Istio runs on Kubernetes, but you can run Kubernetes without Istio. So I don't think that would be planned for 1.22. You would have to confirm with SIG Network, but I really don't think that would be part of the 1.22 planning. Next question.
I have kubectl installed on a third-party machine, and I'm accessing a Kubernetes cluster with a kubeconfig. Is there any provision to check whether this third-party machine is legit or not? Let me rephrase that a little bit: is there a way to verify, when you're using kubectl to access a cluster, that it's the right place? Do you think that's a good summarization of the question, Kirsten? I think so. Maybe if it's not... yes, we got confirmation. Yeah, so for that one, certificates, I think, are the answer. When you're connecting to these things, when you look at your kubeconfig, inside of it is the certificate authority, the certificate data. And the important thing there is that at some point you have to establish that trust relationship. If you're using self-signed certs, you lose a little bit of that assurance. But in production use cases, you can control what certificates the API server presents and whether the client trusts them or not. All right, any other questions? Good. Anybody else have one they want to pop into the question box? I just wanted to thank everyone for showing up. We weren't sure if anybody would show up to our webinar. I appreciate the support. Oh, there's another question: is there already a timeline for cloud providers' adoption of 1.20? So again, that's a super good question. Having worked at a cloud provider before: it's difficult. There's a lot of work that goes into consuming these releases. As a cluster operator now, not part of a cloud provider, we have a lot of work that goes into making sure that we can ship that. They have the same amount of work, probably more, because they have to make sure that it works across all of their infrastructure and fits into their existing tooling. So we don't coordinate with them to say, hey, we're going to launch 1.20, when are you going to have AKS 1.20 or GKE 1.20? If you look at the cloud providers now, they do lag behind.
So I would expect down the road you'll see early access to it, but no firm guarantee of timelines between when the release happens and when the cloud providers move to it. What are some open source options for container image scanning, and the same for real-time container scanning? There are different options for that. Inside my team, we use a tool that isn't open source for a lot of our container scanning, but I've also used Trivy in the past, which is a tool from Aqua Security. I can't make any firm recommendations either way, though. Right, we have about two minutes left. Is there anything you want to wrap up or conclude with? I just want to say thank you again to all the contributors that made 1.20 successful: everybody that worked on a KEP, everybody that worked on tracking down test flakes towards the end of the release. A fun, quick story: I think API priority and fairness had an exception to the code freeze date, and we were super nervous because it was very impactful; every request basically goes through APF. And we got to the end, and it looked like it was good. And then we started getting test flakes over the weekend, and maybe more than test flakes. So Monday came, and we were two days away from the release. Is this a problem or not? What do we do? It was a lot of effort by a lot of people to get over that line and make sure we were comfortable doing the release when we did it. Yeah. But in the end it worked out, so there were high fives all around. That was nice. And I still want to thank Libby, actually, because I didn't know what to expect from this platform, and it's been very easy, very seamless. Oh, good. I'm glad you liked it. But yeah, it was great. Well, thank you all for helping us test it out and kick off 2021. With that, I want to thank everyone else for joining, and Jeremy and Kirsten for a great presentation. I think we'll go ahead and wrap up. These slides will be online later today, so take a look, and the recording will also be up.
So you'll be able to rewatch, or watch if you weren't able to join us live. Thanks again, everyone, for joining, and we'll see you soon at another webinar. Thank you so much. Bye. Bye.