Let's get started. I'd like to thank everyone who is joining us today. Welcome to today's CNCF webinar, What's New in Kubernetes 1.18. I'm Karen Chu, community program manager at Microsoft and CNCF ambassador, and I will be moderating today's webinar. We'd like to welcome our presenters: we have Jeremy Rickard, enhancements lead; Jorge Alcalá, release lead; and myself, the comms lead for 1.18. Just before we start, a few housekeeping items. During the webinar, you are not able to talk as an attendee. There is a Q&A box at the bottom of your screen; please feel free to drop your questions there and we'll get through as many as we can at the end. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct; basically, please just be respectful of all your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io/webinars. And with that, I will hand it over to Jeremy and Jorge to kick off today's presentation.

Thanks so much, Karen. So we'll start off today by giving a brief overview of the logo for the 1.18 release. Every release has its own personality and its own kind of representation. If you remember the 1.16 release, there was a great logo based around breadsticks: Lachie, who was the release lead, was a big fan of the breadsticks at Olive Garden, so that factored into the release. So let's start off this webinar with a little background on how this logo came to be.

Yeah, absolutely. One of the secrets about the release team is that whenever you get into the release lead position, you have the privilege of designing the logo for that release, and as lead for 1.18 I took the opportunity and ran with it. The logo for 1.18 is inspired by the LHC, the Large Hadron Collider, which is a physics experiment meant to explore the really fundamental questions of physics. Part of the motivation is that before I started working as a software engineer, I was a physicist, and I still keep the physics around every now and then, so I wanted to take the opportunity to talk a little bit about physics. The other reason is that the LHC, like the Kubernetes community, is a really large collaboration: thousands of people from all around the globe, constantly working towards a better understanding of the underlying laws of physics. And like the Kubernetes community, they are very inclusive and they are doing a lot of really meaningful and super interesting work. I wanted to take the opportunity to give a shout-out not only to them, but to all the communities of scientists and engineers working on the LHC and on other experiments in physics, biology, and the like. Ultimately, it is my belief that if you want to go far, and if you want to build really cool, interesting, and useful things, having a really large and diverse community is the best opportunity you have to advance things.

Awesome, thank you so much for that background. I think it's a super cool logo, and I've been trying to 3D print it to have a physical memento of it; I think it's turning out pretty cool. We're also working on the swag, so at some point there will be shirts, and if people are interested, please feel free to ping us.
Nice, thanks. So today's agenda for this webinar: we'll give you a little bit of an overview of 1.18, a couple of quick highlights of things we think are super important to call out, a really quick update on the 1.19 release and what's going on there, and then we'll dive into an overview of each of the enhancements that came into the release. We'll also have a period of Q&A at the end.

First up, let's talk about the 1.19 release, because that's kicking off right now. We're here to talk about 1.18, but I think 1.19 is pretty relevant as well. As consumers of Kubernetes, you might be interested to know when the 1.19 release will happen. Originally that was supposed to be June 30th, but because of everything that's happening in the world, it's been extended a little bit; the new target date is August 4th. There are a lot of changes going in with regard to the timelines and dates within the release, but we wanted to make you aware of the target for the 1.19 release so you can do some better planning around when you might adopt it. If you're interested in reading more, the slides will be available at the end, and I've included a link here to the discussion on the Kubernetes dev mailing list where a lot of these things have been spelled out. There you can see some very specific things, like what's going to happen with 1.20 afterwards and what changes we're trying to implement within the release process this time.

Okay, so with that out of the way... Sorry, one quick thing to add. Along with the changes in the 1.19 release, and this will possibly be very similar for the 1.20 release: usually we have four releases per year, but this year we're only going to have 1.18, 1.19, and 1.20. That's the grand plan. One additional change is that the release team is going to start publishing a lot more release candidates. For all the things we're going to be talking about here, one good place to test them out was the Kubernetes release candidate that was published before the official 1.18.0 release, and 1.19 is going to have a lot more of those. So as all the new enhancements are being worked on, you can actually try them out and kick the tires. Yeah, that's a great point to bring in.

Okay, so let's dive into the 1.18 enhancements now. I was the enhancements lead for 1.18, so I'll give you a quick overview. In 1.18, we had 38 total things that we tracked. 15 of those graduated to stable, which means they're fully released and supported; you can expect them to live within Kubernetes with some level of confidence and not change too much. 11 of those things graduated to beta, which means the people working on those features think the API is reaching the point of stability. Changes could still happen going forward, but you can expect a certain amount of stability and testing to go into them, and those things are usually enabled by default as well. And then 12 things were introduced as alpha features. Those are brand new things being added to Kubernetes; they're going to be behind feature gates, so to use them you'll have to enable them. But you can see it's a pretty good spread.
Almost equally divided between the categories, giving us a good mix of brand new things and promotions of things that have been around for a little while. So I'll turn it over to Jorge to run through some highlights.

Thank you, Jeremy. As Jeremy already mentioned, the theme we chose for this release was "fit and finish," because we have a lot of really cool new features being worked on, which people are testing out and seeing how they work in the real world, but we also have an equal proportion of features graduating to stable that we believe are completely ready for production use. With that, let's go to the next slide to see some of the changes we want to highlight right out of the bag.

One of them is actually in client-go. A lot of people use Kubernetes to host their applications, but another very important use is to build operators, or programs on top of Kubernetes, and this is where client-go comes in. With 1.18, a really large change went into the codebase: every single method in client-go now takes a context as its first argument. This makes client-go more Go-idiomatic and aligns it with other API libraries you might have seen in the wild. So now, whenever you start your application, if you create a context to keep track of what is actually happening, you can pass that context into every single Kubernetes API call. A couple of other things changed too: a lot of function signatures changed, and now, for the most part, most functions, besides taking a context and some information about the Kubernetes object you want to handle (for example, a ConfigMap or a pod), also take an options argument where you can specify some additional metadata for your operation. And there is one project being developed within the Kubernetes organization, you can find it at github.com/kubernetes-sigs/clientgofix, that will help you adapt your programs to this new way of writing client-go applications. Jeremy has actually given it a try; are there any highlights you'd like to share with the audience?

Yeah, I think the tool is really useful. Manually upgrading to client-go for 1.18 obviously means dealing with these changes. The context one is pretty big, but there are other ones too; for example, delete operations previously accepted options by reference and now take them by value. So there are just a lot of changes you have to go through. I had changes across 54 files in a project I was updating yesterday, and clientgofix is pretty nice. It's still in beta, so make sure you have a backup of things because it does in-place changes, but it can help you out a lot when you're dealing with this transition. There's a short sketch of the new call shape just below.
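To make that concrete, here's a minimal sketch of what calls look like against client-go from 1.18 onward. The kubeconfig-based setup and the pod names are just illustrative, not from the slides:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the local kubeconfig (illustrative setup).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// As of client-go for 1.18, every call takes a context first...
	ctx := context.Background()

	// ...and an options struct as the trailing argument.
	pods, err := clientset.CoreV1().Pods("default").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name)
	}

	// Deletes now take DeleteOptions by value instead of by pointer.
	_ = clientset.CoreV1().Pods("default").Delete(ctx, "example-pod", metav1.DeleteOptions{})
}
```

Before 1.18, the same calls would have been made without the context argument, so updating a large codebase by hand gets tedious fast, which is exactly where clientgofix helps.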
So, on to stability. A few of the enhancements that went in for 1.18 to improve the stability of the project in really cool ways are taint-based evictions, kubectl diff, API server dry run, CSI block storage support, and the Windows improvements. Overall you can see that people have been working in almost all areas of Kubernetes equally. You have some tools to improve how developers and operators work and apply changes to their clusters, and this is where kubectl diff and API server dry run come into play. These two enhancements give you a better feel for what is actually happening, and what is actually going to happen, after you do some operation. Taint-based evictions provide a better user experience for people doing system administration on their clusters. CSI block storage support has been a major one, and we're going to talk more about it later in this webinar. And then the Windows improvements: this is by far one of the most exciting areas in Kubernetes over the past couple of releases. We have seen Kubernetes start to support Windows machines as actual worker nodes; in previous releases the adoption grew, and we started to run a lot more complex end-to-end tests on Kubernetes using Windows worker nodes. Finally, in this release, we see a lot of changes that make Windows nodes feel much more like the usual Linux-based worker nodes, along with other enhancements that enable people to manage Kubernetes clusters that use Windows, for example with tools such as kubeadm.

Yeah, I think it's a really interesting thing to come into the project, and it's awesome to see it progress.

As far as new things go, we have also seen some enhancements that try to tackle complex issues people have come across. A lot of organizations are using Kubernetes, and for the most part you have your standard clusters with tens or hundreds of nodes, but a lot of people are really pushing the boundaries of what you can do with Kubernetes, and we have seen a lot of issues when you are running thousands of pods and thousands of nodes. One of the really cool enhancements that came into 1.18 is priority and fairness for API server requests. This enables a more reliable installation of Kubernetes, where we can separate the requests landing on the API server and throttle some of them so the most important requests get through, giving people a more consistent experience when interacting with a really loaded cluster. Another cool one is kubectl debug. Like all the other enhancements I'm mentioning, we're going to talk more about this one later on, but if you have heard about it, just know that it is in the works and it's getting closer and closer to GA each day; kubectl debug is definitely going to be a game changer. Configurable HPA scale velocity: HPA at this point is so ubiquitous that having more ways to tune it is going to be amazing. And immutable Secrets and ConfigMaps help developers interact with their Kubernetes clusters in a more secure manner, preventing accidental edits or applies.

Great, thanks for giving us those quick overviews. So next we're going to go through updates from each one of the SIGs. When things are worked on in Kubernetes, they generally fall under the purview of a SIG, a special interest group; that could be something like authentication, API machinery, or storage. Each of the updates that comes into the release is shepherded by a SIG, and that SIG has responsibility for getting it across the line. So we've organized things by SIG to give you a better understanding of how things have changed in the release. We'll start out with API machinery. Like we previously mentioned, priority and fairness for API server requests is one that came in.
For each one of these items, when you get the slides later on, you can click through to the tracking issue and the enhancement proposal, or KEP, to see what's been proposed, what's been implemented so far, and what the plans are going forward. In the speaker notes we've also included blogs; some of these features had dedicated blog posts written for them, and this one did, so in the speaker notes you'll find a nice overview of this feature. At a high level, this is a brand new addition to Kubernetes, and as we mentioned, things break down along the lines of alpha, beta, and stable; this one is brand new, so it's an alpha feature. In addition to what we've already said about it, this really helps prevent clobbering the API server under heavy load, keeping things going and keeping people from stepping on each other when they're making API server requests.

The next one is moving the API server network proxy, the konnectivity proxy, to beta. (Some of these titles come straight from the issues in the repository, so they could probably use a little massaging.) This is something that's existed for a while; it almost landed in 1.17, didn't quite make it, and has now made it into 1.18. It allows you to separate user-initiated network traffic from API-server-initiated traffic, doing that kind of segregation to, again, help people control how requests flow.

Yeah, adding to this enhancement, one quick note on its usefulness: if you can differentiate whether your traffic is coming from users or from applications, you can be stricter when it comes to security and compliance, only allowing certain entities to do certain types of operations. This enhancement has been mentioned a lot by cloud providers who want to give users a more secure experience with managed Kubernetes offerings, but it can really be used by anyone in that kind of manner. So if you have use cases where you want to know whether a user is doing something versus a managed application doing something else, keep an eye on this one.

Yeah. One other point to mention as we go through these slides: we're going to mention the status of each item, so alpha, beta, or stable. One thing to keep in mind is that alpha and beta features both have feature gates, or feature flags, associated with them. When something's in alpha, by default it's not turned on; you'll have to go enable that feature flag. When things go to beta, they're on by default, so you can turn them off if you don't want to use them. And when things move to stable, the feature flag is dropped and the feature just becomes part of all installations of Kubernetes.

So here we have a stable one: API server dry run has gone from alpha to beta to stable. This has existed in Kubernetes for quite a while; it's been beta since 1.13, so it's been around for quite a few releases, but in 1.18 it finally lands as a stable feature. This one is particularly useful for people developing applications on top of Kubernetes, or doing some complex change on a cluster. One thing that didn't happen before: a client-side dry run of kubectl apply only simulates the change locally and tells you roughly what would happen. With API server dry run, the request is actually sent all the way to the API server, passing through any admission webhooks or anything else you have configured along the way that might change how your manifest is handled and created on the cluster, without persisting anything. That gives you all the information you need. And note that if you use kubectl apply with dry run, it does the client-side version by default; if you actually want the API server to do it, you have to specify dry-run equals server.

Yeah, that's a great point to add. Thank you so much.
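As a rough illustration of the server-side path (not from the slides): client-go lets you set the same dry-run flag that kubectl sends, so the object below is run through admission and validation but never persisted. The ConfigMap name and data here are made up:

```go
package demo

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// dryRunCreate sends a ConfigMap all the way through the API server's
// defaulting, admission webhooks, and validation without persisting it.
func dryRunCreate(ctx context.Context, clientset kubernetes.Interface) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-config"},
		Data:       map[string]string{"greeting": "hello"},
	}
	// metav1.DryRunAll is the same "All" value kubectl sends for --dry-run=server.
	_, err := clientset.CoreV1().ConfigMaps("default").Create(ctx, cm,
		metav1.CreateOptions{DryRun: []string{metav1.DryRunAll}})
	if err != nil {
		return err // e.g. an admission webhook rejected it
	}
	log.Println("the API server would accept this ConfigMap")
	return nil
}
```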
Okay, so back to the client-go change we talked about before. We're giving you a little example here on the slide: you can see that the structure of calls changed. When we want to get pods using client-go, it looks a little something like this: you ask for the core v1 API group, then you ask for pods in a certain namespace, and then you do the operation, get. Before client-go for 1.18, you wouldn't have to specify that context parameter as the first argument; now you do. If you go through the tracking issue, you'll see there are some generated clients you can use that don't take the context and will be around until 1.21, so you still have to change imports and whatnot, but you can keep the old signatures around for a little bit longer. Those will be removed in 1.21, so you'll eventually have to make this transition anyway. In the speaker notes, again, we've linked to the clientgofix tool; thanks to Jordan Liggitt for that, it's really useful.

All right, now on to SIG Architecture. One of the first enhancements from SIG Architecture that we're going to talk about is enabling conformance tests to run without beta REST APIs or features. Some context that's really useful when talking about conformance: these are end-to-end tests that are managed and maintained by Kubernetes contributors, and they live with the rest of the core Kubernetes code. Conformance tests are supposed to give you some sense of reliability; they're meant to tell you whether your Kubernetes installation is actually working the way it's intended to. To that end, conformance tests should only exercise production-ready features, because anything in alpha might only partially work or might still have changes coming, which is exactly why it's been kept in alpha. Things in beta are a little more stable, but they might still change. So having the set of conformance tests exercise only GA APIs and features is really helpful for people who manage their own Kubernetes clusters to ensure their installation process works as expected, and it's equally useful for anyone who manages Kubernetes clusters for other organizations or users.
Well, let's move on to the next SIG, which is SIG Auth. The first one we'll talk about is a brand new alpha feature, so to use it you'd have to enable its feature gate. This provides OIDC discovery endpoints in the API server, so that service account tokens issued by Kubernetes can be used outside of the API server. And this is cool because it enables you to use Kubernetes authentication tokens as a general authentication mechanism: you could use them for services outside the cluster, and federate things to other clusters. It's kind of neat.

Next are some changes to the certificate signing request API. If you want to use Kubernetes to generate a certificate from a CA, you can do something called a certificate signing request. This API got some changes in 1.18, and it's planned to graduate to stable in 1.19.

Okay, on to SIG Autoscaling. One of the enhancements I already mentioned from SIG Autoscaling is the ability to configure the scale velocity of HPAs, which stands for horizontal pod autoscalers. Whenever you have a Deployment, StatefulSet, or something of the sort, you normally have the ability to specify what threshold of CPU or memory utilization those resources should be using, along with a minimum and maximum number of replicas, and then the HPA decides how many replicas to create based on the actual utilization and the target utilization. One new knob that SIG Autoscaling is giving us in this process is the ability to tell HPA resources how fast to scale. In the slides you can see how that looks: when you're defining your HPA manifest in YAML, you now have a behavior section with scale-up (and scale-down) policies, and you can say, for example, "I want to scale up by this percentage over this period," and that way tune the behavior to your particular application. There's a rough sketch of the same idea right after this.
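The slide itself isn't reproduced here, but as a hedged sketch of that behavior field expressed with the Go API types rather than YAML (the "web" Deployment and the numbers are placeholders):

```go
package demo

import (
	autoscalingv2beta2 "k8s.io/api/autoscaling/v2beta2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

// exampleHPA scales a hypothetical "web" Deployment between 2 and 20
// replicas, but never grows by more than 50% of current size per minute.
var exampleHPA = autoscalingv2beta2.HorizontalPodAutoscaler{
	ObjectMeta: metav1.ObjectMeta{Name: "web"},
	Spec: autoscalingv2beta2.HorizontalPodAutoscalerSpec{
		ScaleTargetRef: autoscalingv2beta2.CrossVersionObjectReference{
			APIVersion: "apps/v1", Kind: "Deployment", Name: "web",
		},
		MinReplicas: int32Ptr(2),
		MaxReplicas: 20,
		// New in 1.18: per-direction scaling policies.
		Behavior: &autoscalingv2beta2.HorizontalPodAutoscalerBehavior{
			ScaleUp: &autoscalingv2beta2.HPAScalingRules{
				Policies: []autoscalingv2beta2.HPAScalingPolicy{{
					Type:          autoscalingv2beta2.PercentScalingPolicy,
					Value:         50, // at most +50% of current replicas...
					PeriodSeconds: 60, // ...per 60-second window
				}},
			},
		},
	},
}
```

The equivalent YAML is the behavior / scaleUp / policies block with the same type, value, and periodSeconds fields.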
All right, let's move on to SIG CLI now. The first one we'll talk about is that debug command we mentioned earlier. This one builds on top of the ephemeral containers alpha feature that was introduced a little while ago. The ephemeral containers enhancement allows you to add new containers to a running pod. So maybe you deploy your workloads normally and you follow good practices: you don't include things like a shell or the ability to run bash inside of that container, because you want to reduce the attack surface, right? But that also makes it a little tricky to debug sometimes. Ephemeral containers let you spin up a new container inside of that pod that can share volumes or do some other things to help you debug. To actually do that, though, there wasn't really anything exposed in kubectl, and that's what this debug command is: it allows you to take a pod that's already running and add an ephemeral debug container to it. You can find more information in the enhancement proposal about what they plan on adding beyond just running the ephemeral container alongside things. And again, it's an alpha feature, so to use it in kubectl, I believe you have to run it as kubectl alpha debug. But I think it's cool, and it shows some of the more developer-focused changes coming into the platform, or into the ecosystem.

And to give one quick example of that one: let's think of a typical application. Say you're writing a web server in Go. The best thing you can do is compile that web server so you end up with just a binary, then copy that binary into a container using a distroless container image. That distroless image only has enough in it for your binary to be executed and run; it doesn't have bash, it doesn't have any other utilities, and there's no way for you to install anything. So if you do something like kubectl debug, you can create a new container running within the same set of Linux namespaces, so you can actually interact with your application. Something you could do in production, for example, is have your web server running, then run an additional container next to it where you install something like curl, and then curl your server over localhost.

Yeah, I think that's a perfect use case to show how to use it. So the next one is kubectl diff. This allows you to compare an object on your file system, say a YAML file that defines a deployment or some other resource, against what's actually running in the cluster, so you can compare the state you think it should be in with what it actually is. This one is stable; it's been around for a little while and it's ready to use in production.

The next one's not really a super user-facing one, but it's interesting to track, especially if you're building anything that depends on the code: the kubectl package itself has moved to a new repo. You can find more information about this in the enhancement proposal as well.

All right, let's move on to cloud provider. Jorge, take it away. Absolutely. With cloud provider, another one of the enhancements that came in during 1.18 is support for the out-of-tree vSphere cloud provider. One of the areas where we have been trying to improve is moving the cloud providers from inside of Kubernetes to their own place, so they can be developed and maintained separately from the core of Kubernetes, similar to the previous enhancement we just mentioned where people wanted to move all the code for kubectl out. In this case, vSphere is one of the first cloud providers that has been reworked in this manner.

Cool, let's move on to SIG Cluster Lifecycle. The first one here is support for Windows in kubeadm. You've actually been able to use kubeadm a little bit with Windows before, but this gives you the ability to add Windows nodes to a kubeadm cluster, which is pretty cool; before, you couldn't easily do that. kubeadm is a way of provisioning clusters. There are tons of ways you can get a cluster, and kubeadm is a really useful one: it allows you to do a lot of things, like rotate certs and add nodes to clusters. The ability to use Windows with kubeadm has been around since 1.14, but until 1.18 you weren't really able to add Windows nodes to clusters and make it a truly useful tool.

All right, on to SIG Network. The first enhancement from SIG Network that we want to discuss is adding an appProtocol field to services and endpoints. This one is alpha and gated, so the only way to give it a try is to enable its feature gate. This enhancement essentially allows you to declare the application protocol that a given service or endpoint port is carrying. It's related to a previous enhancement from the 1.17 release cycle, where EndpointSlice went into beta; EndpointSlices introduced the concept of appProtocol, which gives people a way to specify that a given port is dedicated to a specific type of protocol, and this enhancement proposes bringing that same functionality from the EndpointSlice API into normal services and endpoints.
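As a tiny sketch of what that field looks like on a service, assuming the relevant feature gate is on (the service name and port here are made up):

```go
package demo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func strPtr(s string) *string { return &s }

// exampleService declares that port 443 carries HTTPS traffic, on top of
// the usual L4 protocol field, so consumers such as load balancers or
// service meshes can read appProtocol instead of guessing from the port.
var exampleService = corev1.Service{
	ObjectMeta: metav1.ObjectMeta{Name: "web"},
	Spec: corev1.ServiceSpec{
		Selector: map[string]string{"app": "web"},
		Ports: []corev1.ServicePort{{
			Name:        "https",
			Port:        443,
			Protocol:    corev1.ProtocolTCP,
			AppProtocol: strPtr("https"), // new in 1.18, behind a feature gate
		}},
	},
}
```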
The next one is IPv6 support. One huge area of development in Kubernetes over the last couple of years has been IPv6, and this enhancement is now going into beta, which again means that a lot of people are going to be getting it by default and you can actually give it a try with your Kubernetes clusters. This enhancement means that IPv6-only clusters are expected to work. The IPv6-only functionality was actually added back in 1.9 as an alpha feature, allowing a cluster to replace all IPv4 networking with IPv6. A couple of other things that come to mind: back in Kubernetes 1.13, the default DNS server changed to CoreDNS, which has full IPv6 support. So as this continues rolling along, it's going to be possible to have all your Kubernetes components just use IPv6.

The next enhancement is a new Endpoints API, and this one is in beta status. It's meant to replace the core/v1 Endpoints API to mitigate some performance and scalability issues. We already mentioned it by another name, but to give the full story, this is what we mean by the EndpointSlice API. The long-term plan is for this to be the core API to use when doing anything with networking. The current Endpoints API has, as we mentioned, some performance issues that were discovered with scalability tests, where we ramp the number of nodes up to 5,000 and see how things behave. Instead of recomputing the entire list of endpoints a service is using and then notifying all the watchers, all the entities watching for changes to those endpoints, this enhancement allows that work to be broken down into different groups, so only the slice containing a changed endpoint has to be recomputed and sent out.

We'll move on to the next one now, which is graduating the node-local DNS cache to GA. Node-local DNS cache runs a DNS caching pod on each node as a DaemonSet, and overall we just expect it to improve the performance of DNS across clusters.

The next one is the Ingress changes. This one is in beta state. This enhancement for 1.18 adds support for wildcard hostnames, better path matching, and the declaration of Ingress classes. This one's pretty interesting, I think, because it's more work building on top of Ingress to make it a better resource to use as they work towards a GA release, which is hopefully soon.
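To give a feel for two of those three changes, here's a hedged sketch of a 1.18-era Ingress using a wildcard host and the new pathType field; the names and the "nginx" class are placeholders:

```go
package demo

import (
	networkingv1beta1 "k8s.io/api/networking/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func classPtr(s string) *string { return &s }

// New in 1.18: Exact, Prefix, or ImplementationSpecific path matching.
var pathPrefix = networkingv1beta1.PathTypePrefix

var exampleIngress = networkingv1beta1.Ingress{
	ObjectMeta: metav1.ObjectMeta{Name: "web"},
	Spec: networkingv1beta1.IngressSpec{
		// Replaces the old kubernetes.io/ingress.class annotation.
		IngressClassName: classPtr("nginx"),
		Rules: []networkingv1beta1.IngressRule{{
			Host: "*.example.com", // wildcard hostnames are new in 1.18
			IngressRuleValue: networkingv1beta1.IngressRuleValue{
				HTTP: &networkingv1beta1.HTTPIngressRuleValue{
					Paths: []networkingv1beta1.HTTPIngressPath{{
						Path:     "/app",
						PathType: &pathPrefix,
						Backend: networkingv1beta1.IngressBackend{
							ServiceName: "web",
							ServicePort: intstr.FromInt(80),
						},
					}},
				},
			},
		}},
	},
}
```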
All right, so we're on to SIG Node now. The first one here is changes to pod overhead. When you are running things, there's a bit of non-negligible overhead associated with keeping track of the pod: quota management and other things that are necessary to run the workloads. There's a sandbox that goes along with each one of your pods, and this enhancement accounts for that overhead at the pod sandbox level, not just for the specific containers. That wasn't really taken into consideration when scheduling decisions and such were happening, so this moves that along. It's been in alpha for a little while and moves to beta in 1.18, so you'll be able to take advantage of it without turning the feature gate on.

Next up is the topology manager, giving you the ability to align Kubernetes workloads with different hardware topologies. Basically, if you have GPU nodes or other hardware where you want to run low-latency workloads, this allows you to do that. It was introduced as alpha in 1.16 and goes to beta in 1.18, so you can see it's turned around in a fairly short amount of time; you can expect it to be around as a stable thing somewhere down the road. This one also has a dedicated blog post, so you can read a lot more about it on the Kubernetes blog; again, we've included a link in the speaker notes.

Next up is the startup probe, which holds off liveness probes. Sometimes you have pods that are really slow to start, and maybe the health checks or the liveness probes start to fail and then the pod gets killed. This lets you set an initialization failure threshold so you can back things off and not start acting on failures from really slow-to-start containers. This one's beta as well; it's been around since 1.16, but now that it's beta you'll be able to use it without turning feature flags on.

This next one is actually a change to a feature that's been stable for a little while: huge pages. It's a feature I haven't really used myself, but for clusters handling a lot of data, this change allows you to use the feature a little more efficiently. Again, you can find more information in the tracking issue and the enhancement proposal.

All right, SIG Scheduling. Like the other SIGs, the people from SIG Scheduling have been really busy, and there are a lot of really cool new enhancements coming in for 1.18. One of them is a more even spread of pods across failure domains. This one is in beta, so for a lot of people it's going to be turned on by default. One thing that often comes up: if you have a cluster in a given region, and that region has multiple availability zones, and you run multiple replicas for a high-availability configuration, the optimal arrangement is to have at least one replica of your application running in each availability zone. That way, if there's some issue with the infrastructure, there's always going to be something available. This enhancement essentially allows that kind of configuration to happen even when you're using anti-affinity rules or other inter-pod configurations. It's been alpha since 1.16, so in a pretty short amount of time it's gone to beta and will be around for people to use. It's pretty cool. Yeah.
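As a loose sketch of what that even-spread configuration looks like on a pod spec (the zone key is the standard well-known label; the app label and image are illustrative):

```go
package demo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleSpread asks the scheduler to keep "app=web" pods spread across
// availability zones, never letting one zone get more than one pod ahead.
var exampleSpread = corev1.PodSpec{
	TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
		MaxSkew:           1,                             // max imbalance between zones
		TopologyKey:       "topology.kubernetes.io/zone", // spread across zones
		WhenUnsatisfiable: corev1.DoNotSchedule,          // hard rule; ScheduleAnyway makes it soft
		LabelSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{"app": "web"},
		},
	}},
	Containers: []corev1.Container{{Name: "web", Image: "example.com/web:latest"}},
}
```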
And like that one, taint-based eviction, which we also mentioned at the beginning of this webinar, is now stable. This means that nodes that become unready or unreachable are automatically tainted with NoExecute, so no pods, nothing, will get scheduled on them. This enhancement has been beta since 1.13. Yep, and it also builds on the taint-nodes-by-condition feature that went to GA in 1.17, so some awesome progression there from that SIG.

The next enhancement adds a configurable default even pod spreading rule. Again, talking about highly available configurations, this adds a default spreading rule for pods that don't define one, and allows operators to configure it. So again, another tool to ensure that your applications are highly available and tolerant of infrastructure failures.

The next enhancement is running multiple scheduling profiles, and this one is particularly interesting. For the most part, a lot of people can get by with the default scheduler behavior, but there are also a lot of people building hybrid clusters. For example, you might have a bunch of nodes to run web servers and some other nodes to run your databases, but then you start doing a lot of machine learning and the like, and you start bringing in really specialized CPUs or GPUs. If you treat all your worker nodes as if they're the same, you're probably missing out on a lot of optimization. This enhancement, which landed in 1.18, enables users to specify their own scheduling profiles to tell the scheduler how to handle different workloads. It's also going to be useful for people who are running multiple schedulers, because now you can get by with fewer schedulers by providing different profiles instead.

That was a lot of great stuff coming out of that SIG, and a lot of it is stable; the interesting thing, I think, is how much work goes into making that happen. We see the same thing in SIG Storage. The first one we'll talk about is the ability to use raw block devices as persistent volume sources. This one's been around since 1.9 as an alpha feature, graduated to beta in 1.13, and is now available as an on-by-default stable feature. Another stable change is skipping volume ownership changes; this one has been in beta since 1.15 and has now graduated, and it's pretty cool to see that evolution. And here's another one: supporting raw block storage inside of the container storage interface, CSI. This has been around since 1.12 as an alpha feature and in beta since 1.14. All of these things taken together show that there's a lot of stability and maturity happening in SIG Storage. Another stable one in SIG Storage is the ability to pass pod information to CSI drivers, to let them make slightly better decisions. And there's skipping attachment for non-attachable CSI volumes, again stable after being in beta since 1.14. So in general, going back to our theme for the release, fit and finish, SIG Storage has really taken on the mantle of pushing things towards being stable.
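To tie the raw block support above to something concrete, here's a minimal sketch of a PVC that asks for a raw device instead of a filesystem; the claim name, access mode, and size are placeholders:

```go
package demo

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// Block mode instead of the default Filesystem mode.
var blockMode = corev1.PersistentVolumeBlock

// examplePVC requests a 10Gi raw block device; the consuming container
// would then reference it via volumeDevices (a device path) rather than
// volumeMounts (a directory).
var examplePVC = corev1.PersistentVolumeClaim{
	ObjectMeta: metav1.ObjectMeta{Name: "raw-data"},
	Spec: corev1.PersistentVolumeClaimSpec{
		AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
		VolumeMode:  &blockMode,
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("10Gi"),
			},
		},
	},
}
```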
And then a pretty interesting one that's brand new is the ability to mark Secrets and ConfigMaps as immutable. Right now, when you make a ConfigMap and you load it into a pod, there's actually a sync loop that happens: if you make a change to that ConfigMap and it's mounted as a file system volume in the pod, the change will actually be reflected in the pod some time later. So you may have a deployment with a couple of replicas of, say, your web server, and it's serving some information that came out of that ConfigMap, or maybe you have some configuration that you reload when things change, like rate limiting rules for the Envoy rate limiter proxy, and you're loading that as a ConfigMap. Right now you can edit that ConfigMap and cause your application to blow up if you make a bad edit. A better practice is to make a new ConfigMap and then do a rolling upgrade that references the new ConfigMap, and that's really what this is enforcing. You're able to specify that Secrets and ConfigMaps are immutable, which prevents edits to those objects, and it also disables the watch loop, so time won't be spent against the API server watching those things.
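A minimal sketch of what that looks like, assuming the alpha feature gate for immutable objects is enabled; the name and data are made up:

```go
package demo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

// Once Immutable is set, updates to Data are rejected and the kubelet
// stops watching the object. To roll out new config, create a
// "rate-limits-v2" object and point the workload at it instead.
var exampleConfigMap = corev1.ConfigMap{
	ObjectMeta: metav1.ObjectMeta{Name: "rate-limits-v1"},
	Immutable:  boolPtr(true), // new field in 1.18, behind a feature gate
	Data:       map[string]string{"limit": "100rps"},
}
```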
And then another new alpha feature is generic data populators; you can take a look at the enhancement proposal for more information. In 1.12, the dataSource field was added to the persistent volume claim spec, and this enhances that a little bit more. And one more stable one; we've seen a whole bunch of stables, and this is the last one in SIG Storage: enabling a PVC to be used as the dataSource parameter when creating a new one, basically for cloning PVCs.

All right, on to SIG Windows. Like every single SIG you've seen until now, the people from SIG Windows are definitely doing a lot of work. The first enhancement we want to talk about, this one in alpha status and being constantly worked on, is support for CRI-ContainerD on Windows. Windows Server 2019 includes an updated host container service that offers more control over how containers are managed, which can remove some limitations and improve Kubernetes API compatibility. However, a lot of people are using the Docker Enterprise tooling, and the Docker Enterprise 18.09 release has not actually been updated to work with the updated Windows host container service; only containerd has been migrated. So this enhancement is about unblocking that and getting a lot more tools and runtimes working on Windows, so those users can take full advantage of the latest features and improvements shipped with Windows Server 2019 (1809).

The next one is implementing RuntimeClass on Windows. This can be used to make it easier to schedule pods onto appropriate nodes based on the OS, the version of the OS, the CPU architecture, and the other information you have around. With Hyper-V, Windows will be able to run containers built for multiple Windows operating system versions, and Linux containers might be able to do this in the future as well. This enhancement documents the process of controlling these features through RuntimeClass, and it will add new fields to the pod API, changing the current behavior of the Kubernetes scheduler.

And the next one is support for gMSA for Windows workloads. gMSA stands for Group Managed Service Accounts; it's a Windows mechanism for managing, kind of like, user accounts and service accounts in general. This enhancement went stable in 1.18, and in general it just provides more flexibility and the full set of features for anyone who currently makes use of gMSA.

And the last enhancement from SIG Windows that we have is runAsUserName for Windows. This provides the ability, for people running a pod on a Windows worker node, to specify the username that the application should run under. It's somewhat similar to a plain old container running on Linux, where you can specify that you want a process to run as some non-root user. This finally became stable in 1.18 and is ready for production use. And with that, we've actually covered all the enhancements that went into 1.18.

Before we close out, we just want to take the opportunity to talk a little bit about the release team shadow program, which encompasses a lot of the work that we've discussed so far, for example the tracking of enhancements.

Yeah, so the release team is made up of a few different roles. You have the release lead itself, and then enhancements, CI signal, bug triage, and so on, all handling different pieces of the release process. Each of those is led by an individual; I was the enhancements lead for 1.18. But each role also has shadows. Part of making sure that the releases continue to be healthy, and that we grow the pool of people who can help with this, is that we bring on shadows for each one of those roles, generally between two and four shadows per role. You come onto the release team, you help out, and you learn the responsibilities of that specific role. For enhancements, for example, the team and I split up all of the enhancements tracked in the Kubernetes enhancements repo; we would ping each one to figure out what was going to happen in that release and shepherd it through the process. Generally the releases run around three months, though that's changing a little bit right now. The workload varies depending on which part of the release you're in: enhancements is front-loaded, while release notes and docs are back-loaded. So you can gauge where your interest is and what your time commitment might look like. But if you're interested in this at all, we definitely recommend that you apply to be a shadow for the next release, which would be 1.20; we've already formed the 1.19 team. At the end of each release, you can generally find a link to the application to be a shadow for the upcoming release.

And if you're interested in learning more about the release team, or the Kubernetes community in general, the release team, like any other subproject or SIG within the Kubernetes community, holds all its meetings completely in the open, so you can join in, ask questions, and the like. And with that, I guess we now open the floor for questions about the webinar and 1.18.

Yeah, thank you both for that awesome presentation. We'll do a few questions, since we're already a little bit over time. There are a few questions in the queue right now. Let's see: a couple of people asked about concrete examples for things that could be done with kubectl debug. I guess, are there any concrete examples of things that could be done with kubectl debug?
So one of the best examples, and I mentioned this during an earlier slide, relates to one of the best security practices we have: if you're using a compiled language for your application, or if you can get down to just a binary or a really small collection of files that you need to run your application, do that, in order to avoid having an entire operating system inside your containers. This is where distroless containers come into play. If you Google "distroless containers," you will probably come across a repository owned by Google, I believe. Distroless containers essentially give you just enough of an operating system, for example the basic CA certificates, to be able to run a binary. They don't have a shell, and they don't give you the tools to install any other tools or executables; unlike on Ubuntu, you cannot do an apt-get install of whatever. If you're using those distroless images, you are more or less securing your application, because now you have an additional barrier ensuring that only your application is running, that no one can exec into the container and install something else, and that no one can inject any kind of traffic into it. And that is a good practice to follow. However, it makes it somewhat difficult to develop and debug your applications, because now you cannot exec into your container and you don't have any tools to test your application or monitor the state of the container. Without kubectl debug, if you actually wanted to debug your application while using a distroless container, you would have to go back to your container image, your Dockerfile, and change your base image from a distroless one to something like FROM golang, which gives you a Debian-like operating system, then build your image again, push it, and wait for things to redeploy. kubectl debug is essentially a shortcut for all that work. With kubectl debug, you can just say: I have this pod, I want to run a Debian container inside of it. And as soon as you get that container into the same network and hostname namespaces, you can install any tools you want and essentially have your Kubernetes cluster serve as your local machine.

And it's really cool because, just like when you run Docker and you specify the -it flags to get an interactive console on a shell, you can do that with the kubectl debug command. It will start an ephemeral container inside of your existing pod and give you a shell if you pass -it. So you can do something like kubectl alpha debug -it with a busybox or Debian image, or whatever you want to use, and insert it as an ephemeral container inside your existing application, to give you that extra ability to debug things that are already running. Maybe you want to be able to look at the thing that's deployed without having to rebuild the image, redeploy it, and lose the state that already exists. The debug command lets you use that ephemeral containers capability, get a shell into the pod, and do whatever debugging you need to do.

Awesome, okay, we'll do one more question. Can you shed more light on the CSI support enhancements, like block storage support, different cloud providers, and performance benchmarks?

So I would not be the right person to provide more insight on that.
I think if you want to know more about those things, what I would recommend is that you join the SIG Storage channel on the Kubernetes Slack and read through the enhancement proposals themselves. I don't have a lot of the detailed information at hand; I would have to go read the KEPs themselves to answer any specific question about those things. But again, the KEPs are the source of truth. Any time a change is going to be made to Kubernetes, it has to go through the Kubernetes enhancement process, and all of those things live in kubernetes/enhancements on GitHub, so github.com/kubernetes/enhancements. You can find all of the things that have been previously implemented, and things that are proposed and being iterated on, like sidecar containers as an example. You can find all the things that have gone into previous releases there as well. When those proposals are merged and approved, they end up in a keps directory, so in the enhancements repo you can find all of the KEPs that have been merged previously, and they'll give you more info. But if you have specific questions about direction, or what the SIG is planning on doing with something, that's the best place to go; it's the source of truth.

Great, thank you. Jeremy and Jorge, thanks for a great presentation. That's all the time we have for questions today. Thank you everyone for joining us. The webinar recording and slides will be online later today, and we look forward to seeing everyone at a future CNCF webinar. Have a great day. Thank you so much for coming today. Thank you very much. Happy Friday, all.