Okay, I want to thank everyone for joining us today. Welcome back, happy 2022, and welcome to today's CNCF live webinar, the first of the year: the Kubernetes 1.23 release. I'm Libby Schultz. In today's webinar I'm going to read our code of conduct and then hand over to Karen Chu, Ray Lohano, and Xander Kravinsky from the 1.23 release team. A few housekeeping items before we get started. During the webinar you're not able to speak, but there is a Q&A box on the right-hand side of your screen — please feel free to drop your questions there and we'll get to as many as we can at the end. This is a webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please also note that this recording and the slides will be posted later today to the CNCF online page, community.cncf.io, under online programs. They're also available via your registration, and the recording will also be available on the CNCF YouTube channel under the online programs playlist. With that, I will hand it over to the team.

Hey, everyone. As mentioned, I'm Karen, the communications lead for the 1.23 release team. Should we go to the next slide? Cool. Ray, do you want to introduce yourself?

Yeah, my name is Ray Lohano. I was the 1.23 release lead.

I'm Xander Kravinsky — I'll put myself in here too. I was the 1.23 enhancements lead.

So on today's agenda, we're going to go over the 1.24 release timeline and updates, then we'll go through the 1.23 highlights and SIG updates, and then Ray and Xander will do the Q&A at the end. We've got a brief overview here of the projected timeline for the 1.24 release. We'll be kicking the release off on Monday, January 10th — next Monday.
All of the following dates after that are subject to change, particularly the enhancements freeze date, but this is what we have laid out so far: an initial enhancements freeze on Thursday, January 27th, followed by code freeze on March 29th, and then targeting the final release for 1.24 on Tuesday, April 19th.

All right, next we're going to go through the 1.23 highlights. First off is the theme of the release. For Kubernetes 1.23 the theme is "The Next Frontier," and this is the logo. The Next Frontier represents three things: first, the new and graduated enhancements in 1.23; second, Kubernetes' history of Star Trek references; and third, the continuing growth of community members via the release team.

Then we've got a brief overview of the enhancements we tracked for 1.23. We ended up with a total of 47 tracked: 11 of those stable, 16 beta, 19 new alpha features, and one deprecation. Since 1.17 there have consistently been north of 10 stable features per release, which is pretty fun. For those unfamiliar with the terminology here: alpha features need a feature gate enabled to try them out — they're locked behind the gates — while beta and stable features are enabled by default and can be used right away. Also, just a helpful note: if you go to kubernetes.io and search for feature gates, there's actually a reference page listing all of those feature gates.

All right, I'm going to go through the major themes for 1.23. There are a few slides here, and we'll go into more detail when we get to the SIG updates. First, dual-stack IPv4/IPv6 networking went to stable. Dual stack was first introduced as alpha in 1.15 and refactored in 1.20, because before 1.20 you had to have a Service per IP family; as of 1.20, the Service API supports dual stack.
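To make that concrete, here's a sketch of a dual-stack Service manifest — the name, selector, and port are hypothetical, and this assumes the cluster itself has been configured with both IP families:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-service          # hypothetical name
spec:
  ipFamilyPolicy: PreferDualStack   # or SingleStack / RequireDualStack
  ipFamilies:                 # order determines which family provides the primary ClusterIP
    - IPv4
    - IPv6
  selector:
    app: demo                 # hypothetical label
  ports:
    - port: 80
```

With PreferDualStack, the Service gets addresses from both families where the cluster supports them and falls back to a single family otherwise.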
So it's now stable in 1.23, and the IPv6DualStack feature gate has been removed since the feature is stable. Secondly, Pod Security admission is now beta. For those of you familiar with Pod Security Policies, which were deprecated in 1.21 — Pod Security Policies, also known as PSPs, are targeted for removal in 1.25, and Pod Security admission is the replacement. What it is, it's an admission controller that evaluates pods against a predefined set of Pod Security Standards to either admit or deny the pod. We'll go into a little more detail when we get to the SIG updates. Thirdly, the HorizontalPodAutoscaler v2 API is now stable, or GA. The v2 API allows multiple and custom metrics to be used; the v1 API is not being deprecated. Also, the kubelet container runtime interface (CRI) is now beta, and the CRI v1 API is now the default since it's beta. We'll go into that in a bit more detail in the SIG updates as well.

Some more major themes here. The TTL-after-finished controller is now stable. It's a little like a garbage collector that cleans up Jobs and their pods after they finish. You do have to set a specific field on the Job, ttlSecondsAfterFinished, for this to happen. There's a controller that watches all the Jobs and compares that field against the current time once a Job is done, and then it deletes the Job and its corresponding pods. Another one is simplified multi-point plugin configuration for the scheduler — this is for kube-scheduler. It adds a new, simplified config field for plugins that allows multiple extension points to be enabled in one place. Another one: generic ephemeral inline volumes are now GA. This allows any existing storage driver that supports dynamic provisioning to be used for an ephemeral volume that's bound to the pod's lifecycle. Another one is software supply chain SLSA Level 1 compliance.
So Kubernetes releases now generate provenance attestation files describing the staging and release phases of the release process, and the artifacts are verified as they're handed over from one phase to the next. More major themes: the skip volume ownership change. This feature allows you to choose whether ownership is changed when a volume is mounted inside a container; otherwise ownership is changed recursively for every file in the volume. The problem is that this can take far too long for very large volumes, so this gives you the option to skip it. I'll go into this more in the SIG updates as well. Also, CSI drivers can now opt in to volume ownership and permission changes — this allows CSI drivers to declare support for fsGroup-based permissions. Structured logging is now beta; most log messages from the kubelet and kube-scheduler have been converted. And there are more CSI migration updates — this is a continuation of the effort to move from in-tree plugins to CSI. It's beta for GCE PD, AWS EBS, and Azure Disk, and alpha for Ceph RBD and Portworx.

A few more major themes: expression validation for CRDs is now alpha. If the feature gate is enabled, custom resources can be validated using the Common Expression Language. Next one is OpenAPI v3. OpenAPI v3 is more transparent than OpenAPI v2 — it's also more expressive, because we actually lost some fields when we published with OpenAPI v2 — and this is now alpha. Also, server-side unknown field validation: if the feature gate is enabled, users will receive warnings from the server when they send Kubernetes objects in a request that contain unknown or duplicate fields. Then there's the deprecation of FlexVolume — this was actually deprecated previously, and out-of-tree CSI drivers are now the recommended way to write storage plugins. And the deprecation of klog-specific flags — Kubernetes is in the process of simplifying logging in its components. Now we'll go into the SIG updates.
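Before the SIG updates, here's a minimal sketch of what the CRD validation expressions mentioned above look like — the resource fields here are made up for illustration, and in 1.23 this requires the CustomResourceValidationExpressions feature gate:

```yaml
# Fragment of a CustomResourceDefinition schema (hypothetical fields)
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      x-kubernetes-validations:
        - rule: "self.minReplicas <= self.maxReplicas"  # CEL expression evaluated by the API server
          message: "minReplicas must not exceed maxReplicas"
      properties:
        minReplicas:
          type: integer
        maxReplicas:
          type: integer
```

The rule lives right in the CRD, so the validation ships with the object definition instead of requiring a separate admission webhook.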
So we're going to go through the various SIGs and talk about the enhancements per SIG, starting off with SIG API Machinery, which covers all aspects of the API server. The first one is priority and fairness of API server requests. This extends the existing max-in-flight request handler in the API server so that we can make more distinctions among requests, to provide prioritization and fairness among different categories of requests — so we can prioritize things like node heartbeats and maintenance during high-traffic situations. That went to beta. Also, on each of these slides there's a link to the enhancement issue and a link to the Kubernetes Enhancement Proposal (KEP). Next for SIG API Machinery is CRD validation expression language. Currently we can use admission webhooks to validate custom resources, but that can be very complicated. This feature enables the use of the Common Expression Language (CEL) to validate those custom resources, and it makes the CRDs more self-contained — you can actually write the validations as code in the definition of the CRD object itself. Next, server-side unknown field validation is now alpha — the link here again goes to enhancement issue 2885. Before, if a request contained an unknown field, the server side would let it through; now, with server-side validation enabled, any misspelled, invalid, extra, or duplicated field will not be allowed. This is also linked to client-side validation — there's a proposal for client-side validation to be removed, since it's very painful to maintain, so this feature enhances the server side instead. OpenAPI enum types is alpha.
The way it is currently — since this is still alpha — some API fields you see are conceptually enums but are actually represented as plain strings. This adds a marker for those enum types, which allows enum type support in OpenAPI. OpenAPI v3 goes to alpha. Like I mentioned before, it's more transparent and expressive than OpenAPI v2 — there were some fields that got dropped when publishing with v2 — so this feature adds a new endpoint to publish OpenAPI v3 specs for the built-in objects, CRDs, and APIService types. Xander?

Yeah, I'm going to touch on the KEPs that were part of SIG Apps, which covers deploying and operating applications in Kubernetes and the developer experience related to that. The first one is CronJobs. CronJobs have been stable for a little while now — 1.21 was when that change was made — and there was just some cleanup work on the old controller in the 1.23 release. Then there's one Ray touched on as a major theme, the TTL-after-finished controller. This adds a field to Jobs, ttlSecondsAfterFinished, to allow this controller to clean up old pods related to Jobs, and it went stable this time around. Like Ray mentioned in the major themes, it does require that field to be set to make use of it. Next was auto-removing PersistentVolumeClaims created by StatefulSets. Previously those wouldn't be deleted as part of cleanup of a StatefulSet — it was a manual process — so this adds automatic cleanup of PVCs that are managed by StatefulSets. And then job tracking without lingering pods: currently Jobs rely on completed pods continuing to exist in order to count the Job's completion status, and this removes that requirement by utilizing a finalizer rather than keeping those completed pods hanging around.
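The ttlSecondsAfterFinished field mentioned above is a single line on the Job spec. A minimal sketch — the name, image, and command are arbitrary placeholders:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ttl-demo                 # hypothetical name
spec:
  ttlSecondsAfterFinished: 100   # controller deletes the Job and its pods 100s after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: main
          image: busybox         # arbitrary example image
          command: ["sh", "-c", "echo done"]
```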
Then minReadySeconds on StatefulSets allows end users to specify a number of seconds that a pod must exist without crash-looping for the StatefulSet to consider it ready in its status. It's an existing feature on Deployments, DaemonSets, and ReplicaSets, so this adds parity for StatefulSets. And then there's adding a count of ready pods in Job status: this feature adds a field, ready, that counts the number of Job pods that have a Ready condition — a status reflection on the Job spec.

All right, now I'll go through the enhancements from SIG Auth, which covers improvements to Kubernetes authorization, authentication, and cluster security policy. There's just one enhancement from SIG Auth, but it's one of the major themes: Pod Security admission, which replaces Pod Security Policies. Like I mentioned before, Pod Security Policies are targeted for removal in 1.25, and Pod Security admission went to beta in 1.23. There's a feature blog about this on the kubernetes.io website, with some tutorials as well. The Pod Security admission controller enforces the Pod Security Standards on pods within a namespace. There are three levels of Pod Security Standards — privileged, baseline, and restricted — and you can set the policy enforcement in three modes as well: enforce, audit, or warn. You apply the policy through three labels on the namespace.

Next we've got SIG Autoscaling, which relates to all things autoscaling and resource estimation for the control plane. We've got one KEP for this SIG, and that is graduating the HorizontalPodAutoscaler v2 API to stable. This adds support for multiple and custom metrics for horizontal pod autoscaling — nice to see this one go to stable for sure. Next is SIG CLI, which covers kubectl and related tools.
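Before we get into SIG CLI — to make the Pod Security admission labels Ray just described concrete, here's a sketch of a namespace enforcing the baseline level while warning and auditing against restricted (the namespace name is made up):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo-ns                                   # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: baseline  # reject pods that violate the baseline standard
    pod-security.kubernetes.io/warn: restricted   # warn the client about restricted violations
    pod-security.kubernetes.io/audit: restricted  # record restricted violations in the audit log
```

Each mode can target a different level, which is handy for tightening a namespace gradually.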
There's a new command in alpha, kubectl events. It's different from kubectl get events — this new command adds more features than kubectl get events. There's default sorting of events; it can manipulate events more, so you can sort events by other criteria; it can also list events in a timeline for the last N minutes; and it extends the behavior of --watch as well, along with the convenience of the fields and custom-columns output options. So it extends kubectl get events, and now there's a dedicated kubectl events command.

Next we've got SIG Cluster Lifecycle, which deals with everything around the lifecycle, deployment, and upgrades of Kubernetes clusters. The one enhancement we have here is for kubeadm. When kubeadm does its initial init, it creates a ConfigMap in the cluster, and this KEP just changes the naming of that ConfigMap to a simpler form.

Next is SIG Instrumentation, which covers best practices for observability — metrics, logging, and events — across all the components. Structured logging went to beta. Structured logging defines a standard structure for log messages — before this enhancement there was no structure for log messages — and it adds methods to klog (klog is a fork of glog) to enforce that structure. With this, in 1.23 most log messages from the kubelet and kube-scheduler have been converted. Also related to klog, in 1.23 we have, in alpha, the deprecation of klog-specific flags in Kubernetes components, to make logging simpler. The flags being deprecated are being left with their defaults for now, and the plan is to eventually remove all the klog flags besides -v and --vmodule.
Next up we've got SIG Network. They're responsible for the components and interfaces that expose networking capabilities to Kubernetes workloads, and they also maintain some of the reference implementations for those APIs, like kube-proxy. First up was one of the major themes that Ray touched on: IPv4/IPv6 dual-stack support, which is going stable this release. It adds dual-stack support for pods, nodes, and Services. It's a super exciting feature — a lot of folks worked really hard to deliver this one, and it's really great to see it go to stable. Next up we have namespace-scoped Ingress class parameters. This adds new scope and namespace fields to the IngressClass parameters reference to allow referencing namespace-scoped parameter resources. I'm actually not super familiar with this one, but there's a description there, and I encourage folks to go take a look at the KEP in the enhancements repo. And then last, topology-aware hints is going beta. This works to enable topology-aware routing by adding an automatic topology hinting mechanism to the EndpointSlice controller.

And then SIG Node — the work under this SIG encompasses a huge amount of things: everything to do with the kubelet and the lifecycle of pods that are scheduled to a node. Lots happening here. I'm actually really excited about this first one: ephemeral containers, along with the kubectl debug feature. It adds a mechanism to run a short-lived container that executes within the namespaces of an existing pod, and it allows debugging capabilities against running pods without having to do the whole kubectl exec workflow. Yeah, this one's super cool. Then we've got container runtime interface (CRI) support going to beta. And next up we have cAdvisor-less CRI-full container and pod stats.
This will enhance the CRI API with additional metrics so it can support the pod and container fields in the summary API directly from CRI, without having to utilize cAdvisor — so some additional metrics information there. Then there's extending the pod resources API to report allocatable resources. This enhances the metrics information on the kubelet's pod resources endpoint, which allows third-party consumers to get more information about compute allocation — super useful for getting a clear understanding of the state of resources and utilization within a cluster. Next we have CPU Manager policies. This provides additional isolation guarantees so that no physical core is shared among different containers, which improves cache efficiency and mitigates interference from other workloads consuming resources on the same physical core — it should help with a lot of the noisy-neighbor issues that folks operating clusters deal with. Then there's pod-priority-based graceful node shutdown, going to alpha. Graceful node shutdown itself was a feature that moved up in one of the more recent releases, and this ties pod priority into that feature: it takes pod priority values into account to determine the order in which pods are stopped during a graceful node shutdown, and it also adds flags to specify the total time for shutdown and the time to reserve for shutting down critical pods. I know this feature has definitely been a hit with cluster operators as they deal with upgrades and things like that. Next we've got gRPC probes for pods, which adds the ability to use gRPC for liveness, readiness, and startup probes, rather than just the typical HTTP. This is alpha, so it's another one of the features that needs to be enabled with a feature gate. And then, lastly for SIG Node, there's a CPU Manager policy option to distribute CPUs across NUMA nodes. It adds a CPU Manager policy option.
And when enabled, it triggers the CPU Manager to distribute CPUs across NUMA nodes. Next is SIG Scheduling, which is responsible for the components that make pod placement decisions. The first one is the scheduler ComponentConfig API, which went to beta. This allows cluster administrators to build, validate, and version their kube-scheduler configurations. It was already in beta in 1.22, but there have been some changes, so in 1.23 there was another beta iteration when v1beta3 was introduced. Next is the simplified multi-point plugin configuration for the scheduler, which went to beta. This feature defines a simplified field that users can use to configure scheduler plugins across multiple extension points. Next, allowing updating of the scheduling directives of Jobs went to beta. This feature makes the node affinity, node selector, tolerations, annotations, and labels of a Job's pod template mutable for suspended Jobs.

Next is SIG Security. SIG Security covers the horizontal security initiatives for the project, which includes external security audits, the vulnerability management process, and cross-cutting security documentation as well. Defending against logging secrets via static analysis went to stable. The motivation for this enhancement came from the 2019 external security audit, where it was discovered that secrets were exposed in logs or execution environments in three ways: bearer tokens revealed in logs, environment variables exposing secret data, and iSCSI volume storage storing clear-text secrets in logs. With this enhancement, there's a taint propagation analysis, which provides insight into how data spreads within the program. There's a taint propagation analysis tool called go-flow-levee.
It runs as a blocking presubmit test which will detect if a secret is being exposed anywhere in a pull request, and it will block any pull request that logs any secrets.

Next is SIG Storage. SIG Storage is responsible for ensuring that different types of file and block storage are available when a container is created and scheduled. It's also responsible for storage capacity management, for influencing the scheduling of containers based on storage, and for storage operations like snapshots. The first one is skip volume ownership change, which went to stable — one of the major themes. The problem before was that when a volume is mounted inside a container, the permissions on that volume are changed recursively to the fsGroup value that's provided, and this change of ownership can take a very long time if the volume is very large — we saw a lot of issues with databases on very large volumes. This feature allows the user to specify how they want the permission and ownership change applied to volumes. You can set it to Always, to always change the permissions and ownership to match the fsGroup, or to OnRootMismatch, to only perform the change if the permissions of the top-level directory do not match expectations.

Like I mentioned in the major themes, there's a continued effort to migrate in-tree storage plugins to CSI. One of the enhancements in that effort is the AWS EBS in-tree to CSI driver migration, which migrates the internals of the in-tree AWS EBS plugin to call out to the EBS CSI driver. There's another one for GCE PD — another part of the continued effort to migrate in-tree storage plugins to CSI — which migrates the internals of the in-tree GCE PD plugin to call out to the PD CSI driver. That went to beta.
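Circling back to the skip-volume-ownership feature Ray just walked through — the knob is the fsGroupChangePolicy field in the pod's securityContext. A quick sketch, with the names, image, and PVC all made up for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo                       # hypothetical name
spec:
  securityContext:
    fsGroup: 2000
    fsGroupChangePolicy: "OnRootMismatch"  # skip the recursive chown if the volume root already matches
  containers:
    - name: main
      image: busybox                       # arbitrary example image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-pvc                # hypothetical PVC
```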
The Azure Disk in-tree to CSI driver migration went to beta — another one of the continued in-tree-to-CSI migrations. Next, configurable fsGroup policy in CSIDriver objects went to stable. This feature allows CSI drivers to opt in to those volume ownership changes: there's a field, CSIDriver.spec.fsGroupPolicy, which lets a driver define whether it supports volume ownership modification via the fsGroup. Generic ephemeral inline volumes went to stable — one of the major themes as well. It's similar to emptyDir, but with CSI plugins: it allows you to use any existing storage driver that supports dynamic provisioning as an ephemeral volume. Recovering from resize failures went to alpha. The issue before was with expanding a PVC: say you had a 10-gig PVC and expanded it to 500 gigs, but the underlying storage provider doesn't support that and only supports 100 gigs. This feature allows you to change that resize request — from 500 gigs down to 100 gigs — so that you can recover from the volume expansion failure. Delegating fsGroup to the CSI driver instead of the kubelet went to alpha. When an fsGroup is specified, like we mentioned with skip volume ownership change, the mounted volume is recursively chowned and chmodded, and normally the kubelet handles this — but there are some CSI storage drivers that don't support chown and chmod. This feature moves that responsibility to the CSI driver, which can apply the fsGroup during its mount step. The Portworx file in-tree to CSI driver migration went to alpha — part of that continued effort to migrate in-tree storage plugins to CSI. Always honor reclaim policy went to alpha. If you've worked with PVs and PVCs, you'll be familiar with this one.
The issue is that if the PersistentVolume is deleted before the PVC, then the associated reclaim policy is ignored — there was a certain order you had to follow: delete the PVC first, then delete the PersistentVolume. This feature makes sure that the PersistentVolume's reclaim policy is always honored, even if you delete the PersistentVolume before the PVC. Another one is the Ceph RBD in-tree provisioner to CSI driver migration, which went to alpha — also part of that continued effort to migrate in-tree storage plugins to CSI.

Next is SIG Testing, which is interested in effective testing of Kubernetes and automating away project toil. There's one enhancement from SIG Testing in 1.23: reduce Kubernetes build maintenance, which went to stable. This reduces Kubernetes build maintenance by moving to a single build system — part of this proposal was to remove the Bazel build and its associated tooling and to just use the make build — so it simplifies the process by moving to that single build system.

Then we've got SIG Windows, which deals with supporting Windows nodes and scheduling Windows containers. It's just the one enhancement for SIG Windows, and that is allowing Windows privileged containers — extending the same capability that exists for Linux containers, running as a privileged container with host-level access, and getting that working for Windows containers. That's moving to beta with this release.

All right, so that covers all 47 enhancements in 1.23. What's next is to talk about the release team shadow program and the release team itself.
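One quick sketch before we move on to the release team — this is roughly what a Windows privileged (HostProcess) pod spec looks like under this beta feature; the name, image, and command are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostprocess-demo                       # hypothetical name
spec:
  securityContext:
    windowsOptions:
      hostProcess: true                        # run with host-level access on the Windows node
      runAsUserName: "NT AUTHORITY\\SYSTEM"
  hostNetwork: true                            # HostProcess pods must use the host network
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: main
      image: mcr.microsoft.com/windows/nanoserver:ltsc2022   # placeholder image
      command: ["powershell.exe", "-Command", "Get-Process"]
```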
For each Kubernetes release — there are three a year — there is a release team, made up of community members from all sorts of different organizations, including students. These release team members handle the day-to-day operations of the release. Each team is broken down into seven different roles: the release team lead, enhancements, CI signal, bug triage, docs (documentation), release notes, and communications. For each of these roles there is one lead, with about four to five shadows per role. In recent releases we've had about five shadows on the enhancements team, since there's quite a bit of work in the early part of the release for enhancements — so a few roles have had five shadows. The goal of the release team is to train new leads — for shadows to eventually become leads — and for role leads to share the tasks and knowledge they've accumulated through many release cycles; I myself have been on release teams since 1.18. It's also a good way for new contributors to be introduced to the Kubernetes project. Each release cycle generally takes about four months, give or take — about 15 weeks.
The workload for each release team role varies week to week, and it also varies by role. Enhancements is very front-loaded: they do quite a bit of work within the first three to four weeks of the cycle, and then again around code freeze. Some roles, like docs, are end-of-release heavy — docs does quite a bit of work around code freeze, from code freeze through to the release, and on release day itself. It's the same with communications, who tend to do a lot more work from the middle to the tail end of the release, once we know which enhancements are going to make it in.

I just want to invite folks who are interested in joining the release team to check out the GitHub repo for the release team shadows. Like Xander mentioned at the beginning of this webinar, the 1.24 release cycle starts on Monday, and before every release cycle we open a shadow application. I do suggest joining the Kubernetes Slack and the #sig-release channel, or the SIG Release mailing list or the Kubernetes developers mailing list, to get notified when those shadow applications are out.

So, let's go to questions — I'll check the chat. There's one question in the chat about CNI plugins that are compatible with Windows-based nodes. I'll just add a general note: I don't know specifically on that one, but for questions on specific KEPs that you want more detail on, a good place to go is the SIG channel for that KEP on the Kubernetes Slack. If you're a member of the Kubernetes Slack, the #sig-windows channel would be a fantastic place to ask that question, and I'm sure you could get it answered super quickly.
Yeah, thank you, Xander. Let me scroll through to see if there are any other questions here. I think that was it. So I want to thank everyone for their time — oh, actually, one more question, about container runtimes. Do you want to make a note of that one? In 1.24, Dockershim will be removed. Dockershim is not a container runtime; it's a shim so that folks can use the Docker Engine container runtime with Kubernetes. Dockershim has already been deprecated and will be removed in 1.24, so starting in 1.24 you do have to use a container runtime that is compliant with the Container Runtime Interface. There are quite a few out there, like containerd and CRI-O, and more besides. There are several blog posts on this as well, so you can look it up on kubernetes.io.

So with that, I do want to thank everyone for their time and for joining us on this release webinar learning what's new in 1.23. I want to thank my co-hosts here, Xander and Karen — thank you for your efforts in the 1.23 release cycle — and thank you, Libby, for hosting us.

Of course — thank you all so much for kicking off 2022, and thank you everyone for joining us. Look for the recordings later today. And with that, I will say goodbye to everyone, and thanks for joining. Bye.