And thank you, everybody, for joining us today. Welcome to today's CNCF webinar, What's New in Kubernetes 1.15. I'm Kim McMahon on the marketing team at CNCF, and I'll be moderating today's webinar. I'd like to welcome our presenter today, Kenny Coleman, the Kubernetes 1.15 Enhancement Lead at VMware. We also have on the panel George Castro, who will be available to help with some Q&A as well. A few housekeeping items before we get started. During the webinar, attendees are not able to talk. There's a Q&A box at the bottom of the screen, so please feel free to drop your questions in there, and Kenny will answer those as they come in and as he can. Now, with that, Kenny, I'll hand it over to you to kick off today's presentation.

Thank you, Kim, and thank you to all — at this point, 188 of you — that are dialed in and joining this, and to everybody that'll be watching the recording later. It's a pleasure to be here once again presenting on what's new in the latest stable release of Kubernetes. I was your Enhancement Lead for 1.15. Big shout out to Claire and the entire release team for running an awesome cycle and getting this release out the door.

So today, we're going to be focusing, of course, on really what's new — what do you all care about as users and contributors to this great open source project that we all love? We're going to be looking at three of the major features that came out in a lot of depth. We're also going to highlight what the numbers look like at a high level, and then we're going to look at every single enhancement individually so we can see exactly what each one brings. We'll touch on them just so that you're a little bit more educated and have an idea of what might be important to you, or to the SIG that you're involved with, and you can see exactly what's happening there.

OK, so on the 1.15 enhancements, as I mentioned, just from a high level, we didn't have a whole lot happening. This is kind of what happens with every single release cycle: there's a big influx of new requests and new features. However, one thing that is required for an enhancement to graduate or to be included in a Kubernetes release is that it must have a KEP, or Kubernetes Enhancement Proposal, tied to it. This KEP has a few different things involved: it has to have graduation criteria, and there has to be consensus from the SIG and the overarching community that it will be integrated into Kubernetes and owned by that particular SIG. So if you're new to this ecosystem, or if you think you have an idea for an enhancement or a feature that you would like to see inside of Kubernetes, I encourage you not to just start coding right away, but instead to get involved with your special interest group, or SIG, and bring it up there for consensus, making sure that everybody is on board. From there, it can go through the process of figuring out who is going to be the owner, who is going to be the code reviewer, and who is going to be taking care of everything else from that point. That's the process you want to go through if you're looking to bring a new enhancement into Kubernetes. So as I said, right here, we have 25 that were tracked for 1.15. It usually goes in a cycle where we have about double this.
There were probably about 50 to 60 that were being tracked at one point. However, because of missing enhancement freeze, missing code freeze, or not having proper KEPs or documentation, they were punted from this particular release and weren't being tracked. So we're going to touch on all of these, as I said, pretty lightly, at a high-level overview. As you can see here, we had 10 new alpha enhancements introduced in Kubernetes 1.15, and 13 that have now graduated to beta.

To give you an idea of what alpha and beta really mean in this regard: anything that is an alpha feature is usually behind what's called a feature gate, which has to be manually enabled inside of your configuration to be able to use that particular feature. When it graduates to beta, it's enabled by default, meaning it's a feature that can be used without having to go through that. And then, of course, stable is something that has been very, very well regression tested, has had some bake-in and burn-in time with the community, has had a lot of feedback, and is ready to move to that next phase where it's just part of the core release at that point. We're going to see some of these numbers change ever so slightly with 1.16 — I'm also your 1.16 Enhancement Lead as well — but we'll save that for the very, very last slide as we go through this.

So, some highlights. Let's look at the first one here: the ability to create dynamic, highly available clusters with kubeadm. Most of us are probably aware of what kubeadm is. It's a tool that allows Kubernetes administrators to quickly and easily bootstrap a minimum viable cluster that's fully compliant with the Certified Kubernetes guidelines. It's been under active development by SIG Cluster Lifecycle since 2016, and it graduated from beta to GA at the end of 2018. This tool is really supposed to be a composable building block for making higher-level tools on top of it. The core of kubeadm is pretty simple: new control plane nodes are created using kubeadm init, and worker nodes are joined to the control plane using kubeadm join on those particular nodes. These are all common utilities now used for bootstrapping clusters; you can do control plane upgrades, and you can do token and certificate renewal as well. Now, one of the things to understand about what kubeadm brings is that it is not an infrastructure provisioning tool. There's no third-party networking, there are no add-ons, and there's nothing it does in regard to monitoring, logging, visualization, traffic, or anything that's really specific to a particular cloud provider. What this is really supposed to be is that common denominator for every Kubernetes cluster utilizing the control plane.

So, graduating to beta in 1.15 is the ability to create dynamic, multi-master, highly available clusters with kubeadm. There are a lot of additional bonuses that come with this, such as automatic certificate copying and rotation during these upgrades. This is going to make it easier for highly available clusters built with kubeadm to use the same init and join commands that you're already familiar with. The only difference moving forward is that you want to pass the --control-plane flag to kubeadm join when you're adding more control plane nodes. There was a lot of effort done inside of kubeadm to achieve this particular goal.
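As a rough sketch of the flow the next part walks through — the load balancer address, version, tokens, and keys below are all placeholders, so treat this as an illustration rather than a copy-paste recipe:

```yaml
# kubeadm-config.yaml -- minimal sketch; the exact apiVersion depends on your kubeadm release
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
controlPlaneEndpoint: "my-lb.example.com:6443"   # pre-provisioned load balancer
---
# On the first control plane node:
#   sudo kubeadm init --config kubeadm-config.yaml --upload-certs
#
# On each additional control plane node (token, hash, and key come from the init output):
#   sudo kubeadm join my-lb.example.com:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash> \
#     --control-plane --certificate-key <key>
#
# Worker nodes join the same endpoint without --control-plane.
```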
Among those efforts was a redesign of the kubeadm config file, and making sure there were graduation criteria built in to actually get that control plane workflow added to it. We'll touch on the config file a bit later. But to set this up, if you want to be aware of how this works, it still requires a load balancer to be pre-provisioned. You can use anything like HAProxy or Envoy, or anything that's provided by your particular cloud provider. Inside of the kubeadm configuration file, you set the control plane endpoint field to where your load balancer can be reached. Then you run init with the upload-certs flag — so you'd do something like what's shown here: sudo kubeadm init with your --config flag and then the --upload-certs flag. Nodes can now be joined in any order, so whether you want to add more control plane nodes or add worker nodes, they can all be done concurrently.

In the background, there's a lot happening here. kubeadm implements automatic certificate copy features, so it automatically distributes all of the certificate authorities and the keys that have to be shared amongst all of the control plane nodes. That's what gets your cluster working, and that's what the --upload-certs flag is for. And when you're not providing an external etcd cluster, kubeadm automatically creates a new etcd member for every new master that's added, running as a static pod on that particular control plane node — this is what's called a stacked configuration. So you have concurrent joining available to you. There are also more workflows being built to make it upgradeable: to properly handle this highly available scenario, you start the upgrade by running the apply step as usual, and users can then complete that process by upgrading the remaining control plane nodes and then moving to the worker nodes after that.

All right, so moving on to the next one: allowing PVC data sources to be extended — or, for those of us who have messed with storage for a long time, volume cloning. Features like volume cloning are pretty prevalent in most storage devices. Not only is it a capability on most storage devices, it's pretty frequently used in various use cases, whether you want to duplicate data or you want to use it as a DR, or disaster recovery, method. Clones are different from snapshots in that regard: a clone results in a new, duplicate volume being provisioned from an existing volume. It also counts against the user's volume quota, and it follows the same creation workflow and validation checks that you'd be using for any other kind of provisioning request. Snapshots, on the other hand, result in a point-in-time copy of the volume itself, and from that copy you can either provision a new volume or restore the existing volume to a previous state. The Kubernetes storage SIG identified clone operations as one of those critical functionalities for a lot of the stateful workloads we want to run inside of Kubernetes today. So if you're a database administrator, you may want to duplicate a database and create another instance of it using one of the existing databases that you already have.
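As a hedged sketch of the manifest the next part walks through — the names pvc-1, my-ns, and pvc-2 and the 10Gi size come from the example on the slide, and the storage class name is a placeholder for a CSI-backed class that implements cloning:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-2
  namespace: my-ns
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: csi-example-sc       # placeholder: must be a CSI driver with clone support
  resources:
    requests:
      storage: 10Gi                      # at least as large as the source claim
  dataSource:
    kind: PersistentVolumeClaim
    name: pvc-1                          # existing claim in the same namespace
```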
So this provides a way to trigger a clone operation through the Kubernetes API, and users can now handle this without having to go around the API — you don't have to worry about bolting on cloning support and all those other pieces yourself. The cloning feature is enabled using the persistent volume claim data source field, and adding support for that is what allows you to clone a volume. With this, there are no new objects introduced to enable cloning; instead it uses the existing data source field in the persistent volume claim object, which is now able to accept the name of an existing persistent volume claim in the same namespace. It's important to note that, from a user perspective, a clone is just another persistent volume and another persistent volume claim. The only difference is that the persistent volume is populated with the contents of another persistent volume at creation time. After creation it behaves just like any other Kubernetes persistent volume, and it follows the same behaviors and rule sets you would expect. At this time, cloning is only supported for CSI drivers, not for in-tree or FlexVolume drivers, so if you want to use this Kubernetes cloning feature, ensure that you're using a CSI driver that also implements the cloning feature inside of it. As you can see in the example right here, this assumes that we have a persistent volume claim with the name pvc-1 that exists in the namespace called my-ns and has a size less than or equal to 10Gi. The result would be a new and independent persistent volume and persistent volume claim called pvc-2, and on the backend the device is going to duplicate that data and put it into the same namespace.

All right, moving on. Now, I didn't put anything right here for CRD mania, because there's a lot of stuff that happened in regard to custom resource definitions, and that's going to follow along when we talk about each one of the individual SIGs and their components. So I've lumped those all together for when we get into the SIG API Machinery portion of this. I'm going to take a second just to look and see if there's anything in the chat I need to catch up on. Regression tests, performance regression tests — if you need to know anything about that, you need to talk to your individual SIG, or you need to go to SIG Scalability and talk to them if you're looking for something there.

Okay, so moving on to SIG API Machinery. This first one isn't about CRDs per se — it's about admission webhooks — but these all play into the CRD landscape, and we'll talk about how it all fits here in a second. Admission webhooks are currently widely used for extending Kubernetes, and they've been in beta now for three releases. An admission webhook is a way you can extend Kubernetes by putting a hook on an object during the creation, modification, or deletion of that particular object, and these webhooks can mutate or validate those objects. Right now it supports namespace selectors, and this is great, but it's all or nothing within that namespace, and you may not want to get all the activity that's happening.
So this has now been extended to include a single object selector within this new beta enhancement.

So now, on to CRDs. If you're not familiar with what a CRD is, let's make sure we set a baseline. A resource is an endpoint in the Kubernetes API that stores a collection of API objects of a certain kind — for example, the built-in pods resource contains a collection of pod objects. A custom resource is an extension of the Kubernetes API that is not necessarily available on every Kubernetes cluster; it represents a customization of a particular Kubernetes installation. Today there are many distributions out there using CRDs as their own special sauce.

So when we look at what this particular enhancement is doing, it's adding defaulting and pruning for these custom resources. Defaulting is implemented for most of the Kubernetes API types, and it plays a crucial role because it ensures API compatibility when you're adding new fields — and custom resources don't do this natively. This is all about specifying default values for fields, following along with the OpenAPI v3 validation schema, and it all happens inside the CRD manifest. Once this has native support, there's a default field for arbitrary JSON values, and these default values are applied during deserialization, which assumes a structural schema. Custom resources today store arbitrary JSON data without following the typical Kubernetes API behavior of pruning unknown fields. There's also a security aspect to this, because pruning now enforces consistency of the data fields stored inside of etcd, and objects can't suddenly become inaccessible because data was lost or breaks decoding. And if unexpected data inside of etcd is of the right type and doesn't break decoding, but hasn't gone through validation — probably because an admission webhook or something along those lines didn't exist for it — it will get pruned out of there. So pruning is, again, a countermeasure against a security attack vector, one that makes use of knowledge of future API versions with new security-relevant fields. Without pruning, an attacker could potentially come in and prepare a custom resource with a privileged field set, and on a version upgrade of a cluster those fields could suddenly come alive and lead to unallowed behavior inside your cluster. As of this point, since this is alpha, it's sitting behind the feature gate called CustomResourceDefaulting, and it is disabled by default in alpha, so you need to go and manually enable it.

The webhook conversion for custom resources plays into what I talked about on the previous slide about defaulting and pruning, because you can default and prune with a conversion webhook, but it's not native and it requires additional work to make that happen. The existing problem is that when a webhook needs to make a request to another service whose APIs have progressed or changed, a CRD author wants to be certain that they can evolve their API before they get down the path of developing the CRD plus the controller function.
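To make the defaulting piece a bit more concrete, here's a minimal sketch of a default declared in a structural schema; the group, kind, and field are made up, and in 1.15 this only takes effect with the CustomResourceDefaulting feature gate enabled:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced
  preserveUnknownFields: false     # opts into pruning (structural schema required)
  versions:
    - name: v1
      served: true
      storage: true
  validation:
    openAPIV3Schema:
      type: object
      properties:
        spec:
          type: object
          properties:
            replicas:
              type: integer
              default: 1           # applied at deserialization when the field is absent
```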
And the webhook conversion allows developers to evolve their API while still maintaining backwards compatibility, using versioned API resources. This allows objects and services to hold multiple versions at the same time, and objects can now be converted from one version to another by a webhook, based on need.

Now, the CRD OpenAPI schema — I kind of mentioned this already — uses OpenAPI v3 to enable server-side validation for custom resources. This validation format is compatible with creating OpenAPI documentation for the custom resources, and it can also be used by clients like kubectl to perform client-side validation. If you're using kubectl create or kubectl apply, it can also do things like client generation. This enhancement, again, will use the OpenAPI v3 schema to create and publish OpenAPI documentation for the custom resources.

Now, the watch API is one of the fundamentals of the Kubernetes API, and there's a recommended pattern for using it to retrieve a collection of resources: you do a consistent list, then initiate a watch starting from the resource version returned by that list operation. If the client's watch is disconnected, a new one can be restarted from the last returned resource version. This new proposal to add bookmark support is going to create a cheaper resource consumption model, looked at from the perspective of kube-apiserver performance. Various scalability tests have shown that restarting these watches can cause significant load on the kube-apiserver, especially when a watcher is only interested in a small percentage of changes due to a field or label selector or anything like that. In extreme cases, re-establishing the watch can lead to falling outside of the history window and getting back an error that says the resource version is too old. The reason for that is that the last item received by the watcher has a resource version — call it RV1 — next to it. We may already know that there aren't any changes relevant to that watcher, but it's still interested in leveling up and getting to RV2, or resource version two. So the goal of this is to reduce the load on the API server, as I already mentioned, by minimizing the amount of unnecessary watch events that have to be re-processed after restarting a watch. The proposal introduces a new type of watch event called a bookmark, and this bookmark event represents the fact that all objects up to a given resource version have already been processed for a given watcher. So even if the last event of the other types contained the object with resource version RV1, receiving a bookmark with resource version RV2 means there aren't any interesting objects for that watcher in between, and it can just set that aside.

All right, so that's it for that SIG. Now, moving on to SIG Apps — API Machinery was one of the larger ones, and I think storage is a larger one too, but moving on to SIG Apps. So, the pod disruption budget. This is, again, tied into custom resources, and it's graduating to beta here. It's an important tool to control the number of voluntary disruptions for workloads inside of your Kubernetes cluster.
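For reference, here's a minimal sketch of a pod disruption budget; the label and the minAvailable value are placeholders, and maxUnavailable could be used instead:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2          # or maxUnavailable: 1
  selector:
    matchLabels:
      app: my-app
```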
So the pod disruption budget, or PDB if I can say it without stumbling over myself, allows a user to specify the allowed amount of disruption through either a minimum available or a maximum unavailable number of pods. In order to support the case where a maximum number of unavailable pods is set, the controller needs to be able to look up the desired number of replicas, and it does this by looking at the owning controller. There are four basic workload types supported by the pod disruption budget: Deployments, StatefulSets, ReplicaSets, and ReplicationControllers. There's also a scale subresource that's part of this as well; it allows any resource to expose its desired number of replicas, so that information can be looked up in a generic way. This will now support using the scale subresource to allow setting pod disruption budgets on any resource that implements the scale subresource.

Fabian asks: is strategic merge patching already supported with custom resource definitions? I'm not sure I can answer that one for you. That would be one to take to API Machinery, and hopefully somebody there can answer that one for you as well.

So, on to SIG Architecture. Go module support. This one was kind of unusual in that it didn't go through alpha or beta — it went straight to stable. That's because Go modules have already been very well tested within the Go ecosystem in general. To give you the background and history: since its inception, Kubernetes has used tools like godep to manage its vendored dependencies inside the code, in order to have an auditable source and reproducible builds. As the Go ecosystem matured, vendoring became a first-class concept: godep became unmaintained, Kubernetes started using a custom version of godep, other vendoring tools like glide became available, and dependency management was ultimately added directly to the toolchain in the form of Go modules. The plan of record is for Go 1.13 to enable Go modules by default and deprecate GOPATH. This mostly matters for anybody who's contributing and working with these vendored modules inside of Kubernetes, but understand that it's about keeping things simple within the Go ecosystem, because Go modules provide a whole lot of benefits. Rebuilding the vendor directory with Go modules provides roughly a 10x increase in speed over godep, from some of the preliminary tests that were run. Go modules can also produce a consistent vendor directory on any operating system, and if semantic import versioning is adopted, consumers of Kubernetes modules can use two distinct versions simultaneously.

The recording will be shared — I see somebody asking that in chat — so it will be available after this, and it will also be available on YouTube. Thank you for asking.

All right, so SIG CLI. kubectl get and describe should work well for extensions; this is now graduating to stable in 1.15. To look at this, it's server-side get and partial object metadata that are being brought to GA. This is also coming in sort of feature complete, as we're looking at removing some of the legacy printers in subsequent versions.
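Circling back to the scale subresource mentioned under the pod disruption budget work: for a custom resource to participate, it needs to expose that subresource. A hedged sketch, using the same made-up Widget resource and illustrative JSON paths:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced
  version: v1
  subresources:
    scale:
      specReplicasPath: .spec.replicas        # where the desired replica count lives
      statusReplicasPath: .status.replicas    # where the observed count lives
      labelSelectorPath: .status.labelSelector
```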
And this also lets the controllers that would benefit from things like partial object metadata use it without the fear of deprecation. Partial object metadata allows these controllers to perform list operations over protobuf, and that's just one of the things you'll be able to see inside of here. But again, this is going to allow you to get columns back from the server and not the client, and it's going to allow extensions to work a lot more cleanly.

So, moving on to SIG Cluster Lifecycle — as I mentioned, we're going to start rolling through this pretty quickly now. We already talked about kubeadm and the HA features, and why that graduated to beta, but one big part of this was the kubeadm configuration file, which has now graduated in 1.15. So if you're familiar with using kubeadm today, you should re-familiarize yourself, because a lot of the things you might be using inside of the configuration file might have changed, and you just need to make sure your config validates against it. This is really one of those first touch points for a lot of Kubernetes users, and for any higher-level tooling they use to build these conformant clusters. Using a configuration file is a lot more stable and reliable than using something like a bunch of flags on a join or init command. The kubeadm file is just YAML, with versioned constructs that follow the Kubernetes API conventions, so it has the usual apiVersion and so on. The file was originally created, as I mentioned, as an alternative to those command-line flags for the init and join commands, but over time the number of options supported by the kubeadm config grew so much that the flags had to be kept under control and limited to the most simple use cases. Today, the config file is really the only viable way to implement many use cases, like using an external etcd cluster, customizing the Kubernetes control plane components, or configuring kube-proxy and kubelet parameters. The config file today acts as a persistent representation of the cluster specification, so it can be used at any point after kubeadm initialization, and it can be used for the kubeadm upgrade actions as well. New config options have been added now for new and existing kubeadm features, and over time you're going to see kubeadm gaining new features, which will require the addition of newer config file formats. One of these was, of course, the new config API version that was added for the certificate copy feature, as we saw with the HA control plane work, along with that new --control-plane flag on join.

So, moving on to SIG Network. Node-local DNS cache is now graduating to beta. This is an add-on that runs a DNS cache pod as a DaemonSet to improve your cluster DNS performance and reliability. The add-on runs as a node-local DNS pod on every cluster node, and it of course runs CoreDNS as its DNS cache. It runs with hostNetwork set to true and creates a dedicated dummy interface with a link-local IP to listen for DNS queries; the cache goes out to the cluster DNS instance in the case of any cache misses. It had been alpha since 1.13, but based on initial feedback, HA seemed to be one of the most common asks.
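As an example of the kind of thing that's awkward to express with flags but natural in the config file, here's a hedged sketch of pointing kubeadm at an external etcd cluster; the endpoints and certificate paths are placeholders:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
controlPlaneEndpoint: "my-lb.example.com:6443"
etcd:
  external:                                 # use an externally managed etcd instead of stacked members
    endpoints:
      - https://etcd-0.example.com:2379
      - https://etcd-1.example.com:2379
      - https://etcd-2.example.com:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
```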
And so now we're looking at providing enablement for high availability and a full implementation to go to that next level, which will be the GA criteria moving forward.

On the load balancer side, there are various use cases where the service controller can leave orphaned load balancer resources behind after the service has already been deleted, so there needs to be a better mechanism to ensure cleanup of these load balancer resources, both inside of Kubernetes and inside your infrastructure. So now there's a finalizer for services: it attaches itself to any service of type LoadBalancer, and if the cluster has a cloud provider integration enabled, then upon the deletion of that particular service, the actual deletion of the resource will be blocked until this finalizer is removed. The finalizer will not be removed until the cleanup of the load balancer resources is considered finished by the service controller. So hopefully, at the end of the day, this saves us all resources and money across the various clouds as well.

So, moving on to SIG Node. Quota for ephemeral storage — you might think this would really be part of SIG Storage, but in fact this is more along the lines of metric gathering for ephemeral storage. Local storage capacity isolation, or ephemeral storage, provides support for shared storage between pods. A pod can be limited in its consumption of shared resources, and it can be evicted if its consumption of that shared resource exceeds its limit. The limits and requests for the shared storage are similar to those you would see for memory and CPU consumption. The current mechanism relies on periodically walking each of the files and directories, querying them, and then summing up the space consumption at the end, and today this method is pretty slow and has high latency involved with it. The mechanism proposed here uses filesystem project quotas, which provide monitoring of resource consumption and, optionally, enforcement of the limits themselves. Project quotas are a form of filesystem quota that applies to particular files; they offer a kernel-based means of monitoring and restricting filesystem consumption, and they can be applied to one or more directories as well.

Next, support for third-party device monitoring plugins is now graduating to beta, and this really falls under extensibility, because there's a whole ecosystem built around performance monitoring and management of your clusters, and device monitoring and device management today typically require external agents to be able to determine whether sets of devices are actually in use by the containers. Think of container log exporters like Fluentd, container monitoring agents like cAdvisor, and device monitoring plugins like the NVIDIA GPU monitoring plugin — each of these faces a similar problem: given a metric or a log associated with a container, how can it get the metadata it needs so the data can be filtered by namespace, pod, container, or whatever else?
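Going back to the ephemeral storage limits for a moment, here's a minimal sketch of what the requests and limits look like on a container; the image and sizes are placeholders, and exceeding the limit makes the pod a candidate for eviction:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-user
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      resources:
        requests:
          ephemeral-storage: 1Gi   # counted like CPU/memory requests
        limits:
          ephemeral-storage: 2Gi   # exceeding this can trigger eviction
```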
And the goal is to remove this device-specific knowledge from the kubelet and require that device-specific knowledge to live out of tree now. This means that a cluster administrator knows that every device is pulling data from the specific vendor agent, is doing so in a compatible fashion, and doesn't require a myriad of implementation differences. Device vendors can now provide tools that live out of tree and aren't gated by the Kubernetes release cycle.

Next, PID limiting. PIDs are a fundamental resource on Linux hosts. It's trivial to hit the task limit without hitting any other resource limit, which causes instability of the host machine at that point. So administrators require mechanisms to ensure that users and their pods can't induce PID exhaustion that prevents host daemons, such as the runtime and the kubelet, from running. In addition, it's also important to make sure that PIDs are limited among pods to ensure they have limited impact on other workloads on that particular node. To enable PID isolation among pods, you can use the pod PIDs limit feature, which no longer has to be manually enabled here. At the same time, PIDs can be reserved through node allocatable, a well-established concept inside the kubelet, and this allows the isolation of user pod resources from host daemons at the kube-pods cgroup level that parents all end-user pods.

All right, moving on to SIG Scalability: adding more structure to the event API, and also changing the deduplication logic so events aren't overloading the cluster. This is an alpha improvement, really on the performance side of things, that you're going to see inside the event API now. There's relatively wide agreement that the current implementation of events in Kubernetes is problematic. Events are supposed to give an app developer insight into what's happening with their particular application, and an important requirement for the event library is that it shouldn't cause performance problems in the cluster at the same time. The problem is that neither of these requirements has actually ever been met. Currently events are extremely spammy — an event is emitted, for example, every few seconds when a pod is unable to schedule — or they're set with unclear semantics, because developers haven't had a shared understanding of what the quote-unquote reason for taking an action or emitting a particular event should be. This effort has two main goals: first, to reduce the performance impact that events have on the rest of the cluster, and second, to add more structure to the event object, which is the first and necessary step toward making automated event analysis possible in the future.

Moving on to SIG Scheduling. There are a lot of features being added to the Kubernetes scheduler, and this is a new framework being added as alpha. As new features are added to the scheduler, the code base becomes very large and the logic becomes more and more complex, and the more complex the scheduler becomes, the harder it is to maintain. That means it's harder to find and fix bugs, and users running some sort of custom scheduler have a hard time catching up and integrating these new changes.
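Going back to PID limiting for a second, here's a minimal sketch of setting a per-pod PID limit through the kubelet configuration; the value is a placeholder, and which feature gates you need depends on your exact release:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podPidsLimit: 1024        # maximum number of PIDs any single pod may use
```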
And so the current Kubernetes scheduler provides webhooks that, as we talked about earlier, allow you to extend some of its functionality. However, that can also be limiting, as it hinders building high-performance and versatile scheduler features. So now the scheduling framework defines new extension points and Go APIs inside the Kubernetes scheduler for use by plugins. Plugins add scheduling behaviors to the scheduler, and these are now included at compile time. The scheduler's component config will allow plugins to be enabled, disabled, and reordered. People building custom schedulers can write their own plugins out of tree and compile a scheduler binary with their own plugins included, while keeping the scheduling core simple and maintainable. If you go and check out the link on this issue, there's actually somebody who has already written their first custom scheduler that uses the scheduling framework, so you can go and check that out on this link.

Next, adding a non-preemption option to priority classes. Priority classes are going GA, or generally available, in 1.15, and they impact the scheduling and eviction of pods. If you're unfamiliar with this: pods are scheduled in descending priority order, and if a pod can't be scheduled due to insufficient resources, lower priority pods will be preempted to make room. This enhancement makes preemptive behavior optional for a priority class, and by doing so it adds a new field to priority classes that is also populated into the pod spec. If a pod is waiting to be scheduled and it doesn't have preemption enabled, it will not trigger preemption of other pods. Batch workloads typically have a backlog of work with unscheduled pods, and higher priority workloads can be assigned a higher priority via the priority class, which may result in pods with partially completed work being preempted. Adding non-preemption allows users to prioritize the scheduling queue without having to discard partially completed work at the same time. So this is, again, adding this preempting field into the pod spec and the priority class: if preempting is true for a pod, then the scheduler will preempt lower priority pods to schedule this particular pod, which is the current behavior; if preempting is false, a pod of that priority will not preempt other pods. Setting the preempting field in the priority class provides a straightforward interface and allows resource quotas to start restricting preemption.

All right, SIG Storage — we're coming down to the end here, folks. First, the online resizing of persistent volumes. If you're a database owner, or if your particular application is simply running out of room, there needs to be a capability to resize the volume on demand while it's still in use. This is critical for applications that support many concurrent users but perhaps haven't taken advantage of cloud native database types. So if you're a MySQL user and you're running out of space, you want to dynamically increase the size without losing data while staying online; if you use a ReadWriteMany file system like GlusterFS, you can already resize a lot of things without taking them offline all the time.
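Here's a hedged sketch of what the non-preempting option looks like on a priority class, plus a pod that uses it; the names and value are placeholders, and in 1.15 this sits behind the NonPreemptingPriority feature gate:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-batch
value: 100000
preemptionPolicy: Never        # wait in the queue instead of evicting lower priority pods
globalDefault: false
description: "High priority batch work that should not preempt running pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: batch-job
spec:
  priorityClassName: high-priority-batch
  containers:
    - name: worker
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
```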
However, this feature enables users to increase the size of a persistent volume claim that is already in use and currently mounted. The user updates the persistent volume claim to request a new size, and underneath we expect the kubelet to resize the file system for that persistent volume claim accordingly.

Next, environment variable expansion for subpath mounts is graduating to beta. This feature allows a user to — should I say, dynamically — generate the directory paths used when mounting volumes. The subpath feature creates directories on demand, but the names assigned to those directories are static, so supporting downward API variables provides a way to share storage as well. In the example here on the screen, the fieldPath and the subpath expression combine, and over time the host storage ends up with something like what you see underneath it, without the containers needing to change any of their logging logic for how they tie themselves to these particular volumes.

Next, the in-tree storage plugin to CSI driver migration. This is something that's going to be continually worked on for the next few releases. As everyone is probably aware, no more pull requests are being merged into k/k for any in-tree storage features or storage changes; the initiative is to move everybody over to CSI, the Container Storage Interface, and away from the in-tree drivers, and that is the current path forward right now. There was more continued work this cycle to migrate some of the internals of the in-tree plugins to start calling CSI plugins while maintaining the original API, such as CSI volume resizing, and also translating storage class objects at the same time.

Now, the execution hook — I'm sorry, I believe this slide is incorrect up here, because it should actually be talking about the volume snapshot feature. I apologize if there's a copy-paste error on my end right here, but I'll tell you a little bit about what's happening inside the volume snapshot feature. As you know, it allows creating and deleting volume snapshots, and the ability to create new volumes from a snapshot natively using the Kubernetes API. However, application consistency is not guaranteed, and a user has to figure out how to quiesce the application before taking a snapshot and unquiesce it after taking the snapshot. For those of us familiar with applications that need to quiesce and flush their data down to disk, this is a pretty big feature. So an execution hook is now being introduced to facilitate this sort of quiescing. There's an existing lifecycle hook inside the container struct, and that lifecycle hook is called immediately after a container is created or immediately before the container is terminated. The proposed execution hook here is not tied to the start or the termination time of the container; it can be triggered on demand by callers, and its status can be updated dynamically as well.
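Going back to the subpath environment variable expansion for a second, here's a hedged sketch of the downward API plus subPathExpr pattern; the volume, paths, and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: logger
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /logs/hello.txt && sleep 3600"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name     # downward API value
      volumeMounts:
        - name: workdir
          mountPath: /logs
          subPathExpr: $(POD_NAME)         # per-pod directory created on demand
  volumes:
    - name: workdir
      hostPath:
        path: /var/log/pods-demo           # placeholder host directory
```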
So this proposal is introducing an API, in the form of an execution hook, to dynamically execute a user's commands in a pod's container, or a group of pods' containers, with an execution hook controller to manage the hook lifecycle. This execution hook provides a new mechanism to trigger hooks or commands inside containers for any use case you could possibly think of. So I guess maybe the slide was halfway right; I apologize for maybe throwing some confusion in there.

All right, so that's it for storage. So what's coming up next in version 1.16? We are currently already four weeks into the 1.16 release process. The enhancement freeze is set for July 30th. If you're curious about what's going to be new, what's graduating to stable, what's moving to beta, what's on track to be kicked out, and so on, you can use the bit.ly link that's located here and go check out that particular spreadsheet. The GA target for Kubernetes 1.16 is set for September 16th. As usual, this is a moving target; it's based on a lot of factors in the release process, from measuring CI signal to making sure there are no bugs that we want squashed as we go into this as well.

All right, so I know we have a few more minutes left. I'm going to try to read through some of these questions here and try to answer them; let me scroll back here. Does PVC cloning lock up the underlying disk? Robert Johansson asked this. I cannot answer that for you completely, because I'm not part of the SIG Storage folks who created this. However, I would dig into the documentation and check it out. If not — anybody that ever has a question can always go to the CNCF Slack or the Kubernetes Slack. Or should I say: go to the Kubernetes Slack, find the SIG that you're interested in, and you can ask those questions there. A lot of the maintainers are located inside there. If you want to get any more information, or you do want to get involved, I'd encourage you to just Google Kubernetes SIGs; you can get yourself involved in those individual SIGs, be a part of the community, and understand exactly what's happening in those particular aspects of Kubernetes that you find interesting.

I'm not too sure what Siegfried was asking here about not being able to add a DaemonSet or something similar. Paul Werner asks: what is the typical use case for a dynamic HA cluster? So today, when you are creating a Kubernetes cluster, you have your controller node and you have your worker nodes. Most of us who want to run in a higher-level production scenario want the ability to fail over, because if that controller node goes down, the API server and a lot of the components go with it, and you can't make any changes. Your workloads will still be running, because they're on the worker nodes that are actually running your workloads; however, the controller is needed to make any changes to the cluster, run any new pods, or do anything like that. So what you want to figure out is: how can I make my controllers highly available? That's been possible for a while — there are plenty of guides today that you can go read on how to do certificate sharing, TLS setup, and load balancer setup amongst multiple controller nodes.
The documentation is already on the Kubernetes website. However, what is being introduced inside of kubeadm for HA allows you to do it in a one-liner, very, very quickly and very, very easily, without having to perform all of those manual steps. So that is the beta feature that's now being introduced inside that particular tool set.

Let's see. John Owings asked: does resize require a pod restart? I have not tried this myself, so I cannot answer that. Once again, I would encourage you to check out the documentation or try it yourself and see what you can find out about it. Of course, everyone is always happy to accept documentation help coming into all of this as well. Let's see — someone asked: did NetApp contribute its Trident management software for PVCs to the open source community? I have no idea; I can't answer that for you.

Myer asks: does the dynamic HA cluster scale the etcd cluster, and if yes, what will the impact be on its performance? So yes, as I mentioned in here, the dynamic HA cluster will also scale the etcd cluster. For every new controller node you add, if you did not specify that you want an external etcd cluster to be managed, it will create another etcd pod on that controller node, and it will be added into the existing etcd cluster. This is running in a stacked configuration, or a stacked architecture. As for the impact on performance, unfortunately I can't answer that question for you; I can only speak to the architecture changes here.

Lucas asks: are there any breaking changes to the Ingress resource in 1.15? As far as I know, no. But if there's something that you're very passionate about, I would encourage you to go and ask within SIG Network. Oh, so Siegfried was asking about the pod disruption budget and why it couldn't just be added to a DaemonSet or something similar. I can't answer that for you; it was not my feature. I am just the cat-herder of the enhancements, so I can't go in depth on each one of these individually, just some of the ones that I've had a little bit of touch with. But again, I would encourage you to reach out to the owners and ask them about the idea. And as well, I can tell you for a fact that this pod disruption budget went through a KEP, so you can go to the kubernetes/enhancements repo, search the KEPs, and you can see everything inside there — documentation, architecture, code, everything that's going into that particular feature.

Okay, any other questions? Kenny, we have a couple of questions in the chat that somebody couldn't post to the Q&A. There's one down there: what is the migration path to 1.15? And I believe that was in response to Paul's question about the typical use case for a dynamic HA cluster. I cannot give you the operations manual on how to upgrade from 1.14 to 1.15 using the new kubeadm tooling. I would always encourage you to test with a test cluster or something like that first. As I mentioned, the kubeadm file has gone through some configuration changes; it is supposed to maintain backwards compatibility, so I would just encourage you to check that out and try it before you move on to a production cluster. There's another question: is it available on gcloud like kubectl is? I can't answer that for you. I don't push the bits for kubeadm, so I couldn't tell you exactly if it's available on a particular build site. Sumon asks: does Minikube currently support 1.15?
I don't push Minikube and I don't mess with it, so I'm sorry, I can't tell you exactly what the maintainers of Minikube are actually using. That said, 1.15 has been out for about a month now, so if Minikube is pulling from the latest stable release of Kubernetes, it will pull 1.15. Okay, any other questions? Well, you are welcome. Anyway — Kenny, thanks for a great presentation. For everybody joining us, the webinar recording and slides will be online later today, and we look forward to seeing you at a future CNCF webinar. Have a great day. Thanks, Kenny. Thank you all.