Good morning, good afternoon, good evening, depending on where you are, and welcome to KubeCon. My name is Maciej and I'm joined here by Ken and Janet. We are going to give you a quick update on what SIG Apps has been doing over the past year.

First of all, let's start with what SIG Apps is responsible for. In short, we are responsible for deploying and operating applications in Kubernetes. That means all the workload controllers and everything around DevOps, CI, or basically running applications. If you're interested in showing off a cool application or feature that you've built, whether that's a third-party controller or an external tool that uses the Kubernetes API, SIG Apps is the place to present that kind of work.

Like I mentioned, Janet, Ken, and myself are the chairs of SIG Apps, and we meet every other Monday. Our next meeting will be on October 18th. The call happens at 6 p.m. Central European time, which is noon Eastern time or 9 a.m. Pacific time. We also hang out on the Kubernetes Slack; #sig-apps is the channel, as you might expect. And we maintain an email group, so you can always reach out to us and we will be able to answer your questions.

We have some leadership changes. Since the last time we updated you, we have a new chair: Maciej. And we have two emeritus chairs, Adnan and Matt. Thanks, Janet.

So the two features that we're taking to GA are CronJob and PodDisruptionBudget, both graduating to stable. PodDisruptionBudget is kind of the odd duck in the SIG Apps portfolio, but it is important for managing intentional and unintentional disruptions during node maintenance or rollouts. And CronJob is the last piece of the workloads API that's still in beta; it sits under the batch portion of the API. So getting those two to stable is the last step toward having a fully GA workloads API surface, and we're really happy to be able to provide that to the community.

Moving on to things we're working on right now: maxSurge for DaemonSets and minReadySeconds for StatefulSets. One of the limitations in DaemonSets today is that the controller tries to keep exactly one pod per node during rolling updates, and this can hamper the availability of administrative or infrastructural workloads running on the node. What maxSurge will allow you to do is launch a new pod before killing the old pod; a sketch of that configuration follows below. This adjustment to the rolling update strategy is still going to be sensitive to scarce resources, things like GPUs or ASICs that a DaemonSet might use, so that's something you're going to have to manage yourself. But we found via a survey that it's ultimately better to provide this to the community and let people manage it themselves than to say that because scarce node resources are hard to manage, we shouldn't touch this at all. So we're moving forward with that.

minReadySeconds for StatefulSets is really just about polishing the workloads API surface and providing the same readiness functionality for StatefulSets that we already provide for the other controllers. If you're familiar with Deployments and DaemonSets, they all have a minReadySeconds field that lets the controller determine the availability of the workload during a rollout or rolling update.
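As an illustration of the maxSurge behavior described above, here is a minimal sketch of a DaemonSet opting into surge-based rolling updates. The field shape follows the upstream design (maxSurge requires maxUnavailable to be 0), but the feature was still graduating through feature gates at the time of this talk, and the names and image here are invented for the example.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent          # hypothetical name, for illustration only
spec:
  selector:
    matchLabels:
      app: node-agent
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # bring up the replacement pod on a node first...
      maxUnavailable: 0     # ...and only then terminate the old pod
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: example.com/node-agent:v2
```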
Next, random ReplicaSet downscale. The very deterministic order in which ReplicaSets scale down pods is quite sensitive to failure domains failing concurrently with upscales and downscales. By randomly selecting which pods we terminate during a downscale, we reduce that sensitivity to concurrent infrastructure failures and make sure you still get a good distribution of pods across your failure domains.

And ReplicaSet pod deletion cost. This allows you to indicate which pod would be the best one to delete during a ReplicaSet rolling update or downscale. The thought behind this is that sometimes there's an optimal pod to select during a downscale, and there's currently no way to communicate this to the controller. For instance, if there's a pod that hasn't been running for very long, so it hasn't yet been able to pull files from a filesystem and load them into DRAM, it would be optimal to delete that one rather than a pod that's been running for a long time, is currently serving traffic, and has all the data it needs to serve traffic.

Oh, I think I skipped one slide. Okay, going back.

Within the batch area, as Ken mentioned, we recently promoted CronJobs to GA, but on the Job side of the batch area there are three important improvements we're working on. The first two are heavily related to external orchestrators. The first allows suspending Jobs, which can be useful even for regular users: if you know that, for example, your cluster is less busy overnight, you can either suspend and resume manually or have an external operator manage it, so that your Job does its work overnight instead of during the daytime.

The next improvement is indexed Jobs. Previously, when Jobs ran, we didn't have any ordering or indexing of the pods being executed, so the order was never guaranteed. With indexed Jobs, every single pod is assigned a particular completion index, so you can explicitly assign specific parts of the task to each pod of the Job. A sketch combining suspension and indexing follows below.

And finally, the biggest one, which has been with us basically since the initial days of Jobs. Currently, a running Job depends on its pods remaining on the cluster to calculate its status. That's fine for Jobs with five, ten, even up to a hundred pods. But when your Job becomes pretty big, say thousands of pods or more, the problem is that those pods have to stay present on the cluster, even in a completed state, until the Job completes, because the Job controller relies on the pods being present to properly calculate its status and know when the Job actually finishes. This improvement, which is being worked on by Aldo, changes the status calculation to allow pods to be removed even before the Job completes. We're adding a special finalizer that marks the pods as already accounted for, after which they can be safely removed, by Kubernetes garbage collection for example.
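As a rough sketch of the suspend and indexed-Job features just described, here is what a suspended, indexed Job manifest looks like; the name, image, and command are invented for illustration. The ttlSecondsAfterFinished field previews the TTL-after-finished cleanup discussed next.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: sharded-work            # hypothetical name, for illustration only
spec:
  suspend: true                 # created suspended; flip to false manually or
                                # via an operator when the cluster is idle
  completions: 5
  parallelism: 2
  completionMode: Indexed       # each pod receives a JOB_COMPLETION_INDEX
                                # environment variable (0 through 4 here)
  ttlSecondsAfterFinished: 3600 # cleaned up an hour after finishing (see below)
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing shard $JOB_COMPLETION_INDEX"]
```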
So for our future plans, we have a few new features coming and a few going to the next phase of release staging. The first one is TTL-after-finished, which is planned to go GA in 1.23. TTL-after-finished is for cleaning up Jobs. If you have a CronJob managing your Jobs, you don't need to worry about cleaning them up, but because a Job is not a long-running process, once it finishes you'll want an easy way to clean it up. The TTL-after-finished feature lets you specify how long after completion you want a Job to be automatically cleaned up by Kubernetes.

The second feature, minReadySeconds in StatefulSets, is planned to reach beta in the next release. StatefulSets don't have the minReadySeconds feature right now, but other controllers such as Deployments and ReplicaSets have a minReadySeconds you can set to say: I want to wait this many seconds before declaring the pod ready. With this, we add that functionality to StatefulSets and align them with the other workload controllers.

The next feature, auto-removal of the PVCs created by a StatefulSet, is coming in the next release. By default, a StatefulSet doesn't clean up its PVCs after you delete the StatefulSet, and that's for safety reasons: you don't want your data removed by default. With this feature, you can opt in to have the storage cleaned up along with the StatefulSet when you delete it.

The last one is maxUnavailable for StatefulSets. With a StatefulSet today, you can do rolling updates, but the default policy is to update one pod at a time. If you want a faster rollout, you'll want this maxUnavailable feature, which lets you specify the number of pods you're allowed to take down at once. This doesn't affect the existing ability to scale up in parallel, creating multiple pods at the same time when you scale up the StatefulSet. A combined sketch of these StatefulSet fields follows below. And that's all for the future plans.
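Putting the StatefulSet plans above together, here is a minimal sketch of what opting into these fields might look like. These fields were alpha or still in flight at the time of this talk, so treat this as an assumption-laden illustration rather than the final API; the name and image are invented.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                       # hypothetical name, for illustration only
spec:
  serviceName: db
  replicas: 3
  minReadySeconds: 10            # wait 10s after Ready before counting a pod
                                 # as available, as other controllers do
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete          # opt in: remove PVCs when the StatefulSet
    whenScaled: Retain           # is deleted, but keep them on scale-down
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2          # allow 2 pods down at once for faster rollouts
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: example.com/db:v2
```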
And let me hand over to Maciej. So there are more future plans that we just recently started working on. If you have worked with any of the controller APIs before, you've probably noticed that each and every single controller has its own way of reporting status: that it is ready, that it is pending, that an upgrade is going on, or that it is not available yet. The problem with this approach is that when you're trying to orchestrate the controllers, you always have to write your own logic per specific controller. On top of that, you can't build generic automation on top of them, and it's even hard from a user perspective to figure out at which point in its lifecycle your controller currently is.

Our goal is to change that and to have a unified set of workload statuses, or at least unified conditions for starters, so that it doesn't matter whether you're looking at a DaemonSet, a StatefulSet, a Deployment, or a Job: you can easily figure out at what point in the lifecycle a particular controller currently is. We're hoping this will work in both ways. It will help the automation built on top of the Kubernetes controllers, because with unified conditions you can easily figure out where things stand. And additionally, it will make it very easy for newcomers working with controllers for the first time to jump from one controller to another and figure out where the controller is and what problems it might currently be struggling with.

Currently we have a Kubernetes Enhancement Proposal in place and we are working on it. We're trying to gather as much feedback as possible about the possible statuses and conditions of the workload controllers. We're looking at all the currently existing controllers, both the apps controllers that Janet and Ken were talking about and the batch ones, and we're trying to consolidate and figure out the best approach to unify those statuses so that we can easily express the lifecycle. A sketch of the kind of conditions involved appears at the end of this transcript.

We're hoping to land this sometime in the next couple of releases. It will definitely take a little longer, because even once the enhancement is ready, probably planned for 1.24, the implementation will take time. And as we implement these features, we will be updating the enhancement based on the implementation: on one hand, yes, we will go through all the controllers and gather all the data, but at the same time, when we're actually implementing them, a lot of edge cases will probably appear that we'll have to take into account. So if you're interested in that, definitely reach out to us. We want to hear from you about how we could improve the current statuses, and if you're interested in helping, we're more than happy to welcome people to help us with that as well.

Okay, I think that's pretty much all when it comes to the SIG Apps update. As I said before, we meet bi-weekly on Mondays. Feel free to shoot us an email or reach out to us on the Slack channel. We are more than happy to help you with your PR reviews, with issues, or with eventual improvements to the controller APIs. And we're going to take your questions now.
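To make the unified-conditions idea above concrete, here is the shape of the conditions a Deployment already reports today; the reason and message strings shown are illustrative. The KEP discussed in this talk was still being drafted, so how conditions like these would generalize uniformly across DaemonSets, StatefulSets, and Jobs is an open design question, not a settled API.

```yaml
# Shape of today's Deployment status conditions (values illustrative).
# The unified-status effort aims to give every workload controller a
# comparably uniform set of conditions.
status:
  conditions:
  - type: Progressing
    status: "True"
    reason: NewReplicaSetAvailable
    message: ReplicaSet "web-7d4b9c" has successfully progressed.
  - type: Available
    status: "True"
    reason: MinimumReplicasAvailable
    message: Deployment has minimum availability.
```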