Okay, good morning once again. Hello, everyone. Good morning, good evening, good afternoon, depending on where you are. Welcome, everyone on site and everyone watching us virtually. My name is Maciej and I'll be your host today. Unfortunately Janet and Ken, who are our co-chairs for SIG Apps—sorry, I'm still in the mode from literally 24 hours ago when I was presenting for SIG CLI—Janet and Ken could not be here, but they are also leading SIG Apps with me, ensuring that the organizational side of things is moving forward as well as leading the technical direction for the SIG.

Where can you find us? Kubernetes has a Slack, and #sig-apps is where we hang out. There's a mailing list; if you have any questions, problems, or issues, feel free to reach out to us. There are bi-weekly meetings happening at 6 p.m. European time; the other time zones are also mentioned on this slide. Feel free to add topics to the agenda if you have something important to discuss with us, whether that's PR reviews or specific issues. Don't be afraid to reach out and ask any kind of question, and if you want to become a contributor, that's even more welcome.

What does SIG Apps do? Primarily, we are responsible for ensuring that you can deploy and operate your applications on any conformant Kubernetes cluster. I linked the charter and our annual report, which goes into the details of what we actually did, but on the following couple of slides I'll try to highlight the major initiatives that have been ongoing for the past two releases, since the last update we gave at KubeCon in Los Angeles, as well as what we are going to do over the next couple of releases.

Most importantly, the GA features: these are the ones you can safely use in production, and we fully support you doing so. First, TTL-after-finished. It's a nice little controller for Jobs: a Job is meant to run to completion, and then it's done, which means the resources held by a Job—even if that's just the storage in etcd—would otherwise be held there forever. That particular small controller, which had been in beta for quite a while, has now been promoted to GA. There is a tiny spec field on a Job where you specify how long it has to live after the Job has completed, after which the Job will be removed. So you can think about it like a garbage collector for Jobs.
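For illustration, here is a minimal sketch of that field; the Job name, image, and command are made up for the example:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: cleanup-demo              # illustrative name
spec:
  ttlSecondsAfterFinished: 100    # garbage-collect the Job (and its Pods) 100s after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "echo done"]
```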
The other two additions come from our friends in the Working Group Batch, which I'll talk about in a minute. The first one introduces Indexed Jobs. What are Indexed Jobs? Normally when you run a Job, it ensures that it runs however many Pods you specify for your particular task—five, ten, whatever. An Indexed Job is like a StatefulSet in the Job world, in that every single Pod gets its own index. If the Pod for a particular index fails, it will be respawned with that same index attached. So you can run, for example, embarrassingly parallel jobs, or address particular Pods within a Job through a headless Service.

The other one, also coming from Working Group Batch as I mentioned, is a tiny improvement to the Job API that lets you suspend a Job. This is useful if you have an external controller managing your Jobs: when you suspend a running Job, the controller will remove all the currently running Pods, and upon unsuspension the Job will recreate whatever was left to be done. It's similar to what CronJobs allow, because a CronJob also has a notion of pausing, but in the case of a CronJob the pause means that no Jobs are triggered until you remove the suspension.

On to the beta features. Again, quite a lot of the beta work around the Job API has been coming from the Working Group Batch, and the first one is probably something I've been trying to address since very early days. If you've ever worked with Jobs, the way Jobs currently work is that the controller spins up the requested number of Pods, but the way it calculates which Pods completed requires the Pods to remain on the cluster. That's fine if your Job is running, say, five, ten, up to a hundred Pods. But if you run a thousand Pods or more, those Pods lingering around in your cluster eat serious amounts of your resources, which is a pain. So here's what Aldo did: we started putting a finalizer on the Pods created by the Job controller, and as we remove the Pods we also remove the finalizer and account for the numbers properly, so we can track completions without requiring the Pods to stick around. Aldo mentioned yesterday during his presentation that this feature has been going back and forth several times; there have been multiple issues with it and we've struggled with it. So if you hit something like that, please ping us with any issues before we reach GA, which will probably happen in a release or two. Reach out, let us know—drop by our Slack channel or one of our meetings—we're happy to hear from you.

Another one, for StatefulSets, ensures that when you are adding new Pods to your StatefulSet, you can specify the minimum time a Pod has to be ready before it is considered available to serve traffic. Something similar already exists for Deployments and DaemonSets; StatefulSet was not accounted for, so we want to align all the controllers on this as well. Another tiny addition from the Working Group Batch exposes in the Job status how many Pods in a Job are ready.

Finally for Jobs: as I mentioned before, we have an option to suspend a Job, and by default Jobs do not allow you to modify many fields—only parallelism can be modified. We figured out that for certain use cases you would like to be able to modify the scheduling primitives for the Pods in a Job, so we now allow modifying those scheduling directives, but only on a Job that has never been unsuspended. So you can create a Job suspended by default and, before you actually start running it, still modify its scheduling directives, which is handy for batch use cases.
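As a sketch of the Indexed Job mode described above (the name, image, and shard logic are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo          # illustrative name
spec:
  completions: 5
  parallelism: 3
  completionMode: Indexed     # each Pod gets a stable index, 0..4; failed indexes are retried
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        # the controller exposes the index via the JOB_COMPLETION_INDEX env var
        # (and the batch.kubernetes.io/job-completion-index annotation)
        command: ["sh", "-c", "echo processing shard $JOB_COMPLETION_INDEX"]
```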
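And a sketch combining Job suspension with the mutable scheduling directives just mentioned; the name and node label are made up:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: queued-demo        # illustrative name
spec:
  suspend: true            # created suspended: no Pods run until this is flipped to false
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      # while the Job has never been unsuspended, scheduling directives such as
      # nodeSelector, tolerations, and affinity can still be updated, e.g. by an
      # external queueing controller deciding where the work should land
      nodeSelector:
        pool: batch        # illustrative label
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo working"]
```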
On to the alpha features; there are quite a few of them. First, maxUnavailable: if you've ever worked with a StatefulSet and rolled out a newer version, you know that we try to roll Pods one by one. There are use cases where you would actually be okay rolling more than one at a time, and maxUnavailable allows exactly that: you can specify that more Pods within a StatefulSet will be moved towards the newer version at once.

Similarly for StatefulSets, Mark added an option to decide what happens to your PVCs. Previously, PVCs were never deleted when a StatefulSet was either scaled down or removed; now, for both of these operations, you can specify whether the PVC should be deleted, or retained for a little bit longer. So you can now tie the lifetime of the storage backing your application to the lifetime of the StatefulSet, or keep them independent.

Another thing we've been talking about for quite a while: we finally had an initial idea of how we want to implement it, and we managed to put together a Kubernetes Enhancement Proposal—a document that describes how we want to implement a particular feature—for consolidating the workload controllers' statuses. If you've ever worked with Kubernetes, especially as a newcomer: you play with, say, Deployments, you think you know how the controllers work, and then you switch over to DaemonSets and you have to learn all over again how a DaemonSet presents its status and its progress. So we're trying, as much as possible, to align the statuses across all the controllers, so that once you learn one controller you can roughly figure out what's going on with any controller across the board, and on top of that you will be able to write higher-level controllers on top of the existing ones. That will probably take significantly longer than we would wish, because there are a lot of controllers and various edge cases that we're trying to reconcile into some common ground.

Another one we recently closed is the pod healthy policy for PodDisruptionBudgets. During one of the discussions a couple of months back, it turned out there was an issue with how we count Pods for a PodDisruptionBudget. Up until now, we count running-and-ready Pods and pending Pods equally; there is no way for you to say that only running Pods should be counted for a PDB. The pod healthy policy introduces a little more wiggle room into how PDBs count, or which Pods get counted, for the disruption budget.

And finally, a feature that has been requested multiple times since almost the early days when we introduced CronJobs—even back when they were called ScheduledJobs—which is time zone support. That is a feature we rejected a couple of years back for multiple reasons, including that it would have required all Kubernetes distributions to include a time zone database. Since the addition of the time zone database to the Go standard library, we can now safely say that any conformant Kubernetes can properly support time zones. There has been some fallout from adding it, though, because we put very strong validation around time zone names, and it turned out that macOS, with its case-insensitive file name handling—where names differing only in upper and lower case are treated as equal—fails our unit tests. So one of the requirements for graduating time zone support in CronJob to beta will be figuring out how to properly handle that time zone issue, primarily on Macs.
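To make these concrete, here is a sketch of a StatefulSet combining the minReadySeconds field mentioned earlier (beta) with the two alpha features above; names, labels, and sizes are all illustrative:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web-demo               # illustrative name
spec:
  serviceName: web
  replicas: 4
  minReadySeconds: 10          # beta: a new Pod must stay ready 10s before it counts as available
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 2        # alpha: roll two Pods at a time instead of one by one
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Delete        # alpha: drop the PVCs when the StatefulSet is deleted...
    whenScaled: Retain         # ...but keep them around on scale-down
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```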
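For the PDB change, the exact API shape was still being settled in the enhancement proposal at the time of this talk, so the following is only a sketch of the idea; the field name shown is the one the proposal converged on, and the PDB itself is made up:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb                # illustrative name
spec:
  minAvailable: 3
  # sketch: let evictions proceed for Pods that are running but not ready,
  # so unhealthy Pods no longer block disruptions
  unhealthyPodEvictionPolicy: AlwaysAllow
  selector:
    matchLabels:
      app: web
```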
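And a sketch of the CronJob time zone support (alpha, behind the CronJobTimeZone feature gate); the schedule, zone, and workload are made up:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report           # illustrative name
spec:
  schedule: "0 2 * * *"          # 2 a.m. ...
  timeZone: "Europe/Warsaw"      # ...interpreted in this IANA time zone instead of the controller's
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: report
            image: busybox
            command: ["sh", "-c", "echo generating report"]
```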
Okay, so what are we going to do over the next couple of releases? Primarily, as I mentioned, we would like to get all the current alpha and beta features to completion. If there is an alpha or beta feature you are interested in, want to help with, or can actually test and give feedback on, feel free to do so: drop information on our Slack or our mailing list about the issues you're having and what the problems are. Feedback is super important for us, to ensure that before we graduate something to general availability, the feature is rock solid, stable, and free of major issues.

On top of that, there is a general sentiment towards pushing features to GA, or eventually they will be removed entirely from Kubernetes. There is a separate enhancement proposal, merged I think last year, to prevent the perma-beta state of certain features. This is to ensure that we are moving forward, slowly but surely, and that something which was pushed to a certain level but hasn't progressed to general availability actually gets removed. So we want to make sure we're not trying to introduce too many features at any single point in time, but rather focus on completing the ones we already have in progress.

Secondly, there is an enhancement proposal about increasing the overall reliability of the Kubernetes project, and of the products based on it. There was a letter sent by the majority of contributors to the Kubernetes project announcing that we are raising the overall acceptance criteria for any pull request, whether it adds new features or fixes a certain area. So if your PR or your issue gets some pushback, it's not a personal thing: we're trying to ensure that everything is properly tested and verified before accepting it into core. That also means that if you are fixing an area that did not have a test before, the PR might get pushed back and you will be asked to add some tests, even though there were none before. Thankfully, in the majority of the controllers the unit, integration, and e2e testing is pretty good, but we still have gaps in certain areas.

Here's an example, something I was poking at just last week. For StatefulSets, the unit tests we had were just verifying that an error was thrown, without actually looking into what the error was. It turned out that, because of a misconfiguration—the test was lacking one always-required field—the test would always fail, and since everyone expected a failure, it always passed; it was always giving you a false positive. The author reached out to me, I started looking at it, and I asked them to do something similar to what we already have in the batch area, where we actually inspect each error for the field names and possibly also the type of the error. So StatefulSets—once Jordan approves it—will have somewhat more appropriate test cases. There are edge cases like that nobody was aware of, and it's pretty nice that someone was actually able to find this one.
So if you're into this, you are more than welcome to join us and help with any kind of contribution. Whether you're interested in improving our documentation—for all the controllers or any specific one—or you want to help with project management because you are a project manager, we are more than happy to have people helping in various places, whether that's managing the issues or managing the open pull requests. We still have a problem getting enough proper contributions and, most importantly, reviewers and then approvers in this area. I'm fully aware that going through the contributor ladder in the SIG Apps area is quite complicated, because the initial knowledge needed to ensure that every piece we change in the controllers is correct is not trivial, and it requires quite some time to dig into one controller, or several.

Personally, even now—and I get various messages on Slack asking whether I've looked at this or that PR—for every single PR I review in the controllers area, I usually double-check in a running cluster to ensure it doesn't break something. The scope of potential problems when modifying controllers is pretty big, so we need to ensure that anyone who can, and does, approve PRs in this area is 100% sure. And it's not only about a particular feature in isolation; it also has to make reasonable sense in the general direction we're trying to go with the majority of the controllers. So if you are waiting for PR reviews in the controllers area, I apologize; it might take some time. But always try to ping us on Slack, even if it has to be a couple of times. I've been notorious for responding to people a couple of weeks or even months after they ping me: at times like KubeCon my Slack just bubbles up with tons of notifications, and unfortunately I barely track my GitHub notifications at this point because it's impossible. So every time I review a PR, I ask the contributor to ping me on Slack when my comments have been addressed, so that we can move features along faster.

With that—oh yes, I almost forgot. A couple of months back, I think it was February, a group of people, mostly but not only from SIG Scheduling, decided they wanted to form a working group within the Kubernetes project devoted primarily to improving the overall batch interface. Think high-performance computing, any kind of data manipulation, AI/ML, or even CI flows—so tools like Kubeflow, or Spark, if you are running those. Quite a few such tools have grown around the Kubernetes core, because over the past couple of years we said that we will provide certain basic features but want to see what the community builds around core Kubernetes. So Aldo reached out to us in SIG Apps, and to folks from SIG Scheduling and SIG Node, and together we're trying to gather a group of people who are interested, and push certain features forward to improve the overall experience.
We meet every other Thursday at 4 p.m. European time; of course, there's a mailing list and a Slack channel devoted specifically to this. As you've already seen—and I mentioned it a couple of times—all the work around Indexed Jobs, the Job tracking, and Job suspension are contributions done primarily by the folks from the batch working group. So if that's something you're interested in, feel free to drop us a note. There was a session yesterday where Aldo introduced what Working Group Batch is and its use cases. I was hoping it would be the other way around, but unfortunately it happened yesterday; I put up a link, and I've heard that the CNCF is already uploading the videos from KubeCon, so you should be able to check the recording within a day or so, probably.

I think with that, I'm open to questions. Do we have any?

"Okay, I have two questions. The first question is: since reviewers need so much background, are there any mentoring opportunities?"

Right, that's a very good question, thank you. We've started a mentoring cohort in combination with SIG CLI, since I'm leading both the SIG Apps and SIG CLI efforts. The entry barrier for SIG CLI is significantly lower than for SIG Apps, and the two very frequently overlap. I'm hoping that after KubeCon—I already spoke with Paris, I saw her earlier today—we will slowly be putting something together where we'll have private Slack channels and a group of people we will be mentoring towards becoming reviewers and eventually approvers.

"And my second question is: what is your biggest need on SIG Apps?"

Contributors, I would say. Probably like all the rest of the people who spoke earlier today or earlier this week: pretty much every single open source project is struggling, and we're no different. Even just going through the open PRs—even if you don't have the reviewer or approver label, that doesn't mean you can't review PRs or double-check the issues reported by different users. Any kind of input like "I looked at it and was able to reproduce it" helps. Oftentimes users report a problem which is a very narrow edge case that is hard to reproduce, and it's hard for us to verify every single edge case: if there are, I don't know, 50 or more issues open and I have to spend half an hour or an hour per issue, not to mention a similar number of PRs where every single PR takes roughly the same amount of time or even longer, it just doesn't scale, and there are not that many of us. So any input where you say "I looked at it"—or even better, you double-checked it, or verified it's still a problem with the latest version—is valuable. Oftentimes people report issues that happened in an earlier version of Kubernetes, and something might have been fixed by a different change or is not a problem anymore. Any kind of information from all of you, like "yeah, I can easily reproduce that on the latest kind cluster," or "I did look at it on the latest kind and it doesn't appear to be a problem anymore," is more than welcome.

Okay, I don't see any other questions. I'll be around for a little bit, and we have the SIG Meet and Greet in slightly less than an hour, at one. I don't know where that is yet, but I'll figure it out—I'll check my schedule in a bit—and I'll be there as well.
So if you want to talk about SIG Apps or SIG CLI, I'm more than happy to do so. Thank you very much.