Welcome to another OpenShift Commons briefing. As we are wont to do on Mondays, we create a space for some of the upstream projects to have AMA introduction sessions. Today the kube-scheduler is the topic of the day, and Wei, from IBM, is going to tell us a little bit about who he is, introduce himself, and then give us an introduction to the Kubernetes scheduler. So Wei, take it away, and we'll have live Q&A at the end. Sure. Hi, everyone. I'm Wei Huang. I work for IBM, and I've been working on Kubernetes for a while. For the last two years I've primarily worked upstream in SIG Scheduling, where I'm now a co-chair, so I may know a little about scheduling. Today I'm going to give you an introduction to the Kubernetes scheduler. Here is the agenda. I will go through, at a high level, what kube-scheduler is, what kube-scheduler does, and how the scheduler works, and then what the latest developments are on the scheduling side. Because we have been going through some component config API changes in recent releases, I also want to highlight some configuration changes you should be aware of. And the last part is how to contribute to the Kubernetes scheduler. OK, the first part: what the scheduler is and what the scheduler does. Basically, I can explain what the scheduler does in one sentence: it does some magic to find the best node for an incoming pod to be placed on. For example, in the upper image there is a pod whose nodeName field is nil; it doesn't have a node assigned yet. The scheduler comes onto the stage, does some magic, and finds the best node for the pod. You can see the result of scheduling is that it picks the kind-worker2 node for the pod. I have a demo in a bit, so no worries. And here is the basic workflow, not only for the scheduler but for all the Kubernetes components involved. Let me do a quick demo of this; I suppose you can see my screen, right? I have a kind cluster with three workers. Right now there are no pods here. I'm going to do something bad: I'm going to stop the kube-scheduler and the kube-controller-manager. I have two clusters here, and I'm going into this control plane node; because I'm using kind, inside it's actually a kubeadm cluster, so all the static pod manifests are listed here. I'm going to move the scheduler manifest out and also move the controller-manager manifest out, so we should see the scheduler and the controller-manager disappear; only the API server is left. OK. Then let's try to deploy a Deployment. Why a Deployment? Because I want to show you how the controller manager works. So I deploy it. What you can expect is that the Deployment API object will of course be created, but there will be no pod backing the Deployment. Why? Back to our slides: once you create a Deployment, it goes to the API server, which does some basic checks and persists the API object, in this case a Deployment. Then the controller manager comes onto the stage and tries to get the Deployment object from the API. But here we intentionally shut the controller manager down, so no pod gets created; you can see there are no pods. The Deployment manages a ReplicaSet, so there's no ReplicaSet either. OK, how do we fix that? Let's move our controller-manager manifest back.
It takes a few seconds for the controller manager to come back. Once it's back, it needs to do what we internally call a cache sync, so you may need to wait a moment for it. While we're waiting, let's go back to the diagram. The controller manager takes over, gets the Deployment event, and internally creates the objects it needs to ensure the number of running pods matches the desired replica count. For example, the controller manager checks which ReplicaSet is there and how many pods are managed by that ReplicaSet, and reconciles until the desired state is reached. OK, let's go back. You can see that now there is a ReplicaSet created and also a pod created, and it's Pending. Why is it Pending? Because the scheduler has been shut down, so there is no component doing the work of placing the pod onto a proper node. That explains what the scheduler does. Now let's move the scheduler back. You can see the kube-scheduler is back up after a few seconds and watching for pods. As expected, the scheduler picks up the incoming pod, looks at the state of the whole cluster, and finds the best node for it. You can see the scheduler picked kind-worker, the first worker, for the pod. Let's take a look at the spec. The most important field, as I mentioned, is spec.nodeName. Once the scheduler sets nodeName to kind-worker, its job is done. Next, the kubelet comes onto the stage. The kubelet on each worker node watches pods, and if a pod's spec.nodeName equals its own node name, it says, OK, it's me who takes care of this pod, and it calls the underlying container runtime to launch the containers. That is, basically, the pod lifecycle. So that is a very high-level view of what kube-scheduler does and how it interacts with the other Kubernetes components. Now, how does the scheduler work inside? Let's take a look. Basically there are two major phases before the final decision is made. The first is called filter. Before, we called it predicate, but I will use the two words interchangeably in this session; if you look at some slightly older documents they may use the word predicate, but predicate and filter are the same thing. Right now, because we are doing some refactoring, we tend to use the terms filter and filter plugin to describe this phase. Predicates are hard constraints that you apply to your workloads. For example, you may say: I want my pod to have two gigabytes of memory and one CPU core, and if no node fits, just leave my pod pending, don't do anything else. This is a hard constraint; it must be satisfied. All hard constraints are ANDed: if any one of them is not satisfied, the whole predicate phase fails, and your pod is put back into the pending queue inside the scheduler. Here is a link to the full predicate list; I won't click it here, you can check it out later. Basically it works like this: there is an internal queue inside the scheduler, and the queue pops pods one by one. Once a pod comes into the scheduler's main logic, the scheduler runs all the registered predicate functions against each node. Some nodes will fail and some nodes will pass; for example here, these three nodes pass. So by the end of this phase, the output is a three-node list: we know all three of those nodes can fit the pod.
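To make the hard-constraint idea concrete, here is a minimal sketch (my own illustration, not the exact manifest from the slides) of a pod requesting two gigabytes of memory and one CPU core; any node that cannot satisfy both requests is filtered out:

```yaml
# Illustrative only: resource requests act as hard constraints in the filter phase.
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "2Gi"   # nodes without 2Gi of free allocatable memory are rejected
        cpu: "1"        # nodes without one free CPU core are rejected
```

If no node passes, the pod stays Pending and goes back into the scheduler's internal queue, exactly as described above.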
Let me do a demo here. I'll use a very simple example which adds a nodeSelector to the basic demo template. Basically, the spec says: I want to schedule this pod onto a node which has the label kubernetes.io/hostname: kind-worker. That label will of course put the pod onto the first worker. Let's take a look. Yes, it works. This is basically how predicates, which is to say filtering, work: only the first worker satisfies the constraint, so the pod is placed on kind-worker; there is no other candidate. Now, if we change the spec a little and apply a node affinity to the pod, we give the scheduler some room to choose more than just kind-worker. Here it simply says: I don't want my pod to be placed on the third worker, so it can go to worker 1 or worker 2. A hint here: if you look at your spec and see a keyword like "required", it means a must, and it usually maps to the predicate, or filter, phase of scheduling. Let's take a look. As expected, it won't fit on worker 3; it may fit on worker 1 or worker 2. And this pod is placed on worker 2. You can see there's a bit of magic happening inside the scheduler: it tries to balance the load allocation, and because we already have a pod landed on the first worker, it tries to land this one on worker 2. Let's continue with the slides. After the filter phase, as I mentioned, the output is a list of good candidate nodes the pod could be placed on. But we want to go one step further and choose the best node for your pod to land on. Just as the filter phase used to be called predicates, this phase used to be called priorities; the latest term is simply score, because it just ranks the candidate nodes from the previous phase. All of these constraints are soft. That means things like: I prefer my pod to be scheduled to a node which has an SSD, or I prefer my pod to co-exist, or not co-exist, with some kind of pod, et cetera. We have some registered priority functions, each candidate node gets a score from each of them, then we sum the scores, find the node with the highest total, and place the incoming pod on the final winner. As I mentioned, there are user-defined API specs that impact the priorities; for example, a "preferred" stanza in a pod spec usually means this kind of priority. There are also some implicit default priorities. For example, if you create a pod that belongs to, say, a ReplicaSet, we will try to spread the pods belonging to the same ReplicaSet evenly. That is an implicit setting; you can change it by, for example, disabling that priority in the scheduler configuration YAML, but if you don't control the config YAML you may have to ask your admin to turn some of those settings off. The whole logic works like this: the pod has three nodes that passed filtering, then the priorities run on those candidates, node 3 gets the highest score with 90, and the pod is placed on node 3. That is basically the second phase of scheduling. Let's go back to the demo. There's another pod I'm going to create here, and again it must not be placed on worker 3, but this time it also carries a preference.
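The spec for this second demo looks roughly like the sketch below; it is reconstructed by hand rather than copied from the screen, and it assumes the default kind node names (kind-worker, kind-worker2, kind-worker3):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo
spec:
  affinity:
    nodeAffinity:
      # Hard rule (filter / predicate phase): never land on the third worker.
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: NotIn
            values: ["kind-worker3"]
      # Soft rule (score / priority phase): prefer the first worker over the second.
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values: ["kind-worker"]
  containers:
  - name: app
    image: nginx
```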
The "required" rule is the predicate. And I also put a priority here: it's preferredDuringSchedulingIgnoredDuringExecution. Because this is scoring, it usually comes with a weight; if not here in the pod spec, the weights of the built-in priorities are specified in the scheduler configuration YAML. And then I specify the soft requirement: I prefer my pod to be on worker 1 rather than worker 2. Let's keep looking. It works, and yes, the soft constraint has taken effect: you can see pod 2 goes to worker 1. So after these two phases, the whole scheduling cycle is done. Maybe you are wondering, what if there is no candidate at all? That means all the predicates fail, which is very possible if your cluster is under high load and the existing workloads have used up all the resources. How can we resolve this? Around Kubernetes 1.9 or 1.10 we introduced the concept of priority class, also referred to as priority and preemption; they are all the same thing. So let's take a look. Right now we have three nodes. Next I'm going to deploy a Deployment which has one replica and a low priority. If you don't specify spec.priorityClassName, the pod gets the default priority, which is 0. So I deploy this default pod with the lowest priority and a request of 3.5 CPUs, and let's see where it lands: this pod lands on worker 3. To continue the demo I need to do a little trick, which is to taint the other two nodes, because I want to show you what happens when a preempting pod comes in, and I want to ensure the preemptor can only land on the same node as the low-priority pod, so that preemption has to happen. So let me taint them now, so the preemptor won't be able to land on either of those nodes. All right. Then I deploy a preemptor, which also requires three CPUs (in total we only have four CPUs on each node), and this pod has a higher priority. The priorityClassName must match an existing priority class; by default there are two system-level priority classes, and the p1 priority class was created by me manually with a value of 1, which is larger than 0. So let's see how it works. Because we have tainted the other two nodes, the only place it can land is worker 3. You can see there's an interim state right now: the old pod, let me scroll up to show you its ID ending in dpzpx, is being terminated; that is the preemption. And why is a new pod being created? Because the controller manager got notified that its old replica is being deleted, so it tries to spin up a new one. But no worries, there aren't enough resources for that one to run, so it will just sit there Pending. And here you can see there's an interim status on the preemptor, the nominated node of kind-worker3. That's because we don't do the preemption in one scheduling cycle: once we find a lower-priority pod to preempt, we just set the nominated node name on the preemptor and continue to the next scheduling cycle. So that is a bit different, but this state is very transient, so you may not see it because it passes very quickly. You can see that the preemptor, of course, preempts the old replica and then lands on worker 3. This is basically how preemption works. As I mentioned, it relies on the pod's priority class, and the PriorityClass object has to be created manually.
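As a hedged sketch of the objects involved in this preemption demo (the name p1 matches what I described, but the exact CPU request is my reconstruction), the PriorityClass and the preemptor pod look roughly like this:

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: p1
value: 1                   # anything larger than the default 0 outranks unprioritized pods
globalDefault: false
description: "Demo class that outranks the default priority of 0"
---
apiVersion: v1
kind: Pod
metadata:
  name: preemptor
spec:
  priorityClassName: p1    # must match an existing PriorityClass
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "3"           # large enough that a lower-priority pod must be evicted to make room
```

When no node can fit the preemptor as-is, the scheduler looks for lower-priority victims, evicts them, and records its choice in the pod's nominatedNodeName status field, which is the transient state shown in the demo.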
The preemption logic goes like this. Preemption only kicks in when there is no candidate node at all, and then it is evaluated on every node: on some nodes, preempting some existing pods would make the incoming pod fit, and those become the preemption candidates. We also have an internal algorithm to compare the preemption candidates and find the best node, for example preferring to preempt the lowest-priority pods as much as possible instead of preempting higher-priority pods. Let me go back to my demo and untaint the two nodes. Now we have two available nodes again, so it's expected that the low-priority pod finds a proper node to be scheduled to. Let's move to the next slide. Here's a summary of the key features of the scheduler. If your workload asks for specific resources, whether default resources like CPU, memory, and storage or extended resources like nvidia.com/gpu or whatever extended resources you define, the resource-fit check happens in the scheduler. Also, if your pod has tolerations, matching those tolerations against node taints is done in the scheduler as well, along with nodeSelector, node affinity, pod affinity, pod anti-affinity, and, introduced recently, pod topology spread, which I'll mention a bit later. Preemption is also a built-in feature, not specific to any kind of workload but applied to every pod; if you want preemption, you define priority classes. And some storage-related features live here too. For example, if you have a StorageClass of the WaitForFirstConsumer type, then when you create a PVC the volume binding won't happen until the first consumer comes in; that step also happens in the scheduler, because it delays the PV-to-PVC binding to the scheduling phase. So those are basically the key features of the scheduler. Now, what's new? The first new feature we introduced is pod topology spread. Pod topology spread tries to resolve some limitations of the existing features, pod affinity and pod anti-affinity. Pod affinity says: I want my pods to be scheduled into the same topology domain as much as possible, but you can't control how many should be placed into the same domain and how many should not. Pod anti-affinity is the other extreme: you can only schedule one pod per defined topology domain. For example, if you define pod anti-affinity across two zones, then each zone can only have one such pod. In between these two extremes there is a lot of room where you want to manage the degree of balance or imbalance of your workloads, and that's why we introduced pod topology spread. OK, a quick demo here. Pod topology spread adds a new API stanza called topologySpreadConstraints, with a few fields. For whenUnsatisfiable there are two supported values. The first is DoNotSchedule, a hard requirement: if the constraint can't be satisfied, meaning the imbalance would exceed the maxSkew you describe, then don't schedule the pod. You can also make it a soft requirement, a priority, by using the value ScheduleAnyway. You can combine the two and use as many constraints as you want; right now I'll just demo a single constraint.
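Here is a minimal sketch of that single constraint, again my own reconstruction rather than the exact demo file, spreading the pods of one Deployment evenly across nodes by hostname:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spread-demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: spread-demo
  template:
    metadata:
      labels:
        app: spread-demo
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                        # no topology may exceed another by more than one pod
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule  # hard constraint; ScheduleAnyway is the soft variant
        labelSelector:
          matchLabels:
            app: spread-demo
      containers:
      - name: app
        image: nginx
```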
In this demo I want three replicas, and I don't want them to be imbalanced, so I use the strictest spread model, where one topology can exceed another by at most one; the pods should go evenly across the three nodes. And yes, it works like that. But what if I scale it to nine, will they still land on the three nodes evenly? Let's see if that's the case. One pod is Pending. Why is it Pending? Let's use kubectl describe pod to see what the scheduling error message is. It says 3 node(s) didn't match pod topology spread constraints. And there are actually four nodes involved here. Why four? Because the master node is also counted as a node, so at some point a pod would have to be placed onto the master node to keep the skew, but the master node, as you know, carries a special taint to keep incoming pods from being placed there. That makes sense, because in kubeadm deployments the master nodes are set up like this. So we need to change the spec a bit to explicitly exclude the master node; I just use a node affinity trick to exclude it. Let me apply this. Oh, it's back to three replicas, so let me scale it again. Give it a few seconds. OK, you can see now it works perfectly: there are nine pods, three on worker 1, three on worker 2, and three on worker 3. That's basically how pod topology spread works. There's a blog post explaining this feature in detail, and there is some advanced usage of it, so if you're interested, check it out. Another feature we came up with recently is the multi-profile scheduler. I will demo it a bit later. The motivation is the limitation of running multiple schedulers, so let me explain multiple schedulers first. Multiple schedulers means that in addition to the default scheduler, you can have as many scheduler processes as you like running somewhere. They work together by identifying which kinds of pods each scheduler covers: there is a pod spec field called spec.schedulerName, and if you don't specify it, the default scheduler takes the pod, but if you explicitly specify it, the corresponding scheduler takes over that kind of pod. You can easily see that there will be racing issues. For example, if there is only one CPU left on a node and two schedulers are both trying to schedule their own pods there, competing for that one remaining CPU, then of course we have a timing race. That is very difficult to avoid, because the two scheduler processes run independently; they don't share anything in memory, they live in totally different boundaries. Right now, the suggestion we give to users running multiple schedulers is that this racing can be mitigated by dividing your cluster into several partitions, with each partition governed by one scheduler. That mitigates the issue but doesn't resolve it completely. For example, some advanced scheduling features like pod affinity and pod topology spread don't make decisions based on a single node; they have to look at the whole cluster and at the relationships between pods to decide correctly, so those features can still hit racing issues. So how can we resolve this limitation? The proposal we came up with is using a... Wei, I'm not sure if it's me or if it's you, but you're dropping out; your voice is dropping out just a little bit. Could you repeat that last phrase? OK.
The solution we propose is a multi-profile mechanism: we still recommend you run the scheduler binary, just one scheduler binary, and within it we give you an API spec to define the profiles you want. Each profile is like a scheduling flavor. For example, this flavor wants pods to be more bin-packed, that flavor puts more weight on some other priorities, whatever. All of those flavors are compiled into, and the profiles are defined in, that one binary, so they run inside a single process and there is no racing issue. And consumers can specify, per workload, which scheduler profile they want to use; that part works just like before. So basically that is how we want to resolve this issue. But multi-profile scheduling could not be done without a major code refactoring inside the scheduler. So over the last three or four releases, the most important work has been code refactoring, and we refined each phase of scheduling. Before, we had just two phases, filter and score; now we have more fine-grained phases like PreFilter, Filter, PreScore, Score, NormalizeScore, et cetera, plus preemption. We also separated the binding phase from the main scheduling cycle, so the binding cycle works as an async goroutine, sort of. That is why profiles could only happen once we had raised up the scheduling framework. The framework means we redefined the internal logic and exposed extension points that consumers can hook into with their own logic; that logic can then be layered on top of our scheduler and compiled into a new custom scheduler binary which has the extra capabilities you want, while 100% of the default scheduler's capabilities are kept. So that is the scheduling framework. Those are the three major things we have been developing recently. The next topic is configuring the kube-scheduler, and I think this is mostly demo. How much time do I have left? 20 minutes; I'll try to use about 10 minutes for the demos. OK, I have two clusters, so let me switch to the other one. This is a kind cluster created with extra mounts configured, and it also has three nodes. Let me make this quick. Let's take a look at how the scheduler is launched, and at the default scheduler configuration. kubeadm is a bit special because it runs the scheduler as a static pod, but that doesn't matter; the main parameters are here, and they still use the command-line-arguments style. But recently, not only in the scheduler but also in kubeadm and the kube-controller-manager, we have started using YAML to describe these command-line arguments. The scheduler uses a scheduler component config, a YAML you can pass in with --config, and most of the command-line arguments can also be defined there. Right now the two styles coexist, but in the future we may remove the command-line arguments entirely and keep only the scheduler component config. So let's move this out. I have a new configuration file; it's mostly the same configuration, I keep most of it, but I use --config to pass it into the scheduler, and I use the extra mount so that the config file I'm using can be mounted directly. Oh, and I also need to say a bit about this YAML: it uses the v1alpha2 component config, and you can see the kind here is KubeSchedulerConfiguration.
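The configuration I'm describing looks roughly like the sketch below. Treat it as an approximation of the v1alpha2 format in Kubernetes 1.18 rather than a copy of my actual file; the profile names and weights are illustrative:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
clientConnection:
  kubeconfig: /etc/kubernetes/scheduler.conf
leaderElection:
  leaderElect: true
profiles:
- schedulerName: default-scheduler        # unchanged default behavior
- schedulerName: image-first              # heavily favor nodes that already hold the image
  plugins:
    score:
      enabled:
      - name: ImageLocality
        weight: 100
- schedulerName: bin-pack                 # pack pods onto nodes instead of spreading them
  plugins:
    score:
      disabled:
      - name: NodeResourcesLeastAllocated
      enabled:
      - name: NodeResourcesMostAllocated
        weight: 5
```

The file is handed to the scheduler with --config pointing at its path instead of a long list of flags.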
So basically it just defines some common settings, like the kubeconfig and the leader election, and additionally we define the profiles here. You can see I keep the default-scheduler profile, and I also add some other flavors, like image-first. Image-first gives a very high weight to image locality. ImageLocality is a priority function that says: if this node already has the image, say the nginx image, then when the next nginx pod comes in, I prefer this node because it saves the image-pulling time. That one is easy to understand. I also define another flavor called bin-pack. By default the NodeResourcesLeastAllocated priority is enabled, which means the scheduler tries to spread the load across nodes and place pods as evenly as possible. But that is not what you want in a scenario like the cluster autoscaler, which wants to use up the resources on existing nodes as much as possible and only spin up a new node when that isn't possible. In that case you want the pods to be more bin-packed, so I use the opposite priority, NodeResourcesMostAllocated, and give it a higher weight. And in some extreme cases you may not want any score plugins to participate at all, because you think the filters are already good enough and you just want your pod to land on any candidate; I don't know whether that's a real case, but it serves as a demo here. OK, so I'm going to mount that config here and restart the scheduler. Oops, I'm not quite sure why that failed. Oh, it still had the old configuration file, so let me move that out. Oops, the scheduler again; maybe I made some mistake there, that's how live demos go. All right, I just want to show you that once you define the new-style configuration file, you can run your workloads like this. For example bin-pack, which I wanted to demo: I deploy ten pods and the ten pods should all land on the same node. The way to do that is to set the pod's schedulerName to one of the profile names you defined. And the idea with image-first is that you launch one pod with an image and then launch a second one, and the second one should be placed on the same node, because with the image-first profile the scheduler prefers the node that already has the image pulled. Oh, I'm sorry, maybe I made some mistake in the configuration; we can still give it a try. One thing I do want to mention here is that if you are on an older version of Kubernetes, multi-profile is not supported; it's only supported in 1.18. If you're on an older version, you may still have to use the v1alpha1 version of the component config, and in that case a now-deprecated field called algorithmSource.policy is used. With that policy you of course cannot define multiple profiles, and you have to use the keywords predicates and priorities to define or modify the in-tree default settings. That's a big thing to be aware of. And v1alpha2 is what I just tried to demo. So yes, multi-profile works like this, and spec.schedulerName now adapts to two kinds of mechanisms: you can use multiple schedulers just like before, and you can also use multiple profiles in one scheduler.
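To route a workload to one of those profiles, the only thing the pod author sets is spec.schedulerName. A quick sketch, assuming the bin-pack profile from the configuration above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: packed-pod
spec:
  schedulerName: bin-pack   # must match a profile name (or the name of a separate scheduler)
  containers:
  - name: app
    image: nginx
```

Pods that omit schedulerName keep going to the default-scheduler profile, so existing workloads are unaffected.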
But on the community's long-term roadmap, we recommend that you use just one scheduler binary, to avoid any racing issues, and define the profiles inside the scheduler. All right, here's the last section: how to contribute to SIG Scheduling. Some high-level definitions of what the SIG covers are in the charter.md, and we have several projects; the top priority is the default scheduler, which ships with the Kubernetes release. We also have some subprojects, like the descheduler, which was initiated by Red Hat. It tries to resolve the problem that, as your cluster keeps running, a balanced placement can get broken. For example, at the moment of scheduling your workers may be evenly balanced, but then some nodes get shut down unexpectedly, so pods get moved to other nodes, and even when the nodes come back at some point, the placement doesn't recover, because at that point the scheduler doesn't do any rescheduling for you. Supporting that in the default scheduler isn't a priority, so you have to use an external component like the descheduler if you really care about rebalancing or have a specific strategy you want to comply with. The second subproject, scheduler-plugins, is meant to be a marketplace to host customized plugins that extend the existing scheduler's capabilities. scheduler-plugins is also a sub-repo where you follow the scheduling framework to build custom plugins that fit more innovative requirements and scenarios, essentially. So Wei, and this is part of my naivety, are there any custom scheduler plugins already available? Yes. For example, Alibaba contributed a co-scheduling plugin which tries to schedule a group of pods together; that's a typical requirement from batch workloads in the AI and machine learning field. They also raised another proposal for elastic resource quota. Right now resource quota is enforced in an admission hook, which means that if a pod claims a request, say some amount of memory, but then fails to run, it still holds that request against the quota and no other incoming pods can use it. So it's a common requirement to delay the evaluation of a pod's request against the quota to the scheduling phase. So yes, those common requirements and plugins are being developed there. Someone in the chat is mentioning that Portworx also comes with its own scheduler, called Stork, so there are a few other examples of schedulers out there. Yeah, I hadn't heard of that one. A lot of the content you've covered I didn't really have a good background in, so thank you very much; this has been really good so far. Yeah, I think that's almost it for today's session; the last two slides are just some links you can check out offline, and that's it. There was a question asking if you would educate us on the pros and cons of kind versus minikube. I prefer kind; I think kind is maybe a superset of minikube. Basically minikube launches a one-node cluster, and inside there is a virtual machine, if I understand it correctly, so there's one machine, and you operate against that virtual machine which holds the single node. So there are some cons: if you want to do some scheduling development, or just try a scenario that requires multiple nodes, you cannot do that.
For example, if you want to debug the priorities and you only have one node, of course the priorities won't come into play, because there's only one choice and there's no need to do the scoring and ranking at all. And I do like kind because it's useful for open source development, for in-tree development: once you change some source code, you can build your current code base into a kind node image and launch it up right there. There are some other features supporting kind as well. So, I'm looking at the Q&A here while someone adds some commentary on minikube on the side: minikube can use Docker containers as nodes now, and kind has challenges with ingress. So I think you've mostly answered the question about minikube versus kind. There's another one that just came in: what will be the impact if we use two schedulers? Yeah, that is what this page mentioned. If you use two schedulers, the racing issue cannot be avoided, even if you manually separate the cluster into two partitions and each scheduler only governs its own partition, for the two reasons I just mentioned. If you don't divide the cluster, it's possible that the two schedulers each try to place a pod onto the same node at the same moment, competing for the same resource, because there is no distributed lock deciding which scheduler wins. So the final arbiter is just the kubelet; the kubelet is a single point, right? The kubelet will finally decide which one wins, and maybe the later one fails. So that's the situation. So if I have a hybrid cloud deployment, where some of it is running on bare metal and some of it is hosted someplace, am I necessarily running multiple schedulers? I'm not quite sure about that area, but someone has explored it, using the scheduling framework to manage the resources across multiple clusters. Let me find it, I have a tweet on that; there's a tweet for everything. Yeah, someone pinged me that they use the scheduling framework to manage scheduling tasks across multiple clusters. Let me see, yeah, this one, I think from last year. Yeah, so Robbie. Is that Robbie? Yes. So, excuse me again, Admiralty.io, is that a custom one that they've done, or is that an open source one? Where is that coming from? I think it's an open source one. Okay, good. It is open source, yes. Ask and ye shall receive. There is also Univa, which is an HPC resource-scheduler company. They worked with Red Hat on releasing an open source universal resource broker last year. That's right. The Grid Engine people, yes, I know them. They are in Canada. Yeah, Canada rocks, Canada Day, we're all getting exposed. So basically they have moved the company into DevOps and they are doing things like data science and cloud migrations, but at heart they are an HPC company, and they have open-sourced, what do you call it, the universal resource broker, which basically fits into the scheduler area. Now I'm making the connections between all of these projects, and that's what this is all about: really trying to figure out how all the pieces in that wonderful CNCF landscape fit together. So if you could put your last slide up again, the one that has how to get hold of you, that would be great.
And I know the scheduler SIG meets on a regular cadence, so if folks are interested in this, or just in reaching everybody, the Slack channel is probably the quickest way if you don't want to wait for the SIG meeting to get hold of people who are keen to talk about this topic. And when is your next SIG meeting? It should be this Thursday, 10 a.m. PST. Cool. So that's where you can get more information about this. If you're working on a scheduler, a scheduler plugin, whatever it is you're doing in this space, please come; more bodies at SIG meetings is always a good thing, and that's helpful. If you have items to discuss, just put them in the next meeting's agenda and we'll go through them. Perfect. Well, thank you. I know this is one of those sessions I'm going to have to watch again to get through it all, but the documentation around the scheduler is pretty good, so please do take a look at all of that. I'll see you guys next time. Bye.