Around 8:30. Good morning, good evening. Yeah, it's good to see you. Good to see you as well. Hey Ricardo, I need to drop in a few minutes, I have a conflicting meeting, but I just wanted to come in and say thank you for the work that your team did on evaluating Harbor. Much appreciated. Hey, yeah, no problem. Yeah, so it will be up to the TOC now to put it up for a vote for graduation. That is correct. Thank you for the recommendation. Yeah. Is it going to go to SIG Storage now, or? SIG Storage completed the review. Saad added the comments on their PR, but he got busy and has not merged the PR yet, so we're waiting for that merge. Okay, cool. But they did complete the review. Okay. Awesome. Yeah, I'll stay on for a few minutes and then I'll jump off. Thank you. Cool, no problem. So yeah, welcome everyone. We have a presentation today, I think that's the main item, for KEDA. Did I pronounce that right? KEE-da or KAY-da? How do you pronounce it? Either works. We generally go KEDA, but there's no official pronunciation guide. Okay, cool. Yeah, so it's Jeff, and he's going to talk about that, so take it away. Perfect, great. And I assume I'm fine to share my screen. Yeah. I'll go ahead and do that so I can show some of the slides. It's good to see we've got Anirudh from Microsoft here, Tom from Codit, Zbynek from Red Hat, and I'm probably missing a few other folks. So thanks, thanks all for joining to help answer questions as well. As I was exploring this morning, my video mirrors my screen on my side, but it looks like you don't see the mirrored version, only I do. All right. So we'll start with KEDA, the Kubernetes event-driven autoscaler. This is something we've had a proposal open for Sandbox for, so I just wanted to share a little bit about what it is, and then answer any questions that you might have that can help with the evaluation.
Previously, as mentioned, we made a similar presentation to the Serverless Working Group, to a few folks there across SAP, VMware, and others, and that went well. But that was before the new policy with the SIGs happened, so some of this might be repeated if you were at that presentation a few months ago. Some background and history here: KEDA was initially started by Microsoft and Red Hat, primarily. For some background, I'm Jeff, I'm a product manager at Microsoft Azure, and when I'm not focusing on open source and Kubernetes stuff like KEDA, I'm helping manage and run the Azure Functions service. Azure Functions is Microsoft's serverless offering, akin to AWS Lambda or GCP Cloud Functions. One thing we had observed as a team is that we had developed some technology to help run functions and scale them effectively, but we had customers and users who were interested in using this type of functionality outside of Azure. So we looked at the Kubernetes ecosystem in general and thought, you know what, there might be a gap here between what's possible today and what we think should be there. So we talked to a few folks. And let me know, too, if other people only see a black screen. Thanks, Tom. Yeah, we see a black screen right now. Black screen. Okay. Let me unshare. Thanks for flagging; let's see if resharing makes it happier again. Let me try one more time. I'm not getting any pop-ups from the Mac telling me that it's unhappy. Is it still black? No, I can see it now. Okay, great. Thanks for flagging, Tom. Okay, so we reached out to a few folks at Red Hat, and some of the folks on the call from Red Hat were like, yeah, this event-driven scaling sounds interesting. So KEDA at its core is a component that can be installed in any Kubernetes cluster that will enable your cluster to scale pods, deployments, and even jobs, not just based on CPU and memory, but based on metrics being pulled from the event source.
So specifically in Azure Functions, we don't just scale based on the CPU of your functions; we're actually proactively looking at the queue or the SQL database or whatever else it might be, and helping really rapidly scale your functions as a result. KEDA is doing something very similar in a hopefully very seamless way. We wanted to make it very simple to wire up metrics from event sources and plug those into things like the Horizontal Pod Autoscaler. We wanted the ability to scale down to zero, in the same way that Azure Functions users were accustomed to scaling to zero and saving resources. We released this last April, around this time, and it went GA with 1.0 at KubeCon in 2019. We currently have about 20 scalers for different sources like Kafka, PostgreSQL, NATS, Prometheus, and a bunch of sources out of Azure, AWS, and GCP. Before I do anything else, I did want to show a quick demo just so you can see what this looks like; it takes about 15 seconds. I have a Kubernetes cluster that's already running, and I have one deployment in it, a RabbitMQ consumer. For this deployment I've said, hey, it's consuming RabbitMQ messages. And the one thing to note, because KEDA is installed and doing all of its stuff, is that this is actually scaled all the way to zero, because KEDA has let Kubernetes know there aren't actually any queue messages here to consume. So you don't even need to consume or reserve the resources to run this thing, because there's nothing to be done. And if I just look at the keda namespace, you can see there's a KEDA operator and then a metrics API server that's running and monitoring things. Now, if I watch the pods, this is my RabbitMQ server. And now I'm going to deploy a job which is going to publish 1,000 messages to the queue, so not just drop one message in, but 1,000.
So we should see that job spin up, and it's creating and dropping 1,000 messages into the queue. What happens as a result is that KEDA has now seen, oh, there is work to be done. So we have one consumer that's come online right away. But what's nice is that, even before that sentence is finished, you can see that because I dropped thousands of messages into that queue rather than just one, KEDA has very rapidly driven this to say, hey, I actually need to scale this RabbitMQ function a lot to make sure I drain this really rapidly. This kind of very proactive, very event-driven scale is what KEDA makes possible. And if I waited here for 30 or 45 seconds, it would finish scaling up, consume all the queue messages, and then scale all the way back down to zero again. So that's what KEDA is doing behind the scenes. What's making it work is one of our core fundamental values that we set from the get-go when we built this, and that we continue to stand by with our community: we didn't want to rebuild anything that Kubernetes already did. Behind the scenes, how it works is, as I showed you, there's that KEDA operator that's running, along with its metrics server, which connects to the Kubernetes metrics APIs. And then there are a number of what are called scalers; those are all the different event sources. I mentioned there's a RabbitMQ one, a Kafka one, a Postgres one, a Prometheus one, and so on, about 20 of them. And you end up having your event source; in the case of my demo it was RabbitMQ, and I think the rest of the slides assume it's Kafka. So then you just create a deployment like you normally would, a regular Kubernetes deployment. And then there's a special CRD that KEDA exposes called a ScaledObject.
And this is really the metadata where you map your deployment, or your job, to the event source that you care about. So in this case I'm saying, hey, it's my deployment that I care about, and I want you to scale based on Kafka. Here I provide a little bit of metadata for KEDA to use. I can configure things like how frequently KEDA should check to see if there are messages to be processed. I can also configure things like minimums and maximums; maybe I never want to scale all the way down to zero. Here I define, hey, I'm interested in Kafka, here's how to connect to Kafka, and I can set whatever info I need there based on the event source. There are even some values here, like in this case with Kafka, something called the lag threshold, which is more or less setting the target for scale. In this case, 50 is saying, for every 50 unprocessed messages in Kafka, I want to target about one replica. So if there were a thousand messages, it's going to try to do, what is that, 20 replicas. But if there are only 50 messages, it's only going to target about one. If I make this number lower, KEDA is going to scale faster and more aggressively; if I make it higher, it's going to scale more conservatively. So you have a bunch of knobs there to help control this. You go ahead and apply that to your cluster, KEDA picks up that ScaledObject, and the KEDA operator knows about it. And you can see, in the case of my slide, I've even grayed it out, because it's like, hey, I can scale this thing to zero now, because I know the Kafka event source is empty. KEDA is just doing this by wiring everything up automatically for you to the HPA. So it's not using its own autoscaler; it's just augmenting the existing Kubernetes ways to do this. During this whole process, it's up to KEDA to constantly be asking, how many events are being generated?
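The lag-threshold arithmetic described here mirrors the HPA's basic targeting rule. A minimal sketch, assuming a simple ceil(lag / threshold) target clamped to configured minimum and maximum replica counts (the function and parameter names are illustrative, not KEDA's actual code):

```python
import math

def desired_replicas(lag: int, lag_threshold: int,
                     min_replicas: int = 0, max_replicas: int = 100) -> int:
    """Target roughly one replica per `lag_threshold` unprocessed
    messages, clamped to the configured bounds."""
    if lag <= 0:
        return min_replicas  # no pending work: allow scale to zero
    target = math.ceil(lag / lag_threshold)
    return max(min_replicas, min(target, max_replicas))

# With the example values from the talk, a lagThreshold of 50 and
# 1,000 unprocessed Kafka messages targets 20 replicas:
print(desired_replicas(1000, 50))  # 20
print(desired_replicas(50, 50))    # 1
print(desired_replicas(0, 50))     # 0
```

Lowering the threshold makes the same lag produce a larger target, which is the "scale faster and more aggressively" knob described above.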
So it asks Kafka at every polling interval: hey, are there unprocessed messages? If the answer is no, then it keeps the thing scaled down. If the answer ends up being yes, then, just like I showed you, it pops up and potentially scales out very rapidly. So, a few key features, based on the demo and the architecture. You can scale any deployment or job based on event metrics by defining that additional CRD; we're just using Kubernetes CRDs to drive the experience. It lets you scale to and from zero based on events, back and forth. It has 20 event source scalers built in, and it's completely extensible; this is the largest area of contribution and interest that we've seen, people adding these additional event sources. I mentioned in passing that you can also say, hey, maybe I have a long-running job. Maybe every queue message isn't just a simple order I need to process; maybe it's a video I need to transcode. So you can actually use a scaled-job mode where you say, create a Kubernetes job for every event that comes in, which is a very useful model. There are ways to define authentication, so we have ways to integrate with secrets and with other sources as well. You can use pod identity if you're connecting to a cloud provider; for instance, if you're using the Azure queue scaler, KEDA integrates with Azure pod identity, so you don't even have to pass in a password, it's just going to use its own identity to authenticate, and there's support for that in AWS as well. And really, this is about letting you focus on your app and not have to worry about the scaling internals, manually wiring up the custom metrics, doing all that work by hand; KEDA just makes it as easy as defining that ScaledObject. In terms of community, we've been really happy and pleased with the amount of energy around KEDA in its time. We've got about 2,000 stars on GitHub and a number of contributors.
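The job-per-event mode mentioned above can be sketched as a small decision: one Kubernetes Job per queued event, capped by a concurrency limit. This is a rough illustration of the idea, not KEDA's implementation, and all names here are hypothetical:

```python
def jobs_to_create(pending_events: int, running_jobs: int,
                   max_concurrent: int) -> int:
    """For long-running work (e.g. video transcoding), spawn one Job
    per queued event, but never exceed the concurrency cap."""
    capacity = max(max_concurrent - running_jobs, 0)
    return min(pending_events, capacity)

# 1,000 videos queued, 10 transcodes already running, cap of 50:
print(jobs_to_create(1000, 10, 50))  # 40
```

The point of the mode is that each event gets a Job that runs to completion, instead of long-lived consumer replicas holding work open.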
This is across large corporations as well: Microsoft, Red Hat, IBM, Codit, Astronomer.io, and a few others; this is just what I pulled off the top of my standup sheets, there's much more. From the get-go, there's nothing in KEDA that's branded Microsoft or branded Red Hat; this is something we've wanted to be community-driven. So we have weekly standups on Zoom; we actually have one coming up in about three hours. There's a website that has a list of all the scalers, and a few users who are using it across their solutions help add more. This was nice: I just noticed, when I was preparing the presentation, there were even some folks tweeting like, oh hey, KEDA, this actually looks super interesting, this looks like what we're looking for. And then Richard chimed in and was like, yeah, we've actually been using this in production for a while now. So it's very simple; we didn't want to make this a complex thing that does 80 different jobs. It's really just driving that event-driven scale, but it does that very well. The last slide I have is about why we're interested in the CNCF. I mentioned already that with KEDA our intent wasn't to reinvent the wheel; it's really building on the standards and technologies being developed in the CNCF, like Kubernetes, which makes it a natural home. Our intent has always been to do this open and community-driven. While it started with Microsoft and Red Hat in a partnership, we really want to make this vendor-neutral in every way possible, and we feel like donating it to a foundation like the CNCF is a way to show that good faith with the community. It's already MIT licensed, and we're planning, if this becomes Sandbox, to use things like the CNCF CLA, the contributor license, all those things. There's no sense in which we still want to hold on to this or that or the other; our intent is really to say we feel this is a useful piece of tech.
We've been using tech like this to run the Azure Functions service; this has been in the open for a while now, and we just want to go fully vendor-neutral. KEDA also integrates very seamlessly with a number of other CNCF projects: things like Virtual Kubelet to scale out into virtual nodes, NATS scalers, Prometheus scalers, Strimzi scalers, and Helm is how we deploy it. We're really looking for that vendor-neutral home for a key serverless capability. Specifically in the serverless space, I think serverless has this connotation of being very vendor locked-in, and there's been some heated discussion about the CNCF and serverless in general. We're really hoping that KEDA can be one of those very nice pieces of serverless that, in addition to things like CloudEvents, would tie in very neatly with the CNCF. So that's all I really wanted to share; I'll stop sharing here. I saw a few comments; I think most of that is handled. I'll just pause here if there are any questions or anything you could use from us; I'm more than happy to share more. Yeah, a question on scalers: is that a single process, or is it multiple processes running in Kubernetes? Yeah, so today it can run in both modes. The majority of our scalers are just running in that single process; they come out of the box, and they're fairly lightweight, each scaler is about 30 lines of Go code. We have a way to make them all external, and this is actually something we've had discussions about, even as recently as last week's standup: there's a world where you deploy KEDA and you check all the boxes for all the scalers that you want, and now instead of getting just those two pods, you have 15 of them, each doing its own scaling thing. But we didn't want to make it too overloaded so far. So right now the majority of them run in the shared process, and we do have ways for you to plug in external ones.
There are a few that only run externally, and this is something we're still evaluating, to make sure we don't get the footprint too large, or whether we need to start versioning these more independently. So we have the capability there, and there are some scalers that take advantage of it, but mostly for convenience we ship most of them in the same process today. I mean, that's not necessarily a bad thing, right? If you end up having multiple processes, one for each scaler, then you're using more resources, but like you said, they're lightweight, so maybe if you add more processes there, it wouldn't affect the workload in a Kubernetes cluster too much. Yep, great. And I see one question in chat from Jay real quick: integration with cluster autoscaler, not just the HPA. So there's nothing we do directly with cluster autoscaler as far as I understand, though I invite others to chime in. How the cluster autoscaler works is that it looks at what the HPA is scheduling and the resources it's trying to schedule, and based on that, it can scale the cluster. So I believe indirectly KEDA would cause your cluster to scale, because KEDA is going to be telling the HPA you need to add more resources, the HPA is going to be scheduling those, and at some point the scheduler is going to say, I don't have the space to put all these things KEDA is telling me to schedule, and that would kick in the cluster autoscaler, which would then scale my entire cluster. So I believe they work together indirectly. This is a common question, though, so I'm pausing a little bit in case Zbynek or others want to chime in. The reason I brought this up is that cluster autoscaler only has the one trigger for scale-up, right, which is that pending pods are queued up. And that will obviously talk to the underlying cloud provider via cluster autoscaler and ask the auto-scaling group or node group or whatever to expand.
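The in-process versus external scaler discussion above hinges on each scaler satisfying a small per-source contract. A rough Python rendering of the shape of that contract (KEDA's real scaler interface is defined in Go, and the method and class names below are illustrative, not the actual API):

```python
from abc import ABC, abstractmethod

class Scaler(ABC):
    """Illustrative sketch of what a KEDA-style scaler must answer."""

    @abstractmethod
    def is_active(self) -> bool:
        """Is any work pending? Drives the zero <-> one transition."""

    @abstractmethod
    def get_metric_value(self) -> int:
        """Current metric (e.g. queue depth) fed to the metrics server."""

class QueueScaler(Scaler):
    """Toy scaler backed by an in-memory queue."""
    def __init__(self, queue):
        self.queue = queue

    def is_active(self) -> bool:
        return len(self.queue) > 0

    def get_metric_value(self) -> int:
        return len(self.queue)

s = QueueScaler(queue=["msg1", "msg2"])
print(s.is_active(), s.get_metric_value())  # True 2
```

Whether this contract is served in-process or over a remote boundary (as with KEDA's external scalers) is then a packaging decision, which is exactly the footprint trade-off being discussed.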
One thing we hear a lot about cluster autoscaler is support for more predictive or schedule-based scale-up events. So I was thinking, since KEDA seems quite flexible in that regard, with a wide choice of events that can trigger an action, maybe it would be possible to integrate KEDA with cluster autoscaler and use the KEDA event sources as the triggers for cluster autoscaler, instead of just the pending pods queue. But I don't know, it's just a thought. It makes a ton of sense. Even just briefly looking at the cluster autoscaler stuff, it does look like there are some metrics, maybe metrics that it exposes. But yeah, I think that makes a ton of sense, even to your point about scheduling. One of the work streams we've been funneling some resources into recently is along those lines. Like I mentioned, KEDA is less reactive in that it can actually see that, hey, there's a thousand messages in the queue, let's do something; and maybe you set some thresholds like, look, if there are 10,000 things in the queue, yeah, go start scheduling stuff, but also go scale at the cluster level. Right, or if you know you're doing 100,000 batch jobs starting at noon, you could just predictively go and increase the number of worker nodes. That's the one where Microsoft Research is partnered with us right now; they're helping tweak some algorithms. Their hope is that, in addition to that, if every Friday at five o'clock there's a thousand things that drop in the queue, or I'm running some batch, KEDA would ideally be smart enough that on Friday at 4:55 it says, look, the queue is still empty, but I've seen this too many times to know the storm's coming.
And so now let's go and crank up the cluster autoscaler and kick up the HPA in advance. So I think that makes a ton of sense. I'd be interested to know what integrations exist today to do the cluster autoscaling stuff, but there's nothing fundamental to how KEDA works that would prevent any of that, so I think that's all within the line of thinking of how we've been approaching KEDA as well. Great conversation. For predictive scaling of the cluster, for example, would you be interested in having a history of all the scaling that we've triggered, or do you just want to have a source to scale on at this point in time? Are you asking me or Jeff? You, sorry. Generally I'm referring to being able to change the trigger for scale-up from the single thing currently supported by cluster autoscaler, which is that there are pending pods to be scheduled. There are alternate events for cluster autoscaler scaling down, so you can use custom metrics and things like that to trigger a scale-down event. But for scaling up, meaning increasing the number of worker nodes in the auto-scaling group, there's only one event trigger as far as I know. So that's what I was referring to. Okay, thank you. Yeah, I think you need both, right? One of the problems when you autoscale is that you end up thrashing, or you may end up with more resources than you actually want. But also, just before you scale down, you want to know if there's maybe some event coming up in the next, say, 10 minutes.
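The "Friday at 4:55" idea from this discussion could be sketched as a schedule-history lookup that pre-scales a few minutes before a recurring burst. This is purely speculative, not an existing KEDA feature, and every name in it is hypothetical:

```python
from datetime import datetime

def prescale_replicas(now: datetime, history: dict,
                      lead_minutes: int = 5) -> int:
    """If history says a burst usually lands at (weekday, hour, minute),
    start scaling `lead_minutes` early, even while the queue is empty."""
    for (weekday, hour, minute), expected in history.items():
        burst = now.replace(hour=hour, minute=minute,
                            second=0, microsecond=0)
        minutes_until = (burst - now).total_seconds() / 60
        if now.weekday() == weekday and 0 <= minutes_until <= lead_minutes:
            return expected
    return 0  # no known burst imminent: stay reactive

# Every Friday (weekday 4) at 17:00 a thousand messages drop in; at
# 16:55 the queue is still empty, but we pre-scale anyway.
print(prescale_replicas(datetime(2020, 10, 16, 16, 55), {(4, 17, 0): 20}))  # 20
```

A real version would presumably learn `history` from observed scaling events rather than take it as input, which is the history-keeping question raised above.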
So you will want to keep your cluster up and running, because let's say it's 10 minutes: your event comes in and you've already scaled down, but now you're scaling back up, so we end up kind of thrashing, scaling up and down depending on these events. So if you have a way to predict some of the workloads that are coming in a little bit later, you might be able to smooth that out. I think that's one of the concerns. So how I see KEDA for now is that it's purely focused on application autoscaling, where we then rely on cluster autoscaling to make sure there is enough capacity. But maybe we should indeed also have a look at whether we can help on the cluster side of things. But yeah, we don't really have a plan at the moment. And I see there's a nice question for you, Jeff, on Knative; I was busy creating a GitHub discussion issue around this conversation. Okay, any relationship with Knative? So, a few things worth noting here. In general, I think the short answer is that the idea of Knative is to be an entire serverless platform, and with that it does about 20 things out of the box. KEDA is just a very single-purpose thing: I'm just going to be doing event-based scaling based on the pattern I talked about. So it's a much smaller scope, just a single-use component. Now that said, it actually ties in well with the Knative story. One of the work streams that kicked up as a result of the last KubeCon and going 1.0 is an active work stream with the Knative group, who can leverage KEDA within Knative to add some additional functionality. For example, in Knative there's a way to get event notifications when there's a Kafka message, and there's a pull request right now in the Knative repo that says, hey, if we actually took a dependency on KEDA, we could scale that thing down to zero when Kafka is empty.
There's interesting stuff there, in this polling pattern in general and in integrating it more tightly with Knative. But directly, they're not really overlapping too much, because KEDA is this very single-use thing and Knative is trying to do a bunch of other stuff; KEDA is one important piece of a serverless platform, we feel, and there are additional things Knative is trying to accomplish beyond that, if that helps. And thank you, Jay, for flagging the AWS team; I might ping you afterwards, there's someone on our side too who's interested in some of the deeper Kubernetes integrations, so I'll see what we can do with this cluster one. Hopefully that answers the Knative question; let me know if there's anything else there. So if you have Knative and KEDA running on the same cluster, can you autoscale functions in the cluster, or is this not supported yet? Yes. So there are two kinds of ways of doing scaling, I guess. Knative today does autoscaling with its own custom autoscaler, and it can also integrate with the HPA. How Knative generally works is that everything it's scaling is an HTTP request: either a cloud event over HTTP, or an HTTP or gRPC request from an application, from a client, or from elsewhere. So Knative primarily optimizes scaling today by looking at things like concurrency of HTTP requests, and driving scale that way. There's a thing behind the scenes as well that's taking Kafka events and turning them into cloud events over HTTP or gRPC, which KEDA might scale. KEDA approaches things slightly differently, in that KEDA does not look at HTTP requests; KEDA is actually looking at the end event source.
So KEDA will look at Kafka or RabbitMQ or Prometheus or wherever else and drive scaling that way. So you can autoscale today. I think the reason there's interest from both sides, Knative and KEDA, in understanding how we can bring things together is the trade-offs between those two models, scaling only on HTTP and scaling based on the event source; they have differences, and both are valuable. The long answer to your question is: you can do autoscaling in Knative today, but there are ways that you cannot autoscale in Knative today that KEDA can enable, and the Knative team is interested in lining that up. And there are a lot of K's in that sentence; it's almost a tongue twister. Actually, I could also speak to that one a little bit. I work on Apache Airflow, and originally we were looking to use Knative as our autoscaling system. What we found was that for long-running tasks, Knative is kind of not an optimal solution, because you have to keep an HTTP request open the entire time a task is running. And I found that KEDA was a lot better suited for a more asynchronous or worker-based autoscaling system. Thank you, Daniel. Perfect, yeah, that's an example of the trade-off when everything's HTTP-based versus event-source-based: long-running becomes a lot harder when you're trying to hold open an HTTP request for 20 minutes or however long you might need it. Or hours. Or hours, yeah. Sorry, can you hear me? Yeah, that's great. So if one user has this kind of scenario, for example, they would like to scale out their deployment based on some events, but they may not know the details of KEDA or Knative; the overall feature is similar, though some technical details may be different.
So what's your suggestion to this user? In which case should they use KEDA, and in which case should they use Knative, if they don't know details like the long-running HTTP connection, something like that? Yep, yep. If I understand correctly, just to repeat the question: with KEDA, since you're connecting directly to the event source, the developer has to have knowledge of that event source, so what do you do in the case where you don't want to have that direct knowledge of the event source? I think there are a few different answers to this one as well. My initial thinking is CloudEvents, which I feel is a very good way this has been solved, even in the CNCF: being able to say, hey, at the end of the day we're just going to have events that you can subscribe to and scale from, and KEDA could help scale that indirectly through things like metrics on the number of cloud events being generated. In some ways, I'd almost say there are a few ways this could be solved. Knative definitely can do that part; Knative Eventing specifically is all about letting you subscribe without having any knowledge of the event source. There's also the way we built the Azure Functions serverless service, where we abstract that within the SDK that you actually deploy: there's something called the Azure Functions runtime, which handles a lot of the details of the underlying event source and enables you to just write code, but it's the container itself that's doing the abstraction, not some cloud event behind the scenes.
Then the final answer is just CloudEvents in general. So I think there are three ways you can cut it: SDKs like the Azure Functions runtime, Knative and Knative Eventing, or, at the end of the day, just using cloud events. It's a trade-off, and this is one of the interesting discussions in the serverless community in general: there's a push around how much you abstract the underlying event source away from the developer. The benefit is that the developer doesn't have to worry about the underlying event source; the downside is that the developer can't take advantage of anything that's not a common denominator across multiple event sources. So I think both are important. Sometimes I want to know it's a Kafka stream that I'm connected to, because I need to manage a checkpoint, and I need to do in-order processing and things that are Kafka-specific. Other times, I just know there's a notification that happens to be coming from Kafka, but I just need to post something to Slack; cloud events work great there. So there's room for both, which is why I think there's room for both the KEDA style and the CloudEvents/Knative style; I don't think it's going to be one-size-fits-all. Okay, okay, thank you. Yeah. Do you also have integration with some of the big data frameworks like Spark and Flink, or have there been any discussions about that? I know someone doing big data stuff today who is using KEDA, but they're using it in front of the Spark layer: I believe in their case Kafka is the thing that's filling up with all the data, and then they're using Spark to process those events that originated from Kafka but then go through a different pipeline. So they're using KEDA to grab it from the Kafka side, and not from Spark directly.
So there's no scaler today that grabs metrics directly from something like a big data processing pipeline, but the event source that's probably funneling the data into that pipeline, whether it's Kafka or something else, is a supported scaler, if that helps. And I just wanted to add that we've added integrations with two databases, MySQL and Postgres. But yeah, as Jeff said, we haven't yet added integrations with Spark and Flink, but rather with the event sources that may be funneling data into them. Cool. Any other questions? Silence. How do we proceed after this meeting? Do you folks have to vote, or how does that work? I think you need a consensus of two people from the SIG? Yeah, so if you can create a PR with the recommendation. Based on what I've seen here for Sandbox, I don't really have any major concerns, but there's a template that we follow, so I can show you what the template is. I think there's another PR that we did for Volcano, so basically we need to fill in all the details and we'll go from there. And after that, we'll recommend the project, and the TOC will take a vote to put it in Sandbox; it needs a two-thirds vote. And after that it will be in Sandbox. Okay, so I thought the issue was the new approach, but basically I just take what is in the issue, open a PR, and we'll take it from there. Yeah, yeah. And let me know if you have any questions about the template or any item that's in it. It's a new process; we're just starting to use it, so we're kind of trying it out, and if there are any suggestions, anything that might not work out, or any concerns, just let me know. Yeah, if you could maybe go to the issue and post the latest template so I'm certain I'm using the correct one, then I can create the PR tomorrow.
Yeah, and I pasted the PR for Volcano because I know that's the one you reviewed two weeks ago. Do you want us to open this pull request, or is this something the leads of SIG Runtime need to fill out, and then we open the PR, you folks add comments, and then it gets closed? Because the one I see is more or less our PowerPoint presentation in Markdown, which is great; we've got the info, but I don't know who needs to put it together. Yeah, you can open the PR, and then people can chime in with comments or any questions they may have. Then the PR gets approved, and after it's approved and merged, we send it over to the TOC. Okay, so it's a PR to the SIG Runtime repo, not the TOC repo. Right. Okay, I'll bring this up today in our stand-up, and Tom and I can probably have it open by the end of the day. I don't know how quickly the TOC works, but anecdotally we've been hoping this is something we can make some noise about at KubeCon, assuming all goes well and we get sign-off. I know that's a month away now, though, and I don't know how quickly the wheels turn in this new process. Yeah, hopefully they can put it up for KubeCon, but we'll see. Actually, the TOC just got three new members, so you might need more votes now. Oh, no, wait, sorry, I'm talking about graduation, my bad. For Sandbox, you need three sponsors, as I remember. So there won't be a vote: after we make the recommendation and file the PR, you find three sponsors on the TOC, and they basically say, okay, we want this project in Sandbox, and they take it from there.
I don't know if Sandbox really needs a vote; I know that graduation needs a vote, because we're doing Harbor right now for graduation. Do the sponsors have to be TOC representatives or contributors? They have to be TOC members. I think we have two liaisons, Brendan Burns and Brian Grant, so you could reach out to them and see if they want to sponsor the project. Do you want us to reach out? Because I know this is a new process, and initially the idea was that if you go through the SIG process, you don't have to go and poke TOC members directly. Once this goes through, is it best if we find three members of the TOC ourselves and say, hey, we presented to SIG Runtime, here's the PR that got merged, they gave us the recommendation, would you be willing to sponsor? Or do you have names in mind that you'll flag, and we can just help answer questions? No, you can go and ask TOC members for sponsorship directly. As a SIG, we can help out too in finding more people if you need more sponsors based on our recommendation, but you can also contact TOC members yourselves. They can look at the presentation and this recording and decide whether they want to sponsor the project. Does that make sense? Yep, it does, thank you. So that will be the next step: follow the PR, and then we can get that moving; hopefully we can find sponsors in time for KubeCon. Okay, thank you.
Well, thank you. The other items on the agenda: Volcano is already merged and is now looking for TOC sponsors, so if you know anybody who would like to sponsor Volcano, please reach out to that TOC member to help out. We reached out to Brendan Burns and Brian Grant, who are our TOC liaisons, but we haven't gotten a reply yet. And there are some new TOC members; I find that some of them are a bit more receptive, because they want to learn more, and they might be able to sponsor some of the newer projects. Harbor has already completed the review from SIG Runtime, and since it's a graduation it will go to a TOC vote. Michael Michael will complete the review from SIG Storage, and I don't know if it needs a review from SIG Security, but after that it will be sent out for a TOC vote. So, cool, does anybody have anything else they want to talk about, anything related to runtime, KubeCon, the community? Hi, this is Tao from the Kata community; you introduced SIG Runtime at the Kata Architecture Committee last week. Sorry I wasn't there, but I was told this is a very interesting group, and I'm here to learn about it. We might have something new to present to SIG Runtime at some point, but it's not ready right now, so this is just a heads-up; I'm learning the process and preparing for when that time comes. I was happy to watch the KEDA sandbox review; it was very helpful, and the process is working well. Thank you. Yeah, thank you for joining. If you want to present anything or add an agenda topic, feel free to add it to the doc. We meet every two weeks, roughly on the first and third Thursday of the month.
So, yeah, any item you want to add, feel free to add it there, and we can discuss it in the meetings, whether it's a presentation or any concerns about projects. I know we're just getting started; this group has been around for maybe a month and a half. SIG Security, for example, has a lot of other work: they do security reviews for projects. That's outside the scope of this group, but anything related to runtime reviews, or, say, AI workloads or high-performance workloads, is within our scope. Okay, thank you. Is there any documentation on the entire process for a project entering the Sandbox stage? Yeah, that's all documented on the CNCF TOC GitHub page. There's plenty of documentation there on how the Sandbox, Incubation, and Graduation processes work, and on how you now take it up through the SIGs. It used to be that projects would go directly to the TOC, and the reason they're creating these SIGs is that there are a lot more projects now, so they're trying to scale into different areas. So they have Runtime, Observability, Security, App Delivery, and Storage, and I think there's another group in the works called Contributor Experience, which will deal more with helping out contributors in the community. Thank you. All right, anything else? No, thank you very much for your time. All right. Thank you. Bye, everyone. Thanks.