Good morning, hello. We'll give folks a few more minutes to come on in. I'm camera-off this morning because I'm having bandwidth issues, so I'm hoping everything holds up. Liz, whenever you'd like to start is fine. We should probably give it one more minute; I'm seeing folks coming in still. We have a lot to get through today, busy meeting day. We're up to 40 people and people are still streaming in. Maybe we should get started.

So welcome to the TOC meeting for May. Why do we put the Zoom details in? It's so that everybody knows this is happening and can see the slides ahead of time. Fair enough. And Amye will be checking off who's here. On our agenda today, we're going to go through the SIGs and get updates from each of them, have a quick look at what is waiting for TOC input, and then a late-breaking proposal: we want to revise and improve the sandbox process. This is the first exposure to daylight of an idea we've been thinking about around improving sandbox, so hopefully we'll get five minutes to talk about that at the end. Who's first? SIG App Delivery.

Hello. We can hear you just fine. A quick update on projects here. For Keptn, the sandbox review is done; the same is true for Litmus. Keptn is a control plane for delivery and operations, and Litmus is a framework for chaos engineering and chaos testing in the Kubernetes-native space. Both of them were applying for sandbox, we have done the reviews, and just as we speak I'm updating the issues, linking the review documents in the TOC issues. Cloud Native Buildpacks: we looked into this, and the question we have back to the TOC is mainly about wording.
What the official definition of "end users" is for the CNCF criteria. The adopters are actually cloud providers, the same people who built it, and the requirement is that there are three independent end users using the project in production. In the case of Cloud Native Buildpacks, it's built by VMware and Heroku, which also means Salesforce, and their end users are naturally their own clouds and environments. So as a TOC requirement, if we want this purely open source project accepted, do the builders' own platforms also count as end users, or would end users have to be actual companies using it in their applications? That's the question back to the TOC: what's really meant by this criterion.

For Serverless Workflow we'd have to revisit that definition for spec projects anyway. I think part of this is really just a judgment call on what counts as independent and what counts as end users for a particular project.

Yeah, we can make our recommendation based on that. We flagged it in our recommendation because the criterion says "independent", and one interpretation of independent is that the project must be adopted by somebody other than the people who are actually building it; otherwise it's more or less meaningless.

Serverless Workflow is still under review. This was formerly a sub-directory of the Serverless Working Group, focused on workflow standards. We've had some initial conversations and we're talking with them; we specifically want them to sync with other workflow-related groups and projects within the CNCF, since this is mostly about defining a standard for how workflows are defined. So talking to Argo obviously makes sense, and talking to Brigade makes sense too, because this is a spec project: if the spec is to be there, you would assume that other CNCF projects would adopt it. The idea is similar to what happened around CloudEvents.
And last but not least, for Artifact Hub we had the presentation. We asked Matt to provide some more details and do some follow-ups, especially with the projects we mentioned that will be published on it. All the details are on the issue, and so far we haven't heard back, which is why this hasn't moved forward since last time.

Yeah, and I can explain a little about why I haven't come back yet. We've been asked to provide a whole bunch more detail, things like the incubation due diligence and three people using it in production. There were certain graduated and incubating criteria that we were asked for as part of the sandbox review, and we just don't have that information yet. It's going to take a while, because we submitted at sandbox level and were asked for more, so it's going to take us a little longer to pull that information together.

Is there any reason why, and Alex, maybe you can speak to this, they are not just being held to the sandbox criteria?

They always had those in the review, and they are using the sandbox criteria, but some projects, when we talk to them, already have some of the incubation criteria fulfilled, like coding standards. If a project already has that information available, I don't see why they shouldn't post it.

Yeah, in SIG App Delivery it's become kind of the standard to start doing the incubation due diligence as part of a sandbox application; other projects going through SIG App Delivery for sandbox have already been doing it.

Okay, I think this speaks to one of the reasons why we want to streamline the sandbox process in general. So let's not dwell on that, but let's take note.

Okay, one more slide and then we're done: updates on the working groups. There are two working groups. Number one is air gap.
There is now an aggregate repository and an overview for the air gap working group, with the charter and the user stories. What they're working on right now is, I think, very interesting: they're reaching out to people and companies running air-gapped installations of Kubernetes to collect their best practices. There's already one write-up in there from Cray, and they're talking with additional companies who have this information available. This is mostly focused on air-gapped installations of Kubernetes itself, not the applications running on top of Kubernetes. Right now they're figuring out how some of those users can share this information, because with air gap you can assume that some of them are governmental organizations, and for them it's a bit hard to share. But I find the Cray write-up already quite interesting, on how Cray runs in an air-gapped environment. They also have demos scheduled around Porter from the CNAB ecosystem, IBM, and Red Hat's Skopeo on container signing and how they handle it across registries; the transfer of container images between registries and getting them into the right place is one of the key issues in air-gapped environments.

The operator working group, just very briefly: by next week we should have the operator definition proposal ready for a wider discussion, including with the TOC, which was the initial request. I didn't link it in the document because, honestly, it's under heavy cleanup right now, so it doesn't make sense to look at it yet, but the people leading the working group say they're confident it will be ready by next week. That's it.

Cool, thank you. Which SIG is next? I have not memorized the order.

To be fair, I changed it up a lot, so, you know. All right, Contributor Strategy.
Good morning, afternoon, evening; breakfast, lunch, dinner, all that stuff, TOC folks. Hope everybody's good out there, staying safe, staying inside. I have a better update this time, which I sent to our list yesterday; I just put the link in chat. We are pretty much up and running at this point. What this means is we have a README and a CONTRIBUTING markdown file; we've got working groups like governance that have moved past the proposed stage; and lots of other things, like labels and meta stuff, and you can talk to us, and that's really awesome. The governance project is alive and kicking, with Josh Berkus heading that up. Last time we met as a group, we collectively looked over the project-related graduation and sandbox criteria, so we can see exactly what the TOC is looking for and everyone in our crew can be on the same page. Josh and folks are going to have some recommendations on things we should change, make more broad, or make less broad, so you'll see those shortly. We're also now at the stage of preparing communications to project maintainers and contributors for all forty-plus projects, both calling for support and putting together a survey and some focus groups. I know people have survey fatigue, so this isn't just checking the pulse; it's more a discovery survey of what people like about what they're doing in their projects. For instance: do you have a contributor program that you love and want to tell us about? It's not all sad "what are your problems?" stuff; we want to collect what people are doing well, the best practices, not just the things that need help. In those communications we're also preparing a welcome to our fourth-Thursday-of-the-month meeting for everyone, TOC included.
That would be an AMA for maintainers, contributors, et cetera, to get direct support and help for anything governance-related or contributor-strategy-related, literally anything that falls under that sun. And if it doesn't fall in our sun, we'll help you figure out where to get responses and answers. We're also going to give a heads-up about the maintainer circle planning, in hopes that we can get some other maintainers to come on board into our little planning circle of trust and get that kicking as well. We're close to setting up a Slack channel for maintainers and contributors so we can all gather somewhere together. And that's really it for us: we are out of our meta stage and onto the real deal. The link I sent in the Zoom chat is our list, and the update I sent there has things like issues you can help us with, things on our horizon, and things we've done over the last couple of weeks. So I would love for the sixty-four of you in this room right now to help us out in some way. Trust me when I tell you there's work here for everyone; there truly is. We've been doing a really good job of using our issue board to publicly map out and plan the things on our horizon so folks can jump in. A lot of that stuff already has assignees, but we're talking about teams of two to ten people being needed for some of it, so don't be put off if you see an assignee; just comment on an issue and say you really want to work on it. I know Lee and some other people have. Thank you to everybody who has come forward so far; we've got a really good core group of people right now. Wonderful stuff.

Just pausing for a second in case anyone has any questions. And it seems like everyone is happy. I'm very happy. So let's move on to SIG Network.

Paris is right, I'm pretty pumped about that. Okay, great.
So, updates from SIG Network, some project updates. We've got Contour as a proposed incubation-level project that's gone through SIG review and, I think, is pending some discussion in the TOC and probably a vote there. We've got another project, Chaos Mesh, that recently presented within the group and is under SIG review at the moment. There's a backlog of about three projects, Ambassador, Meshery, and Kuma, proposed for various levels, I believe sandbox and incubation. And coming up on the seventh, on the schedule is a presentation from WeWork about their survey of layer 7 protocols, very relevant and interesting, I think, to the world of Envoy filters and maybe some WebAssembly. So it should be an interesting topic for us. Another item in review, proposed for adoption within the SIG, is essentially a white paper. Hopefully I'm capturing it accurately, or in the best light: it's related to, and an offshoot of, some work that was being done within the Telecom User Group. That group naturally has a more service-provider-centric orientation, and part of the effort by the crew behind this particular set of principles, Jeffrey and Watson and a number of others, has been to bridge the divide between service provider and enterprise. Anyway, that's been proposed. I don't think it's been given its full due just yet in terms of potentially incorporating it as a white paper within the SIG, so that's a to-do item. Lastly, and hopefully we'll talk about this on the coming Thursday, there's a proposal for a working group around service mesh performance. Unbeknownst to most on this call, I assume, there was a separate Linux Foundation project being formed that was focused on workload definition and measurement; actually, it had a really long project name.
At first blush it sounded fairly related to CNAB and some other things the app SIG might be interested in, or already defining through its projects. The point is that that Linux Foundation project fell through during its formation, but some of its work remains, and there's a piece specific to networking that we will propose and consider having as a subgroup. So that's the update. I'm waiting for those representing the cloud native networking principles; if I mischaracterized that effort, feel free to correct me. I have no public shame.

No, I think that was good, Lee. It's basically just a set of principles that I asked Watson to help me develop so that I'd have common language when I start looking at things like cloud native network functions and trying to do performant networking in a more cloud native approach.

Okay, very good. That sounds really interesting, and I think getting some more public awareness of what we're talking about in that telco space would be useful. Lee, I have one question for you. Contour, that's incubation level, right?

Contour, yes, it is.

And do you know, or anybody else from the TOC, do we have a TOC member who is lead on that?

Matt, Mr. Klein, is probably the point person there.

Excellent, okay.

Yeah, hi. What's up, Liz?

I was just checking to make sure we have someone who's going to drive that through the whole DD process.

Yeah, I think we're actually done. I had sent the DD doc out to the private TOC list for comments, and I was going to follow up today on that.

Great, okay. So that's with us for comments; is it in public comment as well?

It has been in public comment for some time through the SIG. I was looking for some private TOC feedback just to see if there are any major concerns. If there are, we can talk about them; if not, we'll probably do a vote this week or next week.

Wonderful, yay.
We're getting through some projects, that's awesome. Thanks, Matt. I'm assuming there's nothing else you want from the Contour team right now, so we're all set on our end?

Not right now. If there's any feedback from the TOC, we can talk about it.

Okay, thank you. Anyone else with any questions for Lee and SIG Network? Okay, let's move on to SIG Runtime.

Hey, everyone. Yes, hello, SIG Runtime. A quick question for SIG Network first: can you share the link to the white paper in the chat? Thanks.

Okay, so SIG Runtime, I hope everybody can hear me. Project updates: at our last meeting we had a presentation from Metal³, which is basically bare metal node provisioning for Kubernetes, standard bare metal provisioning for machines. The project is currently in review; they're applying for sandbox. We created a document that's publicly available for everyone to comment on, so we're just waiting for some SIG chairs and tech leads to review it. The next step will be to create a PR and merge it into the SIG Runtime repo, and that will become our artifact for the TOC to find sponsors. We're also following the new working sandbox template. Another project in the pipeline is Quay, which is basically a container registry with lots of different features: security features for image scanning and different ways of storing container images. We have their presentation on Thursday at our next meeting; they're applying for incubation. Then Harbor finally got its consolidated due diligence document completed from all the relevant SIGs — SIG App Delivery, SIG Storage, SIG Security, and SIG Runtime — and the recommendation is to graduate. So I think the next step will be for the TOC lead to kick off the two-week public comment period and, after that, if everything goes okay, kick off a vote. Then we have another project, KubeEdge, that opened a PR in the TOC repository for its annual review.
So we'd like to schedule a presentation to help out with the review; hopefully we can help the TOC there. The project is in sandbox. KubeEdge is basically about running container workloads at the edge using Kubernetes. Then we have some roadmap items. We had a presentation on the Container Device Interface at our last meeting. This is a team mainly driven by NVIDIA, and they're looking for a home to do their work, so they're possibly interested in spinning up a working group, maybe working with the OCI folks, to define standards for how containers interface with different devices; in this particular case I guess it's GPU-type devices, because it's NVIDIA driving the group. We also have our new TOC liaison, Alina; welcome, and we're glad to be working with her and looking forward to it. The last item is that we're continuing to reach out to more communities. Some examples: Bottlerocket, which is an operating system specifically designed for containers, driven by AWS open source. It's a different take on an operating system for containers, similar to what CoreOS was, but using a more API-driven approach, with a container manager built into the operating system. So that's an interesting project in the space. Another group we reached out to is Firecracker; I think a lot of folks are familiar with this project. It's basically a microVM runtime, also from AWS open source, used primarily, I think, for serverless workloads. Then Lupine is a research project from IBM that we reached out to as well. It's an in-between approach between a microVM and a unikernel: basically a stripped-down Linux kernel that still lets you run ordinary Linux workloads, as opposed to a unikernel that has to be custom built. A very interesting project; there's a research paper in the link.
And finally we reached out to waSCC, which is a WebAssembly runtime and a project we think is within the scope of the SIG. That's it for the updates.

A quick note here: annual reviews aren't required to go through the SIGs. I just wanted to flag that.

Yeah, I don't think they're required, but we'd like to help out, and I don't know what the requirement is; is it a presentation?

No. Obviously we can take this offline, but putting up a PR in the TOC repo saying "here's what we're doing" is the annual review; it's separate from an incubation review.

Got it.

Actually, let's talk about this at the end, or take it offline.

Sounds good.

Great. And for Harbor, Shang, I've managed to look up the issue; are you the lead on the due diligence for that?

Yeah, that's Jan. I think the next step is the two-week public comment period, and after that a final TOC vote.

Great, and thanks to all the SIGs for working on that; I think it's been going on for several months.

Yeah, about three or four months, with multiple SIGs involved and trying to get them all together.

Cool, thank you for coordinating. All right, any other questions for SIG Runtime? Then let's move on to SIG Storage.

Good morning. We've had a number of projects going through a review process; TiKV, Rook, and Harbor are all doing the larger graduation reviews. For TiKV, the due diligence review is in progress: the doc is mostly written, we're reviewing it, and the presentation is scheduled for our next SIG call next week.
For the Rook graduation, the DD doc is being written by the team. There's a backlog here, because when Rook was first accepted as an inception project, as that level was called back then, before the concepts of sandbox and incubation, we didn't have a proper DD document written, so we're having to do one now. And for Harbor, as Ricardo mentioned, we've folded the SIG comments into the main due diligence doc. Finally, we've been working on a number of documents. The storage landscape version two is now finalized; we're leaving it open for comments for another week or so and then hope to publish it soon after. It includes a bunch of updates around the use of databases, which we hadn't scoped in the first version, and a number of updates around orchestrator management interfaces and control plane interfaces. In general this is a really powerful document as an education tool, and it helps users familiarize themselves with the landscape. It covers everything from storage attributes to the different aspects of a storage system, as well as the interactions between volumes and block and file system solutions, and API-driven solutions like object stores, databases, and key-value stores. We've tried to make it as broad as possible, so hopefully people find it useful, and we'd love any comments before we close it off. We're also working on a performance and benchmarking paper. That has been stuck for a month or two because of other workload, but we're hoping to start on it again in time for the next meeting. And we've been working on a use case template; we'll be scheduling a meeting to review that template and then send it out for wider review. We also had a little bit of a surprise.
We had a survey running, which has been going for a few months now, slowly accumulating responses, and we now have 54 of them. It's a fairly comprehensive survey with lots of questions and lots of coverage, so we're going to have a separate meeting to summarize it, and we'll look to share the results both at the upcoming virtual KubeCon and in this forum, because there's probably a lot of useful information there that can help us prioritize what to focus on next based on the end user responses. And that's SIG Storage.

Great. Any questions for SIG Storage? All right, thank you, Alex. You're up next.

Hello. For SIG Observability, we had our first real, proper call last week, took on quite a few to-dos internally, and I think we've already made good progress, which everyone on this call will be happy to hear. There are two things where we actually need input or help from the TOC: the vote for the third chair, Steve Landers, and the vote for a tech lead haven't seen any votes or questions, and I'd ask people to either ask questions or vote yes or no on those so we can move forward. Then there was a question about whether we should be doing interviews as part of our project incubation reviews. I'm wary of this, because I don't want to introduce impromptu or implicit processes that duplicate what the TOC already has as part of its process. The TOC should decide, or the SIGs need to agree, but between the SIGs we need one single common approach for this, rather than each SIG doing its own thing, which currently seems to be somewhat the case. We don't want to do work twice, and we don't want to take away anything that, per the process, is part of the TOC's work. On the other hand, we might be in a position to ease the load; we just need to know what the TOC prefers.
We don't have strong feelings; we just need to know. And the last thing: we're still getting up to speed, and for now ongoing work is tracked in that pull request; it will get better all the time. That's it.

Okay, thank you. My own thought on the user interviews is that whoever is the TOC person driving the due diligence can work with the SIG and decide who's going to do what, because on a case-by-case basis there may or may not be people in the SIG with good contacts to do those user interviews effectively. Anyone else have thoughts on that?

Yeah, I don't think a strict rule is required. When I looked into Dragonfly for incubation, I actually did the user interviews myself, but if a SIG wanted to do a user interview just to validate some of their observations, I don't see anything wrong with that either. It seems like it could be dealt with case by case; I don't know what the downside would be. I mean, is there a need for consistency here?

Yes, to some extent, at least in my opinion. We currently do have a hard rule, at least implicitly, by the simple fact that we have a process, and that process clearly lists user interviews as part of the TOC's due diligence work. Which in turn means that if we start doing this within the SIGs, and we do it subtly differently, it will not really decrease the workload, in my opinion. So if the TOC is fine with doing it case by case, then the TOC can just adapt the documented process and it's done. I just want to avoid the thing we immediately ran into in SIG Observability, the suggestion of "hey, let's do things which are not part of the process"; longer or even medium term, that's probably not a good idea for any of us.
I think we can clarify the documentation to say that the person from the TOC who's driving the DD decides, because they're basically delegating work to the SIG and ultimately have to take responsibility for it. They can agree with the SIG, case by case, on who's going to do which aspects. I could completely get behind clarifying that.

That sounds great, because then everyone knows what to do and how to do it.

Great. All right, any questions for SIG Observability? Okay, next. There's been a proposal for a SIG Serverless; it's on hold right now while we discuss how this potential SIG would relate to SIG App Delivery and how its charter can be written so that it's clear what it owns. I know Doug couldn't make this call, so that's the status for now. We're going to have a discussion with all the interested parties about how we can best get a group of people together to deal with this weird thing called serverless. And there's one other SIG, SIG Security; I think with a variety of work and coronavirus-related issues, they're absent this month.

Moving on to the conversation about annual reviews. We have four of them currently that need TOC sponsors, so I want to highlight that here.

And I know there was a question about NATS for graduation.

Yes, I think there's a bit of confusion around NATS because it's been going on for such a long time. It came up for graduation discussion some months ago; various people on the TOC had concerns about the organizational diversity of the core maintainers. The last time I looked, I think they had actually made some changes to the core maintainers, so that particular bridge may now be crossed.
There's now a bit of discussion about whether they've actually had due diligence, which I think Quinton might have driven back in the day, or possibly Alexis.

That's true, Alexis did some of the DD.

So I think there is some DD, and we TOC folks need to identify where that's gone and make a decision. They certainly did a pretty thorough graduation PR, so I think we need that on our to-do list for sure.

Yeah, and the annual reviews, I guess those are on our weekly to-do list.

They are. I'm really just highlighting that we need TOC sponsors for these to move forward.

Great. I can sponsor Buildpacks.

Yay. Go put a comment on the thread. Thank you. Going once, going twice. Moving on.

All right, so we have nearly 20 minutes to discuss what might be quite a controversial point, though maybe it will turn out to be less controversial than we fear. In the TOC we've had concerns building up for a while about sandbox. It was originally intended to be a low barrier to entry: it's supposed to be there so that the CNCF can offer projects a neutral collaboration ground, potentially really early in their existence, for experimental projects, so that people from different organizations can come together and work on an idea. What we've observed is a lot of projects seemingly wanting to use sandbox more as a marketing tool than purely as a collaboration space. And we've also seen an awful lot of work going into it from the SIGs and from ourselves, plus a lot of, for want of a better word, lobbying that we have to deal with from projects who want to get into the sandbox.
And to be honest, it's a distraction: projects that are more mature, at the incubation and graduation stages, are potentially having effort and focus taken away from them because we're being distracted by all these people who, rightly or wrongly, with the best will in the world, want to get their projects into sandbox. So we want to streamline that process and make it much less onerous. At the same time, we don't want to end up in a situation where every single project on the planet comes into the sandbox and gets some kind of giant marketing boost; that's against the no-kingmakers principle. So we've written up a proposal; we bounced a couple of different ideas around inside the TOC. What we've ended up with is a simplified sandbox submission process that doesn't go through a SIG recommendation. Submissions would still be public, so SIGs, if they feel particularly strongly about a project, can still raise their comments on the mailing list or in a discussion. Rather than handling them one by one, we would review the submission spreadsheet on a regular cadence, perhaps every quarter or every two months; that's up for debate. You'll see in the document that we've tried to clarify the criteria. A lot of those criteria are qualified by "in the TOC's opinion", but the aim is really to lower the qualification bar and, correspondingly, introduce new branding that makes it clearer to users, to the community, to everybody out there, that sandbox projects do not carry the stamp of endorsement of having been through due diligence. Really, what we're saying with sandbox is: this is an experimental project in the cloud native space, and that's pretty much all we're saying about it. So yeah, that proposal is there.
Now, obviously I can imagine that might be creating some concern for existing Sandbox projects, and maybe for some projects who've been through, or who are going through, the process at the moment. I can only apologise if this ends up meaning that some projects have done a lot of work on something that we're now going to make easier, but I hope that in the future it makes things easier. Oh, the other thing, I didn't actually put this on here, but it would stop being a sponsorship model and would become a simple vote model. And hopefully that will mean we get fewer lobbying requests. So yeah, that document is there. We'd love feedback and comments. And does anybody have any immediate gut reaction thoughts about this proposal? I do. Hi, Liz. So I think it's a great idea. I mean, I was actually involved in helping create Sandbox, and that was our driving concern: trying to figure out how to take away the endorsement component of it, so that just being part of Sandbox was not an endorsement of support or maturity. I mean, prior to that, all Sandbox projects, when you looked at the website, were at the same level. So as part of wanting to avoid it being a marketing tool, how do we really facilitate that, so that someone brand new looking at the CNCF doesn't misunderstand, and doesn't have to understand the differences between these? That would be my first question. So that's the idea behind having this new branding. I think it was Brendan who first suggested it, and he said we could get somebody like Ashley McNamara to come up with something that's more fun and less serious, to give a different feel to the branding that we're giving to the Sandbox. I mean, that's not a final decision, but I think it conveys the kind of emotion that we're thinking of here: by requiring Sandbox projects to use branding with something experimental about it, it would be clearer to the community.
We're also talking about not giving Sandbox projects, beyond any existing commitments that have already been made, automatic slots at KubeCon, for example. So the automatic marketing benefit would be non-existent. Yep, nope, that was actually going to be my next question, so that's perfect. I like the idea of it. The whole idea was just to make it easy and not have time spent on due diligence, and we seem to have swayed away from that; now we're swinging back to it, so I think it's smart. Any comments? Sorry, go ahead. Yeah, I think it's great that we have more clarification here. I think we all have the same concerns, especially with the review. I haven't read the document yet, but what I still think is important is that the project has a certain quality, even if you consider them experimental; "early stage" might actually be a better word. And I think there are two factors to it. One: how far along is the project with the capabilities it's providing to the community? And two: how good is it from an open source process perspective? Does it meet, say, the CII criteria, have a contribution strategy, these kinds of things? I think these are two independent factors, and maybe the TOC wants to be a bit stronger here: the project has to fulfil a certain level of quality, so that it is still a quality stamp, but the technology might still be early stage. So the one thing that we really have to be careful about here when we put in criteria is, we want to be able to enable people from competing companies to come together and work on something from scratch. And that's something that the CNCF really... it's one of the reasons why it exists, right, to bring together people from companies that would otherwise compete and to give them that neutral collaboration ground.
So we have been concerned about making sure that we're not putting in criteria that create a chicken-and-egg situation, where we can't give them that experimental ground. But one criterion that we currently have in that document, and it's totally up for comment, is whether or not, in the TOC's opinion, it's a fit for the CNCF. So this is really the sort of judgment around making sure we're not getting projects that have nothing to do with cloud native. Is the project roadmap in line with the goals of the CNCF? With that, we're really trying to address this idea that competing companies, or individuals from a variety of organizations, could come together and say: we want to work on this particular problem as a project, we want a space in which to do it, and this is what we're trying to achieve. And then we also have a criterion about whether we believe it to be on a good path to becoming well-governed and vendor-neutral. That's quite loose language, deliberately so, so that we can exercise judgment, but hopefully with an indication of what we want to see. So, quick comments on this, because I kind of like the motives behind making it easier, taking away some of the due diligence, and putting some sort of constraints on the marketing, because ultimately those were all the problems with Sandbox. But I'm not quite sure I understand how this will work, in terms of two things, right? If we disconnect the projects from the SIGs, that's probably disconnecting them from the most valuable part of their community. So, with the projects not interacting with the SIGs or presenting at all, it seems harder to understand how they then move forward.
But then also, secondly, if the TOC is trying to understand the project roadmap and its viability and all of those sorts of things, doesn't that come back to the original problem: the TOC is going to need to work to understand the project, the project will have to present to the TOC, and we're kind of back to square one? Yeah, I share some similar concerns with Alex about potentially over-correcting. And it seems like working groups might be a possible place to easily form and come together in a vendor-neutral way. Part of my concern with lowering the bar: I think more recently we were sort of talking about raising the bar around Sandbox in certain areas, and now it looks like in certain areas we're considering lowering it. I wonder if that doesn't begin to overwhelm the number of projects trying to make it into incubation status. And then disincentivize, well, not disincentivize, but not incentivize enough, those that were shooting for the current Sandbox requirements, if those projects just sort of come in as well. So I have a few thoughts on this, and if anybody else on the TOC wants to jump in, please stop me; people might be bored of the sound of my voice. I think in terms of the community and the connection between projects and SIGs, there is no reason why that can't happen anyway. We're just saying it doesn't have to be part of the process for a Sandbox application. You know, SIG Runtime reaching out to a group of projects, as we saw earlier, seems like a really healthy thing, and it doesn't necessarily lead to those projects coming into the CNCF, but it means that there's knowledge, there's an understanding. I think that's wonderful. But that doesn't have to be coupled to admission as an official project. And I think the other thing is, we're trying to reduce the benefits of being in Sandbox.
We're trying to distinguish: we want to encourage the good projects to come in, but we don't want people to come in just because they want to get a slot at KubeCon. We're trying to reduce the incentive. I don't know; we'll see whether this reduces the floodgate of projects desperately coming and knocking on our door trying to get into Sandbox. And I think what we'd really like to see is the good projects who are some way down the road shooting for incubation, and Sandbox being something that people do less as a matter of course, because they see it as a rung on the ladder, and more about: well, we're not ready for incubation, so we're just going to have to go into Sandbox, but we want to get out of it as quickly as possible. So I'm the guy who tried to push Keycloak twice. Based on that experience, I actually do like the draft a lot. I think it addresses most of the concerns which were there. One quick question: does it effectively, as of today, put the current process on hold until the new one is active? Because I was approaching some of the TOC members about sponsorship, so should I stop? And then on top of this, I do like the direction; I'm just not thrilled about the timing, because we initially submitted in 2018, then literally got dropped a few hours before the TOC meeting because there was a process reboot. So I was literally waiting several months until the new process was settled and the election was completed, to work with a new process and a new TOC group, and was actually this week approaching some TOC members. So it's a bit of déjà vu. Yeah, unfortunately, I don't think there's ever going to be a good time; there are always, unfortunately, going to be some projects who get caught out by changes in process.
But yeah, I mean, it's an example of why... you know, multiply that by every project, by the number of TOC members, by the number of times we get approached about Sandbox, and it's one of the reasons why we feel like now we just need to find a simpler process, so that we can discuss them together and assess them. I think everyone will be happier if it's a vote rather than sponsorship, because it's been difficult to explain what a lack of sponsorship means. Does it mean that all current submissions go on hold as of today, and we should not talk with TOC sponsors because we need to start switching to the new process when it's in place, or should we try to finish the current submissions? It's a great question. I mean, at this point it's a proposal, it's not a decision, and at one level, you know, if people crawl all over it and say it's terrible for any number of reasons, then we'd obviously have to take it back to the drawing board. On the other hand, if people broadly think it's a good idea, maybe we can move to adopt it quickly. You know, I don't have a simple answer to that question. Could I just ask what would be the next steps and process from here? So we go review it, we leave comments, things like that. What happens next with this? So the document ends with: next step, circulate for public comments and discuss what to do with existing submissions. There may be other things we need to add to that list. I just want to make sure that we're not just moving the problem. Like, you know, I don't know. I'm not sure that people looking in from the outside will still understand the differences between marketing and not. And I'm a little concerned around the language of identifying good and bad projects and what that means. I like the idea of projects coming into the SIGs and working groups to marinate. I mean, Sandbox really could also be: I have this great idea and I want to find like-minded people to grow it.
And that wouldn't be something that would meet the criteria of "we feel like this has good contributorship and is going to reach incubation". It was also meant to be: this could completely flop, and that's okay, but at least the idea gets out in the community and has a chance to collaborate with people. So that would be my one concern after this discussion. Erin, I couldn't agree with you more. I think we need to decide what the goal is that we're looking to achieve from these projects and measure that, as opposed to measuring something that none of us would be able to measure, and that is the desire they have for marketing, right? I think that'll be difficult for us to measure and weed out. It would be better for us to measure the goal that we're looking for from these projects. And if that's healthy participation and exchange with the community, if we have a way of measuring that... and Chris has suggested that we could look at the health dashboard as one measure. But if we have a way of objectively measuring those criteria that we're trying to promote, then we can take the subjectivity of what's good and bad out of the equation. I don't think any of us intended to characterize new projects by whether they're good enough or bad enough, or have enough formal processes in place. That's one of the reasons they would come: so that they could learn the processes they can follow to engage, become more collaborative, and be more productive projects. I think that's why they come to the incubator. But we need to measure something a little bit more objective, something that we can say is truly what we're looking for these projects to achieve, rather than what we want to exclude, I think. Yep, thanks. Yeah, that was a good way to say it. OK, I think we are up to time. So these are good thoughts. Please do think about it and add your thoughts in that document.
You know, it's a proposal, it's not a decision. So we'll take it forward based on people's comments and thoughts, and the TOC's response to those comments. So thank you in advance for all your thoughts on that. OK, I think that's the end. All right, take care, everyone. Thanks very much. Thanks. Good to see everyone. Bye, all. All right, thanks. Yeah. Thanks.