Good morning. I'll get screen share up and running; I see some folks coming in. Good to see everyone. Final call on anyone who's still editing slides. Hello, Liz, you're here. Hello, Jeff, welcome back. Good to see you. Lovely. I will go grab attendance. Can you place the links in the chat? Give me a second here, I need to find them in mine. Oh goodness, okay. I'm sure they're there, but my inbox is not a thing of beauty. There you go. We're two after the hour; we'll give everyone a few more minutes to come in. We've got lots of folks on the line today. Actually, nearly all of our TOC is here. So, Liz, do you want to get started? Shall we? Let's do that.

Welcome, everyone. Thank you for joining us. This slide will be updated with the TOC members present. Okay, let's get to this slide. So, there are a few projects that need an update. We can also touch briefly on the plan for sandbox at that point, and then move into the SIG updates. Note that we are missing a few SIGs this time: SIG Security is still down some chairs, and SIG Contributor Strategy will be sending in a written update later.

Of note, we've got some annual reviews. There's a link up here to the board where all of this is tracked. We have some new reviews since last time we met: Brigade and Network Service Mesh, both linked here. As of yesterday when I was checking on TOC sponsors, Cortex, Telepresence, and KubeEdge have two each; you need three in order to complete this process. So my request of TOC members is to go and review these particular annual reviews, come in and ask questions, and if you're satisfied, please put in a "looks good to me." Any questions around this?

I just want to emphasize these are all sandbox annual reviews. Correct. We're not looking at any incubation moves here; well, they might be looking to move to incubation, but that's a separate process. Exactly, that is completely separate. This is the new process that we implemented at the beginning of the year, and we're now seeing a lot of projects coming through it. Great. And actually, for any of the people who've been putting their annual review reports together: the ones that I've looked at so far, I have found really useful for getting a snapshot of what's happening in those projects. Anybody who's interested in keeping pace with what's going on across all these different projects will find them a useful resource. All right, any other questions on that? Right, we can move on.

I also wanted to call attention to votes that are currently open. We have Harbor, currently open for graduation, and we have another vote open for an incubation move. So if we can please get some input on those, that would be lovely. I have seen some votes coming through this morning, and I'm happy to see folks come in and put in their binding +1s, but these have been open about a week now, and it would be great to get your input here.
Anything else around votes, questions, comments? I hear nothing, so we can move on. I will pass it over to the SIGs. Thank you.

Actually, I'll just touch briefly on sandbox first; well, that's part of the questions anyway. The TOC agreed last week that what we want to do is trial-run the new sandbox process. We can go back and forth about how easy or hard it's going to be all day; let's just try it out. So we've asked Amye to take the projects that are currently awaiting sandbox review and complete the spreadsheet for them, the spreadsheet being based on the form. Hopefully all the information to complete it is already available in the submissions; we're asking Amye to do this to save the projects from having to feel like they're reapplying. Then our plan is that the next time we have a private meeting, we will go through that process. Our plan is to hold it privately but record it, so that people can see the discussions that we have, but we're not interrupted, and we can have it as a group discussion amongst the TOC. Maybe the process will turn out to be a disaster, or maybe it'll be all smooth sailing; we will find out by experimentation. Any questions or comments about that?

I have one, related. For projects that are under consideration, or I guess projects that are yet to be inside the CNCF: is use of DevStats, or analysis through DevStats, something that would be available to them as they go through review? Then we could assess where contributions are sourced from across all of the Git-based activity. Is that a service desk request away, or...? Maybe Chris, you can answer. My understanding is that DevStats gets added for projects once they are already part of the CNCF, right? Pretty much. If there's a request from a TOC member, or something that needs a little more extra diligence from a SIG point of view, we could see what we could do, but those are truly meant for projects that are part of the foundation. Got it.

Okay. I think, particularly at the sandbox level, the surface information that we can see on GitHub is probably sufficient, because we wouldn't want to be accepting into sandbox something where somebody knocked together a PR that morning and there's literally nothing there; but, you know, we don't have a minimum bar like that. So, yeah. All right, any other questions on that? Or should we move to SIG App Delivery?

I had one quick question, and I joined late, my Zoom was being difficult, so I probably missed it, and I'm sorry if you have to repeat it, Liz: what is happening to the projects that are already waiting on sandbox? Those are the ones you're going to look through in the private meeting, did I understand that correctly? You mean the ones that have already applied for sandbox? Yeah, exactly. So the idea is that rather than asking the projects to reapply, we're hoping that all the information to fill in this hopefully relatively straightforward form is available, and that Amye will be able to do that on their behalf. Then we can use that to trial the process, and we'll find out by experimentation whether that process is more effective, at least in terms of us reviewing, and then we'll take it from there.

Who's here from SIG App Delivery?
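(For context on the DevStats exchange above: for a project not yet in the foundation, a rough version of the same contribution-source analysis can be pulled from public GitHub data. A minimal sketch, assuming the PyGithub library, a GITHUB_TOKEN environment variable, and a hypothetical repository name; self-reported profile affiliations are far noisier than DevStats' curated data.)

```python
# Rough, DevStats-flavored look at where a project's contributors come from,
# using only public GitHub data. Assumes `pip install PyGithub` and a
# GITHUB_TOKEN environment variable; the repository name is hypothetical.
# Profile "company" fields are self-reported and often empty, so this is
# much noisier than DevStats' curated affiliation data.
import os
from collections import Counter

from github import Github

gh = Github(os.environ["GITHUB_TOKEN"])
repo = gh.get_repo("example-org/example-project")  # hypothetical repo

companies = Counter()
for contributor in repo.get_contributors():
    affiliation = (contributor.company or "unknown").strip().lstrip("@")
    companies[affiliation] += 1

for company, count in companies.most_common(10):
    print(f"{company}: {count} contributors")
```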
Who's here and not on mute from SIG App Delivery? I'm not actually seeing anybody. I'll give it another second or so for someone to step in. We can always move on to the next one and come back once they've joined. Happy to. All right, we'll step on over.

It may be worth trying to answer the SIG App Delivery question, though, because some of us who attend that SIG are here and could carry it, and there was an open question here for the TOC. This is the end user question: specifically, does a project meet incubation criteria if its adopters are mostly not end users but vendors?

Okay. So I think this is probably one of those situations where we might be able to tidy up the definition, but we might also want to rely on some level of judgment from the TOC. That's probably the way it's worked before, because I think it is somewhat dependent on the type of project. The extreme example of this was with spec projects, where end users almost by definition can't explicitly, directly use a spec. I don't remember the definition exactly, but we did try to tidy that up to allow for our own judgment. So I think the problem with these criteria is we might end up in situations where it isn't an absolute number of companies so much as understanding what kind of size and scale they are. And I see a question from Doug; if it's an infra project, then it should have... sorry, there should be periods between those lines on the slide; sorry about that.

We had a very similar problem, or question, when CloudEvents went to incubation status, because we had to have a certain number of end users, and obviously most of CloudEvents usage is not by end users directly. We winged it a little and talked about all the different vendors that are adopting it, and that seemed to satisfy the TOC during the review process.

Yeah, I think this seems like one of those criteria where it's quite hard to imagine a firm definition that we would always be happy with, and perhaps, as Chris has said, it's better as a judgment call by the TOC at the end of the day. And I guess this is another one of those things where public comment can also inform us: if people think that something coming up for incubation is questionable, they can always raise questions about it.

But for Buildpacks, I mean, end users are using things that are compliant with the spec. End users are using buildpacks to build things. That seems like an end user use to me, even if strictly it's a spec and not the implementation. If an end user is going and doing things which they find useful because it complies with the spec, then that seems perfectly sufficient to me. Yeah.

I think my recollection is that we tidied up a graduation criterion for specs, and it was something about the number of implementations and then whether or not those were actually being used, though it's pretty much harder to measure that. Presumably that was for TUF, was it? It was. Yeah. Right.

So I can add more details about this question. I think the current discussion is around the fact that maybe some projects are more likely to be adopted by vendors instead of end users.
So that is why we raised this question: today we emphasize end user adoption a lot at the incubation level, and of course within the graduation criteria. When we were reviewing Buildpacks, we noticed that it tends to be implemented by vendors, or supported by vendors, instead of end users. That is one observation, and we are not very sure how to proceed with this project review, actually.

So I guess the useful information for us to make the decision would be to understand who the adopters are, and if they're not end users, how that is characterised. It could be that it's implemented by two different vendors and those vendors have 50,000 end users of their implementations; that would seem like useful information for us to understand. Yeah.

So I think maybe this also points to a general point about these SIG reviews: we're asking for a recommendation, but I wouldn't say it's critical to always have a yes-no, pass-fail on everything. You can have qualitative information in the review. Yeah, I definitely agree with the point here, and somebody else mentioned the same idea. So we will do the recommendation based on the current facts, we will mention that the adopters of this project are actually more like vendors, and we will let the TOC make the final decision, because we can still make a recommendation around every other aspect of the project. I think that's how we will go with this specific project. Great. Matt, thank you for pulling us back onto this slide; sorry for that.

All right. Do we have anyone from SIG App Delivery who wants to talk about the status side of things, or shall we move to SIG Network? Yeah, I think we are okay to move to the next slide; we only had the one issue to discuss today, which was Buildpacks. Okay. All right, Lee.

A couple of proposed projects under review. Let me skip around for a moment and go in chronological order, and that is to say that Contour, currently proposed for incubation, is furthest along in the pipeline. Probably the majority of the folks on the call here have either weighed in on or seen the proposal for incubation; if you haven't seen it yet, this is a somewhat redundant call for review there.

Stepping back to the top, under project reviews, in chronological order: Chaos Mesh is a project that's been proposed for sandbox. Its SIG review will be complete this week. We've spent quite a bit of time with that team and with that project, and as hard as I personally tried to dig up bones or come up with something negative (I say that as a joke), for my part I've got nothing but positive things to say. That's pending Ken's analysis as well, but so far, so good.

Kuma was also proposed for sandbox. It was presented the last time we met, last month, and so an active SIG review is starting there. We meet this Thursday and will receive a presentation from Ambassador for sandbox. I say that tentatively, awaiting correction, but that looks correct. Okay, good. Now let me go back to the other projects.
Shortly to be proposed for sandbox, and sort of on the list: CNI-Genie was previously proposed for sandbox, and we just haven't made the right contact with those maintainers to actually get them scheduled for review; I think we have a stale issue there.

This next bit is a recap from the last time we met, but it's probably good to mention again, because we've got different folks on the call and it's good to let things settle in more than once. Within the SIG, there's been interest from some of the participants in forming a service mesh performance working group. There are about three high-level goals within that working group. I'm not sure if the subsequent slide made it in or not; it's not critical that we review it, but for those that have the link, it was a late-arriving slide that you can see there, so we don't particularly need to cover it here. It's good to recognize that there's a working group being formed, and good for folks to review the initiatives within SIG Network. There's much interest from a variety of vendors, as well as universities, in some of the research going on there. There's research, there's a spec, and there are CommunityBridge and Google Summer of Code internships that hopefully will be facilitated through that working group, with students focused on the service mesh projects that fall within scope of SIG Network.

Lastly, I think we noted last time we met that we had an upcoming presentation from Jonathan Burby on the state of Layer 7 protocols that are not HTTP, really much more IoT-focused protocols. He delivered that presentation, and not only is the recording available, his presentation is in the SIG Network repo. A job well done on his part.

So I have a couple of questions on the new slide; I have the benefit of actually being able to see it, and there are a couple of really interesting things. One is talking about maybe having some CNCF labs for benchmarking. Is the idea there that we'd be able to publish results for, I don't know, different scenarios, to compare different service mesh implementations?

Yeah, pretty much, yes. Although I would caveat it with that being a potentially contentious area of research and focus. The point there, hopefully, is to lift up all of the projects involved, to encourage and inspire confidence, and to help provide people information. There are maybe some other equally interesting things beyond the comparison, about patterns and best practices. These three initiatives, or at least the first two, interweave a bit. For that first one, as we look at benchmarking various scenarios under different versions and different configurations, we will see a bunch of different overhead and performance profiles. And the second initiative, the specification, is where we try to help.
We want to give people the right context when they're measuring the overhead of running some of their cloud native infrastructure, and also present the context for the value that's being derived from that infrastructure. Service mesh in particular is kind of interesting in this regard. Of the variety of value that you derive from a mesh, whether it's logs or metrics, some of the observability stuff, some security stuff, some traffic control things, it's not like people aren't getting a lot of these features and functions out of their myriad infrastructure today. They generally are. But a mesh can bring a lot of those functions under one domain of control, into one system, into one layer, if you will. And it's lost on a lot of people that for the overhead that's incurred from a mesh, there's a ton of value derived. Clearly there's value, people get that, but some of that value is softer, like a single point of control for all of those things that were otherwise disparate. So my point is, some of the ways in which we hope to empower people is to provide a scale, new measurements, by which they can weigh the overhead in the context of the value that they're deriving.

I guess my next question is whether the service mesh performance specification, the establishment of MeshMark, is the same set of tests that the CNCF labs would run? That's a great question. The two would go in combination, or can go in combination, or can go separately; you can go run them yourself. The way that I would put it is that the spec helps bring formality and repeatability to those benchmarks. So the benchmarks would be an implementation of what's laid out in the spec, and then if somebody's submitting a service mesh, they know what they're getting into. Right. Any other questions about that?

I do actually have a question, in relation to the benchmarking and the publishing of results. We've been working on a benchmarking doc and performance analysis doc on storage for a while too, and we were proposing a set of tools that people can use to run their own benchmarks in the storage space. But one of the challenges that we saw was that it was really, really hard to ever get an apples-to-apples comparison without so many caveats and so many what-ifs. And therefore, we specifically refrained from making any recommendation where we would actually publish results ourselves. What we really wanted was for users to test different options in their own environment, under their own conditions, because really that's the only thing that matters; the results in a synthetic lab are often irrelevant to the end user's real-life application. So I kind of wonder how you got over that sort of mental block, because we just couldn't get our heads around that part of it.

Preach on, Alex. And moreover, those published results are a point in time, and for each of the projects measured you have a myriad of variables under consideration: that specific environment, that specific configuration of those versions, at that time, under that load, under a workload that is hopefully representative of your workloads. That's in part what the other project was born of: to enable and empower people with the tools to do it themselves.
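(To make the "overhead in context" idea above concrete: a minimal sketch, with entirely hypothetical latency numbers, of reporting the incremental cost of a mesh at a few percentiles rather than declaring a winner. This illustrates the general approach, not the Service Mesh Performance spec itself.)

```python
# Minimal sketch of reporting a mesh's incremental overhead at a few
# percentiles. The latency samples below are entirely hypothetical; a real
# run would collect them from a load generator against both deployments.

def percentile(samples: list[float], p: float) -> float:
    """Approximate percentile by index into the sorted samples."""
    ordered = sorted(samples)
    k = min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1)))
    return ordered[k]

baseline_ms = [2.1, 2.3, 2.2, 2.8, 2.4, 3.0, 2.5, 2.2]  # no mesh (hypothetical)
meshed_ms = [2.6, 2.9, 2.7, 3.4, 3.0, 3.9, 3.1, 2.8]    # mesh w/ mTLS + telemetry

for p in (50, 90, 99):
    base, mesh = percentile(baseline_ms, p), percentile(meshed_ms, p)
    print(f"p{p}: {base:.1f} ms -> {mesh:.1f} ms (+{(mesh - base) / base:.0%})")
```

The framing in the discussion is exactly this: the output is "this specific function costs you this much in this scenario," which a team can weigh against the value gained, rather than a cross-project ranking.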
The benchmarks published here, and the use of the CNCF labs for this, are meant to be very helpful to the many who don't have those environments, who don't have that gear or kit to do their own assessment, or the time to do even a one-time assessment themselves. But yes, for my part, speaking for that working group: we'd really rather avoid sticking our foot in our mouth by publishing an outright comparative "this one's better than that one." That is not the point. The point is rather, in the context of common patterns or common deployments... actually, one of the KubeCon talks that a student will present at the coming KubeCon is very granular in this way: narrowing a lot of the variables and saying, hey, this function, maybe it's traffic redirection, or denial of a request, this specific function that you receive out of a mesh, in this scenario, comes with this cost. Here's the incremental cost. In part, Alex, it's to help empower people. That's not a comparative thing; it's rather: what is that overhead, in a very granular way? If I'm looking at architecting my application around the power of this cloud native infrastructure, how much, just incrementally, does that cost me? Should I take two sprints of my developer cycles to build in this network function and have it just as I like it? Or is the overhead so negligible that I should move briskly and take advantage of the infrastructure? I'm hoping for a lot of those softer questions to come out of it, and the comparisons aren't exactly the focus. The focus is enabling others with the ability to do their own benchmarks, because that, to your point, is in fact what matters. It is a relative scale.

That's actually really useful, and a really interesting way of looking at it. We were also a little concerned as to whether we were verging into kingmaking territory by putting that comparison online; that was something we were really worried about. Maybe that kind of publication of results is more at the discretion of the project; maybe let the projects choose where they shine, you know: using this particular mesh, mutual TLS will cost you this percent extra bandwidth, or whatever.

Yep, exactly. It absolutely is. And each of them has been apprised and invited to be involved. As a matter of fact, I'd almost argue that if they're not involved, then I don't know that we should publish it; it very much needs to be done with them. Okay, cool. I can see it being a really useful resource for users, to help them understand whether they want to add a service mesh or not, that whole "what performance impact will it have" question. Yeah, it's important. All right. Thank you, Lee. Let's move on.

Hello, SIG Runtime. Hi, this is Ricardo. We have some SIG Runtime updates on projects. Quay is applying for incubation. The due diligence document is done from the SIG Runtime perspective, and we've provided a recommendation, but it's still pending a security assessment from SIG Security. Once that's available, it will be ready for TOC review. That document's not public. Okay, thank you; we'll make sure it's made public. I think it was actually started by some of the folks at Red Hat.
Some of the people on the call from Red Hat, if they're available, maybe they can make that public. Thank you.

Metal³ is another project from Red Hat; they're applying for sandbox. The document is ready, and they're basically looking for TOC sponsors, so anyone who wants to sponsor that project, or has any questions, can ask us or the project maintainers.

KubeEdge is another project, looking to apply for incubation. They're going to have a presentation at our next meeting on Thursday. KubeEdge is basically edge workloads on top of Kubernetes. After the presentation, we'll go forward and see what happens.

K3s is another project applying for sandbox; it's a Kubernetes distribution. There's been a lot of discussion on the PR, and they're not really sure how to proceed. There are comments about the Kubernetes community weighing in on whether this should fall within the Kubernetes community or be a CNCF project, so I think we're still waiting for the Kubernetes community to weigh in. It tends to be in scope for the SIG, but based on what the community says, we'll take it from there.

And Container Device Interface is another proposal. They presented at our last meeting, and they're looking at creating a working group; the working group proposal is in a first draft, available publicly. We'll follow the process for what it takes to create the working group. This is basically some folks trying to come up with a common interface for devices in containers, mainly driven by the NVIDIA folks, but also applicable to other types of hardware, like networking interfaces and other specialized hardware.

Then, as far as community presentations and outreach: Lupine, which is a container/unikernel approach based on a very stripped-down Linux kernel, presents at our meeting in two weeks. It's a project led by IBM Research, and there's a paper on it. And there's the first virtual Cloud Native Summit China; we submitted an intro session for the SIG, to try to get more community involvement and awareness. Those are the updates for SIG Runtime. Any questions?

Just a comment on Metal³: the slide says it's on the old process, but we're going to cover it as part of our trial run of the new sandbox process. Can you confirm? Yeah, agreed. Okay.

I'll try to open up that Quay document right now. I don't know if I have rights or not, but I'll try. Great, thank you. If for some reason you can't change it, then we can create a new, public document.

I think K3s probably deserves its own agenda item for a future discussion. It's obviously marked itself as a distro, but it's also maybe solving some problems that are cloud native; there are swings and roundabouts to what we do with that project. So yeah, we should spend some time on that. Yeah, it's a very popular project, solving a lot of use cases. That's where the question is: whether it should be a project, or just part of a distribution within the Kubernetes community, or whether it would benefit from being part of the CNCF's whole set of projects and getting more community awareness and more exposure to end users, that type of thing. Right. All right, thank you.
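(For flavor on the Container Device Interface point above: the idea is a JSON document that tells a runtime which device nodes to inject into a container. A hedged sketch follows; the field names track the CDI format as later published, so the first draft discussed here may well have differed.)

```python
# Hedged sketch of a CDI-style device description, rendered from Python.
# Field names follow the later-published CDI JSON format; treat them as
# illustrative assumptions rather than the draft discussed in this meeting.
import json

cdi_spec = {
    "cdiVersion": "0.3.0",
    "kind": "example.com/gpu",  # vendor/class pair; hypothetical
    "devices": [
        {
            "name": "gpu0",
            "containerEdits": {
                # Device nodes the runtime should inject into the container.
                "deviceNodes": [{"path": "/dev/examplegpu0"}],  # hypothetical
            },
        }
    ],
}
print(json.dumps(cdi_spec, indent=2))
```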
Thank you. SIG Storage. Are you covering this? There you go; it doesn't matter. Okay.

So we have two projects going through graduation, TiKV and Rook. The due diligence documents have been completed and we've had the project presentations; we just have to type up a formal recommendation to tie it all together, but we should be good to go on both of those projects, to take them to a vote, or rather into that two-week review period, shortly.

We also have another project in the wings called Pravega, which is an interesting cloud native streaming storage project. I guess the closest thing it's similar to would be something like Kafka, but it's a fairly mature project. We've had a SIG presentation already, and the project team are looking to submit it as an incubation project, so this will be something on the agenda coming up.

And finally, we wanted to update the TOC on the situation we had with use case documentation. Following on from the landscape white paper, which, just as a tangent, we're about ready to publish now, we have been trying to figure out the next steps: documenting a number of different use cases of how to implement different types of workloads on different types of cloud native storage, to provide more depth than the landscape document we had written. At first we talked about having use cases that were specific to particular projects. But a lot of the consumers of cloud native storage might not necessarily be CNCF projects themselves; a lot of the databases, or message queues, or instrumentation, or whatever, that use cloud native storage aren't really CNCF projects, and therefore we thought developing those kinds of use cases might create the semblance of a formal recommendation. So we decided to try scaling it back and creating categories of use cases, where we would create groupings like databases or message queues, and provide some general recommendations along the lines of "optimize databases for consistency," for example, or for performance or latency or things like that. But then we came to the conclusion that creating that sort of use case for groups or categories of projects didn't provide a lot of value for the end user, because the end user really wants specific tuning, or specific recommendations, for a more targeted use case.
So we're now in this scenario where we're thinking we might actually pull back, because we just could not come to a reasonable way of providing use case information for projects without it appearing to be kingmaking in some way, or without those lists appearing to be recommendations from the SIG. So we're going to try to open it up to the projects, and hope that projects can provide use case recommendations based on a template that we've built, because I still think the template is useful, but leave it up to the projects to provide the recommendations based on the attributes we describe in the landscape document.

And where do these get published in the end? Well, what we were hoping to do was build a library of use cases in the SIG Storage GitHub, but this was the point: if we provide recommendations for how to run, say, Postgres, for the sake of argument, on cloud native storage, would that be seen as an endorsement of Postgres? Similarly, you could pick any other example and have the same kind of questions. So that's basically where we were coming unstuck, and I'm curious to understand what other people think about it.

Just to articulate that, Alex: we were not going to publish them now, that's what we were saying, because we worried that by doing so we would be establishing one thing over another, in terms of kingmaking. So we pulled back from publishing this, but if people don't view it the same way, then maybe we'll reevaluate. It became so generic that we didn't think it was worth our time, or helpful to the user community, just to say "a generalized database could be any of these" without actually providing something useful.

Right, so it sounds like there might be a kind of landscape, where these projects fall into these categories of things, but that would be about the limit of it. Yeah. So the current landscape document we have covers block storage, file systems, distributed file systems, object stores, key-value stores, and databases, and gives some examples of those already. The use cases were going to be more about how specific examples or specific systems could consume that cloud native storage. It was moving things a step further, so it wouldn't be as generic, and that was the challenge. Okay.

And, you know, the minute Lee mentioned publishing benchmarks from different mesh projects, I kind of thought, ooh, okay, maybe if that's okay, this might be okay. But I don't know; the difference is that a lot of the consumers of storage are not necessarily going to be CNCF projects out of the door. Right. So our recommendation was to have the projects provide these to point users in the right direction, because it is a common question that we get at KubeCon almost every time: how do we do this, how do we set it up, what do you recommend; very specific, opinionated ways of doing things that we feel are kind of outside of our purview. But I'd like to hear the TOC's opinion on that.
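(To illustrate what such a project-authored use case might capture, based on the attributes mentioned in the discussion: a hypothetical sketch only, with field names that are assumptions for illustration, not the SIG's actual template.)

```python
# Hypothetical sketch of the fields a storage use-case template could capture,
# based on the attributes discussed (consistency, latency, durability,
# topology, copies). These names are assumptions, not the SIG's real template.
from dataclasses import dataclass


@dataclass
class StorageUseCase:
    workload: str             # e.g. "relational database", "message queue"
    project: str              # the project authoring the recommendation
    consistency: str          # e.g. "strong" vs. "eventual"
    latency_sensitivity: str  # e.g. "sub-millisecond" vs. "tolerant"
    replica_count: int        # recommended number of copies
    topology: str             # e.g. "single-zone" vs. "cross-region"
    notes: str = ""           # deployment guidance from the project itself


example = StorageUseCase(
    workload="relational database",
    project="ExampleDB",      # hypothetical project name
    consistency="strong",
    latency_sensitivity="sub-millisecond",
    replica_count=3,
    topology="single-zone",
    notes="Prefer storage-level replication over replicating at both layers.",
)
print(example)
```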
Yeah, I can see that if end users have questions, as an organization we would like to be able to answer those questions; but if we can't do that without the whole kingmaking problem, then it's a problem. Yeah. I'm pretty cool with this idea of having the projects contribute their own use cases; it's kind of like each project is able to, I don't know, market itself: "this is what we're good at." Yeah. And we can provide the venue for helping to publish them, iterate through them, and keep them reviewed, because we also talked about a simple process just to make sure that the use cases are kept current, that sort of thing too.

To be clear, are we talking about projects that are already part of the CNCF, or just any project? Well, at this stage, I was thinking projects which are part of the CNCF. Okay.

Alex, and I may not have been listening closely: are these abstract patterns and best practices, backed up with specific examples using particular projects? Yeah. So, for example, if we were to say "if you want to run an object store, these are the types of patterns you would choose to use": of course, you could use Ceph as an object store, or MinIO as an object store, but the recommendations for how you deploy them, and the best practices and everything related to that, are very different for Ceph and for MinIO, as a simple example. So it's kind of hard to say "you should use a block file system" or "you should use a distributed file system" or "you should use this type of replication" or whatever else, if you can't mention the specifics of the particular consumer of the storage.

Yeah, I do see that it's much more valuable when a specific example is given; it's easier for people to digest. That doesn't mean the pattern itself isn't useful, or that it can't speak to a use case: here's when you'd want to have six copies in your object store, here's when you want to be redundant at this level based on certain criteria of the objects to be stored, here the consideration is latency across geographies, or such and such. No, indeed, and that's the thing. Those sort of generic-level things, where we talk about attributes like latency and consistency and durability and scaling, and we talk about the topology of the storage and the number of copies and the data protection and data services and things like that: we cover all of that in that fifty-page landscape document. So we were looking at this as the next evolution, I guess, the next step to take it on from there.

Yeah, we run into something similar; actually, I'm facing this myself. I've just started to author service mesh patterns through O'Reilly, and as part of that I've been approached by folks wanting to bring service mesh patterns to the CNCF and to SIG Network, which is a good home for a lot of those things; and I'm also aware of an end user group that's focused, I think, on service mesh best practices, which is great. So I'm trying to figure this out myself.
I don't know that I've got the answer. I think Aaron had mentioned earlier the risk of things becoming too generic, such that it's not worth doing, or not insightful enough, not prescriptive enough. Maybe by way of anti-patterns you might be able to keep things clean and still be useful. That's interesting; yeah, I like that idea. We hadn't considered that. We mainly wanted to avoid taking a particular technology and doing use cases for it, in the event we would accidentally fail to highlight one, or misrepresent it; so I think putting it in the hands of the project owners allows them to ensure that the content is accurate. But I like the idea of the anti-pattern; we should definitely discuss that, Alex. Yeah, that's actually a really good idea. Thanks, Lee.

I don't know if it's interesting for everyone else on the call as well, but I just want to add: I think the users are not looking for something generic; users are looking for something concrete, and it doesn't matter what project it is. As an end user, I wouldn't look at it as kingmaking; I would just look at it as an example. And as long as we give the users a chance to submit their use case as an example, I think we are fine.

I suppose also, if these are open documents managed in GitHub and a project feels that they've been missed out or misrepresented, they can address that by submitting a PR. It doesn't have to be a group of ivory tower authors writing this thing; it can be collaborative, right? Yeah, exactly. Right, but then what do you do when, I don't know, an Oracle or whoever submits a PR because it's good marketing? How do you make the call as to whether something's in or out? That's a challenge. Yeah, I guess, like any project, we have to make judgments about what's good for, in this case, the documentation project, and what's good for its end user readers. Maybe emphasizing projects that are in the CNCF to start with would be another way of steering that. Yeah. Agreed.

Here's a thought I'll offer really quickly: potentially the SIG publishes the generic pattern, but provides a space for vendors to link to their write-ups of their specific implementations, so that it isn't your voice in that argument. That's exactly how our template is structured; yeah, we were going down that route actually. Great.

I think we should move on, because we just have a couple of minutes left and we have SIG Observability. Thank you. We don't actually have any of the SIG Observability chairs here, so it's just a note, basically saying we need some TOC input and votes on a tech lead nomination, as well as a third chair nomination. So we are actually mostly okay on time. Okay. I apologize for interrupting the discussion. No, this is good. I think we should probably add this to the agenda for our next TOC and SIG chairs meeting, because it seems like there's plenty in here. And my last note: I believe I see Priyanka on the line, so a quick shout out to the general manager here at CNCF. Yay, Priyanka. Hi. Welcome. Thank you. I didn't want to ambush you with a full introduction here, but just a quick shout out. Thank you so much, Amye. Yes, I was so glad to be able to listen in; I've been doing that since before I got into this role.
You folks work so hard, and I'm like, man, this is a lot of work. But I know there's no time today. Thanks for the shout out, Amye. And maybe we can have a little bit of space in the meeting you recommended. Yeah, I think our next meeting, as I look at a calendar quickly, will be June 16th, our next TOC and SIG chairs meeting, so we'll add some pieces in there as well. Fantastic, I look forward to seeing you all at that. Welcome, Priyanka. A fantastic high note to end on. See you, everyone. Be well. Bye, everyone. Bye. So, congrats.