Hi, folks. I guess we'll just give people a few minutes. Amy. Hello. Hello. Ah, good. I can hear. Good. Yes. Signing in from the regular office. So howdy, howdy. Oh, I am signing in from Switzerland. It's good to see people out on the road again. I know. First, well, actually second time out of the country in two years, whatever. It's crazy. I've gotten a few regrets, but I think we should have enough to be able to have a really robust discussion this morning. Folks will probably be joining through the half hour as well. But I know Dawn is here, because this is something that Dawn Foster probably wants to be able to talk about as well. Brilliant. So yeah, give it a few minutes for folks to start running on in. Yeah. Liz, you're in Bern already? I'm actually just outside of Zurich right now, and I'm heading to Bern. As soon as we come off this call, I'm going to jump on a train, but apparently it's only like an hour and a half from where I am. I mean, I don't know if you know it, I'm in Falkett'sville. Well, I've heard of it. I'm on the other side. Okay. You can see the participant count is still slightly creeping up, but I think we're probably pretty good. Is there anybody else we're expecting, Amy, that we definitely should wait for? Or should we just go through the participant list? No, I think we should actually be okay on this one. So happy to be able to kick us off here. Awesome. So welcome, everyone. Usual terms and conditions apply. You made it. Hooray. Indeed. These folks are getting updated in the public working doc, but this is all of you folks. And here's our agenda. Yeah. So I think this is pretty much an open discussion that came out of, you know, a few concrete examples in the storage space, where we have, I think, three different sandbox projects who are applying for incubation. And we really wanted to have a discussion about when we have similar projects, without, you know, we don't want to just pick a winner. We're not kingmakers here, but equally we need to balance that with helping end users navigate the landscape. And if we had a landscape full of 50 different storage solutions, let's say as an example, how would we help people understand which is appropriate for them? And are there going to be times when some projects are not, you know... how do we draw a bar to make sure that projects are ones that we want to stand behind and recommend, and suggest that end users might want to use them? Just having a quick look to see who else is here. Do we have... is Saad here? Not yet, but I'm keeping watch. Right. I was thinking of Saad just because I know he's been involved in looking at those particular storage projects. But yeah, I mean, I'm using those as an example; the same principles need to apply across other projects. And we do have examples of multiple projects solving the same kind of problem. You know, we have multiple solutions for runtime, for example, containerd and CRI-O. I think that's a great example where they're both being used really heavily by different sectors of the ecosystem. And that seems good. You know, they're kind of competitive, but they both have strengths. So I feel like that's a good example of healthy competition between two alternatives. But I wouldn't want to see 50 different runtimes, because how would you choose between them? Or maybe there would need to be 50 different use cases. Ah, and as you said that, Saad is in fact coming to join us, so we can circle back to the storage conversation.
Hi Saad. Hello. So we were just sort of laying out the scenario of, you know, we have a general problem of the balance between competing projects: having some healthy competition can be good, but we don't want to have a giant number of equivalent projects that are hard for end users to navigate between. And then, Saad, we mentioned you because I know you've been looking at some projects that have quite a lot of similarities in the storage space. That's our kind of concrete example. Yeah, very timely. So I've been looking at Longhorn. They applied for incubation. I've been doing their due diligence with them, and worked with TAG Storage on doing that due diligence. I raised that question with TAG Storage as well: hey, what happens if we end up with a bunch of different software-defined storage systems, as is going to happen, since OpenEBS is already part of the sandbox and, you know, we already have Rook/Ceph, and there will be others. So what do we do? Their recommendation was kind of waffly, saying, oh, we don't do kingmaking, we should just let the best projects win, that kind of thing. So this is a very good question. So I guess, the best projects, you know, how do we concretely evaluate which of a competitive set of projects are the best, and to what bar? And this is not a question just for Saad, this is a question open to whoever wants to get involved and throw in their thoughts on this. Sorry, I don't understand. Do we need to pick the best project, or do we just need to pick a project that has passed the criteria to advance? Isn't that just it? Well, I think the question is more, you know, at the size we're at, it's okay, people can probably pick between, let's say, two different runtimes. But if there were 50 different runtimes, again, I'm just using runtimes as an example, if we had 50 of them, how would people choose? I don't know that number. I don't know what is a good number. Yeah, I think that's actually, I mean, that obviously is an unrealistic number. But if that were true, I personally wouldn't even think that's a bad thing, because, I mean, ultimately, given there's a very limited attention span for the whole ecosystem and the industry, if there are 50 things or even 10 things going on in a specific topic, that's got to be a very, very interesting topic. So, I mean, if all 10 projects, let's say, pass the criteria for incubation and even graduation, whatever subjective or objective criteria we've set, then I think they should advance, and we'll just do our best to shepherd those projects to help people choose the right project. We've got to do all of that. But ultimately, let's just say the interest wanes and the hype cycle ends, and then some of these projects will no longer be interesting, and then we will archive them. You know, so I don't know, that's almost like the worst-case scenario, but it still doesn't seem that bad to me, because the premise of all of that is that there is something exciting going on. So anyway, that's my thought. Tim? Hi. So I think we talked a little bit about this on another call before, I forget exactly when. So as a problem statement, I would say: somebody's coming in, and they want to know which project of a certain type they should look at, right?
So one example that I have of a community which did this in a good fashion is Cinder in OpenStack, since we're talking about storage with Saad. So that is the example where, you know, they had a matrix and they said, here is the list of features that are available, that are possible, and here is the set of drivers that are available. And, you know, you essentially have a check mark if it supports it and an X mark if it doesn't. So if we come up with a generalized matrix for, for example, runtimes, right, where we say, okay, here are the different ways of looking at it, whether it is features or whether it is capabilities. And then, you know, we maintain it and we essentially get input from the folks maintaining the runtimes, saying, what are the things that you think are important to your runtime that we can put on the matrix, and then we can use it for comparison, right? And then when somebody comes in to evaluate a runtime, they will go look at this matrix and say, okay, this is important for me, that's not important for me. So let me pick one of these two or one of these five, right? So it gives them a chance to look at what are the different things that I should be looking at, and which of these runtimes supports or does not support the things that I'm interested in, and it gives them a starting point. That is basically what I'm looking for: how do we get somebody started? Once you show them, okay, evaluate containerd or evaluate CRI-O because it has the set of things that you need. And then if you don't really like it, see if there's something else that is available and go evaluate that. That was how I was thinking about it. I think that makes a lot of sense. My hesitation on that example is I think it's really great until you scroll to the right and you see, I didn't count them, you know, like 15 that have a tick in the top two columns and then a series of crosses, and I guess as a naive end user, I'd be thinking, where do I start to evaluate these? Maybe the answer to that is we ask the projects to tell us how they differentiate themselves from the 14 other options. Right, of course. So basically we're crowdsourcing this matrix out to the people who are doing the work, and we are not, as a TOC, maintaining the matrix or anything like that. So they work on it together, you know. Similar to, let's take the runtimes, right? If we say that the tag maintains the matrix, then all the runtimes, you know, when they have something new and exciting that they are happy about, they go to the tag and say, hey, I want to add this to the matrix, can we start tracking this stuff? I really like the idea of tags maintaining the matrices like that. Yeah, it's a good idea. Kubernetes CSI is actually a good example of where we're doing something like this. I think there's over 100 CSI drivers now, and the Kubernetes CSI community maintains a table with all the drivers, links to them, and then a little matrix of what their features are. I think that's working well overall. The interesting thing there is that the CNCF doesn't host all those drivers. Some of them are hosted within the Kubernetes project, but the vast majority of them are actually self-hosted by the companies themselves and we just link out to them. So that is a good example. I think overall what it boils down to is what Liz is saying, which is what happens as an end user when I'm trying to make a decision. How do we make end user life easy, basically?
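To make that Cinder-style idea concrete, here is a minimal, purely illustrative sketch of how such a support matrix could be generated from maintainer-supplied capability lists. The project names and feature names below are invented placeholders, not real CNCF projects or agreed criteria, and real data would be crowdsourced from maintainers and curated by the relevant TAG.

```python
# Purely illustrative sketch of a Cinder-style support matrix: each project
# declares which capabilities it supports, and a check/cross grid is rendered
# from that. All project and feature names here are made up.

FEATURES = ["snapshots", "encryption", "volume expansion", "topology awareness"]

PROJECTS = {
    "project-a": {"snapshots", "volume expansion"},
    "project-b": {"snapshots", "encryption", "topology awareness"},
}

def render_matrix(projects, features):
    """Render a plain-text feature/project grid with yes/no cells."""
    header = "feature".ljust(22) + "".join(name.ljust(12) for name in projects)
    rows = [header]
    for feature in features:
        cells = "".join(
            ("yes" if feature in caps else "no").ljust(12)
            for caps in projects.values()
        )
        rows.append(feature.ljust(22) + cells)
    return "\n".join(rows)

print(render_matrix(PROJECTS, FEATURES))
```

The point of the sketch is just that the matrix itself can be mechanical; the judgment calls (which features are worth tracking, and who vouches for the data) stay with the maintainers and the TAG, as discussed above.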
So if we have some guidance on where to start and how to pick, I think that seems like a good solution. Yeah, I think from the end user point of view, something like what Dims said makes sense, like having some kind of matrixy thing and having the project maintainers say why they think their project is special, like, I don't know, we're the fastest or we're the most available or whatever, have some metrics. The thing that I think is interesting, given this path we're going towards having the tags maintain something like this, is in the scope of an incubation project, when do we go from this thing doesn't exist to it does? When you have one project doing something, you don't have it, but then the sixth project comes in and we say, you pass all the incubation requirements, but we're not letting you become an incubating project until this matrix is created. How do we get to the point that this thing exists once we've crossed whatever that number is, without penalizing the nth project in its process towards getting incubated? I just posted in chat one of the things we've been trying to drive out of the contributor strategy tag, which is getting people to better define their charters and what functionality is in scope, what's out of scope, and help people differentiate this, because the problem we're trying to solve is that we're in a complex ecosystem with lots of overlapping functionality. The better we can get people to document this in the READMEs, I think it doesn't necessarily solve the matrix problem, but I think it gets us a step in that direction. Josh Berkus has been driving trying to get this included in the project templates as well, so hopefully we'll get this in better shape. We do have the document that I linked into the chat which talks about how projects can better document some of this, so we can encourage people to use some of that. One thing that's just crossed my mind is sometimes less is more. A simple, straightforward project that does one thing really well may be better for some applications than a project with a whole ton of extra bells and whistles, so I wonder if we need to somehow express that through the matrices or accompanying text. I think it was someone mentioning things like high performance; that might be the thing, people might trade off fewer features for high performance of one particular feature, and it would be really nice to make sure people are aware of that and not just saying, I'm looking for a matrix with all the ticks. That's the sort of thing I was hoping to get out of having a blurb or something from the project. They might say, we check only one box, but we are the best at that box, so if that's what you want, use our project, but if you want something else, obviously use something different. I feel like that's the kind of thing you can only get if you ask the project, give me a two-sentence elevator pitch for why you're different, as opposed to just the matrix. On the same page, you can have the blurbs at the bottom and the link to the actual README, so it's two clicks instead of one click. We don't want a massive page that's hard to maintain too. There might be a concrete example. It's slightly off the beaten path, but there are various sites, usually called AlternativeTo.net or equivalents, and they're pretty much for, oh, some tool they like isn't being produced anymore, or now it's commercial, or whatever reason. I kind of like the examples they have.
They often have a sort of matrix, but it varies from one category of software, or even just two specific apps, one to another. It might just be a nice visual example of something that overlaps with the matrix that's already been described. It comes down to, here are the key things you like, and maybe one will have three key features and the other just has two. We can't perfectly map them one to one, but it spells out, this one has more features and it excels in these three categories users are interested in, and this one only meets one or two, but does them exceptionally well. So I found that to be a particularly interesting example. Yeah, but I would like to try and manage it in terms of the work to be done. I think creating a matrix for everything would take a lot of time, so I like the idea of the blurb. Yeah, I think it will vary between different projects. In the case of CRI-O and containerd, you clearly have two projects that do something very similar, but for example, in TAG Runtime, we're looking at the edge computing projects, and they tackle the problem in different ways. They're essentially solving a similar problem in a similar space, but they're solving it in different ways, so I think it might be good for the project to provide a blurb of why those particular projects are better, or how they can be used as opposed to a different project for a certain kind of application. And we do have questions about how does your project compare to other projects in the cloud native ecosystem, or something along those lines, I think in both the sandbox and incubation documentation. So we have that in the evaluation, but we don't really maintain it or make it available to end users in a consumable way. Is there specific intent to provide all the comparisons up front for consumers, or do we back away from that a bit and just give, here's the blurb, and leave it to end users to do their research and compare them and arrive at their own conclusions? It does kind of remove us from the position of appearing to endorse a particular one versus another. I guess there is already a signal of endorsement by having a project in, particularly in incubation or graduation. So we are providing some kind of endorsement, but I feel like right now there's a sufficiently small amount of choice that, although it's a huge landscape and pretty hard to navigate, it's not completely insurmountable. But I worry that we will get to a point, we've got a lot of projects in the sandbox, and some of those experiments we expect to fail, but quite a lot of them probably will turn into really great projects, and I just worry that we'll need to help end users. While we're not kingmaking, we are supposed to be helping people build really good cloud native stacks and finding the best projects from which to do that. So I guess I do feel that we have some responsibility to help people understand what the strengths of different projects are. Another question to me is whether that should be part of incubation or part of graduation. When a project gets into incubation, the expectation, it's not a guarantee, but the expectation is that it is on a path to graduation. And I feel like when I want to assess projects for incubation, one of the things I want to understand is how it compares to its alternatives. So I personally would really like to see this at incubation level. That sounds good. Yeah. Bob, I agree. Thank you.
Would it be premature to throw out a potential technical idea that might fit the bill, or should I save it for a separate meeting? No, go ahead, please. I was thinking of two platforms that sort of come to mind. One was LinkedIn and one was Degreed, and I remember with Degreed, employers often have their employees add a ton of skills, and I believe on some similar platforms you can even add a number. So you have just a massive dictionary of thousands of keywords and skills that people want to associate with themselves, data mining, cloud computing, etc. And then they can add, like, a score out of 10 to it. I'm wondering if maybe it's sort of a community-driven, here are the key features that OPA versus some other policy management engine has that are relevant to that sort of domain, and then they can maybe add some sort of, maybe not numbers, maybe a scale: low, medium, high, or good, great, best, or something. I wonder if it's sort of a community- or maybe maintainer-driven set of features that uses keywords, and they can throw the rankings in there. I'm wondering if an engine like that might help save us the work of hand-crafting all these matrices and comparing them, but we'd still have somewhat similar criteria to compare different projects with, just a bunch of key-value pairs. Yeah, I guess I'd slightly worry that we'd end up with a popularity contest rather than a real assessment of capabilities, but I would love it if we could crowdsource it in a meaningful way. Adam, you've got your hand up. Yeah, sorry, this might be a bit of a naive question, but I'm not sure where the notion of the responsibility of finding the best comes from here. It's been mentioned several times that the responsibility is to find the best open source projects or to pick winners, and that doesn't seem to be mentioned in the charter for the CNCF in any kind of way, and it doesn't seem like it's actually the responsibility of this group. It's more the responsibility of the users to make the determination of what's best for them, and then through that selection of what's best for them, there might be some aggregate sense of what's the most popular or most useful or most broadly applicable. But the judgment of saying this is best seems a little bit outside of the scope of what we're supposed to be thinking about. We're supposed to evaluate whether something is viable, technically capable, useful to the community, those sorts of things, but whether it's better than something else seems a little bit the opposite of what we should be doing if we're trying to encourage as much adoption of cloud native technology as possible. So I was just having a quick look to find where, you know, where in the charter, and you're absolutely right, the charter doesn't really talk about qualitative assessment. But I know, particularly from speaking with Alexis early on, that his kind of vision for what the TOC was there to do definitely involved applying judgment. You know, we can't just have a tick-box of criteria; it is supposed to be helping assess not just the kind of, is this project healthy, although that's a really significant part of it, but also, is it a good solution to a problem? And I think it's very clear that being in the CNCF is an endorsement, and that endorsement suggests a level of approval, which needs to be balanced with the no-kingmakers principle. I completely agree.
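Going back to the keyword-and-rating idea floated a moment ago, here is a hedged, purely illustrative sketch of what such a maintainer-declared profile might look like and how an end user could filter on it. The project names, keywords, and ratings are all invented for illustration only; nothing here reflects real projects or any agreed scoring scheme.

```python
# Sketch of the keyword-plus-rating idea: maintainers (or the community) tag a
# project with domain keywords and a coarse rating, and an end user filters by
# the keywords they care about. All names and ratings below are hypothetical.

from dataclasses import dataclass

@dataclass
class ProjectProfile:
    name: str
    ratings: dict  # keyword -> coarse self-declared rating, e.g. "low"/"medium"/"high"

PROFILES = [
    ProjectProfile("policy-engine-x", {"policy-as-code": "high", "admission control": "medium"}),
    ProjectProfile("policy-engine-y", {"policy-as-code": "medium", "audit logging": "high"}),
]

def shortlist(profiles, wanted_keywords):
    """Return project names that declare any of the keywords the user cares about."""
    wanted = set(wanted_keywords)
    return [p.name for p in profiles if wanted & set(p.ratings)]

print(shortlist(PROFILES, ["admission control"]))  # -> ['policy-engine-x']
```

As the discussion notes, self-declared ratings like these risk turning into a popularity contest, so they would only ever be a starting point for an end user's own evaluation.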
I don't think it's an easy line to draw, but I think what we're definitely not trying to do is accept every project that considers itself to be cloud native. I think we are looking for a quality bar, and that immediately says there's got to be some judgment about quality. But it seems like the criteria we have already established for, you know, sandbox versus incubating versus graduating is itself a collection of hurdles that provides that validation and judgment about the viability of the project inside the guidelines that we have. So, you know, it seems to me like the door should be really open at sandbox, because the movement to the next level is actually qualitatively assessed, with a process around it. So where does the judgment actually fit inside the process, if we've already got a process for evaluating viability, usefulness and things like that for each of these graduating stages? I've just seen Dawn's note. I don't know, Dawn, if you're still on, but did you want to just jump in and mention the health metrics point before you need to drop? Is Dawn still here? I think we lost her already. Ah, okay. Sorry, Adam, I'd just seen her comment. And I don't think this conversation needs to be about changing the criteria for sandbox and the kind of bottom of the funnel, because we've talked about that quite a lot, and I think the bar for entry there is pretty low in some respects. But incubation is where we really start seeing people taking notice of what the CNCF is saying. You know, we see sandbox as experimental, and we're not making any guarantees about that. But we are telling people that, you know, an incubating project is something that maybe early adopters want to consider for production use. And I think that's something we need to take responsibility for when we say that. So I think the question for me is not, are we changing the bar? It's more about, how do we help? Is it useful to end users if there are, let's say, three different projects that essentially do the same thing? Maybe it is. But if it is, how do we express why that's useful? So it's not just about quality; it's about communicating the value of different projects. Yeah, I mean, like I said, this is a pretty naive statement from my perspective. I've only been involved with you guys, following along, for the last year or so, so I don't have all the backstory for everything. But, you know, my general thinking about this and my experience in other open source communities is the marketplace of ideas is the marketplace of ideas. Like, it is okay to have 15 different implementations of exactly the same thing, because at a certain point, the users will make the choice that identifies that this is the thing that works the best for them, regardless of all the other pieces. And we can give them a framework to help make that decision. But ultimately, it is the end user's choice about whether those things are viable. And there's plenty of situations where a very, very reasonable open source project is still interesting and useful for a particular subsection of the industry, despite it being completely replicated in another faction somewhere else. And that's completely fine in the general sense of open source land.
And in particular, it's completely fine in the general sense of getting as many people as possible to use cloud native work. Like, if that thing works really great in that industry, then you don't need to try to encourage people to migrate away from it to something that's equivalent or more popular somewhere else. So that's where, you know, letting the users make the determination about what is valuable, falling back to that sort of sense of, this is the way it will happen in the end anyway. So we can put our thumb on the scale, so to speak, or we can give them tools to help them make those determinations. But making those determinations ourselves feels like it's the wrong way around from how adoption or choice is going to happen, from my perspective. I don't think we're talking about making a determination as much as giving guidance, saying, if you're evaluating projects in this area, then look at these aspects where things are the same or things are different, so you can make up your own mind. That's basically what I was looking for, rather than the popularity kind of thing, which can be gamed. I think that's exactly right. It's trying to find a way of getting the real qualities of a project expressed in a way that consumers can understand, so they can see the differences between them. And some of those differences might well come from the experiences of end users. I was just having a quick look at some of the metrics that Dawn had pointed out, things like responsiveness to issues. That's a pretty interesting metric for what it's going to be like for an end user if they have problems. How responsive is the project going to be to those problems? I can see that being a really useful thing to... I don't think that's quite what we want to have in the matrix, if we're still going on the matrix idea, but having this data available is going to help end users assess which projects they want to experiment with. So plus one to that, Liz. One idea we still seem to be coming back to is getting the tags to do this. Each tag should have a page where they have some sort of information about the projects that fall under them, maybe in consultation with the projects that they are responsible for. And it could be a matrix, it might not be a matrix. It could be a set of blurbs with pointers back to the CNCF landscape or the READMEs of the different projects, or whatever they feel like. But from our point of view, we should say, okay, hey tags, go do this. Have a page where a CNCF end user can come and look and get a sense of what's going on here. And the TOC liaisons for each tag can help the tag land on a solution, whether it's a matrix or something else that works for that set of projects. Does anybody think this is a bad idea, to ask tags to take on the task of documenting how different incubating projects differentiate themselves from each other? I think from the tag perspective, they can ask the project maintainers to come up with that information. So some of the work can be just lining up all that information and making sense out of it. But most of the information will come from the project maintainers or the projects themselves. I completely agree.
I think that the tags can play a role in helping standardize that across different projects and making sure it's not all just marketing fluff from the different projects. Right. And basically be the referee when people say, hey, this thing is unique in mine, and collapse it into one line item rather than two line items. Something like that. Right. Riccardo. Correct. Correct. The one caution I'd have is, you have things like features: either you have the feature or you don't. But if you get into these gray areas of, oh, how responsive am I, who is highest performing, then you're going to get into these battles, and the challenge you're going to have is where you're going to get data that everyone can agree on as to what the right metric is for a given project. The feature at least seems relatively black and white: you have it or you don't. And if the features are what end users are looking for, then it's clear. If you get into the performance element, the responsiveness element, et cetera, I think you're going to be in a trouble spot. Then you're going to be caught in an endless debate of, no, we are the highest performing. Well, how do you prove that? Because the other project says they are too, right? Agree. So let's keep the community aspects out of this and leave it technical. Probably that's the right way to do it. I think that's where the tags can come into play, in a kind of policing role of, okay, if you're going to make this claim about performance, show me the proof. And the tags can, they know that area. They can help assess whether those claims are true or not true, or maybe they tone down the language that the project would use. And maybe it becomes less, we're the most high performance, and more, here is a bit of documentation about the performance of this project measured in a certain way, which might be informative, but may not be the end of the story. Yeah, I mean, it could be the highest performance for a particular case, but it may not be the highest performance for everything. So I think the tags can actually point out that aspect, right? I'm wary of marketing material and claims about projects, so I think I agree with that. So the tags might come in and say, the line item would be, supports high performance mode, and then the link for each of the projects would be to how they do high performance and how they measure high performance. Should we be measuring at all? And I kind of ask that question in terms of, I don't think we should be comparing subjective stuff. I think we should be comparing functionality, or other metrics like number of end users or biggest deployed project or whatever it is. But should we be measuring things which are completely subjective? You could put 10 engineers in a room and spend three months trying to figure out the best way of measuring performance and still not come to a conclusion. How are we going to put up performance metrics for different projects? I think this is exactly where the tags need to be using judgment and saying, this claim isn't a reasonable thing to say about a project. I think it would be totally reasonable for a project, in its differentiation, to say, here is some data on not just the performance measurements, but how we came up with them and what it was we were measuring, as a piece of information. And maybe they make a claim that says, we believe this makes us really high performance in a certain scenario.
And maybe the tag can look at that and say, yeah, that's a reasonable claim, or no, the data doesn't back up that claim, or it's too subjective, we're not prepared to publish that in our assessment of what differentiates them. I think it should really be the tag making the decision of, this is how we assess the way different projects differentiate from each other. There is also one more yardstick. The yardstick is how much of this information is required by an end user when they are just starting out. So the tag might say, this is not relevant information for this specific thing that we are putting together. Yeah, but even then, right, how do you judge what an end user needs when putting something together? Like, for a small end user, it might be ease of use, and for a large end user, it might be the ability to scale. Yeah, they can do two clicks or three clicks to get to the point where they have that information. Alex, they can go onto the page and then dig deeper into what the benchmarks are and things like that. So I don't think we need to overthink this too much. It doesn't need to be the full and complete assessment of every possible quality of every project. It's more, I think, let's take this storage example as a concrete example. If you're looking at Longhorn and OpenEBS and, I've forgotten what the third one is... ChubaoFS, I think, is the one I'm thinking of. But if you're looking at those different things, what is it that they do that means it's viable to have four different projects? Why do we believe that's the case? And why would an end user be interested in one or two of those projects, but not the other two or three? So let's take that as a simple thing then. So Rook is an operator, it's not a storage product. ChubaoFS is a distributed file system, and Longhorn is a block store. So they are fundamentally completely different functions. The only one that has some overlap, which we're about to consider, is OpenEBS, which is a block store too, but obviously has some differences. So out of those four things, given that these things are either in sandbox or incubation, or graduated in the case of Rook, we kind of should know that one is just an operator, another is just a file system, and another is a block store, right? Yeah, that's exactly what we need to bring up in this comparison page. Like, when you look at the comparison page, you should be able to see that these three are different from the other two, right? Exactly. It's that level of detail we don't really have right now, even that level of not-very-much detail. Yeah, I guess for me, the challenge is going to be when we're comparing things which are a little bit more similar or have similar functionalities. I think that's going to be problematic, because we had a discussion about this on our own tag call a couple of weeks back. One of the things that we were all deeply uncomfortable with is becoming kingmakers in some of this stuff. Honestly, I'm not super comfortable being a judge of somebody else's marketing or somebody else's performance claims or something else, because when it's functionality, or it does this thing or it does that thing, that's easy to talk about. But when you're comparing subjective stuff, it's much, much harder, and I'm not entirely sure we could do this without controversy and a lot of upset. Yeah, I agree, Alex.
So what we would say on the landing page is, has performance benchmarks, and the links would be to the respective projects' benchmark pages. Right, but are we taking it on ourselves to do an apples-for-apples performance comparison on some standardized benchmark? No, what we're saying is, here is where you go to look at what has been published by a specific project, rather than, here is how you compare two different things using the same benchmark. Right, but just to make the point here, we've spent quite a bit of time writing a performance white paper, for example, and we kind of highlight how hard it is to do an apples-for-apples comparison, because there are so many things to consider. And we actually conclude in the document that you should absolutely always ignore vendor benchmarks and you should run your own tests in your own environment, because that's the only way to measure anything worthwhile. And so, no, I actually don't feel comfortable pointing to benchmarks published by the vendors. I think that's just bogus. Well, we would throw in the link to this white paper too, saying, read this white paper first and then here you go. Look, I don't think we should be asking the tags to publish anything they're not comfortable with. I mean, in some cases, there may be a measure that makes sense, and in other cases, maybe there just isn't, or maybe it just comes down to, like I said, we're not trying to overthink this. It doesn't have to be a full encapsulation of everything. It's not replacing the end user doing any experimentation or testing themselves. It's more just saying, how do we help people? Like, if we've got two block storage projects now, what is it that would make some people lean towards one and other people lean towards another? And it might be, I don't really know the storage market very well, but I can go back to the runtime example of saying, you know, if you're choosing between CRI-O and containerd, a big part of that choice is just going to be the ecosystem you're in. If you're in the Red Hat ecosystem, you're probably leaning towards CRI-O. I think that's the reality of the reasons why people lean one way or the other. It's not a secret. We can talk about that. Yeah, that's fair. Just on a principle level, I would be uncomfortable getting to a position where we're recommending one product over another based on subjective matters. I think things like ecosystem, or, you know, objective things like scale or security or functionality, those are fine because they're factual, they're objective. But if we're recommending one thing over another based on subjective stuff, I think that's where we open the proverbial can of worms. Yeah, and I absolutely don't want to ask anybody to do something they're not comfortable with. I think that's part of the tag's ownership of that assessment: this is what we're comfortable with. And it may be extremely factual. And it may even say, honestly, we don't have any reason to prefer one of these projects over the other, but they're both, you know, I don't know, this one's popular in Asia and this one's popular in the Americas or something. I don't know, whatever. Yeah, I think it's not in the spirit of making a recommendation. It's more in the spirit of helping end users navigate the ecosystem and the landscape of projects.
Like, oh, look at this, this information is here and there, and we can help you see what information is available, and you can make your determinations. I think I'd go further and say that it should never be making a recommendation. It's not just not about that; it literally isn't making a recommendation. And as an end user, I would be okay with something as simple as a list of projects, and each one of them having, like, a two-sentence place where their marketing department wrote something. And if five projects each wrote, we are the fastest X in the world, then as an end user, I just know I have to do all my own research, because they're all saying the same thing. But I think with a lot of these projects, they will have either slightly different ways of saying it, like, we use the least CPU versus, I don't know, we are the fastest over the wire or something, or they will just say completely different things, like, we have the simplest possible runtime. And that might not be true, but at least that tells me immediately what that project focuses on. And as an end user, it helps me simplify it a little bit. But to be honest, I wouldn't trust any of it anyway. I would want to do my own research, to the point that Alex was making. And even if the tag came up and said, like, I don't know, MagicFS is the fastest file system for your use case, I would say, I'm not sure that Spotify is doing the same thing as TAG Storage, so I'll still test it. But I'd love someplace that doesn't just list all the storage projects; instead, it says something about them, even if it's complete marketing. Do you know, I would love for us to do a better job of publishing the information that we gather during the due diligence for incubation and the annual reviews and that sort of thing, because that contains really valuable information about the project's architecture, the team structures, the roadmaps. Often it has interviews and whatever with end users that describe their own actual use cases. We should do a better job of setting up a library of that information. I mean, that is factual, authorised, reviewed and voted on, and not subjective at all. Maybe just to second what Dave was saying, to take that example, if I end up as an end user on a page with object storage, shared file systems, block storage, like, columns and a bunch of products, and I'm looking for object storage: if I see one product that only offers object storage and all the others offering all three, I might be tempted, okay, maybe in the future I'll need the others, so I'll pick one of those. But maybe the one that has only object storage is the one that I need, because it's more performant or something. So those metrics are really important for end users as well. So maybe having these two lines where the project explains why they do things and what they focus on is also important. Otherwise, the matrix can be a bit misleading for end users as well. Yep. Some comments going in about public visibility of due diligence documents. I think it's a great point. A lot of work goes into those documents. They are public; people can look at them, but they don't, because they're not easy to find and it's not easy to compare them. And I think maybe having some way of pulling the salient pieces into one page where you can say, yeah, here's the sort of two-sentence description, and here's the kind of really high-level feature matrix if that's appropriate, or, this is block storage.
However we want to categorize it, I think the tags would be in a really good position to do that. And then it could just link to the due diligence documents. That would be great. Okay, this makes slightly more sense, because I was putting plenty of comments in the chat going, wait, we do publish this, but we don't... and what I'm hearing is that it's not immediately available for people to understand: what is this gigantic thing? Why is it here? Yeah, like maybe what Riccardo is saying, a GitHub page with links to all of them. Yeah, whether we have them all in one place or per tag. At the moment I'm picturing, like, a per-tag overview of the projects, and that could link into those due diligence documents. Each tag tends to list projects under evaluation, or projects which are done, in their repos and in their READMEs. So adding links to the documents and any other collateral would be straightforward enough. Would it be better to make this easily web-searchable rather than on GitHub? It might be a barrier to end users. I mean, that's fair. We, you know, cncf.io/projects lists them all. There could be a link to the DD documents added there. The tricky part about that is that data is actually coming from the landscape. So if we want to be able to somehow put this in, and I recognize we're getting into procedural pieces here rather than the substantive, oh yeah, we should do this, so we might have to take this one offline. I think the principle here is a good one, and, you know, whether or not it's initially on GitHub and then at some future point gets translated into somewhere on the web, maybe we take it step by step like that. The action item that I am hearing out of this is to make the due diligence documents more visible, either by working with the tags or by being able to link to them on cncf.io. Yes. And I think having the tags, for each area, I don't know whether this duplicates something that's already on the landscape, but I think we nevertheless need something somewhere that says, here are the projects that are incubating and graduated, and here are their key characteristics to help you understand, this is block storage, this is, you know... And we don't necessarily need that for every project. It's more, as soon as we start having similar projects that people get confused by, let's try and help people navigate that. Okay, so what I hear in that is, the first part is already available on the landscape, and the second, descriptive part is where we should be relying on the projects to put in more detail rather than just saying, I am storage. Yeah. I mean, the first part, you're saying it's available on the landscape, but they're very broad buckets there. Well, the part where we say which ones are incubating, which ones are graduating, which ones are sandbox, that's already kind of done. Yeah, no, I meant more in the landscape where it says things like, this is storage. Yeah, I think this is somewhere where the tags and the TOC can help. I've found when talking to projects, especially, for example, when they're doing things like submitting a sandbox proposal or sandbox form, that putting two or three sentences together that actually describe the project in a way that's not either technically obtuse or marketing-overloaded is really, really important. So to be able to say, look, this project does this, in this way, for this sort of use case, is really valuable.
And sometimes the project needs guidance on that, because we've had a fair few instances where, with projects making an application to sandbox, for example, the TOC actually got the complete wrong end of the stick based on the description the project supplied. So actually being able to help them with this, I think, is super valuable. I'm wondering whether, on the CNCF site, where we have places with the project logos, we should be crafting those with the projects, maybe, you know, the projects come up with it and the tags help review, like, what's the two-sentence description of each project. So I feel like we've got some good ideas here. I think we've slightly moved away from the matrix suggestion, but I still feel like, does it make sense? And I think this is a question really for Saad and Alex, and I can't remember who else is liaison for storage, but maybe to look at storage as an example, sorry, we keep using storage as the example, and kind of flesh out, if we did a feature comparison chart, whether that would look right or not. And does it make more sense to have it as, like, a couple of sentences for each? Yeah, got it. I think the overall guidance seems to be fairly clear, which is, let's let the best projects rise up. We're not trying to play kingmakers here. That's the ultimate goal. At the same time, we're trying to balance that with, let's make sure that end users have a clear idea of what they should use. That's where it gets a little bit tricky, especially if we get into head-to-head comparisons and, you know, subjective things. And we'll try to use, as much as possible, the data that's collected around due diligence and things like that, and surface that to end users to help them make their decisions. And overall, let the tags make the judgment call about what are the best or most productive ways to help end users navigate the landscape. Does that sound right? Awesome. Thank you, Saad. That sounds like a really good summary. Brilliant. I think that was a really useful discussion. I think we've hit the hour on the head. So thank you so much, everyone, and see you again soon. Bye, everyone.