Good morning. Greetings, everyone. Hey, excellent. Hello. We'll give everyone a few more minutes to come on in. 42 people on the line and we're two minutes in. 43 now. So Liz, whenever you're ready, we can kick things off. Usual antitrust notice applies. You all made it to the meeting. Hello. And I take it you'll be filling that in, Amy, on the agenda today. So we have Priyanka to say hello. We'll talk a little bit about the Sandbox process. We will review all the things, I think all the things, that the TOC needs to get onto our to-do lists. And then we have Cheryl talking about the new technology radar, which is very exciting. Very organized today with this agenda, aren't we? It's awesome. Priyanka, say hello.

Hello. Hi, everybody. Thank you so much for having me here today, Liz, and all of you. I just wanted to join in and say hi to everyone. Some of you know me; some of you don't. I am Priyanka, and I have been part of the CNCF ecosystem for a while. In the early days of the cloud-native community, I joined with the OpenTracing project, which some of you may remember, and which has since merged with OpenCensus to become OpenTelemetry. Those were the early days, because we were the third project and there was a lot of excitement. I remember meeting Chris at a co-working space in San Francisco and being so impressed by this cool idea. I stayed involved as a contributor and as someone who was educating the ecosystem about what tracing is and what the word observability means, the very basics at that time, and I saw it grow through that first wave, when we were effectively explaining what distributed systems are, to the second stage, which was when I joined GitLab as director of cloud-native alliances. At that time, I was elected to the governing board of the CNCF and got to see how the magic is made by the execs, Dan and Chris. And then eventually I had this opportunity to join the foundation as its new GM. You may wonder what a GM is when Dan was ED. It's the same thing; we're just changing titles in the LF to have more consistency across foundations. I'm brand new in this role, two weeks in, two weeks and one day. My focus is very much listening, learning, and understanding. That might seem like a catchphrase of sorts, and perhaps it is, but for the right reason: it is a really important thing to do when you start anywhere, especially in a community and ecosystem as impactful as all of you are. So my virtual door is very open. Anyone who's interested, please ping me on Slack or email, whatever you want to do, to book a time where we can discuss your experience in cloud native and how I can help in any way. That's what I am doing mainly right now. Interestingly, while doing that, there's always a lot of work to be done too; you can't wait for the listening tour to be over to start doing, so that's also occupying my days quite significantly. Overall, my objective with the foundation is to keep building upon the awesome experience that I've had here and that I know a lot of other folks have had. We want to do that by growing and nurturing the end user community, who all of you want to engage and work with for your projects and your technologies. We want to support the ecosystem in general during this very unique time. And finally, I think there is an opportunity for us to go into a second wave with deeper developer engagement. You folks are the people who approve and manage the projects that enter the CNCF.
Liz has done an amazing job as chair, in my opinion, and I think everyone works so hard here. With all the awesome technology coming in, we need to be able to support the wider ecosystem in understanding how they can leverage it. I know that fairly well, because in the OpenTracing days I was focused on just one project, right? And it takes a significant amount of work, in my opinion, to educate the larger developer or DevOps community on the benefits, the reasons why people should think about their tech stack in a certain way. So I think there's a lot we can do there, and I'm very excited to partner with you all and continue this journey. Thank you. I would be happy to take questions, if Liz allows.

Of course, absolutely. Fire away with your questions. No one wants to hang with me, no one wants to talk to me. Well, I... Priyanka, welcome to the Cloud Native Computing Foundation, and we wish you the very best in your role in terms of expanding the reach and the objectives of this community. Thank you, I appreciate it. Fantastic. I'm so pleased that the person who's taken this role is someone who's come from within the community and is familiar with it: she's worked on projects before, she's worked as a vendor before, and I think she's had a lot of experience working within this ecosystem. So I think having someone leading this community who's come from this community is terrific, and I'm excited. Thank you, Liz.

All right. Any last questions? Please pipe up. Otherwise, we'll move on. Okay. So was it Sandbox next? It is indeed. Great. Just a quick note: we've agreed that we're going to trial the new Sandbox process. Amy has pulled together information from all of the pre-existing Sandbox applications into a spreadsheet, which should be accessible to everybody. The plan is that the TOC will get together next week and review those. We will use that experiment to understand how many projects we can get through in one meeting, and use that to figure out the cadence we'll need for these reviews. If that process works; it's a trial, we'll see. There have been some questions from some SIG chairs about whether we still wanted any kind of ongoing reviews and presentations to happen. At this point, assuming the new process is successful, they won't be required, but we do encourage this kind of interaction between SIGs and projects anyway. It will be really worthwhile for projects to familiarize the SIGs with what they're doing and to build relationships. So don't feel that just because it isn't a requirement for the assessment there's no purpose to it; the purpose is really to get more people understanding what projects are out there. Any questions about that Sandbox process? Great. And Katie has posted the link to the spreadsheet for anybody who doesn't already have the slides.

Okay, wonderful. I think we've got a couple of slides to remind TOC folks that we have work to do. There are four votes outstanding, and I think some of them are pretty close to passing. So if you're a TOC member on the call, please go and figure out what your vote is going to be on these; that would be really appreciated. Please go vote. We also have some annual reviews for Sandbox projects. These still need three sponsors to basically say, yes, I support the ongoing Sandbox status of this project.
So again, if you're a TOC person on this call, it doesn't require all of us to look at all of them, but if everybody goes and looks at one or two of them, we'll be fine. Okay. And then due diligence. I know some of these projects are going through SIG reviews, but I was a bit struck today that we've actually got quite a lot of incubation applications in flight at the moment, and we're going to need TOC members to lead the due diligence for all of these. So if a TOC member is particularly keen to lead the DD for any of these projects, either shout now or add yourself onto the PR, or mail, or Slack; it would be great to start identifying who's going to do what, because I can see there's a chunk of work coming up, folks. So let's try and line up who can do what.

Hey, Liz, just a quick one on Profica. The project had talked to the SIG and done a presentation before submitting their proposal, so we already have a SIG presentation and the recording, and the due diligence doc they prepared is already pretty comprehensive as a result. So I'm happy to work with whoever on the TOC is going to complete that due diligence, but that one in particular is fairly advanced now. Fantastic, that's really good news.

And just from a process standpoint, Liz, that's kind of how we've progressed things through the SIG, helping prepare those items for the TOC. Is that the standard process for each of the SIGs? I don't know. I was looking at this this morning; the process we currently have says we should get the relevant SIG to do their review and then a TOC member drives due diligence. I have noticed that some of the projects, quite rightly, are writing their own due diligence document to help kickstart that process, and I think that's a completely valid thing to do. I am slightly concerned that we're going to end up with projects that have been through, like, three SIG reviews because they're spread across different SIGs. A lot of work goes into that, and then if we haven't identified who the TOC person is, they might have other questions, and it might feel like we're going back to the beginning again. So I'm hoping we can get these due diligence efforts happening a bit more in parallel. I'm also thinking that the TOC folks can work with the SIGs and ask for help: say, please could you dig into this particular area, I have questions, either directly with the project or by asking the SIG for their opinion as well.

Liz, I can see some folks volunteering for a few projects, so that's wonderful. Thank you, Michelle. Thank you, Justin. Thank you, Katie. I think that's a really good idea to do some of these together. There's one more project that filed for incubation; they just put it in last night, so it didn't actually get onto the slide. When you put it in at, like, 12:30 my time, that might not happen. And there's another note in here: Keycloak has changed from coming in as Sandbox to incubation, and we're waiting on the PR for that. Okay. All right, so lots of work ahead of us. I'm also going to say, for the projects, if anyone is representing those projects on the call: we'll do our best, and you can see the workload we have here.
All right, any other questions about the project side of things before we move on to the technology radar? And I believe we have Cheryl on the line as well. Hey. Awesome. Thank you.

So, for those who haven't met me before, I'm Cheryl. I'm the director of ecosystem at CNCF, and I focus primarily on end users. I'm very excited to announce this new initiative, which comes from the CNCF end user community. It really came about because of some discussions I had within the end user community and with Liz, Elena, Katie and Jeff, the TOC end user reps. The thinking was: how do we make sure we capture the actual reality, the real-world usage of cloud native, and how can we use that to drive the CNCF technical strategy? For instance, if projects sit in Sandbox for a year, we'd have a process where we could say: these are things we're actively looking for, actively interested in. So those are some of the driving motivations behind this. It will be presented in the future once a quarter by one of Jeff, Katie, Elena, and the future TOC end user reps; at least, that's the goal. And I did want to note that this includes proprietary and non-CNCF projects. I think that's very important, because although we are obviously all working in open source, and the whole existence of the CNCF is to support open source, we do need to recognize that end users use more than just the CNCF projects. Next slide, please.

Who is in the end user community? This is a group of about 145 companies. They range from tiny startups building greenfield applications to huge household names; some have lots of legacy and enterprise systems, and some have to deal with regulatory compliance. So there's a really wide diversity of companies within this end user community. Most of what the end user community does is closed: it's not publicly recorded or shared. The reason is that these companies often don't have the legal or PR approval to say what they're doing publicly. They can talk about it within a closed group, but not publicly. Over the last couple of years, I've heard a lot of really intense, amazing discussions among these companies, and I wanted to share some of those discussions, in an aggregated and anonymized way, with the general public and with the end users and vendors who are looking at cloud native. Next slide, please. Oh, and do feel free to put questions in the chat as we go along if there's anything you want to ask.

So, a technology radar is an opinionated guide to a set of emerging technologies, and I should have added here: at a specific point in time. This format originated from a consultancy called ThoughtWorks, and this is what their latest radar looks like; it's from this year. They publish it once a year. In case you haven't seen one before, the way to read the radar is that the smallest circle, closest to the middle, is Adopt. These are projects they consider ready to adopt: widely used and tested, stable and useful across a range of use cases. Going further out, you have Trial, meaning: we've tried this and think it's been really successful, but maybe something was lacking, maybe it wasn't appropriate for all use cases, but you should definitely try it. Then Assess, meaning: we think this is promising.
The outermost ring is Hold, meaning: we think there are better ways to do this, so we no longer recommend these projects. And you can see that the ThoughtWorks radar is divided into quadrants, so it's not just technologies; it includes techniques, platforms, tools and languages. As an aside, I would love to know what this number 49 in Hold is. What is "node overload" in platforms? They recommend holding it, but I'm not sure what it means. Anyway, let's keep going. So this is a clearer definition of what these four levels mean: again, Adopt, Trial, Assess and Hold, from most mature and widely used to least. Okay, let's go to the next one.

So I wanted to take this format and tweak it a little to make it useful for the broader community, and I'm glad to see that Josh has also never heard of it; it's new to me too. The most important change is the first one: this is community driven. It's not the opinion of a single company; it's the opinion across those 145 companies, the end user community, and it should really represent what those end users are actually seeing and doing right now. We also wanted to simplify it a little and focus on future adoption, so we keep just Assess, Trial and Adopt. And on the last point: the ThoughtWorks radar is 100 items published once a year, but at least within cloud native, I think we can all agree once a year is kind of slow for our community. So instead, we're going to do 10 to 20 items once per quarter, on a specific use case, following good release practices: small and frequent. The first use case I focused on was continuous delivery.

Over the next few slides, I'm going to talk about the methodology behind building the tech radar. I won't be showing this in future ones, but I think it's important to understand how it was created. I created this Google spreadsheet. It's not public, but I shared it within the end user community. I listed a bunch of CD tools down the left and asked companies to add a column for themselves and choose, in their opinion and experience, what their company suggests for each of these tools: Adopt, Trial, Assess or Hold. And if there were any projects that weren't on the list, they could add them down the left-hand side. So this is not anonymized within the end user community; the end users among themselves know who's doing what, but the final report is anonymized. I also invited them to leave comments if they wanted to say: here's why we chose this level, here's what feature was missing, or here's what our experience was with it.

Next, step two. No, not step two yet. So this was the result of that spreadsheet. You can see it's actually a super wide spreadsheet: it goes across the top and then keeps wrapping down to the one on the bottom. I blacked out the names of the companies, but 33 companies submitted 177 data points. You can see it's colour coded: green for Adopt, blue for Trial, yellow for Assess and red for Hold. The next stage was looking horizontally, across the rows at the individual projects, and putting those into the rough categories of Adopt, Trial and Assess, roughly the kind of tallying sketched below. And this is where I want to make it very clear that it's subjective; it's an opinionated guide. You could certainly argue that some of these could be in different levels; it just depends on where you want to put the cutoff point.
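To make that tallying step concrete, here is a minimal sketch of how per-company votes could be rolled up into one ring per tool. The cd_votes.csv layout, the "tool" column name, and the consensus thresholds are all illustrative assumptions; the actual radar was assembled by hand in a spreadsheet, with the editor's judgment, as described above.

```python
# Hypothetical sketch: roll up per-company votes (adopt/trial/assess/hold)
# into a single radar ring per tool. The CSV layout and thresholds are
# illustrative assumptions, not the process actually used for the radar.
import csv
from collections import Counter

LEVELS = {"adopt", "trial", "assess", "hold"}

def load_votes(path):
    """Read a sheet with one row per tool and one column per company."""
    votes = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            tool = row.pop("tool")  # assumed name of the first column
            votes[tool] = Counter(
                v.strip().lower() for v in row.values()
                if v.strip().lower() in LEVELS
            )
    return votes

def ring_for(counts, min_responses=5):
    """Assign a ring; these cutoffs are made up for illustration."""
    total = sum(counts.values())
    if total < min_responses:
        return "assess"            # too little data to say more
    positive = counts["adopt"] + counts["trial"]
    if counts["adopt"] >= total / 2 and counts["hold"] <= total / 10:
        return "adopt"             # broad positive consensus
    if positive >= total / 2:
        return "trial"             # mostly positive, weaker consensus
    return "assess"                # mixed or contested, e.g. many holds

# Usage: votes = load_votes("cd_votes.csv")
# for tool, counts in sorted(votes.items()):
#     print(f"{tool:>12}: {ring_for(counts)}  {dict(counts)}")
```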
When I was building this, I decided that the first group, Flux and Helm, seemed to have enough positive consensus to go in Adopt. CircleCI, GitLab and Kustomize had mostly positive recommendations, but perhaps not as many responses. And then Assess was just a really wide mix: for some of them there wasn't enough data to say conclusively one way or another, and Jenkins, which is obviously the longest row, had lots of positives but also lots of negatives. So Assess is the most varied of the three. The next step after that, step three, was to look at some of the patterns or themes.

I see the question from Liz: do you think we should publish the anonymized step-two data as well as the final radar? I think we could probably do that. It would be the editor's call; we might have to strip out comments, for instance, if the comments reveal the identity of the company. Of course, a lot of the companies are absolutely fine with sharing their data as well. So yes, I think we could probably do that. Cool.

The last step of this (sorry, go back one) was analyzing the themes. Again, this is an editor's subjective view of what is interesting or surprising, and it could include what was not seen on the radar that you might expect to see. There would be three themes per radar. So all of that is the setup, so you understand the methodology.

This was the actual final radar we came up with from the end user community. I would read it as: Flux and Helm showed the broadest positive consensus. CircleCI, Kustomize and GitLab had some positive responses, but not as much consensus as those in Adopt. And everything else in Assess can be there for different reasons: some things are too new, and some things, like Jenkins, are a different story. The difficulty I had with this, my first reaction when I saw it, was: but why is Jenkins in Assess? Everybody knows Jenkins, and lots of people use it. The point of showing you the spreadsheet was to ask: would it have been right, in my opinion as the editor, to put Jenkins in the same region as CircleCI, Kustomize and GitLab? Could you actually pop back to that spreadsheet for a moment, Amy? Next one, one after this, sorry, forwards. Yes, this one, thank you. So, in my opinion: would Jenkins have been right to move into Trial? I thought, as the editor, that it shouldn't be there. Again, this is why it's an opinionated guide.

Let me just catch up on some of the chat. Right, yes, Jenkins is pretty old. This is exactly what I'm saying: I didn't want to make this judgment on the basis that everybody knows Jenkins, so we should move it higher. The point is to look at what these actual companies recommend and try to be at least somewhat fair about it. "It misses the inclusion of many of these projects in commercial products; for example, the inclusion of Tekton in OpenShift results in massive adoption." So there are some difficulties with this, because it could be, well, it almost certainly is, a biased group, right? This is the same thing Alex is saying about giving the survey size and, sorry, the verticals, and some idea about the respondents.
Because the projects were suggested by the end user community, we're explicitly not adding in extra things. It's not the opinion of the CNCF; it's not the opinion of the TOC. It is the opinion of the end user community and the end users. So in that sense, I'm hesitant to impose my judgment or say we should add or remove things based on how we feel about adoption within other products or relationships with commercial vendors.

"Can the CNCF vendors get access to the original data, maybe with company names replaced with industry sectors?" Again, we definitely couldn't release the real names; possibly the industry sectors. I have a link at the end of these slides, and I know I'm going to forget some of these answers, so if you pop these suggestions there, that would be great. But I think we can do it as long as we're careful about how much information we release. Certainly possible, certainly doable.

"What's the criteria for including items in the technology list?" The criterion is that it came from the end users. That's 100% of it. We didn't suggest anything; we didn't take anything else. It's just what the end users themselves suggested. And yes, I do think this is super useful to show at least what a group of real end users is doing. I also think it's really interesting data; even the anonymized version is really interesting.

"Will there be a centralized portal to access the radars? Will it be linked to the CNCF landscape?" I think we could build that out. I'd need to scope it as a project and build it, but it sounds like people find it useful enough, so at some point it will definitely live on the CNCF website as a resource you can explore, for sure. Would it be linked to the CNCF landscape? On a similar note: the landscape is great because it includes lots of vendors, which gives a good sense of how much end users use vendors versus open source. So yes, I agree. That's why I said it's very important that this reflects reality and that we're not just talking about CNCF projects. It's crucial to recognize that this is the actual experience of the end users.

So, Alan, I think this is sort of similar. Again, I present this as a resource, to be taken with the caveat that it's opinionated. In my opinion, as the editor of this particular radar, I would not have felt it was right to move Jenkins up on the basis that it's adopted in lots of places. You could certainly argue one way or another; it is definitely subjective. But the point of this is to spark discussion. It can clearly never be purely objective; it is the opinion of 33 companies. And as Michelle says, it does show up in the hold data: there were quite a few holds for it. That was my judgment, and I'm willing to stand by it, but I'm certainly happy to discuss it.

"The official version leaves out hold data, most likely for political reasons. Can you say some words regarding this, or what are other opinions?" I suppose you could call it a little bit political, but I also feel this is a matter of choice. I wanted this to be more positive and forward-looking, rather than saying "don't do this". I felt it would be better to keep just the three levels that say: yes, you should try this, or we suggest it.
If there were strong opinions and people really felt they wanted to see what was in Hold, then again, we can look at it. This is the very first one we've run, so all of this can change; everything here is flexible, and we can absolutely change the format. The main thing for me is that this is valuable to the community: useful information and useful data, presented in an easy-to-consume way, trying to reflect some level of reality. As I said, there is a link, so please feel free to add a comment there so I can track it after this call, because I'm very keen to make sure this is a useful resource. Okay, I'm going to keep going so I get through the last couple of slides, and then I'll come back to the chat.

So, as I said before, what were the themes I saw in this particular technology radar? One that surprised me was that most of the end users had tried a lot of different options and had adopted more than one. I guess I expected most to end up adopting a single solution, but many had combined multiple solutions, and had also open sourced components of their own continuous delivery frameworks: Lunar Way have released release-manager, there's kube-applier from Box, and stackset-controller from Zalando. It also surprised me that I didn't see any of the public cloud managed solutions. That could be a bias in the group, or it could be that there is actually a preference for running continuous delivery within your own cluster, or it might just be down to the maturity of the features that were available a few years ago, when these end users were making their decisions.

Number two: Helm is more than packaging applications. I was very surprised that somebody suggested Helm for this radar, because I don't think of it as a continuous delivery tool, but it turns out it's a widely used component of CD at lots of different companies.

And the third one was Jenkins. I thought this would generate the most discussion. Jenkins is definitely widely used and widely evaluated, but compared to almost all the other options, it had the largest number of companies saying they had put it on hold for new applications. That's not to say Jenkins is bad or Jenkins is not used; clearly it is. It means that if I were giving advice today to someone looking for a CD solution, I would say: look at Jenkins and assess it, but also look at other tools that support things like GitOps and are more cloud-native friendly, because they might be more applicable for your use case.

And I think this is the last of my slides. Again, I really want to make this a useful resource, and each one that we publish each quarter will focus on a different use case. I picked out a few categories from the CNCF landscape: maybe you want to find out what end users are doing around security, what storage they use, what runtimes they use, or what serverless tools they use. This link goes to a GitHub issue, so you can put your thoughts, feedback or recommendations there. Again, I'm very keen to make sure this is useful and valuable, and actually a reflection of reality.

I'm going to come back to the chat now. So: plus one to sharing anonymized data. This seems like a positive; lots of people are asking for it, so I think we can find some way to at least release the anonymized data without any of the comments, roughly along the lines of the sketch below.
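For a concrete sense of what that release step could involve, here is a small hedged sketch: company columns renamed to neutral labels, and free-text comment columns dropped, since comments are the most likely place a company could identify itself. The file names and column layout are hypothetical assumptions, not a description of the actual publication process.

```python
# Hypothetical sketch of anonymizing the raw radar sheet before release:
# rename company columns to neutral labels and drop comment columns,
# which are the most likely place a company could identify itself.
import csv

def anonymize(in_path, out_path):
    with open(in_path, newline="") as f:
        rows = list(csv.reader(f))
    header, data = rows[0], rows[1:]

    # Keep everything except columns whose header marks them as comments.
    keep = [i for i, name in enumerate(header)
            if not name.lower().startswith("comment")]

    new_header = []
    company_n = 0
    for i in keep:
        if i == 0:
            new_header.append(header[0])               # keep the tool column
        else:
            company_n += 1
            new_header.append(f"company_{company_n}")  # strip real names

    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(new_header)
        for row in data:
            writer.writerow([row[i] for i in keep])

# Usage: anonymize("cd_votes_raw.csv", "cd_votes_public.csv")
```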
We'll see what we can do, for sure. Another interesting point: Flux is in Adopt while others are in Assess, and we can't draw a conclusion about cloud platforms not being present if they aren't in the technology list. As I mentioned before, the projects listed here were suggested by the end users; I did not require them to come from a preset list of options. So, okay, I think that's fair to say: we can't draw a conclusion either way, we just don't have the data. I agree with that. Another alternative approach would be to seed the technology list with all the solutions from an area of the existing landscape; but I like that the current approach doesn't put words in the mouths of the respondents. Yes, Liz, this was exactly my thought. I didn't want to give the impression that there were, say, 10 options and that was it, and that if you use anything other than those 10 options, you don't have an opinion on CD. I don't think that can be right. Also, there are lots of projects that are not on the CNCF landscape; the landscape is not all-encompassing, and there are always going to be products that aren't on it. So I do think it's important that the set of tools, the answers, the options, also comes from the end users. "There's a lot of bash." Well, fair enough; I chose not to say "just use bash for CD". But I would love to hear more feedback and thoughts: how can we make this better, how can we make this more useful, what are the things you want to see in September? That is what I ask of you. And yes, back to you, Liz, if I can figure out how to unmute myself. That was awesome. Thank you very much, Cheryl.

I had a quick question, Liz. Cheryl, thanks; this is really interesting. From a storage perspective, we had been looking at different use cases and trying to avoid kingmaking, and perhaps this is a way for us to evaluate that. But one thing you said, and maybe I misheard, was: is this data years old? How do we gauge its relevance today? I'm assuming I misheard. Sorry, yes; what I said was that Jenkins is a few years old. This data was collected in May, so last month, and the idea is that it would be newly refreshed every month. Oh, and also: since this was the first time, I ran it as the editor. I collected the data, and these are my opinions on it. But in the future, we're going to pull five companies from the end user community to be the judges of what goes on that final radar and what the themes are. So it should completely and solely be the editorial opinion of the end users. Yes, I like that. It's a great approach, and the data is fresh. Perfect. All right, thank you. Cool, no worries.

Just to drill down on that question for a minute: you're not refreshing every single technology area every month, are you? You're going to do different technology areas and then come back to each one. Exactly, yes. Continuous delivery probably won't come up again for a little while. The actual topic would be decided by that editorial panel, whatever they want, but I want to invite votes and opinions from the wider community about what you would find interesting, and then let the editorial panel decide what the topic is and what the final results are.
But yes, I foresee this being, hopefully, an interesting resource that at least once a quarter we can use to make sure we're all aware of the reality end users are facing now. Oh, and yes, I keep forgetting things I should add. In the future, I'm looking to remove myself from this process as much as possible, and one of Jeff, Elena or Katie will present the radar, so it will be from their point of view: whatever the TOC should be aware of in these areas. Again, I've set this up as my framework, but in the future I want this to be completely driven by the end user community. Any other questions?

Snap. More of a comment than a question. In the last TOC meeting, we were having a bit of a discussion about how, when we're building out documentation or choosing things to prioritize, use cases and things like that, we could avoid being kingmakers. And it occurs to me that the SIGs could use the tech radar as a semi-objective view of what end users are actually using, to prioritize and shape that documentation. So perhaps this is an easy way to un-bias that debate. Yes, that is exactly my goal. I want this to feed somehow into what the TOC or the SIGs prioritize. Absolutely. Some time ago we'd been saying it would be great to have some way for the TOC to understand what end users really want and need, and what they see as gaps in the landscape, and I think this is a fantastic initiative to help supply that information to the TOC, the SIGs and the whole community. Yeah.

So, Josh, I see the comment that the tech radar is retrospective while the SIGs are supposed to be looking to the future. Absolutely agree. It's advisory, right? It's extra information the SIGs can use to make their judgments, not a suggestion or requirement that you take any particular action; it's just another data point. And similarly for covering closed source: yes, this is purely advisory for the SIGs.

Can I ask one question? This is Bartek from SIG Observability. Is there any way we can help with this if it touches, let's say, our area? Because we've done similar surveys, to be honest, so we would love to have some input into a potential line of questions or things. What do you mean by input here? Gosh, I guess: collaborate and, I don't know, help in any form. I really appreciate the offer to help, and I'm glad you think this will be useful enough that you want to help with it. I do think this should be driven by the end user community. I keep saying this, but I don't want to give the impression that you can lobby and say, I want this project on the radar. That would be my only caveat. Otherwise, if this means the SIGs and the end users can come together, for instance if we did one on observability and after the radar was published the SIG said, hey, we really want to talk to this particular company because they seem to be doing something unexpected or quite forward-looking, can you help connect us, then we could definitely do that.

Hey, Cheryl, quick one. Do we think we could maybe extend this to include technologies other than just projects?
So, for example, whether it's runtime or storage or whatever, there are lots of different focuses, and it might also be useful to get feedback on what classes of technology, as opposed to specific projects, are grabbing the attention of end users. What would be a class of technology, Alex? For example, are people more interested in object stores versus file systems, or in managed services, managed Kubernetes versus serverless, that kind of thing? Yes, as long as we frame the question in the right way. The nice thing about this format is we can present anything in terms of what's ready to adopt, ready for trial, or worth assessing. So for sure, I think that's a great idea. As I said, the actual decision would be made by that editorial panel; if they wanted to talk about what people are doing for object store or block store or file or whatever, it would be up to them to decide what's interesting to the end users. Does that make sense? Yep, yep, completely.

Sorry, Liz. I was just saying that you're soliciting votes for topics for future technology radars. I've clicked on the link and it goes to an issue, so presumably we could use that to suggest ideas, like Alex's idea of classes of technology, as candidates for a future radar. Yes, absolutely. It's more of a discussion thread now, so put anything you want to see there and I will try to steer future versions so they're as useful as possible. Fantastic. Any more questions or comments?

Just following the chat: I think the radar is a time slice, which shows both things that are established and things that are coming, i.e. both directions. Yes, it's very much fixed to this point in time, so it can reflect what's coming up as well as what people are starting to move away from. Just one point from our standpoint: I think the SIGs also represent end users a little bit, so one way we can help is that we would love to get end user companies we know to take part in these surveys, to add data points. Yes, absolutely agreed. You can always put them in touch with me if you want end users you know to contribute to this. It is part of the CNCF end user community, so there's a little process for how they join, but within that I would absolutely love to have the SIGs and the project maintainers get their viewpoints expressed on this as well. And thank you, Katie, for linking the issue.

Fantastic. Okay, I think a giant virtual round of applause for Cheryl, for this initiative and for coming and presenting it to us. So interesting to see the results of the first one; that's really cool. Thank you very much for all the feedback, all the questions and the interest as well. It's been a bit of a labour of love over the last couple of months, so I very much appreciate that people are interested and want to make this better. Great. All right, we have about five minutes left. If anyone has any other business they'd like to mention? It seems like everyone would love five minutes to go and make a cup of tea or something. So, go about your days, and have a lovely day, whatever time it is for you. It's good to see everyone. Thank you. Thank you. Bye.