Good morning. We'll wait a few minutes before everyone comes on and then we'll get started. Katie, should I wait for Liz to come on, or are you good to rock and roll? On my side I'm ready, but I think the recording has only just started, so I'm happy to wait. I'll give it until five after and then we'll start, because you are our topic today, so it should be fun. Sounds good. Sure. Hey, people. Just in time. I told everyone I was waiting until five after for you. I don't know. All is well. All is well. Welcome. So am I the last to arrive, do you think? Do you know what? You show up when you do. It's fine. We have a slow agenda today, just some small things here. So we can go ahead and get started. Our normal antitrust policy note. Take it away, Liz. Yeah, welcome everyone. You've made it. Do we have an agenda slide? I'll let you kick us off as we get here. And yeah, exciting things to hear about the end user tech radar, the latest edition. This is episode two of the tech radar, right? So I think Cheryl and Katie, are you presenting this to us today? Katie, I'm just here to sit and listen and answer any questions if they come up. Good luck, Katie. No pressure. I think the slides are going to be shared. Liz, there's a note here for you around the LF diversity training. I did not build out slides for it because it's very tiny. So our housekeeping notes are: we are doing a special edition of the TOC meeting on September 29th, because that's where we have space for it. We would not normally have a fifth meeting of the TOC in a month, but we are here and we'll be talking about graduation requirements. So special meeting, usual time, usual place, but a day that we would not normally have meetings. So, Katie, if you are ready, I'm happy to move to you. Good. I'm not sure if I should share my screen or you're just going to take care of it. Awesome. Awesome.
So my name is Katie Gamanji and I am one of the end-user-elected TOC members, and today I am here to present the new tech radar, which is going to be focused on observability. Now, before I move forward with the topic, I would like to introduce what exactly the tech radar is as a term and why it's such an important tool within the ecosystem at the moment. A tech radar is an assertive, quite opinionated guide to the emerging tools within the ecosystem. It aims to provide context on the tooling currently used by end user companies. So this is actually quite important: it's focused on the end users and what they utilize at the moment in their stacks, and the tech radar showcases three main levels of adoption, or categories, for every single tool: Adopt, Trial, and Assess. At the Adopt level, there is a clear recommendation to use this technology in production; it showcases quite stable features, and it provides a solution for the particular problem you want to solve. At the Trial level are usually the tools which solve part of the problem; there has been some success with them, so keep an eye on them, and POC them if you are considering adopting something. The Assess level focuses much more on emerging technologies; these will focus more on solving future problems, so there is definitely something to keep an eye on for the future. Now, with the technology radar, there is a theme for every single one of them, and this particular one is going to focus on observability.
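The three adoption levels described above can be sketched as a tiny data model. This is purely illustrative — the names and structure are my own, not anything published by the CNCF:

```python
from dataclasses import dataclass
from enum import Enum

# The three published adoption levels. A "Hold" vote also exists in the
# survey (mentioned later in the talk) but does not appear on the radar.
class Level(Enum):
    ADOPT = "Adopt"    # clear recommendation to use in production
    TRIAL = "Trial"    # seen success; worth a POC before adopting
    ASSESS = "Assess"  # emerging; keep an eye on it for the future

@dataclass
class RadarEntry:
    tool: str
    level: Level

# A few example placements drawn from the observability radar discussed here.
radar = [
    RadarEntry("Prometheus", Level.ADOPT),
    RadarEntry("Jaeger", Level.TRIAL),
    RadarEntry("OpenTelemetry", Level.ASSESS),
]
print([f"{e.tool}: {e.level.value}" for e in radar])
```

The point of the enum is simply that a tool sits in exactly one ring at a time; movement between rings across editions is what later parts of the discussion care about.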
And observability is actually quite a core functionality in every single company, because we require that visibility; even the company's success cannot be measured if you don't have a good observability stack. So it will mainly focus on functionality such as logs, metrics and tracing. Currently the technology radar has been curated by a radar team — I think that's the name of it. The radar team is a selection of randomly chosen members from the end user community, and they make sure that they ingest all of the data and that the final radar really showcases what the community uses at the moment. This is the first radar team of this kind, as far as I'm aware, and they have a very wide representation from different companies such as Zendesk, Box, The New York Times, and Paid. With their help, we were able to create the final radar as mentioned, and to identify some of the main takeaways from the current exercise as well. In terms of how we survey the companies, this is based on a Google spreadsheet — that's going to be the next slide. Currently we don't have an exhaustive list of all of the tooling; the companies, or the voters, are able to add new tooling or new instrumentation mechanisms based on what they actually use in their stack at the moment. They are also able to categorize those tools with the levels we've mentioned before: Adopt, Trial and Assess. And there's actually a Hold vote as well, which means that the company doesn't really recommend that tool to be used in the future. The tech radar aims to showcase a good example of what's usable now, rather than what should not be used, so only the three levels are presented at the moment. And with this observability tech radar, there were almost 300 data points.
That's going to be the next slide. So 283 votes, and these are coming from 32 companies, and you can see some of the company logos showcased up here. When you look into the number of employees per company, there is a slight tilt towards companies which have more than 1,000 employees; however, there is a very fair and good representation of small and medium companies as well. When you look at the breakdown of voters by industry, most of them are categorized as software, which is quite a wide term, but if you look at the table, there's a wide representation of other industries as well. As I mentioned, the tech radar itself is not exhaustive; it's just based on the current data points, on what exactly the end users are using in their production systems. So if you go to the next slide, this is the final representation of the tech radar, and we can see that in Adopt we have a handful of tools, in Trial we have quite a chunky list, and Assess consists of three elements. I'm going to go through all of these levels and point out a few takeaways about every single one of them. So the first one is Adopt, on the next slide. In Adopt we have tools such as Prometheus, Grafana, Elastic, Datadog and OpenMetrics. And if you look at the amount of votes proportionally, most of the companies are opting for "this is a good tool to use in production" compared to other votes. Now, these are tools that have been in the industry for quite a couple of years now, and they're actually quite stable; they've been used in production and they have a very good way to solve the current problems in the industry. The Trial level, on the next slide, is composed of tools such as Jaeger, Splunk, Lightstep, StatsD, CloudWatch and Sentry.
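The mechanics described above — companies casting Adopt/Trial/Assess/Hold votes per tool, with Hold kept off the published radar — can be sketched as a simple tally. To be clear, this is a hypothetical simplification: the talk makes clear that the real radar team applies judgment when placing tools, rather than any fixed counting rule.

```python
from collections import Counter

def place_on_radar(votes):
    """Pick the most common non-Hold vote for one tool.

    `votes` is a list of per-company votes: "Adopt", "Trial",
    "Assess", or "Hold". Hold votes are excluded because only the
    three positive levels appear on the published radar.
    """
    tally = Counter(v for v in votes if v != "Hold")
    if not tally:
        return None  # only Hold votes: the tool stays off the radar
    level, _count = tally.most_common(1)[0]
    return level

# Hypothetical votes from five companies for one tool:
votes = ["Adopt", "Adopt", "Trial", "Hold", "Adopt"]
print(place_on_radar(votes))  # prints: Adopt
```

A real pipeline would also weigh how many companies voted at all (the sample-size concern raised later in the discussion) before placing a tool.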
Now, these tools have a good amount of votes across different levels as well, so that's why I think the radar team opted to put them in the Trial category. But if you look into the distribution of this tooling, some of them have been around for quite a long time, and some are actually quite emerging. What we can see is that they really focus on a subset of problems. For example, Jaeger is specifically focused on tracing, while CloudWatch is more of a SaaS solution from AWS, which comes out of the box pretty much. And Assess is composed of three tools: OpenTelemetry, Thanos and Kiali. Now, this was one of the most interesting areas, personally for me, because most of these tools have emerged quite recently. I think there is a wide awareness of these tools; companies have been widely POC-ing them and trying them within their stacks, however they haven't moved them completely into a production environment yet. So this is something definitely to keep an eye on — for example, Thanos has the potential to solve metrics ingestion and storage at scale. So again, something to keep an eye on in the future. And if we move to the next slide, the radar team also identified some of the main takeaways which were outlined from this radar. The first one is that the most commonly adopted tools are open source. If we look into all of the tooling which we have on the radar at the moment, most of them are open source, and some of them have a SaaS provider coming with that, supplying extra enterprise support if need be. With this particular theme as well, a balance could be seen between actually running the product in-house or, vice versa, having a SaaS provider provision those capabilities.
So there is a very fine balance between in-house building or in-house maintenance of the open source tools, versus paying for them. The second theme, on the next slide, is that there is no consolidation in the observability space. Based on the votes for this tech radar, most of the companies use between five and ten different tools within their observability stack. What that actually means is that, just as when talking about Kubernetes we say no two platforms are the same, I think this applies to the observability stack as well. There is not one observability stack which incorporates the same tooling or provisions the same solutions or functionalities, and that's why we see that companies opt to use different tools. Now, as mentioned, when you're talking about observability there are metrics, logs and tracing, and usually companies use different tools to either get a subset of those functionalities or to focus on one functionality straight away. Maybe another thing to mention here, if I may: another thing which was outlined is that it's easier to adopt new tools than to migrate to new tooling. Observability is something which everyone wants within their stack; however, at the same time, if you incorporate a new feature or a new tool, that will potentially come with a new observability endpoint. There's this kind of expansion growth of the tooling within a company. That's why we don't have consolidation, but at the same time it's difficult to fully migrate to a different system as well; that's why we have different components and different tooling taking care of different functionalities. And the last thing which was identified is that Prometheus and Grafana are frequently used together. This is something which is interesting; it's kind of a natural outcome.
Nowadays, if you run Prometheus, there is going to be a Grafana, and vice versa. Actually, the Prometheus team deprecated Promdash four years ago or so in favor of Grafana dashboards, so there is a clear association between how metrics are ingested with Prometheus and a potential visualizer in Grafana. The community has also quite heavily focused on providing documentation and different kinds of examples for deploying Prometheus and Grafana side by side. And these are pretty much the main themes which were outlined altogether. Overall, if we go to the next slide, we can see that the tech radar for observability has different levels of adoption, as expected. Some of the tools which are very stable are very widely used, and quite commonly used in the big companies; these are outlined in Adopt. The Trial level, I think, is an area which is a good step for well-matured new projects, and for projects which have been around for quite a while but have been able to transform and keep up to date with some of the new principles and trends in the technology — so that's an area for new and emerging technology side by side. While in Assess we have, again, new standards, a new way of doing observability, so something which will definitely solve the future problems when talking about scale and enterprise. And if we go to the next slide: yes, there is a new initiative actually; there is a new endpoint to see all of these tech radars. That's going to be radar.cncf.io, which will contain of course the diagram itself, with the main takeaways and some of the reasoning behind them, including the tech radar team. And currently the tech radar is going to be put together every single quarter. So if you'd like to vote on the next theme, please do so. And that's going to be on the next slide as well.
So currently the votes are on a GitHub issue, so please be aware of that. If you'd like to choose any particular theme, please vote. And if you'd like to actually vote for some of the technologies in this ecosystem, please join the end user community; only the end users are able to input these data points. It's very end-user-driven rather than vendor-driven. So that's going to be the next slide as well. And the last slide: if there is any feedback — if you'd like to improve or add new features to the tech radar, or there is any new information you'd like to get from these data points — please send the feedback to that email as well. And I've been seeing that there's a lot of chat going on over here. So if there are any questions, please shout them out. We'll try to cover them as we can. Any burning questions or anything like that? There's some good commentary going on in the chat. Yeah, I think a few people are saying the sample size is still relatively small. So 32 responses, was it? It's 32 companies, but they actually had almost 300 data points based on their usage. Now, I've seen Cortex mentioned here and there. I was actually surprised as well that Cortex is not on the radar, especially when we're looking at the companies adopting Cortex; there are quite a few companies that have been using Cortex quite heavily in production. I think this goes back to: we need more users to actually give their input, and this is something which I think the CNCF is working on. It's not that Cortex is not used; it's just not represented well enough on the radar. And as well, there were more than 30 tools, I think, voted on overall in the spreadsheets, but there is a choice to have a limited amount of tooling on the radar, not to make it too overwhelming overall. So just be aware of that as well. Yes, great stuff. I mean, it's another useful end user tech radar. So thank you very much.
Everyone — Katie, Cheryl, everyone who participated in it, and the leads on that survey. Great. Looks like a few questions coming up in the chat. What if your company is not officially a CNCF end user but would like to provide input anyway? I think Cheryl can take this one. Yeah. So the question is whether your company is a vendor company versus an end user company. If it's an end user company, then basically the way to contribute is to join the end user community. We don't make it 100% open because the companies in the end user community cannot state publicly what they're using — their legal and PR teams will completely block that. So we have to have a private forum where these things are discussed. If you come from a vendor company, then the way to contribute to this is to ask your end users to join and to vote on the things that they're using. It's not like, you know, get your end users to join and then you will definitely end up on the radar. But it's also a great way for them to get involved with the community and get engaged with open source. So it's a good way to do it. Yes, we're working on it — next time we're going to try and pick the topic a little bit earlier, probably in about a month's time. Lee, yes, the point about feedback being funneled to the SIGs. So I had a chat with Richi, who is the chair of the SIG Observability group, and I don't think he is here today. Yeah, I think he had a clash today. Oh, you're here. Okay, awesome. I think you should weigh in then, since we had a chat about what SIG Observability can do with this information. With both my Prometheus and my SIG Observability hats on, I would like to enable end users to onboard themselves onto the more modern solutions amongst the survey. Of course, there was at least one surprise in that list, at least to me.
And basically what Cheryl talked about is to create a kind of short questionnaire of actually actionable things which we as a community of maintainers and SIGs can provide to end users — what are their main pain points. It could be applied to Prometheus; it could be how to migrate from StatsD to Prometheus; it could be whatever. Like, come up with a few of those questions, send them to the end user community, and have the end user community choose what type of content gives them the most value in using modern technology, basically. And if anyone has any suggestions, we have the SIG Observability call right after the TOC meeting and you're more than welcome to join, or you can send email to me, or you can poke the Prometheus team — all of those. Thank you. Yeah, it's definitely one of my goals to get this kind of feedback going to the SIGs and to the project maintainers, ultimately, and to try and develop this as a feedback loop. What else can I address? "Put Loki on there, lol." Sorry. I don't think we want to just randomly add projects. I mean, the way to add projects really is to have end users who are using them. Lee, I'd love to hear more about this comment: "The radar is helpful, but too well distilled." What else would you like to see? Well, good question.
Let me brainstorm out loud for a moment and say that it might be that there are specifically painful missing features, or bugs in specific projects that have been hard for users to overcome, and that's in part why a specific project is still out on the periphery. You can't digest all of those details in a picture, and so I think that's an example of where, if there are specific PRs or issues — or maybe those things haven't been documented — then, like you said, ultimately getting that back to the project maintainers is the goal, and SIGs can potentially help in that regard. Let me brainstorm from a different perspective: in SIG Network, we've formed a service mesh working group. Part of the goals of that group is to put forth common patterns of use, and to curate those patterns as things that end users could conceptually either learn from or inform. And we're not connected in that way today — at least, facilitated through the CNCF, I don't know that we have a vehicle for exchanging those in advance of publishing. I think that's a separate question to the tech radar, but maybe working with Cheryl you could figure out how to have that feedback loop for that particular white paper, or whatever format it is you're going to create it in. Yeah, I really do want to encourage this loop. As you said, Lee, right now these two groups are too separated. So the tech radar is one of my first initiatives towards bridging this gap. This is nice.
I'll also interrupt to say that, well, it's absolutely necessary for a couple of reasons that there is a divider there — that not all of it is open. You'd articulated one of those reasons, and another one that I would say, having been on both sides of that line, is that with my end user hat on, we don't want those, you know, vendors getting in there; we want to have an open and honest conversation about what's going on, without influence per se. And so, yeah, it's a healthy thing, I think. Thanks for what you guys are doing. It's good. Absolutely. I'm glad that you find it helpful. I did have one comment about coming up with the list of PRs or issues or pain points, because Richi and I spoke about this yesterday as well. I think it's very hard for a group of people to just collectively come up with, you know, here are our top three pain points; if it was that easy, then they would just go file an issue or vote up an issue. And I think some of these also are not things that are well encapsulated as issues. For example, a lot of these companies use many, many tools, right — as Katie was saying, there's no consolidation. And one of the reasons is that for many companies observability is not a core business function or business value to them, and therefore investing the time to move away from an existing observability solution to a new one is a very hard sell, you know. And that sort of thing is not a fault of the project. It's not an issue that can be resolved. It's just how the observability space works.
And I think that's different from, for example, the first radar on CI/CD, where there's a benefit to having everybody on one solution — having, you know, all your developers using the same solution — compared to observability, where there's benefit to having a few different ones, because maybe there are different strengths or different approaches in different tools. Anyway, not an easy problem. I would love to get to the point where I could say, okay, here are five pain points, done, go away, and then we're done, but I'm not sure how easy it is, or if it's even possible, to get to that point. So, this is Steve, rather than jumping into the chat room. Right. Hi. So what you articulated is true, and I suspect the difficulty is not just associated with observability. I come from years of, you know, commercial software background and open source. When you've got an install base, adopting a new technology is going to be a challenge and uncomfortable, right? So I think it would still be helpful — since this is, you know, a technology radar — there are business conditions that influence the assessment, and somehow capturing those, so that we're not just trying to solve technology problems but, you know, get an understanding of the real-life business impacts that also apply. Yeah. No, Steve, that's a great point, and one where I think the SIGs and the project maintainers should work directly with end users to figure out these points. I think it's very hard for people to just spontaneously say here are the reasons why we're not moving to this technology, or what our pain points are, but in a discussion these are the sort of things that come out. So I guess my answer is to say I'm going to try and push project maintainers and end users together more, get them to talk more, and hope this comes out. Fair enough.
If there's a better way to think about this, then I'm all for trying new ideas as well. Yeah, a comment here from an end user: I obviously represent a really big company, which is not always the norm, but, you know, distilling down what's the business use case — I mean, we have so many different lines of business and internal groups, and there are so many different uses of technology depending on where it sits, so it's not possible to just distill this down to "here's the business use case for this thing". So I'm with Cheryl, you know: more real engagement with end users throughout the process and the lifetime of the projects, versus trying to, you know, pull snippets off radars. The radar's great, but it's not going to help projects, you know, align what they're doing to real users without that sort of engagement. Yeah, it's certainly precipitated the conversation here, right, and this is what you would hope that interaction would produce. I think the point about the business case here is also that, as people mentioned, the technology radar — and I know you have done more than this, but it could feel a bit like this because you're focusing on the projects — could also cover what people are adopting and what they are struggling with; those would still be helpful questions. I mean, it's not surprising that there is obviously bigger adoption in the metrics space than there is in the distributed tracing space on the open source side, simply due to the maturity of some of those projects and how long they have been around. And I think it can also be guidance to people really picking and choosing technologies eventually. What I kind of read between the lines: it's great that you have project adoption in there, that it's honest feedback from end users — that's all amazing — but it should also focus on guidance.
Why certain technologies are where they are, and why end users see them where they are. Say, for us as users as well: we are very good at metrics and logs, obviously, but we're just moving into tracing, or we have certain use cases where we just get enough out of these technologies, or we're using certain technologies only in certain areas. I think that would be helpful for other end users to see why people are doing it, and also for, say, the wider community to understand where you really are on the technology side of all of these topics. If that makes sense. Can I answer a point in chat about capturing comments during the assessment: "Is there any kind of capturing of ad hoc commentary during this process?" We do capture ad hoc commentary, but again, it's difficult to publish this completely publicly. Maybe it's okay in an anonymized format, but then maybe some of the comments would reveal too much about the companies. I mean, we've been a bit conservative so far on what we're publishing. But it would be useful information, for sure. And I wanted to add in also a link to the webinar that we did with the radar team on this observability radar, because in that webinar we discuss some of the questions that you had about why end users chose one solution over another, and what the factors were that actually mattered to them when picking an observability solution. So I recommend that you go and watch the full webinar as well. As we do these, we need to be at least conscious of the fact that not all solutions in a space are necessary for everyone. I think, you know, if you have a big theme like observability, maybe everybody is doing metrics but only some percentage of people are doing tracing. Do we take into account that — I'm going to make up some numbers — maybe of those 32 companies, 10 of them just have no need for distributed tracing, and so they didn't put any answers in?
Does that kind of penalize them? You know, you wouldn't want those projects to look like they're less successful just because they have a less broad usage. Makes sense. So the definitions are very specific for this reason. Putting something in Assess or in Trial does not mean it is a bad solution. It just means there was not broad enough consensus to say positively that everybody who uses cloud native should adopt this thing. But it is a very hard line to draw — it's a very subtle distinction. Yeah, I can imagine finer-granularity radars coming up with a different answer. You know, if you had a radar that focused just on logging, would that maybe move things? I don't know. What do you think? Yeah, I mean, we could add more levels if that would be helpful. And I saw Matt, you raised your hand. Yeah, and I think a good point was brought up that not every CNCF project is an "everybody should use it" kind of thing. Some are going to be for specific use cases. So there isn't just the column of "should everybody use it", because then you'd never have specialized tooling, which of course you need. If you need to scale really big, you're going to need something different than someone really broad with many clusters, and different again than if you've just got a small setup, right? And so there are other angles, other verticals to look at as well, beyond "should everybody adopt it" as far as measuring success. And I think that context would maybe be useful to take into account. We shouldn't measure everybody by the same stick if it's not appropriate. Yeah, I feel like what Matt said there — I think it doesn't have to change what we're doing with the technology radar; it's more about the kind of color that we might apply to whether a particular tool is, you know, the sort of thing that everybody would have, or the sort of thing that some specialized niche application would need.
I think that Katie, or maybe Michael, might want to weigh in as well, because this radar has two audiences, right — one is the projects, but one is also the end users. Yeah, I have a comment. I like the current radar because I think it's about the projects and it's about the technology adoption as well. If I see tracing in Assess and Trial and not in Adopt, to me it just means it's not as adopted as monitoring. So I think having a generic radar is good, and having a more specialized radar can be good too; it's just a slightly different angle of looking at things. From my perspective as well, compared to the first radar: if you look into the Adopt section, in CI/CD we had maybe three tools, whilst here we actually have five, and there was potential to have even more tools in Adopt. I was actually surprised, slightly disagreeing at first, but then I realized that every single company I've been interacting with actually uses more than four or five tools for observability, and that's absolutely fine. I think this was one of my main surprises about the current shape of the radar, especially if you go to the next level, where there are plenty of tools. Again, there's not one golden path to make sure that all your components are going to be very well measured; it's not that with just three tools on any infrastructure everything is going to be fine — there is a set of different things that you need to use. So I still think it's a realistic example, but, putting my community hat on, I still would love to see more open source tools there, a bit more representation.
But yeah, I'm actually curious: if we do the same survey, maybe in a year — I'd be very pleased if that would happen — whether some of the tools from Trial would actually move towards the center of the radar, which would showcase a real increase in the usage of that tool in the end user community. So yeah, maybe we could redo the same exercise on the same theme at some point in the future and compare them. Yeah, I think that's a great idea, to also see how certain areas develop over time, because there will be development, especially in such an unconsolidated space. That would definitely be helpful, I think, and also the more people know about it, the more people will contribute to it. More questions or thoughts on this? I just wanted to share — sorry — I just wanted to share some very positive feedback. There was this long discussion on Twitter, I think, that every one of us read, about the landscape being a nightmare, and I think this is a very good answer to it. There's the technology radar; this is where you see adoption. It's not just the landscape — you see more end-user-focused data on how people are using certain projects and so forth. So I think that's definitely moving in the right direction, and I really like the direction it's going — so take this as, as we used to say in German, critique at a very high level. I think it is amazing that we are moving in this space, and I think it answers a concern that was raised on Twitter and other channels about the landscape, and that's a good direction this is going, so I really like it. To add to this point: I think as an input it's super valuable, but not as two distinct things.
Again, the landscape as it is, at least to me, and I know we had this conversation within the SIG call before, needs the ability to be sliced and diced according to users' needs: what they care about at the moment and what they need to see. If we were to, for example, attach the information from the survey to the landscape and allow people to sort entries by different label sets, like "what category is this in in the end user survey" combined with "give me everything which also has this other property", and then list everything out that matches, that would allow users to answer those questions for themselves, as opposed to having two completely distinct overviews of basically the same thing with no way to combine the data, which is the current state. So, I did think about whether these two things should be combined. And I think the big difference is that the tech radar is a snapshot in time. If we had done this radar one year ago, or did it a year from now, it would look different. So I feel hesitant to put it on the landscape, which is supposed to be a present-day overview, and say this thing is in adopt and this thing is in trial, if that comes from an old radar. That raises another question about what the current data is: do we have good data, or is the data becoming too old and in need of a refresh? But that's not a function of the landscape; it's just a function of the age of the data. An additional piece of feedback: first, I think it's a great body of work that's been done, and it's super great to see. For future iterations, I don't know if it's the right forum, but it would be interesting to see if it were possible to survey what drives people's decisions, certainly in our company, and in talking to colleagues and other folks across the spectrum.
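The "slice and dice" idea above, attaching radar ratings to landscape entries and filtering by arbitrary labels, can be sketched as a small data-joining exercise. This is a purely hypothetical sketch: the entry shape, the sample data, and the `annotate`/`slice_by` helpers are all invented for illustration and do not reflect the real CNCF landscape or radar schemas.

```python
# Hypothetical sketch: combine a radar snapshot with landscape-style
# entries so users can filter by both sets of labels at once.
# All field names and data are illustrative assumptions.

# Landscape-style entries: project name plus a category label.
landscape = [
    {"name": "Prometheus", "category": "Monitoring"},
    {"name": "Jaeger", "category": "Tracing"},
    {"name": "Fluentd", "category": "Logging"},
]

# Radar-style snapshot: project -> adoption ring at one point in time.
radar_2020 = {"Prometheus": "adopt", "Jaeger": "trial"}

def annotate(entries, radar):
    """Attach the radar ring (or None if unrated) to each entry."""
    return [dict(e, radar_level=radar.get(e["name"])) for e in entries]

def slice_by(entries, **filters):
    """Keep only entries whose labels match every given filter."""
    return [e for e in entries
            if all(e.get(k) == v for k, v in filters.items())]

annotated = annotate(landscape, radar_2020)
adopted_monitoring = slice_by(annotated,
                              category="Monitoring",
                              radar_level="adopt")
```

A user could then ask combined questions like "monitoring tools rated adopt" in one query, rather than cross-referencing two separate overviews. The snapshot-in-time concern raised above would still apply: the `radar_2020` mapping would need a date attached so stale ratings are visible as such.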
You know, obviously there are competing interests when it comes to the selection of observability tools in particular, and many people run many in parallel, as you said. But there seems to be a lot of ambiguity, at least as I've heard people describe it, around the total cost of deploying an open source or CNCF-based solution versus a more commercial offering, and around how decision makers grapple with those choices: what drives them, are they constrained by engineering capacity, or are they willing to spend more to have a smoother, lower-friction implementation? How do they make those tradeoffs? I think it would be interesting to understand what drives those decisions, if it were possible to capture. Yeah, I think that's a really good point. I mean, every company and organization has its own priorities, different projects, different resources, different numbers of engineers. So it would be interesting to see a trend: whether companies are actually leaning more towards open source solutions or more towards vendors. Yeah, and it might be nice to provide that not only to the end user community of decision makers but back to the projects as well, to help them prioritize: how much time they spend on documentation, on quick starts, on material for people new to the project, whether they arrive as a consumer or to engage as a contributor. I would like to hear Cheryl's view on what end users think of the landscape. I mean, we've all seen the size of it. Before I talk about that, Matt, let me just say, about the reasons why end users pick different solutions: I would love, love, love to have this information.
It's really hard to get this information, especially if you want people to do a five-minute survey; they might not spend half an hour going back through emails and writing down all the reasons they chose one solution over another. So I will aspire to get to that point, but it's another level of difficulty. Yeah, that might be a place where SIG Observability could help. Part of our mission, and we're just getting rolling after launching the SIG earlier this year, is to curate case studies and patterns and to try to facilitate gathering that data and reporting back to the TOC. It will take some time, but we'd be glad to work with you on that. Yeah. Um, my view on the landscape. Okay, maybe I'll ask Katie and Elena and Michael, or anyone else, to answer first before I give mine: do you use the landscape, and do you find it valuable? Oh, the really big landscape? No. No, we don't. Well, we use the landscape occasionally; I think the card mode is actually not bad, and if you go into some of the categories, by the actual grouping, you can get a meaningful list of the things in that space. That's the larger landscape. I mean, certainly we're probably a more sophisticated IT buyer, just based on how big we are, but we generally don't rely on landscapes to drive what we're adopting. We do our own assessment of what we need, what's out there, what makes sense, our existing vendors, those sorts of things. So we don't rely on things like landscapes, or really even research from the large research shops; it doesn't make the cut, because it's typically lagging a long way behind and serves a different type of customer. It's occasionally helpful to give people a sense of a space.
If you can break it down by the actual category, it helps, but the thing in its entirety is obviously way too big. And I think this is going to be a really interesting friction point, right? We've got the radar, which is coming from an end user point of view, and we've seen what happened to the landscape: it's just a land grab, everyone trying to tick the box to get their logo on it. So the vendors want everything to move towards the landscape, and the end users want more of what the radar offers, so there's going to be interesting friction here, I think. Can I just weigh in from our side of things? We're very similar to Michael in that we're solving financial and security issues all the time, so we look at the landscape to get an idea of where a problem space could be helped. It's been very useful to our teams: when there's a problem they're trying to solve and we don't have a current solution for it, we look at what's available in the open source community, because we're trying to be much more open source focused these days. So that's where it comes into play for us, extensively. But it's not just because it's fun, right? We have a business we're trying to conduct, so we have to have a reason to use something; we don't use it just because it's there. Michael, thank you. I'm glad you both put your viewpoints in first, because that also lines up with what I've seen from end users. They'll look at it on occasion as a reference point, but there will always be a second level of evaluation within their own environment. So it's not something they rely on to just say "use this" or "use that". I think the landscape discussion will probably run and run, but since we'd touched on it, it was interesting to raise.
I think we have six minutes left, so I would quickly like to move on to the second item on the agenda, which is hopefully pretty quick. Anyone have any last comments on the technology radar before we move on? Wonderful work, thank you very much, Katie, Cheryl, and everyone else involved. Really great discussion. So, the other thing we had on the agenda: the Kubernetes SIG chairs and leads are now required to take the, I think it's called the Inclusive Speaker training, to make sure they have awareness of diversity and inclusion issues. And the question was raised, some time ago, I can't remember where it came from, whether we should extend the same requirement to CNCF SIG chairs, CNCF project maintainers, or whatever group of people we might want to extend it to, assuming the Kubernetes SIG chairs and leads are finding it useful. Any thoughts? I see a plus one from Ricardo. Hey, it's Michelle here. I had suggested to the Kubernetes steering committee, during my time there, that we take this course. I think Chris had suggested we do this inclusive speaker training, and it was more of a top-down effort to lead by example and make sure the governance body is doing all the right things to be inclusive. So it started with steering first, and the SIG chairs came later. If we go forward with this, I think it would be really great for the TOC to take it first, feel it out, and then suggest it to the rest of the organization. I don't see any problem with all of us taking it, though. Chris has just made the point that the course is being updated; I've been through the course, and I think an update might be sensible.
It's being expanded to be more than just focused on speakers at a conference; it will cover open source community participation and so on. But in its current form it's still very useful, in my view. So shall we, I'm just going to assume we're quorate today, it looks like lots of TOC people are here, do a quick vote? We can vote in the chat on whether we want to make it compulsory for the TOC to start, and then, once we've taken it, we can form a view on whether we want to require it of everybody else, whatever group "everybody else" turns out to be. Okay, Michelle's vote is in, a few other votes, lots of votes coming in. Wonderful. Okay, TOC folks, I haven't counted, but it looks like that's going through. So if you haven't already done it, Chris has provided the link to the course, which is free, so go take it at your leisure, and then we can talk about which groups we want to extend it to. Obviously everybody else is also encouraged to take the course as well. All right, I think we are at the top of the hour. Great, thank you very much everyone. Thank you. Thanks everyone.