Hi everyone. I'd like to thank everybody who's joining us today. Welcome to today's CNCF webinar, which is on the end user technology radar for September 2020, on observability. I'm Cheryl Hung, VP of Ecosystem at the CNCF, and I will be moderating today's webinar. And we'd like to welcome our presenters today: Kunal Parmar, who is Director of Software Development at Box; Marcin Suterski, who is a Software Architect at the New York Times; Jason Tarasovic, who is a Principal Engineer at PayIt; and Jon Moter, who's a Senior Principal Engineer at Zendesk.

A few housekeeping items before we get started. During the webinar, you're not able to talk as an attendee, but there is a Q&A box at the bottom of your screen. Please feel free to drop your questions in there and we'll get to as many as we can at the end. This is an official webinar of the CNCF, and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or to questions that would be in violation of the Code of Conduct. Basically, just be respectful to all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF webinar page at cncf.io slash webinars.

And with that, I'm going to kick off today's presentation. So I'm going to share my screen. All right, so as I've already said, today's webinar is going to be about the end user technology radar. So I'm Cheryl. Basically, I work with end users, companies who are not selling any cloud native products or services, helping them get active and involved with the open source community and get engaged with meeting each other. These are the people I mean by the CNCF end user community. There are more than 140 companies; they span finance and retail and software and so many other things. And all these companies are using Kubernetes and other cloud native projects. So today I'm really happy to have with me some representatives from the end user community, and I'm going to ask each of them to introduce themselves one by one. So first off, we have Jon. Go ahead.

I'm Jon Moter. I'm a Senior Principal Engineer at Zendesk. I work in the Foundation Engineering organization at Zendesk. We provide compute, storage, infrastructure, all the technologies and tools needed for the rest of the Zendesk engineering org to deploy and run their applications. I've been working with Kubernetes and cloud native technologies for probably about five years now, right around the time that Kubernetes first came out. And I've been part of the CNCF end user group for, I don't know, probably three years now, I think. Yeah, nice to meet you.

Thank you, Jon. Next up, Kunal.

Hey everyone, my name is Kunal. I'm a director at Box. I'm in the backend organization, where my team is responsible for the platform that the rest of the engineering team uses to run all of our applications. So we are responsible for everything from Kubernetes to service mesh to observability. Box, and I myself, have been involved with the CNCF and the tools in this space from the very beginning; we were very early adopters of Kubernetes. And we've been part of the CNCF end user community, I think, from the very beginning as well, so I've been involved in all the end user meetings and everything. So nice to meet everyone here.

Cool. Thank you.

My name is Marcin Suterski. I'm an engineer at the New York Times.
I work on a team called River Engineering. We are essentially trying to enable other development teams and engineering teams at the organization to do whatever they need to get done for the business, so providing them with tools, different processes, and a lot of education. And my focus currently on the team is observability.

Fantastic. And Jason.

Hello, I'm Jason. I started the platform engineering organization at PayIt and up until recently led that team. We were responsible for our Kubernetes infrastructure, which we manage ourselves running in AWS GovCloud, so there was no EKS until very, very recently. And we provide the platform for our engineering teams to be able to deliver our solution to our government partners.

Awesome. I want to thank all four of you for joining me today and for working with me on this radar to represent the whole of the CNCF end user community. So we're going to launch into the radar itself. A technology radar is an opinionated guide to a set of emerging technologies. This is the second time that we've run the CNCF end user technology radar, where we survey the different companies in the end user community and ask them to report what solutions they're using and whether they would recommend them to other people, effectively. There are three levels. One is adopt, meaning we clearly recommend it: we've used it for a long time, it's stable. Second is trial: we've used it with some success, so we can recommend it, but maybe it's only applicable for certain use cases, or we only use it in certain ways. The third level is called assess: we've tried it out, we think it's promising, and we recommend that you take a look at it. And then each technology radar is accompanied by some themes, which is anything that the radar team thought was interesting or unexpected or noteworthy about what they saw. As a reminder, this is the second time we've done it; the first was on continuous delivery, which was in June. We're going to keep publishing these once per quarter.

So, first question. Obviously you know from the title already that we chose observability for this radar. Marcin, maybe you can lead us off. Why do you find observability interesting?

I think it's really difficult to run any business or an organization without knowing how it's doing. Having insight into the processes, the products, and engagement with users, for example, is essential to be successful. And given that we are all running systems and providing features and products to users, it's important for us to provide them in a reliable way and understand how we're doing. And as you know, the landscape, including CNCF products, has sort of exploded recently; there are many, many things to look at and be interested in. Starting with metrics, logs, traces, there are many things to analyze, to collect, and to measure. And there are also many different aspects to observability, from tools to different protocols to different processes to different ways of collecting all of those things. I feel like this is something where everyone could benefit from knowing how other people are doing it, so we chose to talk about it and give it a try.

Awesome. I know this is what you do in your day to day at the New York Times as well, so definitely interesting for you. Exactly. Jon, what about you? Do you have any thoughts about choosing observability?
Yeah, just for everyone watching: the way this started, Cheryl invited us to be part of this radar team. It's not like she or anyone else came to us with a topic or agenda or anything like that. We just started with a conversation about, all right, what would be both interesting to us and what we think would be interesting to the larger community. We bounced ideas back and forth, and observability seemed like something, like Marcin said, that is universal. A company can't run and be successful without in some way observing the state of their servers, their users, and that sort of thing; it's something that we all need to grapple with. And there's also been a lot of change and development. When I started at Zendesk five years ago, the set of tools available for doing this was very different from what we have today. So I think it's both an important, universal, and sort of dynamic field, one that at least we've been experimenting a lot with. And I thought it would be interesting to compare notes with both the other people on the team and the larger community.

Cool, I'll give it to Kunal next.

I think it's similar to what Marcin and Jon have mentioned. From my perspective, there's been a rapid increase in the cloud native space: a lot of adoption of new tools, new technologies, and a new kind of paradigm in which developers are writing and operating their code. A lot of them are choosing a microservices-based architecture; that's kind of what's become the norm. And in this kind of massively distributed system, observability is really, really important. I mean, day in and day out we rely on our observability to make sure we're serving all of our customers, and I'm sure everybody here feels the same way. So from that perspective, having a better understanding of what the landscape looks like for observability, and understanding what our peers in the end user community are using and finding helpful for their own needs, is very compelling for us, so that we know how to chart our journey forward and what tools and technologies we can take advantage of. This was very relevant for us and very important. And we figured it's going to be the same for all the other end users, as well as the new people coming into the cloud native space. So that's one of the reasons, from my perspective, why this was a very interesting radar to choose.

Thank you. Now, Jason.

Yeah, I think the topic of observability made a lot of sense. It seemed timely; I think this is something that is top of mind at a lot of organizations. And it seemed like there were a lot of projects, open source and closed projects, standards, vendors, software-as-a-service type solutions. So it seemed like we really had a meaty topic, where we could dig into it and learn something.

Sure. And yeah, I think you're absolutely right: observability is one of the fundamental parts of cloud native. I don't think you can say you're doing cloud native if you don't have an observability solution somewhere in your stack. So we went ahead and asked the end user companies what observability solutions they use. Just to give you a quick view of the kinds of companies that responded, you've got some logos you can look at here.
And these are the company sizes, by the total number of employees. You can see that it's maybe 50/50, with maybe a slight bias towards the thousand-plus employees, so mid to large size companies. And the companies represented were across a range of different industries. "Software" is a bit vague, I'm not quite sure what that means, but you can see the rest is quite a wide spread across different industries. Jon, did you want to add anything before I move on?

Yeah, just one note as we start to get into some of the actual numbers here. We canvassed 32 companies; we sent out effectively a survey or spreadsheet, so we did not do in-depth scientific interviews with super deep analysis here. Companies brought up a larger number of technologies than are represented here. So you might be asking yourself, hey, why isn't so-and-so here? The lack of something in here does not necessarily mean it's not used or we don't like it, but we needed to winnow things down to an interesting and useful set of things to have opinions on. So this is by no means attempting to be an exhaustive, authoritative view; the 32 companies are not the entire industry. But from the data we did see, we're highlighting a couple of interesting bits of information.

Yeah, I think the more interesting analysis will come in six months, or a year, or 18 months, when we come back and do another radar on observability and see how things change over time.

Yeah, thank you for adding that, Jon. You make a really good point that this is not supposed to be 100% objective. I don't even know if you can be 100% objective on this. But hopefully you can see from this at least roughly what kinds and sizes of companies are represented. And this tech radar is really intended to be a guide for people as well. It's not supposed to say here is exactly the one technology stack that's going to be perfect for you. Instead, it's more like: here are different things that these different companies are using, and if you're looking at a range of observability options right now, you could use this as more information to help you prioritize what to evaluate.

Okay, with that, I'm actually going to go into the numbers that we saw. As I mentioned before, we're starting with assess, the things where there was the least consensus. Here we found there were three projects slash tools in assess, and this bar chart represents the votes that people brought in. The green is adopt, the blue-green is trial, and then yellow is assess. And there's some gray for where people put it in hold as well. So, Jon, maybe I can give this to you to start with, just any thoughts about these projects?

The first thing I want to point out is that the observant might notice that OpenTelemetry is effectively a format, whereas Thanos and Kiali are product software systems that you install and use. So it points out another interesting bit about the observability space: there are SaaS providers, there's open source software you can run locally, there are data formats and methodologies. And we decided to keep it fairly broad, asking companies what are you using, and leaving it a fairly open question just to see what people came up with.
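To make that format-versus-product distinction concrete: OpenTelemetry is an API and wire format you adopt inside application code, while Thanos or Kiali are systems you deploy and operate. Here is a minimal sketch using the OpenTelemetry Python SDK, with a console exporter standing in for a real backend; the service and span names are hypothetical, not anything discussed on the panel.

```python
# Minimal sketch: OpenTelemetry is something you embed in code, not deploy.
# Assumes the opentelemetry-sdk package; names here are hypothetical.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer that just prints spans; a real setup would swap in an
# exporter for whatever backend you run (Jaeger, a SaaS vendor, etc.).
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name
with tracer.start_as_current_span("place-order") as span:
    span.set_attribute("order.items", 3)  # hypothetical attribute
```

Swapping the exporter changes where the spans go without touching the instrumentation, which is why a format ends up compared, a bit awkwardly, with deployable systems on the same radar.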
So you might notice there's something interesting there: how do you compare OpenTelemetry to Thanos? That's a bizarre comparison. But again, this is more just about what people are interested in using. These are all relatively new systems; Kiali, I think, has been around for about a year, same with Thanos. So what I posit, at least from our experience, is that newer open source projects in particular generate interest, and companies are checking them out, but we didn't see a whole lot of people who would put all their chips on one and commit to it as a key part of their entire infrastructure. But these seem like promising technologies that have drawn general interest in the group, and are worth spiking on, or at least following, to see if they might be worthwhile for your organization.

Yeah, I agree. It's quite hard to compare: one's a standard, one's an open source project, one is a project specific to other projects. So it is a challenge to compare. Okay, in that case, I'm going to move on to the set of projects that were placed into trial. Here we have Jaeger, Splunk, Lightstep, StatsD, CloudWatch, and Sentry. Kunal, can I give this to you for thoughts?

Yeah, sure. So what we noticed is there are a lot of tools here in the trial phase. One thing we noticed, of course, is that some of these tools got a significant number of the votes that came in, and a big fraction of those were actually people who successfully went on to adopt them. So that's kind of what helped get these tools into the trial phase with the end users who are using them. Some of these names are quite common and popular, and most people know about these tools as well. So there's a good number of tools here that people have experience using successfully for their observability needs.

I think we're seeing tracing as a little bit of a newer thing, and so you can really see that here with OpenTelemetry, Jaeger, Lightstep, things like that, being a little bit more towards the bottom as organizations are starting to experiment with these tools and really get on board with them.

Do any of your companies use any of these projects?

We are big users of Thanos.

So for us, we use Splunk, StatsD, CloudWatch, and a bit of Kiali. We also technically use OpenTelemetry, since it's a format.

There are a number of these tools that have been mentioned here that we use as well. I think if you're running on AWS, CloudWatch is something that you have to be at least familiar with, and many, many organizations have already sort of assessed what it can do and what it's capable of. And CloudWatch and StatsD are things that have been available for a long, long time, so people are familiar with both of them. So we see some more maturity in these as well.

Apparently, Cheryl has just lost her audio, so she's asked one of us to say something. So next we can move on to the, can you advance the slide, Cheryl? Yes. Oh, it's all gone to heck. Can we move on to the adopt? Okay, so let me talk about it a little bit. The adopt category is where we see products and technologies that are actually pretty well established.
Things like Prometheus and Grafana and Datadog, Elastic, OpenMetrics have all been present on the market, providing tooling and solutions, for a long, long time. And they are already pretty mature; people and companies can use them and rely on them pretty extensively. So when we look at those, those are the tools, I think, that actually solve problems for people in reasonably good ways. Compared to the products in Assess and Trial: Trial and Assess seem to me like areas where, in many cases, people are still looking for solutions, trying out those tools to check whether they can actually solve those problems for them, and hoping that they will. With Adopt, we can rely on those things and be pretty certain that they will be reliable for people.

Awesome. Thank you. I think my audio is back, so I appreciate the summary that you just gave; I think that's a really good summary. There were a lot more solutions that people gave answers for that are not listed here. I think we had more than 30 in total, and we had to choose how many we could actually fit onto one radar, just to not be overwhelming. So, Jason, I'm going to ask you: how did you find creating the radar? Did you find it easier or harder than you expected?

Yeah, the proliferation of tools and vendors and projects in this space made it a challenge. So, dovetailing right into what you just finished talking about: I think we looked at that as a blessing, like, oh, there are a lot of tools, that'll be really helpful. But because there were so many tools, we know there were projects that CNCF end users are using where we maybe didn't get any respondents that were using those tools. And so we can't make a judgment about a tool that no one who responded is using, or where there were just very few respondents using it. So it was both a blessing and a curse, unfortunately. And that was, I think, the hardest part, and I think it made it a lot harder than we were anticipating going into it.

Actually, I'm going to go to Marcin next.

I think what I found interesting about it is that we found, let's call them, clear winners, and things that are in between, where you see clear adoption but not across the board. Prometheus and Grafana are pretty common everywhere, or almost everywhere. But there are tools that have almost 100% adoption, but on a smaller scale. And that was sort of interesting to me, to see that there are tools that seemingly solve problems really well, but are not as popular or widely adopted as things like Prometheus and Grafana. And it was difficult for us to then judge where they should land on the radar, because we didn't necessarily want to, let's call it, promote a tool that is good but not widely adopted yet.

One of the experiences I had is that we got information from the other end user companies, but in a lot of cases it just ended up raising more questions for me. Like, I wanted more: there were several companies that said they were adopting multiple, arguably competing products, you know, Prometheus and Datadog and Splunk all as tools they used. And I was wondering, okay, is one of them a legacy tool that you're moving away from to the new one? Is it different teams?
Is it different groups in the organization? Do they have different use cases? So there was this wondering what all the various stories are, and the in-depth details. With the radar, we're trying to flatten everything into kind of a two-dimensional grid of adopt, trial, assess. So I was wondering about all the stories involved here and the reasoning behind things, while still trying to say, okay, we need to converge on a useful story instead of just asking more and more questions. But it was really, really neat to see the feedback from the other people on this team and the information we got about the end user community companies.

Okay, I sort of came to this evaluation with an open mind, because we as an organization went through a pretty extensive process of evaluating what we actually want to do for the future. We did POCs with open source tools, but at the end, we decided to go with a SaaS provider. And I was very curious about what other organizations do, and how they do it, and what kind of tools they're adopting for all those different use cases and data points that we now have to keep track of. And the number of tools was sort of surprising to me, like how many tools there are that I was not aware of, at least some of them.

I'd like to hear a little bit more about your thinking, Marcin, as you were doing that evaluation. What are the things that you thought were important about choosing an observability solution?

So, first, it was the maturity of the products that we were evaluating or wanted to run. The second part was: as an organization, do we want to invest in building our own observability ecosystem, or do we want to hand it over to someone that is probably better than us at it, even if we have to pay for it? We decided that we'd rather focus on building our own business, or helping our own business, rather than learning all the things that other people already know and are experts at. That was essentially it.

One thing that Marcin just said triggered a thought in me: in the five years I've been at Zendesk, how we use observability tools has changed. When I first joined, we had an ops team, and they were responsible for production; they looked at graphs and dashboards and that sort of thing, and they were the ones on call. That morphed into an organization where the product engineering teams that build a service monitor that service. So now the entire engineering organization needs to interact with observability tools; the teams are the ones getting paged or alerted, and they need to look at their SLOs and that sort of thing. So the scope of who this needs to work for, and the use cases, have changed dramatically, and our tooling has needed to evolve to match. Again, we don't have concrete data on this, but I imagine a lot of the companies in this group have similarly gone through changes and evolutions over time.

Yeah, it's worth noting as well that this is a snapshot in time. If we did this a year earlier or a year later, we would end up with something different. This data was collected last month, August 2020, and really reflects how people see observability right now.

All right, so now I'm going to talk about the themes: what things did you find interesting or noteworthy about what you saw?
The first one that the radar team came up with was that the most commonly adopted tools are open source. And I thought when I saw this, well, duh, right? Of course this is open source, because everything is open source in this world, right? So, Jon, maybe you can comment on why this was interesting.

Yeah, I mean, like you said, it's kind of unsurprising insofar as the end user group is a set of people who are almost all running Kubernetes, either managed or self-hosted, so we've all kind of bought into the idea of open source, community-supported, cloud provider-supported technologies. So it kind of makes sense that we use other ones as well. But at least in our experience at Zendesk, once you get to a certain scale of data and company, it takes a lot of effort and time to actually run a lot of these open source tools at scale. It's easy to spin something up in a weekend following a blog post, that sort of thing. So it was interesting to see that even at fairly large scale, a lot of these companies are investing the time and energy to run their own Prometheus servers or Grafana, and manage the complexity there, in many cases rather than using a SaaS provider or paying someone to handle that stuff. Now, to be fair, some of the companies like Datadog and Splunk were in the upper range of commonly used, so I think while open source is most common, even amongst these 32 companies there's a variety of approaches, and financial trade-offs and work trade-offs, that we've all taken.

Right, so that was actually very surprising to me, that so many organizations are running those open source tools, like Jon said, probably at a bigger scale. Because it's actually the opposite of what we did, or how we evaluated our situation. Or maybe those organizations didn't yet get to the point where they had an opportunity to evaluate what they actually want to do, and they just go with the flow: they started with Prometheus and Grafana, and as they are growing, they expand the deployment of those tools. That was very, very surprising to me at the beginning, when we got the results.

All right, at the New York Times, and same with you at Zendesk, you both use SaaS products for observability, right?

Yeah, I mean, we are a SaaS company, so we're all like, yeah, you know, SaaS is a good idea, everyone should do that. For a while, logs for example: way back in the day we just had a log server that people used grep on, and then eventually we moved to a system where we were pushing logs to Kafka, and Kafka goes to an Elasticsearch cluster. And then we realized that that Elasticsearch cluster was getting bigger and bigger, and we realized, okay, we either need to hire several engineers just to keep that up and running and tuned and scaled and that sort of thing, and that's expensive. Or we should try to find a SaaS provider to handle logs for us. And we looked at the cost of hiring people, the opportunity cost, and the fact that we figured a good SaaS provider would probably do a better job of managing logs than three or four or five engineers on our team would. So we decided, okay, let's take that money and give it to a provider, Datadog in this case.
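The pipeline Jon describes, application logs pushed to Kafka and drained into Elasticsearch downstream, is a common pattern. Here is a minimal sketch of the producing side only, assuming the kafka-python package; the broker address, topic name, and log fields are hypothetical.

```python
# Minimal sketch of the log-shipping pattern described above:
# application logs -> Kafka -> (downstream consumer) -> Elasticsearch.
import json
import time

from kafka import KafkaProducer  # assumes the kafka-python package

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # hypothetical broker address
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)

log_record = {
    "ts": time.time(),
    "level": "INFO",
    "service": "checkout",  # hypothetical service name
    "message": "order placed",
}
producer.send("app-logs", log_record)  # hypothetical topic name
producer.flush()  # block until the record is actually sent
```

A separate consumer, for example Logstash or a small indexing service, would read from that topic and bulk-write into Elasticsearch; that downstream cluster is exactly the part that grew into the operational burden Jon mentions.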
But it was a fair bit of back and forth, trying to decide between doing it in-house versus admitting that's not our core competency and having someone else do it for us.

The other problem that we ran into was that we want engineering teams to be independent, but that came with the cost of them deploying and maintaining their own observability infrastructure, which in turn caused another problem: there was almost no transparency across the organization about how those systems perform, or where those metrics are, where those logs are. So one of our first goals was to consolidate everything. And with that, the next step was to use a SaaS provider, to just give people tools and processes to adopt the platform. It was easier than managing the infrastructure for all those teams, especially as they had different use cases and different requirements; it would be difficult for us to handle it all at the same time.

Yeah, I think that makes sense, and the trade-off between in-house and a SaaS provider is something that every company struggles with, or has to make a decision on. So I'm going to go to the second theme or pattern, which was that there's no consolidation within the observability space. Kunal, your thoughts?

Yeah, this one was actually very interesting from my perspective. What we noticed was that a large number of companies had given opinions on a large number of the tools mentioned here, which means that they have actually tried, and have experience with, many of these tools. In fact, I think more than half of the companies are using five or more of the tools mentioned here, which is a lot of tools for observability. And so as the radar team was looking at the data and trying to understand it, one of the things we realized was that the cloud native space is a very thriving community. There's a lot of interesting innovation happening here, and so there are a lot of new tools coming in that are looking to solve some of the problems people hit as they build more cloud native systems. As these new tools come in, people are looking at them to try and understand how to use them. I think that's part of why a lot of people have at least some experience with these tools, enough to give an opinion on them.

But the interesting thing we also noticed was that a large number of these tools are actually being used on an ongoing basis. Part of the reason we think that's the case is that observability itself is a very interesting art. You will often hear people talk about observability in the sense of logs and metrics and tracing, so you're basically looking at a lot of data from a lot of different angles. And a lot of these tools have their strengths in one or maybe a couple of those, but not necessarily all of the dimensions in which you're interested in understanding your data. So that's probably a contributing factor to people having to choose more than one tool in order to understand all of the data that's coming in, and to be able to make decisions based on that data.

And then finally, one thing that a couple of us on the radar team had experience with, which I think contributes to this as well: a lot of us are not in the business of building observability tools ourselves. Our core businesses are somewhere else.
And so often, once you make a choice of a tool and you invest heavily in adopting it, it becomes very hard to move to a different tool; the cost of moving completely from one system to another is pretty high. Often there isn't enough ROI to want to make that investment. So that's one of the contributing factors: once you adopt a tool, you tend to stay with it. Even though you might introduce another tool, or give it a shot to see if it solves other problems, you may not necessarily fully migrate off the old tool.

Yes. And sometimes it can feel like every month or every quarter there's a new way to do things. A new way to deploy your infrastructure or workloads, new platforms to deploy to; you know, we went from VMware VMs to containers to cloud functions. All those things require different ways to observe your workloads and your infrastructure. And that comes with the cost of adopting yet another tool to do those things for you, and yet another protocol or pattern. And it feels like a natural way to go, because the technology progresses and it requires us to try and assess things constantly. And there are just more and more things showing up on the market.

I think that's probably one of the reasons, and it kind of ties together both the first and the second theme you see here: when you choose an open source format, it actually makes it easier for you to experiment with other tools and move on to a different tool, from at least that perspective. And so that's probably why more and more people gravitate towards using open standards, as opposed to closed standards, when they're choosing their tools.

So one of the things that we did, even though we adopted a SaaS platform, is we still stayed with OpenMetrics for metrics, because we do want the flexibility to migrate somewhere else if we ever need to, even if it comes with a higher cost of just running those systems.

Yeah, I would say on consolidation, with my experience at Zendesk, there's a constant conversation going on here about the relative value and use cases of different tools. Metrics, for example, don't give you a whole lot of granular detail of exactly what happened. Logs give you a lot more information, but they're more expensive, they take more space, and that sort of thing. Then there's distributed tracing. So we have multiple ways of monitoring stuff, which are close to each other but have their own quirks. So what's the relative value, and what is most useful to teams trying to monitor systems, to know if there are errors, to be able to troubleshoot things if an incident does occur, to look at historical trends? There are all these trade-offs. So we end up running a couple of different tools, both for different use cases, and I think we're still experimenting. And you can see, like, Slack came out with a blog post the other day about their, what is it, yeah, tracing system, and Netflix has their Edgar. So it's a constantly churning domain, with lots of really interesting technologies coming out all the time.

And I see we have a comment from, I guess, one of the founders and maintainers of OpenMetrics. He said, I think this was after Marcin's comment, that's in part why I started OpenMetrics: to force metrics into a label-based system and then let the best one win. So he loves to see that it's being used and thought about as such. I think it's a great comment.
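To make "a label-based system" concrete: in the Prometheus/OpenMetrics model, every series is a metric name plus key-value labels, exposed over HTTP in a text exposition format that any compatible scraper can read. A minimal sketch, assuming the prometheus_client Python package; the metric and label names are made up for illustration.

```python
# Minimal sketch of label-based metrics in the Prometheus/OpenMetrics style.
# Assumes the prometheus_client package; names are hypothetical.
from prometheus_client import Counter, start_http_server

REQUESTS = Counter(
    "http_requests_total",          # metric name
    "Total HTTP requests handled.",  # help text
    ["method", "status"],            # labels: dimensions to slice on later
)

start_http_server(8000)  # serves the text exposition format on /metrics
REQUESTS.labels(method="GET", status="200").inc()
REQUESTS.labels(method="POST", status="500").inc()
```

Anything that can scrape that endpoint, whether Prometheus itself or another collector, can ingest the same series, which is the portability argument Marcin makes for staying on open formats even when using a SaaS backend.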
Yeah. Any other thoughts on these first two themes? We have one more to discuss.

We found this second one interesting compared to the first radar that I did, where there were fewer projects and it was sort of easier to choose. I felt like this one had a lot more projects, and it was much less clear which levels they should fall into. And, I was corrected: he's the founder. So, I'm sorry, founder of OpenMetrics.

The third theme: Prometheus and Grafana are frequently used together. Again, I thought, well, maybe this is obvious. But Marcin, maybe you want to tell us why this is interesting.

Like, it is not surprising, but it is also surprising that so many organizations are deploying these essentially as a bundle. We did run our own Prometheus with Grafana. In any tutorial or any guide that you would Google for answers, they both come up. If you're deploying Prometheus, you're essentially going with Grafana. Sometimes there is a mention of Graphite, but those two come together as a bundle, and even if you look at things like Helm charts or different deployment patterns for those systems, they are bundled. It may be that they just essentially work very well with each other and provide people with what they are looking for. And because the pairing is so widely adopted, it is now easy to deploy and maintain them as a bundle, essentially.

Jason, do you want to add to that?

No, I think it makes a ton of sense, but it was really striking. You may not be able to see it in the radar, the way the data is presented there, but looking at the responses, it was almost 100% overlap. Everyone that was using Prometheus was also using Grafana, which was interesting. I don't think it was exactly 100%, but it was close enough. It was probably close, yes.

Yeah, go ahead, Jon.

I just want to point out that, you know, at Zendesk we've got like 1,000 engineers, we have a foundation team, we've got a vendor review board. But at the end of the day, usually when we are trying something out or doing a proof of concept, there is some one engineer who is Googling and reading blog posts to figure out, okay, how do I get this up and running on a test cluster, or something like that. So the process it goes through initially is the exact same as for a hobbyist or a three-person startup or something like that. So a couple of tools that fit together nicely and make it really easy to see value, you know, make it a lot easier to get to the point of, oh, this is cool, I'm going to go tell my boss about this, and then work it up the chain. So I think that integrations like that are useful in the open source community.

Now, Kunal, if I remember correctly, you don't use Prometheus or Grafana, right? So do you want to tell us why?

Yeah. So we got into the observability game right around the time when we were starting to get into the Kubernetes side of the world as well. And when we started on that journey, Prometheus actually wasn't around. And so we made a bet on a different tool. We instrumented all of our systems, and today we have millions of metrics being emitted, and over 400 engineers in the company who are completely trained on the tool and understand how to use it. We have hundreds of dashboards and thousands of alerts set up.
And so at this point in our journey, kind of like theme number two says, it's really a lot of investment for us to move from our existing tool over to something like Prometheus and Grafana. And that's not just the cost of redoing all the work; keep in mind we'd also have to retrain 400 engineers on the new tool. There's going to be a period of time where we're probably going to live in two worlds, the existing world and the new world, and going through that whole transition just seems like a lot of work. We just don't see enough ROI in making that investment at this point. And again, it's not contributing towards our core business; it doesn't really buy us anything in terms of where our business wants to go. So that's kind of what's holding us back. If I were to start from scratch today, I'd probably go with Prometheus and Grafana, which are the popular choices and where end users are today. But given where we are, and the investment we've made so far, it's just a very hard sell to make.

Yeah, that makes sense. And someone in the comments also said that the Prometheus team deprecated PromDash in favor of Grafana in 2015 or 2016 as the officially recommended dashboarding tool. So I guess that was somewhat earlier on. And yeah, it's much harder to switch once you've built a few years of investment around one project.

So, putting that all together, this is basically what the final radar looks like. In Adopt, we have these five projects: Prometheus, Grafana, Elastic, Datadog, OpenMetrics. In Trial, we have six, and then three in Assess. And I'm going to ask each of our panelists for just one thought or takeaway, something that you learned from going through this exercise. So, Jon, can I start with you?

I think it was nice and somewhat validating to discover that so many companies use so many tools, that it's not just us that has three or four different observability tools running at the same time. And we were all sharing thoughts and ideas with the other people on the team, like, oh yeah, this is something that we all struggle with. I think the value of talking to your peers, either to get advice, or commiserate, or just to have someone with an opinion on this who isn't biased by trying to sell you one thing or another, is really valuable to me.

Jason, you're next.

Yeah, this was a really, really fun process, and it really wouldn't have been possible without Cheryl and Julie's coordination, so big thanks, mad props to them for that. But my thought on the radar and the process is that, again, I want to reiterate that there are a lot of tools, and unfortunately, because of the subset of data that we have, we can't make judgments about tools that this subset of CNCF end users didn't use widely. I know there are a lot of good tools that didn't make the cut because they didn't have the votes and we couldn't make a judgment about them. So that's not a reflection of the quality of the projects or tools that aren't on here.

Yeah, definitely. After the last one, a few people asked me, how do I get my tool onto this? And I was like, you can't; we can't just add it if the data is not there, because it comes from these companies that we've spoken to. Marcin, thoughts, takeaways?

So I really, really enjoyed the process and the collaboration.
I'm happy that I was able to learn how other organizations do things. As for the radar, I read it as: there are things that solve problems really well; there are things that solve problems well for some people; and there are tools for things that people hope will be solved for them in the future. Another thing, about OpenTelemetry: it's something that we are excited about and waiting for. And when I think about Thanos, when we did that POC with Thanos, we were really hoping for it to solve the storage problem for the metric data for us. As those things mature and get better, they have a chance to solve those problems really, really well. And people are anticipating them.

Awesome. I'm glad that you enjoyed the process. And Kunal, last word to you.

Yeah, so I will echo what the other panelists have said and thank Cheryl and Julie and the entire CNCF team for helping shepherd this whole thing; I think it's a very valuable effort. I also want to thank all the fellow panelists. It's been a lot of fun having these conversations, learning what everybody thinks about what's happening, and trying to come up with some way to wrangle all of this data together and present it in some meaningful way. I'm super excited, actually, to see the large number of tools, the various kinds of problems they solve, and how end users are using them. For me it really reflects some of the challenges in running a distributed system in a cloud native way. I'm super happy to see the amount of interest and investment happening in the industry, leading to newer and newer tools that are looking to solve some of the problems that we as end users face in trying to build this kind of architecture. A big takeaway is that this observability landscape is pretty large, with lots of people looking to solve interesting problems. So I would encourage the people who are building and creating these tools to continue all the hard work they're doing, and the end users to share all of their feedback with the CNCF and the creators of these tools, so that they can continue to iterate and make these tools better, and we as end users can benefit from that.

That's a great summary, and I also want to say thank you to all of you for working with me on this. I've actually enjoyed the process a lot as well, and learned a lot from all of you.

So, last thing to mention: we have a new website, radar.cncf.io, where you can go and see all of the information that we've just run through today, and you can find the previous radars as well. If you want to get involved, I'd love to hear what you think the next radar should be on. You can go to cncf.io slash tech radar; there's a GitHub issue where you can vote on things that you think the next radar should cover. If you want to contribute towards future radars, then please come and join the CNCF end user community, where you can hang out with fantastic people like these. Obviously this is only for end users, so vendors are not able to join, but we would love to have you in the community. And lastly, if you just have general thoughts or feedback about how we can make this radar more interesting and more valuable to you, or what else we could do, then just email info@cncf.io.
We are pretty much out of time, so I'm sorry for not being able to get to questions, but again, you can put stuff on that GitHub issue and I will go and check it out and answer afterwards. Thank you so much. I really appreciate everybody's time chatting today. And just the last things to wrap up: I want to say thank you to all of our presenters for coming and joining today. The webinar recording and slides will be posted online later today. We look forward to seeing you at a future CNCF webinar. Thank you, and have a great day. Thank you.