Hi and welcome to the June 2021 edition of the CNCF End User Technology Radar. I'm really pleased to have with me today the radar team, featuring representatives from Fidelity, Mattermost and Meltwater, and today we're going to look at the technology radar that they've put together. So let's go from the beginning. My name is Cheryl Hung and I lead the CNCF End User Community; you can find me on the internet at @oicheryl. The CNCF End User Community is a group of more than 140 companies, featuring some of the biggest and smallest companies out there, who are all using cloud native and Kubernetes. The goal of the CNCF Technology Radar is to find out the ground truth: what is the reality of cloud native as it looks today.

The CNCF Technology Radar typically looks like this. We have three rings: adopt, trial and assess. We look at a specific topic which the radar team has chosen, look at a few different tools and frameworks within that topic, and place them into each of these three rings. Adopt means a clear recommendation: many companies and many teams have used it successfully. Trial means we've used it with success and recommend a closer look. And assess means we've tried it out, it seems promising, and you should take a closer look at it when you find the need.

I'd like to welcome the members of our radar team now and ask them to introduce themselves. We're just going to go left to right. So Gabe, please go ahead.

I'm Gabe Jackson, I work at Mattermost on the cloud platform team. Even though Mattermost has a history of developing a communications platform that's on-premise focused, recently we've also been delivering it as a service in the cloud, so that's what my team is responsible for.

Awesome. Federico?

I'm a principal engineer at Meltwater, part of the teams in the engineering and enablement mission, helping other development teams at Meltwater work efficiently on their missions and providing the base platforms for them to deploy their applications to. My colleague Simone is unfortunately not able to join us today, but he was part of the radar team, contributing content ideas and his expertise.

Yep, definitely. All right, let's go next to Rajan.

Hello everyone. My name is Rajan. I'm a VP at Fidelity Investments; my focus is mainly cloud platform architecture. We are responsible for building a cloud native platform for Fidelity where application teams and development teams reap the benefits of the latest cloud native technologies without putting much effort into it, so we try to make it easy for them. Currently we are managing somewhere in the range of 250 to 300 clusters. So that's me.

Awesome. And finally, Neeraj.

Thanks, Cheryl. Yeah, Neeraj Amin, I also work at Fidelity, leading the cloud platform teams, primarily focused on the CSPs and the journey that Fidelity has taken to migrate applications to the cloud, specifically on the Kubernetes platforms that we're building out and architecting.

Fantastic. And I want to thank you all for giving up your time, both today and over the last few weeks, to put this together and contribute your expertise back to the larger community. So, first question. As you've probably seen, we chose multi-cluster management as the topic of this radar. Why is multi-cluster management something that's interesting to you right now?

Yeah, I can start with that.
So I think this is one topic where it really depends on whether you're a small team or a small organization, or you have a lot of application developers in a large organization. Those sorts of things really affect your choice of how you would like to manage it. And this is one of the areas where there are a number of options available, but typically there is no clear choice as to which one is better. So this is one of the topics where, based on the results, you will really find it useful: if you're managing clusters in a certain way today, you get the reassurance that others are also doing it that way, and if not, you get to know the reasons why that's not the case. So it's a really interesting and very important topic, because everything starts with cluster creation and provisioning.

Cool. Anyone else have thoughts about multi-cluster, perhaps what they're doing right now?

Yeah, I can jump in. Regarding the topic itself, one of the things that maybe caught most of us off guard, or at least me, was that for some of the other things we had discussed as possible topics, there was sort of a clear-cut top three or top five set of options. But when this topic was brought up, there was a lot of organic conversation immediately, a lot of people doing a lot of different things, and so it was a perfect thing to dive into. And we have the same situation at Mattermost: we use a bunch of different tools to do different things, and depending on our needs at the time, we'll pick a totally different tool set. So that all ties into the topic.

There's also the fact that everyone starts with one cluster and then expands; the journey continues and you grow your environment into a large environment, and there is no clear path out there. So this is an interesting journey to document for teams starting out: okay, I'm here now, I will probably end up with a large environment, what possibilities are there, what are others doing, and learn from that to avoid the pain that a lot of us have experienced, and maybe no longer have to experience.

I'll just add, and I kind of agree with Federico, that the scalability concerns around trying to figure this out make this a good topic to pick. Clusters themselves are becoming more and more like cattle, and as we grow, especially at Fidelity as more teams adopt and move over to Kubernetes, I think for a platform team that's essentially trying to manage this, this topic is pretty important.

Fantastic. Okay, so after picking the topic, we basically went out and asked the end user community: what are your thoughts on this, what are you doing right now, what things do you not use or have you moved away from? And just to give you an idea of the different kinds of companies that responded, we've got some of them listed here. Most companies fell into sort of generic software industries, which can cover a lot of different things, but I think there was a slight bias towards the larger companies, which perhaps makes sense: if you're talking about multi-cluster management, you're more likely to need it if you're a larger company with more complex infrastructure. So, at this point, what did you expect in terms of results?

I can kick that off. The funny thing is, I didn't know what to expect.
On one hand, I kind of expected that there'd be a lot of varying answers. I assumed that as the number of employees in the organizations skewed towards the higher end, they would have a more clear tool set and infrastructure stack, but it turned out that that wasn't necessarily the case. I was expecting maybe some sort of hidden gems, like, here's a way you could do it that maybe we weren't expecting. When this topic initially came up, I was sort of in the camp of thinking that perhaps Mattermost was doing it in a unique and maybe not completely optimal way, so I was definitely pleasantly surprised to see that that wasn't necessarily the case, that a lot of people were using a lot of different tools, and that this is definitely an interesting problem that's still being tackled.

Yeah, I want to add to that. Over a period of two years, and I still remember creating the first cluster, we started with one cluster and now we are at something like 200 to 300 clusters. Many times in that journey we felt the same thing: are we doing things the right way? Because we had to do some custom things, do things in a slightly different way, especially the scaling part. I clearly remember that for the first six months we were only at 10 clusters or so, doing a lot of experiments and making sure the stability aspects and all those things were there, and then we scaled quickly, so at that time we had to do things in a slightly different way. So many times we did feel the same thing: am I on the right track, is it okay to do things this way? But looking at the results, it's definitely reassuring.

Federico, I think you're on mute.

Yes, sorry. What I expected to learn, or was curious about, is that since the end user community is spread over a variety of industries with different requirements, different policies, different rule sets, whether there is a pattern emerging: if you're in this industry you manage your environments with this, and if you're in that industry you manage with that. So I was expecting perhaps to discover some patterns there, and also, as Gabe said, hidden gems that are not really known but that it would be good to give a larger platform to become known in the community.

And I'll add the same: going into this, I thought there would be some conformity across some of these toolings. That was my expectation, or opinion, going into this, so it was interesting to see the results.

I'll follow up with a question of my own to Gabe, since you mentioned hidden gems. Why do you think there aren't really those hidden gems; why do you think everybody has deployed it in kind of separate and unique ways?

Good question. Just going with my gut on this one, I think it's just because it's a hard problem. As was discussed earlier, with the Kubernetes platform we've solved the idea of running apps and services as sort of cattle; we're now at the point where the clusters themselves need to go through that same uplift, and I think it was just something that wasn't initially tackled in the same way as the core platform was.
And it's in certain ways even more complex than the Kubernetes platform itself, so if I had to guess, it's just the next logical step, that's what we're all working towards, and it's also a really hard problem to tackle.

We'll dive more into the themes of it in a little bit, but first of all, let's take a look at the results. The first thing to note is that we actually have two radars this time, not just one, and these were split between cluster deployment, and core services and add-ons. So, first question to the radar team: why do we have two radars and not one as usual?

So I think it kind of evolved. I don't think any of us were expecting to end up with two radars at the end of the day, but as we went through the questions and the radar itself, we started figuring out that two radars would be required: one that would handle the infrastructure piece, or the cluster deployment aspect of it, and another one that, tooling-wise, would answer what you do almost on day two, or what you build on top once the infrastructure provisioning is done; the day-two operations on the cluster itself, outside of the infrastructure. So I think it was an evolution as we dug into answering this question.

And there are some other interesting things I see here as well, for instance private cloud managed Kubernetes and public cloud managed Kubernetes. Would someone like to talk about why these are grouped in this way? Public cloud managed Kubernetes is maybe understandable, people know what that means, but what is private cloud managed Kubernetes, what kind of things fell into that?

Yeah, I can take that. What we have seen is that organizations with a smaller number of clusters depend on the regular installers like kops and others. When the number of clusters grows, there's a tendency to move away from these installers and to use managed Kubernetes services. For organizations in the public cloud, that would be the offerings from the public cloud providers; for organizations with their own data centers, not being in the cloud, even those tend to use packaged Kubernetes offerings, which would be managed Kubernetes offerings that resemble the ones you would expect and see from the public cloud. So the pattern there is: whether you're in the cloud or in your own data centers, the more clusters you manage, the stronger the tendency to move over to managed Kubernetes offerings.

Another aspect that I saw in these results, compared to the other radars, is that the adopt ring is pretty much full, while the other rings are a little bit empty compared to the other radars. During our discussions we said that if you're operating Kubernetes, and you're in production with Kubernetes, you have found your tool set and you will stick to it and continue to work with it, rather than experimenting a lot and switching a lot of these things out. So you're either in the adopt phase and using those tools, and perhaps from time to time you look into the assess part.

Yeah, just to add to that: with respect to whether it is private or public, I think the keyword there is managed.
So, as the number of clusters increases, one way to look at it is that the complexity of managing control plane components, etcd and stuff like that, is going to get tricky. That's one aspect, but the other aspect, at least from the Fidelity side, is that we wanted to spend that time instead on other stuff, where we add a lot more features that benefit the application teams, the things that make it really easy for them to consume the technology. So we chose a strategy to focus that time on those things, so that things get better and easier for the app teams to use the technology.

Yeah, for sure. Sorry, I was just going to say, regarding the radar itself and the fact that there are a lot of tools in adopt: we actually really challenged ourselves on those assumptions, of whether these all need to be in adopt and why there are so many. And I think it actually is a good way to visualize just how tricky this cluster management problem still is. Over time we'll see things change, but right now you can see it was almost, in a way, a forced adoption: you have all these tools and they help you in a very specific way, sometimes in a couple of ways, but you can't really cover the whole issue you care about with just one or two tools, and then maybe you're assessing another three or four. You were sort of pushed into a spot where you needed a lot of them, and that's why I think a lot of them ended up in the adopt ring.

Yeah, exactly. And both of these radars have custom in-house tools in the adopt section; I think that ties back to the earlier point that we don't have a clear-cut winner yet, so folks are trying to bridge that gap where possible, or where needed.

Yeah, and to give a little bit more detail on that: in the answers we have seen, even for organizations choosing the managed Kubernetes offerings, there was nearly a 100% overlap with custom in-house tools. So while you're using and trying to get the benefits out of a managed service, it's not enough; the managed service only provides so much, and you need to complement it with custom in-house tools that help you do the work and the setup that is needed for your own organization.

Awesome, this is really great commentary. I just want to move on now to the specific themes that we pulled out of this and look into those in a little bit more detail. The first one was: there is no silver bullet for multi-cluster management.

And we summarized it, as Gabe said: while there are all these tools, there is no clear winner, and you need a combination of tools to do the setup that is required for your environment. As well, as I said just a couple of minutes ago about the managed Kubernetes services: they cannot, or they are not, giving you the silver bullet; you need to complement them with extra tools, or with extra custom in-house developed tools, to overcome the lacking features, the lacking possibilities, of what is out there. The other thing is, since there are so many tools required for this, it feels like you need to come with your own glue to put these things together so that they stick together and work together.

Yeah, a lot of nodding heads there on needing to glue everything together.
I definitely agree with that, and going back to the idea of the hidden gem, I think it ties directly to this point: we were all sort of hoping maybe there's a silver bullet out there, or something at least a little bit closer to that, that we could all start using. I don't think we necessarily saw that pop up, but I definitely agree that one of the common themes was that glue, as mentioned; it's a really good point.

Yeah, and a lot of this is where I think the sector, or the industry, or the company with its particular rule sets, matters. At Fidelity we have lots of regulations and security concerns, so part of the glue is to handle some of those. I know different companies have different hierarchies of how they set up accounts or subscriptions, etc. All of that ties back to needing some custom tooling or glue that kind of meshes a couple of toolkits together.

Okay, let's go on to the next theme, which you've discussed a little bit already: cluster management often requires custom-built in-house solutions. I'd like to know a little bit more about what those solutions are, what are you building them for?

I can probably start with that. Typically, when the problem statement is clearly defined, even though you start with a number of tools, over a period of time you'll see clear winners. But in this case I think the problem statement itself stretches a little bit here and there, depending on company policies and stuff like that. I'll give you some examples. Some companies might take an approach where the app teams actually go get the cluster and then manage it from there; they just go to the central team to get the cluster provisioned. You have another set of teams that want the central team to manage the entire platform, and that, for example, is the reason at Fidelity to have the custom in-house solutions. We took an approach where, instead of looking at clusters separately, and the add-ons and features on top of them separately, we decided to look at it all as one platform. What I mean by that is that from an application team's or development team's standpoint, they look at one platform version, say a Fidelity platform version 1.0, and behind the scenes that could be a 1.18 Kubernetes cluster, a specific version of a set of add-ons, a specific infrastructure setup, and so on. If you want to put all these things together, you sort of go down the GitOps route and so on, and in our case we came up with a custom solution where teams can just go and describe what they need in plain YAML files, and behind the scenes a lot of these tools work together to make that happen. That is one example. The other one is that we decided to take the infrastructure setup into account as well. One of the tools we've built, alongside the cluster provisioning, does the infrastructure setup; it executes CloudFormation templates and so on. The main point here is that the versioning is mapped: this particular version of the cluster provisioning works with this set of CloudFormation templates, the specific way you set up VPCs and so on. Everything is version controlled.
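To make the "describe what you need in plain YAML" idea a bit more concrete, here is a purely hypothetical sketch of what such a platform descriptor could look like. The API group, kind and fields below are invented for illustration and are not Fidelity's actual format; the point is only that a team states a platform version and the tooling resolves it to pinned component versions behind the scenes.

```yaml
# Hypothetical illustration only, not Fidelity's actual resource.
apiVersion: platform.example.com/v1alpha1   # placeholder API group
kind: PlatformRequest
metadata:
  name: team-payments-dev
spec:
  platformVersion: "1.0"   # e.g. resolves to Kubernetes 1.18 plus a pinned add-on set
  cloud: aws
  region: us-east-1
  size: small               # mapped internally to node pools and VPC/CloudFormation templates
```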
I'll give you another example, which we have done as an open source tool from our side. We wanted a tool where developers can simply plug in their Active Directory credentials; they have an identity, which is the Active Directory credentials, and simply by plugging that in we want them to get access to the cluster. In our case we have a tool called kconnect. Developers plug in their AD credentials and it automatically, behind the scenes, figures out, based on their credentials, what clusters they have access to. They have clusters across clouds, so it'll automatically list: here, you have access to five clusters in AWS, two clusters in Azure and five clusters in Rancher. They just select one, and behind the scenes it wires up the connection, so they don't have to manage kubeconfig files and stuff like that. This might be trivial if you have a five-member team, but when you're talking about 10,000 developers in an organization, even small things like this make a significant difference. So these are some of the examples where you still need custom-built solutions.

I don't know if others want to add anything. At Mattermost we had to develop a tool to basically allow us to scale our custom clusters. For the majority of our workloads we decided not to use a managed solution, and we used kops, which is fairly flexible; if you're not familiar, kops allows you to just pick a public cloud and deploy a Kubernetes cluster there. But one of the things that is inherent with kops is that you just run these commands and manage it that way. We needed to scale, so we needed to build a bunch of clusters, upgrade them and manage them, possibly in parallel. So we developed this thing called the cloud provisioner; it was our custom tool and our way around this problem of how do we retain control: we can choose our Kubernetes version, we have access to the master nodes, some of the things you have to give up when you go with a managed solution. How do we keep all of that? That was where the glue came in; we had to build this tool to do it. It works fairly well and allows you to scale, but it just shows you that the tools need help to get them to a spot where they're as useful as they could possibly be.

Cool. All right, let's go on to the next theme: common tool combinations include Helm with operators, and GitOps with Argo or Flux. Neeraj, do you want to start on this?

Yeah, sure. So I think this goes back to the second radar. At a certain point in time, at least at Fidelity, once the infrastructure piece and the cluster are there, the cluster comes with this part of the platform, with a bunch of stuff. First and foremost come certain security and RBAC settings that we apply. Then there are other operators that we've custom built. There are ingress controllers, in terms of how to get connected into a cluster, and so on and so forth. And then the things from a post-provisioning or day-two perspective, actions on the cluster itself beyond the infrastructure piece, we actually handle today with GitOps, using Flux.
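As a rough illustration of the Helm-plus-GitOps combination being described here, the sketch below uses Flux v2's HelmRelease API. The speakers don't say which Flux version or which charts they actually run, so the add-on, repository and values are only examples of how a version-pinned add-on can be declared in Git and reconciled onto a cluster.

```yaml
# Illustrative only: one add-on declared in Git and reconciled by Flux v2.
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: HelmRepository
metadata:
  name: ingress-nginx
  namespace: flux-system
spec:
  interval: 1h
  url: https://kubernetes.github.io/ingress-nginx
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: ingress-nginx
  namespace: flux-system
spec:
  interval: 10m
  chart:
    spec:
      chart: ingress-nginx
      version: "3.x"          # pinned per platform version in the Git repo
      sourceRef:
        kind: HelmRepository
        name: ingress-nginx
  values:
    controller:
      replicaCount: 2
```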
So we have certain repos at Fidelity that manage, based off the versioning of platforms, and then make use of Flux and Helm to push basically a set of add-ons to a cluster and get it to the proper state. This makes use of Kubernetes' declarative fashion and really works well for us at scale.

Gabe, Federico, is GitOps something that you use?

We use it, partly, perhaps not directly with Argo or Flux. But what I also wanted to mention from the answers: what we have seen on the cluster provisioning part is the tendency to use managed Kubernetes and then glue it together with custom in-house tools, and this even carries over to the day-two services, the core services and add-ons. A naked cluster cannot really be used by any organization; there's observability that needs to be added on, RBAC, ingress and so on. Instead of that being part of the managed Kubernetes offering, you will see that organizations use the project-provided Helm charts, but that again is not enough: you glue those together with custom in-house tools, which in most cases could then be operators. So the same problem that exists for provisioning the cluster exists on the other side with the core services and add-ons. They need to be combined, they need to be adapted to the requirements of the organizations using them, and there is no standard way of really doing it, unless you count the operator pattern becoming a standard, but there are so many operators, and they are configured in so many different ways.

Good point to jump to the fourth theme, which mentions operators. We did see operators as quite popular; a lot of people placed them in adopt. What do you think, why have operators become so successful?

I'll start off with an example, it's an interesting one, and then Neeraj can follow up. We had a requirement where teams had to exec into pods in production. Typically that's not allowed, at least in our case, but we had some really interesting use cases which warranted it. It was a difficult thing, because typically once you allow exec, the access stays there forever and so on; it's a tricky problem. The way we solved it is we have an operator in our platform, present in all the clusters, where teams can actually go and request exec access. Basically they just submit a YAML file, a custom resource with a kind along the lines of "exec access", and they say, I need a few minutes of exec access. Behind the scenes the operator grants the exec access to the specific team, and then takes it away after a certain number of minutes. Without an operator, achieving something like this would be really tricky. We did think about having an API that they call instead, but the moment you have an API you have authentication and authorization that you need to take care of; with an operator, we can easily tie into the Kubernetes RBAC model. If somebody can submit a request for exec access, the exec-access YAML file, then we know that Kubernetes has allowed them to create it; it has gone through Kubernetes RBAC, so we can tie into that. I wanted to start off with an example so that it becomes much more clear. I don't know, Neeraj, if you want to add something.
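A minimal sketch of what such a request and its RBAC tie-in could look like follows. The kind, API group and fields are hypothetical, since the actual resource used at Fidelity isn't described in detail here; the second manifest shows the point about RBAC, because who may ask for exec access reduces to ordinary permissions on the custom resource.

```yaml
# Hypothetical exec-access request; the operator grants access and revokes it
# after the requested duration.
apiVersion: access.example.com/v1alpha1
kind: ExecAccessRequest
metadata:
  name: debug-payments-api
  namespace: payments
spec:
  targetDeployment: payments-api
  durationMinutes: 15
  reason: "Investigating a production incident"
---
# Who may create such requests is controlled with standard Kubernetes RBAC.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: exec-access-requester
  namespace: payments
rules:
- apiGroups: ["access.example.com"]
  resources: ["execaccessrequests"]
  verbs: ["create", "get", "list"]
```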
Yeah, I would just say that's one example; I think we have at least four or five operators that we built in house, and we've open sourced one of them. Operators are kind of the standard way to automate and complete concise, targeted tasks within a cluster. From a community perspective, almost everything, or at least all the new things, have operators associated with them; I've seen them for MongoDB, Kafka, et cetera. Operators really make it easy to mask some of the complexity that would normally appear: instead of having to maintain or manage an entire Kafka cluster, you can have an operator that constructs the cluster for you. We've also, to Rajan's earlier point, used operators to facilitate some of the work within a cluster. We have tiers of authority within a cluster, so there might be a business unit cluster administrator that may be able to do certain things that a normal namespace admin cannot. Tying RBAC to operators is really easy, and with custom resources and so on I think it's extremely extensible. So that's really beneficial for us.

And you made a good point there about custom in-house operators versus the operators available widely; I don't think we distinguished them on the radar itself, but that's something to look out for as well.

During our discussion it was also mentioned that the operator is the resident expert for that piece of software, and it lives in the cluster. You can talk to that resident expert, the operator, in the same way as you do all other things in Kubernetes, with the same declarative way of writing your deployments and your services; you control the operator, the expert, in the same way, which makes it a common pattern. And that is something that also makes it easier to switch from one task to another when you operate and manage environments at a large scale.

Cool, and let's... Oh, sorry, go on.

One point I wanted to add: I just wanted to talk a little bit about the downside as well, because it's not like it's an easy thing to do. There is a decent learning curve initially, I would say, but once you get past that, things become okay. There are some things which are not straightforward, for example the versioning. Let's say you come up with the first version of your custom resource, and then you want to make some changes on top of it. The migration really depends on what the changes are, but especially when you have a lot of clusters, and people are already using one version of it, going from one custom resource version to another is doable, yes, but it's not very straightforward. So sometimes you want to take a look at the complexity of it versus the benefit you get out of it. If you just have a handful of clusters, maybe there is a different way which might be easier for you; maybe you don't need an operator.
But in our case, given the number of clusters we have and the number of developers, it was a straightforward choice. There are times, though, where you definitely want to look at what benefit you get out of it versus the complexity of managing it, and then you really have to take a call. So there is this downside which I wanted to mention.

Yeah, I definitely appreciate that. Let's look forward now to our last theme: the community eagerly awaits the readiness of Cluster API. Tell us a little bit about Cluster API.

Yeah, so I think anyone that's had the privilege of managing dozens, hundreds or thousands of Kubernetes clusters has probably heard of Cluster API at this point. It's a really exciting project that's being developed, and it's coming along fairly quickly, and I think a lot of the community is waiting for it to be ready. It's probably the closest thing we have to a possible silver bullet to handle a lot of the issues we run into now. There are two main points about Cluster API that I think kind of tell the story. The first one is that Cluster API approaches cluster management with more of a desired-state, Kubernetes-focused, cattle-focused sort of architecture, which is awesome, because that has worked out so well for Kubernetes itself, so it seems like it would be a good fit for the cluster management side of things too. It's sort of unique in that way, or at least mostly unique, and so the hope is that this will solve the problem really well. I'm sure there will be edge cases that are a little rough, but this is probably our best chance at getting a really good singular tool to help us out with this cluster management issue. And I think what's interesting is that even though Cluster API has progressed quite a bit, at this point, as was mentioned, everyone that has to manage clusters has built all this glue and uses all these tools, and we had to go through a lot of pain and effort to get to the point where we're at now, where things are working and scaling in the ways we need them to. So I think one of the tricky things for Cluster API is that it needs to get to the threshold where it's finally good enough to make it worth our while to really put the time and effort into trialing it; it at least has to match all the stuff we've built so far. It's definitely getting there, and a lot of people are waiting for it to reach that point. But yeah, I think it's probably one of the more interesting things coming up in this field.
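For reference, this is roughly what the declarative model described here looks like in practice: the cluster itself becomes a Kubernetes object you apply and reconcile. The API versions and the AWS infrastructure provider below are illustrative; Cluster API's APIs were still on alpha versions around the time of this discussion, and the exact required fields vary by release and provider.

```yaml
# Illustrative sketch of a Cluster API cluster definition (not a complete,
# production-ready manifest).
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: workload-cluster-01
  namespace: clusters
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: workload-cluster-01-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster          # provider-specific object describing the infrastructure
    name: workload-cluster-01
```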
Cool, anyone else want to comment on Cluster API?

I just wanted to add, from the Fidelity side: we are multi-cloud, so we use clusters in different cloud providers, and on-prem as well. Today we have something custom which sort of mimics this; we have been using it for a couple of years, and we really think it helped us scale to however many clusters we have. So we have seen the importance of it, and I'll give an example. If you are creating clusters in, say, AWS, you have a tool like eksctl, which is very specific to that cloud provider. But from a user standpoint, we wanted to give the users a simple, unified interface where they describe what they want in a very neutral way; we process that and behind the scenes the right tool can actually do the work, but we don't have to expose each of those specific tools to the users straight away. In that way, I think Cluster API putting a spec in front is going to help a lot. And another good thing about putting a spec in front is that that's when the ecosystem really starts to evolve: the moment you have a spec, a lot of supporting tools can evolve around it. So yeah, personally and from a Fidelity standpoint, I think we have definitely been waiting for this.

Yes, it will kind of abstract away the lower part that you might otherwise have to deal with, and let you reason about Kubernetes and the underlying deployment in the same way as you reason about your applications and your services. That makes it a really good candidate to start treating your clusters as cattle, the way you treat your pods and applications as cattle.

Nice. Okay, well I definitely look forward to it; I think it's something that is quite interesting and is going to make quite a big difference in the next year or two. So yeah, I think that wraps up our themes for today. So, last question: I'd just love to hear a line or two from each of you about how you found the process of creating this radar. Was it something that surprised you, that you found interesting?

Yeah, it was very interesting. You never really know how this is done, and rather than just watching the making-of or the behind-the-scenes documentary of the tech radar, being part of it gives you the first-hand experience. I enjoyed very much the conversations that we had around the entire radar. As you mentioned, this was a process of a couple of weeks; it's not just this webinar, and it's not just the inquiry that we sent out. It's preparing, discussing the topic and then combining the results together, which gives you the chance to look over your own fence, where you're normally busy with your day-to-day stuff, and see what is out there. So I can really recommend to everyone that might be invited at some point to say yes. I enjoyed it a lot.

Yeah, I'll just add it was fun for me as well, and I think it's fascinating, especially on certain topics, to see what your peers are doing. It allows you to gauge whether you have a chance to course correct or improve upon things. For me it was a big learning experience, so it was really fun.

I really agree. Especially with the topic we chose, it was really reassuring just to hear that this is complicated, and then to see the perspectives of all the other companies tackling this issue. It helps you keep a long-term mindset about things while also approaching the short term, like what are we doing day to day; it's the best of both. The number of perspectives we've heard in our conversations has really opened my eyes quite a bit, and I think it's been an incredible opportunity.
And I think it's great that we get to share all of these conversations in the form of the radar itself.

Yeah, I definitely found it interesting. I personally believe in creating tech radars; I think it's super useful. Neeraj mentioned course corrections: in our experience, over the last two, two and a half years, we have course corrected at several points, and most of the time when we did that it was because we spoke to another set of companies, at a conference or through some other event, something like that. So in that respect I really found it interesting, and I personally believe this is going to be very, very useful for many, many teams out there.

Awesome. Well, I actually really enjoyed it as well, so I want to say thank you to all of you: to Neeraj, Rajan, Gabe, Federico, and Simone, who's not on this webinar today. Thank you for your time, I really appreciate it. I feel like I learned a lot from each of you as well, so thank you very much.

A few reminders to finish us off. You can go back and look at previous radars at radar.cncf.io, where you can also look in a little bit more detail at the different votes and the different kinds of companies that submitted answers to this radar. There are also ways for you to get involved. If you want to have a say about what the next topic is, you can go to cncf.io slash tech dash radar; this is just a GitHub issue where people have been posting what topics they're interested in hearing about from the community, and you can upvote and downvote things. I would love for you to come and be part of one of these future radars, be part of the team, and you can find out more about that at cncf.io slash end user. And lastly, I'm always trying to find ways to make these radars more interesting, more relevant and more understandable, so if you have feedback, just send it to info@cncf.io.

Thank you very much, and thank you once again to all of the radar team who joined and contributed to this today. And that is all from me. Thank you.

Thank you very much. All right. Thank you.