Hello everyone. Welcome to another installment of Cloud Native Live, where we will build things and we will break things. A few housekeeping items today before we get started. During the live stream, you will not be able to chat as an attendee, but there will be a chat box on the right-hand side of your screen that you can check out and drop any questions for our speakers. Please feel free to drop them there. We'll get to as many as we can, either during or at the end, whichever works out best. This is an official webinar of the CNCF and as such is subject to the CNCF code of conduct. Please do not add anything to the chat or ask any questions that would be in violation of that code of conduct, and please be respectful of your fellow participants and our lovely presenters, who are Shridhar Venkatraman and Luke Rota (and I totally butchered that, so I'm going to let you say it beautifully), with OpsCruise and Chicago Trading Company. To kick us off today, I will hand it over to Shridhar right now to get us started. Welcome all, and you got my name, okay, no problem. This is Shridhar Venkatraman. We have an opportunity to sit down and discuss Chicago Trading Company's journey to the cloud and observability with Luke Rota, manager of SRE and observability at the Chicago Trading Company. I'm the founder and the chief architect of OpsCruise, and I look forward to talking to you. Hi, thank you. I really appreciate the invite to speak about CTC's cloud journey. I'm going to go into some deep dives about our technology, as well as the company itself and what it is that we do, and then how that intersects with OpsCruise. All right, so next I will get into: who is Chicago Trading? We were founded in 1995, so we've been around for quite some time. Our mission is to make markets better and provide liquidity when it matters most. So being in the market, and I'll talk about what that means, is paramount to us. Chicago Trading Company is a market-making proprietary trading firm. What does that mean? It means we represent both the buyer and the seller in the marketplace. Those marketplaces are still today some trading floor venues, but largely electronic venues run by exchanges such as CME Group, BATS, the New York Stock Exchange, the Chicago Board Options Exchange, UX in London; there are many, many other exchanges across the world, and these are just some examples. So we're interacting with the marketplace on a daily basis, and our customers are anyone who is participating in the marketplace at any one time. We are headquartered in Chicago and we have offices in New York and London. Right now we're around 600 people or so and we're rapidly growing; we've almost doubled in size in the last four years. So a lot going on at CTC, definitely exciting times. Since markets are really always on at any one point in time in the world, there are only a few market pauses, depending on which exchanges you're trading on. For the most part, markets are trading 23 hours a day, or at least the markets we trade in. We're trading about 23 hours a day in over 20 markets across the world, and some of those markets are even open on the weekends, though many are closed. So we have been moving closer towards an almost 24/7 trading environment. We aren't quite there yet, but may get there someday.
So depending on the part of the world that you trade in, there could be trading activity at any one point in time during the day: 24 hours, 7 days a week, excluding some holidays. Overall, we have a very narrow window of time in which we can release software changes. So observability is really important to us, as we have to understand the state of our software at any one point in time. I'll get into why that's important as we go. We have hundreds of applications that make up our trading platform, which our traders and quants use every day to run our strategies. Engineers, quants, and even traders write code. Most of our code is written in Python, C++, and Java. So I'll go into a little bit about the technology stack and some of the challenges that we currently face. Our current environment is made up of a mixture of things. We pride ourselves on the research that we do, our pricing, and our risk management. We have a blend of systematic trading as well as human trading, and in order to accomplish this, we have a complex set of trading systems that have traditionally run largely on-prem. There are a few driving factors as to why those have traditionally been on-premise. One of them is that we need to be co-located close to the exchange engines themselves. That's table stakes today in order to be able to compete. So for us, co-location is one of the roles of our traditional on-prem data centers. That may change over time depending on how market structure changes. There are exchanges that have struck deals with big cloud providers; the New York Stock Exchange as well as CME Group have recently struck deals to extend their systems into the cloud providers. We'll have to see where that takes us, but for right now, there is still a need to be co-located. Some other things that have traditionally kept us on-premise are multicast-type protocols, customized hardware configurations, and data locality. These things can be challenging at times in a cloud environment. Combined with being on-prem is our low-latency requirement. We aren't a shop that necessarily worries about every nanosecond, but we do need to compete on speed. Again, it is table stakes these days to have some level of speed in order to compete with others in the marketplace, so low latency is important to us. There are customized server and switching configurations that aren't available in the cloud, and we have specialized algorithms that take advantage of this hardware capability. So our low-latency workloads also traditionally run in our on-prem data centers and don't run in a cloud environment. Next is our high compute. I touched on some of the reasons we run on-prem; high compute really opens the door to run in other places. Like I said before, research is one of the things that we pride ourselves on, and there's a lot of computing that needs to be done when you're researching and doing things like backtesting. Backtesting is a common practice in a trading environment to see how your strategies would have performed over time, and it requires a lot of compute. To do that on-prem, we would have to really scale, and that can be very costly and difficult.
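To make that compute need concrete: a backtest is essentially a replay of a strategy over historical data, and research typically sweeps many parameter combinations, each of which is an independent run. Here is a minimal sketch in Python; the moving-average strategy, the synthetic price series, and the parameter grid are all hypothetical, purely to illustrate why the workload grows so quickly and parallelizes so naturally onto cloud compute:

```python
import itertools
import random

# Hypothetical price history: a short random walk. Real backtests replay
# years of recorded tick data, which is where the compute cost comes from.
prices = [100.0]
for _ in range(10_000):
    prices.append(prices[-1] + random.gauss(0, 0.1))

def backtest(fast: int, slow: int) -> float:
    """Replay a toy moving-average crossover strategy; return its PnL."""
    position, pnl = 0, 0.0
    for t in range(slow, len(prices)):
        fast_ma = sum(prices[t - fast:t]) / fast
        slow_ma = sum(prices[t - slow:t]) / slow
        pnl += position * (prices[t] - prices[t - 1])
        position = 1 if fast_ma > slow_ma else -1  # long or short one unit
    return pnl

# A parameter sweep: every (fast, slow) pair is an independent run, so the
# whole grid can be fanned out across as many machines as are available.
grid = list(itertools.product(range(5, 50, 5), range(50, 500, 50)))
results = {params: backtest(*params) for params in grid}
print("best parameters:", max(results, key=results.get))
```

Even this toy grid is 81 independent runs over the same data; scale the data to years of ticks and the grid to thousands of combinations, and the appeal of elastic cloud capacity over a fixed on-prem footprint is clear.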
And so that's one of the areas where the cloud has become front and center, because we can scale there quicker. We can take advantage of tooling and native cloud functions in a cloud environment, and we can also leverage economies of scale more easily than we can on-prem. So for this backtesting and some other types of applications that are not latency sensitive, we've changed our posture and started to consider a cloud environment. The fourth item here is monitoring and observability. Over the years, we've used a mixture of third-party tools as well as some custom tools that we've written. As we've made this pivot to move into the cloud and change our application architecture, those tools are being challenged, and there are gaps in those tools. That's where a tool like OpsCruise can come in and provide not only technical value, but business value. As we've begun to adopt containers and change architecture, we've really had to rethink our approach to monitoring and observability. So what has the cloud-native shift looked like for us? The markets are ever-changing. They're always moving faster, the data sets are ever-increasing, and we are always trying to stay ahead of the curve. One of the things we've been challenged with over the years, and continue to be challenged with, is getting our ideas into production as quickly as possible, so that we can understand the impact they're having on our business. Is it working? Is it not working? It's really important that we can iterate quickly. With that said, we also need to keep our outages low. Like I said at the start, being in the market is paramount to us. Not only do we provide a service to the markets for our customers, but there's also opportunity cost for any trading firm when it's not participating in the marketplace: if you're not participating, there is no ability to capture opportunity. So we really had to think about how we can scale. We started moving down the path of microservices, initially breaking up monolith applications into smaller things, but they were still highly dependent on each other, so it became more of a distributed monolith. So now we need to continue to modernize our application architecture, adopt a cloud-native approach, and start using things like containers and K8s and cloud providers such as Azure and AWS. We really started to modernize once again to reduce our slow iteration cycles and be able to get our ideas into production faster. By moving to containers and K8s, or in our case OpenShift, and leveraging our cloud providers for economies of scale, we have now been able to start down the journey of really scaling out our applications, which was a limiting factor when we were solely on-prem. But this has significantly changed the way in which we monitor and observe our applications. We've had to rethink how we fill these gaps. How do we know what a container is doing? How do we know how Kubernetes is working? There are a lot of things in play when you introduce these new technologies, and traditional monitoring and observability tools weren't necessarily built from the ground up to handle these situations.
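As one small illustration of the container-level visibility in question, the Kubernetes API does expose per-container state that host-centric tools never had to model. This is a minimal sketch using the official kubernetes Python client; it assumes cluster access via a local kubeconfig and is not part of CTC's or OpsCruise's actual tooling:

```python
from kubernetes import client, config

# Assumes a reachable cluster and a local kubeconfig (e.g. ~/.kube/config).
config.load_kube_config()
v1 = client.CoreV1Api()

# Walk every pod and surface container-level signals that a traditional
# host-centric monitor has no concept of: restart counts and waiting states
# (ImagePullBackOff, CrashLoopBackOff, and friends).
for pod in v1.list_pod_for_all_namespaces().items:
    for cs in pod.status.container_statuses or []:
        waiting = cs.state.waiting.reason if cs.state.waiting else None
        if cs.restart_count > 0 or waiting:
            print(f"{pod.metadata.namespace}/{pod.metadata.name} "
                  f"container={cs.name} restarts={cs.restart_count} "
                  f"waiting={waiting}")
```

A script like this answers "what is this container doing?" for one moment in time; the gap Luke describes is doing this continuously, across hundreds of applications, with context attached.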
So we've really evolved our focus on telemetry: focusing on instrumentation, what we should be logging, what our metrics are, preparing for things like tracing. And as we've done that, we've really focused in on open source tools, because we're trying to solve some pain points, one of them being the swivel chair. We have logs, we have metrics, we have dashboards. A lot of traditional monitoring tools today do a pretty decent job of bringing all of these things together into one dashboard, but there's still a bit of context switching, and there is still a massive amount of data that you have to interpret. It can be hard to understand which data you should be using and which you should not, and sifting through all that data is a challenge. That brings us to logging. What should we log? What should we not log? As we know, logging data typically doesn't decrease. At least in CTC's case, we are always creating more applications, logging more, and trying to understand more about what's happening with an application. But you can only store this data for so long, it can become very costly to store it with a closed-source vendor, and it can be hard to assimilate this log data with other metric and tracing data. Open source tools can definitely help with this, but then there's the matter of skill sets. The open source tools are incredibly powerful. They allow you to avoid vendor lock-in and they give you flexibility. With that, you do need to have some knowledge about these tools, and at least at CTC, we're continuing to build our knowledge. It's certainly been a journey; we don't have all the knowledge, and it can be difficult to hire for those skill sets or build them internally. And even as you build the knowledge and figure out what data you want to collect, you're still trying to assimilate that data. That's where something like a smart layer on top, one that can natively plug into open source tools, comes in. You can still continue to use your investment in open source and the flexibility that it provides, but in addition, you get a smart layer on top of that. I'll talk in a little bit about how OpsCruise comes in and some of the business value it provides there. All right, so next I will talk about the open source tools themselves. Maybe go back one slide, Shridhar. So here's the layout of some open source tools out there: things like Prometheus, Loki, and Jaeger. OpsCruise works with all of these natively, out of the box. So even if you have these tools today, there's not much you really need to change, and you can leverage all of your investment in your current telemetry collection. Telemetry collection itself has really been commoditized by a lot of these open source tools, so you don't have to lock in with a vendor when it comes to collection; you can get that in an open source way. There's also OpenTelemetry, with several standards and different protocols that you can use, which work with Prometheus, Loki, and Jaeger. You can use it for logs, metrics, and tracing.
You can use the OpenTelemetry libraries within your application. So everything from a data collection and sending standpoint can be done in an open source way, and you don't have to worry about locking into a vendor. The only thing you don't get is a smart layer, that smart layer being things like machine learning, giving you more insight into your data. Traditional monitoring and observability tools don't always have those capabilities. They're very good at collecting the data and giving you the ability to graph the data, but contextualizing the data really is the next evolution, at least from my perspective, when it comes to observability. All right, we can go to the next one. So, CTC and OpsCruise. This is where the intersection really happens and where the business value comes in. One of the things OpsCruise provides is telemetry unification and support. They can bring together all of your logs, metrics, and traces; all of that can be collected in an open source, non-vendor-provided way. They will gather that data, display that data, and contextualize that data. It also leverages, like I've said, the collectors that are out there today, like Prometheus, Loki, and Jaeger, so if you have prior investments there, those will not be wasted. It also has flow tracing in it. This is a very unique thing, based on eBPF, so there's very little investment needed for you to understand how your applications are interconnected. It uses this to present the application map, which Shridhar will go into. You don't have to do any customized tracing to get an application map showing how everything is interconnected; it is done without any development time at all. Then there's architectural governance. It provides you an inventory of your containers: where they're running, how they're running, what they're doing. You have a lot of visibility into that, and it brings it all together in an easy way for you to understand where things are. At least for me, having spent a lot of time troubleshooting applications: as the environment becomes larger and larger, especially in a microservices environment, you can easily lose track of where things run. So this is a really important piece, and there's a lot of business value here, because the quicker I can find something and understand what is going on, the quicker I can understand root cause and solve the issue. And then it brings ML into the fold, so I don't have to apply a lot of human power to understand and assimilate data. The ML learns over time and can present to you the issues it has found. A lot of times it can take on the order of days for engineers to find a configuration that might be causing an issue in the environment. I personally just ran into this a few days ago: I had engineers spending hours, actually days, trying to find an issue within Kubernetes that a tool like OpsCruise, through its ML, could surface in minutes. Next, yeah. All right, so the features that I really enjoy about OpsCruise. One is this application map. It's really intuitive, it's amazing, and it's out of the box. I don't have to do any custom tracing or custom development; I don't have to invest any development time. I can just install the agents, start my applications, and I get an application map.
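For a sense of what the application-side instrumentation path looks like, here is a minimal sketch of emitting traces with the OpenTelemetry Python SDK to any OTLP-compatible backend. The service name, collector endpoint, span name, and attribute are placeholders rather than real CTC or OpsCruise values, and the snippet assumes the opentelemetry-sdk and opentelemetry-exporter-otlp packages are installed:

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Wire the SDK to an OTLP collector; the endpoint is a placeholder, and the
# same export path feeds whichever backend sits behind the collector.
provider = TracerProvider(
    resource=Resource.create({"service.name": "pricing-svc"})  # hypothetical name
)
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(endpoint="http://otel-collector:4317", insecure=True)
    )
)
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("compute-quote") as span:
    span.set_attribute("instrument", "ES")  # hypothetical attribute
    # ... application work happens here ...
```

Because the wire format is the vendor-neutral OTLP standard, swapping the backend means changing an endpoint, not re-instrumenting the application, which is exactly the lock-in avoidance Luke describes.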
The next thing that I really like about the OpsCruise tool is fault isolation and cause analysis. There's a lot of business value here. Anyone can manage open source tools and collect data; that's pretty easy to do, assuming you have the skill sets. What's not easy is assimilating that data and finding issues within it. It's really powerful when a tool can tell you something quicker than what you could find out researching on your own. In CTC's line of business, every minute, every second really counts when there's an outage. When time is ticking away, we're losing the ability to capture opportunity. This is where the business value of OpsCruise really comes in: being able to contextualize this data and find faults quicker. The quicker you find them, the sooner, at least for CTC, we're back in the market and capturing opportunity. The next feature is that it makes more data available. It pulls all of your log, tracing, and metrics data together into one view, and really anybody can log in and see this data. You don't have to have as much operational expertise about how a dashboard was built or how it's being presented; all of that is pulled together in an easy-to-read view within OpsCruise. Then the final thing is being able to look back with topology and understand where, in the topology of an application, things may have broken down or where you might be experiencing an error. Then, not necessarily a screenshot for this, but one of the things that's really invaluable in my mind is that OpsCruise has an extreme amount of knowledge and expertise with open source tools and Kubernetes itself, as well as with running and operating in a cloud-native world. Their expertise is invaluable. They're an excellent partner, and they can really help guide you with your challenges in either collecting or presenting data. I just wanted to mention that as an additional thing I really appreciate about OpsCruise. I'll hand it off here to Shridhar and he can jump into a more technical deep dive on the OpsCruise tool itself. I'm happy to answer any questions afterwards. Thank you. Thanks, Luke. That was great. Thanks for speaking so highly about us. I'm going to spend a few minutes going through some slides that give you an idea of why we do what we do and what we do. Though all of us know this, it's worth spending a minute catching up on the fundamental challenges of today that require observability to change. The first thing is that things are very complex. There are a lot of abstractions everywhere, and that abstraction creates points of performance loss. Performance issues, just like traffic jams, move around the system; they're no longer static and easy to catch. There are also dependencies. We build systems with a lot more dependencies today than we did some time ago, and that brings in a set of problems that we deal with. Finally, and Luke also mentioned this, you've got to recognize that the speed at which we update and move products forward is increasing. These are the three complexities that really undergird the observability goals of today. All of this can be seen as a problem of disjointed monitoring. There's no shortage of data today; data comes from everywhere we look.
You can bring the data together, saying I've got it here in this dashboard, I've got a dashboard over there. But that is the data-rich, information-poor situation that we have seen in the market. And because of that, and partly in spite of that, it's manual and lacks closed-loop resolution. That's a problem that needs to be dealt with. So these were the challenges we thought we should try to solve at OpsCruise. As these complexities and dependencies and dynamics increase, the problem becomes a multiplier. Instead of building a small number of thick things, where the focus was on looking inside each thing, we have a larger number of thin things, and that changes the way things behave. The whole emergent behavior becomes part of the problem scenario in today's systems. So what else do we need in addition to the normal telemetry that we know and love, like metrics and logs? Clearly traces have added a lot of value, and traces are an important aspect of telemetry. But there are other things we need to know. We need to know the structure and dependencies of the application. It's not just about moving a trace from one container to another; it's also about, is it going to Lambda? What sort of RDS is it accessing, what is it dealing with? So there's a dependency and structure that you have to deal with. We also need to look at every element, whether it's a container, in a 360-degree view. What is coming in? What is going out? What resources is it using? What services does it provide? You've got to look at that completely. The third thing is you've got to have a way to understand the behavior of the application. SLOs and thresholds are useful and provide value, but they don't represent the behavior of a container. They don't represent what it's all about, because things are nonlinear; it's not just a linear situation. So having a better way to look at a container is important. And finally, we see that in all of these systems, human expertise has to be laid into the architecture so that it can complete the story and make it valuable for everybody. So having said this, what do we do? Two things. One, clearly, observing and following the standards is important. The more standards we have, the better it is for everyone: quality goes up and costs come down. That's one thing we all agree we should do. And behind that, there are so many great tools which conform to those standards and are open source. So we felt we had to lay our strategy on top of these two things: the standards and the open source tools. We are not in the collection business. We don't even intend to be the organization that keeps your data long term. We only keep it for the purpose of providing operational support in the short term, so you have the freedom to move your data wherever you want and keep it for whatever other purposes you deem fit. Some of the data types and tools we all know are shown in this slide. In addition to the metrics, logs, and traces, we get configurations. The orchestrators like Kubernetes, which we support, and of course Nomad and other things that are out there, provide a level of configuration information in a standardized way, which is very important for observability.
Then you obviously need to get changes and topology. As systems are now constructed from many small things, you need the topology, without which you can't deal with observability completely. And finally, knowledge of each application. How do we understand each particular application? How does this one work differently from that one? These are the pieces of information we feel we have to add to the story of observability before we can handle it all together. So that's what we do: we bring in all this other information, in addition to the metrics, logs, and traces, to give you a complete story. How do we deploy? This is a diagram with lots of colors in it, and like a Mercator projection, it may show our tools much larger than your application. Really, we just provide a set of pods, which are either DaemonSets, as in the case of node-exporter and cAdvisor (these are all open source tools), or these orange gateways: simple, very lightweight pods which pull the data and send it to OpsCruise. We also listen to the cloud and all your cloud products. And if you've got VMs, and most clusters have VMs working right next to them and sharing the environment, we monitor all of that and pull it in as well. This is a simple deployment: it can be done in a few minutes, and it is available for you to see almost instantaneously. Some of the problems we try to solve, and succeed in solving, are Kubernetes day-one and day-two problems, and a lot of people are in the day-one and day-two situation. You've got to figure out basic things, but Kubernetes itself makes that very complex because of the interdependencies and multiple objects that it defines. So we solve a lot of Kubernetes problems. We solve a lot of technology-related issues, including serverless: if an application is talking from a container to a Lambda function, you want to include that in your overall observability in an integrated way, and we do that too. So these are some of the types of problems that we deal with. In the interest of time, I'm going to switch over to the demo. Oh, sorry, I clicked the wrong button. Yeah, so I'm going to log into our UI so you can see how our system works and give you a quick introduction to our product. What it does right up front is give you an app map. This is, you could say, the landing page for our system. We discovered this automatically; all of this is picked up by our system within five or six minutes of you installing it, and you automatically get a picture of your environment. Here is that star view. When you look at each one of these things, and it's expanding a little bit for you to see, you get to see these different pieces. For example, we have an RDS sitting there, we have ELBs in another part of the system, and we have different containers around the system. You can pick any one of them. For example, let me just figure out where it is. Yeah, so you can pick any one of these; for example, you can see a container. Immediately the container information is available for you to see. Not only does it give you all the metadata that you get from Kubernetes, it also gives you the metrics that are picked up, and you can see its labels. And at the same time, right there, you can see its logs, connections, traces, and even a three-layer view.
A three-layer view tells you that this application is running and this is the information about that container, for example. It also shows you which Kubernetes node it is running on and its neighbors, in case you have a noisy neighbor problem, and at the same time, what infrastructure node you're running on; in this case it's AWS, so your instance name and the metrics on that layer of your stack. You get all these things integrated and made available instantaneously from the system. This also happens for other things. For example, if you're looking at, say, RDS (everything is a little slow while this is going on), here you get information about RDS, which we also pick up and make available to you. You have different ways of viewing services. For example, this gives you a service view: the service interactions without the pods, so you can see how they are all connected together. You can filter them, save the filters, and do all of those standard things that you see in dashboards. Just quickly moving on, you get the node map, which discovers all your nodes and shows you the information that you need about them, including their metrics and configurations. When we talked about providing more data to more eyes: it's much easier for people to use such a system than to get into Kubernetes. You don't even have to teach them the operational tools. The app folks can access the system and see what's happening with their part of the application. We also provide some analysis which helps in capacity planning. It looks at all the pods on a node, tries to estimate the usage of your CPU, memory, and so on, and tells you whether evictions are possible and whether you need a better configuration. Look at this one: it's burstable, so it may be evictable. These types of things are also provided. I'm just running through this quickly to give you an idea. Similarly, we've got other things. We have a K8s view, which gives you all the K8s objects: you're looking at the nodes, you've got five nodes, click on a node and you can see all the node information provided. And then we have a service performance dashboard. Here, what we do is the same thing that helped us build the app map. One sec, why is this in my way now? I'll come back to it in a second. So we have events and logs, functions which are easy to understand. You can see all your logs, search on them, and look at the labels they came with; these are functions I think we're all familiar with. These are Kubernetes events, and the processing of those events in the system. I'm having an issue here. But what you see in the service performance dashboard is the URLs: every service-to-service interaction, and maybe that's what this is. Excuse me, guys. So what we see here is, for example, traffic going from one place to another, and you can see the L4 data here, that is, bytes and so on. And depending on whether L7 data is available, you also get to see the URLs. So for the cart service, for example, we have traffic here, and you can see the URLs which are flowing. This is all done without any tracing and without touching your container, and there is no impact on your deployment.
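OpsCruise gathers this flow data with eBPF, as Luke noted earlier. As a much simpler illustration of the same underlying idea, that connection topology can be read from the kernel side without instrumenting or touching the application, here is a hypothetical sketch that lists established TCP connections from Linux's /proc/net/tcp table (this is not eBPF and not OpsCruise's implementation, just the concept in miniature):

```python
# Linux only. /proc/net/tcp stores addresses as little-endian hex,
# e.g. "0100007F:1F90" is 127.0.0.1:8080.

def hex_addr(s: str) -> str:
    """Decode one hex-encoded address:port pair into dotted-quad form."""
    host, port = s.split(":")
    octets = [str(int(host[i:i + 2], 16)) for i in range(6, -2, -2)]
    return f"{'.'.join(octets)}:{int(port, 16)}"

def established_connections(path: str = "/proc/net/tcp"):
    """Yield (local, remote) pairs for every ESTABLISHED TCP connection."""
    with open(path) as f:
        next(f)  # skip the header row
        for line in f:
            fields = line.split()
            local, remote, state = fields[1], fields[2], fields[3]
            if state == "01":  # 01 == TCP_ESTABLISHED
                yield hex_addr(local), hex_addr(remote)

for src, dst in established_connections():
    print(f"{src} -> {dst}")
```

Run on every node and joined against pod IPs from the Kubernetes API, edges like these are enough to draw a service-to-service graph; eBPF does the same job far more efficiently and adds the L7 detail (URLs, response times) shown in the demo.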
This is completely hands-off. You can see the average response time, latency, and all of these things that come as part of our solution immediately, as soon as you install the software. So in total, we've got logs, events, time travel, and service communication, and time travel is something very unique. What we do is take a snapshot of the system every five minutes and store it, so you can go back to any time in the past and look at how your topology looked. For example, you have a problem at 12 o'clock in the afternoon, and at four in the evening you want to look at what the problem was, what took place at 12 o'clock. What you find is that the topology has changed: the scaling has changed, your pods are not available anymore. So what would you look at? You need a way to go back in time and see the topology the way it was, and that's what our time travel feature provides. You can go back and say, okay, there were four more nodes and 23 more pods at 12 o'clock, and those were the ones with problems, maybe not the ones which are running at this point. That's our idea of time travel, which we think adds value in a dynamically changing world. So this is a quick summary. Finally, there are alerts. These alerts are generated in multiple ways. We generate alerts from logs and metrics in the traditional way; we have that too. And we also generate alerts from our machine learning, which provides different ways of seeing, without setting thresholds, that there is something wrong in this container: something is off, it's not normal. For example, here is an alert which has come automatically, without any thresholds, saying that something is off here. It also gives you an explanation, saying that these three metrics have essentially created this situation where we think something is off. Now, we look at a lot more metrics; we look at about 34 metrics just for a container. Our machine learning algorithms look at all of them, but then identify that, because of the combination of metrics relative to the learned past behavior, you're not normal, and these are the three things you ought to look at. So that's our ML model. The other type of alert we do is, for example, a chain model. We look at the chain along which a response time has taken place. Our analysis system runs through, looks at all the containers in the path, and suitably prepares the information, making it available to you: hey, you had a problem with the SLO here, but if you look down the chain, this cart cache has a problem, and here this cart server has a problem. If I just click on that and look at its analysis, I get to see more about that one segment, where that last piece in the chain had a problem. So you get to see the whole chain, and this is discovered on the fly, based on the problem that took place. You could have hundreds of containers, hundreds of pods and services running, but it isolates the tree down based on the current situation and presents it to you, so you can see where the problems are and isolate them to the action that you take to fix them. Now, there are other normal things; for example, let me just give you one more. It's a replica count that is not okay.
It sort of figures that out, does some level of analysis, and gives you some information about the problem. There are many such things: liveness probe failures, image tag issues, invalid image names, volume mount failures, missing ConfigMaps. All of this is analyzed and provided to you as part of our RCA process. So that comes to the end of the quick demo I wanted to do today. Let me go back to the slides. Okay. So, any questions? It's time for us to take questions; we are close to time here. Any questions from anybody? I think there are some questions here, Shridhar, around its interaction with Istio, as well as any kind of overall system performance impact, which I think Cesar largely addressed as well, and then a search option. Yeah, I just saw the questions now. Let me answer them one by one quickly. Yes, we do work with Istio. Where Istio is there, we are able to pick up flow data from it. We set up certain configurations in Istio which allow us to pull useful metrics from it. So yes, we do work with it. Then, on the open source tools: there are customers who already run the open source tools that we suggest, in which case we don't have to install anything, and if you don't, we would do that for you. We just connect to your existing tools that you are already running. And if you're running the open source tools in a cluster, the data movers, or gateways, use up less than one GB, so that sort of addresses the question about system impact. What about search, any kind of search functionality within OpsCruise itself? All our screens do have search. You can search in every screen, whether you're searching for logs, events, pods, alerts, or time travel snapshots. Search is built into all of them. Right, yeah. I don't see any other questions here, but just to add a few things to what Shridhar was demoing and why, at least from a customer perspective, my perspective, it's important. Being able to diagnose, like he showed, what the container is doing, showing faults and things like that: those are things that my engineers spend a lot of time trying to figure out. So there's a lot of value if I can find those things quicker through a tool, and that's where OpsCruise really provides some uniqueness, in being able to find those issues quicker. Also, since we're in such a dynamic environment when it comes to Kubernetes, it can be hard to piece together what happened in time. So seeing what it was versus what it is now is also very important in the troubleshooting process, and really aids with getting to root cause analysis quicker. This is where OpsCruise is really unique in observability overall: rather than providing dashboards that help you find root cause but still leave you to figure it out on your own, it really contextualizes all that information for you. That's where it's really unique and set apart from other tools, at least in my mind. You can view us as a smart layer that sits on your telemetry but leaves you with all the options you want to manage your telemetry yourself. You could be using your data for purposes other than just basic observability: it could be for product management, it could be for customer and market analysis.
There are so many reasons you need to use your telemetry that you don't necessarily have to leave it with us; you can keep it in your own cloud. What we take, we use for operational support, and then we toss it. Yeah. And it's also really important from a user perspective too, because with a lot of observability tools, people really lean on folks in SRE and operations to determine what's going on and how to interpret the data. In OpsCruise, developers and operations can all work in the same tool, and it's very easy to understand the information; you don't have to have that extra human layer of interaction to interpret the data. All right. Are there any other questions from anyone in the audience that wants to pop into the chat? Well, Luke and Shridhar, thank you so much for your time today. Thank you for all of your content and for answering questions. Thank you so much for being with us, everyone. And if there's nothing else, we will go ahead and wrap it up for today and see y'all at the next installment of Cloud Native Live. And thank you both again so much. Thank you. My pleasure. See y'all later.