So let's get started. It's about that time now. Hi, everyone, thank you for joining us this afternoon. It really is great to see all of you here. I'll start off with some brief introductions, and then we'll get going. We'd really like this session to be interactive and to have more time allocated for questions at the end.

So without further ado: I'm Alolita Sharma. I'm a co-chair of the Observability TAG, and I've been working in the observability space for many years now. I contribute to OpenTelemetry and I'm on the Governance Committee for the project. I've also been involved in working across the Prometheus and OpenTelemetry projects to make sure the metrics protocols are fully interoperable, and I'm super happy to see the collaboration we've had there. I also work on Thanos, Cortex, and other stacks across the observability space. So without further ado, I'll hand it over to Bartek and then to Vijay to introduce themselves, and then we'll get started. Thank you.

Amazing. So my name is Bartek Płotka, and I'm working at Google as a senior software engineer. I'm active in the CNCF as a tech lead for this TAG, and I maintain Prometheus, Thanos, and many other open source projects. In Prometheus, what's relevant is that we've recently been very active in making sure it works very well with OpenTelemetry. I also wrote a book called Efficient Go; it's about Go and optimizations.

Hello, everyone. My name is Vijay Samuel, and I help lead the query language standardization workgroup as part of the Observability TAG. I've been an active participant in the Prometheus and OpenTelemetry communities as well, and outside of that I help run architecture for the observability platform at eBay. Let's go.

All right. So for today we have a few slides and content parts prepared, but we want to make sure it's interactive at the end, so let's go through them quickly. We want to talk about the definition of this group and what we do. We want to actually show what we do by showing our progress. We want to talk about active workgroups, which are dedicated groups of people with focused meetings and work streams around a certain project to get it done. We'll talk about trends that we see when we talk with end users and vendors, and about how to get involved, essentially a call to action for you to help us on this journey.

So, very quickly, what we do: that's our charter, but essentially, in simple words, we try to grow the ecosystem of open source observability. We want to identify gaps: projects that may be missing in this ecosystem, or things we can improve in existing projects, especially around interoperability and compatibility with each other. We want to share good patterns and share knowledge about observability in general, both for people new to observability and for those who are more advanced. We want to be vendor neutral, so we make sure we're unbiased here, and that no single vendor can capture the whole observability ecosystem, so everyone has a fair chance to move around with their observability data. And finally, we support projects. As we know, we have multiple observability projects in the CNCF, some of them in the sandbox, incubating, and graduated stages, and they sometimes ask us for help, especially when they want to move to a different graduation stage, and we help them get there.
So let's talk about some accomplishments. This year we released the observability whitepaper. It's essentially an introduction to observability as we know it in the CNCF; that's the first 1.0 version. Over 30 people helped write content or review it, so it was a very collaborative effort. Make sure you go to the GitHub repo and read it. And there is still a lot to do, right? It of course has gaps, things we would like to expand on, so this is a call to action for you: go to our GitHub repo, check the open issues, assign yourself, and provide content for us to review. That way we can expand it and release version 2.0, which is in progress. If you read through the paper and have ideas for content you're missing, let us know and we'll add it there, and maybe we can collaborate on it together, so everyone knows more about observability as it evolves.

Second thing: we love knowledge sharing, especially in video format. In our TAG community meetings we sometimes host presentations, so feel free to check our YouTube channel, where we've talked about various projects. For example, in the last quarter we had talks about accessibility, Prometheus optimizations, OpenCost, OpenTelemetry, K8sGPT, and GUAC (Graph for Understanding Artifact Composition). Lots of nice stuff, sometimes new to me as well, so feel free to join and learn, and especially feel free to share your own knowledge about the very wide spectrum of observability. We already have three more talks scheduled very soon: native histograms by Björn, continuous profiling by Frederic from Polar Signals, and one more by Wesley. So really, let us know if you want to speak; we want to host you, we want you to share your knowledge, and we will of course ask you questions as well.

We also collaborate with other TAGs and working groups. Recently we sponsored the very important Cloud Native AI working group, and they were super fast in delivering a really comprehensive whitepaper as well, which is available on the CNCF page. Side note: we should probably make our observability whitepaper available there too; that's an action item for me from this week. We should also mention that we review projects when they come to the CNCF; for example, we took a look at OpenLLMetry, K8sGPT, and the Logging Operator. It's amazing to see the space grow.

And finally, workgroups. I mentioned two workgroups. The first is Observe Kubernetes. It's essentially our idea for sharing knowledge in an interactive way. We have a paper, we have documentation, but we would also like to show you a demo or a tutorial that spins up some of the biggest observability projects on Kubernetes and shows how you can observe a sample application, the online boutique. Thank you to everyone who worked on this, and definitely a shout-out to Ken and Hendrik, and probably more people, sorry. Right now we have a demo that essentially contains metrics, logging, and traces, but we want to convert that into tutorials.
So if you want to help, join this workgroup, and we can build a tutorial that you can reuse as well in your own presentations or your internal training. I think it's a nice project. And Vijay will tell you more about the exciting stuff on querying.

Thank you, Bartek. Query language standardization. As everyone is aware, a few years ago, when things on the ingest side were extremely fragmented, like-minded people came together around OpenTelemetry. They put out a specification and a means for everything to converge, to the point where we now have one SDK per language for all signals, and it is basically the de facto standard. A few months ago, Chris Larsen from Netflix and I had a conversation with the TAG about how we could do something similar for the query side, because the fragmentation that existed on the ingest side several years ago is still the case on the query side. There are so many languages out there, and different preferences were baked into each of them. So we are setting out on a journey to figure out what these languages are, why they were designed the way they were, and what commonalities exist, and then to suggest something that could be a standard on the query side as well, so that observability as a practice has a single way to instrument and a single way to query.

That being said, this past year we have actively been surveying several open source projects and vendor products on how their languages have been built out. Shout-out to everyone who helped out, spent time putting together slides, met with the working group, explained everything about their language, and answered all the questions that we had. Some of them are on the screen, and we still have a few more to go. If you are the creator of an open source project that has its own query language for observability, or a vendor that has done the same for your products, please do reach out to us. We'd be happy to interview you and collect the valuable feedback you might have on how your language came about. The next step is to empower end users to tell us about their observability journey: what are the ways in which they query the observability platforms they consume right now, which pillars they use, and how they consume them for the various use cases. That lets us identify patterns, the things that are very important to end users and the things that are not available today but that they really care about, so that we can finally go about describing the ideal language we could potentially propose. We welcome contributions both on the creator side and on the end user side. You can find us in the Slack channel mentioned on the slides, and we meet on the second and fourth Tuesdays at 9 a.m. PST.

Okay, so another topic that we want to cover today. You now know all about our workgroups, but this is an area which has picked up steam amazingly fast in the last year, I would say: observability for LLMs, as well as how to use LLMs for observability. So in this diagram, as you see, LLMs can be used in observability to act, predict, suggest, and assist. Similarly, LLMs themselves can be observed, monitored, and analyzed with the observability frameworks we already have.
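To make the fragmentation on the query side concrete before we move on: the same question, roughly "how many errors is the checkout service producing right now?", has to be phrased once per backend dialect. A minimal sketch, assuming a local Prometheus and Loki and hypothetical metric and label names (http_requests_total, app="checkout"):

```python
# Minimal sketch of query-side fragmentation: one question, two dialects.
# Assumes Prometheus on :9090 and Loki on :3100; metric and label names are
# hypothetical and will differ in your environment.
import requests

# PromQL, via Prometheus' instant-query HTTP API
prom = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": 'sum(rate(http_requests_total{app="checkout", status=~"5.."}[5m]))'},
)

# LogQL, via Loki's query HTTP API, asking (roughly) the same thing of the logs
loki = requests.get(
    "http://localhost:3100/loki/api/v1/query",
    params={"query": 'sum(rate({app="checkout"} |= "error" [5m]))'},
)

print("PromQL result:", prom.json()["data"]["result"])
print("LogQL result: ", loki.json()["data"]["result"])
```

Two backends, two mental models, and two sets of recording and alerting rules to maintain; the standardization effort is about shrinking exactly that surface.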
So again, observability for LLMs means different things to different people. From an observability perspective, LLMs can help a lot with root cause analysis. And I think we have the next slide here where we can talk a little bit about the layers, but, whoops. Go forward. Okay, thank you. Let's try it once more. Okay, going back to the first slide. I just wanted to complete my thought there: LLMs in observability are typically used today for root cause triaging in systems where you have applications deployed in production. You also use them for analysis, because they have already come into the MLOps pipelines, where operations teams now use LLMs to coalesce, in real time, all the alerts and the telemetry data being generated. Similarly, LLM-based assistants such as chatbots are being used for querying this data. So if you're running an application globally across six regions and getting telemetry data from all six of them, an LLM is often used nowadays to consume that incoming telemetry and let you query it for triaging and analysis. That's a very basic case of how LLMs are used, but it has already rolled into ops today, and this is even without observability frameworks using LLMs actively and directly. That also points to the opportunity to adapt existing open source collection frameworks, such as OpenTelemetry, the Prometheus agent, or other agent components that exist in the CNCF ecosystem or elsewhere, to leverage LLMs for exactly this: consuming telemetry data, understanding it, pre-aggregating it, and providing standardized signal analysis as a result.

Moving on, LLMs are also a new type of asset, or workload if you will, that we need to observe. We were talking about AI-enabled applications in a previous talk I gave: different models are being introduced, some of them weekly, and most of them are black boxes today, black boxes sometimes even to the application. But there are also models where the weights and the other parameters are all defined and actually published, and you do have some metrics released or available from each layer. So as applications are built with LLMs, observing these LLMs also becomes part of understanding their behavior. Applications, which traditionally included code and models, now also include small models or large LLMs. And that again leads back to the idea that for observability you are also looking at new types of hardware used for AI applications, such as GPUs, accelerators, CPUs, and other kinds of specialized chipsets, as well as model inferencing and training pipelines. These are new assets coming into place, and on the observability side, that framework of instrumentation and analysis needs to be built out in the existing application observability projects that exist today and are very widely used across the industry.
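As a rough illustration of what that instrumentation can look like today, here is a minimal sketch using the OpenTelemetry Python API. It assumes a tracer and meter provider are already configured elsewhere; call_model() and its total_tokens field are hypothetical stand-ins for whatever inference client you actually use, and the attribute names are ad hoc rather than settled semantic conventions.

```python
# Minimal sketch: wrapping an LLM inference call with OpenTelemetry.
# Assumes the OpenTelemetry Python SDK with tracer/meter providers configured
# elsewhere; call_model() and response.total_tokens are hypothetical.
import time
from opentelemetry import trace, metrics

tracer = trace.get_tracer("llm-observability-sketch")
meter = metrics.get_meter("llm-observability-sketch")

latency = meter.create_histogram(
    "llm.request.duration", unit="s", description="End-to-end inference latency"
)
tokens = meter.create_counter(
    "llm.tokens.total", description="Tokens consumed per request"
)

def observed_completion(prompt: str, model: str = "example-model"):
    # The span makes the inference call visible in distributed traces,
    # next to the application spans that triggered it.
    with tracer.start_as_current_span("llm.completion") as span:
        span.set_attribute("llm.model", model)
        start = time.monotonic()
        response = call_model(prompt, model=model)  # hypothetical inference client
        latency.record(time.monotonic() - start, {"llm.model": model})
        tokens.add(response.total_tokens, {"llm.model": model})  # hypothetical field
        span.set_attribute("llm.total_tokens", response.total_tokens)
        return response
```

The same pattern extends to GPU and accelerator utilization, although those metrics typically come from separate exporters rather than from the application itself.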
So moving on, some of the areas where LLM-based analysis of observability data is already being used are anomaly detection, triage analysis, and distributed tracing comprehension (where, if you have hundreds of spans in a particular transaction, you need to understand what the general behavior is going to look like), as well as data quality, root cause analysis, and suggestions for remediation steps. Did you want to talk a little bit here about RAG and why RAG is not enough?

Don't quote me on this, but generally the community around LLMs, especially around observability and making decisions on top of the data you have available, is complaining about RAG, retrieval-augmented generation, which essentially relies on vector databases. My point is that if you want to innovate, if you want to know what's next, we need better ways of making sure the LLM has context about your live deployments. Right now, the current solutions ask the model to make or suggest a decision based on a limited context of maybe a thousand or ten thousand tokens, into which you can paste maybe hundreds of YAMLs of your deployments, but which literally cannot model your whole architecture yet. RAG is essentially a way to get this data to the LLM up front, but it's still not enough. So my point is that we are still looking for ways to make this better. Please innovate; it's not like somebody will do this for you. There is a lot of room for improvement. But essentially, this is what we are looking for in the future of combining LLMs with observability data.

Good. So needless to say, the reason we highlighted this space is that it's evolving very fast, whether that's in the toolchains that exist today or in new toolchains that are coming in specifically for understanding LLMs: being able to monitor, analyze, visualize, and correlate across the board with the other layers of the system, to get the more holistic understanding you usually want from observability. There is a fair bit of work to be done there, and that's the opportunity: if you're an observability engineer or an ML engineer, you really can get involved in building out some of these features, on existing projects or maybe even by starting a new project where you specialize in a certain set of models that you understand well and build the observability instrumentation for them.

So moving on: we usually do this every year, and these are some of the trends we see across the observability space at this point in time. Across the industry, whether we are using LLMs in our applications or not, cost continues to be a very pervasive theme, where understanding the cost and performance of resources is super important. Organizations are running large-scale systems and applications on cloud infrastructure, so cost optimization has become an essential part of observability.
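A back-of-the-envelope sketch of that cost pressure, with every number assumed purely for illustration (bytes per sample and series counts vary a lot between backends and workloads):

```python
# Back-of-envelope sketch: how cardinality and retention drive metrics cost.
# All figures below are assumptions for illustration, not measurements.
active_series     = 2_000_000   # distinct label combinations being scraped
scrape_interval_s = 15
bytes_per_sample  = 2           # assumed average for a compressed TSDB
retention_days    = 30

samples_per_day = active_series * (86_400 / scrape_interval_s)
gb_per_day = samples_per_day * bytes_per_sample / 1e9
gb_retained = gb_per_day * retention_days

print(f"{samples_per_day:,.0f} samples/day")
print(f"~{gb_per_day:.1f} GB ingested per day, ~{gb_retained:.0f} GB held at {retention_days}d retention")

# Dropping or pre-aggregating away one high-cardinality label that, say,
# halves the series count also halves every number above.
```

The same shape of calculation applies to traces and logs, just with bytes per span or per log line instead of bytes per sample.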
Observability platform costs, from pre-aggregation and sampling, that is, the cost of the telemetry data itself, are an area of continuous improvement. What does pre-aggregation do for you? Can you reduce the cardinality of the data you're sending over the wire, because it costs money at the end of the day? Do you turn on tracing for 30 seconds to get 100% of traces, or do you do it for a minute? It costs money. So this is a very pervasive theme in the industry in terms of continuing to optimize. There are a couple of open source projects within the observability domain in the CNCF itself, Kubecost and OpenCost, where Kubecost is based on OpenCost, and these have been used as foundational components to track cost. But as you enter the world of smart applications, that dimension changes, because how do you measure cost for models? Is that something that has already been defined? These are evolving areas. There's a fair bit of work being done by hardware vendors to provide some data on the resource utilization you are driving, but it's not adequate, because there are lots of other moving parts in the telemetry data your applications generate, and that data costs money too: shipping it over the wire, storing it, how long you store it for, and whether you actually still look at it seven days after you no longer need it. These are considerations you need to keep in mind, and there's a fair bit of work happening in the industry, from end users as well as from open source engineers on the projects, where some of this is being thought about.

The other area where a fair bit of work is happening is end-to-end observability. This is not a solved problem yet. Many pipelines have been proposed and many reference architectures exist, but you still cannot say, hey, I'm going to just turn this on and everything will work end to end, letting me see my edge networks, my edge devices, client applications, server-side applications, infrastructure, models, and any other data I want to see. It's not there yet, even after all the work we've all done, whether that's in OpenTelemetry or elsewhere, whether that's in tracing or logging. So this is another opportunity where a lot of work is ongoing, and our world also becomes more complex as we introduce a new generation of applications: smart applications. OpenTelemetry is moving us in the right direction, because, as you can see, the project initially started by converging ingestion of the different telemetry data types under one umbrella: it started with tracing, then metrics, then logging, and today profiling has also become the fourth signal on the project for ingestion. So you continue to see that convergence happening in the ingestion space. Now, why is profiling important?
Because, believe it or not, in the world of understanding performance and resource utilization for models, profiling is used quite a bit to understand those layers: what the latency and performance of each layer is when a model is being used in training or inference pipelines. So it's an interesting time, where you're seeing that convergence happening and existing telemetry signal types being used for observing these new assets, if you will.

And last but not least, I just wanted to call out that multi-tenancy is another area where a significant amount of work is ongoing. What does that mean? For the large-scale systems you're building, you would like to have multiple customers leveraging the same common infrastructure, from a cost perspective. Hence multi-tenancy, even in the observability data space, becomes a thing, because you do want to be able to access data and correlate across multiple tenants, which may be different namespaces belonging to a single customer. A single customer could be, for example, your finance organization running 20 applications, sending telemetry for those 20 applications into 20 tenants, and they want to be able to correlate their observability data and see, hey, this is the behavior of our systems at any given point in time. So multi-tenancy is becoming very important at the scale of the types of applications and the cardinality of data that systems generate today.

So with that said, I'll hand it over to Bartek. Do you want to? Why not, I would love to. So, last slide: how you can get involved. Make sure to participate in our discussions. You can do that by joining our calls, which we have twice per month. Just add a topic to the agenda, or go to our Slack channel and let us know what you would like to chat about and maybe how to frame it. Feel free to ask about anything. We have people who are new to the community asking questions related to the Prometheus project, OpenTelemetry, Thanos, or any other project, who don't know where to start. Feel free to ask them there; it's fine, we'll direct you to the correct people. And this is super important: don't be shy, we are here for you. You can use our mailing list, but I think Slack is really good enough. And really, share your insights and present a topic, maybe about a project you know, maybe about problems you have, even if you didn't solve them yet, or incidents you've had; we would love to have you. So please join us, and thank you for coming today. If no one stops us, maybe you can ask some questions. Let me grab a microphone. Anyone? Any questions? Yes.

Hi. Thank you for the talk. I'm not familiar with the LLM things. You talked about models, LLMs for observability. Where can we find such a thing, a model or something available to test on our own data, or do you have to buy from a vendor?

I think it's a good question: where do I find a model that I could use for observability data, right?
Even out of the box, there are several simple models available which are open source and can be used. You can even use the existing ChatGPT or Gemini or other services out of the box. But typically you can configure this in terms of downloading models; the references I would give are probably Hugging Face models, TensorFlow models, and PyTorch models, which are readily available. It depends on the number of parameters you're interested in and where you're running these models, because the size grows proportionally with large models, large LLMs. So definitely, these are some of the places where you can download models and run them. Amazing. Any other question?

Hi, yeah, thank you for the talk, it was very good. I liked the LLM part as well, even though that's not something being utilized by us at this point. I think we are still at the point where we have the challenge of balancing resource consumption and allocation against the granularity of the data you get out of it. You increase your cardinality, you increase the number of series, and the resource usage, particularly memory, explodes in Prometheus and in Thanos. Do you have any guidelines on how to deal with that problem, how to find the right balance?

Bartek, did you want to answer that one? I think it's a good question. Ideally we should have a solution where you specify: I want to use only, let's say, two terabytes of memory on my cluster and nothing more, and essentially do as much as possible within that. Right now it's a bit more manual, so I really encourage you to make it data-driven. Measure, benchmark, and put some portion of observability into production, slowly increasing it: agree on a certain cardinality of metrics, agree on a certain volume of logs and traces, go to the point where it's minimally useful, measure the cost of it, and start from there. If it's too expensive for you, you have to reduce some of it: maybe focus more on metrics and less on logs, maybe the opposite, maybe reduce the cardinality of metrics and use logs more. This is what I would recommend. I wish there was a more automatic solution that would help you analyze all of this. It's a good idea.

One other thing you can probably do is analyze your query logs to see whether the labels that have high cardinality are things you're actually querying. If not, some of the projects have ways to do pre-aggregation, or you can use the OpenTelemetry Collector to do aggregations as well, so that before the data hits the TSDB you can pre-aggregate and store it, which is a lot cheaper.

Yeah, in several of our solutions we definitely pre-aggregate a lot, and we are very conscious of capacity sizing ahead of time, where you size storage as well as data traffic. Going in with some initial parameters for these numbers is always good. And the percentage of traffic or data you're sending is always proportional to the amount of aggregation you're doing.
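A minimal sketch of that query-log idea, assuming a JSON-lines query log in the shape Prometheus writes (each entry carrying the query string under params.query); adapt the path and field names to your own backend:

```python
# Minimal sketch: which labels do people actually filter on in their queries?
# Assumes a JSON-lines query log (e.g. Prometheus' query_log_file) where each
# entry carries the query string under params.query.
import json
import re
from collections import Counter

LABEL_MATCHER = re.compile(r'([a-zA-Z_][a-zA-Z0-9_]*)\s*(=~|!~|!=|=)\s*"')

used_labels = Counter()
with open("query.log") as f:
    for line in f:
        try:
            query = json.loads(line)["params"]["query"]
        except (json.JSONDecodeError, KeyError):
            continue
        used_labels.update(m.group(1) for m in LABEL_MATCHER.finditer(query))

# Labels that never show up here but still carry huge value counts in the TSDB
# are the first candidates for dropping at scrape time or aggregating away
# in the collector.
for label, count in used_labels.most_common(20):
    print(f"{label}: {count}")
```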
So you can do multi-level aggregation, depending on the type of metric or on the time period, right?

Thanks for the talk, and it's nice to see how much the TAG has progressed in a year. On the first question: we already have an OpenTelemetry demo application in the OpenTelemetry repository, and another one here. But a starting point for LLMs, a demo application, would be great, something we could each build on.

Yeah, actually, we were discussing this for the OpenTelemetry demo, and we do plan to introduce a workflow for running a simple model and being able to trace it, instrument it, and publish some metrics. It would still be a playground app, but it would nonetheless give you insight into how you would do the instrumentation, what kinds of metrics you should look at for models, and also what translates into SLAs and SLOs for model performance, because that's another new area emerging from this.

That's great news. And the second part: I have heard a lot about cost throughout the conference; maybe energy consumption could also get labels, the way refrigerators or kitchen equipment have those efficiency labels. That might be nice to look at from an energy perspective.

Yeah, in fact, there has been a conversation about TAG Environmental Sustainability working with TAG Observability to define a new sustainability taxonomy of labels, so that it could be used as a standardized starting point for energy and performance observability.

Great presentations. I love the end user surveys, so keep up with those; I would really like to see more of them coming from the public sector, and we would really like to participate more with our experience. Documentation is really a top concern: missing documentation, and scattered.

On specific projects? OpenTelemetry in general. Is it on the website, or do you need to dig through all of the repos? OpenTelemetry, for example, which is a very large project with 80-plus repos, is in the process of consolidating all the documentation on the docs site, so you don't have to go to every repo; you can find it at opentelemetry.io/docs. Prometheus is the same way: the documentation is centralized, and most large projects tend to do that. But if you see any areas where a project could do better, definitely please let us know. We talk with the projects all the time, so we can definitely work together.

That's great. And the second thing would be more use case studies: how certain companies in different sectors implemented all the signals and made use of them across their organizations, especially when they have multiple teams. I have 200 teams and 1,000 developers, so making sense of all the telemetry is quite a challenge.

Yes, absolutely. And to that point, I would say that the new End User TAB that is coming in now is actually tasked with that initiative of continuing to gather technical case studies of observability implementations, which will be published by the CNCF. But that's a new initiative that is just starting off. And honestly, we could have a section about observability use cases and dig through the existing KubeCon talks to collect them in a centralized place. That's a good idea, I will note it down. Last two questions, maybe. Thanks again for the talk.
So my question is: what are some of the best practices for using LLMs for observability at scale?

If you want to take that, I can take the second iteration of it. Okay. I mean, that's the problem; that's what we are missing here. As in the good question earlier, there are no observability-specific models, so you have to use a general model, ask it generic questions, and provide as much context as possible. The best advice I have right now is to choose the largest model you can run with whatever compute power you have available, and really try to narrow the context: what exactly is happening in your cluster, what exact scale problems you have, and go from there. But we are missing this; we don't have a good answer to these questions yet.

Yeah, it's a good question, because at this point in time there is no standardized suggestion we can give out of the box. However, we do plan to collate that over this year, because there are several models at this point that are used for observability. It depends: if you're doing anomaly detection, for example, there are specific models that are commonly used, both statistical and ML models. We can certainly catalog that in the TAG Observability documentation, but right now there is no consolidated documentation available. Another to-do. Thank you.

Last question. I was following your work on standardizing query languages, and I have also heard opinions that since LLMs have become so powerful, it doesn't make much sense to standardize, because learning is no longer a problem: we can generate everything, so there is no need to learn each individual language. On the other hand, each language is tailored to its actual database, so it will be more efficient, and any average standardized query language will not be as good as the existing ones. Can you give any arguments for why that's wrong and why we still need this?

So I think it's a very good point in time to ask, because we started the query language group just a few months ago, and it's really awesome to see LLMs pick up steam and actually get used for MLOps; ML observability is also being used. So to give you an answer: yes, it's already being done, where multiple observability query languages are fed to an LLM, and a single human query is used to get whatever data you want from whichever telemetry type. Now, the next step is that we could provide a reference implementation or reference architecture from the query language group: hey, you can take this kind of model with these specifications, and have a demo application available for multiple query languages that you can download and use. That's the next step.

I can add one point to that. So yes, it is possible for each project or provider to have a model that can understand their backend. But from an end user perspective, it's not just dashboards or ad hoc queries that you care about; a big portion of your observability is recording rules and alerting rules and whatnot. Would you put an LLM in front of that? It would probably not be very cost-efficient.
But if you have standardization instead, you promote neutrality in terms of projects and vendors implementing their own thing, and you're able to define your rules and alerts in one way and use them across any of these projects. And, as I already mentioned, it also gives us the ability to say: okay, you can have one base model for this standardized query language that you fine-tune, and offer that as a mechanism for others to add weights and things like that. So there are definitely a lot of advantages to having a standardized language, similar to what we have in the database world.

And I would like to add two things. One is that when I was walking through the booths today, or yesterday, there were two vendors who offer logging, tracing, and metrics backends; they use PromQL for metrics, but they don't have any query language for logging and traces. And I asked: why not use something that exists now, or create your own? And the answer was: we're not doing that; there is a query standardization effort that collaborates with OpenTelemetry, they will come up with something great, people will definitely adopt it, so we're just waiting; we're not going to implement it twice. So that's the first thing: people are waiting for somebody to help define a standard language; they don't want to do it themselves. The second thing is that many of those small, per-project query languages are not in the general models, and there's literally a statement from some people: if ChatGPT doesn't know about a certain language, that language doesn't exist for me. That's where we are at this point. This could change; many of those smaller projects or vendors could ship their own models to help with this problem. But right now, a single language helps, so that you can have one model that understands that language.

And with that, really, thank you for so many questions. Thank you again.