Hello everyone. For those who are still getting seated, there are plenty of seats up front. There are a few right here, and legitimately more than it looks, so if you shuffle forward you can find a place to sit. Please do. While you do that, we'll introduce ourselves. My name is Morgan McLean. I'm a director of product management at Splunk and one of the co-founders of OpenTelemetry, and I still spend a huge amount of my time on the project every single day.

I'm Dan Jaglowski. I'm a principal software developer at observIQ and a maintainer of the OpenTelemetry Collector. I've been developing observability software for over a decade and contributing to OpenTelemetry for the last three years or so.

And we're talking about logs today. OpenTelemetry started in 2019. We added traces, then we added metrics, and today we made an announcement that we've added logs. Logs have actually been present in OpenTelemetry in some form for roughly the last two years, but a couple of months ago things started reaching maturity, and so we declared 1.0 for various components of logging in OpenTelemetry. We had a big celebration and a big announcement about that this morning, but in this session we get to dive into the details of what that means.

If you're not familiar with it, OpenTelemetry is a collection of tools, APIs, and software development kits used to instrument, generate, and export telemetry data that helps you analyze your software's performance and behavior. In practical terms, it captures traces, metrics, logs, and other types of data from your applications and your infrastructure, and it sends them to backends for storage and analysis. Part of the beauty of OpenTelemetry is that you can send that data effectively anywhere you want, which gives you a lot of flexibility and ownership over your own data.

There are many components of OpenTelemetry, but probably the most popular is the OpenTelemetry Collector. This is typically deployed as an agent on each host, so on Windows or Linux, or onto Kubernetes clusters. The collector does a few things. It captures logs, which is what we're talking about today. It also captures system metrics, so host metrics like CPU and memory consumption. It also captures metrics from third-party applications you might be running: databases, message queues, things where you didn't write the code but you deployed it, and there are common metrics we know how to capture for them. Importantly, the collector can also receive data from sources beyond that. The most common of these is OpenTelemetry's language instrumentation: the SDKs, which OpenTelemetry has for every common programming language, and the automatic instrumentation agents. Those send data to the collector, and the collector then sends that data out to wherever you want it to go. You can also pre-process data in the collector. We won't go into detail on the non-logging use cases today, but you can use the collector to reshape and change the data and its metadata, and it has a ton of power and flexibility there.

And it's important to go back in time a bit to talk about why we're even discussing logs here.
OpenTelemetry was announced in 2019, as I mentioned earlier. Originally, the vision for OpenTelemetry was to add support for distributed traces and for metrics. Tracing was particularly important: the only tracing solutions back then for capturing data were tied to particular backends. They were either vendor specific, where you'd sign up for some vendor's APM software and they would give you an agent that captured traces but only worked with their backend, or they were open source solutions of the time, Jaeger and Zipkin, which had agents or components that would send traces back to those specific backends. OpenTelemetry really democratized this: it provided an independent way to capture distributed traces that you could send to any backend. Last year, that vision was extended to metrics as well.

For logs, though, we hit a bit of a question, because there are actually many really good solutions for capturing logs that are independent of backends today. Fluentd and Fluent Bit come to mind, but there are plenty of others as well. So logging is an area where there were already solutions that were relatively good and relatively well adopted. We wanted to add logs to OpenTelemetry so that people would have a single place to process and capture their logs: one agent, one configuration. The benefits of that are fairly self-evident. But we also wanted to go beyond that. If we were going to tackle logs, we'd have to do more in OpenTelemetry than what is otherwise done in the industry.

So we went around and asked various large end users, from hyperscale cloud companies all the way to just large companies running software at scale, and we also met with vendors of logging products. There were a few problems they pointed out that even the best logging solutions that existed then, and this was around 2020, still didn't solve.

The first, and probably the most prominent, was enforcing consistent semantics and metadata. Logs, in contrast to metrics and traces, are almost always written by a human at some point. A software developer writes a logging statement, that statement gets picked up and parsed by a logging agent, and it gets sent to a backend. We all work in fairly large organizations with different people writing logging statements, and there are many different standards and patterns applied to the logs that get authored. This is problematic when you actually want to analyze those logs. To do great aggregate analysis, you want metadata, from the most basic things like timestamps to more advanced metadata like HTTP semantics for a request, recorded in a way that you can compare across logs from different sources. This is often very challenging, and it leads companies whose logs are authored differently to do one of three things: they don't reconcile the formats at all, so they simply lose logs in their queries when the fields don't match; they pre-process their logs, spending a ton of engineering time and a ton of CPU and memory reshaping logs after they're captured to put them into the right format; or they write the world's most disgusting, nasty logging queries that take every single variant of the metadata into account. These are workarounds, but none of them is enviable. None of them is a good, healthy place to be.
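To make that first problem concrete, here is a contrived Java illustration (mine, not from the talk; the services, routes, and field names are made up). Three services record the same fact, an HTTP 500, in three different shapes:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class InconsistentLogs {
    private static final Logger logger = LogManager.getLogger(InconsistentLogs.class);

    public static void main(String[] args) {
        // Service A: key=value text
        logger.error("request failed status=500 path=/checkout");
        // Service B: different key, different layout
        logger.error("POST /checkout -> httpCode: 500");
        // Service C: structured JSON, but yet another field name
        logger.error("{\"msg\":\"request failed\",\"http_status\":500,\"route\":\"/checkout\"}");
        // A query for "all 500s across services" now has to match all three
        // shapes, or the logs have to be reshaped after the fact.
    }
}
```

Semantic conventions, which come up later in the talk, attack this by standardizing the attribute name used for each piece of metadata.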
The other challenge they pointed out is that log capture is often not very performant. I'm not talking here about the cost of log processing in the backend, but literally the CPU and memory impact of most logging agents, which is higher than most people think it should be. This isn't to begrudge those logging agents; they're generally very well written. But you have agents that are parsing human-readable text, and because logs are free-form they're doing a fair amount of work, and that simply costs a lot of CPU and memory. There were other challenges people pointed out. Logs aren't typically correlated with traces and spans, or at least they weren't when we were having these conversations. I think that gap has generally been bridged since then, but it's another thing that OpenTelemetry does nicely out of the box.

Awesome. Thanks, Morgan. I'm going to start with a quick overview of what the rest of this talk will cover. First I'll talk very briefly about the evolution of logging, starting from its very simple origins all the way to how OpenTelemetry integrates with modern logging frameworks. Morgan will tell you about our representation of logs, including the data model and semantic conventions. Then I'll talk about the OpenTelemetry Collector, specifically how it can ingest and parse logs from many different sources. And finally we'll do a quick demo and wrap up with Q&A.

Logs have been around for a very, very long time. If you sit down to learn any programming language, you immediately write a logging program. You might not think of Hello World as a logging program or a telemetry program, but it emits a signal that communicates what is happening within the application, and that's pretty much what telemetry is all about. From there, most aspiring developers quickly learn the value of dropping these kinds of print statements throughout their code. You can see which parts of your code are executing, where your errors are happening, what values your variables are taking, and so on. But this approach has many limitations, and to address those, dedicated logging libraries were developed.

Most languages these days have a small number of well-established and widely used logging libraries, and some are more sophisticated than others. On the simpler side of the spectrum, you have ones that pretty much just add a timestamp and maybe a newline to each log. At that level, you're still just emitting information with very minimal structure. On the other end of the spectrum, you have modern logging frameworks, and these provide a number of other benefits. You get your timestamp, but they typically impose some notion of a severity scale. They encourage you to structure the data, to use key-value pairs so that you can index and query on the keys. And they may include additional contextual information, either automatically or optionally provided by you; a simple example is a logger name, which gives you some context about where the log came from within your code base. The other important thing these modern logging libraries do is let you configure where the logs will go independently from the actual logging statements in your code, which describe what information you think is relevant to capture. Let's look at a concrete example. Log4j is popular in the Java world.
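As a rough sketch of what that looks like (my example, not the slide from the talk; the class and fields are made up), here is idiomatic Log4j 2 usage. The framework, not the developer, supplies the timestamp, the severity, and the logger name, and key-value context can ride along with every record:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.ThreadContext;

public class CheckoutService {
    // The logger name (this class) is recorded with every log record.
    private static final Logger logger = LogManager.getLogger(CheckoutService.class);

    void placeOrder(String orderId, int itemCount) {
        // Key-value context attached to every record emitted on this thread.
        ThreadContext.put("order.id", orderId);
        try {
            // The framework adds the timestamp, the INFO severity, and the logger name.
            logger.info("Order accepted with {} items", itemCount);
            if (itemCount == 0) {
                logger.warn("Order contained no items");
            }
        } finally {
            ThreadContext.remove("order.id");
        }
    }

    public static void main(String[] args) {
        new CheckoutService().placeOrder("order-123", 2);
    }
}
```

Where these records actually end up, console, file, or somewhere else entirely, is configured separately from the code, which is exactly the appender mechanism described next.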
Log4j has a concept called appenders, which let you say where you'd like your logs to go. This example is very simple: we're just formatting a log into a line of text and printing it to standard out, so it's just a fancy Hello World. But because we're using this logging framework, we can very easily reconfigure where the logs will go just by changing the appender or adding another one. So here we have an OpenTelemetry appender. It converts the logs into OpenTelemetry's representation and lets you transmit them directly to an OpenTelemetry Collector, or to a backend, using OTLP. Ideally, to make this switch you don't need to write any new code, aside from a little bit of configuration.

These appenders are written using what we call the logs bridge API and SDK. We have these available in several languages today, and the rest of the languages supported by OpenTelemetry should follow fairly quickly. They make the process of developing appenders easier and help ensure that the appender adheres to the OpenTelemetry specification. We have appenders available today for several popular logging frameworks, but the general goal is that most popular frameworks in most popular languages will eventually have an OpenTelemetry appender. And by the way, if you're the kind of person who's inclined to write an appender for a framework you're using, please consider contributing it back to the OpenTelemetry project. We'd love to have it.

Taking a step back, this approach we've taken with logs is different from what we did with metrics and tracing instrumentation, and the reason is basically these logging frameworks. They are ubiquitous and mature, and the appender pattern gives us everything we need. We can apply a consistent set of semantics across all telemetry types, and we can consistently correlate logs with the other data types by annotating the same contextual information: the resource that emitted the signal and, in many cases, the transaction that triggered the telemetry. We're also achieving substantial performance benefits this way versus the traditional approach, primarily because we don't have to re-ingest and parse the logs. They originate in OpenTelemetry's format, and they stay in it. I'll talk more about the collector in a minute, but Morgan's going to take you through the data model.

Yeah. There are actually two things to talk about here. OpenTelemetry has its core data model. This is the data model that is present on every signal OpenTelemetry captures. Most of the fields are required; it is a nice, strongly typed structure that is always there. It includes the fields you would expect for a metric, a trace, a log, and, in the future, profiles: timestamps; trace and span IDs, if a log is going to be correlated with a trace or span; and resource information. Resource information is probably the most critical part of this. It defines the host or the cluster, the basic information about where a log or other signal was captured from. It also describes the service information, which is really important when you want to correlate a log with a trace, with a metric, or with something else. This is all the standard OpenTelemetry data model you have seen before; there are no major changes here. What's really being added, or extended, is the semantics.
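To make those core fields concrete, here is a hedged Java sketch using the logs bridge API (the scope name and attribute are invented, and note that the bridge API is intended for appender and bridge authors rather than day-to-day application logging):

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.common.AttributeKey;
import io.opentelemetry.api.logs.Logger;
import io.opentelemetry.api.logs.Severity;

public class LogsBridgeSketch {
    public static void main(String[] args) {
        // The logs bridge API: appenders use this to emit records in OTel's own model.
        Logger otelLogger = GlobalOpenTelemetry.get()
                .getLogsBridge()
                .get("com.example.checkout"); // instrumentation scope name (made up)

        otelLogger.logRecordBuilder()
                .setSeverity(Severity.INFO)                                 // strongly typed severity
                .setBody("Order accepted")                                  // human-readable body
                .setAttribute(AttributeKey.stringKey("order.id"), "12345")  // structured attribute
                .emit();
        // The SDK fills in the timestamp; resource attributes (host, service, and so on)
        // come from the SDK configuration; and if a span is active, its trace and span
        // IDs are taken from the current context, so the record is correlated with the trace.
    }
}
```

An appender such as the Log4j one does essentially this for every record your existing logging statements produce; the attribute in the sketch is where the semantic conventions come in.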
So these are fields that are very particular to the situation in which a signal is captured. They are not specific to logs, but they're probably most relevant for logs. If you think of an HTTP request that gets captured using OpenTelemetry's now-1.0 HTTP semantic conventions, you would capture things like the status code, the latency, and other core information about that HTTP request, and that gets encoded on the log. This is also strongly typed, and it's typically binary encoded when the log is transmitted, so it can be read very performantly. And, critically, this guarantees the metadata will be the same, because it forces you to record it in the same way, and it is checked and tested throughout the pipeline.

All right. Let's talk about the collector. If you're not familiar, it's a standalone process that can ingest metrics, traces, and logs from many different sources, process those signals in many different ways, and ultimately forward them to the backend of your choice. Most users of OpenTelemetry will find the collector useful in one way or another. Like the rest of the project, the collector first supported traces and then metrics, but the intention from early on was to natively support the ingestion and parsing of logs. Basically, we want a single process that handles all types of telemetry. This was largely bootstrapped by a donation from observIQ. We had developed a standalone log agent with a broad set of ingestion and parsing capabilities, and in 2021 we agreed to donate that agent to OpenTelemetry, integrate it directly into the collector, and continue development from there. Over the last three years, with a lot of great contributions from the community and a lot of good feedback, that code base has been integrated into the collector, refined, and hardened, and at this point many organizations actually take their first steps into the OpenTelemetry ecosystem by deploying the collector as a traditional log agent first and then layering on additional capabilities from there.

Let's go one layer deeper on what the collector can actually do. We've talked about instrumentation, so just to reiterate: if you're using any kind of OpenTelemetry instrumentation, including OpenTelemetry log appenders, you can very easily forward your data to the collector using OTLP, our native protocol. If you are using another log agent, such as Fluent Bit, you can very easily bring that data into the OpenTelemetry ecosystem: just send it to the collector, it will translate it into OpenTelemetry's representation, and then you can work with it natively from there. And as I mentioned, we now have many capabilities in the collector for reading from other data sources. We can read application logs from files. We can read system logs from journald or the Windows Event Log. We support syslog and some other options as well. All of this is built directly into the collector, so it runs within a single process and there are no external dependencies.

One of the challenges with ingesting logs from traditional sources is that the representation varies so widely. Some logs are well-defined JSON structures, others are pretty much just free-form text, and we see everything in between. So in order to support these in the collector, in order to make it possible to interpret logs into the OpenTelemetry data model no matter which format you're starting from, we need a very flexible solution.
So to solve this, we have a very broad set of granular capabilities which you can compose as necessary: extract individual values from free-form text, manipulate and interpret and ultimately parse those values into the right data types, and then assign them to the correct fields within the OpenTelemetry data model. On top of this shared set of capabilities, we've built a series of log receivers. Each of these corresponds to a traditional mechanism for transmitting logs, so wherever your logs are already going, you can point the appropriate log receiver at that log source and then apply whatever operations are necessary to interpret the logs into the data model.

This isn't a tutorial on log parsing, so I'll keep this example brief, but just to give you a sense of these capabilities and what this looks like in practice, let's say we want to parse this log file. First of all, we'll use the filelog receiver, which can find, track, and read files, and handles file rotation and other common log file patterns. The basic idea is that we read in content and tokenize it into individual log records. Very commonly that just means one log per line, but that's not always the case, and we can handle a variety of situations there. In this case, let's say we get the first log and we've isolated it. We create a log record object, assign that raw text into its body, and process it from there.

As you'd expect, we have a regex parsing utility, which lets us isolate pieces of information from that log and give them names so we can work with them further. For timestamps, you can define a timestamp layout, parse the value into a Unix timestamp, and assign it to the time field in our data model. For severity, we have a very flexible system as well. There are many different severity scales out there, and OpenTelemetry's is just one of them, so no matter which scale you're coming from, you need to be able to define the mapping from that scale to OpenTelemetry. Any string, set of strings, number, or range of numbers can be mapped directly to an OpenTelemetry severity level, and it will then automatically interpret those. This structured piece is obviously JSON, but it's technically a string, so we can parse it into a JSON object, which gives us access to the fields within it. And this value apparently has some structure too, so we can extract some more information. Let's say this "type" value is a piece of information we care about; maybe it matches a semantic convention, or it's just something we want to be able to index and query on. We can move it into the log record attributes. And then, optionally, now that we've parsed out the rest of the information, we can put the remaining free-form text back into the body, or if you prefer, you can just leave the body as is, so it retains the original log line. It's this kind of composition of discrete operations that gives us the ability to interpret pretty much any logs into the OpenTelemetry data model.
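This is not how the collector itself is implemented, and in practice you express these steps declaratively as operators in the filelog receiver's configuration, but here is a small Java sketch of the same sequence of steps, assuming a hypothetical log line format, just to build intuition for the composition:

```java
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ParsePipelineSketch {
    // One raw line, roughly in the spirit of the slide (format is hypothetical).
    static final String RAW =
            "2023-10-05T12:34:56Z WARN something happened {\"type\":\"foo\",\"id\":42}";

    // Step 1: isolate pieces of the free-form text with named capture groups.
    static final Pattern LINE = Pattern.compile("^(?<time>\\S+) (?<sev>\\w+) (?<rest>.*)$");

    // Step 2: map the source severity scale onto OpenTelemetry severity numbers.
    static final Map<String, Integer> SEVERITY = Map.of(
            "DEBUG", 5, "INFO", 9, "WARN", 13, "ERROR", 17);

    public static void main(String[] args) {
        Matcher m = LINE.matcher(RAW);
        if (!m.matches()) {
            // Anything we can't parse keeps the raw line as its body.
            System.out.println("unparsed body: " + RAW);
            return;
        }
        // Step 3: parse the timestamp into the record's time field.
        long unixNanos = OffsetDateTime
                .parse(m.group("time"), DateTimeFormatter.ISO_OFFSET_DATE_TIME)
                .toInstant().toEpochMilli() * 1_000_000L;

        int severityNumber = SEVERITY.getOrDefault(m.group("sev"), 0);

        // Step 4: anything structured (the trailing JSON, a "type" field, and so on)
        // would be parsed further and promoted into attributes; the leftover
        // free-form text stays in the body.
        Map<String, Object> logRecord = new HashMap<>();
        logRecord.put("time_unix_nano", unixNanos);
        logRecord.put("severity_number", severityNumber);
        logRecord.put("body", m.group("rest"));
        System.out.println(logRecord);
    }
}
```

In the real collector, each of these steps corresponds to an operator (a regex parser, timestamp and severity parsing, a JSON parser, a move into attributes) chained together in the receiver's configuration.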
Now, as much as I think these ingestion and parsing capabilities in the collector are robust and competitively performant compared with other logging agents, I do want to reiterate that, all things being equal, you can save a lot of cost and avoid some risk if you're able to capture and transmit logs using instrumentation, because you don't have to do all of this tracking of files, reading files, ingesting from files, parsing, and so on. And that's probably the pattern many people will follow: for existing applications that you don't want to touch or rewrite, you can start capturing logs into OpenTelemetry from those files using the collector right now. But if you're writing a new application, we strongly encourage you to consider using instrumentation to capture those logs. It performs a little better, and you're guaranteed to get that metadata coming through consistently.

Exactly. The instrumentation can automatically apply the semantics and the context to logs just as well as to the other signals, and that's something I'll show you in this demo, how that becomes an advantage. The scenario here is that I'm running the OpenTelemetry demo application. This is a microservices application where each service is written in a different language, but they're all using OpenTelemetry instrumentation. I'm sending logs and spans from all of these services to a collector, which I'll run locally, and within that collector I will make sampling decisions that apply across the data types. This is something fundamental to the OpenTelemetry data model: we have these cross-cutting contexts. We have trace context, resource context, and instrumentation scope, which corresponds to the part of the code the signal came from. So we're able to make these kinds of decisions. And then I'm just emitting this to two different files so we can see which spans and which logs were sampled. I'm using those as an approximation of a backend; this is a demo and I want to keep it simple.

Okay, let me get this started. First I'm running a collector locally and starting the demo application, and then I'm going to immediately begin tailing the files, basically tailing out the logs that we've sampled. While this starts up and runs, and we may not see a log for a minute here, why is this important? If you've ever worked with spans or logs at scale, you pretty much have to do some sampling. The problem, traditionally, is that you make sampling decisions about spans independently from logs or metrics. But with these cross-cutting contexts built into our data model, we can make these decisions once and then apply them across the board. If we were to, for example, randomly sample one trace out of a thousand and keep the spans from that trace, and then make a similar but independent determination for logs, we'd have very little chance of any overlap in the data we've sampled: one in a million, basically. One in a thousand is the sampling ratio I'm using here, by the way. But I think in a minute we will see that we've sampled some logs, and then I can show you that we've also sampled the corresponding spans.

Just to reiterate, the beauty here is that the trace sampling decisions are driving the log sampling decisions, so every sampled trace will have its full set of logs. That is super, super important. Assuming we generate some traffic. Yes. I'll show you the configuration in the meantime.
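While the demo spins up, here is a heavily simplified sketch (mine, not the actual connector code) of the decision logic just described: decide once per trace ID, then apply that same decision to every span and every log carrying that ID. The accessor names in the usage comment are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

public class TraceIdSampler {
    private final int ratio;  // keep roughly one out of every `ratio` traces
    private final Map<String, Boolean> decisions = new ConcurrentHashMap<>();

    public TraceIdSampler(int ratio) {
        this.ratio = ratio;
    }

    /** Decide once per trace ID, whether we first see it on a span or on a log. */
    public boolean keep(String traceIdHex) {
        return decisions.computeIfAbsent(
                traceIdHex, id -> ThreadLocalRandom.current().nextInt(ratio) == 0);
    }

    // Usage sketch: the same decision applies to every signal with that trace ID.
    //   TraceIdSampler sampler = new TraceIdSampler(1000);
    //   if (sampler.keep(span.traceIdHex())) exportSpan(span);
    //   if (sampler.keep(log.traceIdHex()))  exportLog(log);
    // A real implementation would also need to expire old trace IDs so the
    // decision map doesn't grow without bound.
}
```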
So this is the configuration I'm using: just receiving with an OTLP receiver and ultimately writing out with an exporter. This is just a proof-of-concept connector, but the principle is designed into our data model. Again, I'm using a one-in-a-thousand sampling rate to make a decision about a trace ID. It doesn't matter whether the trace ID is first observed on a log or on a span; I make that decision the first time I see it and then apply it to any additional spans or logs that come through.

And as long as the trace context is embedded in those logs, you can achieve this even if you're just picking up those logs from a file on disk. Though I think in this case we're doing it directly through instrumentation? Correct. Yeah. In this case, the sampling decision can actually be made within the application process, in the instrumentation. Very simple. But if the trace context is embedded in the logs, you can still make that decision in the collector using the processing that Dan was showing earlier.

So we did eventually sample a log. I copied its trace ID here and then searched the file with all the spans in it, just to show that we also have spans from that same trace. This is a scenario that would typically be very, very difficult to pull off using existing techniques, so this is very important.

All right. We're going to recap a few things before we jump into questions. I think OpenTelemetry logging is very important. We announced metrics last year and tracing a few years before that, but logging matters for a few reasons. First, you can now use OpenTelemetry as your primary data collection agent. You don't have to use OpenTelemetry for traces and maybe metrics and then something else for logs. That alone is a very big deal, and we expect it to drive a ton of adoption; we're very excited for that. It also means you're using a single configuration, a single set of rules for pre-processing. If you want to do pre-processing, like some of the things Dan was showing earlier, it becomes a lot easier to manage your trace, metric, and log collection at scale, because you have one agent, one set of semantics, and one schema for configuring and managing it. It also means you can choose where to send that data for all of your different data types.

It also guarantees you have consistent metadata, not just across your logs, which alone is a big achievement, but also across your metrics and traces, which makes the insights you can glean from all of those different types of data all the more powerful, because that metadata is guaranteed to be the same. It means you can quickly isolate all of your telemetry, all of your signals, from a given host or a given service; at times that can be challenging across traces, metrics, and logs even today. It also means you can show all of the logs for a given trace, as we were showing, or even sample your logs so that they're sampled consistently with your traces. Again, that's very difficult, if not impossible, to achieve today. It also means that in most situations you will not need massive clusters of servers for log shaping and field extraction.
Of course, if you want to do custom things with your logs, then yes, you'll still need that, but if you just want to ensure that your logs have a consistent shape and consistent semantics, that is guaranteed at the source. You, the developer and operator, are not responsible for reshaping them or writing those nasty log queries that I'm sure we're all pretty familiar with at this point. This is very good. Finally, if you're capturing those logs directly from instrumentation, you're dramatically reducing the compute and memory overhead of capturing them, because they just stay in the OpenTelemetry format the entire way. You aren't burning a bunch of CPU and memory on each host to extract fields from those logs to get them into the correct shape or just to better understand them.

So this is very exciting. It's a big deal for the logging space, and it's also a big deal for OpenTelemetry. We're excited to see where this goes. Certainly our experience with metrics last year was that uptake was very quick, and as OpenTelemetry maintainers we very rapidly started getting great feature requests and other contributions submitted to the project, and we're expecting to see the same with logs. With that, we'd be happy to take any of your questions. There's a microphone there, or if people want to raise their hand, we can call on them, but we'd prefer the microphone if you're going to ask.

Yes. One thing that some developers do is they have logs that are actually metrics. Are there plans to have a connector or something to convert data that's ingested as a log into a metric? Log metricization, basically.

Yeah. I think you can do some of that today in the collector; there's a counting connector, for example, but what you're talking about is more like extracting values contextually. I think observIQ has a component like this, and we'd like to upstream it at some point. Others in the community have asked for this too; it's just a matter of getting the contribution made. And you can already do counts of logs in the collector today, I think.

Please. When thinking about receiving logs directly through OTLP, do you see a risk of data loss, and if so, what would be the recommendation for dealing with that?

It's a good question, because I've run into examples with customers of ours where, even with traditional logs, if you can't pick them up from disk fast enough, the buffer fills and then some are lost. So you're really asking, with instrumentation, what happens there? I'll defer to Dan for that.

Well, I'll say that I know this is an obvious consideration that's been taken into account. The instrumentation is designed with this kind of thing in mind, with queuing and whatnot. But I don't spend much time in the instrumentation myself, so I don't have a great answer for you.

That is the one question we were hoping not to get, because neither of us knows the actual answer. For parsing logs from disk, there are a bunch of considerations that have been made. For the direct path, I'm looking at Dimitri over there to see... no, he's giving me a face, okay. We'll have to answer that later.

Quick question about sample rate adjustments in real time.
So if you suddenly see a flood of errors, will the sample rate dynamically increase to get more traces and spans from that?

The demo I just did was a very simple proof of concept using probabilistic sampling, which is about the simplest thing you can do. But in principle, any criteria or strategy you could apply on the tracing side still comes down to making a decision about a given trace ID, whatever the criteria are for sampling it or not, and the ability to apply that decision across data types holds regardless. So I think you can have dynamic sampling like you're describing and do the same thing in principle.

The short answer is yes. Again, in that demo the log sampling is just following the trace sampling. You can do dynamic sampling of traces today in OTel, and whatever you do for logs would follow that in this example.

Please. Thank you. Configuring complex logging agents has a problem: once you have transformations, routing, and multiple outputs, the main issue is usually debugging. What happened to my log? Sometimes you get exceptions, so you need the error path as well. So I'm curious whether there are any future plans for how to debug those configurations and how to handle errors.

Yes. We actually have a lot in there today for this. One thing you can do, among these many operators as we call them, is use a router: you can identify specific formats that you know how to parse, and everything else you can send somewhere else to look at later and say, okay, I didn't expect this format, how do I handle it? There are also ways to bail out early and just print to another file or to standard out if you're actively debugging. The other thing is...

For example, destination debugging is interesting, because when I'm dumping something into a file it has a format, like JSON. But if I'm using a service that takes a specific HTTP request, I need the HTTP request dumped, because that's where the formatting happens, at the destination level. So I'm wondering whether it's really possible to, for example, dump the request, because maybe there's SSL and everything in place, so I can't easily tcpdump this stuff, but I really need the payload that it sends, the URL, that kind of thing.

Yeah, I don't think that's a log-specific problem, so whatever solutions we have, or will have, should address it broadly.

This is probably an easy question, but can you talk a little bit about how scalability works in the collector? If you get a huge amount of logs, obviously you're going to overload something. And if you're doing correlation, matching the span ID or trace ID of a request across all these different streams, you have to have some clustering or some inter-process communication between the shards in the cluster. So how does that work?

Yeah. There's a load-balancing exporter in the collector, which you can use to provide affinity based on trace ID to one collector or another. So you can load balance and distribute your load and then make your sampling decisions.
That's there if you need to be doing that level of processing, but for examples like this, you're not going to need it.

The idea of sampling both logs and traces is really interesting, and I notice the examples we have here are more about head-based sampling. I'm wondering whether anybody has explored tail-based sampling; it seems like it would be possible with this model as well, but I'm wondering about the scalability implications of that, because usually you've got more logs.

So you can do tail-based sampling in the collector today. I will say there was a lot of enthusiasm about this, probably two or three years ago, for tracing, ignoring logs for a moment. In my experience, the compute, storage, and memory overhead of doing tail-based sampling in the collector is substantial. And it's not even a matter of the collector needing to be optimized; you're basically processing traces and then sending the spans on to be processed again. That being said, in theory and in practice you can do that today for traces, and the logs would of course just follow the decision made for the traces, assuming you set it up the way we did.

Another scalability question. To clarify, application logs, if instrumented, sound like a very reliable process, while reading from disk needs some considerations. Can you speak to that, to reading logs from disk and how that works?

Yeah, that's the part we weren't showing in the live demo but talked about earlier. It's built into the collector now, first class; you can point it at log files. It wasn't a live demo, but in the slides we were showing you can define rules in the collector to extract certain fields and treat them in different ways. It's pretty robust. Speaking for ourselves, we've had customers, the majority of our customers, using this with Kubernetes sources for over a year, even before it went GA, and that's using traditional logging paths, not the in-process path. So it works quite well.

You mentioned the exporters that are baked into the collector. I use other products that have things like node exporter, or Elasticsearch. What implementations do you have for those? Are they different?

I imagine the list will grow over time, but right now, for logging, I don't know the exact list of exporters. There are already quite a few, though; I think there's over a dozen. Metrics has a ton. Traces and metrics are pretty mature; I don't really know of any backends you can't export to, either via their own exporter or just because they support OTLP. You mentioned Elastic; I'm pretty sure that's in there.

If I'm running an Elasticsearch service on a node and I have the collector installed there, will it gather metrics from Elasticsearch and export those?

It can, yeah. You want performance metrics of Elasticsearch itself in this case? That should work.

Because today I can download, say, node exporter, install it on a machine, and get those metrics. So does the collector have that kind of scraping baked in, so it can scrape those metrics and pull them?

There is a metrics scraper for Elasticsearch. I think that's probably the last question, because they've told us to stop. If you come up at the end, we can keep chatting.
How backpressure-ready is the new collector in terms of pushing logs? For example, in our current environment we don't sample anything in terms of logs, for various reasons, compliance or otherwise, and scale is always a problem. So we decentralized everything and moved everything to the source rather than writing a central pipeline. So is it backpressure tested?

We're being told we're out of time, so do you want to just come up and we can chat? Yeah, thanks. Thank you for coming. Thanks, everyone.