Okay, so this is "First Steps with Observability and OpenTelemetry," and I am Nočnica Mellifera. The good people at Signadot were so good as to pay for me to come out; I mention them in one slide, but this talk has nothing to do with Signadot. That's the nice part: I get to actually talk about stuff that I care about, in this case OpenTelemetry. So let's go ahead and jump in. This is definitely targeted at people who are starting to look at observability problems on big or small stacks; that's where the targeting is.

So when we use this term "observability," what are we talking about? It's a very big buzzword: on the West Coast you'll hear some very smart people use it all the time, and other people will use terms like "monitoring" or some sort of initialism like APM. So what's the distinction, or what are we trying to show with this term?

We want to think about a time to understanding. You may or may not recognize this as the Voynich manuscript, which is a handmade manuscript full of indecipherable diagrams and writing, in a language nobody's ever seen, using characters nobody's ever seen anyplace else. So our time to understanding on the Voynich manuscript is currently at about 645 years, and we're still not fully aware of what's in there.

Time to understanding is usually the first half of our mean time to resolution, right? But it is possible to have a fix with zero understanding: we have a memory overrun, we reset the server, we're good to go, and we never understood where that memory was leaking from. That's a classic example of having relatively low observability into the system. I would submit that while fixes are absolutely possible without understanding, the stress level during incidents becomes extremely high. Back in the day, maybe you would say, oh, let's just go ahead and create a cron job to restart the service every 24 hours, right? And even then our stress level was high, even when it was like, oh yeah, sorry, between 2:00 and 2:05 a.m. is just when everything restarts. Not great. And when we're working on a new issue and we do not understand what's going on, even if we say, oh, adding resources to this or that fixes it, our stress is quite high, and that takes its toll on the team.

So why are microservices particularly harder for this? Something that's critical to understand is that a microservice architecture is often presented as simply the evolution of a monolithic architecture, something that you grow into as your palate matures: once you start drinking whiskey, you're going to enjoy microservices. But this is an area where microservices absolutely are worse, and that area is observability.

Initially, in the era of the monolith, there were admittedly only a few people in your org who fully understood all the pieces of the monolith even somewhat, but by the time all of those people ended up on the conference call at 2 a.m., the problem was generally understood completely. There were only so many services interacting; they were interacting in weird ways for weird historical reasons, but the monolith could be fully understood. The simplest version of that is that you can get something like a stack trace, which shows you everything that's on the stack when you have this problem, right?
And so you could get to some understanding eventually, even though it might be quite slow to fully understand where the system was in its state.

This is an image from DZone. Now, the AWS, the Amazon.com, "death star" is not that surprising. The Netflix one kind of does get me, though. I would think, if only I could activate my Elon persona, I'd be like, yeah, I can figure out how Netflix works, I'll just put it on a whiteboard, right? But then you get this.

So with microservices, you very, very quickly build up a number of connections that make it largely impossible to understand the system completely. When I say "understand the system completely," think about questions like: if service A goes down, will service B be affected? Which, in monolith days, you felt like you pretty much had a handle on. Now, the only caveat with this graphic is that these are two very, very large services; this is about as big as it gets currently for complexity. But this will happen once you have a dozen developers. This is not something that happens only at massive mega scale; it's something that generally does happen with microservice projects. And that makes some sense, right? That's almost the contract: I understand only my component, and my component fulfills its contracts correctly. And so the map of the connectome between those services is not something that anyone is charged with fully understanding. I could say more about that organizationally: when you create a team that's supposed to understand the whole service, often their job is just to guilt and apologize. They apologize to the customer, and they guilt us when something doesn't work. But they do that, again, because they are not embedded inside those teams; they also struggle to understand that map.

So we get this kind of Tower of Babel, where everybody has their own project, their own project is fully functional within their own testing and their own observation, but when they're operated together there are unexpected results, which are very, very hard to track down.

OpenTelemetry exists explicitly to address this problem, and that matters because as we get into a little bit of the technical side of this, we're going to get to a point where we go, wow, we're doing this in kind of a heavy way. When we try to report a single metric with OpenTelemetry, it's like, oh, there are a few steps here. And that's because OpenTelemetry is not an open source project for reporting a single metric, a single log line, or a very simple trace in the easiest way possible. It's trying to give us a window into a microservice architecture. There are still great benefits from open source observability even if you have a much simpler map, but this is what OpenTelemetry is written to address. Think about this slide especially when we talk about tracing and distributed tracing; this is what we keep coming back to.

Now I'll note that there's been some new news. I've been doing this talk, or a version of it, for a couple of years, but there is some new news out here. Initially, when I would say, hey, microservices are just one of the solutions to an engineering problem, and they have no special claim on being a superior solution,
that was kind of a fun, freaky idea. Right now this is filtering out a little bit: super-duper lightweight services that you can spin up very easily, maintain on your own, and that interact in this predictable way; we are now seeing the vagaries of that engineering approach. There was a great talk this morning about sidecar implementations, and again, even a super clean way to implement these sidecars, doing all of this network-level control, adds milliseconds of latency on every single request between every single service. And that stuff added up at Prime Video.

So this is not, in fact, some internal fight at Prime Video where they're complaining about a Lambda implementation. They're merely observing that the benefits of a microservice architecture did not necessarily result in engineering benefits. That makes some sense, because when microservices were initially presented to us, the idea was not "faster, better, cheaper." The idea was not "oh, this uses less RAM than another approach." The idea was that your team could fully understand what they're working on and not get blocked by unexpected interactions with other chunks of code that are dependencies for their work. That was the benefit; the benefit was not "all of this runs so much better."

If you're just listening to this, I'll read the headline: "Scaling up the Prime Video audio/video monitoring service and reducing costs by 90%." The move away from a distributed microservices architecture to a monolith application helped achieve higher scale, resilience, and reduced costs. Again, that actually should not be a surprise to those of us who've been around a long time. You're like, oh yeah, that makes sense.

Okay, so, off from that little moment of saying maybe we shouldn't do microservices at all, to: how do we monitor microservices?

We think about monitoring in the OpenTelemetry space, and in most spaces, as having three pillars of observability. This is "Three Pillars" by Kandinsky, public domain, so I've got to work it into every single talk where I get to talk about three pillars.

We start with metrics, which is actually the second pillar that came about; I should change this order and put them in order of when they became useful. Metrics are just sort of counting something that's going on. Some metrics are emitted very naturally from any system, things like memory overhead, and they are our best way to get a high-level view of what is going on. They very rarely point to a root cause. The classic is pegged CPU, or memory usage getting out of control: that is going to say, hey, we know something's going on on this machine, but we have no idea what it might be. You know the shape of the effect, but not the cause. The other big advantage of metrics is that they should be very, very lightweight in storage impact. We should be able to do math on them; we should have a little bit of compression on them to get a ton of information into a nice compact package. More on that later, when we talk about the cardinality of data.

And then logs. The irony with logs is that all of your answers are there, right? Somewhere in the Library of Babel there is a book that has every truth, right?
But it's somewhere in there. Logs are an interesting one because they're a really good example of how you can increase the amount of monitoring you're doing and reduce your observability. Observability is about your mean time to understanding a problem, right? Throwing your production system into debug will get you more information, but it will reduce the likelihood that you can find the information you need. So you can hurt your observability by monitoring more. And in general, logs present some kind of storage problem for you. Five years ago Splunk was like, "don't worry about it," and then a year later Splunk was like, "worry about it." So, you know.

And then the last is tracing. Informally, this is a hybrid between metrics and logging: you're trying to generalize observed time spans. I'm going to talk more about tracing, so let's dive into tracing, this newest member of the troika.

I'll start with the dirty secret about tracing, which is that most tracing is never viewed in its full detail, and by "most" I mean something like 99.9 percent. Somehow that's not surprising with logs; there might be info logs that you write all the time, and you're not that surprised that you never read them. But when you go to the trouble of creating these traces, you start to realize: oh, for some services we never look at these traces. They can be relatively dense with data, but we almost never look at them. Maybe I should move this later in the deck, but this is something I want to start with, because as we do our work on traces we want to keep in mind that something needs to tell us to go to a particular trace, or something needs to generalize across those traces, for them to be really useful. Otherwise we're recapitulating the logs problem, where it's like: somewhere, somewhere, there is a trace that shows this problem, and at 10 a.m. the next morning you're going to find it, but you're not going to find it during an incident.

So the idea with tracing (I'm going to skip around very slightly in my little chart) is that our goal is some kind of waterfall chart, which involves some kind of objects telling us what's going on, hopefully with some parameters that are on the stack, the service that's touched, and the time span that that service is taking up.

One of the funny things about tracing (I'll get to the other one in a minute) is that when I was first working with tracing six, seven years ago, I was showing people how to get all these time spans, how to decorate these time spans to make them more accurate, to add more measures of time to them, to make that waterfall chart more and more dense. And what people started asking for was something that looked kind of like this. They just said: I want to know which services got hit during the trace. When we think back to that problem of "do we understand how our microservice architecture works," you can see why. Because the very question is just: what is reliant on what? What is going to fail if I take service B down? So this is one of the reasons why actual tracing data is viewed so rarely: often what people actually want from tracing data is just "where did this request go?" They're not as interested in what's causing the latency, exactly how much latency there was, or which
individual Java method within the stack was the thing running so slowly. It's just: "oh, if you do a checkout, then the profile service gets called. I didn't know that." And that's often what we want from tracing.

So that gets us to this concept of distributed tracing. Distributed tracing is the idea that you're not just looking at a trace for a request passing through a single service; you're able to see some kind of interconnected version across services. Let me slip my reading glasses on; apologies if this is tiny for you as well.

The problem that you face when you try to do this kind of tracing is passing around data consistently that shows which transaction this was. In the pre-OpenTracing, pre-OpenTelemetry days, this was all we talked about when we would do closed-source tracing: okay, you're going to have to add some code to add a header; maybe you'll be doing some stuff asynchronously and you won't be totally sure what the header is, so you've got to check at the end to find it; you've got to pass it around; when you pass it on to a new service, you need something that receives that header, tags things correctly, and deals with inconsistencies in those headers. This is the key problem to solve to make distributed tracing work effectively. Other problems can arise if we look at this from a computer science standpoint, like issues with timing and asynchrony, invalid time values, negative time values and such. Those can all cause us trouble in a rigidly typed system, but they're very rarely the real problem. The real problem is: hey, it gets passed to this one service, and that service strips out the headers, or I cannot meaningfully interact with that service to add the header information. So the information gets lost, and a big chunk of our trace is just "it's going into a black box, and I don't know what's happening with it." That will still happen, but hopefully less so with this project.

So this is why we have the OpenTelemetry project: it's an attempt to create a consistent standard for passing that information around. We want to add a trace header somewhere close to the chronological start of the request, and then we want to have collector-side logic to tie those traces together. And that's really major, because you might be doing something like kicking off a delayed job or some other kind of asynchronous work, where you're really only going to know after the fact, "oh, this was all kicked off by this request." So you want some kind of logic where you're gathering data to see how these are connected. We do it in time; not great, but we get there, we get there.
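To make that header passing concrete, here is a minimal sketch using the OpenTelemetry JavaScript API. This is an illustration rather than the slide code: it assumes a Node SDK is already registered with the default W3C trace context propagator, and the service and span names are placeholders.

```javascript
// Sketch only: assumes an OpenTelemetry Node SDK is already registered
// and that there is an active span when the outgoing call is made.
const { context, propagation, trace } = require('@opentelemetry/api');

// Outgoing side: copy the current trace context into HTTP headers.
function buildOutgoingHeaders() {
  const headers = {};
  propagation.inject(context.active(), headers);
  // With the default W3C propagator this adds something like:
  // headers.traceparent === '00-<32-hex trace id>-<16-hex parent span id>-01'
  return headers;
}

// Incoming side: pull the trace context back out of the request headers
// so new spans in this service join the same trace.
function handleIncoming(requestHeaders, doWork) {
  const parentCtx = propagation.extract(context.active(), requestHeaders);
  const tracer = trace.getTracer('example-service'); // name is illustrative
  return tracer.startActiveSpan('handle-request', {}, parentCtx, (span) => {
    try {
      return doWork();
    } finally {
      span.end();
    }
  });
}
```

In practice the HTTP auto-instrumentation does this inject and extract for you; the point is just that the "header" being passed around is a small, standardized piece of text that every hop agrees on.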
So that gets us to the collector. The OpenTelemetry Collector is a service that can be run in a whole lot of places, and this is from the opentelemetry.io documentation; thank you so much for this really elegant drawn representation. You can have your OpenTelemetry library just immediately ship its metrics out to whatever other service: "I'll go ahead and report this metric every time I see it." That can make sense for test cases and for very small implementations, or if what you want to send is business intelligence, like "oh yeah, we sold something for four dollars, just report that directly to a service." That's probably fine if you have one web request per purchase.

That's okay, but what you want to do otherwise, in a larger system, is at some point consolidate data on your side before you send it across the wire. That can be within the same container, it can be within your network; it can be run a whole bunch of ways. But the very generalized idea is that you have this collector. So this is that graphic, but bigger. (Well, I don't know how we got this slide in here. That's okay.)

Within that collector you can add some logic in a really nice, modular way; we'll see an example of configuring this in a little bit. Along with stitching together distributed traces, you can do things like debounce your data, set a polling interval, set a maximum memory size, and pull out personally identifiable information. All really nice stuff that it's nice to be able to do, and you should be able to configure it in a pretty effortless way with a very consistent YAML configuration pattern. We'll see an example configuration in just a minute.

When we go about reporting data, I don't think there's too much data engineering we need to worry about right off the top; you don't need to do a ton of pre-optimization to begin with. But when we want to report, and especially when we want to report metrics, it's worth considering one thing, which is high-cardinality data.

Let's look. This is just a sample spreadsheet that I built of some performance information. It shows a series of page paths, how well they perform, how much money they produce, and a few other things; I think I pulled this from real analytics someplace. Looks pretty good, right? Seems like it could be fairly actionable if we're front-end developers: oh, this is good for us to know.

This one is less useful. In the value for path, every single user ID is now getting encoded. You might think, "but this is more information, so this is good." Well, no. It's much less useful. The issue is that the variable that says "this is what bucket this should go into" has become highly, highly specific, while the rest of the values are all going to be pretty uniform; the number of hits each path receives is usually going to be between one and five. So we have switched from nicely bucketed data, where there's an extreme amount of variability in the number of hits, the amount of traffic, that goes to each of these paths, to extremely high-cardinality data, where almost every row is unique.

In my work with large implementations of observability and other forms of instrumentation, this was actually surprisingly difficult to sell to the org, and it has to be communicated fairly early: remember, we do not want to push the most detailed metric name possible. We don't want a huge number of possible values within the metric name; we want it to be fairly standard. Even if you're a front-end engineer, it's very likely that all of these paths could actually just be labeled "page." It's not terribly likely that /docs/index is performing that differently from some other page. So it's worth thinking about: what is a more general name that I can give to these metrics? And then I just have, I guess, a demonstration of that.
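As a small illustration of that "more general name" idea (this is my own sketch, not the slide's demo), here is roughly what it looks like to collapse user-specific paths into a low-cardinality label before using it as a metric attribute. The route patterns and metric names here are made up for the example.

```javascript
// Sketch: normalize a raw URL path into a low-cardinality route label
// before attaching it to a metric. Patterns here are illustrative.
const { metrics } = require('@opentelemetry/api');

function normalizePath(path) {
  return path
    .replace(/\/users\/[^/]+/g, '/users/:id')   // strip user IDs
    .replace(/\/orders\/\d+/g, '/orders/:id')   // strip order numbers
    .split('?')[0];                              // drop query strings / UTM params
}

const meter = metrics.getMeter('frontend-perf');   // name is illustrative
const hits = meter.createCounter('page_hits');

function recordHit(rawPath) {
  // The attribute value is now one of a small, stable set of routes,
  // not one unique string per user.
  hits.add(1, { route: normalizePath(rawPath) });
}

recordHit('/users/8675309/profile?utm_source=email');
// recorded as { route: '/users/:id/profile' }
```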
Okay, let's talk about how we can get started with sending some data.

This is from the AWS engineering blog. Thinking about how we're going to send our data, one concept to consider is that if we use something like Prometheus to gather our metrics from OpenTelemetry, we can actually query that Prometheus data to take operational action. That's kind of a new idea that many of us are thinking about, and it may become a very key part of the story in the next couple of years.

One thing to think about is where we're going to send this information in the end. A standard tutorial that you will find, the kind aimed at hackers and boot campers and people who are starting out, will be sending that data to a Prometheus and Grafana setup: Prometheus to store and let you query the data, and Grafana to let you visualize it. That is what you should do if you're going to try it out this weekend and build something yourself; it's really good to understand how that works. If you're implementing it for a team, it's worth thinking about a few things around storing your performance data out of OpenTelemetry.

First, you have your storage maintenance. Are you rotating your storage? What do you do if you have metric explosions, as in the example where every metric name includes a path right down to the user, or includes a UTM or some other value? What do you do when your database stops working as a result? Do you have the maintenance in place to clamp that? And then, if your observability is useful, you're going to deal with the problem of trying to share that information, which can get quite stressful quite quickly. The first part is accounts and access: you build it up yourself and say "cool, just use this login," and then you start realizing that contractors and other people have that login info, and maybe it was all a bit too telling about the state of your business. So you get a little stressed about that. The last part is building visualizations that are useful. For myself, once I have the CSV open, I can say "look, you see this pattern here," but once we start wanting to share it around the org, it's really good to think about what kinds of visualizations you're using, so that everybody can look at them and pick things up rather quickly.

Okay, so let's see some demo code and demo config. Apologies, it really is not much to look at. The concept here is that we're just going to install very basic instrumentation on a Node.js service. We start by pulling in the OpenTelemetry Node packages, which give us a few standard pieces to report metrics and trace spans. Then we're able to say, hey, we want to register a tracer and add a span to a trace; and we can also create a meter, something that measures metrics, and increment some kind of counter on that meter.
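Roughly, that demo looks something like the following. This is a minimal sketch assuming the standard OpenTelemetry Node packages (`@opentelemetry/sdk-node`, `@opentelemetry/api`, and the OTLP exporters); package options vary a bit by SDK version, and the service, span, and metric names here are placeholders rather than the ones from the slide.

```javascript
// Sketch: minimal Node.js setup reporting one span and one counter.
// Assumes: npm install @opentelemetry/sdk-node @opentelemetry/api
//   @opentelemetry/exporter-trace-otlp-http @opentelemetry/exporter-metrics-otlp-http
const { NodeSDK } = require('@opentelemetry/sdk-node');
const { OTLPTraceExporter } = require('@opentelemetry/exporter-trace-otlp-http');
const { OTLPMetricExporter } = require('@opentelemetry/exporter-metrics-otlp-http');
const { PeriodicExportingMetricReader } = require('@opentelemetry/sdk-metrics');
const { trace, metrics } = require('@opentelemetry/api');

// Point both signals at a local collector (default OTLP/HTTP port).
const sdk = new NodeSDK({
  serviceName: 'demo-service',   // placeholder name
  traceExporter: new OTLPTraceExporter({ url: 'http://localhost:4318/v1/traces' }),
  metricReader: new PeriodicExportingMetricReader({
    exporter: new OTLPMetricExporter({ url: 'http://localhost:4318/v1/metrics' }),
  }),
});
sdk.start();

// Trace: get a tracer, open a span, decorate it, close it.
const tracer = trace.getTracer('demo-service');
tracer.startActiveSpan('checkout', (span) => {
  span.setAttribute('cart.items', 3);
  // ... do the actual work here ...
  span.end();
});

// Metric: create a meter and a counter, then increment it.
const meter = metrics.getMeter('demo-service');
const bootsSold = meter.createCounter('boots_sold');
bootsSold.add(1);
```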
There's a whole bunch of subtlety about how you implement metrics: things like whether you want a metric to increase by one every time you call it, or increase by a certain value, or be set to a certain value. It's worth thinking about, but I don't have a separate slide on it. If you're diving really deep into the way metrics are implemented within a single service, ask yourself: is that logic that should actually be happening on the collector side? For example, your service may see that it sold four pairs of boots today, and so it may want to update that metric to four. But then you realize, oh wait, I have other things running that also sell boots, and then you say, okay, I guess I'll communicate with the other ones and check. This is the time to start thinking about that collector: a central point where all this data is gathered, and it can totally have logic that says, hey, I want to sum these values, or I want to receive these values separately and increment them separately. So yeah, if you're diving really deep into how metrics are implemented on one service, think about collector-side logic.

So let's look at an example collector config. This is about as basic as it gets. Here we have it set up to receive metric data on two ports, and then how it's going to process that data, and this stuff is kind of critical. It says, hey, how long do I wait to send that data, and what kind of batch size am I going to send? This means that even if you have some service that is inadvertently incrementing a metric every single millisecond, the collector isn't trying to send every single millisecond as well. It also has a check interval and a memory limit, which are both for the same purpose: if you have something like a metric explosion, where you're producing a ton of metric names, it says, hey, I'm not going to completely fill the memory of this service with these metric names; I'm going to enforce a limit and drop the rest. And again, you'll be surprised how hard clamping can be to sell within your organization ("I want to know everything, I want to observe everything"), but this can really start to cause overhead problems. Then you have an exporter; I'll get to the concept of exporters in just a second.

So, exporters: think of them as the endpoints that we're sending these values to. One of the really nice things about implementing a collector early on is that you are completely divorced from where that metric data is going. You stand up a Prometheus and Grafana instance at first to collect that data; that's fine, but you're not stuck with that forever. You're not stuck with a DIY solution, and when you go out and buy a service (and frankly, everybody who does observability is saying "hey, I want to receive your OpenTelemetry data"), if you're sending it with just this endpoint config, then that's what's entailed in changing services: it's just changing that endpoint. You're going to lose historical data, but that's pretty nice; it's not much of a lift to do that kind of migration. We've seen a little bit of drama around that in the scene; come and talk to me after this if you want to hear about the drama around data going into closed systems.

And then we have our pipelines, which we have for logs, traces, and metrics, and we define which processors we want each of those signals to go through. So we can say, hey, for logs I want to do all this PII scrubbing because we're worried about personal data; we're not so worried about that with metrics, so we'll configure that differently.
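Pulling those pieces together, a minimal sketch of that kind of collector config might look like this; the ports, limits, and backend endpoint here are illustrative rather than the values from the slide.

```yaml
# Sketch of a basic OpenTelemetry Collector config (values are illustrative).
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # receive on two ports: gRPC...
      http:
        endpoint: 0.0.0.0:4318   # ...and HTTP

processors:
  memory_limiter:
    check_interval: 1s           # how often to check memory use
    limit_mib: 512               # cap memory so a metric explosion can't fill it
  batch:
    timeout: 5s                  # how long to wait before sending
    send_batch_size: 1024        # how much to send at once

exporters:
  otlphttp:
    endpoint: https://your-backend.example.com:4318   # swap this line to change vendors

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp]
```

The point in the talk stands either way: the only part you have to touch to change vendors is that exporter endpoint.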
One thing to think about, given the centrality of tracing, if you're exploring this as an option: the support chart in the OpenTelemetry documentation, which is totally accurate, can be a little bit deceptive. Because when you ask yourself, "well, on Rails, should we launch something that doesn't support logs?", you'd say no, right? But that's deceptive, because you already have a way to export logs. You probably already have a path to send logs over the wire, and with whatever you're using, you absolutely can send those into the collector and on to your backend. So yes, log support is not yet native in several of these libraries, or is just experimental in some of them, but that doesn't mean it's not ready to be used. And apologies, this is not a live picture; I think a couple of these have been updated. The big thing is: as long as traces are stable, you're probably in a good place to start using it. This isn't even the whole chart; a couple of languages, two I think, are not listed as stable yet for traces. And metrics, even in the languages where they're listed as experimental, still work great. So worth checking out. Thank you so much. Don't worry, I'm nearly done, so we're good.

Okay, the last little niggling concept is baggage. Most of the data that we pass around in headers is there for the purpose of creating this observability information about these requests. But you also have the ability to say, "hey, I'm going to attach a little more information; don't worry about it." It's kind of a negative term, "baggage," but that's the deal. And there are a lot of interesting use cases for wanting to pass information around consistently with a request. For example, if you wanted to create an experimental version of one service within your cluster, but you still wanted to use your shared cluster to do all of your testing, you would want requests to go to your test version only when you needed them to, and normal requests to go wherever they normally go. So up here you have B-prime, a new version of our service: we want most of the A-to-B requests to go to B, but the ones that are test requests to go to B-prime. The people who paid for me to be here, Signadot, have taken on this challenge, and the way that they solved it, because they wanted to pass information about requests all around their stack really fluidly and have a very easy way to pick that information up, was with OpenTelemetry baggage.
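A minimal sketch of what that looks like with the OpenTelemetry JavaScript API is below. The key name `test-route` is my own placeholder, not Signadot's actual convention: you set a baggage entry near the edge, it gets propagated along with the trace headers, and any downstream service or routing layer can read it back out.

```javascript
// Sketch: attach a baggage entry to the current context and read it downstream.
// Assumes a propagator that handles the W3C `baggage` header is registered
// (the default in the Node SDK).
const { context, propagation } = require('@opentelemetry/api');

// At the edge of the system: mark this request as a test request.
function withTestRouting(fn) {
  const baggage = propagation.createBaggage({
    'test-route': { value: 'b-prime' },   // placeholder key and value
  });
  return context.with(propagation.setBaggage(context.active(), baggage), fn);
}

// Deep in the stack (or in a routing layer): check the entry.
function targetService() {
  const baggage = propagation.getBaggage(context.active());
  const entry = baggage && baggage.getEntry('test-route');
  return entry ? entry.value : 'b';   // test traffic goes to B-prime, the rest to B
}
```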
So we're seeing a little bit of this now: some OpenTelemetry usage to solve new problems in cluster engineering, like CI/CD or security. That's interesting stuff; we'll see where it goes. Right now, of course, most of OpenTelemetry is for monitoring and observability. All right, thank you so much. Here's a giant version of my head. If we have questions, I will take them now; I'm going to turn off the giant version of my head because I couldn't deal with it. You can find me on Twitter at serverless underscore mom. Okay, we have a question here.

"Hey, thanks very much, that was a great talk. I wanted to double-click a little bit on the storage considerations you were talking about." Mm-hmm. "So if you actually wanted to do this at scale, what are some, I guess, practical implementations that you've seen?"

So, the first thing you should do is just engage, even at a very facile level, with how the collector is working, because that's where your problems are actually going to start; that's where your day-one or day-ten problems are going to start. You can usually just set memory limits and maximum send sizes, and that's going to control the "oh, we thought this was a good idea and it went completely out of control" cases. More significantly, you'll see some very large teams present incredibly cool stuff. I just saw an Intuit presentation that absolutely blew my mind with what it did; but obviously they have somebody who spends all of their time making sure their really, really cool metrics don't destroy their network bill or their storage devices or whatever else. So when you're getting started, it's really just about learning that collector logic, even at a facile level, and clamping it, and realizing that when you do things like say "never send more than five megabytes of trace data in a single cycle," remember that most trace data is never viewed. So start by leaning a little more toward "let's just clamp exactly how much data we're going to send per cycle."

"Yeah, so actually my question was more of: let's say I didn't want to do that. Do you actually persist it to a database or an object store, and then worry about the rest?" Yeah, so none of that is covered within the OpenTelemetry project itself. Now you're into essentially running it yourself or using a service. Myself, I think that if you're over a certain size, then you're totally going to be able to have full-time people who can handle this stuff; if you're under a certain size, I do not think running it yourself is a great idea, and I think you're going to deal with a number of storage management headaches within a couple of quarters. For that, there are several services, among them Honeycomb, which is a fantastic one, that will just be your endpoint. I know there's TelemetryHub; there are like four others. They will handle all of these storage headaches for you, and again, every one of them presents itself as an OpenTelemetry endpoint, so you get to feel very cocky when you go into sales negotiations with them at the end of the year: "look, I can just change one config value and start reporting someplace else." Thank you.

Any more questions? We've got one over here. "Hi, thank you, that was great. I'm curious, you talked about the three pillars: tracing, logs, metrics. When you're going into a green..." What's that? There is nothing else. Yes, only three. "When you're going into a greenfield project, how do you think about building observability into that project? Are you literally tracing everything, with logs where it's relevant and metrics where the business cares? How do you think about that for a greenfield project?"

I believe I'm not saying anything too surprising when I say that greenfield projects actually start out as monoliths. Like, you know, when I was first learning, right?
I learned both microservice decomposition and even function decomposition, and so I would make eleven functions to have my little video game character move forward a step, right? But all of that is actually pre-optimization. It's like: no, just make one huge function called characterUpdater, and later worry about that decomposition into individual services.

So what does that mean for, hey, you're working on a greenfield project, you know it's going to grow to a certain size; this is not a garage project, it's going to have thousands of users? I think the first thing to worry about is tying together your first four services and making sure that distributed tracing is working between those four. Then you should be able to create a standard for any future implementations, saying they need to decorate their spans in a consistent way. For many frameworks, if you just implement that, there's auto-instrumentation available, so you may do those four and find, hey, this just works out of the box, and that's what we expect for future services that we add to the stack. But you want to try to do that first: can we get distributed tracing working such that we're seeing all four of those services, with some kind of span information between them?

"Thank you. And just a follow-up question on tracing. I don't know the terminology for it, but how valuable do you find really dense traces going all the way down the stack, to the tiny little function that does this one thing? Is it worth tracing down to that point?"

Yeah, so this is definitely something that you'll evolve as you grow with a tool. I would definitely say one of my big takeaways is that that is not super valuable, ceteris paribus, without knowing anything else about the situation; it's like classic stack tracing within the service. Hopefully, once you've tied a problem to one service, that single two-pizza team (because you're such a great microservices org) is going to look at it and say, oh, I know what's going on. So that super deep depth may not make a ton of sense. It's worthwhile and great to have, and if it comes automatically, great, let's get it if we can; and it's great to have it documented for your team, so somebody can say, oh yeah, I know there's always going to be trouble with this sorting thing, let me go ahead and add some spans here. But it is not the first thing that you need out of the box, for sure.

"Hi, two questions. The first one is more of a sanity check; I really need to get validated about OpenTelemetry. The way I've been trying to sell OpenTelemetry is that it's an open source, open standard. Am I correct to say that previously we've been using APMs, New Relic, Datadog, and so on, but now with OpenTelemetry I have that as the middle layer, and I can switch to whichever monitoring provider I want?" That is how it should be. "Okay." And almost everyone is on board with it being that way. "Okay. Which was very surprising to a lot of my teams: really, all these guys are on board with this?"

Yeah. All right, I'll tell this story right now. It's not any kind of secret that Datadog did have an incident recently where they talked to somebody who was building a tool to export some Datadog data to the OpenTelemetry collector, and they called his boss, almost like, "hey, can you pull this feature?"
His side project was to do this export, and you know, this is very much a business; none of us are in this purely for love. But it's not a great look to say, "hey, we love these open standards, go ahead and feed us all this data with open standards. Oh, you'd like to get our data out with open standards? Absolutely not." Still, they're asking you to implement by saying, "hey, send us this data," by setting Datadog or whoever as the endpoint and sending it out to them. So if you buy into OpenTelemetry in your org, even if you're using a very big, expensive APM product, you still have the ability to say, "hey, all this OpenTelemetry stuff, we can just migrate." That's fantastic, and that's the point I'd highlight. However the kind of shilly-shallying goes (will they send their traces to our collector, will they send their metrics and their auto-instrumentation into our system, maybe, maybe not), anything you're sending with this nice collector config, you can migrate quite easily.

"Thanks. And the second question might have been covered previously, but if I were to tell my developers, 'yeah, add all these lines, these libraries,' they'll say, 'well, I've got no time,' and so on. Does it work out of the box, like, magically?" Just: yes. In Java, Rails, Node, these libraries do it. In my example you're going in and saying, hey, add this span, instrument this method with a span, but they all have some level of auto-instrumentation implemented. You're going to want to test that out, obviously; especially if it's a ton of developers, you'll want to try it where you can to see what you actually get out of the box. The most mature is Java, and .NET is also super great, but how much you actually get, you just have to see. I would say, especially if you're using multiple languages: the answer you would often get from closed-source companies, when you asked "will you auto-instrument this?", was "sure, fine," and then you get it and it's not great. So it's the same process; you need to run some kind of proof of concept. But yeah, when I got started with OpenTelemetry two years ago, this was the part that really surprised me: wow, you get a lot from auto-instrumentation just out of the box, without having to do anything more than saying, hey, go ahead and load the SDK here. And then, oh, here's where our request is starting; maybe you tag that, but you get back individual method names and stuff. So yeah, that's definitely worth checking out.
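For Node specifically, the "just load the SDK and get things for free" version looks roughly like this; a sketch assuming the community auto-instrumentations package and a collector listening on the default OTLP port, with the service name and app filename as placeholders.

```
# Sketch (package and env-var names per the OpenTelemetry JS docs; values illustrative):
npm install @opentelemetry/auto-instrumentations-node

OTEL_SERVICE_NAME=checkout-service \
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318 \
node --require @opentelemetry/auto-instrumentations-node/register app.js
```

With just that, HTTP servers and clients, Express, and common database drivers get spans created for them automatically; the manual span and meter calls shown earlier are only needed for the custom stuff on top.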
Thank you.

"Apologies, we came a little late, so I don't know if this was already covered. You talked about the greenfield kind of thing; for a brownfield kind of application, we want to consume telemetry, either metrics, counters, or logs, which is already there in an old application. Is that a viable option for OpenTelemetry as well, do you think?"

Yeah. So what you're looking for there is an importer, something that can say, hey, I want to consume this existing data, and that's a very active area of development. The classic is logs: you'll see in the framework support chart that a lot of them list log support as really experimental. That just means that with this library you may not have a call available to log to the collector directly, but there are tons of importers available for whatever logging tool you're using now. So that is a very active area of development. They're even doing it for closed-source agents, like the New Relic agents and the Datadog agents, so that's quite impressive. And I don't mean to default back to "of course you're going to need a proof of concept, you're going to look at the project," but there are some really big teams, and especially what I'd call very medium teams (teams that are big, but not so big that they can afford a whole team of people worrying about this problem), that have had really great success with that. I know Shopify made this whole migration; they were using some existing RPM tools, Rails performance management tools, previously, and they were able to just get those imported into the collector and then sent up as OpenTelemetry metrics.

Thank you. And did I see another question? Hmm. You all had too many questions, and that was so great; thank you so much, and it didn't stress me out at all. Thank you so much, everybody, for coming.