Welcome to KubeCon, CloudNativeCon Europe. Welcome to my session — I'm really excited to present what Dynatrace can do for you around OpenTelemetry. Some of you might find our session title a little bit confusing, because aren't OpenTelemetry and observability the same thing? Are they? Are they not? Technically, OpenTelemetry is a framework for observability. There is a lot of advocacy around it, and the people behind it really believe that observability is more than just a collection of tools and the standard data types: logs, metrics, and traces. They share the common perception that it matters what data you collect, how you collect it, and then what you do with that data. And this is where OpenTelemetry comes into play: it's a way of collecting observability data. Dynatrace can help at both ends — collecting the data and analyzing it at scale. On the left side you can see the Dynatrace OneAgent, which massively automates and simplifies data collection. On the right side, the brain represents Dynatrace's AI, which we call Davis; you'll see the power of Davis a little later in the live demo. It really helps with processing and analyzing data at scale.

Now, if we look at the full technology stack of Dynatrace, which I'm bringing up on the next slide: in the bottom left corner you see that Dynatrace goes way beyond collecting traces, metrics, and logs. We also collect topology information, which is a key ingredient for analyzing data at large scale, and beyond that we look into user behavior data, deep into code execution flows, and collect additional metadata for other purposes, and so forth. We're also very open about it: there are tons of APIs that allow you to bring in any data you want, and then of course the OpenTelemetry standard for ingesting additional data into the Dynatrace ecosystem — the Dynatrace software intelligence platform.

If you go one level up, you find the key innovations Dynatrace delivered over the last decade: the OneAgent, which, as said before, massively automates and simplifies data collection, alongside PurePath, the most advanced tracing technology in the market for more than a decade. The Smartscape topology gives you everything in context all the time — a real-time topology model of your deployments — and it is also the key foundation for Davis, which helps analyze the data with true cause-and-effect relationships. And last but not least, all of this happens at enterprise scale, with really high scalability, failover, data collection, and so forth. On top of it we have six solutions, tailored to very important use cases around observability, and then the Dynatrace Hub to the far right, which helps you further customize and extend the Dynatrace ecosystem to your needs.

Specifically today, I want to highlight three use cases that we also replicated in the demo environment, based on a real scenario with one of our customers. Use case number one is automatically ingesting OpenTelemetry spans through the OneAgent: whenever an application or a piece of code is monitored by the OneAgent, Dynatrace automatically picks up the OpenTelemetry spans. Use cases two and three are sending spans and metrics through an OpenTelemetry exporter and bringing that data into Dynatrace that way.
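For readers who want a concrete picture of the exporter-based path (use cases two and three), here is a minimal sketch of sending spans over OTLP/HTTP. This is not the exact setup from the demo or the GitHub exporter mentioned later in the talk: the endpoint URL, the `DT_API_TOKEN` environment variable, and the service name are placeholders you would replace with your own environment's values.

```python
# Minimal sketch: exporting OpenTelemetry spans over OTLP/HTTP to an
# ingest endpoint. Endpoint, token variable, and service name are placeholders.
import os

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    # Placeholder endpoint for your tracing backend's OTLP trace ingest.
    endpoint="https://<your-environment>.live.dynatrace.com/api/v2/otlp/v1/traces",
    headers={"Authorization": f"Api-Token {os.environ['DT_API_TOKEN']}"},
)

provider = TracerProvider(resource=Resource.create({"service.name": "checkout-service"}))
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("checkout-cart"):
    pass  # application logic goes here
```

The same pattern applies to metrics, just with a metric exporter and meter provider instead of the tracing classes.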
Now let me take those isolated, simple-looking use cases and bring them into a much more complex, distributed application — and here we can really demonstrate the power of Dynatrace. Let's say we have a slightly more complex situation: the end user on the far left uses a mobile application or a web browser to interact with an application that has a front end, then serverless components, then a front end and a back end — which is a simplification, since these days you would typically have a very microservice-based architecture — and then databases and other services at the end. Now if, for instance, your back end loads a library that comes pre-instrumented with OpenTelemetry, or if you as a developer decide to instrument parts of your code with OpenTelemetry to capture further context, then the OneAgent that is already there monitoring that component — whether it's a microservice, a back end, a front end, or whatever it is — will automatically pick that instrumentation up and embed it into the PurePath. We're going to see that in the live demo shortly. The other scenario is that a container image is loaded, or a container spins up somewhere, and when Dynatrace monitors it, as just mentioned, you automatically get the full-stack information; and if you're not able, or don't want, to put a Dynatrace OneAgent there, you can instead ingest that data into Dynatrace through an OpenTelemetry exporter.

And this is exactly what we're going to look at now in the live demo. So let me go over here. In my front-end service we can see that, just a second ago, we had a lot of failed dynamic requests. Let's investigate that and drill into it — I'd like to check out the elevated failure rates here. My demo environment has a little problem: I'm not getting any load at the moment, so I'm looking at historic data from the past couple of days. As you can see, there is no recent data coming in, because there is no load on the system right now, but we can still see lots of failed requests, so let's look at the details and analyze those. I see a couple of product-detail requests failing at a 100% rate, but more important here is this checkout cart service, which also fails in 100% of occurrences. So let me drill in and analyze all the PurePaths — essentially all the different traces coming in here. I need to extend my time frame again, say to the last 24 hours, to pick up some traces, and then let's just look at the top one. I'm now looking at a single execution, a single trace, within the front-end service, and what I can see here is the checkout cart transaction. If I click on that and go over to the code-level analysis, I can see the method details where additional span attributes have been captured, and they come to life when I hover over them. Essentially, the developer decided to capture detailed information about which user triggered this checkout cart transaction. In this case that turns out to be quite beneficial, because it helps me reach out to the user and apologize for the failed transaction — maybe give them a little voucher so they come back and do more business with me. If we go further down this code execution, we can also see that the place order call fails, which leads me further down into the tree.
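To make the "developer adds further context" scenario concrete, here is a minimal sketch of how such span attributes might be attached in application code. The attribute keys, the `checkout_cart` function, and the user/cart objects are hypothetical illustrations, not the demo application's actual code; the point is simply that business context added this way shows up on the trace that the agent or SDK already captures.

```python
# Minimal sketch: enriching an auto-collected trace with custom span attributes.
# Attribute keys, function, and objects are hypothetical, not the demo's code.
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

def checkout_cart(user, cart):
    # The surrounding trace context is propagated automatically; this span
    # simply adds business context (who checked out, how many items) on top.
    with tracer.start_as_current_span("checkout-cart") as span:
        span.set_attribute("app.user.id", user.id)
        span.set_attribute("app.user.email", user.email)
        span.set_attribute("app.cart.item_count", len(cart.items))
        # ... actual checkout logic ...
```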
Part of the checkout cart transaction is actually a currency converter. If I click here on the call — the application runs on Google Cloud, and I'm calling out to a component hosted on AWS — I can see that the call comes back with a 500. If I didn't have the additional instrumentation through OpenTelemetry, I would be blind to the rest and could only tell you that a 500 came back. But remember — this was the top branch on the previous slide — this is a foreign component where I have no way to deploy a OneAgent. Luckily, the code was already instrumented with OpenTelemetry, and that's how I can go into the conversion request that goes over to the exchange rate API, and it tells me I've run into a rate limit. So now I get to the core of the problem: I'm calling that specific API too often, and a good way to cope with that would be to implement some sort of caching.

That brings me to use case number three. We talked about span ingest through the exporter, we talked about the OneAgent automatically picking up OpenTelemetry data, which was up here, and use case number three is about metrics. What we also set up here is that Dynatrace collects the currency metric from all those checkout cart transactions through an exporter — and I'm going back to the last seven days for a more interesting, or let me put it that way, more beautiful chart. If we go into the details, we can play a little bit with that specific chart: I'm essentially looking at the hipster currency metric, and in my case I've split it by currency, but I could also filter to only successful or only unsuccessful transactions. In my case, all the transactions fail — I've implemented a bug on purpose here so that we have something to look at. So if I split by currency and look only at the failing transactions, I should end up with the same chart in my case.

Let's go back to the dashboard and then look at the AI component that Dynatrace brings to the table. Before doing that, I'd like to briefly make you familiar with the Smartscape topology, so let me drill in here. The Smartscape topology is really the key foundation for all the advanced analytics that Dynatrace does with the AI component. As I said before, Smartscape is essentially a real-time, up-to-date, live model of the application deployment, and when an anomaly occurs, Dynatrace can take advantage of it to really understand the root cause and the impact of that specific situation. So I've prepared a problem over here that we ran into before: all the failing checkout cart transactions. Down here in my visual resolution path, Dynatrace visualizes only those entities that are related to this specific problem. If I go up into the root cause, to those components with an increased failure rate, I actually come back to what we looked at before: I see that HTTP 500 error, exactly the one we ran into before when we walked through the PurePaths. And if we go to the details down here, we can also see the checkout cart transaction with 46 failed requests.
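As a rough illustration of use case three, here is a sketch of how a counter like the one charted in the demo might be recorded with the OpenTelemetry metrics API, with currency and success as dimensions you can split and filter by. The metric name, attribute keys, and helper function are hypothetical; the exporter wiring would mirror the span exporter sketch earlier, and without a configured meter provider the calls are simply no-ops.

```python
# Minimal sketch: a checkout counter dimensioned by currency and outcome.
# Metric name, attribute keys, and helper are hypothetical, not the demo's code.
from opentelemetry import metrics

meter = metrics.get_meter(__name__)

checkout_counter = meter.create_counter(
    name="hipstershop.checkout.count",
    unit="1",
    description="Checkout cart transactions by currency and outcome",
)

def record_checkout(currency_code: str, succeeded: bool) -> None:
    # The attributes become the dimensions available for split/filter in the UI.
    checkout_counter.add(1, {"currency": currency_code, "success": str(succeeded)})
```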
And I could follow the same pattern as before: drill into that, look at the details of those errors, and understand that we were hitting the rate limit of the currency conversion. That brings me to the end of the demo, so let's go back and conclude the session real quick. If you want to see more of Dynatrace — or, to be more precise, if you want to try it yourself — go to dynatrace.com/trial. It's really easy to get started and roll out the agent on any specific host; it's as easy as one, two, three: download the agent installer, verify the signature, and install it. If you want to use an exporter instead, go to GitHub and find our repository there — the link is available — and that's a way of getting spans and metrics into Dynatrace if you don't want to, or can't, work with the agent. And of course there is also tailored instrumentation and rollout available for Kubernetes and other platforms through an operator. Once you've rolled out the agents, it's really a matter of seconds to minutes until Dynatrace automatically builds the Smartscape model for you — and as you remember, that's the key ingredient for our Davis AI to provide you with automated analysis through Dynatrace's purpose-built AI. And that brings me to the end of this session. Thanks for watching. Remember, if you like what you saw, go to dynatrace.com/trial, and enjoy the rest of the conference. Talk to you soon. Bye-bye.