Hello, everyone. My name is Daniela Rabiser and I'm part of the Dynatrace product management team.

Hi, everyone. My name is Thomas Schuetz. I'm working in the innovation lab at Dynatrace. This session, called "Batteries Included: Shipping OpenTelemetry with Your Open Source Project," has been pre-recorded for KubeCon + CloudNativeCon 2020 North America edition. For today's highly dynamic and complex systems, observability is key. Stay with us if you are interested in learning how we pre-instrumented the open source project Keptn with OpenTelemetry to ensure that it comes with observability batteries included.

So let's describe our use case. First, we wanted to give Keptn users the possibility to collect usage statistics in a way that lets them find out how many events were fired on each project or service. To make this possible for non-Dynatrace customers as well, we thought it would be nice if Keptn came pre-instrumented so that users can choose which observability backend should be used. Currently, Prometheus and Dynatrace are supported, but logging to standard out is also possible.

Keptn is an event-based control plane for continuous delivery and a CNCF Sandbox project. It enables users to automate operations such as setting up quality gates for the automatic promotion of artifacts, but also triggering remediation actions when a service fails. Keptn can also help you deploy your application. As Keptn is written in Go, we wrote an OpenTelemetry Go exporter, as well as a component for managing the exporter handling via environment variables, called the OpenTelemetry Go helper. Both components are open sourced.

So, Dani, tell us something about observability and OpenTelemetry.

Sure. Observability refers to the telemetry data produced by software services, and metrics, traces and logs have been coined the three pillars of observability. Projects like OpenTelemetry, which currently focuses on traces and metrics, promote the standardization of telemetry data collection, and built-in observability will soon become a must-have feature for cloud-native software. But in the context of highly dynamic and large-scale enterprise systems, just looking at these three pillars alone, without meaningful relationships between them, is not enough to get actionable answers and to identify, for example, the root cause of a problem. Real-time topology discovery and dependency mapping, user behavior analytics, code-level visibility and auto-detection of metadata related to the processes that run in your environment are just a few examples of what will unlock automatic and intelligent observability for complex systems.

Now, OpenTelemetry is one of the trending CNCF sandbox projects, and it offers a collection of tools, APIs and SDKs which can be used to instrument, generate, collect and export telemetry data. These metrics, traces and logs can then be used for analyzing your software's performance and behavior. Let me quote Google's Sergei, who stated: "We want every platform and every library to be pre-instrumented with OpenTelemetry, and we're committed to making this as easy as possible." We at Dynatrace are fully embracing OpenTelemetry: we're consistently one of the top five contributors to the OpenTelemetry project, and we also have a member on the OpenTelemetry technical committee. We understand the importance of sharing our expertise in enterprise-grade intelligent observability with the open source community. And since we're an integral part of the OpenTelemetry community, we also want to lead by example regarding the pre-instrumentation of libraries. That is why we chose to pre-instrument Keptn to capture relevant metrics. Now, Thomas, can you elaborate a bit on the kinds of metrics we instrumented?

Yes, surely. As some of you might know, metrics are a numeric representation of the current state of an application. When we talk about metrics, we should differentiate between different types. First, there are gauges, called up-down counters in OpenTelemetry, whose values can be higher or lower than the preceding value. They are therefore a perfect fit for things like measuring CPU load or the temperature inside a server room. When we talk about counters, we mean values which only ever increase during the runtime of an application, such as the total requests to a website, but also the sheep you count before you go to sleep. When using the absolute values of such counters, you might not get any thresholds you can use for alerting. For instance, it might not be very useful to know that you just got the thousandth request to a web page. Therefore, it makes sense to calculate and use the delta values of counters to get something you can alert on and derive useful answers from. For example, it would be useful information to see that there are a thousand requests per minute to your website. And if you want to collect values for each request to your web server, you have to deal with data where single values may not give you the information you want. As you are dealing with a high volume of measurements, you want to aggregate them: for instance, you want to get the number of requests, the average duration of a request, and the fastest and the slowest responses. These are things which can be achieved using histograms and summaries.
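To make these instrument types concrete, here is a minimal sketch using today's OpenTelemetry Go metrics API. Note that the API has changed considerably since this talk was recorded, so these calls differ from the 2020-era ones, and the instrument names are made up for illustration:

```go
package main

import (
	"context"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

func main() {
	ctx := context.Background()
	// Without a configured SDK this returns a no-op meter; it still
	// illustrates how the three instrument types are used.
	meter := otel.Meter("example")

	// Counter: only ever increases, e.g. total requests to a website.
	requests, _ := meter.Int64Counter("http.requests.total")
	requests.Add(ctx, 1, metric.WithAttributes(attribute.String("route", "/")))

	// Up-down counter (gauge-like): may rise and fall, e.g. in-flight requests.
	inflight, _ := meter.Int64UpDownCounter("http.requests.inflight")
	inflight.Add(ctx, 1)  // request starts
	inflight.Add(ctx, -1) // request ends

	// Histogram: aggregates many single values, e.g. request durations,
	// so you can read off counts, averages, and fast or slow outliers.
	duration, _ := meter.Float64Histogram("http.request.duration")
	duration.Record(ctx, 12.5) // e.g. milliseconds
}
```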
So let's talk about the architecture of our sample use case. First, there is a user who sends an event to Keptn, such as "I want to create a project." Afterwards, the Keptn API sends an event to the messaging system, NATS, which is also a CNCF project. And there is our statistics service, which listens for new events on this messaging system. So the statistics service receives the event and selects the exporter, which is achieved via the OpenTelemetry Go helper. Afterwards, the metrics are exported and the data is sent, which is achieved via the OpenTelemetry exporter.

So we needed to instrument the statistics service in a way that it can handle all of this. What you see on the left side is the instrumentation needed for the OpenTelemetry Go helper, which is the only thing needed to make the exporter switchable: initialize the exporter and, if it is a push exporter, defer a stop of the pusher. On the right side, we have the instrumentation of the metrics themselves. We wanted to create a new entry in a map whenever a new event type is received: if the entry is already present, it is reused; if it is not present, it is created. Afterwards, the count of events gets incremented whenever an event is received. Both parts are sketched in the code below.

So during the following demonstration, we will first deploy the Keptn statistics service to export the metrics to a Prometheus endpoint and display this data. Afterwards, we will reconfigure the service to use Dynatrace as the backend and take a closer look at this data in the user interface. The OpenTelemetry Go helper uses environment variables to decide which exporter to use.
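Here is a minimal sketch of such environment-variable-based exporter selection. The variable name OTEL_PROVIDER, the port, and the stdout fallback are assumptions for illustration, not the helper's actual interface, and the sketch uses today's OpenTelemetry Go SDK packages rather than the ones available in 2020:

```go
package main

import (
	"context"
	"log"
	"net/http"
	"os"

	"github.com/prometheus/client_golang/prometheus/promhttp"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/prometheus"
	"go.opentelemetry.io/otel/exporters/stdout/stdoutmetric"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

func main() {
	var provider *sdkmetric.MeterProvider

	switch os.Getenv("OTEL_PROVIDER") { // hypothetical variable name
	case "prometheus":
		// Pull-based: the exporter acts as a metric reader and the
		// collected data is served on a scrape endpoint.
		exp, err := prometheus.New()
		if err != nil {
			log.Fatal(err)
		}
		provider = sdkmetric.NewMeterProvider(sdkmetric.WithReader(exp))
		http.Handle("/metrics", promhttp.Handler())
		go func() { log.Fatal(http.ListenAndServe(":2112", nil)) }()
	default:
		// Push-based fallback: periodically write metrics to stdout.
		exp, err := stdoutmetric.New()
		if err != nil {
			log.Fatal(err)
		}
		provider = sdkmetric.NewMeterProvider(
			sdkmetric.WithReader(sdkmetric.NewPeriodicReader(exp)))
	}

	otel.SetMeterProvider(provider)
	// Stop the provider on shutdown so push-based exporters flush their data,
	// mirroring the deferred pusher stop described above.
	defer provider.Shutdown(context.Background())

	// ... run the service ...
}
```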
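And here is a minimal sketch of the per-event-type counting described above, again with hypothetical names (only the metric name keptn.events.count comes from the demo) and today's metrics API; the real statistics service may be structured differently. The asynchronous counter reports the current cumulative counts from the map on every export:

```go
package stats

import (
	"context"
	"sync"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/attribute"
	"go.opentelemetry.io/otel/metric"
)

// EventStats keeps one map entry per event type and reports the counts
// through an asynchronous OpenTelemetry counter.
type EventStats struct {
	mu     sync.Mutex
	counts map[string]int64 // event type -> number of received events
}

func NewEventStats() (*EventStats, error) {
	s := &EventStats{counts: make(map[string]int64)}
	meter := otel.Meter("statistics-service")
	// The callback is invoked on every export and reads the current counts.
	_, err := meter.Int64ObservableCounter("keptn.events.count",
		metric.WithInt64Callback(func(_ context.Context, o metric.Int64Observer) error {
			s.mu.Lock()
			defer s.mu.Unlock()
			for eventType, n := range s.counts {
				o.Observe(n, metric.WithAttributes(attribute.String("type", eventType)))
			}
			return nil
		}))
	return s, err
}

// OnEvent creates the map entry the first time an event type is seen
// and increments the count on every received event.
func (s *EventStats) OnEvent(eventType string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.counts[eventType]++ // a missing entry is created with a zero value
}
```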
As the statistics service is running on Kubernetes, and we also use credentials for Dynatrace, we created a secret called open-telemetry to pass the environment variables to the pod. So first, we will create the secret for using Prometheus: we create a generic secret called open-telemetry in which the OpenTelemetry provider is set to Prometheus. As the service is already running, we need to restart it now. Afterwards, the metrics endpoint of the statistics service is port-forwarded so we can have a look at the data.

So now let's open a browser and browse to the metrics path of the pod. We see that there are new metrics called keptn events, with dimensions such as the event type, for example service create or service delete, the project and the service name, and a count for each of those metrics.

As we saw, this is working with Prometheus on the first try, so I will switch the backend now and use Dynatrace as the observability backend. To achieve this, I have to delete and recreate the secret, but now with Dynatrace as the provider and the credentials passed to it. So I delete the secret, and afterwards I create a new secret, also called open-telemetry; now I use Dynatrace as the OpenTelemetry provider and pass the Dynatrace URL and the Dynatrace API token, which is masked here, to the secret. Afterwards, I also restart the deployment of the statistics service. So now the service collects data and sends it to Dynatrace.

After some time, we see this data in our Dynatrace explorer. You can pass in the metric name, which is called keptn.events.count, and request a summary of the query. Now we see the total of all of the metric values which have been passed to Dynatrace so far. Afterwards, you can also view a top list; we want to know which project received the most events, and we see that the project kubecon 34 received the most events at this time.

So, as you saw in this short demonstration, it was really easy to switch providers by setting environment variables, and I hope you enjoyed it. Now Dani will show you how you could help improve this, and also tell you about our next steps.

Awesome, thanks for the demo, Thomas. Yeah, let's conclude with a call to action and our next steps. First and foremost, contributions to either the OpenTelemetry Go helper and exporter or Keptn are highly appreciated. Please visit Dynatrace open source projects like the OpenTelemetry Go helper, or Keptn, on GitHub. Our next step regarding the OpenTelemetry metric exporter is to contribute it to the OpenTelemetry project. And if you're interested in a Dynatrace product demo or want to engage in further discussions, you're welcome to visit our virtual booth.

Now, thank you for the opportunity to speak at KubeCon + CloudNativeCon 2020 North America edition. Have a good day.