Hello, everyone. My name is Daniela Robizer, and I'm a technical product manager at Dynatrace. I'm very happy to be here with you today and to share what we at Dynatrace think is key for successfully monitoring complex cloud applications. You probably know that monitoring used to be a lot about looking at dashboards. You developed your application and deployed it in your environment, either on-premises in your own data center or somewhere in the cloud, maybe on AWS, for example. And then you watched to see if anything happened or if a threshold breach triggered an error message. So you probably stared at dashboards showing graphs like the ones you see on the screen here, or skimmed through application log messages looking for certain keywords. But we at Dynatrace think that modern monitoring should go beyond that. You shouldn't need different agents to cover different technologies; there should be only one agent. You shouldn't have lots of data but no answers. Think of that graph I showed you: it wouldn't provide you any insight, any answer to problems that might be present in your environment. You also shouldn't go for a solution that covers everything but is held together only by duct tape. And when it comes to deploying your monitoring solution, that should work smoothly for you. Since we are here at the Red Hat Summit, containers shouldn't be something your monitoring solution doesn't understand. And of course, it should just work for the enterprise cloud. So we at Dynatrace think that modern monitoring should provide you with a holistic view of your environment. It should be AI-powered, fully automated when it comes to deployment, and, of course, purpose-built for the cloud.
We all know that a healthy environment is important. It matters if you have a small environment, but it becomes highly relevant in a big one. If you deploy your applications using containers, the typical life of a container is maybe minutes, maybe hours, and not necessarily days or longer. Given this kind of highly dynamic and distributed environment, it may become difficult for your monitoring solution to keep up, as the number of nodes and containers is ever changing. The numbers can also get very big, since enterprises are currently migrating from monolithic applications to microservices, and containers are a big topic for breaking up those monoliths. You can easily have 1,000 containers per node, and your monitoring solution needs to scale with your environment as well. When thinking about that, it may also come to mind that container health, service health, and application health are different things. Traditional monitoring tools use, for instance, container APIs to provide an overview of container health. Though it is important to keep an eye on container health in terms of Docker stats and the Docker API, it is of course not sufficient to focus on container health only. The most important thing is to measure the health of the services that are running inside a container. It could very well happen that the Docker stats show healthy results, but the services running inside the containers are starving for resources. For example, if you run a Java process in a container, it could be starved for CPU and may not be able to spawn multiple threads to deliver the service faster.
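To make that gap between container-level and service-level signals concrete, here is a minimal sketch in Python. It computes a container's CPU percentage the way `docker stats` does, from the `cpu_stats`/`precpu_stats` deltas of Docker's `GET /containers/{id}/stats` payload, and then checks the CFS throttling counters, which can reveal a CPU-starved process even when the usage number looks modest. The sample payload and the 50% throttling threshold are made-up illustrations, not real measurements.

```python
# Illustrative only: a sample payload shaped like Docker's
# GET /containers/{id}/stats response. All numbers are invented.
sample_stats = {
    "cpu_stats": {
        "cpu_usage": {"total_usage": 810_000_000},
        "system_cpu_usage": 10_000_000_000,
        "online_cpus": 4,
        "throttling_data": {"periods": 150, "throttled_periods": 120},
    },
    "precpu_stats": {
        "cpu_usage": {"total_usage": 800_000_000},
        "system_cpu_usage": 9_000_000_000,
    },
}

def cpu_percent(stats):
    """CPU %% as `docker stats` computes it: delta vs. the previous sample."""
    cpu_delta = (stats["cpu_stats"]["cpu_usage"]["total_usage"]
                 - stats["precpu_stats"]["cpu_usage"]["total_usage"])
    sys_delta = (stats["cpu_stats"]["system_cpu_usage"]
                 - stats["precpu_stats"]["system_cpu_usage"])
    if sys_delta <= 0:
        return 0.0
    return cpu_delta / sys_delta * stats["cpu_stats"]["online_cpus"] * 100

def looks_throttled(stats, threshold=0.5):
    """True if most CFS periods were throttled -- a hint that the process
    inside (a JVM, for instance) wants more CPU than the container gets."""
    t = stats["cpu_stats"]["throttling_data"]
    return t["periods"] > 0 and t["throttled_periods"] / t["periods"] > threshold

print(f"container CPU: {cpu_percent(sample_stats):.1f}%")   # prints 4.0%
print("service possibly CPU-starved:", looks_throttled(sample_stats))
```

With these sample numbers, the container reports only 4% CPU, yet 80% of its scheduling periods were throttled, which is exactly the "container looks healthy, service is starving" situation described above.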
More importantly, looking at application performance at a component or container level does not tell you how the application performs in terms of user experience or the service the user is actually looking for. Everyone who has ever participated in an IT war room during a serious incident knows that everything can look OK at a component or container level while, from a holistic view, the results are very poor. So if you think of the typical life of a container: on an infrastructure level, you may do some chaos monkey testing, for instance, to ensure that your infrastructure health is fine. On a container level, you could look at Docker stats or query the Docker API. On a microservices level, you could look at response times of your services, at failure rates, at CPU usage, at throughput. And on an application level, you could look at how your users are experiencing the application when visiting your websites. A monitoring solution should provide all these different viewpoints in one place; like I mentioned in the beginning, you would probably look for a holistic view. Another angle on monitoring cloud environments is that deployment dependencies may be different from microservices dependencies. With Dynatrace, we cover both: you get, on the one hand, the deployment dependencies that you see on the left side of the screen, and you also get microservices dependencies through what we call in Dynatrace the service flow. When having a closer look at a service flow, you also get insights into the architecture of your microservices. You may notice that two microservices that are currently split up are tightly coupled, and you may think about combining those two services into one, because if they are so tightly coupled, why distribute them onto two services?
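The idea that each level can give a different verdict can be sketched as a small Python example. The per-level signals, field names, and thresholds here (500 ms response time, 5% failure rate, an Apdex-style user-experience score) are hypothetical illustrations, not Dynatrace's actual health model.

```python
from dataclasses import dataclass

# Hypothetical per-level signals for one slice of the stack.
@dataclass
class HealthSnapshot:
    container_cpu_percent: float      # container level, e.g. from Docker stats
    service_response_time_ms: float   # microservices level, from tracing
    service_failure_rate: float       # failed requests / total requests
    apdex: float                      # application level, user-experience score 0..1

def level_health(s: HealthSnapshot) -> dict:
    """Evaluate health per level -- the levels can disagree with each other.
    Thresholds are illustrative assumptions, not a real product's defaults."""
    return {
        "container":   s.container_cpu_percent < 80,
        "service":     s.service_response_time_ms < 500 and s.service_failure_rate < 0.05,
        "application": s.apdex >= 0.85,
    }

# A typical "war room" situation: the container looks fine, users suffer anyway.
snap = HealthSnapshot(container_cpu_percent=20,
                      service_response_time_ms=1800,
                      service_failure_rate=0.12,
                      apdex=0.40)
print(level_health(snap))
```

Running this prints a healthy container verdict alongside failing service and application verdicts, which is why a monitoring solution needs all viewpoints side by side rather than the container view alone.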
So we at Dynatrace think that, in order to successfully monitor your cloud environment, monitoring needs to be seen as a platform feature. With Dynatrace, you deploy only one agent per host. This OneAgent then takes care of auto-instrumentation of your containerized microservices, so there is no need to adapt your application containers. OneAgent also takes care of automated distributed transaction tracing. And Dynatrace is an AI-based solution that helps you analyze the root cause of problems. To deploy OneAgent on your OpenShift platform, for instance, Dynatrace provides great support. We announced yesterday that we are one of the very first partners of Red Hat to have an operator in place. The OneAgent Operator allows you to manage Dynatrace OneAgent deployments: it lets you roll out OneAgent onto specific nodes only, so you can control the deployment in a fine-grained way. A great value-add of the OneAgent Operator is that it takes care of updating OneAgent automatically as soon as a new version becomes available, so monitoring of your OpenShift cluster always runs with the latest OneAgent version. If you want to check out the OneAgent Operator, we have a great blog post about it online, and we also have a GitHub repository where you can take a closer look at the source code and the implementation. You can also use the GitHub repository to get in touch with us and, if you have further questions, to start a discussion. And speaking of further questions, we are happy to give you a demo of the Dynatrace monitoring support at our booth 4.17, where we also have the OneAgent Operator running.
You can also check out Dynatrace for your own cloud environment and get a 15-day free trial, because, you know, life without monitoring is like a box of chocolates: you never know what you get. That's all from my side so far. Thank you for your attention, and have a nice afternoon.