Good morning everybody, and welcome again to yet another OpenShift Commons briefing. I'm really happy to have with me two of our OpenShift Commons members from CoScale, Peter Ar... I'm going to say it wrong, Arijs, and Samuel Van Dam, who are going to talk about monitoring OpenShift and detecting performance anomalies. It's another perspective and point of view on monitoring on OpenShift, and I'm really pleased that they're able to join us and give us their point of view. So I'm going to let Peter start us off with a bit of an overview of CoScale, and Samuel is going to give us a deeper dive and a demo of their offering. The format for this is: if you have questions while people are talking, put them into the chat. When Samuel, Peter, or I, or one of the other folks are on, we'll try and answer them. And once the presentations and the demo are done, we'll open it up for Q&A for everybody. So without any further ado, Peter, take it away.

Okay. Thank you very much, Diane, and you pronounced my name very well, so no problems there. Thank you everyone for joining. Let me say a few quick words about CoScale first. As Diane said, we offer a monitoring solution. We call it full stack performance monitoring, but it's really focused on microservices environments such as OpenShift. Our solution is a lightweight solution that's specifically for production monitoring, and we use anomaly detection to find problems faster. We offer this as SaaS as well as on-premise, and we are firmly embedded in the container ecosystem as an OpenShift Primed partner, but also as a Docker Ecosystem Technology Partner. So with that, let's talk a bit about how CoScale fits in the OpenShift ecosystem. And to do that, let's first have a look at the problem that OpenShift tries to solve.
So when we look at the evolution of application architectures, we see a clear shift from monolithic applications, typically running on physical servers or VMs in a data center, towards much more agile development these days of microservices that are supported by containers and cloud infrastructure. And as we all know, on the infrastructure side, containers have become a fundamental building block for microservices. They offer an attractive way to build and package them, and to ship them into production, all of this by packing all of the dependencies inside of our containers. But running containers in production and at scale does pose a new set of challenges compared to using them just for development. You have to start worrying about things like orchestration, automation, networking and storage, security, hosting, disaster recovery, logging and monitoring, and general application performance. These are all questions you have to ask yourself when you move into production. And this is actually where Red Hat OpenShift comes in, because it offers a packaged container platform built on Docker and Kubernetes and various other components to solve many of the issues that I mentioned on the previous slide. As part of the platform, there are also some basic logs and metrics, but OpenShift also has a strong ecosystem around it for more advanced capabilities. And this is exactly where CoScale comes in. CoScale adds an additional layer of more detailed container and application monitoring to OpenShift, which helps you maintain application performance in production and lets you quickly understand when, where and why performance problems occur. For this reason, CoScale has also been selected as an OpenShift Primed partner for monitoring, because we add significant value to OpenShift, letting you deploy applications in production with the right performance guarantees. Now let's look into the monitoring aspect in a bit more detail.
I think we all realize that monitoring is an important part of running an application in production, yet it seems that many people are still struggling with this when it comes to containerized applications. This is data from a recent survey by Cloud Foundry on the top challenges when it comes to running containers and microservices in production. And we can clearly see that monitoring is pretty high up here, just after container management actually. And monitoring and troubleshooting microservices indeed poses some interesting challenges, as this funny tweet mentions. So let's look at these challenges in a bit more detail. The first obvious observation is of course that the number of containers is much higher than the number of servers, so the number of instances to monitor increases by an order of magnitude when we use containers. In a typical customer environment that we see, customers use up to 10 or 20 containers per host, but we have even seen cases with up to 100 containers. So this is an immediate multiplication of the number of metrics to monitor. The second aspect is that containers can be very short-lived, and this dynamic aspect introduces challenges in rapidly picking up metrics from containers, setting relevant alerts, as well as understanding the impact of container life cycles on performance. A third aspect is that when we compare container environments with monolithic applications, we see a much larger diversity of application technologies used across containers, where people typically use the technologies best suited for the use case of a particular microservice. So this all comes together in an overload of metrics to monitor and alert on. If we look a little bit closer at how we would traditionally monitor a monolithic application and compare that to a microservices application, we see that in a monolithic application, we typically have three monitoring components.
This is perhaps a bit simplified, but at the infrastructure layer, there are traditional system monitoring tools where you look at typical resource metrics. At the application layer, typically you would use an APM tool to gain insight into the internals of your monolithic application. And finally, the end user experience is typically monitored as well, using some form of browser instrumentation or another technique. Now, for microservices, however, on a platform such as OpenShift, we see that an additional layer is introduced, and we now have a lot of smaller, lightweight and loosely coupled application components that we need to monitor. So in order to understand application performance, we not only need to monitor these container instances themselves, but also the way that they are orchestrated, the way that they are tied to services, and finally also the services running inside the containers. And this is actually where most APM tools start to have difficulties. This is also the opinion of Cameron Haight, who's a research VP at Gartner. In one of his recent reports, he claims that these new application architectures, including containers and microservices, are really stressing the capabilities of APM tools. Now, why is this? Well, first of all, most APM tools were designed maybe five or ten years ago specifically for monolithic applications, for example written in Java, .NET, and so on. And because of the nature of monolithic applications, understanding what's really going on inside your application and the interaction between application components requires you to have code level visibility of the application. Now, when we compare that with containerized environments, we see that the application is split up into many smaller microservices, each running a more limited amount of code, as I said, typically using different technologies. And in such cases, code instrumentation is much less needed.
It's actually more helpful to understand how each microservice is behaving and what the interactions between the microservices are. And this is especially true since containers are lightweight instances, so you don't want to use a heavyweight monitoring tool to monitor them. In fact, most of these heavyweight monitoring tools will require you to install an agent inside your container. And this is really an anti-pattern, since containers should be limited as much as possible to a single process, and you don't want to pollute your container by packaging an extra agent in there. And then the final aspect is that most existing tools have a hard time keeping up with dynamic environments, especially if they use static alerting. But I'll tell you more about that a bit later. So if you are looking for a monitoring tool for a containerized environment, what visibility should it really give us? What metrics should we monitor? At the host level, we obviously still want to monitor resource metrics, typical things: CPU, memory, disk, and so on. Typically, you would also use an orchestration tool; in the case of OpenShift it's a flavor of Kubernetes, but there are other orchestrators out there. And at this level, you want to monitor things such as the number of containers, how they are set up, and the relationships between services and containers. This gives you more service-oriented visibility, like which container runs which service, or which containers are impacted when a particular service starts degrading. At the container layer itself, we also want to keep track of relevant resource metrics, CPU, memory, and so on, as well as when these containers are started and stopped, their life cycles. And it doesn't stop at resource metrics, of course. We also want to know the requests going in and out of our containers, as well as application metrics from the particular services that are running in our containers. These could be things like Nginx or Redis or MySQL.
So all of these services, you also want to monitor in quite some detail. And then finally, our application will serve some end users and ultimately also a business, and we want to monitor relevant metrics from that perspective as well. These could be things like page load times or conversion rates and things like that. So those are the sets of metrics that you want to monitor, and how does CoScale handle that? What's our approach to monitoring microservices and containers? Well, we run one lightweight agent per host. It can be either installed directly on the operating system or in a privileged container. With that agent, we can get server resource metrics at the OS level. We can also get container and cluster resource metrics, typically using the APIs from Docker and the orchestrators and OpenShift. Now, there are other tools that do that as well, but CoScale actually goes one step further, because we have a very rich library of plugins for various application components. And we can configure these in such a way that, first of all, any new container that runs a service for which we have a plugin will automatically get monitored when that container starts. And secondly, we will get very application-specific metrics from these containers without the need to install an agent in the container. And this is a quite unique capability. In addition, CoScale also has a real user monitoring component, where we use a little JavaScript snippet to get end user experience metrics from the web browser. We also allow you to track unlimited custom metrics; we have various ways of doing that: a scripting plugin, log parsing, or leveraging our APIs. And on all of these metrics, and this is the important part, we run automated anomaly detection that lets us quickly detect abnormal behavior. Final point, we also track relevant infrastructure changes.
This provides extra context on what's going on in your environment: things like container lifecycle events or events from your orchestrator, but also things like new deployments or configuration changes. These are all things that are happening in your environment, and they can have an impact on performance. By also capturing these events with the various integrations that we offer, we provide extra context around those kinds of things. So this picture gives a visual representation of the CoScale platform, with our lightweight agent and all of the plugins, well, not actually all, but a representative part of the plugins that we support, the real user monitoring component, and the integrations for various custom metrics and events. And with this data, we can obviously create nice dashboards. We are a monitoring tool after all. But we can also automatically detect abnormal behavior using our anomaly detection. So, anomaly detection. I want to spend a little bit more time on it, since it is one of the differentiating features of CoScale. To see again why it's so important to use automated techniques such as anomaly detection, just have a look at the explosion in the amount of metrics to monitor when we compare a traditional monolithic application with a containerized environment. Basically, the number of containers acts as a multiplier on the number of metrics. We can easily end up, in this example where we have 10 containers per host, with more than 1000 metrics to monitor per host, compared to 100 in a traditional environment. Now, if you multiply that again with the number of hosts, you can see that it quickly becomes unmanageable, certainly if you use classic techniques such as static alerts. Now, I'm not saying static alerts don't work or are bad. They actually work very well for well understood, consolidated metrics.
For example, the number of visitors on your site, or some business metric that you have a good handle on, but not necessarily for those thousands of metrics that are coming from your containers and microservices. And the amount of data is not the only limitation of static alerts. There are other limitations as well, like how to deal with dynamic environments, which require you to constantly reset or reconfigure your alerts. Seasonality is also hard to handle with static alerts, so you would have to start writing complex alert expressions based on time. And in fact, the same goes for correlations between metrics. So, if we look at the definition of an anomaly, which is basically a deviation from what is normal or expected, this means that if we can get pretty good at predicting expected behavior, we can also get pretty good at detecting anomalies. And this is basically what we have focused on at CoScale. We look at the historic behavior of all the metrics we monitor and make a prediction based on that. We also include a fair amount of domain knowledge for that. And if we see a deviation from this expected behavior, we give it an anomaly score, depending on how large the deviation is. We then alert when this anomaly score exceeds a certain threshold value. Now, this is a simple explanation. There are a lot more sophisticated things going on, but this is the basic concept that we apply. What we will also do is group metrics on which anomalies are occurring at the same time, to give you a better understanding of what's happening in your environment. So, this is an example screenshot, and I think Samuel will illustrate it a bit better in the demo later, where we see that on our anomaly timeline, we have different metrics at the server, user and business level showing abnormal behavior: basically we see that certain services and certain containers are overloaded. This creates an increase in latency on our website.
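The scoring idea described here, predict expected behavior from history, score the deviation, and alert only above a threshold, can be sketched in a few lines. This is a toy illustration of the concept, not CoScale's actual algorithm:

```python
from statistics import mean, stdev

def anomaly_score(history, actual):
    """Score how far a new observation deviates from expected behavior.

    history: recent values of the metric (the 'learned' behavior).
    Returns the deviation in units of historical standard deviations.
    """
    expected = mean(history)        # the predicted value
    spread = stdev(history) or 1.0  # avoid division by zero on flat metrics
    return abs(actual - expected) / spread

def is_anomaly(history, actual, threshold=3.0):
    """Alert only when the anomaly score exceeds a threshold."""
    return anomaly_score(history, actual) > threshold
```

A real system would predict per time of day and day of week to handle seasonality, which is exactly where static alert expressions become unwieldy.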
We also see that there are more views on our website, and our conversion rate is impacted as well. So this consolidated view, giving you all metrics together, gives you a really good view of what's happening in your environment and a lot of context to understand what the performance problem actually is. We're also applying outlier detection, which is a different form of anomaly detection, where we look specifically at metrics from similar instances in a cluster, such as containers that are supporting the same service. If we see any containers with different behavior compared to the rest of the cluster, we can also alert on that. In this example, we highlight containers with increased memory usage. In general, this kind of outlier detection requires less of a learning period than anomaly detection on time series data. But the basic idea really remains the same: you can quickly detect changes in performance without having to set up a lot of manual alerts. That's the basic premise of CoScale. So, I'm going to end my part of the presentation here. I'm sure you would like to see how this all works in practice, so I'm going to hand it over to my colleague, Samuel, to give us a demonstration.

Okay, thank you, Peter, and hello, everyone. Let me quickly share my screen. I think Peter might have to stop sharing his screen. Normally, you should be able to see the CoScale dashboard now. It looks wonderful, thank you. Okay, perfect, thank you. So, yeah, welcome to the CoScale application. If you create a trial with us, this is one of the first screens you will see after creating your account. It shows you the four main components of the CoScale platform. We have our real user monitoring, as Peter talked about. We have integrations with a lot of third-party services.
We also have a lot of ways to do really custom integrations, both with config management, as we have a command line tool and API, and other methods of really binding your system together with our monitoring. But today, I'm mostly going to talk about the agents, because the agent, of course, will be used to get server data, and specifically for this demo, OpenShift information and Docker information. So, I'm gonna click through, and I arrive on our agents page. This is the page where you would see all your servers and all the agents you've configured. Here you can see I have one agent, the OpenShift agent, and it has three servers. Well, it's installed on three machines: I have an OpenShift master and two nodes. Creating a CoScale agent is a really simple process. We support Linux and Windows, most popular Linux flavors. I'm gonna select Red Hat 7 for now. And if you click next step, you get a list of all the plugins that Peter was also talking about. You probably recognize here the most popular open-source tools. And specifically, we of course have support for OpenShift and Docker. I'm gonna go into a little bit more detail about the Docker configuration a little bit later. Now, because I've selected an OpenShift plugin, or a Docker plugin more specifically, I get two options for installing a CoScale agent. One of those is through package management, so I get an RPM which I can then install on the server. But because OpenShift usually runs in very dynamic environments where new minions can be started at any moment, we also have the option to start our agent as a privileged container. Specifically for OpenShift, we have a configuration available that will allow you to just add a DaemonSet to your OpenShift environment, and then the agent will automatically be deployed on every server that's in OpenShift. So, here I quickly opened the OpenShift web interface, and the CoScale project contains my agent.
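To make the DaemonSet approach concrete, here is a minimal manifest builder for running a monitoring agent as a privileged container on every node. The image name, namespace, and environment variables are hypothetical placeholders, not CoScale's actual configuration:

```python
def agent_daemonset(image, app_id, access_token):
    """Build a minimal DaemonSet manifest (as a dict) that runs a
    privileged monitoring-agent container on every node in the cluster."""
    return {
        "apiVersion": "apps/v1",  # older OpenShift releases used extensions/v1beta1
        "kind": "DaemonSet",
        "metadata": {"name": "monitoring-agent", "namespace": "monitoring"},
        "spec": {
            "selector": {"matchLabels": {"app": "monitoring-agent"}},
            "template": {
                "metadata": {"labels": {"app": "monitoring-agent"}},
                "spec": {
                    "containers": [{
                        "name": "agent",
                        "image": image,
                        # privileged, so the agent can read host and Docker metrics
                        "securityContext": {"privileged": True},
                        "env": [
                            {"name": "APP_ID", "value": app_id},
                            {"name": "ACCESS_TOKEN", "value": access_token},
                        ],
                        "volumeMounts": [{"name": "docker-socket",
                                          "mountPath": "/var/run/docker.sock"}],
                    }],
                    "volumes": [{"name": "docker-socket",
                                 "hostPath": {"path": "/var/run/docker.sock"}}],
                },
            },
        },
    }
```

Because a DaemonSet schedules one pod per node, new nodes automatically get an agent, and crashed agents are restarted, which is exactly the behavior described in the demo.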
And you see I have four servers, each running one of our agents. I'm gonna quickly show you the configuration for it as well. So, here is the DaemonSet, and here we can see the config. This is really a powerful system, because now if you scale out your environment, or if maybe one of your servers crashes, the agent will automatically scale with you or restart when needed. Now, what information do we get from the CoScale agent? Peter already mentioned it a little bit in the slides. Because OpenShift runs a Kubernetes environment in the background, you're gonna see a lot of the same concepts. So we have replication controllers; well, we get the data from replication controllers, we get the data from the services, we get all the containers that are running and where they are running. We also have a very powerful event system. Here, for example, you can see our replication controller overview, and every time you have an event of insufficient replicas, meaning that probably a container has crashed somewhere, you can clearly see this with our events, and you can go and research what happened. Below we have our container overview, once on a service level and once on a host level. So, you see here I have five replication controllers, some running five containers. I can clearly see which are more the helper containers started by Kubernetes. And you may have noticed here that we sometimes have a different color for a container. This is because you can select a metric for each of these widgets and then set thresholds which you choose yourself. I think in this case we've selected 30 and 50% CPU usage, and then depending on the value we get back from the container, we're gonna color code the container here. This way you can quickly see if some containers are maybe using too much CPU; in this case it's clear that we have 97% CPU usage, and we might be impacting all the other containers that are running on the same machine.
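The threshold-based color coding on these widgets boils down to a simple mapping. A sketch using the 30% and 50% thresholds from the demo (the color names are assumptions for illustration):

```python
def container_color(cpu_pct, warn=30.0, crit=50.0):
    """Map a container's CPU usage to a widget color using
    user-chosen thresholds (30% and 50% in the demo)."""
    if cpu_pct >= crit:
        return "red"     # e.g. the 97% container impacting its neighbors
    if cpu_pct >= warn:
        return "orange"
    return "green"
```

The same mapping works for any metric you pick for the widget, memory or network instead of CPU, since only the thresholds change.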
So, this really gives you a bit of an overview of the entire environment. Now, the next dashboard is a little bit more focused. This dashboard, as you can see here at the top with our dimension system, has just the data from the MongoDB replication controllers. I can quickly change this dashboard if I want to see data from other replication controllers, but here on this dashboard I get information on the container lifecycle: I can see when containers were started, when they were stopped, what the exit code was, also on which machines they are running, what the CPU usage is, memory usage, network received and sent. Here again, you can set your own thresholds, so it's a very visual way of seeing if the container is performing as you would expect. And then we have the event system that we talked about already. So, every time a container is started, it's gonna first send a ready signal and then a running signal saying, okay, I'm ready to get traffic from other servers in this case. The next dashboard I want to show you is a little bit more general. This is a dashboard made by one of our customers, and they've chosen to put a lot of their services together. They run a microservices environment, and they have a common API, so a common microservice, a product API and a checkout API, all running in OpenShift, and they've chosen to put this information very clearly on the first dashboard that they open. So, this is their home dashboard. Now, we also have some information coming from our real user monitoring, and I wanna show you how easy it is to go from that top level view here, just showing me the page load time (it could also be just a service), all the way down to a server level or a more detailed view.
So, let's say I see that my page load time is a little bit too high. I can click through on this tile, as we call it, and I arrive on a dashboard that was created specifically with real user monitoring information. So, I get the page views coming from there, I get the page load time, I get my most popular pages and the slowest pages. Now, it might be that you see another page here that is a little bit too slow; you can click through again and you arrive on a dashboard just showing you information for that page: page views, page load time and the page resources. Here again, you have the option to click through once more, because this is still the front end for the user, and now I've clicked and I arrive on the microservice level. So, I get the web request rate from the containers that are delivering these web requests, I get the latency and I get the error rate. Just to show you how easy it is to link dashboards together and make a system that shows you the information you need. Here also, we have the alerts that were in this timeframe, the anomalies, free memory, CPU load, and then another way of using our event system: this customer has integrated with our MailChimp integration, so every time they send a mailing campaign, it's going to be added to CoScale, and they'll be able to link it to performance problems, maybe, or changes in the metrics. They do the same for software deployments, so every time they do a new software deploy, they can clearly see when it happened and maybe what the impact was. Further, I've mentioned that we gather metrics from containers; we of course also gather metrics on the operating system level, so CPU load, free memory, network traffic and throughput. Here I want to show just a small detail. Peter has mentioned that CoScale is a lightweight monitoring platform, so we aim to have a very low resource usage on the servers that we are monitoring, and for that reason we've made certain decisions in our design process.
For example, we're not going to push the CPU load or the CPU usage of every single process running on the machine, but sometimes that's very valuable information. Here I see a clear spike in my CPU, and I would like to know what happened at this time. It's for this reason that we added the forensics system. The forensics system is a small, lightweight anomaly detection running in the agent, and when there is a sudden change, it's going to take a snapshot, take a picture of the system, and send it back to our platform. Then I can research: okay, this spike was caused by the Docker daemon, probably deploying a new image, or something else. Now, I want to jump back to the agents page, because I said I was going to explain our Docker monitoring a little bit more, especially because we do in-container monitoring. The idea there is that the plugins you saw in the beginning, which are available if the agent is installed directly on the host operating system, can also be used to monitor what's happening inside the container. So let's say I have an Apache container running the Apache software; I can get metrics from that Apache and monitor how it's actually performing. To show the way we do this, I'm going to quickly open the configuration of our Docker image, our Docker plugin, excuse me. So here you see the configuration of the Docker plugin, and you see I have four Docker images configured. The way it works is that if you install our Docker plugin, it's going to scan the server it's running on and see, okay, which containers are running here, and then it's going to match that list with the configuration I set here. When it sees an Elasticsearch image with, in this case, a wildcard tag (but this can of course also be a normal tag; here in the case of memcached, I just match on latest), it's going to start a CoScale plugin within the container itself.
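The matching step just described, comparing running container images against configured image patterns with wildcard or exact tags, could look roughly like this. The configuration format is illustrative, not CoScale's actual syntax:

```python
from fnmatch import fnmatch

# Hypothetical plugin configuration: image pattern -> plugin to inject.
PLUGIN_CONFIG = {
    "elasticsearch:*": "elasticsearch",  # wildcard tag: any Elasticsearch version
    "memcached:latest": "memcached",     # exact tag match
}

def match_plugin(image):
    """Return the plugin to start inside a newly seen container,
    or None if no configured image pattern matches."""
    for pattern, plugin in PLUGIN_CONFIG.items():
        if fnmatch(image, pattern):
            return plugin
    return None
```

Scaling from one Elasticsearch container to five then needs no reconfiguration: each new container's image matches the same pattern, so each gets its own injected plugin.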
And here, very important to note: you don't have to install anything in the container beforehand. No, we inject this the moment that we see the container. We inject that plugin, it's going to start gathering data, and it sends it back to the agent that's running on the host operating system. Now, this has two very good advantages. The first is that it scales with your containers. If you're going from one Elasticsearch container to five, that's not a problem. Our Docker plugin is going to detect that, it's going to start four more Elasticsearch plugins, and the data is going to be gathered, and you're going to be able to see the data coming from each individual container, or all together on an image level or a tag level. So we really allow you to compare data from previous versions to the new version as well. It's really a powerful system. And the second advantage is that because we start that plugin within the container itself, the configuration becomes a little bit easier. To give you an example, here we have the configuration for an Nginx plugin. CoScale gets a lot of this information from APIs and status calls, so we need access to the Nginx status page. And you might have noticed here that I use localhost. I hope it's clear on your screen, but I don't need to map any ports or do any special configuration to be able to monitor this image. No, this localhost, because we start the plugin within the container, is just the container itself. So this port, in this case 8000, is just accessible without any additional configuration. The other advantage is that the same goes for the file system. You don't need to mount any local disks on your host machine to be able to access this access log. No, this will just work. And the moment your container is stopped, this access log will be deleted, but that's fine, because CoScale has at that moment already gathered all the information from it.
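Because the injected plugin runs inside the container, it can read Nginx's status page on localhost directly. A sketch of parsing the standard stub_status output (the port is from the demo; the parsing code is an illustration, not CoScale's plugin):

```python
import re

def parse_stub_status(text):
    """Parse Nginx stub_status output, e.g. as fetched from
    http://localhost:8000/nginx_status inside the container."""
    metrics = {}
    m = re.search(r"Active connections:\s*(\d+)", text)
    if m:
        metrics["active_connections"] = int(m.group(1))
    # The counters line: "<accepts> <handled> <requests>"
    m = re.search(r"^\s*(\d+)\s+(\d+)\s+(\d+)\s*$", text, re.MULTILINE)
    if m:
        metrics["accepts"], metrics["handled"], metrics["requests"] = (
            int(g) for g in m.groups())
    return metrics
```

No port mapping is needed: the HTTP call and the parse both happen inside the container's own network namespace, and the agent on the host just receives the resulting metrics.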
It's really a handy way to monitor live running containers. Now I wanna show you a couple of dashboards that show the advantage of having this system. So here I have a memcached dashboard. I have general metrics coming from memcached: connections to memcached, network bytes received, the commands, and hits and misses. Now, you see that the commands metric had some changes. We used to be around 800 commands a second, and we dropped down to 400, but we had some spikes, which is a little bit strange. So what I can do is zoom in a little bit, and I can clearly see here that there were two containers running, and all of a sudden one of the containers started misbehaving a bit because it crashed. So the other container had to handle a lot more data. And if I look at the events, I'm gonna see there were too few replicas; one was missing. A little bit later, a container was started, so we see the new line popping up. No manual action on my side was required here: I didn't have to change this dashboard, the new container just popped up. And we decided to scale it up a little bit, so we added some more containers. If I zoom out again, you'll see the replicas scaling: we scaled up, first to eight and then to ten. The other example is our Nginx. Here again, we get a general dashboard, which you also get if you create a CoScale application, with the amount of connections, the amount of containers, the average latency, the request rate, and a nice heat map that shows me the performance of my containers over time. So I can quickly identify maybe those that are not performing as I'd like. And then here we have another dashboard that shows me information coming from the latency of my website and the latency of all my requests. So here we have really a lot of containers delivering my website. You see at one point we added some new containers, because it seems that there was an issue.
These were probably handled by OpenShift itself, and then these new containers start delivering the website to the customer and the data starts rolling in. Okay. Now, the last thing I wanna show you, because Peter also mentioned this and I think it's a very good point, is that in these new environments you have so many metrics to watch and so many containers that it becomes very difficult to set meaningful static alerts that don't overflow your mailbox, but at the same time, you still need some warning that something happened in your system. And there we think that anomaly detection can really add value in these container environments. So, Peter also showed this; this is the same anomaly as we saw in the presentation. We have the anomaly on three levels, and we group it, so you can see here there was an anomaly on latency, we had a couple of anomalies on the request rates, and then we had an anomaly on CPU of both servers. I'm gonna show some examples coming from containers, but first just to show you where the screenshots from Peter came from. So we can see that the latency of my website went up. We have a nice dot plot if you want to look: here you can see how many of my users are in which range of page load time, and we can clearly see that there is a new group of users that is experiencing much slower page load times than normal. Then, we have a lot more visitors: we went from 0.5 to 1.7 visitors. This is on different pages, by the way. It's something to note that CoScale automatically builds a tree of your application and will do anomaly detection on all individual pages. So if one page changes, you'll still be able to see this with the anomaly detection. And then we also have an anomaly on CPU usage; you can clearly see it went from 30 to 50%. And this is, I think, a very good example, because normally you wouldn't set a static alert at 50 or 55%. You would set it at 70, 80 or even 90.
But still, this is abnormal behavior of your server and you would like to see what happened at this time. With the forensics, I can then quickly see that NGINX was using more CPU, and this of course makes sense: I have more visitors, so my web server has more work. A different example, more on a business level: we did a large proof of concept with a customer in the US. They sent us a lot of their business data and our anomaly detection was applied to it. And we were able to find small issues like here in this case, where the number of orders per minute suddenly dropped. If we zoom in a little bit, you'll see that it dropped to almost zero, so this had a big impact for them. With the anomaly detection, they were able to identify and fix it pretty quickly. Now as a last example, I have two anomalies here: one on a user level, the request rate, and one on a server level. I'm going to quickly open the user one. We went from around 9.5 requests to 14, and you see the anomaly detection system was able to quickly identify this. If we take a look at the anomaly on CPU usage, this was detected on an Apache container, and it's a very clear anomaly where we go from 0% or very low CPU usage to very high in a very short time. But again, it was automatically detected. So it really proactively helps you find issues in your environment, including issues for which you might not have set a static alert yourself. That's all the examples I have to show, so I'm going to give the word back to Peter now. And we have a few questions that have been coming in, and Frederick Ryckbosch, I think your CTO, has also joined the call, so I'm going to unmute him as well; he'll have to unmute himself in order to answer questions. But thanks, Samuel, that's a great overview of how CoScale works and showcases the anomalies.
Let me see if I can find the first question. Lucas Ponce was asking about custom metrics from apps: are they supported as custom plugins? Because you have a lot of pre-configured plugins in there, but if someone wants something specific for their own apps, how would they go about customizing a plugin or creating a custom one? I think Fred will take the question, or is it clear? Go ahead, Samuel. Okay, thank you. I'll quickly share my screen again so I can show you our documentation. As mentioned, CoScale has a couple of ways of pushing custom metrics into our system. We provide two plugins for that. First, we have a generic script plugin: you write your own script or binary and add it to your server, and the agent will run it every minute or every five minutes, an interval you can set yourself, and push the data back to CoScale. So this is really more of a pull. You can also push metrics with our command line tool. It comes together with our agent, if you install it as a package, and we also have a container available with the command line tool; with that you can easily push data. I can probably show it after. Then we also have a plugin which we call our log plugin, and this is a really powerful tool. If you have existing log files that contain information that you need, which could be a latency or just a number, you can use regular expressions to get that information out of there. This is really an easy way to get data without having to make large changes to your environment. And then we also have the option to push data through StatsD and the CoScale API; if you really want to do a custom integration, we have a very mature API available that you can use. And I'm going to quickly show the command line tool. So here's an example of the command line tool to insert data, where you specify the metric name, the level, and then the value.
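As an illustration of the StatsD path Samuel mentions, here is a minimal sketch of pushing a custom gauge metric using the plain StatsD line protocol over UDP. The host, port, and metric name are assumptions for the example; point them at wherever your StatsD listener is running (the default StatsD port is 8125, but check your agent's configuration).

```python
import socket

def format_gauge(name, value):
    """Build a StatsD gauge line: "name:value|g"."""
    return f"{name}:{value}|g"

def push_gauge(name, value, host="localhost", port=8125):
    """Send one gauge datagram over UDP to a StatsD listener.

    UDP is fire-and-forget: the call returns immediately and does not
    confirm that any listener received the metric.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(format_gauge(name, value).encode("ascii"), (host, port))
    finally:
        sock.close()

# Hypothetical business metric, reported once per measurement interval.
push_gauge("shop.orders_per_minute", 127)
```

The same line format works for counters (`|c`) and timers (`|ms`) if you need rates or latencies rather than point-in-time values.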
And now just to show you, you can find more information on this in our documentation, at docs.coscale.com. Perfect. All right, and another question, go ahead. Yeah, this is Peter. I just wanted to say there are also a few good examples on our blog for working with custom metrics. And as part of that question, I also saw that this person asked how to monitor specific transaction endpoints, like a specific request. For that, it's worth noting that we recently introduced a new feature, basically active checks, that you can put in our plugins to say: I want to monitor this specific request, a specific API call, or a specific query on my database. That's an active check system we introduced recently, and there's more information on it on our blog, so definitely go and check out our blog. And the other question, which Frederick sort of answered in the chat as well, was whether the anomalies are based on standard deviations, or are configurable via thresholds and predicted baselines. Maybe you could talk a little about this; I think this is an important piece. Okay, so this is Frederick. I would like to elaborate a bit on that. Our anomaly detection technique is basically a fully automatic technique that will take your metrics, see how they behave, and based on that create a model. Let's say, for example, CPU usage is mostly tightly related to the request rate that is coming in. Then we will create a model that contains both the CPU and the request rate, and we will build a baseline from that which evolves with time: you have per-hour patterns, per-day patterns, and so on. And we'll create a different type of analysis for each of these metrics. For example, memory usage is not that dynamic; you typically see it rising and going down, but not as fast as, for example, CPU usage.
So it's a completely different model that we use then. We will automatically detect, based on the metric and the data, which model best fits this type of data, and then generate the analysis based on that. So there is no configuration needed. You don't have to set thresholds or specify what your metric will look like; it will be automatically detected and we will have an automatic analysis for it. Wonderful. All right, it looks like that's all the questions. There was a question that we answered offline, which was actually a good question, and I would like to answer it in public: it's regarding our architecture and where we store data. We want to be very open about our architecture, so I've opened the slide here. Basically we use very modern application components. Our metric data is stored in Cassandra, event data is stored in Elasticsearch, and some metadata in Postgres. Our entire architecture is such that it can be perfectly horizontally scaled. It can also be deployed on-premise in a dockerized environment, which makes it very easy to set up and scale. We recently even did a proof of concept where we handled over a million data points per second. So that's some more context on our architecture. Very nice. Not infinitely scalable, but very nice. And I think that answers the earlier question about where the metric data is stored. Exactly: Cassandra and Elasticsearch. So that's using some of the latest and greatest fun bits that are also part and parcel of OpenShift as well, so we're real familiar with a lot of that. I'm going to give everybody a few more minutes to see if there are any other questions. Is there anything else you'd like to add, Peter, Samuel, or Frederick? Yeah, I want to thank everybody for attending. And if you're interested, do try out our solution on OpenShift.
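Frederick is describing a production-grade system that learns per-metric models and seasonal baselines; as a toy sketch only, and not CoScale's actual algorithm, the core idea of flagging deviations from a learned baseline without any hand-set threshold can be approximated with a rolling z-score detector:

```python
import math
from collections import deque

def detect_anomalies(series, window=20, z_thresh=3.0):
    """Flag points that deviate strongly from a rolling baseline.

    Returns the indices of values lying more than z_thresh standard
    deviations from the mean of the preceding `window` points. The
    baseline is learned from the data itself, so no absolute threshold
    (e.g. "alert at 70% CPU") has to be configured.
    """
    history = deque(maxlen=window)
    anomalies = []
    for i, x in enumerate(series):
        if len(history) == window:
            mean = sum(history) / window
            var = sum((h - mean) ** 2 for h in history) / window
            std = math.sqrt(var)
            deviation = abs(x - mean)
            # Flat history: any change at all is anomalous.
            if (std == 0 and deviation > 0) or (std > 0 and deviation / std > z_thresh):
                anomalies.append(i)
        history.append(x)
    return anomalies

# CPU hovering around 30%, then jumping to 50%: flagged even though
# 50% would sit below any typical static alert threshold.
cpu = [30.0] * 25 + [50.0]
print(detect_anomalies(cpu))
```

A real system would additionally model correlations between metrics (CPU versus request rate) and hour-of-day and day-of-week seasonality, which is exactly what removes the false alarms a naive rolling window produces on periodic workloads.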
As I said, we are a Prime partner and our solution is available for everybody. Just go to coscale.com for a free trial and you can try it out for yourself for 30 days. If you have any further questions, our contact details are here at the bottom of the slide, so feel free to reach out. All right. Well, thank you very much. We look forward to hearing more from you and getting some more use cases and stories as well. There's going to be an upcoming OpenShift Commons gathering in Berlin on March 28th, co-located again with KubeCon, so we'll hopefully have some CoScale representatives there as well, and people can come and ask you questions in person, or as part of one of the special interest groups, because I think there's enough interest in monitoring to kick off a monitoring special interest group at the rate we're going. It will be wonderful to see everybody in person in the EU again. So thanks again for joining me, Peter, and Frederick for popping in as well. We'll let you all get back to your days. And if you can send your slides along, we'll add them into the blog post; this will get reposted on Monday on blog.openshift.com. Thanks all. Okay, thank you. Bye-bye.