All right, let's get started. I hope everybody can hear me, also in the last row? That's fine. Okay, so welcome to the talk "Monitoring with Prometheus". I announced this talk to be in English. Who in this room would like to hear this talk in English? Raise your hand now. Otherwise it's going to be in German with English slides. Okay, then welcome to a probably purely German-language round of "Monitoring with Prometheus". What did I bring along? Well, I'm more of a developer, so I have the developer glasses on a bit: how can I monitor an infrastructure? We'll look at what Prometheus is, what it can do, how to set it up, and then how you can get certain metrics out of your system and show them nicely on a dashboard. For those who want even more fun with Prometheus and Grafana dashboards, and a lot of other things besides: on the 24th of June we're doing a Saturday monitoring retreat here in Frankfurt. I have a few flyers with me here in the front, just ask me afterwards. Right, let's get into the agenda. First, the Prometheus "manifesto". Well, there's no real manifesto, but there are a few basic things you should understand about what's behind it. We want to do monitoring. That means there are hosts and services that we want to monitor, from which we mainly want to pull metrics. We want to send alerts, and we want to fill some dashboards, as we will see later with Grafana. The whole thing has to be brought together, and the tool for today is Prometheus. So, who of you has worked with Nagios, for example? Oh, okay. And who has loved Nagios?
That was fewer. Some things today may look a bit different, then. Prometheus is, in the end, a monitoring system, and behind it sits a time series database. That means all the metrics that I read out end up in a database, a NoSQL database, so a very modern thing: a database that can't do SQL. Prometheus doesn't try to be able to do everything, but what it can do, it really wants to do well. You can use it for instrumentation of applications, so that I can pull metrics out of them; I can collect metrics from my infrastructure; I can store all those metrics and query them; I can trigger alerts with them; I can fill dashboards; I can even do trend calculations. That's actually everything you want to do, right? Maybe a few other things. Such a manifesto is something I like: on one side you write down what you want to support, and on the other side, what you deliberately don't do. So, what Prometheus is especially good at is operational systems monitoring: I have a system running, and I want to be notified when something doesn't work. That also works well in the cloud, as we will see later. What it can't do well: log data. You can't push data into it, and you can't really do SLA reporting with it; if you want to run batch analyses over raw events, you should take something else. It simply wants to trigger the right alerts and put nice metrics on dashboards. The whole storage is kept very simple. Retention over a few weeks, or quite a few weeks, is okay, but in the end everything is held locally, to keep it nice and simple. So there is no real horizontal clustering of the storage, and multi-tenancy is not included either. You should know that beforehand; maybe it doesn't fit for everyone, but in exchange it is set up beautifully quickly and is wonderfully reliable. The whole configuration goes into a configuration file. There is also a web UI to
look a few things up, but not to configure it. And there is no user management. You might have to put a reverse proxy in front of it, with basic authentication or a client certificate, whatever you want to have, but something like that doesn't come with Prometheus. Also, Prometheus goes out and fetches all the data from my processes itself, by the pull principle. Others do it differently, maybe via push; there are many different ways to get at metrics. Prometheus says: pull the data. I ask every single process and fetch the data from there. That's how it's done in the Prometheus universe, and not otherwise. And then we can work on all this raw data much better: I can run queries over it, I can process the data a little, add a few dimensions. And every value in this world is a float64, a floating-point number. How many threads do I have? It's a float64. How many CPUs do I have? It's a float64. How many bytes do I have? Also always a float64. And everything is multidimensional. Something like point-and-click configuration we won't see. A data sink where I put data in but never get it out again doesn't exist. And really complex data types don't exist either: float64 is great. Exactly, that's enough manifesto. I think we should take a look at how to do it in practice. How do I set something like this up?
Prometheus, as I said, sits in the middle. In the easiest case I run one process, a binary, a Go binary called prometheus. I copy it onto a machine, start it, and my monitoring system is finished. And if I then say I want to set up alerting, I start an Alertmanager, on the same machine or on another machine. I don't need more: two Go binaries, two or three configuration files, done. And I need a bit of RAM to keep the whole database manageable, so the machine should have enough RAM. To get the data out, I need an exporter. The metrics live in there, and Prometheus asks it: on this node, how many CPUs do I have, how loaded are they? I can look at that on host level. Or, if I run containers, like Docker containers, I'd like to know how much CPU is spread across which container, how much RAM they need. Or I'd like to monitor my processes by checking whether they answer on an HTTP port; for that there is the blackbox exporter, which probes my application and records how fast it reacted and whether it reacted correctly. Or I'd like to expose metrics from inside my application. For example Java, which I brought with me today, because there is a simple client for Java: you can expose metrics from Java code, technical metrics or business metrics. Then I send out alerts; I can route them to PagerDuty, to e-mail, to Slack, whatever I'd like to have there. And I put Grafana on top for my dashboards. Now, if Prometheus is supposed to find all these different blue boxes on the left side, then in the simplest case I write a configuration file. As I said at the beginning, configuration files are the way to tell Prometheus things. But especially when I'm out in the cloud, whether with Kubernetes or Consul or whatever I have built to do service
discovery and to manage my nodes, Prometheus is able to pull all this information out of there. Whatever may already be registered in another system, you let Prometheus read it from there. And especially when I am in a Kubernetes environment: for every container that I start and want to monitor, I put two labels on it, under which URL my metrics can be found and on which port they are reachable, and then Prometheus automatically knows, yes, I have to monitor that, and I get all the metrics. Next to that, as an aside: Kubernetes is fully prepared for Prometheus. It all fits together under the Cloud Native Computing Foundation, and Prometheus is, so to speak, the monitoring system for Kubernetes; all the components I have in my cluster can simply be monitored with it. But well, that's almost enough theory; let's see how it works. Exactly, I have to write the configuration. And the easiest configuration is to do no service discovery and no node discovery at all. I just go and define a job. Scraping means that I ask a target and fetch my metrics from it. So I give the job a name, it should be executed every five seconds, and then I say: here is some IP address, some port, there I get the data. In the easiest case that's plain HTTP, completely unencrypted, but I can also use TLS certificates to make the whole thing secure. And if I call up something like this, what does it look like?
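A minimal static configuration of the kind just described might look roughly like this; the job name, interval, and target address are made-up examples:

```yaml
# prometheus.yml -- minimal static scrape configuration (illustrative values)
global:
  scrape_interval: 5s          # ask every target every five seconds

scrape_configs:
  - job_name: 'node'           # the job gets a name
    static_configs:
      - targets: ['192.168.1.10:9100']   # some IP address, some port
```

And what such a target answers with is a plain text page; for the node exporter's CPU metric it looks roughly like this, using the values quoted in the talk:

```text
# HELP node_cpu Seconds the cpus spent in each mode.
# TYPE node_cpu counter
node_cpu{cpu="cpu0",mode="idle"} 4533.86
node_cpu{cpu="cpu1",mode="user"} 576.91
```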
Then I get back a big, let's say, text document; the scrape delivers a plain text format. In it the metric, in this case node_cpu, is explained: "Seconds the CPUs spent in each mode", of type counter. That is, it counts the seconds that a CPU spent in a certain state. And you can see: CPU zero was idle for 4533.86 seconds, and the only other CPU I have, CPU one, was idle almost as long. As I said, node_cpu is the metric, and the two other things, cpu and mode, are the two additional dimensions that I get for each metric. In the same way I also brought along a running environment, the node exporter, which shows me all the metrics for this one node, which in this case is a synonym for the host. That's a little small, so let me scroll over the server output. Such a node exporter is also written in Go, that's why there are a lot of Go statistics in there. And yes: node_cpu, node_disk, how much data was read, how much was written, things like file systems, how much is free, what they are called, what the mount points are. And all of these metrics are scraped in one go and stored in the local time series database. That means I don't have to go anywhere beforehand and say what exactly I want to have; I actually get everything, and then I can build my dashboards and evaluate what's in there. If that becomes too much, you can say, okay, I'll only keep some of them, because I don't want to overload my database. But I get a lot of raw data that I can aggregate in Prometheus however I want, for my alerting or for the dashboard I want to build. So, in the previous example I took the last part: the seconds that CPU 1 was in mode user, 576.91 seconds since the start of the system. As long as the system keeps running, it keeps counting up. That's my actual metric, and I have these three dimensions: cpu and mode we already had, and the name of the metric is also a
dimension. It has a special name with underscores that you can use in the corresponding queries, as we will see in a moment. In addition, the metrics that I scrape don't know themselves where they come from. That's information Prometheus adds: with which job I read it, which we had before in the configuration file, and from which instance, in this case the IP address and the port number. If it hadn't come from the configuration file but from Consul or Kubernetes or wherever else I read it, this information would be added automatically as well: where did I read it? All of that ends up in my database, and then we can look at the next steps, what we can do with it. You can calculate with these metrics. As I said, node_cpu, the seconds that the CPU was in a mode, of type counter, always goes up. And when I ask it again later, it's just a little more. The nice thing is: if I miss a scrape, I can still reason about what happened in between. I don't lose any real value, maybe just a little granularity. And if I want to know now, how many percent of my CPU was actually used, per core? Then I can start to calculate these things. First I take my metric, node_cpu, and filter on the mode idle. Then I can apply a function, rate: the change per second over an interval of 5 minutes. And the inverse of that is the utilization.
A value of 0.1 means 10% used CPU core, in this considered time frame, at a certain point. Exactly. And I really write things like that down as expressions; that's not something I configure by clicking as in Nagios, I really write queries. Which also gives me great power to ask things exactly the way I want to have them in my dashboard, so that I can also compare between different systems, different nodes: where is more load, where is less. Exactly. Now I have the percentage of CPU used for a single core. You say, well, maybe I want it on a system level. Then I can aggregate: average by instance. Instance was one of the dimensions I had, so now I can get, per instance, the average utilization of the CPU cores of that instance. So I can cover whole data centers with it, pack the machine labels in, and start to build my first dashboards. Are there questions so far? Questions in between are welcome; there's also time at the end, but in between there are in any case answers. But now a few colorful graphics. Yes, a node exporter we had already seen. Let's start with a hopefully simple variable: node_filesystem_free. When I go to the GUI of Prometheus, I can query this metric, and then I get back all kinds of series. I only queried a single system, on which a few Docker containers run, and, whoa: I actually never wanted to know how much space is currently free on every mounted Docker device, but now I have it. Exactly, wonderful. I can also go and look at my graphs; relatively little moves over time there. I had actually prepared a few queries; ah, I don't have them in here, that's a shame. Anyway, I can now start to turn my filters on, if I can still read anything, which is a good advantage. I can say I'm only interested in certain file system types, which I match here with a regular expression, or certain
devices I would like to exclude, so that I only get my five most important file systems and the corresponding values. All nice data to query; it fits. When I go to the graphs with just the file system data, not much happens, so I think it's time to move on to the next one. CPU: there you can see a little more movement. And I can also say: I divide node_filesystem_free by node_filesystem_size, then I can calculate a percentage here. So 20% free, 40% free, 60% free, 80% free, and I can aggregate that across various nodes. That works. Who of you is running containers? A few in the room. There is cAdvisor; maybe one or the other of you has used it. cAdvisor brings its own web interface to see how the containers on a certain host are currently doing. And cAdvisor also has a ready-made metrics URL for Prometheus, so I can pull these metrics down, just as I need them. So here is the cAdvisor metrics page; it looks almost the same as before. Everything you can see in its web interface, which container needs how much CPU, broken down per container, how much disk is free, how much RAM they need, is in there. Quite practical. So I can say: container_memory_usage_bytes, for all containers that have a name, that is, the real Docker containers, and display that here, and then I see the RAM usage. These expressions that I type here: what you see is the web console of Prometheus. There I can query data from the database, I can also check all my configuration, and I can see whether all the scraping really works: my targets, here the data is collected, everything is nicely green, it works, that's good. But if I want to make nice graphs, I take Grafana. For disk space, as we just saw, or percentage of disk space, I go in here and enter the query that I had accordingly.
Say: all my file systems that exist, that are not somehow Docker file systems. I query my metrics here, and if I want to, I can format the legend: any dimension I have there can be used for the legend formatting. Here I take the mount point of the corresponding file system. And now I have "percentage free disk space", converted accordingly, and then I'm quite happy to see what my free storage space is. CPU per container: there I say, okay, the metric I take is container_cpu_usage_seconds_total. That again is my counter, counting ever upward, and then I take the rate over a 30-second interval. Or, if I really just want to compare two measurement points, each with its previous one, I take irate at this point. Then it doesn't smooth over the interval but really goes from one point to the next point. The graph jumps around a little more, but in exchange I see exactly what just happened. And I can then aggregate the CPU per container, how much is really used across the various cores: that is again a sum, sum by instance and name. I prepared that: for each Docker container, per container and per node that something runs on, sum it up. I can do that either in the web console of Prometheus, where you see the corresponding floats behind it, or I just go into my graph, drag it in there, go back to my dashboard and see my RAM usage per container, in this case also per node. But I can also sum over all my nodes: how much CPU does my container use in total, no matter where and how often I started it; or sum up how much of all the CPUs that I have in my cluster is used. RAM usage, CPU usage; I was just jumping around. Exactly. That means: first I install a Go binary, Prometheus itself. Then, to monitor my containers, cAdvisor may already be there, and I can pull the metrics from it. I install another Go binary, the node exporter, on each of my nodes. And then I start to build my Grafana
dashboard. And if I say I want to send notifications, alerts, then I have another Go binary, the Alertmanager, plus its configuration: where the alerts should go, maybe an API key for Slack, and then they are sent out. A question; so the question was: where are the queries evaluated, where are the graphs generated? Grafana sends the query to Prometheus, and the whole evaluation is done there. In this case I asked for five-second resolution. From the browser it goes to Grafana, Grafana asks Prometheus, Prometheus calculates the data and sends it back. So the rendering of the data, the visualization, happens in the browser, but the data evaluation and the aggregation happen in Prometheus. I hope that covers it; I think I got about half of the question. Okay. Now I can look at my application. I am a Java developer, and if you look at a Java process from the outside, you can read pretty badly what it is doing right now. It needs CPU, it needs RAM, but you don't see much more. You have to look a little closer: whether the CPU is just being used for garbage collection, how the heap space is filling up with Java objects. To see something like that, there are at least two options. You can hook JMX up to Prometheus, but in my eyes a little nicer is the Java simple client: a Java library that I can add to my application, which then exposes the same kind of metrics as the node exporter does. So, in addition to the node exporter it tells me, for example, how much RAM my Java virtual machine uses, how much CPU, how many threads are running. I can then build a graph that tells me, for a running container, how much CPU goes into garbage collection, so essentially just hot air, and how much CPU really goes into the actual business logic. With that I can build my graphs, and
so I have my graph that you saw before. Here you saw that every few seconds there was a garbage collection, so the heap went down again. Now it has to run a little more frequently; so to speak, this very rare sawtooth fits exactly to the garbage collection dips down here that we had. Now the application is a little more busy, now it's doing something. So: I take one library, which I like to add to a Spring Boot environment. You add the library, tell it under which endpoint, which URL, the metrics should be reachable, and then I get all my metrics out of it. And the metrics don't look much different from what we just saw. Again I have a little help text and the type, whether it is a gauge or whether it is a counter. And I get a lot of information, much, much more than I ever wanted to know about my Java virtual machine. Then I can look in, wonderful. And this exists, in the end, for any other programming language as well. So now I can see how the thing behaves technically. I know a little bit about my application technically, and I have my infrastructure monitoring. But I usually don't get real business metrics out of that. And I'd say it shouldn't be much more demanding than adding a few lines of code in the right places to get business metrics out: metrics tied to a certain business function, how often it was called. For that there is a very well-established Java library, and in other programming languages there are other libraries that can collect such metrics, and I can build on the simple client and go further from there. That means I go to my Java classes and add an annotation to them in order to collect some data. I register the metrics collection once for my application. And then I go here and add another annotation: I want to collect timings here, I want to know how long it took.
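The talk shows the annotation-based variant; the same two metrics can also be registered directly with the Prometheus Java simple client API. A sketch, with made-up metric names, assuming the io.prometheus.client simpleclient dependency is on the classpath:

```java
import io.prometheus.client.Counter;
import io.prometheus.client.Summary;

public class CallExample {
    // How often the business function was called.
    static final Counter CALLS = Counter.build()
            .name("example_calls_total")
            .help("Total calls of the example function.")
            .register();

    // How long the calls took, with the quantiles mentioned in the talk.
    static final Summary DURATION = Summary.build()
            .name("example_call_duration_seconds")
            .help("Duration of the example function.")
            .quantile(0.75, 0.02)
            .quantile(0.98, 0.01)
            .register();

    public void businessFunction() {
        CALLS.inc();
        Summary.Timer timer = DURATION.startTimer();
        try {
            // ... actual business logic ...
        } finally {
            timer.observeDuration();  // records the elapsed seconds
        }
    }
}
```

Both metrics then show up automatically under the same metrics URL as the JVM statistics.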
So, quantile calculations: 95% of all requests were faster than this, 75% of all requests were slower than that. And I give it a name, which I'll find again later in my statistics. And if I look here, counted, call, example, then I have a lot of additional metrics in here. They are all delivered under the same URL, automatically packed in with the rest. And now I have a 98% quantile and a 75% quantile automatically included: how often was it called, how long did it take? And I can build my corresponding graphs for that. Graphs, right, exactly; they were counted there. I'll probably have to see if I can start something so that the graphs move a bit. Well, something is moving. Oh yes, now it starts to move a bit. Exactly. But still very even, what I see here: how many percent of my requests were processed in how many milliseconds. And again, I tried the corresponding metric in the GUI first and then copy-and-pasted it into Grafana. Exactly. Now I have collected metrics from my own application. Then there is still the circuit breaker pattern, Hystrix; I don't know who has already dealt with it. I have built my application and I am happy with it, but unfortunately I still have to ask other systems for more information. I want to measure every call that goes out of my own application: how often was it called, what were the percentiles, and I want to define timeouts. And yes, for those calls there are also metrics, and I get them in the same way as my own: the numbers, how many exceptions were recorded, whether the circuit breaker has just tripped or not. The only thing I have to do for that is register the HystrixPrometheusMetricsPublisher, so that it hooks in and delivers everything under the same URL. I have my external call at this point, and then my Hystrix graphs start to move as well: how many calls were there per second, how was the latency?
And then I get nice colors that I can evaluate, and maybe do a bit of service-level monitoring, so I'm informed first when I'm slower than I promised. Exactly. If you then go a bit further, leave the path of the standardized metrics that Hystrix already delivers, and say: I want to deliver my own metrics directly via the Prometheus API. For example, you can deliver a histogram directly, which I think is totally cool. Every time such a command is executed, I tell it: okay, I have an event here, it belongs to this histogram, and I call observe and tell it how many milliseconds it took. And this information is then passed on to Prometheus very granularly, almost completely unaggregated. Where do I have it now: such a histogram is delivered in buckets. How many calls were faster than 0.05 seconds, how many calls were faster than 0.075 seconds, and so on. And I get these buckets for every instance of my application that is running somewhere in the circle of things my Prometheus monitors. And then I can go and say: now I want to evaluate my histogram, and I don't want to calculate the 90% quantile for my one local application, but over all my nodes, over all the instances of my application, the 90% quantile of the response time. And I can do that with these raw data sitting in my time series database. I calculate the rate over the buckets and, from that, the 90% quantile. I hope you can read that from the back: for my metric the corresponding rate over 5 minutes, and the whole thing grouped over the bucket boundary, the command name and the command group. And then it tells me: this one interface, number 1, for this external call, has a value of 233 milliseconds. Over all my nodes, over all my instances of the application.
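The aggregated quantile calculation described here has roughly this shape; metric and label names are illustrative, and `le` is the label Prometheus uses for the bucket boundaries:

```promql
histogram_quantile(0.9,
  sum by (le, command_name, command_group) (
    rate(external_call_duration_seconds_bucket[5m])
  )
)
```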
That means, in this way I can look into my application: how the machine behaves, how individual business calls behave, how often they are called. I can look at how my surrounding systems are doing, how fast they are. When a user comes by and says, "it's so slow again today", I can check: is it us, or are the backend values slow? And then you can see whether something has changed, whether it really is slower today than yesterday or actually the same as always, and where you have to keep looking. And as I said, I can also record business metrics along the way: how often a user has logged in, how often an order was placed, whatever is important in your company, how many people have found the Easter egg. Exactly. That was, first of all, metrics and graphs. There were so few questions; I would like to invite more. There was a question: "You have now shown a lot of this Prometheus query language. I have only been dealing with Prometheus briefly. Are there possibilities that are maybe a bit more user-friendly, if you want to introduce it to a team and not everyone wants to fully learn and understand this query language? Because sometimes I have problems modelling quite simple things in this language, things that would be quite simple with a custom script. The hurdle for writing requests is sometimes really high." Yes; if you have seen the expression for how many percent of my CPU is actually used, it shows that it works, but it also shows that you might have to sit down with it for a moment. There is currently no point-and-click surface where you can do that well. If you look towards InfluxDB, for example: there Grafana lets you click a query together, better than it can for Prometheus.
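For reference, the expressions shown during the demo, written out; the metric names follow the node exporter and cAdvisor examples, and the windows and filters are the ones mentioned in the talk:

```promql
# Utilization per CPU core (0.1 = 10% used):
1 - rate(node_cpu{mode="idle"}[5m])

# Averaged per instance, i.e. on system level:
avg by (instance) (1 - rate(node_cpu{mode="idle"}[5m]))

# Percentage of free disk space per file system:
node_filesystem_free / node_filesystem_size * 100

# CPU per Docker container, smoothed over 30 seconds
# (irate instead of rate would compare only the last two points):
sum by (instance, name) (rate(container_cpu_usage_seconds_total{name=~".+"}[30s]))
```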
There is a new front end, Weave Cortex, which is currently being built: a way to take the information collected by such a local Prometheus instance and send it away to a pure evaluation instance, Weave Cortex. They have built a nice web surface on top that does a bit more, and maybe it will also be open-sourced and perhaps contributed to the Cloud Native Computing Foundation; we'll have to watch it. There you can just click and automatically get the metrics. What is also a bit missing: when I request the metrics directly from a target, I had these help texts, and also the type, whether it's a counter or a gauge. That is not carried along inside Prometheus, which is also a pity. The only thing that helps me in the web GUI is the autocompletion, to find out which metrics are available, and then I can look at them exploratively. But for putting together average, sum, rate, irate, there is the homepage, which has become much better in the last few months because a lot has been written for it; sometimes technical writers are simply better at describing things and including a few examples. That helps. But I think it is still a bit bleeding edge in some places when it comes to formulating the queries. I hope that answers the question; it was a bit long. "Yes. I am not so much the developer, so my question comes from the other side. How does it look with things other monitoring solutions cover, like SNMP queries on network devices? What else can you monitor, hardware data from temperature sensors or something?" There is, I have to say, a finished exporter for almost everything. There is an SNMP exporter, since you mentioned SNMP.
There is a finished MySQL exporter for the MySQL database. I don't have all the exporters in mind and I'm not online right now, but on the homepage there is a page listing all the finished exporters, as a rule also Go binaries; or sometimes the application is already instrumented and you can fetch the metrics via a URL. "Super, thank you." "Is there actually something ready-made for the alerting, a kind of machine learning on top, so that the limits are generated automatically in some way? I know that when you have a monitoring system you have a thousand metrics, but it is always difficult to look at them and set thresholds. It would be very fine if you could run it for a week or two in normal mode and afterwards say: alert automatically if the value is totally different from what it did in the last two weeks." I think that's partly on my next slide, so I'll continue a little. At the beginning you may not have alerting activated yet. A rule looks like this: if the disk is full, in this case over 70%, and that for longer than 5 minutes, then I want to send an appropriate email, a notification. And these expressions that I write in here, between the "if" and the "for", I can give to Prometheus beforehand and see how it would have behaved over the last two or three weeks: whether it would have fired once, whether it would have fired a hundred times. So I can test it before I take it into the alerting rules. And the Alertmanager of Prometheus: the rules just take care of generating alerts, and the Alertmanager then does the grouping and routing, which group gets notified and over which channel. I have more of the developer glasses on; I don't run the on-call setup and such things myself. Others have set up alerts per email, email groups; you can configure all of that, and who does what in which way probably differs from company to company. To set up Prometheus: it's really one node, it's not clustered,
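The disk rule described a moment ago, sketched in the rule format of the Prometheus 1.x generation the talk refers to, with its "if" and "for" parts; threshold and names are illustrative, and newer Prometheus versions write the same rule in YAML:

```
ALERT DiskAlmostFull
  IF   (1 - node_filesystem_free / node_filesystem_size) * 100 > 70
  FOR  5m
  ANNOTATIONS {
    summary = "File system more than 70% full for over 5 minutes",
  }
```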
the storage is completely local, and there's a lot of cache in the RAM. So instead of clustering going wrong somewhere, or the cluster driving a complicated storage layer crazy so that the whole monitoring fails because it was too complicated, you rather run two Prometheus servers and let them run next to each other. They then trigger the same Alertmanager, and the Alertmanager ensures that each alert is sent out only once. That way you get high availability out of multiple Prometheus servers, and they also scrape independently of each other, so that they are really HA. Except that, since they scrape independently, they can of course have slightly different views of the world; but they trigger the same Alertmanager, so that's HA. "That irritates me..." You're asking whether you have to run the exporters twice for high availability? The exporters, no; but the monitoring server, since it's not clustered, you set up a second time. The two work independently, but they both ask the same exporters. The exporter sits exactly on the thing that you want to monitor, on the node itself, and if the node goes away, then no one can reach it anymore anyway. If you have it as part of your application, then either the application answers or it doesn't. That means you have the same exporters in the HA
environment, but two Prometheus servers that go to the same exporters, on the same ports, on the same IP addresses. Maybe it just bothers me that it sounds inelegant to have to start several nodes just so that Prometheus itself gets a bit of extra safety. Just for the Prometheus server I start two nodes, yes; I want them to be independent: if one monitoring server is gone, its disk is gone, its CPU is gone, that is a complete outage of that server, and the one next to it keeps running. Share nothing, that's the approach, and that's why some things here are a bit different from what we may know from other monitoring systems.

Now I'll show you the alerting: I get emails or pager calls. So, I'd say for developers it's a great thing. Or maybe there's a question back there, the gentleman in the black T-shirt with the pink lettering that I can't read.

Question: can I define an alert on a rate of change? Say, if something grows by more than 5 percent within an hour, then alert, because that is something unusual. Exactly, you can; you put that into a formula over a certain time range. There are also a few statistical functions you can use to make a weighted prognosis: if it goes on like this, and you weight the newer values a bit higher than the old values, where would you land in a short time? You can do such calculations and then attach an alert to them, without having to define a new metric for it; it's just a formula that you write down and try out.

Question: what happens when a metric suddenly disappears for a certain dimension? A network interface is gone, or a mount point disappears; is there alerting for that?
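Both ideas from the rate-of-change question map onto built-in PromQL functions. A sketch (here `my_metric` is a made-up placeholder name; `predict_linear` and `holt_winters` are real PromQL functions, and the 5% / one-hour figures come from the question):

```yaml
# grew by more than 5% compared to one hour ago?
- alert: UnusualGrowth
  expr: (my_metric - my_metric offset 1h) / (my_metric offset 1h) > 0.05

# linear extrapolation over the last hour: predicted value 4 hours from now
#   predict_linear(my_metric[1h], 4 * 3600)
# smoothed trend that weights newer samples higher than older ones
#   holt_winters(my_metric[1h], 0.3, 0.1)
```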
I haven't set that up yet, unfortunately, I'd have to check. What you can see is when things go stale; staleness is detected, so "there was no sample for this metric" is something you can get at. I haven't done that setting myself; I'd have to try it out.

Question: you mentioned client libraries for programming languages like Java and Go; how would you compare the feature set to commercial tools like Datadog or New Relic, is that at eye level? So, New Relic only runs in the cloud, not on premise or anything like that, as far as I know, unless you pay even more. With Prometheus everything stays local, which is something very pleasant. What you don't have is tracing: New Relic traces and tracks metrics that you haven't instrumented yourself. This is the opposite: I try to do it with as little overhead as possible and without automatic instrumentation, with metrics that I have defined myself. I can do similar things with less overhead, but in any case I have to put in more work myself, whereas with New Relic you let a bit of magic run and it pulls out something that might work. There is no magic here; that's the big difference.

I still have ten minutes until the next talk, so a short summary at this point. It's really fancy tech, let me say. A colleague asked me what it costs to set up monitoring with it. I said: even if you have just a single server somewhere that you want to monitor, do it with Prometheus too; you should just check whether you can spare the RAM, otherwise it takes away too much. You get some nice graphics, you have alerting, and you only have to copy one binary and one or two configuration files. That's a nice thing. And when you think about teams that each run several applications, I think it's great if every team gets its own Prometheus: you instrument your applications yourself with the metrics you want to have, you build your own dashboards, maybe you also pull in the metrics from the neighbouring systems to put them into your own dashboard and combine them very dynamically. And it's configuration as code, at least the configuration is; you can try out in advance
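Coming back to the question about disappearing metrics: PromQL does have a built-in `absent()` function that returns a value when no series matches the selector, which could cover exactly that case (a sketch; the metric and label values are made-up examples, and `absent()` only works well if you name the dimension you care about explicitly):

```yaml
# alert when the metric for a specific mount point vanishes entirely
- alert: DataMountGone
  expr: absent(node_filesystem_avail_bytes{mountpoint="/data"})
  for: 5m
```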
whether the metrics come out the way you expect. And, as I said, just set it up; you can also do something to your own applications to export metrics. Magic like New Relic it is not, but as I said, when we build cloud-native applications, this is definitely a fit. With that I say thank you, I hope it was fun, and for questions I'll be around.