Good morning, everyone. My name is Jaime Jera. I'm an engineer at CSDIC, and I'm also a trainer on the open-source project ProNCAD.io. Today I'm here to talk about different ways to take advantage of the useful information inside your logs and turn all of that into proper metrics for your observability systems. So let's get into it.

The first question: why metrics from logs? Well, nowadays it's very common to have an environment, a Kubernetes cluster for example, where our applications are already instrumented: we port-forward to one of them, hit /metrics, and recover all that useful data. But sometimes we deploy an application that is not instrumented and doesn't have any exporter either, and when we try the same thing, hitting /metrics, we can't find anything. And maybe the other reason is that, even with all the data already in our metrics, we can enrich it with information from our logs: we have the byte size, the response code, the source and destination IPs, and we can turn all of that into metrics by picking up that information, doing some operations on the log traces, and rendering the result as metrics.

In order to explain the different ways to do that, we have to split the talk into two contexts. The first one is scraping logs from containers, because it's the smaller one. For example, we have an Apache application that writes its logs into some volume, and we share that volume with a second application that reads those same logs and transforms the information. For this case, we are going to look at the grok_exporter.

First of all, we configure the source of information. It's very simple: we deploy our Apache application and specify the volume that we want to share. After that, we deploy our grok_exporter, and the only two things we have to supply here are the grok configuration and the same volume that Apache is using for its logs. So we have the logs from our application and, as you can see, they carry information we can take advantage of.

And this is the final result that we are looking for: our metric, with its description and with labels inside; as you can see, we have the status code, the method, and the path that we are requesting. Grok has a useful thing for this: patterns. Take a simple log line, with the timestamp, the log level, and the rest of the information. A pattern is nothing more than a regular expression, but regular expressions can be very hard to read when we configure something, so grok saves each one under a name, for two reasons. The first one is that they become easy to read. The other one is that we can combine several patterns to create a bigger one; for example, there are already several Apache patterns that compose into one for a whole log line. And the other thing we can do with these patterns is save what they match into labels; this way we can keep our information in those labels and use it afterwards to build our metric.

So the configuration with these patterns is very simple. As you can see, we have the patterns directory here, where all our regular expressions are saved under their names. We can obviously compose new patterns out of the ones already created and use them, as we can see here in the metrics section, where we build this metric and create it with these labels.
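To make that concrete, here is a minimal sketch of what such a grok_exporter configuration could look like, using the v2 config format. The log path, metric name, and port are assumptions for illustration, not taken from the slides:

```yaml
# Minimal grok_exporter config sketch; paths and names are illustrative.
global:
  config_version: 2
input:
  type: file
  path: /var/log/apache2/access.log   # the shared volume where Apache writes its logs (assumed path)
  readall: false                      # only follow new lines, don't re-read the whole file
grok:
  patterns_dir: ./patterns            # directory holding the named grok patterns (must include the Apache ones)
metrics:
  - type: counter
    name: apache_http_requests_total  # hypothetical metric name
    help: Count of Apache HTTP requests by method, path, and status.
    match: '%{COMBINEDAPACHELOG}'     # composed pattern covering a whole Apache access-log line
    labels:
      method: '{{.verb}}'             # fields captured by the pattern become Prometheus labels
      path: '{{.request}}'
      status: '{{.response}}'
server:
  port: 9144                          # grok_exporter's default port; Prometheus scrapes :9144/metrics
```

Prometheus can then scrape this endpoint like any other target.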
Where we capture this information is in the COMBINEDAPACHELOG pattern. And maybe it looks crazy, because at first we only see the IP, the host, or the user, but behind each label is the information we are saving. In this case, between the %{...} markers, we can see verb, request, and response, which already exist inside this pattern called COMBINEDAPACHELOG. So using these patterns, we can recover the information and create this metric from the log information.

So the grok_exporter is very simple to use and, as I said, very easy to configure and very customizable: the metrics are fully custom, we can create them from scratch, write our own regular expressions, and assign values to labels in our own patterns.

But obviously we don't only have the grok_exporter; there are other applications we can use. For example, Google has mtail. It's somewhat different: it also returns the information as JSON or in Prometheus format, but its programs are more syntactic; we can have conditionals and constants, and we can reuse them. I won't walk through an example with mtail; just take into account that there isn't only one choice for this context of work.

The other context, a very common environment, is scraping from our Kubernetes clusters. In this case we won't read from any volume, so we can't use the grok_exporter in the same way. Our environment is a multi-node Kubernetes cluster where we deploy several applications, because that's a common environment, and our applications write their logs to standard output and standard error. This way Kubernetes will pick up the information and save it into the node's logs, so if some application goes down and comes back up, its output is still saved, which is desirable, of course.

One kind of application we can use in this case is one that is deployed on every node as a DaemonSet, and for this example we are going to use Fluentd. Fluentd is very easy to configure and deploy in our clusters: if you don't want to make any specific configuration, there is a Helm chart, and with barely a few options we can deploy this application in our clusters.

As I did with the grok_exporter, I'll show the desired result first: in this case, a counter metric with several labels in order to get more information into our metric. The source of information is this basic NGINX log line: we have the method, the requested endpoint, the agent, the timestamp, and the IP address. But for this case we only want labels for the method, the path, and the status code.

As with the grok_exporter, Fluentd obviously uses regular expressions, but Fluentd has something called plugins, and it uses plugins for everything: for reading the source of information, for filtering that information, for making combinations, and for returning the information in Prometheus or other formats. So this is the kind of regular expression Fluentd has for an NGINX log trace. It can look very scary at the beginning, but we can select the specific values we want for this case: as I said, the method, the path, and the status code. So we only have to keep these named captures: method to select the POST method, path to recover the path, and code to recover the status value. This is the syntax for retrieving information from any source, and the sketch below assembles it into a full pipeline.
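Here is a rough sketch of the kind of Fluentd pipeline this builds toward, which the next part walks through plugin by plugin. The log path, position file, metric name, and port are assumptions, and the expression is a simplified NGINX access-log regex rather than the exact one from the slide:

```
# Fluentd pipeline sketch: tail the NGINX log, parse it with a regexp,
# and expose a counter via the community fluent-plugin-prometheus plugin.
<source>
  @type tail
  path /var/log/containers/nginx-*.log   # assumed location of the container logs on the node
  pos_file /var/log/fluentd-nginx.pos    # remembers how far we have read
  tag nginx                              # this tag routes the lines into the filter below
  <parse>
    @type regexp
    expression /^(?<remote>[^ ]*) \S+ \S+ \[(?<time>[^\]]*)\] "(?<method>\S+) (?<path>[^"]*) \S*" (?<code>[^ ]*) (?<size>[^ ]*)/
    time_format %d/%b/%Y:%H:%M:%S %z     # NGINX access-log timestamp format
  </parse>
</source>

<filter nginx>
  @type prometheus
  <metric>
    name nginx_http_requests_total       # hypothetical metric name
    type counter
    desc Total NGINX requests by method, path, and status code.
    <labels>
      method ${method}                   # the named captures from the regexp become labels
      path ${path}
      status_code ${code}
    </labels>
  </metric>
</filter>

<source>
  @type prometheus                       # exposes /metrics for Prometheus to scrape
  port 24231
</source>
```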
As I said before, Fluentd uses this useful thing called plugins. In this case, for recovering the source, we are going to use the tail plugin and the regexp parser: the first one reads from the Kubernetes node, and the other one applies our regular expression. We only have to place the regular expression that we saw before into this context, and we have to tag this source of information with whatever tag we want, in this case nginx. This is very important, because it's by that tag that we filter the information afterwards: it indicates that we are going to apply this filter to that log trace. And then, in order to turn the information into a Prometheus-format metric, we use the plugin already created by the community for Prometheus. With it, we create our custom metric from the information recovered from the log line: obviously, we have the name of the metric, we say that it's a counter-type metric, and we give the description and the labels. As we can see here, between these dollar-braces we have method, path, and status code, the same captures we had on the previous slide. And with this, we have the metric we wanted. The workflow is quite similar to the grok_exporter's, because we are still using regular expressions, but we also have tons of other plugins, because Fluentd has a great community and it's growing day by day.

To summarize Fluentd a little bit: it's cloud native and very simple; as I said, we have a Helm chart with barely anything to configure in order to deploy it in our clusters. We deploy it as a DaemonSet, so we have log aggregation on every node, very centralized. And talking about plugins, that is the most distinctive and useful thing Fluentd has: a great community and tons of plugins that we can configure for several sources, several output formats, and many other filters.

Obviously, for Kubernetes clusters, we don't only have Fluentd. The same people who built Fluentd are now building Fluent Bit; well, it's already built, and it's growing day by day. It doesn't have the same amount of plugins as Fluentd yet, but the number is increasing day by day. We can select Fluent Bit for contexts with low resources, because that's its difference from Fluentd: it doesn't need the same amount of memory. So we have to choose between these two applications depending on the case.

And there are other applications for aggregating log information. Here we have Promtail. I list Promtail, but it doesn't stand alone, because it needs Loki in order to recover the information. Promtail is deployed as an agent that recovers all the information inside our nodes, attaches labels to it, and sends it to the Loki main server. It's very powerful; maybe the difference from other aggregators is that Loki indexes the information by labels, aggregates it by those labels, and lets us query it afterwards in Grafana.
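As a rough idea of what that agent side looks like, here is a minimal Promtail configuration sketch; the Loki URL and file paths are assumptions for illustration:

```yaml
# Minimal Promtail sketch: tail container logs on the node, attach labels,
# and push everything to the Loki server, which indexes it by those labels.
server:
  http_listen_port: 9080
positions:
  filename: /tmp/positions.yaml              # where Promtail remembers how far it has read
clients:
  - url: http://loki:3100/loki/api/v1/push   # assumed address of the Loki main server
scrape_configs:
  - job_name: containers
    static_configs:
      - targets: [localhost]
        labels:
          job: containerlogs                 # labels attached before shipping to Loki
          __path__: /var/log/containers/*.log
```

In Grafana you can then query by those labels with LogQL.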
Finally, to keep the thread of this talk together: I've talked about the grok_exporter and mtail for the container context, and about Fluentd, Fluent Bit, and Promtail for Kubernetes clusters. What I want you to take away is that we have different choices for our contexts and our environments. We don't have to stick with just one of them, because sometimes our context or environment changes, and we have to know that there are different applications to choose from. So, that's all. Thank you.