Hello, everyone. I am David from Sysdig, and I will talk about service identification with Prometheus.

First of all, let's talk a little bit about service discovery. Service discovery is just great: it allows us to scrape almost anything with just a couple of annotations and a single job. Here are some samples you are surely familiar with. In this pod, for example, we added the scrape annotation set to true and the port number. Here we also added the path. All of them will be scraped with just a single job, as you have surely seen before.

This is great, but what if we want to do something special with some of the jobs or services? For example, adding some extra labels for enrichment, dropping some labels from a metric that has high cardinality in order to reduce it, directly dropping some metrics that we don't use, or adding some relabeling that we need for certain metrics. We just cannot do that if one job scrapes everything. We need to identify the pods, but we still want to use the default Prometheus annotations. So what can we do?

Well, here is the first idea: we can use a new annotation. Here we use, for example, prometheus.io/service, and in this pod the new annotation equals service-a. To make a new job that identifies and selects the pods with the new annotation, we just add a rule with the keep action, using the new annotation as the source label and keeping the pods where it equals service-a. This job will select and scrape only these pods. After this, we can add the label dropping, the metric relabeling, whatever we need. Also, remember that you have to remove these pods from the generic job, because otherwise they will be scraped more than once. How do we do that? Well, here we added a new rule to the generic job that drops all the pods where that annotation is not empty.

This is okay if we can use the annotations, but this is not always true.
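The idea above can be sketched roughly like this in a Prometheus scrape configuration. The job names and the service-a annotation value are just the examples from the talk, not a standard:

```yaml
# Pod annotated for the generic job, plus the new service annotation:
#   prometheus.io/scrape: "true"
#   prometheus.io/port: "8080"
#   prometheus.io/service: "service-a"

scrape_configs:
  # Dedicated job: keep only the pods annotated prometheus.io/service: service-a
  - job_name: service-a
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_service]
        action: keep
        regex: service-a
      # ...after this, add the label drops, metric relabeling, etc.

  # Generic job: drop any pod that carries the new annotation,
  # so it is not scraped a second time by this job
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_service]
        action: drop
        regex: .+
```

The drop rule in the generic job mirrors the keep rule in the dedicated one: any non-empty value of the annotation excludes the pod from the generic job.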
Sometimes we cannot just annotate the pods, maybe because they are in restricted environments that we don't have access to. Maybe we don't even have access to the deployment of the pods, because we are not its owners. Or maybe we do have access, but the pods cannot just be restarted because they are critical services. So if we cannot annotate the pods, how can we identify that a pod belongs to an application or a service?

Well, here is the idea: we can use the information that service discovery offers and write some heuristic rules. For example, we can use the name of a container inside the pod, or the pod name. We can say: if there is a pod that has a container named keda-operator-metrics-apiserver, then this is the KEDA metrics API server, and I can identify it and make a specific job for it. Or maybe I know that the service exposes port 9113 and it is annotated, so I can select all the pods annotated with prometheus.io/port equal to 9113 and make a special job for them.

This has the same limitation as before: remember that you have to add this same rule, with the drop action, to the generic job, because otherwise, as we saw, the pods will be scraped more than once.

As conclusions, we can say that service identification rules are easy to implement, and they are powerful because they allow us to do special things with these pods. But they have some limitations. For example, as we saw, there is no standard annotation for this right now. I used prometheus.io/service, but maybe you could use prometheus.io/application, or service-name, or application-name, or integration-type, whatever you can come up with, so it depends on the implementation. Also, as you saw, one of the problems we are having is that, since we are still using the Prometheus annotations, the pod will be selected more than once.
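As a sketch, the heuristic selection rules could look like this. The container name keda-operator-metrics-apiserver and port 9113 are the examples from the talk; the job names here are hypothetical:

```yaml
scrape_configs:
  # Heuristic job: select pods by the name of a container they run
  - job_name: keda-metrics-apiserver
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_container_name]
        action: keep
        regex: keda-operator-metrics-apiserver

  # Heuristic job: select pods by the port they advertise in the annotation
  - job_name: port-9113-service
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port]
        action: keep
        regex: "9113"
```

The same source labels, with `action: drop` instead of `keep`, would then be added to the generic job to avoid double scraping.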
And it will be scraped more than once, making the metrics duplicated. So how can we prevent that? Here we used another rule to drop those pods, but maybe there could be a new option in Prometheus to prevent the same pod endpoint from being scraped multiple times, even if it is present in different jobs. Also, we are using some of the information that the Kubernetes service discovery offers, but maybe more information could be made available to write better heuristic rules, like for example the image name of the container.

I hope you enjoyed this talk and that you find it useful. Enjoy PromCon. See you later.