Welcome to the CNCF's online program. Today's on-demand webinar is how to start with Prometheus using PromCat.io. I am Jesús Amitier, Senior Software Engineer at Sysdig. Thank you for joining me today. In this video, you'll learn how to start monitoring your applications in Kubernetes with Prometheus and PromCat.io. PromCat is an open-source online catalog that offers a curated list of Prometheus resources, dashboards, and alerts, so you can start monitoring your applications in Kubernetes in no time. Prometheus has become the de facto standard for monitoring cloud-native applications. In fact, Kubernetes exposes its own metrics in Prometheus format, so it's a no-brainer when choosing the right tool to monitor your cloud-native applications. The thing is, where does Prometheus get its metrics from? I know everyone here is aware of what Prometheus is, but let's do a quick catch-up. Prometheus doesn't get its metrics like other, more traditional monitoring solutions. Instead of the applications sending metrics to the Prometheus server, the Prometheus server scrapes the metrics directly from the applications, from all the containers running them. So every application needs to expose its metrics so Prometheus can go to them and ingest them. That's the main difference between Prometheus and other, more traditional solutions. When starting your journey with Prometheus, you might think the first thing you need is an exporter, but what if you don't? Prometheus has its own SDKs, so every developer can instrument their applications to expose metrics in the Prometheus format. So maybe the application you're trying to monitor is already exposing these metrics and you don't need an exporter. For those applications that aren't instrumented yet, you can use an exporter. What's an exporter? An exporter is an application that runs alongside the main application that you want to monitor.
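What an instrumented application actually serves on its metrics endpoint is plain text, one sample per line. As a rough illustration of that exposition format (the real client SDKs render this for you; the metric name and labels below are made up), here is a tiny sketch:

```python
# Sketch of the Prometheus text exposition format that an instrumented
# app serves on /metrics. The official client SDKs handle this for you;
# the metric name and labels below are hypothetical.
def format_metric(name: str, labels: dict, value: float) -> str:
    """Render one exposition-format sample line."""
    if not labels:
        return f"{name} {value}"
    # Labels are rendered as key="value" pairs inside curly braces.
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"

print(format_metric("http_requests_total", {"method": "GET", "code": "200"}, 1027))
# → http_requests_total{code="200",method="GET"} 1027
```

Prometheus scrapes an endpoint that serves many of these lines and ingests each sample with its labels.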
It gets the information from the application and then exposes that information as Prometheus metrics. How should we choose the right exporter? Let's do a quick search, for example. Let's say we need to monitor MongoDB. MongoDB has several exporters. All of them are open source and available for free on the Internet. But which is the right exporter? To decide which one is the right option for you, you would have to download all the exporters, try them out, and so on. Once you find the right exporter, you need to see how you're going to run it. Do you need to run a Go program, or maybe you just need to run a Docker image? And if it's Docker, do you trust that image? And most importantly, is that image going to stay on that registry forever? What if someone removes it from the registry? Then you might find an image pull error in your Kubernetes cluster, and that's a mess. That's why we created PromCat. PromCat is an online resource that you can use to find the right configuration to monitor your applications with Prometheus. All the integrations that we have here are curated, they are well tested, and they are maintained by Sysdig. So you can trust that once you pick an exporter from promcat.io, it won't be removed from our registry. Let's search for MongoDB, for example. You can see what PromCat looks like. You can see the description, the Docker image you need to use to run this exporter, and the setup guide. You can also download some cool dashboards to import into Grafana and start monitoring, and also some alerts; this is a curated list of alerts that can help you get started. Okay, so let's see how we can use PromCat in a real-case scenario. Okay, let me show you. Let me make this a little bit bigger. Okay, so in this Kubernetes cluster, I have an NGINX namespace with an example application that is generating traffic. So NGINX is constantly serving traffic, okay?
So there are three NGINX pods, and inside each of those pods there is only one container, the NGINX container. So what if I want to monitor this application? This is NGINX, so I can go back to promcat.io and search for NGINX. Then I go to the setup guide, and okay, there are a few prerequisites. I need to enable the stub_status module. This is a ConfigMap example in case you need to tweak it in your application. And then there's the installation of the exporter itself. You can install the exporter right from our Helm charts. Okay, so let's try this. First, we are going to get the patch file from our Helm chart, and then we are going to patch our current NGINX installation so it includes the new exporter. So let's go back to the terminal, close this, and download the patch. Okay, we just downloaded it. Let's take a look at its content. It's basically adding a container here with some resource limits and requests; you can tweak these if you find any issue or if you need something different. It's using this container port and these annotations. This is really important. We'll see this later, but this is how Prometheus is going to discover these metrics. Okay, so let's run the patch command. Okay, no error here. So let's go back to the NGINX namespace, and there are the three pods. But now we can see that in each one of the pods there are two containers: NGINX itself and the NGINX exporter. Now that we have the NGINX exporter working as a sidecar of the NGINX server, we can do a quick check to make sure the exporter is working. It's really easy. We can do a port forward to port 9113, which is the port the metrics are exposed on. Then we can go to the browser again, and here you have the metrics. You can do this and then you are sure that the exporter is working.
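The patch described above looks roughly like this: a sidecar container plus the two annotations that service discovery will use later. This is a hedged sketch, not the exact file from promcat.io; the image tag, resource values, and the stub_status scrape URI are illustrative and depend on your NGINX setup.

```yaml
# Sketch of a deployment patch adding the NGINX exporter as a sidecar.
# Values below are illustrative; adjust the scrape URI to wherever your
# stub_status endpoint is exposed.
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"   # opt this pod in to scraping
        prometheus.io/port: "9113"     # port the exporter listens on
    spec:
      containers:
        - name: nginx-exporter
          image: nginx/nginx-prometheus-exporter:0.11.0
          args:
            - -nginx.scrape-uri=http://localhost:8080/nginx_status
          ports:
            - containerPort: 9113
              name: metrics
          resources:
            requests:
              cpu: 10m
              memory: 16Mi
            limits:
              cpu: 100m
              memory: 64Mi
```

After applying it with something like `kubectl patch deployment <name> -n nginx --patch-file patch.yaml`, the quick check from the video is `kubectl port-forward <pod> 9113 -n nginx` followed by opening `localhost:9113/metrics` in the browser (deployment and pod names here are placeholders for your own).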
This is a nice way of working, because sometimes we configure everything, and at the end of the process it doesn't work. Before starting to tweak the Prometheus jobs and other configurations, it's a nice idea to be sure that the exporter itself is working. So now that NGINX has its sidecar running the exporter, exposing all the NGINX metrics, how can Prometheus get these metrics from Kubernetes? Well, this is really easy. We can go to the Prometheus documentation and take a look at the Kubernetes service discovery config. The service discovery config allows Prometheus to get information from the Kubernetes API. How? By using these annotations. So basically, we need to create a job in our Prometheus configuration, a job that uses these labels to tell Prometheus where these metrics are. How can we do this? This is an example job. This file is the Helm values file that I used for installing this demo environment. I'm using the kube-prometheus-stack with its Helm chart, but you can install Prometheus the way you want. This file will be available in the GitHub repository with all the files regarding this demo, so don't worry, you have this in our GitHub repository. So basically, if you remember, we used two annotations, the scrape one and the port one. The scrape annotation is telling Prometheus, yes, scrape this pod, and the port annotation is telling Prometheus which port to get the metrics from. The rest is the default, so the path is going to be /metrics, but you can change that too. So basically, in this job, we are using some relabel configurations. What we did is use the prometheus.io/scrape annotation to select which pods to scrape, and then set the port from the prometheus.io/port annotation with these regex replacements. We could also change the metrics path by using this annotation over here, prometheus.io/path.
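The annotation-driven job described above follows the standard pod-discovery example from the Prometheus documentation. A minimal sketch (the job name is illustrative; the full version in the demo's values file may carry more relabel rules):

```yaml
# Scrape job that discovers pods through the Kubernetes API and honors
# the prometheus.io/* annotations set on each pod.
- job_name: kubernetes-pods
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Only keep pods that opted in with prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: "true"
    # Optionally override the metrics path via prometheus.io/path
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # Rewrite the scrape address to use the port from prometheus.io/port
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
```

The `keep` rule drops every pod without the scrape annotation, and the last rule is the "regex replacement" mentioned in the video: it swaps the discovered address's port for the one declared in the annotation.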
That is, if your exporter doesn't expose the metrics on the default /metrics path. You could also change the scheme: for example, if you wanted it to be HTTPS instead of HTTP, you could use the prometheus.io/scheme annotation over here. And basically, that's everything you need to know. So let's try it, right? Let's see if this works. Okay, spoiler alert, it does. We have a lot of metrics here. Prometheus is auto-completing all the metrics we have, so we can do this and this. So that's it. We are monitoring our NGINX application using Prometheus by just copy-pasting a couple of snippets from promcat.io. Let's see what else PromCat has to offer. You can go here to dashboards and download the example dashboard that is available here, so you have a nice starting point for your NGINX dashboard. Maybe you already have an NGINX dashboard, but in case you don't, you can start with this. So here it is. Here you have the number of NGINX instances, which is 3, the active, reading, and writing connections, the waiting connections, and the unhandled connections. This is okay; there are no unhandled connections. So this is it. This is how you can start monitoring your applications in Kubernetes using Prometheus and promcat.io. I hope you liked this video, and see you in the next one. Bye.
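The queries behind panels like these can be written by hand in the Prometheus expression browser too. A few hedged examples (metric names are the ones the NGINX exporter typically publishes; check your own /metrics output to confirm):

```promql
# Number of NGINX instances currently up
count(nginx_up == 1)

# Active client connections per pod
nginx_connections_active

# Unhandled connections: accepted minus handled should stay at zero
rate(nginx_connections_accepted[5m]) - rate(nginx_connections_handled[5m])
```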