So this is going to be Fluent Bit: getting started with Fluent Bit, metrics, Prometheus, and M3. My name is Anurag, and I'm joined by Gibbs. You want to do a quick intro to yourself? Okay, yeah. So my name is Gibbs Cullen. I'm a product marketing manager at Chronosphere, and Chronosphere is one of the contributors to M3, which is one of the solutions in the workshop. So before we jump into the workshop, just a quick primer on Fluent Bit and metrics. I know Eduardo gave a little bit of this background earlier, but actually one of the fun facts is that metrics were the first plugins in Fluent Bit. So the first plugins that ever existed were CPU, memory, disk, thermal, all geared towards embedded Linux environments, the IoT environments, and scraping that metric data so that it would be usable by folks who needed it in their analytics environments. Well, we took those log-based metrics and asked what the main format folks are using today is, and it really was OpenMetrics and Prometheus, and we said let's extend and make sure we can collect metrics in that format. So last year we extended to OpenMetrics-based plugins. Those include Prometheus Exporter, Prometheus Remote Write, and newest in 1.9 is our Prometheus scraper. So making sure we integrate really well with that ecosystem. And throughout the workshop we're going to go ahead and walk through all of that. So with that. Yeah, and so just to give a little context around M3 for those who aren't familiar. M3 is an open source metrics monitoring solution. It has three main components that we're going to see today in the workshop. The first one is a custom-built time series database called M3DB. Then we have an optimized ingest and downsampling tier, which we call the M3 coordinator. And then finally there's an optimized query engine, which we call M3 Query. And just for some more background, M3 was built back at Uber and was open source from day one, having its first check-in on GitHub around April 2016.
And it was originally built at Uber to replace their existing metrics monitoring solutions and better accommodate the levels of scale that were needed. It's now used by many other companies such as Walmart, Databricks, Snap, and LinkedIn for their own internal metrics monitoring use cases. Today it's still maintained and owned by Uber and Chronosphere as well. And finally, it was designed to be a Prometheus remote storage compatible solution. So for the workshop, this is a high-level overview of the architecture that we have set up. You have your instance of Prometheus on one side, and an instance of Grafana on the other. And you can see that we have our three main tiers for M3 set up here. So you have your coordinator, which will then send metrics over to the M3DB tier. And then for querying, you have the M3 Query tier, which will fetch metrics from the DB tier. And I think that's it for this slide. So we have these links here that link to the Instruqt course that Anurag was mentioning earlier. And if you want to look at it in GitHub, we have instructions there as well. So I don't know if we want to share them out in the Slack or? Sure. Yeah. So I went ahead and pasted the Instruqt link in the Slack, so you can copy it real quick here if you need it, or get it from the Slack if you want to access it immediately. I'm going to go ahead and click into it, and we're going to just walk through it. So we'll walk through this entire lab, but it'll be available afterwards as well, so if you want to try it out then, happy to have you do so. So let's go ahead and click start. It's going to go ahead and load up. Awesome. There it is. And in this environment, we talked about all the different components. We have Fluent Bit running. We have an instance of Prometheus. We also have an instance of Grafana that's going to be used for visualization.
And then we also have M3 running there as well. So first, what we're going to do is look specifically at Fluent Bit and how we collect metrics with Fluent Bit. With Fluent Bit, when we collect metrics, we use this idea of plugins. You have an input and you have an output. And one of the simplest ways that you can input and output your metrics is by sending all of that to standard output. So what we're going to do is run Fluent Bit via the command line. I'm going to go ahead and copy and paste this. It's really small, so let me see if I can increase the size here. And this command is pretty simple. I'm running Fluent Bit. I have -i to indicate an input, and that input here is going to be node exporter metrics. So you can imagine this will be metrics similar to what you would expect from the Prometheus node exporter. And then we're outputting that straight to standard out. So if I go ahead and hit Enter there, Fluent Bit will start running. It will go scrape the metrics that it traditionally goes and gets, and it just prints them out here in the terminal. So again, a very simple way that we can see how we go get metrics, how we send them somewhere, and what the plugin structure of Fluent Bit is. So let's go on to the next challenge. In the next challenge, we're going to do something just a bit more complicated. So now what we're going to do is take Fluent Bit and, instead of just outputting to the terminal, because in the real world we can't just watch the terminal, we're going to output that data on a Prometheus endpoint. To do that, we're going to modify the Fluent Bit configuration. And in the lab, again, this is all there, so you can access it later too. But we have this configuration file that allows you to export those metrics on a specific port. So here, what we've done is we've said, hey, we're going to export our Prometheus metrics that we're capturing from node exporter.
And we're going to export them on port 2021. So I'm going to go ahead and copy that, and I'm going to modify the Fluent Bit configuration. So here is the documentation; here's my file explorer. Let me go ahead and load this. Just going to do a replace, and I'm going to save this. And then we're going to run Fluent Bit from the command line again. So we can copy this command and run it there. And this time, instead of running with a -i or -o, we're running with a -c to reference a configuration file. It's going to go ahead and run, and we can see this line right here: listening on TCP port 2021. Now, if I switch into my Prometheus instance and check the configuration, I have a scrape job here that is going and retrieving, oops, that is scraping and retrieving Fluent Bit metrics from that port 2021. So here's the target, and you can see the scrape interval is every 10 seconds. It's grabbing those metrics and ingesting them into Prometheus. So now I'm going to go visualize that by switching into Grafana. And we're going to go ahead and use the super secure password. I think DevSecCon is right next door, so they'll know how secure admin admin is. And we'll just skip all those warnings. And then within Grafana, as you would expect, folks have gone out in the community and built these great dashboards. Why try to change all that? Let's just plug into what the community has already set up. Let's use the node exporter dashboard that already exists. We'll go ahead and click on that. And you can already see it's loaded up with Fluent Bit. Last 24 hours is probably a bit extreme, so let's just switch to the last five minutes with a five-second refresh. So within this super small window of time, we've already gone ahead, taken those node exporter metrics that we were collecting earlier, exposed them over a port, Prometheus has scraped them, and now we're visualizing them just like we would with your traditional node exporter.
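For reference, the two steps from these first challenges can be sketched roughly as follows. The plugin and flag names come from the Fluent Bit 1.9 docs; the tag name and scrape interval are illustrative choices, not necessarily what the lab uses.

```shell
# Challenge 1: collect host metrics with the node_exporter_metrics input
# and print them straight to the terminal via the stdout output.
fluent-bit -i node_exporter_metrics -o stdout
```

```ini
# Challenge 2: expose the same metrics on a Prometheus endpoint instead.
# Run with: fluent-bit -c fluent-bit.conf
[INPUT]
    name            node_exporter_metrics
    tag             node_metrics
    scrape_interval 2

[OUTPUT]
    name  prometheus_exporter
    match node_metrics
    host  0.0.0.0
    port  2021
```

With the second configuration, Prometheus can then scrape the metrics from `<host>:2021/metrics`, which is what the lab's 10-second scrape job is pointed at.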
So you can see how easy it is to plug this into the workflows you already have, and basically get started with all the same dashboards and the same alerts; all of that is basically preserved. Now, the one thing I will say as well is that we capture a lot of what node exporter does, but it's not a 100% one-to-one match. There are still some things that are not yet available out of the box. But we'd love for the community to tell us, hey, these are the things that are really important to us, and here's what we'd like to see. So that is the Prometheus Exporter plugin. And now we've seen a lot of the community switch to remote write. So instead of Prometheus scraping everything, a lot of folks are switching to Prometheus Remote Write. How does that work versus just exposing stuff over a port? So if we go into that challenge and click in here, the nice thing about this sandbox environment, too, is that if you have a service that accepts Remote Write, you could try this exact sandbox with that live service. So if you're using things like Chronosphere or New Relic or Logz.io or Grafana Cloud, anything that has a Prometheus Remote Write endpoint, you could modify the configuration with your tokens and your URL, and actually see what that data would look like in your live service. In this case, we've also provisioned an M3 instance for you. And with M3, we showed that diagram earlier; there are a lot of different components. I won't pretend I know a lot about M3. But one thing that we need to make sure of is that M3 has the proper namespace set, and that namespace is ready and accepting remote write data. So we have to run this command, and this command basically calls M3 and says, hey, please load up this namespace called default, and with that namespace, please ready it and make it available for remote write with Prometheus. So unfortunately, I have to run this a couple of times until it says ready equals true. And there it is.
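A sketch of what those namespace calls might look like, assuming the M3 coordinator is reachable on its default port 7201; the hostname and retention time here are illustrative, and the endpoints are the ones documented in the M3 coordinator API reference:

```shell
# Ask the M3 coordinator to create a local namespace called "default".
curl -X POST http://localhost:7201/api/v1/database/create -d '{
  "type": "local",
  "namespaceName": "default",
  "retentionTime": "12h"
}'

# Mark the namespace as ready for writes; this is the call you may need to
# repeat a couple of times until it returns {"ready": true}.
curl -X POST http://localhost:7201/api/v1/services/m3db/namespace/ready -d '{
  "name": "default"
}'
```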
So ready is true there, and this M3 instance is now able to accept Prometheus Remote Write data. OK. So how do we send Prometheus Remote Write data? Is it vastly different than when we were exposing data over a port earlier? Actually, no. It's still very, very simple. So we're going to go ahead and modify the configuration again. But this time, with the configuration, you'll see another output here where it says Prometheus Remote Write. And with Prometheus Remote Write, we're now pointing to this host, which is M3. We have the port there, and then we have a specific URI. So you can imagine, if you're sending to a specific service like Chronosphere, or Grafana Cloud, or New Relic, and our documentation has instructions for most of these services, you'll turn TLS on and use HTTPS, you'll put a specific URI, you'll put a token. All of these things are configurable there for you. So I'm going to go ahead and copy this. And what I'll do is switch into the configuration, into fluent-bit.conf. You can see this is the file we just modified in the last challenge. I'll add the new output. So now we are both exposing metrics over 2021 and sending them via Remote Write to M3. I'll go ahead and click Save. And this time I will also run Fluent Bit with this new configuration. Let me clear the terminal here. And it's running. So it's now sending data over to M3. How do we check that that data is actually there? We'll switch over to Grafana again. And within Grafana, we'll go back to our dashboards. We're going to use the same node exporter dashboard, but this time, instead of using our default data source, we're going to switch to M3. And within M3, we'll switch to the last five minutes with a five-second refresh. So similarly here, you can see you're getting all of the data. Looks like some odd thing there with CPU, but otherwise it's the same stuff, Remote Write or exported.
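The additional output described here might look like the following sketch. The host name `m3`, the tag to match, and the coordinator's standard remote-write port and path are assumptions based on the lab's setup, not values taken verbatim from it:

```ini
# Added alongside the existing prometheus_exporter output in fluent-bit.conf,
# so the same metrics are both exposed on 2021 and remote-written to M3.
[OUTPUT]
    name  prometheus_remote_write
    match node_metrics
    host  m3
    port  7201
    uri   /api/v1/prom/remote/write
    # For a hosted service you would typically also enable TLS and pass a
    # token, e.g.:
    # tls    on
    # header Authorization Bearer <your token>
```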
Actually, I think we should be capturing this here too. So we have both versions of that same data. Whether you're scraping or remote writing, you can do all of it from a single instance of Fluent Bit, which is really nice. And the last challenge that we have within this workshop, very easy again to try, is scraping a custom endpoint. As I was putting this workshop together, I was trying to think what type of application I could spin up that would be really cool to showcase and has a dashboard. And eventually, it became too much of a hassle, so I said, let me just make this a little more meta, and I will monitor the application of Fluent Bit itself. So within Fluent Bit, one of the tabs here you'll see is the documentation, so you can search everything there in the lab as well. One of the ways that you can monitor is by adding an HTTP listener, and that HTTP listener will expose metrics in JSON, as well as Prometheus-formatted metrics. So what we're going to do is set up Fluent Bit to, one, expose those metrics, and then, two, scrape those metrics that it's exposed. So again, like I said, a very meta type of action. There's actually a plugin that has those Fluent Bit metrics itself; they're literally just called Fluent Bit metrics. But in this case, say you weren't trying to monitor Fluent Bit itself; you want to monitor another application, a third-party app. We've had folks do things like HashiCorp Vault, MySQL, NGINX, all sorts of applications that you want to collect Prometheus data from, export over a different port, or remote write to your service. You can do all of that from here. So again, we'll copy this configuration and modify it here. I'm just going to go ahead and wipe whatever we had before and copy and paste it. And let's go ahead and run it this time. And this time, I'm not using Grafana to visualize anything; we're just going to see those metrics pop out on standard out. And as it runs, port 2020 will be exposed.
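A sketch of what this final challenge's configuration could look like, assuming the lab uses the prometheus_scrape input to scrape Fluent Bit's own built-in monitoring endpoint; the tag and interval are illustrative:

```ini
[SERVICE]
    # Expose Fluent Bit's own metrics (JSON and Prometheus format) over HTTP
    # on the default monitoring port 2020.
    http_server on
    http_listen 0.0.0.0
    http_port   2020

[INPUT]
    # Scrape the Prometheus-formatted metrics Fluent Bit just exposed:
    # the meta part of the exercise.
    name            prometheus_scrape
    host            127.0.0.1
    port            2020
    metrics_path    /api/v1/metrics/prometheus
    scrape_interval 2
    tag             fb_metrics

[OUTPUT]
    name  stdout
    match fb_metrics
```

Pointing `host`, `port`, and `metrics_path` at any other Prometheus-instrumented application, such as the Vault or NGINX examples mentioned above, works the same way.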
It will go ahead and pop all these metrics out. So if we zoom in on this, you can see, oops, didn't mean to copy that. You can see, hey, we started nine seconds ago, and this is what we're capturing here, all the labels. And that also brings me to a good point: if you want to do some more customization or additions on top here, a lot of these plugins include options like add label. Oops, let me refresh this real quick. You can use add label on top of the exporter. So if you have custom labels, app labels that you need, all of that's available in your configuration as well. So yeah, I'm happy to have folks run through the workshop, and we can walk around. If you run into any issues, we can take questions for the next five minutes. But yeah, again, a very quick showcase of how to run it. Oh, looks like something happened with the HTTP client pulling from itself. But yeah, it's a really quick workshop, and it's going to be available after the conference as well. So if you don't want to run through it now, feel free to run through it after. And we're always looking for improvements, so we're going to try to build as many of these workshops as possible to make it easier to learn Fluent Bit and integrate it. And of course, it's accessible via third-party services too. So if you want to pump data to your service and see what it looks like, I'd say go for it. OK, go ahead and close that. So yeah, for folks who are trying it out, we'll walk around for maybe five minutes; if you have questions about the workshop or run into any issues, happy to help. And I'll also be on the Slack, so if you're here virtually as well, we can help that way too. Yeah, a little walk around.
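For reference, the add_label option mentioned above can be sketched like this; the label keys and values here are made-up examples, not anything from the lab:

```ini
[OUTPUT]
    name      prometheus_exporter
    match     *
    port      2021
    # Attach custom labels to every metric this output exposes.
    add_label app my-service
    add_label env staging
```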