OK, everyone, welcome back. So this next session is actually a workshop. We did FluentCon in Europe earlier this year, and one of the big pieces of feedback we got was that folks want more hands-on sessions — to be able to get their hands dirty, try things out, experiment with stuff. So we thought, let's go ahead and build a session that does a few things. One, exemplifies some of the new behavior we've added within Fluent Bit. Two, gets you familiar with other cloud-native technologies. And three, gives you something you can try out live. So if you have your laptop, awesome. We'd recommend having Docker Desktop and Docker Compose, and we'll be walking around the room. And of course, if you want to try this later, or if you're joining us virtually, we'll have a GitHub repo that you can follow along with. We have about 10 to 15 minutes of slides. The overall demo, getting the workshop running, should only take about five minutes. Then we'll demonstrate some of the capabilities and showcase how you can start to tune this for your own site. We'll be watching the Slack channel, and if you have questions, we'd love to make this more of a discussion workshop versus us just presenting. So with that, let's get started with some quick intro slides. Again, I'm Anurag, and I'm joined by Gibbs from Chronosphere. So let's go ahead and get started. Yeah, so our agenda: we're going to walk through Fluent Bit, Prometheus, and M3. Oh, am I starting? I'm not presenting here. There we go. Okay, so the agenda. We're going to walk through Fluent Bit, Prometheus, and M3, give you a little bit of what the hands-on workshop looks like and what you might need on your laptop, and then within the workshop we'll set everything up and walk through it. And of course, throughout the session, if you have questions, we'll be here, and we'll be looking at the Slack channels as well. 
Okay, so I wanted to talk a little bit about Fluent Bit and metrics. I know in the previous session it was mentioned that metrics are brand new and we've been doing some great things with them. But actually, with Fluent Bit, one of its main use cases when it was built in 2015 was embedded Linux: things like wind turbines, other Linux machines, things that are embedded in your house. And the first plug-ins were actually all metrics: CPU, memory, disk, thermal. These are all what we consider log-based metrics. They essentially take JSON and send that over into your log pipelines, to an output like Elasticsearch, et cetera. The advantage with this is it's compatible with old plugins. It's also compatible with some of the more advanced features and functionality that Fluent Bit has, like SQL stream processing. But the disadvantage is that, as we see the evolution of the Prometheus ecosystem, it's not compatible. And it's also a little more expensive byte-wise, right? JSON is pretty heavy and expensive compared to the Prometheus format. So with Fluent Bit and Prometheus, the integrations didn't just start with the latest release — we do have a lot of Prometheus-formatted metrics within Fluent Bit. And we'll do a quick intro to Prometheus for those who might not be as familiar. Fluent Bit's internal metrics have been exposable via Prometheus since 2019 and before. So you can see how many input records, filtered records, output records; you can get storage information; all sorts of various internals, all exposed in a Prometheus format today. And with 1.8, we finally introduced Prometheus metrics as a native format. We have two main ways to do that. One is through Prometheus node exporter: we take the great work the Prometheus node exporter team has done and have ported some of that to Fluent Bit, so we can collect things like CPU, network, disk, and file system stats. And then we have metric outputs. 
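Before getting to those outputs, here's a quick sketch of the internal-metrics exposure just mentioned: enabling Fluent Bit's built-in HTTP server is enough to scrape the pipeline's own counters in Prometheus format. The listen address and port below are Fluent Bit's defaults; adjust for your environment.

```ini
[SERVICE]
    # Enable Fluent Bit's built-in monitoring HTTP server
    http_server  on
    http_listen  0.0.0.0
    http_port    2020
```

With that in place, `http://localhost:2020/api/v1/metrics/prometheus` serves the input/filter/output record counters in Prometheus text format, and `http://localhost:2020/api/v1/metrics` serves the same data as JSON.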
So things like remote write, which is more of a push method for sending that data over to Prometheus or Prometheus-compatible backends, and Prometheus exporter, if you're doing the pull model. In the workshop, we'll actually be running through both of those. So however you need to get your Prometheus data, we can support that. And then there are the formats for the existing plugins. For things like the forward plugin, where you might be sending to Fluentd, we've made sure that that's compatible. We've ensured things like Influx work if you're sending to an InfluxDB backend with line protocol. The advantage here is it's very lightweight and convertible into multiple formats. The disadvantage — it looks like that slide got cut off, but essentially — is that it doesn't work with some of the more advanced features like SQL stream processing today. But that can always change. So with that, let me hand it over to Gibbs, who can talk a little bit more about the metrics ecosystem. Okay, yeah, so I know most of you are probably very familiar with Prometheus, but for those who are not, we're going to quickly give an overview. So Prometheus: it's a CNCF project. As you know, it uses a label-based metric exposition format and a query language called PromQL. Its metric ingestion is scrape-based, so pull-based. It's also very efficient at metric storage — very good at storing metrics inside the instances themselves. And it has the ability, out of the box, to do visualization and graphing of metrics. Most people choose to use another solution like Grafana, but it does have that capability as well. You can do aggregation with Prometheus using recording rules, and alert on your Prometheus metrics using the Prometheus Alertmanager. And some of the advantages of Prometheus, which have made it so popular: first is that it's very easy to get started. 
You have a single binary for ingestion, storage, and query, and then another single binary for alerting. It's also the CNCF-recommended monitoring tool, which has given it a very large community, and that's made the exposition format it uses widely accepted, as well as the query language, PromQL. Some other reasons it has become so popular are its dynamic endpoint discovery, which works on many different platforms including Kubernetes, and its very large ecosystem of exporters, which are essentially existing software integrations. Most major software projects, especially open source ones, have these existing integrations, and there are huge lists of them in the Prometheus docs. So as part of the workshop, we're going to use M3, which is an open source metrics engine. It was created back at Uber in 2016 to help with their internal metrics monitoring use cases, and now it's used by many other companies, including Chronosphere, which uses M3 as its backend. Just to quickly run through the architecture of what M3 looks like at a high level: you have three main components. We have the ingest and downsampling tier, which is the M3 coordinator. Then we have a distributed custom time series database called M3DB, and on the query side we have an optimized query engine called M3 Query. We'll be spinning up, as part of the stack, an instance of each of these, with a single node of M3DB. And then, typically, how metrics are sent into and out of M3: on the write side, we can accept Prometheus metrics through remote write, and we can also accept Carbon or Graphite metrics, for example from StatsD. On the read side, you can send queries using Graphite or PromQL through Grafana, and you can also alert on your metrics via the alert engine through the query tier. 
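To make that write path concrete: a standard Prometheus server can be pointed at the M3 coordinator with a plain `remote_write` stanza. This is a sketch using the coordinator's default port and endpoint; the hostname is whatever your deployment uses.

```yaml
# prometheus.yml (fragment): push samples to the M3 coordinator
remote_write:
  - url: "http://localhost:7201/api/v1/prom/remote/write"

# Optionally, read historical data back through the coordinator too
remote_read:
  - url: "http://localhost:7201/api/v1/prom/remote/read"
```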
And so yeah, I think I just ran through a lot of this, but some of the key benefits of M3 come from design choices made to operate at very large scale — it was built back at Uber, and meeting that level of scale was a requirement. It has the ability to accommodate billions of time series. It also has a lot of high-reliability design choices, such as replication of each of the metrics, which leads to fault tolerance: if anything goes down, you'll have backup copies of your data. In terms of performance, we use an inverted metric index inside the DB tier, along with a very efficient compression algorithm, which makes it so your metrics can be retrieved very quickly and makes storage very efficient — and that feeds into cost efficiency as well. So those are some of the key features of M3. Okay, and that's all for our slides. I don't know if we want to send this link out in Slack if that's easier, but if everyone wants to go to the Fluent repo on GitHub, there should be an M3 workshop FluentCon folder inside there. But I can send it in the Fluent Slack; that might be easier. Awesome, and I guess just a show of hands for the folks in person: how many folks are going to run this locally right now? One, two, three. Okay, awesome. I'll come around and make sure everything goes well. And for those three raised hands, does everyone have Docker Desktop and Docker Compose? Awesome, awesome. Okay. Yeah, the README in here should have everything you need. It goes through a stack overview and the prerequisites to get it up and running, and then there are step-by-step instructions. 
But again, like Anurag said, it shouldn't take too long, and I think we'll have plenty of time to play around with everything once it's up and running. And just to walk through the architecture a little bit: we're going to be deploying Prometheus, Fluent Bit, the M3 coordinator, and Grafana as well. The Docker Compose will go ahead and spin all of these things up and make sure everything gets connected. Then on the Fluent Bit side, we can walk through the configuration, see what it's doing, and see how easy it is to configure these Prometheus node exporter metrics. And then hopefully on the Grafana side, when we go to visualize it, everything just shows up as you would normally expect with something like node exporter. So with that, let me go ahead and do two things here. I'm going to walk through the configuration, then we'll do docker compose up, launch everything, and we'll walk around and make sure everyone is in good shape. We'll check on the Slack channel for those joining virtually, and then we'll reconvene in about 10 minutes to see what we ran into running everything. So first, let's walk through the configuration for Fluent Bit. For those who might not be familiar, this is what that configuration looks like. There's been a lot of work to make it very simple, easy to understand, highly readable. You'll see these bracketed sections that denote what each piece is doing: is it an input, is it an output, is it reading data, is it sending data? In this case, we've added this input, which is node exporter metrics, and we're going to give it a tag. All the data within Fluent Bit, even the metrics, is tagged with metadata, which determines where it's going to route to and how it's going to be sent out. And then we have two outputs. 
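A minimal sketch of the kind of pipeline being described — the tag name, label, service hostname, and ports here are illustrative assumptions, not copied from the workshop repo:

```ini
# Collect host metrics the same way node exporter would
[INPUT]
    name            node_exporter_metrics
    tag             node_metrics
    scrape_interval 2

# Pull model: expose the metrics on a port for Prometheus to scrape
[OUTPUT]
    name      prometheus_exporter
    match     node_metrics
    host      0.0.0.0
    port      2021
    add_label app fluentbit

# Push model: remote-write the same metrics to an M3 coordinator
[OUTPUT]
    name  prometheus_remote_write
    match node_metrics
    host  m3coordinator
    port  7201
    uri   /api/v1/prom/remote/write
```

Note how each output's `match` refers back to the input's tag — that's the routing metadata mentioned above.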
The first output is the Prometheus exporter. So if you're scraping, like from node exporter, you might want to expose these metrics on a port with a listening address. You might also want to add labels — labels can really help with queries, and we'll show some PromQL queries later. And then we also support remote write. So if you want to send this to a service — say, the M3 service that's running locally, or even other hosted services, if you're using things like hosted Prometheus or managed Prometheus from a vendor — typically they'll accept Prometheus remote write. Okay, so what we'll do is I'll switch into the root of this directory after doing a git clone. And very simply, I'm just going to do docker compose up, and it's going to start the M3 provisioner, coordinator, query, Fluent Bit, and Grafana. So this should be a very simple step, but ideally what is happening is: we are running Fluent Bit, we are starting to capture those metrics, we are plugging into the scraping, and we're also sending that data to M3, where it'll be searchable and accessible via Grafana. So we've gone ahead and run this command; we're going to take a quick 10-minute walk around, make sure everything is running well, and we'll reconvene — and we'll check the Slack channel as well. Thank you. Yeah, so we were walking around, and it looked like most folks got situated. Again, this should be really quick; we've made the workshop so it's kind of turnkey, and then you can start doing some exploration after it's hopefully working. So here we had run the docker compose up about 10 or so minutes ago. One quick correction, again, for those joining virtually: the localhost port in the README was 3030, but it's actually 3000. So let's go ahead and move over to that, and log in at localhost:3000. For most folks, it should give you a username login; you can use admin/admin, and then you can reset the password. 
As part of this, we wanted to use the exact same node exporter dashboard that you typically find when you use Prometheus. So when you log into Grafana, you can go to the dashboards, take a look, and you'll find this Node Exporter Full dashboard. Within it, you'll automatically see that data loaded up. So what has happened is: Fluent Bit, again, was configured with the node exporter input, and on the output side we had two ways to capture that data — you can scrape it via the Prometheus exporter, or use the remote write to M3. We can look at the last five minutes of data, look at what's going on, and your dashboard is filled out. So if folks have other metrics or types of metrics they want to collect, this is an area we're always looking to improve from a community standpoint. And from an ease-of-use standpoint, the idea is to get you started as quickly as possible. So let's go ahead and check if there are any questions on Slack. Nothing so far. So next, we'll showcase some quick PromQL queries and some labels. We'll pass it off to Gibbs, who can showcase some of that within Grafana — what you can do with some of this data after it arrives. Let's see. The one we typically just run is the up query, just to see everything. Okay, you can see a couple of your instances here. I think the other piece is, when we configured Fluent Bit, you could add labels within Fluent Bit. So we can always use those labels as part of dashboards, as part of filtering. And on the node exporter dashboards as well, there are a couple of dropdowns where you could say, hey, I want to look at labels that match a specific app profile or a particular environment. 
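A few PromQL queries along these lines, runnable from Grafana's Explore view — the `app` label here is hypothetical, standing in for whatever label was added in the Fluent Bit output config:

```promql
# Health of each scrape target: 1 = up, 0 = down
up

# Per-mode CPU usage rate over the last 5 minutes, filtered by a custom label
rate(node_cpu_seconds_total{app="fluentbit", mode!="idle"}[5m])

# Available memory as a percentage of total
100 * node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes
```

The metric names are the standard node exporter ones, which is exactly why the stock dashboard works unmodified.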
And so the main thing we wanted to showcase as part of this workshop is that we're not doing anything too fancy — nothing so Fluent Bit specific that it prevents you from utilizing all the processes you might have built on Prometheus before. So if you already have node exporter, you have all your alerts defined, you have everything going, you'll be able to quickly plug Fluent Bit into that ecosystem. And secondly, if you're expecting to traverse and explore the data but haven't had much experience with node exporter or Prometheus, it's pretty simple to get started. There's a really large ecosystem around Prometheus — a really large number of folks and tools out there to help explore that data. So we're going to hang out on Slack here for a little bit more and make sure things are running well for others. Other than that, we'll answer questions, things about metrics, and then we'll be breaking for lunch right after — so we're the only thing that stands in your way. So yeah, thank you for joining in. Again, everything's on GitHub, so if you haven't had a chance to try it, it's all available to try later. And yeah, maybe we made it too easy with just a single step, but awesome. Yeah, and for the recording: the comment was, hey, I have a lot of Raspberry Pis, I've been trying to do a lot of node exporter setup, and what we can do here with Fluent Bit looks a little better. So yeah, that's great news. This is something where we want to keep building as a community for the metrics integrations. So if you have those requirements, if you have those needs — we've had folks come and say, hey, I want to be able to measure from my logs how many 400 or 500 errors there are, as metrics, so we can add those stats as things we expose internally. So yeah, we're super excited to keep building here. 
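On that "count 400/500 errors from logs as metrics" idea: Fluent Bit releases after this talk (2.0 and later) shipped a log_to_metrics filter that does roughly this. A sketch, with illustrative tag, field name, and regex — not something from the workshop itself:

```ini
# Count log records whose "code" field is a 5xx status, as a counter metric
[FILTER]
    name               log_to_metrics
    match              nginx.*
    tag                internal_metrics
    metric_mode        counter
    metric_name        http_5xx_total
    metric_description Count of 5xx responses seen in logs
    regex              code ^5\d\d$
```

The resulting counter can then be routed to the same prometheus_exporter or prometheus_remote_write outputs as the node exporter metrics.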
Okay, I'll keep checking Slack here for any questions; otherwise, we're happy to end a little early, give back some time, and jump into lunch. Okay, let's do it, yeah? Yeah.