Okay. So again, we're using the same platform we used last time, but before I get started, maybe just as a show of hands, how many folks are running Fluent Bit on Kubernetes or have plans to? Okay, so about 20 to 30% of the room. And of those folks, how many are familiar with Kubernetes operators? A majority of folks. Okay, great. So when we talk about operators, for those who didn't raise their hand, the way I like to think about it is that you essentially teach Kubernetes about these objects that are Fluent Bit and Fluentd. Just as you would interact with something like pods or nodes, you can now interact with these new objects, these custom resource definitions, FluentBit and Fluentd. And so the Fluent operator, as Benjamin explained, is this new project that really takes that to a larger level by including resources like clusters and Fluent Bit configuration, things that build on top of the foundational Kubernetes resources like CRDs, pods, and deployments. And so in this walkthrough, let's go ahead and start it. We already have a Kubernetes cluster provisioned, and we're going to run through what you'd typically do when running this on top of Kubernetes: how do I take my logs from Kubernetes and send them to a place where I can do further log analysis, data visualization, and the other natural workflows we typically run? So we're going to run through this together. I looked at this maybe 15 minutes beforehand, but I think it's going to be fun. So really the first thing it looks like we're doing is adding Loki and Grafana. And maybe as a show of hands, who's familiar with Grafana and Loki? A majority of the room. So Loki is a log aggregation service provided by Grafana, and what we're going to do is deploy it; it's not yet deployed into Kubernetes. So let's go ahead and do that. It looks like we cd into a fluent operator walkthrough directory.
We can go ahead and deploy Loki. It's going to go ahead and deploy, and it also looks like we have this file explorer so we can actually see what's going on here. It looks like we're essentially just deploying Loki into our Kubernetes cluster. And as we deploy that, we'll then be able to apply something on top for Grafana, so we have access to it, and then we'll be able to get the secret so we can log into that Grafana instance. And if I scroll down and skip ahead just to look at what other operations we'll complete, we're going to go ahead and deploy the Fluent operator. So let's go back to the terminal. Looks like it's still deploying. And really, I think the big benefit you'll find with operators, especially as you use them with Kubernetes, is that the way you interact with them is typically the same way you interact with all your other Kubernetes resources. So as you templatize maybe a deployment with Helm, or as you manage other types of resources, Fluent becomes just another one of those resources that you operate. Other notable operators you'll find that are open source and part of the community include the Prometheus operator and the Envoy operator; many of the CNCF projects have their own operators available. Okay, great. Awesome. Looks like Loki is now deployed within our cluster, and we have the port forward set up with localhost. So we're going to go back and apply this service YAML, which will define a service definition on top of localhost:3000. Now, if we go ahead and click this, we should have Grafana load up. Awesome. And I think Pat chose a much better approach here: instead of using admin/admin, we use a randomly generated password. So I'll go ahead and copy that, and we can log in. This is just your typical base Grafana install, and there's nothing there, so we have no logs found.
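To make that concrete, here's a minimal sketch of the kind of Service definition the lab could be applying to expose Grafana on port 3000. This is an illustrative assumption, not the lab's actual manifest: the name, namespace, and selector labels are all hypothetical.

```yaml
# Hypothetical sketch: expose Grafana inside the cluster on port 3000.
# Name, namespace, and labels are assumptions for illustration only.
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
spec:
  selector:
    app.kubernetes.io/name: grafana
  ports:
    - port: 3000        # port the Service exposes
      targetPort: 3000  # port the Grafana container listens on
```

With a `kubectl port-forward` (or a NodePort/Ingress) on top of a Service like this, Grafana becomes reachable at localhost:3000, which matches what we're clicking through here.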
You can see Loki has nothing there; we just deployed it into Kubernetes. So now what we're going to do is deploy the Fluent operator. We have our log aggregation service, and we're going to deploy the Fluent operator within the cluster. Let's go ahead and do that. So: deploy fluent operator. And as it goes and does that, let's actually look at what it's doing. Let me see if I can increase the size. It's really just running a Helm install of the Fluent operator with a few parameters, like, for example, what namespace we want to use. We're also saying, hey, we're going to wait for a specific task to complete. So again, all of this is customizable for your use case, but you can see here we're just deploying pretty much the standard Fluent operator. Okay, looks like the Fluent operator release didn't exist yet, so Helm went and deployed it. And now if we do kubectl get pods in the fluent namespace... great, awesome, we can see that the Fluent operator is running. So an operator, again, is made up of what I'd like to call two parts. One is this ever-running reconciliation loop that checks whether you have any objects defined and, if so, makes it so; this is really what I'd call the reconciliation part. And the second is that now we go and define these objects. So as we define them, this operator will say, hey, you've defined a FluentBit object, you've defined a Fluentd object, you've defined a cluster configuration; let's go and create that so it matches the definition that's been set. And so here, we're going to apply two YAML files, the Fluent Bit inputs and the Fluent Bit output. Let's look at what those are right before we apply them. Here you can see it has a kind: while you'd typically deploy resources of kind Pod or Node, in this case we're deploying a FluentBit object.
It looks like we've set some limitations on it as to its size. From an input side, it looks like it's reading the container logs, which is very standard, right? You're tailing the logs that are within your containers. Looks like we're adding a Kubernetes filter operation and a few rules; so some modifications, nesting and un-nesting, to put them under a Kubernetes tag. And if we look at what the output looks like, not the Elasticsearch one, but the Loki one: very simple. We're just sending everything to the loki.loki service with the label job=fluent-bit. So let me go ahead and copy this, and we'll apply it. Okay, great. And let's hit enter there, so we apply this one too. Awesome. It looks like both of those have been created, and now we can check whether the FluentBit objects were created. So within the namespace fluent, which is what we checked before, we can get the object fluentbit, right? When you use kubectl, you typically say, hey, kubectl get pods; but now we can say kubectl get fluentbit. Looks like there's one created about 26 seconds ago. And if we really wanted, we could run this command to check what inputs we have, what filters we have, what outputs we have. Let's actually at least look at the inputs, and the inputs here should be tail, right? We're tailing that file; we're grabbing all that stuff from Kubernetes. Let's jump back to Grafana. Before, right, we had no logs found. Fingers crossed: if I refresh the screen and we click the log browser, we can see fluent bit now exists because it's tailing that file. And if I go ahead and click show logs, awesome. Yeah, we can see all the logs that are from Fluent Bit.
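As a rough illustration of what those applied objects look like, here's a hedged sketch of a tail input and a Loki output using the fluent-operator's Fluent Bit CRDs. The apiVersion and kinds follow the operator's published CRDs, but the concrete names, paths, hostnames, and labels below are assumptions, not the lab's exact files.

```yaml
# Hedged sketch of the input/output pair the walkthrough applies.
# Values (names, path, host, label) are illustrative assumptions.
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
  name: tail
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  tail:
    tag: kube.*                          # tag records for later matching
    path: /var/log/containers/*.log      # standard container log location
---
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
  name: loki
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  match: kube.*                          # send everything tagged kube.*
  loki:
    host: loki.loki.svc                  # the loki service mentioned above
    labels:
      - job=fluent-bit                   # label we later query in Grafana
```

The operator watches for these objects and renders them into a classic Fluent Bit configuration, which is why `kubectl get fluentbit` (and the per-object queries for inputs, filters, and outputs) works the same way `kubectl get pods` does.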
And if we go and inspect, say, a particular log, we can see it's also being enriched with namespace, container, Docker ID, kind of your standard Kubernetes filter metadata enrichment that comes with Fluent Bit. So here, with just a few kubectl apply commands, we've basically taken the Fluent Bit configuration, converted it into Kubernetes objects, and applied it. The Fluent operator has said, hey, we see that you've applied these new definitions; we're going to make it so by deploying Fluent Bit with the corresponding configuration, and you can interactively look at each of those objects independently. So that's the first lab part here. Let's jump into the next one. Plus one to Pat for making the lab very easy to follow. This one I've actually never seen before, so we're going to do it live here on stage and hope it goes well. This one, it looks like, is taking not just Fluent Bit but also adding Fluentd as part of the operator deployment. When you look at the Fluent operator, the way it's evolved over time is, number one, to have objects defined for Fluent Bit; and number two, there's a great pattern where Fluent Bit can send logs to Fluentd, and from Fluentd you can do further enrichment, send that data out to additional endpoints, and use it as what we call an aggregator. And in this case, the Fluent operator now supports all of these objects. So essentially we haven't just taught Kubernetes about Fluent Bit; we've also taught it about Fluentd as well. We're going to go ahead and deploy all of these things: Elasticsearch, Kafka, and the operator. We go ahead and run this. It looks like it's deploying Elasticsearch right now, installing it as we speak. Then we'll deploy Kafka right after this, and then finally the Fluent operator, which I believe should already exist since we deployed it in the last part.
And if we just skip ahead to look at and read what's going to happen: it looks like we're going to add some more inputs, we're going to have access to Kibana and Grafana, and then we're going to set up some rules between Fluent Bit and Fluentd through the operator. So hopefully this doesn't take too long. And while it does that, maybe we can look at what it's doing. I believe, yeah, here it's just taking the Helm charts that Elastic has for Elasticsearch. So you can take those, and it will throw Elasticsearch as a node right onto Kubernetes. If you look at Kafka, let's look at that one. Yeah, this is similarly just using the Helm charts that exist out there from, I'm going to mispronounce their name, Strimzi.io. And it looks like it's pretty simple here: one cluster, one node, Kafka, even ZooKeeper there as well. Let's change back and see if it's going. Looks like Elasticsearch is done, and now it's moving on to Kafka. Let's see how long that takes as well. Oh, sorry, it looks like it's checking to make sure Elasticsearch is available, using the cluster health endpoint to do so. And it looks like it's installing Kibana as well. I wish I had some hold music or something I could play as this takes place, but I don't. And by the way, again, this is a lab you can run at your own pace as well. I know we're going pretty fast through it, but it's available there for you, and you get a whole Kubernetes cluster to play with with the Fluent operator. Okay, still installing here. Okay, awesome. So now Kibana is ready, and Elasticsearch is ready. Now we're doing Kafka. I'm hoping Kafka is a little faster, though the other two might be just as slow. And maybe while we wait: how many folks are sending data to places like Kafka or Elasticsearch today? OpenSearch as well?
Okay, so about 50% of the room. That covers... how about places like Splunk or Datadog? Okay, about the other half of the room. This is still going, so let's look forward a little bit. The next thing we're going to do is set this ClusterOutput object, and this is where we're telling Fluent Bit, as opposed to Fluentd, what the output is: on the Fluent Bit side, we're setting the output to the Fluent Forward protocol. The Forward protocol runs over TCP and has acknowledgments built in that also tie into the buffering: the receiver only sends the acknowledgment back once the data has been buffered to disk, so if something goes down, the sender knows whether its data actually landed. And here it looks like this is just a really simple configuration for setting that ClusterOutput, so we'll be able to run that once it comes back. Let's take a look at what else is here. Then in this one, we're creating the Fluentd object, which is, I'd say, that second part again, where we're teaching Kubernetes about the Fluentd object. It's now saying, hey, here's the port, 24224. Okay, awesome. Now we have all of these things running. Let me go back all the way up here and make sure our services are good. Fantastic. And let's just double-check: can I open up Kibana? Yes, I can. Can I open up Grafana? Yes, I can. Excellent. And let's go ahead and create this ClusterOutput. This is just a fast way to do it: instead of storing it as a YAML file, we're writing it all inline. So you can see this new ClusterOutput object is created. With kubectl, we can even get it, and there it is, created 20 seconds ago. Let's go ahead and enable Fluentd with the forward input plugin. Let me clear the screen real quick. And so now we're listening on 24224. Excellent. Let's see if that object was created; just as a double check, kubectl get there.
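A hedged sketch of that pairing, for reference: a Fluent Bit ClusterOutput forwarding to Fluentd, and a Fluentd object listening on the Forward port 24224. The kinds and field names follow the fluent-operator's CRDs, but the service hostname, namespace, and object names are assumptions rather than the lab's exact values.

```yaml
# Hedged sketch: Fluent Bit forwards to Fluentd over the Forward protocol.
# Hostnames, namespace, and names are illustrative assumptions.
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
  name: forward-to-fluentd
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  match: kube.*
  forward:
    host: fluentd.fluent.svc   # assumed Fluentd service address
    port: 24224                # standard Fluent Forward port
---
apiVersion: fluentd.fluent.io/v1alpha1
kind: Fluentd
metadata:
  name: fluentd
  namespace: fluent
spec:
  replicas: 1
  globalInputs:
    - forward:                 # Fluentd's forward input plugin
        bind: 0.0.0.0
        port: 24224
```

This is the aggregator pattern from above: Fluent Bit stays lightweight on each node, while Fluentd receives over Forward and can do heavier enrichment and fan-out to additional endpoints.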
Oh, Fluentd not found. Let's go ahead and delete this thing. "Arguments in resource/name form must have a single resource." Maybe I'm... ah: kubectl get fluentd, all namespaces. There it is. Perfect. Okay. Excellent. So what we just created was the Fluentd object, and we also created the ClusterOutput. Now we're going to create a cluster-wide configuration, and similarly, we're going to be reading all of those logs again and outputting to Elasticsearch. You can see here, here's the output for Elasticsearch, and we could run this to query Elasticsearch as well. Looks like it's still empty. It'll take a little while before everything loads up; I'd say probably about a minute. Typically, what happens is once you deploy this, it gets deployed, say, as a DaemonSet; it starts reading the data, and it takes, I want to say, 30 seconds to a minute before data generally shows up indexed and available. I'll skip that part for now; this is an example of what that data should look like. Actually, while it's doing that, maybe we can look at Elasticsearch in here and see if anything is coming through. Let's look at the indices. No indices to show, so probably still not there. Let's double-check that everything got run correctly as well. Okay, try it one more time. Nope. It looks like something with Elasticsearch isn't sending correctly. I'll ignore it for now; let's keep going through the lab. Let's go ahead and apply a much larger configuration, and this configuration is similarly sending data over to Elasticsearch. And if we look at the CRDs that Fluentd has created, it looks like it didn't create those either. I think, yeah, the reason this command is not working is that this config is within kube-system, and the namespace we're looking in here looks like it's specific to the fluent namespace.
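For illustration, a cluster-wide Fluentd configuration that selects an Elasticsearch output might look roughly like this. The field names follow the fluent-operator's Fluentd CRDs, while the hosts, label selectors, and watched namespaces are illustrative assumptions, not the lab's actual configuration.

```yaml
# Hedged sketch: cluster-wide Fluentd config routing to Elasticsearch.
# Hosts, labels, and namespaces are assumptions for illustration.
apiVersion: fluentd.fluent.io/v1alpha1
kind: ClusterFluentdConfig
metadata:
  name: cluster-fluentd-config
  labels:
    config.fluentd.fluent.io/enabled: "true"
spec:
  watchedNamespaces:           # namespaces whose logs this config covers
    - kube-system
    - default
  clusterOutputSelector:       # pick up matching ClusterOutput objects
    matchLabels:
      output.fluentd.fluent.io/enabled: "true"
---
apiVersion: fluentd.fluent.io/v1alpha1
kind: ClusterOutput
metadata:
  name: elasticsearch
  labels:
    output.fluentd.fluent.io/enabled: "true"
spec:
  outputs:
    - elasticsearch:
        host: elasticsearch-master.elastic.svc  # assumed ES service
        port: 9200
        logstashFormat: true   # index names like logstash-YYYY.MM.DD
```

Note the selector-based wiring: the config finds its outputs by label match rather than by name, which is also why namespace mismatches like the one we just hit can quietly break the pipeline.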
So if we actually run this kubectl get fluentdconfig with all namespaces, we should be able to see it, which is, again, part of the kube-system namespace. So, Pat, if you're watching the recording or the live stream, there's a little bug here; we'll have to fix that. Otherwise, let's go ahead and apply this last part as well. Those are now created too. And let's run kind of the last one, the multi-tenant logging example, which, if you look at the output here, is sending data not just to Elasticsearch but is also, sorry, inputting some of that data from Kafka as well. So let's see if we're seeing any of that data within Elasticsearch. We click enter. Looks like there's a config thing missing here. Let's go back and look at the original kubectl get configurations. Here we go. Where's the command? Here it is. Okay, let's see now if we're receiving any data. Ah, still nothing. Okay. Well, I'm going to say there's something off in here, but I would say definitely check out the GitHub repo for this one. Use the previous example if you want to see how the most basic operator setup works with Fluent Bit. With the Fluentd one, there's a lot more complexity and, I'd say, a larger feature set available, because Fluentd has about five more years of plugins and outputs. But you can set those, customize them, templatize them. And in this lab, we'll take a look at that second piece, but at least the first part gives you a really easy way to start sending data for a very common use case, like sending data to Loki or Elasticsearch. So yeah, I think with that, the next session is lunch, probably the best session of the day. We can break; if you have questions, I'm happy to answer them to the best of my ability. And I think Ben and others are going to be available in the Fluent Slack channel.
So if you want to ask any questions there, feel free to throw them in. Maybe I did something wrong during the walkthrough, and they might be able to explain what I did. So with that, let's go ahead and take lunch. I think we resume at one o'clock, if I'm not mistaken, with the next session being monitoring Fluent Bit in production. Actually, 1:30. So we're running about 15 to 20 minutes early; you get an extra little break, more networking, more chat. And then we have two great sessions. We have a Fluent community meeting, so if you have production issues you want to solve or want to open up to the group, we'd be happy to run through them. We could try it live; I think it's always interesting to see what we can do. If you have an issue you want to talk about, or architecture, let's do it. So, okay, let's go ahead and break.