Okay, in this video we are going to be setting up Filebeat to run on Kubernetes. We already had it running in a Docker Compose context, and there are some distinct differences between a Kubernetes environment and Docker Compose, so for our purposes we need to configure it for those subtle differences between the two environments. And I'll be honest, it took me a few hours to get this working right; there are some small nuances there. Here, this is an example that Elastic puts out to show you how to set up Filebeat running in a Kubernetes environment. What I like about Filebeat is that it is pretty efficient at shipping the logs over to Elasticsearch. You can see here they're providing a manifest that is going to run in the kube-system namespace. I'm going to override that, and there are a number of things here that we haven't discussed in the course. So it's saying to download the manifest file and apply it, and that is the manifest file here. A few things that we haven't seen: this is a ConfigMap, and what's important about it is that it tells Filebeat how to configure itself for your environment. We'll be modifying that, and I'll be showing those changes coming up in the video. Filebeat runs as a DaemonSet, and effectively the manifest is also setting up some security configuration to allow Filebeat to run on every node of the cluster. That's very important in a multi-node cluster: Filebeat has to have an instance on every node so it can watch the containers running on that node and pick up their log files. For our purposes a single node is not very exciting, but it's very important if you're in a distributed environment with a lot of different nodes. So I've made a few tweaks to this file; what I'm going to do now is toggle over to IntelliJ and start going through those changes.
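For orientation, the reference manifest's overall shape is roughly like this heavily trimmed sketch. The resource names follow the Elastic example, but this is approximate; use the real downloaded manifest as the source of truth:

```yaml
# Trimmed sketch of the Elastic reference manifest (not the full file)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: default          # switched from kube-system for this course
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
  - kind: ServiceAccount
    name: filebeat
    namespace: default
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: DaemonSet                # one Filebeat pod per node
metadata:
  name: filebeat
  namespace: default
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      containers:
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.12.1
          env:
            - name: NODE_NAME      # lets Filebeat know which node it is on
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
```

The DaemonSet is what guarantees an instance on every node, and the ServiceAccount plus ClusterRoleBinding give Filebeat permission to query the Kubernetes API for container metadata.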
Okay, so here at the top of the screen is the original configuration in Docker Compose. You can see we are using the filebeat:7.12.1 image, and here we are setting up the configuration file that says where the logs are; this is slightly different in a Kubernetes context. Then there's some configuration allowing Filebeat to connect to where the logs are and read those logs. The other important aspect is that we are sharing this configuration file into the container using a volume mount. So this is the original configuration that was running for Docker and Docker Compose, what we were previously using when we brought it up. Now, when I showed you the GitHub manifest, this filebeat-kubernetes.yaml is a copy of that (let me close this search window). Originally it was set up to run in the kube-system namespace; I've switched that over to run in default, since all of our services are currently running in the default namespace. A very important nuance there, and again something we haven't talked about in the course. What we need to do is set up Filebeat to look at all the containers, but specifically to pull data only from the containers that we want. Remember we're running a number of services: we have JMS, we have Kibana, we have MySQL, and we don't want log files from those; we just want our Spring Boot services. So what I've done here, under autodiscover, is configure filebeat.autodiscover with a kubernetes provider, which allows it to go through all the nodes. I'm setting hints.enabled: true (and actually I don't need this; let me just get rid of that on the fly). And the main thing here is hints.default_config.enabled: false, which says I'm turning off log collection everywhere by default, and I'll be turning it on per container.
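To make that concrete, here's a sketch of what the autodiscover section in the ConfigMap ends up looking like. The keys follow Filebeat 7.x hints-based autodiscover, and `${NODE_NAME}` is assumed to be injected by the DaemonSet's environment:

```yaml
filebeat.autodiscover:
  providers:
    - type: kubernetes
      node: ${NODE_NAME}           # injected via the DaemonSet env
      hints.enabled: true           # watch for co.elastic.logs/* annotations
      hints.default_config:
        enabled: false              # off by default; opt in per container
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log
```

With `hints.default_config.enabled: false`, only pods carrying the enabling annotation get their logs collected; everything else (Kibana, MySQL, the JMS broker) is ignored.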
So effectively what we want to do is tell Filebeat to process logs, and when it's processing logs it's going to be looking on the host system running the node, in /var/log/containers. If you're running on macOS, that is actually inside a virtual machine on your system, because Docker runs in a VM there. I don't remember if it's the same way on Windows; I want to say Windows is similar, but I could be very wrong on that and would have to check the Docker documentation. But here we are setting up how to get to the logs of the individual containers running on that node. And here I'm setting up a processor. The log message from our log output is put into a field called message, and it is a JSON body, so what we're saying here is that we want the processor to decode that JSON. That's a very important aspect, and we'll see it come into play inside of Kibana. So this file here sets up Filebeat itself. You'll need to set this up; the main things I changed were the namespace, and this whole configuration section, which is completely different from what they recommended. Filebeat's configuration is very versatile, so there's going to be more than one way to address this; this is how I got it to work for my environment. I'm setting hints.enabled: true and then hints.default_config.enabled: false, so that's turning logging off by default. Now, on the deployment side, the main thing is that, if you go through the documentation, Filebeat picks up annotations: if you add the logs-enabled annotation set to true, that overrides the default. So at the Filebeat level I'm saying enabled false, but on the individual deployments where I want the logging, I set the annotation to true, and that is how it gets picked up. So Filebeat is going to be looking at everything; I'm hoping this makes sense.
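The JSON decoding described above can be done with Filebeat's `decode_json_fields` processor. A minimal sketch, assuming the Spring Boot log output lands in the `message` field as described:

```yaml
processors:
  - decode_json_fields:
      fields: ["message"]       # the field holding the JSON log body
      target: ""                # merge decoded keys into the event root
      overwrite_keys: true      # let the JSON values win on conflicts
```

Decoding here means each JSON key (logger name, log level, trace IDs, and so on) becomes its own searchable field in Elasticsearch, instead of one opaque message string.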
By default, logging is turned off in Filebeat, and then on specific containers I'm using an annotation to enable it. And one very easy mistake: there's a metadata section here for the Deployment itself; be sure that you put the annotation in the metadata section for the template. The template is what gets applied to the container that is running, so that's how the container gets these annotations picked up. So I had to add that annotation to each one of the deployments: beer service, inventory, inventory failover, and order service. I went through that and then redeployed them. Let me toggle over to the command line and run kubectl get services, and we see here that we have the various services running. The important one is Kibana. Remember, we set that up as a NodePort service, and we can see that Kubernetes is mapping the cluster port of 5601 to 31166. All our services are running, so let's come back over to Chrome. Here I am in Kibana, and you can see I'm on port 31166. This is a log entry, and this is all the data that we are getting. Let's see here: here's the message, and this one is from the order service. We can see it's from order service, we have the logger (that's the package name of the logger), and the actual log message, so a number of things that we can see here. You can also see the span ID and the trace ID; these are values that you can use to search on transactions. These last two are being added to the system by Spring Cloud Sleuth, a set of metadata that you can use to trace a call throughout the entire system. So if something goes through the gateway into the beer service and then to the inventory service, you'll be able to use that trace ID to follow it through.
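As a sketch, the per-service opt-in looks roughly like this (the service name and image here are just examples). Note that the annotation sits under `spec.template.metadata`, not the Deployment's top-level `metadata`:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service                    # example service name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
      annotations:
        co.elastic.logs/enabled: "true"  # opt this pod's containers in
    spec:
      containers:
        - name: order-service
          image: example/order-service:latest   # placeholder image
```

Because the annotation lives on the pod template, every pod created from this Deployment carries it, which is what the hints-based autodiscover reacts to.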
I don't have a handy example to show you that, but it does work nicely if you need to trace something through multiple services; again, that is Spring Cloud Sleuth that provides that data. So let's toggle back over to IntelliJ and just recap what we did. This is the Filebeat manifest. I modified it to use the default namespace and this configuration, which is completely different from the example provided. The Filebeat autodiscover has been reconfigured: basically I'm saying hints.enabled: true, and the default config is enabled: false, so by default Filebeat is not going to be looking at log data. And then here, on each individual deployment, under the template metadata annotations, I'm setting co.elastic.logs/enabled: "true", and that turns on Filebeat detection for that container. So any container that is brought up gets picked up; even if I told Kubernetes I wanted three of them, all three would get picked up and have Filebeat logging enabled, and Filebeat would pick up the logs from each individual container and ship them to Elasticsearch, and then Kibana would pick those up. So these are the primary important pieces of the configuration to get Filebeat doing the consolidated logging for us: shipping all the logs up to Elasticsearch so Kibana can search them for us. And I'll be committing this into the GitHub repository so you guys will have a working example that you can pull from.