Tom here from Lawrence Systems, and we're going to talk about centralized logging with Graylog. If you're not familiar with Graylog, it's a free and open-source logging platform that is absolutely amazing at consolidating all of your log data. Having it all in one place makes it really easy to build correlations, queries, and even alerts off of that data. We're going to talk about how to get started, from deploying it from scratch all the way to setting up extractors, managing streams, and building your indexes, and make sure you understand what all of that means. I've got some visuals, and I've also linked to the Graylog documentation for anything that goes out of scope here, because there's a lot more that can be done. This is one of those platforms you can just keep tinkering with and expanding on, creating some interesting correlations between all of your logs, or parsing them and creating alerts in some very interesting ways. We'll be covering all of that, from the alert system to the extractors to the streams and what it all means. Now, this video is not sponsored by Graylog, but I did do a video on Graylog for which they sent me a shirt, which, by the way, I poked a hole in. So if Graylog is watching: I wear an extra large, and my address is, you know, over on my website. Even though this is not sponsored by Graylog, if you'd like to support this channel there are some affiliate links down below, or you can hire us for a project: head over to my website, lawrencesystems.com, where there's a Hire Us button at the top. Let us know what we can help you with; we do consulting for many of the things we talk about on this channel.
Let's jump right over to Graylog. We're going to walk through exactly how to deploy it with Docker. Everything is on my GitHub, and because this video is static but my GitHub is dynamic, you'll always find the latest extractors I'm using there, plus the latest installer if any changes come after this video. Everything is linked down below, along with a time index so you can jump to the part that's most relevant to you. So let's jump over and start getting Graylog installed. I'm doing this demonstration on Ubuntu 22.04, but it's Docker, so it should work with really any distribution that supports Docker. One thing I will make note of: if you install Docker as part of the Ubuntu installer, at least in 22.04, it wants to install it as a snap. If you install it as a snap instead of as a package via apt-get, it's going to behave differently, and the problems you'll run into go out of the scope of this video. So there are important reasons not to install it as a snap unless you want to solve the other issues that come with that particular challenge. The next thing we need to do is make sure the time zone is set correctly, so run `timedatectl set-timezone UTC`. It should have been the default on this setup anyway, but I really do recommend that everything be done in UTC, or you'll have to do some time skewing and messing around inside Graylog.
It's important that your logging server have everything in the same time zone. Once you've set that, the next thing to do is install Docker, so we'll run `sudo apt-get install docker-compose`. All of the commands are in a forum post linked down below, so you don't have to copy any of them off the screen; you can paste them straight out of the forum post. An important step is making sure the local user has the ability to run Docker, so we're going to use `sudo usermod -aG docker <user>` to add this particular user to the docker group. Once it's added, just log out and back in (a restart would work too); until those group permissions are applied, you won't have permission to run Docker. Yes, there are other ways to do it; that's just the quick way. Next, we're going to `git clone` the latest version of the Docker Compose setup right from my GitHub. Before we go into customizing it, which we'll do in a moment, you may want to generate a new password. The default user is admin and the default password is admin, which is probably not the best to leave in place, so you should come up with a better one. Here's how you do it: run `echo -n yourpassword | shasum -a 256`, with whatever password you want (and make it a bit better than password1234). This creates the SHA-256 hash of the password; just copy it, and there's a spot we'll cover inside the Docker Compose file where you paste it in.
Let's look at the Docker Compose file. At the top it defines the network: we're going to call it graynet, driver bridge. That's fine unless you have some special use case where you want to do something different. Next are the volumes. Docker stores application data and volume data separately; that way the applications can be updated and changed while always pointing back to the same local storage. We have the MongoDB data, the log data, and the Graylog data. MongoDB is the database Graylog uses, not for storing logs, but for storing configuration, so it's not that big and we don't mind keeping it local. The log data, on the other hand, lives in OpenSearch in this setup, and log data can get very big. We'll talk in a moment about another way to configure it, but this keeps it all on this local VM; an ideal setup is to have a shared mount and put the log data on that, which we'll cover when we get to the OpenSearch settings. Graylog data, driver local: this is not much data, just some of the settings for Graylog itself. Further down, this pulls the MongoDB image from Docker Hub, and each of these services is set to restart unless stopped. What that does is keep the services up and running once we start them, even starting back up when the server itself restarts, so it will always be ready if you have to reboot. Now, OpenSearch is where we're going to do a bit more customization. This is where I have it pointing at just log_data, but I could also point it at /mnt/log_data if I had a mount somewhere, and that's something you may want to take into consideration, because if you have terabytes of data, you ideally don't want to stick that in a VM.
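As a sketch of that external-mount idea: in the compose file you'd change the left side of the OpenSearch volume mapping. `/mnt/log_data` is a hypothetical mount point; the container path is where the OpenSearch image keeps its data.

```yaml
services:
  opensearch:
    volumes:
      # default: named Docker volume on the local VM
      # - log_data:/usr/share/opensearch/data
      # external mount instead (the path before the colon is yours to choose):
      - /mnt/log_data:/usr/share/opensearch/data
```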
That's not good storage design, so my production system has this volume pointing at another mount. If you do that, you don't need to define the volume at the top, though it doesn't matter if you leave it in; it just isn't used for anything. This is where you'd change the part before the colon to wherever you want that external mount to be. Scrolling down a little further, we'll leave all the ports and everything else the same; it's attaching to the same graynet with the same restart policy. Let's go down to the password part: GRAYLOG_ROOT_PASSWORD_SHA2. They say root password, but they're not talking about the virtual machine; they're talking about the admin password. This is where you'd paste in that hash; right now, by default, it's the hash of "admin". GRAYLOG_PASSWORD_SECRET is something else you'll want to change. There's more in the documentation about customizing these, but at least change something better than admin/admin, because even once you log in you cannot change the admin password from the web interface: it always uses whatever hash you have here to check the password. As we go down, it's bound to port 9000. That's fine, but there is no HTTPS/SSL set up on this, so putting a reverse proxy in front, or tying it into your existing reverse proxy, is not a bad idea; that goes out of scope of this video. Next is setting the time zones. Yes, I have it in here twice, because that solved some problem I was having and I can't remember which of the two settings is the one that's supposed to be there; putting them both in solved all the little quirks I was running into. If someone has a better answer for which one is right, leave it in the comments down below, or just leave both of them in. Then there's the Graylog email transport.
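Pulling those settings together, the relevant environment block in the compose file looks roughly like this. This is a hedged sketch: the variable names are the standard Graylog Docker environment names, but every value (hash, secret, IP, mail host) is a placeholder you'd replace with your own.

```yaml
graylog:
  environment:
    # SHA-256 hash of the admin password (from the shasum step earlier)
    GRAYLOG_ROOT_PASSWORD_SHA2: "ef92b778bafe..."   # placeholder hash
    # Long random string used to salt/encrypt stored credentials
    GRAYLOG_PASSWORD_SECRET: "replace-with-a-long-random-string"
    # Where links in alert emails point back to (IP or FQDN of this server)
    GRAYLOG_HTTP_EXTERNAL_URI: "http://192.168.1.100:9000/"
    # The two time zone settings mentioned above
    GRAYLOG_ROOT_TIMEZONE: "UTC"
    TZ: "UTC"
    # Optional email transport; drop this section if you won't use email
    GRAYLOG_TRANSPORT_EMAIL_ENABLED: "true"
    GRAYLOG_TRANSPORT_EMAIL_HOSTNAME: "smtp.example.com"
    GRAYLOG_TRANSPORT_EMAIL_PORT: "587"
    GRAYLOG_TRANSPORT_EMAIL_USE_TLS: "true"
```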
This is something to note: specifically, when you're installing Graylog with Docker, you don't want to customize the server configuration files; you want to put all of this in the Docker Compose file. I have it mostly pre-filled out here using DuoCircle's MailHop. I'm not affiliated with them, but they do offer a free mail service, and I know it works with Graylog, so I threw it in here; use whatever mail server you have and want to set up. So: transport protocol set to SMTP, then the interface URL. This is actually the IP address of the server we're doing the demo on, but customize it, even to a fully qualified domain name if you have one. This is what the links in alert emails point back to, so you can click on one and get right back to this Graylog instance on port 9000. This is all optional customization: email is not required for Graylog to work, since it also supports other alert methods such as webhooks, but if you want email, configure all of this here, or just eliminate this whole section if you're not going to use it. Scroll down a little further and you'll see the networks and the depends_on conditions: this says not to start Graylog until the service_started conditions are met for both MongoDB and OpenSearch. Then we have the ports defined. Defining ports is really easy, and my methodology is to create a different port for each type of device that's sending in; it just keeps things easy. For example, if we add 1516 and 1516/udp, say those are my TrueNAS systems: everything TrueNAS is going to go to that port, and that way all the extractors and customization I apply to the input (when we get to that later in the video) are tied to a specific port. There are other ways to do this: you can have everything going to one port and then separate, extract, and route things differently by hostname, but I prefer this method right here, and it's how I run my main servers. Once this is all customized, we'll close and save it, and just run `docker compose up`. This goes and gets the latest versions of all the Docker images, pulls them down, extracts them, and kicks everything off; it will probably take just a couple of minutes. We'll jump ahead to when it's up and running. Alright, Graylog is up, so we can log in with admin/admin, because I didn't change it, and we're in: we have Graylog running on this system. Now we're just going to log back out, and I'm going to shut it down, because I want to cover how to do that. It's up and running, but if we hit Ctrl-C or leave this session, it's going to stop; and sure enough, hitting it stopped it. Before we put it in detached mode, let's talk about a couple of commands that help clean things up in case you need to do further customization. First, we just want it stopped. Now we can start it again with `docker compose up -d`, which throws it in the background, so we don't have to leave this session open for it to keep working. So now it's up and running in the background, and it doesn't take long because it already pulled all the images. If we want to take it down, we can just run `docker compose down`. And with the restart policy set to unless-stopped, running detached it will also start automatically when the system restarts.
So now this brings it down. The reason you may want to bring it down is to edit the Docker Compose file: go back in, add some ports, change some settings, change that admin password I forgot, or put those mail server settings in. Bringing it down gives you the opportunity to do so. But if you want to start over because you broke everything, you can run `docker compose down -v`, and be careful with this command, because it destroys all of the volume data: it erases and releases those volumes. The Docker images for the applications are still there, so if we do `docker compose up -d` again, you can see it's creating the volumes again; all the data containers are recreated, and now it's back up and running with fresh data. This is a good way to start over completely if something has gone wrong, you've messed up the configuration, or you've locked yourself out and just want to clear everything. We'll leave it running with `docker compose up -d` for now, because next we want to get to the customization of the web interface and show you how to actually get your logs imported. Before we go through and configure Graylog, I want to quickly cover the flow of data: how it gets parsed inside the Graylog system and how it lands in its different indexes. We start at the server or device that's sending data to the defined input port and type (TCP or UDP), and inside the input you define what type of data is coming in. Then we have the extractors that are attached to that input. You don't have to have an extractor, but if you do and it matches certain data, it will parse that data into different fields. You can also mix this, where the extractor matches only certain messages, so some data gets parsed and some doesn't.
The structured data goes into fields, while the unstructured data gets stored as just a long string in the message. So extraction isn't a requirement: you can still search unstructured data, but structured data is better because you can build statistics around it. From there, the message goes through the stream rules, and if it matches a stream rule it goes into the index you've defined for that stream, or otherwise into the default index. We'll cover how to create these indexes as well, but it's really important to understand how this data flows through. There are even more elaborate things not shown here, such as pipelines that process data in different ways; that's all in the Graylog documentation. We're going to walk through this process to get you started with the basics, up and running and capturing some structured data, using pfSense as our example. Very quickly, I'll show you how to turn on remote logging in pfSense so you understand how it works. Go into the log settings, put in port 1514 and the IP address of the logging server. Whatever port you define may be different, but in this setup we defined 1514 as UDP, and then we just send everything coming into this pfSense box to that particular logging server. We'll leave the log message format at the default, BSD (RFC 3164); that's the pfSense default, and we'll build off this as the example. Now we log in to Graylog, and this is the main interface where you'll see your most recent logs. But we haven't configured any inputs yet, so there are no logs; matter of fact, there's a warning right here that says this is a node where there are no running inputs. So we need to create some inputs. Before we do that,
I would recommend creating a new user. Go to Users and Teams, create a new user with first name, last name, etc., set any of the specifics, set the time zone for that specific user, and of course set the roles, such as admin. You can just use the system as admin, but I'd recommend a separate user; for simplicity in this demo, though, we're not going to bother with that step. We're going to go right over to System, then Inputs, and create our first input. To create an input, choose the input type: there are a lot of options in here, but we want Syslog UDP, then Launch new input. The option to set this globally or on a specific node matters if you've done a larger install with multiple nodes; that's something you can do, but out of scope of this particular video. We'll name this input "pfSense" for our example. For the bind address, just leave it as is unless you have specific IPs configured in your Graylog setup to bind to; we're fine binding to the single IP this system has. The port, if you remember from the Docker config, is 1514; it's also where we told pfSense to send these logs. You can leave everything else down here at default; I go ahead and enable storing the full message, and we launch the input. Once you launch the input you can see how many messages are coming in. Right now it's empty. Oh cool, we have some type of message coming in: if we click Show received messages, we can confirm that yes, we have messages. The data coming in is unstructured, because we haven't added an extractor; it's all on one line right here. The source was filterlog, the messages are here, and the full message, as it came from pfSense, is stored right here. So let's go ahead and put an extractor in.
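If you don't want to wait for pfSense to send something, you can poke the input yourself with `logger` from util-linux. The IP and port here are placeholders for your Graylog server and the input port you defined:

```shell
# Send one test syslog message over UDP to the Graylog input.
# Replace 127.0.0.1 with your Graylog server's IP address.
logger --server 127.0.0.1 --port 1514 --udp "graylog input test message"
```

It should show up under Show received messages within a few seconds.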
So we're going to go back over to our inputs, and we want Manage extractors. They have, essentially, a wizard that will help you create extractors: you give it a message ID or a recent message, say "load this message", and then build an extractor from there. That's a bit more in depth and out of scope for this video, but I will show you how to import existing ones. They do have instructions on how to build these extractors within the system and how to pull the messages. One thing I will note that can be a little confusing: if you want to load a specific message, the wizard asks first for the message ID, which is this number here, and then for the index, which you can find where it says "stored in index graylog_0". So you look at the message, then go to your inputs, Manage extractors, Create extractor, enter the message ID, and then enter graylog_0 here. That's one of those slightly confusing things if you don't know exactly how it's stored and what they're asking for, but that's how you load a specific message to build an extractor from it. We're going to make this really simple and just import the extractors I've already created. It asks for a JSON file, so we'll switch over to my GitHub, where I have a pfSense extractor set. We'll click on this one, which will be the latest whenever you're watching, click Copy, then paste, and now we have all of it in here.
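For reference, the JSON that import page expects is roughly this shape: an `extractors` array plus a version string. This is a hedged sketch of the export format, not one of the actual extractors from my repo, and the grok pattern is a trivial placeholder:

```json
{
  "extractors": [
    {
      "title": "pfSense filterlog",
      "extractor_type": "grok",
      "converters": [],
      "order": 0,
      "cursor_strategy": "copy",
      "source_field": "message",
      "target_field": "",
      "extractor_config": {
        "grok_pattern": "%{GREEDYDATA:msg}"
      },
      "condition_type": "string",
      "condition_value": "filterlog"
    }
  ],
  "version": "5.0.0"
}
```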
We say Add extractor to input. When we go back, we see all the different extractors: I have one for ICMP, Suricata alerts, OpenVPN, and the pfSense filterlog TCP and UDP filters. If you feel you want more, you can look through these, see how they're configured, and start creating your own from there. Now, to see the difference the extractors make, we just need a little more data. We have the extractor in place and some more data has come in, so let's look at the extracted versions versus the first ones. If we scroll down and look at the first data that came in, it's all just on one line as a message. Look at the more recent data and it's all broken down: data length, destination IP, destination port, direction, here's the raw data, then the flags, ID, IP version, etc. With everything broken down into very specific fields, you can do things like add a field to the query and specifically query for any filter entry matching a particular source IP. Matter of fact, because these are now structured fields, we can filter on source IP or source port, and it tells us how many matches it has found so far. When the data is unstructured you can still find things, but you're parsing it as text, as opposed to explicitly typed ports, IPs, etc. Now, by default Graylog puts all the data into one main index. Let's create a new one just for pfSense.
So we create an index set named pfSense, and we'll just fill these out to keep it simple. You can get granular here and control exactly how these indexes are broken apart; there are guidelines within the documentation for this. The more important part is how many days you want the rotation period to be, or if we change this to M it would be months: rotate once a month, or every two or three months. Then, how many of these rotated indexes do you want to keep before deletion? Or do you want to do nothing with them and just let it grow forever, which is a terrible idea. Besides a rotation strategy of index time, you can also choose index size or even index message count. You can cap it at a certain size so it doesn't exceed a limit; that helps with logs that can get kind of unruly when you're doing something like turning on debugging, where you aren't as worried about retention in terms of dates but want to retain only so much data because of some size constraint you have. These are the fine-tuning options for any index set; time-based is going to be the choice for a lot of people, and there are more guidelines in the Graylog documentation for all the different ways you can do this. We'll leave it at rotate each day, delete the index after 30 rotations (30 days at one rotation per day), and create this index set. Creating the set does not put any data in it; all the data is still going over to the default index set. You could click Set as default if you want all data to land here by default, but instead I'm going to show you how to create a stream to push the data specifically into this pfSense index. To do that,
we have to go back over to our inputs. You don't get to choose the destination index within the input itself; that's what the stream is going to be for. We're going to click Show received messages, because I want this piece of information right here: gl2_source_input, with this ID. That's the ID for the input, which we'll need when creating the stream, so make sure you've copied it. Then we go to Streams and create a stream; this is just going to be our pfSense stream. What's going to be routed in here? pfSense. Does it route to the default index set? No: we pick our pfSense index set, and check Remove matches from the default stream. So first we create the stream, and then we create a rule for it that says these messages have to go into this particular index. We add a stream rule on the field gl2_source_input, match exactly, with the value set to that input ID we copied, and hit Create rule. Then you can say "I'm done" and start the stream. What this does is route data into that specific index as the stream rule hits: the data flows in, hits the stream rule, and goes into that index. If we go back over to our search, we can see All events, the Default Stream, or let's look at the pfSense one and get some data in here, and we can see data is already flowing in.
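Once the fields are extracted and the stream is in place, searches can target structured fields directly. These are illustrative queries only: the field names (`src_ip`, `dst_port`) come from the pfSense extractors, and the `gl2_source_input` value would be the ID copied from your own input, not the placeholder shown here:

```
src_ip:192.168.4.104 AND dst_port:443
gl2_source_input:<your-input-id>
```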
So this allows you to filter on that very specific new index we created, and once again the data is all structured and all going into that index. If you wanted to purge it out of the other index, you could go in there and delete things, but I'm not worried about it: it will fall off with the default index's rotation, or you can control those settings manually back in the index set. Now I've moved over to our production system, because I want to show you how the alert system works. We'll go over to Alerts, where I'm tracking, for example, OpenVPN logins and pfSense web interface logins; I have different triggers for different things so I have an idea of what's going on, and these are defined as event definitions. Before you can define an event definition, you have to say where you're going to send it, and that starts in your Notifications. I have a basic email notice, a basic SSH login email notice, and an LTS Slack VPN notice. The reason each of these is different:
well, let's actually edit the Slack VPN notice and jump right to the part where you customize the message. Because this is specifically a VPN notification, I only have it pulling the VPN variables I care about. I'll also throw this in the forum post link so you can see exactly how to put these together, but they do give you a default template whenever you create a new notification, which you can start from and then reduce or expand until it carries exactly the data you want. For me, I only really care about the username, what IP address they logged in from, and what time they did it, and it sends a really simple Slack notification letting me know exactly which user at my company logged in. Creating a notification itself is really simple: we'll call this one "test", give it a description of "test", and you have a few options: PagerDuty, Slack notification, Teams, email, or a generic HTTP webhook. If we were to do, for example, another Slack notification, or an email (provided you'd set email up in the system), all of these come with those basic templates, which I've reduced because I just didn't want that much information sent to email. As I said, these are really cool because of how they can be customized; you can also find ideas in the Graylog forums from other people who have done this. And you can execute test notifications in here: when you choose a notification type like email, it can, for example, send to one of the system's users, or you can manually type in an email address, or multiple addresses if multiple people need these notices. Now, let's talk about actually triggering those notices once a notification is defined. Before you define the event, you have to understand what data you're looking for. So let's filter something: we searched on the process suricata and a source IP, an internal IP of 192.168.4.104. I have this Suricata data piped into the syslog of my pfSense, and the extractor we imported allows the parsing of it, and this is what it looks like. We can see the alert message was a Suricata TLS alert, and going down we can see the process ID of Suricata, so we matched on process suricata up here, and we can see the source IP is that address. Let's say we want to create an alert based on that. Now that we know, I can see there are event definitions, so we'll copy this query and go back over to Alerts to define a new event: Create event definition. We give it the title "bad tls 4.104" and get more descriptive down here: "192.168.4.104 went somewhere without a TLS certificate". We'll go Next. What's the condition type? Filter & aggregation. What's the search query we'd like to alert on? We'll paste in our query, something like `process:suricata AND src_ip:192.168.4.104` (the exact field names come from your extractor), and we can get more granular, because we don't want the system doing too much work, so we'll say this is specifically going to be found in the pfSense stream. Right now it won't find any; we could load a message ID to make sure it matches. Filter and aggregation searches within the last five minutes by default; let's search within, say, the last eight hours and look back. Hey look, we found some of these that are hitting, and that's what this preview is telling you: whether any conditions are met when it searches back. Then: how often do you want this to run? Filter has results, versus an aggregation threshold: you can keep getting a bit more fine-grained down here, but we're fine with it alerting whenever the filter has results.
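For reference, the notification bodies described above use Graylog's built-in template syntax. A minimal sketch of a trimmed-down message like my Slack VPN notice might look like the following; `vpn_user` and `src_ip` are hypothetical field names that would have to match what your extractor actually produces:

```
${event_definition_title} at ${event.timestamp}
${foreach backlog message}
user: ${message.fields.vpn_user} from ${message.fields.src_ip}
${end}
```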
Then we go Next. I'm not worried about any custom fields. Add notification attaches the notifications we defined: you can create a new one right from here, going back to that earlier process, or just pick one of the existing ones. The basic email notice I have set up just sends me a dump of all the data. Hit Next, the system confirms how you've set up the alert, and then you create the event definition; it will go to Alerts and perform the actions and tasks you defined. Now, this was enough to get you started with Graylog, extracting data and parsing it into fields, but there's so much more you can do. You can even pull in external data sources, and you can use this to consolidate the logs from all of your servers, as I do, then start creating different extractors for different things. As I said, I'll keep my extractors up to date, along with any new ones I may add in the future; or maybe you're watching in the future and there's even more available now on my GitHub. Leave your thoughts and comments down below on what you like or don't like about Graylog, or other things you'd like me to do a tutorial on, especially around Graylog. Also like and subscribe, it really helps out the channel, and head over to our forum for a more in-depth discussion of this or any other topic on my channel. And thank you.