Hello, everyone. Welcome back. My name is Rubix1138, and I welcome you to Suricata, an introduction to the OpenSOC capture-the-flag tools. I'd like to welcome Josh to the stage to talk about Suricata. A reminder: if you are not already in the Discord chat, please go to blueteamvillage.org, click on the links for DEF CON 28, and join our channel. So let's please give a warm welcome to Josh.

Thank you very much for the introduction, and thank you to the Blue Team Village and all of DEF CON for the hard work that I know is going into this, and for the opportunity to talk to you all today about Suricata. This will be a brief introduction intended to give you some idea of what Suricata is capable of; hopefully there are a few things you'll learn that you didn't know it could do. I'm going to use a VM to demonstrate, and I have the text and the UIs fairly large, so it should be legible.

A little about Suricata before we get too far into the technology: Suricata is an open source project, and everything I'll be talking about today is also open source. There will be a few tools I'll touch on beyond Suricata, but most of them are developed and supported by the OISF, the Open Information Security Foundation. That's the 501(c)(3) non-profit that manages, maintains, and supports Suricata and all the activity around it. We have a globally diverse team of executives, developers, trainers, and others who support and really make this project happen. There's also a very active community, with a number of different platforms available for getting engaged, and I would highly encourage you, if this technology is new to you, to reach out to those communities and get involved. This is me; you can find me just about anywhere on the internet.
I'm available at this email address as well as on Twitter, but if you search for my name you'll probably find me, and please feel free to direct any questions, now or in the future, to any way you can get hold of me.

So, to get started: if you know Suricata, you likely recognize it as an IDS, an intrusion detection system, and in the OpenSOC CTF scenario you'll be engaging in, what Suricata will be doing is generating IDS alerts. That is of course one of Suricata's primary capabilities and primary roles, but it can do quite a bit more than that. It can pull packet capture, it can do protocol-specific logging, it can do file identification and extraction, and it can do offline pcap processing: you can take a pcap, maybe from a malware sandbox, and run it through Suricata to get IDS alerts and other protocol logs. I'm only going to talk for 15 or 20 minutes, so I want to give you a real brief demonstration of those capabilities; realize, though, that we could probably spend days on any one of those aspects. Again, there are a lot of resources I can point you to if that would be of interest.

Now, as an IDS, Suricata is primarily generating those IDS alerts, and the alerts come from rules. Simply put, a rule is a syntax, a pattern or a series of patterns, that is applied to your network traffic; if those patterns match, an alert is generated. Alerts can fall into a wide variety of categories. You can have alerts around malicious behavior, but you can also have alerts around policy violations or anomalies in your network.
Let's say you have unencrypted HTTP traffic going over a standard HTTPS/TLS port like 443: not necessarily malicious, but certainly something you could want to know about in your environment. So the rules can be very broad, and they can provide different looks into the traffic in your environment beyond just known malicious behaviors. If you're familiar with something like YARA signatures, where you write a pattern and use that YARA rule to match on different things, that's a lot like how these rules work.

From a practical perspective, most users are going to get their rules from a rule source, feed them into the engine, and then monitor when alerts are generated. You can of course create your own rules; defining custom rule sets is certainly a capability of the engine, but a lot of users are just going to consume existing ones. It is important, as alerts are generated (as you'll see in just a moment), to understand the rule syntax at least well enough that you can read a rule and understand what it's telling you, so that you have the ability to look at the rule itself as well. But rules are really a topic in and of themselves, so we'll stay focused on the pragmatic side at this point.

The way we can look at it is that rules come from sources; those sources can be combined to create rule sets, and those rule sets are what we feed into the engine, into Suricata. A very popular open source rule set is ET Open, and Suricata by default will use that rule set. There are other open source rules you can consume and configure your engine to use, and it really becomes up to you to determine the rules you need or want, depending on the environment you're using Suricata in.
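To make the rule idea concrete, here is a minimal illustrative rule in Suricata's syntax. The message, pattern, and SID are invented for this example (SIDs of 1000000 and above are conventionally reserved for local rules):

```
alert http $EXTERNAL_NET any -> $HOME_NET any (msg:"LOCAL Possible EXE download over HTTP"; flow:established,to_client; file_data; content:"MZ"; depth:2; sid:1000001; rev:1;)
```

Reading it left to right: match HTTP traffic from the external network to the home network, only on established flows in the server-to-client direction, and look in the HTTP response body (file_data) for the two-byte "MZ" executable header at the start of the file.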
For example, I use it for a lot of malware analysis, so I grab pretty much any rule set I can find, because I'm okay with any individual analysis generating a lot of noise. In a production environment I'd be a lot more cautious and careful.

The VM we're in (and I may have jumped ahead of myself just a tad; believe it or not, my system crashed just before this presentation, so I had to restart) is called SELKS. It's also an open source distribution, Debian-based, designed to highlight all of the capabilities of Suricata. If you're not familiar with SELKS, or not familiar with Suricata, you can go to Stamus Networks and download SELKS, the Suricata/ELK stack, and you'll essentially have the VM you see in this demonstration. The only minor differences are a script that I run and some pcaps I've added; you won't have any pcaps when you download it, but otherwise everything should be the same. That's what SELKS provides you. It's a bit like Security Onion, though it doesn't have some of the host-based pieces, and there are a number of differences in tooling.

With this you get some interfaces, such as Scirius, a graphical interface that lets us manage rule sets; it comes with SELKS. Now, if you were to install Suricata yourself, at least a more recent version of it, it comes by default with suricata-update, a command-line utility developed to manage the rule sets and rule sources for your Suricata instance. I'm not going to run it here, because I already have Scirius set up and I don't want to confuse the rule manager; I'll just use one or the other. But if you run it with --help, you'll see it has all of the basic commands we're going to cover here. We look at updating our sources; we look at listing sources.
We look at enabling sources, and then at applying those sources to build the rule set and deploy it to the engine. That's what we'll be seeing with Scirius.

Now, I mentioned we have sources, and if we want to add any public or custom source, we can do that just by selecting these actions; adding a source is usually as straightforward as supplying the URL the source comes from. There are also some commercial rule sources, which you of course have to pay for; typically you get an API key or some other way to authenticate before you can use them. For some other examples of rule sets that are out there: if you go to abuse.ch, which is an awesome project with a lot of great resources, you can look at the blacklist project, and from there at the actual lists. You'll see that this particular project, for example, has a number of rule sets available based on things like blocking C2 IPs or JA3 fingerprints.

Something to keep in mind as you look at the different rule sets out there: if there is a Suricata-specific one, I would typically opt for that. A lot of rule sets are compatible not only across different versions of Suricata (and there's a fairly significant change between version 4 and version 5, with 5 being the latest) but also between Suricata and Snort. Some of the more recent rule sets, such as ET Open, the Emerging Threats rule sets, are starting to publish separate Suricata 4 and Suricata 5 versions. The changes in 5 are all based on rule syntax, so you're likely getting more performance out of the 5.0 rule set; you also have to be careful, because typically the 5.0 rule set won't work with the 4.x version of Suricata.
So, just some things to keep in mind; in general you'll find a lot of compatibility, but if I'm running Suricata I would always opt for a rule set specifically designed for it.

If we go back to Scirius, you can see our sources listed here. Again, adding those sources is just a click away, or a command away if you're using suricata-update. Once we have our sources, we can create and manage our rule set; a rule set is just a combination of rule sources. The default in this VM is to take the two sources we have and add them to this rule set, and you can see we have those two sources and just under 21,000 rules.

Once that's configured, we have to deploy the rule set: make sure we've downloaded the most recent version, and then push it to the engine. In this interface you go to the Suricata tab, and there's a ruleset action that will update, build, and push. What you'll see with a lot of rule set managers is that they combine all of the rules you've told them to use, write them into a single file, and deploy that file to the location in the filesystem where Suricata looks for rule files. suricata-update does something very similar: to create the actual rule file you just run suricata-update, which runs off the current configuration, updates the rule sets, and writes the file for Suricata to use.

Now, I mentioned the rule location. To understand where the rules are written, maybe to confirm they are in fact getting updated, we can look at the rule files, or at the timestamp on the rule file. The default location for the Suricata configuration is /etc/suricata, and in particular suricata.yaml, a YAML file you can open and look at. And yes, I did just use nano in public.
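To pull the suricata-update workflow together in one place, the command sequence looks roughly like this. This is a sketch from memory; check suricata-update --help for the exact sub-commands and default paths on your version:

```shell
# Refresh the index of available rule sources
sudo suricata-update update-sources

# See what's available, then enable a source by name
sudo suricata-update list-sources
sudo suricata-update enable-source et/open

# Download the enabled sources and write the combined rule file
sudo suricata-update

# Ask a running engine to live-reload its rules instead of restarting
sudo suricatasc -c reload-rules
```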
Everything Suricata-related is in here. You probably don't need to get too deep into the configuration file; that's something that comes with time, but there are definitely some things you might want to check. Probably one of the most important, aside from the rule location we'll look at in just a moment, is HOME_NET and EXTERNAL_NET. If you dig into the syntax of the rules, they use HOME_NET and EXTERNAL_NET, and those variables define the direction of the traffic flow and when the engine should apply the rule: should it be on the response, should it be on the request. So that's very important. It also, of course, defines the IP space. The default for HOME_NET is to use the standard internal RFC 1918 address ranges, and the default for EXTERNAL_NET is just to negate your home network. You want to make sure these are correct: if the environment you're in or defending uses addresses that fall outside of this, that's something you definitely want to keep in mind.
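The relevant section of suricata.yaml looks roughly like this (these reflect the stock defaults; the exact layout can vary a little between versions):

```yaml
vars:
  address-groups:
    # RFC 1918 private ranges; change these to match the networks you defend
    HOME_NET: "[192.168.0.0/16,10.0.0.0/8,172.16.0.0/12]"
    # Everything that is not HOME_NET
    EXTERNAL_NET: "!$HOME_NET"
```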
Now, one more quick thing about the configuration: the engine supports sub-configurations, and SELKS does use one. That means that in our primary configuration file we can include sub-configurations, and if a sub-configuration overrides any options set in the primary, then including the sub-YAML means the sub-config wins. This becomes helpful if you have a base configuration you want to use everywhere, and then deploy sensors into different areas of your network that have slightly different requirements based on the type of traffic they see. In SELKS, the sub-YAML changes the default rule file: here you can see that's commented out, and if we go a little further there's the property default-rule-path. The rules will be located in /etc/suricata/rules, and it's just going to be a single rules file, scirius.rules. So that's a real quick look at the configuration; again, unless you have a reason, you probably don't need to get into it right off the bat, but it's definitely worth checking HOME_NET and EXTERNAL_NET and tweaking them if you need to.

Going back to the interfaces: we've now looked at different sources, you have an understanding of where you can grab different rule sources, and we know the process for updating and pushing them. Suricata can also do a live rule set reload, so you don't necessarily have to restart the engine.
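As an aside, a sketch of how that sub-configuration layering fits together. The override filename here is made up (SELKS uses its own), and the include syntax can vary slightly between Suricata versions:

```yaml
# At the bottom of /etc/suricata/suricata.yaml:
include: local-overrides.yaml   # hypothetical filename

# Inside local-overrides.yaml, point the engine at the single
# rule file that the rule manager writes out, as SELKS does:
default-rule-path: /etc/suricata/rules
rule-files:
  - scirius.rules
```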
The next thing we'll do is talk about some of the protocol parsing and file identification capabilities. As I mentioned a few minutes ago, one of the differences between this VM and the one you would download from GitHub is that we have a couple of scripts that help with the analysis, plus the pcaps. That script is seri.sh; it's a relatively straightforward script in that it's essentially running Suricata, and its main goal is to run Suricata in offline mode. (I forgot which pcap I want to use.) Again, one of the capabilities I mentioned is that Suricata can run in offline mode, which means you can feed it a pcap and it will process that pcap as if it were analyzing the traffic live on the network, and you get the same results, with Suricata generating its output.

One thing I haven't mentioned yet is where Suricata generates all of its output. The default is to drop all of the data into a JSON file, eve.json. The alerts, the protocol logs, file stats, anything the engine generates will be in that eve.json. JSON is a very flexible file format, because you can do things like submit the file itself to Elastic and other tools that let you take that data, build visualizations and dashboards, and parse it in different ways.

Regarding the pcap capability: this is offline mode, and when we run a pcap it's going to use the original timestamps from the pcap. We need to keep that in mind, because as we look at some of these interfaces we often need to adjust the time and date. If I look at a dashboard showing the last 24 hours, then even though I just ran the pcap, if the pcap was from last week I need to adjust the time filters in that UI to go back to last week.
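Because eve.json is one JSON object per line, it's easy to poke at with a few lines of scripting before you ever reach for Elastic. Here is a small sketch; the field names follow the EVE format as I understand it, and the log path is an assumption you should adjust for your install:

```python
import json
from collections import Counter

def iter_eve(path):
    """Yield parsed records from an eve.json file (one JSON object per line)."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

def summarize_eve(path):
    """Count records by event_type (alert, http, dns, fileinfo, flow, ...)."""
    return Counter(rec.get("event_type", "unknown") for rec in iter_eve(path))

def top_signatures(path, n=10):
    """Count alert records by signature name, like a top-alerts dashboard panel."""
    sigs = Counter(
        rec.get("alert", {}).get("signature", "unknown")
        for rec in iter_eve(path)
        if rec.get("event_type") == "alert"
    )
    return sigs.most_common(n)

# Usage sketch:
#   for event_type, count in summarize_eve("/var/log/suricata/eve.json").most_common():
#       print(event_type, count)
```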
We can run the script now; again, its main purpose is to get the pcap running through Suricata in offline mode. Suricata is going to load the engine as it normally would, load the rule set it's configured to use, and then process the pcap. While that's loading: another open source project, maintained and developed by one of the OISF core Suricata developers, is called EveBox. EveBox gives you a graphical way to look at the alerts and some of the other data, the protocol data and the flow data, that Suricata is generating. At the end, the script does one more thing before it finishes: it queries the eve.json file, looks for any alerts that were generated, and prints them to our terminal. That's great, and it's helpful to see those alerts right away, but it's hard to do all this work in a terminal and it probably doesn't scale well, which is why we would switch to something like EveBox.

Now, in EveBox (it's going to be a little tight in the UI because I have the zoom up high), what we're seeing are the alerts that were generated. Not only do we see specific alerts for events that happened, we also see other alerts around the same time the first alert was generated, which helps us build a little more context around each individual alert. We can see several here; these four are all based around an executable download, and if we scroll up a little further, we see that after that executable appears to have been downloaded, we have alerts around Feodo Tracker and Win32/Emotet command-and-control activity. As you start to understand what the alerts are telling you, you can start to read into them a bit more. We can see that this
particular alert is an info alert; there's color coding to help you visually gauge the severity, so this is an info-level one, in a different alert category: an executable was retrieved with minimal HTTP headers. In one event, the download of an executable can actually generate several alerts. This one is telling you that something downloaded an executable file with very minimal HTTP headers, and that becomes significant because scripting clients like PowerShell, which is regularly used from an Office document to download executables, don't use very many HTTP headers by default. Typically it's just the GET request line, the first line needed with HTTP, with the URI and the version, and then maybe the Host header. So that tells you that where this originated may have been something like a malicious Office document, which is actually the case here. Those alerts can definitely add additional context, and again, we can get many alerts around a single event.

We can select any of the alerts to get a little deeper into the analysis. If we select the ET POLICY PE or DLL download alert, it gives us information about the traffic that matched this alert. We see, of course, the timestamp; we see the sensor it came from, which matters if you're working in a multi-sensor environment; we have the protocol, TCP, though this was HTTP; we have the source and destination ports; the flow ID (we'll take a look at the flow ID in just a moment); the signature and the category, as well as the signature ID. What's helpful about having the signature ID available is that not only can we filter on it in this interface, looking back over however much data our environment has collected for a specific signature, but we can also take that signature ID and use it, and we'll actually
use Scirius to do this: we can pull up the rule itself, select it, and look at the rule syntax. We can now begin to understand exactly what in this rule matched the traffic, if we have to dig a little deeper to get better context and understanding.

Okay, going back: if you continue to scroll down in the interface, you'll see different values from the HTTP request. That's some of the protocol parsing and logging capability it has. You might have the response body; if you're familiar with PE files, we can very clearly see the PE file that was returned. Scrolling down further, you'll eventually get to the raw JSON (PE files are a bit large, but here you have it): the raw JSON data that correlates to this alert. If for some reason there's data you know Suricata is generating, so it's in the eve.json, but it isn't shown in the UI, you can come down here and check. It's also a good place to look just to get a better understanding of all the data Suricata is generating; maybe there are fields it's not producing that you need, and there are actually ways to customize the engine to correct that.

Now, I mentioned the flow ID. We can select the flow ID, and what that does is essentially give us the ability to take a step back and look at all of the other events around this particular flow. Not only do we now see the alerts, we also see the different types of events: we have the flow information, the HTTP information, even the file information. We can look at the HTTP request, for example, and get the host, the URI, maybe the user agent, any of the data around the HTTP request itself. Very similarly with the file info: Suricata has the ability not only to identify files in the traffic,
for certain file types (it has its limitations, and everything is documented on Read the Docs), but also to extract those files if you want it to; it doesn't do that by default, though. Here we have an executable, and we know it's bad because we saw alerts related to Emotet traffic after it was requested and returned. Now we can look at information about the file. The magic, your libmagic, determines the file type; we already knew it was a PE file, but here it confirms that. Of course, if we had PE files with no alerts associated with them, this is another data point being generated that we could search on. We also have the hash, which is potentially very helpful, because now we could take the hash of the file and search for it on our favorite threat research platform, such as VirusTotal. So I'll paste the hash in.

Now, you'll notice this is a pcap from a week or so ago, so if it's Emotet it would probably have been submitted to VirusTotal by now, yet we don't get any matches. The reason is that the state is truncated. What happens is that by default Suricata only reads so far into the HTTP response, and because PE files tend to be larger than that limit, it stops reading at its threshold and hashes only the content it read. So while it certainly read enough of the file to give us the signature, the magic, the PE file identification, it wasn't able to read all of it. This is done for performance reasons, and it's a very small change in the configuration to tell it to read further into the HTTP response, but those are determinations you have to make based on considerations for your own environment. It is something you'll want to recognize and understand, because if you didn't, and
you thought this was in fact the hash of the full file, results like this could be a little misleading.

Let's see here. Okay, in addition to looking at alerts, you have the ability to comment, to escalate, or to archive, and under the Events tab in the UI you can view any of the protocol logs that have been generated. If you want to see all of the HTTP, you can select that; if you want to look at just DNS, or TLS, or SMB, it's all available here in the interface. This particular pcap didn't have any SMB, so we're certainly not going to see any there. There are also some basic reporting dashboards that help with this: you'll have a summary of the alerts, the top signatures, the top categories, your top source and destination IPs. So it can give you the ability to take a step further back and monitor your environment from a bigger-picture, broader perspective.

Of course, because the output Suricata generates is JSON, you can put it into a pipeline, modify and enrich the data, and submit it directly to Elastic, and from there you can create any number of dashboards or visualizations you need. SELKS has a number of dashboards and visualizations already built, so if you download SELKS you can use them for inspiration and ideas, to get a sense of the types of visualizations that are quite helpful. Kibana is another open source tool you can utilize here, but getting into all of the details of Elastic and Kibana is a bit beyond the scope of this particular workshop.

The other tool I always like to point out is Moloch. Moloch is another open source tool developed and supported by AOL. It's a great tool; it's like Wireshark on
top of Elastic. Since Suricata can create full packet capture, it can capture your network traffic, and you can then have Moloch pick up the pcap and ingest it, so that you can use its interface to search aspects of your traffic over time, however you have your Moloch configured. If you're familiar with Moloch, it also has its own capture capabilities, but I like to bring it up because if you already have Suricata deployed, these are capabilities of a system you've already set up and configured.

Okay, so I think that was my roughly 25-minute crash course on Suricata: some of its primary features, what it's capable of, the kind of data you can get and how you can utilize it, as well as a VM you can go grab and start experimenting with pretty quickly. That's all I wanted to discuss at this point, so if anyone has any questions, I believe the text-workshops-track-1 channel is available for questions. Of course, I have my contact information, and I'll pull that up again; please feel free to direct any questions to me any way you can, or prefer to, get hold of me.

Thank you, Josh. Yeah, there's a lot of good discussion in the text chat. One question that popped up that would be good to answer: is Suricata based on the engine from Snort, or was it developed completely differently?

It's completely different. There is a lineage that goes back to the Snort days, but it is not a fork; it is a rebuild. There are a significant number of differences in the engines' capabilities, and in things like the rule syntax, so they are different. The best thing I can probably tell you to do is look in the Suricata Read the Docs. I'm not sure I'll be able to find it just by searching; I think there is a comparison page, and there's certainly compatibility information, but I'm
pretty sure somewhere in our Read the Docs there is a Snort-versus-Suricata comparison, so if you're looking for the high-level differences, that's a good place to go.

Well, Josh, thank you very much, and thank you all for sticking through the Suricata workshop. In case anybody has any follow-up questions, Josh will be around in the Discord channel; like he said, it's text-workshops-track-1, under the Flamingo hotel group. Just scroll all the way down to the Flamingo hotel and check out that channel, and he'll be answering questions there. Blue Team Village wants to say thank you to everyone who joined, and this concludes the Suricata workshop. Thank you.