Hi everybody, welcome to my DEF CON 29 Adversary Village talk. It's called Exploiting Blue Team OPSEC Failures with RedELK. My name is Marc Smeets and I hope you like the presentation. I will be available in the Discord room, or you can ask me questions any time after the talk. You can hit me up via Twitter or some other way that you can find me. So let's dive into the talk: Exploiting Blue Team OPSEC Failures with RedELK, a lot to dive into. A little bit about me, for who doesn't know who I am. My name is Marc Smeets. Hobby-wise I've been into infosec since 1998, professionally since 2006, and I have a big background in system and network engineering; from 2006 I started doing pentesting. In 2016 I co-founded a company called Outflank, which you might or might not know. My core roles in there are red team operations as well as building some tools and giving some of the trainings that we have created within Outflank. Mainly on the offensive side, mainly on the red side, though I also have a little bit of experience on the blue team side at some of our clients, where I did some threat hunting, which is actually pretty fun to do. So that's me. The company Outflank: we are a boutique firm and we specialize in red teaming as well as trainings, mainly aimed at blue, although nowadays we also have a red-aimed training, and we have tooling. Since the beginning, since 2016, we have created lots of tools and given lots of presentations, and the majority of our tools are available on our GitHub, although over time we have become aware of the fact that some tools are simply too powerful to be shared publicly online. That's why we created, just a few months ago, our Outflank Security Tooling service, which is basically a private toolset of all the tools that we use during our own engagements that are too powerful to share publicly. Heads up: those tools also integrate into RedELK, and that's the topic of today. 
So, busting blue team OPSEC failures with RedELK. RedELK is the tooling, and I want to dive through the whole concept of RedELK: what it is, why we created it and how you can use it. And then of course there is the whole blue team side, and the blue team makes mistakes as well, OPSEC mistakes, the same as red teams do. So those are the main two topics for today. But before we dive into that, in a broader sense we need to discuss how we, and with "we" I mean Outflank as well as myself, how we see red teaming. If there is one thing that I would like you to take away from this talk, it is that we believe that red exists solely to improve blue. Yes, we do simulate attacks, but it's not a wrecking-ball approach. It's not that we come by, smash everything apart, knock down the blue team, walk away, loot the gold and start laughing. No, no, far from it. We see it as a sparring match, we see it as training for blue, which means that it's fundamentally a different goal. We try to train the weaknesses of the blue team and try to improve them for when the real deal happens with the real attackers. Yes, our simulations, our red team engagements, contain real punches, real movements; it actually may hurt both sides. But it's always better to have a practice hit in your face than a real hit in your face. So we exist to train blue, no wrecking ball. When we are talking about our boxing ring, if you like, here is a quick overview of how our modern offensive infrastructure looks. Most likely yours looks about the same conceptually, although there are many different technical bits and pieces. Going from right to left, we first have our own attacking infrastructure, where we have our command and control servers, multiple most likely. We have our delivery services, web servers where we do tracking, we have all kinds of decoy things, we might have social media profiles, all the true infrastructure that is under your own span of control. 
On the complete left side there is the victim network, or the target network, where you eventually have your implant running that calls back via HTTP or DNS or some other spooky protocol that you have, and where internally within that victim network you've got your things running, your implants connected, things like that. In the middle there is what we roughly call redirectors, or deflection layers. In modern times, with cloud-enabled infrastructure, it's very easy to have lots of flexible, disposable, resilient systems in there. It's simply a layer in between to obfuscate some of your own true attacking infrastructure, as well as to make some smart decisions on the go. Nothing new here, I hope, but there is a reason that I'm telling you this, because this concept can become quite big if you count the amount of components that you have for your offensive infrastructure during operations. So let's talk about a single engagement, which might have several scenarios. For example, if you use a TIBER-based approach for a red team, you will have multiple scenarios within the same operation, which also means that you will have multiple C2 servers; a typical engagement for us has around five different C2 servers. We also have multiple redirectors, different proxies and things like that. Domain fronting, CDN-type layers: multiple. You will be creating multiple fake identities to do the whole social engineering thing. You might create a website or two. Tracking pixels everywhere; we track everything, both in emails and in delivery and in multiple different aspects: tracking pixels, tracking pixels, tracking pixels. You need to be creating them, you need to be setting them up, things to manage. And then there's the delivery side: we've got multiple web servers, multiple email boxes, maybe some file sharing services, messaging platforms, whatever, all the new hot stuff there is. There are multiple aspects that you need to manage. 
That's all front-facing. On your backend side you will have generic backend components. Think of communication channels that you have internally with your team, or also with the white team. You will have your own test labs, you will have all kinds of log aggregation. Log aggregation is where RedELK actually comes in. The reason that I'm telling you this is that this is our boxing ring, this is what we need to use, and it's becoming quite big per operation. And if you have multiple operations at the same time, which many red teaming firms actually do, keeping track of that infrastructure actually becomes challenging. It's not something that cannot be solved, but it's becoming challenging. So when we look at our offensive infrastructure we have two main typical challenges: one being oversight, the other one being insight. With oversight I mean just keeping track of where your infrastructure is and what the state of it is: is it up, is it running, is it okay? In some way you are herding your own infrastructure: multiple components, multiple different things, multiple engagements altogether, a lot of components to keep track of. Insight is more oriented on the question: besides whether it's up and running, is there data in there that can help us do a better operation? Do we have the proper insight over our infrastructure? Looking at other fields we see quite a resemblance in how those challenges are being solved. I just used the term herding; to some extent we need to herd our own infrastructure, and cowboys have the same way of herding their cattle: they use dogs to keep everything in control, and that gives them a way to manage the herd. If you look at the insight part, I'd like to refer back to Mr. Edmond Locard, who was actually the true Sherlock Holmes. He was a French Sherlock Holmes, and he was the one who put science into the field of forensics. 
He was the first one to start measuring things, to take scientific approaches to forensic science. Early 20th century. And why do I bring this guy into the talk? Because he's most commonly known for the Locard exchange principle, which means: every contact leaves a trace. And this is actually very much true for our own operations. As you know, every offensive action that we do will leave a trace on the system. It's up to blue to see, digest and inspect that trace. But it's impossible to touch a system, to perform an action, to remotely do something, without leaving a trace. And now here's the fun thing: it also goes the other way around. It's impossible for blue to do things without leaving a trace. So if you know where you need to look, both on the blue side as well as the red side, you can see the actions of the other one. And if you're talking about traces left by adversaries, by red teams, it's quite common to have a thing like a SIEM, or to have a security operations center, a cyber defense center, or anyway a team of people investigating traces and seeing things. The other way around: during operations, we were in need of such a thing. Looking at the tools that we had available, there are ways of herding your infrastructure, but we needed a way to actually do some investigation on our infrastructure. We started looking in the open source world and we didn't find anything, and that is how RedELK came about. RedELK is a tool ready to be used, open sourced, available on our GitHub, and you can use it for keeping oversight of your infrastructure, as well as having insight into what is happening in the operation. And it's important to understand both aspects of this. During operations, by us and by others, RedELK is most often used like this: 
You've got your live hacking console of your C2 infrastructure, your C2 server, Cobalt Strike for example, where you do your live hacky-hacky commands, and then there is a second window open where you have the RedELK interface, a web interface, and it helps you with having the oversight of, and insight into, the operation: you will see traffic data coming in, operational data coming in, et cetera, et cetera. Like I said, it's available on our GitHub, and I've written a few blog posts explaining why we need it, getting you up and running and achieving operational oversight, and in the next few months there will probably be some more blogs coming out. The name RedELK of course comes from Red being offensively oriented, and ELK being Elasticsearch, Logstash, Kibana, the technical stack that we chose for making this tool. So, diving into RedELK. Looking at your infrastructure again, a slightly different view of it: on the far left you have the target network, where you've got your implant running, so there's attack and C2 traffic going first to your redirectors, your first-line infra, and from there on it's being filtered and put through to the C2 servers that you have in your backend. Nothing new here. How does RedELK fit into this whole picture? Well, here you've got RedELK. It's a piece of local infrastructure; we have it on-prem. You could be running it in the cloud, we prefer to have it on-prem, but there are connectors installed both on your redirectors as well as on your C2 servers. Data connectors, data feeds; we use Filebeat a lot. And from there on the data is put through Logstash filtering, stored in an Elasticsearch NoSQL database, and Kibana is the web interface for actually searching through the data. 
And that goes both for the redirectors as well as for the C2 server components, or you could be hosting your own website or whatever: you can pull the data in and put it into the indices of RedELK. And there's also data copying happening on the backend, so there are some rsync scripts running to copy downloaded files, screenshots and all kinds of other operational data of your operation back to your central RedELK server. Because, well, in the end you will have five, six or however many C2 servers for a single operation, and you do not want to be logging into every specific C2 server to search for that one specific screenshot. You want to have it all centrally, locally, in your RedELK instance. RedELK does a few things: it indexes data, it enriches data that is coming in, it has lots of dashboards in there, you can create your own dashboards, and there's lots of search; well, it's a search-based solution. That's the core functionality of any Elastic Stack. It's based on open source tools, so you can modify it yourself and change dashboards or whatever; you are free to do that. In recent versions we have also added a Neo4j Docker instance as well as a Jupyter Notebook Docker instance. The Neo4j is used to import output from BloodHound. So besides the Elastic Stack you will also have a Docker instance with Neo4j, and you have a Jupyter Notebook for quickly searching through data. And this is really awesome, because now you have operational data of your C2 infrastructure as well as traffic data, but you also have knowledge about the Active Directory environment of your target. And by using the power of Jupyter Notebooks you can make very quick queries, pulling data or matching data, both output from your C2 server as well as data that is within your Neo4j instance. 
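To give an idea, here is a hedged sketch of the kind of query you might run from such a Jupyter notebook: take a username seen in a new incoming beacon and ask the Neo4j instance (loaded with BloodHound output) for a shortest path to Domain Admins. The node labels and the USER@DOMAIN naming follow the BloodHound schema; the function just builds the Cypher, and running it would require the neo4j Python driver plus your own connection details.

```python
def build_path_query(username: str, domain: str):
    """Return (cypher, parameters) for a BloodHound-style shortest-path
    query from the given user to the Domain Admins group."""
    params = {
        # BloodHound stores principals upper-cased as NAME@DOMAIN
        "user": f"{username.upper()}@{domain.upper()}",
        "group": f"DOMAIN ADMINS@{domain.upper()}",
    }
    cypher = (
        "MATCH p = shortestPath((u:User {name: $user})-[*1..]->"
        "(g:Group {name: $group})) RETURN p"
    )
    return cypher, params
```

With the neo4j driver you would then do something like `session.run(cypher, params)` and alarm whenever a path comes back for a freshly seen beacon user.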
For example, you could be watching for new incoming beacons, pick out the username, and immediately go to your Neo4j instance and see if there is a path going from this user to Domain Admins, or any type of admin group that you would like. The Jupyter Notebooks are a way to quickly make those queries and generate quick output. It's really awesome, and once you get used to it, it's really powerful during the operation. That's the core RedELK. Besides the red team, you might as well give the white team some access to dashboards, or even the whole interface. Right now we mainly use it for the red team, and we use Jupyter Notebooks to make data extracts that we can give to the white team. But do your thing: you can easily give your white team access to RedELK. Now, looking at the oversight, there's still a SOC within the target network. And as analysts do, they start analyzing when they have a hunch that something is going bad. So they are doing several things. They are analyzing, investigating your infra. They might be querying your specific redirectors, or they will put data into what I call online security search providers. Think Spamhaus, VirusTotal, IBM X-Force, multiple different domain classifiers, spam databases, sandboxes, all kinds of different ways of analyzing both pieces of malware as well as infrastructure. Now, the fun thing is that those security search providers, being automated things, will start querying your infrastructure as well. So if you look at the log data of your redirector, you might see a SOC analyst investigating, as well as some online security search providers. They might be investigating your infrastructure, they might be querying it, they might be looking at the specific URI path of your implant with different user agents, for example. 
And now we're getting into the whole SIEM part. Because if you have a big pile of logs about your operations, your C2 servers, as well as traffic data from your redirectors, you've got a big pile of logs, and you have a rule-based approach of looking for things that might be suspicious in your own data, as well as querying online resources like VirusTotal to see if an IOC of your own implant or your own uploaded file is already known at VirusTotal, for example. Well, all of a sudden you have SIEM-type functionality. So this is where RedELK fits into the bigger picture. If you look at the logs of your redirectors, or of your C2 servers, you will see that there's not that much data in there, so we need to do some enrichment. And this is the data enrichment that we do when a record comes in. We do multiple things. If we talk about traffic data, we map it to GeoIP data. We check if it's a Tor-based address. We take IP ownership from the IANA databases. We look at reverse DNS, and all that type of data is put into the same record and stored within the stack, within the database. We also query GreyNoise. And for those who do not know GreyNoise: GreyNoise is an excellent tool for seeing if the traffic that hits you is background noise of the internet, yes or no. Background noise of the internet is just the common scanners. Could be Google indexing, or common scanners, or regular botnets that are just scanning the internet, things like that. It was created for blue teams, but for red-oriented minds it's also very interesting. Because if an address is querying our infrastructure on a very specific path that matches our implant path, and it is not known by GreyNoise, then most likely you want to be aware of it, and most likely an analyst is actually looking into it, looking into your operation. 
If it is known by GreyNoise, it's most likely just background noise of the internet. Online resources: we can check at Hybrid Analysis, VirusTotal, the abuse databases, IBM X-Force, multiple online resources that we can query and take data from, and use that for enrichment of the data within our stack. And if we talk about C2 data, there is a component within RedELK that takes the logs from the C2 frameworks. It needs to be aware of how the logs are set up, and it needs to enrich them, and that's what we do. We have full support for Cobalt Strike up to the latest version, as well as our own custom Outflank Stage1 C2, which is also part of our tooling offering. Full support in there. We are working on the other public ones: PoshC2 is, let's say, halfway; basic logs are being ingested, but the data copying of screenshots and things like that is not yet fully done. About the same stage for Mythic: Mythic has created a SIEM logging option that you need to enable when you install the team server, and from there on, if you install the RedELK component, it picks up the logs and ingests them properly, but there's no data copying of screenshots happening just yet; working on that. On the longer roadmap we're also looking at Covenant as well as Sliver. And you are free to connect your own C2 infrastructure or your own C2 tool. In the end, it's just based on normal Logstash rules. It's all open source, and you can go nuts if you like. Okay, data enrichment; that's a lot of talking about what we do. Let's see how it works in practice. Let's start with the interface that you as a red team operator will see, and from there on we'll go into investigation of blue team activity. Kibana is the web interface, and you just log in to the interface, username-password based, and from there on you will see that it's a normal Kibana interface. 
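The GreyNoise logic just described boils down to a simple triage rule. A minimal Python sketch, where the real GreyNoise API lookup is left out and passed in as a boolean, and the implant paths are hypothetical examples:

```python
def triage_hit(uri_path: str, implant_paths: set, known_by_greynoise: bool) -> str:
    """Classify an incoming redirector hit per the rule above."""
    if uri_path not in implant_paths:
        return "ignore"            # not our C2 path at all
    if known_by_greynoise:
        return "background-noise"  # likely a generic internet scanner
    return "alarm"                 # targeted interest in our implant path
```

A hit on an implant URI that GreyNoise does not know as background noise is the one you want to wake up for.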
And we have several pre-made views for you that are most often used, meaning that they have the right columns and the right names and the right filters already made. So I click here on the redirector traffic view, which will give you a tabular view of, in this case, the last seven days of traffic that came in, from anywhere the data has been taken from. You will see that there is a timestamp, an attack scenario, the backend name of the redirector, the traffic source IP, source DNS, and the actual HTTP request. The attack scenario is an important one, because during an operation you will most likely have short-haul, long-haul, or scenario one, scenario two, scenario X if you're using a TIBER-like approach. So, multiple scenarios during that specific engagement. And all the other names actually make sense if you look at the specific index that we're talking about. So let's actually use that redirector traffic. Let's filter; this is just normal Kibana interface stuff. Let's filter on only seeing attack scenario short-haul. You can expand the data component, or the object, and you will see that there's GeoIP data enriched in. You will see the full log message as it was provided within the log, and you will see several aspects, such as the actual IP address of the frontend, the name you gave that reverse proxy frontend, so the frontend name. The program used in this case was Apache. You will also see that it knows how to digest the different headers. In this case that matters, because the traffic came via a CDN network, and a CDN puts in the proper X-Forwarded-For headers, et cetera. That's being picked up by RedELK, and it makes sure the proper source IP ends up in there. Lots of chopping of data happening, and it presents you an easy and queryable interface. If we talk about C2 logs, so not traffic logs but your own C2 logs, then we have the same approach. 
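To give an idea of what that chopping of data looks like, here is a Python sketch of the kind of field extraction the Logstash filters perform on an Apache-style combined log line. The regex and field names here are my own illustration, not RedELK's actual grok patterns:

```python
import re

# Pull source IP, request line, status and User-Agent out of an
# Apache "combined" log line so they become queryable fields.
LOG_RE = re.compile(
    r'(?P<src_ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) \S+ '
    r'"(?P<referer>[^"]*)" "(?P<user_agent>[^"]*)"'
)

def parse_redirector_line(line: str) -> dict:
    """Return a dict of named fields, or {} if the line does not parse."""
    m = LOG_RE.match(line)
    return m.groupdict() if m else {}
```

In the real setup you would additionally fold in the X-Forwarded-For value when the hit came through a CDN, so the true source IP is the one that gets enriched and indexed.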
It's still the same division between multiple attack scenarios. You will see that there is a target username, target IP address, internal IP address, hostname, things like that. There is the normal log message from Cobalt Strike, and you will see that for every action done within Cobalt Strike, it has mapped the data from the top line: the username, IP address, et cetera. You can click on the link to get the full beacon log, which most often can be pretty big. And from there, within your browser, you can simply Ctrl-F for quicker viewing. Sometimes just having such a beacon log in plain text is actually easier to use than the Kibana interface. So from the Kibana interface, with just one click, you have the actual beacon log. The same goes for the other C2 frameworks. Now, in some cases you have made multiple screenshots during the operation, and going back you will remember: well, there was this one system where I had a screenshot that kind of looked like it had this specific application in there. Well, RedELK has an interface with quick previews, as well as click-through to full resolution screenshots, directly available in your web interface. And because it's pulling from all the different team servers, there's no need to log into all the different team servers. The same approach goes for keystrokes as well as downloaded files. Another one is a central overview of all your IOCs. As you know, Cobalt Strike, and nearly every C2 framework, will generate IOC indicators, and those are being ingested. From there you have that view and you can search through it, but with the power of Kibana you also have the option to quickly export the data and present it to your white team. So you go to share, you click on CSV report, and it will generate the CSV file. It will take some time, but the RedELK service is making the file for you. 
And it's just a CSV based on the tab that you see right now. The same can be done with Jupyter notebooks, but from here it's just a quick, clicky way of getting the data out. Easy to use. This was about operations. It might already be that you think: well, this might be useful for my operations. I hope you do, because it actually is really useful during red teaming operations. Now let's talk about spotting blue team activity. There are multiple ways, or multiple areas, where we could be spotting actions of a blue team, where they leave traces. First, directly on your infra: traffic that is directly hitting our offensive infrastructure; I've got a few examples of that in here. We've also got indirect, where we are querying online security service providers for any blue team activity. And then there's a third category where we're talking about internal checks: checks that your implant, already running inside the target network, can do. With the proper queries you might spot some activity of the blue team. Some of these are fully included in RedELK, some we are working on, and some are for the longer term. But first I want to talk about the concept, and then we can talk about the specifics that are implemented in RedELK as well. And if you believe that we are not quick enough with development: it's an open source project. Come join us, come help us. We need to discuss how the redirectors make their decisions and how RedELK feeds into this. When we're talking about traffic, there is traffic originating from the actual implant; that's one kind. The other traffic that might be hitting a redirector is non-target related: could be scanners, could be just regular internet traffic. Your redirector makes a decision based on whatever rules you put in there. This is just HAProxy or Apache, or whatever way you configure your reverse proxy. 
And it will make a decision to either forward that session to the backend, the true C2 backend, or to a decoy website, or forward it to a different website. The logs that are being put out by HAProxy or Apache or Nginx or whatever reverse proxy you use, those are being ingested by RedELK, and you saw them in the interface. But for RedELK, it's important to have the proper logging. So when you do the installation, you actually need to change the logging of your reverse proxy tooling. There are also specific requirements for the naming of your backends: RedELK needs to be aware of what is a decoy and what is a C2 backend. So any type of C2 backend should be named c2- and then whatever; any type of decoy or deflection should start with decoy- and then whatever. Based on that, RedELK will also help you with alarms. The redirectors making decisions: that's important. Okay, once you have that up and running, you will see in the RedELK interface that an analyst might be connecting to your infra, and eventually be routed to a decoy website or to a C2 backend. Especially traffic going to your C2 backend is interesting. And more than once you will see, or at least we have seen, and I guess you will as well, that when a blue team is doing a manual investigation, they're using Python or curl or maybe PowerShell, and not every time will they change the actual user agent. So you will see Python and curl user agents coming in. And more than once we've seen that they first try it from the breakout point of their SOC internet uplink, but also maybe via a Tor address. So there are multiple ways they will be querying your infrastructure, and depending on the actual path and the backend that is chosen by the redirector, it is interesting to see: is there any investigation going on? 
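As an illustration of that decision logic and naming convention, here is a hypothetical HAProxy fragment. The backend names follow the c2-/decoy- prefixes RedELK expects, but the path, User-Agent value, addresses and certificate paths are all placeholders for your own implant profile, not anything RedELK ships:

```haproxy
frontend www-https
    bind *:443 ssl crt /etc/haproxy/cert.pem
    # only traffic that looks like our implant goes to the real C2
    acl is_implant_path path_beg /news/feed
    acl is_implant_ua hdr_sub(User-Agent) -i "MSIE 9.0"
    use_backend c2-shorthaul if is_implant_path is_implant_ua
    default_backend decoy-www

backend c2-shorthaul
    server cs-teamserver 10.0.0.5:443 ssl verify none

backend decoy-www
    server decoy 10.0.0.9:80
```

Because the backend name is in every log line, RedELK can later tell at a glance whether a given hit was deflected to the decoy or actually reached the C2 backend.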
Another interesting thing to see once you have the logs in there: if the blue team is sharing the URL of a thing to be investigated, some instant messaging clients actually try to preview that specific website. In this case it's an example of Telegram, but all the other ones are basically the same. They will try to make a preview, and as you type, for every new character, they will try to get a preview of the URL, so they will be connecting to that host. Interesting to see, and a clear indicator if you see these types of instant messaging apps come by with their user agent, querying your C2 backend infrastructure. That's interesting. So that was directly aimed at your own infra. Let's talk about indirect, via online security service providers. What I'm showing you here is the interface that a blue team would have, in this case for EDR products. And I've highlighted those little checks or options where it says submit to sandbox, or submit to VirusTotal. Well, you might think: that's not smart to do, because once it's in VirusTotal, it will have a hash, and we, as a red team, know the hashes of our pieces of malware, so we can query VirusTotal: have you got results for this specific hash? And VirusTotal will tell you: no, it's not there. Or if it is there, then, as we have not uploaded the malware piece ourselves, the blue team has, and that is a big indicator that your attack has been compromised. Any blue team should know this, but it's made very easy for them to still click that button. This is Symantec. This is Microsoft WDATP, the Microsoft Defender portal. There is a check at VirusTotal, and there is an option to submit it to their deep analysis, which is just a sandbox. It's made very easy for them to click on those links. 
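An alarm on those analyst and messenger-preview user agents can be sketched very simply. The marker substrings below are examples drawn from the tools mentioned above, not a complete or authoritative list:

```python
# User-Agent fragments typical of manual analysis tooling and
# instant-messenger link previews (illustrative examples only).
ANALYST_UA_MARKERS = (
    "curl/", "python-requests", "python-urllib", "wget/",
    "powershell", "telegrambot", "slackbot", "whatsapp",
)

def looks_like_analyst(user_agent: str) -> bool:
    """Flag a redirector hit whose User-Agent smells like analysis."""
    ua = user_agent.lower()
    return any(marker in ua for marker in ANALYST_UA_MARKERS)
```

Combined with the backend name from the redirector logs, a hit where this returns True on a c2- backend is a strong indicator of investigation.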
Talking about sandboxes: you could be deflecting traffic coming from sandboxes, based on source IP address, to a specific decoy backend on your redirector, or you could just let them come in. Either way, you can check the characteristics of that AV sandbox connection, have an alarm on it, and have a clear indicator. Doing some tests during a training that we gave, this is a mapping of, I believe, an eight-hour period where we on purpose tried to trigger sandbox connections, just to have a bigger data set of what sandbox connections look like. It most likely maps back to your own experience with sandbox connections. The funny thing is that they are not very creative with the naming. On the right side you see the actual names of the computers, and on the horizontal bar you see the different payloads the students tried, and the amount of new AV sandbox beacons that connected. So, let me just zoom in. You will see names that you will probably recognize: Admin-PC, John-PC, all kinds of virtual PCs, admin-something, WIN-something, all the typical Windows evaluation images. Not very creative. So this is a clear indicator, which of course you already know, that there is something fishy going on. But we can also generate alarms based on this, even if the connection never reaches the C2 backend. Another one that I want to highlight is domain classification. RedELK already does this. There is a config file where you enter domain names, and from there on, in this case I entered outflank.nl, it's querying IBM X-Force, McAfee as well as Bluecoat. And you will see that for some reason, at IBM X-Force, we have an error getting the reputation, but that's okay, we still have two left. What's interesting here is to look for changes in those classifications, and to see if it's actually a bad reputation, a rogue type of classification. 
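The lazy sandbox naming lends itself to a simple heuristic. A sketch, where the patterns are examples drawn from commonly seen Windows evaluation images and should be tuned to your own observations (DESKTOP-* in particular also occurs on plenty of real machines):

```python
import re

# Hostname patterns typical of AV sandbox / evaluation images
# (illustrative examples, not an exhaustive list).
SANDBOX_NAME_PATTERNS = [
    re.compile(r"^admin-?pc$", re.I),
    re.compile(r"^john-?pc$", re.I),
    re.compile(r"^win-[a-z0-9]+$", re.I),
    re.compile(r"^desktop-[a-z0-9]+$", re.I),
]

def looks_like_sandbox(hostname: str) -> bool:
    """Flag a beacon whose hostname matches a known sandbox naming pattern."""
    return any(p.match(hostname) for p in SANDBOX_NAME_PATTERNS)
```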
We went through all the different classification categories that those domain classifiers have, we picked out the rogue ones, and as soon as there is a mapping of your domain into one of those categories, you should be getting an alarm. Another interesting area where we can spot blue team activity is internal: checks done from your implant on the internal network. Let me give you just a few examples. First of all, the good old krbtgt. Many networks still do not have an automated way of changing the password of the krbtgt account. So if you come into a network and you see that, in this case, the last password set of that specific account was in 2010, and later on in your operation it all of a sudden is reset, well, chances are big that it was a blue-team-initiated change. The krbtgt is very specific, but you can use the same thing for specific admin accounts or other accounts. It's hard to judge when a normal password change would happen, but if you have five admin accounts and all of a sudden they all changed in the last day, that's a clear indicator: if all admins start changing their passwords, they are onto something. We have a check that helps you with outputting this information, and it outputs in a format that RedELK is able to ingest. I'm not sure if we have open sourced it just yet. How about certificates? Here we do a cert check for a specific website, in this case our own website, and this check gets the data out of the certificate. It checks to see if it's being SSL-intercepted, if there's a corporate proxy doing SSL interception in the way. If this changes during an operation, then the blue team has enabled SSL interception, which they often do not do from the beginning. And it's a clear indicator that something has changed that you need to be aware of, or of blue team investigations. 
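The certificate check boils down to: record a baseline fingerprint at the start of the operation, re-fetch it periodically, and alarm on any change. A minimal sketch using only the Python standard library; how and from where you fetch is your own call, and the host would be a site you control:

```python
import hashlib
import ssl

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a certificate in DER form."""
    return hashlib.sha256(der_bytes).hexdigest()

def fetch_fingerprint(host: str, port: int = 443) -> str:
    """Fetch a site's certificate and return its fingerprint."""
    pem = ssl.get_server_certificate((host, port))
    return cert_fingerprint(ssl.PEM_cert_to_DER_cert(pem))

def interception_suspected(baseline: str, current: str) -> bool:
    """A changed fingerprint mid-operation suggests TLS interception
    was switched on (or the site legitimately rotated its cert --
    worth a look either way)."""
    return baseline != current
```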
Okay, as a summary: if we're talking about direct-traffic-based types of indicators of investigation, or maybe we should call them indicators of analysis or indicators of detection, I don't know, I don't care. If we look at the direct-traffic-based things, we can check for analyst traffic: specific Tor IP addresses, curl user agents, traffic going straight to your C2 backend. That's a clear indicator of analysis. Deflected traffic: for some reason your redirector logic has said, well, we need to deflect this to our decoy website or something else; we can make alarms based on that. Blue Coat and other specific security vendors have very specific ways of querying your infrastructure; you can have insight into that data, and later on we can have alarms on that as well, as in the examples I showed you. Funny thing is, if you know the IP ranges of blue teams, which you might learn over time: if those IP ranges are connecting to your infrastructure, even if they go to a decoy, whether they go to a deflected decoy backend or to your true C2 backend, you want to be aware of any type of blue-team IP address accessing your infrastructure, because it is immediately suspicious. And then there are generic C2 scanners and AV sandboxes. With RedELK, you will have proper insight into what those C2 scanners are doing and where they're coming from. Not directly alarm-related, but it's very helpful to have that insight. That's direct traffic. If we talk about indirect, we're talking about file hashes: we check VirusTotal and Hybrid Analysis. But there are also infra blacklists. If you have configured RedELK so that it's aware of the URLs that you use, the IP addresses of the infrastructure, and the specific TLS certificates, so the hash of your cert, we can check that data against public lists of known-bad IP addresses, URLs and certificates. And the domain classifiers I already showed you.
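The direct-traffic indicators boil down to a small classifier over redirector log fields. A minimal sketch, assuming you parse out source IP, user agent, and the backend that served the request; the example range, user-agent prefixes, and label names are made up for illustration:

```python
import ipaddress

# Illustrative indicator data, not RedELK's shipped rule set.
BLUE_TEAM_RANGES = [ipaddress.ip_network("203.0.113.0/24")]  # placeholder range
SUSPECT_USER_AGENTS = ("curl/", "wget/", "python-requests")

def classify_hit(src_ip: str, user_agent: str, backend: str) -> list:
    """Return the indicator labels that apply to one redirector log line."""
    indicators = []
    ip = ipaddress.ip_address(src_ip)
    if any(ip in net for net in BLUE_TEAM_RANGES):
        indicators.append("blue-team-ip")
    if user_agent.lower().startswith(SUSPECT_USER_AGENTS):
        indicators.append("analyst-tooling-ua")
    if backend == "decoy":
        indicators.append("deflected")
    return indicators
```

Anything tagged with one of these labels can feed an alarm; the blue-team-IP check matters regardless of whether the hit landed on the decoy or the real C2 backend.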
Looking at the internal side, we can check for the password resets and TLS interception. Another one that we have already included is the security tool check: an unexpected change, say a new EDR tool being installed, is a clear indicator that all of a sudden they have done something, that there is more investigation. We are working on checks for log forwarding, security config, as well as account logins. For log forwarding: if there is a change, if all of a sudden WEF is being enabled or Winlogbeat is being enabled, things like that, it's a clear indicator that something has changed that you need to be aware of and that most likely is related to blue-team activity. We can check for very specific security config changes. This is a very broad topic, but it is a category that needs to be mentioned: if you have an implant doing some checks, you can check many different security parameters, local security policies, and if those change, well, why would they be changing during an operation? And then there are unexpected changes in the accounts that are logging in. If you land on a box and you go through the Windows Event Viewer for the past 10 or the past 100 logins, you will most often see a clear pattern of what accounts log into this machine. If all of a sudden you see service accounts logging in, or all kinds of different accounts, that is, in my view, an indicator that you're not as stealthy as you think. How do you get started? I need to tell you a few things. On the planning side: a RedELK installation is intended per operation. It could hold multiple scenarios, but it's for one client, if you like. Do not mix clients. Do not make a central RedELK server where you put everything together. You want a fresh system because it contains highly confidential data, and you want a fresh system after the operation.
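The logon-pattern check could be expressed as a comparison against the host's recent logon history. A hedged sketch; the `min_seen` cutoff and names are my assumptions, not the talk's actual check:

```python
from collections import Counter

def unexpected_logons(history: list, recent: list, min_seen: int = 2) -> set:
    """Compare recent logon accounts against the host's logon history
    (e.g. the last 100 logon events pulled from the event log): any
    account not seen at least `min_seen` times before is flagged."""
    seen = Counter(history)
    return {acct for acct in recent if seen[acct] < min_seen}
```

A service account or unfamiliar admin account suddenly appearing in the result set is the "you are not as stealthy as you think" signal described above.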
It has three main components: the RedELK server itself, a connector that you install on your C2 servers, and a connector that you install on your redirectors. There are several important identifiers used during the operation, and you want to make them clear at the beginning of your installation: the attack scenario name that you use, as well as the component names. The component name is relevant for the C2 servers and redirectors, and the attack scenario as well. Those are also parameters that you need to pass to the installer on a C2 server or a redirector. An important thing to know, and I mentioned it before, is that the default logging of Apache or HAProxy or any other type of reverse proxy is not sufficient. On the wiki on GitHub, in the blog post series, and in example configs in the RedELK code, we've shown how you change the logging, specifically for Apache, to include header logging and the explicit names of the frontend and the backend. Things like that are already in there, but you need to enable it, otherwise RedELK is blind to traffic data. Then you do the installation: you get the release on GitHub, or you can just try the master branch, whatever you like. There is a first step of creating certificates and the installation packages for the ELK server, the C2 servers, as well as the redirectors. You do that with the initial setup: you generate certificates that are used for the transport of the data from those other components back to the RedELK server. It's TLS encrypted, and you need to configure that. From there on, you run the installers on your redirectors, on your C2 servers, as well as on the main RedELK server. And, very important, there are post-installation configurations that need to be done, and you will find those on the ELK server at the specific path that you will see there.
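To make the logging point concrete, an enriched Apache access-log format could look roughly like this. This is an illustrative fragment only, not RedELK's exact shipped config (use the example configs in the RedELK repo); the format name, log path, and the `BACKEND_NAME` environment variable are placeholders:

```apache
# Illustrative only: log client headers plus explicit frontend/backend
# names so RedELK can enrich and correlate redirector traffic.
LogFormat "%t frontend:myfrontend backend:%{BACKEND_NAME}e ip:%a \
useragent:\"%{User-Agent}i\" host:\"%{Host}i\" xff:\"%{X-Forwarded-For}i\" \
request:\"%r\" status:%>s" redelkstyle
CustomLog /var/log/apache2/access-redelk.log redelkstyle
```

The point is that the stock combined log format lacks the headers and the frontend/backend labels that the traffic alarms depend on.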
There you enable specific alarms and other kinds of things. It's all explained in the documentation that we have, both on GitHub as well as in the blog post series. A little bit about the roadmap. Version one, which was 2018, 2019 I believe, focused mainly on oversight and not very much on the alarms. We had support for Cobalt Strike, HAProxy, as well as Apache redirectors. Version two we've been working on since 2019, I believe. It's still in beta stage, but almost there. It's a major improvement, both in the setup, but mostly in the types of alarms, as well as the ingested data and supported tech. So lots more C2s supported, PoshC2, but also NGINX and our own C2 framework. And of course, like I mentioned, we have the integrated Neo4j and the Jupyter notebooks. It's in constant development: more alarms, more improved dashboards, things like that. We're also working on support for other C2 frameworks, and you're welcome to join us if you like. In summary: we believe red teaming exists to make blue teams better. That's why we do proper sparring, and having insight into the movements of your opponent during a sparring fight actually makes you better. RedELK helps you with that specific case: it will help you see the activities that the blue teams are doing. And dear blue team: think of your OPSEC. You can make use of RedELK too; you will find it on our GitHub, and you will find information about this on our blog as well. And with that, I would like to thank you for your time.