So, now it's 1 o'clock and with all that setup time, we're good? Yeah? Right. We're good. So, it's my pleasure. Jay, you've presented here before. Actually, in this very area; it seems like yesterday, 2016. So, without much ado, it's my pleasure to introduce Jay DeMartino of Fidelis. Thank you, guys. Thank you. So, we've all kind of been there, most of us who do SOC work or malware analysis, anywhere you end up writing rules, right? I'm going to give you a methodology, or at least my methodology, of how I write rules and how I go about things. You don't have to adopt it; I don't care if you do. Hopefully it makes things better out there and we can all rise up and start crafting good detections. My name is Jay DeMartino, head of detections and countermeasures for Fidelis threat research. That's just some fancy title my boss gave me. Really, I'm just the rule bitch, right? That's pretty much how it goes. There are some prerequisites to this talk. I do a lot with regular expressions and with YARA, so all my examples are going to center around those two. If you're unfamiliar with YARA, the talk Ming just mentioned that I did back in 2016, "To Catch an APT," goes through more of the syntax, so if you want to learn it, I definitely suggest going back and watching that. Regular expressions? I have good days and bad days, and if you're not doing them every day, you will too. So, when you're writing detections, the real problem is that you and your coworkers are building this structure together and it's very complicated, right? Sometimes, like the guy in the picture, one of you is looking down at what his coworker is doing, going, "What are you doing? I'm up here," and it's all disjointed. There are a lot of complications. It's never really easy, right?
Because malware lives at all different layers of the stack, and the attack surface is at all those layers too. There's no one easy thing over one protocol, whether it's the file layer, the network layer, or even metadata. So what happens when you two have been working alongside each other and all of a sudden your colleague leaves the company? It's a normal thing; everybody leaves eventually. It's "Hey, cheers, we're going to throw a party for you," and "Yeah, man, thanks for everything, we're going to stay friends." And then Monday comes around, you look at one of his detections, and you're like, holy..., right? I kind of equate it to Monopoly. You draw the Chance card, and everybody wants to collect $200, right? Everybody wants that. But in this case, no, you've got to pay the other players. You're taking your lumps. Like I said, I'm the rule bitch, so I learned this the hard way. So that being said, how do we rise to the occasion? I'm going to throw in another prerequisite I probably should have mentioned earlier: a little bit of set theory, which most of us probably learned in computer science. So, yes, I'm going to give you some Venn diagrams. It was the best way I could find to convey this. You have your universal set; that's the set of everything going on, whether on the network or the endpoint. And then you have your detection target. What are you trying to signature? I call it a target, and you're trying to pick that needle out of the haystack. When you finally go for it and write your detection, you end up getting this other set of events. I really don't have a better name for them; they're just another set of events.
We mostly know them as false positives, though, right? But they're not exactly all false positives. They're just some random other events, some of which happen to overlap with our target, and those overlapping ones are the false positives. So you have your red set on the left-hand side: the target, your detection. The blue set is your "other." It may not always be large; your others, your false positives, could be a lot smaller. And the purple area in the middle is what you're trying to eliminate. So how do you do that? I call it shrinkage. We want to shrink that other event set, because if we can shrink it, we can make the overlap, the intersection of the two sets, smaller. The typical way people do this is with some sort of "and not": "Hey, I've got this detection... oh, and not my domain controller. Oh, and not this file that passes from this person to this person every day via a cron job." So you end up with these one-off exclusions, where one set is a subset of your other, then another subset of that, then a third subset of that, and you get these long daisy chains of "and not" clauses where you're just filtering things out. Those are somewhat maintainable, but not really; at least, they can get out of hand if not done properly. So how do you shrink your false positives? Like I said, there's the "and not" method, but once you do that enough, your other set gets really, really small and, like I said, hard to maintain, while your target set still stays large.
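The "and not" daisy chain he describes can be sketched in a few lines of Python. This is a minimal illustration, not his actual rules: the detection pattern, host names, and allowlist entries are all invented for the example.

```python
import re

# A broad detection whose hits are then pruned by a growing allowlist --
# the "and not" pattern. All names and patterns here are hypothetical.
DETECTION = re.compile(r"beacon\.php\?id=\d+")
ALLOWLIST = [
    re.compile(r"^dc01\.corp\.example$"),    # "...and not my domain controller"
    re.compile(r"^backup\.corp\.example$"),  # "...and not the nightly cron job host"
]

def alert(host: str, request: str) -> bool:
    """Fire only if the detection matches AND no allowlist entry matches the host."""
    if not DETECTION.search(request):
        return False
    return not any(a.match(host) for a in ALLOWLIST)
```

Each new exclusion is another entry in `ALLOWLIST`, which is exactly the maintainability problem he's pointing at: the list only ever grows.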
So even when there isn't much of an overlap, it's enough to create friction; it takes time out of your workflow to go through those false positives, and you just don't want to deal with them anymore. We've manipulated the other set, so what about taking a look at the target set? Back to shrinkage: we're going to try to shrink the target set, your detection. How do we shrink that attack profile? And yes, you can do it. One situation where the target set grows over time is false negatives. You find new activity all the time, and your target set just keeps taking on more subsets: you have target one, and now you have target two, some mutually exclusive set that's related to the first but doesn't share indicators with it, and then you have to merge the two together. It's literally a growing problem: two sets converging, your combined target getting bigger, your other set maybe or maybe not getting smaller, and it just stops being maintainable. I call it the "one rule to rule them all" mentality, the Lord of the Rings effect, I guess. Analysts like to write these catch-all rules, these really large monolithic detections, and like I said, they become unmaintainable. So you've got to know when to separate or chain your detections, and sometimes you drop or ignore some of the true positives and pick them up with another detection, either a lesser-confidence or a lesser-severity one. That way you can shrink the target set a little bit. Let me give you an example.
So, at one point in my career, I inherited this network YARA rule for Gh0st RAT, and it was literally 100 different beacon strings being targeted in the one rule. Look at the condition at the bottom and the number of beacon strings: the first for loop goes from a to z, then there's a second for loop from a*, a third from b*, and a fourth from c*. We'd enumerated the alphabet going on the fourth time; that's how many variables those detections took. So what if one of them starts going crazy? How do you adjust something that monolithic? You literally have 100 indicators of compromise in your target set. And we all get defensive about our rules as analysts. They're our babies; we curate them. There's a bit of a "my precious" effect. We don't want anything to happen to them, but the life cycle shows we have to do something about it. So let me give you some more concrete examples. How do you do it? You separate into multiple detections. You take the full target set and split it into target one and target two, with a couple of other subsets in between, and now your sets just got smaller, right? The original was a lot larger; we've separated it out, and now we have two smaller sets to deal with. One set may not be giving you problems while another is.
But at least when you do it, you've done all your analysis up front, so if the T1 set is giving you problems, you don't have to touch the T2 set when you go in and make your changes. And you can slice it even further; the sets become even smaller, and there's a lot less management involved with them. Just as a warning: I am including some rules, so I'm going to throw a bunch of walls of text at you. There are a lot of good tidbits in these next few slides, so I suggest coming back to the recording; you're not going to ingest all of this live, especially the syntax I've got going on in some of these. You'll need to take a second to parse through it. The main things I want you to focus on for the next couple of slides are the constructs and the naming, and I'll walk you through the rest. So, one thing you can do with YARA, at least, is separate by condition. In that Gh0st rule I mentioned earlier, I noticed there were different lengths in our indicator set, so I separated them and grouped them by condition. Grouping by condition makes your rules easier to maintain. In these examples I show three rules, but I actually had up to 10 for all 100 different indicators. So I had 10 different rules to manage, but it was a lot easier: if one was generating false positives, or I needed to change a confidence value, I could focus on that one rule and not even worry about the other nine.
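The split-by-length idea can be sketched generically: bucket one monolithic indicator list into per-length groups, each of which becomes its own independently tunable rule. The beacon strings below are invented placeholders, not the real Gh0st indicators.

```python
from collections import defaultdict

def group_by_length(indicators):
    """Bucket indicators by string length -- each bucket maps to one rule."""
    groups = defaultdict(list)
    for ioc in indicators:
        groups[len(ioc)].append(ioc)
    return dict(groups)

# Hypothetical beacon strings standing in for the real indicator set.
beacons = ["Gh0st", "ABCD", "LURK0", "HTTPS1", "Eyes1", "xyz123"]
rules = group_by_length(beacons)
# rules[5] is one rule's string set, rules[4] another's, and so on:
# if the length-5 rule starts false-positiving, the others stay untouched.
```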
So the Gh0st beacons, I split them up: length four got one little indicator, length five had 85 different indicators of compromise, and length six had three. So how would you do the same separation in regular expressions, since not everybody does YARA? I'll let you chew on this regular expression for a second while I take a sip. This one is looking for landing pages in man-in-the-middle attacks against PayPal, Yahoo, Hotmail, PowerPoint Online, Word Online, so Office 365 and the like; all these landing pages trying to man-in-the-middle you and grab your credentials. With regular expressions, you're just grouping the IOCs with the OR clause, pretty much. That's how you'd do the equivalent of that YARA rule in a regular expression: a totally different detection, but the same idea, the same concept. Another thing is that you can separate by the indicators themselves. I talked about the condition statement in YARA, but you can also separate by grouping the indicators of compromise. Here are four different rules for strings out of an APT3 binary. Notice the hash is the same in all four rules, but I have four different string rules. Remember what I said earlier about one rule to rule them all? That's what most analysts do: jumble everything into one YARA rule with some complex expression at the end of the condition saying this grouping or this grouping or this other grouping. But if you break them out into different rules and are very selective with your rule names, you can bring a lot of value to your analysis.
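The OR-clause grouping he mentions is just regex alternation. A minimal sketch in the spirit of that landing-page rule, with the brand list and surrounding context entirely illustrative (this is not the actual regex from the slide):

```python
import re

# Alternation groups several IOC phrases into one detection, the regex
# equivalent of listing strings in a YARA rule. Phrases are made up.
landing = re.compile(
    r"(?:PayPal|Yahoo|Hotmail|PowerPoint Online|Word Online)"
    r"[^\n]{0,40}(?:Sign in|Login)",
    re.IGNORECASE,
)

def is_phish_landing(title: str) -> bool:
    return landing.search(title) is not None
```

The trade-off is the same one he describes for YARA: the more brands you pack into one alternation, the harder it is to tune any single one of them later.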
When you start separating your detections like this, you start augmenting your analysis. You can say, hey, this one binary hit on one of these four string rules for APT3; maybe that's not an APT3 binary. But this other binary hit on three of the four, so it's more than likely APT3. And then: wait, why didn't the fourth rule hit? Maybe there's a shift in your detections; maybe you're detecting a shift in their tooling. Their binaries are changing, they may have a different campaign going on, a different person on the team is compiling these, or they fixed their misspellings. If you look at the groupings: I have some regular strings, just very unique strings, then I have network GUID strings. And notice that the first string rule says "any of them"; that's basically the OR clause. Any one of those strings is unique enough to stand on its own; you don't need any supporting strings to make your detection. There are two rules with "any of them" and two with "all of them." The rules where the condition says "all of them" are your lesser-quality indicators. You can still group them, but you need the strength of all four indicators of compromise together to give you that confidence level on your hit. Then there are output strings with bad grammar; I'm even calling out bad grammar. People make mistakes in spelling, and granted, there's a lot of code reuse out there.
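The "any of them" versus "all of them" distinction maps directly onto Python's `any()` and `all()`. A small sketch, with every indicator string invented for illustration (none are real APT3 IOCs):

```python
# Strong indicators stand alone ("any of them"); weak ones only carry
# weight as a complete group ("all of them"). All strings are made up.
STRONG = [b"c2.example-apt3.net", b"{DEADBEEF-0000-4000-8000-000000000000}"]
WEAK = [b"connect ok", b"recv fail", b"open file", b"exit now"]

def classify(data: bytes) -> str:
    if any(s in data for s in STRONG):   # YARA condition: any of them
        return "high-confidence"
    if all(w in data for w in WEAK):     # YARA condition: all of them
        return "lower-confidence"
    return "no-hit"
```

Counting how many of the separate rules fire on a sample then gives you the "three of four hit, what changed?" signal he describes.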
When you see all these mistakes, especially grammar mistakes, you can generally attribute them to one person, but if you start seeing more and more of them across a lot of different compilations floating around, there's probably some code reuse going on as well, and that's how you'd detect it. Then there are more output strings; the output of this tool was their APT remote command tool. I can't remember if I wrote these rules. I probably did. I didn't put my name on them; I guess I should have, whether I was the author or not. But it's a good grouping, a good representation of what I'm trying to convey about splitting up your detections to make them more manageable. Let's see, one more wall of text and then we'll get back to other stuff. So we've talked about grouping by conditions, and grouping by indicators of compromise or string sections, or, with regex, using your OR statement. For the next one, you still want to create multiple detections, but now using multiple detection methods. I have three rules up here that are all looking for the same technique: a very simple embedded executable. That's it, nothing fancy. Well, maybe the detections are a little fancy, but there's nothing fancy about the technique I'm trying to detect, right? And like I said earlier, the more rules you have, the more you can augment your analysis, so you can save this time up front just by running YARA or your regular expressions and letting those automated scans do your triage for you.
When you have a large enough rule set to scan with and do triage, then without even throwing the binary into a hex editor, IDA Pro, or even a text editor, you can run it against your rule set and get some sort of disposition. It might be benign, though you may still need to verify that. It might be suspect: it's doing all these techniques, but nothing attributable is going on. Or you can get a malicious disposition right off the bat if you have an attributable signature, and then you can say, hey, this is definitely malicious, I can attribute it to this group, how much more time do I need to spend on it? I can move on to the next binary. So, these are three different rules targeting embedded executables. One of them looks for more than one DOS stub: the 16-bit DOS stub that sits just after the MZ header in a 32-bit Windows PE file. So it looks for multiple DOS stubs. Another looks for a possible PE structure where the DOS stub may have been stripped. And the third looks for an additional DOS stub past the initial DOS header: what happens if the outer binary doesn't have a DOS stub but the embedded binary does? So those are three different techniques for catching embedded executables with these rules. Next I want to talk about some detection targeting approaches, but first let me take a step back. We're all on the hunt, right? What was that Black Hat talk, 2015, big game hunting? We're all on the hunt for big game malware. And the original big game was Jaws, right? So as I went through all my data, I realized some things about how I hunt as a malware analyst. I'm always on the hunt for that big game.
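The "extra DOS stub" heuristic is easy to sketch: a normal PE carries one "This program cannot be run in DOS mode" stub, so a second occurrence later in the file suggests an embedded executable. This is a simplified Python version of the idea, not his actual YARA rule, and it only covers the multiple-stub case, not the stripped-stub variant he also mentions.

```python
# Flag files containing more than one DOS-stub message -- a hint that a
# second PE is embedded somewhere past the outer binary's own header.
DOS_STUB = b"This program cannot be run in DOS mode"

def has_embedded_pe(data: bytes) -> bool:
    first = data.find(DOS_STUB)
    if first == -1:
        return False  # no stub at all (or it was stripped -- separate rule)
    return data.find(DOS_STUB, first + len(DOS_STUB)) != -1
```

His point about layering methods applies here too: this rule, the stripped-stub rule, and the trailing-stub rule each miss cases the others catch.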
I can make an argument that this is bad, any day of the week, for just about any random file, because of the context you give it to me in. And I can do that a couple of different ways, with unique code DNA: either very unique strings or unique byte sequences. That unique DNA lets you track families and go hunt. But I noticed when hunting that the rules I was producing, based on the data I was looking at, had a bias. Like I said, given a limited scope of view, I can make an argument for anything to be malicious, and without realizing it, my detections were becoming biased too. We're hunting for that shark, for Jaws, and we see the fin come out of the water. But then we look under the water, we see the whole scope, and it's just some guy wearing a shark fin. And you're confused: wait, I thought this guy was bad. Why is he not bad? But it's just some guy with a shark fin strapped to his back, once you've seen the whole picture. I noticed this because enough customers were telling me, "No, this is good," and I'd say, "No, this is bad," and we'd go back and forth. So I noticed the bias in my detections: a lot of them were written from a malware analyst's point of view. And I got to thinking, what would a network admin's detection look like? How would they write a detection? As malware analysts, we're looking out from a sea of bad and ignoring the good, while the network admins are writing their detections the other way around.
They're writing detections from a sea of good to highlight the anomalous bad. They're seeing almost an inverse view of what we see as malware analysts. So how do you write a malware analyst's detection in regex? I showed you the earlier one with the DNA, the unique sequences. Now here's the other side. I had a customer who had these custom mail headers, and they were getting mail from all over the world. He was like, look, I know this is bad, it's coming from this country or that country; how do I detect this stuff? So I started writing these regular expressions, and it quickly got out of hand for him. He's like, you showed me how to do this, but now look, we've got seven different countries and new ones popping up every other day. So we can't really take that approach, or at least the network admins can't. I had to flip it and go inverse. How would I do that? He said, really, we only do business with these three countries; can you write me a signature for "not these three countries"? Now, I know the DoD has Five Eyes and all that, but smaller companies know they don't have an office in Madagascar, or in Nigeria. They may be related to a Nigerian prince, but they don't have an office in Nigeria, right? And then they had these other headers coming in from something upstream of them that was marking all this mail for us, and I'm like, man, this is a gold mine.
He said, look, I want all these different detections based on the presence, or the non-presence, of these strings. So I went the regular expression route, and when I inverted it, I came up with negative lookaheads. They're a little complex; like I said, if you want to read into them, come back to the talk afterwards and study the slide. It's all about creating a set to match, then a set to not-match on, and then another matching set. It gets a little hairy, but I definitely suggest going to a site like regex101 to help you build your regular expressions. I came up with these three detections for the three different X-headers in the situations he wanted, and it worked out great. He's like, man, this is awesome, I don't have to do any maintenance on these detections anymore. It did everything he wanted to accomplish, and that came from taking that inverse view. So we've talked about malware analysts' detections, the bias they can create, and how a network admin has a different view. But how do you get longevity out of your detections? I have a technique that I call, for lack of a better description, circular detections. I'll write a couple of rules based on a few things, and those two target rule sets are actually exclusive: they share the sample, but they don't share any of the same indicators of compromise. So I'll create these circular detections. I have rule one, and it detects a piece of malware. I have another rule, rule two, that I built off of my original source. Rule one detected this malware, but rule two didn't trigger. We know it's malware, so I analyze it, and from that analysis I get an update to rule two: a new indicator of compromise that I didn't already have from my first original sample.
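The inverse approach he describes hinges on the negative lookahead, `(?!...)`. A minimal sketch in Python's `re` flavor: the header name and the three allowed countries are hypothetical stand-ins for his customer's real X-headers.

```python
import re

# Alert on any X-Origin-Country value EXCEPT the countries the business
# actually deals with. Header name and country codes are invented.
# (?! ... ) succeeds only when its contents do NOT match at that spot;
# \b stops "US" from also excluding longer values like "USA".
suspicious = re.compile(r"^X-Origin-Country:\s*(?!(?:US|CA|GB)\b)\S+", re.M)

def flag_header(header_line: str) -> bool:
    return suspicious.search(header_line) is not None
```

The maintenance win is exactly what his customer saw: new "bad" countries need no rule change, because the rule only enumerates the short, stable allowlist.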
Most people who do a lot of VirusTotal or ReversingLabs hunting, a lot of retro hunts, are used to this: pivoting out and finding the broader campaign. It's a common thing. But I want everybody who's not a malware analyst or malware hunter to learn this stuff too, and that's why I'm telling you about it. So how do you do these circular detections? My first round here is actually a throwback to my old talk: the payload names. I found these rare little payload names in these small DLLs that were getting loaded up, and they had some unique names. Now, that MSI name may not be unique on its own, but it's kind of unique in the context that it's a DLL and it's only 11K, right? And then I came up with a string de-obfuscation routine. So I had the payload name that was embedded in the loader, and I had a de-obfuscation routine that was also embedded in the loader, but it de-obfuscated the payload, not the file's own strings. So I run it, do a retro hunt, find a bunch of other files, and then on round two I find some new payload names, so my set of indicators of compromise just got larger. And they all stand apart from each other, so each can be a detection on its own. I went from having two payload names to eight. And instead of having just one byte sequence for the payload de-obfuscation, I now have three; I've only got two rules up there, plus the previous one, so I ended up with three. I keep doing these iterations, and I keep finding more intel and more malware. In the end, I wound up with something like 27 payload names, some of which may not even have been PlugX payloads. So they were using a similar delivery method, or at least I suspect they were, for a couple of different pieces of malware.
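One iteration of the circular-detection loop can be sketched abstractly: two rules that share the family but no IOCs; whatever rule one catches and rule two misses gets analyzed, and the new indicators feed back into rule two. Everything below is invented scaffolding to show the shape of the loop, not his tooling.

```python
class IocRule:
    """A trivially simple substring-matching rule (a stand-in for YARA)."""
    def __init__(self, iocs):
        self.iocs = set(iocs)

    def matches(self, data: str) -> bool:
        return any(ioc in data for ioc in self.iocs)

def circular_hunt(samples, rule_a, rule_b, extract_iocs):
    """One round: hits of A that B missed get analyzed; B's IOC set grows."""
    for sample in samples:
        if rule_a.matches(sample) and not rule_b.matches(sample):
            rule_b.iocs.update(extract_iocs(sample))  # "analyze" the gap

# Hypothetical round: rule A keys on a de-obfuscation stub, rule B on
# payload names. A new payload name surfaces and is folded into B.
rule_a = IocRule({"deobf_stub_bytes"})
rule_b = IocRule({"loader1.dll"})
samples = ["deobf_stub_bytes ... payload=payload2.dll"]
circular_hunt(samples, rule_a, rule_b, lambda s: {s.split("payload=")[1]})
```

Run again with the grown rule B against a fresh retro hunt, and the roles flip: B's new names find samples A's byte sequences missed.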
And as far as the byte sequences go, I came up with 11 different byte-sequence rules. At any one time, when a file was scanned, maybe five out of ten of those byte-sequence rules would trigger on it. If it only got two out of ten, maybe I'd take a closer look; but if I got a high number of matches, I'd just roll with it and update as I went along. So, I'm all about stories, and my next one is a story, because we're all in the trenches, right? I have a buddy of mine, Fish is his name, and he calls me up one day: dude, we got hacked by the Russians. I'm like, what? He's like, yeah, we got hacked by the Russians. I'm like, no, we didn't, dude. All right, we'll take a look at it. And he's all excited. It was the day before Thanksgiving, and I'm on vacation. It always happens on a Friday at four o'clock, right? It always happens on the weekend, or when you're on vacation. So we called it Operation Turkey Eve, because it was the day before Thanksgiving. He's calling me up, we get on the phone, we're talking back and forth, and I say, walk me down the path of how you arrived at this. So he walks me down this path. But I'm a guy who's got to look at the data; I can't just take somebody else's word over the phone. One of the big things he pointed me to was that the file in question had a hit in VirusTotal, and it was tagged by the THOR APT scanner as Fancy Bear, which is supposedly Russia, right? Whether it is or isn't, I don't know, but we'll just say it is for the sake of the story, because he's like, yeah, we got hacked by Russia. So it's tagged Fancy Bear.
And it says MAL. MAL, Fancy Bear, Computrace Agent. So I'm looking at these files, and they look like malware. They really look like malware. The loaders are doing crazy stuff: using an undocumented API, an arbitrary binary loader, things you wouldn't normally do. And I'm tearing through it, tearing through it, and for a good three hours I'm thinking, maybe we did get hacked. Maybe we did. But I've got to see this through to the end. By this time it's about 12 o'clock at night, the day before Thanksgiving; I've got to get the family up the next morning, but I'm not giving up. And the good thing is that the person who wrote the rule, or at least whoever I traced the rule's source back to, actually practices a lot of what I preach: a good naming convention, good complete descriptions, and sourcing where your information came from. I'm not going to say the author's name, because I actually respect him a lot, and we're all human; we all make mistakes. The rule came from a report from Arbor Networks. So I went back to that Arbor report and read it, probably four times, really, going, what is going on here? At the end of the report, they give a YARA rule, and I'm looking at it and thinking, this looks similar to the YARA rule the THOR APT scanner used to make this attribution. And the name of that rule, and hats off to Arbor here too, they do good naming conventions and descriptions, though they didn't source anything, but that's fine; the name of their rule just says Computrace Agent. And I'm like, well, wait a second.
So I had to dig more and more, and it turns out what was going on was that Fancy Bear was taking this Computrace agent and patching the beacon, the domain beacon. That's all they were doing. The difference between a normal Computrace agent DLL and the Fancy Bear one was only 28 bytes, and it was XOR'd, so it was obfuscated. And since it was obfuscated, you can't predict what those domains are going to be. So how do you signature the unsignaturable, right? There's no real code in there attributable to Fancy Bear; this was a Computrace binary. It took me a while to tear it down and come to that discovery. I did a byte-by-byte comparison, and finally I was like, no, we didn't get hacked by the Russians. We just have one box on the network running this Computrace software; let's go get the IT department and clean it up. And to the THOR APT scanner's credit: this was the day before Thanksgiving, so November 2018, and in March 2019 the author actually renamed the rule. He changed the first three letters from MAL to PUP, which is "potentially unwanted program." Even I, a seasoned guy, had to ask a colleague, what is PUP? But what difference does that make? He still has the Fancy Bear attribution in there, and that rule, once you look at it, is hitting on all these Computrace binaries in VirusTotal, and maybe it's running in ReversingLabs as well. So anybody who looks up a Computrace binary in VirusTotal is going to get a false attribution of Fancy Bear for that binary, and then they themselves will think they got hacked by Fancy Bear.
So you see the domino effect of just doing things properly: taking your time, making multiple detections, using proper naming conventions, slowing it up a bit, and not having that one-rule-to-rule-them-all mentality. And that's my story. Does anybody have any questions? [Audience question.] I would say, yeah, maybe you can build some automation into it and look at ratios. When I do my circular detections, I do a lot of bytecode optimizations, and I'll look back at the assembly; take a look at my previous talk, where I cover a little of that. But where do you stop? Most of the time it's up to the analyst, because most of us, let's be honest, are hand-jamming things when we probably shouldn't be. A lot of SOC people are just getting things done on a whim, and if they take an hour out of their day to write a script, they may never use that script again or build any automation around it. But with the rules you're building, you can do some triage up front. So where do you stop? It is up to the analyst; you've got to find that line within your workflow, I'd say. If you're a malware shop with a much bigger automated workflow, then I'd say you need to give it some thresholds. Say, I've got some malware that in my eyes is pretty rare, given whatever source it came from; if I get a hundred hits on the rule, I'll just stop right there. Because if it's a high-confidence, very targeted sample, why would there be a hundred hits out there? Or a thousand; you can bump that threshold up to a thousand, maybe.
Once you get to that level, you can say, okay, this is a lot of noise. And once your rule set gets more mature, you can leverage more of the rule set to give you that gut feeling, so you're not always relying on the raw numbers. Does that make sense? At least, that's my point of view. Any other questions? I know it was a lot, guys, and I thank you, and thank you to all the people who stayed. A lot of technical stuff; go back and watch it, there's a lot of text in there. Chew on some of those YARA rules. If you haven't played with YARA, play with it; it's a great thing. Regular expressions too. So, yeah. Thank you, guys.