Welcome back and thank you, Tim. Thank you for supporting DEF CON. Thank you for supporting the Red Team Village. The floor is yours. Take it away.

Hi, thanks. Good morning, good afternoon, good evening, wherever you are in the world. Welcome to my session: all of the threats, intelligence, modeling, simulation and hunting through an attacker's lens. Just to put a little bit of background behind this at a very high level: ATT&CK rocks. But what happens when ATT&CK doesn't give you the information you need?

So, introduction. TL;DR: who I am, and what the plan for this presentation actually looks like. I'm not a data scientist. I'm not aligned to either blue or red; I do various activities on both sides of the fence. And I fully acknowledge that this isn't a solved problem. This is my stab in the dark at trying to get somewhere with it.

So, who am I? I work at Cisco. I've done various roles. I've done assessment work, I've done threat intelligence, I occasionally help out with incident response when we get weird situations, and indeed I've also done my fair share of spreadsheet filling as an auditor. I am a technical person. I like bugs. I like UNIX in particular. And I like exploiting stuff. Probably the most recent stuff you can go and find is Linikatz, Mimikatz for UNIX: what happens to Active Directory on the UNIX estate.

The plan for this presentation: I'm going to give you a little bit of background. I'm not presuming that everybody knows the difference between threat intelligence, threat hunting, threat simulation, et cetera. I'm going to talk a little bit about building bespoke threat models. I'm going to talk about how we can use data we already have, and how that translates into the real world. I'm going to look at some of the stuff that we do from an offensive standpoint and how it maps into things like ATT&CK. I'm going to talk about how we can use this kind of information to improve our threat models, and hopefully I'm going to finish up with some recommendations and conclusions, and it'll all make sense, and you'll go away and everything will be great.

So, background. The concept was that we need to bring all of these different disciplines together. We have people that do threat intelligence. We have people that do threat modeling. We have people that do simulation, and people that do hunting. ATT&CK gives us a common language for all of those things, but does it go far enough, and does it help us with, I guess, the business systems? So that's kind of where we are. As I said, bring the five disciplines together. Most of the people on this, I'm guessing, will be involved in at least one of these. Maybe you're doing recovery work when breaches happen. Maybe you're doing detection work, sitting in a SOC. Maybe you're the person that's meant to be responsible for keeping everything safe. We're all in this together.

So, an ideal approach. Ideally, we'd all have a good understanding of targeting, hypotheses and how we validate them. We'd understand actors, TTPs, assets. We'd be able to draw out hypotheses about how actors get into networks, what they do once they're in networks, and what they're looking for in terms of actions on targets, what they're looking to get away with. Hopefully, we'd have a common way to talk about that from a validation standpoint, either in terms of posture or telemetry, or indeed any of the other mechanisms that may or may not exist.

So, threat intelligence.
That's all about identification: understanding emerging TTPs, understanding what malicious behavior looks like, collecting and enriching that, and providing situational awareness to your organization, to those people that you work with.

Threat modeling. The mission there, I guess, is to describe the assets. So take that information that threat intelligence gives you, augment it, build kill chains, map attack surfaces, talk about vulnerabilities or weaknesses and how they inform that threat model, understand the motivation, understand the impact of attacks.

Simulation. There are various different classes within that: you've got simulation, you've got emulation, you've got traditional penetration testing. But essentially, that's looking to take that information about what the threat model looks like, that information about what threat actors are out there, and recreate it so that your blue team, your defenders, can understand it, can test their telemetry, can test that they can pick it up, and hopefully so they can improve either the identification or the automation of the response to those threats.

Hunting. That's, I guess, the final part of the jigsaw from my perspective. That's about understanding what threat intelligence and threat models look like on the ground. When an attacker has got in, what are they doing, what does it look like, and how can you go and find them and hopefully kick them out? So it's about understanding posture, it's about understanding telemetry. It's about reading that information, identifying where attackers are actually active, and taking steps to remediate that problem.

So that's kind of a high-level view of the people I'm talking to. What of it? What should we be doing? I talked about the fact that ATT&CK is great. It's really good if you're talking about heterogeneous Windows networks, Active Directory, phishing through to cryptomining, phishing through to cryptolocking: the traditional stuff that everybody worries about on their Windows desktops. But where it doesn't, in my opinion, particularly help is when you step beyond that. Most businesses, where they actually have data, have applications that essentially drive their business, and the purpose of this talk is to look at how we take ATT&CK and leverage it in those kinds of situations.

So, building bespoke threat models. I'm going to go through what I think the requirements look like, what I think the workflow looks like, some ideas about how to use a workflow effectively, how you can apply hypotheses, some of the use cases that maybe we don't really tackle today but perhaps we ought to, some of the capability gaps that certainly exist, and why, in my head (and this is just my head), I think threat intelligence collection is really just a backstop. There are far more useful sets of information about our platforms, about our assets, about our data, about our users, that should give us a far better understanding of where threats are likely to raise their heads.

So, in terms of requirements: broadly speaking, we need to define a mission, and I will say the value of any system is the data it holds. Secondly, we need to have an understanding of what threat looks like from one perspective or another. In terms of building hypotheses, well, we need to understand what the organization actually cares about, and we probably need access to the design.
It's very difficult to build threat models purely based on ATT&CK once you step into business systems, because you're really not matching like for like: a Windows desktop, its users, its use cases, are significantly different from an enterprise resource planning system or a data warehouse. So we need to understand what the designs look like: three-tier architectures, are they using containers, are they using microservices, that kind of thing. And then, from a validation standpoint, one way or another we need some visibility of those targets. From an intelligence standpoint, I guess third-party source evidence is kind of useful. If you're talking about simulation, well, you need to be in a position to go and actually test the systems that you're talking about. From a hunting standpoint, you need configuration, you need access to the logs. Ideally, you'd have dynamic telemetry, that is to say, telemetry that's feeding back in real time, but at the very least, the ability to see the events, the log events that are occurring in SAP, in a database environment, et cetera, is pretty key to getting this right.

So, in terms of workflow, from my perspective, I've sat in all those different nodes at one stage or other. I've sat there and looked at TTPs. I've gone and tried to simulate vulnerabilities and weaknesses to exercise attack surfaces. I've considered the impact from a governance standpoint. And in and amongst all of that, you've got to understand the motivation. But hopefully, no matter what part of the security life cycle you live in, you'll map into one of those areas fairly cleanly. And then, of course, the challenge is to hand off to the next node. And indeed, in practice, there's nothing to stop you handing across in other ways. Understanding impact often feeds back into vulnerabilities and weaknesses. Understanding motivation might actually tell you something about TTPs. So iterate effectively through it.

Understanding a platform: I talked about the fact that ATT&CK is great, but if you understand the platform, maybe you can take ATT&CK on its own and filter down to those TTPs that you're most interested in. Some similar work I've been doing recently has been around, for example, taking ATT&CK and extracting verticals, extracting regional differences, extracting particular vulnerability classes. In terms of threat modeling in a practical sense, you've got to be able to go and speak to the people. You've got to understand the roles, you've got to understand the processes. Tooling-wise, there is nothing better than pen and paper or a whiteboard, but if you want to record your information, Microsoft's threat modeling tool, others of its ilk, Visio and Excel, well, they can all help. And then, fundamentally, producing a worksheet. It doesn't really matter whether that worksheet is built from the perspective of an adversary, a red team, whether it's built from the perspective of a threat intelligence person, or whether it's built from the perspective of a threat hunter. Once you have that worksheet, you can pivot it to answer whichever question you're actually asked.

So, applying hypotheses to real-world platforms and applications. First of all, at a very high level, familiarizing yourself with ATT&CK is still useful, and you'll see this a little bit later in some of the data I draw out. Mapping out those attack surfaces: you might recognize the four I've iterated through, they're actually straight out of CVSS. It's a starting point. Vulnerabilities and weaknesses:
ATT&CK's TTPs are great, but actually CAPEC and CWE perhaps give you value that you're not necessarily recognizing today. If you're fortunate enough to have good threat intelligence: understanding motivation, understanding threat groups. If you're fortunate enough to have access to business data: understanding system value. And then, of course, impact. It's a pretty lazy way to do it, but STRIDE still helps in that regard. Fundamentally, attackers are still spoofing, tampering, causing information to be disclosed, etc.

So, use cases. In an enterprise, fundamentally, ATT&CK gets used for manual design verification, it gets used for sourcing IoCs, it gets used for telemetry configuration, and it gets used for response prioritization. There will be other use cases, and there will be sub-use cases to each of these. But fundamentally, the threat model concept is about understanding the design. It's about understanding what's likely to be attacking it. It's about making sure you've got the right security controls, and these days, with assume breach, telemetry is a key aspect. And it's about being able to respond when breaches occur, because, again, assume breach.

So, capability gaps. I've been into a fair few SOCs, either working in them or talking to them, evaluating their current capabilities, and that first one's a pretty key one in my mind. SOCs are great at dealing with the enterprise. Most people these days are familiar with ATT&CK; they do understand enterprise risks. But actually, once you start to grill them beyond that and ask them about their SAP infrastructure, their banking enterprise applications, their data warehouses, we're kind of blind to a degree. And I think that's probably one of those things that we need to tackle, and I hope that some of what I'm talking about today helps inform how we can tackle some of this. We're clearly not going to teach every SOC analyst every application that the business cares about. So making better use of the knowledge they have, giving them the tools to translate generic enterprise risks into key business risks, that's pretty critical.

Collection and routing: logs, audit events, telemetry. Depending on where you work and who you work for, you might say we're pretty good at this. You might say, I'm not quite so sure. You might even just put your hand up and say, no, we're terrible. I think every incident response engagement I've turned up to over the last five years hasn't had as much information as we would require to evaluate the system. And if you're talking about enterprise systems, here's a really good example. We did a response engagement recently where it turned out that it was actually a message queue that was being tampered with. And guess what? Their message queue had no useful logs at all: nothing that told you much about who was accessing the message queue or where they were accessing it from. And when we eventually managed to find the application logs, the development data, essentially, about how the message queue was functioning, and we found that it was logging the messages, we were so excited. But even with that information, trying to take it back to a SOC and say, actually, which of these are the legitimate events and which aren't, was a challenge, because the security organization just didn't understand that data.

Orchestration and enrichment. Key to those two previous points is getting SOCs to a point where they know what data is useful, and they know how to use it. It's easier to do with Windows.
It's easier to do with endpoint devices. But actually, if you want to kick someone out of, to go back to that earlier point, a message bus, where do you start? What data is useful? We had to tell them every step of the way, because they simply didn't know. And then, of course, you get to the question of what bad even looks like. What does bad look like in an enterprise resource planning application, in a UNIX estate, for microservices? So those capability gaps, we need to start to fill them.

But all of that data exists, which is why I argue that threat intelligence really should be a backstop. If we're not consuming the data that the organization is already generating in one form or another, and using it effectively, we're going to be forced to use threat intelligence as our only means to identify threat actors, identify attacks. But in terms of threat intelligence, even there we could be using it more effectively. Right? We know that vulnerabilities exist, for example, yet we don't necessarily use that in an effective way from a threat intelligence standpoint. We don't use it in a predictive fashion as much as we might like. We certainly don't construct the same degree of hypotheses around application vulnerabilities as we do around password cracking, phishing, etc. We need to get better at putting together hypotheses, straw men that we can start to pick off in pieces. And ultimately, from a threat intelligence standpoint, we need to get better at tracking that kind of threat intelligence in just the same way that we track TTPs for enterprise threats. An SAP system will still have that kind of information. We should still be able to work out the validity of the information. We should still be able to understand where it might affect us as an organization. We should still be able to understand why an attacker might wish to do it. We need to start asking threat intelligence teams to give us that information so that we can make the right decisions.

So that's some of the problems. Let's perhaps have a little bit of a look at some of the solutions. And here, what I was really looking to do was to map some of the more traditional concepts of threat modeling, STRIDE, Microsoft's tooling and their equivalents, into ATT&CK language, into kill chains, into the ATT&CK matrix. Working at Cisco, I have a pretty good source of information about all of the assessment work that we do, the vulnerabilities, the weaknesses, and I have the ability to extend that tooling. So I kind of wanted to say, well, actually, if we've got vulnerability data from an assessment, how do we translate that into a threat model that a SOC can use? I want to label our findings; I want to start to analyze our data in a bit more depth.

So that's our vulnerability model, fundamentally. We take all of the things you would find in a traditional pen test: the idea of scoring criticality, defining the weakness in generic terms, describing the vulnerability, describing the impact, talking about recommendations, industry references, et cetera. But on top of that, we have our not inconsequential VDB. It's internal, but it imports from Nessus and from various other sources, and our reporting engine is really driven off it. So when we report a finding, if it's a finding we've seen before, it will come from our VDB; if it's a new finding, it will end up in our VDB. So we have quite a rich historical view. I won't say it goes back 20 years; we have to clear down a lot of customer data, as you can imagine.
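To make the labelling idea concrete before we dig into it, here's a minimal sketch of what one of those VDB records might look like once ATT&CK labels are attached. The field names are hypothetical, purely illustrative rather than our actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical shape of a labelled VDB finding; field names are
# illustrative, not the real schema.
@dataclass
class Finding:
    title: str                    # the generic weakness, pen test style
    cvss_vector: str              # e.g. "CVSS:3.1/AV:N/AC:L/..."
    cwe: str | None = None        # weakness class, e.g. "CWE-287"
    attack_techniques: list[str] = field(default_factory=list)  # ATT&CK IDs
    kill_chain_phases: list[str] = field(default_factory=list)  # derived labels
    times_reported: int = 0       # the trend data mentioned above

example = Finding(
    title="Role accounts permit interactive logins",
    cvss_vector="CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H",
    cwe="CWE-287",
    attack_techniques=["T1078"],  # Valid Accounts
    kill_chain_phases=["exploitation", "actions-on-objectives"],
    times_reported=42,            # invented figure
)
```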
But certainly, in terms of the VDB portion, we have an understanding of how vulnerability trends have changed over time, and we should be able to leverage that to build more interesting analysis.

So what have we been doing? We've been automating some of our scenario generation. We've been taking ATT&CK and using pyattck, which is a Python framework from Swimlane, to start to filter the STIX output from the ATT&CK matrix. We've been starting to look at how we label vulnerabilities in our VDB and vulnerabilities in our customer reports with ATT&CK TTPs, so that we can say: we found this vulnerability on a traditional pen test, but actually it maps here into your kill chain, and these are the kinds of controls you're likely to need. Not just the tactical fixes of "yes, you should go and patch" or "yes, you should go and harden this particular service", but actually: have you got monitoring here? Is this system being protected by behavioral analysis engines, etc.? And then, of course, we can take that information and export it out, and we're using STIX for that. And then we can do cross-data sharing. So, for example, I can take vulnerability data out of our reporting engine and share it with Talos or with PSIRT or with any of the other security teams that sit within Cisco, the idea being that we can get a better understanding of what our customers' environments look like, and we can drive some of the features that we think are critical into, obviously, Cisco's products. And ultimately, we're taking some of that knowledge and using it to do more effective business risk analysis. We're starting to go to customers and actually use these techniques on their data: not on our VDB, but on their risk management systems.

So, I talked about labeling findings. This is probably a pretty trivial example, certainly with the data that we have, but broadly speaking: we're scoring our findings, we're comparing those scores against ATT&CK TTP definitions, and we're labeling findings. Then we're identifying similar findings elsewhere, so that we can validate that the scorings we're applying are appropriate, and applying labels where relevant, so that we start to build up a contextual map of how individual vulnerabilities, individual weaknesses found in an incident response engagement, instances of technical vulnerabilities found on a red team, how they all map together. Doing this is not necessarily trivial, certainly if you don't have the data to begin with. But in our instance, we've been developing plugins that give us customer views, platform views, attack surface views of our data. Like I said, we've been extending our findings to capture this information, and then we've been able to develop plugins that do that automation for us, using the mechanism shown on the previous page. And then we can start to render that information in useful fashions. In a couple of slides' time, I'll show you an example where I've taken a bunch of vulnerabilities from our VDB and, using a tool called Gephi, actually mapped them into a full kill chain, and you can start to see where the critical nodes are. This is generic vulnerability data, not red team ATT&CK TTPs, we're talking about here.

Let's stop and pause for a second. CVSS is not a shoe size contest. How many organizations out there look at CVSS as a number? They look at a 10 and they go, oh, we need to patch that immediately. They look at a six and they go, we can patch that in a little bit of time. They look at a three and they go, actually, we don't need to patch it at all. So frustrating.
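Frustrating, because the vector behind that single number is where the value lives. To give a flavour of where this is heading before I describe the real model, here's a toy sketch in Python; the phase weights are invented for illustration and are nothing like our production numbers:

```python
# Toy sketch only: map parsed CVSS v3 vector components onto kill chain
# phases, so a score becomes a shape rather than a single number.
PHASE_WEIGHTS = {
    # (metric, value) -> {kill chain phase: usefulness 0..1}, all invented
    ("AV", "N"): {"delivery": 0.8, "exploitation": 0.6},
    ("PR", "N"): {"exploitation": 0.7},
    ("C", "H"):  {"reconnaissance": 0.6, "actions-on-objectives": 0.9},
    ("I", "H"):  {"installation": 0.5, "actions-on-objectives": 0.8},
}

def kill_chain_profile(vector: str) -> dict[str, float]:
    """Turn e.g. 'CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H'
    into a per-phase usefulness profile."""
    # Drop the 'CVSS:3.1' prefix, then split 'AV:N' style pairs.
    parts = dict(p.split(":") for p in vector.split("/")[1:])
    profile: dict[str, float] = {}
    for (metric, value), phases in PHASE_WEIGHTS.items():
        if parts.get(metric) == value:
            for phase, weight in phases.items():
                profile[phase] = max(profile.get(phase, 0.0), weight)
    return profile

print(kill_chain_profile("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"))
```

The output is a shape across the kill chain rather than a single severity, which is the whole point.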
So what have we been doing there? We've been attempting to map CVSS into the cyber kill chain. One of the reasons that CVSS is quite interesting is that nobody feels fearful about sharing that data in isolation. If I go to a customer and say, I don't want anything else about your estate, I'd just like to know your CVSS scores, most organizations feel comfortable with that. And what that means is that they can give me some information which, when you start to break it down, becomes slightly more useful.

So this is a rough mapping of our model. What I've essentially got on the left-hand side is all of the kill chain phases, and then, as you progress through, I've got the aspects of CVSS that are likely to be interesting in each of those stages, and I've rated them in terms of how useful they are. So I'm able to say that, from a confidentiality standpoint, if the confidentiality impact is high, that's probably going to be useful from a reconnaissance standpoint; it's also probably going to be useful from an actions-on-objectives standpoint. And similarly with some of those other nodes in that graph.

And where you get to is this. If you take all of our VDB data, and indeed if you take Cisco's publicly disclosed vulnerabilities, and you break them down and score them in the way that I've described, you can see how individual vulnerabilities that we report to customers, and individual vulnerabilities that get reported to us, potentially (and it's my model, so only potentially) map into the kill chain phases. And given that the kill chain inspired ATT&CK, we're then in a position where maybe we can start to look at which parts of the kill chain we're exercising effectively and which parts we're exercising badly. Yeah, we almost never report anything that helps us with weaponization. We rarely report anything that talks to installation. And command and control, well, if you think about it from a pen test standpoint, that kind of makes sense: we're unlikely to really look at command and control, we're unlikely to get to installation, we're unlikely to look at weaponization. So it's almost reflective of what you'd expect to see. But what it means is that if we were doing a pen test tomorrow, or a threat hunt tomorrow, where perhaps should we focus our efforts? Installation and command and control might be kind of interesting if you're an organization that has regular pen tests, simply for the fact that pen tests are unlikely to produce useful findings in that space.

And then, as I said, we've talked to some of our customers about starting to use this kind of information on their risk databases, on their vulnerability databases. For that, we've been going into organizations perhaps a couple of steps up the tree; we've been talking to CSOs, executives, etc. And we've been using this concept of FAIR, which is essentially a risk framework that allows you to talk about risk in quite an interesting way. It allows you to talk about things like resistance strength, threat capability, probability of action, primary loss, secondary loss, etc., and it allows you to put proper numeric scores on them. Indeed, FAIR encourages you to score in, ultimately, a dollar or pound value. But what it means is that we can go and talk to organizations at levels that the executive cares about.
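As a flavour of what that looks like in practice, here's a minimal FAIR-flavoured sketch. It's nothing like the full standard, and every range below is invented for illustration:

```python
import random

# A minimal FAIR-flavoured sketch: estimate an annualised loss figure
# from ranges an executive can argue with. All numbers are invented.
def annualised_loss(trials: int = 100_000) -> float:
    total = 0.0
    for _ in range(trials):
        # Loss event frequency: how many times a year the scenario lands.
        lef = random.triangular(0.1, 4.0, 0.5)
        # Loss magnitude per event: primary plus secondary losses, in pounds.
        magnitude = random.triangular(50_000, 2_000_000, 250_000)
        total += lef * magnitude
    return total / trials

print(f"Expected annualised loss: ~£{annualised_loss():,.0f}")
```

The precision isn't the point; the point is that the inputs are ranges the business can argue about, and the output is a currency figure.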
An executive doesn't necessarily care about a CVSS 10. What they care about is that they're able to trade, that their IP isn't being stolen, that, if they're a bank, someone hasn't got access to the treasury and stolen all the money. So starting to take vulnerability data and turn it into business impact, business risk, etc.? Yeah, that's quite compelling to a lot of the C-suite.

So, why do the labeling? This is a great statement, and it's very true: defenders think in lists, attackers think in graphs. If you think about it logically, that makes perfect sense. An attacker doesn't care how they get into an organization; if they can't get in one way, they'll get in another. That's a graph. And if you think about how risk management businesses work, how risk management organizations work, guess what? For the most part, they're operating on spreadsheets or the equivalent. So getting to a place where we can help those risk management functions think more effectively in graphs, that's going to be useful.

So when we do the labeling, it allows us to produce documents such as this. This one, I think, from recollection, was AWS: essentially taking all of the TTPs, all of the threat groups, all of the parts of the ATT&CK matrix, and mapping them filtered, as I say, through the view of AWS rather than the whole STIX object collection. But we're now starting to do the same thing with penetration tests, we're starting to do the same thing with red teaming, and hopefully we'll get to a point where we can deliver it right across the board, in every report. In my view, this visualization is pretty key to helping people understand what matters to them.

So that's all of the theory; that's all of the data science from a person who isn't a data scientist. What I really wanted to do next was have a look at how our data maps to the real world. I talked about the fact that my background is UNIX and big systems, and that ATT&CK doesn't really help there. So I constructed three hypotheses, which I thought would probably be pretty quick to evaluate. I was probably going to be challenged to find useful data to support me, but it was a starting point. So I chose my targets, as I said, UNIX; I built hypotheses; I validated those hypotheses. And I'm hoping to start to feed some of that learning back into our reporting, and I'm certainly more comfortable talking to some of the missed opportunities that I see along the way.

In terms of targeting: if you're familiar with Portcullis (obviously we're Cisco these days), you'll know that we spent years doing lots of interesting stuff on UNIX. It seemed a good place to start. We've written quite a lot of things that you could consider to be TTPs, if you were that way inclined. And, as I said, we had quite a lot of access to our customer data, so we were able to ask things like: what have we reported over the last 10 years? The way we have our data segmented means that our VDB keeps track of how many times a particular issue gets reported. Even if we clear down the customer data, we still have an understanding of trends going back, not many decades, but certainly the last 5 or 10 years.

So, the hypotheses I constructed. First: attackers are using our tools to target UNIX environments. Second: attackers are using techniques from ATT&CK to target UNIX environments. And third: ATT&CK is not representative of the TTPs that we, that is, the traditional pen testing arm of Cisco, find success with when we go and do professional service engagements for our customers.
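A quick aside before the validation: filtered views like the AWS one I mentioned are essentially ATT&CK Navigator layers, and producing one is mostly STIX filtering. A minimal sketch, assuming a local clone of MITRE's CTI repository; note that platform labels and layer metadata vary across ATT&CK and Navigator versions:

```python
import json
from stix2 import FileSystemSource, Filter

# Assumes a local clone of https://github.com/mitre/cti .
# Newer ATT&CK releases label cloud techniques "IaaS"; older ones said "AWS".
src = FileSystemSource("./cti/enterprise-attack")
techniques = src.query([
    Filter("type", "=", "attack-pattern"),
    Filter("x_mitre_platforms", "contains", "IaaS"),
])

# Turn the filtered set into a minimal ATT&CK Navigator layer that
# highlights just those techniques; add "versions" metadata to match
# whichever Navigator build you run.
layer = {"name": "Cloud-filtered view", "domain": "enterprise-attack", "techniques": []}
for t in techniques:
    for ref in t.get("external_references", []):
        if ref.get("source_name") == "mitre-attack":
            # external_id is the familiar Txxxx technique ID
            layer["techniques"].append({"techniqueID": ref["external_id"], "score": 1})

with open("layer.json", "w") as fh:
    json.dump(layer, fh, indent=2)
```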
So, hypothesis validation. I used a small selection of our TTPs, in particular two tools: pentestmonkey's unix-privesc-check, and Linikatz, my implementation for attacking AD on UNIX. And, as I said, there's a lack of information out there. I was kind of hoping we'd go and have a look at a bunch of incident response reports and say, okay, there's loads of useful information in there. Guess what? When you look at enterprise systems, that might be the case; when you go and look at big box servers, far less often is it so. So we went back to the drawing board: detonations, bits of ATT&CK that might be useful, a lot of furious Googling, and, to the point about VDBs, many of our other data sources.

So, hypothesis one: attackers are using our tools to target UNIX environments. Guess what? If an attacker were to compromise a UNIX box and use off-the-shelf, publicly available tools, it's almost impossible to tell whether they'd actually be picked up. unix-privesc-check was undetected in pretty much all of the cases, and even where it was detected, it wasn't classified as malicious. Kind of cool; I like the fact that I can rock up and use this tool when I do an assessment. But actually, you probably want to know whether it's on your network, even if ultimately you go, "it's the pentesters, and they're meant to be using it." Again, looking for detonations, I went and had a look to see if anyone had spotted Linikatz in the wild. And guess what? They haven't. Is that because it's not malicious? Is that because people have looked at it and gone, "we don't mind, it's only being used by pentesters"? Actually, it's probably because if an attacker gets onto a system, they're probably not going to pass it through a Windows box unless they absolutely have to, at least not in a way that Windows AV is likely to pick up. Which means that unless you have good telemetry on your UNIX estate, you're unlikely to see either of those tools up until the point that the breach occurs and you start going back through your historical data. And even then, you're probably not going to see it unless you've got an attacker that's left it in the history, an attacker who has left it on the file system, or an attacker who is ultimately pretty incompetent. If I wanted to get those tools onto a UNIX box right now, I could do so. The fact that we don't even have the telemetry to allow us to spot the sloppy amongst us is probably a challenge that we ought to address.

So, signs of life in ATT&CK. Well, neither of those tools is mentioned. I can kind of understand why; they're not generic enterprise attack tools. But guess what? These are tools used by almost every penetration tester out there. They're in lots and lots of tutorials. And they're even being used by people that we would probably consider to be threat actors: Phineas Fisher literally mentions one in his paper on how to breach organisations. Now, you might argue that Phineas Fisher isn't an adversary you care about, but that doesn't really matter. The fact is that we haven't even identified that these tools exist, when our own community knows they exist and is using them. You know, we talk about offensive security tools.
These are offensive security tools that will be being used by people, and we'll just never see it.

So, the second hypothesis: attackers are using techniques from ATT&CK to target UNIX environments. Like I said, that's pretty hard to evaluate in a practical sense. If you look at the stuff that comes up in ATT&CK from a Linux standpoint, it's mostly IoT- or frontend-centric. It's a bit of a chicken-and-egg situation: if you can't monitor your estate effectively, you're probably not going to spot it. And from an incident response standpoint, well, organisations are perhaps not going to be particularly honest, certainly publicly, about just how deep breaches went. But anecdotally, and I'll call back to the Phineas Fisher point, attackers certainly are doing things that would fall into the ATT&CK categories. UNIX back-end breaches do occur. There is almost always some level of application-level interaction; remember, I said the value of any asset is the data that it holds. And some actors are truly incompetent. This one's not a Linux host, but we did an incident response engagement for a mainframe platform that had a UNIX-like shell, and the attacker had gone onto that UNIX shell without really realising what they'd done, poked around a bit, not had a huge amount of success, and then attempted to wipe their evidence. If I'm looking at this critically, the TTPs from ATT&CK are actually probably still a reasonable place to start with UNIX, with enterprise business systems, with servers, etc.; we just perhaps don't have the telemetry to support them at this point in time.

If anybody wants to know what a good breach report looks like, this is probably the best one I've found. It's looking at FASTCash on AIX. It's not perfect, but it does support the fact that a lot of this stuff is stuff that ATT&CK knows about. Okay, it's AIX, not Linux, so I've put question marks against that, but it certainly calls out the same techniques that the FASTCash malware was using in terms of persistence, privilege escalation, evasion, impact, etc. There's a little bit of speculation on the application and entry point, probably not enough. But this is the kind of thing that, in my mind, should be going into the ATT&CK matrix. Or, if it's not going into the ATT&CK matrix, should be going into the equivalent matrix for business systems. If we don't capture this kind of information, we're never going to get our SOCs to start looking for it.

So, hypothesis three. This was taken from our VDB. I took the top 10 or so issues that we regularly report to customers when we go and look at UNIX systems, and I had a look at whether they were captured inside ATT&CK as it stands today. Missing security patches, password reuse, etc.: they're all covered. Role accounts used for interactive logins: semi-covered, but perhaps not the way I would have viewed it from my standpoint. That third, fourth one down, though, is not captured at all in any useful sense, and it gets us into almost every enterprise UNIX box we go and look at. You get your role account, you get your application user, and almost every single one of them will get you to root through that. So we ought to make sure that ATT&CK factors that in, describes it, and gives the information that allows a SOC to start to detect, to start to respond, when those kinds of attacks are used.
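As a gesture at what "start to detect" might look like for the role account problem, here's a rough sketch. The account names, log path and sshd message format are illustrative (they vary by distribution), and a real deployment would feed a SIEM rather than print:

```python
import re

# Rough sketch: flag interactive SSH logins by role/service accounts.
# Account list, log path and message format are illustrative only.
ROLE_ACCOUNTS = {"oracle", "sapadm", "tomcat", "www-data"}
PATTERN = re.compile(r"Accepted \S+ for (\S+) from (\S+)")

# Reading auth.log typically needs root or adm group membership.
with open("/var/log/auth.log") as log:
    for line in log:
        m = PATTERN.search(line)
        if m and m.group(1) in ROLE_ACCOUNTS:
            print(f"interactive login as role account {m.group(1)} from {m.group(2)}")
```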
So, broadly speaking, where I guess we're at: I think we don't have the right level of platform and application coverage today, which inevitably means there's an intelligence gap, which means that no matter whether you're doing intelligence, whether you're modeling, whether you're simulating or whether you're hunting, you're going to be working, I think, from an incomplete set of hypotheses. Which means that the models you're working to probably don't relate to the systems that you actually care about, and you're probably not going to be as accurate as you might want to be when you start to validate them.

How do we improve all of this? It's the million-dollar question. Threat modeling certainly plays a part. If you were simply to take the approach I did with the vulnerabilities and use that to drive the threat models for systems you don't know about, you would certainly get to a place where your blue team would probably have a better understanding of what bad is likely to look like. Once your blue team has that understanding, they can drive the visibility, and then you can start to refocus your offensive services to validate that the visibility is appropriate. Ultimately, I would posit that at least some of the problem is that there aren't enough bums on blue team seats. What I mean by that is that we need to find a way of making it attractive for the people who understand those applications, and the security models behind them, to sit directly in the blue teams, or certainly adjacent to them, so that the blue teams can start to understand what some of the applications the business cares about actually look like.

Yeah, we can't fix architectural alignment or mission tomorrow. We probably can work on threat visibility and on target visibility, if we know what we're aiming for. If we understand the threats, we can better protect our customers and our data. More importantly, if you can start to talk about individual vulnerabilities in terms of their impact on your threat model, on the kill chain, not only can you make sure that your SOC is looking for the right things, but you can also make sure that the people writing the code, the people sitting there writing the web applications, writing the code that interfaces with your message bus, your database, etc., understand why and how an attacker might attack them. Hopefully we'll get to a situation where fewer vulnerabilities of note make it into business applications in due course. Ultimately, organizations are profit-motivated, and if we can talk in a language that makes it apparent to the C-suite why it matters, what the cost of not doing X is, I think perhaps we'll be slightly more successful in that regard.

So, offensive services. I think there's life beyond Nessus and MITRE, but that data is still useful, even from a red team standpoint; I guess we need to figure out how to use it more effectively. We need to be in a position where we can far more quickly, far more comfortably, deliver the kinds of bespoke briefings that organizations want, and craft war games that are actually appropriate to the things that matter to a business. We need to be able to articulate all of that, and obviously better metadata and visual representation certainly help with the communication piece. We need to refine our assessment methodologies.
I think too much of pen testing today is focused on the break-in, and I think the data I showed around CVSS bears that out. Given that pen tests are still the lowest bar most organizations will jump through, getting testers to a place where they feel comfortable talking about the other side of the line, the system has been compromised, how do we get them back out, is something we probably ought to focus more time on.

So, some conclusions. What have we learned, how do we do this better, and next steps. We can automate the extraction of hypotheses; we can build threat models in an automated fashion. Vulnerability data, no matter how poor you may think it is, could still be useful with the right labeling applied. Actually, from a test standpoint, it'd be kind of great if we rocked up at organizations and they had a threat model based on their existing understanding of their vulnerability estate. Whatever the quality of your information, build that threat model, give it to the tester at the start and say: this is kind of where we think the threats live, help us validate that. Again, if you can get to a place where you can take your vulnerability data and start to think about it in ATT&CK terms, then you can start to think about the control sets that you need. And potentially, you can start to think about automatic generation and validation. Ultimately, it's a communication problem. ATT&CK has given us a wonderful tool to help us bridge the gap between red and blue, between operational and management. We just need to make sure we take the best advantage of it.

So, doing this better. Better consideration: I know that from a red team standpoint we would argue that perhaps we're there, but actually, I don't think a VA or a generic pentest necessarily needs to consider the scope they're given to be an impediment to giving better outcomes to customers. We do a really good job of collecting awesome information about threats and vulnerabilities, whether it's from Reddit, whether it's from Peerlyst (unfortunately now closed down), or anywhere else. But even that data, we don't do a particularly good job of enriching. Having a platform that allows proper automated enrichment of that kind of information: maybe you're lucky and you're sitting in an organisation that has one; it's a bit of a far-off dream for me. Personally, getting our database to a point where it's fully labelled, and being able to run queries against those labels, would be pretty kick-ass. And then taking that to the next step and making it actionable. I mentioned that we use STIX. STIX is good at talking about the types of threats; it doesn't really capture the level of information that's required for machine actionability. There are some solutions to that, Sigma and others, but we're really not at a point where we define vulnerabilities in a way that's particularly helpful for scale. And ultimately, if we don't generate for scale, then improving EDR and workload protection products is going to be pretty forlorn. If every vulnerability report that every pentester, every security researcher turned out had, essentially, a call chain graph that an EDR solution or workload protection product could import and then look for, rather than being reliant upon hashes, IP addresses and hostnames, that would take us a huge way forwards.
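To make that machine-actionability gap concrete: what STIX comfortably expresses today is an artefact, not a behaviour. A minimal sketch using the stix2 library, with a placeholder hash:

```python
from stix2 import Indicator

# What STIX is comfortable expressing today: a hash or an address.
# Fine for sharing, but it tells an EDR nothing about behaviour,
# the call-chain-shaped detail argued for above.
ioc = Indicator(
    name="unix-privesc-check dropped to disk",
    pattern_type="stix",
    pattern="[file:hashes.'SHA-256' = "
            "'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa']",
)
print(ioc.serialize(pretty=True))
```

Useful for sharing; useless for telling a product what sequence of behaviour to look for.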
The next steps are a call to action, I guess. Just because we don't have the capabilities to look for the bad guys doesn't mean they're not there. Attackers will use the easiest TTPs that get them to the root prompt, and the things I was talking about are pretty damn easy. Behavior: don't rely upon agents that are merely relying upon hashes and IP addresses. If you're doing UNIX stuff, make sure you're exercising those things, because the bad guys certainly are. And Linikatz, unix-privesc-check: I kind of look forward to the day when they appear on VirusTotal and they're being detonated and marked as malicious. It'll make my job a little bit harder, but actually it'll make blue teams' jobs a hell of a lot easier.

Finally, a few thanks. My old colleagues, Cisco's wider team, and MITRE: ATT&CK wouldn't exist without them. Swimlane: pyattck is still the most comfortable interface I've found, rather than writing it all in full. And blue teams everywhere, because you do a thankless job, and for the most part, you do it as well as you can do. And with that, questions.