All right, everyone, thanks for coming. We're from Devo, and we're giving a talk on our detections research, a study covering more than 6,000 cloud-native detections across the world and across industries. My name's David Wolf. I'm a security researcher and part of the Devo SciSec organization; we're responsible for different forms of product innovation and engineering. Josh here is a detections engineer, actually responsible for creating the living, breathing alerts we ship. By the way, I hate that photo. It's really, really bad. I don't even know how they got it, but please don't take a picture of that photo. I have a long background in offense and defense, technical operations, doing pen testing, going after nation-state adversaries and doing defensive work. My role at Devo is to create content, that is, out-of-the-box alerts. I recreate environments and exploit systems so I can obtain the logs, and I write some of the alerts that David's going to be talking about. And Josh created the detections that we researched and studied, looking for optimization and migration opportunities. Interoperability was the core mission for assessing these alerts compared to other SIEMs: how can we enable translation in a round trip? My background comes from OT and ICS environments spanning financial services, connected healthcare, the extended campus and IoT, and for me, seeing how detections are deployed in different organizations around the world was a new research opportunity. This part of our research portfolio came from product innovation, but it became immediately apparent that we were learning things we weren't expecting once we started digging in. This session follows a typical research format. We'll give you a little bit about our research lab and our team, and why we're conducting this type of research.
I'll give an overview of our methods and scope: how did we aggregate the alerts, where are they coming from, how did we get them, and what were we looking for? One thing I will note is that the slides were prepared in advance and put out on the site, and we've since modified them. So if you're following along on the ones that were originally posted, there will be some confusion. We're going to get the newly revised ones uploaded to sketch.com, or whatever it's called, as soon as possible, but unfortunately after this talk. I just didn't want there to be any confusion for people trying to follow along. Fair enough. At our labs, we have three major themes that our portfolio aligns to. First, the autonomous SOC: sort of a second brain for analysts. How can we instrument action that follows an intelligent response and reaction at enterprises and feeds into SOAR? Detection is the trigger for actual orchestration. The second major theme is the augmented analyst: how can we enrich the data supplies and the views that analysts are taking in in the SOC while monitoring their data lake? How can we give something approaching actionable intelligence, and at least augment with additional data that leads to new insights, in the right place and at the right time? And third, alert management. We conduct an annual research report with some thousands of survey respondents who are leaders and staff at SOCs around the world, and we're sensitive to the feedback and pain points our survey respondents are reporting. Alert management, as it turns out, is a major pain point, especially cited by analysts themselves as the primary place they want help in the SOC, and high-performing SOCs have lots of alerts. Once we start getting into managing hundreds or thousands of alerts, it's a whole different ball game: scale and alert management become a problem. At Devo SciSec, we have a clear mission.
As researchers, we research emerging threats, review the vendors and products in our ecosystem and the data supplies that enterprises are using, and we use those supplies to solve novel customer use cases with a security focus. As a SIEM provider, Devo is strongly oriented toward visibility and analytics, and enriching and deriving insight from the data pipeline is a core part of SciSec's mission. We have data scientists, security researchers and practitioners, and putting them together is the focus of how our organization is built. We have the three primary themes: the autonomous SOC with its automated controls, the second brain; the augmented analyst, whom we enrich with context; and fundamentally alert management, a third catch-all bucket to illustrate problems and the solutions we're trying to provide to our end users. For this research project, we loosely adhered to the MITRE ATT&CK framework, working in three phases: assess what's there and assess the coverage, identify the gaps, and then tune and acquire new defenses. So this was a detections engineering innovation project: what is our coverage? Where do we have gaps? Where do our customers have gaps? And what do we need to either tune or build in order to round out a fuller strategy and a portfolio view for our detections? In the SciSec research lab, our team is primarily detections engineers like Josh, who focus on cloud detections across platform providers; data scientists, who are busy in the lab right now crunching all kinds of data; and security researchers like me. Real quick, to David's point: one thing we will talk about is the disparity in how analysts have to understand and know the cloud providers. I just wanted to point that out since the cloud providers and the detections were brought up. That's all, my bad.
And I hadn't said anything in a while and I wanted to say hi. Overall, research methods and scope: the original sample was approximately 14,000 alerts. After narrowing down, removing alerts keyed to specific customers or managed security providers, we came to 6,000 alerts in our sample, all based in the cloud. Internationally, almost two thirds come from the States and one third from EMEA and APAC. Inside the lab, our project was focused on developing machine learning classifiers, for example for zero trust and MITRE ATT&CK labeling, and on generating new kinds of alert metadata, as well as identifying what motives attackers would have, what verbs they're executing, and against which nouns: fundamentally, what assets were being targeted, especially within a cloud context, where AWS, Azure, and GCP are the three primary providers in our data set, and the ultimate attack path might be the same but the vocabulary is different. Overall, we had on the order of 400 distinct customers within the data set. Now, when it comes to mapping the detections, the labeling we generated filled in most of what we can refer to as the MITRE ATT&CK cloud matrix, and this is a composite matrix for both infrastructure and workspaces. When we're talking cloud and developing detections, there's a big difference between cloud infrastructure, like AWS, Azure, and GCP, and cloud-based workspaces, like Office 365, Google Workspace, SharePoint, these types of systems. The detection portfolio that Josh and company have been developing spans both contexts of infrastructure and workspaces, and we mapped our controls accordingly. Briefly, in terms of scope, a few points to harvest from here are that financial services, technology companies and operational technology companies provided the bulk of our data set.
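To give a flavor of the verb/noun and tactic labeling described above, here is a minimal keyword-based sketch. The real classifiers in the research are machine-learning based; the keyword rules, alert name, and tactic subset below are invented for illustration only.

```python
# Hypothetical sketch: label an alert name with MITRE ATT&CK tactics and
# extract a rough verb/noun pair. These keyword rules are illustrative,
# not the actual ML classifiers described in the talk.
TACTIC_KEYWORDS = {
    "TA0003 Persistence": ["new access key", "startup script", "scheduled"],
    "TA0002 Execution": ["run instances", "invoke function", "command"],
    "TA0010 Exfiltration": ["snapshot shared", "bucket public", "export"],
}

def label_alert(name: str) -> dict:
    name_l = name.lower()
    tactics = [t for t, kws in TACTIC_KEYWORDS.items()
               if any(k in name_l for k in kws)]
    # Crude verb/noun split: first word as the action, last as the target asset.
    words = name_l.split()
    return {"tactics": tactics, "verb": words[0], "noun": words[-1]}

alert = label_alert("Snapshot shared with external account")
```

In practice the same attack path produces different "nouns" per provider (an AWS snapshot versus an Azure disk image), which is exactly the vocabulary problem the talk calls out.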
We have significant presence from managed security service providers, and the differences in how MSPs develop custom alerts compared to enterprises were interesting. It's clear that there are good reasons for an enterprise to outsource to an MSP, depending on the maturity and capabilities inside. Overall, we had significant success mapping alerts and their metadata to the fundamental pillars of zero trust, and 58% of our alerts mapped to MITRE ATT&CK successfully; that was the ratio at which we had confidence in our MITRE ATT&CK labeling. Another key part of the research is comparing out-of-the-box detections to custom detections. We have three research themes, and the first is the autonomous SOC. The autonomous SOC is a place where action happens, influenced by machine learning and artificial intelligence; the ultimate output is some kind of decision-making that's been automated. Using the autonomous SOC as a research theme, findings emerged that showed how the SOC in a traditional enterprise differs from one for the cloud. One key finding is that cloud defenders are significantly more likely to use out-of-the-box detections. There are multiple reasons behind this. In our interviews and our annual SOC performance report, we see that cloud specialization for analysts is a huge skills problem; familiarization and understanding of what to do and why to do it, per cloud, makes cloud defense even more difficult for an analyst. Josh is building detections for all three major providers, and just having three testing environments is enough of a pain, but to actually build the skills and the vocabulary, and to understand the resources involved and how they're different, is a huge skills barrier.
In the end, cloud defenders are not only deploying infrastructure as code (we're at a Kubernetes conference now); they favor standardization and, let's say, non-intrusive customization. We're supposed to use as much boilerplate as possible whenever possible, and that shows in the detection strategy as well, compared to an enterprise, especially something like a hospital, which will have exotic devices with many different types of protocols that require custom detections. MSPs and, let's say, non-cloud defenders and their SIEM strategies are significantly more likely to use custom queries for custom data sources: bespoke applications, for example, being non-standard log sources and inputs. Something that became clear as a pain point in our interviews as well as in the data is that we're well past the point of organizations defending a single cloud. It's painful. At the very least, if a company is on AWS, they are more than likely to have a second cloud, and they're more than likely to also be defending Office 365, of course. They need a cloud detection strategy and a source strategy, both for workspaces and infrastructure, and it's not just a single cloud anymore. That increases the complexity for the SOC analysts, and it increases the complexity of alert management. What we see is that approximately one in four SOCs now have a majority of their detection stack monitoring cloud; it has become a new normal. When it comes to out-of-the-box and custom detections, on the far side we have managed security providers being the most likely to craft custom detections. That's part of the service they provide: they have their own libraries, they build custom detections and roll them out across all of their customers, and it's part of the secret sauce that the MSPs bring to the equation for defending enterprises.
Whereas if we look at OT and ICS, for example, thinking about factories, supply chains and transportation, we have less innovation within the SOC; we're still at basic visibility, implementing controls that are off the shelf and available. And in the case of technology companies, what we have are more standardized assets: fewer different types of devices, fewer kinds of IoT devices, fewer custom detections required. So, two takeaways from the autonomous SOC. The first is that out-of-the-box detections are key for cloud defenders. Being able to pull from a library means lower barriers, less expensive skill and expertise required, and a more coherent strategy for control, end to end. And second, as of today, cloud is a major control area for many companies, sometimes a majority of the alert and detection coverage, and this is key for any SIEM and SOC automation strategy. Thanks, David. So, sorry about that. Let me start off by saying real quick, I am not very comfortable with this. This started off with three people, which I liked, and now it's grown. I should like it and I will like it, and I appreciate everybody coming here. So let me get right down to it; I just had to put that out there for you. Let's talk about the augmented analyst and what it means. Augmenting an analyst is simply providing them enriched data. You're giving them metadata and the information they need to help make informed, educated decisions. Every SIEM out there, or SIM, however you want to pronounce it, uses MITRE as part of its platform, and we're experimenting with that as part of augmenting the analyst. From what we can see, analysts typically have, sorry, less visibility at the beginning portion. Told you, I'm not very good at this.
Less visibility at the beginning portion of the matrix, that is, and more towards the end. Take persistence for cloud-based coverage: there are a lot of cloud-based detections for persistence, but not a whole lot for execution in the cloud. That draws some concern, because it's a different environment; people are still not wholly accustomed to working in the cloud, and they may not know what to look for, or they may not have the information they need. All right, so let's talk about zero trust. The typical alerts we've seen with analysts have been on devices, networks, and identities, and that's not covering all the pillars of zero trust. It's something where we need to provide better education, or, I would say, more assistance. At a basic level, something we found both in our live deployments and in our data set is that enterprises today are still operating in, let's call it, an old-school control model. The reality is that endpoint detection, controlling devices, is a huge part of the alert stack: not just MDM, but nodes in the data center, because safeguarding the data center is all about the devices that are there. The campus is filled with devices, and EDR solutions and antivirus solutions are some of the most mature log generation sources that enterprises have today, especially those with a distributed campus. So we're coming from this device-based or box-based sense of control from once upon a time, before cloud. Of course we have network controls and network activity; the network pillar is key. Devices communicate with one another through a network, and network controls are fundamentally different. It's not antivirus, it's not an agent, it's not a passive scan. What network controls offer comes from firewalls, from proxies, from the communications between sources and destinations, clients and servers.
And network controls are, again, coming from sort of an old-school perspective: firewalls create tons of logs. For the classic enterprise, the first two buckets of alerts are device-based and then firewall-based alerts. And why do we have these alerts? At the end of the day it's users, it's people, it's identities, and alerts based upon logins, authorizations, identity, authentications. These are three of the five zero trust pillars, but those three pillars account for three quarters of all alerts in an enterprise SOC. A way of conceptualizing the SOC control strategy, and I think this shows on the next slide as well, is that we have these three flavors of alert, and then everything else. Spilling into that a little bit more is just the types of device protections we've seen the most, and I really can't add much to what David said; these two slides play into one another. This one is about the analysts caring more about device protection. So what is device protection? It's these endpoints, it's the cloud detections. One note on this slide, which maps to zero trust: devices, networks, and users are the resources primarily covered by our alert strategy. We see a bucket here that's cloud, and what it turns into is that the cloud detections are fundamentally different. They have only a few types of providers. They have different kinds of assets. We're not thinking about antivirus, we're not thinking about classic firewalls and appliances hooked into networking gear. And cloud by itself accounts for approximately 20% of all detections in the stack. So from an analyst perspective and from a strategic perspective, by mapping to zero trust, we can get classes or buckets of alerts that themselves can have specialization.
Network alerts can be by protocol, and we can stack our alert and detection strategy in the networking layer: DNS alerts, HTTP alerts, FTP, SSH, remote desktop and file sharing protocols. In a hospital or an industrial environment, protocol-based detections are key. Cloud is an increasing percentage of alerts in the trends we see over time, and it's a fundamentally different bucket than what zero trust brings to bear for the standard enterprise. This slide just elaborates further on that: it talks about the different cloud providers and which one's more predominant. The top takeaways we want you to get from this are that there are too many cloud providers out there and not enough analyst knowledge. We need to work with the analysts to get the detections; they need to understand cloud providers. And honestly, we need to start pushing for standardization across the cloud providers. It may not be good business for them, but it's great for detections for us. We've moved on from the days when an analyst just had to understand Windows, when you had one team that knew Windows and one team that knew Linux, and those two would talk to each other. Those days are long gone. Analysts also need to know the cloud setup. They need to know what to look for. They need to know where credentials shouldn't be getting stored. They need to know all these different things. Part of the purpose behind this research is also to be advocates for the analysts, because that's what a SIEM is, right? It's supposed to advocate for them. You can't expect someone with AWS knowledge to come in and work on GCP detections. So that's the long story, sorry. That's a very big sticking point for me: helping out the analyst and giving them the visibility they need in the MITRE framework.
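The bucketed view described above, classes of alerts mapped to zero trust pillars, can be sketched as a simple grouping over alert metadata. The alert inventory and pillar labels below are invented for illustration; real figures would come from a SIEM's alert catalog.

```python
from collections import Counter

# Hypothetical alert inventory: (alert name, zero trust pillar / bucket).
# The names and pillar assignments here are made up for the sketch.
alerts = [
    ("EDR malware detected", "devices"),
    ("Firewall deny spike", "networks"),
    ("Impossible travel login", "identities"),
    ("Storage bucket made public", "cloud"),
    ("DNS tunneling suspected", "networks"),
]

def pillar_shares(alerts: list) -> dict:
    """Percentage of the detection stack per pillar, as in the talk's
    devices / networks / identities / cloud breakdown."""
    counts = Counter(pillar for _, pillar in alerts)
    total = sum(counts.values())
    return {p: round(100 * n / total) for p, n in counts.items()}

shares = pillar_shares(alerts)
```

A view like this is what lets a SOC say "cloud is roughly 20% of our stack" or "three pillars account for three quarters of our alerts," rather than reasoning about hundreds of individual rules.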
You look like you wanted to add something; you're good? To add to that: from our interviews, from what we're hearing from customers, and from alert migrations where we're trying to define the original strategy, what we see is that customers coming in with multiple clouds don't have a coherent strategy with one-for-one, consistent coverage across cloud sources. GCP has a different set of custom-crafted controls compared to AWS. Workspace controls are patchy for organizations, especially after M&A, which can leave multiple kinds of workspaces for users who are global and using different applications within the org. So at a strategy level it's a patchwork, and there are gaps in the cloud coverage. When we were looking at the MITRE ATT&CK mapping, we were able to see that trapping execution in the cloud, especially, was simply not happening in the control set, including the customer controls and what the MSPs were implementing. There's something about creating easy controls that are obvious, but if we try to map a strategy that covers MITRE ATT&CK end to end, taking that coherent view will illustrate, in most of the customer deployments we saw, areas where there were control gaps. And something we were surprised to see is, as a ratio of controls, how few were actually related to data exfiltration and data collection. The data pillar of zero trust is, in some models, at the center: what are our fundamental safeguards around data? When we looked at visibility, analytics and triggering SOAR, we saw that controls around exfiltration and data collection, after an attacker has gotten past the perimeter, were weak across the board, in financial services and for MSPs.
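The gap analysis described above amounts to a set difference between the tactics your deployed detections actually cover and the full tactic list. A minimal sketch, with an invented deployed set and only a subset of MITRE's enterprise tactics listed, might look like this:

```python
# Hypothetical coverage-gap check against MITRE ATT&CK tactics.
# This is a subset of the enterprise tactic list; the "deployed" set
# is invented to mirror the gaps described in the talk (e.g. missing
# Execution coverage in cloud control sets).
ALL_TACTICS = [
    "Initial Access", "Execution", "Persistence", "Privilege Escalation",
    "Defense Evasion", "Credential Access", "Discovery", "Lateral Movement",
    "Collection", "Exfiltration", "Impact",
]

deployed = {"Initial Access", "Persistence", "Credential Access", "Impact"}

# Tactics with zero detection coverage, in matrix order.
gaps = [t for t in ALL_TACTICS if t not in deployed]
```

Run per cloud provider and per workspace platform, a check like this is what surfaced the execution, collection, and exfiltration gaps the research found.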
So we understand what's changed in our innovation portfolio: the detections we're going to create are meant to fill in these gaps in visibility that the analysts naturally have, and to help the MSPs, to enable our MSP partner strategy. Thank you. I'm going to have to speed through some of these things. If I skim over something that people really did want to see, just stop me. This thing is about as informal as you can get, and if you don't like informal, that's fine; it's formal, don't worry about it, we're good. All right, so alert management. The purpose of this graph is to highlight a couple of things. In the Devo SOC report, we asked about many different things; you can go to the SOC report and see what the different types of questions are, and as you can tell, I don't remember them offhand. But the important things we want you to grab are the two highlighted up here about alert management. The one on the left shows that alert management was critical to high-performing SOCs: some way to manage all of their alerts. And the one on the right shows that, inside of high-performing SOCs, management doesn't care as much as the analysts do. The analysts need alert management, but management isn't nearly as concerned about it as we would want them to be. And if you're management, just reverse it; you'll be happy, everything's good. All right, so we needed to also establish a baseline. Some of you may or may not know what zero trust is, so here it is, from the CISA Zero Trust Maturity Model, 2021. I would tell you what page to look at, but I forget. The important thing here is to know that there's another step in there, as David had pointed out, where you should add the SIEM. And going back to the other things we were talking about, the various pillars and what we were missing, this is what we're referencing.
And as a note, we're coming from Devo, and Devo is a SIEM provider. Many of our customers use it as a data lake to enable visibility and analytics for infrastructure, for application performance monitoring, for devices and machine data. The data lake is the primary use case; the secondary use case is actually generating security decision-making and action based upon a functioning data lake with full visibility. So Devo is a visibility and analytics foundational layer; that's what we operate as, as a SIEM. We integrate with something like 500 vendor products covering the pillars of zero trust. The endpoint and device protection vendor products feed into our visibility and analytics platform, so the coverage we have across vendor products is from a visibility and analytics perspective. Fundamentally, we were able to see that across all functioning organizations, all pillars are represented, but we saw weaknesses with data and, in cloud, with protecting the application workload. Thank you. All right, so here we're just trying to show you that when we talk about the various alerts, we noticed that in cloud detections, the analysts were more concerned with visibility: what can I see, who's doing what, things like that. But in enterprises, they're more concerned about device connections, which makes sense. So this is trying to show you the criticality, the importance, of what your detections should be going after and what kind of detections you should be looking at, based on where your analysts and your resources lie. Actually, another note about this, on the difference between cloud control and traditional enterprise control: the traditional enterprise is very clearly thinking about devices. And yes, in cloud we can think of containers and VMs as devices; we can think of every node in our architectural map as a device, and that's accurate.
But the device focus for actual controls and detections doesn't even come close to the traditional enterprise, which is dealing with MDM, workstations and IoT. Devices are fourth in mind for cloud defense and the cloud SOC. And then we see a very real difference in workload protection. Trapping the runtime on all endpoints is an expensive undertaking for a traditional enterprise, and impossible across IoT and under-managed devices. But in the cloud, where we have, let's say, better or more consistent access to the runtime, we're able to generate workload detections; we're able to build native detections for application monitoring and application control. Regardless, at the end of the day, and we see it in our migrations as well, by the time we've managed to deploy our device protections and network detections, we're losing steam on the actual goal of protecting the data. We need more controls, more detections designed to safeguard data, to prevent the exfiltration of data and the collection of data within corporate networks, cloud or not. Thank you. The other thing to consider, which David really brought to mind when we were preparing for this presentation, is that financial services aren't financial services; they're technology companies in the financial industry. So when you're looking at detections, or you're trying to target this area, you can see they're moving right in line with technology companies between enterprise and cloud detections, and they actually have a lot more cloud detections than you would expect for financial services, right? I don't know about you, but ignorantly, I always like to think that my bank is in a nice little area that it controls and everybody's happy, but in reality, it's somewhere in wherever, USA, on a server farm. More and more of your financial services are going to the cloud.
One of the biggest things we wanted to talk about was logging per vendor. One thing I found out yesterday, speaking with Melinda Marks over at TechTarget, is that through their service you're able to drill down to get more data about each of the cloud service providers, AWS, Google Cloud or Azure. What we show here is something we pulled from the providers' websites, trying to understand what they offer. Again, you can see right here the disparity in what's offered between the various providers. It's annoying, and it's also hard to drill down to find what you need to audit, right? You don't know what you don't know. Sorry, I could spend forever on that one too. All right, the key takeaways, and I am already over time, so I'm going to race through; I think we're almost done anyway. Out-of-the-box detections are key in SOC automation. In other words, you need detections written for your cloud provider, GCP, AWS, whatever, already built into your SIEM, to help your analysts and make sure they don't start from ground zero. So whatever you're looking for, you need to have those out-of-the-box detections. Now, I could sell you our product, but that's not my job. So that's one of the key takeaways, and I think that took care of the other one, so, boom, we're good. This is just the stuff you're supposed to learn from here: five takeaways and five lessons learned during the course of this research. The first one was that cloud was big, and we were surprised at how many detections enterprises were using, especially in financial services, to cover cloud.
We were surprised to see that for one in four SOCs, a majority of their detection stack was cloud-based, safeguarding infrastructure and workspaces, and we can expect that to continue to grow, with cloud eventually dwarfing data center and campus for control coverage. We became extra sensitive to the concept of augmenting the analysts. Two innovation outputs from this research were machine learning classifiers for zero trust and for MITRE ATT&CK labeling, and what we hope for those product innovations is to augment analysts, regardless of what their alerts, results, and data sources are, with additional enrichments that help to frame a strategy for both zero trust coverage and MITRE ATT&CK end to end. Practically, we discovered that there are five kinds of detections that SOCs are actually implementing. Cloud is its own distinct bucket, and, of course, devices, networks, and users as zero trust pillars form the vast majority of alerts in all enterprises. We have a shortage of protections for application workloads, at least at the SIEM visibility layer: defending the runtime is a weakness, and applications and workloads need further control. And, of course, at the end of the day, data: preventing exfiltration and controlling for data collection will help reduce impact, and enterprises need to shore up the final stages before impact actually occurs. Out of the box for the win: we see it clearly for cloud that, for multiple reasons, the extra complexity, the skills required, the challenges of multiple providers and different vocabularies, an out-of-the-box, boilerplate detection strategy, where you can essentially piece together a strategy as opposed to individual alerts, is the best way to support alert management.
Instead of having 400 random alerts, we have five buckets with a coherent view mapped to zero trust, with pillars clearly defined, and mapped to MITRE ATT&CK in a way that, end to end, provides a coherent strategy for coverage and future developments. Having a strategy is crucial. At the end of the day, a focus for our research was interoperability. In an ideal world with interoperability, say for our medical records, we go to one hospital, we go to the clinic, we go to another state, and our records are able to transfer, make the round trip, and be enriched at every stage. We're far from that, it seems. Shifting from Datadog to Splunk to Sentinel to Devo and back and forth, we're far from transpiling our queries, from mapping alerts on a one-to-one basis. But what we can do is aggregate our alerts in a way that has a coherent, bucketed strategy, and then we can migrate the strategy. That was a practical output for our actual customers: it put our research into actually moving the needle in our business processes, making migrations faster and enabling alert management in a coherent way. And then we can have specializations among analysts: an analyst might come from a network security background and therefore be better suited for network alerts, or come from a cloud background and focus on cloud alerts, enabling specialization within the SOC. So, we went five minutes over, I apologize; we were supposed to stop at 11:35. We'll both be available out there if you have any questions. And there's a young lady, Caslin, who needs interviews for her podcast, so look her up. Good questions, just trying to get the word out about this conference.
If we can do anything to help, or if I can answer any questions, since I do do the detections and I know I didn't talk much about that, feel free to see us. And to whoever the speakers are for the next session: that's all me, my bad. All right, we're good.