Welcome everyone to theCUBE's live coverage of mWISE here in Washington, DC. I'm your host, Rebecca Knight, alongside my co-host and analyst, John Furrier. Great to be back here with you. Great to see you. And today we are joined by Brian Dye. He is the CEO of Corelight, in from San Francisco. Welcome, Brian.

Thank you, Rebecca. Great to be here.

So the news hit the wires this morning, this expanded partnership between Corelight and Mandiant. Tell our viewers a little bit more about this amplified partnership.

Yeah, it really has two big parts to it. Number one, what is Mandiant using Corelight technology for, and how is Corelight integrated into Mandiant and Google technology proper? For the Mandiant professional services team, their incident responders now have the ability to use Corelight technology in their incident response engagements and in their managed defense services. And we'll talk more about why that matters in a moment. And then for Corelight, we've more deeply integrated our technology into several parts of the Google platform: not just the Chronicle security operations suite and the Mandiant threat intelligence services, but also the GCP Packet Mirroring service as well.

There's a lot of talk about shared responsibility. You see all the attacks. The MGM thing was in the news all this week, and we were actually there at an event when it was happening: the elevators, half the slot banks were gone. I mean, complete disruption. This is the ransomware problem. There are so many seams now. Detection, response, what's the answer? You guys are in the middle of this right now. There's so much activity going on. This marketplace is brutal and we need answers. How do you see that, where do you guys connect, and how does the partnership help companies be more secure? Because this is code red.

It is code red. And I think what you're seeing is a dramatic bifurcation of the threat landscape.
You've got situations like ransomware that are incredibly fast and incredibly destructive, but then you've got the advanced threat actors that are still incredibly quiet, incredibly slow, and long-running. So what we really see organizations doing is doubling down on four big areas: they need endpoint, they need identity, they need cloud, and they need network. What those represent is a balance of depth versus breadth, and how you get the right intelligence to go find these advanced attackers, especially when what you're essentially doing is looking back in time. Take MGM, and I wish everyone there the best; they're wrestling with a really hard problem right now. They're trying to look back and figure out: are they diagnosing a problem from a week, a month, a quarter, or a year ago? That's your step-one problem. So you need this balance of depth versus breadth in your insight. And that same mentality is actually behind the Corelight-Mandiant partnership.

So can you give our viewers an example of how this partnership works and what it entails?

Yeah, and it really comes down to thinking about the world not as a technologist, but as an incident responder. So let me walk you through an example that's not MGM; certainly the details on that are not publicly known. Actually, one of my first research interviews when I joined Corelight was with a consultant here at Mandiant. And I said, hey, look, treat me like I'm an idiot. Why does this network data matter so much to you? What's useful here? And he walked me through this story. He said, look, one of his earlier engagements was a multi-thousand-entity franchise organization, so incredibly large. They got notified by Visa that they were compromised. Step one: which couple hundred endpoints do you go worry about? Which point-of-sale terminals, when you've got 10,000-plus franchise locations?
So the network tends to have this great role of helping prioritize your overall investigation. So they went, they prioritized, and now you flip back to endpoint: let's go instrument those point-of-sale terminals, let's find out where the malware is. Great, once you find the malware, now you're back to the network, because how did that malware get there? You've got to reverse-track the entire kill chain. What was the lateral movement? What was the command and control? What was the point of incursion? So the network tends to be fantastic for accelerating that incident response process. Third, now that you think you've got the attack scoped, how do you actually get rid of it? There's a bunch of rebuilding of individual servers. And once you think you've got it out, how do you prove that none of that activity is recurring? The network tends to be the source of ground truth for that. And the last piece, which frankly I thought was the most interesting: he said, never forget that there might have been 15 or 16 consultants from Mandiant working this thing super aggressively for six or eight weeks, and folks worry about how expensive that is. That's nothing. Because when those 15 or 16 people leave, one or two of them stay, and they get joined by three or four lawyers. And the lawyers stay for the next three months while you're trying to figure out what defensible disclosure is for that organization. So it turns out the network not only provides you the connectivity to connect the dots across all this, it gives you ground truth of what's really happening. And that defensible ground truth, ground truth that can stand up to the audit committee, for example, tends to be really important.

I like the ground truth angle, the network as ground truth, because the footprints are in the network. You can follow the packets; you can't hide them. When you're moving around, you can't hide. We've heard that before, but it's getting complicated now.
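The reverse-tracking workflow described here, starting from a known-bad host and walking the connection history backwards, can be sketched against Zeek-style connection records (the log format Corelight sensors emit). This is an illustrative toy, not Corelight's actual tooling; the hosts, timestamps, and record layout are invented for the example.

```python
# Toy reverse-track of lateral movement over Zeek-style conn.log records.
# Illustrative only: real investigations use full Zeek logs, not this stub.
from collections import defaultdict

# Minimal conn.log-like records: (ts, originating host, responding host, service)
conns = [
    (100.0, "10.0.9.5",    "10.0.1.20", "smb"),   # staging host -> POS terminal
    (90.0,  "10.0.3.7",    "10.0.9.5",  "ssh"),   # pivot -> staging host
    (80.0,  "203.0.113.9", "10.0.3.7",  "http"),  # external -> pivot (incursion)
    (95.0,  "10.0.4.4",    "10.0.2.2",  "dns"),   # unrelated background noise
]

def reverse_track(compromised, conns):
    """Walk backwards: who connected INTO each known-bad host, and when."""
    inbound = defaultdict(list)
    for ts, src, dst, svc in conns:
        inbound[dst].append((ts, src, svc))
    chain, frontier, seen = [], [compromised], {compromised}
    while frontier:
        host = frontier.pop()
        for ts, src, svc in sorted(inbound[host]):
            chain.append((host, src, svc, ts))
            if src not in seen:
                seen.add(src)
                frontier.append(src)
    return chain

for victim, source, svc, ts in reverse_track("10.0.1.20", conns):
    print(f"{victim} was reached from {source} via {svc} at t={ts}")
```

Starting from the point-of-sale terminal, the walk surfaces the staging host, the pivot, and finally the external address, which is exactly the lateral movement, command and control, and point of incursion sequence described above.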
You've got all these endpoints, cloud is involved, and connectivity is increasing. So you've got edge emerging with AI here, and the data is everywhere. Is it a data problem? Is it a network problem? How do you look at the new sprawl of data and networks when you look at the problem of defending? Of course, speed's a factor. And you've got an increasing volume of data, but budgets aren't increasing 10x. So how does a practitioner think this through?

Yeah, it's a really good one, because all of those things are not created equal, and how do you prioritize them? We often find defenders thinking only about the attacker advantage, where the attacker has to win just once and the defender has to get it right every time. But there's actually a defender advantage, where you want to position yourselves and focus your efforts where you have multiple chances to catch the attacker. Those tend to be reconnaissance, command and control, and lateral movement, to use the kill chain stages. So that's really where we see things playing out. And if you think about the role of data, the thing we see folks focusing on is, A, all data is not created equal, and B, you have to start thinking not just about data but about time. How much time coverage do you have looking backwards to go find what the issue was? Think about Log4Shell, a really big issue. How long could you look back and figure out where that was in your environment three months ago, six months ago, nine months ago, when that vulnerability might have been exploited? So those are the two big issues. And then there's also storage: how long do you store it for? How is it available? What's the ease of access to it? Is there automation?

These are all questions that come up, and I've got to ask you, because this comes up a lot. We're going to talk a lot about it here at this event and certainly the rest of the year and ongoing.
Threat detection is like a, you know, first responder: go figure out what happened. But now you've got the more holistic operating-system view of networks; you're in networks, so you've got to look at the bigger picture. You mentioned a little bit of how big that is, and I'd love to get your feelings on how big that truly is. But how do you look at that and balance the holistic view versus jumping in and just sending a SWAT team or a response team at something? Because you've got to look at the detection and throw a blanket on it right away. But zooming out: time, how far back do you go? Where's the data stored? That's going to require architecture.

It is. And I think what folks are starting to realize pretty aggressively here is that it's not an or, it's an and. You absolutely need detections. You need live threat detections, you need anomalies, you need all the things that are going to enable not just your initial incident response but follow-on threat hunting programs. That's really critical. But then the broad-based view of the data becomes really important, because you've got living off the land, a bunch of stealthy techniques being used that will not and cannot actually drive a detection. You're not going to alert on PowerShell just because there's PowerShell within an organization. There's PowerShell everywhere; that's not the problem. Who's using PowerShell incorrectly is actually the problem. So we think of it in a couple of layers. One is what Kevin Mandia actually recently called second-stage detections, where you have an incursion from an actor that looks like they're doing nothing unusual, but then a whole bunch of very unusual, leading to malicious, activity afterwards. So it changes the shape of what you're looking for. That's one big one. The second big one is really thinking about this time-coverage piece that we've talked about already.
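The "who's using PowerShell incorrectly" framing can be sketched as a simple per-host baseline: count PowerShell events per host and flag statistical outliers. This is a deliberately minimal sketch of the idea, not a real second-stage detection; the hostnames, counts, and z-score threshold are all invented for illustration, and production systems use far richer telemetry than raw counts.

```python
# Toy "who is using PowerShell unusually?" baseline.
# Real detections use rich process telemetry; this just counts events per host.
from collections import Counter
from statistics import mean, pstdev

# (host, process) events; hostnames and volumes are made up for illustration.
events = [(f"ws-{i:02d}", "powershell.exe") for i in range(1, 9) for _ in range(3)]
events += [("ws-99", "powershell.exe")] * 60   # one host far above the baseline

def powershell_outliers(events, z_threshold=2.0):
    """Flag hosts whose PowerShell event count sits far above the fleet norm."""
    counts = Counter(h for h, p in events if p == "powershell.exe")
    mu, sigma = mean(counts.values()), pstdev(counts.values())
    return [h for h, c in counts.items()
            if sigma and (c - mu) / sigma > z_threshold]

print(powershell_outliers(events))   # the one anomalously chatty host
```

The point of the design matches the passage: PowerShell itself never fires an alert; only the deviation from normal usage does.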
How much can you actually look back? And the third, in a really big way, is how do you think about enabling threat hunting programs? Because you talked about this balance between the attacker and time. What a threat hunting team is going to do, and even if it's not a team, by the way, even if it's just giving your higher-end incident responders a few hours on Friday to go do threat hunting: they will potentially find bad things live in your network. If they don't find bad things, they're probably going to find policy violations. And if they find nothing, now they know your environment better. There's a great Rob Joyce quote from years ago, back when he was running Tailored Access Operations, where he got up and said, your number one obligation is to know your network, because we will. And that's the attacker's viewpoint.

So we're really living in a time where ransomware attacks and security breaches are an inevitable part of doing business. But what is it about this particular moment in time that makes your partnership so important?

I think there are a couple of things behind it. Number one, and a bunch of Mandiant's own research has highlighted this: if you look across the different geographies, there are actually very, very different drivers for the initial incursion. Phishing and social engineering continue to be very big, but, unfortunately, the number of network-based vulnerabilities disclosed over the past year has never been higher. If you look at the last five years, the last 12 months have seen more broad-based network vulnerabilities than ever. So you've got more ways to get into an organization undetected. And once you're in undetected, we're back to this depth-versus-breadth problem.
If you're in undetected, that means the perimeter defense, the detection engine, has by definition been bypassed. It's not SolarWinds; that was a super aggressive way to do this. But a classic network vulnerability in a router, a switch, or, heaven forbid, firewalls or VPN devices, which we've seen a lot of in the past year, becomes a really big problem. So now it puts a lot more focus and weight on the raw network, both the network evidence and the network-based detections, to find those follow-on indicators of compromise, those follow-on attack signals. And I think that's been the biggest thing in the landscape over the past year especially.

Can you talk about dwell time? This comes up a lot; you mentioned it a little on the defender side. What's the definition of dwell time in a security context, and why is it important to understand?

Yeah. Mandiant has great research on metrics. For folks across your viewership that don't know, there's a set of metrics they use called DRAIN CVR, technically D-R-A-I-N, C-V-R. There's some research, you can Google it, that does a really nice walkthrough of each of these metrics and why they matter. But at a simple level, dwell time is a measure of how long that attacker has been in your environment before you found them. So, A, that's essentially your exposure. How long did they get to recon? How long did they get to plant their payloads? How much exfiltration time did they actually have before you found them? Are you actually left of bang, to use that analogy, or not? And then second, can you even see the beginning of it? If they've been in there so long that you can't find the beginning, you've put everything else in your incident response process at risk, because you're not truly sure whether you found the full scope of that attack, contained them, and eliminated the threat.
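The two questions raised here, how long were they in and can you even see the beginning, reduce to simple timestamp arithmetic once you have the evidence. A minimal sketch, with dates and the retention window invented purely for illustration:

```python
from datetime import datetime, timedelta

def dwell_time(first_evidence: datetime, detected: datetime) -> timedelta:
    """Dwell time: how long the attacker was in before detection."""
    return detected - first_evidence

def start_is_visible(first_evidence: datetime, now: datetime,
                     retention: timedelta) -> bool:
    """Can we still see the beginning, given our log retention window?"""
    return first_evidence >= now - retention

now = datetime(2023, 9, 18)
first_seen = datetime(2023, 6, 1)    # earliest attacker artifact found
detected = datetime(2023, 9, 10)

print(dwell_time(first_seen, detected).days)                  # 101 days exposed
print(start_is_visible(first_seen, now, timedelta(days=90)))  # False: blind spot
```

With 90 days of retention against a 101-day dwell time, the beginning of the intrusion is already outside the visible window, which is exactly the scoping risk described above.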
And if they've been in there longer than you have visibility for, well, that's, you know, dwell time in a nutshell.

And that gives you the range of how bad it is, basically. If they've been there for a while, you're like, oh man, what's going on? And that's kind of what happened at MGM. They were already in there and they did some damage, and then when they were detected, that's when they encrypted and went to the ransomware move. That's the early reporting coming out; not yet clear, but the disruption is clear. I mean, just generally zooming out, a lot of people see the AI trend, they see the security trends. There's almost, and I hate to say this word on camera because I don't believe it, a doom-and-gloom mentality. People are scared; they don't know what to do. I think it's overblown by a mile, personally. But the unknown scares people. What's the truth when it comes to security and AI? What's real, what's not real? What should people pay attention to? How should companies defend themselves? I mean, it's a free-for-all. Like, where's the government helping? I know that's a lot in there, but feel free to share.

Plenty for us to riff on in this one. There's a lot here. So just to get us going, I think there are three layers here: what are the attackers doing, what are the organizations doing, and what are the defenders doing? And I'm happy to come back to this, because we've seen some fascinating things both within our customers and within the open source community. What are the attackers doing? They're doing the same thing the defenders are doing. They're using AI to automate their attacking, and their code development, by the way. So they're getting faster. One of our partners made the comment that with AI, the incremental cost of labor has gone down. Everyone gets more productive, and unfortunately that includes the attackers. So they're doing that. What are the organizations doing?
They're trying to figure out: what exposure do I have, what policy should I put in place, and how are people even using this stuff? And that is all over the map right now, because you've got some early adopters that are being very aggressive, and you've got some folks that are just outright banning it in their organizations. So you've got a huge spread there. And the defenders, I think, are actually being really, really smart about this. The folks that we talk to, and again, we have the privilege to serve some really elite missions; we're a bizarrely high-end-focused organization from that perspective. We see defenders already going in and saying, help me translate alerts into English. That's simple, practical, valuable, and doesn't incur hallucinations and these other GenAI problems. Or: what is the wisdom of the crowds on how I would start investigating the following thing? If I've got distributed DNS exfiltration, where do I begin? But they also know that after that, you have to stop, because GenAI, remember, is predicting the next letter of an answer for you. It's generating stuff.

It's generating stuff, right? It's actually not good at generating interesting detection hypotheses. That's way, way too hard a problem to ask of a GenAI machine. So if you go down the wisdom of the crowds and it doesn't work, it's back on you as an incident responder, as a defender. With GenAI you get a massive spike in hallucinations; you get a massive spike in just bad advice and a bunch of stuff.

Well, the human in the loop is critical here, because that's the expertise that's being augmented, and the heavy lifting from AI enables the human to be better. So when they get better, what's next? Some reasoning happens at the human level; are there other AI techniques coming, or already here, to go to the next level? I mean, we've seen this in chess, well documented: humans plus computers against computers.
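The "translate alerts into English" use case described here is largely a prompt-construction problem in front of whatever model an organization uses. In the sketch below, `complete` is a hypothetical stand-in for a real LLM client, not any vendor's API; only the prompt assembly, and the constraint that keeps the model from speculating, is the point. The alert fields are invented for illustration.

```python
import json

def build_translation_prompt(alert: dict) -> str:
    """Wrap a raw alert in an instruction that keeps the model on-task.
    Constraining the model to restate the given fields, rather than
    speculate, is what keeps this use case low-risk for hallucination."""
    return (
        "Explain the following security alert in two plain-English sentences. "
        "Use only the fields provided; do not guess at anything missing.\n"
        + json.dumps(alert, indent=2)
    )

def complete(prompt: str) -> str:
    # Hypothetical LLM call; swap in your provider's client here.
    raise NotImplementedError

alert = {
    "signature": "ET MALWARE DNS Query to Known Sinkhole",
    "src_ip": "10.0.3.7",
    "query": "bad-domain.example",
    "severity": 2,
}
prompt = build_translation_prompt(alert)
```

The design choice mirrors the distinction drawn above: summarizing supplied facts is the safe, valuable task; asking the model to invent detection hypotheses is where it breaks down.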
Humans plus AI is better than AI by itself; we've seen that. Where does this profession go next? Because I'm just thinking there's going to be a creative class emerging in cyber around how to defend and attack, counterattacks, countermeasures.

There 100% is, and I think it's important to realize that the defender, and the defender's needs, are not one-size-fits-all. So let's break this down a little bit. You've got incredibly high-end defenders from very large organizations: hundreds, maybe even thousands of people just in the InfoSec team, or maybe even just in the SOC. They've got one set of needs. Contrast that with having maybe five or ten people in your security team while you're trying to cover the same scope as those incredibly elite defenders. So I think the answer to what's next in GenAI is a little bit different for those two groups. At the really high end, we see folks already doing alert translation, doing investigation guidance, even drafting SIEM queries using GenAI. And frankly, I think there's going to be incredibly broad adoption of AI-accelerated workflows in security across the technology landscape to serve those high-end needs. A tremendous amount of it.

And that stage-two thing you mentioned earlier can also be automated detection. That's another good point. I've got to ask you about the use cases. Obviously we're seeing AI as that third inflection point: web, mobile, AI. It changes the app environment, the underlying infrastructure, and the user experience expectation, the good, bad, and ugly. As AI comes out, you're seeing things like vishing, voice phishing, get popular. Now you can see voice activation; now you have all these new ways to social-engineer.
I mean, a lot of what's coming out of MGM and others is that LinkedIn is a great environment for that kind of identity-based bait and switch, and then the social engineering, and then you get vishing, voice phishing. What's next? This is like a whole other level.

It is a whole other level, and I think it's going to continue the speed problem. And again, the paradox: we talked about this dwell-time challenge earlier, of ransomware that's super fast and destructive and the APT stuff that's super slow and kind of sinister. So take that into GenAI, the image generation, the voice generation, exactly what you're saying: that's all going to accelerate the pace and speed of attack development. It's not necessarily going to create new attack types; it's going to accelerate the development of the existing ones. So now we're going to get two things. We're going to get things like ransomware with that improved phishing and vishing effectiveness; we're even having to invent new acronyms, and that's never a good thing, right? It's going to accelerate there. So then defenders are going to focus on how they can radically improve the speed of response of their tier one, and the automation of their tier one, against these common but very fast attack types. I think it puts a tremendous premium on speed. But you're going to get the other thing too, because if you can accelerate attack development, you can accelerate your attack detections as well. So I spoke to a CISO last year who had a really interesting strategy that followed this twofold punch. He said he has a twofold strategy. One is: I need to be able to respond to routine threats in six seconds. Six seconds, right? That is fully automated, zero touch. You can use AI productively in this case: AI plus the automation platforms, plus the SIEMs, everything else.
You can drive that. But on the other end, he said, look at the kill chain. I want to focus on lateral movement. I want to take that stage of the kill chain, and I want to identify and drive to zero every vulnerability in lateral movement. Because that's where I actually get defender advantage. It's where I get the most ability to see what the attacker's doing; I get the most bites at the apple. And if I can truncate that, if I can cut the kill chain right there, I buy myself the most time to find that attacker in the rest of the kill chain. So I think it comes down to this: as GenAI continues to amplify in power, you're going to get attack velocity from the attackers, and that means the defenders almost have to get bipolar, right? And I mean that in a good sense of the word.

And the thing about IT, and Dave Vellante and I have talked about this, and Rebecca and I were just talking about it before we came on camera, is the personnel, the workforce. The pace of play in security is so fast. You look at old traditional IT; if you're an organization in, say, schools or education, they have old-school IT environments. That's why they're getting hit with the ransomware: low-hanging fruit for the hackers. But the pace of play is so high. The budgets aren't increasing 10X. The talent pool is not increasing 10X. The data is increasing 10X-plus, some say more. The hackers are increasing, in groups and organized crime units, at multiple levels. What's the answer on the personnel front?

Well, I think it's a couple of things. Number one, constructively using GenAI to offload the lower-value, more routine stuff from your team. That both gives them time to focus on the more advanced topics and, frankly, gives them time to train and learn the environment. This is such a core issue. To your point, pace. Let's ignore that super high-end organization for a second and come back to a team that might have five or 10 or 15 people in their security function.
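The CISO strategy relayed above, full automation for routine threats and concentrated human attention on lateral movement, amounts to a triage policy. A minimal sketch; the category names and playbook labels are invented for illustration and are not any vendor's schema:

```python
# Toy triage policy: automate the routine, escalate the lateral movement.
# Categories and playbook names are illustrative only.
ROUTINE = {"commodity_phishing", "known_malware_hash", "blocked_c2_beacon"}

def triage(alert: dict) -> str:
    """Route an alert: zero-touch playbook vs. human investigation."""
    if alert["category"] in ROUTINE:
        return "auto_contain"        # the "six second", zero-touch path
    if alert["category"] == "lateral_movement":
        return "escalate_priority"   # maximum defender advantage lives here
    return "analyst_queue"           # everything else: normal tier-one review

print(triage({"category": "known_malware_hash"}))   # auto_contain
print(triage({"category": "lateral_movement"}))     # escalate_priority
```

The split reflects the twofold punch: the routine path buys speed, and the lateral-movement path buys the defender the most chances to cut the kill chain.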
They're trying to cover an entire enterprise: all the different cloud services, all the different endpoints, the network combination, zero trust, you name it, pick your buzzword, throw it all in the soup and mix it up. Those folks have a shockingly hard problem, because they face the same attacker profile, the same kind of savvy, but they've got to be jacks of all trades and masters of them all. So if we can use the GenAI pieces to offload and automate a huge chunk of that team's routine work, they can actually just breathe, catch up a little bit, and that will enable them to do some of the threat hunting. And you and I were talking before we went live about the kind of thing I worry about in the background. It comes back to the government regulation question, because there is a bunch of appropriate concern around how you take what is essentially dynamite and make sure we're making tunnels and not blowing people's hands off. That's a horrible analogy, but let's make sure the tool is used appropriately. But we've got this massive mismatch between the speed of the legislative engine and the speed of not just cyber, which has always been a problem, but the speed of tech development in AI. So how do we hit that balance where we're providing some guardrails at the same time? But let's solve the problem, right? It's not a new problem.

It is a new problem, yeah. And the regulation is BS, in my opinion, because, one, they can never match the speed, and two, it's emerging too fast. They've got to let it run. Let chaos reign, then rein in the chaos, as Andy Grove famously said. There are enough guardrails in place. And again, it's not like the government's helping companies; they have to hire their own militia, AKA the security department, not the IT department. So it's a wild west right now. Regulation, I think, would actually hurt things, in my opinion.

I think it would be incredibly hard.
It's incredibly hard to regulate something that you don't really understand, and I think very few people truly understand what's going on. And the state of it changes every three to six months.

Which is fantastic.

And again, in terms of bets: if you've got a material player in cyber that doesn't have an AI-based offering or integration by the end of this calendar year, which is what, three and a half months away, then they're either drunk or asleep at the wheel. When things are moving that fast, and you've got fairly nimble, technically agile teams focused on exploiting, and I mean that in a good way, the capabilities of the technology, and you compare that to the regulatory apparatus: they're trying to figure out what the practitioners knew a year ago. That's kind of the state of research, which I think is just normal. So it's an impedance mismatch that we've got to figure our way through.

It's all exceedingly complex. Yes, yes. Well, Brian, thank you so much for coming on theCUBE. It's been a really interesting conversation.

Thanks for having me, I really enjoyed it.

Thank you. Stay tuned for more of theCUBE's live coverage of mWISE. I'm Rebecca Knight for John Furrier. Stay tuned.