Welcome back to SuperCloud 3, live here in our Palo Alto studio. I'm John Furrier with Dave Vellante, unpacking next-generation cloud data security, obviously with security and AI. And now generative AI: a lot of hype, but a reality coming into the picture. We're going to try to break it down as the next-gen applications hit the market. We're here with Jay Parikh, CEO of Lacework. CUBE alumni, great to see you. Thanks for coming in for a live performance. Thank you. Thank you for having me. So, SuperCloud 3: security plus AI, obviously part of the big picture. Security has got to be baked into all operations; that's kind of the table stakes. People are talking now about how it's also a data problem. It's also an opportunity to build in a cloud-native-like experience. People are talking about these things, but at the end of the day, the hackers are attacking on offense faster than the defense can keep up. So it's kind of a pro game, but you've got a developing market at the same time going on. Love to get your thoughts and perspectives on what you guys do at Lacework. So how do you see this? Yeah, absolutely. So ever since the company's founding back in 2015, we've always approached and thought about securing the cloud as a data-first problem. What's happening in the cloud is just constant chaos. There are so many things changing. You want to be driving faster and faster rates of innovation. The cloud provider, the cloud infrastructure, the cloud stack itself is also always adapting, always getting better: multi-cloud, different technologies. You've got different applications that are bringing in different types of services. And whenever the cloud providers themselves do their events, they launch a whole bunch of new capabilities, which is great for driving and building new applications, driving innovation, but then it creates a whole other category, another set of facets, on the risk model in your organization. 
So the only way, we have always felt, that you can keep up with and potentially stay ahead of these security risks is to drive this through collecting and processing a lot of data. That's the only way to automate driving the security outcomes. You can't do this the old ways we handled security on-prem in our data centers. Talk about the old versus the new. You have a historical perspective; we've seen many waves of innovation in the past. In the conversation that's happening now, the old way comes up a lot. Like, oh, we don't do firewalls anymore, that's antiquated. We do it this way: zero trust, all these kinds of new architectures are coming out. The perimeter's gone; it's more surface area. What are some of the old ways that have changed, and what hasn't changed? What are people doing that's on the right side of the historic wave here now? What are you seeing? I think fundamentally it's just knowing that where we are today with cloud, and where it's evolving, is forcing you to rethink the problem, and the solutions, from first principles. Trying to copy and graft ways of doing things from an on-prem world into the cloud may help you when you're first getting started with the cloud as an organization, but they quickly become these speed bumps, these impediments, that then don't allow you to realize the value of the cloud, which is moving fast, being able to build these things, adapt, measure, iterate, and just build better value for your customers. So I really think it's one of these things where you just go back to first principles: what are we trying to do from an innovation perspective? Know and think through how you're going to be compromised, how you're going to be attacked in the cloud. With things like AI, you're generating a ton more data. 
And now you have all of the data that drives business value, but guess what? It's also very attractive to the people who are trying to get in and get that data to do something with it. So you mentioned chaos. Chaos is actually opportunity, certainly for the attackers. It's also opportunity for the technology companies that can defend and help defend. So you're talking about AI, or security as a data problem. So explain where you get the data. What data are we talking about here? And then I'm interested, as the next phase, in how AI fits into that. Yeah, again, ever since the beginning of Lacework in 2015, we've always taken the approach of: go find and get all of the right data that we need to then figure out how to drive the right security outcomes, put those outcomes in front of the right person who can then drive the action, in an automated, less burdensome, less scattered way, and make them more productive, right? To take this data and drive more productive and faster outcomes for these security teams, for these developer teams. So where do we get the data? It's pretty complex, but I would say there's one broad bucket of data that we get from the cloud providers' native services themselves. We get third-party data. We get other real-time telemetry. There's a whole set of ever-evolving categories of data that we get. That comes into our platform, and from there, whether it be things on the preventative side, like here are the things you ought to do to really secure your configurations, or things on the reactive side, like these are things you should go investigate because what we detect here doesn't look like normal behavior in your infrastructure, right? And that's ever evolving, as the cloud providers themselves offer more services, but also as the application stacks of the customers, the companies out there building on the cloud, keep evolving. So that data lives inside your platform. You persist it, you analyze it. 
Now you bring AI, and have been bringing AI, to that data. Can you talk about what AI that is, and how, when we saw the AI shot heard around the world, that changed how people are thinking about bringing in AI? Yeah, absolutely. So one of the core use cases where we apply AI, and sort of machine learning, in the technology is really inside of a customer environment: collecting all this telemetry and understanding what normal behavior is, what your employees are doing, what your machines are doing, what the operational activities are that occur in your environment. For example, John does this operation reliably every afternoon, and he does it from this location, but now all of a sudden, wait a minute, he's doing it on the weekend and he's doing it from these three other countries, all within, you know... and this is a simple example, right? But those are the types of things where the system will say, hey, normal behavior is this; now we find there's this weird behavior happening with John maneuvering in the production environment, and that is something you should probably go investigate. Again, this is a very simple example. The platform itself has many more capabilities to look for use cases that are not as obvious as that, like where there are little tiptoes happening in your infrastructure, where each one of those taps would honestly not be noticeable, but when you look at 14 of these taps together, you've been compromised. Can AI mask that and sort of make it even harder to detect those anomalies? Absolutely. 
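The behavioral-baseline idea described here can be sketched very roughly in code. This is a toy illustration only, not Lacework's actual approach: it simply memorizes which (action, location, weekend-or-not) combinations an actor has exhibited during a training window, then flags any event outside that set as worth investigating.

```python
from collections import defaultdict

class BehaviorBaseline:
    """Toy anomaly detector: learn which (action, location, weekend)
    combinations are normal for each actor, then flag events that fall
    outside that baseline. A real system would build statistical models
    over far richer telemetry; this only shows the shape of the idea."""

    def __init__(self):
        # actor -> set of observed (action, location, is_weekend) contexts
        self.seen = defaultdict(set)

    def observe(self, actor, action, location, is_weekend):
        # Record an event as part of the actor's normal behavior.
        self.seen[actor].add((action, location, is_weekend))

    def is_anomalous(self, actor, action, location, is_weekend):
        # Anomalous if this actor has never been seen doing this action
        # from this location at this kind of time.
        return (action, location, is_weekend) not in self.seen[actor]

baseline = BehaviorBaseline()
# Training window: John reliably deploys from the US on weekday afternoons.
for _ in range(30):
    baseline.observe("john", "deploy", "US", False)

print(baseline.is_anomalous("john", "deploy", "US", False))  # familiar context
print(baseline.is_anomalous("john", "deploy", "RO", True))   # weekend, new country
```

A set-membership baseline like this is deliberately crude; the point is only that "normal" is learned from the customer's own telemetry rather than from fixed rules.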
So you have the other side of this conversation, which is: okay, we use this technology to really sift the signal from the noise and to give practitioners and companies much more accurate things to go investigate and protect against, so that they're not working on things that don't matter. But think about the attackers out there. The attackers use this to mask, or to be much more sophisticated about, the types of attacks that they can do. They can orchestrate things now with AI that maybe they used to have to do manually through more sophisticated kinds of training or scripts and whatnot. They can program these AI systems to explore, to discover, in very innocuous, undetectable ways, in very minuscule ways, but then to put together a broader attack with a lot more steps, because it's all data- and machine-controlled and orchestrated, or guided, I should say. So attacks, for instance, that could self-form when they get to the other end. I mean, think about this outside of the context of cloud security. Think about just your own experiences with some of these tools, like ChatGPT, and what it's done to help you write or copy-edit an email or put together a presentation, and then think about how that can actually be used to socially engineer an employee in a conversation. The perfect phishing app. I mean, you know, spam, all of those things from a textual email perspective, we're going to see a whole... I mean, we've already been talking about that, that stuff's out there. But think about the social engineering going verbal. Like, you know, you're talking to somebody you think is a human and it's not. Awesome example. By the way, my Netflix: about the time they got to that password-sharing crackdown, it was simultaneous access from all different places from my kids. You mentioned developers, and I want to get into that. 
We see a lot of activity around the kind of super cloud and security data, and operationalizing data at scale is one of the conversations we hear a lot about: more data, more leverage, more access to better things, goodness around that. And then the developers. As developers start building apps to solve problems with data, having data available for developers, what we're calling the data developer, is going to be more commonplace. Right now you're starting to see the beginnings of that in open source, a lot more activity going on, certainly in AI and open source, but we envision a future where the developer is going to be absolutely immersed in the data capabilities to embed into their applications. What's your reaction to that? Absolutely. I think that's already happening in many companies today. And I think that as we collect more data and understand it through these systems, you can discover and explore and drive new features, and you can enhance the customer experience because of what you can do with data. Think about these companies, whether it be a Facebook or a Netflix: the value you get as a consumer comes from the ability of the company to mine that data to really find the right path ahead for you, to say, hey, you can get more value from this insight that comes out of the data, right? To bring this back to Lacework: today you would have to toil through all of these logs and data and graphs manually. What we often do is take that alert volume down by 100x, right? So you may be dealing with 1,000 alerts a day with kind of conventional systems, but we can give you the 20 or 50 or 100 alerts that really matter, down from that 1,000. That saves you a ton of time, and putting that in front of the developer, in terms of what to go action versus chasing a bunch of ghosts, is massive impact for the developer. 
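The "1,000 alerts down to the handful that matter" idea can be illustrated with a minimal reduction pass. This is a sketch of the general technique (correlate, deduplicate, prioritize), not Lacework's actual pipeline; the field names and severity threshold are invented for the example.

```python
from collections import defaultdict

def reduce_alerts(alerts, min_severity=7):
    """Toy alert-reduction pass: collapse raw alerts into one alert per
    (entity, cause) group, keep only high-severity groups, and rank by
    severity. Illustrative only; real correlation engines use behavioral
    context, not just grouping."""
    groups = defaultdict(list)
    for a in alerts:
        groups[(a["entity"], a["cause"])].append(a)
    reduced = []
    for (entity, cause), items in groups.items():
        top = max(items, key=lambda a: a["severity"])
        if top["severity"] >= min_severity:
            reduced.append({"entity": entity, "cause": cause,
                            "severity": top["severity"], "count": len(items)})
    return sorted(reduced, key=lambda a: -a["severity"])

# 1,000 raw alerts: mostly low-severity noise, two real signals.
raw = ([{"entity": "web-1", "cause": "port-scan", "severity": 3}] * 600 +
       [{"entity": "db-1", "cause": "priv-escalation", "severity": 9}] * 5 +
       [{"entity": "web-2", "cause": "odd-login", "severity": 8}] * 395)
print(len(raw), "->", len(reduce_alerts(raw)))  # 1000 -> 2
```

Even this naive grouping shows why the reduction matters: the analyst sees two correlated findings with counts attached, instead of a thousand individual pages.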
The observability market: we've seen the hype of that, and obviously there's some consolidation, but the game gets changed now with more apps and more telemetry coming in. I was talking off camera to, I won't say the name of the person or the company, but they collect everything, tons of telemetry data, and they use only a fraction of it; they can only get to a fraction of it. Their hope is that AI, the generative AI, will help pull that forward, pull that value forward. So that's one thing: a lot of data being hoarded and now stored, so more data storage happening again, never stopping. That's the key area: as you start to get the telemetry coming in from applications, it's a real promise area. I know you have a lot of experience in that area, been there, done that. Where's it going next? Where's that puck going to be, where people can skate to the puck? As we get more data, I can see some value being pulled forward. What's that next step? I think it's a continuum, to be honest with you. I think we're on this curve of kind of evolution, and the more data we get... and then, from a supply perspective, there are these disruptive technologies, so gen AI and the large language models themselves, but there will be things after that. I think we're very enamored and very entrenched and kind of enthused by what's currently here, ChatGPT and others, but this is just the beginning of the beginning in my mind. There's a whole set of things that are going to come out of this as people experiment. And I think right now we're going to be in a phase where lots of experimentation is going to happen. There's going to be a lot of stuff that's what I would describe as kind of trinkets: it would be fun to use for a little bit, but it's not going to drive a lot of long-term value in deciphering that data problem. 
And then there will be other use cases that emerge that are more immersive, that actually change buying behavior as well as user behavior. And I don't think we've seen that yet in the enterprise when it comes to security. Jeff Jonas was on theCUBE for this event. He said all these hot startups are getting a term sheet, and by the time they get their money, their feature's out of business. Disrupted. Because of the change. The rate of change is something that I feel like we humans do not understand: how fast this space is actually changing. It is changing so fast, right? You can have an idea on Monday and it can be disrupted by Wednesday. So this is the long-term value play, and I think this is an important point. As you look at it, and try to identify where that puck's going, what's a trinket, as you say, or what's a fad, what's real, what's foundational? How do you look at that? And what's your vision for how Lacework's going to capture that? Because again, this is an ever-changing thing, but the foundation has to be laid. The pace of play in security is huge. You can't fake it till you make it in security. This is a whole other ball game. No, and I would say there are a couple of points here to cover. One is, we again have started with this premise that security is fundamentally a data problem, right, through and through, from start to finish. How you drive insights from there will change over time, whether it be graphically through a UI, whether it be other insights through kind of action, whether it be interacting and exploring and discovering things in that data through search or through a kind of large language model interface to it. These are where we'll constantly be experimenting, and these will be platform capabilities that influence and change all of the user experiences in the Lacework platform. 
And then over time, what we have to think through, from a customer perspective and an industry perspective, is that there's a whole new class of data that's going to get created. There's a whole new set of threats that are going to be created; we're talking to our current customers about what they're worried about from a threat perspective, right? We talk a lot about the value that Gen AI is bringing from a top-line or new-business-outcome perspective, but you also have the other side of this conversation, which is: hey, how is this stuff going to be used to take me out in the business? Because now there's this new superpower that's been granted to everybody, right? And you have to think about both sides of this equation. So how are you going to protect your systems as you build these new systems at the same time? I don't think we can talk about these independently anymore. So who ultimately do you think benefits the most from AI? Is it defenders or attackers? I think it's too early to tell, but right now, I think it is equal. I want to ask you about applying large language models and Gen AI specifically in security. Sometimes you watch TV and you see somebody, and she or he is very articulate and forceful, and you say, that person's smart, and the tone is so confident that you believe them. I feel like ChatGPT in particular has a similar tone. So you have to be careful about how you apply it, especially in the context of security. It's like Jeff Jonas said: it gives you different answers every time. So where should we think about, or do you think about, applying Gen AI in security? I think in certain fields like security, and there are going to be other fields which I think are pretty obvious, the efficacy, the accuracy of these technologies is going to really matter, right? When you're putting this together and you're looking through data and applying these technologies, and you're 75% accurate or 80% accurate, that's not good enough in a security context. 
So how do we take this technology? How do we mix it with other technology? How do we make sure the system learns over time? How do we remember those learnings and keep getting that feedback so that the efficacy of the outcomes from Gen AI can get better, right? Because it has to be 90-plus percent, 95 percent; you'll never be perfect. But having a system that really does drive high accuracy, I think, is really important. Because if you are not accurate, and it's a fun user interface but you're only right two out of three times, then over its useful life it may be cool to demo, and you may get people to experiment with it, but when you actually get called and you're in the trench dealing with something, and one third of the time that thing is wrong or sends you down the wrong path, it's going to be a very frustrating user experience, and that could severely impact the business. And that accuracy, is it not a moving target? Another thing Jonas said is that entropy is winning: the randomness in the data is problematic. So once you get to 95% accuracy, it's not like you're assured of maintaining that level. Yeah, and that's where the feedback from the systems, from the practitioners, has got to be incorporated into the systems, right? We have to learn together. This is sort of a shared learning model, where the users, the threats, the technology all have to be harnessed. Those are also signals that these systems, in a company like Lacework, can harness, and we can build in advance and mature the models that way too. And human plus AI is better than AI by itself. That's the key part of that accuracy: in some fields, good enough is good enough, but not where those accuracy levels are needed. That's a big thing. Correct. Let's bring it into the cost piece, because in AI, and now in security, cost is growing compared to the rest of the market. Obviously security is never slowing down; like data, right, it never stops. But accuracy, but also cost to run workloads. 
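The shared-learning loop described above, where analyst verdicts feed back into the system so its accuracy improves over time, can be sketched minimally. This is an illustrative toy, not any vendor's implementation: it tracks confirmed versus dismissed findings per detector and computes a smoothed precision estimate that could be used to down-weight noisy detectors.

```python
class DetectorFeedback:
    """Toy feedback loop: record analyst verdicts (confirmed / dismissed)
    per detector and expose a smoothed precision estimate. A sketch of
    the 'human plus AI' shared-learning idea; names and the smoothing
    scheme are invented for illustration."""

    def __init__(self, prior_hits=1, prior_misses=1):
        # Laplace-style prior: a brand-new detector starts at 50% precision
        # instead of 0/0, so one verdict can't swing it to an extreme.
        self.stats = {}
        self.prior = (prior_hits, prior_misses)

    def record(self, detector, confirmed):
        hits, misses = self.stats.get(detector, (0, 0))
        self.stats[detector] = (hits + int(confirmed), misses + int(not confirmed))

    def precision(self, detector):
        hits, misses = self.stats.get(detector, (0, 0))
        ph, pm = self.prior
        return (hits + ph) / (hits + misses + ph + pm)

fb = DetectorFeedback()
for _ in range(9):
    fb.record("crypto-miner", True)   # analysts keep confirming this detector
fb.record("crypto-miner", False)
for _ in range(8):
    fb.record("noisy-login", False)   # mostly false positives
print(round(fb.precision("crypto-miner"), 2))  # 0.83
print(round(fb.precision("noisy-login"), 2))   # 0.1
```

The design point is the one made in the conversation: the 95% is not static, so the system has to keep consuming verdicts to hold, or regain, its accuracy as the environment drifts.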
If there's an AI component, it might cost more to run it over there versus there. So the cost to manage and secure services is a big deal. What's your vision on how that's going to play out, and how do you think about that cost equation? Because cost is a relative term, but at the end of the day it could be massive costs. Yeah, absolutely. I think it's a trade-off that every business is going to make based on what's a priority in the company, right? Because oftentimes you may put your cost into new things where you're really trying to gain market share, or you have a competitive threat, and then you're going to optimize it later, right? It is really hard to invest in optimizing something when it doesn't have product-market fit or it's not scaling yet; that's probably the wrong trade-off, honestly. So I think it's going to really depend. I think we're in that phase right now where there is innovation, there is support for kind of experimenting with things, and dollars are going to shift into experimenting with the AI stuff. But hey, keep it secure too. So we need to put some dollars behind that, and people behind that as well, and then we'll optimize elsewhere in our budgets to fund it. That's the trade-off, and the conversation I hope we have more of. But also, I think these things are on a curve: if you think about the cost of running these types of models a year ago, or even three months ago, versus where it is now, versus where it will be a year from now, it is dramatically shifting and lowering right now. So whatever your expense envelope is today, what you can do with that dollar six months from now is not just 10% more; it's a lot more. Security's held up pretty well generally throughout these macro headwinds. Yeah, absolutely, it's definitely still a top priority. AI was sort of getting quiet there before GPT; now all of a sudden it's shot back up all across. But you're seeing other trade-offs. 
People are saying, I see in the data a little less RPA than I saw before; that could be some cannibalization from automation. Automation. You're certainly seeing fewer laptops bought than we have over the last several years. It seems like they're overall saying, we're going to shift things and, as you say, put them toward machine intelligence to figure out what we can do with it. And as companies figure this stuff out and unlock value, they build more products and they're able to deliver more value to their customers, and then they're going to feed this back into their budgets and invest more, spend more, et cetera. So I think this is all part of the cycle we've seen. And we love that open source is growing; it continues to be great. So that's a big factor for the ecosystem. It's a big part of the disruption in AI, the stuff that gets constantly open-sourced. Yeah, good stuff. Final minute we've got: give a plug for the company. What's your vision? What are you guys working on? What are the key things you've got going on in the market? Yeah, absolutely. I mean, our vision here is to be the security platform for the cloud, and to really approach all of these security problems that companies face around the world, in this fast-moving, kind of constantly changing cloud environment, with a data-first, data-at-scale approach. And we want to do it in an intelligent way, where we can bring in things like ML and AI and drive these workflows so people can focus on the things that matter, do the right amount of work, and not waste effort and feel a lot of toil in their day job securing their infrastructures. So the things we're working on, honestly, are really about staying focused on a set of outcomes to help developers, to help these different people out there. What can we do to help developers write more secure code, and not waste time securing things that are never actually running in production? All the way to helping the people who get paged at two o'clock at night with a page that says, hey, I've got a threat. 
I've got a breach happening right now. Like, what do I do with this? How do I mitigate this? We had a customer recently that said they were using Lacework for part of their environment, and they weren't using Lacework for another part of their environment. They got breached, and they said, hey, if I had had Lacework in this other part of my environment, I would have found the attacker in minutes and kicked them out, versus spending five weeks investigating what happened in that breach. And they still had to go, as they say, do recon on the hack; they've got to go investigate. So you can get early-warning detections with data. That's the data-native market. So everything we're doing from a data and an AI perspective is really about securing everything from the code, from the developer, all the way to the cloud, the production environment. So that's our vision. That's what we're working on. Tons of technology, tons of products to accomplish that. Yeah, a perfect super cloud conversation. Bringing data-native to scale, scaling up the data as people store more. I mean, data is the competitive advantage; that's what we're hearing in AI. Jay, thanks for coming on SuperCloud. Thank you. Okay, I'm John Furrier with Dave Vellante; we'll be right back with our wrap-up after this short break.