 Welcome back everyone to MI's, theCUBE's live coverage of MI's here in our nation's capital. I'm your host, Rebecca Knight, along with my co-host and analyst, John Furrier. We are joined direct from the Bay Area, Anton Chewbacca, he is the senior staff security consultant at Google Cloud. Thank you so much for coming on theCUBE, Anton. Perfect, thanks for inviting me. Yes, so we're going to talk about security operations. SecOps, as they call it in the biz. Threat detection, can you just paint a picture for our viewers right now? Sort of, what is the detection and response landscape like right now? So, for this, I would usually refer to back to my original Gartner background. As some of you know, I left Gartner back in 2019. I dealt with security operations in DNR. Today, when I'm looking at this space, of course we see the emergence of new tools, most tools, most platforms are migrating to the cloud, but at the same time, something very interesting is going on. As the tool sets migrate to the cloud, many customers are challenged with monitoring their clouds. So the interesting thing is, even if the tools you use for security operations become cloud native, some of the customers thinking is not. Funny enough, my presentation at M.Y. is not to be very self-serving, is kind of about SOC meets cloud, what happens. And in many cases, what happens is a lot of angst and a lot of anxiety because people have the modern tools, but the processes are very much as I sometimes joke in the 90s. So the result is that they approach monitoring for cloud, threats, monitoring for cloud issues with very much the mindset of just give me the logs, I'll put them in my SIM, and everything, I'll do everything the same way. And the results are occasionally, should I say hilarious? No, they're really quite sad. And the result is that the thinking need to chase the tools. Tools modernize faster than thinking of many security operation teams. So when I deal with many of the SOCs that were built in the, well, many years ago, the SOC transformation concept comes up, is that they need to basically catch up the thinking, the practices, the skills with where the tools are. Building a modern tool, to be a bit self-serving, like Chronicle and a few others, is comparatively easy than changing the thinking of a generation of security professionals. So in that sense, I see this as kind of a practice process challenges being more interested and more painful than modernizing the tools. What's the data challenge? You mentioned it's like feeling like in the 90s being, it's kind of antiquated, because the change is so fast. You got to keep up, how do you modernize with the speed? And what specifically is outdated? If you had to peg a few things and change a few things, what's the knobs that are turning? What are optimizing for? Sure, so as a kind of a funnier side, I run a podcast and one of my guests a few weeks ago told me, we live in the world where developers are in 2020s, IT operations in 2010s and security in the 90s. I think that's a little harsh, and she kind of meant to be a little funny on that, but the example would be the SOC where people watch their screens, expect an alert to pop, and then they would like handle the alert, and it's all optimized for the volumes of data and the types of environments that's a lot more static. People update their asset depositaries every year, and I'm like, what? These assets change sometimes every minute if it's in the cloud, automation runs, systems change, images gets deployed, microservices. I mean, a lot of modern IT realities just don't match this type of fixed assets tracking by IP address, alerts handled by humans, humans escalate to like more well-paid, more intelligent humans presumably. All this is fine if we are in the 90s, but it's very much not fine in the rapidly changing world of cloud. So I got to ask you, you meant SOC meets cloud, they got collision in my mind, I'm like, I just see it exploding and like dynamite and carnage everywhere. You're talking about service levels like in the cloud services, cloud native services, and IT products, almost a mismatch, does that really, I mean, that's not a good fit, or is it, or how do they deal with the idea of I'm an IT on-premise mindset, meets cloud, higher level services, elasticity, scale? I think that the services, cloud services aren't the problem child. It's, in some cases, the problem child is the idea that I have servers, servers have IP addresses, they have names, they're on the third floor in the data center, and this person owns it and it's used for this business application. This type of a static lay of the land that really doesn't match the reality of modern environments. And so when this security practices, threat detection practices were born in that more static landscape, landscape changed, but the practices didn't catch up. And sort of we, we would talk to people about, say, a lot of pipelines and automation and how you'd need to be more engineering-led to kind of like work with developers that build services so they can monitor. While at the same time, and this is my pet peeve of literally today, when I was kind of unloading on Twitter over people who said, we shift left, but when they just left. I was just going to say that. Yeah, they shift left and they forget right. But where does the threat detection exist? It's kind of mostly on the right. And so when, and that's why I was trying to say, let's expand left, let's work with developers, let's work with applications as they're being built for improving detection response, but we are not leaving the detection of the run time state at all. Like we still need to do it and we need to do a better job on the left, but we shouldn't like abandon right. Yeah, yeah. And also there's the data piece too. Like the intelligence data on threats, not just an instance happening. There's a lot of information, data telemetry and observability data. And seeing a lot of those companies go out of business left and right. It's like, how many startups are doing observability? Or, maybe I'm over top. No, no, it's, it's another fun one, because the same kind of like the 1990s mindset that I kind of like joke about sometimes, does refer that you're looking at mostly infrastructure telemetry, like server logs, you know, Windows logs, Unix logs, but suddenly it's the distributed application. There's no OS because it's either SaaS or pass and it's all services, market services, whatever else. And the telemetry is coming from inside the app and the server-minded, system-minded people look at this and say, what is this stuff? Where is my familiar username, login, IP address, server name? It's a stuff from inside the app. And so sometimes it's a matter of skills and learning, figure out what it means, but it's also a matter of understanding how threats manifest in this new telemetry. You sort of know how password guessing looks like in the system logs, but how would somebody attempting to use the key that they found in code somewhere from another source, using it against the application? Like it's very much detectable, but you need to think about it before you do it rather than just replicate your old thinking. So Anton, how do you get teams to think differently? I mean, you're talking about these 1990s mindsets and how they need to be updated, new kinds of thinking and processes and change management really within these teams. So how do you get people thinking differently, thinking for a 2023 threat landscape? So I wish there's like magical answer and I say, well, read this blog or read this book or and then suddenly you have it. But unfortunately, apart from just good old fashioned training, learning, education, there isn't all that much magic in there. Like ultimately my, for many months, I was telling some SOC people, SOC analysts, to learn IIM. And they're like, but IIM is boring. And I was there. I thought IIM is boring. When I was a gardener, I made fun of our IIM team for being a password change team. Like, sure. But today, unless you understand how IIM in the cloud work, nothing you do would ever work. And moreover, when I was sort of spouting this advice and pointing people to some IIM resources, somebody saw stop me and said, Anton, you shouldn't say learn IIM. You should say learn IIM and keep learning IIM in the cloud because it changes as providers build new things, new paradigms, developers do new things. So in that sense, identity and access management becomes an area that you almost have to learn all the time and learn how cloud looks like. And by the way, when I said cloud, I really shouldn't have said that either. How each of the clouds, how we look, how our friends on two other friends we have in the cloud business look. And it would be a lot of differences even in what they name things. Yes. Well, that makes the stacks different and the naming conventions different between the hyperscalers for sure. But the cloud securing the cloud is a big challenge. And I want to get into, you wrote about this in your paper, the shared responsibility model is talked about a lot, but that creates a lot of seams. Okay, how is that going? What's your opinion on that these days, on shared responsibility and securing the cloud? So painfully, this is a very painful topic and it's also kind of my favorite topic because in some cases when people present a broad theoretical picture of saying, hey, share this possibility, let me explain. Data center physical security provided as an applications client as it's elated by. Like, sure if it were 2008, that's probably a good explanation, but there's a lot of nuance and there's a lot of activities that are inherently joined. Detection response is a good example because if I detect threats, I'm the client detecting threats using cloud provider tooling configured by consultant, responded to by a managed services provider and I jointly develop what I detect with somebody else in the cloud. Like five parties doing things around the activity. So that if I'm doing the naive visual red or blue or cloud provider client, that would be a gray area because it's a lot of inherently joined activities. And so to me, a lot of fun stuff and a lot of exciting new developments. We have multi-cloud, multi-vendor. I mean, Octa was just put out a statement today about the casino hack. They were involved, they got into the Octa. So they're involved, so it's not just. Moreover, if some kind of a second tier provider who uses them configured it for you, there are more parties. And to me, this is kind of a realities of shared responsibilities are a lot trickier than a blue versus green versus whatever table. So we are trying to do what we call shared fate. I'm pretty sure Google speakers who like to highlight that model and people say, well, what is this? And again, we try to say, that's where we help with things that are traditionally on client side. Because my former colleagues at Gartner like to highlight that 99% of cloud breach is a customer's fault. And there's a very, a lot of stuff written on this and there's a lot of my colleagues at Ociso said, it sounds a little blamey. And yeah. It's technically true, they're the party. They're the party of the hack. Of course it's their fault. But it's also kind of a little unfair in terms of how a world is built. So we are trying to work with the side that is traditionally seen as being joined or being on the client side. And that to me is the model that is just better. I think the Mandy and Google combination actually brings down the bar for the average company that needs protection. I got to ask you while we're on the topic, is the CASB cloud access broker thing work at all, help here in terms of multi-cloud security, CASB? Does that play a role in your mind? So this is kind of out of the left. Cloud access security broker. I have a really good answer to this. So CASB was born when I was with Gartner. And while at the time, people had some hesitations about fallout acronyms, now we have a whole lot. Pretty much every cloud security acronym is a fallout. There's one with five, one with five. CNEP of course, everybody loves that one. But CASB was seen as the way to go about securing software as a service. If you have five, 10, 50, 500 software as a service application that you use at a large company, the way to take control over that would be CASB. Is it working out? Deep in my heart, I feel like the jury's still out. Obviously it's working out better than the alternatives. But has CASB become the standard security control? I very much want it to be, but I feel like there are enough companies who are still kind of not quite seeing it. And moreover, there are companies who think that they need more C something, something, something acronyms in addition to CASB to secure SaaS services. So to me, unfortunately the answer is either it depends or it's complicated, kind of pick your poison. But it is very much part or should be part of the modern arsenal. I don't want to run many software service for business without CASB. So there's been a lot of conversation at this conference about the enormous potential of AI. You've recently authored a paper about how these systems are also very vulnerable to attack themselves. Can you talk a little bit about the ways in which you've seen these systems get tricked into spouting out malicious data? So this paper was born out of sort of obsession I had for some time about securing AI versus securing a large enterprise data intensive system. I wake up one day and I think, well, this AI application is pretty much similar in many regards as something else enterprise data intensive application in the cloud. So many of the security features would be the same. But then you read one more crazy article about Robert Rebellion somewhere and you think, no, no, no, it must be completely different. And this type of weird, is it more different? Is it more similar? Started to annoy me. And I thought, okay, I'm going to put my ex-analyst hat and I'm going to try to apply a systematic approach to deciding, okay, these are two buckets. This bucket has similarities. You have, you deploy AI or you purchase AI type services with its vertex AI or something else. There's infrastructure, you need to secure it. There's actually finding enough share dispensability behind who has this. And some of the things are similar. And then you say, we need to filter inputs because of prompt injections. But hey, you also need to filter outputs. That's unusual. If I have the normal system, it outputs whatever I put in. I don't need to filter outputs. For AI, this goes into the bucket, hey, that's new, I need to filter outputs. And the idea was to really systematically look at things that we know in security and see, which of them are just versions of what we had before and which of them you need to build from scratch or you need to reinvent. And a lot of stuff around data governance and data security sort of mostly fits into the new bucket. And a lot of infrastructure stuff, it's mostly in the old bucket. Meaning, do what you've done, just do it well and you are in decent shape. And then there's new stuff you need to do. A lot of regulation discussion and obviously there's a lot of hyperbole around AI. LLMs, foundation models, multimodal, that's the theme of AI. What needs to happen in your mind? We just put out a piece on the power law of models. You got the proprietary, which is mainstream and you got the long tail emerging in open source. Obviously, the software supply chain, data supply chain, what we've been calling it, there's a lot of innovations. But also, it's fast and loose. What's the old expression from Andy Grove, let chaos reign, then reign in the chaos? We're at an interesting time. It's clearly an inflection point, like the web and mobile, structurally. What's your vision on what has to happen next for a healthy, robust ecosystem of entrepreneurship, innovation to thrive in AI? So, when I think about this, that type of broad question, I kind of think about, again, kind of two buckets, one bucket is exciting innovations, even for security, using AI for detecting threats, using AI for explaining things, using AI to find connections between things. And then, so there's a benefits excitement bucket and there is, of course, the risks bucket. So, to me, when I think about both buckets, I'm kind of balancing using AI for security where it works well. But the risks bucket, to me, should be clearly separated within risks that affect you today. If your training data is controlled by an attacker, you're not going to have a good time. So, data supply chain type stuff, because you program with data, you need to focus on that. But there would be risks that are longer term risks and you need to think of them, but you probably shouldn't obsess about them today. So, if you're worried about deeper connotations about what will happen when AI becomes even more powerful, useful direction to think. But if you are a CISO being raided by ransomware and whose business adopting AI very quickly, Robert Rebellion is probably not your first priority or second or third. It's probably should be on the list given how rapid the stuff develops, but it should be clearly not confusing with immediate matters. So, to me, this type of risk prioritization, mundane advice, I know, but it does apply because if you read the headlines, you think that a lot of existential huge risks are coming any moment now and they may be coming. Don't get me wrong. But certainly not any moment. And, but there are risks that are coming at any moment. Again, training data poisoning, all the prompt injection, all the other exciting stuff. So, to me, this is type of the time ranking, the risks is probably how I would restore some sanity, by the way. Not, I don't think I'm going to restore sanity. Single hand. LLM is almost like an endpoint, right? You've got to protect what's coming into the prompts. You can actually manage that inbound prompting. The prompt injections. But also the outputs observing what the outputs are, are they with policy? And I've seen some exciting use of AI for securing AI. That's also very fun. Sometimes you have unexpected results which are like good or unexpected. When you use AI to secure AI, either the same or different. So, to me, we shouldn't, as a security professional, I guess, I shouldn't ignore doom and gloom. Otherwise, I don't know. They take my car. They live doom and gloom every day. They live doom and gloom every day. They wake up saying, I'm not going to be attacked today. My doom and gloom is there. Don't get me wrong. It is an AI, that's for sure. It's not AI, it's tech. So I guess the question I would have, and we ask all of our guests this question, is that this practicality, certainly, and that's great advice. Is there any clear lines of sight around low-hanging fruit opportunities to deploy AI today in a risk-free environment? Is that automation? Or is that just old DevSecOps kind of things? Or is there more of a wait and see? What's your advice on how to, if you're going to put your toe in the water, so to speak, what would you do? So it sounds like you're asking mostly about deploying AI in service of security goals, like to achieve security outcomes, not so much secure in AI, right? Or to help the organization, we heard in the keynote that burnout is a huge issue, and Kevin Mandi was saying, hey, I used to write big logs and go through logs and spend all my time doing legal reports. Compliance in journaling and do all that heavy lifting. It's like, AI, take care of it. So I would say that a lot of low-hanging fruit are probably not in generating code that you would run on your production systems. And I've seen people experiment with this, and I would say that the reports explaining, connecting the dots, helping to communicate and explain things is where I would look at first. People obsess about the G generative and they expect that AI would generate something in Python that they would then run on prod systems and then magic happens. And I mean, sure, no. But that doesn't mean, oh, it's useless. It's the opposite of that, because I think that some of the stuff we do, say with virus total data, what we call VT insight, where it kind of connects the dots and explain what you observed based on the data it has, is immediately useful, practical and risk-free. I thought your paper did a good job and thanks for sharing that with me, Rebecca, because the first advice was, first two things, is understand what the opportunities are and then get the data. Yeah. Yeah. Start there. Great. Excellent. Well, Anton, thank you so much for coming on theCUBE, a really great conversation. Perfect. Thanks for liking me. Thank you. I'm Rebecca Knight for John Furrier. Stay tuned for more of theCUBE's live coverage of MYs.