Hi everybody, welcome to Moscone West. This is theCUBE and our coverage of RSA 2023. The keynotes are kicking off. We're here with Sunil Potti, who's the Senior Vice President and GM at Google Cloud. Sunil, my friend, good to see you. Thanks for coming on theCUBE.

Good to see you again, Dave.

Making security invisible is your mission. You've heard that term before.

You know I have.

But how's it going?

I think one of the things I've found, between my prior life at growth-stage companies and somewhere like Google, is that Google was always big on this: have big dreams and make it an awesome environment for people to innovate. And since the advent of Google Cloud and the enterprise business, I think the one new thing that got added was to act fast, while still dreaming big and having fun. So with security, we can afford to take the long-term view rather than chase yet another new innovation. Under the Google umbrella, that's what invisible security means to us: can you actually, in five or ten years, materially make a change on the cyber side? Maybe not in a year, but in ten years. And can you do that by making security as pervasive as possible, but hopefully materially less complex?

It's probably going to take that long. It's funny, John and I were talking earlier. I think we were actually right here, or maybe it was in Las Vegas, talking to Pat Gelsinger, and I asked him, is security a do-over? This was probably five years ago. He said, yes, absolutely. And it kind of is a do-over in a way. People talk about zero trust, and now we're going to talk about large language models, but it's hard to do it over when you have so much technical debt. So I think it is going to take that long to really rethink it. But how do large language models fit in? You made some announcements around that. I'm also curious, because you have the Mandiant visibility now, how much are you seeing adversaries using foundation models in their attack vectors?

Yeah, absolutely. Let me start with some basics there. Essentially the way we look at this is, to your point that it needs a do-over, any market that needs a do-over, as you know, requires some platform inflections, like mobile.

Yeah, probably 15 years ago, right?

And cloud was another version of that, even though it took a while to differentiate cloud from hosting. In the world of generative AI, the analogy I use to set the context is this: when we went mobile-first, everybody had to go from a desktop browser to a mobile browser, because if your website didn't work in the mobile footprint, the checkout button wouldn't be on the screen, it just couldn't work. So you had to do that. The equivalent today is what we're seeing as chat interfaces, or conversational interfaces, backed by Google's Bard or ChatGPT or whatever it is. We call that layering generative AI on top of existing environments. But the real inflection point with mobile as a platform came when you created an app store and the ability to build mobile-first apps, so you could leverage the camera, swipe left and right, and create a new experience. Overnight, it became all about the apps.

Yeah, yeah.
And I think the opportunity we see at Google for generative AI in each of these functional areas, security being one of them, and you'll hear more about this at Google I/O, is that you can infuse AI rather than just layer in AI. Yes, it can be a chat interface, but think of every function that you do: oh, I'm parsing logs, can it generate log parsers? If I'm generating code, why not generate the security controls right there? You know what I'm saying? There's a plethora of things, if you think through it holistically, where you can infuse generative AI into every piece of the security workflow.

So to what we're announcing today, which is obviously based on a lot of the work that Google DeepMind has done and on Google Cloud Vertex AI as a platform: what we've done is we've taken a large language model that we call PaLM, which is quite widely used. We've just released a medical version of it called Med-PaLM. But essentially we've trained the LLM on all of the Mandiant threat intel data and the Google threat intel data, so that you can actually create, quote unquote, the industry's first security LLM. And it's delivered in an enterprise-grade platform that we're calling Security AI Workbench. The whole idea is that, independent of any products Google builds on top of it, a customer could start prompt-engineering a security use case on this platform while keeping their data as their data. Suddenly they have access to a security LLM on enterprise-grade infrastructure, trained on all this data. So that's the first step.

So the corpus is the Mandiant corpus and the Google corpus.

To start with, to start with, and then part of the-

Which happened pretty quickly. I mean, the Mandiant acquisition closed last fall, right?

It closed in November. And outside of the AI part, obviously, we've been rapidly integrating Mandiant threat intel and incident response into the platform. So, related but discrete, independent of generative AI, the strategic synergy between Mandiant and Google Cloud security was the following. Mandiant is sort of like the premier incident responder for zero-days, with awesome threat intel. Imagine if you could empower every one of those frontline responders with the best scalable security data lake that captures all your data, the best analytics, and the best proactive remediations, so that you don't necessarily have to come back for the same issue downstream. And it turns out that, in the world of AI, we are now taking the security LLM and, in each of the products, whether it's Mandiant products like Mandiant threat analytics, or Chronicle, or any of our products, we are building AI capabilities based on this Security AI Workbench. Essentially, every time a customer or a Mandiant responder is using a tool, it's surfacing auto-detections based on this large language model. And every time somebody helps a customer on the frontline, and I hope this is the first example of invisible security, you know, it's pretty hard to prevent patient zero, even though people talk about zero-days and all that. But I think with generative AI, we have a chance to prevent patient one. And what I mean by that is, we are there on the frontline, we find something out in the wild, we triage it, we try to remediate it, and if the rest of our customers are hooked into the same value chain on the same platform, the input is given to the large language model: hey, this new thing is in the wild. Every frontline defense and every other customer on the same Chronicle platform can level up to detect it. So there won't necessarily be a patient one.
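To make the Security AI Workbench idea above concrete, here is a minimal sketch of what prompt-engineering a security use case might look like from the customer side, with the customer's telemetry staying local. The function names are illustrative and the model call is a stubbed placeholder, not a documented Google Cloud API.

```python
# A minimal sketch of prompt-engineering a security use case against a hosted
# security LLM. Only the prompt construction is the point; the actual model
# call is stubbed out and the names are hypothetical.

def build_triage_prompt(log_line: str) -> str:
    """Wrap one suspicious log line in a SOC-analyst triage prompt.

    Only this single line would be sent to the model; the rest of the
    customer's telemetry stays on the customer's side.
    """
    return (
        "You are a SOC analyst assistant. Classify the following log line as "
        "benign or suspicious, list the indicators you relied on, and suggest "
        "one next investigative step.\n\n"
        f"LOG: {log_line}"
    )

def call_security_llm(prompt: str) -> str:
    # Placeholder: in practice this would call the hosted security LLM
    # (for example through a Vertex AI client). Stubbed so the sketch runs.
    return "stubbed response: suspicious (base64-encoded PowerShell argument)"

if __name__ == "__main__":
    line = '2023-04-24T10:12:03Z host=web-7 proc=powershell.exe args="-enc JAB..."'
    print(call_security_llm(build_triage_prompt(line)))
```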
Okay, so I get the design-in piece. Ahead of RSA, you get inundated with data: you guys wrote up the double supply chain hack, Mandiant wrote that up, CrowdStrike came out with a study, Palo Alto came out with a study. Everybody does, and it's very useful; you've got to take a bath in the data. One of the things I want to ask you about is design-in versus how a practitioner can use it. Maybe that's just invisible, maybe that's the point. But one of the reports, I think it was Palo Alto's, said that 80% of the alerts come from 5% of the rules. This is the same old, same old. There's got to be a way to prioritize this, presumably AI.

You know, if you just think about it, AI in general has had some stages. Going back to first principles, it had identification, classification, and then generation. Generative AI is the third part. But if you really used AI, one of the examples was, hey, I can classify things as patterns, and in security, classify things as high value versus low value. And generation builds on top of that to say, can I actually generate the code for you so that you're not doing the job yourself? For all these rules. Writing these rules is pretty hard, so can I actually generate the rules for you? But the classification of high priority versus low priority was actually pre-LLM, in reality, because there was AI before LLM-based AI, right?

So, to address some of those report questions, the way we looked at this problem was, look, there are the three T's of security that are always top of mind: threats, toil, and talent. You started talking about threats. And when you look at the threat problem with LLMs in particular, good actors can become better, and bad actors can become way worse, just because of the way LLMs can impersonate phishing sites. They're already doing that.

More capable, meaning?

They can impersonate a real-life phishing vector far better than you could before.

Have you seen that in the last 120 days?

Not explicitly at a breach level, but we've seen incidents of it thrown out as examples.

With a signature that has a foundation model behind it.

Yeah, I'll give you a simple example. We released VirusTotal Code Insight today. Within the first hour, based on the usage of it, we found a new piece of malware that none of the existing detectors from the community had detected. So the whole point is, you now get the system working for you rather than just people or a dispersion of tools. So that's the threat part.
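As a concrete illustration of the pre-LLM classification step and the "80% of alerts from 5% of rules" finding mentioned above, here is a small, self-contained sketch that finds the handful of rules driving most of the alert volume so they can be tuned first. The data structures and field names are toy examples.

```python
"""Sketch: find the smallest set of detection rules that account for most of
the alert volume, i.e. the '5% of rules behind 80% of alerts'."""
from collections import Counter

def noisy_rules(alerts, volume_share=0.8):
    """Return the rule ids that together account for `volume_share` of alerts."""
    counts = Counter(a["rule_id"] for a in alerts)
    total = sum(counts.values())
    covered, selected = 0, []
    for rule_id, n in counts.most_common():
        selected.append(rule_id)
        covered += n
        if covered / total >= volume_share:
            break
    return selected

# Toy data: one rule produces 80% of the volume and is the obvious tuning target.
alerts = [{"rule_id": "r1"}] * 80 + [{"rule_id": "r2"}] * 15 + [{"rule_id": "r3"}] * 5
print(noisy_rules(alerts))  # ['r1']
```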
I think the second part, which we've talked about, is toil, which is really that every security practitioner has too many tools, and all that usual stuff. But ultimately, again, what we mean by making things invisible is: if code can be generated, why can't it generate the security controls? Why can't it generate the compliance controls? And every time some of that is generated, why can't it generate the tests for it too? So we have a series of capabilities there that we're building in.

A good example of that is open source. That is a huge vector. So what we've done is we've pulled a version of GitHub, sent it through our pipeline, where we do fuzz testing and a whole bunch of vulnerability testing, and then we give it back to the world and say, hey, this is our assured version of open source. That pipeline, we used to run it using our own tests; now we use LLMs to generate tests into the supply chain. So that's another example of where LLM-based technology is hopefully making coverage non-linear across the surface area we've got.

So you generate the test. Does it actually run the test? Because a lot of people just don't test.

And I think here's where it cuts over into the last dimension, talent. When you look at the security practitioner, as you were saying, there are two types. There's the tier one, tier two, or whatever, specialist, and the talent problem exists because you can't get enough of that talent. But in reality, every customer, every company, has developers and business analysts. So the art of the possible on the talent side, which is related to that prior question, is: could you get everyone who's not a security specialist to help you toward a security outcome? As an example, if I'm a developer, people talk about DevSecOps and all, but in reality, could you programmatize the ability to generate security tests? Just like you have unit tests, you should have security tests. Think about that construct. It took a while for unit testing to become a programmatic paradigm. So I do believe this is the lowest-hanging fruit: security in the life cycle of a developer and an operator is going to become like unit tests. And then the next step is, while you use that to democratize a larger surface area of talent, you give these tier one, tier two operators some assistive capabilities. You take the Mandiant threat intel, you summarize it, you give them some insights. You don't have to have a PhD or be a Mandiant specialist; suddenly your tier two analysts can become like a Mandiant analyst.

So just like generative AI can make a lousy writer a decent writer, and a really good writer an even better writer, the same thing applies in security.

That's exactly right. And I think that's fundamentally how you break the talent gridlock, by doing both these steps. You still need the specialization, but at the same time, you need to find a way to bring more people in to help you with a security outcome.
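Here is a minimal sketch of the "security tests alongside unit tests" construct described above: a test that fails the build when a pinned dependency matches a known-vulnerable version. The package names, versions, and advisory set are made up for illustration.

```python
"""Sketch: a security check written like a unit test, run in the same CI job
as the rest of the test suite."""
import unittest

# Hypothetical advisory data, inlined for illustration only.
KNOWN_VULNERABLE = {("examplelib", "1.2.3"), ("parsertool", "0.9.0")}

def parse_pins(requirements_text: str):
    """Extract (name, version) pairs from '==' pins in a requirements file."""
    pins = []
    for line in requirements_text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "==" in line:
            name, version = line.split("==", 1)
            pins.append((name.strip().lower(), version.strip()))
    return pins

class DependencySecurityTest(unittest.TestCase):
    # In CI this would read the project's real requirements.txt; inlined here
    # so the sketch stays self-contained.
    REQUIREMENTS = "examplelib==2.0.0\nparsertool==1.1.0\n"

    def test_no_known_vulnerable_pins(self):
        bad = [p for p in parse_pins(self.REQUIREMENTS) if p in KNOWN_VULNERABLE]
        self.assertEqual(bad, [], f"known-vulnerable dependencies pinned: {bad}")

if __name__ == "__main__":
    unittest.main()
```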
And I like the concept of invisible because, and I don't know if Google says this, but a lot of vendors say you've got to spend more. And we do spend more. Every year we spend more; the industry spends a hundred billion dollars or more, and the problem seems to get worse. So I don't think spending is the answer. It's a do-over and a rethinking, maybe.

It's a starting point. I think, again, if you apply the analogies to mobile and cloud, and I personally don't have a great general answer to this, intuitively it feels like you still have to spend more, but hopefully you're spending more on fewer things, more platform-level things, that give you non-linear, step-function value downstream.

Big levers.

Yeah, and I think we need those in security, because it's happened in infrastructure, as we know, and it's happened in computing, as in mobile. I think in cyber, it's time it happened.

So I've got to ask the question, because everybody knows Google's ahead on AI. Over the last 120, 150 days now, I've been wondering, and I don't know if you can answer this, it's hypothetical: do you think you would have announced these things at RSA if not for all the hype around large language models? Would you have waited? You know, people are saying, hey, pause.

No, no. The work, as you can imagine, has been going on for much longer.

Yeah, of course.

The question of timing, at least in the security landscape: the way we have our roadmap internally, with all these features, there are quite a few that are not on this list because they fall on a curve of risk and reward. And if you plot that two-by-two, Dave, there's high risk, high reward, there's low risk, high reward, and the other options. What you're seeing, what we've announced today, falls into high reward, low risk, because it's about threat intelligence.

And low risk.

Yeah, it's low risk and high reward, right? And then it depends on the uptake, and on the general awareness, because Google in particular, to use the overused phrase, with great power comes great responsibility, we are a bit more aware of the responsibility portion. That's why you've seen us a bit behind the curve, right?

You're big. You've got a lot to lose.

So, yeah. But in general, as you've heard now, I think the art of the possible is to make sure people are aware of all the things we could be doing, but to on-ramp them a little more responsibly.

Right. Like you said, you're Google. I remember I was interviewing Robert Gates, it was mid last decade, and, naively, I was saying, yeah, but doesn't the United States have the best security technology? Can't we go on the offensive? And he goes, well, yeah, we do, and we can, but we have the most to lose. Our infrastructure, you take that down. We have to be very careful about how we approach this.

Right, and I think that is sort of the art of the possible there. But I can tell you that, at the inflection point for what I would call high reward, low risk, there's quite a bit of fertile ground across Google that you'll start seeing, starting with security.

Okay, that's a great answer. That's where you're focused. What about Accenture, your new friends there, what are you doing with them? I know that was-

Yeah, Accenture. So, one of the things going on across the world, as you know, is that everybody and their dog is building an XDR solution. But if you actually look at the buying centers, there's not really an XDR buying center. There's a SIEM buying center, a SecOps buying center, and so forth. People are pivoting.
But in reality, the buying center transition is that there are quite a few customers saying, look, whether I'm a large bank or a mid-market manufacturing company, maybe I should pivot to a managed offer, because a managed offer gives me a level of warranty, a level of risk management, and so forth, even if it's not all people-based, a managed service, I should say. And essentially what we have found, with Chronicle as a platform that we have intentionally built, is basically this: security is a data problem. And if security is a data problem, you can't put any limits on data, you can't put any limits on retention, you can't put any limits on cost, things like that. The prior world of security as a data problem did exactly that. Once you unlock that as an architecture, suddenly every managed services provider that was providing security has a platform where they don't have to worry about cost; they can provide value. So Accenture's announcement, basically, is that globally they've made a strategic decision to partner with Google Cloud. They're re-platforming the managed services they deliver to the global 500, or whatever, on this next-generation platform. And as part of modernizing security operations for their customers, they're also going to become the first partner to contribute threat intel to the LLM. Because ultimately, the more data that goes into training the product, the more value comes back to the customers.

So security is a data problem, I agree. And Google is good with data. You've got data at the core of your business.

We're good at storing lots of data, indexing lots of data, searching through lots of data.

You know what I mean, it's at the core of your business, right? For most companies, that's not the case. Their data, as you know, is in silos. We've talked about this before. How do you see that problem being solved? The data today is locked inside of applications, inside of business processes; it's not the reverse. And the answer everybody gives is, oh, let's put it into a central data warehouse and then we'll move it out of those. And then you talk about IoT and the edge. You're not going to solve that problem by sticking it in a single database.

No, but I do think there's a parallel in some ways to the non-security data lakes, lakehouses as a construct, and there are some differences too. The similarity is that, at the end of the day, unlike business data analytics where there's a business ROI, everything that you do with security events is risk.

Risk reduction.

Risk reduction and regulatory compliance, right? So you really don't want to pay for it, but the consequences of not doing it are high. So one of our core fundamental principles was to break that gridlock where you hated paying for a security data lake in the past, because you didn't get the value you got from a business data lake. And what we removed, because we could afford to do it, and very few companies can afford to commoditize security log storage at scale: we use the same storage that Search uses, we use the same compute Search uses, so our costs are cents compared to somebody else's dollars. Once you did that, suddenly the world said, look, let me send all my endpoint data now. Let me send not just two months of data, let me not just summarize, let me send DNS logs. Let me send everything. You take care of it, you index it, you store it, and now you have a normalized interface.

At your economics, at Google economics.

And fundamentally, I would say that's one example of a cyber platform that changes the game for how cyber could be done ten years from now. Five years from now.
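As a rough illustration of the "send everything and normalize it once" model behind a security data lake, here is a small sketch that maps heterogeneous sources (DNS logs, endpoint events) into one common event shape before indexing. The field names are illustrative, not Chronicle's actual schema.

```python
"""Sketch: normalize raw logs from different sources into one common event
shape before shipping them to a security data lake."""
from datetime import datetime, timezone

def normalize_dns(record: dict) -> dict:
    """Map a raw DNS log record into the common event shape."""
    return {
        "event_time": record["query_time"],
        "source": "dns",
        "principal": record["client_ip"],
        "target": record["qname"],
        "outcome": record["rcode"],
    }

def normalize_endpoint(record: dict) -> dict:
    """Map a raw endpoint/EDR event into the common event shape."""
    return {
        "event_time": record["event_time"],
        "source": "endpoint",
        "principal": record["hostname"],
        "target": record["process_path"],
        "outcome": record["verdict"],
    }

NORMALIZERS = {"dns": normalize_dns, "endpoint": normalize_endpoint}

def normalize(source: str, record: dict) -> dict:
    """Normalize one raw record and stamp it for ingestion into the lake."""
    event = NORMALIZERS[source](record)
    event["ingested_at"] = datetime.now(timezone.utc).isoformat()
    return event

print(normalize("dns", {
    "query_time": "2023-04-24T10:12:03Z",
    "client_ip": "10.0.0.7",
    "qname": "suspicious-domain.example",
    "rcode": "NOERROR",
}))
```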
So, I know the vision is to make it invisible, but what's the strategy? If I compare, say, Amazon and Microsoft, they seem different. Microsoft wants to compete with CrowdStrike and everybody else. Amazon is like, you know, here it is. So where are you guys?

The easy answer would be to say we're in the middle.

But I think you are, actually. And that's good.

I'd say we're on the third vertex of a triangle. Let me tell you what I mean by that. You've got Amazon, obviously; for them, security is the fries to the burger. Everything is, you come to Amazon and then we'll sort of make you secure and all that. Microsoft has done a phenomenal job; in my opinion, Satya at least has done a great job of up-leveling it, making it a first-class citizen. But, as you said, they're trying to push products in all directions. What we have said is, look, there are many areas we're not good at, and we want to build a platform approach so that the best of breed in those areas can come integrate with us. As an example, Palo Alto with zero trust and our product: we were one of the pioneers of zero trust, but functionality-wise there's a nice complement there. CrowdStrike and SentinelOne on the endpoint, and they can integrate with Workspace on our side. Okta on identity, and so forth. In fact, we've even partnered with Microsoft and Amazon on threat intel, with Mandiant and so forth. But there are some areas where we've taken an opinionated view, Dave, and made it very clear.

Like confidential computing.

Like confidential computing. But also this construct that all security data needs to be stored, indexed, analyzed, and then made available to all security apps using one ubiquitous platform. We're pretty good at that.

Google tooling.

Yeah, and that's what Chronicle is, built on Google's data stores, for example. We call that Chronicle, and it's what we think is one of the core underpinnings for all things security operations, whether reactive or proactive. Another thing is, look, we have a ton of information around the world's threat intel between Mandiant and Google. Can I take that, synergize it, and make it available to all the products? Oh, and by the way, we have a lot of open source hygiene that we've had to do for our 100,000-plus developers. Can we package all of that up, that open source hygiene, so that we just give a Citibank or any bank, hey, here's a thousand Java and Python libraries that we have assured. Wouldn't you want to use that? It's the same function as GitHub, but made secure, and on an ongoing basis we keep it up to date. So those are the places where we've taken an opinionated view, on certain fertile markets where we genuinely have a 10X advantage. And for the rest, we have a pretty transparent ecosystem so there are synergies of one plus one equals three.
Last question, on those high risk, high reward opportunities.

On the generative AI?

On the generative AI, come back to that. Is it likely you're going to watch the market unfold and see what happens there? I presume you're not going to take the arrows in the market. Is it fast follower, or kind of see what happens?

I'd say that in the platform approach to security-LLM-based stuff, we're already number one in the market. Nobody's got that, and they also don't have the data that we have. We've got to land it now and make something happen in the broader market. But because we're out there, independent of whether we're first or second, and in this case we happen to be first, I think the adoption of that, Dave, will drive the needle on how far down the risk vector you go. Because if our customers are rapidly deploying and using it to generate detections, like VirusTotal and all that, it's only a short leap to generating remediations. And even when you generate a remediation, that's an example of higher risk, where you actually change the posture on the customer's behalf. You can send it through a bit of a scenario: hey, Dave knows what he's doing, send it to him to validate it for a little while and frame it, and then eventually take the action. But at least you're getting the short circuit done there.

So within cybersecurity, you actually will lead. Is it your intention to lead in that quadrant?

Hopefully, yes. So nice to see you, man. Good meeting you.

Thanks so much for coming on theCUBE again. It's been a long time. We'll do it again soon.

Yeah, I hope so. Maybe this summer.

We'll do it this summer. All right. Okay, keep it right there. Dave Vellante for the whole CUBE team. We're here at RSA Conference 2023 in Moscone West. We'll be right back.
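As a postscript, here is a minimal sketch of the human-in-the-loop remediation flow described in that last answer: a generated remediation sits in a queue until an analyst approves it, and only then is the action applied. All names and the approval mechanism are illustrative, not a real product workflow.

```python
"""Sketch: gate generated remediations behind analyst approval before any
posture-changing action is taken."""
from dataclasses import dataclass, field

@dataclass
class Remediation:
    detection_id: str
    proposed_action: str
    approved: bool = False
    applied: bool = False

@dataclass
class RemediationQueue:
    pending: list = field(default_factory=list)

    def propose(self, detection_id: str, action: str) -> Remediation:
        """Record a generated remediation as pending review."""
        item = Remediation(detection_id, action)
        self.pending.append(item)
        return item

    def approve(self, item: Remediation) -> None:
        # The "Dave knows what he's doing" step: an analyst signs off.
        item.approved = True

    def apply(self, item: Remediation) -> None:
        if not item.approved:
            raise PermissionError("remediation not approved by an analyst")
        # Placeholder for the real change (block a hash, revoke a token, ...).
        item.applied = True

queue = RemediationQueue()
r = queue.propose("det-001", "quarantine host build-42")
queue.approve(r)
queue.apply(r)
print(r)
```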