Hello, and welcome back to SuperCloud 3, where we're discussing the future of AI-enabled security in the cloud. I'm pleased to welcome to the program Phil Venables, a Google VP and the Chief Information Security Officer at Google Cloud. Phil, welcome.

Yeah, thank you, great to be here.

Hey, set up your role for the audience. A lot of the tech company CISOs that I talk to are pulled into helping their customers improve their security, or the sales force loves to take them on calls, to help customers better understand best practices, what the shared responsibility model is, how Google, for example, does security. Share with us your role. Where do you spend most of your time? Is it securing Google Cloud, is it working with customers, or both?

Yeah, so most of my role is internal. I focus a lot across not just security, but privacy, compliance, trust, risk, and resilience, across not just Google Cloud, but areas like Google Workspace, all of our enterprise businesses, and a big part of our underlying technical infrastructure. So I probably spend 80% of my time there, and then 20% of my time on customer-facing, external-facing initiatives. But I would say I have a whole team, we call it our Office of the CISO, which is about 40 people. They're all former security leaders, former CISOs from organizations, and they're my main customer-facing arm that works day to day with customers, not just on how to think about Google Cloud security, but how to think about security in a hybrid, multicloud, or, to use your language, a kind of supercloud environment. Customers really enjoy working with our team on that because they have that lived experience of being a CISO in a customer environment. So I spend a lot of time supporting them, but I have a whole org that does the customer-facing work.

I wonder if we could talk a little bit more about that regime. People are always asking us, what's the right regime? How should it evolve? And it's particularly interesting in this world of AI, which is now front and center on everybody's mind. I mean, 40 people sounds like a considerable team. On the other hand, it's Google, so it's a pretty efficient team. What is the right regime from your standpoint?

Well, our internal teams are much larger than that team, which faces our customers at a senior level. And in fact, in our customer-facing organization, we also have professional services teams, customer engineers, and security solutions engineers, so it's much larger than that. But generally, when we think about security across our whole organization, it's this combination of having some central teams that provide consistent infrastructure and tooling to really drive our philosophy of secure by design and secure by default in the infrastructure and the products, and then a whole series of federated specialist teams that are embedded inside product areas or product teams. So, for example, inside our Google Kubernetes Engine team, our GKE team, we've got a very large security engineering team that focuses on the security of GKE, just like we have for other product areas. So it's that combination of central teams providing central tooling and capabilities, plus these federated teams embedded in product areas.
And then similarly, on our customer-facing work, we obviously have this security we embed into the products and services, but then we have all of our field teams that are embedded on customer accounts, helping them figure out how to do that transformation to the cloud and how to do and sustain security in this hybrid, multicloud environment.

When you think about Google's customers and their security requirements, the number one thing I hear from CISOs when I ask them what their biggest challenge is, they say lack of talent. I presume you don't have that problem, but it's really your job to address that problem ultimately for your end customers, isn't it? I wonder if you could discuss your philosophy and how you think about that.

Yeah, so what drives why we think secure by design and secure by default are so important is that we want to help customers with the toil of securing their environment, not add to it. And you talk about the cybersecurity talent challenges. We've spent a lot of time thinking about this, and there are really two sides to the skills and talent challenge. One is the raw number of cybersecurity professionals that exist, everything from entry level up to expert. And we've made some announcements recently with our Google Cybersecurity Certificate, which we've made available to the world to try and train more people to become cybersecurity professionals. But on the flip side, we've got to think about how we 10x the productivity of the cybersecurity and IT workforce we've already got. And a lot of that comes down to secure by default, secure by design, making these things just intrinsic to the products. We all want secure products, not just security products. And a big part of what we're doing is to try and enable that.

The final thing I'll say is, we all talk about the shared responsibility model of cloud, which is obviously correct in that the cloud provider runs the base infrastructure and the customer is responsible for many parts of the configuration. We've taken a slightly different approach over the past few years and talk about what we call shared fate, which is: how do we reach across that line of shared responsibility and provide better defaults for customers, better guidance, better guardrails, configuration code to help them stand up an environment? Again, so that it becomes less of an effort to have a secure-by-default environment. And we're going to keep focused on that.
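To make the "guardrails and configuration code" idea concrete, here is a minimal sketch, not a Google-provided tool, of the kind of secure-by-default audit a team might run over its own Cloud Storage buckets with the google-cloud-storage client library. The project ID and the specific checks are illustrative assumptions.

```python
# A minimal sketch of a secure-by-default audit for Cloud Storage buckets.
# Requires: pip install google-cloud-storage (and application default credentials).
# The checks chosen here are illustrative, not an official Google baseline.
from google.cloud import storage

def audit_buckets(project_id: str) -> list[str]:
    """Flag buckets that drift from common secure-by-default settings."""
    client = storage.Client(project=project_id)
    findings = []
    for bucket in client.list_buckets():
        # Uniform bucket-level access avoids per-object ACL sprawl.
        if not bucket.iam_configuration.uniform_bucket_level_access_enabled:
            findings.append(f"{bucket.name}: uniform bucket-level access is off")
        # Public access prevention blocks accidental public exposure.
        if bucket.iam_configuration.public_access_prevention != "enforced":
            findings.append(f"{bucket.name}: public access prevention not enforced")
    return findings

if __name__ == "__main__":
    for finding in audit_buckets("my-example-project"):  # hypothetical project ID
        print(finding)
```

In practice, guardrails like these are usually enforced preventively with organization policies rather than detected after the fact; the sketch just shows the shape of the check.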
You know, that's an interesting point, Phil. I want to take a quick detour here, because I've always said I think the cloud is going to be more secure, and I think it's proven that it is. But for the CISO, the cloud is sort of the first line of defense, and then he or she is, I feel, oftentimes on their own. You've got application developers being dragged into security with the whole shift-left movement. You've got audit, which is kind of your last line of defense. If you cross multiple clouds, you've got multiple shared responsibility models. Am I inferring from what you say that you actually reach beyond your dividing line into some of those other areas within the customer base, and potentially even across clouds? Is that a fair assertion?

Yeah, I mean, through the provision of tools and services, we're all about giving customers the means of securing their environments, not just on Google Cloud but across all of their environments, whether it's products like Chronicle or VirusTotal or any of our other services. We also have cloud products that run on other clouds, like Azure or AWS, where customers want to use our products on a different cloud, and we think heavily about how we set the security standards for that. And then we've also spent a huge amount of time in the open source community and the standards communities to make sure that we're baking security in, not just to the most critical open source but across open source tooling in general. We were one of the co-founders of the Open Source Security Foundation, for example, which drives that. Plus we've invested a lot of time on software supply chain risk reduction. And then we invest a lot of time in the standards communities to drive security improvements, not just for the cloud but for the internet and the IT infrastructure overall. We think that's ultimately the right thing to do, but it's also the commercial thing to do, because it grows trust in technology and cloud services, which, to your point, ultimately benefits everybody, because I think people have now realized that cloud is a means of managing their risk, not just a risk in itself.

And the final thing I'll say: you talk about the lines of defense, and that makes me think about defense in depth. We're not just about providing defense in depth from attacks; in the cloud, as you know, it's also about providing defense in depth from the configuration errors that may occur. It's about how you support the CISO with separation of duties between ops, development, and security teams, so that they can work around one security mission, but with defense in depth to reduce the risk of configuration errors in what they're doing that could expose them to attacks.
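As one hedged illustration of defending against configuration errors, here is a small sketch that lints an IAM policy exported with gcloud (`gcloud projects get-iam-policy PROJECT_ID --format=json > policy.json`), flagging overly broad bindings. The "risky" criteria are assumptions for illustration, not an authoritative baseline.

```python
# A sketch of a configuration-error lint over an exported IAM policy.
# Export first: gcloud projects get-iam-policy PROJECT_ID --format=json > policy.json
# The risky-role and risky-member lists below are illustrative assumptions.
import json

RISKY_ROLES = {"roles/owner", "roles/editor"}          # broad primitive roles
RISKY_MEMBERS = {"allUsers", "allAuthenticatedUsers"}  # public principals

def lint_policy(path: str) -> list[str]:
    with open(path) as f:
        policy = json.load(f)
    findings = []
    for binding in policy.get("bindings", []):
        role = binding.get("role", "")
        members = binding.get("members", [])
        if role in RISKY_ROLES:
            findings.append(f"broad role {role} granted to {members}")
        for member in members:
            if member in RISKY_MEMBERS:
                findings.append(f"public principal {member} holds {role}")
    return findings

if __name__ == "__main__":
    for finding in lint_policy("policy.json"):
        print(finding)
```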
Right, and the more heterogeneity, the more complicated it gets. I've always thought that, in a lot of ways, Google's built a supercloud, which is antithetical to the actual supercloud definition, but it's probably the biggest cloud in the world. But the heart of this question really is: what's different about multicloud security versus on-prem or hybrid in the cloud? Are there ways in which customers can enhance security across multiple clouds, or does multicloud just in and of itself bring a more complicated set of obstacles from a security standpoint?

Well, I think the challenge is that every cloud provider typically has a consistent level of security, give or take. Some are better than others in different ways. But generally speaking, there's an identity and access layer, there's an infrastructure configuration layer, there's configuration of how you run all of the software and services, how you federate identity and authentication. So there are a lot of things, but they're all configured and done in different ways. A big challenge for customers is: how do you provide that configuration guidance and approach when you're deploying across multiple clouds? And there are various initiatives in the industry to see if we can harmonize some of those ways. There are various open standards initiatives on identity and authentication and access management, and increasingly a lot of the cloud providers especially are providing tools that can give you an umbrella of monitoring across all of your cloud and on-premise deployments. So I think we're getting there, but there's still a lot of work to be done on harmonizing how multiple clouds can be managed. And that's probably going to be a big area of competition, not just amongst the cloud providers but amongst the rest of the technology vendor community. That competition is probably healthy; it's going to benefit everybody. But we're really, really focused on making sure that we're driving that secure by design, secure by default, driving security into the open source community and the standards communities, and that will ultimately benefit everybody on all clouds, not just us.
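One way to picture the "umbrella of monitoring" harmonization problem is a thin normalization layer that maps each provider's finding format into a single schema. The payload field names below are simplified assumptions modeled loosely on Security Command Center and AWS Security Hub findings, which are far richer in practice.

```python
# A toy normalizer for multicloud security findings. The provider payloads
# are simplified assumptions; real schemas (Security Command Center, AWS
# Security Hub ASFF) carry much more structure.
from dataclasses import dataclass

@dataclass
class Finding:
    cloud: str
    resource: str
    severity: str  # normalized to LOW / MEDIUM / HIGH / CRITICAL
    summary: str

def from_gcp(raw: dict) -> Finding:
    return Finding("gcp", raw["resourceName"], raw["severity"].upper(), raw["category"])

def from_aws(raw: dict) -> Finding:
    return Finding("aws", raw["Resources"][0]["Id"],
                   raw["Severity"]["Label"].upper(), raw["Title"])

# Assumed example payloads, trimmed to just the fields used above.
gcp_raw = {"resourceName": "projects/p1/buckets/b1",
           "severity": "High", "category": "PUBLIC_BUCKET_ACL"}
aws_raw = {"Resources": [{"Id": "arn:aws:s3:::b2"}],
           "Severity": {"Label": "Medium"}, "Title": "S3 bucket is public"}

for finding in (from_gcp(gcp_raw), from_aws(aws_raw)):
    print(finding)
```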
So just turning to AI: I mean, ChatGPT was the AI that was heard around the world. I joke sometimes that AI actually wasn't invented last November, and at Google I/O we saw Google flex its muscles and show what's possible there. So I'm interested: where is the low-hanging fruit for attackers? What have you seen since this sort of AI awakening? You've got Mandiant now as part of the portfolio, which gives you even greater insight on top of the massive resources you've already had. Are you seeing a change? Where is that low-hanging fruit for attackers? And then I'm interested in the same question for defenders.

Right. Well, as you call out, AI is not anything new to Google. We've been the inventors of much of this technology, including the transformer technology that begat this latest wave of generative AI development. So it's something we've been very, very focused on in terms of the responsible and safe development of this technology, and as part of that we keep an eye on the threats. A lot of what we're seeing so far is what I think you would expect: the use of this technology to drive threats around misinformation and disinformation, threats around improving the ability of attackers to dupe people, whether it's phishing emails or frauds or other scams. And you see the early signs of it being used to generate or augment malware capability. The way I think about all of this is: we absolutely have to be focused on the safe and responsible use of AI, but a lot of our defenses against these attacks are the defenses we're driving anyway for the way attackers have been operating. We may need to keep getting better, because the attackers may have more scale by using these technologies. But ultimately, on phishing for example, you're not doing something fundamentally different just because an adversary can generate a better phishing email. Really, what you're about there is reducing your risk from phishing by deploying stronger forms of authentication, so that you're not vulnerable to any form of phishing. And so I think we're going to have to recognize that.

And then to your point about the use of AI for defensive purposes, I'm tremendously optimistic about this. I think AI is going to be transformational again for defenders. And the thing to remember is it already has been, not necessarily with generative AI, but with more traditional AI and machine learning. Large parts of our defensive environment, whether it's Safe Browsing or malware filtering in Gmail or many, many other things, all the way down to VM threat detection, are driven by our machine learning technology. So it's already had huge benefits. And with this next wave of technology, we've taken one of our large language models, PaLM, trained it with security data, we call it Sec-PaLM, and already we're starting to use that with customers for them to do high-speed malware analysis, detect vulnerabilities, analyze configurations in the cloud, and give them more intuitive interfaces into some of our security monitoring tools like Chronicle. And again, this is helping not just manage the threat; it's back to what we started with, around how we reduce the toil for security teams so we can scale the talent we've got, so that talent is better equipped to meet the challenges of defending all of our organizations. So again, I think AI is going to keep being transformative for everything that we do.
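Sec-PaLM itself surfaces through Google's security products rather than as a raw model endpoint, but to give a flavor of the LLM-assisted triage Phil describes, here is a hedged sketch using the 2023-era Vertex AI SDK to ask a general PaLM text model about a suspicious process invocation. The project ID, model version, and prompt are illustrative assumptions, and SDK details vary by version.

```python
# A hedged sketch of LLM-assisted security triage on Vertex AI. This calls a
# general PaLM text model, not Sec-PaLM itself. Requires:
#   pip install google-cloud-aiplatform  (2023-era SDK; APIs change over time)
import vertexai
from vertexai.language_models import TextGenerationModel

vertexai.init(project="my-example-project", location="us-central1")  # assumed project

model = TextGenerationModel.from_pretrained("text-bison@001")  # assumed model version

suspicious_log = (
    "powershell.exe -nop -w hidden -enc SQBFAFgAIAAoAE4AZQB3AC0ATwBiAGoA..."
)
prompt = (
    "You are assisting a security analyst. In two sentences, explain what this "
    f"process invocation likely does and whether it looks malicious:\n{suspicious_log}"
)

response = model.predict(prompt, temperature=0.2, max_output_tokens=256)
print(response.text)
```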
But Phil, you mentioned that in addition to security, you have a wider scope: privacy, compliance, trust. And I wonder about the discussion the industry's been having around guardrails and the like for generative AI. When you think about transformer technology, obviously it's really good at figuring out what the next word or the next sentence is going to be, but it really hasn't gotten to the point where it can ask you for clarification when it doesn't know; it just puts things in, whether it's Bard or ChatGPT or Bing, whatever it is, at least the ones that I've seen so far. Is that part of your scope when it gets to things like trust and compliance and privacy, or is that sort of somebody else's swim lane at Google? And what do you expect going forward?

So across Google, we have a whole trust and safety team that looks at all of the potential safety and abuse issues on all of our platforms and services, and my team partners very, very closely with them to make sure that the services we're deploying through cloud fit with our AI principles and have responsibility and safety at the heart of them. Additionally, my privacy and security teams are very, very focused, for example, on our Vertex AI products, where we ship our large language models to customers, to make sure that when customers are using those and extending those models with their own corporate data, that data is kept absolutely protected in the context of that customer and is not making it back into the wider environment. So we've been very, very focused on the security, privacy, and responsible use needed to support customers' objectives, such that their privacy, intellectual property, and security are managed through their adoption of our AI platform. This has been a great example of how security, privacy, compliance, trust, fraud and abuse, and all of these topics come together to make sure that we're delivering a solid set of controls that can enable responsible and safe use of AI by our enterprise customers, not just our general consumers.

Yeah, and this is not new to you. This is not something you just started thinking about in 2023; I'm sure there have been a lot of conversations internally. One of the themes at this year's RSA Conference was, tongue in cheek, the identity crisis facing the security industry. Is identity across clouds, and what we call supercloud, going to be a showstopper for adoption? Or do you think the industry can collaborate on standards that can actually foster adoption and innovation? What's your take on that?

Yeah, I mean, the good news is that identity and authentication have been some of the areas with the most standardization, not just between clouds, but across all of our use of internet-based services, API integration, and other types of service integration. And I think a lot of those standards have proven to be relatively secure and highly effective. What we're seeing now, though, in some of the incidents that have been documented in the industry, is attackers, if you like, going after the seams between organizations. I don't think there's really any major concern about the standards themselves, but, like with anything, all organizations need to think about how they're setting up their base identity and how they're setting up the federated authentication between their organization, SaaS providers, and external providers, through their supply chain and with their cloud providers, to make sure they don't just have a silo of identity protection but are thinking about the end-to-end identity, authentication, and access provenance across their entire set of services. A lot of the providers, including us with some of our IAM technology, are trying to make that easier, but ultimately enterprises of all shapes and sizes need to think about the attack surface of their identity and authentication infrastructure, and that may extend across multiple products. So it's definitely something that needs continued focus, because attackers are going to keep going after those seams.
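To ground the "seams" point: wherever identity is federated, each hop has to actually verify the tokens it receives (issuer, audience, signature, expiry) rather than trust them implicitly. A minimal sketch with the google-auth library, assuming a Google-issued OIDC ID token and a hypothetical OAuth client ID as the audience:

```python
# A minimal sketch of verifying a Google-issued OIDC ID token at a service
# boundary. Requires: pip install google-auth
# The audience value is a hypothetical OAuth client ID.
from google.oauth2 import id_token
from google.auth.transport import requests

def verify(token: str) -> dict:
    # Verifies the signature against Google's published keys and checks
    # issuer, audience, and expiry. Raises ValueError on any failure.
    return id_token.verify_oauth2_token(
        token,
        requests.Request(),
        audience="1234567890-example.apps.googleusercontent.com",
    )

# Usage with a real token: claims = verify(raw_token); print(claims.get("email"))
```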
Phil, my last question for you: I've often said bad user behavior trumps good security every time. What's your approach, and maybe your best idea that you would share with customers and peers, on building a strong and sustainable security culture?

Yeah, it's a fantastic question, because unless you have the culture of security, you're never going to be able to sustain the long-term effort that's needed to manage security, never mind any other type of risk. The way we like to think about this is that it's all about embedding security intrinsically into how you build products, and treating security as a first-class business risk that sits alongside all of your other critical business risks. And again, a big part of what we talked about earlier, the need for security by default and by design, means that in how we build things, and in how we give tools to customers to build things, we try to make security the easiest path, as opposed to a path that developers have to really work at. So we concentrate a lot on providing frameworks and tooling and embedding capabilities into products and services, both internally and externally, that make security the easiest path. And then you've got to test yourself with a huge amount of rigor, whether it's your own internal security design reviews, penetration testing, continuous control monitoring and validation, or having really, really strong internal red teams that are constantly exercising the security of your products. And finally, you've got to make sure there's a path for people in the outside world to tell you about the issues you might have in your products, through bug bounty and vulnerability research programs. So I think if you do all of those things and you just persist with it, embed it into the operations of your organization and the way you develop products, then usually you get good outcomes. But there's always going to be a need to find the things that get through, by the use of red teams and by making sure you're running a bug bounty program.

Yeah, you've got to do the work. All right, Phil, we're going to leave it there. Thank you for the work that you and your teams are doing to keep us all safe, and thanks for coming on the program. We appreciate your time. We'd love to have you back.

Yeah, thank you. Pleasure to be here. It's great, thank you.

You're welcome. Okay, keep it right there for more discussions and fireside chats. We've got power panels and conversations with tech athletes like Phil. You're watching SuperCloud 3, the future of AI-enabled cloud security, on theCUBE.