 Welcome back everyone. We are wrapping up the Cube's live coverage of MYs here in Washington, DC. I'm your host, Rebecca Knight, along with my co-host and analyst, we have John Furrier and Rob Streche. We are joined by Phil Venables. He is the CISO at Google Cloud. Thank you so much for coming on the Cube film. It was a pleasure to be here. Yeah. So the conversation at MYs has really centered around the enormous potential of artificial intelligence and the need for businesses and the government and academic institutions to work together to make sure that this is going to reach its potential. How is Google thinking about AI and cybersecurity? A very broad question to begin. How do you look at this? Well, first of all, it's kind of fascinating because when everybody's thinking about the potential of AI from our perspective, we've been making heavy use of AI for a long, long time. Maybe not the generative AI that's come about in the past few years, but traditional deep learning has been the foundation of many of our products, whether you're a Gmail user getting malware filtering and spam filtering, or whether you're a Chrome user and having the safe browsing experience. All of that is an AI based system using conventional deep learning. But certainly since we invented the transformer technology in 2018, we've been building on that with generative AI approaches. And we think the potential for this is huge. I mean, just focusing on the security potential, what we've already demonstrated with the ability to take virus total data, mandiant threat data, Google threat data, and use that to analyze threats to support the cybersecurity workforce, to drive automated detections. You know, we're just at the beginning and already it's proving a significant boon to defenders. Phil, one of the things that we've been talking about, you were on our SuperCloud 3 cyber program, we just did, also we were at Google Next, we were joking on our wrap up there, we were drunk on AI because it was so exciting. Google has a lot of stuff coming out in products. Okay, great. We've been so focused on what AI will do for the businesses, creating a creative culture, more heavy lifting kind of abstracted away with AI, that we've kind of forgot, a lot of these that I can speak for myself, took my eye off the ball and how do you secure AI? So here, what we're seeing when the threat landscape is, AI is good for cybersecurity, but also it's more about securing AI and certainly in DC, even this week, more hearings are going on around what is AI, do we regulate it? AI as an opportunity, but at the same time, you've got to secure it. What's your vision on, what's Google's position on that, how are you thinking about it, how should practitioners think about securing AI at least today and then going forward? Well, it's interesting because as a company, you may have heard us talk about our approach to AI is about being bold and responsible and everything we've done, whether it's in the security space, the consumer space, all across the business, it's about being innovative, driving these things, but being responsible while doing so and that includes many things, security, risk management, compliance, safety, a whole array of things, but particularly on the security of AI rather than AI for security. It's a whole array of things and we're certainly partnering with a lot of organizations to think about how to deploy what we published with our secure AI framework. How to think about not just the security of the AI system, but the protection of the data, the protection of the surrounding ecosystem, how you deploy AI with guards that make sure it's operating correctly and this is a fantastic challenge for security teams because they not only have to think about this as a secure software lifecycle, they have to think about the end-to-end data governance of the training data, the model weights, the test data, how you keep this thing tuned and monitored for safety and security and this is something that I think security teams in partnership with their risk and compliance teams and many other teams are going to have to build a bigger practice around security of AI. As a CISO, the psychology is do more, do more, do more, 10X data, 100X more data, budgets aren't increasing by that much, but the threats are, you see more zero days out in the wild living off the land. Is there a modernization opportunity with AI because we've heard from CISOs and others, Antoine was on theCUBE yesterday saying, if somebody was joking, they're in the 90s using databases, so tell them about SOCs, you know, secure operations. So DevSecOps is here, how do CISOs modernize while the plane's flying at 35,000 feet, they're going to swap out the engine, so to speak, this is a challenge. So that it maintain the defense, yet modernize. More tools than ever before. Well, it's a timely question because we often encourage people to think about, and again, not just for security teams, but for boards, executives, CIOs, chief technology officers, the path to better security is to modernize your technology to a more defendable platform where security's built in, not bolted on. And a big part of that transformation can be accelerated through the use of AI technology. So some of our products we've announced that we call Duet, helps people build more secure code, it helps people build more secure cloud configurations, it helps people do the migration, they can accelerate to the cloud quicker. And the same goes for many of our security teams as well, where when you look at the security teams, if they want to get to a more advanced detection platform, they can get there quicker with AI because the AI assist gives them that 10x productivity to do the transformation. But certainly, it is, I won't diminish it's a challenge for teams to go through the learning curve, but when they've applied that, I think they can get there quicker with some of the AI solutions. So looking at the Google secure AI framework, I mean, one of my favorite things is S-bombs. I don't know why, but I just love the word, and I just say, is that the road we're going down with AI is, hey, you're going to have the supply chain, know where your models are coming from, where your data's, because I could see, and we talked about it earlier in the week, that injecting misinformation into an LLM could be catastrophic to an organization. Well, I think it is a good analogy because when you think about a foundation model with training with customer specific data, you're thinking about not just the software supply chain that surrounds that, but the data supply chain of what came into that. And so I think, as part of the framework, we talk about model cards and data cards, which are transparent descriptions of how the model was built, trained, tested, how the data was sourced. Now those are important ingredients, but I think it's interesting with the S-bomb analogy, so you may be familiar with what we did with Salsa, which is the supply chain levels for software artifacts, which is a complement to the S-bomb approach. And you not just need the ingredients, need the process to build it securely. And one of the great things I think we've done, and I see other companies doing it as well, is building an AI platform that encompasses that end-to-end life cycle and the controls to make it easier for companies that don't have a large amount of expertise to get that high degree of control end-to-end from the data to the model. And I think that's the way to go. And part of it, companies are always, I would say skeptical, and we've seen it from a end-user perspective. I was talking to some at the bar last night and talking about how they're already doing tagging, they're already doing stuff within their security landscape. How do you, what do you tell them about how they're going to be able to use that data and keep it secure in Google? Because they're all, everybody's like, hey, I want to know what my data is, my data and it's here and it's secure. Well I think we've spent a lot of time innovating on that where you've got a foundation model and then you've got the data that a customer organization will bring in an adapter layer to be able to provide their specific data to tune that model for their particular corporate purpose or their use case. And that data does not come back into getting polluted into our main model, it's kept isolated, and we spent a lot of time thinking about how do we make sure that that is a high assurance barrier and so nobody wants their data to end up via a model into somebody else's queries and so we keep that very, very separate and I think that you'll see over time I would think more of the AI companies have that kind of barrier and that assurance and this is why it's very, very useful for companies to use an enterprise ready AI platform that's been battle tested and designed for safety. I think that makes sense. One of the things that's really been clear from this conference is that no organization is safe. I mean we've heard from Barracuda networks, Microsoft, Coinbase, MGM, there's so many examples of companies that have faced these cyber threats. Are there any sectors or industries or organization types that are more prone to these kinds of threats in your estimation? Well, look, the way I look at this is most, I agree, no organization can ever be 100% invulnerable. You know, you can get progressively close to that with investment and assurance and the combination of prevention detection and many other attributes but there are plenty of organizations that are more vulnerable but back to that point before where they've just not modernized, they're operating on legacy infrastructure that wasn't designed for security, they're maybe not keeping that legacy infrastructure up to date and they're just using technology that wasn't designed for the threats that we see today which is why we think it's really important for people to modernize to a more defendable platform. Phil, one of the things everyone, well we know we're in Silicon Valley and the industry, we're kind of inside the ropes with theCUBE, Google's well known for publishing the papers, even everything across the board from the Hadoop, MapReduce, Daze, all the deep learning, you mentioned a few of those, Google's deep with machine learning for many years, everyone kind of knows that. What's the secure AI framework paper that you guys put out in the past summer? What was the motivation? Was it a collection of best practices? And for those watching the secure AI firm as they was published in June, I think June, roughly. It's a well documented piece of work. What was the motivation? Was it to share? Was it to put it out there for practitioners? Was it a collection of best practices? It was really a little bit, there was a lot of dialogue going on in the AI industry around security, very kind of detailed technical information about threats to AI, whether it's poisoning, extraction, all of these other, and it's all really important stuff. And we documented and written about a lot of that. But what we saw that was missing was that a framework to enable companies to get their arms around just how to build these things and how to establish the governance and the control and the testing frameworks. And we didn't really see anything that covered that. There was some reasonable work with some of the NIST standards and some of the other emerging standards. But we thought putting together a practical framework that people could use based on our real world experience of how we operate internally. It was just going to be a useful contribution. And we've since published some implementation guides about how people can get their teams together. And we've also published some additional documentation on how to run red teams for AI. And so gradually our philosophy is as we're building up and have built up our experience list, we're going to keep sharing it. We're clearly going to keep embedding it in product. But we also want to educate people about how to do this. It's a great document for folks watching. Check out that secure AI framework that they put out in June. This brings me back to our favorite point of we love sports as well in theCUBE. But the CISO is like a tech athlete. We hear varsity game. We heard Kevin say Chinese now varsity sport. They're the top team, Apex, competitor, hacker. For CISOs, the game is fast. It's pro level, whole another pace of play. What does CISOs and practitioners need to do to be a better player in the game? Actually the framework's out there. In your experience as the game evolves, what can people do to be better up their game for themselves and their team? Well, so it's interesting. So I like the kind of the team sports analogy because a lot of us for years have said that cyber is a team sport. And there's many constructs in the industry, whether it's ISEC, professional associations, great events like here at Mandion, M-Wise where the big value is outside the presentations, the networking in the halls, the sharing of the battle scars and the battle stories. But I think really the big thing I find for CISOs is to up their game, it's about not taking on so much individually that recognize that a CISO is a senior executive in a company. And no senior executive in any role in any company takes everything on themselves. They work with their peer executives, whether it's the CFO, the chief risk officer, the CIO, other business leaders. And they figure out a way of getting shared responsibility and shared accountability, and they work together on these things. So I think CISOs talk a lot, some parts of the CISO community talk a lot about the stress and the pressure of the job. Clearly it is a stressful job, but most C-level jobs are pretty stressful. Most senior leadership jobs in the public sector and in the government and the DOD, they're all stressful jobs. But the way you make it less stressful is by not taking it all on yourself, by sharing it and making it a shared accountability among the leadership team. The stakes are high, I mean, pressure is there, but that's not why it's not a department anymore. It's not an IT subgroup. Yeah, exactly. It's the thing. Yeah, well, and again, this comes back to, and I know I keep harping on about this, it's about technology modernization and having a platform that the CISOs have a better chance of securing. And if the CISO is just going to get given a lot of money and the IT department has no money, then they're going to build and buy a bunch of cyber solutions and deploy it on a foundation of sand. So you've got to invest in cyber, but you've got to invest in your technology platform and your business processes and your mission systems. Yeah, it would seem that there was a lot of discussion over this course of this week also about the social engineering aspect of it and how that's really, and to me, it would seem like, and I think somebody mentioned it yesterday, it may have been on the panel you were on but it was around the fact of people using deep fakes and stuff like that as part of their social engineering. Are you seeing from a threat landscape that that's really, one of the things that people just need to be aware of, of how hard it is to defend against this social engineering aspect? Well, yeah, we're seeing the early indicators of that and as the phrase goes, the future's already here, it's just unevenly distributed so I think we're going to see more of that. But the way I look at this is sometimes it's a temptation to think about we've got AI threats and we're going to have to have AI to counter that, which I think is true in many respects but when you look at some of these social engineering threats and some of these phishing threats and other things, I think the answer to that is not necessarily how to counter those attacker tactics, it's how to just change the ball game entirely and for example, using strong fishing resistant cryptographic authentication tokens, having different levels of control see less vulnerable to phishing entirely, not just less vulnerable to AI driven phishing and I think we all have to think about how do we defeat whole classes of attacks, not just the evolution of specific attacks. That doesn't make sense. There's a lot of talk about AI and the future of work and the job dislocation that will come with that but I'm wondering what you think about AI's impact on the cyber security workforce and whether or not it will make an impact on the unfilled jobs in the industry. Well, I think it will. I mean, I think it does in two regards. One is it helps organizations scale their workforce and so I've often talked about we clearly have a cyber security jobs challenge but actually I think we need to solve that by tenixing the productivity of the people we've got as well as trying to find more cyber security professionals and AI certainly helps amplify the productivity of teams by reducing the toil, by providing assistance to their job but the other thing as well is I think it amplifies talent. So the great thing about the AI assistance to certain roles whether it's in security operations or whether it's in software development certainly provides a scaling in skills as well as capacity. You can take a person that's maybe kind of entry level and with an AI assist they have some capability that gets them up the skills ramp quicker and then you can maybe take somebody that's kind of mid-level and have them functioning near expert level because they've had that capability augmentation from the AI. So I think I'm on the whole very very positive the fact that this is going to amplify and solve some of that challenge but we've clearly got a few years to go in the maturity of the tooling so I'm not unrealistic about this at the very early stages of this. It's not tomorrow. But early signs are it's going to be very very positive to just help us deal with the productivity issues that will help the workplace challenge. What's your opinion on the asymmetry aspect of it because that came up as well Rob, yesterday the balance between the kind of the metaphor the other sitting on the couch that offenses just throws a few things out there and then everyone has to respond on the other side. Does that take down the symmetry work needed or not? Yeah, so it's interesting when you think about so clearly the use of AI can benefit attackers and as well as it can benefit defenders. I think all of us are focused on is how do we make sure AI is benefiting the defenders more than the attackers and how do we make sure that gets faster and better so we can keep driving that ahead? You know, I think you could kind of analyze about what's the logic behind AI will benefit defenders more than attackers. I think some of it comes down to the fact that if you can feed your organization's proprietary data into the AI defensive system you're going to have more capability than an attacker likely would have assuming that the attacker doesn't also have access to all of your data and they, you know, even in attacks they generally don't have that ahead of time. So I think there's reason to believe that it would be a true statement over time that AI for defense moves ahead of AI for attackers. But again, we're at the very early stages of the evolution of this and I've learned over, sadly learned over many, many years that you can possibly predict what comes next but what comes after what comes next is inherently unproductive. We're living interesting times. I mean, web, mobile, now AI are structural inflection points. Rob Rebekah, we're going to be at KubeCon, the CNCF. You should see the open source aspect because one of the things coming out of this innovation on AI is the open source models are merging very rapidly but also it's fast and loose. Let chaos rain, then rain in the chaos as Andy Grove once said. What's your view on open source? Obviously open source is the standard now for the software industry. How does open source get better in this era of software supply chain, S-bombs and data supply chain? Well, so it's interesting. I mean, just here in DC last week we had the open source security foundation summit and we had a number of people from the White House and DHS and other areas came and also spoke at that and so that's been a great area of partnership and we and the other tech companies and other organizations continue to invest in that foundation to provide tools for the open source community. I think AI as part of discovering vulnerabilities and helping people build secure code and how you get that into the maintainers community is clearly going to help the open source community and then similarly on the open source AI models I think you probably saw, we now have some open models available in our Vertex AI model garden and I think the interesting thing is that again a platform basis to this is if you want to use a model whether it's our model, a third party model an open model, you want to use that in the context of an end to end safe and secure, well managed AI controlled platform. Curated, they call it curated models. Yeah, I think it makes sense because I also look at it and go from a tax surface perspective, AI is going to become a target and again, it's out of my head to figure out how they're going to go after it but the data has always been a place where people but I could see where, hey, I've built and I use an LLM for the CFO's group and it has all of our jargon and it gets people up and running really quickly. Are you starting to see new and creative ways that people are trying to get at that data even to your point about poisoning it and things of that nature? Well there's a lot of risk so in an AI system as you know you can query it in certain ways to try and extract the data, you can inject prompts to cause the model to go make a query that it was not designed to do. There's a whole array of things and then the security of the AI system itself is very much grounded on the training data, the integrity of that, the model weights, the tuning feedback, the test data and it's interesting so there's some types of organizations are very well-attuned to thinking about end-to-end data control, data governance, data lineage and there's some organizations that aren't so some regulated industries come at the AI problem and they take a quite natural approach to it cause they've been obsessed by data governance and data integrity and data lineage and test data protection over many years. Others haven't, they've been maybe good at software security but not at the end-to-end data life cycle so everybody's coming up that curve about AI security is this combination of software security plus data security plus the ability to test the thing end-to-end when it's deployed. Yeah, it seems very complex and I think that's one of the places where you got to take a step back in fact it was one of the discussions again with a different customer at the bar last night was okay how am I going to bring this in and secure it and make it sure and I'm like well how do you do that for your data today and they're like oh well because I'm like it's data you got to put the regulation. You know again this is why I think there's and I know this is a bit of a pitch from what we do but it's only because we've spent so many years building this stuff is we think it just has to be a platform and it's an integrated set of tools that do marry the data governance, the testing, the model development, the software security and I think ultimately just like if you took a step back and tried to develop software without an array of tools that exist today you wouldn't be able to do it and I think AI is the same thing you're going to need a platform and a set of tools provided by an organization that's got a lot of experience in building it. Phil, thank you so much for coming on theCUBE a really great conversation. Yep, always a pleasure, thank you. I'm Rebecca Knight for John Furrier and Rob Stretchy thank you so much for joining us on theCUBE's live coverage of MI's. We'll catch you next time.