Welcome back to SuperCloud 3. I'm John Furrier, host of theCUBE, with Dave Vellante. This is day three of SuperCloud 3, and we're breaking down security plus AI. Mario Duarte is here, VP of Security at Snowflake. They know a lot about data, and data is a security opportunity. Mario, great to have you on SuperCloud 3. Thanks for coming into our studio, we're live.

Dave, we've been talking about security and AI, and you get all the good gigs like Snowflake. I missed it this year. What's interesting, Mario, is your setup. You're a VP of Security, but you're the de facto CISO, I would say, and you report to Sunny Bedi.

Yes.

And Sunny is the CIO and the CDO, the chief data officer and the chief information officer rolled together. So you don't have the CISO title, but essentially you have that job. Tell us about that org structure.

I think it goes well with how we're structured as a company. Snowflake is a data-driven company, and security is actually a data problem, so it makes total sense to be in Sunny's organization.

It's rare, though, to see the chief data officer and the CIO together, but I guess it's becoming more common. So how do you think about your time? Are you mostly spending it internally, securing Snowflake itself, or working with partners and customers? What's the balance?

You know, your role changes as you mature. I've been at Snowflake for about nine years now, and your role changes as the company grows and goes into different verticals. Early on I had a lot of communication with customers. Why? Because customers were going to trust you with their data and they wanted to see your face. They wanted to know who they were speaking with, who they were trusting with their data. So I did a lot of customer engagements early on. But then you have to build a group, because you're not going to be able to scale.
Because ultimately my job is to protect our customers' data and our own data as well. If I spend too much time outside talking to customers, I'm not focusing enough on safeguarding our data. And I want to say, I did watch Frank Slootman's earnings call. Dave, you were actually mentioned on that call.

That was two years ago, that was Investor Day.

Investor Day, yeah, a video of Frank Slootman on an Investor Day. He mentioned Dave Vellante, he mentioned SuperCloud, when he was asked a question about the growth of Snowflake. You guys have done a great job with the cloud, and now you're expanding the market. As you get multiple environments going, the data is a super important part of the velocity that security needs to operate at. It's a whole other level of speed in the game. How do you look at that? How should customers be thinking about it? What should they be doing? Because security plus AI is now an opportunity, a data opportunity. How do you see that playing out?

Well, it's supposed to be a blessing and an opportunity. An opportunity, but with some caveats. Look, most security folks I speak with, when ML first started, you heard ML all the time, and if a vendor came to us with an ML idea or an ML pitch, we usually frowned on it and just walked away, not interested. But generative AI is a different story. Gen AI has definitely made us start thinking about how things are being done now. This ability to create unique content based on the data you've provided these models has increased velocity, and the bad guys are going to use this. The bad guys are already doing this. So the question is, are we going to let those guys beat us? Because we're playing defense here, right? So you need to adjust. You need to evolve as a security expert.

So before we get deep into Gen AI, I want to ask you about multi-cloud, cross-cloud, because we actually used to say you guys are the poster child for supercloud.
It kind of helped our thinking when John and I started promoting this notion of supercloud. As a SecOps pro, how do you think about cross-cloud security, the different shared responsibility models, all the different APIs, all the things you have to do to create that secure, governed, private environment?

You know, we were born in AWS, so I know AWS. I'm most familiar with AWS, less with Azure, and even less with GCP. As my role in the company changed and we went to different cloud providers, it's like learning a new language, even more than that. Imagine having to drive on the right side one second and then on the left side the next. These cloud providers are different. Yes, they have some similarities, the concept of a firewall, maybe a security group, but they use different terms; the terminology, the technology, even the laws are different. Now imagine having to learn all that and then become an expert at it.

And you've got to get the talent. You've got to figure out the talent, then you have to keep the talent, and then you've got to hyper-scale, right? You have to do this really quickly, really efficiently, and more securely than maybe even your own customers sometimes, across three different clouds. It's a daunting, hard problem. Is AI going to help with that problem, and if so, how?

If you think about it conceptually or fundamentally, yes, it should. Have I seen it yet? Not yet. So I think that's an opportunity for companies. There are a lot of different companies trying to do this today, more established public companies but also startups, but it hasn't really resonated yet.

We had Jeff Jonas on the other day, and his comment was, yeah, but it's generative, meaning it doesn't give you the same answer every time. And with security in the enterprise, you need that consistency.
So our sense is that where practitioners are struggling is, okay, how do I really apply this? Yeah, I can apply it to summarization, or maybe helping write code, the ideation, the things you use ChatGPT for. But in terms of specifically applying generative AI and large language models to security, it seems very much unclear. Maybe it'll help you write or run reports or something like that.

I think, yes, for now. Running reports makes sense; you get some insights into things. Look, the reality is nothing stays the same. The AWSes, the Azures, the GCPs of the world are always coming out with new services, new APIs, new ways of interfacing with their services. That's always changing. And guess what? Your customers who use those have their own unique ways of interacting with them, just like their data is unique. So it's hard to make this predictable; you're dealing with an unpredictable environment.

You know, one of the things I've always loved about Snowflake is that you're not a data warehouse, you're a data cloud, which is essentially the data warehouse and the cloud refactored the way customers want to use them, and the results are a great success. What's next? Because as data becomes more important for, say, developers or data scientists, it gets democratized and there's more data available. It's going to be stored everywhere. More observability data is going to be stored. When people see that data is the value, they're going to store more, and they're going to want to do more with it. How do you see that playing out from a security standpoint, with AI and more data being hoarded by the customer? Because that's the common theme: data is the value, control your data, use open LLMs and foundation models where appropriate, use generative AI. Where do you see that connecting as the next step?

I'm going to go with the fear factor first instead of the opportunities, because I'm a security guy.
We have a flair for the dramatic, okay?

No, seriously. Imagine this whole game of somebody trying to breach a company, or breach you, or compromise you. The whole thing is a money game, right? They're trying to figure out how influential you're going to be, whether you have access as an administrator to different cloud providers or applications or data. They want to make sure they understand who they're targeting first. Look, companies have made money out of anti-phishing technology, but phishing attacks still get through. And by the way, these things are really simple, really easy, and we human beings constantly fall for them. Now imagine these bad guys using Gen AI, where they start looking at public data about you, and they start having a conversation with you, maybe online, maybe over email, maybe even a phone call that sounds like you. How are you going to tell the difference, that that's not Mario on the other end of the line? This is happening as we speak. That presents opportunities for companies, and it also presents challenges for the companies trying to defend against it. I don't have an answer at the moment.

We've had a couple of your colleagues on over the years, and we're going to hear from one later today: Mignona Cote, who's with NetApp. She's the CISO of NetApp. And Lena Smart, I've talked to Lena before. My question is around building security-conscious cultures. Lena has a technique where she puts the deepest security minds with the people who know nothing about security and says, talk, see if there's some common ground. Mignona says it's very simple: I communicate in terms that everybody can understand, not SecOps terms, but "don't click on the link." So that's one technique she uses for building a security-conscious culture. How do you approach it?

You have to use different techniques for different audiences, right? You have to understand that people take in information in different ways.
Some people would like to get a phone call. Some people would want a Slack message. Some people might even want email, though I don't think email is a very useful tool, quite honestly. You have to understand your audience. You have to understand their motivation, the job they're trying to do. And I personally believe you need to listen to your audience first, your employees, and understand their challenges before you start telling them what not to do. So it really comes down to who you have in the company. When you're smaller, you might have a lot of techies, a lot of engineers, if you're a software company. You approach them differently than you would folks in marketing or sales. You have to tailor your message to your particular group.

We had Zscaler in earlier today, talking a lot about zero trust. You often hear that zero trust before the pandemic was a buzzword; after the pandemic it became a mandate. David Strom, one of our journalists, wrote recently about the challenges of adopting zero trust architecture. Everybody wants to do it, but there are all these piece parts they're struggling with, so they end up only partially zero trust. How are you approaching that problem, and what are you recommending for your customers?

I think it is partial zero trust. It's where we want to go, but there are big challenges. Look, we as a company spend a significant amount of money securing our environment, and it all starts with you, with the laptop you're using. How do I ensure that laptop belongs to Snowflake, to my company? Okay, so now I know it does. How do I ensure that it's healthy, that it has all the security services working effectively? How do I ensure all of that before you connect to the more important things, the production environment, the development environment?
So you want to make sure the laptop belongs to Snowflake, and you want to make sure the laptop is healthy, right? But even after you do that, you get some sort of authentication, even if you're not using passwords. People are getting away from passwords now; they're using unique keys. Think of it as a password. I don't know if you've ever heard of enclave technology. It's kind of like the Roach Motel, right? Roaches go in, but they never come out. Same concept: your digital certificate goes into an enclave and never comes out. Guess what? That acts as your password and your persona.

But what happens when you authenticate to the IdPs, the Auth0s of the world? Well, they give you a SAML token. They give you something that says: this is you, I've authenticated you, now you can proceed to the Salesforces of the world, the Workdays of the world, the business applications you're going to interact with. All that business application is going to look at is, was this signed by somebody I trust? And the answer is yes. Go ahead, advance, go.

Well, the bad guys are getting good at that. What they're doing is breaking into your laptop, scraping your SAML token off the computer, and putting it on a different computer. And guess what happens? Now you've become Mario. You've become that laptop as well, to that Salesforce, that Workday application. And I'm not blaming Salesforce or Workday, I'm just using them as examples of a SAML problem. That's where zero trust breaks down. Where's your zero trust at that point? You have a machine that's not trusted, a person who's not even Mario impersonating Mario, with a SAML token that's been signed. It's game over. There's your zero trust.
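The token-scraping attack described here can be sketched in a few lines. This is a toy model, not real SAML: an HMAC stands in for the IdP's XML signature, and the key and names are invented. The point it shows is that a bearer token verifies identically no matter which machine presents it.

```python
import hashlib
import hmac
import json

IDP_KEY = b"idp-signing-key"  # secret held by the identity provider (toy value)

def idp_issue_token(subject: str) -> str:
    """IdP authenticates the user and hands back a signed bearer token."""
    body = json.dumps({"sub": subject})
    sig = hmac.new(IDP_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig

def app_accepts(token: str) -> bool:
    """The app's only check: was this signed by an IdP I trust?"""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(IDP_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

# Mario logs in on his corporate laptop...
token = idp_issue_token("mario")
# ...and an attacker scrapes the token and replays it from another machine.
# The bytes are identical, so the app cannot tell the machines apart.
stolen_token = token
```

Because nothing in the check binds the token to the original device, mitigations target exactly that gap: short token lifetimes, or proof-of-possession schemes that tie the token to a key the enclave never releases.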
So, you know, I've studied a little bit the anatomy of the SolarWinds attack, which is quite fascinating. How do you think the anatomy of attacks will change as a result of AI?

Look, I have a blue team, I have a red team, I have different teams that report to me. The blue team is protecting the company. Their whole job is to figure out: what is suspicious activity? What is different about what John did yesterday versus what he's doing today? And to understand that, you need a lot of data. You can't be in silos. You need a bunch of data about John. I'm not talking about your personal use, John. I'm simply talking about: I know John logs into his laptop at this time, I know John does X during this time, et cetera. You collect that information and you start creating a portrait of who you are. Now, if an attacker is in there and they're clumsy, if they just stole your credentials, they may not act like John. The things they do will trigger suspicious activity, and somebody on a blue team like ours will look at that and say, is that you, John? I'm going to call you. Is that you? No, I never did that, what are you talking about?

He only has one tab open. I have 10 tabs open.

That's fine. Exactly, a hundred percent. Now imagine an AI tool that really does impersonate John well, down to the activities he does, but does something just slightly different, something that's not going to get picked up initially. That's a problem.

We've been asking a lot of our guests and our journalists ahead of SuperCloud: ultimately, do you think AI is going to be more beneficial to attackers or defenders?

The attackers, initially. As for defending, okay, look, how do you defend against some of this stuff?
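The blue team's "is that you, John?" check can be sketched as a simple baseline comparison. This is a deliberately tiny model, one feature and a z-score threshold with made-up numbers; real user-behavior analytics fuse many more signals.

```python
from statistics import mean, stdev

def is_suspicious(baseline: list[float], today: float, z_limit: float = 3.0) -> bool:
    """Flag today's value if it sits far outside the user's historical baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return today != mu  # no variance ever observed: any change is odd
    return abs(today - mu) / sigma > z_limit

# John's usual laptop-login hour over the last week (24h clock, invented data).
john_login_hours = [8.1, 8.4, 8.0, 8.6, 8.3, 8.2, 8.5]
```

A 3 a.m. login trips the detector; an 8:20 a.m. login does not. The same shape applies to tab counts, commands run, or data volumes touched, and it is exactly this portrait that a careful AI-driven impersonator would try to stay inside of.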
A lot of this stuff, if you're working with these models, you ultimately have to ask: where are my models running? Oftentimes, models are running outside of your own environment. So as a security person, you struggle with, do I really move my code outside of my company, outside the confines of its protection? Do I move my data? These logs, maybe it's not customer data, but it's behavior: logs of how you're accessing systems, modifying things. We in security are going to be hesitant to move our data outside of our environment for those models. You need to bring the models to your company. Because what happens otherwise? You're hesitant, you're slowing down. The bad guys don't have any problems, right? They're just going out there looking for opportunities. They're not beholden to the compliance and regulatory mandates that a normal company would have. So I think initially you're going to see a lot of the offensive people taking advantage of it. And I'm hoping, Dave, that, well, most of these innovations come from new companies, right? That's where the VC money is. I'm hoping the VC community and the good young startups develop tools to help us out, help all of us out.

You know, one question I want to ask you. One theme among security practitioners that we hear a lot is that it's a speed game. The pace of play is high, but you've got to be where the puck is going to be. You've got to think about what's next, because you're constantly defending and you've got to stop the offense. When you think about that next mile marker, that next dot to connect, where the puck will be skating, what's that for you? When you look at this from a security perspective and supercloud, what's that next step? What's next for you?

It really comes down to understanding. We talk about insiders with privileged access.
So for years now, for the last 15, 16 years, if you look at all major data breaches, just look at the data, major public data breaches of a million records or more, they have been the result of an insider with privileged access misconfiguring something in the cloud: a security group left open, an S3 bucket left publicly available. And the opportunities are always out there for someone to grab the data and take it with them. We need to advance. We need to go up a level: think of the insider with privileged access who wants to do bad. Think of an employee who has that kind of privilege, who understands your environment, your weaknesses and strengths internally, and who also has the keys to the kingdom. That's where the opportunities will arise, because that's what AI is going to do, really.

More social engineering, faster.

Faster, better, much better. And the economies of scale when it comes to this stuff, for the bad guys: look, there are companies out there that sell malware to bad guys, right? They even have customer support. They want to make sure that when you buy their wares, their malware, it functions well, and they want you to give them good ratings.

Like client services.

Yes, there is. This is absolutely happening. Any tool can become ransomware. You go out on the dark web and say, hey, I'm inside of a company...

And you talked about enclaves earlier, and confidential computing. That only takes care of part of the stack.

Correct.

It doesn't take care of somebody who's inside who wants to do evil.

Correct. So look for those tools, look for the companies who are thinking about this problem, because it's a really difficult problem. It's a scary problem. But that's where we've got to go.

So it transcends technology, right? I mean...

Yeah. But you also need lots of data. I sound like a broken record, right? You need the data, and you don't want the data in silos. You need the...
I know, I'm...

The data guy.

Yeah, the data guy, right? Well, people are hoarding data now. They see the value of the data. What the LLMs and foundation models have shown us in Gen AI is that the value of your data matters, because it's going to interact with other data.

Correct.

Data fusion is happening.

Correct. A lot more of that.

Yeah. And how do you know if your model is worth anything if you've got garbage data? Garbage in, garbage out. Your models stink, right?

Well, I'm fascinated by the application market and data. That's why I like that you guys aren't called a data warehouse, because the cloud enables people to do things, and I think the apps that are coming are data-native. Developers are going to program with data in real time, in line with their CI/CD pipeline, like security. Shifting left in security was a great advancement: take care of business in the pipeline. Data will have the same role. We've been talking about this on theCUBE all the time.

Yeah. Well, it's what Jensen said Monday at the Snowflake Summit: we are going to supercharge the you-know-what out of Snowflake, bring the AI to the data. Databricks talked about the same thing. That's what leaders are going to do. Warren Buffett talks about moats, and Jeff Jonas talked about that too. The moat, at least in part, is going to be the data, the value of that data, the quality. The other thing Jensen said Monday night was, if I were you, I would be asking, what's my best data? What's my best database? How do I supercharge it with AI? That should be a focus of your conversation.

We had a guest on who said, you know, the bad guys are like the mouse: they want the cheese, and the cheese, the cheddar, is the data. So protect the cheese.

Yeah, that's the mission at the end of the day: put up mousetraps. And I can't fault you for raising this, the cheddar, you know, the cheese.
Bob Ackerman from AllegisCyber, love that line. But it's true, the data is the value.

Right. You've got to protect it.

And as you start shifting left and more people are doing more automation, less is happening in the production environment and more is happening in development. You talked about SolarWinds as an example. The battles are being fought on your laptop. They're being fought in your development environment, and you've got to start monitoring that, you've got to start analyzing it. And guess what? That's not predictable. In a dev environment, people are constantly downloading things because they're trying to improve features, write new features, et cetera. So it's not predictable, right? Now you've got to figure out what's bad and what's good.

Well, I'm going to start buying storage stocks now, Dave. More storage is going to be needed.

More storage. Never bet against data.

Never bet against data and storage. Mario, thanks for coming on and sharing your insights on security plus AI here at SuperCloud 3.

Thank you, John. Thank you, Dave. Thanks.

All right, this is SuperCloud 3. I'm John Furrier, with Dave Vellante, back with more live coverage here in the studio after this short break.