Hey everyone, welcome to the CUBE Pod, episode 18. I'm John Furrier with Dave Vellante. We're on the road, always getting stuff done, a first for the podcast. I've got my headphones on, Dave. I'm here in San Francisco for the Databricks Data + AI Summit. Dave, good to see you. Hey, John, how's it going? So check it out. Snowflake just announced on theCUBE that they're going to be in Moscone next year, the first week of June, and not head-to-head with Databricks. This year the big controversy was Snowflake picking a date that went head-to-head against Databricks. And when I pressed Databricks, when I talked with the CMO, the top executives at Databricks all confirmed that Snowflake knew Databricks had the event and planned theirs to go head-to-head against it. But that's not the story here. So there you go. What are you hearing? What's your story? No, no, we had this locked in in 2019. But we're going to get to the bottom of it. We'll put a pin in that. We will find out the truth. Yeah, definitely. What's the show like, John? Give us the low-down. How's it going over there? I'm in Moscone. I just moved from the show floor to the lakehouse set; we're going to do our afternoon set in the press room. Matei, one of the co-founders, is doing a briefing right now, and we're going to try to get him on camera. Ali's been super busy. Databricks had a home run here, I've got to tell you. The analyst grades are A's straight across the board, not one B. So the analysts are definitely giving praise. I think it's a home run. I gave them an A-minus, mainly because they didn't have a semantic layer angle. We saw some AtScale folks here. Dr. 
Sanjeev Mohan, who was on theCUBE with you, by the way — he and I both agreed that Databricks hit a home run, but they don't have a semantic layer story. Their answer is that AI will take care of it. So there's a little nuance there, and I think it's an opportunity for companies like AtScale and others that have this semantic layer brewing. I also talked to VAST Data, another company coming from the storage angle. So generative AI is hitting everything from the physical storage layer and compute all the way to the application. To me, that's the big story here: generative AI isn't just a factor in any one specific thing in the stack. It's going to enable innovation up and down the stack, from the physical layer all the way to the application layer, to the point where a new term was said on theCUBE for the first time by Sharon Zhou, a co-founder and former Stanford faculty member who started a company doing LLMs. She calls it LLM engineering, as a new discipline. So move aside, prompt engineering: LLM engineering. This is what we're seeing. And you're starting to see the emergence of applications in the Databricks ecosystem. Again, that's a tell sign that there's a robust developer market right now inside Databricks and inside these open source ecosystems where generative AI is the hottest thing. It's super hyped up, yes, but it's legit traction, and we're seeing it up and down the stack. So I'm super impressed with Databricks, and I think they did a good thing on the positioning side. I dropped a story on SiliconANGLE with my exclusive with Matt Garman, the sales chief at AWS. He formerly ran the EC2 business, technically savvy, and he had great insight into the master plan of AWS. So the story here is all data, all workloads at a high level, and then down beneath, it's all about taking multiple ways to query the data, multiple data formats, all this complexity, and bringing it into a single coherent view. 
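One of those "multiple ways to query" is natural language translated into SQL. A toy sketch of that idea follows — this is purely illustrative, not Snowflake's or Neeva's actual implementation (production systems use an LLM for this step); the patterns, table names, and templates are all made up.

```python
# Toy natural-language-to-SQL translation: map a recognized question
# pattern onto a SQL template. Real systems use an LLM here; this just
# illustrates the shape of the problem.
import re

TEMPLATES = [
    # (pattern, SQL template) -- purely illustrative examples
    (re.compile(r"total (\w+) by (\w+)", re.I),
     "SELECT {1}, SUM({0}) FROM sales GROUP BY {1};"),
    (re.compile(r"how many (\w+)", re.I),
     "SELECT COUNT(*) FROM {0};"),
]

def nl_to_sql(question: str) -> str:
    """Translate a narrow set of English questions into SQL."""
    for pattern, template in TEMPLATES:
        m = pattern.search(question)
        if m:
            return template.format(*m.groups())
    raise ValueError(f"unsupported question: {question!r}")

print(nl_to_sql("total revenue by region"))
# SELECT region, SUM(revenue) FROM sales GROUP BY region;
```

The hard part, which this sketch skips entirely, is generating correct SQL for arbitrary phrasing over arbitrary schemas — that's where the LLM comes in.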
And so we saw the extension of the new query types with Neeva, which basically takes natural language and translates it into SQL — that's at least the plan. And it's interesting: Neeva was a consumer LLM search disruptor. Snowflake bought the company, threw the consumer piece away, and said, no, we're going to apply this to the enterprise. And then you saw Pinecone in some of the announcements, so they've got vector database; we saw RelationalAI, so you've got knowledge graph in there. That's the secret sauce of Snowflake: pulling all these different unique data types together. They didn't have an answer for the semantic layer either — that's futures here, and that's an open game, you know? As we build data apps — you remember, John, back in the ERP days, Oracle won the database wars. What they did was leave an open door for the application vendors. That's where SAP got in, IBM to a certain extent, and then BEA took that application logic layer and built an entire industry around it. And so Oracle then had to go out, of course, and buy all the application software vendors, and redo it with Fusion. So that's a future thing here. I don't think either of these companies has the answer yet to the semantic layer, and whether it's dbt or AtScale remains to be seen. The four big announcements here: Monday night they had Jensen physically here, which is rare — Jensen oftentimes doesn't come to these events. And what Snowflake is doing is containerizing the NVIDIA stack, bundling LLMs and the whole NVIDIA stack into Snowflake. And then the other three big announcements: they announced open Iceberg tables last year, but last year there was a delta. 
In other words, if you brought data into Snowflake, it would perform better than if you left it in an external table. Well, that's changed — that's now a first-class citizen. The other big announcement was the application framework: they announced 25 companies that are actually building apps. Not a huge number, but a lot of people are really excited about it. And then of course the container service inside of Snowpark. So they're basically containerizing everything, making it easier to build applications. And then they had a bunch of mundane-sounding announcements for developers that the developers actually loved — stuff like synchronizing GitHub repos and logging, things that are boring for the headlines but make developers' lives easier. The other thing I'll close on: we talked to Denise Persson, and she was saying that next year they're going to take their developer conference and merge it with Snowflake Summit. There were 10,000 people here this year, and she said that will likely double — their objective is 20,000 people next year. So it could be interesting. So, Dave, I'm trying to pull in the Databricks co-founder, who's right in the press room. Don Klein, our business analyst, is going to try to grab Matei on his way out. He's bummed he couldn't come on theCUBE — super over-scheduled, as is Ali. I'm going to see if I can grab him for five minutes to talk about LLMs. I heard him in the press room giving a talk. I mean, again, I come back to my rant about how press people do these meetings and don't videotape first. These speakers are going to say the same stuff over and over again. It doesn't really scale well. It doesn't really scale well, does it? You know, Databricks is copying our strategy, by the way. They're data plus AI and we're video plus AI. 
So, you know, there's a similarity between Databricks and theCUBE. Well, plus we're ripping them off by injecting AI into our CUBE AI. So there you go — turnabout is fair play. You know, the Neeva acquisition and the MosaicML one, that was kind of interesting. The narrative there is, well, they paid $1.3 billion for MosaicML. It was really an all-stock deal, valued supposedly at the last round, which was like a $38 billion valuation. That number's probably been cut in half, maybe a little less than half — maybe it's a $20 billion valuation now. The point is, if you do the math, they probably paid around $650 million in stock. But that's still a significant premium: MosaicML, I think, had $20 million in ARR. Contrast that with Neeva. Neeva, I don't think, had any revenue. Remember, Neeva's founder came out of Google; I think he ran AdWords right after Google figured out that was the future. John, he beat you to the punch — or you beat him to the punch, but he executed, I guess. Well, the scoop that I have on MosaicML: I put my ear to the ground in the capital markets, and clearly they were doing a financing. Naveen Rao, the co-founder and CEO, was doing the financing, and apparently they were stocked full of GPUs, too — they had tons of GPUs. So they had tons of GPUs, a vision aligned with Databricks in terms of democratization of machine learning — definitely strong there — but they had tons of capital needs to build out at scale, okay? So this was a way to take less dilution and make more money. Their choices were: get bought by Databricks for $1.3 billion, or do a round of financing, which would cost them a lot of equity dilution. And that's the real story of why this came together so fast. 
Databricks was smart, and they parlayed that directly into their show, which got them a hell of a bounce, big time, and put them on the front page of generative AI. Now, when you look at the technology, the scuttlebutt is that they have a great team around training, and that's a key area Databricks is taking advantage of. Also, Databricks was a bit put off by the fact that Snowflake got Neeva, right? So a little bit of jealousy, FOMO — and it's a white space they needed to fill. So it's a huge deal. I'm totally bullish on it. Naveen Rao is a great executive entrepreneur. He and Ali are going to be two peas in a pod relative to the vision; there's no conflict on execution. You've got another killer team member. So Databricks is building quite the team. They've got a great bench of leadership and killer engineering, right? They're poaching from Berkeley and all the best computer science programs. And I've got to say, I'm super impressed with what they've done. But that acquisition is a home run for MosaicML — making a lot of cash, and they didn't have to do a round of funding. So again, good for them. Well, so contrast that with Naveen and MosaicML: it's a lousy time to raise funding, right? You don't want to raise now because valuations are way down. So yeah, I'd rather have Databricks stock at this point than have to slog it out and eat glass for another seven years. I think it's a good value proposition. Let me just get the data out there. MosaicML took in $37 million, right? And they had $20 million in ARR. So let's call it $600 to $650 million when you discount it down. Neeva took in around $70 to $75 million and reportedly sold for $150 million. So they 2x'ed their money in, but I don't think Neeva had any revenue per se, and they basically shut down their consumer search product. 
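The back-of-the-envelope deal math in that run-down can be written out explicitly. A minimal sketch — the numbers are the rough figures from the conversation, not confirmed financials:

```python
# Back-of-envelope math from the conversation: an all-stock deal priced
# at the last-round valuation, then discounted to today's implied value.
headline_price = 1.3e9        # reported MosaicML deal price, all stock
last_round_valuation = 38e9   # Databricks' last private round
implied_valuation_now = 20e9  # rough "cut roughly in half" estimate

# Value of the stock consideration at today's implied valuation
stock_value_today = headline_price * implied_valuation_now / last_round_valuation
print(f"${stock_value_today / 1e9:.2f}B")  # $0.68B -- roughly the $650M figure cited

# Premium relative to MosaicML's reported ARR
mosaic_arr = 20e6
print(f"{stock_value_today / mosaic_arr:.0f}x ARR")  # 34x ARR
```

So the $650 million figure is just the $1.3 billion headline re-priced at roughly half the last-round valuation — still a hefty multiple on $20 million of ARR.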
And I think in both cases, John, to your point, both these companies are getting talent — that's really what they're after. And Snowflake's going to use it to add another query type: you write a search in natural language, and it translates that to SQL. And I'm not sure exactly — you seem to have a better handle on it — what the MosaicML fit is. Because my understanding is it's very narrow, for people who want to build LLMs, but I think that's not Databricks' intent; I think they want to broaden it to a much wider audience. Well, I think it's an awesome deal. It shows that the tech trends are hot on generative AI. And the thing we continue to unpack — it's a theCUBE and SiliconANGLE thing — is the term we coined, "data developer." That's getting traction, and we're seeing evidence of it. In fact, two startups I interviewed here at the show are both building applications essentially on top of Databricks. One does neural networks and graph databases as a service, where you have all this IP and data associated with recommendation engines and personalization, and they're literally a data module. So any company that's got a lot of data — say, for example, us, or anyone who's got a lot of data — can basically just plug in their software. So you're starting to see this plug-and-play mentality of the data developer emerging, and this is a new persona. Okay, I will tell you, from the 12,000 people I took a video of at the opening keynote yesterday, the demographic shift, Dave, is a lot younger. If you look at the audience, they're attracting the young talent. The AI data story is attracting the young developers — not just data science, not just data engineering. Sure, they're in there, but they're attracting classic, good old-fashioned entrepreneurs who want to build stuff. And so we are in a builder mode right now. 
And I think that's the thing people tend to miss right now: everyone's talking about how to run AI, not necessarily build it. So all the focus is on building. To use your analogy of horses on the track — people are switching horses, no clear leader at this point. And Databricks is making a play to be the brand to go to for the generative AI developer, doing things with the lakehouse. They opened up Delta Sharing, which is an open source protocol, and they've got massive traction on that. They also unified the format wars. Ali Ghodsi on stage said, we're ending the format wars: we are open sourcing UniForm, and that's going to tie together Iceberg and all these technologies at the metadata layer so that everyone can run, and they gave demos to show it, too. So Databricks is laying down some serious deals here to create de facto standards and move the industry forward. And I think if they do that — continue to put these standards out there and keep it open source — the emergence of data products will happen, and you're going to start to see product catalogs emerge in terms of data products. And then after that, you'll see the developer kick in. So I really think we're on point with this notion of the data developer, and that's going to play out over time. So both companies are trying to appeal to developers, but I think the Databricks persona has been much more toward the data science crowd. Whereas here it's mixed — you've got CIOs, you've got chief data officers, you've got developers. So you've got a really robust ecosystem. Did you say there were 12,000 people there? Yeah. What's the ecosystem like? It's booming. The traffic at the booths is amazing. I mean, again, the analysts give it straight A's across the board, mainly because the ecosystem's developing nicely. 
Yeah, so basically it's very similar, even though they're coming at it from different places — Snowflake coming out of the analytics world and Databricks coming out of the data science world, and they're on a collision course. There were probably 10, 11,000 people here, maybe even more. The ecosystem is really robust and cranking. It was so big they outgrew this place — they had spillover: the keynotes were at Caesars Palace and all the exhibits were down here at the Caesars Forum. And you said de facto standard — I think what Snowflake's trying to do is put all the data into the Snowflake Data Cloud. And the narrative here, the question, is: okay, do I want to do the data pipelining, the data cleansing, the data engineering inside the Snowflake cloud after I put my data there? Or do I want to do it outside — maybe in a Spark tool chain, or with Informatica or some other ETL? And the thing we sort of poked at, John, is it's a little different business model. Snowflake bundles in the AWS pricing, so you don't see the AWS pricing — you're just consuming it from Snowflake. Databricks doesn't. So a lot of people say, well, it's cheaper to do it outside, and I think part of that is the perception that when you pay for Databricks, you're not eating the AWS costs — but you're still going to pay for those separately. So we're going to do a full-blown TCO model to try to really look at what those full costs are. But that's something that's kind of interesting. Snowflake's argument would be: hey, you bring it in here, it's all about the governance and the security. I think Databricks obviously would have a different angle on that. What is the governance play? Oh, the other thing I wanted to say: Delta Sharing is Databricks' answer to Snowflake's data sharing and their stable edges. 
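The full TCO model mentioned above would net out those two pricing structures. A minimal sketch of the comparison, using made-up placeholder rates — these are not real Snowflake, Databricks, or AWS prices:

```python
# Bundled vs. unbundled pricing: the same workload billed two ways.
def snowflake_style_cost(credits: float, price_per_credit: float) -> float:
    """Bundled model: the cloud infrastructure cost is inside the credit price."""
    return credits * price_per_credit

def databricks_style_cost(dbus: float, price_per_dbu: float,
                          instance_hours: float, ec2_rate: float) -> float:
    """Unbundled model: platform fee plus a separate cloud-compute bill."""
    return dbus * price_per_dbu + instance_hours * ec2_rate

# A workload that *looks* cheaper unbundled can converge with the bundled
# price once the separate EC2 bill is added back in -- the point of doing
# a full TCO model rather than comparing platform fees alone.
bundled = snowflake_style_cost(credits=100, price_per_credit=3.0)
unbundled = databricks_style_cost(dbus=400, price_per_dbu=0.40,
                                  instance_hours=50, ec2_rate=2.5)
print(bundled, unbundled)
```

The unbundled platform fee alone (160 in this toy) looks far cheaper than the bundled 300 — until the cloud bill is included.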
What's Databricks' answer to the governance and the compliance and the security? That's a big chunk of Snowflake's value proposition — it's all around that component. So I'd like to hear what the Databricks story is. There are two things they talk about that specifically address that. One is Delta Lake 3.0 — democratizing the lakehouse. That was one of the big pieces of news: Delta Lake with UniForm basically creates a universal format for metadata that brings Delta Lake, Hudi, and Iceberg together, because you have a diverse connector ecosystem — you've got the dbts, you've got the Kafkas of the world, Starburst, Dremio, all different kinds of connectors — and then at the data layer you've got Parquet. So that's the one unification: the format wars being eliminated by having a universal format for metadata. Okay, that's critical. The second thing they're really focused on is Unity Catalog, a federated way to manage metadata and cataloging. That's a huge part of their deal — they talk a lot about Unity. So the combination of Delta Lake 3.0 and Unity Catalog is really where they're focusing all their energy. Again, a lakehouse is a data lake with structure — the warehouse part is the structure. That's why the term lakehouse stuck: a data lake with some structure, to give a little bit of a data warehouse vibe, because you need that compliance. And when I asked them directly on camera where the value is going to be — I said, how do you create a dynamic ecosystem when you have compliance and governance that could potentially drag you down? How do you not slow down innovation? — they said Unity Catalog and Delta Lake 3.0 will do a lot more things with AI. 
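The universal-metadata idea described above — one set of facts about a table's files, projected into the metadata shapes different engines expect — can be sketched as a toy. The field names here are simplified placeholders, not the real Delta, Iceberg, or Hudi schemas:

```python
# Toy sketch of "universal metadata": the Parquet data files are stored
# once, and each engine gets its own metadata view of the same files.
from dataclasses import dataclass

@dataclass
class DataFile:
    path: str          # Parquet file on object storage
    rows: int
    size_bytes: int

def as_delta_log(files: list[DataFile]) -> list[dict]:
    """Delta-style view: a log of 'add' actions (simplified)."""
    return [{"add": {"path": f.path, "size": f.size_bytes}} for f in files]

def as_iceberg_manifest(files: list[DataFile]) -> dict:
    """Iceberg-style view: a manifest summarizing the snapshot (simplified)."""
    return {
        "manifest_entries": [f.path for f in files],
        "total_rows": sum(f.rows for f in files),
    }

files = [DataFile("part-0.parquet", 1000, 4096),
         DataFile("part-1.parquet", 500, 2048)]
# Same physical Parquet data, two metadata views -- no copies of the data.
print(as_delta_log(files)[0]["add"]["path"])     # part-0.parquet
print(as_iceberg_manifest(files)["total_rows"])  # 1500
```

That's the "translation layer" point: the data files never move or get duplicated; only the metadata is generated per format.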
So AI allows you to do things differently — whether it's BigQuery or not, you can move things around, because the metadata is a translation layer. So to me that was very powerful in this UniForm announcement. Yeah, I think you're right. The question you asked is the really good one, because compliance does slow things down. I'll give you an example. Snowflake last year announced Unistore, which allows them to deal with transaction workloads — you've got analytics in columns, you've got transactions in rows. They announced it last year and it's still not shipping. And the answer they gave was: it's really hard. We've got to tick all the boxes on compliance and security and governance; it's taking time, and it's getting close. The other point I wanted to mention — I was talking about Snowflake bundling AWS revenue in, so you're paying AWS through Snowflake, while Databricks is different. If you stripped out the AWS revenue, it would be interesting to compare the revenue with Snowflake's. I mean, we don't really know what the revenue is at Databricks, and we don't really know what the net revenue is for Snowflake — I don't think they're marking up the AWS cost much at all. But if you strip out the AWS cost, the argument is they could be comparable in revenue size. So that's kind of interesting, and the ETR data is showing them really converging in terms of spending momentum — not quite; Snowflake has the bigger account base — but those two worlds are definitely coming together. And then my other question for you: what was the presence of Microsoft? Because Microsoft was essentially Databricks' distribution channel for a long time. They standardized on Delta tables, and now with OpenAI, questions remain about what that means for the future of that relationship. Well, first of all, it was noticeable that AWS was not on stage with them. 
Satya Nadella was keynoting — the first interview Ali Ghodsi did was with Satya Nadella, who was beamed in remotely, versus Jensen, who was actually here. He didn't come in through Teams; it looked like Skype, though I don't think it was actually a Skype connection, but we'll see. But yeah, a big part of that is optics. I think Microsoft's making a bet on Databricks — that they win with Databricks, right? Databricks' enterprise customer traction through Microsoft increases their TAM, but Databricks is still a bigger customer of AWS than of Microsoft. But Microsoft is clearly embracing Databricks, clearly. And that's a direct strategy extension for Microsoft — and for Databricks, it extends their TAM. Why wouldn't they do a deal with Microsoft? So it's a smart play, and a smart play by Microsoft. Well, supposedly Databricks was in like 40% of the VMs running on Microsoft. So Microsoft said, all right, let's standardize on Delta tables and may the best man win. And Databricks relied on Microsoft for go-to-market. So with OpenAI, it's going to be interesting to see how that plays out, right? Because obviously Microsoft's doubling down on OpenAI. So the question there is: think about Databricks' ML and AI tool chain — it's really a supervised model. Can generative AI create an unsupervised model, and could OpenAI, and potentially Snowflake with NVIDIA's stack, try to leapfrog what Databricks has done? That's going to be an interesting battle to watch. I mean, there's just so much next-gen stuff going on here, it's amazing. You've got a lot going on — you've got to wrap up there. I know you're tearing down; you're going to get kicked out of the hall there in San Francisco. Just other quick news to hit on. 
Larry Ellison says Oracle is going to spend billions on NVIDIA GPUs, and three times that on Ampere and AMD CPUs, in 2023 as the company expands its cloud services. Yeah, and you saw the NVIDIA news with China. The U.S. restricted the export of the A100, so NVIDIA came up with the A800, which has 400 gigabytes per second of interconnect bandwidth instead of 600 — below the threshold. So it was a workaround on the U.S. export rules. Now the U.S. may squeeze them further — NVIDIA took a hit this week. They're trying to restrict China's access to the technology, which this does: China needs the A100 for supercomputers, and the U.S. is slapping NVIDIA's hands, saying let's squeeze you even further. And that's supposedly worth $400 million each quarter to NVIDIA. So that's something to watch. A lot of action going on, Dave. What's your rant for the week? My rant for the week is the airlines suck, and I might not get home. Three of the five flights on my route tonight were canceled this week, so I've got a backup flight I'm going to try to get out of here on, to make it home for the Fourth. Hopefully I can do that — but the airline stocks are all going up because there's more demand than supply. Wow. So there were a lot of people impacted yesterday by all the flight cancellations. Was that from the weather? I think it's a combination of the weather, lack of staff, and this whole 5G compliance thing. You know how they make you turn off your phones and go into airplane mode? Well, evidently — and everybody ignores it — 5G actually creates some problems, and you have to retrofit some of the planes to accommodate it. And some of the planes haven't been retrofitted, so they're not being allowed to land. So yeah, it's kind of a mess right now. 
I saw that 4,000 flights yesterday were impacted, and 400 were canceled. And my JetBlue flight out of here that I usually take was canceled once this week, the Delta flight was canceled once, and the United flight was canceled three times. So I'm rolling the dice tonight, John, still trying to get on the red-eye. Come to San Francisco, jump on Southwest. Yeah, everybody's going west; it's easy from here. So you may see me tomorrow. All right, come on in, the water's fine. All right, Dave, great to catch up with you. I know it's a short pod, an abbreviated podcast. I'm super tired — been working, burning the midnight oil, getting all the stories. We have two sets here at Databricks, one in the press room and one on the lakehouse floor. Before that, last week you had Snowflake, we had MongoDB and HPE, and we've got the big Supercloud event on the 18th — which, by the way, has Doug Merritt as a speaker. He's coming out of retirement; he's taking the helm at Aviatrix. We have people submitting talks now, so it looks like we're going to get a lot of folks from the VMware ecosystem sending in talks that were rejected at VMware Explore, as well as other people who just want to share their vision of supercloud. So it's not just VMware — we've got Microsoft folks, we've got even Oracle people kicking the door down. So you see the practitioners move to supercloud. I'll tell you, what Databricks is doing here and what Snowflake's doing is the tell sign that the super apps will be AI native. AI native — not AI augmenting an application and making it better. That's going to happen in the short term, but you're going to start to see the emergence of AI-native applications and models in applications, while some of the bigger ones will be horizontally scalable as a service. So we're seeing the emergence of a new kind of data stack, if you will. 
And it's super exciting because it's actually happening in front of our eyes. It's not like a vendor saying, use this stack — it's the developers voting with their code. You're starting to see the entrepreneurial innovation set the table and set the standard for what's coming. So the bet on open source is a good one, and I think anyone who plays in open source will take advantage of this trend. Clearly something of a tell — they mentioned it in the keynote. And if I'm Amazon, I'm doing the same thing. And again, I just dropped my post with Matt Garman, my big Q&A. I decided not to do a big write-up, just an intro, because the Q&A was kick-ass. It reveals Matt Garman's master plan for AWS for generative AI — he lays it all out. He's not just a sales and marketing guy; he ran EC2, he knows the technology. So Matt Garman gives some insight into AWS's master plan for generative AI. I went to bed last night, woke up this morning — that post already has 30,000 views. So this is going to be a really killer battle. You're starting to see Microsoft take the knives out, and AWS is not going to let Microsoft roll over them like this. I think Microsoft's winning the PR war — I've said this publicly, Amazon's getting beat — but Matt Garman is not afraid, Dave. He's like, hey, you know what? We're playing the long game. That's what we do. And just like with the original cloud, the value proposition is still the same: we're going to enable people to stand up stuff quick and run it fast, and do it in a cost-effective way. Let's see if they don't miss it. Again, AWS is a little distracted right now, losing the war in the court of public opinion, but they've got to get back in the game. That post was really, really good, so I appreciate it. Dave and folks watching, let me know what you think of the pod. 
We'll see you next time — a short pod, and we're going to add some cuts from Frank Slootman and some cuts from Databricks that we can get in there. So thanks, everyone, for listening. Go to siliconangle.com and thecube.net — that's where all the stories are popping. Thanks for listening. And as always, DM us: @Furrier on Twitter, John Furrier on LinkedIn, Dave Vellante on Twitter. Thanks for listening. Hi everybody, welcome back to Caesars Forum in lovely Las Vegas. I'm here with Frank Slootman — he's the chairman and CEO of Snowflake, a friend of theCUBE. Thanks for coming on and making some time for us. Absolutely, Dave. Good to see you. So investor day was yesterday. They obviously liked what you said — the stock's been up two days in a row. Of course, you had Jensen on Monday night. How did investor day go? What was the conversation like? I think what was important at investor day is we had to fully lay out our strategies for enabling AI on the vast amounts of data that we manage and host on behalf of our customers. And they felt it was a very, very convincing strategy, a very compelling strategy — and obviously that was the number one topic coming into the day. But we also reiterated, and upped in a couple of places, our long-term guidance. And it's just good to sync up. I think the conference content helps an enormous amount in helping them understand the vastness of the strategy and how far we've come. So this is where it all comes together; it all becomes real, right? They're no longer just words — you get to see it, touch it, and really assess the reality of everything. Turning the engineering investments into product, that's kind of what it's all about in tech. But they obviously bought the story. I'm sure there were some skeptics in the room asking you hard questions. They always ask about the competition and the TAM — but how did you address the expansion into new markets? 
Like I say, they obviously bought into it. But what was the narrative there? Well, the narrative for Snowflake as a data cloud is that it's a multi-layer cake, right? We obviously have infrastructure — elastic, consumed by the drink. We have live data in extraordinary amounts. We have the complete workload enablement layer, the programmability platform, which is Snowpark, the marketplace, and then the transaction model. With the transaction model, people can monetize data and applications. So the strategy really is: we enable data engineers big time. I call them our homies — those are the people we've been super close to historically, because we're a database company from way back — but we've now completely embraced the functional layer that lives above the data layer. You have data engineers and software engineers, and we've now said, look, we address both these audiences. It's a big vision, but we think in the cloud you have to have this. In an on-premises environment it's very, very different — you can really stratify these things. But in the cloud it's like, wait a second, who manages security and governance here? Well, it's not you, it's them. So in other words, unless we step into that void and say, no, no, we own it — if you're on Snowflake, you're safe, you're compliant, all these things — that's a really important thing, because if we're not doing it, who's going to do it? So there's a lot of discomfort around where the software engineers live, where the data engineers live, how they interact. And that's really the space we want to usurp, if you will. Well, governance and security is obviously a big part of your promise. Now, yesterday at your keynote, you had a little joke: you said every time I say AI, the audience has to take a shot. So I thought, Frank, here's the thing — turnabout is fair play. So I've got a little Jameson here. I don't know. 
Yeah, that's my son's favorite drink. And then I had to go up and down the Strip yesterday to find these beauties, little Flamingo shot glasses. So we'll put these aside, maybe for later tonight. Now, keep count of how many times we say AI. I'd have to consume an entire bottle, probably. You can't have the CEO of a public company doing shots on camera, but anyway. Okay, I want to go back to Monday night. Sarah Guo, I think is how you say her name, asked you a question: why Snowflake? And you said, because we're the best. Yeah. Okay, what makes you the best? I want to dig into that. Well, you ask a general question, you get a general answer, right? But what makes us the best? Our core is an extraordinarily optimized database engine. A lot of the superlative performance comparisons, and the economics, are because of our database engine. So when we move into Snowpark, the database engine backs the Snowpark layer. People are seeing these two-to-four-X, literally two-to-four-X, comparisons in performance and economics relative to their Spark jobs, and they're like, how does this happen? Well, it's because of the database, right? There are a bunch of other things as well, but fundamentally the database is superlative. So we bring that to every single workload. You're running Python? You're running our database engine at the back end. And that's what creates all these opportunities for extraordinary performance and economics, never mind the governance and the operational simplification. So I want to dig into that a little bit, because the high-level messaging, I think, is really clean: all data, all workloads. Very powerful. And when you go down deep and talk to folks like Benoit and Thierry and the technical people, it's really solid, a deep understanding.
I find, Frank, in the middle, when I kind of get lost in all the products, it's hard to connect the dots, and I want to run something by you and see if we're understanding this correctly. I take a lot of notes when I'm on a show like this. It seems to me you've got a lot of ways to query data: you've got SQL, you've got data frames, you've got Neeva now, you've got search, you've got supervised machine learning libraries (AI, we'll do a shot later), you've got unsupervised. You also support a lot of different data types. You start with relational, rows, columns, then streaming, vector; we saw all of that this week, especially when you squint through it. What doesn't come across to people, I think, and I want to get your take on this, is that the magic is it's all integrated. I can query those different data types, those storage formats, and you take care of it and give me a consistent, coherent return. That's magic; that's not easy to do. Amazon has a lot of different data types too, and a lot of different query options, but it's that last piece that creates the difference. Is that the correct understanding of the magic? Yeah, one of the things that's been said a few times in the last couple of days is that Snowflake is a single product, which is quite an extraordinary thing, because in software engineering, in the spirit of expediency, people lose their religion very, very quickly and start hacking separate engines. Sometimes they have three or four different flavors of the same thing, because it's just quicker, right? I spin up a separate team, I build a separate engine, and I get to market quicker that way. We have resisted that. We still have one product, which is an extraordinary engineering feat. And not just one product: we can serve two men and a dog as well as the Fortune 10. It scales from the smallest to the largest around. But to your comment, we have to support all the different user types on the data cloud, right?
You're an end user? We support you, and a lot of the generative AI is going to be aimed squarely at end users, because if you're literate, you'll now be able to get considerable value from data. That was a heck of a lot harder for most of our lives, right? But move up the spectrum and you're dealing with hardcore programmers, Python, Java, a very different worldview; their life experience is completely different from a SQL engineer's. We have to support all these people, but behind these interaction methods is the exact same product. You're interacting with the exact same engine, the exact same governance layer. You just engage with us differently, based on how you want to do things. Historically, SQL engineers, SQL analysts, basically data people, data engineering people, were really our folks, right? But now the whole software engineering culture is coming onto Snowflake as well, because we've got data and we've got function. You heard Mihir from Fidelity yesterday talk about data and function. Data engineering, software engineering: they both live on Snowflake now. Yeah, and 170 databases, I think. Somebody said yesterday that we're all data engineers in a way; that's true, some are just more sophisticated than others. The other point you mentioned is small to large. That's Snowflake's spectrum. You didn't really have that at ServiceNow; great company as it is, it was really kind of mid to large. I want to come back to that example, because one of the things you talk about a lot is supply chain, and the company Blue Yonder, which is interesting because it's a legacy company, an old-line logistics firm, right? And they're replatforming on Snowflake and RelationalAI. So my understanding is that with the container services in Snowpark they can bring in all those legacy apps. Exactly. Containerize them and then actually have a consistent data platform. And a fully governed data platform. And using our database engine, again, at the back end.
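Slootman's "behind these interaction methods is the exact same engine" point can be sketched with a toy example: a dataframe-style builder that lowers to the same SQL an analyst would write by hand, so both audiences hit one engine. This is an illustrative sketch only; the `Frame` class and its methods are hypothetical and are not Snowflake's actual Snowpark API.

```python
# Toy illustration of "many interfaces, one engine": a dataframe-style
# builder that compiles down to the same SQL an analyst would write.
# All class and method names here are hypothetical, not Snowpark's API.

class Frame:
    def __init__(self, table):
        self.table = table
        self.columns = ["*"]
        self.conditions = []

    def select(self, *cols):
        self.columns = list(cols)
        return self

    def filter(self, cond):
        self.conditions.append(cond)
        return self

    def to_sql(self):
        # Lower the dataframe calls into one SQL statement for the engine.
        sql = f"SELECT {', '.join(self.columns)} FROM {self.table}"
        if self.conditions:
            sql += " WHERE " + " AND ".join(self.conditions)
        return sql

# Two "interaction methods" that reach the engine as the identical query:
hand_written = "SELECT region, amount FROM orders WHERE amount > 100"
via_frames = Frame("orders").select("region", "amount").filter("amount > 100").to_sql()
print(via_frames == hand_written)  # prints True
```

The point of the sketch is that the interface is a front end; governance and performance live in the single engine underneath, which is why the Python path inherits the same characteristics as the SQL path.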
Yeah, there are two things that really matter in supply chain management. The first is complete visibility across the supply chain. That has historically been an unsolvable problem, because the supply chain is made up of who knows how many independent entities, and they're like, well, this is my data and you're not going to see it or touch it, and building network connections was hard. We had EDI, remember that term? We were just hacking data connections from A to B, and people were like, I give up, this is just too hard. So that problem gets solved with Snowflake: once you've got two parties on Snowflake, it takes a matter of minutes for them to have visibility into each other's data. It is a data integration problem, first and foremost. But after that, yes, you mentioned all these legacy engines that Blue Yonder has. They could go and rebuild and rewrite those; some of these things are 20 years old, but they're still used by many, many manufacturers, retailers and so on. Now they can just containerize those and use them as a service, and they get all the benefits of a modern cloud platform. That's pretty great. The other thing is they have to bring enormous compute to bear on these engines, because they're very short-burst: there's an event in the supply chain, and now they have to run all these scenarios, what do we do, what do we do? And it requires tons of compute. Snowflake is ideal for that, because you can stand up the cluster and the workload in seconds, massively provision it, run it for its duration, and then back off. The elasticity of the compute and the data is just ideal for supply chain. When you say back off, you mean dial down the compute? Yep, unwind it. Which others have tried; they say, oh yeah, we're going to separate compute from storage, but they never quite get there. You guys were the first. I want to ask you about the ecosystem, because the ecosystem continues to grow. It's critical.
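The economics behind that burst pattern, stand up a big cluster in seconds, run the scenarios, then unwind it, can be made concrete with back-of-the-envelope arithmetic. The per-node rate, cluster size, and durations below are purely hypothetical and are not Snowflake pricing; the sketch only shows why per-second billing on short bursts is so different from standing capacity.

```python
# Back-of-the-envelope sketch of elastic, per-second compute for a bursty
# supply-chain workload. All rates and sizes are hypothetical.

RATE_PER_NODE_HOUR = 2.00  # hypothetical $/node-hour, billed per second

def burst_cost(nodes, minutes):
    """Cost of standing up a cluster, running the burst, then suspending it."""
    return nodes * (minutes / 60) * RATE_PER_NODE_HOUR

def always_on_cost(nodes, hours):
    """Cost of keeping the same cluster provisioned the whole time."""
    return nodes * hours * RATE_PER_NODE_HOUR

# One supply-chain event: 64 nodes for 15 minutes, then wind it down.
event = burst_cost(nodes=64, minutes=15)            # 64 * 0.25h * $2 = $32
# Versus provisioning 64 nodes around the clock for a 30-day month.
standing = always_on_cost(nodes=64, hours=30 * 24)  # 64 * 720h * $2 = $92,160
print(event, standing)
```

Even with dozens of such events a month, the burst model stays orders of magnitude cheaper than standing capacity, which is the "elasticity is ideal for supply chain" argument in numbers.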
I did a little mini-video yesterday saying the hallmark of any cloud company is its ecosystem, and that's proving true. It's critical as you become the platform for applications, for data apps. How have you thought about the ecosystem and its growth, and what are you specifically doing to advance and grow it? Well, look at things like Streamlit, which has a huge community around it. Python programmers reflexively reach for Streamlit when they want to publish something, visualize, animate, because machine learning models are for programmers, right? Or at minimum fairly sophisticated technical people. We have our native apps framework, right? In other words, what we're trying to do with the data cloud, and I said this yesterday in the keynote, is set up a renaissance in software development by really lowering the bar. If you wanted to build, publish, and monetize an application, what did that historically take? You had to raise venture capital, staff up, buy tons of hardware, and then you needed a scalable, enterprise-grade, high-trust platform. You'd pretty much give up before you'd gotten started. So we created a full stack where not only can I build it, not only can I sell it to the enterprise because it's on the Snowflake platform, but I can now market it through our marketplace. By the way, you can find it through Neeva search very easily. And I can monetize it: I get paid on the transaction. We also announced yesterday that people can use their committed Snowflake dollars to buy data and apps. So if you're two men and a dog and you want to build a service, publish it, and monetize it, all you have to do is cash the check at the end of the day. We've massively lowered what it takes to start a software business, and it's very, very fine-grained.
And we're hoping to set up a renaissance in software development, because we've been in software for most of our lives here. It's been hard and risky and expensive, all these things. We bring that way, way down. That's our agenda. That's our native apps framework; I compared it to the iPhone. Obviously Snowflake is our version of the iPhone, but that's what it is, except that it's multi-cloud: you can build one app and it runs on all of them. And it's the app store for data. It's like the next generation of cloud, as I see it. In other words, when Amazon started, if I were a startup I didn't have to go buy a bunch of servers and Oracle database licenses, and that was great. This is the next generation, which is the integration. Cloud really hasn't had a software development paradigm the way we've had it in mobile, for example. So we're really asserting that paradigm as a cloud company: this is how you build apps in the cloud. Right now it's roll-your-own at every other shop, but we've got one, and it has a lot of advantages. It has live data, right? It has a full governance framework. It has full workload enablement, the marketplace, the transaction model. Just think about it: you need all that stuff. Well, the other nuance that a lot of people might not have caught is how you leveled the playing field with Iceberg tables. There used to be a penalty for keeping them external; you had an advantage bringing them inside Snowflake. That's gone now. If you want to leave them in an external Iceberg table, you're going to get the same performance. So that's, again, a little nuance of how, it seems to me, you want to be the best possible place to build apps, not just tick a box on open source. People can choose. Look, you want your object to be an Iceberg open-table-format object. Do you want us to manage it? Do you want us to be the custodian?
Do you want some other tool to be the custodian? Do you want to manage the storage, or not? You get to make those choices. People will learn over time what fits, and by the way, you can change your mind; it's not a decision you have to make for all time. But these decisions all have trade-offs, and people are going to learn what really fits their particular circumstance. Sometimes people just reflexively react: well, I don't want to duplicate my data. But when you duplicate your data, there's actually value to that, because it's highly optimized, it's highly organized, it's sanctioned and trusted. A lot of data lakes are struggling, and the reason is people are like, I don't know what I'm dealing with; it's untrusted, unsanctioned. And then people don't trust the results the workloads generate, right? So they back off and go back to their data warehouse, saying, well, that's sanctioned data, everybody uses that, everybody believes that. So data lakes really need to come up to the level of function that we have already brought to the data. By the way, we of course are the data lake for most of our customers. They view us as a data lake. You said that at investor day. Yeah, early on. The other night with Jensen, it was kind of funny, you guys going back and forth. You did say that consumption models require discipline, and he said AI, or LLMs, are going to take that to a new level. So explain what you meant by that and how you plan to deal with it. Well, look, it's all very fun to feed the entire Great Gatsby into your prompt and let it summarize, and everybody goes, ooh and aah. But what is the economic value of that?
I mean, the reason that search became such a gigantic thing is that there was a business model to pay for it. There needs to be a business model that's going to pay for AI as well. Once the fun and games wear off, people are going to take a very hard-nosed look: what am I getting for this? You can ask the question, what should I have for dinner tonight? But how does that translate to the business model? Where's the alignment? And if you can't answer that, people are going to go, well, this is an expensive hobby. Sure, in academia and all these places, fine, but in business we need to see returns for spend. And that's a very important thing going on in cloud in general, because people are consuming, consuming, consuming, and then the CFO comes in: wait a second, what's the relationship to the business side here, with all the money we're spending? This is really important. They go hand in hand: the technology and the business model. There needs to be a business model. Meaning consumption is aligned with value. Yeah. So the more you consume, the more the return. Yeah, to be a little more pointed: ROI, right? We're going to spend a dollar, we're going to make ten, or whatever the IRR is going to be. You need to be able to pay for it. So, in observing you and Mike, my understanding is you don't incentivize your sales force to go hard after new logos; rather, you're focused on consumption. Is that the right understanding? No, we do both. You do? Yeah, we actually have selling motions that are strictly and only focused on new logos, specific new logos. Not all logos are created equal, so we're very targeted in that way. And then there are the people dealing with existing Snowflake accounts; those are consumption-driven, because they need to be. So, two very different models.
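The "returns for spend" test Slootman describes, spend a dollar, make ten, can be sketched as simple arithmetic over a portfolio of AI workloads. The workload names and dollar figures below are invented purely for illustration.

```python
# Sketch of "consumption aligned with value": fund workloads whose return
# exceeds their consumption spend. All names and figures are hypothetical.

workloads = {
    "fraud_scoring":    {"spend": 120_000, "value": 1_100_000},
    "novel_summarizer": {"spend": 80_000,  "value": 15_000},  # fun, but an expensive hobby
    "demand_forecast":  {"spend": 60_000,  "value": 540_000},
}

def roi_multiple(w):
    """Value returned per dollar of consumption spend."""
    return w["value"] / w["spend"]

# The hard-nosed CFO test: keep what returns more than it consumes.
funded = {name for name, w in workloads.items() if roi_multiple(w) > 1}
print(funded)  # the summarizer fails the "what am I getting for this?" test
```

Here `demand_forecast` returns nine dollars per dollar spent, roughly the "spend a dollar, make ten" bar, while the summarizer returns less than twenty cents on the dollar and gets cut.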
Yeah, okay, so I didn't realize you had both. And how's the former going, in terms of the new logos? Well, it's actually working much better, because we've created full separation. Landing is a totally different motion than expanding. Expanding is all about use cases and workloads, developing the account, whereas landing, those are technical battles, CTOs and CIOs, all the stuff that you guys revel in. Yeah. So you're a software guy, but you've had a couple of stints at hardware companies: Data Domain, and EMC for a little while. Hardware companies don't want to announce a new product while there's an existing product in the field, because it'll cannibalize sales. You guys take a different approach; you give the network, the ecosystem, visibility into what's coming. Can you explain your philosophy around having a lot of stuff in private preview or public preview? Well, a couple of things. First of all, the previews signal very precisely what's coming, so you know. Secondly, we need to put the thing through its paces. The long pole in the tent for us in delivering content is always the security and governance aspects. In the case of Python, it took us literally years to fully plug Python in as a high-trust, enterprise-grade programming framework, because you can't have developers just download their libraries willy-nilly and stick them in. Who's watching that? Who's certifying that, right? So we took an approach where we said, wow, we have to look through the entire supply chain and understand that this is bulletproof, because we're going to give it to an enterprise, and they're going to say, hey, Snowflake, I can take this to the bank, right? And we're going to have to say, yes, you can take it to the bank; we're liable, and all these kinds of things, for them using it.
So that takes time. And with these previews, I heard a little bit of the analysts complaining about that. Yeah, sometimes it takes more time than we'd like, but that is a commitment we have to the large enterprise. Look, we sell to two men and a dog as well, and they may have fewer concerns around governance, but it's the same product that we provide to the largest institutions in the world. Christian said we'll ship it when it's ready. So he wasn't apologizing for it. Neither were you. There's a lot of fun here, a lot of smiles; the culture is very playful. Are you having fun? Look, this is such a great time to be alive in this industry. I've been waiting for this for decades. We've been grinding it out since the late '80s with technology that was just agonizingly difficult, and all of a sudden we're in this place where, wow, the acceleration, it's warp speed. I think Jensen said it the other night, and it was very funny: you turn the AI factory on, and while you sleep it's generating all this amazing intelligence. Maybe a slight overstatement, but probably not by much. This is coming like a freight train. Well, we've seen a lot of waves, Frank, and I think most of us agree this is potentially the biggest one we've ever seen. Frank Slootman, thanks so much for taking some time with us. Really appreciate it. You bet. All right, keep it right there. We'll be right back with our next guest. You're watching theCUBE, live from Snowflake Summit 2023. Right back.