Good morning, everyone. It's theCUBE, live, day one of our coverage of Snowflake Summit 2023 from Caesars Forum in toasty Las Vegas. Lisa Martin with Dave Vellante and George Gilbert. Guys, great to see you. It's been just days, Dave. I think we should just move out here. We probably should. My family would love that. So here we are. There are about 12,000 attendees here, bigger than last year. Last year was great; the excitement, the energy was fantastic. Dave, give us kind of a rundown of what's going on with Snowflake from a financial perspective. Where are they? What are your thoughts? Well, Snowflake, as people know, has been a wild ride. George, you brought them to our attention, I think, maybe 2014, 2015, and we've tracked them ever since. Then they brought Slootman in to do the IPO with Mike Scarpelli, and the thing just rocketed. So they're basically on a run rate, if you take the last quarter, of about $2.4 billion. Take the analyst estimates and they'll have about $2.7 billion for 2024. They've got about a $60 billion market cap. At one point there, George, they were over a hundred billion, as you might recall. They were actually more valuable than ServiceNow, a much more mature company, which at the time made no sense, but it just underscored the hype around data and the belief in Slootman and Scarpelli and their model. They've got a very high net revenue retention, over 150%, and they're committed to that for the long term. They've got about 8,000 customers, and they're free cash flow positive now. I think their long-term model is that they'll throw off 25-plus percent from a free cash flow standpoint. So their strategy is to invest in growth, but not growth at all costs, and they'll stress that. It's going to be interesting to see, with the economic headwinds and the sort of conservatism, whether that approach changes. But generally speaking, this is data week, right? You've got Snowflake.
You have Databricks in San Francisco. You have Google trying to chime in with what it's doing. Obviously, Amazon is the other big player. And of course, Microsoft is kind of off doing its own thing, but those four are really vying to be the next-generation data platform. They are, and we heard a ton about that, starting last night. Last night was a great fireside chat between Frank Slootman and Jensen Huang of NVIDIA, announcing this really huge strategic partnership. Dave, you had a chance to be live in that keynote. What were some of the things that, first of all, having Jensen Huang in person, significant? Yeah, that's a big deal, because Jensen's a busy guy and he often comes in, like Satya, over the wire, but he was physically there with Frank. Sarah Guo, is that who was moderating? Yes, she was doing the interviewing; she's a VC. And you know, Frank started off by saying, we've had a long, strenuous relationship with data. I thought that was very powerful. And then Jensen just went into a sales pitch. It's the more-you-spend-the-more-you-save kind of thing, right? Yeah. But the interesting thing is, coming into this event, a lot of the analysts, a lot of financial analysts and financial markets, were cautious about Snowflake: headwinds, headwinds, headwinds. Well, the stock was up probably 4% premarket. It's probably up three, three and a half percent now because of the AI hype. And so I think, George, we've got this kind of bifurcated market. You've got the AI players and you've got kind of everybody else, and it's kind of an interesting dynamic. We've sort of seen this before, but it feels a little different this time around. So you're referring to the AI fairy dust. Whenever we go through a bubble, we wash everything; there was cloud washing with cloud fairy dust.
But it's not really just cloud washing or AI washing here, because they do have deep integration that leverages the data. Remember, in the age of AI, you train models with data, and Snowflake's advantage is that they host a lot of data. The play they're making with AI is really twofold. One, they're saying all the analytic data that you've collected, you can use to train models, so that you're not just looking at business intelligence on historical data, but also doing prediction. But because you have all this data here, you might also want to bring in all this other document-oriented or complex structured data, with a new type of model. Before, you had a data engineering pipeline where you would refine all the operational data to turn it into analytic data. Now, like with Applica, you have an LLM, a large language model, so you use generative AI to parse and essentially shred all the documents into their structure. Then you have a unified repository of not just all your analytic data, but all your documents, and eventually images, video, audio, and then you have this single source of truth for everything, all the knowledge and data in your organization. That's the aspiration. And we've heard that before: a single source of truth, a 360-degree view of the customer. Generally, as an industry, the technology business hasn't lived up to those promises. I think we're finally at a point where that could happen. There were some other really great takeaways from the Jensen-Slootman interview last night. Jensen basically said, quote, a large language model turns data into an application. The goal should be to build an AI application, not an LLM. And then there was an interesting discussion about, well, of course, Jensen calls everything an AI factory now. The data centers are going to go away; they're going to be replaced by AI factories. Slootman said, well, look, this stuff is not free.
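The document "shredding" idea George describes can be sketched in miniature. This is a toy illustration, not Snowflake's actual document pipeline: a regex extractor stands in for the generative model, parsing rigid invoice text into structured rows that could then live alongside analytic tables.

```python
import re

# Toy stand-in for an LLM extraction step: in practice a generative model
# would parse arbitrary documents; here a regex handles one rigid invoice
# format so the shape of the pipeline is visible end to end.
def extract_invoice_fields(doc: str) -> dict:
    vendor = re.search(r"Vendor:\s*(.+)", doc).group(1).strip()
    total = float(re.search(r"Total:\s*\$([\d.]+)", doc).group(1))
    return {"vendor": vendor, "total": total}

documents = [
    "Invoice 001\nVendor: Acme Corp\nTotal: $1200.50",
    "Invoice 002\nVendor: Globex\nTotal: $89.99",
]

# "Shred" each document into a structured row for the unified repository.
rows = [extract_invoice_fields(d) for d in documents]
```

The point is the destination, not the parser: once documents are reduced to rows, they sit in the same repository as the analytic data.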
And there was a little funny moment there where Jensen said, well, let me answer that question, and basically said, it's the end of CPU scaling. Now it's all about GPUs. Accelerated computing is now here. The OS is completely different. And that's what they've done with their CUDA architecture, et cetera, and their software architecture. Quote, we are going to turbocharge the living daylights out of Snowflake. And basically Jensen went into his sales pitch about, again, the more you spend, the more you save, the more you make, even. So it was a very powerful sales pitch by Jensen. I was talking to some customers after the fact and they were like, yeah, wow, that was a really good, exciting sales pitch; we're still not sure exactly how we're going to apply this. But there was a distinction that Jensen kind of wanted to gloss over. Jensen was like, this is the biggest platform change since the IBM 360, 60 years ago. But what he was saying was, it's a big platform shift if you throw out essentially all your CPUs and all your data center infrastructure and you build it all around the NVIDIA stack; they have a new way of not just building the GPUs, but building the clusters, building the data centers, building the software on top of that. But you do that and embed it in a Snowflake container service only if you're training a new LLM from scratch, and not a lot of people are going to do that. You've got to ask, how realistic is that? Some people, let's say Bloomberg, have their own totally de novo LLM that they want to build. Most people are going to take an existing model from OpenAI or Cohere and fine-tune it, and there's a separate pipeline for that that doesn't require the whole NVIDIA stack. Well, wait a minute, let's take that, we're going to jump around here. Let's take that Blue Yonder example. Blue Yonder, I tweeted out, is Duncan Angove's latest heavy lift.
And I call it a heavy lift because Blue Yonder is a collection of assets, and one of them is Manugistics. And then Snowflake has announced this container service within Snowpark. So you would presumably take the legacy Manugistics apps and bring them into Snowflake. Of course, I think, George, you've shared with me that Blue Yonder is re-architecting its system around Snowflake and RelationalAI, which is this hybrid graph database. And so now those Manugistics legacy apps can play in this new data world, right? So square that circle with what you just said. So this is where they could have, but didn't, show us an aspiration for what data apps look like three to five years down the road. And this was all around supply chain, by the way, I failed to mention that, which is a mess, right? So the idea is, again, break down all silos. But break down all silos doesn't mean just move your microservices with their Mongo databases into one Snowflake database. It means bring all your applications into one data platform. Blue Yonder is a showcase because it says you can take a real enterprise app and host not just the analytics part, but the transactional part, the operational part, on Snowflake. What was not pure Snowflake was the equivalent of the application server; the application logic is written on another database, RelationalAI, which is a relational knowledge graph, and that's the semantic layer we've been talking about. So it's using Snowflake for data persistence, and then it's using RelationalAI essentially as what in the old days we would have called the app server, for the app logic. But the main point is, you're putting all your application data, all your microservices, everything in one Snowflake account, essentially, so that you have no silos. That's the idea. So let's try to zoom out a little bit here, because at the very high level, you get great messaging from guys like Frank. I mean, it's all data, all workloads, right?
And then at the very low levels, and I don't mean low seniority, I mean deep in the bowels of the technology, you get really smart people who can explain this stuff in detail. And in the middle, which is kind of what we heard on stage today with Christian Kleinerman, and he's coming on later, is this AWS-like fire hose of announcements. And you're trying to squint through that and say, okay, what exactly is going on here? So when they talk about all data, all workloads, and Joe and I have talked about this, we're talking about a unified data management platform that has a lot of different ways to query the data and a lot of different data types that can be stored and managed. So Snowflake is essentially stretching its, I'll call it fabric or mesh, to as many data types as possible, giving as much optionality as possible, so that you don't have to move the data and leave the Snowflake environment. But explain the data types and the different ways to query, and that overall strategy, for the audience. Okay, and to set the context, the alternative is, let's say with Databricks, you have a standardized table format, Delta tables, but you have a couple of analytic engines when you want to access the data. In Microsoft Synapse, same thing: one table format, but many analytic engines. With Amazon, you have 12 different operational databases, I believe it is, and a few analytic databases. So every time you have a different workload that needs a different database, you're either accessing a silo of data or you're translating between database types. And don't leave out Google. Yeah, Google has done a good job, really, of standardizing on Spanner for distributed transactional data and BigQuery for all the analytics, with Google's ML embedded. Yeah, although that can be an inconvenience, because often you don't want to do the ML in the context of the database. Yeah, right, okay.
But the real secret sauce, which you really have to work hard to pull out when they talk about it here, is that you can come in with your traditional SQL query, or you can come in with a Python DataFrame if you're more comfortable there, and there are as many Python programmers as SQL programmers, or more. You can also come in, and they're teasing this, they haven't announced it yet, but they're talking about it, with a Google-type search query, so natural language, and that would translate into SQL, or it could translate into what's essentially called semantic search, where it understands your meaning, but it's not spitting out SQL. And beyond those different queries, you can also query with graphs. Graphs means, when I want to know about customer 360, don't return a flat table with the customer and all those attributes; tell me how all the data is related, because if I want to train a machine learning model to understand, say, where there might be fraud or what to recommend, I need all that data linked. So that's the magic: you can get at all the different data types from all the different query types, and no one else has that right now. So you've got structured data, you have unstructured data, you have semi-structured data, it's in different formats, you've got transactional data, you have analytic data, you've got graph, you've got vector. You have all these different data types, and Snowflake is like this translation engine that hides all that complexity, so that you can query, and it knows where to pull the data from, and then it sends back a result, and this is where AI potentially comes in, that is logical, sensible, and has a very high probability of accuracy.
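To make the "same question, different query styles" point concrete, here is a minimal sketch using SQLite as a stand-in for the warehouse; the real Snowpark DataFrame API is different, but the idea is the same: a SQL programmer and a Python programmer each ask an identical logical question of one shared table.

```python
import sqlite3

# One shared dataset, standing in for a table in the data platform.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("alice", 120.0), ("bob", 75.0), ("alice", 30.0)])

# Path 1: the traditional SQL query.
sql_total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE customer = 'alice'").fetchone()[0]

# Path 2: the same question expressed programmatically, the way a
# Python programmer would chain filters on a Snowpark-style DataFrame.
rows = conn.execute("SELECT customer, amount FROM orders").fetchall()
py_total = sum(amount for customer, amount in rows if customer == "alice")

assert sql_total == py_total  # both paths agree on one answer
```

In Snowflake's model, both paths (plus natural-language and graph-style queries) compile down to work against the same governed data rather than against separate silos.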
So yes, the AI comes in a couple of ways. One, you can query in natural language, and it figures out where to serve the query from and what type of data. But you could also use, essentially, a Snowpark function to feed in a time series from the database, and it passes it to a machine learning model that's pulling the data out of the database and giving you a forecast. So normally SQL gives you a historical answer: what happened? Now you can feed in data from the database, and it'll tell you what will happen, or what should happen. So yeah, it's the fact that they're integrating all the different query types and managing all the different data types. That's so powerful, because, Dave, as we've talked about before, on Amazon they've built the breakfast-cereal alphabet, all the 200 services, not 26 letters but 200 letters, and Werner stood up at re:Invent, not last year but the year before, and said to the customers, literally, it's your fault, you asked for all this choice, and it doesn't fit together. What these guys have done is figure out how to give you both choice and the convenience of integration. And if I'm a retailer, and we've only got a minute left, a retailer, healthcare organization, life sciences manufacturer, what's in it for me? What's the business impact that Snowflake is enabling with all that they've announced? I mean, you heard Fidelity basically saying, our dream is to have a single place where we can go and get all of our data, and he said, I think, we have 170 databases. So they want to unify that whole picture. We talked about it before: it's the 360-degree view of the customer, it's the single version of the truth, which generally has never existed. And I think the other piece of this is building apps, data apps, on top of Snowflake.
It was interesting, they're kind of talking to two audiences here. One audience is their customers, who are using their data platform, data warehouse, data cloud, whatever you want to call it. And the other audience is application developers, but I would argue that everybody's going to be an application developer. So those retail companies, those manufacturing companies, they can now begin to build their own applications, using large language models, that are relevant to their business and give them competitive advantage. What do you think? Customer experience, yep. They're building LLMs into all parts of the stack. One, let's say, to make it easier to query. Or an LLM could be a personality inside a Streamlit app, so that if you build this lightweight, essentially user-interface app, where you're coding it in Python, you might make it easy for the user of that Streamlit app to ask questions of the app or the data. They're going to use an LLM to make it easy to build a data pipeline. So yes, all up and down the stack. The whole point is, LLMs, despite the fact that they're disruptive in the Clayton Christensen terminology of innovation, are actually a technology that favors incumbents, because it magnifies their advantages. So it doesn't change the business; it sort of accelerates the performance vector that the existing technologies were on. I've kind of had this debate with John Furrier on theCUBE Pod, and I kind of see it the same way. John feels like it's more balanced, that you're going to have incumbent advantages like you did with the internet, and it's going to be disruptive as well. He's probably right in that sense; it's hard to predict where it's going to come from. Well, guys, we have three days, two and a half days, of wall-to-wall coverage on theCUBE. We're going to be unpacking all the announcements, really understanding how they all integrate together, what's in it for customers.
We've got customers coming on, some that we talked to. I know, we didn't even get to the announcements today in this segment, but just real quick: Slootman at a high level mentioned three. Iceberg open tables; second, the native application framework, like the App Store for enterprise data apps; and then Snowpark Container Services. And then there were, I don't know, 25 or 30 underneath that that Christian mentioned. Yep, and we're going to be talking about some of those today, tomorrow, and Thursday as well. So you're going to be joining Dave, myself, and George all week, hearing from amazing guests. Frank Slootman's coming on tomorrow. We've got customers, execs, partners, you name it, and a lot of AI coming up. In the next segment, Dave has a special roundtable with an analyst panel, including Tony Baer, Doug Henschen, and Sanjeev Mohan. You want to stick around and hear what these guys have to say about what Snowflake is announcing and what's in it for their customers and prospects. Stick around; our next segment comes up with Dave in a minute.