Okay, we're now going into the technical deep dive. We're going to geek out here a little bit. Benoit Dageville is here; he's co-founder of Snowflake and president of products. Also joining us is Christian Kleinerman, who's the senior vice president of products. Gentlemen, welcome, good to see you.

Yeah, nice to see you there. Good to see you, Dave. Thanks for having us.

You're very welcome. So Benoit, we've heard a lot this morning about the data cloud, and it's becoming, in my view anyway, the linchpin of your strategy. I'm interested in what technical decisions you made early on that led you to this point and even enabled the data cloud.

Yeah, so I would say that the data cloud was built in three phases, really. The initial phase, as you call it, was really about one region of the data cloud. What was important was to make that region infinitely scalable, and that's our architecture, which we call the multi-cluster shared data architecture, such that you can plug as many workloads into that region as you want, without any limits. The only limit is the underlying cloud provider's resources, and a cloud provider region has effectively no limits. That region architecture, I think, was really the building block of the Snowflake data cloud.

But it didn't stop there. The second aspect was data sharing: how multiple tenants within a region share data with each other, between different customers. That was also enabled by the architecture, because we decoupled compute and storage, so compute clusters can access any storage within a region. That's phase two of the data cloud.

And then phase three, which is critical, is the global expansion: how we built our cloud-agnostic layer so that we could port a Snowflake region to different clouds. Now we are running on top of three cloud providers. We started with AWS in US West, we moved to Azure, and then Google Cloud (GCP). And from that one cloud region, AWS in US West, we created many different regions; we have 22 regions today, all over the world and across the different cloud providers.

What's more important is that these regions are not isolated. Snowflake is one single system for the world, where we created this global data mesh that connects every region, such that not only is the Snowflake system as a whole aware of all these regions, but customers can replicate data across regions and share data across the planet if need be. So this is one single system, really; I call it the World Wide Web of data. That is the vision of the data cloud, and it all started with this building block, which is the cloud region.

Thank you for that, Benoit.
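As a minimal sketch of what that cross-region replication looks like in practice (the account identifiers, credentials, and the sales_db database below are all hypothetical; the SQL is issued through Snowflake's Python connector):

```python
# Sketch: replicating a database from one cloud region to another.
# All account identifiers and database names are hypothetical.
import snowflake.connector

# Connect to the primary account (say, AWS US West).
primary = snowflake.connector.connect(
    account="myorg-aws_us_west",  # hypothetical org-account identifier
    user="admin",
    password="...",
)
primary.cursor().execute(
    # Allow a named account (here, an Azure region) to host a replica.
    "ALTER DATABASE sales_db ENABLE REPLICATION TO ACCOUNTS myorg.azure_east"
)

# Connect to the secondary account and materialize the replica there.
secondary = snowflake.connector.connect(
    account="myorg-azure_east",
    user="admin",
    password="...",
)
cur = secondary.cursor()
cur.execute("CREATE DATABASE sales_db AS REPLICA OF myorg.aws_us_west.sales_db")
cur.execute("ALTER DATABASE sales_db REFRESH")  # pull the latest snapshot
```

The refresh can be run on whatever cadence the business needs; the same mechanism underlies sharing data across clouds and regions.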
Well, Christian, you and I have talked about this. I mean, that notion of stripping away the complexity, that's kind of what the data cloud does. But if you think about data architectures historically, they've really had no domain knowledge. They've been focused on the technology to ingest, analyze, and prepare data and then push it out to the business. And you're really flipping that model, allowing the domain leaders to be first-class citizens, if you will, because they're the ones creating data value, and they're worrying less about infrastructure. But I wonder, do you feel like customers are ready for that change?

And I love the observation, Dave, that so much energy in enterprises and organizations today goes into just dealing with infrastructure, dealing with pipes and plumbing and things like that. Something that was insightful from Benoit and our founders from day one was that this is a managed service. We want our customers to focus on the data, getting the insights and the decisions in time, not managing pipes and plumbing and patches and upgrades. The other interesting reality is that there's this belief that the cloud simplifies all of this and all of a sudden there's no problem. But actually understanding each of the public cloud providers is a large undertaking, right? Each of them has a hundred-plus services shipping upgrades and updates on a constant basis, and that just distracts from the time it takes to go and say: here's my data, here's my data model, here's how I make better decisions. So at the heart of everything we do, we want to abstract the infrastructure and abstract the nuances of each of the cloud providers, and, as you said, have companies focus on the domain expertise and knowledge of their industry.

Are all companies ready for it? I think it's a mixed bag. We talk to customers on a regular basis, every week, every day. Some of them are full on; they've burned the bridges and said, I'm going to the cloud, I'm going to embrace a new model. With some others you can see a complete shock-and-awe expression: what do you mean I don't have all these knobs to tweak and turn? But I think the future is very clear on how we get companies to be more competitive through data.

Well, Benoit, it was interesting that Christian mentioned the managed service. That used to mean a hosting company with guys in lab coats running around a lab, plugging things in. And of course you're looking at this differently, with high degrees of automation. One of those areas is workload management, and I wonder how you think about workload management and how that changes with the data cloud.

Yeah, this is a great question. Workload management used to be a nightmare on traditional systems. It was a nightmare for DBAs, and they had to spend a lot of their time just managing workloads. And why is that? It's because all these workloads run on a single system, a single cluster, and they compete for resources. So managing workloads, I always explain it as playing Tetris, right? You had to know when to run each workload and make sure that two big workloads were not overlapping. Maybe ETL is pushed to a nightly window, which is not efficient, of course, for your ETL, because you have delays because of that. But you have no choice: you have a fixed amount of resources, and you have to get the best out of them. And for sure you don't want your ETL workload to impact your dashboarding workload, or your reports to impact data science. This became a true nightmare, because everyone wants to be data-driven, meaning the entire company wants to run new workloads on this system, and these systems are completely overwhelmed.

So workload management was a nightmare before Snowflake, and Snowflake made it really easy. The reason is that in Snowflake we leverage the cloud to dedicate compute resources to each workload. In Snowflake terminology, that's called a virtual warehouse. Each workload can run in its own virtual warehouse, and each virtual warehouse has its own dedicated compute resources and its own I/O bandwidth, and you can really control how much each workload gets by sizing these warehouses, adjusting the compute resources they can use. When a workload is idle, the compute resources are turned off by Snowflake automatically, and when it starts executing again, Snowflake resumes the warehouse. You can also directly resize a warehouse: it can be done by the system automatically, if the concurrency of the workload increases, or manually by the administrator, adjusting the compute power for each workload.

And the best part of that model is not only that it gives you very fine-grained control over the resources each workload gets, and not only that workloads are not competing with or impacting one another, but that because of that model you can have as many workloads as you want. That's really critical, because as I said, everyone in the organization wants to use data to make decisions, so you have more and more workloads running, and playing this Tetris game would have been impossible in a centralized, single-compute-cluster system.

The flip side, though, is that as an administrator of the system you have to justify that a workload is worth running for your organization. It's so easy; literally in seconds you can stand up a new warehouse and start running your queries on that new compute cluster, and of course you have to justify the cost, because there is a cost, right? Snowflake charges by the second of compute. So is that cost justified? It's so easy now to add new workloads and do new things with Snowflake that you have to look at the cost trade-off, of course, and manage cost.
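A minimal sketch of that per-workload model, with hypothetical warehouse, account, and credential names: each workload gets its own warehouse, sized independently, suspended when idle and resumed on the next query. Since compute is billed by the second, a suspended warehouse accrues no compute cost:

```python
# Sketch: one dedicated virtual warehouse per workload (hypothetical names).
import snowflake.connector

conn = snowflake.connector.connect(account="myorg-myaccount",
                                    user="admin", password="...")
cur = conn.cursor()

# A small warehouse for dashboards and a large one for ETL; they draw on
# separate compute, so neither workload can slow the other down.
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS dashboards_wh
      WAREHOUSE_SIZE = 'XSMALL'
      AUTO_SUSPEND = 60      -- turn compute off after 60s idle
      AUTO_RESUME = TRUE     -- turn it back on at the next query
""")
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS etl_wh
      WAREHOUSE_SIZE = 'LARGE'
      AUTO_SUSPEND = 60
      AUTO_RESUME = TRUE
""")

# Resizing is a one-line, online operation if ETL needs more power.
cur.execute("ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = 'XLARGE'")

# On editions that support multi-cluster warehouses, concurrency spikes
# are handled by letting the warehouse scale out on its own.
cur.execute("""
    ALTER WAREHOUSE dashboards_wh SET
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 4
""")

# Each session picks the warehouse its workload should run on.
cur.execute("USE WAREHOUSE dashboards_wh")
```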
So, Christian, Benoit used the term "nightmare," and I'm thinking about the previous days of workload management. I talk to a lot of customers that are trying to reduce the elapsed time of going from data to insights, and their nightmare is that they've got this complicated data life cycle. I'm wondering how you guys think about that notion of compressing elapsed time to data value, from raw data to insights.

Yeah, so we obsess, we think a lot, about this time to insight: from the moment an event happens to the point where it shows up in a dashboard or a report, or some decision or action happens based on it. There are three parts to how we think about reducing that life cycle.

The first one, which ties to our previous conversation, is about where there is muscle memory around processes or ways of doing things that don't actually make much sense. My favorite example: you ask any organization, do you run pipelines and ingestion and transformation at two and three in the morning? And the answer is, oh yeah, we do that. And if you go in and ask why, the answer is typically, well, that's when the resources were available, back to Benoit's Tetris. That's when it was possible. But then you ask, would you really want to run it at two and three in the morning if you could do it sooner, or do it more in real time with when the event happened? So the first part, back to removing the constraints of the infrastructure, is: how about running transformations and data ingestion when the business needs it most, at the lowest time to insight, the lowest latency, not when the technology lets you? That's the easy one out of the door.

The second one is, instead of just optimizing a process end to end, where can you remove steps of the process? This is where all of our data sharing and the Snowflake Data Marketplace come into place. Say you need to ingest data from a SaaS application vendor, or maybe from a commercial data provider. Imagine the dream: you wouldn't have to run constant iterations and FTPs and crack open CSV files and things like that. What if it's always available in your environment, always up to date? That, in our mind, is a lot more revolutionary: not "let's take a process of ingesting and copying data and optimize it," but how about not copying it in the first place? So that's vector number two.

And then vector number three is what we do day in and day out to make sure our platform delivers the best performance: make it faster. The combination of those three things has led many of our customers, and you'll see it through many of the customer testimonials today, to get insights and decisions and actions way faster: in part by removing steps, in part by doing away with old habits, and in part because we deliver exceptional performance.
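A minimal sketch of that zero-copy path, assuming a hypothetical provider database weather_db and hypothetical account names: the provider publishes a share, and the consumer queries the live tables in place, with no pipelines, FTP jobs, or CSV files:

```python
# Sketch: share data instead of copying it (hypothetical names throughout).
import snowflake.connector

# On the data provider's account: create a share and grant access to it.
provider = snowflake.connector.connect(account="provider-acct",
                                       user="admin", password="...")
pcur = provider.cursor()
pcur.execute("CREATE SHARE weather_share")
pcur.execute("GRANT USAGE ON DATABASE weather_db TO SHARE weather_share")
pcur.execute("GRANT USAGE ON SCHEMA weather_db.public TO SHARE weather_share")
pcur.execute("GRANT SELECT ON TABLE weather_db.public.daily TO SHARE weather_share")
pcur.execute("ALTER SHARE weather_share ADD ACCOUNTS = consumer_acct")

# On the consumer's account: mount the share as a database and query it.
# No data is copied; queries always see the provider's live data.
consumer = snowflake.connector.connect(account="consumer-acct",
                                       user="analyst", password="...")
ccur = consumer.cursor()
ccur.execute("CREATE DATABASE weather FROM SHARE provider_acct.weather_share")
ccur.execute("SELECT * FROM weather.public.daily LIMIT 10")
```

The ingestion step is simply gone: the consumer's "copy" is a live view of the provider's storage, which is what decoupled compute and storage makes possible.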
Thank you, Christian. Now Benoit, as you know, we're big proponents of this idea of domain-driven design in data architecture: for example, customers building entire applications, what I call data products or data services, on their data platform. I wonder if you could talk about the types of applications and services that you're seeing built on top of Snowflake.

Yeah, and I have to say that this is a critical aspect of Snowflake: to create this platform and really help applications be built on top of it, because the more applications we have, the better the platform will be. It's like the analogy with your iPhone: if your iPhone had no applications, it would be useless, an empty platform. So we really encourage applications to be built on top of Snowflake, and from day one, actually, many of our customers have been building applications on Snowflake. We estimate that about 30% are already running applications on top of our platform. The reason, of course, is that it's so easy to get compute resources, and there is no limit in scale, availability, or durability. All these characteristics are critical for an application, and we have delivered them from day one.

Now we have increased the scope of the platform by adding Java computation and Snowpark, which was announced today; that is also an enabler. In terms of the types of applications, it's really all over the map, and what I like, actually, is to be surprised: I don't know what will be built on top of Snowflake or how it will be delivered. But with data sharing we are also opening the door to a new type of application, delivered via the marketplace, where one can get an application directly inside the platform; the platform distributes the application. Today there was a presentation in the keynote about quantifying, a machine-learning capability provided to any user of Snowflake as an application: machine learning to find and apply models on your data and enrich your data. So data enrichment, I think, will be a huge aspect of Snowflake, and data enrichment with machine learning will be a big use case for these applications. Also, getting data inside the platform: a lot of applications will help you do that. So machine learning, data engineering, enrichment: all of these are applications that run on the platform.
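As one hedged illustration of the kind of in-platform computation Benoit describes, here is the general shape of an inline Java UDF; the function name, its logic, and the customers table are all hypothetical, not anything announced in the keynote:

```python
# Sketch: a Java function running inside Snowflake (hypothetical names/logic).
import snowflake.connector

conn = snowflake.connector.connect(account="myorg-myaccount",
                                    user="dev", password="...")
cur = conn.cursor()

# Define a scalar function whose body is Java, executed by Snowflake's
# engine next to the data rather than in an external service.
cur.execute("""
    CREATE OR REPLACE FUNCTION clean_name(s STRING)
      RETURNS STRING
      LANGUAGE JAVA
      HANDLER = 'Cleaner.clean'
      AS $$
        class Cleaner {
          public static String clean(String s) {
            return s == null ? null : s.trim().toLowerCase();
          }
        }
      $$
""")

# Use it like any built-in function; enrichment happens where the data lives.
cur.execute("SELECT clean_name(customer_name) FROM customers LIMIT 10")
```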
Great. Hey, we've just got a minute or so left, and earlier today we ran a video where we saw that you guys announced the startup competition, which is awesome. Benoit, you're a judge in this competition. What can you tell us about it?

Yeah, first, I have to say that for me, we are still a startup; I haven't yet realized that we are not a startup anymore. I feel strongly about helping new startups, and that's very important for Snowflake. We were a startup yesterday, and we want to help new startups; that's the idea of this program. The other aspect of the program is to help startups build on top of Snowflake and enrich this rich ecosystem that Snowflake, or the data cloud, our data cloud, is, and to help boost that excitement for the platform. So in the end, it's a win-win: it's a win for new startups, and it's a win, of course, for us, because it will make the platform even better.

Yeah, and startups are where innovation happens. So registration is open; I've heard several startups have signed up. You can go to snowflake.com slash startup challenge and learn more. It's an exciting program and initiative, so thank you for doing that on behalf of the startups out there, and thanks, Benoit and Christian. I really appreciate you guys coming on. Great conversation.

Thanks a lot, Dave. Thank you, Dave.

You're welcome. And when we talk to go-to-market pros, they always tell us that one of their key tenets is to stay close to the customer. Well, we want to find out how data helps us do that, and our next segment brings in two chief revenue officers to give us their perspective on how data is helping their customers transform their businesses digitally. Let's watch.