From theCUBE Studios in Palo Alto and Boston, bringing you data-driven insights from theCUBE and ETR. This is Breaking Analysis with Dave Vellante. In order to support the vision of a sixth data platform, that is, a capability which allows a globally consistent, real-time, intelligent digital representation of a business, we believe the industry has to rethink this idea of a single system of truth. Specifically, we envision a new data platform that marries the best of relational and non-relational capabilities and breaks the multi-decade trade-offs between data consistency, availability, and global scale. Further, we see the emergence of a modular data platform that automates decision-making by combining historical analytic systems with transactions to enable AI to take action. Hello and welcome to this week's theCUBE Research Insights powered by ETR. In this Breaking Analysis, George Gilbert and I welcome two innovators: Eric Berg, who is the CEO of Fauna, and Soma Somasegar, who's a managing director at Madrona Venture Group. Gents, welcome, thanks very much for spending some time with us today. Glad to be here, and thank you for having us. Yep, thank you, great to be here too. All right, before we get into it, we want to set the context with a little bit of spending data from ETR. This is spending intentions data from the January survey of 1,766 IT decision makers. What we're doing here is showing some of the top operational data platforms. The vertical axis represents net score, which is a measure of spending momentum on a specific platform. The horizontal axis shows the presence of that platform within those 1,700-plus accounts. You can see the table insert on the lower right; it shows the data for each company and informs the placement of the dots. That red dotted line at 40%: anything over that indicates a highly elevated spending velocity level. Now, we've stretched the ETR taxonomy a bit here.
So let me explain. The categories used in the survey don't break out operational from analytic databases. We know that pure plays like Mongo represent operational data stores, but the asterisk indicates that several firms like Microsoft, AWS, IBM, et cetera, comprise multiple database types. That said, the following key points are things we want to note. First of all, Microsoft is ubiquitous, as they always are in these surveys; they have a huge N in the data set. Within the portfolio, we've highlighted Cosmos DB, which is Microsoft's globally distributed NoSQL and relational database. PostgreSQL is an open source database and an alternative to Oracle. While it's not in the main spending survey, we have data in other surveys that suggest it's very prominent, so we've estimated its position on this chart in context. AWS, as you know, has dozens of databases. We've highlighted DynamoDB and Aurora, two of its most popular operational databases. AWS, as you can see, has the highest net score or spending momentum in this data, at a 51% net score. You see Google just at that 40% mark, with not as much market share, and we've highlighted Spanner, which is its globally distributed, strongly consistent database and part of the theme of this research today. Then there's Mongo, a pure play operational data platform. You also see SAP HANA, which has strong spending momentum, and then several other players on the graphic, including MariaDB, Cockroach Labs, and of course IBM and Oracle, the latter with Oracle Database and the more novel MySQL HeatWave, which combines transactions and analytics in an in-memory architecture. Okay, so this lays out a picture of operational data stores. Now, to get to a new vision of the single source of truth, we need these operational data stores to be married with historic analytic data.
Now, with AI, we bring intelligence to begin to enable automation, systems of agency, and we can begin to build what we call intelligent data apps. So Soma, Madrona each year has its IA Summit, intelligent apps. Explain your point of view on what intelligent data apps are, please. Great, thanks Dave. So this will be, actually, if I remember right, this will be the fourth year that we'll be doing our Intelligent Applications Summit this fall, Dave. And the reason we started doing this a few years ago is because we saw that the world is going through a transformation. If you look back, say 10, 15 years ago, the world was moving from what I call on-premises applications to SaaS, software-as-a-service, applications. And we felt a similar transformation was likely to happen, and we started talking about it even back in 2015: the notion that every application in the world is going to become an intelligent application. And by definition, what that really means is that if you're not an intelligent application, your shelf life is, what should I say, finite. And what do I mean when I say intelligent applications? As we all know, there has been an explosion of data. Applications have access to more data. And these applications today are being built with a continuous learning system in place, so that they look at the data that they have, they train whatever models they need to train, they learn, and then they deliver a service; they get more data and then they learn more. It's a continuous learning system that the application has, and that's what makes the application or service that much more valuable to people, right? And we thought that every application is going to be an intelligent application.
We thought it would be a good thing for us to do, working with the rest of the industry, to identify the top 40 intelligent application companies in the world, at least in the private company space. We thought, hey, that would be a good thing: work with the industry, identify them, and then be able to publish that list, and then bring those founders and CEOs and investors together for a one-day conference on what is happening as far as intelligent applications and AI go. And that's been the context behind why we did that. Great, thank you. Soma, let me jump in for a second and ask. Part of the focus of our narrative has been on how the data platform is evolving to support these types of applications. And sometimes you can find a lighthouse example, not just of the applications, but an attempt at building the platform to support those applications. One example we use for the application side is Uber, where you have riders and drivers and fares. But another platform company that has attempted to enable these digital representations of things that are driven by data is Palantir. But there's something missing in that platform, which we think is transactions, where you need to perform operations that require shared visibility. Help walk us through how the platform needs to expand to deal with that. Yeah, absolutely. Before I answer that directly, George, let me quote a statement that Microsoft made in one of its recent earnings calls, actually the recent earnings call that happened earlier this week. As Dave was mentioning earlier, Microsoft has its own Cosmos DB, which is a NoSQL document relational database. In that context, Microsoft was saying, hey, if you really want a go-to database to build AI-powered applications at any scale, you need a document relational database.
That sort of validates why, in some sense, I would say Fauna exists, what we've been focused on building over the last many years, and why we are super excited about how a data system or a data platform like the one Fauna has may be helpful. Having said that, let me take a step back and talk about some of the attributes that I think these data systems need to evolve to have to meet these demands, okay? First and foremost, gone are the days when everything is going to be just structured data. There is a growing need to handle more than structured data, that is, semi-structured and unstructured data as well, because for the kinds of decisions that you want to make, for the kinds of experiences that you want to enable, that are AI-driven and AI-powered, you really need to be able to handle at least semi-structured data if not more, okay? So that's one important thing. The second thing is your customer base is really a worldwide customer base. They can be from any part of the world, and you ought to have a data system that is able to provide super-fast, very responsive, low-latency experiences no matter where users are coming from. It doesn't matter whether they are in the same zone as I'm in or in a completely opposite part of the world, right? So having a globally distributed data system with low latency that is super fast in terms of responsiveness is absolutely essential. The third thing is things are changing fast, okay? What is hot today may not be hot six months from now, which means all these intelligent applications need to be moving at lightning speed in terms of new capabilities, in terms of new things that they're doing.
And so you want to have a data system that provides a seamless developer experience that enables you to build those capabilities in a very agile way, okay? Finally, you also need to make sure that the data that stays in your data platform or data system is highly, highly secure, and that you are willing and able to comply with the ever-changing privacy requirements and other data security requirements from different parts of the world. One part of the world may say something is important, another part may say something else is more important, and you need a system that can look at all of these holistically and be able to comply, manage through, and navigate those things in almost a real-time world, right? So these are some of the things that I think of as core attributes a data system needs to have in place today. Okay, great, thank you. And so, Eric, we're going to bring you into the conversation, but let me set it up, if I can, with this next graphic, which highlights some of the trade-offs that we've had to make over the years. George and I were talking last night, and we use this metaphor, George, thank you for coming up with it, of the Gordian Knot to describe some of the challenges that we face. The legend of the Gordian Knot says that whoever can untie the seemingly intractable knot is going to rule all of Asia. We're going to come back to that. So today we're exploring how to rethink the single system of truth and untangle that knot with unconventional methods.
So the idea is combining the best of SQL, its ability to join data, with the schema flexibility of NoSQL, and then solving for globally distributed consistency without the complexity of things like hardware-based atomic clocks. Continuing on that thinking and breaking the trade-offs, we've been talking earlier about two-phase commit, having to choose between waiting for synchronization and scaling out nodes globally, and addressing Brewer's theorem, otherwise known as the CAP theorem, which says it's impossible for a distributed data store to accommodate more than two of three key attributes: consistency, availability, and partition tolerance, which is a fancy way of saying that when things go bad between nodes, the system can recover without losing data. So imagine a world where you didn't have to make these trade-offs. How would that change the way applications are written and the value proposition to customers? Come back to that idea of the Gordian Knot: Alexander the Great slashed the knot with his sword. In essence, he solved this impossible problem with an unconventional approach that broke the accepted rules and removed the constraints. Eric, that brings me to Fauna. What has Fauna done to rethink these trade-offs and essentially slash open that Gordian Knot? Yeah, great question. And I'll start that answer by orienting around the customer, in our case, developer and engineering team challenges. And I think Soma mentioned a lot of them. So start with the data itself. As Soma mentioned, today's data in these internet-facing, cloud-native applications is certainly semi-structured, and so they need a database, an operational database, that can accommodate that. And as you mentioned in your prelude, sometimes it's great to be able to start with that unstructured data, but as applications get more mature, as teams working on those applications get larger, you want to be able to add that structure over time.
And so in Fauna, in that theme of the best of SQL and NoSQL, we've introduced that underlying document model, but with the ability to apply schema and enforce that schema over time as your application grows. You mentioned, and Soma mentioned, that the audiences people are building these intelligent applications for are national at a minimum and usually global. And so they want to provide fast, interactive, responsive experiences to those customers. That's where our underlying distributed transaction engine comes in, which allows Fauna to run in a multi-region configuration. That can be across multiple regions in the EU or in the US, or even globally, based on your configuration. So it allows customers to build these intelligent applications so that they're responsive for users across the globe. And then I think a really important point, and probably a weakness of the database traditionally, is the point Soma brought up about agility. Historically, the database has always been the part of the application stack that no one wanted to touch because it was very fragile. We've done a lot to ensure that Fauna can really be a first-class citizen in the kind of agile software development lifecycle that's needed. So it starts with that data model that I talked about: when you start out on an application, you're not sure what the requirements are and how you're going to have to add features and capabilities to compete and respond to your customers. You need the flexibility that you don't get out of a SQL database, and again, with the ability to add that structure over time. The other thing we've done that's pretty innovative: there's been a lot of work in the industry more broadly to automate processes across the software development lifecycle, with infrastructure-as-code platforms like Terraform and Pulumi, or automated processes from things like GitHub.
And so we've actually taken that concept and internalized it within Fauna. We have a schema language in which you can fully define and drive all of the configuration for your database. What that means is that as you run a software development lifecycle, and you store your code in GitHub, for example, and you're iterating and deploying it, you can store your schema language for Fauna right along with that. The database becomes every bit as flexible and agile as changes to your code. And then the final piece of that agility, which is super important: databases used to run on-premises, and a lot of those you showed on your slide up front are database-as-a-service offerings, but even in that model you still need to know about the physicality of the database. You pick a machine size, memory, et cetera. We've taken that abstraction up another level with Fauna, so it is served up as an API. None of our customers have to worry about sharding and replication and those kinds of capabilities, and that really makes it agile. A couple of other things I'll just hit on quickly. Soma mentioned these interactive, collaborative experiences. A lot of these intelligent applications are being reimagined as native cloud services, and so collaboration and an interactive, Google Docs-like capability in the UI is important. Fauna natively has a real-time event streaming capability that enables that in those UIs. And then the last thing I'll hit on, and I think Soma mentioned this, is security and data residency. We have a really interesting capability within Fauna where you can spin up multiple databases; we have multi-tenancy inherent in it.
A lot of our customers, whether they're B2B SaaS customers or B2C, use that as a secure container to capture the data for their customers, which allows them to meet their security requirements much more easily. And then globally, we have Fauna deployed in these multi-tenant region groups, but we provide a very seamless way for UIs and applications to route all your requests to the right database in the right region, which is a great way to deal with data residency requirements. So there are lots of things you can do when you build a database from scratch. As Soma mentioned, it takes a while to do this, and we've been working on it for quite some time, but you can really achieve a lot for customers by doing so. Great, thank you, Eric, appreciate that. So on this program, George and I often talk about working with strings, in other words, stuff that databases understand, like rows and columns, versus things, objects that represent people, places, and things. And so George, maybe you could set this up and explain the core issue here, where we want to preserve the simplicity of working with objects, but at the same time we want the ability to take multiple views of the data, what we refer to as strings. So, yeah, Eric, just to drill down on this, in the theme of marrying the best of Mongo and the best of Spanner, help explain why you can work with what starts out as a schema-less, or what seems like a schema-less, database, but then, as you said, as the application evolves and you might have different views of that data, you can still get the capability that SQL would give you, essentially the ability to join, while the developer still gets to work with objects, with things, and the database has the flexibility of working with strings that it can join. Can you elaborate on that?
Yeah, I think you're getting to the core data model innovation that we refer to as document relational. For database historians, I think we have to go all the way back to Codd's original paper on the relational database. At that time, he tightly coupled the relational model, the relational capability, to the tabular rows, columns, and tables. What we've done is effectively apply the modern data requirements, and developer requirements, frankly, that we think are more aptly suited to JSON and document form, but bring in the power of that relational model. There's nothing fundamental that prohibits the relational model from being applied to a different underlying data model. It just so happens that that's traditionally how it's been defined, that those two things have been tightly coupled. And then you brought up something else, which is very important: we found that you can't just iterate on the underlying data model. As you mentioned, it has to come through also in how the developers interface with that. And you brought up a really good point, and one that we saw as we talked to customers about their constraints with SQL. SQL was great when you needed to return a result in rows and tables for a reporting situation, which is kind of what it was developed for initially. But when you think about how data usually needs to be returned from an application development standpoint, to populate a user interface, for example, you can't really achieve that with SQL today. And so with our query language, if developers are familiar with TypeScript or JavaScript or Python or Go or a modern programming language, they'll feel right at home in FQL.
And it is set up so that those object-oriented programmers can interact with their data in a very natural way. And as you said, almost more importantly, they can return that data to the application in the way that's required. And you want to do that in one shot if you can, right? And in a consistent way, which is where our underlying strong consistency model comes into place. Everything with Fauna is a transaction, so you can submit that, get it back, and then return it to the UI in a natural way. Just to be clear... Oh, Soma, go ahead. Sorry, no, no. One thing that warmed my heart when Eric was talking was when he mentioned TypeScript. And the reason I mention that is because I was at Microsoft running the developer division when we were working on a new language on top of JavaScript called TypeScript, and we put the programming language out to the rest of the world when I was there. It is fantastic to see how far the language has come along and how much Fauna has been able to use it to really provide what I call a very simple developer experience. And so let me just elaborate on that, Dave, before we move on, because this I think is really important. A relational database, say Oracle, could support JSON objects on top of an underlying relational model. But if I understand, it's still schema on write, so the developer still has to declare the data model of the database before they get started. And if I understand what you're saying, yours can evolve as the application requirements evolve, until the developer or the data architect feels like they want to nail it down. Is that... Yeah, that's absolutely right. I think there are two key differences. The fundamental question is, well, can I just store JSON documents in a tabular database and be done with it?
I mean, I think there are two constraints. There's one absolutely that you mentioned, which is that ability to start with a flexible data model and then add the schema over time as you hone in on the structure. And as I mentioned, we support schema enforcement as well as you would in a relational database. The other piece is, you know, you brought up earlier as well, which is you're still then constrained with the query language that you're working with there, you know, SQL and that tabular data model, not being able to bring back data in the structure that's required and needed by your application. And so when you have a more expressive query language that again, it's more similar to TypeScript or JavaScript or Python, it allows the application developer to do that. So, I mean, you can talk to a lot of application developers and they want that kind of power in the language that they use with data just as they do with their application code. And just again, to nail this down and be clear, you're trading the declarative query of SQL. In other words, the ability to just say what you want and have the database figure out how to get it because that's really useful for complicated queries in analytic databases. And here, you don't mind expressing how to get the data in a TypeScript-like language because then you get to deal with the objects or things that your application type system cares about. Correct. And then the other thing I would add to that and you know, that our co-founders experienced a lot as they were sort of scaling the different data infrastructures in their career is that you also get, when you have that query optimization layer, you get very inconsistent performance and performance that you can't really predict. And so, as a developer who's starting to build a system that really scales, there's a real trade-off that's valuable to be able to sort of dictate that if you will, right? Okay, right. Okay, great, let's move on. 
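To make the point about result shape concrete, here is an illustrative TypeScript sketch. This is not actual FQL, and the data is invented for illustration; it simply shows the regrouping step an application typically has to write on top of a flat SQL join result in order to get the nested object a UI needs, which is the work an expressive, object-oriented query language can return in one shot.

```typescript
// Flat rows, as a SQL join over orders and line items would return them.
type Row = { orderId: number; customer: string; item: string; qty: number };

const rows: Row[] = [
  { orderId: 1, customer: "Ada", item: "keyboard", qty: 1 },
  { orderId: 1, customer: "Ada", item: "mouse", qty: 2 },
  { orderId: 2, customer: "Grace", item: "monitor", qty: 1 },
];

// The nested shape the application and UI actually want.
type Order = {
  orderId: number;
  customer: string;
  items: { item: string; qty: number }[];
};

// The regrouping step the app layer ends up writing on top of tabular SQL:
// collapse repeated order columns and gather line items under each order.
function rowsToOrders(flat: Row[]): Order[] {
  const byId = new Map<number, Order>();
  for (const r of flat) {
    let order = byId.get(r.orderId);
    if (!order) {
      order = { orderId: r.orderId, customer: r.customer, items: [] };
      byId.set(r.orderId, order);
    }
    order.items.push({ item: r.item, qty: r.qty });
  }
  return [...byId.values()];
}

const orders = rowsToOrders(rows);
console.log(orders.length); // 2: the three flat rows collapse into two orders
```

With a document model, the database can hand back that nested `Order` shape directly, which is the "one shot" Eric describes; with SQL, this reshaping lives in application code on every read path.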
So one of the things, when we talk to our community and the IT decision makers in our audience, they want to know the gotchas. So let's double-click a little bit. In particular, when we looked at Fauna, we were like, okay, where are the bottlenecks? And I imagine you get this question a lot: for example, if your transaction log is in a single availability zone, what happens if you lose it? And if it's not in a single availability zone, how do you ensure that you can maintain performance? Maybe you could address that, and maybe put us on the spectrum: you're strongly consistent, so where do you fit from a performance standpoint relative to eventual consistency? Help us understand that, if you would, please. Yeah, and the first thing, you mentioned the CAP theorem: there is no magic. Formally, we are a CP system, right? And the way we deal with availability is through replication of nodes and our multi-region capability. So what are the trade-offs, if you will, of this system? I think it gets back to one of my earlier answers, which is that fundamentally one thing Fauna cannot escape is the speed of light and the physics and the delays of that. So, for example, a write to Fauna, which is going to be consistent across regions, will take on the order of double-digit milliseconds, whereas with an eventually consistent system that doesn't have to worry about that consistency, it might be single digits. Now, on the flip side, because we are distributed geographically, we can serve up simple reads in single-digit milliseconds, in instances very close to the user. And that's also very different from another thing that you mentioned on your knot slide. Traditionally, these systems have had two-phase commit as their consensus algorithm, and that can bring up a lot of contention on particular data and a lot of variability in how fast they can serve those reads.
But again, based on the design of our distributed transaction engine and our consistent global log, we're able to get around that and have more consistent, predictable performance. So the big trade-off really is the speed of light and the slight delay in writes for strong consistency. And then, again, we take care of the replication on the back end of the service to make up for potential availability issues. And to be really clear about your specific question, our log does not run in a single availability zone. At a minimum, each one of what we call our public region groups, where we run multi-tenant instances of Fauna for our customers, by definition spans at least three different geographical areas. So if we're in the US, we'll have presence on the East Coast, Central, and West Coast; in the EU, we'll have about three different countries; and then we have global region groups that span geographic boundaries as well. Got it, okay, so if there's a problem, there's a very, very, very high probability you'll be able to recover. I always say there's no such thing as zero data loss, despite the products that we've seen out there, but you've architected for that. And again, as you say, the trade-off is the speed of light, and I'm sure the best minds in our industry are working on that. So on Fauna's web page, there's a section that really caught our attention, Why Fauna, where you discuss Fauna relative to a number of other platforms. It was actually really informative, and I thought quite fair, although we'll hear from your competitors. There was Fauna versus DynamoDB, positioning Fauna as an alternative, where you describe the challenges of cost and consistency that Dynamo has at scale. You had Fauna versus Mongo, where you talked about some of the scale and consistency constraints of the latter. Fauna versus Postgres, where you discuss the challenges of working with schema on write and, as George just talked about, the lack of strong consistency when it's geo-distributed.
You had Aurora Serverless as another one, V2, which addresses some of the limitations of Postgres. Fauna versus Spanner, which was not on your website, I don't think, this is just our thinking, which addresses consistency in a geo-distributed situation but still requires that two-phase commit trade-off you just talked about, and schema on write. So Eric, your competitors are really entrenched. They're well financed. They've got large customer bases, as we showed in that spending data. Many have momentum. So do your best to summarize your point of view on the limitations of today's popular operational systems and share how you stack up. Yeah, and as you mentioned, we all know that the operational database market is, if not the largest, one of the largest markets in terms of IT spend, and so as a result there are a lot of different players. So the way I like to attack this is how we see our customers making decisions, right? Fundamentally, there's a branch at the top of the tree, right? If people have existing applications, for example, that are wedded to SQL and they're looking for a way to move them to the cloud, and SQL is their query language of choice, that brings in Postgres and a lot of the RDS systems and Cockroach and others that you mentioned. Clearly, that's not something that Fauna is focused on. Independent of whatever the architectural differences are, that query language choice makes a big difference. That said, today we do still see people who will go from, say, Postgres to Fauna, and it's typically because they hit that multi-region expansion problem and they're really looking for a way to not have to take on all of those cross-data-center challenges themselves. In that case, they're actually willing to migrate from something like Postgres or SQL to Fauna.
Probably easier today is if people have made that choice up front and said, hey, the flexibility, the scalability, the performance of NoSQL is more important to me. For those customers, we are really bringing them all of the power of a relational database that they had to give up as they moved to NoSQL. So I can walk through that for each one of those really quickly. I mean, DynamoDB is a great one. I think they were one of the earliest, largest, and most popular serverless databases, so I think they did a great job on that front. But if you talk to customers and you get into Dynamo, it goes back to what Dynamo was designed for initially, which was really to offload a hotspot on a relational database and horizontally scale read requests very quickly. For those kinds of use cases it's extremely useful. But Dynamo has a lot of trade-offs. It has a very rigid data model, called single-table design. So, like what we were talking about earlier, if you know exactly what your application is going to evolve into and exactly what the read-write patterns are going to be, then it's a great solution. But that doesn't apply to most people who are building and evolving and changing their applications. Dynamo by default is single-region; it takes a lot of work to try to make it work across regions. And then also, by default, it's eventually consistent. You can pay a lot more and, again, try to configure it to approximate strong consistency, but you absolutely don't get that out of the box. So there are a lot of differences there when it comes to Fauna. Mongo is a great example, and one that we look at in terms of answering your question of how you grow a new franchise in this market. One of the things that I think Mongo did very well is they intercepted a lot of net new development and then grew with their customers.
And that's a big part of how Fauna started out as well: attracting these modern developers who are building new applications, sometimes in very large enterprises, sometimes in smaller customers. And now, increasingly, we're also starting to see people migrate off those existing systems. So that's part of the answer to your question of how you build a new operational database franchise: you really have to attract this next generation, all these intelligent applications we're talking about, and get in at the ground floor as people are reimagining those services. So someone like Mongo, they have the document piece down, which I think is great; that's fundamental and core to us too. But we've brought all that relational capability that you don't get. And we've abstracted the operational level up so that you have a pure API. With their popular Atlas service, for example, you still have to know about the hardware underlying the service. With Fauna, that's completely taken care of for you, and it's available globally. The last thing I'll mention is that while they have a query language that is not SQL, it's relatively limited in terms of what you can do as a developer. Back to our earlier conversation, FQL is a very expressive language, similar to TypeScript and Python, and it gives developers a lot more power, with the relations, the joins, et cetera, but also in terms of how they structure their data for their application. So it's much more developer-friendly. Great, thank you. Let me ask, go ahead. Dave, just a follow-up, popping back up a level. We have these future intelligent data applications. Using Uber as an example, things in the real world change: a rider requests a driver, the system has to match the rider with the driver, and it has to calculate a fare, a route, and an ETA.
What we're trying to understand is how much of that app lives in this real-time system of truth, which then orchestrates all the other transactions, to the extent there are transactions, and how much lives in the historical system of truth, and who orchestrates that whole flow, where that happens. Can you elaborate and enlighten us? Yeah, like any application decision, there are trade-offs in how you do that, and for any application there are probably different ways to solve it. We're fans of best-of-breed and having strong integration with systems that do things uniquely well. For example, we're not focused on historical reporting, the traditional OLAP workloads. Even for something that's popular these days, we haven't really pursued vector capability; we've decided to partner with Pinecone and others who are best of breed in that area. I think that also speaks a little bit to our architecture and the kinds of applications we see being built. We are an API, we're consumed as an API, so it's very easy to integrate Fauna into a more event-driven, loosely coupled architecture like that. So our answer, in terms of what we see, is that we handle the transactionality, determining where the issues are and resolving them. We'll be fed from results that might be computed offline in batch mode, which then update information in Fauna that participates in that query, to actually resolve that issue of, okay, new driver, new location, et cetera. So we'll be fed from those systems. And vice versa, we'll get information out of Fauna into those warehouses, whether that's Snowflake or Databricks or wherever that might reside. So we believe in a best-of-breed, integrated architecture in that world.
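The integration pattern Eric describes, batch results flowing into the operational store so that real-time queries see fresh state, can be sketched as a simple event applier. Everything here (the event shape, the field names, the last-writer-wins merge) is invented for illustration and is not Fauna's actual API.

```python
# Sketch of an event-driven feed into an operational store (shapes invented).
# Offline or batch systems emit updates; the operational store applies them
# so that real-time queries see fresh, merged state.
store = {}  # stands in for the operational database

def apply_event(event):
    # Upsert keyed records; last-writer-wins per field for this toy example.
    key = (event["entity"], event["id"])
    store[key] = {**store.get(key, {}), **event["fields"]}

# One update computed offline in batch, one arriving from a live feed.
events = [
    {"entity": "driver", "id": "d1", "fields": {"eta_model_score": 0.92}},
    {"entity": "driver", "id": "d1", "fields": {"location": "SoMa"}},
]
for e in events:
    apply_event(e)

print(store[("driver", "d1")])  # merged view: batch-computed score + live location
```

The transactional system resolves queries against this merged state, while the full event history flows onward to the warehouse for the historical system of truth.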
So just to be clear, application objects like the rider and the driver might be Fauna objects, and if they need to be informed by historical data, they would call on the historical system of truth, but the application objects that need real-time state would live in Fauna, correct? Okay. I wanted to bring Soma back into the conversation. Soma, Eric was talking earlier about how the majority of spend is on these types of operational systems. I go back to the '90s when I was at IDC and we used to count this stuff; the vast majority of the spend was certainly on these transaction and operational systems. We could see the rise, the ascendancy of unstructured data. It was the decade before Hadoop, but we saw the need for something like that, and we all know the good, the bad, and the ugly there. But Soma, how do you think about Fauna's TAM, its market opportunity, and how it's going to tap that opportunity with its routes to market? What was the investment thesis for you? Let me just add to that, Soma, to elaborate: the market seems to have shifted from emphasizing operational databases to analytic databases over the last X number of years. But when we have intelligent apps that are driven by data and need shared visibility on transactions, how does that dynamic change again? Absolutely, absolutely. That's a great question, Dave. I think the simple answer to how we think about TAM is: it's huge, it's humongous. As Eric mentioned, operational databases are probably one of the largest spend items in IT budgets, and that's been the case for many, many years. And it's only growing as the world uses more data and as new kinds of applications come along with an enormous appetite for that data.
Whether it's $200 billion or $400 billion or $600 billion, different people can come up with different charts and numbers, but they're all large enough that we think TAM is not the issue here. The reason we got excited about Fauna is, I would say, twofold. One, if you look at the co-founders and the founding team of Fauna, the pedigree, the background, and the experience they've had building at-scale data systems in other enterprises, seeing what works and what doesn't, and having a pulse for what modern developers need in a database, we felt that pedigree was fantastic. The second thing is, as we know, a data explosion is happening, and there's a new class of applications, whether it's intelligent applications, applications on the edge, or IoT-driven applications, that need access to data in a globally available way, with low latency and high performance. And we felt that if somebody could pull together a system that meets all of these attributes in a serverless form, then that's a database that can have legs and a meaningful play in the ecosystem. As you guys talked about with the competitive landscape, there are a lot of database systems. So you ought to be thoughtful: on the one hand there's a lot of demand, but there's also a lot of supply.
But when you see inflection points, whether it's a platform inflection or an inflection in the kinds of applications being developed, you want to think about which database system provides the best use case and the best value for people who are building data-driven applications or AI applications. And that's the reason we got excited about Fauna. We think a system like Fauna that's truly global, truly available, and has the consistency people need in this day and age, in terms of being agile and able to move things fast and around the world, has a huge, huge opportunity. So let me follow up on that, because I think the answer that it's huge is perfectly reasonable, but it's also nuanced, and I'll tell you what I mean by that. When I think of the early days of Snowflake, it was very easy to understand: okay, Snowflake is going to replace Teradata installations because it's simpler. Or take Pure Storage. What was Pure Storage's TAM? It was EMC. So it was very disruptive, a market-share steal. Now when you talk to Snowflake, the TAM is data, which, as you say, Mike Scarpelli will tell you is huge, it's unlimited. So you really are in that latter category of giants. So my question is, since it sounds like you're not just saying, okay, we're going to go steal from Oracle or steal from Teradata (Teradata is a bad example in your case) or steal from Mongo, and maybe there's some bleeding there, but specifically: what other conditions have to occur for you to tap that TAM? Are there other dependencies you're looking at, where you say these pieces have to come together? Or is it more, hey, there's a very clear vector of new innovation and value creation that we're tapping? Does that question make sense to you?
Yeah, absolutely. Just to go back to your point, Dave: because I was also an investor in Snowflake, I have some knowledge of this. I can tell you that when Snowflake started, it was really, hey, how do I build the best cloud-based data warehousing solution? Then over a period of time they said, hey, data warehousing is great, how do I get into data sharing? And then they started thinking about, how do I build a data cloud platform? And like you said, Dave, now they talk about data all the way. So their TAM has been increasing as their footprint and their aspiration have been increasing. Now, if you come to Fauna, I think there are two interesting use cases to think about. One, like Eric mentioned: when somebody is using an existing database solution and they've reached a level of scale or geo-expansion where they feel the current system is not meeting their needs, they're looking for a new solution. How do we tap into that and get them excited about Fauna? It's one of those cases where, as hard as migrating a database is, if people want their application to keep scaling and being successful, at times they'll have to do it, and I think Fauna is in the right place to catch them and take them forward. The other thing is, as people think about how they build new AI-driven applications, or applications on the edge, or the other kinds of applications we talked about, they're going to look around and ask, which system is going to give me the best opportunities here? And I think Fauna is going to be at or near the top of the list when somebody thinks about it that way.
So there are some cases where Fauna is going to have the ability to steal from some other database system, only because people have hit a scale limit or some other limit and are looking for a new solution. But as the pie becomes larger, I think Fauna has the chance to own a meaningful part of the pie. Great, thank you. If I can, I'd add to that. I totally agree. A big part of what you have to do in one of these massive platform businesses, and I've built a few of them, is exactly what Soma said: you have to focus in on your wedge initially. And I'd even go a step further, and I know Soma knows this: in the very early days of Snowflake, it was failed Hadoop deployments that they were really able to pick up, because they had great support for semi-structured data, they didn't have to do all of this data-model conversion, and they were able to go in there and swoop those up very early. For us, the moral equivalent, as I talked about earlier, is people who've said, hey, I want to go to documents, I want to go to NoSQL for that flexibility, et cetera, but are really hitting pain points because of scale, or because of the desire to bring in relational capability, that kind of querying power, strong consistency, et cetera. So that's kind of our failed Hadoop, if you will, initially. Now, fundamentally, because it is document-relational, as you guys laid out in your opening slide, there's a massive opportunity over time. But we're focused on where the most pain is and where the least religious tension is, if you will, from a query perspective.
And then as it gains scale, just like something like Snowflake, you get to broader and broader waves of adoption as people's risk tolerance goes down. All right, awesome. Guys, we're going to leave it there. Great conversation. Eric Berg, Soma Segar, thank you so much for your time. I really appreciate it. Absolutely, great being here. Dave and George, thank you for having us here. Yeah, thanks so much. You're very welcome. All right, I want to also thank Alex Myerson and Ken Schiffman, who are on production, and Alex also handles our podcast. Kristen Martin and Cheryl Knight help get the word out on social media and in our newsletters, and Rob Hof is our editor-in-chief over at siliconangle.com. Remember, all these episodes are available as podcasts; wherever you listen, just search "Breaking Analysis podcast." We publish each week on thecuberesearch.com and siliconangle.com. If you want to get in touch, email me at david.vellante@siliconangle.com, DM me @dvellante, or comment on our LinkedIn posts. And make sure you check out etr.ai for outstanding survey data, the best in the enterprise tech business. This is Dave Vellante for George Gilbert and theCUBE Insights, powered by ETR. Thanks for watching, everybody, and we'll see you next time on Breaking Analysis.