Hi, this is your host, Aptil Bhartia, and welcome to our first talk. Today we have with us Tyson Trotman, VP of Engineering at Fauna. Tyson, it's great to have you on the show.

It's my pleasure. The pleasure is all mine.

And this is the first time, if I'm not wrong, that I'm talking to somebody at Fauna, so I would like to learn a bit more about the company itself. How old is the company? What is the space you operate in? What are the specific challenges or problems that you're trying to solve for the larger ecosystem?

Fauna, fundamentally, is the ideal operational database for modern application developers. The company's been around for a little while, actually close to 10 years, because it's hard to build a novel database; there's a lot of interesting technical innovation that goes into that. But the problem that our two co-founders, Matt and Evan, initially saw when they were building the data platform at Twitter, and something that I've also seen over the course of my career, is that application developers spend too much time thinking about their database. As applications mature, it becomes this huge area for investment, particularly as you try to tie your database into your DevOps workflows. And so Fauna was built to solve a lot of those challenges for application developers out of the box.

When it comes to these new principles, as teams are collaborating or writing the code, what happens to data and databases? Do they also fit well into the pipeline? Look, the fact is that apps can come and go; we will get new versions. But the database is core critical to any organization; it is the most important asset. So when we look at these new workflows, these new principles, automation, collaboration, does the database fit well into these processes, or does it at times become a bottleneck for developers or DevOps teams?
Yeah, certainly, I'd say existing database offerings often become bottlenecks for these developer workflows. To me, the linchpin of modern DevOps is continuous delivery. It's the thing that takes software from a change to a module in a larger system, to integrating that change with other modules, and then deploying that change from a development environment, probably to a staging environment, to production, maybe through a release pipeline. And there are sets of capabilities that you need out of a database to practice continuous delivery safely and effectively. Namely, you need to be able to verify schema changes against your data. You need to be able to verify schema changes against consuming applications. You need to verify the transition from old to new schema. And you need to be able to do all of this in your existing environments: as you do development, as you're deploying, and then in production. And ideally, you need to be able to do this all in a way that fits into your existing tools. Every module in the system can introduce its own ideas about what continuous delivery looks like and introduce new panes of glass for developers to be staring at as they're doing these things. So fundamentally, when we thought about how Fauna fits into this continuous delivery ecosystem, those were the challenges we had top of mind.

Excellent. And as you're saying, the company has been around for 10 years; that actually doesn't make you an old company, because if you look at Kubernetes, it's going to be more like 10 years now too, right? That's when Google contributed the work to the CNCF. We talk about new technologies and then look back: oh, time passes by so fast. So you can call yourself new. I mean, the world is changing. My point is that the world is changing fast.
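One of the capabilities mentioned above, verifying schema changes against consuming applications, can be made concrete with a small sketch. This is an illustration only; the schema representation, field names, and rules here are invented, not how any particular database models compatibility:

```python
# Toy compatibility check: treat a schema as a field-name -> type mapping
# and flag changes that would break existing readers of the data.
# All field names and types below are invented for illustration.

def breaking_changes(old: dict[str, str], new: dict[str, str]) -> list[str]:
    problems = []
    for field, ftype in old.items():
        if field not in new:
            problems.append(f"removed field: {field}")
        elif new[field] != ftype:
            problems.append(f"type change: {field} {ftype} -> {new[field]}")
    # Adding new fields is treated as safe; existing readers ignore them.
    return problems

old = {"id": "String", "total": "Number"}
new = {"id": "String", "total": "String", "note": "String"}
print(breaking_changes(old, new))  # ["type change: total Number -> String"]
```

A check like this is the kind of gate a release pipeline can run automatically before a schema change ever reaches staging or production.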
We can go all the way back to the Docker container days or, you know, the Linux kernel arguments; it's mostly all the same now. What kind of evolution have you seen in the database space? Do you feel that databases have stayed the same because it's a totally different kind of space, or have you seen evolution, especially from companies like Fauna, so that these databases are suitable for modern workloads? And when we talk about modern workloads, we are not just talking about moving fast; we are also talking about huge amounts of data. We can talk about data warehouses, which is a totally different thing altogether, extracting value from data so we can also pump it into LLMs, all those things. So talk about the evolution of databases that you have seen. Are you happy with it or not? What's the kind of gap that Fauna is trying to fill?

Yeah, it's a great question. It's a big question. I'd say there are a few properties that are front and center on the minds of modern application developers. One is they don't want to manage their database, right? They don't want to have to think about instance sizes, scaling, starting, patching, all of those types of things. So the ideal database from a developer perspective is serverless, something you consume as an API. The second thing is that modern applications, which often run at the edge, care a lot about where data lives. It's a huge deal for performance, and there are other considerations as well: certainly compliance, and DR too. And so the ideal modern database handles things like replicating data across regions, so it can survive even a full regional outage, and also makes data very quickly accessible to consuming applications. That's a big thing.
And then the third one, I think, is that the ideal modern database supports very powerful, flexible access patterns, so developers aren't constrained by things like a legacy DDL, DQL, etc. If you think about the typical database languages we're used to, like SQL, which is obviously a little long in the tooth now: SQL speaks in terms of tables, developers speak in terms of objects, so it forces you to deal with this object-relational impedance mismatch, which is a big deal, which is hard. So I'd say the ideal modern database addresses those three areas, and that's really what Fauna does. Fauna is consumed as an API. You send us HTTPS requests that contain your transaction logic; everything executes in the context of a transaction. We leverage what we call our distributed transaction engine to replicate data across regions with strong consistency. And then finally, we are what we call a document-relational database. We store data on disk as documents, but we support the query patterns and attributes that people typically associate with a relational database: for example, the ability to do joins across different types of data, strong consistency, those types of things that have traditionally been associated with relational databases.

How are developers dealing with some of those limitations and restrictions of historical databases, to be able to reap the benefits that Fauna is offering?

Often, it's by making significant investments in the layer around or on top of the database to pave over some of those limitations. So for example, when it comes to continuous delivery, which we touched on briefly before, you have legacy SQL databases that have only limited imperative support for changing schema.
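As a sketch of the "database consumed as an API" idea described above, here is roughly what assembling one transaction as an HTTPS request could look like. The endpoint URL, header, and payload shape here are assumptions for illustration, not Fauna's documented contract; Fauna's HTTP API reference is the authority:

```python
import json

def build_query_request(secret: str, fql: str) -> dict:
    """Assemble an HTTPS request that submits one transaction.

    The endpoint and body format are assumed for illustration;
    check the vendor's HTTP API docs for the real contract.
    """
    return {
        "method": "POST",
        "url": "https://db.fauna.com/query/1",    # assumed endpoint
        "headers": {
            "Authorization": f"Bearer {secret}",   # per-database key
            "Content-Type": "application/json",
        },
        # The whole body is one transaction: it either fully commits
        # on the server or fully aborts.
        "body": json.dumps({"query": fql}),
    }

req = build_query_request("fn_secret_123", 'Order.byId("123")')
```

The point of the pattern is that there is no connection pool, driver, or instance to manage on the client side; any environment that can make an HTTPS request can run a transaction.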
And then you have tools that have come along to build on top of that. For example, Atlas is an HCL-based tool that tries to bring declarative schema to SQL and traditional databases. Among other vendor offerings, PlanetScale has invested significantly in these developer workflows around a legacy SQL database. The same is true with Neon and what they're doing with copy-on-write through their compute-storage separation. So there's different innovation happening at different levels on top of the database, and some interesting things refactoring the storage layer of the database, to try to get to these sets of capabilities that developers want, again, in the example that we chose, to fit into these modern DevOps workflows.

We also talk about everything as code, you know, infrastructure as code. From the database and data perspective, what kinds of approaches can companies take to treat schema as code, and how does that solve some of those problems? And before I actually ask this question, I want to stick with the older question: how much adoption are you seeing of some of these new practices and approaches? When you talk to your customers, is it, hey, everybody knows they are moving towards it, we just have to help them; the horse is already at the lake, we just have to make it drink the water? Or do you feel like, you know what, they still don't realize it, their teams are still struggling with those limitations, so we also need a lot of education? What's the state of databases, more or less?

Yeah. I mean, I'd say, first of all, to answer your question directly, there's no question that engineering teams and engineering leaders understand the value associated with some of these propositions, right?
You know, modern DevOps practices, continuous delivery, et cetera. I think Nicole Forsgren and others, in their book Accelerate, did a great job making the case for the business value that's attached to some of these practices; maybe surprisingly, businesses that do these things have greater market share, more profitability, et cetera. So I think that realization is there. Again, when it comes to DevOps and CD, I think the realization often breaks down when it comes into contact with the limitations of existing tools, and databases are the prime example of that, right? So folks that are fully bought into this mindset practice continuous delivery with all the other software modules in their system, then throw it out when it comes to the database, and instead batch together a bunch of changes and have engineers or DBAs running very manual, one-off processes to apply those schema changes to the database. But I'd say that when the capabilities are there in the database, like they are with Fauna with this recent release, there's a lot of hunger to go and consume those things. We saw a lot of appetite from our customers, even while some of these features were in beta, to pick them up and start using them for their production workloads.

Let's now talk about the cultural side, because we touched upon it briefly. We have been asking this question about cultural changes and the whole of DevOps, but from the perspective of database engineers, data engineers, and data teams, talk about the cultural changes that you're seeing. And I'm not talking about the cultural changes we like to talk about, but the cultural changes that are actually happening within the teams of your customers. And how is your approach a bit different or unique, or do you feel this is exactly what is needed there?
Yeah, let me start answering that question by talking briefly about what we built and our approach, and then we'll tie it back to the cultural or team question. First of all, with Fauna, in a recent launch, we introduced a few very exciting features. The first is what we call the Fauna Schema Language (FSL), which was deeply inspired by GraphQL schema but is really a declarative language for defining your schema in Fauna. And when I say schema, I don't just mean field-level schema for the collections and the data itself; you can go as far as defining constraints that need to be true in order for a transaction to complete. You can define what we call computed fields, which are effectively virtual fields where code is executed at access time. So there are all kinds of powerful features there to define your data schema as code. We've also integrated FSL with our Fauna Shell. So from the command line, in any environment, you can define endpoints, similar to Git configuration, in a local directory hierarchy; those endpoints essentially map to where your Fauna schema lives, the schema for your data. So you can pull down your schema as FSL files dynamically as part of local development or in your developer workflows. There are a few important aspects of this. One is that this gives you a portable way to manipulate schema in any environment. Number two, the schema rides along with your code in your repository. And this is important because it all fits into your existing tools. You use Git, GitHub, GitLab, Bitbucket, whatever, for everything. So we're not telling you to go and check some new additional pane of glass to do a deploy request for your database. This all fits natively into your developer workflows.
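As a rough sketch of what the declarative schema described above might look like, here is an illustrative FSL-style collection definition with a check constraint and a computed field. The exact syntax here is an approximation; Fauna's FSL documentation is the authority:

```
collection Product {
  name: String
  price: Number

  // Constraint that must hold for a transaction to commit
  // (approximate syntax).
  check positivePrice (doc => doc.price >= 0)

  // Computed field: evaluated at access time, not stored
  // (approximate syntax).
  compute priceWithTax: Number = (doc => doc.price * 1.08)
}
```

Because a file like this lives in the repository next to application code, pulling and pushing it through the Fauna Shell (commands along the lines of `fauna schema pull` and `fauna schema push`; the exact command names may differ) slots schema changes into the same Git-based review and deploy flow as any other change.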
And when you couple that with other features, like our backup and copy functionality and our data import functionality, this gives you a way to validate schema changes against data and against consuming applications, and to validate the transition: all those things I mentioned earlier, those capabilities that you need in order to practice continuous delivery with your data. I guess I didn't come around full circle and tie that to the transformation that we see. What we see from teams that consume this is that all of a sudden they can quit thinking about their data as a snowflake with one-off deployment processes. They're very worried about touching their data because they don't know: oh, when I transition this schema, is something funky going to happen? Is SQL going to hold a lock on some field that's going to cause a production outage? Whatever. Instead, they bake these changes directly into their release pipelines, they build automation testing those changes, leveraging FSL and the Fauna Shell, and they can be as fearless making changes to their data as they are with the other software components of the system, which is a huge deal.

Let's look at some of the new or emerging use cases. Though this one is not an emerging case, Apple also recently announced that from the 19th they are going to start preorders of their Vision Pro. What that means is that when we come to VR and AR, we are talking about even more data, which we are not only consuming but also creating: more data from EVs, and then of course generative AI kinds of workloads, AI-driven graphics and videos. What I am trying to understand is what kinds of new workloads you are seeing emerge that might push teams, when it comes to data and databases. We will be creating more databases, and we will be creating more data; these are two different things, structured and unstructured.
And how do you look at it? Do you see it as a challenge for the team, or do you see it as an opportunity? Overall, how do you look at this whole explosion of databases?

Yeah, another big question. So there are a few things here. The first is the proliferation in terms of size of data. Fauna, again, is an operational database, not an analytical database. So we do support large volumes of data, but typically, when customers get into these more analytical-style queries, they want mechanisms to ETL data out of Fauna into their analytical database of choice in ways that are convenient. There's a lot of buzz right now about things like zero-ETL, which I think is really more like managed ETL, but we do support a few very convenient ways to get your operational data out of Fauna and into your analytical database for some of those types of queries. Coming back to your question about AI more generally and some of the operational use cases that are out there: we very much see people building AI-driven apps on top of Fauna. And we're excited about two things. Number one, how we can use AI in our products, where there are some really cool things happening. We recently launched a new AI assistant on our docs page. That's great for asking questions about the database, doing code translation, etc. I think that will become even more core to our product as we leverage our knowledge of your schema and your data to inform those types of queries and what you can do. But the second important one is how Fauna fits into the broader AI application landscape.
Today, the typical pattern you see is databases rushing to support vectors and similarity search, so that you can generate embeddings and retrieve content to inject into LLMs, which is cool. We have customers doing this with Fauna in conjunction with other databases, really what I'd call more like indexes, such as Pinecone and Weaviate. But my own personal view is that vector search is just going to be one tool here. I think there are other types of search that will become interesting. Today it's kind of a hammer, and everything we look at looks like a nail to some extent, but graph search and other types of search, I think, will become more relevant. And so the way we think about it is that we have customers storing their primary data in Fauna and then also using some of our integration capabilities to link Fauna to other types of indexes out there, whether that's vector databases or Algolia, etc., to perform different types of search over their data. And we're really excited about that future, and about Fauna continuing to be at the heart of some of those application patterns.

Tyson, thank you so much for taking time out today to talk about Fauna and, of course, the whole evolution of databases, data, and these new workloads. Thanks for those great insights; I would love to chat with you again. Thank you for your time today.

Yeah, thank you for having me. It was a lot of fun.
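As a toy illustration of the vector-search pattern discussed above, that is, keeping primary records in an operational database while an index holds one embedding per record and answers nearest-neighbor queries: the following is a from-scratch sketch, not how Pinecone or Weaviate are actually used (they expose client APIs and do this at scale). The three-dimensional vectors and document ids are made up:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# "Index": document id -> embedding. In the pattern described above, the
# full documents would live in the operational database and only these
# vectors would live in the vector index.
index = {
    "doc-1": [0.9, 0.1, 0.0],
    "doc-2": [0.0, 1.0, 0.1],
    "doc-3": [0.8, 0.2, 0.1],
}

def nearest(query: list[float], k: int = 2) -> list[str]:
    """Return the ids of the k embeddings most similar to the query."""
    ranked = sorted(index, key=lambda d: cosine_similarity(query, index[d]),
                    reverse=True)
    return ranked[:k]

print(nearest([1.0, 0.0, 0.0]))  # ['doc-1', 'doc-3']
```

The ids returned here are what an application would then use to fetch the full documents from the operational store before injecting them into an LLM prompt; swapping cosine similarity for a graph traversal or keyword match is what the "other types of search" remark points at.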