of cockroaches to this. So, you know, you have one single database and you simply keep adding nodes to it and you're expanding it, not just from a volume point of view, from the size of the database itself, but from transactional volume as well. You can hit any one of these nodes and it's gonna act like a single logical database, right? Interestingly enough, we can actually deploy these things in multiple different regions as well, or across multiple different clusters, which is actually really cool. So what looks like one single logical database actually underneath the covers is implemented across multiple different regions, so it can survive things and whatnot, right? So, a couple of cool things about CockroachDB. I'm gonna go through five quick things and then Keith is gonna go and do the operator and demo and stuff. Number one, this is standard SQL. We aren't requiring you to learn anything else. We are wire compatible with Postgres. So, if you know SQL, you can interact with CockroachDB. Number two, you can ask any node in a cluster for data and it will find that data throughout the cluster. Every node is a single consistent gateway to the entirety of the database. This is what allows us to be one single logical database. I can ask any node, I'm gonna find that data. Number three, in this first question, I was asking for this record and it was in region one, it's gotta go all the way over to Europe and back. That's not efficient. How do we geolocate data near users to reduce read/write latencies, and how can the database do that? Cockroach has a very unique capability called geo-partitioning that allows us to actually do that. So, now when this user, Spencer Campbell, asks for his data, well, it's actually located in Europe. Say he's in Europe, right? And so we can actually geolocate data and geo-partition data, which is actually pretty unique and allows us to deal with these latency issues at global scale. 
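Because CockroachDB speaks the Postgres wire protocol, any stock Postgres client can point at any node. A hypothetical sketch (the connection string, database, and table here are made up for illustration):

```sql
-- Connect with a standard Postgres client, e.g.:
--   psql "postgresql://myuser@any-node:26257/bank?sslmode=require"
-- Whichever node you hit acts as a gateway to the whole logical database.
SELECT id, balance
FROM accounts
WHERE id = 1;
```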
Finally, I talked about scale: simply spin up a node, point it at the cluster, and what happens is Cockroach will consume that new node and rebalance all the data throughout the cluster, so that we can balance for volume of transactions or volume of data, whatever it is. There is no manual sharding involved. And then finally, when a node fails, or even a whole region, what we've done is we've actually created replicas of data so that the data still exists, you still have access to it. The cluster itself will actually remedy this situation, actually deal with these sorts of things in terms of creating a third copy of that data somewhere else, but we can actually, you know, survive failure of a node or a complete region, right? So that's the high-level overview of CockroachDB in just a couple slides and, oh gosh, I did it. Man, we're 10 minutes in, Keith. I did it in about eight or nine minutes. But I wanted to get into a demo of CockroachDB and getting started. So Keith, do you want to take over here? Yeah, absolutely. So we're going to demo it. Can you- Hey Keith, really quickly, just let me remind everybody, please do ask questions in the Q&A. I will be monitoring the Q&A for questions along the way and pass those along to Keith. I just wanted to remind you. Sorry, Keith, go on. No problem. So I'm going to demonstrate two things today. I'm going to demonstrate using our operator to set up a CockroachDB cluster in a single region. In this case, it's going to be an OpenShift environment that's running on my laptop. And then later on, we're going to be working on a distributed environment that I set up previously that's across three regions in Google Cloud. And I think I'm running that on GKE. So I'm going to go ahead and share the first bit. So hopefully you can see my screen, Jim. So what we have here is OpenShift. If you're not familiar with OpenShift, it's Red Hat's Kubernetes distribution, right? 
We have published the CockroachDB operator in their OperatorHub, which makes it super easy to get started with CockroachDB. So all I need to do is search for the operator. We've got the Cockroach operator right here. I'm going to click on the install button. I'm going to install it into my cockroachdb namespace that I created a couple of minutes ago. And that's going to go ahead and actually install the operator on the cluster. Now, while I'm waiting for this to install, I'm going to talk a little bit about what an operator is. So an operator is a manager for a custom resource in Kubernetes. So we define a custom resource called a CrdbCluster. And then the operator is a pod, actually, that we assign state to. We publish custom resources to it and it configures and maintains the database for us. So it does things like easy rolling upgrades. It's going to make it easier in the future to set up backups and restores and do autoscaling and all that kind of stuff. Right now it's doing a full install and upgrades for us, which is not super hard in Kubernetes. We got away with not having an operator for a long time. But what it's really designed to do is make some of those kind of day-two operations, like post-install operations, easy to manage in production. So we have this operator. It provides a CockroachDB cluster API. I can click on the link to create a cluster. In the backend, it's creating a CR. We're going to name the CR crdb-tls-example. It's going to have TLS enabled, so it's going to be set up with encryption. It's going to spin up three nodes in this OpenShift environment and each of them is going to have 10 gigs of storage. So we're going to go ahead and create this real quick. And over the next 90 seconds or so, it's going to install the database for us. So that's it. That's all you need to do to get CockroachDB up and running. 
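Behind the console form, the operator is being handed a custom resource along these lines. This is a hypothetical sketch; the exact apiVersion and field names vary between operator versions:

```yaml
# Sketch of the CrdbCluster custom resource the UI generates (approximate).
apiVersion: crdb.cockroachlabs.com/v1alpha1
kind: CrdbCluster
metadata:
  name: crdb-tls-example
  namespace: cockroachdb
spec:
  nodes: 3            # three database pods
  tlsEnabled: true    # certificates managed by the operator
  dataStore:
    pvc:
      spec:
        resources:
          requests:
            storage: 10Gi   # per-node persistent volume
```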
If you're not running in OpenShift, we have published instructions on our website on how to use the operator. We also have a published Helm chart that you can use, you know, that's more for test and dev type environments. Also super easy. And for custom deployments, which is something we're going to talk about a little bit later, we do also publish StatefulSet configurations for installing CockroachDB manually against Kubernetes. Right now, because the operator is currently a single-region operator, you would still use the StatefulSets for deploying across multiple data centers, which I've done here for the later demo. Let's see if it's created my resources yet. It has. So it's starting to create my database pods. So we'll go ahead and let it finish starting up. We have three more that are going to need to start. You can see in the logs that it's already kind of coming up and going. Let's see, the database is almost live and healthy. We've got just one more pod that needs to come up. Hey Keith, really quickly, while we're waiting for that to come up. You know, we're showing the operator via, you know, OpenShift in the marketplace. It's also available just raw in our Git repo as well, right? Yes, absolutely. This is open source, so this is published. Hold on a second. I'm going to log into the database and create a quick user. And then we're going to log into the Admin UI to prove I didn't just, like, kind of fake it. So. Okay. Love watching you type, buddy. This is the only part I couldn't script. I know. Here we go. SQL shell. All right. So. Standard SQL. It's super standard SQL. So let me go ahead and do localhost, because I did a quick port forward. It logs me in. I have three nodes that are running. You can see that all of these are running kind of locally on my laptop right now. That's right. 
And Keith, you did localhost into one of the pods, right? One of the instances of Cockroach, right? It could have been any one of those instances, correct? And then you would get the DB Console. That is absolutely correct. Yeah. We don't implement, like, a separate node to do the console and then the database. Like any distributed system should be, it's a single binary; every single node runs the same binary, right? So. Right. Everything is a single binary, which makes it really easy to kind of work with this stuff, which is awesome. Yeah. And easy to scale, and aligned with the core principles of kind of distributed systems and how these things work, right? So. Real quickly, there was a question about Rancher as well. You know, could we use this with Rancher? I don't think we're in the Rancher marketplace yet, right Keith? But I mean, the operator would work with the instance, I mean. Yeah. So we work with any Kubernetes distribution. You know, we're not necessarily listed in every single marketplace, but with our generic Kubernetes instructions, I have yet to find a Kubernetes distribution that didn't work. So really it's a matter of kind of just following the instructions in our documentation. Right. Yeah, exactly. So there is another question too. You know, I think in this install we're definitely using StatefulSets. I mean, is there an option to do this a different way, Keith? Or is that just kind of the standard way of doing things? So StatefulSets are the standard way of doing things. We could theoretically use something like DaemonSets, and there are certain use cases for DaemonSets. But generally speaking, DaemonSets are more appropriate for things like a Logstash agent or whatnot, where you want it to run on every single node. Whereas for us, we would have to use taints in the background to make sure that the database only landed on specific nodes. That's right. Yeah. 
And I mean, ultimately, StatefulSets help a lot, especially with the concept of a database, right? I mean, there is a fair amount that actually goes into what we're using StatefulSets for, right? So it's actually a critical piece of the whole thing. Absolutely. Yeah. And then, oh, there was also a question about OpenID: do we support the OpenID Connect protocol? So, OpenID would maybe be appropriate for the Admin UI. I am not aware of us specifically having supported that single sign-on mechanism. Right now, our authentication mechanisms are user and password, TLS certificate authentication, and then GSSAPI, which would be backed by a Kerberos-based infrastructure like Active Directory. We have discussed it, and we've done this for Cockroach Cloud, which is our kind of managed service and database-as-a-service, where you do have single sign-on. So theoretically, we could make that available in self-hosted as well. I don't think we currently publish the tooling that would be required to do that, but it would be feasible. I hope that answers the question. Yeah, I think so, Keith. And then one more: is this the preferred method of installing and running Cockroach, using the operator? I mean, I know we've had a Helm chart as well. And so I see you just using the operator now over Helm; like, what is your preferred approach? Because we can use either, right? Yeah, we can use either. So for production deployments, I still recommend the static configs, largely because there's always something custom that you need to do and the operator's pretty new. I mean, I think you're aware, we announced it in the fall, right? So we're still, you know, we're still kind of working to make sure that it has the full feature coverage that we need there. The Helm chart is great for, like, super simple test and dev environments. 
You certainly could use that in production, but certain administrative tasks are a little bit more complicated using Helm. I don't think there's a wrong, I don't think there's a bad way to install CockroachDB on Kubernetes. I think that, generally speaking, it's gonna be really hard to really mess things up for yourself. The Helm chart is very flexible for a basic install. And then the StatefulSets are super dynamic for when you have custom networking and all the kinds of things that go into that. Yeah, and you know, somebody was asking about, you know, how do you persist data? You know, is it just PVs, and what are you doing? And I think that's all kind of covered. I think you kind of spoke to that, Keith, but StatefulSets are really that key thing, so that when things do fail, the persistent storage and remounting that to whatever pod it is, it's just a key piece of the overall equation. It's part of what, you know, because we can actually survive loss of a piece of data within the database, we can lose a pod pretty easily and survive that pretty well. You're gonna show that in the next demo, right? Yeah, in the distributed demo across multiple data centers, I'm gonna show that. It's a little easier here because of the GUI to show off some of the stuff that we're actually doing. So we're mounting secrets as a volume and we're also mounting a persistent volume claim, a PVC, into the cluster. Right. And each node gets its own volume, right? And that way, if we lose a node or we lose a pod, we can recover from that very easily. And so one last thing before we move on, Keith, and I just wanted to comment, I'm trying to hit the questions live, y'all. Like, there are a lot of questions here, which is really fantastic. So forgive me if I'm not repeating the question; I am paraphrasing and kind of lobbing them over to Keith here. I just wanted to talk to a couple of things. Somebody's asking if our operator is open source or not. 
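The per-pod volume behavior Keith describes comes from the StatefulSet's volume claim templates. A trimmed, hypothetical fragment, not the published manifest (which also covers init containers, anti-affinity, certificates, and more):

```yaml
# Illustrative fragment only; names and image tag are placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cockroachdb
spec:
  serviceName: cockroachdb
  replicas: 3
  selector:
    matchLabels:
      app: cockroachdb
  template:
    metadata:
      labels:
        app: cockroachdb
    spec:
      containers:
        - name: cockroachdb
          image: cockroachdb/cockroach:latest
          volumeMounts:
            - name: datadir                   # per-pod persistent storage
              mountPath: /cockroach/cockroach-data
  volumeClaimTemplates:                       # each pod gets its own PVC,
    - metadata:                               # remounted if the pod is rescheduled
        name: datadir
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```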
Our operator is definitely open source. The Git repo is out there. I'll get it out to everybody as Keith is doing this; I'll go look it up and get the link to that. It is written in Go, which is kind of unique to us, actually. Our entire database is written in Go. So it's pretty well aligned with kind of how these things come together. So, dude, there are a lot of really, really deep questions here, Keith. I'm gonna do the second half of the presentation, just to talk about how we scale and how we survive things, and then we'll get you back into the demo while Keith kind of sets things up. I'm gonna go back into the screen sharing here, y'all. Okay, let me see, where's this guy. Okay. Keith, can you see the share? Yes. Okay, great, awesome. So we showed you how to kind of deal with installation and the operator and whatnot. You know, underneath the covers, what Cockroach implements, and what we use and rely on pretty heavily, is a distributed consensus protocol called Raft. Now, if you're not familiar with Raft, go check it out. I mean, if you're interested in distributed systems, Raft and Paxos are really interesting, cool stuff. And we use Raft to a major extent. And it's not easy to actually get it done and do it well within a distributed system. I think I heard Ben Darnell, who's one of our founders, and probably one of the best software engineers I've ever met in my life, say, hey, I could have probably coded Raft in a couple of days when I was in college, but to do it for a distributed, production-grade database, it's taken years to get this right, because we're dealing with some really interesting challenges when it comes to a database. And when you start dealing with Raft and distributed consensus at major scale across different parts of the planet, there are lots of different things that go into that. 
And there's been a lot of really interesting software engineering from the Cockroach Labs engineering team to actually fix some really interesting things that we found in Raft, which we actually contribute upstream to etcd. And there was a question here about split brain; some of those really, really difficult problems around atomic replication and whatnot we've actually architected out. And so there's lots of stuff on our blog. I'm not gonna get too deep into the particulars of what we've done here, but the same kind of core concepts that are driving Kubernetes and Raft and etcd are the same concepts that are here in Cockroach. And we share a lot of that same lineage, and we actually contribute upstream to these things. So I know there was a question about split brain. Man, that's a really, really deep question. There's definitely a great blog post about how we've actually dealt with that in Cockroach. I think, you know, there have been some recent issues about split brain and distributed consensus, and that's a whole nother world. But we use Raft; there are Raft groups in Cockroach. A Raft group is basically a range of data with a leaseholder, or Raft leader, where basically most transactions will commit. When we actually place data within a cluster, somebody was actually asking this in the chat, you know, how do you distribute data? I have four nodes and I actually wanna take a table and distribute replicas of the ranges within this table. Ultimately, underneath the covers in Cockroach, every table is broken down into 256-megabyte chunks of data. And so what that allows us to do is move data around and create these replica sets. And so you can imagine this table, which has dog names here. You know, we're writing, you know, ranges of this data. You can think of these as just shards, but we automate all of that sharding. I'm gonna show you that, right? 
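As a rough back-of-envelope of how a table maps onto ranges (using the 256 MB figure quoted here; the actual default range size differs by CockroachDB version), a small sketch, not CockroachDB code:

```python
import math

RANGE_SIZE_MB = 256  # figure quoted in the talk; defaults vary by version


def num_ranges(table_size_mb: float) -> int:
    """Approximate number of ranges a table of the given size splits into."""
    return max(1, math.ceil(table_size_mb / RANGE_SIZE_MB))


def total_replicas(table_size_mb: float, replication_factor: int = 3) -> int:
    """Total replicas stored cluster-wide: each range is replicated RF times."""
    return num_ranges(table_size_mb) * replication_factor


# A ~1 TB table breaks into 4096 ranges, i.e. 12288 replicas at RF=3,
# which the cluster then spreads evenly across nodes.
print(num_ranges(1024 * 1024))      # 4096
print(total_replicas(1024 * 1024))  # 12288
```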
And so, when we write data to a cluster, we can optimize this for different failure domains, or basically for latency, but we can actually survive lots of different things. When we write the first range, we write to three nodes; we write the second range, we write the third range. And what we're doing is we're distributing data evenly across these various different nodes. We can also do things, we use heuristics in the database itself, so that when load is heavy on a particular range, we can actually segment that range out to its own node, or its own pod, if you will, so that it can actually, you know, deal with that transactional volume, right? So another way in which we can actually deal with the placement of data is to optimize the overall performance of the cluster. Now we can also do something really cool. And if you want a deeper look, there is, you know, an architectural deep dive of Cockroach; if you're really interested in how this stuff works, we go into a great explanation of how this works. It's on our YouTube channel, if you really want to go after it. But we can actually overload the primary key for each of the tables and actually integrate in, say, a location, so that we can actually control where data is going to be placed in a cluster as well. And so there are lots of different things we can do here. This is a capability unique to Cockroach. It's called geo-partitioning. I believe, Keith, you are going to show a little bit of geo-partitioning in the demo, right? Yeah, we're going to use geo-partitioning across three data centers in the US as a part of the demo. And so, you know, ultimately at Cockroach, look, there are two primitives which we've designed and implemented for. Number one is consistency of data. So, you know, we've implemented a database that is a true system of record, with serializable isolation. So, data is guaranteed correct in our database. This isn't eventually consistent. 
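Folding a location into the primary key, as described above, looks roughly like this. The schema is hypothetical, geo-partitioning is an enterprise feature, and the exact syntax may differ by version:

```sql
-- Hypothetical table: region is part of the primary key, so rows can be
-- partitioned by locality and each partition pinned near its users.
CREATE TABLE users (
    region STRING NOT NULL,
    id UUID NOT NULL DEFAULT gen_random_uuid(),
    name STRING,
    PRIMARY KEY (region, id)
) PARTITION BY LIST (region) (
    PARTITION us_east VALUES IN ('us-east1'),
    PARTITION europe  VALUES IN ('europe-west1')
);
```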
What you read is what you're going to get, right? And we've optimized for that. And there are a lot of things we had to do in distributed systems to actually deal with that, right? So, there are these challenges with the CAP theorem. I think there have been a couple of questions about that; we could talk about it in a little bit. But we've also optimized for the speed of light. That's our biggest competitor. How do we actually guarantee a transaction and then give low latency access to data no matter where people are on the planet? Which is also really, really difficult to do. And so, we've done a lot of those things. When we add replicas, or we add a node to a cluster, what it's doing is just simply moving replicas around and rebalancing the cluster. We can actually survive failure. Say node three goes down. Those two replicas are gone. The database is smart enough to heal that. It's going to create the other replicas somewhere else, so the data is always going to be available. So, that's a real quick kind of overview. I just wanted to give a little breather so Keith can go into the second half of the demo. Keith, over to you. So, can you see my screen now, Jim? Yes, I can, Keith. By the way, y'all, I'm trying to answer questions in the background as much as I can. Keep them coming. We'll get to all of them afterwards if we don't get to everything. But I'll try to lob things over to Keith along the way as well. So, thank you very much for all the questions. So, this is a cluster that I set up earlier today. It's nine CockroachDB nodes, plus an additional worker node in each region, which you're not going to see in here because I didn't have them join the database cluster. On those worker nodes, I'm running a load balancer and a load generator. So, we're generating load in each of these regions. The app that we're running is a simulated ride-sharing app that we call MovR. A lot of our tutorials on the website use this app. 
So, this is just a scripted version of a tutorial, the geo-partitioned replicas tutorial on our website, if you wanted to go through and do this yourself. So, right now we're running, performance isn't great. It's running at like 500 milliseconds per query. I haven't done anything to the database to make it fast. We're running about 650 transactions per second across the nine nodes. So, what I'm going to do first is I'm going to demonstrate that the database continues to operate when I kill a node. So, I just killed one of the nodes in the US East region. I think we'll be able to see it here in a second or two. We have a suspect node. So, US East 1C just went down. I'm going to show how the database, we're going to have a bit of a performance blip, of course, but then the remaining nodes are going to take over those transactions as they go forward. There was a question about the CAP theorem, right? So, you can't guarantee consistency, guarantee availability, and be partition tolerant all at the same time. And that's still mathematically true. What we do is, because we distribute the data and use consensus, we can design the database to survive certain failure scenarios. So, in this case, I just demonstrated the database surviving a node failure. I could have taken out an entire region as well, and the other two remaining regions would be able to continue to process data for the database. If we were to lose two of the three regions, though, and I lost quorum for my ranges, then there would be data unavailability. So, it's not like we completely disproved the CAP theorem. Instead, we're working around those limitations to try to make the database able to survive the types of failures we need to be able to survive. Now, I'm going to go ahead and let that node that I failed come back up. It's going to take a little bit of time. 
Once it does, you'll see that this under-replicated ranges count that you're seeing on my screen right now will go back to zero. So, the node has restarted and now we have caught the under-replicated ranges up. And so, the database is fully healthy. What's great about all of this is, with the exception of that dip where we were re-electing the leaders for certain sections of the data, which meant that we were holding some queries, the database was completely self-healing. And it actually self-healed to the original performance metrics before I even restarted the new node, okay? But we haven't done any performance optimization, and quite frankly, a P99 latency in the half-second range is not great for most user-facing apps. Can I just, before you get into the performance thing, can I just ask, there was actually one question that was kind of interesting. And it's applicable to where you're at right now. So, look, we just showed survival: one node went down, there were still two nodes. There was no real impact to the application. I mean, there was a little bit of a dip, but queries were still serviced, right? What happens if two of three nodes go down, however? So, because it's a nine-node cluster, remember, I could have lost all of my nodes in one region and the database would have continued to operate. Because I'm only doing a replication factor of three here, if I were to start losing nodes across multiple regions, there's the chance that some sections of the data would not be available. I could manage for that by increasing my replication factor. So, if I were to increase my replication factor, in this case to five, I could survive either a full data center loss or any two arbitrary nodes in the cluster, right? Right now, I can only lose a node or a fault domain, in this case, a data center. 
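The arithmetic behind that trade-off is just Raft majorities: a range stays available as long as a majority of its replicas survive. A small sketch of the quorum math (not CockroachDB code):

```python
def quorum(replication_factor: int) -> int:
    """Minimum replicas that must agree: a Raft majority."""
    return replication_factor // 2 + 1


def max_failures(replication_factor: int) -> int:
    """How many replicas of a range can be lost before it goes unavailable."""
    return replication_factor - quorum(replication_factor)


print(max_failures(3))  # 1 -> survives one node (or one fault domain at RF=3)
print(max_failures(5))  # 2 -> survives two arbitrary nodes, or a DC plus a node
```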
In the scenario you're describing, if I up my replication factor to five, then I could lose two arbitrary nodes or an entire fault domain, right? Or a fault domain plus an additional node. Yeah, I mean, long story short, Keith, it's basically like, well, what do you wanna survive, and how do you architect the database to survive that? We're gonna give you a couple of different knobs and dials to turn, to optimize for what you want to do, but I think you have to think about that when you're architecting the deployment, the topology of this thing, right? Yes, so we have a great tool that's linked from our website about kinda what your minimum topology needs to look like, and what your minimum replication factor needs to look like, to design your database to survive certain failure scenarios. There's no magic here. I mean, it feels magical because distributed SQL is hard, but the reality is that we're using the same fundamental underpinnings that etcd is using to survive the loss of a master in Kubernetes. And that's the reason why we can do this and we can run in Kubernetes, and have a pod maybe get restarted out from under us and still have the database operate the way we expect. So what I'm gonna do in the background now is I'm gonna start optimizing the environment for being geo-replicated across three sites. So the first thing I'm gonna do is I'm gonna partition my tables and indexes. So right now we're using just kind of the defaults, which are okay, right? But as you said earlier, a customer interaction really needs to be 100 milliseconds or less, which means the database probably needs to be three to five times faster than that at worst case, to have consistent kind of real-time performance, or seemingly real-time performance, for a user-facing application, which is largely what we're focused on. So what I'm doing here is I'm altering my tables to make some partitioning decisions, basically to place the leaseholder, which is the Raft leader. 
Well, technically the Raft leader manages write consensus. The leaseholder is the leading replica that's able to respond to reads without doing a quorum check, okay? So we're configuring the database on the backend right now, making some changes to the tables to optimize the placement of those things. So we're running this ride-sharing app in three cities: New York, Chicago and Los Angeles. And so I'm setting up rules for each of those cities to have their primary replicas live in the data center that's closest to those cities. And so what you're gonna see is that our queries-per-second count is gonna go up, because the database is becoming more efficient, and our P99 latency is going to start creeping down here over the next minute or two. And the net effect of that is in MovR, I'm gonna go ahead and I'll actually show you what we've done here. So I'm gonna go ahead and kind of click into one of these tables. It's gonna take a second to refresh. I am having some internet connectivity problems. Yes you are. So I apologize for that. So I had a couple of internet connectivity problems there. Let me go ahead and get this going. Here we go. You see the DDL for this table now has a bunch of partitions in it, and we assigned those partitions to different regions, okay? And what that allows us to do is, basically, this is the DBA setting default rules for how the database should act in any kind of given situation. So what we see now is our P99 latency is much better, right? 65 milliseconds as opposed to almost 600 milliseconds. Our transactions per second went up from roughly 650 transactions per second across this cluster to almost 1,100. My query latency is gonna continue to improve here over the next, I don't know, couple of minutes. 35 milliseconds is great, but really I wanna be sub 10 milliseconds, because I wanna make sure I have tons of headroom. So now I'm gonna do a further enhancement. 
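The ALTER statements running in the background here look roughly like the following. Table, partition, and region names are placeholders in the spirit of the MovR tutorial, not the exact demo script:

```sql
-- Pin the new_york partition's replicas to US East, and prefer keeping the
-- leaseholder (the read-serving replica) there too.
ALTER PARTITION new_york OF TABLE rides
    CONFIGURE ZONE USING
        constraints = '[+region=us-east1]',
        lease_preferences = '[[+region=us-east1]]';
```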
We have a table in here called promo codes, which is really a global table, right? We don't have New York-only promo codes, right? So what we're gonna do is we're gonna create a global index, which is going to allow us to do fast reads from all of the regions against that table, at the cost of a write performance hit. So if I were to add a record to this table, we're gonna be writing to some additional indexes to make sure that we do fast reads. So our write performance will suffer, but our read performance will get dramatically better. So I'm gonna go ahead and build those indexes. You can see they're already kind of starting to take effect. And the net effect of this is we're gonna have consistent performance in the database of sub 10 milliseconds for 99 out of 100 of our queries. That means that we could theoretically do 10 database transactions within a kind of real-time, app-like user interaction, if we absolutely had to. That would be a little unusual, right? Usually it's one transaction, or maybe two or three. This is going to allow us to potentially have 10 or more if we really needed to. Now, what you're also gonna notice is our queries per second isn't gonna change dramatically with this change, because this is only impacting like 2% of the queries in this application. But our P99 latency is going to continue to get better until it lands at, like, I think it lands somewhere between four and five milliseconds, if you let it. Because the database, as you mentioned earlier, self-optimizes under the covers as well. So once that optimization is done, we're gonna be sitting pretty solidly in that three to four millisecond range. You see we're already kind of getting there right now. So that was what I had to show today, Jim. Yeah, and so Keith, basically you could then alter the partition as well. And the database is gonna start moving data around, correct? 
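For a read-mostly global table like promo codes, the usual pattern at the time was duplicated covering indexes: one secondary index per region, each pinned locally, trading write amplification for local reads. A hedged sketch, with invented names and columns:

```sql
-- One covering index per region; every write now updates each index, but
-- each region can read promo codes without a cross-country hop.
CREATE INDEX promo_codes_idx_west ON promo_codes (code) STORING (description);
ALTER INDEX promo_codes@promo_codes_idx_west
    CONFIGURE ZONE USING
        constraints = '[+region=us-west1]',
        lease_preferences = '[[+region=us-west1]]';
```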
Like, I mean, this was all done in production, correct? Yeah, so there are times when we move data; more frequently, what we're actually doing is moving the authority to act on that data, okay? So the leaseholder and the Raft leader, those are the authority to read and write that range, respectively. So the way CockroachDB works is that every node is responsible for some portion of the data in the database, right? And in a normally configured environment, you could theoretically break this if you wanted to, but I don't know why you would, every node is going to be the leader for some portion of the data, a follower for other portions of the data, and completely uninvolved in certain sections of the data, right? So most of what I did here was actually moving the authority to act, right? With those partition schemas, let's say I had four data centers here and I decided, for any particular partition, I wanted to move the authority to act to a location where we didn't have a replica; then the database is going to move the data for you. So you don't have to manually load the data in the background to do all this kind of stuff. We will handle that for you, but it's much more likely that we're going to move the authority to act on the data than it is that we're gonna move the data. The exception to that would be, like, hot ranges. Let's say we're having a lot of transactions across a really small number of ranges; then we might have some things that we would do there. Hey, dude, we only have four minutes and then we gotta cut this thing off, so I just wanna make sure I get to, like, there were like two other questions. And I'm just gonna share my screen to have that wrap-up slide, so while we're taking Q&A, people have that. If you can just turn off your screen share. Sure, that's great. So one of the questions, and it came in a couple different ways: how about backup and restore? 
And how does that work in CockroachDB? Because it's a unique problem, right? Because we're guaranteeing data lives in locations, right? And so how does that work in Cockroach? Yeah, so the core database, that's the free-to-use open source database, includes backup and restore to a single location. So let's say you had an S3 bucket in AWS or whatnot, you could do a backup to S3, and all of the nodes in your database would just need to be able to write to that S3 bucket. The enterprise side, so the entire database is source available, but certain features are licensed as enterprise features that you have to pay Cockroach Labs for. If you have an enterprise license, we also support distributed backup, which allows you to have an arbitrary number of nodes back up to an arbitrary number of locations. So let's say, for my simulated use case here where I'm in US East, US Central, and US West, I could theoretically have the nodes in each of those locations back up to a local object store, and then I could move those replicas around in the background. That's gonna be faster, of course, in a distributed scenario where I'm backing up to something that's closer. But all of our backups are distributed; it's just a question of whether it's a many-to-one distribution or a many-to-many distribution, and the latter one is what we charge customers for. Yeah, and the latter one, typically we see it with customers who are trying to deal with, you know, compliance regulations, where the backup itself needs to be controlled for jurisdictional reasons, right? It's one of those more advanced features. So another question, Keith, was, oh gosh, guys, there were a lot of really great questions. I think one that's kind of easy to pick off and go through: what about when you run a query that's hitting all the records, you know, like a sum or a report?
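Both backup styles Keith contrasts can be sketched in SQL; the bucket names and auth parameters below are placeholders:

```sql
-- Core: back up the whole cluster to a single destination.
-- Every node writes to this one bucket.
BACKUP INTO 's3://my-backups/cluster?AUTH=implicit';

-- Enterprise: locality-aware backup. Nodes in each region write
-- their data to a nearby bucket, with COCKROACH_LOCALITY tags
-- mapping node localities to destinations ("=" is URL-encoded).
BACKUP INTO (
  's3://backups-default?COCKROACH_LOCALITY=default',
  's3://backups-east?COCKROACH_LOCALITY=region%3Dus-east1',
  's3://backups-west?COCKROACH_LOCALITY=region%3Dus-west1'
);
```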
You know, because we don't want to range scan the entire database, right? We're all over the place. Like, how does Cockroach deal with those sorts of things, Keith? Yeah, so there are a couple of different strategies for managing that. Remember, we only have about one minute, just so you know. Yeah, so that would be a suboptimal query pattern for CockroachDB. The easiest thing to do would be to run what we call an AS OF SYSTEM TIME, or time travel, query. That way that long-running aggregate query isn't blocking other transactions. We're serializably isolated, so if you don't do that, then you could potentially have some performance issues. There are some other mechanisms you can use with CTEs and pre-calculated aggregates. Like, we could theoretically create an index that included some of those pre-calculated aggregates. But we also implement vectorized queries, so we can actually do sums of columns pretty easily as well, right? Yeah, so the vectorized execution engine would help the performance there, but we're still gonna have to hit all of the leaseholders for all of the ranges that host that data. It'll just eliminate some of the overhead of the transactional engine there. That's right. That's right. I mean, we've optimized a lot of things. There's also a cost-based optimizer in CockroachDB that deals with the distributed nature of the data. There's lots of things; I mean, you could actually go in and inspect queries and see what's holding things up. We have just barely scratched the surface in this webinar today. You know, CockroachDB is available off our website, I mean, the core version, the free downloadable version, is available off our site. Just go get CockroachDB. We have a world of resources around Kubernetes in terms of, you know, how well we work with it. We do feel that we are probably the best database that is really designed to run on Kubernetes.
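The time-travel approach mentioned here can be sketched like this; the table and column names are made up for illustration:

```sql
-- Run a large aggregate against a snapshot from ~10 seconds ago.
-- Historical reads don't contend with current writes, so under
-- serializable isolation this report won't block (or be blocked
-- by) live transactional traffic.
SELECT region, sum(amount)
  FROM orders
    AS OF SYSTEM TIME '-10s'
 GROUP BY region;

-- With follower reads (an enterprise feature), the historical
-- query can also be served by the nearest replica instead of
-- each range's leaseholder:
SELECT sum(amount)
  FROM orders
    AS OF SYSTEM TIME follower_read_timestamp();
```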
In fact, Cockroach Cloud, which is our managed service of CockroachDB, runs on Kubernetes. We have deployed thousands of Cockroach clusters on Kubernetes in Cockroach Cloud, and a lot of what we package into the operator is that knowledge we've captured. But that is really the easiest way to get up and running and try this today. We actually give away free clusters for about 30 days, and, you know, stay tuned for unlimited free clusters in our future. So there's some really cool stuff coming out of us. So Keith, thank you for that. I want to be respectful of the Linux Foundation and our time requirements. You and I could sit here and talk about this stuff for days. Like I said, I think we really just scratched the surface today. And there were a lot of questions that came into the QA; I will get these from the LF and get answers out to everybody as much as we can. So Keith, thank you. Thank you. Appreciate it. Yes, thank you both so much. Thanks, Jim. Thanks, Keith. And as we mentioned earlier, this recording will be available up on YouTube later today. So thank you all for participating, and we hope you have a great day. Thanks everybody.