Thank you so much, Candice, for the introduction, and hello everybody. The Linux Foundation does such a great job with these; I love being here doing them. I don't know how many times I've done this with you all, but it's a lot. Before I get started, a couple of logistics. First, that's me up there; if you want to follow me on Twitter, I'm just @jaymce. People like to know what the level of content is going to be in a session: I'd call this one intermediate. We're going to get into the details of how you architect software, how you architect an application to actually make it serverless, and hopefully there's some good advice you can pull away to think about how to architect your own applications to be serverless, because honestly, I'm a believer in this whole thing. And, my goodness, I haven't updated this slide; there aren't two of us today, just one, so it's me and the mouse in my pocket. I'm not a database expert, but I am curious. I love this stuff, and I love to talk about these things. The goal today is to give you high-level context on how we went through this process, to hopefully help you think through your own. And yes, as Candice said, please do ask questions; I'll have the Q&A panel up the whole time while I'm presenting, and I'll try to get to questions in line as we talk. I do love questions along the way. Last thing: I'm with a company called Cockroach Labs. Love it or hate it, people don't forget the name. We're named after the mighty cockroach; in fact, our O'Reilly book has a cockroach on it, which is good.
But yeah, we are the vendor behind a database called CockroachDB, which is a modern cloud database that was architected to be naturally resilient, to scale very easily, and to use a lot of cloud native principles. A lot of the things I've learned working within the CNCF for the last five or six years are actually outlined and encoded in CockroachDB. If anybody's ever interested, our code base is open; you can look through it, and I'd like to think of it as a PhD in distributed systems. It's all in Go, and it's a very interesting code base. So that's a little context before we get started. This webinar has three parts: we'll start at a high level, drill down, and get into some deeper detail at the end. But please do ask questions. Now, the concept of serverless — and I hope you're all on this at a high level because you're curious where this is going — I just was not a believer in it about a year and a half ago. Maybe it was the implementations of serverless I saw in the market; some of the tooling out there just didn't convince me. But the closer I get to it, the more I'm actually a believer. What helps me understand it is this: if we take a concept like serverless and apply it to things, they start to get a whole lot more interesting. The same way we took a telephone and applied the concept of wireless to it: it just made it more convenient and more capable. Right?
And I think we can do the same thing with serverless: we can take this concept and apply it to our applications and to our back-end infrastructure. How do we apply this in the cloud? Can we get improved capabilities, better convenience, more efficiency? That, to me, is the premise of what serverless means. We have serverless functions and lots of other things, but they're all implementing this same paradigm. When I think about it, I really think of five things. First, little to no manual server management by anybody: let's automate as much as we can. Maybe there's one person managing a fleet of thousands of servers, but let's limit that; let's not make the people who interact with your infrastructure or your software do it. Second, let it scale automatically. Third, practically, make it a service. Fourth — one of those core distributed systems properties — is it inherently fault tolerant? Does it have high availability baked into it? Is it always available and there for you? And fifth, most importantly for a lot of us: I don't want to go buy an EC2 instance or lease a VM in the cloud somewhere. I use what I use, and I don't want to be billed for excess compute cycles I'm not actually using. When I think about serverless, I think of those five core principles. If we can apply them to our own software stacks, things get really interesting, because we can start to deliver software in a different way.
Now, today, people think about serverless as serverless compute — I love Google Cloud Run, I think it's really cool — or Fargate, or serverless functions like cloud functions; Netlify and Vercel all fit into this too. And there are these emerging functions-as-a-service platforms pulling it all together into an end-to-end app development platform. This is great; it's a really great first step into serverless, and it has helped define the space and popularize it. But if you think about it, what's happened is we've taken the concept of compute — these application functions — and applied serverless to it. There's a whole lot more work to do, and I think we can make the rest of our applications serverless too. For us, that means the database, and that's what we're doing. There are a couple more requirements, and this applies to everything, not just the database. I don't think serverless should be limited to a single AZ or a single region. I think serverless should actually be infrastructureless: it should be free to scale geographically as well; these things should be available across the entire planet. And with a database, you get into the hard parts: transactional guarantees; how do you implement SQL in this context; how do you auto-scale a database (kind of difficult); where is data located, based on what's going on and how quickly people access things; is state preserved? Ultimately, I think this concept of infrastructureless is really, really interesting and really, really important.
So how did we do this? How do we take one of these things and actually apply it? Let's step back a second: we're doing this to CockroachDB, and I'm going to go through how we did it from an execution point of view, and then how we do storage of data. If you look around, there are several solutions out there doing this; people are moving in this direction because we all see the value in the core principles of serverless applied to different products. I was just on the website for ClickHouse earlier, and the same basic architecture — ephemeral compute and persistent storage — is implemented there. So there's something here: this pattern of application architecture is definitely emerging, and that's what I want to talk to you about. If we think of any database, any DBMS, there are really three layers. There's a language it speaks, because ultimately the point of a database is that I'm going to write things to it. Then there's an execution layer of things you can do with that data. And then there's storage. Now, in CockroachDB the language hasn't changed: it's still SQL, still the syntax you'd expect out of any Postgres database. But underneath the covers we're doing things very differently, because we're using a distributed system: we're doing distributed execution of transactions, and we're replicating and distributing data across lots of different nodes at the storage layer. So those middle and lower layers really start to change when you think in terms of distributed systems, and a distributed system is really going to optimize the compute it uses to execute what you want to do.
Now, for me, when we start to think about how to make any application serverless — not just the database — it's all about finding that divide in your software and your architecture stack. What is the point where everything above the line is ephemeral? It doesn't need state; it's the stateless component of what you're doing. If you think about execution of a query, maybe there's temporary state between the steps of a query plan, but even that is ephemeral: it goes away at the end of whatever the query is doing. Underneath that line are the things that need to persist. When we're writing data, it can't just go away, especially with a database. So if you can find the line between ephemeral compute and persistent storage, that's the line you want to draw when you start thinking about making your own application serverless. Let me go through how we did that and how we use it as an architectural principle, and then I'll talk about what we're doing in storage — a little deeper into the architecture of CockroachDB in particular, but it should give you a sense of how to make these things multi-tenant and persistent. In CockroachDB today, when I deploy a database — here I'm running across three different pods in Kubernetes — I have three nodes of CockroachDB, and it acts as a single logical database across multiple instances. I can ask any one of these nodes for data; every one of them is an endpoint. I simply put a load balancer in front of it, I write data and read data, and these three nodes safely work together to deliver the capability of a relational database.
Now, what we've done in CockroachDB is ask: where is that line between ephemeral compute and distributed storage? We decided that the storage and replication part — how we actually distribute and place data — is different from what's happening above it. Above the line is what's ephemeral: none of it needs to stay around, and if it goes away, that's okay (not in the middle of a transaction, of course, but that's a whole other topic). The SQL execution is separated from the storage. You get this virtual cluster living on top of storage and replication. We've taken our software and created essentially two binaries: one stubs out the storage and replication part, the other stubs out SQL execution and distribution, and we've redefined the software stack so the two communicate with each other intelligently. So we have a shared, storage-only CockroachDB cluster, and we run virtual clusters on top of it, and each virtual cluster living on that shared storage is its own size. My tenant one has a fairly intense application, lots of transactions a second; maybe tenant two is just a prototype doing a couple of transactions a minute; tenant three has super heavy traffic. We can spin up more SQL pods based on the amount of traffic coming in, so we can scale each piece separately — before, I would have had to scale the entire stack. Using the shared storage layer, I can scale ephemeral compute and persistent storage independently. So basically, here's what happens when we spin these things up.
Each SQL pod is basically an instance of CockroachDB running in a pod. We'll spin up tenant one across three AZs; maybe tenant two just goes into one AZ; tenant three goes across all three AZs, so that if an entire AZ fails, we still have the data, because when we write data we're writing it three times. These SQL pods are oblivious to where the data lives; they only handle execution on top. But to do this, we had to introduce a new layer into our software: a proxy. The proxy keeps a manifest of all the pods distributed across the entire system, and it knows which tenant belongs to which pod. Since every node is an endpoint, you need this kind of traffic hop — a proxy — to understand who owns each piece of ephemeral compute. That was a big lesson learned up front for us. Now, scale in a database is quite interesting. When I think about scale, I think of three different vectors. There's storage: the size of the database — how many petabytes, how many records are stored in the thing. There's transactional volume — how much traffic is flowing through; that's another vector of scale. And the third is geographic scale: can I scale this thing across the whole planet, across multiple regions? Transactional scale in a serverless database is also quite different. In a serverless database, the entire concept is that if I have no traffic, I don't want any ephemeral compute running — I want to scale all the way down to zero.
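To make the proxy idea concrete, here's a minimal Go sketch of a tenant manifest: a map from tenant to its assigned SQL pods, plus a pool of unassigned hot pods. All names and structure here are my own illustration under those assumptions, not CockroachDB's actual proxy code:

```go
package main

import (
	"errors"
	"fmt"
)

// Manifest maps each tenant to the SQL pods currently assigned to it,
// plus a pool of warm "hot" pods that belong to no tenant yet.
// This is an illustrative model, not CockroachDB's real data structure.
type Manifest struct {
	assigned map[string][]string // tenant ID -> pod addresses
	hotPool  []string            // unassigned hot pods
}

// Route returns the pods that may serve connections for the given tenant.
func (m *Manifest) Route(tenant string) ([]string, error) {
	pods, ok := m.assigned[tenant]
	if !ok || len(pods) == 0 {
		return nil, errors.New("no SQL pods assigned to tenant " + tenant)
	}
	return pods, nil
}

// Assign moves one hot pod out of the pool and hands it to a tenant.
func (m *Manifest) Assign(tenant string) (string, error) {
	if len(m.hotPool) == 0 {
		return "", errors.New("hot pod pool is empty")
	}
	pod := m.hotPool[0]
	m.hotPool = m.hotPool[1:]
	m.assigned[tenant] = append(m.assigned[tenant], pod)
	return pod, nil
}

func main() {
	m := &Manifest{
		assigned: map[string][]string{"tenant-1": {"pod-a"}},
		hotPool:  []string{"pod-b", "pod-c"},
	}
	m.Assign("tenant-1") // scale up: tenant-1 grabs a hot pod
	pods, _ := m.Route("tenant-1")
	fmt.Println(pods) // tenant-1 now routes to two pods
}
```

The key design point is that every proxy instance shares this manifest consistently, so any proxy can route any tenant's connection to the right ephemeral pod.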
However, if I have some intense usage period — maybe at eight in the morning everybody using my application wakes up and does whatever they do — can it accommodate those spikes in traffic and keep my P99 latencies within a certain range? Can the cluster just scale this ephemeral compute up and down to meet the need? And can it spin down to zero, so I'm not paying when I'm not using anything, and then start back up almost instantly when traffic returns? This is all accomplished using these ephemeral pods. What we built is a thing called an autoscaler. The autoscaler monitors the CPU load across each of the pods, then calculates two metrics: the average CPU usage and the peak CPU usage across five minutes. We have a pretty simple algorithm that looks at usage over the past five minutes — we can tune it; that window is one of the knobs — and basically it says: if my CPU usage is going up, spin up new SQL pods and assign them to that tenant; if it's going down, kill off pods I don't need, or put them back into a pool. That's one of the key pieces: the autoscaler. While it has a fair amount of complexity inside, it's pretty simple at the top layer. So if you're going to build something serverless, you have to start thinking about what the autoscaler looks like for you. For us, measuring CPU usage was easy to do, and it correlated directly with how busy transactions were across the entire system; maybe for your software it's a different signal. So think through your own autoscaler and how it might work.
That approach just made sense for us, so we scale up and down based on the average and peak CPU of that particular cluster. So here's the picture. I have tenant one running in three different availability zones, with my proxy layer and load balancer sitting in front of those three AZs. Tenant one is running at steady state, and steady-state capacity is about halfway up the curve in the upper-left-hand corner. So three ephemeral pods can handle the normal traffic. In each of these AZs — and this is one cluster spread across all of them — I also have hot pods, which are instances of CockroachDB that haven't been assigned to anybody; they're in the manifest as unassigned. The four instances of the proxy across the top all share this manifest, so they all understand what's going on in real time, and it's guaranteed consistent. So we have hot pods that are unassigned, and pods that are assigned to a particular tenant. If the autoscaler measures a spike in traffic over the last five minutes and decides it needs more pods, it can take one of these hot pods, assign it to a tenant, and instantly we have more ephemeral compute — we've just grown the compute for that cluster by 33% — still living on the same storage. The storage simply expands based on how much space it needs; that's a different scale issue, and that one's pretty straightforward.
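The scaling decision described above — average and peak CPU over a five-minute window driving the pod count, all the way down to zero — can be sketched in Go. The thresholds and the exact formula here are invented for illustration; the real autoscaler is more involved:

```go
package main

import "fmt"

// desiredPods picks a SQL pod count for a tenant from CPU samples
// (fractions of a core, 0.0-1.0) gathered over the last window.
// Thresholds are made up; only the shape of the decision matters:
// scale to zero on no traffic, grab a hot pod when running hot,
// return a pod to the pool when mostly idle.
func desiredPods(current int, samples []float64) int {
	if len(samples) == 0 || current == 0 {
		return current
	}
	var sum, peak float64
	for _, s := range samples {
		sum += s
		if s > peak {
			peak = s
		}
	}
	avg := sum / float64(len(samples))

	switch {
	case avg == 0 && peak == 0:
		return 0 // no traffic at all: scale down to zero
	case avg > 0.7 || peak > 0.9:
		return current + 1 // running hot: assign a hot pod
	case avg < 0.2 && peak < 0.4 && current > 1:
		return current - 1 // mostly idle: release a pod
	default:
		return current // steady state
	}
}

func main() {
	fmt.Println(desiredPods(3, []float64{0.8, 0.85, 0.9})) // busy -> 4
	fmt.Println(desiredPods(3, []float64{0.1, 0.05, 0.1})) // idle -> 2
	fmt.Println(desiredPods(3, []float64{0, 0, 0}))        // none -> 0
}
```

The point of keeping the top layer this simple is that it's easy to reason about and tune; all the hard work lives in gathering trustworthy metrics and in how fast a hot pod can be attached.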
Storage doesn't change all the time, whereas this ephemeral compute does. So we just take a hot pod, assign it to a tenant, change the manifest, and away we go — now we have excess capacity. When things come back to normal and our autoscaler realizes it doesn't need as much compute, it brings that pod down and throws it back into the pool. And when there's no traffic — middle of the night, nobody's using tenant one — it spins all of those ephemeral pods down. We're really careful about how many hot pods we keep running at any one time, to maintain enough headroom for when new requests come through. We spin things up and down pretty quickly out of that pool of pods, but we will spin pods down entirely if there's no usage within this big shared cluster. Ultimately, that's how the autoscaler handles these things. And in the end, things come back very quickly: if tenant one suddenly has a query, I can grab one of these hot pods and bring things back very fast — under 100 milliseconds, closer to 50 — and we're always working hard on making that resume faster, because you don't want any delay. Ultimately, we've re-architected our software stack, and using Kubernetes we're able to scale up and down based on the needs of that particular application. It's pretty clean, pretty clear, pretty elegant, and it's working well for us. Right now we have one cluster of CockroachDB serverless in AWS and one other in GCP.
We have over 30,000 clusters running within our environments today. It's a little heavy on AWS versus GCP — roughly two-thirds to one-third, but regardless, it doesn't matter. You can imagine the number of SQL pods running across the distributed stores: over 30,000 tenants across the two. And that's really just two instances of this persistent layer, with lots of instances of these ephemeral pods living on top — 30,000 of them across the two clusters. Eventually we'll make this multi-region as well, not just multi-AZ, so stay tuned. We're actually playing with that; we have versions of it, and if you really want to try it, reach out to us on our public Slack, or you can email me — Jim at Cockroach Labs — and I'll get you moving on that. Then we can do serverless across multiple regions, which presents a whole other set of problems. Okay, so we've talked about the ephemeral compute above the line. What about below the line? How did we make our replicated, distributed storage multi-tenant? Because if I'm going to have that persistent storage layer, to us it is multi-tenant: there's a lot of data down there — the 30,000 clusters are all sharing one big storage cluster. So how do we firewall off that data between different users and different applications, and tie it to particular tenants within the cluster itself? That's going to take a bit of a dive into how CockroachDB works at the storage layer, which I hope is also valuable to you, because this is one of those core things within distributed systems that gets really interesting.
So let's go below the line, into the persistent layer. In CockroachDB, all data is stored as one monolithic logical key space, and that key space is sorted lexicographically by key. If you look at this dogs table, here are all the entries — the primary key is the name — and you'll see it's all in alphabetical order. It's not exactly like this, but there's a key and a value, and you get the idea: it's one huge sorted set of KV pairs underneath the covers. Now, CockroachDB, to you and me as developers, is just SQL. It's a relational database: it looks like Postgres, smells like Postgres, feels like Postgres; you get an instance up quickly. So we as developers communicate in SQL, and underneath the covers we use KV to provide the value of the stack within the database itself — and I need to dive into that to show you how we made the storage layer multi-tenant. For each table stored in CockroachDB, we break the key space down into these 512-megabyte ranges. We've chosen this size because it amortizes indexing and lets us move things around quickly; we've really optimized around it — when I started, I think we were at 256. We've gotten really good at cleaning up databases and moving data around. You can think of these ranges as shards; this is how we automate sharding. But if you're going to break the table down into pieces like this, you need an index structure so you can find the data within the cluster.
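The range lookup can be sketched in Go. This shows only the idea — sorted range start keys plus a binary search to find which range holds a key — with invented boundary keys; the real index is a distributed B-tree-like structure, and ranges split by size (roughly 512 MiB), not a fixed list:

```go
package main

import (
	"fmt"
	"sort"
)

// rangeFor finds which range holds a key. Each range covers
// [starts[i], starts[i+1]); starts[0] is the empty key, covering the
// beginning of the key space. This is a toy stand-in for the real
// distributed range index.
func rangeFor(starts []string, key string) int {
	// Find the first start key strictly greater than key;
	// the key lives in the range just before it.
	i := sort.Search(len(starts), func(i int) bool { return starts[i] > key })
	return i - 1
}

func main() {
	// Three ranges over a hypothetical dogs table, split at invented keys.
	starts := []string{"", "dog/100", "dog/200"}
	fmt.Println(rangeFor(starts, "dog/034")) // range 0
	fmt.Println(rangeFor(starts, "dog/150")) // range 1
	fmt.Println(rangeFor(starts, "dog/250")) // range 2
}
```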
And we have that; it works very much like a B-tree, if you're familiar with that structure. What we use underneath the covers to manage all of this is an algorithm called Raft. It's a distributed consensus algorithm, which lets us guarantee that writes happen together and that reads are consistent no matter what happens to the data. If you're new to distributed systems, I'd highly recommend reading up on Raft. There's another concept we use in distributed transactions called MVCC, which is equally interesting, but Raft is really one of the core distributed systems algorithms and really important. A Raft group is a set of replicas. On the right-hand side you'll see this blue set of data — think of it as the middle range here: Lady, Lula, Muddy, and Petey. I have three copies of that data, and I'm going to write those copies on three physically different instances. That's a Raft group: those three replicas communicate with each other to make sure the data is correct and aligned across the three instances. It could be five copies, or seven, or nine — some odd number — because we use this to get quorum on writes: if two of three are in agreement, the third is out of favor, and it will be brought back in line with the rest of the Raft group. Now, to do all this it's chatty — there are coalesced heartbeats, and time is a very important concept. But there's a special replica, and that is the Raft leader. In the Raft protocol you elect a leader, and the leader is responsible for all writes to the entire group.
If you ask the leader for data, you're certain to get the authoritative, up-to-date information, and by default a query goes to the leader for that data. You can also do something called follower reads, which relaxes some of the consistency demands: maybe I ask this other replica for the data, and because I'm using a follower read, that's okay. It really depends on your query and what you want to accomplish. This Raft leader is what gives us atomic replication: I talk to the leader and say, here's my data, go write it into the replicas, and only the leader can answer back, "I'm good." This is what we use to ensure consistency across the database. If you want to learn more about Raft and distributed consensus, go check out the Raft website: it has a really great visualization that explains how Raft works much better than I ever could. I've sent a lot of people there, and I think it's really great. Okay, so when we place data — let's go back to the picture — we place data into CockroachDB, and basically I take my Raft group and write the three blue copies across three nodes, and the red copies across three nodes. There's a lot more smarts in here; we can place replicas based on workload knowledge, like how much traffic is going through a range. But ultimately, we're writing this data across multiple physical instances. Now, let's come back again to this model of the key space, where we have this huge key store.
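The quorum arithmetic behind "two of three in agreement" is worth seeing on its own. This sketch is just the majority-counting rule that makes odd replication factors work, not a Raft implementation:

```go
package main

import "fmt"

// committed reports whether a write acknowledged by acks of replicas
// replicas has reached quorum: Raft commits once a strict majority
// agrees, which is why replication factors are odd (3, 5, 7, ...).
func committed(acks, replicas int) bool {
	return acks >= replicas/2+1
}

func main() {
	fmt.Println(committed(2, 3)) // true: 2 of 3 is a majority
	fmt.Println(committed(1, 3)) // false: the leader alone is not enough
	fmt.Println(committed(3, 5)) // true: 3 of 5
}
```

This is also why a single out-of-date replica doesn't block progress: as long as a majority agrees, the straggler is "out of favor" and catches up later.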
And we expose this as SQL, so let's come back up a layer: the language is SQL. The way this works is that all tables have a primary key; the key is what we sort on, and the value is the value. Examples are the best way to show it, so here's a table — a dogs table with ID, name, and weight — and some table entries on the right-hand side. You'll see Carl weighs 10.1 pounds, I guess. For each of these records, when we convert it to KV underneath the covers — and you never see any of this in your data; it all happens under the covers — we say: take the name of the table, which is dogs; take the primary key, say 34; take the value, the name, and write it. And the entire list you're seeing is sorted — sorted by key. If you look at the first two pieces of each key — the table and the primary key — that's really what gives us the ordering here. So we have this huge sorted set, and that's what lets us do some interesting things: we can add a column value into the key, which allows something called geo-partitioning — writing data to certain places. But ultimately it's this huge monolithic key space that I'm chopping up into ranges, with all the data in each part of the table in its various pieces. Okay, so what did we have to do to make our storage layer multi-tenant? All we had to do was take the identity of a tenant and put it at the front of the key. Now, if you sort everything, all of this tenant's data is together, and all of that tenant's data is together.
Right — in this huge lexicographically ordered key space, we now have ranges that are tied to particular tenants. So when you search for data, that tenant is the only one that can access the data under that key prefix. Not only have we broken this huge back end into individual pieces, we now have data that's restricted to a particular tenant — secure access to the ranges. The data belongs to the tenant. So underneath the covers, we've made the storage layer multi-tenant as well. Above the line, in ephemeral compute, we use Kubernetes to scale up and down with an autoscaler; below the line, in persistent storage, we use KV to implement a multi-tenant storage layer. Okay. Those are the two big concepts behind CockroachDB serverless, and it's actually pretty simple and pretty straightforward — and I think it's interesting and fun to think about in your own applications. So yes, CockroachDB has a serverless version today. We've taken all the beauty of our database — it's active-active, we do geo-partitioning, it's a fully managed service, it's a relational database — and the best way to consume it is via our serverless deployment model. It delivers elastic scale, we can spin clusters down to zero so you don't have to pay, it's multi-tenant, and most importantly, it's consumption-based. We really feel it will ultimately be this frictionless SQL API in the cloud — eventually, once we have endpoints around the whole planet and get to multi-region. It lets developers focus on what you want to focus on: a familiar development environment, without worrying about scale.
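The tenant-prefix trick can be shown in a few lines of Go. The real key encoding is a compact binary format; this human-readable stand-in just demonstrates how putting the tenant first makes each tenant's data contiguous under lexicographic sort:

```go
package main

import (
	"fmt"
	"sort"
)

// encodeKey prefixes every KV key with the tenant, then the table,
// then the primary key. A readable stand-in for the real binary encoding.
func encodeKey(tenant, table, pk string) string {
	return "/" + tenant + "/" + table + "/" + pk
}

func main() {
	keys := []string{
		encodeKey("tenant2", "dogs", "34"),
		encodeKey("tenant1", "dogs", "77"),
		encodeKey("tenant1", "dogs", "34"),
		encodeKey("tenant2", "dogs", "12"),
	}
	sort.Strings(keys)
	for _, k := range keys {
		fmt.Println(k)
	}
	// After sorting, all of tenant1's keys come first and are contiguous,
	// then all of tenant2's — so ranges can be carved out and access
	// restricted per tenant.
}
```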
Don't worry about downtime. You know, eliminate the concept of ops from your life and basically go cloud. Right, I think that's the big takeaway here: if you think about the core principles of serverless, we're delivering all of those within the database itself. Okay, so serverless is available now. It's not even beta, it actually came out of beta last week; it's GA. Right now it is single-region, and it'll be multi-region fairly soon. It's free to use, no credit card required: free up to five gigabytes of storage and 250 million request units, and I'll come back to request units. Most importantly, you can set a spend limit, which is down here at the bottom of the screen, so that you aren't going to get a surprise. One of the problems with serverless is, it's like, oh my god, the thing just went out of control and was pegged, and I got a bill at the end of the month. We temper that: we allow you to set a spend limit, so if it's two bucks, you never pay more than two bucks a month, or ten bucks, or a hundred, or whatever it is that you want to do. And this is forever free. You can get a database up and running in seconds using this thing. I know one of my peers, Charlie, just created a cluster; it's pretty easy to get going. He even had his mom create a serverless cluster and connect to it. It's a kind of funny video that we came out with recently, if you want to go check it out; I know it's on our Twitter account today, something like "CockroachDB serverless and mom," maybe that'd be a good query. But yeah, it's pretty easy to get going, so basically you have an instance of Postgres in seconds and for free.
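The spend-limit idea is worth pausing on, because it's what makes consumption-based billing safe. A minimal sketch of the mechanic, with made-up numbers: the price per request unit and the exact formula here are my own illustration, not the actual billing logic; only the 250 million free request units and the spend-limit concept come from the talk.

```python
FREE_RUS = 250_000_000        # free-tier request units mentioned in the talk
RU_PRICE_PER_MILLION = 1.00   # hypothetical price, purely for illustration

def monthly_bill(rus_used, spend_limit):
    """Consumption-based billing with a hard spend limit.

    Usage beyond the free allotment is billable, but the bill can never
    exceed the limit you set -- past that point the service throttles
    instead of charging you. (A toy model of the idea, not the real one.)
    """
    billable = max(0, rus_used - FREE_RUS)
    cost = billable / 1_000_000 * RU_PRICE_PER_MILLION
    return min(cost, spend_limit)

print(monthly_bill(100_000_000, spend_limit=2.00))  # within the free tier: $0
print(monthly_bill(900_000_000, spend_limit=2.00))  # runaway usage: capped at $2
```

The design point is the `min()`: a runaway workload degrades to throttling rather than an open-ended bill, which is the "no surprise at the end of the month" guarantee.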
Now, the question is: what is a request unit? A request unit is basically an abstraction of the cost of a query or a transaction, because in a database not all transactions are equal, right? A "select star from customers" is very different from a "customers where last name is this and first name is this and the address is blah blah," whatever; you can understand that each query is very different. And so we've abstracted that out into something called a request unit. And, you know, you'll be able to handle a certain amount of volume over the course of the whole month. Maybe it's spiky, maybe it's cyclic day and night, maybe it's just that at the beginning of the month you have a huge spike in traffic. And then, you know, we're going to throttle back so you have enough bandwidth for the rest of the month, based on what's being used at any one moment in time. And so basically the volume underneath that curve, whatever your usage pattern is, is always going to be about 250 million request units. Again, engage us in our public Slack channel if you want to learn more about request units and what that actually means; I know we have some documentation on that. All very interesting, but I think as you think about your own service and what you're going to do, you know, how do you actually start to do this kind of consumption-based billing, and what's that going to mean? Great. Thank you, everybody. I think, you know, there are no questions; I know this is a fairly straightforward topic. I hope it's valuable for people. We love talking about these things. We're pretty proud of ourselves as a company in terms of what we're building; we do think this is the future of where a lot of things are going. You know, I'm excited to get to Detroit in a couple weeks, to be in the booth and actually engage with everybody again, you know, for KubeCon
North America. We'll be there at booth 26; come by, get a demo, get a free t-shirt. We're going to have a happy hour while we're there, so we'd love to buy you a beer and hang out. But with that, I just want to make sure that you all have a chance to come and meet us. There were no questions along the way; I guess I'm happy and sad about that, but I do hope this was valuable for you all. You know, we do like to provide content that's not just about what we're doing, but that's hopefully useful for you in terms of the way you think about your own application architecture and your stack, and how you're thinking about serverless. I've really learned a lot about this over the last couple of years, and I'm pretty excited about it. So with that, I just want to thank everybody for taking the time. I did want questions; again, our public Slack channel is a great place to go. Do provide feedback on webinars along the way, because gosh, we sure do like that. So with that, I'm going to pass it back to Candice, and I want to thank everybody for taking the time today. Oh wait, there was one question. Somebody is asking whether we have TPC-H or TPC-DS performance benchmarks for CockroachDB. We run TPC-C nightly, and we're playing with TPC-E right now as well. TPC-H and TPC-DS, I don't think we've run those two, to the person who's asking the question. But if you go into the CockroachDB docs and just do a search for performance, you'll see our benchmark numbers published there.
So those are things where, you know, we compare ourselves to other solutions, of course, but benchmarks, for us: we would very rarely publish a benchmark about another database, because it's kind of difficult to be an expert in your own thing and everything else, right? So we're doing everything that we can to push the TPC benchmarks into the distributed world, because right now the problem with the TPC benchmarks, and TPC-C in particular, is that they're not distributed. So look for work from us that's going to talk a lot about public benchmarks and how we think about these things in distributed systems, because I think we're living in a new world, and those benchmarks haven't been updated in really quite some time. Alright, so I'm going to give it like two seconds just in case there are more questions. Dramatic heavy pause. Alright, well, if there are no more questions, I want to again thank everybody for taking their time today, and I want to pass it back to Candice, our gracious host, to send you all off. Thank you, Candice. Thank you so much, Jim, for your time today, and thank you, everyone, for joining us. As a reminder, this recording will be on the Linux Foundation's YouTube page later today. We hope you join us for future webinars. Have a wonderful day.