All right, can you hear me, guys? Before I start, can I have a selfie with you all? Is that okay? All right. Thank you so much.

Well, I'm Harshit. I work at PlanetScale, I've been a maintainer of Vitess, and I've been with the project for almost six years now. With me I have Manan, who also works at PlanetScale. He joined as an intern, he's now also a maintainer of Vitess, and he's been working on it for more than a year. We're talking about scaling databases with Vitess today.

Let's go over the agenda. We'll talk about what Vitess is, where it evolved from, what it looks like, and what problems it's trying to solve. After that we'll go into the demo: we'll show you how we can import data from AWS RDS into Vitess and how it works seamlessly with Rails. And depending on the time, we'll also talk about some upcoming features.

So Vitess is basically a horizontal sharding solution on top of MySQL, which means that if you're running out of scale on MySQL, you can put Vitess on top and it will do seamless sharding for you. I'll talk about it in more detail, but that's what Vitess is at a high level. It's a CNCF graduated project. It was started at YouTube in 2010 and was later donated to the CNCF. It's an open source project, and we have a highly distributed community, with contributors across the world contributing to Vitess.

These are some of the companies that are running millions of QPS in production on Vitess. One of them is JD.com, which did 35 million QPS on their Singles' Day in 2019, and they've been doing much more on Vitess since. There's Slack, which is running 100% of their databases on Vitess. There's Square's Cash App, a financial app, which is also running on Vitess. And there's PlanetScale, which offers a database powered by Vitess and is running tens of thousands of clusters by now.

So what does it solve? We talked about scalability issues on MySQL, and there are three kinds of scalability in general. You may have a lot of QPS that you can't serve from a single MySQL, so you need to shard so that your QPS is distributed. Similarly for your data: you're running into terabytes or petabytes of data that you can't keep in a single MySQL instance, so again you have to shard in order to scale. And also connections: you have a lot of application instances all trying to connect to a single MySQL, and you're running out of connections. So Vitess does connection pooling for you so that you can scale.

Let's go into some features. Because I talked about sharding, you'll have a lot of MySQL instances running, one set per shard, and that would be very difficult to manage by hand. So Vitess acts as an easy cluster manager for you: if your primary goes down, it knows which replica to bring up as the new primary, it will fix up the replication for you, and your application just keeps serving traffic.
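As a rough illustration of what that looks like from the operator's side, the same kind of failover can also be triggered by hand through Vitess's admin tooling. A minimal sketch, assuming a keyspace named `commerce` with a shard `0`; the exact flag syntax varies across Vitess releases:

```sh
# Planned failover: demote the current primary of shard commerce/0 and
# promote a caught-up replica in its place (flags vary by version).
vtctlclient PlannedReparentShard -- --keyspace_shard=commerce/0
```

VTOrc, which comes up later in the talk, is the component that performs this kind of reparent automatically when a primary actually fails.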
And as part of cluster management, because you've done sharding, you might also have to move tables across databases. Vitess knows how to do that: you can move tables from one database to another, and if you run out of capacity on one shard, it knows how to do resharding so that you can keep scaling your database.

Another feature is migrations, meaning schema changes: creating a new table, adding a column, dropping a column, things like that. Vitess provides online DDL operations, which means you don't have to take any downtime; it just does seamless migrations for you.

Another one is backup and recovery. You can tell Vitess where you want backups stored, and it will take the backups for you. And whenever you start a new replica, it knows which backup is the latest, pulls it down, plugs the replica back into the primary, and fixes the replication for you. Once the replica has caught up, it goes back into the serving set and starts serving traffic.

Another is query consolidation. Suppose something just went viral and your database starts receiving the same query over and over on the same shard. Vitess does query consolidation for you, which is a form of hot row protection: instead of sending all the identical queries down, it buffers them, sends only one query down to MySQL, and fans the same result back out to all the callers. So it tries to protect your database from going down.

Another is protection against human error, where you just wrote a bad query, or tried to select all the data from all the shards, or tried to update every row. Vitess puts limits around those, so you won't take your database down that way. And you can also do blocklisting: you can give it query patterns that should not be allowed, and it will ensure those query patterns never reach MySQL.

Before I go into the architecture, there are just two terms I want to introduce. One is a shard, which I already talked about: a shard is nothing but a primary and a number of replicas running together, and you'll have multiple shards. And a keyspace is basically a synonym for your database: a keyspace is a collection of shards. We'll be using the keyspace term in our demo as well.
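To make those two terms concrete: once a cluster is up, you can inspect them directly through VTGate (introduced next) with any MySQL client. A small sketch; the exact `SHOW` variants available depend on the Vitess release:

```sql
-- Connected to VTGate with a stock MySQL client:
SHOW DATABASES;       -- lists keyspaces, e.g. "rds" and "vitess" in the demo below
SHOW VITESS_SHARDS;   -- e.g. "vitess/0" unsharded, or "vitess/-80", "vitess/80-"
SHOW VITESS_TABLETS;  -- one row per tablet: cell, keyspace, shard, type, state
```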
Now let's go into the architecture. If you look here, we have shard 1, shard 2, through shard N, which all belong to one keyspace. In each shard we're running MySQL, and on top of it we have something called VTTablet, which is a sidecar running along with your MySQL.

VTTablet is the component that manages your MySQL: any query that comes in will ultimately go through it before being sent down to MySQL. It's also the one that does the connection pooling, meaning all connections to MySQL are handled by this sidecar. It exposes a gRPC API to the outside world, and the outside world is basically the VTGates.

The VTGates are the components that talk to the VTTablets over gRPC, and they are the entry point for your applications. VTGate receives all the traffic; it does the query parsing for you, then the query planning, and then it executes those plans. VTGate knows where your data resides, so it sends each query to the right shard and does the right inserts, selects, and so on. It also has an evaluation engine, which means it knows how to evaluate a query, how to do aggregations and ordering and so on; there's a whole engine written into it so that you can do cross-shard queries.

VTGate is stateless, which means you can run any number of VTGates depending on how much QPS you have, and scale accordingly. On top of them you can put a load balancer, so that clients can discover and connect to the least-loaded VTGate. And VTGate speaks the MySQL protocol, which means you don't have to change anything in your application: you can just use the normal MySQL driver available for your language, and it will continue to work as is. The only change is that instead of going directly to MySQL, you go through the VTGates.

There's another component called VTCtld, which is the admin part of the Vitess cluster; it helps you manage the cluster itself. And we have the topo server: you can plug in any supported topology server for storing Vitess metadata. That's all for the architecture and the intro part of Vitess. I'll hand over to Manan, and he'll walk you through the demo.

Hello, can you guys hear me? I think it's working fine, right? Okay. I'm Manan. Hello, everyone. I'm going to take you through the demo. The demo is really simple to follow, but it's extremely powerful in what it does.

We're going to start with a Rails app connected to an RDS instance, and the entire point of the demo is to move from that RDS instance to Vitess without downtime, moving all your data from RDS into Vitess. The first step is to have a Vitess cluster with a keyspace called rds on top of RDS itself, so Rails will be querying RDS by going through Vitess: the data will still be stored in RDS, but the queries will be routed through Vitess. We'll be using the VTGate endpoint and a VTTablet on top of RDS as an external datastore. Once we have that running, we're going to copy all the data from that rds keyspace into a keyspace called vitess; that's the name of the keyspace we'll use for storing the data locally in Vitess. While we're doing that, our traffic is still going to RDS.
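Since VTGate speaks the plain MySQL protocol, that first step only changes where the app connects. As a sketch, using the values from this demo (a local VTGate on port 15306, a keyspace named rds, a user named user):

```sh
# Any stock MySQL client or driver works against VTGate; the keyspace
# takes the place of the database name.
mysql -h 127.0.0.1 -P 15306 -u user rds
```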
Once that's done, we're actually going to switch the traffic from the rds keyspace to the vitess keyspace. At this point RDS is still running and has your data, but the data has also been copied over to Vitess. After that, once everything is done and working, we can remove the RDS instance. You can choose to destroy the data, keep the data, do whatever you want with it, but at that point your Rails app is running entirely on Vitess. So this is the workflow we're going to follow for the demo. Let's get started.

All right. Over here I have an RDS instance already running; it takes a little while to spin up, so I spun it up beforehand. If you take a look at the databases we have, there's one called rails_app. That's the one we're going to use. Just before we continue: is this font size fine for everyone? People in the back, can you see that? All right, great.

So here I've already set up Rails as an app. You can see there are two internal tables that Rails uses, ar_internal_metadata and schema_migrations; both of them are Rails internal tables. And there's one extra table we've created, called users. We're only using that table for the purposes of this demo. Essentially, we're going to insert data into that users table and record the latency, how fast or how slow it was. So this is what we have on the RDS side; Rails is set up and we've already run the migration itself.

I'm going to spawn the Rails server now. So we go over here, we start this page. There we go. I had already inserted a small amount of data at the beginning, but now it's going to insert about four users every second, and it keeps track of the latency those insertions took, averaged over five seconds; that's what we're plotting over here. We're also keeping track of the total error count we see while doing those insertions. Right now we're starting with zero errors, but we'll see how we go.

Over here we're going to set up a watch on the number of users. It's a very simple watch: we're just doing a select count(*) from users, and this is running against RDS. So you can see that the user count is increasing on RDS; we're running against that.

Now, if you remember from the slide, the second thing we want to do is bring up a Vitess cluster and put it in front of RDS. I already have Vitess running as well. I'm running it in kind, so I am running it on Kubernetes, just not on GCP or AWS; you can do that too, the Vitess operator works with all of those platforms. Right now I'm running in kind locally, and I already have it set up, so I can really quickly show you the configuration I'm using.

If you look at the configuration over here, we have keyspace configurations. We have one for the vitess keyspace; we're not using it right now, but we eventually will, as I showed you in the slides. That one is completely Vitess-managed, so it will also have a MySQL instance running inside. And over here we have the rds keyspace. If you look, it's using an external datastore, so it's not running MySQL directly, and here you're providing the parameters for how to connect to that external datastore: the RDS host, the RDS user, and the RDS database name.
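The shape of that configuration is roughly the following. This is a trimmed, illustrative sketch of a vitess-operator keyspace spec; the field names and nesting here are from memory, so treat it as a sketch and check the operator's examples for the exact schema:

```yaml
keyspaces:
  - name: vitess              # regular keyspace: the operator runs mysqld itself
    # ... tablet pools with a local MySQL ...
  - name: rds                 # fronts the existing RDS instance
    partitionings:
      - equal:
          parts: 1
          shardTemplate:
            tabletPools:
              - type: externalmaster
                replicas: 1
                externalDatastore:
                  host: <rds-endpoint>    # filled from RDS_HOST in this demo
                  port: 3306
                  user: <rds-user>
                  database: rails_app
                  credentialsSecret: {}   # reference to the password secret
```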
These are all environment variables that I've set in the startup script. So now that we have Vitess running, the goal is to move the traffic so that instead of going directly to RDS, it goes to RDS through Vitess. Let's go ahead and do that.

We go into the Rails database configuration. Right now we're going to use those same environment variables that I configured for Vitess, RDS host, RDS database name and so on, and we can change them. The host is localhost, because I'm running it locally. The database we're going to use is called rds, because that's the name of the keyspace, and like Harshit said, a keyspace is the Vitess analogy for a database. The user we're going to use is user; all of this is configurable in the configuration of the operator. Right now we're not running with any password, so there's no protection on my cluster, but it's local; please don't do that in production. And the port we have is 15306.

Take a look at the error count and the latency right now. I haven't restarted yet; for any database configuration change, I need to restart the Rails server. There you go. Here you see the error count increased to about nine, but remember, this error count is because I changed the Rails configuration: I had to take the server down for a while and then spawn it again, and we got errors during that time. The latency spike is going to settle back down in a moment.

This is the VTGate page, the Vitess VTGate. Do you remember the proxy layer on top, the one the users connect to? This is its built-in status UI, and it shows the QPS it's serving. Early on it was zero QPS, because we weren't querying Vitess at all, but now we're going through Vitess even to RDS, so we have this QPS graph coming up at the VTGate.

Okay, so this is all working fine. We're still going to RDS; look at the latency, it has come down and stabilized to the point it was at before. It spiked up for a bit, but it's back down to where it used to be. And it's still going to RDS, so the data is in RDS: if you go over to the watch on RDS, you see that the data is still being stored in RDS, it's just going through Vitess.

The next thing we want to do is move the data from that rds keyspace into a vitess keyspace, which, as we saw in the configuration, is running completely locally. To do that, we have a command called MoveTables. I've already spawned this command off as well, before I started the demo. The MoveTables command takes a couple of arguments: you give it the source keyspace, which for us is rds, and you give it the destination keyspace, which for us is vitess, together with a workflow name, rails_app here; you can choose to provide anything for that name. The --all parameter says move all the tables; since I'm moving all the data, I want to move all the tables, the internal Rails tables and also the users table that we're using for the demo.

At this point, we can check the progress we've made. There we go: the copy is complete, but we still have a VReplication stream running. What that stream is doing is this: because you're constantly inserting data into RDS, you want to keep copying it into Vitess as well, because you don't want to lose data when you cut over from RDS to Vitess.
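Spelled out as commands, the workflow so far looks roughly like this. It's a sketch using this demo's names; vtctlclient flag syntax varies between Vitess releases:

```sh
# Copy every table from the "rds" keyspace into the "vitess" keyspace.
# "rails_app" is just a workflow name chosen for this demo.
vtctlclient MoveTables -- --source rds --all Create vitess.rails_app

# Check on the copy phase and, afterwards, on the VReplication stream
# that keeps applying new RDS writes to the vitess keyspace.
vtctlclient MoveTables Progress vitess.rails_app
```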
So we keep a stream running which is continuing: you can see the status says running for the VReplication stream, and it has a lag of about a second, but it keeps applying the data that you're inserting into RDS into the vitess keyspace as well.

At this point, we're ready to switch the traffic from the rds keyspace to the vitess keyspace. Take a look at the error count right now, it's nine, and let's do a SwitchTraffic. Okay, it's taking a moment. All right. That one-second lag is about the amount of time it takes to catch up, because it stops the writes on RDS, buffers those writes, and starts applying them on the vitess keyspace.

If you look at the initial state, we had switched neither the reads nor the writes; both of them were going to the rds keyspace. In the end, our current state is that all the reads and the writes have been switched. You can choose to do this in two parts: just switch the reads first and then switch the writes, or the writes first, in which case part of the traffic would be going to the vitess keyspace and part to the rds keyspace. In this demo, I've done both together, so reads and writes are both switched.

And if we go over, you see that the latency has dropped. That latency drop is because I'm running Vitess locally: all the calls that were going out to RDS servers spawned in Paris are now just going over the loopback interface. So this drop is coming from running Vitess locally. And you see the error count; a few errors happened, but I'll come back to that point later.

Okay, so we've done the SwitchTraffic. At this point, we've copied the data over and we're using the vitess keyspace. Once you're happy and you've seen that everything is working, you can actually change your app configuration. If you look over here, the app is still connected to the rds keyspace. The way this works in Vitess is routing rules; I'll show you them. Vitess transparently creates the routing rules for you, so that when you do the SwitchTraffic, even though a user is connected to the rds keyspace, their queries actually go to the vitess keyspace. If you look over here, it says that any query that comes in for the rds users table should actually be served from the vitess users table, because you've done the SwitchTraffic. So essentially, your app does not need to change when you do the SwitchTraffic.

Once you're done with it and you're happy with everything, you can change your app to start serving traffic directly from the vitess keyspace. There we go: all we need to do is change this setting to vitess, and like before, for it to take effect, we have to restart the Rails server. At this point, we're querying Vitess directly. So we're at the last step of the demo: we have RDS running, we have Rails running, and it's querying Vitess directly; RDS is not being used.
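The remaining lifecycle steps, again sketched with this demo's names (flag syntax varies by release, and the routing-rule JSON is abbreviated):

```sh
# Cut reads and writes over from "rds" to "vitess" in one go; a
# tablet-types flag lets you switch reads and writes separately instead.
vtctlclient MoveTables SwitchTraffic vitess.rails_app

# Inspect the routing rules Vitess installed at the switch; roughly:
#   {"rules": [{"from_table": "rds.users", "to_tables": ["vitess.users"]}]}
vtctlclient GetRoutingRules

# Later, once you trust the new keyspace (the end of this demo):
# finalize the workflow; --keep_data leaves the copy in RDS intact.
vtctlclient MoveTables -- --keep_data Complete vitess.rails_app
```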
But if you take a look at the watch over here, it's still receiving data. The reason we do this is that until you actually complete the entire MoveTables workflow, we insert the data back from Vitess into RDS, so that if something doesn't work out, if some of your queries don't work against Vitess (with a sharded database we have MySQL compatibility, but there are some queries that won't work), if you hit roadblocks in that sense, you can actually go back, and you won't lose any data.

Once everything is done and you're happy with everything, you can go ahead and complete the workflow. Complete takes one more parameter, keep data: I'm choosing to keep the data in the RDS server. If you don't provide this parameter, by default it will go ahead and delete all the data that you had in the source keyspace. And if I go and look at RDS now, this count has stagnated; it's not going to increase any further. It has stabilized because we've completed the workflow: everything has moved over to Vitess, and we no longer need to insert data back into RDS. And if you look over there, insertions are still happening on Vitess while all this is going on. That is it for the demo.

So let's see how much time we have. Okay, we have about 10 minutes left, so we can talk about the upcoming features. I actually created these slides without being sure how much time I'd have, so I thought I'd cover them in whatever time was left; we have enough.

The first one is VTOrc, the Vitess orchestrator. It's a fork of Orchestrator that we have tailor-made for Vitess. Essentially, Harshit talked about automatic fixes earlier. If you're running Vitess on Kubernetes, you could have a MySQL pod that got evicted and restarted, and when it comes back, its replication might not be set up correctly, things like that. Right now, if you're managing things yourself, someone has to go in and fix that replication. VTOrc is the automated component of Vitess that does this automatically. It goes over the failure scenarios and checks them: that you have a primary; that a replica is connected to that primary; that it's not failing to send semi-sync acks; that it isn't wrongly set to read-only. There are a bunch of failure scenarios, and whatever goes wrong, VTOrc is the single component that can fix it. We have operations that you can run manually through VTCtld as well, but VTOrc is the automated failure-fixing component that Vitess offers; if you have it running, anything that fails, it can go and fix.

It can do emergency reparents on shards. If the primary fails, it can even switch traffic over to a replica server and guarantee that there won't be any data loss: it will find the replica which is the most advanced, and if you're using semi-sync, there will be at least one server which has all the transactions, right? So we promote that one, and that is how we guarantee that you won't lose any data even if your primary fails. And these things do happen in production. I was a little scared about the demo as well; I had the internet running off my own phone, but you never know, right?

The next thing we have here is VTAdmin, and that is a new and improved UI. Right now I can show you the VTCtld UI that we have today. Okay, here it is; port 15000 should be right. So this is the UI that you have today, the VTCtld UI. It shows the keyspaces and the number of shards you have in each keyspace. Right now we're running unsharded, so we only have one shard, but you can shard out into multiple shards as your data grows.
And if you go in here, you get the serving shards and the list of tablets that we're running; currently we're running in one-primary, one-replica mode. This is the old UI. It does the job, but I can't actually show you VTAdmin right now, I don't have it running locally. If you go over to the Vitess docs and spawn it off yourself, you'll see that it's a much better UI. You can look at how queries are run against Vitess, you can look at the instructions Vitess will run to actually execute the query, which is good for debugging, and you can do all the operations there as well. So you don't need to go to the CLI and run start replication or something like that; you can do it through VTAdmin. VTAdmin is going generally available in the next release of Vitess, and VTOrc is going GA in the release after that. So these are the two great features coming up in Vitess.

Yeah, just to add to that: VTOrc is already used in production, by a company that does not want to be named. So it's already in production use, but we call it experimental because we want to make sure that the Vitess community is ready to take it on. It's there, people are already using it, and the Vitess 14 release is going to happen next month; it's going to be very stable there in any case.

Okay, so these are the resources. I have the link to the demo; you can go to that link and follow along, it has all the instructions you need. There's only one prerequisite, which is that you need to have RDS running. These are the docs, the code link for the Vitess website, and the Twitter handle. We're open for questions now. Thank you.

That VTTablet component, is that basically your consensus state machine? Could you repeat the question? So on top of MySQL, you mentioned you have this VTTablet component. Can you talk a bit more about what that is? Is that just consensus?

VTTablet is a sidecar instance. Basically, if a component like VTOrc wants to run something against MySQL, say, to set up replication, then instead of querying MySQL directly, we prefer to have a sidecar instance, VTTablet, which supports the gRPC protocol and a few other things that you can use. So VTOrc will query VTTablet, and VTTablet is the instance that will run it on MySQL. The same goes for other components; it's for communication purposes, and for things like health: VTTablet also sends health streams to VTGate, so it's used for registry purposes and all of those things. But it's not the one that handles durability. Failovers happen through VTCtld or VTOrc, and the durability itself happens directly in MySQL: you have a primary instance and you configure how many semi-sync acks it requires to make progress. That is how you set the durability, and Vitess will guarantee that when it does failovers, it respects the durability policy that you set. Does that answer your question?
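As an aside on that durability point: in a semi-sync setup, the primary's MySQL is what enforces the ack count. A rough sketch using the stock MySQL semi-sync plugin variables (newer MySQL releases rename these to the source/replica wording):

```sql
-- Require at least one replica to acknowledge each transaction before
-- the primary's commit returns; with this in place there is always a
-- replica that a failover can safely promote without losing writes.
SET GLOBAL rpl_semi_sync_master_enabled = ON;
SET GLOBAL rpl_semi_sync_master_wait_for_slave_count = 1;
```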
Could you use the mic over there?

Hi, I have two questions about backups. The first of them is: where does Vitess store the actual backups? What providers?

Yeah. Vitess is basically supported by its community, and a few of the backup storage backends it already supports are things like S3 storage and plain file systems, and you can contribute more of them. You just configure, on the VTTablet sidecar that we showed, where you want the backups stored, and it will use that information to take the backup. And when it starts a new replica, it knows how to restore from the latest backup, hook it up to the primary, and start the replication.

Okay, thank you. And the second question is: does it support point-in-time recovery? Does it do binary log backups?

Point-in-time recovery? Yes. You want to take that one? We have a maintainer here; she's a team lead on Vitess.

Yes, Vitess does support point-in-time recovery. We don't have our own binlog server, but we integrate with Ripple, which is a binlog server. We do plan to provide a binlog server with Vitess so that we don't have an external dependency.

Anyone else? Does Vitess support MariaDB?

It used to, but then the MariaDB fork changed in incompatible ways, so we had to drop support after 10.4.

And no plans at all for supporting Postgres, right?

No. Yeah, that's it.

Well, one more question: what about support for hybrid or multi-cloud deployments? Sorry, can you say that again? Support for multi-cloud or hybrid cloud deployments.

Oh, the Vitess operator is there; you can run it with any cloud provider, the operator just works. To add to what Harshit said: the Vitess operator for Kubernetes works on any cloud provider's Kubernetes, but you will have to do the multi-cloud networking yourself; that part is not covered by the operator we currently have. But if you'd like to contribute it, yes, we would love to have it.

Okay, so I guess there's one more here. When you did the demo, it looks like you're effectively traffic-splitting between RDS and the Vitess keyspace, right?

No, we didn't split the traffic. It was going entirely to RDS earlier, and at one point we switched over and it was going entirely to Vitess. You can, however, split the traffic, but only between reads and writes: you can switch just the reads over to Vitess first and keep the writes going to RDS. Because your VReplication stream is still running, all the writes that go to RDS will eventually also show up in Vitess, so you're safe there, and you can do it in two steps. Writes will only ever run in one place. But we don't allow doing things like 20% of traffic going to RDS and 80% going to Vitess; things like that are not permitted. You can do one step for reads, one step for writes, or both together, which is what I did in the demo. And when we said the traffic is going through Vitess, that doesn't mean we're storing it in Vitess: it was going through the VTGate level that I showed, through VTGate, then VTTablet, and then on to RDS.

I'm curious if you also support the other way around, like Vitess to RDS.

We do, we do; you can switch the traffic back if you need. So the question is whether we support a migration from Vitess to RDS. I think you can do it, because MoveTables is written in such a way that it doesn't matter: you just have to define the target keyspace, and if you have defined that keyspace, you can do a MoveTables to it, and that keyspace can be RDS, like how we did it here.
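Concretely, backing out can be sketched two ways; the flag syntax is version-dependent and the workflow name below is made up:

```sh
# Before Complete: the reverse replication stream makes undoing the
# cutover a single command on the same workflow.
vtctlclient MoveTables ReverseTraffic vitess.rails_app

# Beyond that, you could run a whole new workflow in the other
# direction, with the external "rds" keyspace as the target.
vtctlclient MoveTables -- --source vitess --all Create rds.back_to_rds
```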
I think in the demo, you were writing to one and you said you can read from Vitess, but when you moved over, this one stopped updating.

When we did the switch writes? No, no, data was still going back to RDS. Only once we said Complete, once we had done the complete migration, did it stop. So we could have done the demo in a way where you switch the reads and writes over to Vitess but RDS is kept up to date; that reverse flow was there, it was happening, until we did the Complete.

So, step by step: first you have everything in RDS. You have Vitess on top of RDS. Then you copy everything to the vitess keyspace; at that point, RDS is still running and data is going to RDS. Then you do the SwitchTraffic, and we have a reverse replication stream running, which means that any data that's going to Vitess is also being copied over to RDS. Until you do the Complete, everything stays that way; we won't stop sending data to RDS. For the demo we had to show all the steps at once, but in real life, what people will do is wait a few days before they do the Complete and stop the reverse replication.

And that's the question. Hi, thank you for the presentation. During the demo, you mentioned the errors and that you would explain them later. Could you tell us what happened there? Thanks.

The errors, right. The errors you saw came up because we had to stop the Rails server to pick up the new configuration. In practice, you would do rolling updates of your Rails servers when picking up a new configuration, so those errors can be avoided. And sometimes you'll see a few errors, like we did in the demo, when buffering starts: a couple of errors may slip through before buffering kicks in at the VTGate level, and then it starts buffering all those requests and you won't see errors further. So the errors at the beginning, where I had to take the Rails server down and bring it back up again and we saw some 10 or 15 errors, were because I only had one Rails server running in the demo. If you have multiple Rails servers, you can just take one down, change its configuration, and bring it back up while the others keep serving traffic, in which case you won't see those errors.

Yeah, we have a buffering mechanism at the VTGate. For any planned operation through Vitess, whether it's a reshard, a reparent, or a MoveTables, we buffer traffic at the VTGate. But you need to configure the buffer pool and how many queries you want to buffer at a time, because you could have out-of-memory issues if you buffer too much, right? Past that point, we stop buffering the queries that are oldest and start returning errors for those.

Hi, how much latency is added through the VTGate?

Extra latency through VTGate? It's usually one to two milliseconds, that's what we claim. One to two milliseconds extra, yeah. But keep in mind it's distributed, so the query is going through one more server: if you have a good network, then it's fine, and if you have a bad network, that network latency is added on top.

Hi, when do you plan to support select star queries on the sharded keyspaces?

We do, we do. We have a good set of support for sharded queries, and you should just try it out and see if it works for your application.
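For a flavor of what the planner handles on a sharded keyspace, two illustrative queries against a hypothetical sharded users table:

```sql
-- Routed to a single shard when the WHERE clause pins down the
-- sharding key (the vindex column):
SELECT * FROM users WHERE id = 42;

-- Scatter-gather: sent to every shard, then merged, ordered, and
-- limited by VTGate's evaluation engine:
SELECT * FROM users ORDER BY id DESC LIMIT 10;
```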
So for sharded queries, most of them should work; most queries work at the sharded level as well. For example, if you have data that is stored in two different shards and you're trying to join across them, that also works: we have an evaluation engine at the VTGate level that will do those things for you. But there are things that still won't work, for example window functions, because in unsharded mode we can rely on MySQL to do all of that for you, we can just pin the query down to MySQL, but for window functions and a few other things in sharded mode, that support is not there yet. The evaluation engine is primarily where support needs to be added, so if there's something that is not supported, contributions are welcome there. And if you open an issue because you really need some queries that aren't working, we'll just make them work for you.

Also, if you tried this before, say, January of 2021 and you found that certain things didn't work: we've actually implemented a new query planner called Gen4, which is opt-in right now, and it supports a lot more of the select-star type of queries in sharded mode. So select star definitely should work, even if you add new columns. Even left joins, right joins, everything; it just works for sharded queries. And we do group bys, aggregations, everything at the VTGate level as well. It's the more advanced, recently added 8.0 functions and such that we don't have support for right now. So like I said, a lot of sharded query support has been added over the past year; you should try out a newer release if you've run into errors in the past.

A few questions. Sure, sure. One is, in terms of sharding strategy, is there a fixed set that you support, or is it up to us to define that?

We have a set of strategies that you can choose from. Otherwise, you can write your own sharding strategy: there's an interface, basically a plugin, and if you implement that plugin, Vitess will use your sharding strategy.

And I saw that the number of replicas and where you keep them is independently configurable?

Yes. Yes.

In a multi-tenant situation, you might want, say, your European customers' data in Europe and your American customers' data in America, and so on.

We support region-based sharding, and we call that a multi-column vindex: you can say this column is for region sharding, and this other one is where you shard locally within the region.

I see, but it's still considered one instance of Vitess?

Yes. You can still make it multiple instances if you want, but we can consider it one if you want to do that. It's highly configurable; it's up to you how you would want to run it.

The last question is probably a little bit more open-ended: how do you compare with, say, Cockroach and Yugabyte? They're all your competitors here, right?

Yes, they are. Vitess is completely open source in the first place, and Vitess has been around a lot longer, and it's built on 25 years of work done on MySQL. So the query execution layer is very efficient, and the sharding strategy for Vitess is very flexible; it's not hard-coded. These are some of the things that we feel are very good about Vitess. And it has been heavily tested: it was built in 2010 and was heavily used at YouTube from that point, so it's battle-tested. I can't speak to the others in the same depth; they bet on different things. We bet on MySQL.
Yes, and that's the best part: we don't have to build everything at the VTGate level. We can rely on MySQL to do a lot of the heavy lifting for us, so we push a lot of work down for MySQL to do. I'll just give you one example, if you're still here: if you do a count star on some table, we don't pull all the rows up to VTGate. What we do is push that query down to the MySQL level on each shard to do the counting for us, and we just do the summing at the VTGate level. So our engine is built in such a way that it tries to push the maximum down to MySQL, because MySQL knows how to do these things; it's been doing them for so long, so why not?

For analytics, Vitess is probably not the best fit; transactional workloads are where you should use Vitess. Obviously there might be people who tried Vitess and did not go into production with it, but among the users we already have, the workloads are very diverse. There is Slack, which is a chat app. You had YouTube. Then Square, which is a financial app. E-commerce websites were using it, gaming companies were using it, food companies; I think it's in almost every sector now. And I think JD.com really proved the scalability of Vitess even beyond YouTube, because when they do Singles' Day in China, the amount of traffic they get on their website is beyond anything we see in other parts of the world.

I think we're out of time. We're available to talk at the Vitess booth as well, over here. Thank you so much.