And I guess, sorry. So I'd like to thank everyone who is joining us today. Welcome to today's CNCF webinar, how to migrate a MySQL database to Vitess. My name is Daniel Oh. I'm a principal technical marketing manager at Red Hat, and I'm also a CNCF ambassador. I'm going to be moderating today's webinar, and we'd like to welcome our presenter today, Liz van Dijk, a solution architect in field operations at PlanetScale. There are a few housekeeping things before we get started. During the webinar, you are not able to talk as an attendee, but there is a Q&A box at the bottom of your screen, so please feel free to drop your questions in there and we will get to as many as we can at the end. This is an official CNCF webinar, and as such it is subject to the CNCF code of conduct, so please do not add anything to the chat or questions that would be in violation of the code of conduct. Basically, please show respect for all of your fellow participants and presenters. And please also note the recording and slides will be posted later today to the CNCF webinar page; you can go to www.cncf.io/webinars. So I'm going to hand it over to Liz. Liz, take it away. Thank you very much, Daniel. Welcome, everyone, to today's webinar. So yeah, my name is Liz van Dijk. I work for PlanetScale, and today I'm going to be talking about Vitess and the things you'll need to consider when moving into it from an existing MySQL-based deployment. To first help establish the why, I'm going to be starting off by covering some of the basic concepts of Vitess, and then we'll go over some typical pitfalls and a few recommended methods to avoid those when making the jump. First, a really quick introduction. As I said, my name is Liz. I'm a solution architect at PlanetScale, which recently celebrated its second birthday.
Our company's founders were Vitess's original creators, and our mission as a company is to make it the most trusted cloud native relational database out there. We are headquartered in Mountain View but, I suppose like many of you, we're all remote employees currently. I am myself Belgian; I live in Portugal, but I'm talking to you from California. My background is mainly MySQL, and I'm just beginning to dip my toes into this wacky world of cloud native architecture, so I'm really excited to bring you along as I figure it out. So what is Vitess? We throw the term cloud native around quite often, but what does that really mean when it comes to databases? Well, ten years ago YouTube, which was a real Web 2.0 darling, was facing some pretty unique data-related challenges, and instead of building something new, their database team at the time decided to try and adapt MySQL. MySQL by now has seen more than 25 years of active development; it's got tons of optimizations and durability improvements, and by itself has great performance and reliability already. That's why it's such a great foundational building block for any application, but it doesn't scale horizontally. So Vitess was designed as a middleware layer on top of MySQL to provide that transparent sharding logic, and it does so while presenting itself to your application as a single MySQL endpoint. Secondly, you might remember that Google had acquired YouTube a couple of years prior, so the database team also needed to adjust their framework so it could survive in Google's stateless container orchestration environment, which is called Borg. For that reason, and I only realized this recently myself, we can actually proudly claim that Vitess was ready to run on Kubernetes even before Kubernetes was first officially released. It was built to be cloud native from the start. So a very quick glance at Vitess as part of the CNCF portfolio is right here. The project itself started in 2010.
It became an incubation project as part of the CNCF database landscape in early 2018, and as of last November, it was the first database project to graduate there. We're going to be releasing 6.0 in April, and some of the features I'm covering in this webinar are new in that release, but everything we're talking about today can be accomplished with Vitess already. I'm not going to go too deeply into the rest of our stats, but suffice to say it's considered a very healthy and active project, used by a lot of large-scale web companies today. So, as we mentioned, Vitess is based on good old MySQL, and it's built with the potential of massive scale in mind. That means it comes with a fairly large amount of bells and whistles attached. So, to help take it all in, let's take a look at our reference architecture. There's quite a lot going on in this diagram, but understanding it is going to be very helpful in our later explanations about how we can gradually build up to an environment just like it. As I explained before, Vitess speaks MySQL, and even when split across many different shards, it's going to present itself to your application as a unified MySQL database. So what's on the left of the dotted line in this diagram could be anything from MySQL GUI clients, to your custom application, to, say, a CDC system capturing information for auditing. To those applications, Vitess and MySQL should be largely interchangeable. I do have to use the word largely because the compatibility, as of right now, is not quite at 100%, even though it's getting closer every day. Now, like a lot of systems built for scale, it's very important to consider that just because Vitess lets you execute something without spitting out an error doesn't necessarily make that thing the right thing to do. So as your application scales, you're going to be learning lessons about which architectural choices do and don't work. We'll get into the ways to gradually close that gap for your application, though.
So there doesn't need to be one big dramatic cutover. For now, let's quickly discuss what we're looking at here. From the ground up, we can see that all of the components in Vitess are meant to be treated as cattle rather than pets, especially in the query path. Each element is built to be duplicated and recoverable from sudden failure. This is why it's such a natural fit for Kubernetes and any environment where resources might be added or purged as needed, sometimes even at a moment's notice. It's also what makes it very easy for us to build a gradual transition path for your existing production workload. So let's take a closer look at these building blocks real quick to build a bit of foundational knowledge about how Vitess works. Behind the load balancer in that diagram just now, you saw what we call the VTGate. This is your application's entry point into Vitess. By itself, it's a very light, stateless proxy, but it keeps itself informed as to the state of the cluster, so it knows at any point in time exactly how to break down your requests. It's also going to transparently select the correct shard for you to select from, and it supports and respects a large variety of native SQL features like joins and transactions. It also has a couple of built-in optimizations, like connection pooling, to help boost performance, and it installs some guardrails around queries that could potentially harm our cluster. So VTGate presents itself as a unified database despite being connected to multiple instances of MySQL underneath, and the concept of this unified database is what we call a keyspace.
We know that as far as instances go, each shard may have multiple copies of our data, but the overall design of our database schema and how it presents itself is still very important. Keyspaces in Vitess are defined by the combination of a good old normal schema file, as well as the added VSchema, which describes the sharding-related metadata in a JSON format. A keyspace can consist of one or multiple shards, and just to be clear on this, a shard contains a portion of the data in your database. Within each shard, we generally recommend spinning up at least three replica tablets to ensure high availability on that level. Within a shard, replication is managed by Vitess, but it uses MySQL's standard replication functionality, so we use the same terminology to describe it. So, zooming in on the shard itself (and I guess I jumped to this slide a little too soon), shards are made up of one or more Vitess tablets. As we said, the recommended minimum for high availability is three. These Vitess tablets are the smallest worker units we have available, and this is where existing MySQL users should be getting into more familiar territory. A Vitess tablet is made up of a normal MySQL server process and a small VTTablet sidecar process that helps inject the logic we need to make MySQL sharding-aware. This pair of processes can run anywhere that you would like to run MySQL: on a bare metal machine, on a virtual machine, or inside of a container. And we're actually agnostic to the MySQL server flavor that is used, too, so it could be any type or version that you're already familiar with or running. Now, within a shard, VTGate is able to send reads to each available tablet, but the writes are reserved for only a master tablet, of which there's always just one. So how do we make sure that within a given shard, we always have at least two fully consistent copies of our data?
We do recommend that you run your tablets in semi-synchronous replication mode, which means that the master tablet is not going to commit a write to its data files until at least one replica has acknowledged the change. I want to give you a very quick look at what all of this looks like, so I'm going to jump into a real quick demo here. This GUI is not a standard part of Vitess, but it helps me illustrate all of the previously mentioned concepts a little bit more easily. So while we were talking, we discussed a couple of different elements: we talked about VTGate, and we talked about keyspaces, shards, a sharding schema, as well as a normal database schema. And I just want to give you a first look at what that looks like. So over here, we can see basically a dashboard of what it looks like to run Vitess. I'm just using a little example application that we use as a demo environment for a variety of talks, so if you've seen some of our talks at conferences, you might recognize the name of this database. Just to give you a quick peek, let me pull up the schema real quick. As you can see, our schema is made up of normal SQL. It looks very familiar; there's nothing strange going on here. This just looks like a good old MySQL database schema. It's a very simple database, just three tables. But on top of that schema, we do need to embed a little bit more logic to ensure that sharding works as expected. So we have a sharding schema on top of that, which in Vitess is called the VSchema, and we discussed that earlier. The VSchema is described by a JSON file that essentially latches on to the existing schema definition and just adds more metadata to it. It helps us define a variety of details here. I won't get into the specifics of how to build a VSchema too deeply in this talk, because there are hours we could fill with that, but I wanted to give you a quick idea of what it looks like.
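For reference, here is a rough sketch of the shape such a file takes. The table and column names below are invented for illustration (they are not the demo database's schema), but the overall structure, a vindex definition plus per-table column bindings, follows the documented VSchema format:

```json
{
  "sharded": true,
  "vindexes": {
    "hash": { "type": "hash" }
  },
  "tables": {
    "customers": {
      "column_vindexes": [
        { "column": "customer_id", "name": "hash" }
      ]
    }
  }
}
```

The `column_vindexes` entry is what tells VTGate which column's hashed value decides the shard a given row lives on.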
And we'll talk about some methods to go back and forth and test whether your design is being effective or not. Now, just to illustrate that Vitess really does look and feel exactly like MySQL: to your application, the endpoint that you're connecting to should be no different, and the connector that you're using to get to MySQL today is the same one you'll be using to get into Vitess. I'm just going to grab this connection string from right here to show exactly what that looks like. So our database, in this case, has a couple of instances running: we have one master and three replicas right now. But as we log into this database using this connection string, we'll actually be logging into the VTGate, which is essentially going to display a unified overview of what's going on in the background. So let me show real quick how that works. I just copied the connection string, using the normal MySQL client. As we're logging in, you can see the server version right here: the Vitess MySQL community server. That's actually what's telling us that we're logging into VTGate. But operationally, you will see that it works very similarly. So we have two databases available, and I'm going to use one of them, just to prove that this really does look and feel exactly like MySQL. So I'm just going to run a query here, and there we go: a couple of really, really nice photos that we could rate, though that's not within the scope of this current presentation. This was just essentially to show that once... Go ahead. Could you turn off your video? Your audio is cutting out every once in a while. OK, will do. Thank you. I'm sorry, I had not seen those messages. All right, so let's hope that this stays more stable. Please feel free to let me know if there are any more issues moving forward.
OK, so this was just a quick demo to illustrate that when Vitess is up and running, even though there's a lot of complexity in the background, to your application it actually looks and feels just like a single MySQL endpoint, and all of the logic required to distribute your queries is abstracted away from your application. Going back to the slides and bringing back the architecture slide, let's collect all of those components we talked about and start thinking about how we can move towards our implementation. There are a few critical items to consider when moving to Vitess today. As I mentioned before, it's not yet 100 percent compatible with all of MySQL's query language, but even where compatibility is not an issue, queries might respond differently in a sharded environment than you'd expect, and it's important to start familiarizing yourself with what makes or breaks a sharding strategy. The bottom line is that whenever queries can easily be filtered by the same column that makes up your sharding key, behavior should be exactly as expected. If you do a lot of cross-shard data gathering, though, you might find yourself surprised by the impact on your performance. So, as I said, like in any relational database, just because your query works doesn't mean that using it is a good idea. Make sure to test your query workload extensively, and expect that you'll need to do some rewriting in most cases. So where and how do we even start that process? The very first step in assessing your workload's compatibility can be done without even installing Vitess, thanks to a tool called VTExplain. This tool is analogous to MySQL's EXPLAIN statement: when it's fed a VSchema and a schema file, it's going to return a breakdown of how VTGate would be expected to handle the query in a real cluster environment.
It's also going to provide immediate feedback in case your query is not supported at all by Vitess. Rather than setting up a full test cluster from the start, this step can be executed on anyone's local machine; I'm going to show that momentarily in a brief demo. You can grab VTExplain either by getting the latest Vitess packages or by building the tool from source. Now, just because you can run it locally doesn't mean you'll be able to start completely unprepared. First and foremost, to get started, we're going to want a solid snapshot of your actual load. Some monitoring tools like PMM or VividCortex already allow you to take a normalized set of queries to use, which is definitely a very efficient way to test. But if you don't use either of those monitoring services, you can also create your own normalized query list by collecting a set of queries in production yourself. You can do so by setting MySQL's slow query logging feature to log all queries for a limited amount of time, however long it takes to capture a representative sample, and then running that log through the pt-query-digest tool to get a ranked list of your most impactful queries. Those will be normalized as well, so you'll have a nice clean list to work with and get a good idea of how Vitess will respond to them. Once you've done that, you're going to want to read up on schema and VSchema design. There is a lot to consider when building the right VSchema, and testing your design with VTExplain is a very important step towards getting that right in the first place. As a rule of thumb, if you're always selecting information as filtered by a specific customer or business ID, those tend to be good starting points for your sharding key. Try it out with VTExplain, though, to see how well your VSchema works with your existing query workload, or how either your schema or your queries can be tweaked for better results. How do we go about this?
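To make the idea of a "normalized" query list from a moment ago concrete, here is a toy sketch in Python of what digest tools like pt-query-digest do under the hood. The real tool is far more sophisticated; this just collapses literal values so that identical query shapes can be counted together:

```python
import re

def normalize(query: str) -> str:
    """Roughly normalize a SQL query the way digest tools do:
    replace literal values with '?' so queries that differ only
    in their parameters collapse into one fingerprint."""
    q = re.sub(r"'[^']*'", "?", query)   # string literals -> ?
    q = re.sub(r"\b\d+\b", "?", q)       # numeric literals -> ?
    q = re.sub(r"\s+", " ", q).strip()   # collapse whitespace
    return q

queries = [
    "SELECT * FROM users WHERE id = 42",
    "SELECT * FROM users WHERE id = 97",
    "SELECT * FROM users WHERE name = 'liz'",
]
# Three captured queries reduce to two distinct shapes.
fingerprints = {normalize(q) for q in queries}
```

Each distinct fingerprint is one entry in the list you would then feed through VTExplain.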
There's a ton of text on this slide, but I wanted to make sure it was included so you'd have some immediate examples to get started on. This is a minimal example of how this works. At the top of the slide here is our current schema definition, which is no different than what you would see in MySQL. And off to the right is our VSchema for this database, which has a single keyspace defined as sharded on the data column for both tables. If we run the VTExplain command, assuming a shard count of two, we can see from the output exactly how VTGate will break the query down into pieces and gather information across the cluster. Now, VTExplain doesn't actually connect to anything; it's not connecting to an actual topology server, so there is no Vitess cluster to help it make predictions. But we can give it a clue: we can give it the number of shards that we predict we'll need to use, and it will make its determination based on the information that we're feeding it here. It's going to give you a sense of how VTGate would respond to that query and how it would need to break apart the query to make sure it's accessing the correct shards. Rather than have you just believe me, though, I'm going to show it off real quick. So let's do another short demo, right here. Oh, shoot, I think I am already in the right folder. Okay, so I have the vtexplain binary installed right here in my demo folder. I also have exactly those two files as displayed on the slide, so if you get these slides, you can just copy and paste both of those yourself to give this a quick shot. And because it's a rather long command, I executed it right before starting here. If we want to run VTExplain to get a sense of how it would work, here is how the command itself breaks down: we've got vtexplain, we're specifying our schema file, which is just our normal schema definition, and our VSchema file, which is the Vitess-related metadata.
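Put together, the invocation looks roughly like this. I'm hedging on the exact flag spellings since they can shift between releases, so check `vtexplain --help` on your version; the file names here are just placeholders matching the slide:

```shell
vtexplain \
  -schema-file schema.sql \
  -vschema-file vschema.json \
  -shards 2 \
  -sql "SELECT * FROM mytable WHERE data = 'some_value'"
```

The output is a per-shard breakdown of the queries VTGate would issue to answer the statement passed via -sql.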
We can pass it a number of shards, just to get a sense of how a query might be broken down in a sharded environment, and then you can enter an individual query to see what the result would be. Now, based on how we're filtering this query (we're using data as the filtering column and specifying an exact value here), VTGate, using the vindex that we defined in our VSchema, should be able to find the results on a single shard. So when you send this query, VTGate is going to send it right away to the exact shard it predicts will contain the correct answer. Here in this list, right now only one query has been executed, and you can see the steps that VTGate needs to take to execute this query. In our current example, this is a perfectly normal, supported query, so we're not getting an error in return. In case your query workload contains some unsupported language, you're also going to get an immediate error result with a clear message about that. Just to give an example: the reason that we're only querying a single shard, even though we've specified that there are two shards available, is that we're giving an explicit value here. But if we write a query that does a comparison, for example, we'll see that right away VTGate will need to start querying every single shard to make sure that each row is matched against this requirement. So given that we're specifying two shards, in this case VTExplain predicts that we will be sending the query to both. I'm just going to show that as we increase our shard count, that will keep applying. So say that we specify eight shards: we can see that the query is likely to be executed on all eight shards, because we need all of our rows.
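To build some intuition for that single-shard versus scatter behavior, here is a small Python model of the routing decision. This is emphatically not Vitess code (the real hash vindex computes a keyspace ID with a different function); it just mimics the logic VTGate applies:

```python
import hashlib

NUM_SHARDS = 2

def shard_for(value) -> int:
    """Toy stand-in for a hash vindex: deterministically map a
    sharding-key value to one shard."""
    digest = hashlib.md5(str(value).encode()).digest()
    return digest[0] % NUM_SHARDS

def shards_to_query(filter_kind: str, value=None) -> set:
    """Decide which shards a query must touch. An equality filter
    on the sharding key resolves to exactly one shard; anything
    else (a range comparison, no filter) must scatter to all."""
    if filter_kind == "eq":
        return {shard_for(value)}
    return set(range(NUM_SHARDS))

single = shards_to_query("eq", "some_value")   # routed to one shard
scatter = shards_to_query("gt", "some_value")  # must hit every shard
```

As NUM_SHARDS grows, the equality query still touches one shard while the scatter query's cost grows with the cluster, which is exactly the effect VTExplain makes visible.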
Now, VTGate as a rule will just compile the set of results and return it to your application as a single table, just as MySQL would, but VTExplain allows you to figure out how it needs to go about gathering your data and what the expected impact is going to be. So this actually helps you tweak both your VSchema and your queries; it helps you understand, through trial and error, where you can make some adjustments to improve how well your workload is going to fit a sharded environment. So, okay, just a quick summary of VTExplain: it's a pretty handy tool, and it's the first step we recommend anyone take when considering moving an existing workload into Vitess. It will bring you a long way towards making sure that your application is set up for success there. Now that we have built an understanding of our workload through some trial and error, we're going to want to start building out a Vitess environment in our dev/QA environment. Given the many moving parts of Vitess, just getting operationally familiar with it is going to go a long way towards being able to perform the steps in this migration. A good way to get started, I would recommend, is to run through the various tutorials on vitess.io, which will help walk you through the steps as I showed in the demo right here, but also get a full environment up and running. So once you've completed your analysis with VTExplain and you're feeling confident that you'll be able to get started, and you'd like to figure out how this all works in a real Vitess cluster, I recommend running through those tutorials and from there building up towards adding your own schema and VSchema for more extensive testing.
Remember to check not just your own application, but consider any additional applications that might be accessing your database, like analytics or change data capture shipping data off to another location. Those are items that often tend to be forgotten when the database layer is adjusted, but when moving to Vitess they need to be considered as well. Now, beyond queries, another aspect to consider with Vitess is the additional network latency created by VTGate. If you are already running a load balancer, this should not be too unfamiliar, but under normal circumstances we expect VTGate to add about one to two milliseconds to your round-trip time, and add another one to two milliseconds if you're also introducing a load balancer for the first time. Generally speaking, VTTablet by itself adds almost no time at all to your queries, and it may actually speed them up thanks to its performance enhancements. So we consider this additional latency well within tolerable levels for most applications, but if yours is particularly sensitive to latency, it makes a lot of sense to spend some extra time testing in the dev/QA environment. Now, provided that we've done everything we can to learn about Vitess and how it fits into our environment, we're feeling just about ready to dip our toes into production. Vitess's modular design makes it very easy for us to do a gradual shift in production while retaining the ability to roll back if anything seems particularly off. We call this a canary deployment, and the idea is that we start off by diverting just a small percentage of our production traffic to a VTGate and a VTTablet so that we can see the results in real time. I'm going to pull in our diagram one more time to refresh your memory of the components that we covered earlier and how they all work together in our reference architecture.
I'm pulling this up specifically because now you can look at this slightly simplified and less beautiful rendition of it, just to help illustrate the operational steps we're going to take to start building up towards this reference architecture. We can start up a Vitess environment around an existing MySQL server, which by itself is going to continue to work completely unaffected. Critical to achieving that is to set up your VTTablet process to treat MySQL as remote, or unmanaged, so VTTablet will be acting purely as a pass-through layer without attempting to manage the original instance by, say, controlling its replication settings or trying to take backups. VTGate, our proxy, is agnostic to this distinction, and it's going to treat this original MySQL instance as a fully fledged, unsharded keyspace. So we have a parallel query path available that runs through most components of Vitess, even without having established a fully managed Vitess keyspace. Now, how we dip our toes in and start to divert traffic is strongly dependent on your application design. If you are using your own application internally, we would recommend starting off there, just using it in production in-house: divert some of your internal traffic through VTGate to start trickling that through. Another idea is, if you have a defined group of beta subscribers, that could also be a good place to start. However it's accomplished, we do recommend starting small and switching more of the traffic over as your confidence grows. This is a process that could take days, weeks, even months to complete, however long you need, but it should definitely be completed before moving on to the next stage.
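As a sketch of what "diverting a small percentage" can look like at the application level (the endpoint names and the bucketing scheme here are my own invention for illustration, not a Vitess feature), deterministic hashing keeps each user pinned to the same query path for the duration of the canary:

```python
import zlib

# Hypothetical endpoints for the two query paths.
LEGACY_DSN = "mysql-legacy.internal:3306"
VTGATE_DSN = "vtgate.internal:3306"

CANARY_PERCENT = 5  # start small, raise as confidence grows

def endpoint_for(user_id: str) -> str:
    """Bucket each user into 0-99 with a stable hash; users below
    the canary threshold are routed through VTGate, everyone else
    keeps hitting the legacy MySQL endpoint."""
    bucket = zlib.crc32(user_id.encode()) % 100
    return VTGATE_DSN if bucket < CANARY_PERCENT else LEGACY_DSN
```

Raising CANARY_PERCENT to 100 completes the traffic cutover; dropping it back to 0 is the rollback.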
Once our cutover has completed and we have all of our traffic running through VTGate, we're still not running a fully fledged Vitess tablet here, but we are at least running all of our traffic through VTGate already, and we should start seeing the impact there, so there's some confidence to be built. Once we have this set up, we can consider spinning up a real Vitess tablet to prepare for a gradual migration of our data. To accomplish this, we're going to use a Vitess workflow called table migration, which is coming out with Vitess 6, to start separating out the tables that need to be sharded. In previous versions of Vitess this process was called vertical split clone, and it used slightly different internal mechanics to accomplish a similar goal: performing a live table copy and a traffic cutover between two running instances of MySQL. In Vitess 6, table migration is based on a feature called VReplication, which can be used in countless scenarios requiring data to be moved around between different members of the cluster. Now we're using table migration; there's a little bit more information here. When the copy process has completed, your newly migrated tables are going to be running in a separate keyspace, though thanks to VTGate you'll still be allowed to join tables between both of those environments. So we've moved some tables over to a completely different MySQL instance, a separate keyspace, but VTGate still allows us to join tables across them. The Vitess tablet, as it's spun up, is also going to enjoy the benefits of being fully managed by Vitess. That means it can very easily be made ready for high availability just by having multiple replicas spun up, it's going to have its backups taken automatically, et cetera. As you're building your familiarity with Vitess, this very same process can be used to eventually migrate all of your tables.
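For a sense of what the table migration workflow looks like operationally, here is a sketch based on the Vitess 6 era MoveTables user guide. The keyspace, workflow, and table names are placeholders, and the exact vtctlclient syntax has changed across releases, so treat this as an outline and follow the current guide on vitess.io:

```shell
# Start a live copy of the chosen tables into the new keyspace.
vtctlclient MoveTables -workflow=commerce2customer \
  commerce customer '{"customer": {}, "corder": {}}'

# Once the copy has caught up, cut reads over first...
vtctlclient SwitchReads -tablet_type=replica customer.commerce2customer

# ...and finally switch writes to the new keyspace.
vtctlclient SwitchWrites customer.commerce2customer
```

Each step is reversible until the final write switch, which is what makes the gradual, low-risk migration described above possible.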
It's a great way to build some confidence and to future-proof your stack, especially if this is also part of a greater overarching move towards embracing Kubernetes for other parts of your application. The ability to use a canary and a very gradual cutover for all of these components is a very effective way to minimize your risk as you're transforming your database architecture into a cloud native one. So here's some more description of the process itself, just a couple of highlights about the setup. As I mentioned before, our legacy MySQL instance is treated as an unsharded keyspace by itself; that MySQL instance is untouched and doesn't really require any changes. In our example, I didn't really get into sharding for the new keyspace, but we might as well have. So don't hesitate to dive into the documentation to familiarize yourself, and skip a few steps ahead if you know that's exactly what your environment needs. Vitess has workflows available for all of these major reconfigurations and data migrations, and PlanetScale engineering has also been hard at work on a new set of migration tools that will serve to better support this process every step of the way, making it easier for you to generate an initial VSchema as well as automating much of the table migration workflow. So more to come on that pretty soon. Either way, until then, there's a great set of tutorials available on the vitess.io website. These are honestly the way that I've been training myself. They're not perfect, but all that means is that we would clearly love your contributions and your feedback on how to make them better. And you should go check them out; I'm just going to click through real fast here so you get a look. There are a couple of very important operations here that have clear user guides built up around them, and I highly recommend them as a way for you to get familiar with how all of this works. And with that, I hope you enjoyed this webinar as much as I enjoyed delivering it.
My goal was to inspire some confidence in the fact that there are methods that will allow you to safely and gradually support your adoption of container-orchestrated cloud environments without having to leave your database behind. Thank you very much for listening. Don't hesitate to reach out on the Vitess community Slack if you end up trying out those tutorials. As I mentioned at the start, I'm just getting started in this world myself, but I'm happy to be joined by two contributing engineers today who can help answer any questions I'm not able to field myself. And on that note, Deepthi, Morgan, and I are happy to dive in and ready for some Q&A. Awesome. Thanks, Liz, for the great presentation and demo. So we now have some time for questions. If you have any questions, please just drop them into the Q&A tab at the bottom of your screen and we will get to as many as we have time for. By the way, we have two questions here. The first one is: is it possible to use MariaDB? Yes, I'll be able to answer that right away. It is possible to use MariaDB or any flavor of MySQL as your backing database, so you can attach a Vitess tablet to any flavor of MySQL as of right now. I see a nice follow-up question there: is it possible to use PostgreSQL? As of right now, Vitess supports MySQL as a backing database. This is a question that comes up quite regularly, though, so it's definitely something that we're considering for the future. It is in fact on the roadmap, but I have to put a disclaimer around that: it is going to be quite a ways out. As of right now, we're MySQL-centric. Yeah, we have one more question that came up, and actually two more. How are reads scheduled to the replicas? Are there session affinity options? I'm going to ask Morgan to answer that question. Morgan, I'm just going to repeat it real quick: how are the reads scheduled to the replicas exactly?
Are there session affinity options there?

Sure. So you can read from a replica by switching to a specially named schema (for example, `USE keyspace@replica`). In terms of affinity, if you start a transaction, you will have affinity to your master. But if you're in autocommit mode, Vitess will try to balance the reads between the replicas that you have available.

Thank you, Morgan. The next question looks like: could you explain a bit more about how to migrate existing data into shards? Are any tools available for it? We have a 32 terabyte InnoDB table.

I suppose I can take a crack at this question, but it is a relatively large one, and it's highly dependent on the design of your schema. The approach that we've proposed here in this talk is meant to accommodate a very gradual switchover, but I believe that what you're asking for is more of a set of general guidelines as to how to approach breaking up your data into shards, and which schema changes or which VSchema would be the right fit for your particular database. So without having more information, it's kind of hard to get into your question, but I do want to invite you to join the Vitess Slack community, where people will be able to look at your environment in a bit more detail and give you a bit more background on how to start approaching this. It's going to be multiple steps, basically, depending on your current schema and how much of it you're willing to change to adapt it to a sharded environment.

Thanks, Liz. I might just add a couple of points on that, because it is quite a large database. Vitess, being cloud native, encourages you to run with smaller shards, so that if you have a failure, it can move the data around quickly, and so that you reduce the chance of hitting a failure while you're in the middle of a redistribution.
So it encourages shards to be about 250 gigabytes each, knowing that it takes about 10 or 15 minutes to copy that much data if you had to move it to another node. So as you move into Vitess, choosing your shard key will end up being one of the things that you want to figure out first, and that's something that the Vitess community can help you out with.

If I may also jump in: if this really is a single table that is 32 terabytes, then with Vitess it is definitely possible to shard it. It may just be a long process. And as far as tools for accomplishing the actual sharding, they are part of Vitess. The rest of it, what your sharding key is and any other changes that might make the process easier, that of course depends on the data characteristics and your current schema.

Thank you very much, Deepthi and Morgan. Let's see, there's a next question here. We are currently using ProxySQL for query routing. For migrating to Vitess, would we swap out talking to ProxySQL for VTGate?

Christopher, that is correct. Yes, VTGate could be considered analogous to ProxySQL in your current environment, and it covers a lot of very similar requirements.

And then the last question that I'm seeing right here is: would it work with Percona XtraDB Cluster? That's an interesting question. I don't believe that we would recommend using an individual PXC node as a backing database for Vitess, but Morgan, I believe in theory we could?

We probably could, yeah. Vitess gives you both choices: you could definitely use it with an externally managed database. For databases that are managed by Vitess, we encourage you to use semi-sync replication as the HA solution.

Indeed. So just to provide a little bit more background there: there are ways that you can run the VTTablet process in a very transparent way.
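Putting Morgan's guideline into numbers: at roughly 250 GB per shard, a 32 TB table works out to around 131 raw shards, which you would typically round up to a power of two so the keyspace ID ranges split evenly. A small sketch of that back-of-the-envelope math (the power-of-two rounding is a common convention for Vitess key ranges, not a hard requirement):

```python
import math

def suggested_shard_count(total_gb: float, target_shard_gb: float = 250.0) -> int:
    """Estimate how many shards a dataset needs at a target shard size,
    rounded up to the next power of two so key ranges split evenly."""
    raw = math.ceil(total_gb / target_shard_gb)
    if raw <= 1:
        return 1
    return 2 ** math.ceil(math.log2(raw))

# A 32 TB table at ~250 GB per shard:
print(suggested_shard_count(32 * 1024))  # 32768 GB / 250 GB ≈ 131 -> 256 shards
```

The exact target size is a tuning decision; the 10-to-15-minute copy time mentioned above is the real constraint, since it bounds how long a reshard or failure-recovery data move takes per shard.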
And this is the way that we would recommend running if you had to use PXC as your backing database. So in theory, yes, you could use VTTablet as a front for PXC as well. In that case, you would just not let Vitess manage your replication settings, and choose to rely on PXC for that instead. I believe that is it for the questions.

All right, thanks, Liz, for the great presentation and demo once again, and thanks, Morgan and Deepthi, for answering the questions. That is all the questions we have time for today. Thanks again for joining us; the webinar recording and slides will be online later today. We are looking forward to seeing you at future CNCF webinars as well. So have a really good day. Thank you very much, everyone.