So today's talk is on Vitess: arewefastyet, by me, Akilan, and Florent. A bit of background on Vitess. Vitess is a database solution for deploying, scaling, and managing large clusters of open source database instances. Vitess is based on MySQL/MariaDB. It is massively scalable, and it is a CNCF graduated project. Vitess runs on both public and private infrastructure, and it works very well with dedicated hardware. Vitess has over 24,000 commits. It is being widely used by companies like YouTube, Slack, Hujiri.com, GitHub, and Pinterest, and it is pretty popular among DBAs for horizontally scaling MySQL databases. I'd like to hand it over to Florent to talk more about today's topic.

Great, thank you. Yes, just like you said, Vitess is being used more and more widely, in bigger and bigger systems. This comes down to two problems: we have to ensure that Vitess's code base is reliable, and also that it has good performance. These are the two big attributes we want. So how do we ensure reliability? Well, we have tests, unit tests, end-to-end tests, and everything goes through CI/CD pipelines. But what about performance? Well, we have benchmarks, and today we are going to cover how to benchmark a big project just like Vitess.

We came up with five pillars for benchmarking an open source project such as Vitess. First, we want it to be easy: we want to foster a culture of benchmarking and encourage people to run more and more benchmarks. Secondly, we also want it to be automated, just like unit tests: we want to avoid human error and spend more time on the important things. Then, we also want it to be reliable. A benchmark's result is not a Boolean; it is not like a unit test, pass or fail. Everything is measured in nanoseconds, so it has to be reliable. Then, we also want it to be reproducible: we want to allow engineers to debug a benchmark if there is an issue or a spike in performance. And finally, we want it to be observable: we want to see results, we want to see reports, and we want them to be accessible to anyone in the community.

So, how did we achieve that? Well, we created arewefastyet. It is an automated benchmarking and monitoring tool for Vitess. It is open source, of course, and version two is being developed in Go at the moment. We execute different types of benchmarks in arewefastyet: macro benchmarks and micro benchmarks. For the first, we have OLTP and TPCC, which are two widely used types of macro benchmarks. These macro benchmarks usually run for between 30 and 60 minutes, almost like end-to-end tests, and for that reason we execute them only after a merge on master, or before and after a release. Secondly, we also have micro benchmarks. Vitess's code base is written in Go, so we execute micro benchmarks with go test -bench. A micro benchmark focuses on a single function, a very tiny piece of code. It is very short running, between one and ten minutes, and this is why we execute micro benchmarks after every commit.

Then, how do we execute these two types of benchmarks with arewefastyet? Well, arewefastyet has a CLI, so we can execute benchmarks individually through the CLI, or execution is triggered by commits and PRs from Vitess's code base. From there, we use a configuration file where every single attribute of the benchmark is defined and declared. This enhances the reproducibility of our benchmarks.
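For context, here is a minimal sketch of what a Go micro benchmark of the kind Florent describes looks like, written with the standard testing package. The function being measured, normalizeQuery, is a hypothetical placeholder for illustration, not actual Vitess code:

```go
// Package examplebench shows the shape of a Go micro benchmark, the kind
// of test that can be run with "go test -bench". The measured function is
// a made-up placeholder, not real Vitess code.
package examplebench

import (
	"strings"
	"testing"
)

// normalizeQuery is a hypothetical stand-in for a small, hot function.
func normalizeQuery(q string) string {
	return strings.ToLower(strings.TrimSpace(q))
}

// BenchmarkNormalizeQuery measures the cost of a single call; the testing
// framework reports the result in nanoseconds per operation.
func BenchmarkNormalizeQuery(b *testing.B) {
	for i := 0; i < b.N; i++ {
		normalizeQuery("  SELECT * FROM users WHERE id = 1  ")
	}
}
```

Running go test -bench=. in the package directory reports a nanoseconds-per-operation figure for each Benchmark function, which is the kind of number that can then be compared across commits.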
Coming back to the configuration file: we just have to copy it to someone else's laptop and we can execute the same test. Once we have that configuration file, we run Terraform inside arewefastyet to provision the infrastructure, which relies on Equinix providing us with bare metal servers. Bare metal servers are a good way to improve reliability, because we always get the same hardware specs. Then we apply some configuration with Ansible, to configure the hardware we have provisioned, and then we execute the benchmark, micro or macro. At the end, we get the results, we save them in a MySQL database, and then we send the results to a Slack channel and broadcast them to our website, rendering graphs and so on.

Talking about sharing and observing, I have mentioned Slack and the website. We also have a third way of sharing and observing, which is GitHub statuses: we want to validate or invalidate Vitess PRs and commits based on those results. This is still in progress, but we hope to get it merged soon. Now, Akilan is going to talk to you about the different dashboards we have in the web UI.

So the main dashboard we have is the micro benchmarks dashboard. As you can see here, with the commit hash starting with 925, we compare this micro benchmark run against Vitess 9. We have different metrics, color coded to show whether the difference is an improvement or a warning. And if you click on the blue link, it points to the benchmark code itself on the official Vitess repository. We also have QPS, queries per second. These are the results we get from Sysbench, and, as Florent mentioned previously, we run this after every merge. These are the merges from the past 30 days on the Vitess master repository, and these are the metrics we get from Sysbench. We have two major runs, as Florent mentioned, OLTP and TPCC, and both get these metrics. We also get transactions per second for both OLTP and TPCC, exactly the same as on the previous slide. I'd like to hand it back to Florent for the conclusion of this talk.

Great, thank you. So let's go back to the five pillars we saw before. We said easy: well, arewefastyet makes benchmarking easy because we have CLI commands, and everything is executed through the CLI or by commits and PRs. Additionally, it is very easy to add macro benchmarks and micro benchmarks because we have a centralized configuration file. It is also automated: just like we saw, we have Terraform and Ansible configuration, and everything is executed after a commit or a release, using GitHub webhooks. It is also reliable: we have bare metal servers from Equinix, and we keep the history of all previous benchmark results stored in a MySQL database, so we can aggregate the results and provide a more reliable number. It is also reproducible: just like I said, we have a centralized configuration file; just copy it to someone else's laptop and execute it. And finally, it is observable: just like Akilan showed us, we have web UI dashboards, and we send Slack messages, GitHub statuses on commits, and so on. This is the URL of the project on GitHub, don't hesitate to go check it out. And thank you.
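As a closing illustration of the aggregation point from the conclusion, here is a minimal sketch, in Go, of reducing several runs of the same benchmark to a single, more noise-resistant figure (the median QPS). The helper and the numbers are hypothetical and not taken from arewefastyet's code:

```go
// A minimal sketch of the aggregation idea: combining several runs of the
// same benchmark into one figure (here, the median QPS) to smooth out noise.
// Illustrative only; not arewefastyet's actual implementation.
package main

import (
	"fmt"
	"sort"
)

// medianQPS returns the median of a slice of per-run QPS measurements.
func medianQPS(runs []float64) float64 {
	if len(runs) == 0 {
		return 0
	}
	sorted := append([]float64(nil), runs...)
	sort.Float64s(sorted)
	mid := len(sorted) / 2
	if len(sorted)%2 == 0 {
		return (sorted[mid-1] + sorted[mid]) / 2
	}
	return sorted[mid]
}

func main() {
	// Hypothetical QPS values from five runs of the same OLTP benchmark.
	fmt.Println(medianQPS([]float64{2840, 2911, 2875, 2790, 2902}))
}
```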