Hello, everyone. Welcome to Cloud Native Live, where we dive deep into the code behind cloud native. I'm Ani Talvaso, a CNCF ambassador as well as a senior product marketing manager at Camunda, and I will be your host tonight. Every week, we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer all of your questions. You can join us every Wednesday to watch live. And we hope to see you at KubeCon next week. You can still register, so grab those tickets and get into the cloud native space next week as well. Perfect. This week, we have Hanlin here with us to talk about how to build a multi-cloud database as a service. A very exciting topic for today's Cloud Native Live. As always, this is an official live stream of the CNCF, and as such, it is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct. Basically, please be respectful of all of your fellow participants, as well as the presenters. With that said, I'll hand it over to Hanlin to kick off today's presentation. All right, thanks. Hello everyone. My name is Hanlin, and it's a pleasure to be here today for the CNCF webinar. Oh, can you hear me okay? Yes, I can hear you very well. Okay, sure. My team is working on building and managing the multi-cloud database, the TiDB as a service product, and TiDB Operator is one of the fundamental building blocks for making that happen. In today's presentation, I'm going to show a live demo of how to create and manage a TiDB cluster using TiDB Operator on a Kubernetes cluster. Okay, let me change the slide. Well, before jumping into the details of the Kubernetes operator and other cloud native technologies, please allow me to give a brief introduction to TiDB itself.
So, TIDB is a MySQL compatible SQL database by MySQL compliant. It means you can connect to a TIDB cluster in the same way you connect to a MySQL instance. Different from traditional OLTP database that runs on a single instance, TIDB is a distributed system. Traditionally, when a database got excessive data, we need to shard the database and it could be challenging to manage those shards. Using TIDB, on the other hand, you don't need to worry about the sharding and TIDB will manage that for you. TIDB is inspired by Google Spanner and it is also built on top of a KV store. In this case, the KV store is something called TaiKV. TaiKV is a distributed KV store empowered by RocksDB. For a KV store, the key range could be very huge and we will reach the scale limit if we put all the keys into a single machine. To address that scale issue, we split the entire key space into multiple continuous key ranges and we call them key regions. The concept is similar to shards. Key regions are distributed to different TaiKV instance and this is a superficial description of how we built a distributed KV store. Perfect. And there's actually an audience question that popped in. So, ITSploid asks, what is the difference between this and VITES, which is also my SQL and CNCF? Which database? Sorry. VITES, I'm not sure if I'm pronouncing it correctly. If you can see the comment there. Oh, I'm seeing the comments. Okay. What is, oh, from VITES, okay. So VITES is a shard management system. So basically it managed, it helped people to manage different shards, but I think at the back end, it's still using the MySQL, but TaiDiBee different from VITES, it doesn't have any shards concepts in there. So basically for VITES, because it's still using the sharding technology, so there's some limitations that probably you cannot do some joint operation or something, but this is not the case for TaiDiBee. 
So with TiDB, since there's no such thing as a shard, you can do all the operations that you can do against a MySQL instance. I think that's the difference. Okay, so I think the question is answered, so let me just... Yeah, perfect. Thank you so much. Yeah, no worries. So when TiDB receives a SQL statement, it will try to parse the SQL and generate a query plan, and it will determine in which key regions the required data is located. So now the question is, which TiKV instance hosts those key regions? This is where PD comes into play. PD stands for Placement Driver, and one of its core functionalities is that it maintains the mapping from key regions to TiKV instances. PD serves as the control plane for TiKV, and it also checks whether a certain key region has become too dense or too sparse, and it will try to split and merge the regions dynamically. Finally, in this diagram, there's a component called TiFlash. So what is TiFlash? TiFlash is basically a columnar storage engine optimized for analytical processing. With the presence of TiFlash, TiDB is capable of handling analytical workloads without interfering with ongoing transactional workloads. Okay, I will change the slide. Now that we have some basic knowledge of TiDB, what is TiDB Operator, and why is it useful for us? Well, as we saw previously, TiDB has many different components, and managing them could be tedious and error-prone. TiDB Operator is a tool for managing TiDB in a Kubernetes cluster. Hi, Maha. Similar to other operators in the Kubernetes ecosystem, we provide a set of CRDs, and the user can simply describe the desired state for a TiDB cluster, and the operator will automatically drive the cluster to its desired state.
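The parse-then-plan flow described above can be observed from the client side with a plain EXPLAIN. A small sketch, assuming the MySQL client can reach a TiDB endpoint at a placeholder host; the table name is illustrative:

```shell
# Ask TiDB to show the query plan it generated for a statement.
# Host and table are placeholders; substitute your own endpoint and schema.
mysql -h tidb.example.internal -P 4000 -u root -e "
  CREATE TABLE IF NOT EXISTS test.t (id BIGINT PRIMARY KEY, v VARCHAR(64));
  EXPLAIN SELECT v FROM test.t WHERE id = 1;
"
# The plan output shows which executors run in TiDB versus TiKV,
# i.e. which key regions will be read to satisfy the query.
```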
TiDB Operator can do lifecycle management for a TiDB cluster, set up the monitoring, set up things like data change capture clusters, and so on. Okay, let me change the slide. So here's the plan for today's live demo. First, we will cover the installation of TiDB Operator. Then we will label the nodes in a pre-created Kubernetes cluster so that each component can be scheduled to a dedicated node. We will create a TiDB cluster by applying a TidbCluster custom resource. After the cluster is up and running, we will run the TPC-C benchmark against that cluster, and we will log into TiDB to check out the newly added database. Next, we will access Grafana and the TiDB Dashboard to see the metrics being collected. We will also try to scale out and scale in the cluster and check out the changes on the dashboards. Finally, we will clean up the resources we've just created. Notice this is a live demo, so things could go wrong. Any questions before the demo? Not so far from the audience. Oh, now they're coming in. Someone asked if it's possible to get the deck afterwards. For example, are you maybe thinking about putting it on GitHub or somewhere else? Is there a place where people could find it afterwards? Okay, sure. I do have a link for the demo. I will probably share it in the chat. Perfect. We can get it there. I think we can also include it in the materials so that people can get it after the event. But also, if you are watching this live and your internet is poor, you can always watch the session afterwards on demand on the CNCF YouTube channel. It's going to be added there immediately. So you can either look at the slides afterwards or just watch the session play by play, exactly as it happened. So no worries there as well. Yeah, sure. And then there was another question from Laurentinus.
Could you share use cases that can be accomplished using TiDB but cannot be accomplished with other DBs? Okay, well, I think that really depends. There are different DBs on the market, and some use similar technology; I think CockroachDB and others are similar to TiDB. But I think the major difference is from traditional OLTP databases, for example, MySQL or Postgres. Those databases are, by default, pretty much single-instance databases. So you need to manage sharding and other things, and that could be tedious and also very challenging in some cases. For TiDB, I think the benefit is that it is very scalable, and you don't need to worry about managing the shards on your own. I think that's one of the major benefits. Another feature could be that it is marketed as an OLTP, sorry, an HTAP database, so it handles hybrid workloads, both transactional and analytical, because after some time of development, people added an analytical processing data store, something called TiFlash, which I just mentioned in the slides. That means, for example, when you are handling both transactional and analytical workloads, those two kinds of workloads won't really interfere with each other. So you won't see a very obvious performance drop if you are doing analytical processing and at the same time doing transactional operations, something like that. Perfect, sounds good. Then there was another question, kind of continuing on that topic as well. So my T-split asks, I'm curious about scaling and disaster recovery capabilities. Oh, okay. I think most of the disaster recovery capability is actually brought by Kubernetes. Kubernetes has components like StatefulSets and Deployments.
So we use those for high availability. For example, we typically see in our production environment that sometimes TiDB pods get OOM-killed. Basically, when we are handling some big queries, it takes a lot of memory to do the compute, and sometimes we didn't assign enough memory and the pod just crashed. With the help of Kubernetes, the pod will be brought up automatically. I think that's what cloud native brings to us. Yeah. Perfect. Then Jorge asks, TiDB uses PVs to store data, right? Or is it like a middleman between apps and DBs? Oh, I think the answer depends. In our use case, we use TiDB as a cloud native database and we deploy it in Kubernetes. When we deploy in Kubernetes, yes, it is using PVs as the data store. But in some use cases, people just deploy TiDB on bare metal machines. They're not using Kubernetes in that case, so they use local disks or something similar. Yeah. And then we had: regarding the cluster config, how many nodes can we have? Oh, okay. So for the number of nodes, I think there are several limitations in our use case. We are deploying the TiDB cluster on Kubernetes, and Kubernetes, I forgot the exact number, but it does have a node limitation. So that's one thing. And in our typical deployment, there are several different components, TiDB, PD, and others. All of them can have multiple replicas, and we basically assign a single pod to a single machine. So a pod and a machine typically have a one-to-one mapping in our practice. So I think one of the limitations could be bounded by the number of nodes that can join a Kubernetes cluster. That's one of the limitations.
But apart from that, I think there probably is also a limitation on how many nodes can join a TiDB cluster, but I would need to check the exact number. Yeah, no worries. Then we have ITSplot asking, how does it handle cluster or node upgrades, where nodes will need to be refreshed with new nodes or node groups? Okay, I think upgrades are actually a little bit tricky. In our practice, there are rolling upgrades. What we will do is change the version number in the TidbCluster CR object. That could be a little bit tricky, but basically it will trigger a rolling upgrade. Perfect. And the last of this cohort of questions: Santosh asked, is TiDB NoSQL? No, it's a relational database, just like MySQL. MySQL is a relational database, and this is also a relational one. Perfect. And we're obviously going to have more Q&A at the end, as well as throughout the whole demo, so please keep on asking questions. Thank you so much for so many already. But I think we can hop on to the demo now. Yeah, no worries. Okay, so I'm going to try to switch to the demo screen. Okay. Is the font size okay? Do I need to increase the font size? It looks good to me, but we can obviously see if anyone says anything. And actually we had a new question here that we can answer while everyone checks if the font size is good. So, does it auto-scale pods horizontally or vertically? How does it support scale-out? I see. So I think it can scale both horizontally and vertically. Typically we will install something called VPA, which stands for Vertical Pod Autoscaler. Basically, sometimes we want to increase the memory or CPU for, for example, TiDB pods automatically. So that's one thing. But it can also scale out.
To scale out, I think we just need to change the replica numbers in the TidbCluster spec to make it happen. Yeah. Perfect. And then we can grab the extra question from Santosh as well. So, are there any cloud native databases which are NoSQL and open source? I think there are a bunch of them, but it's probably not really related to today's demo. Great. Do you want to grab the last question there? Yeah, sure. So many questions, which is amazing, but we also need to get to the demo at some point. Do you want to hop straight into the demo, or do you want to take this last question? Maybe the demo will take some time to load the data, so maybe I will kick off the demo and answer the questions during the wait. Perfect, sounds good. Okay, cool. So I think we've covered the plan for today, so let me switch to this terminal. So I've already created a Kubernetes cluster: get nodes. I created a Kubernetes cluster on GKE with seven worker nodes before this presentation. To simplify the demo, the cluster nodes are all located in us-central1-a, that's the availability zone. In a typical production scenario, we would like to spread the workloads across multiple AZs, but for the sake of the demo, we will use a single-zone cluster. The first thing we want to do is install TiDB Operator. We are going to install it using a Helm chart; Helm is a popular application delivery tool in the cloud native realm. Let's add the PingCAP chart repository to my system. So here is the command to add it. Okay, it's already added. Then let's refresh it to make the charts up to date. Okay, perfect. And the next thing we want to do is apply the CRDs. Just like other operators, we need to install the CRDs first. Let me grab the CRDs here. Okay, so we can install the CRDs for version 1.3.9 here.
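The repository and CRD steps just described roughly correspond to the commands below; the URLs follow the pattern used in the TiDB Operator documentation for the 1.3 line, so double-check them against the version you actually install:

```shell
# Add the PingCAP chart repository and refresh the local chart index.
helm repo add pingcap https://charts.pingcap.org/
helm repo update

# Install the CRDs for TiDB Operator v1.3.9 before the operator itself.
kubectl create -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.3.9/manifests/crd.yaml
```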
To confirm the CRDs are installed correctly, we can run the following command, api-resources, and grep for pingcap. This will list all the CRDs installed in the pingcap API group. Okay, it seems those are installed correctly. Then we want to create a namespace named tidb-admin, and we will install TiDB Operator under that namespace. So let me create a separate screen. First, create the namespace, and then let's watch that namespace and see what's going on under it: -n tidb-admin get pod, and let's wait. Then on the upper screen, we will use Helm to install TiDB Operator under the tidb-admin namespace. Notice that we are using 1.3.9, which is the latest release of TiDB Operator. Okay, now we are seeing the pods being created. At this point it's probably pulling some images. Okay, the status becomes Running. That means the operator is installed correctly. Let me just cancel that. The next thing we want to do: in this cluster, we have seven nodes, and we want to assign one node for PD, two nodes for TiDB, and four nodes for TiKV. To tell the scheduler how to do the assignment, we need to label the nodes. So let's first label the first node for PD. Okay, the node is now labeled, and we will label the following two nodes for TiDB. Okay, those nodes are labeled, and the remaining nodes we will label for TiKV. Okay, I think the nodes are labeled correctly, but just to confirm, we can do a get node with the label dedicated=pd, and we see that one node is listed; for TiDB, two nodes are listed; and for TiKV, four nodes are listed. Okay, that means we labeled them correctly. The next thing we want to do is create a namespace for the TiDB cluster, so the TiDB cluster resources will be allocated in that namespace. We call that namespace demo.
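The verification, namespace, operator install, and node-labeling steps above look roughly like this; the node names are placeholders, and the `dedicated` label key is simply the convention this demo uses for its nodeSelector rules:

```shell
# Confirm the CRDs registered under the pingcap.com API group.
kubectl api-resources | grep pingcap.com

# Install the operator into its own namespace.
kubectl create namespace tidb-admin
helm install tidb-operator pingcap/tidb-operator \
  --namespace tidb-admin --version v1.3.9
kubectl -n tidb-admin get pods -w   # wait until the pods are Running

# Label nodes so each component lands on dedicated machines.
kubectl label node gke-node-1 dedicated=pd
kubectl label node gke-node-2 gke-node-3 dedicated=tidb
kubectl label node gke-node-4 gke-node-5 gke-node-6 gke-node-7 dedicated=tikv

# Sanity-check the labels.
kubectl get nodes -l dedicated=pd
```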
Now the namespace is created, and for the sake of the demo, we will switch the default context to demo, so for the remainder of the code lab, we won't need to type -n demo again and again. Okay, the next thing we want to do is download the sample TidbCluster YAML, and we would like to make some changes to that YAML. The first thing we want to change is the replica number for PD: by default it's three, and for this demo we want to make it one. And we don't need that much storage size; for example, 10 gigabytes for PD is too much for this demo, so we make it five. For the storage size for TiKV, we don't need 100; we will just make it smaller, 10 gigabytes is enough. And the final thing we want to change is actually for TiDB. Sorry, let me locate the TiDB service first. Okay, here we go. To access it from MySQL clients, we would like to expose the TiDB service via a load balancer. By default it's using the internal one, but we want to access it in this demo, so we make it external. I think that's all the changes we need to make to this TidbCluster object. Let's create it: create -f tidb-cluster.yaml, and on the second half, kubectl get pods -w. We should see the pods being created. It will only take a few minutes for those pods to come up. I think at this point the node is pulling some images. After the PD instance is ready, it will try to spin up the TiKV pods. Okay, so the TiKV nodes are being assigned, and now TiKV is up and running. And after TiKV is up and running, it will start the TiDB pods. I think we have two replicas of TiDB pods. We are seeing the TiDB pods being initiated here, and okay, now they are in the Running state. I think it will take a while for them to be ready.
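A TidbCluster custom resource with the demo's edits applied (one PD replica, smaller storage requests, and TiDB exposed via LoadBalancer) might look like the sketch below. It is modeled on the basic example from the TiDB Operator documentation; the field values here reflect the demo, not production recommendations, and the nodeSelector/toleration settings for the `dedicated` labels are omitted for brevity:

```shell
# Apply a minimal TidbCluster CR into the demo namespace.
kubectl -n demo apply -f - <<'EOF'
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
spec:
  version: v5.4.1
  pd:
    baseImage: pingcap/pd
    replicas: 1            # demo only; use 3+ in production
    requests:
      storage: 5Gi
    config: {}
  tikv:
    baseImage: pingcap/tikv
    replicas: 3
    requests:
      storage: 10Gi
    config: {}
  tidb:
    baseImage: pingcap/tidb
    replicas: 2
    service:
      type: LoadBalancer   # expose the MySQL endpoint externally
    config: {}
EOF

# Watch the pods come up: PD first, then TiKV, then TiDB.
kubectl -n demo get pods -w
```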
Let's get the TidbCluster object, basic, and see its status. Okay, the status is now Ready, so that means the cluster is up and running. The next thing we want to do is connect to that TiDB cluster and see what's inside the database. I'm just grabbing the endpoint for the TiDB cluster from the service, and then I'm going to connect to TiDB using the MySQL client. Okay, so now it's connected to the TiDB cluster, and we can see the version number for the TiDB cluster is 5.4.1. Let's show the databases. Okay, you can see initially it has some default databases. The next thing I want to do is create the monitoring components for it. So here is the monitoring tool. Let me first apply it, and I will explain what's inside. On the bottom half you can see the monitoring pod being created. And what's in that component? This is only a default configuration for the monitoring. Let's get the TidbMonitor object, or maybe just show the YAML. Basically, from this spec you can see that TidbMonitor is a wrapper on top of Prometheus and Grafana. And apart from those two components, there's an initializer. The initializer has some built-in logic to load the configurations into Grafana and Prometheus. It's as simple as that. So this is a simple configuration, but for production you would probably need more fine-grained settings, like the configuration for passwords, or the duration you want to persist the metrics in Prometheus; these are all configurable in a fine-grained configuration. Okay, so now the pods are up and running. Let's try to run the TPC-C benchmark against the cluster. I will try to create that. Let me create a new screen for it.
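A TidbMonitor sketch, again following the shape of the basic example in the operator's documentation; the image versions here are illustrative, so pick the ones matching your operator release:

```shell
# TidbMonitor wraps Prometheus, Grafana, and an initializer that
# preloads the TiDB dashboards and scrape configs into them.
kubectl -n demo apply -f - <<'EOF'
apiVersion: pingcap.com/v1alpha1
kind: TidbMonitor
metadata:
  name: basic
spec:
  clusters:
    - name: basic          # the TidbCluster to monitor
  prometheus:
    baseImage: prom/prometheus
    version: v2.27.1
  grafana:
    baseImage: grafana/grafana
    version: 7.5.11
  initializer:
    baseImage: pingcap/tidb-monitor-initializer
    version: v5.4.1
  reloader:
    baseImage: pingcap/tidb-monitor-reloader
    version: v1.0.1
  imagePullPolicy: IfNotPresent
EOF
```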
Let's first grab the endpoint IP and then run this command to load the data. Okay, as we are seeing some output, it's loading the data. We can switch back to the MySQL client, and at this point, when we show the databases, you can see there's a new database for TPC-C created here. Use tpcc. There are some tables newly created in that database: show tables, and so on. Let's select from the item table and look at five items. Okay, you can see that there's some fresh data being added to that database. It will take a while for the data to be loaded, so the next thing we want to do is check out the Grafana dashboard and also the TiDB Dashboard. Okay, still loading data; let's check out the services in the system. We can see that in the current namespace, only the TiDB service, that's the MySQL endpoint, is exposed using a load balancer. The other services are all ClusterIPs, which means they are not accessible from outside the cluster. To connect to those services, we need to do port-forwarding. By running this command, we are basically forwarding the Grafana service to our local port 3000. And similarly, by running this command, we are port-forwarding the TiDB Dashboard to a local port. All right, these are all being port-forwarded, and we can connect to these services. For example, let's first try out the Grafana dashboard. Okay, so now we are seeing Grafana, and the default configuration is using the default password, which is a weak password, to be honest. And we skip the change-password step. In this dashboard, you can see that there are some default built-in dashboards for the TiDB cluster, and we can check out the TiDB details. And we can expand the cluster view.
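The endpoint lookup and port-forwarding described above can be sketched as follows. The service names assume a TidbCluster named basic in the demo namespace; the TiDB Dashboard is served by PD under the /dashboard path:

```shell
# Grab the external MySQL endpoint of the LoadBalancer service.
HOST=$(kubectl -n demo get svc basic-tidb \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
mysql -h "$HOST" -P 4000 -u root

# Forward Grafana to localhost:3000 (Grafana's default login is admin/admin).
kubectl -n demo port-forward svc/basic-grafana 3000:3000 &

# Forward PD, then open http://localhost:2379/dashboard for the TiDB Dashboard.
kubectl -n demo port-forward svc/basic-pd 2379:2379 &
```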
You can see that the cluster has just been created; let me change the duration to 15 minutes and refresh every five seconds. You can see the cluster was just created, and shortly after it was created, we are seeing some load because we are running TPC-C against it. And from this point, you can see the metrics for regions on TiKV. We can see that every instance has 39 regions. I think that is because, by default, the data will be replicated to three TiKV instances, so every instance will have all the regions here. And although we have 39 regions in total, each TiKV instance is the leader for only around one third of the regions, something like that. Okay, then let's check out the TiDB Dashboard. So this is the TiDB Dashboard, and we are seeing some queries in the default view. We can check out the SQL statements. The TiDB Dashboard is mostly used for troubleshooting. There's a page for SQL statements, so we can see the details, some statistics for each single SQL statement. We can also check the slow queries. The slow queries page shows statistics for queries that take extra long to finish, and we are seeing that these typically take around 300 milliseconds. By clicking on a query, we can actually see... okay, this one is not very typical; let's look at an update, something like this. For example, if we check out some of the SQL statements, we can see there's a detailed query plan for that SQL statement, and this information is very useful for performance tuning. So that was a very basic tour of Grafana and the TiDB Dashboard. Let's go back to the screen; the data is still being loaded. Okay, it will take some time to load the data.
As we are loading the data, that's the reason why we are not seeing queries like SELECT in the SQL statements section; basically all the SQL statements are inserts and writes to the database, because we are currently loading data. Okay, well, it takes quite some time to load the data. At this point, originally we had 39 regions; now we have one more region because we are continuously writing data. What we can do at this point is scale out the cluster by changing the number of replicas for TiKV. So let's try to scale out the cluster here. Let's first check the current number of replicas in the system. Originally, in the TiKV spec of the TidbCluster, we specified three instances. We are just confirming that we have three instances for TiKV, and then we can do a patch here; we can create a patch to change the number to four. And you can see that instantly, a new TiKV pod is being created here. Yes. And if we check the replica number here, it changed from three to four. Now the new TiKV node is being initiated. So for scaling out, typically the change happens instantly after it is applied, and it will take some time for the TiKV pod to be up and running. There will be some latency on the dashboard, and later on we are supposed to see a new TiKV instance there. Let's go back to the... okay, it's taking quite some time to load the data. No worries, if there's some time, we do have a few audience questions that we can get to. Does that sound good? Okay, so there was a question from Sivanj: can we use TiDB with older Java versions like JDK 8? Will it be compatible or supported? I'm not an expert on that, but my understanding is that TiDB has spent quite some time working on compatibility with MySQL.
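The scale-out step amounts to patching the TiKV replica count on the custom resource. A sketch, assuming the TidbCluster is named basic in the demo namespace and that the `tc` short name for tidbclusters is available:

```shell
# Check the current TiKV replica count on the CR.
kubectl -n demo get tc basic -o jsonpath='{.spec.tikv.replicas}'

# Scale out TiKV from 3 to 4; the operator creates the new pod right away.
kubectl -n demo patch tc basic --type merge \
  -p '{"spec":{"tikv":{"replicas":4}}}'

# Watch the new TiKV pod come up.
kubectl -n demo get pods -l app.kubernetes.io/component=tikv -w
```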
So if your JDK can connect to MySQL, I think it can connect to TiDB in the same way. Great, and then IPSploit had a comment. I think they asked before about the cluster node upgrades, and they continued: tricky is the word when it comes to DBs, DBs in Kubernetes; a clear plan or process would be nice, especially with node refreshes and upgrades. Yeah, okay. So I think the upgrade does need some expertise. Sorry, I don't really have the expertise on that, so I can probably get back to my team and find more resources on it. Perfect. And then we had a person asking, is there a performance benchmark available in comparison to MySQL, MariaDB, Vitess, and so forth? Actually, I'm not sure about that. I think such a benchmark could be, I won't say tricky, but a little bit unfair, because MySQL is a single-instance database. It doesn't need to worry about the latencies between different components, while TiDB is distributed and very scalable. It can handle maybe not petabytes, but terabytes of data, and MySQL would find it hard to handle that much data. But since data never leaves the instance, MySQL's performance could be better. So it's really a question of performance versus scalability. By the way, I think there is a benchmark for TiDB itself on the official PingCAP website, but I'm not sure if they have published a benchmark comparing different kinds of databases on that page. Sorry. Yeah, perfect. And then there were, I think, two people asking about the Git link so that they can follow along with the steps you're doing. Okay, great. I think someone maybe linked it already, or they linked your docs link, but if you have the Git link, then I will send it to the attendees via the chat as well.
And then there was a question: Santosh asks, we should be having a TiDB driver to connect from Java, right? But then I think Lee Shen answered: since TiDB is compatible with the MySQL protocol and syntax, you can use MySQL drivers to connect to TiDB. If you want to elaborate on that, feel free to do so, but I think it was a good answer to start with. Yeah, thanks, Shelley. Yeah, and then people are asking for the Git link, and then a question: is it free to use in production environments without any licensing complications? Yeah, sure. TiDB is open source, I think under the Apache 2 license. And then a question was asked about the architecture diagram: it uses RocksDB behind the scenes. Yeah, like Shelley mentioned, RocksDB is the storage engine, and it's actually a library we use. Great, and I think that was it for the questions so far. Okay, sure. So let's get back to the panel, and we can see that after some latency, the new TiKV node has come up and is running. Also, we've finally loaded all the data, so instead of loading data, we can actually run the test against the cluster. Okay, it's supposed to run some queries, and from the dashboard, let's refresh it. Now, as we are running TPC-C against it, we are seeing some SELECT statements there. Okay, so there are many, many things to look at here, things like query plans. Let me check, but okay, now that we are seeing some output, that means TPC-C is really running. Okay, coming back to the Grafana dashboard, we are seeing the new TiKV node, and I think there is some balancing work between the TiKV instances; we are seeing that the new TiKV node is gradually picking up leaders for the regions. And now we want to try one more thing: we want to scale in the cluster by one node.
So, different from the scale-out operation, which happens instantly, right? For scaling in the cluster, let's say we delete the tikv-3 node: if we removed the tikv-3 node directly, then some of the leaders would not be evicted, and that would be a problem. There would be some disruption to the workloads, and we would see some difference in the query statements, right? What TiDB Operator does is it will not directly remove this tikv-3 node. Instead, it will try to evict all the leaders on that TiKV node, and after all the leaders are evicted, it will remove the node. This makes our lives easier. Okay, so let's try to change the replica number down to three. Okay, we can see that the change is applied, but on the bottom half of the screen, the TiKV pods have not changed yet, right? Because at this point, it's trying to evict the TiKV leaders from this node. It will take some time to evict all of them, but after the number of leaders on tikv-3 drops to zero, we are supposed to see the pod being terminated. And we are seeing that, okay, now it's being terminated. I think at this point the leaders are all evicted; there is some latency on the dashboard, but later on we are supposed to see a drop in the leaders on tikv-3. Let's go back to the TiDB Dashboard. We can refresh all the statements, and we are seeing more and more SELECTs at this point. We can try the slow queries and see: okay, these ANALYZE statements are taking long, but you can see that a SELECT statement could also take seconds to finish. At this point, this information could be useful for performance tuning later on. Okay, now back to the screen, and we are seeing that the tikv-3 node is now down, because we are not seeing its metrics here. Okay, that means the node has actually been removed. Let's see how many pods are still there, okay.
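Scaling in is the same kind of patch; the difference, as described above, is that the operator first drains region leaders off the store being removed before terminating its pod. A sketch under the same assumptions as before (cluster named basic, demo namespace):

```shell
# Scale TiKV back from 4 to 3; note the pod is NOT deleted immediately.
kubectl -n demo patch tc basic --type merge \
  -p '{"spec":{"tikv":{"replicas":3}}}'

# The operator evicts region leaders from the departing store first;
# the pod only terminates after eviction completes.
kubectl -n demo get pods -l app.kubernetes.io/component=tikv -w
```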
Now we can see that only three TiKV instances are kept here. I think that's most of my demo. And before I remove all the resources, any questions? I'll take some questions from here. Yeah, there was one question, for example, from AJ: if some nodes are down for a while, how do you handle split-brain? How do we handle split-brain? I think, from my understanding, maybe it's not completely correct, but TiKV is basically using Raft to do the consensus. So, for example, when you want to write some data to the database, there is only one TiKV instance taking the write request. Depending on the keys you are writing, the write could be scheduled to one of the TiKV nodes, but there is only one TiKV node taking the request. And if that node is down, then a re-election will happen: a new leader will be elected, and the writes will be shifted to that TiKV node. I think that's the overall theory. So, since there's only one instance taking the writes, I think there won't be split-brain. Okay, yep. Yeah, there are no audience questions currently that we haven't answered so far, but we have a few minutes left if anyone has them, so please do write your questions there. And I think you had maybe a resource that you want people to know about, where people can learn more about these things, possibly? Yeah, I've just shared the GitHub gist link with our staff. So, later on, I think we can also share this GitHub gist that I used, yeah. Perfect, and we have now shared the presentation. Now the GitHub link has gone out to all the attendees, perfect. Mm-hmm, yeah. Yeah, but if there are any questions from the audience, now is the time to ask them.
But yeah, did you have something coming up at KubeCon? I think you mentioned before. Oh, yeah. I wanted to tell people. Sure, so here's a little advertisement. KubeCon is around the corner, and at KubeCon my colleagues will host some presentations. If you are interested in TiKV, please be sure to visit the booth. And also, if you are interested in chaos engineering, we are also presenting Chaos Mesh; it's also a project initiated by PingCAP, so be sure to visit that booth there. I think that's the information I have so far. Perfect, great. Next steps and resources that people can utilize going forward. And then we had a question from the audience: could TiDB scale up vertically without downtime? Yes, yes. Like I mentioned before, in our environments, we will set up something like a VPA (Vertical Pod Autoscaler) to scale up, for example, memory and CPU for the nodes. Yeah, that can happen, but you need to configure the VPA on your own, yeah. Perfect. And we do still have a few minutes, so if anyone is typing out a question right now, please send it soon. Hanlin, is there anything else that you want to share? Yeah. Okay, I think the final thing is, I will just show how to remove the cluster, how to clean up the resources. So basically, in the command line, oh, let me just do a watch here. Yeah, you simply remove the cluster, and you can see that the pods are gone. You also need to delete the TidbMonitor. Yeah, the monitor resource is now gone, and then you need to remove the PVCs and the persistent volumes, yeah. And I think that's it. That's basically what I wanted to cover in this presentation, yeah. Perfect. And then I guess we have a comment from Mastimo on scaling vertically without downtime. They say, I think, scale-out strategy. Okay, thanks for the comment. Okay. Yeah, oh yeah. So final call, any questions if anyone is typing something out, but any final words from you, Hanlin?
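For the vertical-scaling question above, a VerticalPodAutoscaler targeting the TiKV StatefulSet is one way to configure what the speaker describes. A sketch only, assuming the VPA CRDs are installed in the cluster and that the operator created a StatefulSet named `basic-tikv` (both names are illustrative):

```yaml
# Hypothetical VPA: lets Kubernetes adjust CPU/memory requests for
# the TiKV pods (vertical scaling) instead of adding replicas.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: basic-tikv-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: basic-tikv
  updatePolicy:
    updateMode: "Auto"   # VPA may evict pods to apply new requests
```

Note the caveat from the talk: this is not set up by the TiDB Operator itself; you configure and operate the VPA on your own, and in `Auto` mode pod evictions are how new resource requests get applied.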
Anything to add at the end here now? Thanks for joining the webinar, yeah. I think that's all I have to say, yeah. Perfect, thank you so much. And since no new questions have popped up, we can start wrapping up. Thank you everyone for joining the latest episode of Cloud Native Live. It was great to have a session about how to build a multi-cloud database as a service. We love the interaction and questions from the audience. So many of them, always great to see that. And as always, we bring you the latest Cloud Native code every Wednesday. For the next few weeks, we will have a break because KubeCon is happening, but after that we have a lot of great sessions coming up, so stay tuned for those. But thank you for joining us today, and see you next time.