Thanks for your time. In this topic, we will explain how to use the NDB Operator to deploy NDB Cluster on Kubernetes. As we know, NDB Cluster is a shared-nothing clustering architecture. It is in-memory based, meaning data is stored in a distributed, in-memory data set. It is always on, with 99.99% system availability, so it is super highly available, and it is always consistent, because all nodes in NDB Cluster are read-write. In NDB Cluster, every node is read-write and always consistent, and it scales massively and linearly with extreme in-memory performance. So what do you get by running your database on NDB Cluster? First, performance: read-write scale-out. Because all nodes are read-write, you get more read-write throughput by adding more nodes. It is also real-time and in-memory; as we know, memory gives better response times than I/O to disk, with super-low, guaranteed latency. NDB Cluster is designed for low-latency, mission-critical applications. Data is automatically sharded across the NDB Cluster data nodes, so partitioning is available by default, and it is elastic: you can add nodes easily. Second, consistency. It is always-on, read-write scale-out; data is never left in a waiting stage. The moment a transaction is committed, the data is available on all nodes, always consistent; in fact, it is ACID compliant. Durability is provided by checkpoints: there are local checkpoints and global checkpoints to make sure the data residing in memory is backed up to disk. Third, NDB Cluster supports SQL. And it is highly available, completely, super highly available.
Yes, with 99.99% system availability, and I will show that to you later on. This is the NDB Cluster architecture. An NDB Cluster consists of multiple data nodes, shown here, that store data in memory and are completely consistent. On top of that, NDB Cluster offers various options for connecting to the data nodes below: you can write your own code in Java or C++ and use the NDB API to read and write data on the cluster. Or, since the majority of applications use a SQL interface to communicate with the database, NDB Cluster has SQL nodes, where SQL-based applications can connect as if they were connecting to a normal MySQL server; by "normal MySQL" I mean the application simply connects to MySQL and runs SQL queries as usual. So the general NDB Cluster architecture consists of three types of nodes. The first is the management node, which stores the configuration data of all nodes and controls cluster membership, as well as acting as an arbitrator when a network partition could potentially cause a split brain; the management node mitigates that issue. The second type is the data node, which runs the NDB engine to store data completely in memory, unless you define one or two columns in a table to be stored on disk, but that is not something people usually do; usually they store everything in data memory. The third is the MySQL server, so that applications can connect to the SQL node; when an application commits a transaction on a SQL node, the data is actually stored in the data nodes, provided the table objects use the NDB storage engine. So this is the high-level architecture.
In this example, we have four MySQL servers, all read-write; there is no failover and none of those mechanisms in NDB Cluster, because everything is read-write. The application can use MySQL Router, or any load balancer you like, to spread the load across the MySQL servers here. These MySQL servers do not store data locally; they store the data in the data nodes behind them, and the data is kept in memory. We have four data nodes here, split into multiple node groups: node group 0 and node group 1. We could have just one node group, node group 0, but if the data size exceeds the data memory, we need to distribute the data to a second node group. If the data grows further, we can add another node group, and the data will be sharded across them. Within each node group we can have one to four replicas, fully consistent. And we have two management nodes: one primary and one standby. Now, what if we have a major outage in the NDB Cluster? Since NDB Cluster is fully redundant, even if several nodes within the cluster have problems or go down, the cluster keeps running as long as the data is still complete, that is, a full set of the data is still available. Say, for example, data node 2 and data node 3 have a problem: data node 1 and data node 4 still hold a full set of the data, so this installation, this architecture, is still up and running. Even if the management nodes are down, the cluster stays completely up and running, because the management node is, in this case, only used for starting up the NDB Cluster nodes: when a node starts, it contacts the management node to get the parameters it needs to start. Otherwise, the cluster stays up and running.
So if our data is growing and one node group is not sufficient, we can add another node group, and the data will be sharded across them. Now, back to today's topic: running NDB Cluster on Kubernetes. Modern applications are developed as microservices running on cloud-native platforms: we break, or chop, the monolithic application into smaller applications and run them as microservices on top of cloud-native platforms like Kubernetes, and APIs have become the norm for these microservices to communicate and interact with each other. Based on a survey conducted by JetBrains, it is evident that microservices and REST have gained popularity nowadays. And based on a survey conducted by the CNCF, the majority of companies already have, or are at least evaluating, cloud-native environments in their organizations: in that CNCF survey, 82% of organizations deploy substantial cloud-native environments running microservices on top of Kubernetes. This is a high-level Kubernetes architecture. Kubernetes has worker nodes and a control plane; the control plane consists of multiple Kubernetes master nodes (not worker nodes, sorry). On the Kubernetes worker nodes, we can spin up containers. Containers cannot run by themselves on Kubernetes; they need to run inside pods. A pod gives a network identity to the containers inside it. MySQL itself can run as a container within a pod: we can run a single MySQL instance within a pod, or multiple MySQL containers in a single pod, with each container running only one MySQL instance.
And we can run a MySQL container side by side with sidecar containers inside a pod. Our MySQL NDB Operator for Kubernetes uses sidecar containers, basically to maintain the cluster inside the pods on Kubernetes. NDB Cluster is very suitable for running on Kubernetes. Resiliency: because NDB Cluster is also a shared-nothing architecture, it can operate without centralized management or any single point of failure; there is no single point of failure in NDB Cluster, and it is completely highly available. Scaling is very easy, and data is sharded across the NDB data nodes. It provides consistency as well. And standards: it supports standard SQL, and a SQL node is basically a normal MySQL instance with NDB storage engine support. So how do we deploy NDB Cluster on Kubernetes? In Kubernetes, we know there are Deployments, ReplicaSets, DaemonSets, StatefulSets, and so on. But a database is a stateful application: you need to store the database data on persistent storage, so persistent storage is really critical for a database. Therefore, the model for deploying a database on Kubernetes is the StatefulSet. In our MySQL NDB Operator, StatefulSets are used to deploy ndbmtd (the multi-threaded data node), ndb_mgmd (the management node of the NDB Cluster), and mysqld (the SQL node of the NDB Cluster). As you see here, we can create a StatefulSet consisting of three, four, or more pods, and then attach storage to it. Attaching storage requires a PV and a PVC: PV means PersistentVolume, PVC means PersistentVolumeClaim. A PV is a Kubernetes resource that defines the underlying storage so that it can be mounted and seen by resources within the Kubernetes cluster.
And a PVC, a PersistentVolumeClaim, is a definition, including the size, of the storage claimed from a PV to be mounted inside the StatefulSet's pods. In the MySQL context, we know that the MySQL server has a my.cnf, and inside my.cnf there is a parameter called datadir. The datadir always points to the directory where the database files are stored; normally it points to /var/lib/mysql. The idea is to mount this PVC at /var/lib/mysql inside the container running within the pod. This is the basic way to run a MySQL database on Kubernetes. In the context of NDB, we have the management node, the data node, and the SQL node. Pardon the diagram here: it is not a Deployment, it is a StatefulSet; all three of them are StatefulSets. The management node is deployed using a StatefulSet, the data node is deployed using a StatefulSet, and the MySQL node is also deployed using a StatefulSet. And we have CRDs: once you have the NDB Operator running on your Kubernetes, you have additional custom resources, with MySQL controllers, NDB controllers, and backup/restore controllers. And here it goes: we have the NDB management node running within a pod, as a management node container, and we also have a sidecar container running in the same pod. The sidecar container manages the management node, and it protects the pod from failing if the management node fails, so that the management node pods are always up and running. The management node also needs a ConfigMap, which is mapped into the management node container. We also have a pod for the data node container.
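To make the datadir mounting concrete, here is a rough sketch of a StatefulSet that claims storage through a volumeClaimTemplate and mounts it at /var/lib/mysql. This is an illustrative assumption, not the NDB Operator's actual manifest: the names, image tag, and sizes are all hypothetical.

```yaml
# Illustrative sketch only: a StatefulSet mounting a PVC at MySQL's datadir.
# All names, the image tag, and the storage size are assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-ndb                      # hypothetical name
spec:
  serviceName: mysql-ndb
  replicas: 2
  selector:
    matchLabels:
      app: mysql-ndb
  template:
    metadata:
      labels:
        app: mysql-ndb
    spec:
      containers:
        - name: mysqld
          image: mysql/mysql-cluster:8.0   # example image
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/mysql    # MySQL's datadir, as set in my.cnf
  volumeClaimTemplates:                # Kubernetes creates one PVC per pod from this
    - metadata:
        name: datadir
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

The volumeClaimTemplate is what gives each pod in the StatefulSet its own stable PVC, so a restarted pod reattaches to the same database files.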
It has a PersistentVolumeClaim and a PersistentVolume, because the data nodes are the component of the NDB Cluster where the data is stored: although the data lives in memory, at the end of the day we need to store this data on disk. This is how to install the NDB Operator. If you have a Kubernetes cluster running, whether with minikube, Rancher, or whatever it is, you can easily install the NDB Operator in two ways: we support Helm installation, and we support kubectl with YAML manifests. What I show here uses kubectl and YAML. The installation is pretty simple: once the YAML is applied with kubectl, we have the NDB Operator running as two pods. The first is the NDB Operator itself, and the second is the webhook server, which handles the various NDB Cluster resources. This is an NdbCluster YAML with a minimal configuration, showing how to deploy an NDB Cluster quickly on Kubernetes. I created a sample YAML, and we can run NDB Cluster in a custom namespace; in this demo I create a new namespace called ndb-clusters, and in my sample YAML I specify that the namespace I use is ndb-clusters. Then I can have a spec with, for example, a redundancy level of 2 and a data node count of 2. That means there will be one node group: two divided by two equals one, so one data node group consisting of two data nodes. The MySQL node count is also 2, and since the redundancy level is 2, there are two management nodes as well. Using kubectl, we can deploy the NDB Cluster with this YAML. Once it is applied, Kubernetes spins up the NDB Cluster, as you see here: we get two pods running management nodes, two pods running data nodes, and two pods running SQL nodes.
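The minimal NdbCluster resource described above might look roughly like this. The apiVersion and the exact field names (redundancyLevel, the data node and MySQL node counts) are assumptions from memory and vary between operator versions, so verify them against the CRD shipped with your NDB Operator release:

```yaml
# Sketch of a minimal NdbCluster spec matching the demo: redundancy 2,
# two data nodes (one node group), two SQL nodes, and therefore two
# management nodes. Field names are assumptions; check your operator's CRD.
apiVersion: mysql.oracle.com/v1
kind: NdbCluster
metadata:
  name: example-ndb
  namespace: ndb-clusters        # the custom namespace created for the demo
spec:
  redundancyLevel: 2             # replicas per node group; also implies 2 management nodes
  dataNode:
    nodeCount: 2                 # 2 data nodes / redundancy 2 = 1 node group
  mysqlNode:
    nodeCount: 2                 # SQL nodes
```

Applying a file like this with `kubectl apply -f` is what produces the two management, two data, and two SQL node pods described above.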
And all are deployed as StatefulSets, as you see here, three of them: management nodes, SQL nodes, and data nodes. The NDB Operator also automatically creates three kinds of Services, for the management nodes, the MySQL nodes, and the data nodes. Once done, we can check the status by logging in to any node. In this example, I log in to a management node using kubectl -n ndb-clusters (my namespace) exec -it, then the pod, which is a management node, and then the command I execute, which is ndb_mgm with -c to connect to itself, and then -e show. It then shows the status of all NDB Cluster nodes. As you see here, we have two NDB data nodes with IDs 3 and 4, two NDB management nodes with IDs 1 and 2, and two SQL nodes with IDs 148 and 149. Adding SQL nodes is quite simple: we can just edit the YAML file, change the count from two to four, and apply it. A moment later, we have the additional SQL nodes running; very simple. We can show the status after adding nodes the same way: as you see here, we now have four SQL nodes, two more than before. So now, how do we access the SQL nodes? At first we do not know the password. The NDB Operator actually creates a Secret and saves the password inside it. Since the password is saved in the Secret with base64 encoding, we need to use base64 -d to decode it, and the result is the password. Then I can log in to one of the SQL nodes using that password. From there, I can run queries, and I can get parameters and other information from the tables that reside in ndbinfo. And this slide is one I really want to show you, because it demonstrates that this is a truly active-active solution, as you will see.
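The decoding step works as sketched below. The Secret name and data key in the commented kubectl line are hypothetical, since the operator derives them from the cluster name; only the base64 round-trip at the bottom is shown literally.

```shell
# Hypothetical lookup of the password Secret created by the operator; the
# secret name and the "password" key are assumptions, not confirmed names:
#   kubectl -n ndb-clusters get secret example-ndb-mysqld-root-password \
#     -o jsonpath='{.data.password}' | base64 -d
#
# The decoding itself is plain base64, since Kubernetes stores Secret
# values base64-encoded:
encoded=$(printf 'S3cr3tPass' | base64)    # what you would see in the Secret data
printf '%s' "$encoded" | base64 -d         # prints the usable password: S3cr3tPass
echo
```

Whatever `base64 -d` prints is the password you pass to the mysql client when connecting to a SQL node.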
I log in to SQL node 1 and create a database, then create a table test.test with ENGINE=NDB. Do not forget: the engine must always be NDB so that the table itself resides on the data nodes. Then I insert three records using mysqld-0. When I query, both mysqld-0 and mysqld-1 show three records. Now I insert records number four and five using mysqld-1, and then I query from mysqld-0; as you see here, mysqld-0 also shows five records. That means it is truly active-active: you can connect your application to any MySQL node. And here is a best practice: make use of pod affinity and anti-affinity. Keep database nodes apart, across racks, and avoid collocating instances that share the same data, and so on. That's it, a short description of deploying NDB Cluster on Kubernetes. I hope you enjoyed it. Thank you so much for your time.