Thank you. Good morning, everyone, and I apologize for the technical issue I encountered just now. I hope I can finish this talk on time. Let me introduce myself. My name is Sananto, and as you can see from my logo, I am an Oracle employee, a MySQL Principal Solution Engineer. Today I would like to share with you some details about InnoDB ClusterSet. As the name indicates, an InnoDB ClusterSet is a set of InnoDB Clusters. The expectation here is that you have more than one InnoDB Cluster running on multiple sites, and ClusterSet is, basically, a standardized technology for doing data synchronization between InnoDB Clusters running on multiple sites. This is a disaster recovery solution. Why do we do this? Because we realize that organizations face a big challenge, especially if there is a major outage in the primary data center, and it can cost them a lot. Based on a survey, more than half say that the outages they experience cost more than 100,000 US dollars, and most of the time they say that on-site power failure is the biggest cause of significant outages. And, you know, in a data center there is a lot of important equipment: we have servers, we have firewalls, we have applications, everything mission-critical. Power is just one part of it; power is not everything in that case. But all of that is nothing without power. If you lose power in the data center, that's it. So, to mitigate this, businesses need to have an alternate site, run a duplicate of the system on the alternate site, and use database synchronization. And as you see, we have MySQL asynchronous replication. It is very simple, and it has been around for a long time. We have a source, we have replicas. Every time we make a transaction and commit it on the source, the binlog events are sent over to the replicas.
And the replica threads capture the data and commit the same data to the storage engine, so the expectation is that the data on the source and the replicas will be consistent. But that is not always true. And in the past, DBAs had a lot of work, especially if they wanted to deploy replicas. The source is where the transactions happen and the replica is where the transactions get replicated; the primary is read-write and the secondary is read-only. A lot of jobs needed to be done manually by the DBA in the past. They needed to back up the database, then transfer the backup from the primary site to the alternate site. They needed to restore the backup to create the secondary database, and then start it alongside the primary. They needed to create a replication user on the primary database and grant the REPLICATION SLAVE privilege to that user. And then they needed to connect to the secondary database and create the replication channel. All of this had to be done manually, and it is not a complete solution if we are talking about disaster recovery, because disaster recovery means we also need orchestration, right? If an outage happens, we need to make sure human intervention is minimal; everything should be standardized. So there were a lot of workarounds in the past, let's say writing custom scripts for DR orchestration, or incorporating third-party tools so that, in case DR activation happens, those tools can orchestrate the disaster recovery procedures. But now we have InnoDB Cluster. InnoDB Cluster is based on Group Replication, as you see. It still uses asynchronous replication underneath.
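The manual replica setup described above could look something like this in SQL; host names, the account name, and the password are illustrative assumptions:

```sql
-- On the source: create a replication user and grant the privilege.
CREATE USER 'repl'@'%' IDENTIFIED BY 'repl_password';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';

-- On the secondary (after restoring the backup there):
-- configure the replication channel and start it.
CHANGE REPLICATION SOURCE TO
  SOURCE_HOST = 'primary.example.com',
  SOURCE_USER = 'repl',
  SOURCE_PASSWORD = 'repl_password',
  SOURCE_AUTO_POSITION = 1;  -- GTID-based auto-positioning
START REPLICA;
```

Every one of these steps had to be run by hand on the right server, which is exactly the operational burden that InnoDB Cluster and the AdminAPI remove.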
Okay, however, we use the Group Replication protocol to make sure this asynchronous replication becomes virtually synchronous, so data will be consistent across the InnoDB Cluster members. We have a primary and multiple secondaries; it is read-scalable. The primary runs in read-write mode and the secondaries run read-only. And we have MySQL Router, as explained by the previous presenter. MySQL Router is our default component in InnoDB Cluster to provide connection transparency to the applications, so the applications connecting to the InnoDB Cluster do not need to change. Of course, this is a layer-4 router; it is based on port numbers. There are two kinds of ports: first is the read-write port, and second is the read-only port. So applications see MySQL Router as the database endpoint and connect to MySQL Router. If an application connects to the read-write port, MySQL Router redirects the connection to the primary. If the application connects to the read-only port, it redirects the connection to the secondary nodes. And we have integration with MySQL Shell as well. MySQL Shell has the AdminAPI, which enables DBAs to deploy this InnoDB Cluster easily. You do not have to take a long time to pick up how to deploy InnoDB Cluster; it is very easy. You can deploy a three-node cluster in less than five minutes if we are talking about a standard default InnoDB Cluster installation. Okay, this is one of the differences between synchronous replication, normal asynchronous replication, and the synchronization that happens in the Group Replication used by InnoDB Cluster: every time a new transaction gets committed, it needs consensus, it needs a quorum. Consensus happens when more than half, a majority, of the nodes within the group certify the transaction. Then the transaction gets committed on all nodes.
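As a rough sketch of the port-based routing just described: 6446 and 6447 are MySQL Router's usual default classic-protocol ports, but they are configurable, and the host and user names here are examples only.

```shell
# Read-write port: Router forwards the connection to the primary.
mysql -h router.example.com -P 6446 -u app_user -p

# Read-only port: Router load-balances across the secondaries.
mysql -h router.example.com -P 6447 -u app_user -p
```

The application only ever needs to know the Router endpoint, not which cluster member is currently the primary.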
The result is that InnoDB Cluster can provide maximum data protection. Okay, whenever a new transaction comes into the Router on the read-write port, the Router redirects that connection to the primary node. And if there is a commit, the binlog events are sent over to all the secondaries, the certification process happens, and upon quorum each member commits the transaction to the InnoDB storage engine. There are several modes to control the consistency in InnoDB Cluster. If you take a look here, the time when a transaction actually gets committed on each node may not be exactly the same, because we use eventual consistency as the default consistency in InnoDB Cluster. However, if your application connects to a secondary node and requires the newest data, you can set the consistency level to BEFORE. There are the consistency levels EVENTUAL, BEFORE, AFTER, and BEFORE_AND_AFTER, so this is pretty much down to application requirements. And InnoDB Cluster provides automatic failover: whenever the primary node fails, Group Replication promotes one of the other nodes to become the new primary, and the application does not need to change its connection, because of MySQL Router. MySQL Router redirects the read-write connections to the new InnoDB Cluster primary node. How about if the failed node comes back? Once the failed node comes back, it tries to rejoin the cluster; we have auto-healing capability. Once the node comes back, it tries to catch up on the transactions, to catch up on state, by doing distributed recovery. It can use clone, or it can use incremental recovery, and the Group Replication framework automatically selects which distributed recovery method is suitable for recovering the failed node. And once that is completed, the failed node fully rejoins the InnoDB Cluster as a secondary node.
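The consistency levels mentioned above are controlled through the `group_replication_consistency` system variable; the table name in the read is a hypothetical example.

```sql
-- Per-session read consistency. Levels: EVENTUAL (default), BEFORE,
-- AFTER, BEFORE_AND_AFTER. BEFORE makes this session wait until the
-- node it is connected to has applied all preceding transactions
-- before executing the read.
SET SESSION group_replication_consistency = 'BEFORE';

SELECT * FROM app_db.orders;  -- example read; now sees the latest committed data
```

Because it can be set per session, only the reads that truly need fresh data pay the synchronization cost, while everything else keeps the default eventual consistency.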
And at that point in time, the Router will start to see it as a secondary node again; the failed node joins back as a secondary node. How about MySQL Router? We advise MySQL Router to be installed on the same host as the application, so that the application can connect to its local MySQL Router. Or, optionally, we can run MySQL Router outside the application servers, but then you need to provide high availability for MySQL Router itself: you need more than one MySQL Router, and the load balancing or high availability of the Routers needs to be provided by a third party, let's say Pacemaker or Keepalived. And in 2020, we released InnoDB ReplicaSet. It is basically MySQL asynchronous replication with add-ons. First, integration with MySQL Shell and the AdminAPI, so creating the asynchronous replication is no longer a completely manual process, but is automated and orchestrated using the AdminAPI. And second is integration with MySQL Router. So, using MySQL InnoDB ReplicaSet, you do not need to have an IP failover when the master fails or something like that; we can use MySQL Router to redirect the connections from the applications. If the application is connected to the read-write port of the Router and the primary node fails, then, once a primary is available on another node, MySQL Router will redirect the new connections to the new primary node. But bear in mind, InnoDB ReplicaSet does not have automatic failover, so we need to fail over the primary node manually. So, in order to make this even more suitable for a disaster recovery solution, we have a disaster tolerance solution with MySQL InnoDB ClusterSet. So what is a ClusterSet? We have two or more InnoDB Clusters with asynchronous replication to replicate data between the primary of the primary cluster and the second cluster, which we call the replica cluster, because it is a read-only cluster. Okay, it replicates into the primary node of the replica cluster.
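A minimal MySQL Shell (JavaScript mode) sketch of the InnoDB ReplicaSet workflow just described; the host names and the admin account are illustrative assumptions.

```js
// Connect to the instance that will become the primary.
shell.connect('admin@host1:3306');

// Create the ReplicaSet and add a secondary; the secondary is
// provisioned automatically (clone by default), no manual backup
// and restore needed.
var rs = dba.createReplicaSet('myReplicaSet');
rs.addInstance('admin@host2:3306');
rs.status();

// There is no automatic failover in a ReplicaSet: switching the
// primary is a manual, but orchestrated, operation.
rs.setPrimaryInstance('admin@host2:3306');
```

Compare this with the manual SQL steps earlier in the talk: the AdminAPI performs the same work, but in a standardized and orchestrated way.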
Bear in mind that in this architecture, only one node, the primary of the primary cluster, runs read-write; the rest of the nodes run read-only. Upon commit on the primary, once certification is done and the transaction is committed on the primary cluster, the data is replicated using asynchronous replication to the primary node of the replica cluster. And again, it has the MySQL Shell AdminAPI, so you can install and configure it in a very, very easy way. So, if you have three data centers, you can extend this: each data center has its own InnoDB Cluster, but only one becomes the primary cluster and the two others are replica clusters. We do not have automatic failover from the primary cluster to a replica cluster, but I will tell you why later on. Okay, not every cluster has to be three nodes. One of the clusters can have a different topology. Let's say you only need read replicas on the disaster recovery site, but you do not want high availability there. Okay, that's fine. The replica cluster can consist of only one node, its primary node, running read-only on your disaster recovery site. How about MySQL Router? MySQL Router can run on every site, every data center, and the application can transparently connect to MySQL Router. If the application connects to MySQL Router on the read-write port, then, since MySQL Router is aware of the ClusterSet topology, it will direct the connection to the primary node of the primary cluster.
So, underneath, even though in the database infrastructure we have a single cluster as the read-write cluster and the rest are read-only clusters, if you have a requirement that each site be active-active, you can use MySQL Router as the database endpoint, because MySQL Router can run everywhere, okay. So, as you see over here, even though MySQL Router by default always looks for the primary cluster, a MySQL Router running on the secondary site can also connect to the primary cluster. But additionally, optionally, we can have a MySQL Router configured to connect only to a specific cluster within the InnoDB ClusterSet. So, this is the MySQL InnoDB ClusterSet configuration. First of all, we need to create an InnoDB Cluster. As you see, it is very simple: dba.createCluster(). Then you will have one InnoDB Cluster with one node, and then you can just call addInstance() to add the second node and the third node. You do not need to do a backup and restore, because it will use clone. Very simple. And once done, you can issue the command cluster.status(), and you will see that you have a three-node InnoDB Cluster. Okay, then we need to create the ClusterSet. We define a ClusterSet for this cluster, and the command is very simple: createClusterSet(). That's it. Then you will have one ClusterSet with one InnoDB Cluster, as you see. Then we can add the replica cluster, and adding it is very simple: clusterSet.createReplicaCluster(). Then you will have two InnoDB Clusters: an additional InnoDB Cluster has been added into the ClusterSet, with one node, which is the primary node of the replica cluster. After that, we can add instances to that replica cluster, so that we have a second instance and a third instance running on the replica cluster.
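The whole sequence just walked through can be sketched as one MySQL Shell (JavaScript mode) session; host names, the account, and the cluster and ClusterSet names are illustrative assumptions.

```js
// --- On the primary site: build a three-node InnoDB Cluster ---
shell.connect('admin@dc1-host1:3306');
var cluster = dba.createCluster('primaryCluster');
cluster.addInstance('admin@dc1-host2:3306');  // provisioned via clone
cluster.addInstance('admin@dc1-host3:3306');
cluster.status();

// --- Promote the cluster into a ClusterSet ---
var cs = cluster.createClusterSet('myClusterSet');

// --- On the DR site: add a replica cluster and grow it ---
var replica = cs.createReplicaCluster('admin@dc2-host1:3306',
                                      'replicaCluster');
replica.addInstance('admin@dc2-host2:3306');
replica.addInstance('admin@dc2-host3:3306');
```

No backup, transfer, or restore steps appear anywhere: each new instance is provisioned automatically when it is added.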
And you will have a three-node InnoDB Cluster as the primary cluster and a three-node InnoDB Cluster as the replica cluster. As you see over here, you can check with clusterSet.status(), and you will see that you have the two InnoDB Clusters, one primary and one replica cluster. Then, if you need more information, you can add the extended option, and you will get more information in the output. Okay, this is the high-availability scenario. I hope I still have time. Yes, I still have about seven minutes. We can change the primary member within the primary cluster. Let's say you have node 1, node 2, and node 3 in the primary cluster, node 1 is the primary node, and you want to run the primary on the second node. You want to switch? Then, yeah, easy. The same as in InnoDB Cluster: you can just issue the command setPrimaryInstance(). The same thing with the replica cluster, okay? You can connect to one of the nodes of the replica cluster and run setPrimaryInstance() to switch the primary node from node 1 to node 2, for example. How about the replication? The replication will follow. If you set the primary instance to the second node on the primary cluster, the replica cluster will follow: the replication will change to come not from node 1 of the primary cluster, but from node 2, yeah. If the primary node of the primary cluster fails, it is the same, right? It is the same as switching the primary from node 1 to node 2: there will automatically be a failover to the second node, and the asynchronous replication will automatically adapt to the new change. The same thing with the replica cluster, okay?
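A short MySQL Shell (JavaScript mode) sketch of the status check and the in-cluster primary switch described above; host names are illustrative assumptions.

```js
shell.connect('admin@dc1-host1:3306');
var cs = dba.getClusterSet();

// Extended output adds topology and replication-channel detail.
cs.status({extended: 1});

// Switch which member is the primary within a cluster; the
// ClusterSet replication channel re-points automatically.
var cluster = dba.getCluster();
cluster.setPrimaryInstance('admin@dc1-host2:3306');
```

The same `setPrimaryInstance()` call works whether you are connected to the primary cluster or to a replica cluster.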
If the primary node of the replica cluster goes down, okay, and there is a failover to another node, then the replication will still happen, now into the new primary node of the replica cluster, from the primary node of the primary cluster. And then we can switch roles, okay? Let's say you have a requirement: let's run our application on the other site. Then we can just run setPrimaryCluster(), and it will flip the cluster roles. The primary cluster becomes a replica cluster, and the replica cluster becomes the primary cluster, as you see over here. So how about the Router? The Router will always follow the primary cluster, so we do not need to change anything on the application side. Okay, this is the process. Basically, once we run setPrimaryCluster(), the cluster roles are flipped: the primary cluster becomes the replica cluster and the replica cluster becomes the primary cluster. If we have a data center crash or a network partition, then we have a command called forcePrimaryCluster(), okay, to force the replica cluster on the alternate site to become the primary cluster. But, okay, since in this kind of scenario we cannot know the circumstances, we decided not to have automatic failover between clusters. But this is up to you, actually: you could develop a custom, so-called witness on a third site, let's say in the cloud, just to monitor the primary cluster and the replica clusters and do some sort of automatic failover from primary to replica. I think it is really situational: how much data loss can you take? Because, anyway, if the primary cluster is unavailable, we do not know its status. Maybe some applications are still writing into it, so potential data loss will still be there. Okay, so this is the process of emergency failover to another cluster, as you see.
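The two role changes described above, the planned switchover and the emergency failover, can be sketched like this in MySQL Shell (JavaScript mode); host and cluster names are illustrative assumptions.

```js
// Connect via a surviving member on the DR site.
shell.connect('admin@dc2-host1:3306');
var cs = dba.getClusterSet();

// Planned role switch: both clusters are healthy, replication is
// synchronized first, so there is no data loss.
cs.setPrimaryCluster('replicaCluster');

// Emergency failover: the primary cluster is unreachable. The old
// primary cluster is marked INVALIDATED, and transactions that had
// not yet been replicated may be lost.
cs.forcePrimaryCluster('replicaCluster');
```

This is why the failover is deliberately manual: only the operator can judge whether the old primary is truly gone and how much data loss is acceptable.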
We can use forcePrimaryCluster() to make the replica cluster the primary cluster and set the existing primary cluster to INVALIDATED status, and so on. So this is the status table, as you see over here: only when the cluster status is NOT_OK, INVALIDATED, or UNKNOWN do we need an emergency failover; for the rest of the statuses we can do a switchover. So these are the restrictions that we have, okay, on InnoDB ClusterSet: it requires MySQL 8.0.27 or higher, it only works with single-primary mode, one primary and multiple secondaries, replication between the clusters is asynchronous, and multi-primary mode is not supported. Okay, thank you so much for listening to my presentation, and thanks for the time. Yes, please. Thank you. Okay, if I'm not mistaken. Okay, let me give you the mic. Yeah, Alvin here. Okay, so what is the most difficult change management issue you can think of, and how would you execute it? Okay, so for example, an upgrade from one release to another, or maybe even a secondary node: you have a secondary node that is in this geographic location and you move it to another country, for example, all right. So that kind of scenario. Thank you so much for the question. Okay, a very good question. Change management is more of an operational kind of thing, okay: how the users or customers actually use InnoDB ClusterSet. Let me give you a scenario. For example, if the customer is using, let's say, a standalone instance, how to convert this standalone instance into an InnoDB ClusterSet across geographical locations, and then how to upgrade, how to do patching, and so on, right? Is that your question? Yeah, yeah. A scenario, okay.
Well, let's say, for example, we have a three-node InnoDB Cluster in, let's say, Singapore, and we have disaster recovery in Kuala Lumpur, let's say, and we want to create an InnoDB ClusterSet to link up this InnoDB Cluster in Singapore and replicate to the InnoDB Cluster in KL, for example, right. And then we want to do an upgrade, let's say. I'm sorry. Singapore to KL. The primary cluster in Singapore, the replica cluster in KL. Is that what you want? Okay, so let's say you want to do the upgrade, right. Okay, let me give you an example. If we have a data center move, let's say, for the secondary data center: I don't want to use the KL data center anymore, I want to use the Middle East, for example, right. Then it's a pretty simple process. Basically, we need to create the replica cluster in the Middle East first, so we run three InnoDB Clusters within the InnoDB ClusterSet. Okay, and after that, we can just decommission the InnoDB Cluster in KL. Yeah. The same thing with upgrades: we upgrade the replica clusters first and then follow with the primary cluster. Thank you.
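The data-center move just described could look something like this in MySQL Shell (JavaScript mode); host names and cluster names are illustrative assumptions.

```js
// Connect via the primary cluster in Singapore.
shell.connect('admin@sg-host1:3306');
var cs = dba.getClusterSet();

// 1. Add the new Middle East site as a third cluster, so three
//    InnoDB Clusters run in the ClusterSet during the transition.
var me = cs.createReplicaCluster('admin@me-host1:3306',
                                 'middleEastCluster');
me.addInstance('admin@me-host2:3306');
me.addInstance('admin@me-host3:3306');

// 2. Once the new replica cluster has caught up, decommission the
//    old KL replica cluster.
cs.removeCluster('klCluster');
```

At no point does the primary cluster stop serving traffic, which is what makes this kind of change management straightforward with ClusterSet.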