I want to explain MySQL Group Replication. I'll explain group replication itself, then use cases, and then a summary and conclusions. Group replication, as Ricky was saying, is multi-master, update-anywhere replication. It's a plug-in for MySQL, with automatic distributed recovery, conflict detection, and a group membership service. What it does for the user: it removes the need to handle a server failure manually, because that's automatic now; it provides fault tolerance, enables update-anywhere setups, automates group reconfiguration, and provides a highly available replicated database system. We call these five masters one group, and that's why we call it group replication. Clients can connect to any of the masters, and together those servers form a replication group. Here's how it works. Say one client is doing an update WHERE A = 1, and another client is doing an update WHERE A = 2. As you can see, they are not conflicting, so both will go ahead and both will commit. Whichever server a transaction commits on, at the prepare phase that server broadcasts the transaction to all the rest of the servers in the group, and the same happens for the other transaction. Since they are not conflicting, both are committed on all the servers in the group. Now take the case where both of them try to update the same tuple. We have a certification database on all the servers, and with it whichever transaction comes first into the group wins and commits; the other one loses and is rolled back. By looking at the certification database we can identify that a particular transaction is a conflicting transaction.
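To make the conflict case concrete, here is a minimal sketch in SQL; the table and column names are made up for illustration, and the rollback behaviour is as described above (first transaction certified by the group wins):

```sql
-- On member A (hypothetical table t with primary key id):
UPDATE t SET a = 1 WHERE id = 10;
COMMIT;  -- certified first by the group: commits on all members

-- Concurrently on member B, touching the same row:
UPDATE t SET a = 2 WHERE id = 10;
COMMIT;  -- conflicts with the transaction above; certification fails
         -- and this transaction is rolled back on its originating member
```

If the two updates touched different rows (different write sets), neither would fail certification and both would commit group-wide.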
A question: for this certification database, does that mean you maintain the certification data on some other database? No. The certification database is a kind of cache maintained on every machine separately. Every server has its own certification database, but all of those certification databases are in sync. It's an in-memory cache. Now, automatic distributed recovery: let's say five servers are in the group, and a new server wants to join. By setting the group name, the new server says, OK, I want to join. One of the members, let's assume the second master, says, OK, I will be the donor for you. Then asynchronous replication runs between that second master and the newly joining server, and the joining server is in the RECOVERING state. Because it is in the recovering state, none of the writes go there. Once it has downloaded all the data from the donor, it joins the group and moves to ONLINE mode, and after that it can serve write requests too. Now let's say the fifth server says, I have maintenance, and I have to go down. So that server goes down, and the rest of the members will know after five seconds, which is configurable, that one member has left. Now the remaining five servers coordinate among themselves, and when there is any voting decision, only those five servers participate. Say the server comes back after an hour of maintenance. The same thing happens again: it says, I want to join you, but I already have such-and-such transactions. Again one of the members says, OK, I will be the donor, asynchronous replication runs between the donor and the rejoining server, and that server stays in RECOVERING mode until it is actually online with the group.
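The join step described above comes down to a couple of statements on the joining server. This is a hedged sketch: the UUID and the replication user credentials are placeholder values you would replace with your own.

```sql
-- Point the server at the group by its name (a UUID shared by all members):
SET GLOBAL group_replication_group_name = 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee';

-- Credentials used for the recovery (donor) channel:
CHANGE MASTER TO MASTER_USER = 'rpl_user', MASTER_PASSWORD = 'rpl_pass'
  FOR CHANNEL 'group_replication_recovery';

-- Join; the member stays in RECOVERING until it catches up, then goes ONLINE:
START GROUP REPLICATION;
```

The same `START GROUP_REPLICATION` statement is used both for the initial join and for rejoining after maintenance.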
All of these things are done automatically inside the group replication plug-in itself. And even though it's a plug-in, it works exactly like regular MySQL: whatever you do on a regular MySQL server, you can do here as well. For example, InnoDB tables work the same way here as they do on a regular server. We also have group replication tables in Performance Schema, which give you information on the status, how many members there are, and how many transactions are happening on each server. Group replication is fully GTID based. The whole group has one UUID, which is the same as the group name. So one server generates a transaction identified as group_name:1, and a transaction happening on another server uses the same group name, so the GTIDs stay in order. You can even have asynchronous replication from outside the group into the group. Conflicts will be detected there too: say you are connected to master one, and something from outside reaches the group; if there are conflicts with what the other masters are doing, that is detected, the transaction is rolled back, and it does not enter the group. The other way around is also possible: a server that is not inside the group, and does not want to join the group, but wants all the data from that group, can set up asynchronous replication from one of the group's masters to that normal outside server. There is another concept called read-only mode: say something went wrong on a member and you don't want that to stay writable in the group. The moment the group replication plug-in detects that something is wrong on that master, it puts it into read-only mode.
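The member status mentioned above can be checked from Performance Schema; this query is against the group replication membership table that ships with the plug-in:

```sql
-- List the members of the group and their states (ONLINE, RECOVERING, ...):
SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE
  FROM performance_schema.replication_group_members;

-- The executed GTIDs all share the group's UUID, e.g. group_name:1-42:
SELECT @@GLOBAL.gtid_executed;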
The reason we put it into read-only mode is so that no write requests are served on that server. Until the DBA fixes it and says START GROUP_REPLICATION again, no write requests will be served. So the DBA solves whatever went wrong on that master, and then the same procedure runs: after START GROUP_REPLICATION it goes into recovery mode again, one of the members acts as donor, and it gets back in sync with the rest of the servers. For those already familiar with the multi-threaded slave in regular replication, that parallel applier support is available in group replication as well. On each server you can enable the parallel applier by doing the same thing you do in regular replication: set the parallel workers to, say, eight, set the parallel type, and set preserve commit order, which preserves the same order as on the originating server. Question: do I have to set these parallel parameters on the slave or on the master? In group replication there is no master or slave; everything is a node, and every node is a master. If you want it on one particular node, you do the settings on that node; if you want the best applier performance everywhere, you set these things on all the nodes. So do all the nodes in the group have to use it? They don't have to. So far, everything I have explained is multi-master mode, where every server is ready to accept write requests. But there are situations, for example when a DBA is running a DDL, where you don't want DMLs happening on the other machines at the same time.
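The parallel applier settings referred to above are the same system variables as in regular replication; a sketch, assuming the applier threads are stopped while you change them:

```sql
-- Set on each node where you want the multi-threaded applier:
SET GLOBAL slave_parallel_workers      = 8;
SET GLOBAL slave_parallel_type         = 'LOGICAL_CLOCK';
SET GLOBAL slave_preserve_commit_order = ON;  -- keep the originating commit order
```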
For that there is a setting called single-primary mode. When you enable it, the group replication plug-in will automatically pick one of the servers from the group and mark that particular server as the single primary. All write requests then go to the single primary. We made this the default mode. There are a few limitations with multi-master mode, which I will cover, and that's the reason we made single-primary the default. So after you install it, if you want multi-master enabled, you have to change the setting. As I said, one of them is picked automatically, and the rest, which we can call slaves in this case, because only one of them is a master, all know who the primary is. Let's say that primary goes down. The group replication plug-in automatically detects that and promotes one of the other servers in the group to primary. This is also done automatically by the plug-in itself. And if you want to know who the primary is at any moment, there is a query on a Performance Schema table you can execute which tells you which server is currently acting as the primary. We also have a built-in communication engine inside, so you don't need third-party software; and since we implemented our own communication engine, which broadcasts the transaction events to all the servers, it can easily be used in the cloud too, because no network multicast support is required. These are the requirements: for example, group replication supports only InnoDB tables, and you should have a primary key on all the tables.
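Both of the knobs just mentioned are queryable; this sketch enables single-primary mode (set before the group is started) and shows the primary lookup via Performance Schema:

```sql
-- Enable single-primary mode (this is the default):
SET GLOBAL group_replication_single_primary_mode = ON;

-- Find out which member is currently acting as the primary:
SELECT VARIABLE_VALUE
  FROM performance_schema.global_status
 WHERE VARIABLE_NAME = 'group_replication_primary_member';
```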
It requires global transaction identifiers (GTIDs) to be turned on, the binary log to be turned on, and row-based binary log format. And your application should be optimistic, because there can be transactions that are rolled back even after the prepare phase; so the application you are planning to run on group replication has to be optimistic about that. And as DT said, we currently have a limitation that only nine servers can be in one group. Currently there are four things that are forbidden, which we are not supporting now: the SERIALIZABLE isolation level when you enable multi-master, cascading foreign key constraints, transaction savepoints, and binary log event checksums. Question: cascading foreign keys are not supported; does that mean things like ON DELETE CASCADE or ON UPDATE CASCADE? Yes, those cascading constraints are not supported. And about the nine-node limitation in a group: for example, is it possible to have two groups and one application across them? We are working on that; in a coming release it should be possible. And with concurrent DDLs, you have to be a little careful when you run them in multi-master mode; in single-primary mode there is no problem at all. For use cases: elastic replication, where the load on your application is increasing or decreasing and you want to add or remove nodes, you can do easily by just having a server say, OK, I want to join the group. And you can have highly available shards: each shard is one group, you can have multiple shards, and each shard can be its own replication group.
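The requirements listed above map onto server options roughly like this; a minimal my.cnf sketch, where the server id and group name are example values and the exact option set may vary by MySQL version:

```ini
# Prerequisites for a group replication member (example values)
server_id                        = 1
gtid_mode                        = ON
enforce_gtid_consistency         = ON
log_bin                          = binlog
binlog_format                    = ROW
binlog_checksum                  = NONE       # event checksums not supported
transaction_write_set_extraction = XXHASH64   # needed for certification
plugin_load                      = group_replication.so
group_replication_group_name     = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
```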
Another use case: simple asynchronous master-slave replication doesn't have all these automatic features, like picking a primary and, if the primary goes down, automatically promoting one of the slaves; for that automation you have to use tools like MySQL Failover. But if you set up group replication in single-primary mode, it is equivalent to master-slave replication with all of those features automated. So in summary: it's cloud friendly, because we have our own internal communication system; it's well integrated with the rest of the MySQL server; and it's operations friendly. And there is one more thing we are working on, on top of group replication: MySQL InnoDB Cluster, which uses group replication and makes more things automated. MySQL Router and MySQL Shell will be talking to the group replication members, and all that automation becomes possible. Questions? Yes. Question: if I were to write a client application, do I specify a single IP address, or how do I connect? How does it select which server I connect to? With InnoDB Cluster, it's possible that you talk to only one endpoint, and it will automatically see which server is available and route you there. If you are talking directly to group replication, the clients connect to the separate servers themselves; they just choose any one of them. Question: let's say it's a stateless PHP application, where each request opens a connection and then closes it. Is there any performance impact from that, connecting, closing, and reconnecting each time? Does it take time each time to find the primary in single-primary mode? I'm not sure about that.
Maybe the InnoDB Cluster side can give a quick answer. OK, to answer your first question: we have a software load balancer called MySQL Router. MySQL Router is able to load balance: if you run in multi-primary mode, it will load balance across all the servers, so every transaction is load balanced using a round-robin algorithm. If you run in single-primary mode, you can still use the Router; the Router will basically make sure you fail over to the next available server. So if connections fail a few times, it redirects all future connections to the next server. That's number one. What is it called? MySQL Router. You can download it and try it as well. And for your information, MySQL Router version 2.1 is now an RC, a release candidate, so it should be around another three to four weeks before it becomes GA. That version, 2.1, is designed for group replication. Question: how is group replication different from traditional MySQL Cluster? What are the key differences? MySQL Cluster is mainly for sharding and write scale-out, whereas group replication is mainly for high availability. Question: is group replication synchronous, or is it still asynchronous? The releases we have done so far are virtually synchronous: eventually the members become consistent. Question: in my case, I write to the primary, and before it gets an acknowledgement from one of the nodes, that node fails. Does it still wait for the acknowledgement from the failed node, or what happens? Fully synchronous replication is something we are working on, but what we have already released in 5.7.17 is eventual consistency. It is not guaranteed that a transaction you have committed on one node is already committed on the others, so in that sense it is still asynchronous; eventual consistency, not 100% synchronous.
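The routing behaviour described in the answer can be sketched as a static MySQL Router configuration section; host names and ports here are example values, and real deployments with InnoDB Cluster would typically use the metadata-cache destinations instead:

```ini
# Hedged sketch of a static MySQL Router routing section (example values)
[routing:group]
bind_address = 0.0.0.0
bind_port    = 6446
destinations = node1:3306,node2:3306,node3:3306
mode         = read-write   # fails over to the next destination on error
```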
Eventual consistency means that if a node is in the group, it will reach the state that is on the originating server within some amount of time. We cannot guarantee how long; that depends on the load. But if the server is up and running, it is going to reach that state after some time, not immediately. That's what we call eventual consistency. Question: so when a transaction comes into, say, server one, and it is not going to conflict, does it still wait for acknowledgement from the others before committing? No. On the originating master, the transaction is prepared and sent to the communication system, and the communication system broadcasts it to all the servers. All the servers receive it, and each one checks it individually against its own local certification database. Once the local certification database says yes, you can go ahead and commit, it commits. It does not check whether the other nodes will commit it, because group replication guarantees that the local certification database on every other node will give the same answer for that particular transaction, either yes or no. So does the transaction wait for acknowledgements from the other nodes? No, it won't wait. It just commits on server one when the transaction comes back from the communication system; it commits when it gets the broadcast events from the communication system, not acknowledgements from the other members. Do you have any other questions? One last one: the nodes, can we have them in different data centers? Is that possible? Yes. It doesn't matter where those servers are sitting. You just have to configure the group name, and when a server tries to connect, it has to mention that group name.
OK, because I was just thinking: if you put them in, let's say, different geographical locations, is it still OK? Yes, but there is network latency, so you have to think about that. Other than that, group replication supports it. Any other questions?