Hello there, ladies and gentlemen, my name is Dennis. I'm part of the Trove contribution team. Today we're going to talk about the Trove native replication features that can be accomplished by the end of the Juno release. We're going to talk about the justification for replication, the definition of replication, the types of replication, and the community-defined use cases. We have split them into three sections.

So what is replication? By the definition you will find for any database, and on the wiki page, replication is the way to distribute data from a supplier to consumers. What kinds of use cases does replication actually solve? It solves scale-out via read or write replicas, operational recovery, failover, and, of course, offline backup.

There are several types of replication, the three well-known ones: single master/slave replication, multi-master replication, and the more artificial use case of multi-master single-slave replication. Here you can see the first type, single master/slave replication: we have a single master and some slaves attached to this master, and all slaves are in read-only mode. This type solves scale-out via read replicas. The second type is multi-master replication, where each master can accept read-write operations. And the last one is multi-master single-slave replication, where we have a lot of masters and they share a common slave. The common slave is used to receive the replicated data from each of the masters, and the masters may or may not be connected to each other.

So we should ask ourselves: how do instances know about each other, and how is the user able to define which type of replication he has built? The Trove community defined Trove instance metadata, a dictionary that stores specific data, and the replication contract, which describes the way instances are connected among themselves. The replication contract is divided into three attributes (a small sketch of it follows at the end of this section). The first attribute is replicates_to: this list defines to which instances the current instance replicates. The second attribute is replicates_from: it defines the "slave of" role, that is, the list of instances from which the data is replicated. And the last attribute is writable: it defines whether the instance is accessible for write operations or not.

The first use case defined by the community is that the user should be able to build replication sets on top of already provisioned instances. As you can see, here we have an already provisioned instance one, and after asking Trove to build a new replication set, we will receive newly spun-up instances connected to it via master/slave replication. Each slave in a replication set is always a fresh new instance. The second use case is that we need to be able to detach one of the instances from the replication set and mark it as a standalone server. So when we have a replication set, we are able to perform this operation, and at the end of it we will have two standalone servers. The last use case defined for read replicas is provisioning a new replication set from scratch: the user asks Trove to build a new replication set that contains a certain number of instances, and at the end he will receive a replication set that contains one master, while the other provisioned nodes are slaves.
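To make the replication contract described above a bit more concrete, here is a minimal sketch of what it could look like for a one-master/two-slave set. This is only an illustration under the assumption that the contract is stored as a plain dictionary in the instance metadata; the instance IDs and the exact layout are made up and are not the real Trove schema.

    # Sketch of replication contracts for a one-master/two-slave set.
    # The instance IDs and dictionary layout are illustrative assumptions,
    # not the actual Trove metadata schema.
    master_contract = {
        "replicates_to": ["slave-1", "slave-2"],  # instances this node feeds
        "replicates_from": [],                    # a master is nobody's slave
        "writable": True,                         # accepts write operations
    }

    slave_contract = {
        "replicates_to": [],
        "replicates_from": ["master-1"],          # the "slave of" role
        "writable": False,                        # read-only replica
    }

    def is_master(contract):
        # A node acts as a master if it accepts writes and has no supplier.
        return contract["writable"] and not contract["replicates_from"]

In a multi-master set every contract would have writable set to True, and in the multi-master single-slave case the common slave would list all the masters in its replicates_from list.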
All these use cases are defined for a single-host deployment, but we should also be able to manage instances across multiple regions and hosts, and we have the anti-affinity and affinity rules to accomplish this task. So the last question for us, as developers and as users, is how we want to place the instances: do we want to place them together, or do we want to place them apart? Here you can see the schema that defines the multi-host deployment: each host contains only one instance from the replication set.

What is going to happen if the master goes down? The connectivity from the slaves to the master will be lost, and the slaves will not receive new data from anywhere, because they no longer have a supplier. We have two ways to recover the replication set to a normal state. We can promote one of the slaves to be the new master (this path is sketched at the end of this section); then we get back the previous replication set, but with one instance fewer, which is still a perfectly valid state. The other way is to take a backup from one of the slaves and apply it to a new instance; at the end we will have the same replication set that we had before the master went down.

The next use case for cross-region, cross-host deployments is multi-master replication, where each region and host contains only one master from the replication set. One of the use cases that can be accomplished within a multi-region deployment is offline backup: by having a replication set defined as multi-master replication, we can take one of the instances out of the replication set, shut it down, and take a consistent backup. But this is really specific to MySQL, because the MongoDB and Cassandra datastores are not able to perform backups from a shut-down server.

How are we able to accomplish failover, an automated failover? When the user wants to spin up a new replication set, defines multi-master replication and mentions that he wants automated failover, with something like a --failover option, he will receive a replication set with an extra instance that is hidden from him. So what do we actually accomplish with this schema? When one of the instances goes down, we are left with a reduced replication set, one instance short, and this is not what we want: we paid for the whole replication set, not for only a part of the instances. We have two ways to fix this replication set. As I already said, we can take a backup from one of the instances and spin up a fresh new one, applying this backup to it; we will then receive the same replication set as before. But in production we don't know how huge the backup is and how long it will take to apply it to the instance, so it could take a while. The second way is faster and is the better option in this situation: we mark the hidden instance as visible and attach it to the previous replication set. It takes a lot less time, and the user receives the same replication set he had before.

For the Juno release, the community defined the following use cases: the user should be able to build new replicas on existing instances, detach operations, building a fresh new replication set, and the promotion rules defined for promoting slaves to masters.
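To illustrate the slave-promotion recovery path mentioned above, here is a small sketch that continues the contract example from the previous section. The function name and the in-memory handling of the contracts are assumptions for illustration only; this is not Trove's actual implementation.

    # Illustrative sketch: promote a slave after the master went down.
    # `contracts` maps instance id -> replication contract, as sketched above.
    def promote_slave(contracts, failed_master, new_master):
        # Drop the failed master from the replication set entirely.
        contracts.pop(failed_master, None)

        # The promoted slave starts accepting writes and loses its supplier.
        promoted = contracts[new_master]
        promoted["writable"] = True
        promoted["replicates_from"] = []

        # Every remaining slave now replicates from the new master.
        for instance_id, contract in contracts.items():
            if instance_id == new_master:
                continue
            contract["writable"] = False
            contract["replicates_from"] = [new_master]
            if instance_id not in promoted["replicates_to"]:
                promoted["replicates_to"].append(instance_id)
        return contracts

Starting from the one-master/two-slave contracts sketched earlier, promote_slave(contracts, "master-1", "slave-1") would leave slave-1 writable with slave-2 replicating from it: the "one instance fewer" state described above.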
For future releases, the community will implement multi-master replication, multi-master single-slave replication, operational recovery, fault tolerance, failover (automated or manual) and offline backup. Thank you. Any questions? No, no, there will be a certain Ceilometer integration that will allow Trove to send special alarms to Ceilometer, which will initiate a special mechanism that notifies the user. Yeah, yeah. Yeah. The user will have a manual failover. Thank you.