Hello! Are you interested in learning how to deploy a CockroachDB cluster across multiple Kubernetes clusters? Then this video is for you. My name is Alex Soto, Director of Developer Experience at Red Hat, and remember that if you want to stay up to date with all the content we deliver, follow me on Twitter and subscribe to the channel. So let's start. What we want to do is exactly what you're seeing on the screen right now. I've got one Kubernetes cluster deployed in Amazon, and I've got another cluster deployed in Azure. What I want to do is deploy a CockroachDB cluster across them, part in Amazon and part in Azure. But the important thing is that this CockroachDB cluster behaves as one: node 1, node 2, and node 3 are part of a bigger CockroachDB cluster together with node 4 and node 5. Although they live in different cloud providers, one in Amazon and one in Azure, I want the whole CockroachDB cluster to behave as a single cluster. This raises a question: how can I connect pods from different clusters when they sit on different networks? The answer is Skupper. Skupper lets our Kubernetes clusters connect to each other, so Kubernetes cluster 1 and Kubernetes cluster 2 are linked, and from the point of view of each of the nodes there is only one cluster. Skupper is multicloud communication for Kubernetes: it enables secure communication across Kubernetes clusters with no VPNs and no special firewall rules. So let's see it in action so you understand exactly what we're going to do. I've got two clusters here. Okay, let me clear this. First, let's create namespaces; we need the same namespace in both clusters. I've got my console here, so I can do kubectl create namespace cockroachdb, creating the namespace in the first cluster, and then kubectl create namespace cockroachdb again in the second cluster.
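The namespace setup described above can be sketched as the following shell commands. The kubectl context names aws and azure are assumptions for illustration; substitute the contexts of your own two clusters (these commands assume a live pair of clusters, so they are a sketch rather than something runnable standalone):

```shell
# Create the same namespace in both clusters, addressing each one
# through its kubectl context ("aws" and "azure" are placeholder names).
kubectl --context aws create namespace cockroachdb
kubectl --context azure create namespace cockroachdb

# Make the new namespace the default for each context; the video uses
# the kubens helper for the same effect.
kubectl config set-context aws --namespace=cockroachdb
kubectl config set-context azure --namespace=cockroachdb
```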
Now I've got two clusters with the same namespace, and I can switch into it: kubens cockroachdb here, and kubens cockroachdb in the other one. Okay, now both clusters have the same namespace and I've moved into it. The first thing I need to do is deploy the CockroachDB nodes in cluster number one. I could use Helm, but in this case I'm going to use plain YAML files, the typical YAML files. I can do kubectl apply -f on the CockroachDB StatefulSet for group one, so I'm deploying my first set of nodes. No worries: these YAML files are published and you'll be able to inspect them yourself; check the description and you'll find the links. Now I can do kubectl get pods and notice that it's creating the first three nodes of this cluster, and notice that I'm in cluster number one. kubectl get pods again: they are almost running. And here, in the second cluster, kubectl get pods shows no pods at all, because it's a different cluster and I've not deployed anything there. So now the CockroachDB nodes are running, but the cluster is not ready: I need to initialize it. With Helm this part is done automatically, but if you are deploying CockroachDB manually, you need to do it yourself. So I'm going to do kubectl apply -f cluster-init-g1. If you're curious what cluster-init-g1 is, it's just a Job, a Job that initializes the CockroachDB cluster. Now I can do kubectl get pods and you can see the three nodes of the CockroachDB cluster up and running. If I do kubectl get pods in the other cluster, there are still no pods. Next I need to connect the network of cluster 1 to cluster 2, so that from the point of view of CockroachDB all the pods are running on the same network. For that I need Skupper, as I said, so I can do skupper init.
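The cluster-init Job mentioned above might look something like the sketch below. This is modeled on the manifest in the CockroachDB Kubernetes documentation, not the exact file from the video; the image tag and the cockroachdb-g1 host name are assumptions that must match whatever your StatefulSet actually deployed:

```yaml
# Sketch of a cluster-init Job, following the pattern from the
# CockroachDB docs. It runs "cockroach init" once against the first
# node of the StatefulSet (names and versions are illustrative).
apiVersion: batch/v1
kind: Job
metadata:
  name: cluster-init-g1
spec:
  template:
    spec:
      containers:
      - name: cluster-init
        image: cockroachdb/cockroach:v21.1.0
        command:
        - "/cockroach/cockroach"
        - "init"
        - "--insecure"
        - "--host=cockroachdb-g1-0.cockroachdb-g1"
      restartPolicy: OnFailure
```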
Skupper is just a CLI tool, so you can download it from the website; you've already seen where it lives, and again the link is in the description. So I do skupper init, and this initializes all the Skupper proxies. I can do skupper status to see exactly how it's going. Notice that it says Skupper is enabled, but there is no connection between clusters yet. I can do kubectl get pods again, oops, kubectl get pods, and you see that now I've got these proxies here, right? This is where the Skupper magic lives. I can do kubectl get services if you want, and you see I've got the Skupper services as well, plus the cockroachdb-internal-g1 and cockroachdb-public services deployed. Now I need to expose the StatefulSet. Remember, kubectl get statefulsets: a CockroachDB cluster is created from a StatefulSet, and you can see here I've got one StatefulSet with three replicas, which are the three nodes, right? So now I want to expose this StatefulSet so that Skupper does its magic and it becomes reachable from the other cluster too. To do that I run skupper expose; for example like this one, but with g1 instead of g2. So I expose the StatefulSet cockroachdb-g1 in headless mode, and the port is 26257. I run this, and that's all. I can do kubectl get pods: everything is still exactly the same, but now Skupper is exposing this CockroachDB StatefulSet for me. Notice these proxies here: the proxies are the ones exposing this StatefulSet to other clusters. kubectl get pods again, and everything is up and running. Now I need to connect the other cluster, and to connect it I need to create a token, a security token, to use when connecting to this cluster. It's an authentication mechanism at the end of the day. So I do skupper connection-token and store this token in a file called site1.
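The Skupper steps on the first cluster can be sketched as follows. The StatefulSet name cockroachdb-g1 and the token file name site1.yaml are assumptions inferred from the video; also note that newer Skupper releases renamed connection-token to token create, so check your CLI version (these commands need a live cluster, so this is a sketch):

```shell
# Initialize the Skupper router and proxies in the current namespace.
skupper init
skupper status   # reports "enabled", no links yet

# Expose the CockroachDB StatefulSet in headless mode on the SQL/RPC
# port so its individual pods are addressable from linked clusters.
skupper expose statefulset cockroachdb-g1 --headless --port 26257

# Generate an authentication token the second cluster will use to
# connect back to this one (newer CLIs: "skupper token create").
skupper connection-token site1.yaml
```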
This is the token I need to use to connect to cluster number one. Now it's being written. Okay, let's move to the second cluster. Of course, if I do kubectl get pods there is nothing there, right? Now let's do the same on this side: let's start Skupper here, skupper init, and again it's going to create this kind of magic, these proxies that make it all work. Notice that it's pending at first, and finally it's there. I do kubectl get pods, and now you see the Skupper proxies and the Skupper router there. Now what I want to do is connect to the other cluster, so that cluster one and cluster two, from the point of view of the StatefulSets and these namespaces, behave like a single cluster. I do skupper connect and pass it the YAML file that I created before, right? This is where the token is written. I run skupper connect and it says Skupper is now configured to connect to this site, and notice that this is the other cluster. Now, what happens if I do kubectl get pods? Notice that I can now see cockroachdb-g1-0, cockroachdb-g1-1, and cockroachdb-g1-2 here. In fact, these are not the real pods; they are stand-ins, fake CockroachDB pods if you want. Again, I can do, for example, kubectl describe pod and see that the image here is not the CockroachDB image but a Skupper proxy. Now that everything is connected, let's deploy the two nodes that live in this cluster. Okay, so I've got cluster one with three nodes, and now I want cluster two with two nodes. So I can do kubectl apply -f and deploy this: two real nodes, two real CockroachDB nodes. Okay, now I can do kubectl get pods, and notice that these three pods are the proxies representing the other cluster, and these two are the real CockroachDB instances living in this cluster.
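The mirror-image steps on the second cluster might look like the sketch below. The token file name site1.yaml is an assumption, and on newer Skupper releases skupper connect was replaced by skupper link create; again, this assumes a live cluster:

```shell
# Stand up Skupper in the second cluster's cockroachdb namespace.
skupper init

# Link this site to cluster one using the token written earlier
# (newer CLIs: "skupper link create site1.yaml").
skupper connect site1.yaml

# After linking, "kubectl get pods" also lists proxy stand-ins for
# cockroachdb-g1-0..2 from the remote cluster; "kubectl describe pod"
# shows they run a Skupper proxy image, not CockroachDB itself.
kubectl get pods
```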
Right, let's see if it works. Okay, they're still being created and started. Notice that here they are running, but they are not ready, and they are not ready because they have not been initialized, or rather, they have not yet joined an initialized CockroachDB cluster. What I need to do is expose this StatefulSet too, because remember I said you need to tell Skupper which resources you want to expose between clusters. I've got this one here, and now I can do skupper expose, and this time, sorry, instead of g1, I need to expose g2. I run this, and now is when the magic happens. Notice that if I do kubectl get pods, the g2 nodes that before were not ready, because they were not initialized, are now ready, and the reason is that they have joined a running CockroachDB cluster, which is running in the other Kubernetes cluster. These are two different clusters, right? So now I've cleared my console, and let me do, for example, kubectl port-forward on one of the pods, so we can check that the cluster has been created with five nodes, even though three nodes are in one Kubernetes cluster and the other two nodes are in another. I go to the browser, to localhost:8080. Remember that localhost:8080 works because I'm forwarding the CockroachDB dashboard that is running in Kubernetes cluster one, okay? I'm just forwarding the content. Now you can see that I've got five nodes: g1-0, g1-1, g1-2, g2-0, and g2-1. So I've been able to create a CockroachDB cluster with five nodes, where three nodes are located in cluster one and two nodes in cluster two. If you want to try it a bit more, let me show you another thing. Let me clear. And now, for example, I can do kubectl run.
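Exposing the second group and checking the dashboard can be sketched as below. The StatefulSet name cockroachdb-g2 and the pod name in the port-forward are assumptions consistent with the names used earlier; CockroachDB's admin UI does listen on port 8080 by default:

```shell
# Expose the second group's StatefulSet through Skupper, same port as
# group one; its nodes can then join the existing CockroachDB cluster.
skupper expose statefulset cockroachdb-g2 --headless --port 26257

# From cluster one, forward the CockroachDB admin UI (default port
# 8080) of one pod to localhost, then open http://localhost:8080 and
# verify that all five nodes appear.
kubectl --context aws port-forward cockroachdb-g1-0 8080:8080
```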
This should be... not this one; let me do kubectl get services, because I don't remember the exact name. This should be cockroachdb-public. Yes, then it's fine; I guess it should work. So now I'm just getting a SQL shell inside the CockroachDB cluster through cluster one. And, for example, I can simply copy the example you can find in the CockroachDB documentation. I can do CREATE DATABASE bank, so I'm creating a database here. Okay, then I can do a CREATE TABLE, but let me copy-paste it; I've got the CREATE TABLE statement here so I don't bore you. Okay, now I've created a table, and I can do an INSERT INTO bank.accounts with an id and a balance, for example 1000. Okay, now I've inserted something, and I can SELECT all the content FROM bank.accounts. Oops, sorry. But remember that this is from cluster one, so here I'm accessing node one, two, or three of the CockroachDB cluster. Now I move to cluster number two: I close this and connect again with kubectl run, but this time through cluster number two, so I'm accessing the other two nodes, node four and node five of the CockroachDB cluster. And what happens if I do the SELECT on bank.accounts? There it is. You can see that the balance is exactly the same and the id is exactly the same. So the CockroachDB cluster stays consistent across our whole multicluster infrastructure for free, without doing anything really special, just using Skupper to connect all the CockroachDB nodes so they replicate to each other. I hope you enjoyed this video, and remember that in the description you'll find all the links to the content you need to reproduce this example, and also links to the previous CockroachDB videos: how to develop a Quarkus application with CockroachDB, and how to deploy CockroachDB using Helm in a single cluster. Thank you very much.
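The final check can be sketched like this, following the kubectl run pattern from the CockroachDB documentation. The image tag is an assumption, the schema mirrors the docs' bank example, and the commands need the live clusters from this walkthrough, so treat it as a sketch:

```shell
# Open a throwaway SQL client pod in cluster one, connecting to the
# cockroachdb-public service (the cockroach image's entrypoint is the
# cockroach binary, so we pass "sql" directly).
kubectl --context aws run cockroachdb-client -it --rm --restart=Never \
  --image=cockroachdb/cockroach:v21.1.0 \
  -- sql --insecure --host=cockroachdb-public

# Inside the SQL shell, the docs' bank example:
#   CREATE DATABASE bank;
#   CREATE TABLE bank.accounts (id INT PRIMARY KEY, balance DECIMAL);
#   INSERT INTO bank.accounts VALUES (1, 1000);
#   SELECT * FROM bank.accounts;

# Then repeat the same kubectl run against the second cluster's
# context and issue only the SELECT: the row is already there,
# because nodes four and five belong to the same CockroachDB cluster.
kubectl --context azure run cockroachdb-client -it --rm --restart=Never \
  --image=cockroachdb/cockroach:v21.1.0 \
  -- sql --insecure --host=cockroachdb-public
```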