Hello, everybody. Good news: I'm the last one before lunch. My name is Lasluven Senoji. I'm a software engineer at Cisco. I previously worked at a startup called Banzai Cloud, which was acquired by Cisco, and I've been working with Istio for about three and a half years now. So I will talk about Istio today, more specifically multi-cluster Istio. Is it complex, or maybe a piece of cake? Let's dive in.

Today I will focus on the single-mesh, multi-cluster, multi-primary on different networks Istio setup. That means we have one single Istio mesh, the Istio control plane is installed on each cluster, and the clusters communicate over dedicated east-west gateways.

To install this setup today, according to the official Istio documentation, you need to go through several manual steps. These are essentially kubectl apply commands, istioctl commands, even running some bash scripts, getting resources from one cluster and applying them to the other. If you do all this in the right order, bam, you have the multi-primary on different networks setup ready. If you need to do this once, that might be fun. But if you need to do it a couple of times, it can actually be pretty cumbersome.

Also, even after you have all this set up, specific to the multi-primary topology, you should keep the Istio configurations synchronized between the clusters. There is no out-of-the-box support for that in Istio today; you can only achieve it with external tools.

So could all this be better in Istio? Could we automate this somehow for our users? How? Let me introduce a possible solution by first dividing all these manual steps from the official docs into two categories. The first category is easy: it involves only one cluster, like installing the Istio control plane or installing Istio's gateways. The second category is the one that involves configuration between the clusters, like setting up the trust relationship between the clusters or enabling cross-cluster endpoint discovery.

Solving the first category is actually not that hard. It could be done many ways; we opted for a Kubernetes operator-based solution. More specifically, we use the open-source Cisco Istio operator. I will highlight here that this is not the official Istio operator that most of you are probably familiar with. This is a completely different Istio operator implementation. It was actually implemented and open-sourced before the official one, back at Banzai Cloud; now it is usually referred to as the Cisco Istio operator. What this operator does is automate all those single-cluster steps: it installs the Istio control plane, installs Istio's gateways, configures those, et cetera. This is the easier part.

The second category is more challenging to solve, since it involves multiple clusters. We searched for the right tool to solve that problem and couldn't find one, so we ended up implementing our own solution, called cluster registry. We have recently open-sourced cluster registry; you can find it on GitHub as well. Cluster registry has yet another Kubernetes operator-based implementation called the cluster registry controller. I guess you can see a theme here: we kind of like operators. Cluster registry is a fully generic tool for any multi-cluster Kubernetes use case, because what it does is synchronize Kubernetes resources between multiple clusters.
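To give you a feel for those manual steps, here is roughly what the official docs (around Istio 1.13) have you run for the first cluster. This is abridged, and the exact commands vary by Istio version, so take it as an illustration rather than a recipe:

    # Label the network and install a primary (multi-primary) control plane:
    kubectl --context=cluster1 label namespace istio-system topology.istio.io/network=network1
    istioctl install --context=cluster1 -f cluster1.yaml  # IstioOperator with meshID, clusterName, network set
    # Install an east-west gateway and expose services through it:
    samples/multicluster/gen-eastwest-gateway.sh --mesh mesh1 --cluster cluster1 --network network1 \
        | istioctl --context=cluster1 install -y -f -
    kubectl --context=cluster1 apply -n istio-system -f samples/multicluster/expose-services.yaml
    # Enable endpoint discovery: hand cluster1 a credential for cluster2's API server:
    istioctl x create-remote-secret --context=cluster2 --name=cluster2 \
        | kubectl apply --context=cluster1 -f -
    # ...and then all of the above again, mirrored, on cluster2, in the right order.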
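And to give a flavor of the cluster registry API: you register each peer cluster with a Cluster custom resource, roughly like the sketch below. I'm writing the field names from memory, so treat them as illustrative and check the cluster-registry-controller repo on GitHub for the authoritative API; the cluster names are made up for this example.

    # Hypothetical sketch: register the peer cluster "demo-passive" with the
    # cluster registry controller. Field names are illustrative, not authoritative.
    apiVersion: clusterregistry.k8s.cisco.com/v1alpha1
    kind: Cluster
    metadata:
      name: demo-passive
    spec:
      authInfo:
        secretRef:
          name: demo-passive          # secret with credentials for reaching the peer
          namespace: cluster-registry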
Now, in conjunction with the Istio operator, these two tools automate all those steps needed to have the multi-primary on different networks setup ready. For example, cluster registry can sync Kubernetes secrets between the clusters, containing Kube API server access for the other clusters, and this way it enables cross-cluster endpoint discovery. We also implemented an automated, federated trust-based solution. I will call out here that this is the only major difference between the official Istio setup and our solution: by default, the docs recommend the common root of trust approach, while we use this federated trust-based approach and automated it. Also, I mentioned that the Istio configurations should be kept in sync after everything is set up; cluster registry is able to help us with that as well.

What I'm stating so far, if you listen carefully, is that the Istio operator can automate all the single-cluster related steps, and with cluster registry we can automate all the multi-cluster related steps as well. Which means it should be easy enough to set all this up right now. Let me try to do that. Where is it? OK, here it is. That was not so bad.

So what I have here are two Kubernetes clusters. I have the Istio operator and the cluster registry controller installed on both, and I have also made sure that the cluster registry controller is able to synchronize resources between these two clusters. That's the only manual step I don't have time to show in this demo. What I'm going to do is apply two CRs first, real quickly, and then show you their content.

OK. So I applied the Istio control planes, these IstioControlPlane CRs, to both clusters. These are custom resources for our Istio operator, and they install Istio control planes on these two clusters. The two CRs are almost identical. We are using Istio 1.13. We are installing an active Istio control plane, which is the same as the primary term in the multi-primary setup; we just use the active term instead. We are installing the east-west gateways, and we have some additional configs for the multi-primary setup. The only difference between the two CRs is that the first one uses a different network name than the second one. That is because we don't have pod-to-pod connectivity configured between these two clusters; they will instead communicate over the dedicated east-west gateways.

So all I did was apply these two CRs. And believe it or not, the single-mesh, multi-cluster, multi-primary on different networks setup should already be configured automatically on these two clusters. That's it. Actually, I will need you to believe me somewhat, because I won't have time to show everything. But let's see a thing or two. Of course, we have the two control planes running on both of these clusters, and we have the east-west gateways running as well. That's the easier part. Let's see something more interesting. Yeah, from these two logs on the first cluster, we can already see that the first cluster can reach the second cluster's Kube API server address for endpoint discovery, and the other way around: the second cluster can reach the first one. And we didn't do any manual secret copying or anything like that between these two clusters; it was all done automatically. I think this is pretty neat already.
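For reference, the two IstioControlPlane CRs I applied look roughly like this. The shape follows the open-source Cisco Istio operator's v1alpha1 API as I recall it, so consult the istio-operator repo for the exact fields; the mesh and resource names here are made up for the demo.

    # Sketch of the control plane CR applied to the first cluster; the CR on the
    # second cluster is identical except for networkName (e.g. network2).
    apiVersion: servicemesh.cisco.com/v1alpha1
    kind: IstioControlPlane
    metadata:
      name: icp-active
      namespace: istio-system
    spec:
      version: "1.13.0"
      mode: ACTIVE            # this operator's term for a primary control plane
      meshID: mesh1           # illustrative mesh name
      networkName: network1   # the only field that differs between the two CRs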
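Under the hood, what cluster registry synchronized here is the standard Istio remote secret: a secret in istio-system, labeled istio/multiCluster=true, whose payload is a kubeconfig for the peer cluster's API server. It is the same artifact you would otherwise generate by hand with istioctl x create-remote-secret. You can list them like this (the cluster context name is made up):

    # List the synchronized remote secrets that istiod uses for endpoint discovery:
    kubectl get secret --context=cluster1 -n istio-system -l istio/multiCluster=true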
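Before the last demo step, a note: the "strict rule" I'm about to apply is presumably the standard Istio strict mTLS PeerAuthentication. A minimal sketch, assuming it is applied mesh-wide from the root namespace (the talk only calls it a "strict rule"):

    # Mesh-wide strict mTLS; applying this in the Istio root namespace
    # (istio-system by default) makes it apply to the whole mesh.
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: istio-system
    spec:
      mtls:
        mode: STRICT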
But let me show you one more thing, which is the Istio config synchronization. As you can see, there are no Istio configurations on these two clusters yet. So let me be strict here and apply this strict rule (see the sketch above) to the first cluster. As you can see, it's already here. And what we get for free in this setup is that it's already copied and synchronized to the second cluster as well. With these open-source solutions, this comes out of the box, without any external tools. So I think this is pretty neat.

Let's go for takeaways. With these two open-source tools, the Istio operator and cluster registry, we can almost fully automate the single-mesh setups. Today I was concentrating on the multi-primary setup, but it works very similarly with the primary-remote setup as well, and you can very easily combine the two. So you can have any number of primaries and remotes in the same mesh, configured automatically.

Also, part of why we like operators is that they fit very well with declarative and GitOps-based workflows. Mostly all you need is to install the controllers, the operators, and some custom resources, as I showed, and you should have this setup ready. And, as I mentioned, we have out-of-the-box Istio config synchronization as well. What's worth mentioning here is that this comes without a single point of failure, because there is no one central control plane to track all this; it works in a fully distributed manner.

So at this point, I will ask you, and let you decide, whether multi-cluster Istio is complex or a piece of cake. If you want even more than a piece of cake and you want it for free, then you should try Cisco's product called Calisti, which uses these open-source tools to give you a fully automatic multi-cluster experience. It comes with a nice UI and much more. If you are interested in either our product or any of the open-source tools mentioned today, please visit the Cisco booth at KubeCon. Or I'll be right here for a few more minutes, so if you're interested in how we bake this cake, or have any questions, feel free to come up. I'm happy to chat. Thank you very much for listening.