Hello and welcome to my talk. My name is Rastislav Szabo, and today I'll be speaking about multi-cluster service deployments with operators and KubeCarrier. Before we start, let me introduce myself. I am a software engineer at Kubermatic, where I work on the Kubermatic Kubernetes Platform and the KubeCarrier open source projects. I specialize in multi-cluster application management and multi-cluster networking. In the past, I also contributed to several other open source projects, such as Ligato.io, FD.io VPP, the Contiv-VPP CNI, and others. As the title suggests, the talk will be about multi-cluster service deployments today. During the talk, I will go through different aspects of multi-cluster service deployments, and I will also mention some community-driven open source projects related to them. We'll go through multi-cluster infrastructure management, then multi-cluster application management, and finally multi-cluster networking. At the end of the talk, I will also show a quick demo of multi-cluster application deployment with KubeCarrier and multi-cluster networking with Submariner.

Before we go into any details, let me talk about use cases for deploying applications across multiple clusters. One of the reasons for doing that may be close user proximity, for example when we would like to serve users from different parts of the world without high latency. Another reason may be regional high availability, where we want to minimize the impact of regional outages. Another reason may be security and organizational separation; for instance, we may have to use dedicated clusters for each organization or organizational unit. The next one may be data locality, for example databases with sensitive data that may only be available in on-premises clusters. Last but not least, one of the biggest use cases is edge computing, where we usually run many smaller clusters distributed across multiple locations because of low-latency requirements.

Let's start our multi-cluster service deployment story with the necessary infrastructure for running and managing multiple clusters. For that, each cloud provider usually provides their own solution. But if you want to automate the operation of many clusters across multiple regions and different cloud providers, including on-premises infrastructure, and do all of that via a single pane of glass or a single API endpoint, I really recommend you take a look at the open source Kubermatic Kubernetes Platform, which can easily do that for you.

Let's assume that we have hundreds or thousands of clusters running. Now let's talk about distributing some workloads onto them. The Kubernetes SIG Multicluster provides two possible solutions for that. The older one is called KubeFed. It aims to solve much more than just multi-cluster application deployment; for instance, it also covers scheduling, DNS, policies, et cetera. It is widely used but quite complex to use. The newer concept is called the Work API. It is a simpler approach for deploying workloads to clusters; for instance, it does not cover cluster registration or scheduling. But at this point, it is just an API definition without any implementation. It is based on the Work custom resource definition, which can contain a list of resources that should be applied to a target cluster. On the right side of the slide, you can see how a Work API custom resource may look: it refers to a specific cluster, and in the workload manifests section it contains a ConfigMap; a rough sketch follows below.
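To make the shape of this resource concrete, here is a minimal, illustrative sketch of a Work resource wrapping a ConfigMap. It assumes the SIG Multicluster Work API group multicluster.x-k8s.io/v1alpha1; all names and the namespace below are hypothetical.

```yaml
# Illustrative Work resource (names are hypothetical).
# The namespace conventionally identifies the target cluster.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: Work
metadata:
  name: example-work
  namespace: cluster-1
spec:
  workload:
    manifests:
      # Plain Kubernetes resources to be applied to the target cluster.
      - apiVersion: v1
        kind: ConfigMap
        metadata:
          name: example-config
          namespace: default
        data:
          greeting: hello
```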
Both approaches share some common concepts. They contain a single source of truth from which the workload manifests are propagated into the managed clusters, and a control loop which applies the resources and tracks their status.

Now we are getting to the KubeCarrier project, which we created at Kubermatic. It builds on similar concepts as the previously mentioned solutions. There is one management cluster, which is our single source of truth; we call it the Service Hub. And there are multiple service clusters that run the application workloads. The difference is that, by design, it only works with Kubernetes operators. Application operators run in the service clusters. KubeCarrier discovers their custom resource definitions and makes them available for users in the Service Hub. KubeCarrier then propagates the custom resources from the Service Hub to the service clusters to drive the operators running in them. Finally, KubeCarrier has built-in multi-tenancy, so multiple service provider and multiple service consumer accounts are supported.

The reason why we decided to use operators in KubeCarrier to build our multi-cluster application management platform is that operators are able to automate the full application lifecycle within a cluster. That includes deployment, upgrades, backups, et cetera. And at a scale of hundreds or thousands of clusters, it is really necessary to have this automation in place; managing individual Kubernetes resources for each application in every single cluster just cannot work at this scale.

This picture illustrates how KubeCarrier works from a high level. The service users interact only with the management cluster via Kubernetes APIs. They deploy custom resources derived from the original CRDs provided by the application operators running in the service clusters. KubeCarrier then distributes those custom resources across the clusters, which drives the operators running in them. And the operators in the service clusters deploy and manage the application instances based on the content of the custom resources deployed by KubeCarrier.

This slide shows how multi-tenancy works with KubeCarrier. As you can see, KubeCarrier supports multiple service provider and service consumer accounts, which are separated by namespaces. Also, users within each account get proper Kubernetes RBAC roles set up and assigned automatically. We have three personas illustrated in this picture. The platform operator operates the management cluster, manages the KubeCarrier installation itself, and also manages the KubeCarrier accounts. The service provider manages the service clusters and the operators running in them, and registers the services, or rather their custom resource definitions, in the Service Hub. The service consumers interact only with the management cluster, where they request and manage their service instances.

Okay, so now we have our hundreds or thousands of clusters running, and we can automate the deployment of applications into them via our central Service Hub. Now let's talk about how the applications can talk to each other across these clusters. The Kubernetes SIG Multicluster provides a solution for that as well, and it is called the Multi-Cluster Services API. It extends the Kubernetes Service concept across multiple clusters. In the provider cluster, services have to be explicitly exported using a ServiceExport custom resource. The multi-cluster services implementation then propagates the ServiceExport into ServiceImports in all clusters in the cluster set.
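As a minimal sketch, exporting a service only requires a ServiceExport whose name and namespace match the existing Service in the provider cluster. This assumes the KEP-1645 API group multicluster.x-k8s.io/v1alpha1; the service name and namespace below are just examples.

```yaml
# Illustrative ServiceExport: name and namespace must match
# the Service being exported from the provider cluster.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: redis-instance
  namespace: tenant-a
```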
The exported service will then become accessible from each cluster in the cluster set at the DNS name servicename.servicenamespace.svc.clusterset.local. An open source project which implements this API is called Submariner, and I will use this project in my demo.

Okay, now I would like to show you a demo of multi-cluster application management with KubeCarrier, combined with multi-cluster networking with Submariner. The demo topology will be similar to the picture that I have already described: we'll have one management cluster that we'll use as our Service Hub, and three service clusters where we'll be deploying our workloads. For the demo, we'll use the Redis database as our managed application. We'll run Submariner across all four clusters to provide multi-cluster service connectivity.

Okay, so what you can see here is the Kubermatic Kubernetes Platform, where I have created four clusters, each one at a different cloud provider in a different region. One of the clusters I'll be using as the management cluster that will be our Service Hub, through which we'll manage the deployment of applications into the three service clusters. So let me go into the console. In the top left corner, you can see the management cluster, and then we have three service clusters here: service cluster one, two, and three. KubeCarrier has already been installed in the management cluster, and the service clusters have been registered in the KubeCarrier management cluster. Also, Submariner has been installed in each one of the clusters, so multi-cluster services should be working when we need them.

So let me start the demo by showing you the KubeCarrier accounts that I have created. We'll have three KubeCarrier accounts in this demo. One of them will be the service provider account, and then we'll have two tenant accounts. So the service provider will provide the Redis service to the tenants. In each one of the service clusters, I have already deployed the Redis operator, but so far no Redis instances are running in any of those clusters. At this point, we can check via kubectl get catalogentry that KubeCarrier has already discovered the custom resource definitions in the service clusters. So in our catalog entry list, we can see a Redis in cluster one, a Redis in cluster two, and a Redis in cluster three, which are ready to be used by our tenants.

So at this point, our tenants can request an instance of Redis in any of those service clusters. And the way they can do it is by creating a custom resource which is derived from the original Redis custom resource definition, but whose API version refers to a particular cluster and a particular service provider. So in this case, by using this custom resource, we will deploy a Redis instance in service cluster one. And the spec of this custom resource contains some information needed to deploy a Redis instance, such as the password for accessing the database, which we'll use later.

So let me now use this demo deployment file and deploy a Redis instance in service cluster one as tenant A. To make sure that we can see what happens, I'll run watch commands here in service clusters one, two, and three; they'll be watching all pods, and we are grepping for the name redis across all the pods. So I'll now go ahead and fire this command. So again, we'll apply this custom resource, which refers to cluster one under tenant A, and that will create a Redis instance for our tenant.
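For reference, the manifest the tenant applies could look roughly like the sketch below. The API group, kind, and field names are hypothetical, since KubeCarrier generates the derived CRD from the provider's original Redis CRD and the target service cluster; the important part is that the apiVersion encodes the cluster and the provider, while the spec mirrors the original operator's CRD.

```yaml
# Hypothetical derived custom resource; the API group shown here is
# only illustrative of how KubeCarrier ties a CRD to a service cluster
# and provider account.
apiVersion: cluster-1.redis-provider.example.com/v1alpha1
kind: RedisInstance
metadata:
  name: my-redis
  namespace: tenant-a          # tenant account namespace in the Service Hub
spec:
  password: demo-password      # later used to authenticate with redis-cli
```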
So soon, we should be able to see that the Redis instance has been deployed for tenant A in service cluster one. Similarly, we can request a Redis instance for tenant B. So using the same kind of custom resource, as the tenant B user, we request a Redis instance for tenant B, and as you can see, it has just been started. And very similarly, we can also deploy a Redis instance in service cluster two. The deployment manifest looks exactly the same, apart from the API version, where we refer to service cluster two. So let me deploy that one as well, and we should see in service cluster two that our Redis instance has been started.

Okay, at this point we have three Redis instances running in different clusters. Now, let's try to connect to those Redis instances from a different cluster. In cluster three, we are not running any Redis instance, and potentially we could try to connect to those Redis instances from this cluster. In order to do that, we can rely on the Multi-Cluster Services API, which is implemented by Submariner, which is already installed in our clusters. So in order to export a service to a different cluster, we should first review the existing services of our Redis instances. As we can see, we have a Redis instance service in the tenant A namespace, which is of type ClusterIP, and similarly, we have a Redis instance service in the tenant B namespace, which is also of the ClusterIP type.

In order to export them to the other clusters in our cluster set, I will use the command subctl export service. Subctl is a binary provided by Submariner, but essentially what it does is deploy a ServiceExport custom resource with the service name of the Redis instance and the namespace given here. Similarly, we can also export the other service instance in the tenant B namespace. And at this point, we should be able to verify that the service exports have been properly created by running kubectl get serviceexports. And yes, as you can see, our Redis instances in the tenant A and tenant B namespaces have been exported. Similarly, we can do the service export in cluster two for the Redis instance running in there, and again, we can verify that the ServiceExport custom resources have been properly created in this cluster.

At this point, in cluster three we should be able to see the service imports automatically created from the exported services in cluster one and cluster two. And as you can see, kubectl get serviceimports gives us three service imports: two from cluster one and one from cluster two. So now we should be able to access any of those three services from this cluster.

So let me exec into a client pod running in this cluster. The way we can connect to those services is by using DNS names that refer to the original service name and namespace: originalservicename.originalnamespace.svc.clusterset.local. This domain name we should be able to resolve right now, and it seems that resolving works. So we can go ahead and try connecting to the Redis instance with the redis-cli tool. We refer to exactly the same domain name, and we use the password that we specified in our Redis custom resource earlier. So this should work, and if it works, we should be able to run a PING command and get a PONG back from the server. Right, and in exactly the same way, we can also verify the connectivity to the Redis instance in cluster two; it just works, as it did for the Redis instance in cluster one.
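To recap what the automatically created imports from the demo look like structurally, here is a minimal ServiceImport sketch, again assuming the multicluster.x-k8s.io/v1alpha1 API group and hypothetical names matching the demo's Redis service:

```yaml
# Illustrative ServiceImport as it might appear in cluster three;
# it is created automatically by the MCS implementation (Submariner here),
# making the service reachable at redis-instance.tenant-a.svc.clusterset.local.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: redis-instance
  namespace: tenant-a
spec:
  type: ClusterSetIP        # the service gets a cluster-set-wide virtual IP
  ports:
    - port: 6379            # default Redis port
      protocol: TCP
```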
Okay, that was it for the demo, and for my talk as well. If you have any questions, feel free to contact me at this email address or via Slack right after the session. And thank you for watching my talk.