Hello everyone, welcome to this DevConf talk on connecting Kubernetes clusters with Submariner. My name is Stephen Kitt, I work for Red Hat as a software engineer on the OpenShift multi-cluster networking team. I'll be explaining what Submariner does by showing you how to use it and the features it provides. So without further ado, let's switch to my demo screen.

I have three clusters, which you can see in the three K9s panes. I'll be connecting cluster two and cluster three; I'll explain what cluster one does in a little while. If you want to try this at home, you can replicate the setup without using too many resources: I'm running this on a laptop with 20 GB of RAM. You can see the command I used in the bottom right-hand corner of the screen, "make clusters using=prometheus". This is available in the Submariner project, as you can see from the path I used. The command starts three clusters using kind and also sets up Prometheus inside each cluster using the Prometheus operator; you can see that running in the K9s pod lists. The first cluster only has a control plane and one worker node; the other two have a control plane and two worker nodes.

Before I show what Submariner does, let's try connecting a pod in one cluster to a service in another. I'll set up an nginx service in cluster two and client pods in clusters two and three. Now that they're up and running, I need to retrieve the pod names and the service IP address, and with that, I can curl to retrieve the nginx default page. Everything works on cluster two using either the IP address or the service name. But as you might expect, cluster three can't connect to the service: its IP address is unreachable and its service name is unknown.

This is what Submariner is designed to help with. Of course, it's possible to expose a service in one Kubernetes cluster to the outside world and then use that from another Kubernetes cluster, but that means exposing the service to anyone, and it also requires configuring each service you want to talk to. Submariner provides data-plane connectivity between pods and services in connected clusters, and also allows services to be exported easily.

First of all, I need to install Submariner. I'll do this using a tool called subctl, and this is how you install it. Trust us, piping this to bash is safe... famous last words. subctl can also be installed using "go install" if you have Go set up. I had previously set up my PATH, so with the installation complete, I have the latest release of subctl, 0.8.0.

The next step in the Submariner setup doesn't involve the connected clusters. Instead, we need to start by setting up a broker, which is what we'll use cluster one for. This will be used as a shared data store containing information about the connected clusters, so it needs to be reachable by all the other clusters we're going to use. All this is done using custom resources, so let's take a look at the CRDs in cluster one so we can follow what happens. As you can see, we already have a number of CRDs, but they're all Prometheus-related. To set the broker up, I run "subctl deploy-broker" with the appropriate context. As you can see, this adds a number of CRDs. It also creates a broker-info.subm file stored locally, and we'll use this to provide the broker's information to the clusters we want to connect.
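For reference, the setup so far boils down to roughly the following commands. The cluster context names, the nginx and client pod definitions, and the subctl flags are illustrative reconstructions rather than the exact commands from the demo, but they follow the same steps:

    # Start three kind clusters with Prometheus, from a Submariner project checkout
    # (the make target and "using=prometheus" variable are as narrated in the talk):
    make clusters using=prometheus

    # A service to reach in cluster two, plus a client pod in clusters two and three:
    kubectl --context cluster2 create deployment nginx-demo --image=nginx
    kubectl --context cluster2 expose deployment nginx-demo --port=80
    kubectl --context cluster2 run client --image=curlimages/curl --command -- sleep infinity
    kubectl --context cluster3 run client --image=curlimages/curl --command -- sleep infinity

    # Works inside cluster two, fails from cluster three:
    kubectl --context cluster2 exec client -- curl -s nginx-demo
    kubectl --context cluster3 exec client -- curl -s nginx-demo    # unreachable

    # Install subctl and deploy the broker on cluster one:
    curl -Ls https://get.submariner.io | bash
    subctl deploy-broker --kubecontext cluster1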
The Cluster CRD contains static information about each cluster; we'll look at one in more detail in a few moments. The Endpoints CRD contains dynamic information about the active gateway in each cluster; this is how the connected clusters will communicate with each other. The ServiceImport CRDs are used to synchronize services between clusters; we'll revisit that later. The Lighthouse variant is present for backwards compatibility with previous releases of Submariner; we now use the multicluster.x-k8s.io version. Let's check the pods: deploying the broker doesn't start any, as you can see here.

The next step is to join the clusters. This will add Cluster custom resources, so let's change our view in K9s to keep an eye on that. To join cluster two, we run "subctl join" with the appropriate context and the name of the broker-info.subm file. We also need to disable NAT traversal here, since there's no NAT involved. Because we have two worker nodes, we need to choose one to act as the gateway; everything else is detected automatically. subctl deploys the Submariner operator and configures it. When it says Submariner is up and running, it's a bit optimistic: the pods and containers are still coming up at this point. We can see them in K9s. While we wait for cluster two to join, let's join cluster three. As subctl finishes on cluster three, we can see the cluster two CR appear on cluster one, which means cluster two has joined. Let's take a look at the CR itself: it specifies the CIDRs we're using and the cluster identifier. Ignore the color codes; they don't do much yet.

Now that both clusters are connected, let's try our curl again, first using the IP address. As you can see, the pods in cluster three can connect to pod IPs in cluster two. Submariner has set up an IPsec tunnel between the two worker nodes we chose earlier, and it has added routes so that traffic can go from each pod in either cluster, through the tunnel, to pods or services in the other cluster. However, we're not quite done yet, because the service name doesn't work. There's one more step: we need to export the service, and there's another subctl command for that, "subctl export service". This creates another custom resource, a ServiceExport, which is translated into a ServiceImport on the target clusters. Here are the contents of the CR that was created for us. Our CoreDNS servers use that to provide IP addresses to connect to remote services. With all that in place, we can find the remote service: it's not visible as nginx-demo on its own, but if we add its namespace and .svc.clusterset.local, it works.

Let's take a moment to go over the various components we've used here in more detail. The list of pods shows six different Submariner pod types; to understand them better, let's overlay some diagrams. The first piece of the puzzle we set up was the broker. This is cluster one, which provides access to our CRDs through the Kubernetes API; the data itself is stored using whatever storage backend is configured, for example etcd. Then we joined cluster two. subctl deployed the Submariner operator on the cluster, and the operator then brought up the other components: a gateway engine on the gateway node we'd labelled, a route agent on each node, a Lighthouse agent for service discovery, and a pair of CoreDNS pods that allow us to resolve remote service names. Finally, we joined cluster three. This went through the same process as cluster two, but because there were now two connected clusters, an IPsec tunnel was opened between them, providing the network connectivity layer.
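To make those steps concrete, the join and export phase looks roughly like this. The flags mirror what was narrated (NAT traversal disabled, broker-info.subm from the broker deployment), but the context names and exact flag spellings should be checked against your subctl version:

    # Join clusters two and three to the broker; NAT traversal is disabled because
    # there is no NAT between the kind clusters:
    subctl join broker-info.subm --kubecontext cluster2 --natt=false
    subctl join broker-info.subm --kubecontext cluster3 --natt=false

    # Export the nginx service from cluster two so Lighthouse publishes it to the other clusters:
    subctl export service nginx-demo --namespace default --kubecontext cluster2

    # From a pod in cluster three, the clusterset domain now resolves:
    kubectl --context cluster3 exec client -- curl -s nginx-demo.default.svc.clusterset.local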
The ServiceExport and ServiceImport implementation follows the upstream Kubernetes multi-cluster SIG proposal. This is Kubernetes Enhancement Proposal 1645, the Multi-Cluster Services API (MCS API), which defines a number of concepts that Submariner applies. Thus, Submariner's exported services are defined using the MCS API's ServiceExport CRD, so any tool which can create an MCS API ServiceExport will work with Submariner. Likewise, Submariner's imported services are represented using the MCS API's ServiceImport CRD, and exported services are made available through DNS, using the domain defined by the MCS API.

Another aspect of service exports that I want to demonstrate is how a target is chosen when a service is available in multiple clusters. I'll start by deploying nginx on cluster three, so we end up with nginx services on clusters two and three. Next, I export it so it's available across all joined clusters. Now, if I try to retrieve the IP address of the service from cluster two, I'd rather use the service in the same cluster, and that's what happens: the DNS server only returns the IP address of the service in cluster two. From a cluster without the service, I'd like to use the services available in the other clusters equally. To illustrate this, I'll use cluster one, so let's join it. We also need a test pod, so let's deploy that and retrieve its pod name. Now, if I retrieve the address of the exported nginx service from cluster one, I get back the IP address of either cluster two or cluster three, in roughly equal proportion. In some circumstances, we might want to explicitly target the service in a given cluster; we can do that by prefixing the name with the cluster we're interested in. To show that the local cluster preference wasn't a fluke, let's run it again: the result is the same, cluster two only gets its own service.

What else can we do with subctl? We've included a number of analysis and monitoring features in the tool. First off, the show subcommand allows the state of the clusters to be inspected: the connected clusters, their endpoints, the tunnel driver in use (Submariner can also use WireGuard), the gateways, and the versions of the components that are deployed. There are actually some other useful pieces of information which aren't yet available through subctl; they're available in the Submariner status, which you can see in the Submariner custom resource. Let's go back up a bit. We can see the cluster information shown by subctl, some daemon set tracking information, the connected gateways, and there, under latency RTT, the ping statistics we keep track of continuously. Then there's more endpoint status information, the version (which matches the 0.8.0 tag), the detected network plugin, and so on.

subctl also includes a couple of benchmarking tools: one for latency, which shows the minimum, mean and maximum latency, and another for throughput, which shows the raw throughput output. subctl also includes our full end-to-end test suite; since it's written in Go, we can just ship it in the tool, and you can run it too. It takes a while, so I'll stop it.
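The inspection and test tooling I just used looks roughly like this. The subcommand names are as described in the talk, but the exact arguments have changed between subctl releases, so treat these as an illustration of the 0.8-era syntax rather than something to copy verbatim:

    # Inspect the connected clusters: connections, endpoints, gateways and deployed versions.
    subctl show all

    # Benchmark latency and throughput between two clusters (kubeconfig paths are placeholders):
    subctl benchmark latency kubeconfig-cluster2 kubeconfig-cluster3
    subctl benchmark throughput kubeconfig-cluster2 kubeconfig-cluster3

    # Run the end-to-end verification suite that ships inside the tool:
    subctl verify kubeconfig-cluster2 kubeconfig-cluster3 --only connectivity,service-discovery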
Submariner itself provides Prometheus metrics as well. If you're using OpenShift, you can see these using user workload monitoring; otherwise, you'll have to set up your own Prometheus. Here are the metrics we currently provide. Submariner gateways shows the number of gateways deployed; I'm only looking at a single cluster's metrics here, with a single gateway. Submariner connections shows the number of connections; initially I had two clusters joined together, and we can see where I joined cluster one. Gateway RX bytes and TX bytes show the amount of data going through the tunnels. Connection established timestamp tracks the time at which connections are established. Connection latency seconds shows the latency measured on each connection; of course, it doesn't make sense to stack this particular metric. Connections shows the same information as Submariner connections, but measured from the gateway itself rather than the operator.

If you would like to find out more about Submariner, our website provides detailed information, including quick-start guides for a number of platforms, such as AWS with OpenShift, GKE, Rancher, and our sandbox environment using kind, which is what I just used for this demo. It also provides information about monitoring, troubleshooting, and how to get involved in the community. If you have questions, now is the time: we have a few minutes left in the talk, and I'll be available in the chat afterwards. Thank you for watching.