Hello, my name is Ulysses Alonso Camaro, and I'm going to present an OpenShift multi-cluster automation. I will demo this automation in the use case of performing an OpenShift cluster migration. The automation can also be used as a general GSLB capability in deployments that have several clusters: multiple clusters in the same data center, multiple clusters in different data centers, or a combination of these. The foundation of this GSLB automation is the F5 Cloud Services DNS Load Balancer, a software-as-a-service offering. Because it is SaaS, the customer doesn't need to provision anything in their infrastructure, and it can be easily tested and adopted without commitment. Let's start.

This is a common real-world scenario: a customer initially has a single OpenShift cluster and an enterprise DNS. This DNS has no GSLB capabilities; it cannot probe application availability, externally or at all, cannot return DNS replies based on the client's location, and cannot be automated through an API. This is not suitable for a multi-cluster setup.

To add this multi-cluster capability, we will use a global cloud load balancer based on DNS, hosted in F5 Cloud Services. It doesn't require an appliance to manage, since it is a software-as-a-service solution. It can direct traffic to the nearest application instance, steer traffic for GDPR compliance, and split load across compute instances. It has built-in DDoS protection, and, last but not least, it is fully configurable via APIs.

Let's see how the integration between OpenShift and the GSLB works. On one side, we have one or more OpenShift clusters; on the other side, we have F5 Cloud Services. Both have declarative APIs. What we need is an automation tool that can talk to the two APIs and glue the OpenShift applications into the GSLB. This is GSLB tool, which we will demonstrate later.
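Conceptually, the gluing step reads a project's Route objects from the OpenShift API and extracts the host names to publish as GSLB records. Here is a minimal Python sketch of that idea, assuming the standard route.openshift.io/v1 RouteList shape; the host names and helper name are illustrative, not GSLB tool's actual code:

```python
# Sketch: extract the host names of all Routes in a project, the raw
# material that gets published as DNS load-balancer records.
# The JSON shape follows the OpenShift route.openshift.io/v1 RouteList schema.

def route_hostnames(route_list: dict) -> list[str]:
    """Return the host of every Route in a RouteList API response."""
    return [item["spec"]["host"]
            for item in route_list.get("items", [])
            if "host" in item.get("spec", {})]

# Example RouteList, roughly what
# GET /apis/route.openshift.io/v1/namespaces/crm/routes
# might return for the demo's CRM project (host names are made up).
crm_routes = {
    "kind": "RouteList",
    "items": [
        {"metadata": {"name": "account"},
         "spec": {"host": "account.example.com",
                  "tls": {"termination": "edge"}}},
        {"metadata": {"name": "support"},
         "spec": {"host": "support.example.com",
                  "tls": {"termination": "edge"}}},
    ],
}

print(route_hostnames(crm_routes))  # all routes of the project, handled at once
```

This is why the tool can manage all routes of a project in one operation: the project's namespace is a single API query.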
GSLB tool queries the OpenShift clusters for the applications' routes, processes this information, and publishes it into F5 Cloud Services. This is actuated either by a DevOps user or by an automated process. Lastly, the state of the GSLB is stored in a repository that acts as the source of truth, in this case Git. In summary, GSLB tool is an automation tool that glues together the OpenShift and GSLB APIs. It minimizes operational errors, allows swift application rollout across clusters, and manages all routes of a given project at once, like Red Hat's migration tools.

Let's see how a single-cluster setup would be onboarded into GSLB with F5 Cloud Services for an OpenShift migration. As a preparatory step, we deploy a new OCP cluster. In step one, we use the control plane migration assistance; then we use the application migration tool. At this point, we have our project's applications in the new OpenShift cluster. Next, we provision the GSLB software as a service, which requires just a credit card, and we use GSLB tool to populate the GSLB for the project. We then test the DNS GSLB and the applications, and once we have verified that the applications are operational, we change the authoritative DNS for the zone. With these last four steps, we have completed the migration of all routes of a whole project. Please note that migrations would not be performed in a single pass; instead, these steps would be performed on a per-project basis, like the CAM migration tool does.

The next section of this presentation is a demonstration of how to do an OpenShift migration with F5 Cloud Services and the GSLB automation. Let's look at the projects and routes used in the demo. The first project is called web, and it contains two HTTPS routes, one for the website and one for the shop, handled by two applications. The next project is CRM, Customer Relationship Management.
It also contains two HTTPS routes, in this case with two different host names.

The starting point of the demo is a two-cluster setup. The clusters are named on-prem and AWS-1. The first thing we will do during the demo is add a third cluster named AWS-2, and then migrate the applications from AWS-1 to AWS-2. Initially, the workload is split 50/50 between on-premises and AWS-1. After migrating the applications to AWS-2, we will send some traffic to AWS-2, and once we have verified that everything operates as desired, we will split the traffic 50/50 between on-premises and the new cluster, AWS-2. We will do this for each project.

The first section of the demo is adding the new cluster, AWS-2, to the GSLB tool configuration. Installing GSLB tool is a two-step process: after unpacking the Ansible playbooks, first we clone the Git repository containing the current configuration, and second we copy the credentials file, which is encrypted with Ansible Vault and is not kept in the repository.

The next thing we do is add the cluster AWS-2 to the existing GSLB configuration, which is stored in the setup.yaml file. We just need to add the API endpoint and the public addresses for each availability zone. We update the repository with the modified setup.yaml using the GSLB tool setup update command. At this point, GSLB tool can operate with the AWS-2 cluster, although we haven't done anything with it yet.

The second section of the demo is migrating the CRM project from AWS-1 to AWS-2 using the project retrieve strategy. We start by taking a look at the initial configuration: the CRM project runs in both the on-premises and AWS-1 data centers. In the window on the right, you can see that requests are currently split between the two, and we are going to see how this maps to the actual configuration. So we go to the DNS Load Balancer section.
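As a reference, the setup.yaml addition just described might look roughly like the fragment below. The field names and addresses are illustrative guesses, not the tool's documented schema; the GSLB tool wiki documents the real format.

```yaml
# Hypothetical setup.yaml fragment adding the AWS-2 cluster.
# Field names and addresses are illustrative only.
clusters:
  aws-2:
    api_endpoint: https://api.aws-2.example.com:6443
    availability_zones:
      az-a:
        public_address: 203.0.113.10
      az-b:
        public_address: 203.0.113.11
```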
We can see our delegated domain in Cloud Services, and that we have two FQDNs for this project: account and support. If we go into monitors, we can see that all the members behind the account host name are green, that is, up. There are three members: one for on-premises and two for AWS-1. That is because AWS-1 has two availability zones, so the 50% that goes to AWS-1 is split between those two zones. Going back, we can see the same status and ratios for the support application.

The first operation is to retrieve the routes available in AWS-2 and publish them into the GSLB. We do this with the project retrieve and GSLB commit commands in sequence. We will speed up the execution of the Ansible playbooks. In the later stages of the commit command, after successful publishing into the GSLB, the new configuration is uploaded to the Git repository automatically.

Let's take a look at the changes we have made in the GSLB. We go again to the DNS Load Balancer section and select the domain we are operating on. It still has the same DNS records as before, but in the monitoring section we now see two new endpoints, one for each availability zone of the AWS-2 cluster. We can also see that, at present, the ratio for the AWS-2 cluster is zero, which means we are not sending any traffic to it. If we verify the applications with a browser, we can see that both the account and support applications are served by the AWS-1 and on-premises clusters.

The next step in the migration is to start shifting workload to AWS-2. For that, we use the project ratios command: we specify the CRM project, and then set 85% for the on-premises cluster, 0% for AWS-1, and 15% for AWS-2, so we get a chance to verify that the new cluster really is up.
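The ratio mechanism just described works statistically: each DNS answer is drawn in proportion to the configured weights, so a ratio of zero removes a cluster from rotation without deleting its endpoints. A minimal Python sketch of such weighted selection; the names and numbers mirror the demo but are our illustration, not GSLB tool's API:

```python
import random

# Sketch of ratio-based DNS answer selection: each response is drawn in
# proportion to the configured weights (here, the demo's 85/0/15 split).
ratios = {"on-prem": 85, "aws-1": 0, "aws-2": 15}

def pick_cluster(ratios: dict[str, int], rng: random.Random) -> str:
    """Pick one cluster at random, weighted by its ratio; ratio 0 is never picked."""
    clusters = [c for c, w in ratios.items() if w > 0]
    weights = [ratios[c] for c in clusters]
    return rng.choices(clusters, weights=weights, k=1)[0]

rng = random.Random(42)
answers = [pick_cluster(ratios, rng) for _ in range(10_000)]
print(answers.count("aws-1"))                            # 0 -- never selected
print(answers.count("aws-2") / len(answers))             # close to 0.15
```

This is also why the browser tests later in the demo only show the expected difference clearly after enough requests: the split converges to the ratios only over many samples.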
If the project ratios command succeeds, we then commit the transaction, also specifying the reason for the change. While the commands run, I'd like to mention that it is possible to perform several operations on the clusters and commit the changes all at once. I'd also like to mention that the GSLB commit command either successfully publishes the new desired state of the GSLB zone or doesn't perform any change at all; it never performs partial updates on the GSLB state.

Now the transaction has been fully committed, and very soon we should see the change; in fact, we can already see that the traffic that was going to AWS-1 is now being sent to on-premises and AWS-2. We also see more requests going to on-premises than to AWS-2. This is a statistical process, so the more samples we get, the more clearly this difference in ratios shows.

To finalize the migration, we change the ratios again, this time splitting the workload evenly between on-premises and AWS-2. For AWS-1, a cluster that we will eventually decommission, we set a ratio of zero. Again, if the ratio change succeeds, we commit this information into our source of truth, Git, specifying the reason for the change. The commit has completed, and we can see that the workload now begins to be split evenly between the two data centers, on-prem and AWS-2. With a browser, we can verify that this time requests are served only by AWS-2 and on-premises; in this case, this is the support page.

The next project to migrate is the web project. It contains the WWW website with two applications, one at the root path and another at /shop. For this project, we will use a different migration strategy: the project populate strategy.
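Earlier I mentioned that the GSLB commit command either publishes the whole desired state or changes nothing at all. As a hedged illustration of that all-or-nothing behaviour (this is not GSLB tool's actual code, just the pattern), a transactional apply can be sketched as: snapshot the current state, stage every change, and roll back on any failure.

```python
import copy

# Sketch of an all-or-nothing commit: validate and apply every change as
# one unit, restoring the previous state on any failure, so no partial
# update ever survives. Illustrative only; not GSLB tool's implementation.
class GSLBState:
    def __init__(self, ratios: dict[str, int]):
        self.ratios = ratios

    def commit(self, changes: dict[str, int]) -> None:
        snapshot = copy.deepcopy(self.ratios)
        try:
            for cluster, ratio in changes.items():
                if ratio < 0:
                    raise ValueError(f"invalid ratio for {cluster}: {ratio}")
                self.ratios[cluster] = ratio
            # here the real tool would publish to F5 Cloud Services and
            # push the new desired state to the Git source of truth
        except Exception:
            self.ratios = snapshot  # roll back: no partial updates
            raise

state = GSLBState({"on-prem": 50, "aws-1": 50})
state.commit({"on-prem": 50, "aws-1": 0, "aws-2": 50})
print(state.ratios)  # {'on-prem': 50, 'aws-1': 0, 'aws-2': 50}

try:
    state.commit({"on-prem": 50, "aws-2": -1})  # invalid change
except ValueError:
    pass
print(state.ratios)  # unchanged: {'on-prem': 50, 'aws-1': 0, 'aws-2': 50}
```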
This is the Cloud Services UI, and you can see that the configuration doesn't yet reflect the initial conditions of the demo, in which the WWW website runs in the on-prem and AWS-1 clusters. So the next step is to set up these initial conditions, and then we will perform the migration from AWS-1 to AWS-2. I wanted to show you this preparation step to demonstrate how easily we can populate the GSLB from scratch, and also how GSLB tool can batch several commands and publish them in a single Git commit.

So before demonstrating the project populate strategy to migrate from AWS-1 to AWS-2, we first have to set up on-premises and AWS-1, which are the preliminary conditions. We use the project retrieve command as before, for the web project in on-premises and again for the web project in AWS-1. We set the project ratios for this project to 50/50, and we commit these transactions, indicating the reason for the change. We will speed up the execution of these four commands.

We can now see the last steps of the transaction being committed. Eventually, the health probes detect that the two applications are up in each cluster, and soon we should see traffic being sent to the AWS-1 and on-premises clusters. If we refresh the UI on the right, we see this change reflected: we have the new WWW website. Going into the details, we see a new monitor for the WWW website, which checks both the /shop path and the root site. It has three public endpoints: one for on-premises, which receives 50% of the traffic, while the other 50% is sent to AWS-1, split evenly between its two availability zones. To verify that the preliminary setup completed successfully, we request the two applications with a browser and see that we are served by both data centers.

With the preparations finished, we can now do the actual migration of the web project from AWS-1 to AWS-2.
This time, we use the project populate strategy. When we migrated the CRM project, we used the project retrieve strategy, which gathers the routes from the cluster we want to publish. This means that with the project retrieve strategy, the routes in the newly published cluster might differ from those in the other clusters. With the project populate strategy, instead, we use a reference cluster to gather the routes that we expect in the new cluster. The reference cluster can be different from the cluster we are migrating from: in this demo, although we are decommissioning the applications in AWS-1 and moving them to AWS-2, we will use the on-prem cluster as the reference for the routes we expect in AWS-2.

The project populate command just requires specifying the project, the reference cluster, and the destination cluster. Then, we use project ratios on the web project to disable AWS-1 and enable AWS-2, keeping on-prem at the same ratio. Finally, we commit these two commands in a single transaction. Again, we show the run of these commands accelerated.

Once the commit completes, we can see that AWS-1 no longer receives any traffic, and that eventually AWS-2 and on-prem receive the same amount of traffic. The delay in AWS-2 is because we did this migration in a single step, and Cloud Services needs a few seconds to retrieve the initial health status of the new cluster. If we had first published AWS-2 with a ratio of zero, as in the previous migration, this transition would have been perfectly smooth.

To finish the demo, we verify the resulting configuration for the web project in Cloud Services. As expected, the AWS-1 cluster has a ratio of zero, each availability zone of AWS-2 has a ratio of 35%, and the on-premises cluster has a ratio of 50%.

If you are interested in GSLB automation for OpenShift multi-cluster setups or cluster migrations, please visit GSLB tool's GitHub page and its wiki. Thanks for watching.