Hello, everyone, and welcome to our demo of Rancher Continuous Delivery. I'm William Jimenez, Technical Product Manager at Rancher Labs, and I'm excited to share with you today this new technology for managing clusters. So, what is Continuous Delivery? Well, it's a technology powered by a new open-source project that Rancher Labs has been working on called Fleet. Fleet allows you to manage Kubernetes policy, application deployments, and infrastructure settings across your clusters through a simple declarative syntax, with everything defined in a Git repo. It leverages Git as the source of truth for all your Kubernetes settings. Fleet also provides visibility and control as you apply changes to your clusters, whether you have one or one million clusters under management. And these features enable a GitOps model that truly scales to meet the most demanding software challenges of today. And the number of clusters is increasing. As companies mature in their use of Kubernetes, they naturally create more clusters. And we see emerging technologies such as IoT and edge computing, which drastically increase the number of clusters under management. So, we believe that the future will require some of us to manage thousands or even millions of clusters. So, what are the existing ways that people handle this today? Well, we can simply adopt a one-repo-per-cluster model, which many of us might already have done. And this has benefits. It's very easy to set up, and it uses a pull model, so each cluster is continuously checking the repo and trying to apply those changes. The disadvantage is that it's a lot of machinery to manage: a lot of Git repositories, a lot of automation mechanics to keep running on every cluster, pull-and-apply mechanics that have to be making these changes. So, it's a lot of things that can go wrong and a lot of overhead.
Another model is to have one repo that many clusters pull from. The advantage here is that we now have a single repo to manage many clusters, certainly a simpler approach to GitOps in a highly scaled environment. The disadvantage is that all of these clusters are now pulling from the repo at indeterminate times, without us necessarily having control. So, all clusters can in theory get updated very quickly at once, which we may not want. We don't have visibility into the update status; we just know there's an eventual-consistency expectation, but we don't know when things are actually consistent. So, this provides benefits but still has some challenges. Another model, which addresses the challenges of a distributed pull model, is to use a push model. The advantage of the push model is that we do get control of the rollout. We get control of the transaction that might happen across our clusters, we get some centralized power back, and we get some visibility. The disadvantage is, frankly, the push model just doesn't scale. When we're talking thousands or millions of clusters, a push model is not workable. Ingress network access is required for all of these downstream Kubernetes clusters, and sometimes they're going to be in remote locations where the network will not be amenable to such a topology. And so, this is where we come to the Fleet-based approach to GitOps at scale. In this model, we take some of the best of the previous examples and apply changes using agents that pull updates from a central controller. So, we get the benefit of eventual consistency that is not dependent on push. But because we have agents that are checking in and syncing metadata with a controller, we have more visibility. We know when the transaction completes, and we can also inform the agents as to when they should start the transaction. We get that rollout control.
So, it's really just taking the first iterations to the next step: adding a controller-based model, similar to how Kubernetes uses controllers in its distributed topology, and applying that at the cluster level. So, what does the GitOps architecture look like? Well, Fleet is a system of controllers and Kubernetes resources designed to take definitions in Git, which can be in a variety of formats such as straight Kubernetes YAML, Helm, or Kustomize if you want to use that for formatting your Kubernetes resources. It takes that code in your Git repo and then applies it transactionally to the downstream clusters using the agents we talked about. We have the ability to manipulate groups of clusters independently using our cluster groups feature, and it's also a provider-independent, pure-Kubernetes solution. So, while this was developed at Rancher, it is actually designed completely on top of the Kubernetes API and does not require Rancher technology to exist, only a CNCF-conformant Kubernetes cluster. The two main components of Fleet are the controller and the agent. The controller is a central component that is responsible for watching for updates from Git and then communicating the needed transactions to the agents downstream. The agents run in each individual downstream cluster, and they're responsible for watching for state changes from the controller, applying them transactionally, and reporting the status back to the controller. So, you can see we're following the very typical distributed-controller model that Kubernetes employs in really all of its architecture, so a lot of the technology and the approach should become very familiar once you start looking at it. Okay, so let's dive into a demo now and see how this technology works firsthand.
So, here we are in the Cluster Explorer view, which is part of our new Rancher 2.5 UI, which you may or may not have seen yet. From this dashboard, we can simply click on this top-left menu and click on Continuous Delivery. And now we are in the Continuous Delivery view, and we can start making changes. Okay, so the first thing we're going to do is create a cluster group. A cluster group is a way to address multiple clusters in Fleet, and we're going to use this example here to address all of our clusters in the North America region. So, I'm just going to call this North America clusters. Notice that I used an arbitrary tag here that I just came up with, and this tag is region equals North America. So, now in order to get my clusters to register with this group, I need to add that tag to those clusters. I'm going to go to the clusters page and simply assign them a tag as such. Now, there are actually a few ways we can do this. We can do this even from the standard UI, where we can tag clusters, but it's conveniently located in the Fleet UI here. So, we've tagged both these clusters now, and we should be able to see them in our cluster group if we check there. And there we go. It looks like we have two clusters now in our cluster group. Okay, so now that we have a cluster group that we can work with, let's go ahead and register the Git repo that we're going to use to start deploying some code. So, I'll go to Create Repo and give this repo a friendly name, and I'm going to put in the Git URL. In this case, this is on GitHub, and it's publicly accessible. Then I'm going to tell it to only look at code in a certain path within that repo; I can use the root of the repo or a specific path. And in this case, I'm also going to deploy only to the cluster group that we just created. All right, and that's it. It's now checked out the code from Git, and it should be applying the changes.
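Everything we just did in the UI is backed by Fleet custom resources, so the same setup can also be expressed declaratively. Here is a rough sketch of what the ClusterGroup and GitRepo objects might look like; the names, repo URL, and label value are illustrative placeholders, and the field names follow Fleet's `fleet.cattle.io/v1alpha1` API as best recalled, so check the Fleet documentation for your version:

```yaml
# Hedged sketch, not the exact objects from the demo.
apiVersion: fleet.cattle.io/v1alpha1
kind: ClusterGroup
metadata:
  name: north-america-clusters
  namespace: fleet-default
spec:
  # Clusters carrying this label join the group automatically.
  selector:
    matchLabels:
      region: north-america
---
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: simple-app
  namespace: fleet-default
spec:
  repo: https://github.com/example/fleet-examples  # placeholder URL
  paths:
  - simple            # only watch this path within the repo
  targets:
  - clusterGroup: north-america-clusters
```

Applying these with kubectl should have the same effect as filling out the Continuous Delivery forms, since the UI is just a front end over these CRDs.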
Notice we can also look at these objects as YAML. All of the objects that we're creating today are typical Kubernetes CRDs, and we can view them as such. And if I look at my app here, I can see that all of the deployments are now in a ready state, which means they have synchronized with the cluster and are now deployed. To validate that, let's go ahead and take a look at the Cluster Explorer view. Let's look at the deployments in our clusters and see if we can see what we expect to see, which is, in this case, a few Redis deployments. And there you have it. There's our Redis master and slave, and these were deployed through Fleet, because they weren't there previously. Now let's take a look at the code that powers Fleet. So here we have the Git repo that we used earlier in our example. And if we take a look at what's inside this folder here for the simple application, we notice that it's actually just made up of simple Kubernetes YAML files. This is just the standard YAML format that you've always used with Kubernetes. So it can be that simple to get started with Fleet and to start automating your deployments with a GitOps model. The next thing we want to look at is how we can also manage the Rancher server itself with Fleet. This is called the Fleet local context. So if we go to this local context here, we see that we have one cluster, the local cluster, as we'd expect. This is just the Rancher management server's cluster. And here we can add a Git repo just like we did previously, and we're going to do much the same thing. In this case, I'm going to deploy the backup operator using Git. So I simply fill out the form as before, and now the backup operator information is being synced from Git, and it should start deploying immediately. And there you have it. It's deployed our operator. Pretty easy, huh? Yeah, it definitely makes deploying and configuring things a lot easier once you've got it set up.
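To make "just simple Kubernetes YAML" concrete, here is a minimal sketch of the kind of Deployment manifest such a repo folder might contain. The image tag and labels are illustrative, not copied from the actual demo repo; there is nothing Fleet-specific in the file at all:

```yaml
# A plain Kubernetes Deployment — exactly what Fleet picks up from Git.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis
        image: redis:6.0      # illustrative tag
        ports:
        - containerPort: 6379
```

Commit a file like this under the watched path, and Fleet's agents roll it out to every targeted cluster on the next sync.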
And we'll take a look in a second at the Git repo for this context as well, because it's slightly different from the one we used last time, and we'll learn some new things about how we can configure things with Fleet and the GitOps model. Now let's quickly take a look at the cluster status itself and see if we can see the deployment of the backup operator successfully working. So I just went back to the Cluster Explorer view, and notice that if I look at this cluster dashboard, I can see there's a job that just ran, and that job looks like the one related to backup. Let's take a look. And I can see here in my object definition, which is just the Kubernetes object representation in YAML, that there is a job that was run for the backup operator and that has backed up my cluster. And so now let's take a look at the Git repo for the backup operator. Notice here that if I click on this, I'm going to see a slightly different structure. I actually have a file here called fleet.yaml. Now, this is a different type of file than just straight Kubernetes YAML. It's a special file for Fleet that allows you to configure some additional properties that Fleet should be aware of. So let's take a closer look at the fleet.yaml. Fleet.yaml is a declarative YAML file that allows us to specify how we should deploy certain resources, whether Kustomize, Helm, or plain Kubernetes YAML. It allows us to control the rollout policy, how we want the cluster to handle certain conditions, and what things we want to target. But we don't have to use it; we can also just use regular Kubernetes YAML. And so that's the end of our demo. Thank you so much for watching. And if you have any more questions, stop by the Rancher booth. We hope to see you next time.
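To round out the fleet.yaml discussion above, here is a minimal sketch of what such a file can look like. The values and label names are illustrative, and the field names follow Fleet's documented fleet.yaml schema as best recalled, so verify against the docs for your Fleet version:

```yaml
# fleet.yaml — hedged sketch; sits at the root of the watched path.
# Namespace to deploy into when manifests don't specify one.
defaultNamespace: backup-operator

# Deploy the contents of this directory as a Helm chart. Fleet also
# accepts plain Kubernetes YAML or Kustomize instead of Helm.
helm:
  releaseName: rancher-backup
  values:
    replicas: 1

# Per-target overrides: customize deployment for clusters matching
# a label selector, a cluster group, or a cluster name.
targetCustomizations:
- name: production
  clusterSelector:
    matchLabels:
      env: production       # illustrative label
  helm:
    values:
      replicas: 3
```

This is how one repo can drive many clusters with slightly different settings: the base definition stays shared, and targetCustomizations layer on per-environment differences.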