Hi, this is Paul from Weaveworks. In this video I'm going to demonstrate a simple build pipeline for promotion from development to production using EKS and EKS Distro (EKS-D). Before we begin, though, let's take a quick look at GitOps again, just so that everyone understands what they're seeing. The principles of GitOps are very simple. Everything in Kubernetes is declarative, so you declare everything that's in your cluster. You use Git to keep a versioned record of that state, which lets you monitor what's going on and serves as the authoritative source of truth. Tooling that Weaveworks has written over time, both commercially and in open source, applies changes to the cluster after checking with Git, and software agents continuously reconcile what you've declared in Git with what is actually running in the cluster at any given time.

For this demo there are two clusters. The first, the development cluster, is running EKS-D on Equinix Metal. It's a very simple single-node cluster with only Flux, the GitOps tool, installed on it. The second cluster is a bit bigger: it's running in EKS using the Weave Kubernetes Platform, and it's where the production containers are deployed. The connection between these two clusters is a single Git repository. The manifests the developer writes for the development cluster go into the dev branch, while the production cluster draws its manifests from the main branch of the same repository. To promote containers from development to production, the developer opens a pull request; once it's reviewed and merged, the developer's new containers are deployed into production. Our production environment also includes two other features: we've installed the Linkerd service mesh and Flagger, the GitOps progressive delivery controller, to perform a canary deployment for new containers deployed into production.

OK, let's begin with the development cluster. The manifests for that cluster are contained in the dev branch of this Git repository, which holds the manifests, Helm charts, and Kustomize templates required to run the developer's application. Now, we've just heard from marketing that they want to change the background color of this particular application from green to blue. So the developer finds the manifest for the application and changes the color from green to blue manually; a sketch of what that edit might look like follows below. Keep in mind that Flux is running in this cluster, pointed at the dev branch (that wiring is also sketched below), so when the developer commits the change, the new manifest is applied and the application is redeployed automatically. Running in the cluster you can see our front-end pods alongside the Flux v2 controllers: the notification controller, the kustomize controller, the helm controller, and the source controller. When Flux detects a change to the Git repo it starts the new pods, as you can see happening right now. As those pods come online, Kubernetes performs a rolling update (a literal green-to-blue progression here), and the color of the UI changes to blue. While both sets of pods are running, the UI shifts between the two; once all the new pods are up and the old pods have gone away, we see nothing but blue. OK, so now we've completed our deployment and we're very happy with it.
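To make the walkthrough concrete, here is a minimal sketch of the kind of edit the developer makes. The demo's actual repository isn't shown in detail, so the Deployment name, image, and the COLOR environment variable are all assumptions for illustration; the point is that the change is a one-line edit committed to the dev branch.

```yaml
# Hypothetical excerpt of the app's Deployment manifest in the dev branch.
# The names, image, and COLOR variable are illustrative assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: demo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: example.com/frontend:1.4.2  # hypothetical image tag
        env:
        - name: COLOR
          value: "blue"  # was "green"; this one-line edit is the whole change
```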
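And here is a minimal sketch of how each cluster's Flux could be pointed at its branch of the shared repository, using the Flux v2 GitRepository and Kustomization APIs mentioned in the demo. The repository URL, names, and path are assumptions, not the demo's actual configuration.

```yaml
# Development cluster: sync from the dev branch (URL and names assumed)
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: demo-app
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/demo-app-config
  ref:
    branch: dev
---
# Production cluster: the same repository, but the main branch
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: demo-app
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/demo-app-config
  ref:
    branch: main
---
# Both clusters apply the synced manifests with a Kustomization like this
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: demo-app
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: demo-app
  path: ./manifests
  prune: true
```

The only difference between the two clusters' sync configuration is the branch, which is what makes a merged pull request from dev to main act as the promotion step.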
So at this point the developer says, OK, we're ready to go; let's hand this off to the ops team to deploy into production. Rather than deploying directly, we create a pull request from the dev branch to the main branch, and as you can see the only thing I changed was the color of the UI. I create my pull request, put in my note, and submit it. Nothing has happened in production yet; the pull request is simply waiting to be merged and deployed into the production cluster. So let's switch over to the production side.

Our production cluster is using the Weave Kubernetes Platform, a completely GitOps-enabled platform that packages up a set of standard cluster components and isolates applications through the use of workspaces. In this production cluster, our application is here at the very top. The first thing we have to do is merge our pull request: we review it, we're happy with it, and we merge it.

Now, in our cluster we've configured a canary deployment for our application. Because everything is GitOps, that canary deployment is defined here in Git as well, and you can see it targets our specific application; a sketch of such a Canary definition follows below. We also have Linkerd as our service mesh in this cluster, so we can monitor the traffic shift as the service mesh begins its progressive delivery, and we can watch the pods change at the same time. The traffic can be monitored with Linkerd's built-in Grafana dashboards. In this Grafana dashboard we're going to see the traffic alter over time: as the new pods start, traffic begins to shift from the old pods to the new pods at the pace we've configured in our Canary definition for Flagger. Here you can see the progression beginning. We've set it to intervals of five seconds, moving five percent of the traffic at each step, so it will take about a minute. Meanwhile, you can see there are now two sets of pods running: our original set and the new set. The benefit of using a service mesh with progressive delivery is that there is no downtime for any user; nobody sees the changeover. The second thing is that the Canary, as defined here, also has thresholds, which determine whether the progressive delivery and the newly deployed pods are working adequately. The metrics you see here come from Prometheus, and any Prometheus metric can be used to decide whether the progressive delivery passed or failed. Right now we're at 40 percent and 60 percent. Our cutover point is 50 percent, which says that when we reach 50/50, with half the traffic going to the old pods and half to the new pods, Flagger tests that everything is running correctly, then shifts the traffic over to the new pods and scales down the old ones. If you look at this graphically you can actually see where this happens, so let's zoom in and take a quick look at the traffic as it moves.
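The Canary definition isn't reproduced on screen in full, but based on the parameters described, five-second intervals, five-percent traffic steps, a 50 percent cutover, and Prometheus-backed checks, a Flagger Canary for Linkerd might look roughly like this. The app name, namespace, port, and the failed-check threshold are assumptions.

```yaml
# A sketch of a Flagger Canary matching the behavior described above;
# names, namespace, port, and the failed-check threshold are assumptions.
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: frontend
  namespace: demo
spec:
  provider: linkerd              # shift traffic via the Linkerd service mesh
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend               # the deployment being promoted
  service:
    port: 8080                   # hypothetical container port
  analysis:
    interval: 5s                 # "intervals of five seconds"
    stepWeight: 5                # "five percent of the traffic" per step
    maxWeight: 50                # cut over once traffic reaches 50/50
    threshold: 5                 # assumed number of failed checks before rollback
    metrics:
    - name: request-success-rate # Flagger built-in, backed by Prometheus
      thresholdRange:
        min: 99                  # assumed minimum success rate, in percent
      interval: 1m
```

When the analysis passes at maxWeight, Flagger promotes the canary: the new pods become the primary and the old set is scaled to zero, which is exactly the behavior shown next.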
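The demo also notes that any Prometheus metric can drive the pass/fail decision. In Flagger that is done with a MetricTemplate referenced from the Canary's analysis; the sketch below computes an error rate from Linkerd's response_total metric. The Prometheus address, names, and threshold are assumptions.

```yaml
# Sketch of a custom Prometheus check; address and names are assumptions.
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: error-rate
  namespace: demo
spec:
  provider:
    type: prometheus
    address: http://prometheus.monitoring:9090  # hypothetical Prometheus URL
  query: |
    sum(rate(response_total{namespace="{{ namespace }}",
      deployment="{{ target }}", classification="failure"}[{{ interval }}]))
    /
    sum(rate(response_total{namespace="{{ namespace }}",
      deployment="{{ target }}"}[{{ interval }}]))
    * 100
---
# Referenced from the Canary's analysis section, for example:
# metrics:
# - name: error-rate
#   templateRef:
#     name: error-rate
#     namespace: demo
#   thresholdRange:
#     max: 1       # fail the check if more than 1% of requests fail
#   interval: 1m
```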
As you can see, the traffic to the original pods, here in orange, is scaled down while the new pods, here in blue, are scaled up over time. Once Flagger has determined that the new pods are working adequately, it switches them over: the new pods become the primary, the old pods' traffic is scaled down, and then the old pods themselves are scaled to zero. You can see that happening right now: the traffic has gone back up to one hundred percent, all of it going to the new version of the pods, and if you look over here you can see that the pods were not only renamed but the old pods are being scaled down to zero. At this point all of the traffic is diverted to the new pods.

This was done completely with Git and GitOps. Our Canary, defined here in Git, defined the progressive delivery. We used Git and pull requests to promote from the development environment to the production environment. The Weave Kubernetes Platform gave us a multi-tenant, production-quality Kubernetes infrastructure which, by the way, can run on anything: as you can see, it's running here on hosted EKS while our development environment runs on EKS-D. All of this is achieved through GitOps, because GitOps is platform independent: it will work on any Kubernetes, and it really doesn't matter where that Kubernetes is. Thank you very much for watching, and I hope you're enjoying KubeCon.