Hello, my name is Katie Gamanji and currently I am the Ecosystem Advocate at CNCF. I joined CNCF last year, and my responsibility is to lead the end user community while bridging the gap between the end user organizations and the projects within the ecosystem. I have many roles within the community, and one of them is being a member of the advisory board for Keptn, which currently is a CNCF sandbox project. As well, I am collaborating with OpenUK to make sure that open standards are used across data, software and hardware. And as part of my motivation to make cloud native ubiquitous, I have collaborated with Udacity to create the Cloud Native Fundamentals course. This course is constructed to take anyone with little programming experience on the path and journey towards cloud native.

I have mentioned the end user community, and I'd like to give a bit more detail about what it actually represents. The CNCF end user community currently consists of more than 145 vendor-neutral organizations that use cloud native to build and distribute their services. It is the largest end user community out there, and it is at the core of CNCF's goal of end user-driven open source. We are looking at these organizations to define the production experience, but at the same time to shape the organic growth of the projects within the ecosystem. If you'd like to find out more details about the end user community and their usage of cloud native, visit cncf.io/enduser. As well, if you'd like to showcase your adoption of cloud native, you'll find all of the details there on how you can join the end user community.

Today, however, I'd like to talk about GitOps, but more importantly how it unlocks infrastructure provisioning and application propagation towards the edge. To do so, I would like to start by introducing the cloud native GitOps tools, and here I'm going to give a brief introduction of tools such as ArgoCD and Flux. I would then like to transition into using GitOps in more real-case scenarios, such as infrastructure provisioning, where I'm going to talk about the Cluster API and Flux CD integration, and how we can push applications towards the edge by using the association between KubeEdge and ArgoCD.

Before I move forward with the topic, I'd like to introduce the ecosystem that allowed this new GitOps model to be constructed, or to be identified within the community. If you look seven years ago, the container orchestrator space was heavily diversified. We had tools such as Docker Swarm, Apache Mesos, Kubernetes, CoreOS Fleet, and all of them provided a viable solution to run containers at scale. However, Kubernetes took the lead in defining the principles of how to run containerized workloads. Nowadays, Kubernetes is known for its portability and adaptability, but more importantly for its approach towards declarative configuration and automation. And we can see this represented in numbers. Based on the CNCF survey last year, more than 83% of companies are using Kubernetes in production. Moving on to the contributor community, more than 2,500 engineers are actively collaborating towards feature build-out and bug fixing. And looking at the end user community, more than 41,000 attendees were registered at the KubeCons around the world; these were the virtual KubeCons in Europe and North America.
And this has been extremely beneficial for Kubernetes, because over time multiple tools were built around it to extend its functionalities. This created what today we know as the cloud native landscape, which resides under the CNCF umbrella, or the Cloud Native Computing Foundation. Currently the CNCF landscape provides a lot of tools that enable further functionalities or integrations. However, the community focused on enhancing the developer experience, but more importantly on introducing a new way, or redefining the way, we deploy cloud native components. And this is how the GitOps model came about.

When we refer to GitOps, it is essentially a strategy that uses Git repositories as the source of truth for defining the desired state of the application. By using this model, by default we're going to have a PR-based rollout. That means that the delta between your local environment and the production cluster is just one PR away. As well, with GitOps we have the feature of automatic reconciliation. The GitOps tools are going to watch a repository, and if new changes are identified they're going to be extracted and applied to the cluster straight away. But more importantly, with GitOps we have a versioned state of our cluster. That means we have historical data points where we know our application was up and running, and using Git commands we can easily revert to that state.

When we look into the cloud native ecosystem, this area is currently led by tools such as Flux and ArgoCD. Flux is an incubating CNCF project and has been donated by Weaveworks, while ArgoCD has been donated by one of our end user organizations, which is Intuit.

At this stage we have a brief introduction of the cloud native tools that support the GitOps strategy. However, I would like to dive a bit deeper into how GitOps can be used in real-case scenarios. In this particular section, I'm going to focus on how Flux CD can be used to automate the infrastructure provisioning of a cluster by using Cluster API. I would like to introduce Cluster API first, just to make sure that we have the ground knowledge on the components of a Cluster API setup. Cluster API provides a set of declarative APIs for cluster creation, configuration, management and deletion across multiple cloud providers. Cluster API is a project of SIG Cluster Lifecycle and had its initial release in April 2019. Since then it has had two more releases, and currently it presents a v1alpha3 endpoint. I mentioned that Cluster API integrates with multiple cloud providers, and currently we have a dozen of them, including some of the major cloud providers such as AWS, Azure, Google Cloud and many more. As well, bare-metal provisioning of clusters is currently supported by Packet and Tinkerbell, and we even have support for Chinese providers such as Tencent, Baidu and Alibaba Cloud.

Let's look in a bit more detail at how Cluster API works. Suppose you'd like to provision a set of clusters in different regions and different cloud providers. The first step is to provision a management cluster. For testing purposes it is recommended to use kind, which is essentially a dockerized version of Kubernetes; it's quite lightweight and you can even run it on your local machine. If you'd like to use Cluster API in production, it is recommended to use a fully fledged Kubernetes cluster, because it comes with a more sophisticated failover mechanism.
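To give a flavour of that first step, here is a minimal sketch of a kind configuration for a local management cluster. The single control-plane node layout and the file name are my own assumptions, not something taken from the demo setup.

```yaml
# kind-management.yaml -- minimal, illustrative config for a local management cluster
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
```

Creating the cluster with `kind create cluster --config kind-management.yaml` and then running `clusterctl init --infrastructure aws` would typically install the CRDs and controller managers described next.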
Once we have our management cluster up and running, we'll require the dependencies installed on it, which are represented by the controller managers. Currently there are three sets of controllers that we require: one for the Cluster API CRDs, or custom resource definitions, one for the bootstrap provider and one for the infrastructure provider. Cluster API introduces five new custom resource definitions, and we'll require a controller for each of them to make sure we can add, remove or reconcile any changes to them. The second set of controllers is the bootstrap provider, and this is the component that translates the YAML configuration into a cloud-init script and makes sure to add the instance to the cluster as a node. Currently this functionality is supported by kubeadm and Talos. And the last set of controllers is the infrastructure providers; this is the part which actually interacts with the cloud provider API and provisions resources such as instances, VPCs, subnets, security groups and many more. Of course, we're going to have a controller for every single cloud provider.

Once we have our controller managers up and running, we'll be able to provision our target clusters. These are going to be the clusters we will provision or deliver to our engineering teams to place their applications on top of, and the clusters that customers will interact with when consuming a product or a service.

Now, I've mentioned that Cluster API introduces five new custom resource definitions, but more importantly this enables us to represent our infrastructure as code using YAML. The first resource that we'll require is a Cluster resource, which takes care of the networking components of the cluster. We'll be able to specify the CIDR for our pods, or any DNS suffix if we have one. The Cluster resource by default is going to be associated with a control plane resource. The control plane resource allows us to declaratively manage a set of master machines. On top of it we have our Machine resources, which are essentially the configuration for a node or an instance; we'll be able to define settings such as the instance type, the version of Kubernetes and many more. Once we have our cluster and control plane configured, we might require a data plane. These are going to be the worker nodes we're going to place our applications on top of. Usually this is managed using a MachineDeployment, and the MachineDeployment resource is very similar to a Deployment: it comes with a very powerful rollout strategy between different versions of MachineSets. A MachineSet is very similar to a ReplicaSet; it will make sure that we have a given number of Machine resources up and running at all times. And lastly, we're going to have the actual Machines, or the actual instances, which will define our nodes within the cluster, but their role in this case is going to be the worker nodes.

Before I move forward with the live demo, I would like to showcase the current setup that I have. Cluster API essentially introduces a new concept of the cluster as a resource, so we'll be able to define our infrastructure using YAML manifests. Once we have our manifests, we'll be able to store them in a GitHub repository, so we'll have these historical data points of different versions of our manifests.
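To make those resources a bit more concrete, here is a minimal sketch of a Cluster and a MachineDeployment for the AWS provider, using the v1alpha3 API. All names, the CIDR, the Kubernetes version and the referenced templates are illustrative placeholders rather than the manifests from my demo.

```yaml
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: demo-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]          # CIDR for the pods
  controlPlaneRef:                            # the set of master machines
    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    name: demo-control-plane
  infrastructureRef:                          # cloud-provider-specific cluster resource
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSCluster
    name: demo-cluster
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: demo-workers                          # the data plane / worker nodes
spec:
  clusterName: demo-cluster
  replicas: 3
  selector:
    matchLabels: {}
  template:
    spec:
      clusterName: demo-cluster
      version: v1.19.4                        # Kubernetes version for the worker machines
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfigTemplate
          name: demo-workers
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: AWSMachineTemplate
        name: demo-workers
```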
And we can even introduce an extra layer of abstraction, or a configuration manager, such as Helm or Kustomize. In this particular demo I'm going to use Helm, so we'll be able to parameterize our manifests using a Helm chart. To deploy all of these resources, I'm going to use Flux CD, so if there are any new changes to our manifests, they're going to be identified by Flux and applied straight away. The Cluster API components and Flux CD are installed on the management cluster, and from this point the management cluster will be able to provision our target clusters. It is worth mentioning that for this demo I have already provisioned the target cluster, mainly because it takes around 10 minutes to have it up and running. However, in the demo I'm going to change the number of worker nodes we have within our cluster. For that, we're going to change our YAML manifests through the Helm chart, and these changes are going to be identified by Flux and applied to the target cluster straight away.

Before I move forward, I would like to introduce the two sets of configuration, or custom resource definitions, that Flux CD requires. The first set of configuration defines the location of our manifests, so where we have the desired state of our application, or in this case the desired state of our infrastructure. The second set of configuration is the release strategy, so how we would like our components to be deployed. In this particular demo, my Helm chart is stored in a Git repository, so it is going to be under the source controller's ownership. As such, I'm going to have a GitRepository resource with the name capi-aws. In the spec section, you can see that I have an interval of 30 seconds, so Flux is going to monitor this repository every 30 seconds, and if new changes are identified, they are going to be applied straight away. And lastly, we define where our manifests are living, and in this case that is within the Cluster API Helm chart repository, checking out the main branch.

As mentioned, the second set of configuration is the strategy for releasing our components. Since I have a Helm chart, I'm going to use the Helm controller's capabilities. As such, I'm going to have a HelmRelease resource with the name capi-aws. I'm going to have an interval of 30 seconds as well, meaning that if new changes are identified to our Helm chart, we're going to apply them straight away, well, within 30 seconds. In the chart spec section, we reference the Git repository that we defined in the earlier slide. So here we define our repository, but more importantly, we specify the relative path towards our chart; in this case, our chart is located under the charts/capi-aws folder. And towards the end, we can see that we select a particular values file for our Helm chart, so essentially what input variables we should have for this chart. In this particular case, all of the variables are going to be stored within the values/demo.yaml file.
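Put side by side, those two Flux resources look roughly like the sketch below. The repository URL is a placeholder, and the exact API versions and values-file field may differ depending on the Flux release you run; the names simply follow the capi-aws example from the slides.

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: capi-aws
  namespace: flux-system
spec:
  interval: 30s                                              # poll the repository every 30 seconds
  url: https://github.com/example/cluster-api-helm-charts    # placeholder URL
  ref:
    branch: main
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: capi-aws
  namespace: flux-system
spec:
  interval: 30s
  chart:
    spec:
      chart: ./charts/capi-aws                 # relative path to the chart in the repo
      sourceRef:
        kind: GitRepository
        name: capi-aws
      valuesFiles:
        - ./values/demo.yaml                   # input values for the chart
```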
And without any further ado, let's see how this actually looks in action. As mentioned, I have a Helm chart in this repository for templating all of my Cluster API components, so if I look within the templates folder, I'll be able to see all of our YAML manifests. For example, we can go to the Cluster resource. Here we define a CIDR for our pods, and we reference our control plane and our infrastructure reference, which defines it as an AWSCluster. However, in this demo I would like to change the number of replicas we have for our data plane, so this is going to be managed by the MachineDeployment resource, as we've seen in the slides. And as you can see here, I have a templated representation of the number of replicas, and this is going to be overwritten by the values input file.

If I go to my terminal, as mentioned, I currently have the target cluster already provisioned, and it has three master nodes and three worker nodes, so in total we have six machines. Towards the bottom half of the view, you'll be able to see the logs from the Flux source controller. Now, without any further ado, let's introduce some changes. If I go to the values/demo.yaml file, I would like to change the number of replicas from three to five. As well, with Flux it is important to change the chart version, and this is the only way for Flux to identify that there have been new changes to this particular Helm chart. So we're going to change our chart version and then commit all of these files. If I do a git diff, I'll be able to see that I've changed the chart version and the number of replicas for our workers. Let's commit these changes; I'm going to use a "demo" commit message, very meaningful, and let's do a git push.

Cool. Now we have a new commit that changes the number of replicas for our worker nodes, and soon enough, within 30 seconds, we should see that this is identified by Flux. We can see that the source controller has already seen the new commit, and more importantly, we see that two new machines are currently being provisioned for our data plane. We can further confirm that in the AWS console: if we do a refresh, soon enough, hopefully, we're going to see two new instances initializing. Here we go: we have two new instances being initialized to be added to our cluster. And if we wait for another minute or two, we'll be able to see that these machine resources reach an up and running state within the cluster. This is pretty much how we can automatically roll out changes to our infrastructure using the power of GitOps, and in this particular case, the power of Flux CD.
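For reference, the change that triggered this rollout boils down to roughly the following three edits in the chart. The value key workerReplicas and the version number are my own placeholders; the actual chart may name things differently.

```yaml
# templates/machinedeployment.yaml (excerpt) -- the replica count is templated
spec:
  replicas: {{ .Values.workerReplicas }}
---
# values/demo.yaml -- the input values file, changed from 3 to 5 workers
workerReplicas: 5
---
# Chart.yaml (excerpt) -- bump the chart version so Flux treats it as a new release
version: 0.1.1
```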
Going back to my slides. The second use case I wanted to showcase when it comes to the GitOps model is how we can propagate our applications towards the edge. In this particular case, I'm going to focus on KubeEdge and ArgoCD. I'd like to start with a short explanation of what KubeEdge represents. KubeEdge is built on top of Kubernetes and provides metadata, application and networking synchronization between the cloud and the edge. Using KubeEdge allows us to have seamless communication between the cloud and edge components. What that actually means is that when you deploy your application, you just need to interact with the cloud side; the application is going to be propagated towards the edge, and this is completely abstracted from the user. As well, with KubeEdge we have edge autonomy. That means that if the cloud side is down, or there is a misconfiguration, or the connection is lost between the cloud and the edge, the application on the edge is still going to be up and running independently. And KubeEdge has also been constructed to run on low resources, such as CPU, memory and bandwidth.

Let's look into the KubeEdge architecture briefly. Currently, there are three main areas that I'd like to identify here: the cloud, the edge and the devices. It is worth mentioning that the edge and the devices both actually represent the edge; however, the devices are the physical components that are with the end users, and I would like to differentiate that. When we look into the cloud side, we're going to have a Kubernetes cluster, but more importantly, this is going to be integrating or communicating with the CloudCore components. The CloudCore components communicate with Kubernetes, but at the same time they make sure to proxy any requests to the edge using a WebSocket. On the edge side, we're going to have the EdgeCore components, which communicate with the container engine runtime and with the MQTT client. Currently, we're able to manage our containers on the edge using a runtime such as Docker, containerd, CRI-O or Virtlet. As well, the EdgeCore components talk with the MQTT client, and this ensures that we synchronize the state of the device with the desired state that we have on the edge.

Before I move forward with the actual demo, I would like to give an overview of the current setup. In this demo, I'm going to have two nodes, one of which is going to represent the cloud and the other the edge. On the cloud side, I have a Kubernetes cluster up and running, and I'm going to have the CloudCore components already installed. As well, on this side I'm going to have ArgoCD installed, and we'll be able to interact with its UI. Looking at the edge node, we're going to have the EdgeCore components already installed, and we're going to use Docker to manage our applications. That means that any applications pushed towards the edge are going to run and be identified as Docker containers.

Before we move towards the live demo, I'd like to introduce the ArgoCD custom resource definition, which in this case is an Application. This particular resource is used to define where we store our configuration, so the source, and to define the destination, so where we'd like our components, infrastructure or application to be deployed. In this particular case, we have an Application CRD with the name nginx-edge, and it's going to be deployed within the argocd namespace. If you look into the spec section, I have trimmed the output for the destination just to shorten the output within the slide. Towards the end, you can see the source: we define that our manifests are stored within the kubeedge-argocd demo repository, underneath the manifests folder, and you can see this defined in the path variable. And towards the end, we can see that we have a sync policy which is set to automated, so any new changes that are identified are going to be applied to the cluster straight away. As well, with ArgoCD we have an option for manual synchronization, which is useful if you'd like to validate the changes before applying them.
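To make that resource concrete, here is a sketch of what such an Application could look like. The repository URL is a placeholder, and since the destination block was trimmed on the slide, I've filled it in with typical in-cluster values; treat both as assumptions rather than the exact manifest.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nginx-edge
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc    # the current (in-cluster) cluster
    namespace: default
  source:
    repoURL: https://github.com/example/kubeedge-argocd-demo   # placeholder URL
    targetRevision: HEAD
    path: manifests                           # folder holding the nginx deployment
  syncPolicy:
    automated: {}                             # apply new changes straight away
```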
And this is the time to move towards the second demo. If I go back to my terminal, as mentioned, I currently have two nodes, one of which represents the cloud and the other the edge. On the cloud side, as mentioned, we have a Kubernetes cluster up and running, which means we'll be able to use kubectl commands to interact with it. If I do a kubectl get pods, I can see that I have no pods and no applications up and running. However, because I have KubeEdge already installed, I can get the nodes, and you can see that I have two nodes present or available for this cluster. The first one is the master node, so this is the actual node which comes with the Kubernetes cluster that I have installed. The second one, however, is the node which is an agent on the edge; it has already been added to the cluster through the EdgeCore components.

Now let's deploy something with ArgoCD. Before I move forward, I would like to showcase our Application CRD, which is pretty much everything we had in the slide: we have an Application CRD with the name nginx-edge. Here we can actually see the definition of our destination, which is the current cluster, so we deploy our application to the current cluster, and you can see the source, which is where we store the representation or the state of our application. If we go into GitHub, we'll be able to see that within the manifests folder we have an nginx deployment. This is a very simple deployment that deploys one replica of nginx in the alpine version, so pretty straightforward. I can also interact with the ArgoCD UI, and currently I can see that I have no applications defined or deployed.

If we exit the file, we can apply our Application CRD using kubectl, so I'm going to run kubectl apply -f with our ArgoCD application. Going back to the UI, we can see that the application has been created and is automatically synchronized. If we click on it, we'll be able to see that we have our deployment for nginx with one pod, so we have one replica of our application. Going back to the terminal, I can do a kubectl get pods and see that I have one replica of our nginx application up and running. However, if I choose the wide output, I'll be able to see that our application has been pushed towards the edge node straight away, and you can see this happened 30 seconds ago.

We can further confirm that by looking at our edge component. As mentioned, on the edge we don't have Kubernetes installed, so a kubectl command will not be recognized; on the edge we manage our containers using Docker. If we do a docker ps, we'll be able to see a bunch of containers. We're going to have a set of them which are related to the load balancer; currently we're using Traefik to provision those. However, I would like to draw your attention towards these two containers which were created just one minute ago: the first one is our nginx application container, which is running, and the other one is the pause container. And this is pretty much how we'll be able to push our applications towards the edge by using the power of ArgoCD.
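For completeness, the manifest sitting in that folder is roughly this simple. KubeEdge demos often pin the workload to the edge node with a nodeSelector; I am assuming that here, so treat the selector and the hostname value as illustrative rather than the exact manifest from the repository.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-edge
spec:
  replicas: 1                                 # a single replica of nginx
  selector:
    matchLabels:
      app: nginx-edge
  template:
    metadata:
      labels:
        app: nginx-edge
    spec:
      nodeSelector:
        kubernetes.io/hostname: edge-node     # assumed pinning to the edge node
      containers:
        - name: nginx
          image: nginx:alpine                 # the alpine variant mentioned above
```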
Going back to the slides. This concludes the talk and the examples I wanted to give today. We started by introducing the cloud native GitOps tools such as ArgoCD and Flux, but more importantly, we dived deep into real use cases of the GitOps model. As such, we looked into how we can provision our infrastructure with Cluster API and Flux CD, and towards the end, we looked into how we can push our applications towards the edge by using tools such as KubeEdge and ArgoCD. This has been possible because GitOps, at the moment, is the moving force that redefines the deployment of cloud native components, and more importantly, it is based on declarative, automatic and reliable fundamentals. If you have more questions about today's talk, please reach out. I'm going to write a Medium article with more information on how this setup has been done, and you'll be able to follow it in more detail. As well, if you have questions around this talk, I'm going to be available on social media, such as Twitter and LinkedIn, to answer them. Enjoy the rest of the conference. Thank you.