The term GitOps is evolving, and the technologies that support GitOps processes are evolving as we speak. GitLab customers were doing GitOps well before the term became popular. In this talk, let's hear Viktor Nagy, Product Manager for GitOps at GitLab, walk us through GitLab's perspective on GitOps, how we are securing the GitOps workflow, and our vision for the GitLab Kubernetes Agent going forward. In his previous avatar, Viktor gained experience building and scaling complex infrastructures, and at GitLab he dreams of and executes GitOps all day long. Take it away, Viktor.

Hi, I am Viktor, and I am here to discuss with you how to do GitOps securely with GitLab. And you know what, the nice thing about a recorded presentation is that I am watching it live together with you. So feel free to ask questions or share your comments in the chat, and we will be able to have a live Q&A at the end of the talk.

So let's get started. Let's start by setting some common ground around GitOps. Why is GitOps such a big thing? What does it give us? Where does it fit in the world? I would argue that GitOps is just a minor step in what we could call the operations experience. But pronouncing "operations experience" is really hard, especially if you say "ops-x". So let's stick with devx or DX; devs and ops are converging anyway. So what does this evolution look like? On the left, you can see me some 20 years ago, working for a startup as a full-stack engineer. At that time, I knew a few sysadmin folks who were more specialized than me, but that was the state of the world, and my job was not just to write code and lead the team, but also to take care of deployments, SSH into the servers when necessary, and make sure the servers were up and working as expected. When there was a problem, I was on call. That was my job. Next came the DevOps movement.
But before that, let's speak first about the SRE approach, as I would argue that to truly embrace DevOps, one should at least have an understanding of what the SRE approach is about. Google arrived at SREs after they threw developers over into the ops team. This is actually where the systematic thinking about architecting infrastructure with automation in mind comes from, and it provided a huge boost towards infrastructure as code. So for this, you drop a few devs into your ops team. DevOps, on the other hand, is when you throw the ops responsibilities over to the dev team, probably together with a few ops people too. The devs will be frustrated, and they will want to automate or ignore the ops part. The primary idea is to shift responsibilities left in order to streamline your processes and increase velocity without sacrificing stability. But actually doing this right at scale requires SREs.

Following the SRE and DevOps ideas, we got to something that we today call GitOps. What does GitOps add to this evolution? All in all, I would say that GitOps is just a minor step in this process of operations experience evolution. It's primarily a really catchy marketing phrase. Let me be a little more specific here. The SRE approach is about handing business-critical processes to engineers. It's hard and costly. DevOps is a cultural change, and that's super hard. GitOps, on the other hand, is primarily technology driven. You might think that makes it easy; you would be wrong. What you really want is this whole evolution. If you work in a classical IT organization, you might fast-track to GitOps, but you will be outdated immediately, especially if you skip the cultural changes of DevOps. So it's better to focus on the next step and on this whole evolution, the whole direction we are headed in. If you have arrived at GitOps, you got there because you wanted to improve. So again, you are interested in the next step.
Knowing that the whole story is about the operations experience, we can get a better idea of where we are headed, especially if we know where we are right now. So let's look at the current status. There is today a GitOps working group at the CNCF, and there is still no definition of GitOps. Everyone agrees that it should be technology agnostic; nevertheless, given its strong Kubernetes background, that is not easy to ignore. Of course, I don't have a definition either, but as it's about the evolution of the operations experience, the major principles and directions are pretty clear. These are the ones that I could come up with. Besides the topics mentioned here, there's a huge change in technology that enables them. With a single term, I would call these changes cloud-native: they include the availability of APIs required for automation, the possibility of containers, and the appearance of container orchestrators. But these are technical details; the drivers and consequences of the trend are what's shown on the slide.

So let's see how we stand with these aspects of the operations experience. On this slide, I've compiled a rough overview of the GitOps ecosystem and market maturity. For each aspect presented before, I highlighted some tools that could be used to fulfill the requirements, and in the "Viktor index" column I present my personal evaluation of the market, based on my own user research and some analyst inputs. Let me quickly run through each row. First, single source of truth. If you have infrastructure as code in your repo, you can check this one off. Still, there's a lot to do to have a cutting-edge ops workflow. If you have to fill out a form to get a deployment rolling forward, you are in trouble. Actually, I occasionally meet companies that are in the process of creating gated processes instead of trying to set up guardrails and automation. The industry as a whole has a lot of room to improve here. Next, automated deployment promotions.
Do you have a final button to click to reach production? Often everything is automated, but we still require the blessing of someone. When our project manager asked one of my ex-colleagues whether he had manually tested the soon-to-be-released version, his answer was that he had not, but he had tested every scenario he could come up with using automated tests. Unfortunately, the manager did not understand this and delayed the deployment by a few days. Then, self-healing. Self-healing is purely a technological question. Not every workload might fit the existing solutions, and not every fitting workload uses them already. The few I've mentioned here are Kubernetes, HashiCorp's Nomad, and Lambda functions, which we often forget about. Then we have declarative ops. You need all of the above to get here. Being declarative means that you just describe the desired state of your infrastructure and the infrastructure works out how to achieve it. This is, in a sense, the central goal of ops. Finally, we have a single view for DevOps requirements. When the discussion is about GitOps, this aspect is often overlooked. But if you understand that GitOps is just a minor step in the evolution of the ops experience, then this topic emerges naturally. Currently, the best tool I'm aware of in this space is actually GitLab, and I think that we can serve your requirements in around 80% of the cases.

I've created an example project for us to see GitOps with GitLab in action, so let's go through that. Some driving principles: it should be 100% automated. At the same time, I've set up the project in a way that you can run it locally or trigger specific aspects manually as well, but by default it is driven by GitLab CI. Everything is stored in Git, except for the AWS credentials that are needed to access our AWS account when we provision infrastructure there. Those are stored in masked environment variables.
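To make that "everything in Git except the credentials" setup concrete, a pipeline along these lines could drive the Terraform runs. This is a hypothetical sketch, not the demo project's actual pipeline: the stage names, job names, and image tag are assumptions, and the AWS credentials are expected to come from masked CI/CD variables rather than from the repository.

```yaml
# Hypothetical .gitlab-ci.yml sketch: Terraform driven entirely by GitLab CI.
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY are defined as masked CI/CD
# variables (Settings > CI/CD > Variables) and injected into the job
# environment; they never appear in Git.
stages:
  - validate
  - plan
  - apply

default:
  image: hashicorp/terraform:light   # assumed image; pin an exact version in practice

validate:
  stage: validate
  script:
    - terraform init
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -out=plan.tfplan
  artifacts:
    paths:
      - plan.tfplan                  # hand the reviewed plan to the apply job

apply:
  stage: apply
  script:
    - terraform apply plan.tfplan
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH   # apply only from the default branch
```

The plan artifact plus the default-branch rule is what keeps the flow merge-request driven: the plan is produced and reviewed on the branch, and only the merged result gets applied.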
We use Terraform for infrastructure provisioning, and this will create a layered infrastructure: an EKS cluster, with the GitLab Kubernetes Agent installed into the cluster to have a permanent and secure connection between the cluster and GitLab. In this project, I demonstrate how to do pull-based deployments to the cluster, and how one can integrate environments with alerting and metrics from the cluster within GitLab. All of this is set up using a merge-request-based workflow to make sure that it scales to bigger engineering teams too. And we will do it all securely.

The title of the talk is about doing GitOps securely, so a few pieces related to that are worth noting here. The layered infrastructure I mentioned allows for fine-grained control over who can change what, and the GitLab-managed Terraform state by default requires project maintainer level to change your Terraform state. Besides Terraform, the GitLab Kubernetes Agent will be used to securely connect GitLab with the cluster, and the cluster owner defines the roles the agent runs with. Of course, there are many minor details about the project, so feel free to check out its code base to learn more. I will pause for a few seconds here so you can take a screenshot. An important bit about this project is that it was not created just for this presentation; I continuously add more best practices and recommendations to show how to do GitOps in the best way using GitLab.

So the first step is to create the infrastructure. During the interviews I ran with Terraform users, a common pattern, especially with experienced Terraform users, is that they start to tell me stories about the dead ends they went into and learned from. I've tried to set up this project to help you avoid these dead ends. For this reason, it includes three Terraform projects to provide the layered infrastructure. As a result, each project can be developed independently.
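On the Terraform side, GitLab-managed state works through Terraform's generic HTTP backend, so each of the layered projects can point at its own named state in GitLab. A minimal sketch, with placeholders in angle brackets; the `-backend-config` flags follow GitLab's documented pattern, but verify them against the docs for your GitLab version:

```hcl
# Sketch: pointing a Terraform project at GitLab-managed state.
# The backend block stays empty; address and credentials are supplied at
# init time, so nothing sensitive is committed to Git.
terraform {
  backend "http" {}
}

# Initialized roughly like this:
#   terraform init \
#     -backend-config="address=https://gitlab.com/api/v4/projects/<PROJECT_ID>/terraform/state/<STATE_NAME>" \
#     -backend-config="lock_address=https://gitlab.com/api/v4/projects/<PROJECT_ID>/terraform/state/<STATE_NAME>/lock" \
#     -backend-config="unlock_address=https://gitlab.com/api/v4/projects/<PROJECT_ID>/terraform/state/<STATE_NAME>/lock" \
#     -backend-config="username=<GITLAB_USERNAME>" \
#     -backend-config="password=<GITLAB_ACCESS_TOKEN>" \
#     -backend-config="lock_method=POST" \
#     -backend-config="unlock_method=DELETE"
```

Using a different `<STATE_NAME>` per layer is what keeps each state file small and each project independently deployable.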
The access rights around the code bases can be managed independently, and the Terraform state files remain small, so Terraform runs faster. The image here shows a merge request with infrastructure changes. A tricky bit of infrastructure as code is that everything is stateful. To help you with this, we have extended the merge request with a summary of the results of your Terraform changes. You can see that merging this change will result in 12 new resources being created within your infrastructure. If this is worrying, you can easily check out the full log with a single button click.

With Terraform, we could set up the infrastructure, and with the GitLab Kubernetes Agent installed, we even connected the cluster to GitLab. So let's speak a bit about the agent. The GitLab Kubernetes Agent is the future of GitLab's Kubernetes integrations. Given the integrations into GitLab, the agent is not limited to deployments, as is often the case with GitOps solutions; we actually provide you a deep integration with your cluster and connect the cluster to a single source of truth, your version control system. An oft-cited shortcoming of integrating a cluster with GitLab used to be that it required opening up your kube API towards GitLab, and we recommended giving cluster-admin roles to the service account that GitLab would use to connect to your cluster. That is the past. With the GitLab Kubernetes Agent, the roles of the agent are totally controlled by the cluster owner. Of course, different features have different requirements, and we will try to help you find the right balance here.

Let's see how the agent works. The GitLab Kubernetes Agent actually has two components: the Kubernetes Agent Server, abbreviated as KAS, that is installed beside your GitLab instance, and agentk, which is the cluster-side component. Once the agent is set up, it requires a token to connect to GitLab, and it grabs its configuration from GitLab.
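That configuration the agent grabs is a small YAML file committed to a GitLab project. As a rough sketch of what such an agent configuration for pull-based deployments might look like, note that the project path and glob patterns here are illustrative and the exact schema has evolved across GitLab releases:

```yaml
# Illustrative agent configuration, conventionally stored at
# .gitlab/agents/<agent-name>/config.yaml in the configuration project.
# The gitops section tells agentk which manifest projects and paths
# to watch and sync into the cluster.
gitops:
  manifest_projects:
    - id: "my-group/my-manifests"            # illustrative manifest project path
      paths:
        - glob: "/manifests/**/*.yaml"       # plain Kubernetes manifests
        - glob: "/sealed-secrets/**/*.yaml"  # encrypted secrets, safe to keep in Git
```

Because agentk pulls this configuration over its own outbound connection to KAS, the cluster's kube API never has to be exposed to GitLab.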
On gitlab.com, the Kubernetes Agent Server is managed by the GitLab SRE team, and you just have to set up agentk to try out this feature. Personally, I'm excited about the possibilities inherent in the agent and the integrated experience it will be able to give to GitLab users. As of today, besides deployments, the Kubernetes Agent provides Cilium integrations, helping both with the setup and with surfacing network security policy alerts within GitLab.

Let's see how deployments are set up. Once the cluster-side component agentk is installed, we can start deploying into that cluster. You can see an example configuration for this here. Wait, now you can see it. This says to sync the YAML files from these two directories and their subdirectories. Oh, you are correct, you should never commit secrets into Git. So you might ask, what is that second glob, the second line here? Actually, in the manifest repo we installed the sealed-secrets component. This way, you can store secrets in Git securely. You might still say that sealed secrets requires access to the kube API from a local machine to create the sealed version. This is totally correct, and we have an open issue to offer this functionality through GitLab. The idea is something like this: GitLab provides a UI where you give the details of your secret, and GitLab just seals the secret, accessing the kube API through the agent, and opens a new MR for you, without ever storing the secret in an unencrypted form.

One of the guiding principles for building the agent is to be a good cloud-native and open-source citizen. As a result, we have built the deployment module on top of the GitOps engine that's used by Argo CD too. This gives us a tried and tested code base and enables us to work together with an amazing team from Argo. I'd like to add a side note here that relates to the example project I'm presenting.
The project repo contains quite a few components to be installed with GitOps, and I plan to maintain and build this project further regularly, to include new functionality as we release it. Anyway, the important bit is that from your point of view, these just work. So, how can we get insights about these deployments? I believe that the single view provided within GitLab should offer management interfaces and ops interfaces primarily for DevOps, and to a lesser extent for core ops or SRE tasks too. By this I mean that we don't want to compete with the monitoring tools built into Google Cloud or AWS. When you need to debug, for example, some Kubernetes node errors, you should use tools dedicated to this. On the other hand, if you are working at the DevOps level and you just need access to the metrics or logs of some deployments, we think those should be at your fingertips, together with the issues and merge requests you are working on. In these screenshots, you can see the currently deployed version and its details. Depending on your setup, you can deploy a version manually and get to its metrics. In the upper screenshot, you can see our Terraform state management interface. Following the idea that DevOps should be supported within the UI, you can manage Terraform state files there too. A common situation is a stale state lock, for example; you can lock and unlock from within the UI. On the other hand, we do not support debugging your state files within GitLab. For that, you can download the state JSON from the UI and debug it locally.

As discussed at the beginning, we know that this whole story with GitOps is actually about improving the operations experience. Depending on the area you want to improve, you can draw inspiration and ideas from the sample project just presented. These are the features included in this demo project. I've tried to provide an extensive readme together with the project, so you can fork it and test it on your own easily.
We use GitLab CI for everything that's possible. We use the GitLab-managed Terraform state, so you don't have to set up a Terraform state backend before you start provisioning your infrastructure. We install the GitLab Kubernetes Agent into the cluster using Terraform to connect your cluster with GitLab; this enables pull-based deployments. You can set up the Prometheus integration together with Alertmanager. And if you want, you can even set up Cilium's network security policy integration. All of this is driven through a rigorous merge-request-based process.

These were the features that we support today. Actually, GitLab's Terraform integration was released less than a year ago, while the GitLab Kubernetes Agent was first released last September, so we have a huge roadmap of exciting features ahead of us. In terms of the Terraform integration, we plan to release a GitLab-integrated Terraform registry in a few weeks, and I have a separate slide to share some ideas we have around improving the GitLab Kubernetes Agent. These are the features and ideas that we're either working on or evaluating with respect to the GitLab Kubernetes Agent. We want to enable secure push-based deployments with a CI tunnel and release it in a few weeks for you. This will allow a very smooth migration path from the existing GitLab CI you use today to using the agent with a secure connection to your cluster. We want to provide automatic labeling and full lifecycle resource management for the resources that are deployed with the agent. This also means that we will prune resources once you remove them from your manifest repository. Today, we are working together with Fairwinds to integrate Polaris with the agent, and we are already planning our user interface to provide an amazing experience around the deployments that run through the agent. And as you can see, there are a bunch of other ideas we have, and I would love to hear your feedback on these.
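To give a flavor of what the Cilium network security policy integration manages, a minimal policy manifest could look like the following. The resource kind and field names are the standard Cilium CRD shape; the namespace, policy name, and labels are made up for illustration:

```yaml
# Illustrative CiliumNetworkPolicy: only pods labeled app=frontend may
# reach pods labeled app=backend; all other ingress to the backend is denied.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-frontend   # illustrative name
  namespace: demo                # illustrative namespace
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
```

Kept in the manifest repository and synced by the agent, a policy like this goes through the same merge-request review as every other change, and policy violation alerts can then be surfaced back in GitLab.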
Please contribute in the linked issues. This was a quick demo of how to do GitOps securely with GitLab, and if you have not posted your questions yet, please do so now. And let's continue the discussion. Thank you very much.