Hi, welcome to my talk and presentation on GitOps, Kubernetes and Secret Management. I'm Karzena Malk. I work at CloudBees and I'm part of the Jenkins X open source community. Jenkins X is a CI/CD platform on Kubernetes with GitOps built in, and it is committed to facilitating GitOps best practices. In today's presentation on GitOps and secret management, we'll see the GitOps-based workflow the Jenkins X project creates for applications, and see how Jenkins X manages Kubernetes secrets securely using Kubernetes External Secrets. This talk follows on from a talk I did previously on GitOps, Kubernetes and Secret Management, which provided more context and discussion on the issues and tradeoffs of different approaches to secret management with Kubernetes and GitOps. The previous talk, linked in the slide, used a whole baking metaphor to introduce some of the concepts, and it went into detail on the tradeoffs between the different secret management techniques when doing GitOps and Kubernetes. For this talk, I've kept the cake photos because they're pretty, but also because this is a great week to celebrate. So enjoy a cake. In this talk, the focus is more on Jenkins X3 Alpha. We're going to discuss and see how Jenkins X3 guides you to set up your repositories to create a GitOps-based workflow, and how Jenkins X handles secrets using Kubernetes External Secrets. So first we'll briefly go over some of our terms and set a little bit of context in which we're considering this problem of secret management. Secret management is an issue across all of software development, but in this talk we're looking at it in the context of GitOps and Kubernetes. First, why Git? It doesn't have to be Git, but Git is the most widely used version control system in the software industry today. Awesome side note: Git recently celebrated 15 years since its first release, hence the cake celebrating 15 years of Git. Happy birthday, Git.
GitOps uses Git as the single source of truth for declarative infrastructure, and it enables developers to manage infrastructure with the same Git-based workflows they use to manage a code base. This means all infrastructure configuration and application code are stored in Git repositories. Infrastructure as code isn't unique to GitOps, but it is foundational to GitOps. With GitOps, the entire state of your system is described using a declarative specification for each environment, and these are stored in Git. A key part of GitOps is this idea of environments as code: describing your deployments declaratively using files such as Kubernetes manifests that are then stored in a Git repository. Having infrastructure config and deployments declaratively described in files in Git raises issues of secret management, and how Jenkins X approaches secret management with GitOps and Kubernetes is the subject of this talk. So at this high level, where we're at right now is: the desired state of your system is declared in Git, and this is the single source of truth for your system when doing GitOps. The other part of GitOps that's important to keep in mind is that you have to be able to observe the actual state of your system and reconcile it with the desired state of your system as described in Git. Kubernetes helps a lot with this: it handles active reconciliation within your cluster. Kubernetes was open sourced by Google and is now the most popular container orchestration system. You can do GitOps without Kubernetes; however, they are a really natural fit together, with the deployment of declarative Kubernetes manifest files being controlled by Git operations. So Kubernetes deployments have the following properties, which are really helpful. Automation: Kubernetes automates the process of applying changes correctly and in a timely manner. Convergence: Kubernetes will keep trying to update until the update is successful. Idempotence:
multiple applications have the same outcome. That is to say, the same actions can be applied repeatedly and will result in one single desired outcome; they are not cumulative, they just converge to one state. So they can be applied multiple times without changing the result beyond that initial desired outcome. When doing GitOps with Kubernetes, it is helpful to have a GitOps Kubernetes operator which automatically ensures that the state of your cluster matches the config in Git. The operator will poll the Git repo for changes, detect divergence between the desired and the actual state, and trigger deployments in Kubernetes to reconcile them, making sure your new container images and config changes are propagated to the cluster. So when following the GitOps model, the desired configuration of your system is stored in Git, and the GitOps Kubernetes operator (a software process, in our case literally a Kubernetes operator) is responsible for converging the current state of the system to the desired state of your system. Jenkins X, of course, has a GitOps Kubernetes operator, and the combination of the GitOps methodology with Kubernetes' declarative configuration and active reconciliation model provides a number of operational benefits, all of which aim to produce a more predictable and reliable system. Having all your configuration files version controlled in Git has many advantages, but securely managing secrets in such a system has been difficult. First, what are secrets? A secret is anything you want to tightly control access to. Kubernetes provides a mechanism allowing users to store bits of sensitive information in a protected resource object called a Secret. So Kubernetes provides a built-in object for managing secrets, called Secret. Common examples of data you would want to store in a secret include username and password credentials, SSH keys, API keys, and TLS certificates. So here we have an example of a simple Kubernetes secret.
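As a hedged sketch of what such a secret looks like (the name and values here are hypothetical, not taken from the talk's slide):

```yaml
# A minimal Kubernetes Secret manifest (hypothetical values)
apiVersion: v1
kind: Secret
metadata:
  name: my-app-credentials
type: Opaque              # optional; Opaque is the default for arbitrary key/value data
data:
  username: YWRtaW4=      # base64("admin") -- encoded, not encrypted
  password: MTIzNA==      # base64("1234")
```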
It's a very simple data structure composed of three pieces of information: the name of your secret, the type of the secret, which is optional, and a map of field names to sensitive data encoded in base64. So when you first see a Kubernetes secret, you may be tempted to think that the values are protected by encryption. They are not. Base64 encoding allows binary data to be represented in a string format; it does not provide encryption. Base64-encoded data is effectively plain text, and here we can see those same values easily decoded, revealing the username and password. You do not want your secrets laid out in Git like this. While GitOps practitioners may be very happy to store configuration files in Git, they are unwilling to store their sensitive data in Git due to security concerns, and there are additional reasons why storing secrets in Git is a security concern. Git was designed as a collaborative tool, making it easy for many people to view and review each other's code. Git is designed to enable access to other people's code, and for these reasons it is very dangerous to use Git to hold secrets. We have no granular file-level access controls in Git: Git does not provide read protection of a sub-path or a single file in a Git repository. In other words, it is not possible to restrict access to some files in a Git repo and not others. And when dealing with secrets, you ideally should be granting access on a need-to-know basis. For example, if you have a temporary worker, you would want to give that new user the least amount of access to sensitive data possible. Unfortunately, Git does not provide any way to do this; it is all or nothing when giving access to your repository. Then there are distributed Git repos: with GitOps, team members locally clone the Git repository onto their laptops and workstations, which would mean the proliferation of those secrets across many machines and systems.
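Those distributed clones matter because, as we just saw, base64 offers no real protection. A quick illustrative sketch in the shell (the credential values here are hypothetical):

```shell
# Base64 is an encoding, not encryption: anyone who can read the
# manifest can recover the plain text with a single command.
printf 'admin' | base64         # encodes to: YWRtaW4=
echo 'YWRtaW4=' | base64 -d     # decodes to: admin
```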
All of this opens up the attack surface. So our one rule for GitOps, Kubernetes and secret management is: don't store Kubernetes secrets in Git, and especially don't store raw base64-encoded Kubernetes secrets in Git. But we're trying to do GitOps, and we want to have the entire state of our system in Git. Luckily, there are a number of potential solutions, but unfortunately they all come with trade-offs. In this talk, I'll be focusing on the solution that Jenkins X has chosen to manage secrets with GitOps and Kubernetes; if you want more of an overview of other potential solutions, you can watch my previous talk. The Kubernetes Secret object is convenient to use: it provides a declarative API that makes it easy for application pods to access secret data without any special code. So we do want to use Kubernetes secrets, but how do we do it securely? The approach that Jenkins X has chosen is to use an external secret management system. The specific secret management systems may differ, but the general principles are the same. In this strategy, rather than using only native Kubernetes features to store and load secrets securely into the container, the secret values are retrieved dynamically at runtime, at the point of use. And this is what Kubernetes External Secrets helps you with, so you don't have to write that code yourself; teams were writing it themselves, and it was taking up a lot of time to do well, because it's important to do it correctly. Kubernetes External Secrets is a very nice open source solution that works extremely well, and that is why Jenkins X is focusing on it. So luckily there's Kubernetes External Secrets, which has been open sourced by GoDaddy. Kubernetes External Secrets enables the secure retrieval of secrets stored in external secret management systems and the ability to securely add secrets to your cluster. Let's look at how that's done.
So the project extends the Kubernetes API by adding an ExternalSecret object, using a custom resource definition and a controller to implement the behavior of the object itself. An ExternalSecret declares how to fetch the secret data, while the controller is responsible for fetching that secret data from external providers and converting ExternalSecrets into Kubernetes Secrets. The conversion is completely transparent to pods, which access the secrets as normal Kubernetes secrets. Kubernetes External Secrets supports the external secret management systems of all the major cloud providers, as well as HashiCorp Vault. The controller watches for ExternalSecret resources using the Kubernetes API, uses them to fetch the secret data from the external secret management systems, and upserts the resulting Kubernetes Secrets, so pods can then access the secrets normally. For Jenkins X, our number one concern is that we do not want to commit raw Kubernetes secrets to Git, and we highly recommend using Kubernetes External Secrets, which means you can check the ExternalSecret resources into Git; we will see an example of that. These contain a reference to the actual secret values that your secrets contain, rather than the values themselves. Jenkins X supports the major external secret management providers, which then hold the single source of truth for those secrets. The Jenkins X team are big fans of Kubernetes External Secrets, and we developed jx-secret, a small client tool for working with Kubernetes External Secrets that integrates really nicely with the Jenkins X workflow for continuous delivery on Kubernetes. So Jenkins X has chosen a secret management approach that adds an abstraction layer above secret management solutions, all those external secret management systems.
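To make that concrete, here's a hedged sketch of what an ExternalSecret resource from the GoDaddy kubernetes-external-secrets project looks like; the names and the AWS Secrets Manager backend are illustrative assumptions, not taken from the talk:

```yaml
# An ExternalSecret holds only a *reference* to the secret data,
# so it is safe to check into Git.
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: my-app-credentials
spec:
  backendType: secretsManager       # e.g. AWS Secrets Manager
  data:
    - key: prod/my-app/credentials  # name of the secret in the external store
      name: password                # key in the resulting Kubernetes Secret
      property: password            # field within the stored secret value
```

The controller sees this resource, fetches the referenced value, and upserts a regular Kubernetes Secret of the same name containing it.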
And this enables users to choose where the source of their secrets is stored, preferably outside the Kubernetes cluster. Storing your secrets outside the Kubernetes cluster is very good practice for disaster recovery scenarios. We're now going to do a quick walkthrough of setting up Jenkins X3 Alpha, the GitOps approach in Jenkins X3, and secret management using Kubernetes External Secrets. This is the docs landing page for Jenkins X3 Alpha. As you can see, we have some easy-to-find top-level guides and developer guides. In the administration section, we have links to a number of quickstart Git repositories. These are GitHub repository templates that make it easy to get started when installing Jenkins X3. You choose one depending on the cluster you want to create, whether it's Google, Amazon, Azure, Minikube or OpenShift. For this demo, we'll use Google Cloud, with GKE and Terraform. So we click on 'create Git repository', and this brings us to a template to create our infrastructure repo. We'll name it jx3-terraform and make it private. This is just in case we accidentally check in any secrets. That should not happen, and secret management with Jenkins X helps it not happen, but just in case, we'll make it private. The owner should be the GitHub org that you created for Jenkins X. And now you can see that repo being created within the GitHub org. Next we're going to clone that infra repo and cd into the clone. Now we'll create our cluster repository. Let's read the docs on how to create the cluster repository, linked to from this button here. We create a cluster Git repository, choosing the desired secret store: either Google Secret Manager or Vault. I'll use Google Secret Manager for this demo, and we'll use this link here. This is our cluster repo for the Jenkins X3 GKE cluster. Again, set the owner to my GitHub org, make it private and create the repository.
Our last little bit of setup is to configure the Git URL of your cluster Git repository, which contains the helmfile.yaml (you can see it here), into our infrastructure Git repository, which contains our main.tf file. So we need to get this URL and put it inside our Git clone of the infrastructure Git repository, which already has the files main.tf and values.auto.tfvars inside. You link to the cluster repository by committing the required Terraform values, and I am going to use the values grabbed from the instructions we were just looking at. I populated them with the Git URL (that's why we needed to grab it) and with the name of my GCP project. I've set, as you can see here, Google Secret Manager to the value true, because I'm using Google Secret Manager, and we're putting these values into our values.auto.tfvars file. The values in that file now contain the line which was already present, plus the added information. Now we commit those values and push them. Once those values are added to our infrastructure repo, we can run terraform init, which downloads the various Terraform modules and initializes and verifies the Terraform configuration. We're being asked for the bot token that's used to interact with the Jenkins X cluster Git repository. This is the token for the bot you created that's a member of your GitHub org. So I'm going to enter that value here, but I won't show it, because it's a secret token. Great. Now we have a chance to approve our plan, and we'll say yes. We recommend using Terraform to manage the infrastructure needed to run Jenkins X. There are a number of cloud resources which may need to be created, such as IAM bindings used to manage permissions for applications using our cloud resources; we have our Kubernetes cluster, and also storage buckets for long-term storage of logs. You get a cluster name generated for you; mine will be 'flying goose', which works for me.
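As a rough sketch, the committed values might look something like this; the variable names and values here are illustrative assumptions and may differ from the actual Terraform template:

```hcl
# values.auto.tfvars (illustrative)
jx_git_url  = "https://github.com/my-jx3-org/jx3-gke-cluster.git"  # cluster repo URL
gcp_project = "my-gcp-project"                                     # GCP project name
gsm         = true   # use Google Secret Manager as the secret store
```

After committing and pushing values like these, terraform init and the plan approval proceed as described above.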
Okay, so now that that's done, let's connect to our cluster, and then we can tail the Jenkins X installation logs; for that we have a command, which you can see here. You can see custom resources being created, so this is quite interesting. You can see these secrets being created: there are a number of secrets that are needed for Jenkins X. In fact, some are generated and some are provided at installation by the user, like our bot token, and these need to be managed; there are a number of solutions that can help. Jenkins X prefers to use managed cloud services where possible. Google Secret Manager is a good example, where secrets are stored out of the cluster and synchronized into the cluster using external secrets, which we will see in a minute. Where managed cloud services are not available or desired, Jenkins X can also use Vault. Great, we're done. Now let's look at our namespaces. Awesome, so here you can see a number of namespaces that we have. Let's look at the jx namespace. Now that we've moved into the jx namespace, we can create a project. We have quickstart projects to help you try things out very quickly. The one I'll demo will be Node. So we're going to give it a name, awesome-node, and initialize it with our automated commit message. You can see Jenkins X has created a clone of that Node quickstart repo, and then it's created a pull request on our cluster repo. Excellent, that's now finished. And as you can see, we have an awesome-node repository that has been created for us in our GitHub org. We'll have a look in a minute at the pull request that's happened on the cluster repository, the commit, and the creation of that additional repository in your GitHub org. And while activity is happening in your cluster, you can watch the pipeline activity or pipeline log. First I want to show the UI that Jenkins X now has, with Octant.
So a common question we've heard in the community over the years is: when will there be an open source UI for Jenkins X? And there is now. We're using Octant, by way of the octant-jx plugin. Right now we can see the Octant front end, where it's showing us what's happening in our cluster. As you can see, I set up Jenkins X and the repositories last night, so 10 hours ago. We can take a look at the pods we have running in our cluster; you can see all our running pods, created 10 hours ago. You can look at our Jenkins X environments, and as per usual for Jenkins X we have three: development, production and staging. You can also see some of the pipelines, including the latest pipeline setting up awesome-node. Here we can see some of the repositories that are in our GitHub org: we have our GKE cluster repo and also the awesome-node repo that we have just added. And remember, our GKE cluster repo contains the configuration for your cluster. As you can see, this repo has the helmfile, which defines all the charts that we're deploying, and here you can see Kubernetes External Secrets. In addition, you can see here the PR from last night; we'll take a look at it. As you can see, it's promoting our awesome-node repository to our staging environment. This PR was opened by our bot and it was merged automatically by the bot. If we go into our GitHub org and take a look at awesome-node, this is the repo that we added. If we go back into Octant, look at our pipelines, and look at this one: this is the pipeline for the awesome-node quickstart. The repo for that quickstart was cloned, the image was built, and it's gone through successfully all the way to staging. With Jenkins X3 we use a single Git repository for all the namespaces in the cluster. Within Octant we can view our secrets, but remember, Octant is giving us a view into our cluster.
And within our cluster the secrets are Kubernetes secrets, which means that they are only base64 encoded, which means I cannot show you a secret here. But I'll talk through some of the Kubernetes External Secrets flow and how that works with Jenkins X, so that the source of truth of the secret is stored outside the cluster. Let's look at Nexus. You can see here that there is a password, and within our cluster that is a Kubernetes secret; here is some metadata on it, without the actual secret value. The secret value is in the YAML, so I can't show you. But this gives a bit of the flow of how external secrets helps to populate the Kubernetes secret. We'll take a quick look now at our cluster repo, and if we look in our helmfile we can find a reference to Nexus. That's where we say where the Nexus chart is, and it's in the jenkins-x charts repo for Nexus. This is an off-the-shelf Helm chart that we are installing, and we can take a peek at it here. As you can see, this Helm chart has a Kubernetes secret in it: kind: Secret. You can see the templating for the admin password and how that gets piped into base64 encoding, which is how that secret will end up in your Kubernetes cluster. So when we install Nexus, the boot pipeline runs, detects the Kubernetes secret, and turns it into an ExternalSecret, which we can see in the cluster Git repository. And there is no actual secret data that has been checked into Git. So you can use Kubernetes secrets and do GitOps with Kubernetes securely, but you do not check your actual secrets into Git; you check in the ExternalSecret. When the boot pipeline applies all of that YAML, the ExternalSecret resources are installed, and our external secrets controller, which has a watch on our Kubernetes API server, sees that there's an ExternalSecret, and it's like, hey, I've got a new ExternalSecret that says to create a new secret for Nexus.
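As a hedged sketch (the secret key and project id here are illustrative, not the actual generated resource), the ExternalSecret checked into the cluster repo might look like:

```yaml
# Generated ExternalSecret: only the *location* of the secret is in Git.
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: nexus
  namespace: jx
spec:
  backendType: gcpSecretsManager
  projectId: my-gcp-project      # illustrative GCP project id
  data:
    - key: nexus-password        # secret name in Google Secret Manager
      name: password             # key in the generated Kubernetes Secret
```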
So it will get the details, the data, the password, the keys, from this location. Then the controller goes and gets that secret data and creates a real Nexus Kubernetes secret in the jx namespace. And there's a little diagram of that happening. As we saw, the Nexus Helm chart has a secret in the chart. When we add it to Jenkins X, the boot pipeline generates an ExternalSecret, which gets applied to our cluster. The external secrets controller, which is running in the cluster, will see that and read the location to go and find the real secret. Wherever it's stored, it will get the data from that secret, bring it back in, and create a Kubernetes secret inside the cluster with the real values, as you can see here. But the Kubernetes secret data is never checked into Git. Okay, so here are some links to discover more about Jenkins X3 Alpha, how it differs from Jenkins X2, and some architectural changes that were made and why. Also, links for finding out more about how to become involved with the Jenkins X community: join us for our bi-weekly office hours, join us on Slack. Okay, quick wrap up. GitOps enables developers to treat the configuration of infrastructure and the deployment of code in a similar manner to how they manage their software development process, using a familiar tool: Git. Protecting secrets is not a small challenge in software development. Database passwords, API keys and TLS certificates all need to be carefully protected and regularly changed. Fortunately, cloud providers are designing systems and services to make this process easier, and Kubernetes provides a Secret object that can be used to protect any sensitive information needed within the Kubernetes cluster. However, getting secrets securely from back-end secret management systems into your cluster to be stored as Kubernetes secrets has presented some security challenges, and teams have had to create their own solutions to fill this gap.
Kubernetes External Secrets does a great job of solving this securely, and that is why Jenkins X has chosen it as its solution for secret management with GitOps and Kubernetes. So, above all, our one absolute rule for GitOps, Kubernetes and secret management is: do not store raw Kubernetes secrets in Git. Thank you for listening and watching this presentation.