Hey, good morning, everybody. Thank you so much for being here. This session is called GitOps to Automate the Setup, Management, and Extension of a Kubernetes Cluster. I'm going to talk about how the session is structured and how we might need to move around a little bit in the room. But first, my name is Kim Schlesinger. I am a developer advocate at DigitalOcean, focused on our Kubernetes education. Prior to DigitalOcean, I was a site reliability engineer at a company called Fairwinds. And I'm actually a career changer: before all of that, I was a primary school teacher, an instructional coach, and a curriculum designer. So I like bringing tech and education together. What we're going to be doing today is work through a repo I've created for you. So let's take a look at this. There are five chapters in the repo, and the goal of this workshop is for you to have an aha moment, either with infrastructure as code or with GitOps. If you look through the chapters, you see the first chapter is setting up a cluster using Terraform. The next thing we'll do is install and set up Flux CD for continuous delivery. After that, we'll use a project called Sealed Secrets to encrypt Kubernetes secrets so that you can store your secrets in a public Git repo. Then we'll use a project called Crossplane to turn the cluster we've spun up into a universal control plane, so that we can create other cloud resources from within that cluster. And finally, we'll tear down our cluster, so that if you're using a cloud provider like DigitalOcean, you don't get charged for any usage you don't want to pay for. I know that workshops are really tricky, and my goal is for you to have an experience, but the internet can be really unreliable at this venue. So here's how we're going to do this.
If you are someone who is interested in watching me do a demo of all of this, and you maybe want to follow along or just observe, then in a few moments I'm going to invite you to move toward the front of the room, so we might need to shift. If you're sitting in the audience thinking, I already know a little of this — maybe I'm already familiar with Terraform, but I'm really interested in digging into Flux or Crossplane — you're absolutely welcome to skip ahead to those chapters. They have very good instructions in them, and I think we'll see how this works. If you're someone who's skipping ahead and isn't going to follow along with me, I'm going to have you move to the back of the room. I have two colleagues here, Adam Wolfe Gordon and Wayne Warren. They're both engineers at DigitalOcean, and they'll be available for you to ask for help if you're working ahead and get stuck; they'll be there to help you troubleshoot. So yeah: if you want to move lockstep with me, you'll move to the front. If you want to move at your own pace, you'll go to the back. And if you want to watch just a little bit of this workshop and then step out, that's totally fine. The goal of this workshop is for you to get what you need out of it, so if you need to leave, that is no problem. If you are following along and want to spin up a cluster using DigitalOcean, we've got a promo code for you. It's in the section of the README that says promo code, and you'll get $100 US worth of credit. The way you apply the credit is that you create a DigitalOcean account, go to the Billing page, and then enter the code there. It's "Kim at kubecon.eu100." There are a few caveats, though. If you are already a DigitalOcean customer and you've used a promo code in the past, you can't enter one yourself, so we'll have to do that for you manually.
We won't be able to do that until after the workshop, so you have a couple of options if you want to use that credit. The first is that you can Slack me on the CNCF Slack and send me your name and your email address, and I'll apply the credit after the workshop — this is what I look like on the CNCF Slack. The other option is to create a new DigitalOcean account with an email address you haven't used before, and then you'll be able to apply the credit that way. There are also prerequisites in terms of binaries you want to download. For every single chapter, you want a DigitalOcean account; doctl, which is the DigitalOcean command-line tool that lets you communicate with our API; Terraform; Helm; and kubectl. I bet a lot of you have some of those installed already. For chapter two, you'll need a GitHub account and the Flux CLI installed. For chapter three, you'll need a tool called kubeseal installed. And for chapter four, there's nothing required that isn't already in the list above. Then finally: how to ask questions, and troubleshooting tips. If you're working through the workshop and get stuck, since there are so many of us, there's a process I'd like you to go through. First, if something fails, reread the instructions you just tried and run the command again. If that doesn't work, Google the error. If that doesn't work, talk to somebody next to you, or find Adam or Wayne and ask them for help. And if you are attending virtually, ask a question in our Slack channel — you can also do that if you're in the room. That's the 2-kubecon customize-and-extend-K8s channel. Uh-oh, and it looks like there's some trouble with the video hanging for the virtual folks. All right, the last thing before we break and reorganize is that I've got Annie here, who's our moderator. I'm going to demo each chapter, and after each chapter, that'll be the opportunity to ask questions.
The way it'll work is Annie will take a few questions from the virtual platform and ask those on behalf of those attending virtually. Then if you have questions in the room, you'll queue up at the microphone in the middle and we'll go through there. So the next thing we're going to do: I'm going to put a 10-minute timer on. Your job is to make sure you have a DigitalOcean account and that you're all set up, and then rearrange as needed. Moving at your own pace is toward the back of the room; going step-by-step with me is at the front of the room. When the timer goes off, I'll get started with the demos. So ready, set, go. And if you want to get settled in, go right ahead. Thank you. That's fair. That's not what I wanted — just going to see what people are doing. Can you put — OK, you already did, or somebody did. They're voting it up, so there are at least three people. Oh, OK, great. Yeah, here it is again: when I entered the promo code, validation says OK. So they're saying it should be that one. OK, thank you. Can you hear me? Hello? OK. Hey, everyone. It seems like there's an issue with our promo code. We're going to look into that. If it doesn't get applied, send me a Slack message, and I will fix that on your DigitalOcean account. I will come see you in just a second. We've got three minutes, so you're just getting set up. If you haven't already, you can fork and clone that repo, and then we'll get started on chapter one, which is creating a cluster with Terraform. Perfect. All right, I've already got some Slack messages asking me to apply the code after the workshop — thank you, I will definitely do that. All right, I think we're ready to get started. The first thing we're going to do is try to set up a cluster using Terraform.
I'm lucky as the presenter that I probably have better internet access than you do, so if your internet's timing out or we're having issues with Wi-Fi, no problem — we'll watch the demo. Let's head to chapter one in the repo. What we're going to do is use Terraform to create a DigitalOcean managed Kubernetes cluster. The reason this is part of a GitOps workshop is that in GitOps, you have to use infrastructure-as-code tools, because you want to be able to define your infrastructure in files that you can commit and track with source control like Git. Terraform is an infrastructure-as-code tool. It uses declarative configuration files that help you automate provisioning infrastructure resources like VMs, managed databases, firewalls, or Kubernetes services. What we're going to do in this chapter is use Terraform to create a managed Kubernetes cluster, and it's the kind of cluster you see a lot in tutorials. This is a diagram from the Kubernetes documentation: on the left, a control plane, and on the right, three worker nodes. That's what we're going to be provisioning in DigitalOcean with this Terraform. If you're following along, or if you're watching this after the presentation, there are four tools you need to complete this chapter: a DigitalOcean account; doctl, which is the DigitalOcean command-line tool; Terraform, the command-line tool; and kubectl, so that you can interact with your Kubernetes cluster. All right, step one is to get access to the files in this repository, so you're going to want to fork and clone this repo, and then change into it. On the command line, you see the name of the repo, which is kubecon-2022-doks-workshop, and you've got two sets of instructions there.
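The fork-and-clone step she describes can be sketched like this; the repository name and URL are reconstructed from the talk, so substitute your own fork's URL:

```shell
# Fork the workshop repo on GitHub first, then clone your fork locally.
# NOTE: the repo name below is an assumption based on the talk;
# replace <your-github-username> with your GitHub username.
git clone https://github.com/<your-github-username>/kubecon-2022-doks-workshop.git

# Change into the repo so the later chapters' relative paths work.
cd kubecon-2022-doks-workshop
```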
The second thing we're going to do is configure doctl so that it can communicate with the DigitalOcean API, and we're going to do that with a token. I'm going to tell you the steps, and then I'll show you. We're going to create a token in our DigitalOcean account, store that token as an environment variable, use doctl to authorize our account, and then make sure doctl is actually communicating with our DigitalOcean account. So step one, we're going to create that API token. This is the DigitalOcean control panel, and I'm on the tab that says API. I'm going to click Generate New Token. You give your token a name, so I'll say KubeCon Workshop, and then you can select when you want your token to expire — it goes all the way from 30 days to no expiration date. You're going to want read and write permissions for this token. Then I'm going to generate that token and copy it to my clipboard. I won't be able to see it again, so I'm going to be careful with it. The next step is to store the value of that token in an environment variable called DO_TOKEN. So I'm saying export DO_TOKEN, and then setting it to the value of that token. All right, I think that looks good. I know that different operating systems have different ways of handling environment variables, so you may just want to keep that token on your clipboard and paste it in manually throughout the tutorial. The next step is to use that token to grant account access to doctl. I'm going to run the command doctl auth init, and doctl gives me a message that says, hey, I need your account token in order to do this. It's on my clipboard, I pasted it, and I'm getting a message saying it was able to validate that token. Then I'm just going to double-check and make sure that worked. Let me check the command.
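The token-and-authorize sequence above can be sketched as follows; the token value is a placeholder, and the doctl call is guarded so the snippet is harmless if doctl isn't installed:

```shell
# Store the API token in an environment variable (paste your own
# value; this one is a placeholder).
export DO_TOKEN="dop_v1_your_token_here"

# Authorize doctl with the token, if doctl is installed.
# doctl accepts the token via the global -t / --access-token flag,
# or prompts for it when you run `doctl auth init` with no flag.
if command -v doctl >/dev/null 2>&1; then
  doctl auth init -t "$DO_TOKEN"
fi
```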
I'm going to say doctl account get. All right, and we see the information about my account: my email address, my droplet limit (that's our VM product), whether my email has been verified, and things like that. So I feel good that I'm able to connect to my DigitalOcean account from the command line. All right, and now the fun part. In step four, we're going to look at a Terraform file. I'm going to move through the file really quickly, and then we're going to run terraform apply. That'll take a few minutes, so we'll go back to the file at that point and dig into it then. In the repo that we've forked and cloned, I'm going to open it with my text editor and go to the terraform directory. I have some additional stuff in there because I've already run a few commands, but doks.tf is the file we want. We see some things here: we're using the DigitalOcean Terraform provider, and you see we're going to be using the do_token in Terraform, so we have some lines that point to that. Then we get to the good stuff. Here we're going to create a DigitalOcean Kubernetes cluster. We're going to call it the kubecon cluster, and we get to set some arguments. I've already got my name set: kubecon-cluster. This next one is important: you want to select the region where your worker nodes are going to spin up their droplets — which DigitalOcean data center are you going to be using? If you want to see a list of those data centers from the command line, you run doctl compute region list, and you see the list of all of our data centers. For this exercise, you want to pick a data center that is located geographically close to you, and one that says it's available, so that you can create resources in it. I'm from Denver, Colorado, in the United States. I usually use the San Francisco data center, SFO3, because it's closest to me, and it's available.
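The two verification commands from this step, as she runs them (both require an authorized doctl session, so they're shown here as a sketch rather than something runnable offline):

```shell
# Confirm doctl can reach your account: prints email, droplet limit,
# email-verified status, and so on.
doctl account get

# List all DigitalOcean data-center regions; pick one close to you
# whose "Available" column reads true.
doctl compute region list
```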
But now that we're in Valencia, Spain, I did some Google Maps sleuthing, and it seems like Amsterdam, London, and Frankfurt are all roughly the same distance from Valencia. So you can pick Amsterdam, London, or Frankfurt; just make sure the available column is listed as true. Amsterdam 2, I'm not going to use that, but it looks like Amsterdam 3 is available. Right now I have the London data center specified, and I'm going to change that to Amsterdam 3. Again, pick an available data center that's located close to where you are. The next thing we're going to do is specify which Kubernetes version available through DigitalOcean Kubernetes we're going to use. Again, a doctl command will give you a list of those versions on the command line: doctl kubernetes options versions. There are two Kubernetes versions available for you. The earlier one is 1.21.11, and you need the slug that has the -do.1 at the end. The more recent version, which we're going to use, is 1.22.8, and I just want to make sure I've got the right slug. Looks like version 1.22.8 is set. HA is for our high-availability control plane; we've got two control plane options right now, and this one will spin up a little bit faster. Then this node_pool block is where you specify information about the virtual machines you're using for your worker nodes. Nothing you need to change for this workshop, but you select the size of the nodes — I've picked a basic AMD droplet with two vCPUs and four gigs of RAM — and then you specify whether or not you want those nodes to autoscale, the minimum number of nodes that need to be running, and the maximum number of nodes. So you can put constraints on the size of your cluster. All right. I said we weren't going to go through that too in depth, but I wanted to go through the arguments.
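A minimal doks.tf matching the arguments she walks through might look like the following. The version slug, node size slug, and autoscaling bounds are assumptions reconstructed from the talk — check `doctl kubernetes options versions` and `doctl compute size list` for current values before using them. (Writing the file via a heredoc keeps this a runnable shell sketch.)

```shell
# Write a minimal doks.tf matching the arguments described above.
# Slugs and node counts below are assumptions; verify them with doctl.
cat > doks.tf <<'EOF'
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

variable "do_token" {}

provider "digitalocean" {
  token = var.do_token
}

resource "digitalocean_kubernetes_cluster" "kubecon_cluster" {
  name    = "kubecon-cluster"
  region  = "ams3"          # Amsterdam 3, per the demo
  version = "1.22.8-do.1"   # assumed slug; check doctl for the current one
  ha      = false           # non-HA control plane spins up faster

  node_pool {
    name       = "worker-pool"
    size       = "s-2vcpu-4gb-amd" # basic AMD droplet, 2 vCPU / 4 GB
    auto_scale = true
    min_nodes  = 3
    max_nodes  = 5
  }
}
EOF
```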
So we've got this Terraform file set up, and now it's time to run some Terraform commands. First, you want to change into your terraform directory and run terraform init. This is like running npm init or any other initialization command: we're just making sure the software is ready to go and that any dependencies we need are downloaded. I ran this command earlier so you wouldn't have to wait for it; it went pretty fast. My favorite command in Terraform is the next one, terraform plan. What this does is Terraform starts communicating with the cloud provider API and says, hey, this is what I am planning on creating in your cloud provider account — is this actually what you want me to do? So let's take a look at that. I've got terraform plan, and then I'm passing in a variable argument: I'm saying that what I named a lowercase do_token in the Terraform file should actually grab the value from my DO_TOKEN environment variable. Here is the plan command. Nothing has happened; Terraform is just saying, hey, is this what you want? And that makes me feel safe as someone who's creating infrastructure. It's saying, with these green plus signs, hey, I'm going to create a DigitalOcean Kubernetes cluster. It's going to be called kubecon-cluster. You see a lot of those arguments that we set, and some arguments we didn't set that are going to get created with the cluster — Terraform will get that information later, like the endpoint, the created-at timestamp, and the kubeconfig value. We've got a maintenance policy here; we didn't specify any of that, so it's just going to default to whatever the defaults are for DigitalOcean. And then information about the worker node pool. So the next thing I'm going to do — I feel good about the plan — is run terraform apply.
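The init/plan/apply sequence she runs can be sketched like this; it requires Terraform installed and the DO_TOKEN environment variable from earlier, so it's shown as a sketch rather than something runnable offline:

```shell
# From the repo root, change into the terraform directory and
# initialize (downloads the DigitalOcean provider plugin).
cd terraform
terraform init

# Preview what will be created. The -var flag maps the DO_TOKEN
# environment variable onto the lowercase do_token Terraform variable.
terraform plan -var "do_token=${DO_TOKEN}"

# Actually create the resources; Terraform prompts you to type "yes".
terraform apply -var "do_token=${DO_TOKEN}"
```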
This is actually going to create those DigitalOcean resources. It gives us the same information again, asking, is this what you want me to do? And this time, you have to type the word yes for those things to be created. So now we have a message saying, hey, I am creating a DigitalOcean Kubernetes cluster called kubecon-cluster. Terraform gives us information, but I want to see if that cluster is actually spinning up in my DigitalOcean account, so I'm going to go back to the cloud console. If you look at the Kubernetes cluster list here, you can see kubecon-cluster. It was spun up just now, and all of those things are being provisioned — but Terraform initiated that action; I didn't do it from the web console. That's part of the beauty of Terraform. I just did it as a human operator, but hopefully you can see how you could make it an automated process, where you have that infrastructure as code. All right. Just to go back to the Terraform file, hopefully some of these things are looking familiar; you've seen them in a few other contexts. We've got that variable called do_token, we're pulling in information from the DigitalOcean Terraform provider, and then there are all of these DigitalOcean resources that we're creating. If you go to the Terraform provider page, you can see other cloud resources that you can create. Let's see — I think I have that at the bottom here. Once you get familiar with the Terraform documentation, you see here on the left all these things you can create with DigitalOcean: a container registry, a database, a firewall, droplets, a domain. Basically anything you can create in the DigitalOcean cloud console or with doctl, you can create with Terraform. And there are providers for so many things — all the major clouds: AWS, Azure, and Google Cloud. Kubernetes has lots of resources that you can create via Terraform.
So that's just some of the power of Terraform; it's worth looking through their provider registry. All right, let's check in on our cluster. Still creating — these clusters usually take between four and five minutes to create, so we've still got a little while to go. But let's take a look at where we are in the tutorial for chapter one. All right: I ran terraform apply, I entered yes, I'm waiting for that cluster to provision, and I'm checking in a few places. This message in our terminal is what's going to tell us, hey, Terraform is done creating that resource. While we're waiting for Terraform, we can get set up to access our Kubernetes cluster using kubectl. What we're going to do is add an auth token or certificate to our kubeconfig file, and here's how you do that with DigitalOcean. You go to your cluster, and on the Getting Started page, under Connecting to Kubernetes, you grab this doctl command. I'm going to open a new tab so I can do this. Then I get a message saying, hey, I added those cluster credentials to your kubeconfig file, and I have changed your kubectl context to that particular cluster. So if I say kubectl config get-contexts, I think I'm going to have my backup cluster and then this cluster. Yeah, I've got both: the backup cluster I spun up prior to the workshop, and the cluster that's spinning up now. I should not be able to access anything in the cluster yet, because it's still provisioning, but I'll try anyway: kubectl get nodes. It's taking a while. Yep, not ready yet. All right, and the last step is what I just did: verify that the cluster is up and running and that I can connect via kubectl, with the command kubectl get nodes. So this is a good opportunity to stop, let you catch up if you're following along, and take questions.
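The kubeconfig step she grabs from the Getting Started page can be sketched as follows; the cluster name matches the Terraform file, and the commands need an authorized doctl and a provisioned cluster:

```shell
# Fetch the new cluster's credentials into ~/.kube/config and
# switch the current kubectl context to it.
doctl kubernetes cluster kubeconfig save kubecon-cluster

# Confirm the context was added and switched.
kubectl config get-contexts

# Verify the cluster is up; this only succeeds once provisioning
# has finished.
kubectl get nodes
```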
Annie will ask questions from the virtual attendees, and then if you have questions here in person, you can line up at the microphone. No questions? All right, well, we'll take a few minutes, let my cluster get up and running, and then move on to chapter two. And like I said earlier, if you've gotten what you needed or you want to go see another talk, I will not be offended if you leave now. I want you to have the experience you need. Just out of curiosity, raise your hand if you're following along with me, like running the commands. All right, not a majority, but we've got a few. Cool. And this repo will be available for you after the workshop if you'd prefer to do it some other time. All right, we'll check and see if the progress bar gives us any information. We'll give it one more minute; if it's not done, we'll switch to my other cluster and hop into chapter two. Chapter two is exciting: it's setting up Flux to build a GitOps pipeline. Oh, and we do have a question, so thank you — go right ahead. "Hey, Kim, thanks very much. Can you hear me OK? Brill. I noticed the API token's scope was very broad. Are there plans for DigitalOcean to have tighter scopes on the API token? I know it's not exactly related to this, but I kind of clocked that." Sure, yeah, I believe that's on our roadmap. "Cool, thank you very much." Thank you. All right, well, let's hop into chapter two. This chapter is titled Build a GitOps Pipeline with Flux. I have found that the term GitOps feels to me sort of like the term agile, or other tech terms that get really overloaded with meaning from a lot of different places. So I'll tell you my understanding of GitOps and how we'll be using that term in this particular workshop — and it's OK if you have a different understanding of GitOps.
My understanding of GitOps is that it's a set of practices where you make Git the main source of truth for both your infrastructure and your application code. There are tools for setting up GitOps pipelines, things like Flux CD, which is what we're going to install and use today, and Argo CD. There are others, but those are two really popular CNCF-backed projects. A GitOps tool is continuously watching the state of your Git repos, so if you make a commit, push it, and a change is noticed, the GitOps controller says: I want to reconcile that. I want the change that's in Git to be true in the Kubernetes cluster. It's that reconciliation process. Flux CD helps you do that synchronization, and it makes sure that the state inside your cluster, whether it's your infrastructure or your application, matches what's in the Git repo. That's what we're going to be setting up in this next chapter. We've already set up our DigitalOcean cluster, and we already have the control plane with etcd running. What we're going to do is use Helm to install Flux CD. Flux spins up a source controller and a Helm controller, and it communicates with Helm and manages releases. On the right side here, you see the people — that's us. When we make changes to our Git repository, we push those changes to GitHub, and if we add them to a particular directory in our repo, which Flux is always watching, then Flux does what it can to make sure those changes are true inside your cluster. So this is hopefully a GitOps experience for you. The prerequisites for this chapter: you need a GitHub account, and you need the Flux CLI tool. Let me just double-check on my cluster. Excellent — this is the message from Terraform: hey, I created that DigitalOcean Kubernetes cluster. I just want to verify I can connect.
kubectl get nodes. Excellent, looks good. All right. So, installing Flux: the first thing is we're going to bootstrap Flux CD and have it installed in our cluster. What we want Flux CD to be able to do is create GitHub repos and make changes to them, and we're going to enable that through a GitHub personal access token. If you've never done this before — oh, that's the documentation, which is also good, but I'm going to go to my GitHub account. You go to your icon, then find Settings, and all the way at the bottom it says Developer settings. There are GitHub Apps, OAuth Apps, and then Personal access tokens: this is what we're going to create for Flux to use. So I'm going to generate a new token. All right, I'm going to unplug this real fast while I type my GitHub password. What's this token for? We'll say Flux CD. Expiration: we'll go with seven days. Then there are all of these different scopes; we want Flux, like I said, to be able to do all the things to repositories in my account. I'm going to generate that token and copy it to my clipboard, like before. Then, again, we're going to store this as an environment variable in our terminal session: I'm going to create an environment variable called GITHUB_TOKEN and give it the value of the token on my clipboard. So I'm going to go up a directory, and I'm going to say export GITHUB_TOKEN. All right. And the next thing — this is so exciting, I love this — we're going to run this flux bootstrap command. It's going to create a repository on my GitHub account, and I just need to make sure I change the value of anything in the angle brackets. I'm going to copy this whole command and paste it here; it's got a couple of lines. First, I'm just saying flux bootstrap github. Flux can bootstrap all sorts of different environments.
It can do Bitbucket, it can do Git, and it can do other things as well, but I want this connected with a GitHub account. Next is the owner of the account — that's me, so this is my GitHub username, kimschles. I want it to create or pull from a repository called kubecon-workshop. The path that Flux is going to be monitoring, and doing the reconciliation process against, is clusters/dev. So I'm going to hit Enter, and this takes a little while. It says connecting to GitHub; it looks like the repository was created and it's syncing. So let's look at my GitHub account and see: do I have that repo? All right, it created this kubecon-workshop repo. Nice. And we've got all this information: it says the deployment is ready and all components are ready. So let's see what got created inside this Kubernetes cluster. I'm just going to say kubectl get namespaces. I have the default namespace and the other namespaces that come with a cluster, but I also have this brand-new flux-system namespace. So let's look at the pods in there: kubectl get pods --namespace flux-system. All right — if you remember that architecture diagram, these should look familiar. We've got the Helm controller running in a pod; we have the kustomize controller, since Flux really leans on Kustomize; a notification controller; and a source controller. Just to look at this again: you see those components now in the Kubernetes cluster, created through that bootstrap command. Then there are some nice Flux commands where you can check that things are working. If you run flux check, it checks prerequisites — last night I did this, and it let me know my Flux CLI was out of date, so I updated it. You get some good information from that. If something fails, you can run flux logs to investigate and try to find out what's going on. This looks good to go.
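The bootstrap-and-verify sequence above might look like the following; the token is a placeholder, the owner and repository values are from the demo (substitute your own), and everything needs the Flux CLI plus cluster access, so this is a sketch rather than something runnable offline:

```shell
# Flux reads the personal access token from the GITHUB_TOKEN
# environment variable (placeholder value shown here).
export GITHUB_TOKEN="ghp_your_token_here"

# Bootstrap Flux into the current kubectl context and have it
# create/manage a repo on a personal GitHub account.
flux bootstrap github \
  --owner=<your-github-username> \
  --repository=kubecon-workshop \
  --path=clusters/dev \
  --personal

# Sanity checks after bootstrap.
flux check                                # verify prerequisites
kubectl get pods --namespace flux-system  # the four controllers
flux get all                              # resources Flux manages
```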
Then we can say flux get all to inspect the resources that Flux has created. All right, it says: hey, I created a Git repository, and I'm going to be watching it. All right, I think we're good on that. Oh, yeah — and then, what was built into that repository? It should have three different YAML files; let's take a look. Inside kubecon-workshop, we've got this sort of long file path, but we have those three files: gotk-components.yaml, gotk-sync.yaml, and kustomization.yaml. These are files auto-generated by Flux. Some of them are long, and you shouldn't have to mess with them — they have a nice warning at the top: this manifest was generated by Flux, do not edit. So you don't have to worry about those. All right, so we have Flux set up, and we have a repo that Flux is listening to. The next step is to clone that repository onto our local machine and prepare a little more of the layout, so that the Sealed Secrets chapter will work for us. This command is a git clone of the repository you created with Flux. I'm just going to grab this, step out of that directory, say git clone, and change into it — so, kubecon-workshop. Excellent. And I'll open that with my text editor. All right, next up, we're going to create some new directories in there. And it looks like we have a question, so go ahead. "Yes, two questions, actually. How did Flux authenticate with GitHub? Because you did not provide the GitHub token, so I would assume it took it from your terminal. And how did Flux authenticate with the Kubernetes cluster before creating the namespace?" Excellent — two good questions. One was, how does Flux authenticate to GitHub, and the other, how does Flux authenticate to Kubernetes? I believe Flux looks for the environment variable that we set, GITHUB_TOKEN, in your terminal session.
And then Flux also, I think, has access to your kubeconfig file, so it can communicate with your Kubernetes cluster. Thanks. All right, so this is where it gets a little more complicated, but we have some nice commands here for you. I want to create several new directories: one called helm, and inside the helm directory, repositories, then releases, then secrets. Copying this command should get you there. Once again, the clusters/dev directory is where you have to put anything you want Flux to look at, because that's what we specified in the flux bootstrap command, telling Flux: hey, look here. So let's make sure we have those directories. Excellent — we've got helm, releases, repositories, secrets. That's what I'm looking for. All right, and then finally, for Flux, we want to add some items to our .gitignore file. The first thing — this is for Sealed Secrets — is that we're going to ignore anything matching the values YAML pattern, and then we're not going to ignore the sealed YAML files. We'll see what that means shortly. I think I already have that in the .gitignore, but let's double-check. Yes, they're here; those things are ignored. All right, so that is bootstrapping Flux to communicate with GitHub and your Kubernetes cluster. Now the Flux controller is running, and it's going to take care of that reconciliation. We'll see that in action in just a few minutes in the Sealed Secrets chapter. But since that's the end of the chapter, now is a good opportunity for questions. We'll take questions from folks online, and then if you are in the room and have questions, you can queue up at the mic. The mic, perfect. Yes, there's a question online: what would be your recommendation regarding Git repo structure, e.g.
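The directory layout and .gitignore step above can be sketched like this. The exact ignore patterns are assumptions reconstructed from the talk (the idea is to keep plaintext Helm values files out of Git while allowing the sealed ones in); check the workshop repo's instructions for the authoritative patterns.

```shell
# Inside the repo Flux created, lay out the directories Flux will
# watch under clusters/dev for the Sealed Secrets chapter.
mkdir -p clusters/dev/helm/repositories \
         clusters/dev/helm/releases \
         clusters/dev/helm/secrets

# Ignore plaintext values files but keep sealed ones.
# NOTE: these two patterns are an assumption, not verbatim from the repo.
cat >> .gitignore <<'EOF'
*-values.yaml
!*-sealed.yaml
EOF

# Confirm the layout.
ls clusters/dev/helm
```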
The folders for Crossplane providers, along with the sealed secrets resources, manifests, and so forth, when working with GitOps. I'm going to try and restate that question. So the question is, how do you approach structuring your Git repo? Does that sound right, Annie? Yeah, so if you're following along, I would do exactly what's written in the tutorial. It's sort of a monorepo where everything's all in one repository. But the thing that, honestly, I struggled with the most was realizing that you have to specify for Flux which directory or directories it's going to be monitoring, and so being mindful of that. Yeah, for this tutorial, we've got a Terraform directory, we're going to create some sealed secrets files, we've got Crossplane, we've got a lot of stuff going on. But I would stick with the tutorial for now, and then once you get familiar with the tools, you'll have more confidence to restructure it as needed. Thank you. All right, we have a question. Hi, yeah, I have a question. You used a personal token for GitHub to connect to Flux, I think, right? Yes, yeah. Is there a way to avoid that? Because a personal token is connected to a person, and if that person leaves the company, you have an issue. So is there a way to work around that? Yeah, that's a great question. So the question is, we used a personal access token from GitHub to give Flux access to our GitHub account so it can create and make changes to a repository. So you could do that through your work organization. And I'm sure there are other ways of handling that, but I don't know them off the top of my head. So I can do some research on that, or I can hop over to the Flux booth after this and talk with the Flux maintainers. But I'm guessing there's a better way. OK, thank you. Yeah, good question. All right, we've got about 40 minutes left. We've got two chapters. Oh, we've got one more, or we have more questions. So let's take that question.
I've seen that you bootstrapped the Flux stuff with an imperative command. Is there a way to do it in a declarative way? Oh, that's a great question. So the question is, we used an imperative paradigm with Flux, where we ran the commands from the command line and passed in the arguments. Is there a way to do it declaratively, using infrastructure as code? That is a great question. I don't know the answer right now, but I'll find out and share it in the Slack channel. Or if you do the research and find the answer, let me know, and I'll share it with the group. So, excellent question. Yeah, in the spirit of GitOps and infrastructure as code, not running those commands imperatively. All right, well, let's take just a minute. If you need to stand up and stretch, you can do that, and then we'll hop into the next two chapters. Oh, say that again? The Slack channel, how do I find it? Oh, yes. My name is Kim Schlesinger. All right, so we've spun up a cluster with Terraform. We've installed Flux for continuous delivery. And the next thing that we're going to do is look at a project called Sealed Secrets. So in GitOps, using infrastructure as code, if you want Git to be your source of truth, you want to be able to publish files and not worry about it. But one thing that's really tricky is when you have confidential data, like API tokens or passwords or database connection strings: things that you don't want to commit to Git or GitHub or GitLab, whether or not the repository is private; things that are not safe to put in there. And so there are a lot of different solutions for this in Kubernetes, and today I'm going to show you a project called Sealed Secrets. So let's look at the diagram. You can see our cluster is getting a little bit more complex. We have our DigitalOcean Kubernetes cluster. We have Flux CD inside of it. We've got our control plane running.
And then on the right here, you see there's a SealedSecret object and a Sealed Secrets controller. So we're going to be installing that inside our cluster. And then on the outside of the cluster, where the developer or the platform engineer is, we're going to encrypt Kubernetes secrets using a tool called kubeseal. And that's going to give us an encrypted string that's OK to push to your Git repo. And inside of Sealed Secrets, there is a private key that will decrypt your secrets inside the cluster. So that's how we're going to handle that. So let's see what we're going to do. In order to complete this chapter, you need to have completed chapter 2. So you'll want Flux CD set up and running using the same structure that we did in the tutorial. And then you'll also want a command line tool from the Sealed Secrets project called kubeseal. All right, so let's hop into setting up Sealed Secrets. The first thing that we're going to do is create a Flux custom resource called HelmRepository, and we're going to set up a HelmRepository for Sealed Secrets. So let's see. The instructions say, change to the directory where your Flux CD Git repository was cloned. So let me make sure I'm in the right project. Yep, I'm in kubecon-workshop. And then we're going to use Flux to create the Sealed Secrets HelmRepository with this command. And I'm just looking through, is there anything that I need to change? I just need to make sure that I have the environment variable with the Flux Helm manifests path set. So let me check that. Yes, so those are some of the directories we set up: clusters, dev, and helm. So I'm good to go there. And then I'm going to copy this and paste it. All right, so it says `flux create source helm sealed-secrets`. We've got some explanations for the commands. And we're going to have a new file. Wrong place. And here it is. So it's a YAML manifest. We're using a Flux CD custom resource called HelmRepository.
And so if you've ever run Helm from your command line, where you have to add a Helm repository and then specify the chart that you want to create the release from, this is how Flux does that declaratively in a YAML manifest. So we're saying, hey, create a HelmRepository called sealed-secrets, put it in the flux-system namespace, and you're going to pull it from the Sealed Secrets repo. I said HelmRelease at first; actually, that's just the HelmRepository, not the HelmRelease. But that's OK. All right, the next thing that we need to do is set some values for the Sealed Secrets HelmRelease. So we're going to set the version of Sealed Secrets that we're using, and we're going to pull a values file from a different project. So we'll just run this command. And I'm expecting to see a file that starts with sealed-secrets-values in my kubecon-workshop directory. So, looks good. Not a whole lot of values set here, but we have ingress not enabled right now. All right, next thing, now we're going to create the HelmRelease. We have the Sealed Secrets HelmRepository. And so looking at this long command, we're creating a HelmRelease called sealed-secrets-controller, specifying the release name, the source, the chart, the chart version, the values that we're going to apply to that Helm chart, and where that file is going to be created. So, just in the root of the kubecon-workshop directory, running that command. And then we should see another file. Oh, it's in repositories. Oh, no, that's wrong. It's in releases. All right, so we have this HelmRelease with all of that information, so we're declaratively creating a HelmRelease. And then next, we want Flux to see those changes and enter into the reconciliation process. So we're going to add all of those files as a Git commit, then we're going to push that commit to GitHub, and then we're going to see Helm create those new resources.
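As a rough sketch, the two manifests those Flux commands generate look something like the following. The chart version, interval, and exact names are placeholders rather than the workshop's real values, though the repository URL is the published Sealed Secrets Helm repo:

```yaml
# Sketch of the generated Flux manifests; versions and names are assumptions.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: sealed-secrets
  namespace: flux-system
spec:
  interval: 10m
  url: https://bitnami-labs.github.io/sealed-secrets
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: sealed-secrets-controller
  namespace: flux-system
spec:
  interval: 10m
  chart:
    spec:
      chart: sealed-secrets
      version: "2.x"        # pin a concrete chart version in practice
      sourceRef:
        kind: HelmRepository
        name: sealed-secrets
  values:
    ingress:
      enabled: false        # matches the values file we just inspected
```

The point of the pair is exactly what's described above: the HelmRepository is the declarative equivalent of `helm repo add`, and the HelmRelease is the declarative equivalent of `helm install` with a values file.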
So in this command, we're still setting the Sealed Secrets chart version and just adding the new files that we created with those Flux commands. All right, so adding those, and I'm pushing that to my GitHub repo. Let's just take a look and make sure those changes are there. OK, it was updated 10 seconds ago. Great. And then let's just do a `flux logs` and see what's going on. All right, so we see Flux is in that reconciliation process, and we see a HelmChart is being created. It says, no artifact available for HelmRepository. And then let's take a look at what's, oops, `kubectl get namespaces`. Let's see what's in the flux-system namespace. What's new? So, `kubectl get pods` from the flux-system namespace. All right, so we've got the sealed-secrets controller, which was created 40 seconds ago. So that was installed because we made that change in Git and GitHub, and Flux noticed the change and then did that reconciliation process. So the state of our cluster now matches the state of our Git repo, which is GitOps: making Git the single source of truth. All right, so the Flux controller, I think, runs once every minute by default. So if you apply a change and you want to see it happen right away, you can run this command where you force a reconciliation, `flux reconcile`. Yeah, and then just to see that HelmRelease one more time. All right, it says it's ready, and that reconciliation succeeded. All right, next up, we are going to export the Sealed Secrets public key. So a public-private key pair was created inside of the sealed-secrets controller, and we need to have access to the public key so we can use it to encrypt the secrets that we don't want people to see. And this happens in one of two ways. You can try this kubeseal command, where you're grabbing the controller from the flux-system namespace, and you're asking for the cert, and you're saving it as this file. This doesn't always work for me, so let's see if it works.
Nope, that didn't work for me. So what I'm going to do is port-forward the sealed-secrets controller, and then use a curl request to get the certificate from its API endpoint. So I'm going to copy this, and I'm going to open another tab. All right, it looks like that pod is port-forwarding, and then I'm going to run this command. And I want a pub-sealed-secrets.pem file in my kubecon-workshop directory. Oh, it looks like I already, oh, maybe it did work that first time. Oh, no, it created the file, but there's nothing in there. All right, so did that curl request succeed? Yes, so now I have the public key on my machine. I can stop that port-forward process and get back to the instructions. So I now have the public key. And, oh, it looks like it's safe to commit that to Git. So I'm going to commit that and push to GitHub. Excellent. All right, next thing, we're going to encrypt a Kubernetes secret. This is what I think is the exciting part: we're going to have some data that we wouldn't want to have in a Git repo, and we're going to encrypt it using kubeseal and Sealed Secrets. So I'm going to create a Kubernetes secret, and I'm going to call it your-app-secret.yaml. So I'm going to say `touch your-app-secret.yaml`, and I'm going to paste this YAML manifest. So this is a secret, it's called your-app, and here is the data. This data is base64-encoded, but that isn't strong enough to have it out publicly. And so what we want to do is use kubeseal to transform this string into something that can only be decrypted by the private key that's in the sealed-secrets controller. So how do you do that? We're going to run a kubeseal command. We're going to ask it to give us output that's YAML. We're going to have it create that new updated secret in the flux-system namespace. We're telling it the name of the public key, the name of the file with the secret it's going to encrypt, and then we're giving it another file name.
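The your-app-secret.yaml manifest being pasted here looks roughly like this; the key name and value are illustrative stand-ins, not the actual data from the workshop:

```yaml
# Plain Kubernetes Secret -- NOT safe to commit, even though the value
# is base64-encoded (encoding is not encryption).
apiVersion: v1
kind: Secret
metadata:
  name: your-app
  namespace: flux-system
type: Opaque
data:
  # base64 of "super-secret-value" (illustrative)
  password: c3VwZXItc2VjcmV0LXZhbHVl
```

Sealing it is then along the lines of `kubeseal --format yaml --cert pub-sealed-secrets.pem < your-app-secret.yaml > your-app-sealed.yaml`, where the cert filename matches the public key fetched a moment ago.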
It's called your-app-sealed. That's going to be the file that we commit to GitHub. So let's run this command. Let's see, do we have your-app-sealed.yaml? So I have your-app-secret, that's the one I don't want to commit, and I have your-app-sealed. Ah, here's the one I want to commit. So this has been transformed from a generic Kubernetes secret into a custom resource from Sealed Secrets called a SealedSecret. It's got some of that information we passed in through the commands. And then look at this encrypted data. This is safe to commit to a public GitHub repository. So you are good to go with having safe and secure secrets. So what I'm going to do is delete the file that has the secret in it, and I'm going to keep the sealed file. Then I'm going to commit that to GitHub, the sealed-secrets controller will decrypt the secret, and then we'll take a look at that. So, all right, I already got rid of that your-app-secret file. Oh, this is beautiful. I don't even have to run a Flux command. I can just use `kubectl apply` to create that sealed secret. So, `kubectl apply` on your-app-sealed. Excellent. And then if we look, `kubectl get secrets` from the namespace flux-system. Oh, there's a lot there. All right, so we have the your-app secret, and it's Opaque. It was created 10 minutes ago. So that SealedSecret got turned into a regular Kubernetes secret. And we can inspect the secret to see if it has the decrypted data. So let's see. All right, so there's the base64-encoded string that Sealed Secrets decrypted. So we've got that secret in the Kubernetes cluster, but it wasn't committed to Git or to GitHub. And then we're not going to do this step, but optionally, if you would feel safer, you can create a private key backup, and we've got some instructions there, plus a list of some security best practices and some resources.
So we just used Sealed Secrets to create the sealed-secrets controller so that we can encrypt secrets from our command line, commit them to GitHub, and have them decrypted by the sealed-secrets controller in your Kubernetes cluster. So that is the end of that chapter. We'll take questions. We'll do questions from the virtual attendees first, and then I see folks in line. And then we've got one more thing, which I'm super excited about, which is using Crossplane to spin up some cloud resources. And then we'll be done. Any questions from virtual attendees, Annie? OK, no questions from virtual attendees; people in the workshop. My question is about the CRD, so the sealed secret, the file that we created. You did the `kubectl apply`. But so this file isn't managed by Flux? Oh, yeah, that was not managed by Flux. Did we commit it? No, we didn't. But if we committed it, I think Flux would reconcile that. So it's not in a specific directory, it's at the top level, right? Yeah. Yes, oh, that's a good point. You'd have to put it in that clusters/dev directory. OK, so it's not managed if it's there. OK. That's true. Yeah. Thank you. Hi, I would have had the same question, but here's another one. OK. What would you recommend to avoid accidentally committing your unsealed secrets file? Maybe a pre-commit hook, something like that? That's a great question. So with the workflow that I showed you, where you created the secret with the base64-encoded string and then manually deleted it, I think having a guardrail like a webhook or some check on that is a really good idea, because it would be very easy to accidentally commit that. So yeah, putting some boundaries around that, some automated processes to check for that, would be a very good idea. Thank you. Hi. Hi. Where is the Sealed Secrets private key stored? The Sealed Secrets private key is stored in the sealed-secrets controller pod.
So when we did the port-forwarding and asked for the public key, the private key is also stored in that same place. Thank you. Yeah. All right. I wasn't sure we were going to get through all the chapters, but I think that we are. We've got 20 minutes left, so we will do some Crossplane. And then, if you're following along, I'll show you how to tear down your cluster and clean up any resources so you don't get charged, and then we'll be on our way. So this next chapter is called make your cluster a universal control plane with Crossplane. Crossplane is a CNCF-backed tool that allows you to create cloud resources from inside a Kubernetes cluster, so you can make a cluster a universal control plane. I think Crossplane is really exciting. It took me a couple of days to sort of understand some of the exciting parts of it. But what we're going to do in this part of the workshop is install the DigitalOcean Crossplane provider in our cluster, and then we're going to spin up a DigitalOcean Droplet that's totally separate from our cluster, so you can see us provision some cloud resources. The next part, which we probably won't get to, is that you would pick a different provider. All the major cloud services have Crossplane providers. Big projects in the cloud native world have Crossplane providers. Cloudflare has one. So hopefully, when you see the pattern of how you install the provider and create the resource with DigitalOcean, you'll be able to apply that pattern to another provider and get up and running with Crossplane. This chapter has no special prerequisites aside from the tools that you need for all the chapters and having a Kubernetes cluster. And so the first thing that we're going to do is create a separate namespace where we'll do all of this installation. So we're going to say `kubectl create namespace crossplane-system`. And I'm just going to make sure it did what I expected.
All right, crossplane-system. Next up, I'm doing this all via Helm. And like some of you have already pointed out, you could do this all in Flux, absolutely. You'll have to use your imagination, though; I'm going to do this just from the command line. So I'm going to add the Crossplane Helm repository and then update it. I'm guessing I already have that in my Helm repositories, so I'm just going to run the `helm repo update` command. Excellent. And then I am going to use Helm to install the Crossplane Helm chart in that crossplane-system namespace, through this command. Excellent. So we're getting the information that Crossplane has been installed. Let's take a look. We'll just look at the pods in the crossplane-system namespace. All right, so I have a crossplane pod and I have a crossplane-rbac-manager pod. Excellent. So that's the first step. If you want to use Crossplane with any of the providers, the first thing you need to do is install Crossplane somewhere in your cluster. Now we're getting into the provider-specific instructions. We're going to install the DigitalOcean Crossplane provider, and I have that defined in a file called install.yaml. So, back to the repository for the workshop, going to the crossplane directory and just looking at the installation manifest. I'm using a Crossplane CRD called Provider, and I'm creating a provider that I'm calling provider-do, for provider DigitalOcean, and I'm installing it with this particular package. So I'm going to run a `kubectl apply` command on that. It's in the crossplane directory. Oh, I'm in the wrong directory. All right, let me try that again. So, `kubectl apply` on the Crossplane install manifest. All right, it says that was created. So now if we say `kubectl get provider`, we've got one provider: the DigitalOcean provider. If you were using multiple, like if you were using a Google Cloud provider, you would see that one as well.
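The install.yaml being applied here is a Crossplane Provider object. A sketch of its shape, with the package reference left as a placeholder since the exact image and tag come from the workshop repo:

```yaml
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-do
spec:
  # Placeholder package reference; use the image and tag
  # from the workshop's install.yaml.
  package: crossplane/provider-digitalocean:v0.1.0
```

Applying this is what makes `kubectl get provider` list provider-do; each additional cloud you wire up gets its own Provider object like this one.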
If you were using Cloudflare, you would see that provider installed. So we're ready, or almost ready, to talk to the DigitalOcean API from this Kubernetes cluster. The next step is to configure two different things: one's a secret, and one is the provider config. And so I need that DigitalOcean token value again. Let's see if I'm still in the same session. Excellent, so I'm going to use that in just a minute. And we need to put that in this file. So we have the config.yaml, and here's where the token's going to go. There's a placeholder here called base64-encoded-provider-creds. And there's a little comment here: this is an opportunity to use sealed secrets. I won't be able to do that right now, but this is a place where you would use sealed secrets so you could commit this file to a public repository. So I have to base64-encode this value, and I have some instructions if you've never done that before. You can do it from the command line. I'm not sure how it's done on Windows. And I'm not sure how much you trust DuckDuckGo, but if you search base64 and then a string value in DuckDuckGo, it'll give you the base64-encoded value. So I'm going to echo that. All right, so there's my base64-encoded value. You've got to have that for a Kubernetes secret. So here's how Crossplane's going to communicate with my DigitalOcean account, using that token. And then the next thing that we're going to do is set up this custom resource called a ProviderConfig. It's going to be called do-example, and it's going to pull the credentials from the secret that I created above, the provider-do secret. So I'm going to apply the config file: `kubectl apply` on the Crossplane config. All right, no error messages, so it looks like it's good to go. And then the next thing, and this is where I think it's so cool, is we're going to create a DigitalOcean Droplet from inside that Kubernetes cluster.
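The base64 encoding from the command line can be done with the standard base64 tool (no search engine required); the token value below is obviously a fake placeholder:

```shell
# -n is important: without it, echo appends a newline that gets
# encoded into the credential and breaks authentication.
echo -n 'dop_v1_faketoken' | base64

# Round-trip to confirm the encoding is lossless:
echo -n 'dop_v1_faketoken' | base64 | base64 -d
```

The encoded string is what goes into the config.yaml placeholder; and as an aside, pasting a real API token into a web-based encoder means handing your credential to that site, so the command line is the safer habit.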
And so with Crossplane, any resource that the provider offers, you can probably create using a YAML manifest. For DigitalOcean, there is a custom resource called a Droplet. You give it a name, and these things should look familiar: you say which data center region the Droplet's going to be created in, the size of the Droplet, and the image you want the Droplet to use. And then the providerConfigRef just has to be the same name as the provider config that we created here. So let's look at our cloud console. Let's look at Droplets. These are Droplets for the Kubernetes clusters that I have running in this account, and what I want to happen is for a Crossplane Droplet to get created. So let's do it. So, `kubectl apply` on crossplane/droplet.yaml. It says the Droplet was created. And if we look at the cloud console, hey, the Droplet was created. So that Droplet was created from that Kubernetes cluster. I didn't create it from my command line, I didn't create it using doctl, and I didn't create it from the web console. That's just a little bit of the power of what Crossplane can allow you to do in terms of being a universal control plane. So that is how you create a Droplet using the DigitalOcean Crossplane provider. If you're working through this tutorial in the future, the next step is that you would choose a different provider to install. And the steps are very similar: you install the provider, you set up however they like to authenticate, probably a secret with a token, and then you find a resource that you want to create from your cluster. And this is the official list of Crossplane providers. So you see AWS, GCP, Azure, Alibaba. You've got Rook, Helm, Terraform, SQL, GitLab, Equinix Metal, Civo, Argo, Styra, Cloudflare, so many things. Just imagine, if you want your Kubernetes cluster to be the mothership, it can create all of these things. And this is a way for you to implement multi-cloud infrastructure.
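For reference, the droplet.yaml being applied follows roughly this shape. The API group and version here are my best guess at the DigitalOcean provider's schema, so treat every field name as an assumption to check against the workshop file:

```yaml
apiVersion: compute.do.crossplane.io/v1alpha1   # assumed group/version
kind: Droplet
metadata:
  name: crossplane-droplet
spec:
  forProvider:
    region: fra1              # data center region slug
    size: s-1vcpu-1gb         # droplet size slug
    image: ubuntu-20-04-x64   # base image slug
  providerConfigRef:
    name: do-example          # must match the ProviderConfig created above
```

The providerConfigRef is the linchpin: it ties this resource to the credentials in the do-example ProviderConfig, which is the same pattern every other Crossplane provider follows.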
Maybe you use DigitalOcean for something, but you also use Google Cloud resources for something else. You can set up a cluster that can handle all of that. So that is the Crossplane part of the tutorial. And Crossplane does so much more; I wrote a blog post about it for DigitalOcean a few weeks ago. Part of the power of Crossplane is that if you are a platform engineer, you can create different resources that you give to application developers who maybe don't want to know much about your infrastructure, but you want them to be able to spin up databases or whatever you want. Crossplane gives you that power, where your platform engineering team can define special resources and tell your application team, hey, if you need to do some testing or you need to spin up this or that, here's how you do it. So I can't speak highly enough of Crossplane. I think it's a really exciting project. So that is the end of the workshop, except for one thing. The last thing is destroying your cluster with Terraform, because you don't want to get charged for having resources running that you're not using. And so you can just destroy your cluster with a `terraform destroy` command. So I'm changing into the Terraform directory, and I'm destroying it. Terraform is like, hey, do you want me to get rid of all of this stuff? And I'm going to say, yeah, I do. And it's destroying that. And then I'm going to manually destroy the Droplet that I created. Excellent. And that is that. So we'll take questions. And if you haven't left already, I would love it if you learned something, or if you want to ask me questions, come up to the front. I have a few t-shirts and tote bags and a ton of DigitalOcean stickers. I'd love to meet you and talk. Again, Slack me if you need those credits. Apologies that the credit codes didn't work, but we'll take questions from online, questions from the folks in person, and then we'll be done. Perfect. So there were two questions from the Q&A box.
Can Sealed Secrets convert a YAML with stringData, so you don't need to base64-encode, since you're removing the secret file after you've converted it to a sealed secret anyway? OK, so the question is, do you have to base64-encode the secret before giving it to Sealed Secrets? And I believe it's a Kubernetes constraint that the secret data must be base64-encoded. So yes, you have to have that there. Great. So, how should we maintain the universal control plane, which is basically another cluster, in a GitOps way? Is it recoverable if it goes down? Great question. So, how can we maintain the universal control plane in a GitOps way? I guess you would want to have a very strong disaster recovery plan. So, planning for, let's say, this cluster goes completely offline, what is the plan to handle that? That's the scope of a different talk and workshop. But yeah, using all of the things that we were exposed to today, like Terraform and Flux, or maybe you use Argo CD instead, finding a way to get all of your pieces into a GitOps workflow so that if you have a cluster that goes down and everything is defined in Git, in theory you would be able to spin that cluster back up using tools like Terraform and Flux. So that's how. All right, a couple of questions from the audience. Thanks. There was another question in the channel I want to highlight. It was about using Flux with, for example, 100 microservice repos, so multi-repos. What would you recommend for that amount of repos, regarding conflicts, for example? Yeah, so the question is, what if you are running 100 microservices and each has its own individual Git repo? I think Flux would be able to handle that. You have to configure Flux to listen to those repositories. But if you have a good templating system for your teams to copy and use, I think that would help you use Flux for all those microservices. Different repos can have conflicts between each other, right? How would you resolve them?
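To make the multi-repo answer concrete: pointing Flux at an additional repository means creating another GitRepository source, plus a Kustomization that consumes it. A minimal sketch, with the URL, names, and paths as placeholders:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-microservice        # placeholder name
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example-org/my-microservice   # placeholder URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-microservice
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy               # directory in that repo Flux should apply
  prune: true
  sourceRef:
    kind: GitRepository
    name: my-microservice
```

With 100 repos, this pair is what you would template and stamp out per service, which is the kind of templating system the answer above alludes to.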
Oh, that's a good question. I don't know. I'll try and find out. OK, thanks. Hi. I noticed that you removed the Crossplane-created resources manually. My question is whether Crossplane has the ability to manage the drift, or manage the lifecycle of the cloud resources, in the way Argo CD or Flux does it. Yeah, Crossplane does have that capability. We've got a bug in the DigitalOcean provider, so that's why I had to do it manually. But yeah, you can destroy, you can use Crossplane to get rid of resources as well. Good eye. Thank you. All right, thanks so much, everybody. Thanks to those of you who are watching online. I'm Kim. Come say hi. I'll have a mask on. Get some swag. Reach out to me on the CNCF Slack workspace or send me an email, especially if you need those credits. We'll get those applied as soon as possible. Thanks so much. Thank you. Thank you. Thank you.