Hi, everyone. I'm Kingdon Barrett. I'm a developer experience person at Weaveworks, an open source support engineer, and this is Juozas Gaigalas. He's not able to be here in person today, but he is live and he can take questions; I will relay them. I don't think he'll be able to hear us, but he can hear me, as long as I stand close to the computer. Juozas, would you like to introduce yourself? Yeah. I'm a developer experience engineer at Weaveworks, and at this point I'm the primary developer of the Visual Studio Code extension. So my primary expertise is not in Flux itself but in Flux UI work, but I've been learning more and more, so I'll share what I know and Kingdon will fill in the rest. We've been working on the VS Code extension together for a while. It was developed by a few other people, and we've been maintaining it. Go ahead, I'm gonna let you take the lead. Okay, so I was trying to prepare a demo that uses Flagger. Flagger is a progressive delivery system, and I wanted to demonstrate a developer using Visual Studio Code working in two environments, test and staging, with two different processes for rolling out new deployments to test and to staging. In this demo, if you push something to test, you should see it immediately applied; but if you push something to staging, it goes through the same process you would have if you were pushing to production. And when you push something to production, you don't want something to go wrong and everything to crash. So there are safeguards, there are rollbacks, and there are different ways of handling it. Flagger is a tool that uses Flux, which we'll explain later, to configure that. And the way it works is a canary deployment. When you upgrade your code, it creates new pods in the Kubernetes cluster, new containers running the new code, and then gradually and incrementally it routes traffic to the new version of the code.
Then it checks if there are errors, 500 errors or any kind of problems. If everything looks good, it switches all the load balancers over to the new code, and if something goes wrong, it reverts. So staging is a complicated process, but ordinary developers should be able to just click some things and have the code go through this process. To do that, I have created two clusters, both of them using kind: kind-staging and kind-test. These clusters have ingress, which means that I can reach them from my laptop, which is a MacBook, so they run in a VM, but I can address them through their host names. That, unfortunately, is not working, so we're gonna try to debug it. This is how I created the clusters. I created one cluster with this kind configuration, where host port 80 goes to container port 80, and port 443 to 443. The other one was set up to listen on different ports, so the test cluster should be listening on 8080 and sending traffic to port 80 in the cluster, but that is not working. So we need to find out why. Let's see. Okay, so here's the test cluster, and we also have our GitOps set up as well. And there's one final thing. This is very cheap, but I think for a demo this is good: the hosts file. All of it is the same, it's my local machine, but if the request has the host name of test, then the ingress will route it to test. So if I go here, I should be seeing the podinfo app right now, but there's something wrong with the ingress controller. The ingress controller is the piece of Kubernetes that uses nginx to pass traffic from the outside into the cluster, and it's somehow misconfigured in a way I haven't been able to solve. Kingdon might be able to sort me out, though. So where should we start? Great question. Let's take a look at the nginx logs. We might need a terminal for that. Yep, let's do it.
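For reference, the kind cluster configuration described above, host ports mapped into the node so ingress is reachable from the laptop, can be sketched roughly like this (following the pattern from the kind ingress docs; the node label and port mappings are the parts that matter):

```yaml
# kind-test.yaml -- a sketch of the cluster config described above.
# The second (test) cluster would use hostPort: 8080 instead of 80
# to avoid colliding with this one on the same machine.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"   # lets ingress-nginx schedule here
  extraPortMappings:
  - containerPort: 80    # cluster ingress HTTP
    hostPort: 80
    protocol: TCP
  - containerPort: 443   # cluster ingress HTTPS
    hostPort: 443
    protocol: TCP
```

You would create the cluster with something like `kind create cluster --name test --config kind-test.yaml`.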
Can everyone see, or do we need the font a little bigger? A little bigger? Great. Okay. So this is the pod, nginx, that passes traffic from outside in. So we select this pod; this is the namespace, this is the pod. We do k logs with the namespace... let me move some zoom stuff out of the way. That's all right. Yep. So, ingress-nginx... okay, this is the... Just so everyone knows, this is not scripted; we're actually debugging this live. And this is also open for Q&A. Actually, before we dive too deep in, maybe we should show how you download the VS Code extension: where's the marketplace page, where's the documentation, that stuff. Yeah, so if you have VS Code open already, you can search for GitOps, and it should come up near the top. You install this. We also have a pre-release version that has newer features, and you can find it here in VS Code itself. We don't currently have a pre-release version that's newer than the latest release, so you can't opt in to pre-releases right now, but normally, if we had a pre-release, you could join the pre-release channel, and then we can publish changes if you find a bug. There are bugs, I'm sure. There are definitely things we're aware of that are shortcomings as well, maybe not bugs per se. One shortcoming: this is based on the Kubernetes extension for VS Code from Microsoft, and that uses a fork-and-exec to run kubectl. So it's a bit slow; it's not that it's unreliable, but if you have many resources on your cluster, many Flux Kustomizations, many sources, then it can be a bit slow to refresh. Yeah, clunky. Fixing this is a very high priority for us next. Okay, so this is the environment, this is the test cluster, and the logs that we're looking at don't seem to suggest anything is wrong. I think I'll try to rebuild the cluster just to be sure. Maybe I'll delete both.
Before you do that, why don't you describe the ingress? See if you can see any events on the ingress resource itself. Right? What was that, -o? No, describe doesn't take -o yaml. Oh, that's right. Yeah, here we go. Okay. Nope, this doesn't tell us anything either. Yeah, if you wanna try and tear the cluster down and start over, maybe if we only have one... I know that you did something to make sure the ports wouldn't be in conflict, but... Yeah, this could be the thing, because I was testing the demo setup and it was all working great, and then I added the second cluster, and it's all messed up. So that could be exactly it. I'm gonna delete both of these, and we'll try to rebuild them from the beginning. So: kind get clusters, kind delete cluster. We're gonna create a new cluster, and we will use this configuration. The configuration takes the host port 80 and sends it to the cluster ingress, port 80, once the ingress is ready. Okay, so it should be really blank. No, nothing in there. We can start from scratch. The way the demo is structured is I wanted to show some things where you set them up manually, you make sure they work, and then you move them to Git and set up GitOps so you don't need to worry about them anymore; now Git is keeping everything up to date. So this is the stage where we set everything up manually. Okay, we can see in our extension, reload everything: there's only one cluster, there's nothing on it, no Flux installed. Maybe we don't need Flux right now. Let's try to set up ingress. I have some stuff here. This is the way I was installing it before, using Helm. Helm is a package manager for Kubernetes, if you're not familiar. Okay, we already have this. So we create a namespace where the ingress settings will live, and now we're installing a Helm chart, which is like a package, like a Debian package or a Red Hat package, but for Kubernetes.
And it takes a lot of different configuration parameters, and these ones I found work with kind and nginx. Once this is set up, you should be able to create ingresses that take your local network traffic to the cluster network. Yeah, so Helm has a concept of repositories, releases, and charts. A repository is what it sounds like: where all the charts are stored. And when we install a chart, it creates a release, which is a combination of a chart and all the parameters to apply it specifically to your cluster. All these settings plus the chart make a release. I'm just running a command line right now to do this, but I don't wanna be running the command line every time I delete the cluster. So I would use Flux, which would automatically do this for me every time I create a new cluster. Okay, so we have ingress. Okay. So we should now maybe try to create podinfo from a Helm repository and then try to reach it through the ingress. So I have a little bit of stuff here. I have these different manifests for Flux objects, and the way they're organized is based on what's called a bootstrap system. So this is test, though it looks more like stage; this is a test environment, and it has some Flux configuration, and this configuration is gonna load this specific Git repository and then constantly apply it to the cluster. So any change you make, including a change to this configuration itself, will be reconciled. But I don't know if I'm gonna do that. I'm gonna try to do it as simply as possible from the start. What do we have here? We have a Helm repo, very simple. This is a URL pointing to an index of all the Helm packages; this is the namespace we're gonna install it in; this is the name of the repository; and every minute it will be reconciled. So go back here, go to base; this is shared between both clusters. Okay, Helm repo. I'm gonna use kubectl apply -f on the Helm repo, which will create this object in Kubernetes.
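The HelmRepository manifest being described is small; a sketch along these lines (the podinfo chart repository URL is the public one, but treat the name, namespace, and interval as illustrative):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 1m          # re-fetch the repository index every minute
  url: https://stefanprodan.github.io/podinfo
```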
So right now, if I do... actually it's not gonna work, because Flux is not installed. So if we do k get... there's no such kind. If I try to import this config into Kubernetes, it's not gonna like it; the CRDs aren't there. So I'm gonna use this and install it this way. We just clicked on Enable GitOps here. This is a little bit different than bootstrapping; this is doing a flux install. You wanted to talk about that? Yeah, it's exactly what you said. It's not bootstrapping, it's just setting up Flux. If you delete Flux, it will not be self-healing and it will not manage itself. So this is not what you want to do when you have your final setup. This is for learning or experimenting; it's for a kind cluster. Honestly, there's no reason to have a permanent deployment on a kind cluster; it's not gonna help you. Yeah, we're trying to show different usage patterns for Flux, and often this is where people start, and it's easier to understand, because bootstrap is Flux managing Flux. So we start with one level of Flux. Okay, so this might bug out a little bit and keep running for too long, but it's now installed, I think. So if we run this again, apply the Helm repo: okay, that created this from this. If we reload, we can see that now, inside the sources, in the default namespace, there's this, which shows as undefined. I think it's not loaded yet. Let's wait; we can stall. So what we're waiting for here: these circles at the top should have green check marks in them, and these should not be running so long as well, these ones up here. Those are health checks for Flux itself. Okay, so they... Okay. There we go. So do you wanna tell us what the Helm controller or the source controller did when we created this object in Kubernetes? Yeah, go ahead. Okay, so the source controller is reaching out to the Helm repository, which hosts an index.yaml file.
And I'm gonna explain this in a little more detail than I normally would, because I wanna contrast it with something else in a minute. The index.yaml file contains all of the release metadata, if you're not familiar with how Helm works, for every published release of the chart, and for any other charts that happen to live in that Helm repository. So, I said I would contrast it with something else. This is different from an OCI repository, where if you'd like to list all of the releases in the repository, all you have to do is fetch the list of tags. It doesn't come with all the metadata. So you can get a thousand pages of tags much faster than you can download all of the metadata for something very large like Bitnami. And this is a feature that Flux is pushing forward. I'm not sure where the intersection of people are that would care about this, but people who are using Helm and automation together, that's Flux users. Sure, there are other solutions, but Flux is the only one that I'm aware of that uses the Helm SDK under the hood. So if Helm itself has a scaling issue, like index.yaml, it's gonna show up when you use it for automation. By and large, I haven't seen a lot of people who care too much about Helm OCI support unless they're Flux users. And since it's running all the time, it's like DevOps Borat, right? It's at scale. It's much worse if you run it every five minutes or every one minute. Yeah, when we're developing, we could run it every 10 seconds, but it would kill production. Okay, so the source controller fetched the artifacts from the Helm repository. Nothing has been applied to our cluster, but now Flux knows that there are some packages. So since we have a Helm repo, I'm also gonna create a Helm release the same way, directly, without using Flux... well, using Flux, but not using Flux to manage it. I'm just gonna tell Flux about it, and that's it.
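For contrast, pointing the same HelmRepository kind at an OCI registry, so listing releases becomes a tag fetch rather than a full index.yaml download, is essentially a type switch. A hedged sketch; the registry path is illustrative:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: podinfo-oci
  namespace: default
spec:
  type: oci                               # tag listing instead of index.yaml
  interval: 1m
  url: oci://ghcr.io/stefanprodan/charts  # illustrative OCI registry path
```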
Okay, so this is a HelmRelease. Let me first apply it and then test. All right, okay, it was created; the object's created, and now Kubernetes is doing its thing. This is a chart in the repository; a repository contains many charts. This is a version number, which is what this demo is about: we want to deploy different versions to different environments. And this is how a HelmRelease object knows about the Helm repository, through the sourceRef. So now they are connected, and whenever new stuff comes to the Helm repository, this HelmRelease can be updated too, depending on conditions. I also configured some values. These are the parameters you saw me pass through the command line before, but now they're passed through this configuration, which later can be stored in Git, and you can merge and do pull requests to change a configuration like you would with code. But right now we're just making sure that it says "test" on our app when it loads, so we know we're in the test environment. Let's reload just to see what we got. Green Helm repository, green HelmRelease, podinfo, and we can see what that created: a namespace, a deployment, and a service. A Deployment is a fundamental building block of Kubernetes. It describes how to create pods from container images, let's say; it does other things too, but that's a good enough simplification. So it's gonna run this app on this port, pass some parameters in there, and it's gonna create these pods, two replicas, and put them in the default namespace. They're gonna be called podinfo. It's got some health checks, and it looks like it all went well. So right now we don't have ingress, we can't reach the cluster, but we can use port forwarding to have a tunnel into the cluster. It's not for deployment, it's just for debugging. So we have a service, the podinfo service, and we see this service right here. So if you don't know, a Service connects to a Deployment.
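The HelmRelease walked through above looks roughly like this, a sketch: the version pin is the field the demo later bumps per environment, and `ui.message` stands in for the podinfo chart value that puts "test" on the app's banner.

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 1m
  chart:
    spec:
      chart: podinfo        # chart name within the repository
      version: 6.0.0        # the pin this demo bumps per environment
      sourceRef:
        kind: HelmRepository
        name: podinfo       # connects the release to the source object
  values:
    ui:
      message: test         # so the app shows which environment it is
```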
A Deployment describes a workload; a Service describes the connection parameters, the network parameters. So this podinfo service tells us that there are these ports that we can use, if we're in the cluster, to reach podinfo. So I'm gonna tunnel into the cluster now. We're gonna do k port-forward, and then the kind and the name of the resource. So the kind is service, and the name is podinfo; you can type svc for short. Then the port; the port was 9898. Okay, now Kubernetes is forwarding localhost into the cluster. So here we've got test; it's getting the values that we want. It's running version 6.0.0. Now I'm gonna show you, with no GitOps, how I'd run some other version. Let's just update it. I go to my HelmRelease for test, set it to 6.0.1, and this is what Flux does in the background in the cluster, what I'm doing manually now. So let me do this. Yep, now it's updated the object in Kubernetes to say that we should be looking for this version of this chart. Let's see what this means right here. Still the old version. Let's do this. So what we just saw there... can we go back to the podinfo screen for a second before you... oh, I thought it was not responding. Yep, it is not responding, because we're using a port forward. That's why we're showing you ingress, because this is an actual solution to a problem you may have if you've tried developing locally on Kubernetes. All right. Yep, it deleted everything and recreated it with the new config, so the connection was lost. The port forward... I know we said we're connecting to a service, but what it actually does is select a pod on the backend and fixate on that pod. So if that pod goes away, your port forward goes away. Yeah, exactly. So I'll re-establish the port forward, and I'm not sure the website will reload.
It did, and now we have 6.0.1. So now Kubernetes knows about the new version; everything is green. Okay, so maybe let's try now to do the same thing but with ingress, and then we'll get into Flux after that. So this is an ingress configuration that should work, and it says: if the HTTP host header is set to podinfo.test, then look for a service named podinfo, port 80 on the service. Maybe that's the problem. Maybe it should have said 9898; maybe that's what's not working. Hey. Yeah, that kind of looks like the problem, yeah. Let me try that. It would have been a huge disappointment if we didn't figure out what we did wrong in this live debugging session. Yeah, absolutely. All right, let's just try it. Okay, so we've got service podinfo, port 9898, it's an ingress type, and this is a URL path; so you can have the same host name with many apps at different URLs, very convenient. Okay, so Kubernetes doesn't know about the ingress yet; we're gonna kill this and apply the ingress. Okay, now it created the ingress. Nginx is running, and because of this object it knows that it should be listening for the podinfo host name. So let's do that. And it's all HTTP; we're not covering certificates. Look at that. So simple. Okay, so now we have what looks like a real URL, and if you have a local network you can have that, or an enterprise network, depending on what you have. Here I have host names, but if I go to localhost, same IP address, there's nothing there. Ingress works like a reverse proxy, I guess. Okay, cool. So this is progress. Now we've added ingress. Then let's do this: let's take everything and put it into Flux. So now, instead of us having to make these changes, Flux will continuously monitor and reconcile any changes from Git. We're no longer telling Kubernetes what we want directly; now we're writing everything to Git, and Flux is making sure it's synchronized with Kubernetes.
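The corrected ingress, host header podinfo.test routed to the podinfo service on its actual service port 9898, sketched out (names are as in the demo; the ingress class assumes the nginx controller installed earlier):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: podinfo
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: podinfo.test        # only requests with this Host header match
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: podinfo
            port:
              number: 9898    # the service port, not 80
```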
It is maybe hard at first, but it's very powerful. Okay, so this is a little taste of bootstrapping. This is a sync YAML, and I'm gonna just apply it right now and then it will pick up. It will create a GitRepository called auto-deploys, this is my Git repository, and it will create a Kustomization, which will load this Git repository and go to this path. The path is base/test, the same place where the sync file is, so it will find itself too. Everything in this folder it will load and keep constantly up to date. That will pull in the Helm repo we already have, this HelmRelease, and the ingress we already have. So nothing should change, but let me show you something. Right now, if I do k get ingress... I'm gonna just delete the podinfo ingress, okay. No more ingress. I am an attacker, or I pressed the wrong button, and now everything is broken. We don't want that. I'm gonna instead ask Flux to put it back to where it was. So let's just... okay. Let's see the YAML, okay. Okay, now Flux knows about these repos and they should be reconciled. You can go here, you can reload. Now we have two repositories. One is podinfo, our application code presented as a Helm repository; there are different ways of doing it. The second is our management, or fleet infrastructure, repo, here called auto-deploys, because that's the name of the talk. This is our control Git repository. We are pulling this Git repository every minute from the branch main, and Flux knows about it, and then it has a Kustomization that references this Git repository, and every minute it will check if the repository updated; if it did, it will look inside this folder, and everything inside will be applied continuously. So any change I make will be live within a minute. And I've been talking for a minute, so let's see if we got some ingresses. Okay, look, here's an ingress. Flux put it back. Let's go here.
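The sync YAML being applied pairs a GitRepository with a Kustomization along these lines, a sketch; the repository URL is a hypothetical placeholder for the speaker's own repo:

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: auto-deploys
  namespace: flux-system
spec:
  interval: 1m                                  # pull the branch every minute
  url: https://github.com/example/auto-deploys  # hypothetical placeholder URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: test
  namespace: flux-system
spec:
  interval: 1m
  path: ./base/test     # the folder that gets applied continuously
  prune: true           # objects deleted from Git get deleted from the cluster
  sourceRef:
    kind: GitRepository
    name: auto-deploys
```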
Let's see what we got. We don't have ingress. You've still got port 8080 in there; I'm not sure where that came back from. Oh yeah, sorry. Oh, we're back to where we started. How did we do that? Yeah, I don't know. Oh, is it on port 80 again? It was; how did we do that? It should have been 9898. Let's look at the ingress, I guess: k get -o yaml ingress podinfo. So it's hitting port 80. Oh yeah. Yeah, I think that's why. Maybe you forgot to save, commit, and push that change. That's exactly what I did, thank you. Novice error. So while we're on the subject, it's probably a good time to mention Weave GitOps, which is another open source tool. If you are thinking this is a bit overwhelming, we've designed Weave GitOps with that in mind. And the VS Code extension also has a similar feature. I'm gonna give Microsoft a little bit of credit here, because Microsoft built a Flux extension for AKS where you can go and install a managed Flux. That's not what we're here to talk about today, but we've taken some inspiration from that for the VS Code extension, to build a similar workflow. So "Flux config", in AKS terms, is kind of an umbrella for all of this. If you thought, "I have to learn about a GitRepository and also Kustomization, and then there are alternates to GitRepository, and I have to learn how to plug them all together," that's kind of a lot. This is more like a one-stop-shop entry point: click next, next, next, and by the end you're done. Forget you ever did this; it just works from now on. That's the experience. In Weave GitOps it's called GitOps Run, and you can skip the Git repository altogether; it will just sync from your local directory, which wraps back around to the problem we just had, where we forgot to commit. So. Yeah, I wanted to show you the reconcile feature. You can right-click instead of waiting one minute or five minutes. You can also suspend or delete, and there's a UI.
I'm gonna show you the UI, because I worked on it quite a bit. Add Kustomization: you can create a new Kustomization, export the YAML, and then put that YAML into Git and have Flux maintain it. So this "configure GitOps" workflow is very similar to the Flux config I just described. In fact, it's actually integrated with the Azure Flux config; all of these tools can be used together, in case that wasn't completely clear. Yep. So the new manifests are in Git, our bootstrap system picked everything up and replaced the ingress, and now the ingress has the right port for the service internal to the cluster. So if everything is right, we click enter and we get this. This is what it should say. Okay, so now we have an ingress that's managed by GitOps. Then what I wanted to show was the two paths: with Flagger, and with immediate upgrade. If you have immediate upgrade, now, with GitOps, you go to your HelmRelease. Let's say I put 6.0.2. Okay, so I'm not gonna apply this to the cluster directly. Like we said, the status... Do you want to use the VS Code Git extension? Yeah, it should be. So, changes: this is staged, okay, "6.0.2 for test", and commit. Part of this demo is showing that you can do all of this without leaving VS Code. We're both predisposed to using the terminal, so it's kind of hard for us to force ourselves into that workflow, but if you prefer the IDE, you don't have to leave the IDE. That's part of the experience too. Yeah, and there are really a lot of great plugins, like this GitHub one where you can see pull requests, and I don't have any right now, but issues too; you can manage everything from there. So we pushed 6.0.2. Let's see if it updated already: 6.0.1. It takes a minute; let me reload a bit. Maybe we can make it go a little faster. I think we can solve this. Oh, it looks kind of buggy, because it was 6.0.1 there.
This is another issue we'd love to solve: it doesn't live-update. It's not polling; you have to click the little circle to make it go. Yeah, each time it's running kubectl and querying everything, so it's not very fast, but all the information is there, conveniently presented. 6.0.2 is now available. The service endpoint: nothing changed on that side, but we made a change in Git and now we have it in our test cluster. Okay, so this is how we work with the test environment, but for the staging environment we want something a bit safer. So I'm gonna now create a staging cluster and put Flagger on it. And this might not work, because of port 8080 in the config. Let's read this here, the kind config. Okay, so this host port will be 8080. And the stage ingress: podinfo.stage, port 9898, in default. So while we're waiting for this part: the benefit of Flagger is that it's a progressive delivery tool, and if you are trying to roll out changes in a way that reduces the impact of a failure, Flagger is an excellent solution for that. If the demo works, we'll see exactly what we mean. So, using the UI, there's nothing here. Again, I'm gonna just install Flux instead of bootstrapping it. Flux is installed. Okay, so there's kubectl apply with -f, which takes a single file, and -k, which takes the kustomization file. That's not a Kubernetes object but a command-line configuration for kustomize, not in the cluster, and it lists all the other files, so it loads everything from this kustomization.yaml in this folder. So we have a HelmRelease, a Kustomization, an ingress was created, a GitRepository, and a HelmRepository. Now it should all be staying in sync; reload, see what it looks like. Undefined, undefined, undefined, progressing. Those undefined values appear before the Flux controllers have reconciled the resources, I think. I'm confused how that works; I've not really seen it much before, but I've seen a lot of it today.
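The staging safety net described at the start, gradual traffic shifting with automatic rollback, is configured with a Flagger Canary object roughly like this. A hedged sketch: thresholds and the metric name are illustrative, and a real nginx setup would also reference the ingress and a metrics provider.

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo
  namespace: default
spec:
  targetRef:                     # the workload Flagger manages
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  analysis:
    interval: 1m
    threshold: 5                 # roll back after 5 failed checks
    maxWeight: 50                # shift at most half the traffic to the canary
    stepWeight: 10               # move traffic over in 10% increments
    metrics:
    - name: request-success-rate # abort on elevated error rates
      thresholdRange:
        min: 99
      interval: 1m
```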
Maybe tell it to reconcile some more. Okay. It may have to do with the amount of load on your machine. Yeah, that might be why; I'm running quite a bit. Okay, progressing. So the manifests, the way they work, is you have a very simple manifest here. Let's look at the HelmRelease on stage instead. You have a HelmRelease; you've got spec, interval, chart, values. This is all that you give to Kubernetes, and then Kubernetes will give you a lot more back. It will add annotations, timestamps, finalizers, which prevent things from being deleted (I guess this is managed by Helm, so it shouldn't be easily deleted), labels, IDs, and finally status messages, which are a very critical tool for debugging. But it all looks good. This is green; everything is green. So if things go as promised, we should have an ingress here, podinfo.stage, on port 8080. Well, that's life in the big city. Yeah, I think that maybe, like you said, this is because kind does not want clusters with different host ports. There may be some configuration that we missed, but the kind documentation is excellent, and if you want to set up an ingress like this yourself, you can go directly to the kind documentation. Do you want to pull that up real quick, while we still have a second? Yeah, fine. Kind: creating a cluster. Do you want to go straight to the ingress one? So I did follow this initially to set up Flagger, and it didn't work, though; there was missing monitoring, so I had a custom Helm configuration. And actually I found this, from one of our maintainers, Stefan: flux-local-dev. This is a way to set up ingress on kind that's known to work; I'll share it. This flux-local-dev is another great example, by the way, if you're interested in using Flux and learning how it works in a local context. Like we said before, bootstrap is not necessarily for everywhere; bootstrap is the production way to install Flux.
There are other ways, like Terraform, but on a local machine it would be silly to use Terraform unless your purpose was to learn Terraform. So, do we have any questions? I think we're almost over, or a minute over, at this point. No questions? Okay. Can we get some idea of how familiar people are with Kubernetes and GitOps? Yeah, how many people are using Kubernetes already? Okay, it looks like about half. How many are using Flux? One. Okay, cool. I hope this landed, and I hope that you go and try out Flux now. Anything else? Any parting thoughts? Yeah, every cloud provider has a different setup, but it's the same principles. It's good to get started with kind because it's simple. Except in this situation; maybe you want a managed service like EKS. All right, great. Thank you. Yeah, thanks a lot, Kingdon. Appreciate it. Thank you for being a good sport. Yeah, me too.