Welcome to this session on the Kubernetes manifest lifecycle, what it is and how to get it right. I'm your host, Ole Lensmar, CTO at Kubeshop. I have a background in open source and a bunch of things over many years, and I'm going to talk to you about manifests. As I'm sure you're familiar, Kubernetes is very much focused around manifests, or YAML configuration files as you might know them. I'm going to talk about the lifecycle of manifests, about templating manifests, about some tooling and some best practices, and I'd obviously love to answer any questions at the end. I'm going to do a little bit of a demo as well. This is a pretty high-level talk, so if you're already deeply entrenched, hopefully there are still some new things in here for you. But if you're relatively new to Kubernetes, I hope you'll learn a bunch of new things and find some new ways to think about manifests and related workflows. So first of all, jumping straight in: what are Kubernetes manifests? If you want to break it down, manifests are specifications of Kubernetes objects in JSON or YAML format. I'm sure you've all seen those YAML files that describe an object that Kubernetes is going to manage for you once you deploy your manifest. Usually one object is defined per manifest file, but you can define as many objects as you want in a single file. Once you have your files, you manage them, and then you apply them to Kubernetes, and Kubernetes will try to create objects in your cluster corresponding to the configuration in the manifest files. On to basic manifest structure — I'm going to look at YAML throughout the presentation. Every manifest has an API version and a kind at the top, which specify which kind of object you're going to create: in this example a Service, at version v1. A name is also usually required for manifests, although there are some exceptions.
A namespace is definitely optional; it tells Kubernetes in which namespace you want to create the object. And then, maybe most important of all, is the content related to the actual object you're creating, which of course varies depending on the type of object. In this example I'm creating a Service, and the content of the manifest is the specification with ports, et cetera. This can of course be much lengthier and more complicated; this is just a simple example. It's important to understand versioning and schemas. As I said, the API version and kind tell Kubernetes which schema the manifest is using, and the schema itself defines the properties, arrays, types, enumerations — everything you can use to define the state of the object you're trying to create. These schemas are defined using JSON Schema and OpenAPI, with some limitations on what you can do; this is all documented on kubernetes.io. What's important here is not to confuse this version with the actual version of Kubernetes. If you're running Kubernetes 1.24, which I think is the latest for now, it supports both versions v1 and v1beta1 of the CronJob kind. So going back to the Service here with API version v1: your Kubernetes version might support v1 and maybe an older version, but that does not directly relate to the version of Kubernetes itself, and you'll have to keep an eye on the Kubernetes changelogs to see which versions of API objects or manifests it supports. For example, if I had a CronJob using the older v1beta1 schema, that will be removed in Kubernetes 1.25 — something to keep an eye on. And of course, if you're creating new manifests, always try to use the latest version, with the caveat that you make sure that's actually a version supported where you're deploying.
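To make that structure concrete, here's a sketch of a minimal Service manifest along the lines of the example on the slide — the name, namespace, and port values are just placeholders:

```yaml
apiVersion: v1            # version of the object's API schema
kind: Service             # which kind of object to create
metadata:
  name: my-service        # a name is required for almost all objects
  namespace: my-namespace # optional; defaults to "default" if omitted
spec:                     # object-specific content; varies by kind
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

The `spec` section is where most of the real work happens, and its schema is entirely determined by the `apiVersion` and `kind` at the top.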
So if you're deploying to GKE or Amazon or whatever, make sure that the target Kubernetes version there supports the version of the objects you're trying to create. Okay, rushing forward. The next thing to be aware of is that manifests pretty often refer to each other — an object in Kubernetes rarely lives entirely on its own, so there are plenty of references from objects to other objects. Just a couple of examples here. Some references are purely name-based: this is a configMapRef inside a Deployment, and it refers to a ConfigMap by name. In this specific case it's actually an optional reference — you can see optional is set to true. Another very common type of reference is the label-based selector. Here, for example, we have a Service that applies to objects or pods carrying a label with this name and this value; it's a way of selecting which other objects the Service applies to. And finally, more complex object references are also common. Here you can see a RoleBinding which refers to a Role not just by name, but also by the kind of object and the API group it's referring to — these references can be pretty complex. This is important to know when you're creating your manifests: get these references right, because if you deploy manifests whose references are wrong, they're not going to work, or at least not as expected. So good to keep an eye on. One thing Kubernetes adds to your manifests once you've deployed them is a status section, usually at the end of the YAML file. The status tells you the current actual state of the object, and I'm going to show you an example a little later on.
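As a sketch of those three reference styles — all names here are placeholders, not from the slides:

```yaml
# 1. Name-based reference (inside a Deployment's container spec)
envFrom:
  - configMapRef:
      name: app-config
      optional: true      # deploy still succeeds if this ConfigMap is missing
---
# 2. Label-based selector (in a Service spec)
selector:
  app: my-app             # the Service targets pods labeled app=my-app
---
# 3. Full object reference (in a RoleBinding)
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role              # both kind and API group are part of the reference
  name: pod-reader
```

Note how the RoleBinding reference is strictly typed, while the selector is just a label match — a typo in either silently breaks the link.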
The status is not something you add yourself, but once you've deployed your manifest and you look at it running in the cluster, you'll see information on the status of the object. This varies depending on the type of object — on the lifecycle the object itself goes through, and what kinds of states, changes, or metadata Kubernetes and the corresponding components can provide for that object. The important thing to know when you're creating a manifest is that the status is not something you're expected to add yourself; it's always added by Kubernetes. So, speaking of the lifecycle, let's take a little step back. We've talked about very high-level aspects of Kubernetes manifests, but let's look at the lifecycle of a manifest itself. Usually you'd be managing your manifest files and storing them in Git; you create them, you edit them, and there's a preview step I'll get back to a little later. One thing that is super important when you're working with manifests is to validate them — I'll talk more about validation and the different types of validation in just a second. And then finally, you apply your manifest. What that really means is that you're taking your YAML or JSON file and uploading it to Kubernetes, telling it: hey, I've described my object here, I want you to create an object in the cluster corresponding to my manifest. What Kubernetes is going to do is take your manifest, store it in etcd, and then, using its components, try to create an object corresponding to the description in your manifest. That object will then go through a whole lifecycle of its own, which varies depending on the type of object — I'm not going to drill into that here. The point is that a corresponding YAML is stored in Kubernetes, which you can use to debug and troubleshoot.
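As an illustrative sketch — assuming a Deployment, with made-up values — the status section Kubernetes appends might look roughly like this:

```yaml
status:
  replicas: 1
  readyReplicas: 1          # how many replicas are actually ready
  availableReplicas: 1
  conditions:
    - type: Available
      status: "True"
      reason: MinimumReplicasAvailable
```

You read this section to troubleshoot; you never write it yourself, and Kubernetes will overwrite it if you do.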
If your object is not working as you might want, you'd look at the deployed YAML manifest and the status I talked about earlier. You might do some hotfixes directly in your cluster to see if that helps, or if you're more process-oriented, you'd go back, make changes, validate them, apply them, and go through this loop until you've hopefully got your object into the state you desired. And at the end of the day, or the week, or the month, or the year, you delete the object and it's gone from your cluster. So that's the very high-level lifecycle of your manifests, both outside your cluster as files and inside your cluster stored in Kubernetes. Let's have a quick look at these steps. Creating and editing manifests is pretty straightforward: you'd use your IDE to create a manifest, maybe copy-paste from another manifest you found online or use one as a blueprint, and many IDEs have plugins to help you with snippets or templates. Another way to create a manifest is to use kubectl, the command-line tool included with Kubernetes, which has commands that will generate vanilla manifests for you to work from — I'm going to show this a little later in the demo as well. You could also connect to your cluster, extract manifests from there, and use those as blueprints. But ultimately these are text files that you work with in your IDE or code editor. Once you've created them, let's look at validation. Validation is a super important part of working with manifests, and something that's often overlooked. What many people do by default is, of course, syntax validation: making sure the YAML syntax is correct, which your editor is definitely going to do for you. The next thing you want is schema validation.
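For example, running `kubectl create deployment nginx --image=nginx --dry-run=client -o yaml` prints a vanilla Deployment manifest you can save and work from — roughly like this (trimmed slightly for readability):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx          # ties the Deployment to its pods by label
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx    # no tag specified — policy tools will flag this
          name: nginx
```

The `--dry-run=client` flag means nothing is sent to the cluster; kubectl just generates the YAML locally.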
So: make sure all the properties and values are in line with the target schema for the object you're trying to create — we talked about this earlier. Once again it's important to know which Kubernetes version you're targeting, whether you're using the right version of the kind of object you're creating, and whether all the properties and values are compliant with the schema for that object. Usually IDEs will help you here. For example, here I see an error because I made a deliberate spelling mistake: I typed "names" here, but it should be "name", and it's complaining that the volume is missing a name. The next step of validation is reference validation — the links to other objects we talked about. Here we can see two links referring to ConfigMaps, and here's a link to a Secret which doesn't exist in this manifest or any of my other local manifests, which in this case is marked as an error. It's also set to optional: false; if it were optional: true, this would pass, because it wouldn't be a required reference. This can be tricky, because sometimes the objects you're referring to are not in your local manifests — they're already running in your cluster. So if you use a tool to validate references and it tells you, hey, you're trying to refer to an object that doesn't exist in your local manifests, that object might still exist in your cluster, and the deployment might work fine. So it's something to be aware of: even if local validation fails, it might actually work in your cluster. You'll have to check that, or use a tool that validates references against your cluster as well and not just within the local files you're working with. And finally — maybe the most important, and definitely something not to forget when you're moving into production — is policy validation.
Policies are something you can define with regard to a couple of aspects. One is more local policies, in the sense that you might have policies around naming, labels, annotations, namespaces, et cetera — things specific to your project or team. Other policies are around performance or security, and there's a bunch of tools out there that will help you validate that you're not creating insecure manifests — in the sense that the objects you're creating might be granted too many privileges, or request too few or too many resources, or in some other way introduce things into your cluster that could have a negative effect on the stability or integrity of your applications. So I definitely urge you to look at tools that do this for you. For all of these validations, there are plenty of open source tools to help you get this right — I'll mention a couple at the end, and they're easy to integrate and use. It's definitely something you should be doing, and discussing within your team if you're working with others: which policies are important to us? Discuss with your DevOps people and the people who manage your infrastructure — this is something they'd be especially concerned about as they put manifests and applications into production environments. Okay, moving ahead: I've created my manifests, I've validated them, they're all green, I've fixed all the security issues. The next thing is to actually deploy my manifests to my cluster. This is straightforward using kubectl apply, or tools like Helm and Kustomize, which provide commands to deploy the manifests you're working with to your cluster. That works really well in a local development workflow. Eventually, though, you're going to want to automate the deployment using CI/CD.
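Two of the most common policy fixes are pinning image versions and setting explicit resource requests and limits. As a sketch — the tag and resource numbers here are placeholder assumptions, not recommendations:

```yaml
containers:
  - name: nginx
    image: nginx:1.23     # pinned tag instead of the implicit :latest
    resources:
      requests:           # what the scheduler reserves for the pod
        cpu: 100m
        memory: 128Mi
      limits:             # hard ceiling enforced at runtime
        cpu: 250m
        memory: 256Mi
```

Policy engines typically flag both the unpinned image and the missing limits out of the box, which is exactly what you'll see in the demo shortly.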
And if you're adventurous, or if you're scaling up, you might look into adopting something called GitOps, which is pretty popular now. You've surely heard of tools like Argo CD and Flux that promote a GitOps workflow. What GitOps boils down to is not that complicated: it basically means that you're managing all the state of your cluster — meaning all the manifests, infrastructure as code, and configuration management — in Git, as you probably do already, and then using a tool to automatically synchronize that source of truth into your cluster itself. So for all the changes you make, you just push your configuration to Git, and then the tool or your CI/CD system will make sure your target cluster actually has the same state as the state you're describing in Git. There are a lot of advantages here. First of all, you have a record of the state of your cluster, and you can always recreate your clusters, because nobody is going to take backdoors to configure them — hopefully. You have audit logs, and a level of security and access control over who can change what when it comes to the configurations in your cluster. And there are a lot of great tools to help you establish these workflows and manage the reconciliation of what you're describing in Git with what you're actually running inside your cluster. Okay, that was a really quick overview of the basic lifecycle of manifests. I'm going to do a really short demo of some of these things, using kubectl and a tool called Monokle, all free and open source. And I'm handing it over to myself — good luck, Ole. Okay, thank you so much, Ole. I'm just going to do a really quick demo of the lifecycle from creation to deploy. I'm going to start here in my command prompt and run a command to create a Deployment — the same command you saw in the presentation earlier.
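To give a hedged sketch of what that reconciliation setup can look like — assuming Argo CD, with a placeholder repo URL, path, and namespace — an Application resource pointing a cluster at a Git repo might look roughly like this:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/my-org/my-manifests.git
    targetRevision: main
    path: overlays/production   # which folder of manifests to sync
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:                  # keep the cluster in sync with Git
      prune: true               # delete objects removed from Git
      selfHeal: true            # revert manual changes made in-cluster
```

Once this is applied, pushing a change to the repo is all it takes to roll it out — the tool does the apply for you.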
So I'm using kubectl, and it's going to create a Deployment for me. Now I'm going to jump over to the Monokle tool, and this is the Deployment that was just created — let's quickly look at it here to the right. As you can see — let me resize — this is the YAML that was created by kubectl. You can immediately see that there are some things that aren't in line with the schema; these are marked as errors, and I'm just going to delete those because I don't like these errors. You can also see that it actually created the status property, which it shouldn't, so I'm going to remove that. For now, everything seems good from a validation point of view. Next I'm going to enable policy validation, just to show you what that can look like. I've now done that, and you can see that the default YAML created by kubectl actually violates a bunch of policies related to security and performance. I'm going to find one of them in this list of policies — let's disable all of them and enable just this one. This policy requires that you specify a specific version of the images you're using: if you don't specify a version, Kubernetes will use the latest one, and that's a potential security issue because you might not know what the latest version contains. So we should be more specific here — let's write the actual version we want to use. As you can see, the error went away. So now I've created my manifest, I've edited it here, I've validated it with policies — and ignored some of them — and now I'm ready to deploy. I'll just press the deploy button here and deploy this into my local minikube cluster — done. Let's connect to minikube so we can see what's been created there. We can close this thing on the left. Here we can see the actual Deployment — this is the object I applied to Kubernetes.
Now you can see that Kubernetes has added a status property to it, and it's also added a bunch of other things, like default property values that I didn't specify — Kubernetes adds these for you. It's also created the actual nginx pod for me; you can see it here, with a status showing it's up and running and everything is good. So now let's end this whole thing by deleting the object — deleting the Deployment from my cluster — and now it's gone. If I reload, you'll also see that the nginx pod has been removed from my cluster. So we went through the entire lifecycle in just a couple of minutes — three and a half, as I can see. Back to you, Ole. Awesome, thank you so much for that demo, Ole. Now I'm going to look at some slightly more advanced concepts related to manifests. One is templating, which is something you'll pretty quickly run into. Basically, let's say you have a set of manifests that you're deploying to different environments — dev, stage, prod — but there are some differences: you might have different resource limits, different network settings, different namespaces as you deploy these manifests, while a lot of the other settings and properties in your manifests stay the same. So you want some way to have a core set of manifests and then modify them in line with the changes you might want to make across your environments. There are a couple of different approaches to this, and I'm going to look at two of them: Kustomize, which uses a YAML-native approach, and Helm, which is not YAML-native but nonetheless popular and great — just to give you a high-level view of your options. There are other alternatives, as always in this world, like cdk8s and similar tools that let you generate your manifests from code.
So if you prefer to write code and don't want to write YAML, there are a lot of great tools out there to help you adopt that approach. It's not something I'm going to touch on here, but it might definitely be worth exploring if that's in line with how you want to do things. Let's start with Kustomize. Kustomize is Kubernetes-native configuration management — you can go to kustomize.io. I think the big advantage of Kustomize is twofold: one, it uses YAML for templating, so it doesn't introduce another templating language or format; and two, it's built into kubectl, so it's pretty native to the Kubernetes ecosystem. What it basically does is this: you have your base manifests, as I was showing on the previous slide, and then you provide what are called overlays or patches, which Kustomize applies to those files to generate your final manifests. These overlays and patches modify your base manifests as needed — they might set a certain namespace, or change some properties, or whatever you want to do specific to a target environment. Looking at a common file structure: you'd have a base folder with your base manifests, and then folders called overlays — here a dev and a production folder, each of which has overlays for different replica counts — plus the actual kustomization.yaml file, which tells Kustomize how to overlay these files onto the base manifests. I'm going to show you a very quick demo of this. So that's one way of working; the other is using Helm. Helm has a huge following and takes a slightly different approach: it uses a custom templating language based on Go templates, and it packages templates into what's called a Helm chart — maybe a term you've heard — and a Helm chart produces Kubernetes manifests when you run or deploy it.
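Before we get deeper into Helm: the Kustomize layout I just described might look like this as a sketch — folder and file names are placeholder assumptions:

```yaml
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - service.yaml
---
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base              # pull in the shared base manifests
namespace: production       # applied to every object from the base
patches:
  - path: replica_count.yaml  # overlay that bumps replicas for production
```

Running `kubectl kustomize overlays/production` (or `kustomize build`) then emits the final merged manifests without modifying the base files at all.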
Helm charts are parameterized using values files — YAML files that provide properties as input for the templates. Helm charts are usually distributed through Helm repositories; you can go online — Artifact Hub, I think, is a big one — and find Helm charts for most common applications. Everything from Minecraft to MySQL has a Helm chart, so it's really easy to deploy them into your cluster. Ultimately, Helm takes these templates and the configuration files, processes them, and creates plain manifests as output, and that's what you then apply to your cluster. A quick look at the file structure of a Helm chart: you have a Chart.yaml file, which is the metadata file for your chart — licenses, et cetera. The important thing is the templates folder, where you have your actual templates. As I said, these are Go templates, but they could also be vanilla manifests that don't do any interpolation of configuration values. Once again, I'm going to show you a quick example in the demo. Ultimately, Kustomize and Helm are tools that solve roughly the same problem, but something that's increasingly common is to use the two together — they're not inherently opposed to each other. A common approach is to use Helm for packaging your applications and providing configuration that's specific to the application itself. So if you're building an application internally, you'd package all the components you want to deploy into Kubernetes in your Helm chart and add configuration values related to the application itself; and then your DevOps team or SREs can use Kustomize to overlay any configurations that are specific to the infrastructure, which you wouldn't be aware of when building the Helm chart. This is actually a really nice approach.
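As an abbreviated sketch of how that interpolation works — the chart contents here are assumptions, not from a real chart — a template and its matching values file might look like:

```yaml
# values.yaml — the parameters users of the chart can override
replicaCount: 2
image:
  repository: nginx
  tag: "1.23"
---
# templates/deployment.yaml — Go template syntax, abbreviated
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-deployment
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```

`helm template` or `helm install` fills in the `{{ … }}` placeholders from the values file (and any `--set` overrides) and produces plain manifests.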
So as the application developer, you create your Helm chart and make sure it's a nice, self-contained package, and then once it gets deployed, your infrastructure team can use Kustomize to set security settings, work with secrets, network configurations, et cetera — whatever ultimately fits the application into the infrastructure. Thinking again about the lifecycle: how does this fit in? If you remember, I initially had a preview step in the lifecycle, and this is super important, I think. When you're using Kustomize or Helm or any other such tools: before you apply their output, preview it, and then validate that output. Similar to the validations we talked about earlier, run policy validations in particular — and maybe link and reference validations — against what Kustomize or Helm generates for you, before you deploy it to your cluster. It's pretty common that you'll find policy violations, especially around security, in the manifests generated for common tools, simply because the teams that create those tools don't know what your policy or security requirements are. This is the nice part where your infrastructure team can take the output from Helm or Kustomize, customize it further, and add the security settings, et cetera.
So just going back here to put that into context: you'd create your manifests as before, work with your templates, and then preview the output of the templating engine you're using, validate that output, and only then apply it — instead of blindly asking Helm or Kustomize to deploy to your cluster, which is the common approach. I'm sure that works most of the time, but especially if you're deploying into production, adding that additional validation step can be a huge time saver when it comes to debugging. Okay, I'm going to hand it over to myself again for a short demo, this time with Kustomize and also a little bit of Helm, and then I'll be back. Good luck with the demo, Ole. Thank you, Ole. Just a really quick demo of Kustomize and Helm. I've opened a kustomization here from the examples folder in the Kustomize repository — it's an LDAP server. You can see here on the left the base folder containing some basic manifests — a Deployment and a Service — and then overlays for production and staging. If we look in here, you can see that production has an overlay for the Deployment, and staging has a different overlay with a different replica setting. What's interesting for me to do here is, for example, if I want to see what I'd actually be deploying in production, I can use this preview functionality, and now I can see the actual objects generated by Kustomize: the Deployment, a ConfigMap that was generated, and the Service. I can also see the links between the objects that refer to each other, and I'm not getting any errors or anything like that. As before, I can turn on OPA validation, and here I'm going to see a bunch of errors, once again related to security policies that aren't set in these manifests.
So that might be a good cue for me to fix those before I deploy anything, but let's disable those and go back. That was a really quick view of Kustomize; let's have a really quick view of Helm as well. We're going to look at a Hello World Helm chart in Amazon's example repository. So let's go over here — here's the actual Chart.yaml file; you can see there's not very much there. There's only one template file in this Helm chart, and you can see that it uses the Go template syntax to pull values from a values file, specified here, which has some custom properties. I'm going to do the same thing here and use the preview button to see the objects Helm would generate if I ran this Helm chart. I can see that using Helm with this template and this values file would have generated these two manifests, or these two objects: the Deployment and the Service. Once again, no errors; everything lines up neatly when it comes to links, et cetera. Just for the fun of it, I'm going to enable the OPA policies, and once again you'll see security- and performance-related warnings which I should probably fix if I wanted to deploy this into production — but for learning purposes, these are something I can blissfully ignore. Okay, that was a super quick demo of templating and what it can look like with both Kustomize and Helm. And now back to you, Ole — thank you. Awesome, thank you so much. I hope that gives you a little bit of insight into how Kustomize and Helm work. Let's wrap up with tooling — I hope this isn't too confusing. We have the lifecycle steps here, and here we have the different types of tools. kubectl is great for creating and applying objects. I think the knee-jerk reaction would be to use your IDE, and IDEs are great for editing, obviously, but they usually rely on plugins for many of the other features.
Some of those plugins might be bundled, some might be third-party. The risk, of course, is that the plugins themselves aren't very well aligned or integrated, so you'll have to figure that out: choose, install, and learn each plugin, and see how to get them to provide you with a nice workflow. For validation tools, there's Trivy, Kubescape, Kubeval — you name it; if you Google it, you'll find many. They obviously apply to the validation phase, and some of them provide plugins for your IDE as well, so they can easily be integrated there. We talked about Helm and Kustomize — obviously for creating, previewing, and applying, not for editing; that's where you'd use your IDE. And then you have a bunch of cluster tools — Lens, K9s, and others. These tools are great for working with manifests once they've been deployed to your cluster; they're not huge on the earlier phases when it comes to working with files. Some of them might have the ability to apply manifests as well, but they are very much focused on working with your cluster, which is great, and they're great at that. Finally, there are a couple of specialty tools out there claiming to be Kubernetes IDEs. The goal of these is to provide an integrated workflow for everything I've been talking about: instead of you having to pick and match plugins and try to get them to work together, tools like Monokle provide an integrated and hopefully more productive experience. Usually they're still built on the same validation tools and on Helm and Kustomize, so it's more about providing a UI and workflow on top of tools that are already out there. Usually, I think, you'd start with kubectl and maybe your IDE, you'd throw in a validation tool, you'd start using Helm and Kustomize, maybe use a cluster tool — and then you'd have this mishmash of tools that hopefully work well together.
And then as you move forward, you might be saying: I don't have the energy or time to keep all of these different things up to date — how about giving one of these integrated tools a try, something that makes you more productive right from the start? But obviously it's up to you; we all have our preferences there. So, finally, best practices. Understanding manifests and their lifecycle is important. Use the latest stable API version for your manifests, as I mentioned. Keep your manifests simple; don't over-specify default values — that can even be a security risk — and the easier they are to read, the easier it will be for others to manage them. Use templating systems, but use them wisely: what I mean is, don't throw yourself into complex Helm or Kustomize setups until you actually need them. Keeping it simple is always good, at least initially. Validation is super important — automate it as much as you can, using tools like the ones I've mentioned to validate your manifests before you deploy them to your cluster. People spend so much time debugging things in their cluster — hours of "why is this not working?" — when a simple pre-deployment validation would have shown that you spelled a name wrong and you're referring to a ConfigMap that doesn't exist, or something like that. And finally, I did mention GitOps. GitOps is awesome, and it's a fantastic approach to managing cluster state. It is, though, pretty complex, and it requires buy-in from your entire team — it's not something one person can do while the rest keep working manually. It's something your whole team adopts, with workflows to match, so make sure everyone understands that and aligns with the benefits and the workflows before you adopt GitOps. And that's it. I hope that was somewhat insightful. Happy to stick around and answer some questions.
You can always reach me at ole at kubeshop.io. Thank you so much for listening, and have a great conference. Goodbye, everyone.