Hi everyone, thanks for tuning in. My name is Tyler Gilson. Today we'll be discussing building apps in Kubernetes: the unforgiving sea of containerization and the lifesaver tools. A little bit about me: I've been working with Kubernetes projects since 2019. I love building distributed systems, developing POCs with all the latest and greatest Kubernetes projects, and sharing what I learn with the community. I'm a Principal Software Engineer at Spectro Cloud. Feel free to connect with me on LinkedIn or Twitter; I'd love to hear your thoughts. Okay, the agenda for today. Briefly, we will touch on inner versus outer loop development and how that relates to the software development lifecycle. We'll then look at a suite of tools that I hope will help you achieve greater efficiency in your personal workflow. And then we'll look at a demo application, the Dad Jokes Generator, which showcases some of the tools that I'll be discussing. Moving on. Most of us are already familiar with the software development lifecycle, and perhaps you've heard of inner versus outer loop development, but just to set the stage for some of the tools I'll be presenting today, I'd like to touch on this a little bit. There's been a movement over the past decade or so to "shift left", so to speak, in our development workflow, meaning that we take a lot of the practices typically invoked in the so-called outer loop, as we get near to production, and integrate and execute those tasks earlier in the development lifecycle. This is desirable because it effectively allows us to capture security issues and perform a variety of tests early on, catching bugs and ensuring high quality. The downside is that it imposes ever-increasing cognitive load on us as developers. However, this can be mitigated by understanding and using certain tools in the ecosystem that make it easier, because the goal of achieving parity between the inner and outer loop is valuable.
It's just a question of how we do that without making our lives overly complex, so I'll touch on that further today. Okay, let's look at some tools. Typically we start off coding on our laptops, in our local development environment. However, our laptops sometimes get coffee spilled on them, and other things can happen. So if I were to ask you right now, how mad would you be, on a scale of 1 to 10, if your laptop broke? Hopefully the answer would be nowhere close to 10. If it was close to 10, then you should be considering some of the new tools that have been developed recently, such as Gitpod. There's a suite of others, but essentially they allow for a containerized development environment. This solves the classic "it works on my machine" problem, whereby some advanced configuration is necessary to install dependencies and set up test environments before you can do development against a particular repo. Nowadays it's simple to containerize all of that complex setup so that new developers can be onboarded instantly and seamlessly. Basically, like a container, your development environment is portable, shippable, and shareable. Definitely something to look into. Another crucial aspect of the development environment, as everyone knows, is secrets and environment variables, and the last thing we want to do is check those sensitive pieces of information into version control. So I'll be touching on a tool during the demo known as Gitleaks. Gitleaks is just one of many plugins that can be utilized by the pre-commit tool; pre-commit will invoke the Gitleaks plugin to prevent us from committing sensitive information. This is a very useful tool that we should all be aware of. And lastly, when writing our code, we should of course keep in mind the concept of a 12-factor app.
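As an aside, a minimal sketch of the pre-commit plus Gitleaks wiring looks like the following; the hook repository is the standard Gitleaks one, but the pinned revision is illustrative:

```yaml
# .pre-commit-config.yaml: register the Gitleaks secret-scanning hook
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.2   # illustrative version; pin to a real release tag
    hooks:
      - id: gitleaks
```

After adding this file, running `pre-commit install` registers the Git hook so every `git commit` is scanned for secrets before it lands.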
The reference here, 12factor.net, is something every developer should refer back to from time to time, even if we're already familiar with it, so I encourage you to take a look. All right, we've written some of our code. Now we need to build and deploy it. How do we do that? Well, in the early days of the container world, we might have done that with Docker Compose. Nowadays, however, Kubernetes has sort of eaten the world in the software industry, so I'll be focusing primarily on Kubernetes. Most often we leverage tools such as Docker to build our container images, and therefore we deal with the Dockerfile. Then we wrap all of that in automation, typically using a tried-and-true tool like Make. It's been around for almost 50 years now; it's battle-tested, and I encourage the use of Make. But there are some fancy new tools worth checking out, notably Taskfile and Earthly. Earthly is very interesting because it allows you to execute all of the typical tasks you might consider as Make targets inside of a Docker container, which allows for ultimate consistency across environments: it'll run exactly the same in your CI pipeline as on your local machine, and it's architecture-independent. Definitely a cool new solution to check out. And then, let's say we've figured out how we're going to automate the build of our containers. We've written our Dockerfiles, and now we need to deploy to Kubernetes. Well, we need manifests for that, and we're typically not deploying just raw manifests or raw YAML. The two solutions most prevalent in the industry are, of course, Helm and Kustomize. But I do want to mention that each has its own place. People can often misuse these tools.
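To make the Earthly idea concrete, an Earthfile reads like a cross between a Makefile and a Dockerfile, with every target running in a container. The sketch below uses illustrative target, path, and binary names, not ones from the demo repo:

```earthfile
VERSION 0.7
FROM golang:1.21
WORKDIR /app

build:
    COPY go.mod go.sum ./
    RUN go mod download
    COPY . .
    RUN go build -o joke-server .
    SAVE ARTIFACT joke-server

docker:
    COPY +build/joke-server /usr/local/bin/joke-server
    ENTRYPOINT ["/usr/local/bin/joke-server"]
    SAVE IMAGE joke-server:latest
```

Running `earthly +docker` then produces the same image locally and in CI, regardless of host setup.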
In general, I would suggest that anything user-facing, involving configuration that is not internal to your software, should be done using Helm and the templating engine that Helm provides. Internal, environment-specific customizations should then be applied using Kustomize, and that can be done as a post-rendering step: you've already rendered your Helm chart, and then you customize it further with Kustomize. CUE is another emerging technology worth checking out. It provides strongly consistent static typing for your manifests and can execute tests against them, while of course reducing boilerplate much the same way that Helm does. All right, so now it comes time to deploy our code. We've built our containers, we have automation in place, and we need to get that running and do some debugging and development, preferably in a local environment; some popular solutions there are minikube, kind, and k3d. However, while developing locally is the goal, it's not always possible, because some of the time we can benefit from having access to a more complex development environment that consists of tools and services developed by other teams. Perhaps that whole ecosystem is complex enough that it's difficult, if not impossible, to stand up locally. In that case, where these other tools I have listed here really shine is that they provide a bidirectional file sync that enables us to do in-container development, what I call a full dev experience. Skaffold, DevSpace, and Tilt enable you to have that complex remote development environment, perhaps running in a Kubernetes cluster hosted by a managed Kubernetes service, while you have your local IDE and you're performing development against one or more microservices in that cluster using something like DevSpace, which we'll see in the demo.
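One concrete way to wire up that post-rendering step is Helm's `--post-renderer` flag, which pipes the fully rendered chart through any executable. This wrapper script is a sketch under assumed paths (`base/`, `overlay/dev`) and an assumed kustomization layout, not something from the demo repo:

```shell
#!/bin/sh
# kustomize-wrapper.sh: Helm sends rendered manifests on stdin
# and expects patched manifests back on stdout.
cat > base/all.yaml                 # base/kustomization.yaml must list all.yaml as a resource
exec kustomize build overlay/dev    # overlay patches applied on top of the Helm output
```

It would then be invoked as, for example, `helm upgrade --install my-app ./chart --post-renderer ./kustomize-wrapper.sh`.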
What it does is the bidirectional file sync mirrors changes in your local IDE directly into your development container. You can configure things such as the restart helper to automatically recompile your code when certain files change. You can use Delve for remote debugging and step through breakpoints, just like you normally would. So it's the full experience of your typical development in your local IDE, but it's executing remotely. The main problem that solves is the classic "waiting for the image to build" problem, which isn't so much a problem as it is tedious. I think these days we're all familiar with the boring wait as we build our image, push it, bounce the pod, and wait for the new image to get pulled down, which just increases the amount of time it takes to develop software. All right, moving on to security. We've gotten our whole pipeline in place, but we're far from done, because throughout that whole process we need to keep security in mind. I'll go through a few tools now that can help in that regard. For secrets management, SOPS and KSOPS are lightweight solutions that one can consider. They enable a GitOps-based workflow that allows you to check sensitive information directly into version control. With SOPS, you can encrypt sensitive manifests, and that can be done using a key management provider like AWS Key Management Service or Azure Key Vault. It also supports PGP, but that's not quite as secure; that's more for development. It's what we'll show in the demo, however. KSOPS is just a Kustomize plugin that allows you to invoke SOPS for encryption and decryption directly as part of your Kustomization. And then on the other side of the spectrum, we've got Vault, which is an enterprise-grade solution that can often become necessary.
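For reference, the SOPS side of that workflow is usually driven by a small `.sops.yaml` of creation rules; the key fingerprint and path regex below are placeholders:

```yaml
# .sops.yaml: which keys to use, and which fields to encrypt
creation_rules:
  - path_regex: .*secret.*\.yaml$
    encrypted_regex: ^(data|stringData)$    # encrypt Secret payloads, leave metadata readable
    pgp: "85D77543B3D624B63CEA9E6DBC17301B491B3F21"   # placeholder key fingerprint
```

With that in place, `sops --encrypt` and `sops --decrypt` pick the right key automatically based on the file path.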
It allows for more flexibility and dynamic substitution of values, over and above having to actually check encrypted manifests into version control. The benefit of Vault as well is that it's, of course, cloud-agnostic. Lastly, Trousseau is worth mentioning. It uses the Kubernetes KMS provider framework, and what's unique about it is that it allows for an "HA" experience, whereby you store those same secrets in multiple key management providers; if one fails, it will fall back and retrieve values from whichever provider is available. Okay, so we've got our secrets management solution sorted. Now we'll talk about our Dockerfile, and other things of course, but that's where Checkov comes in. Checkov has support for hundreds of policies that have been developed to do static analysis of configuration files, such as Dockerfiles. These policies are developed by experts and can easily screen for adherence to industry best practices. Dive can also be used in CI pipelines. People often think of dive just as a tool for doing deep inspection of the different layers in a Docker image, as that is its intended purpose, but it also has the ability to perform analysis against certain metrics, like space efficiency. You can fail a CI pipeline, for example, if dive considers a Docker image to be too bloated and space-inefficient. And on to signed artifacts. Cosign and Notary are two competing solutions in the space, although Cosign is sort of the leading solution at this point. What it allows you to do is generate a cryptographic signature and attach it to an artifact. Signatures can be uploaded to a variety of locations; today in the demo we'll look at the Rekor transparency log.
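To make the dive CI usage concrete, dive reads pass/fail thresholds from a small config file; the threshold values below are arbitrary examples:

```yaml
# .dive-ci: fail the pipeline if the image wastes too much space
rules:
  lowestEfficiency: 0.95          # minimum acceptable layer efficiency
  highestWastedBytes: 20MB        # cap on bytes duplicated or overwritten across layers
  highestUserWastedPercent: 0.10
```

Running dive in CI mode (for example `CI=true dive myimage:latest --ci-config .dive-ci`) then returns a non-zero exit code when a rule fails.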
What we can do then is couple these signed artifacts with policy-as-code solutions such as Kyverno or OPA, and enforce policies, for instance only allowing running containers that use images that have been signed. We'll see that in the demo. More security tools: we have the concept of an SBOM, a software bill of materials. There's been a lot of hype about this lately, but it's really not just hype. With an SBOM you can generate a descriptive summary of all of the dependencies entailed by your code repository. You can generate an SBOM using tools like Syft, bom, or Tern, and they can target a file system, a Docker image, or even a binary. When you get this dependency summary, you can store it in a variety of formats. The machine-friendly format would be the SBOM itself, which can then be piped into a tool like Grype, Trivy, or Clair to do vulnerability scanning. What that means, essentially, is looking at the exact components mentioned in the bill of materials and their versions, then comparing those against open source vulnerability databases like the NIST National Vulnerability Database or the GitHub Advisory Database. There's a mapping there between those dependency versions and known vulnerabilities that you'll want to remediate. I briefly touched on policy as code already; popular solutions there, as mentioned, are Kyverno, OPA, and Datree. They do a lot of different things, but today we'll be focusing on their ability to restrict containers to running exclusively signed images. That's something we would always recommend in a production environment. All right, that's a whirlwind tour of some of the tools; now on to the application itself. My colleague Nick Vermont designed this somewhat silly but demonstrative application known as the Dad Jokes Generator.
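The SBOM-then-scan flow described above boils down to a two-command pipeline; the image name here is illustrative:

```shell
# Generate an SBOM for an image, then scan the SBOM for known CVEs
syft joke-worker:latest -o json > sbom.json
grype sbom:./sbom.json --fail-on high   # exit non-zero if high/critical vulns are found
```

The `--fail-on` flag is what makes this useful as a CI gate rather than just a report.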
The idea here is that as a user, you're thinking to yourself, well, I want a dad joke, or many dad jokes, because who doesn't want many dad jokes? You'll send a request to the joke server. The server will use NATS, which is a Go-based message queue that implements the pub/sub model, to send a message to the so-called joke worker. The joke worker, when it receives these requests, will forward them on to OpenAI's API, but only up to a certain point, because once it receives a response it will cache that joke in Redis and store it in MongoDB. After a certain number of results have accumulated, it will just return previous results. That's simply to bring some open source components into the overall architecture, so that we can demo how this somewhat complex five-tier app can be deployed using DevSpace. Okay, on to the details. We have the link here; the code for this project is all open source, so feel free to check it out and consider playing with it yourself, maybe going through some of the steps that I'll be demoing shortly. Okay, let's spend a little bit of time reviewing the code base, and then we'll jump into some demos. At the top, I'll begin with the Makefile. I won't go into detail on the targets, but essentially I just want to mention the Makefile because we've developed some simple targets here to build our code, build our images, and set up and tear down the environment: simple, classic Makefile stuff. So, I have built some images already. If I do docker image ls, you can see the joke worker and the joke server, and these tags that were generated by DevSpace, which we'll see momentarily. Okay. Now, as I mentioned earlier, we're using pre-commit to basically lint the content of our Git commits, and in conjunction with pre-commit we're using Gitleaks. These will flag dangerous commits.
Because we've installed this pre-commit hook, pre-commit executes a Gitleaks check every time we go to make a commit; I'll be demoing that shortly. As you can see, configuring it is quite simple: we just pop this into a pre-commit config and run a command to install the pre-commit hooks, and we're ready to go. The command to do that is mentioned in our README. Actually, the command is not in our README yet, but I can update that. All right, moving on to SOPS. As mentioned, I'll be using SOPS with PGP to encrypt a sensitive manifest so that it can be checked into version control. The SOPS config is pretty straightforward, and there are instructions in the README for how to install and configure SOPS. At a high level, the repo structure consists of two services, the joke server and the joke worker. Each service has a Dockerfile and a main.go, and both use a shared library that's inside of internal. We've got some constants, and we've got joke.go. We won't be looking at the code in too much depth, but basically you can see here the code to get a dad joke. We might be changing the type of joke later to a Chuck Norris joke, as you saw there. So this is just a prompt sent to OpenAI. And now I will show you the most interesting part, which is the devspace.yaml file. That's what ties all of this together. We will be using DevSpace to not only deploy our app but also debug our app inside of a local kind cluster. Here at the top we have some environment variables defined: our OpenAI API key is being pulled in from the outer environment, and we have some names for images and tags. As I said earlier, the tag is generated from our current Git status. Now we have pipelines.
We'll just focus on the dev pipeline, but these are essentially just a series of operations that get executed when we invoke the DevSpace CLI and type devspace dev. Essentially what will happen is that a number of different things will be installed in our kind cluster, and then our development environment, which we'll touch on momentarily, will be initialized. We'll see more about that shortly. Here we have image definitions, so we're actually using DevSpace to build these images. What we have is the name of the image, which comes from that variable, and then references to the Dockerfile and the Docker context. When we type devspace dev, if the images aren't already present, DevSpace will build them for us and then automatically load those built images into our kind cluster. Okay. Now, as I mentioned earlier, we have all of these various components we want to deploy. They're a mixture of Go services that we've developed here in this repo and third-party open source services such as MongoDB and Redis. DevSpace can deploy in a variety of ways: it can deploy a Helm chart, it can deploy raw manifests, and it can use Kustomize as well. As you can see, Mongo, Redis, and NATS are all just coming in through Helm charts, which is pretty straightforward. Our joke server and joke worker both rely on another interesting project from Loft Labs, known as the component chart. This is a meta Helm chart: you provide the minimal specification for a Kubernetes deployment, and then the component chart expands that into the full manifests you need to deploy a Deployment or a StatefulSet, or I believe a raw Pod, as well as a Service to expose that workload. Essentially it's just a very concise way to generate Kubernetes manifests. And we have the joke server and the joke worker.
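To give a feel for that part of the config, a trimmed deployments section mixing a third-party Helm chart with the Loft component chart might look roughly like this. The chart repos, variable names, and port are illustrative rather than copied from the demo repo, and the shape follows the DevSpace v6 schema:

```yaml
deployments:
  redis:
    helm:
      chart:
        name: redis
        repo: https://charts.bitnami.com/bitnami   # illustrative chart source
  joke-server:
    helm:
      chart:
        name: component-chart            # Loft Labs meta chart
        repo: https://charts.devspace.sh
      values:
        containers:
          - image: ${SERVER_IMAGE}:${TAG}
        service:
          ports:
            - port: 8080
```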
Both of them are connecting to NATS with this connection string, and the joke worker is of course also connecting to Redis and Mongo. The joke worker also needs to be able to interact with OpenAI's API, and it does so using this secret; we'll see shortly how we're generating the secret from our encrypted manifest using SOPS. Lastly, we're also deploying some custom resources into our cluster, and those we can see here under custom resources. That's basically just our Redis and MongoDB instances: we deploy the operators through the Helm charts, but then those operators need to reconcile the custom resources that we create down here to actually deploy those two services. And lastly, we have the dev section, well, dev and hooks. When we run devspace dev, everything I've mentioned above will happen, and a variety of things will be deployed into our environment. Then here, what we're saying is: automatically configure a port forward against this joke server container. That will allow us to curl it using the localhost URL. That's pretty straightforward. What we have here for the joke worker is somewhat more complex. We're using the DevSpace restart helper to automatically recompile our code when certain files change. If the content of any of these files changes, then the restart helper will automatically recompile our code, and we will have our changes ready to go, just like that. Okay. And we have hooks and commands. These hooks are basically just for cleanup, at least these ones. When we run devspace purge at the end of the demo, you'll see that we can tear everything down that we initially spun up using devspace dev. These hooks are just custom actions that we take in order to fully clean up, for instance, the custom resources like Redis; we'll also delete the secret that we create from the OpenAI API key, and a PVC. And then up here, we have another hook, which is invoking SOPS.
Before we apply all of those other deployments, we need to ensure we have a secret in the cluster that the joke worker can rely on. What we're doing here is just invoking SOPS to decrypt this manifest, the encrypted manifest that has our secret in it, and then apply it. But how do we generate that encrypted manifest? That's what we do here with a DevSpace command. This command is really just analogous to a Make target; it could have been done with Make, but I wanted to demo the power of DevSpace here. It's just a bash command: we're going to create a secret from our environment, pipe that out to a file, and then run SOPS to encrypt that file and save it under this file name, which you can see matches the one that we subsequently decrypt and apply. And of course, since we echoed that unsafe, unencrypted secret out to a file, we clean up the file at the end. All right, so that's a deep dive into the DevSpace config and what it'll do for us. Now let's actually jump into some demos. You can see here in this shell I have K9s up and running, and this is my local kind cluster. It's pretty much a vanilla environment; I've installed Kyverno, as you can see here, but otherwise we have a pretty clean kind cluster. Now in this shell here, what I will do is run DevSpace. And that was the wrong session, sorry about that. Before I run devspace dev, I need to actually generate that encrypted file, which I have deleted. As you can see here, the file is not being found because I missed a step. So I will execute devspace run and the name of that command. And what you can see here is that the encrypted manifest was generated. This contains our encrypted OpenAI API key, and we did that using SOPS with PGP encryption. So we've set up our environment. Now we will do devspace dev.
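For reference, that secret-generation command amounts to something like the following; the secret name, key name, and file names are placeholders, not the exact ones from the repo:

```shell
# Render a Secret manifest from the env var, encrypt it, remove the plaintext
kubectl create secret generic openai \
  --from-literal=apiKey="$OPENAI_API_KEY" \
  --dry-run=client -o yaml > secret.yaml
sops --encrypt secret.yaml > secret.enc.yaml
rm secret.yaml    # never leave the unencrypted file on disk

# later, the deploy hook reverses it:
sops --decrypt secret.enc.yaml | kubectl apply -f -
```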
You can see that the hook I was mentioning executed smoothly now that the manifest is there to decrypt, and we've applied a couple of Helm charts against our kind cluster. You can see here that things are starting to come up. Of course, we have some errors, which is expected; that's the beauty of Kubernetes, it will automatically reconcile until this is resolved. Basically, the joke server tried to start before Mongo, NATS, and Redis were all healthy, but as you can see, they're all up and running now. So we have our whole architecture up, and that was just one simple command, devspace dev. It deployed everything that we saw in the diagram, including our custom services. It also did some of the other things I mentioned, such as setting up a port forward automatically; as you can see here, the joke server pod is being forwarded to local port 8080. And the bidirectional file sync has synchronized the files in our local IDE directly into the joke worker container. That's pretty cool, because what we can do next is make edits to our local code and see those changes happen instantly in this container. Not only that, but because those changes are detected by DevSpace, the restart helper will auto-recompile our code and the changes will be live basically as soon as we press save. Before we make any changes, though, let's just generate a few jokes so we can see that this is working, because who doesn't want dad jokes? Okay, yep. That one's a favorite. Pretty cheesy. Okay, moving on. So we've got some dad jokes in our database. But what if, instead of dad jokes, we wanted to generate Chuck Norris jokes? What I would do is just go to my source code, as I'm doing my development, and change that to Chuck Norris. But I'm not going to save it yet, because what I want to do first is show you what's going on in that container.
If I shell into the joke worker and look at the file system, we can cat internal/joke/joke.go, and you can see here we have "tell me a bad joke". Then if I show you the running processes, we have the joke worker process running as PID 277, and you can see this restart helper is actually running as PID 1. If I exit out of here and show you the logs: we had previously saved the file, so you see this "restart container" message. What I will do is make another change, and you'll see this come up again. So when I save this file, because now we want Chuck Norris jokes... right there: killed, restart container was invoked. That's the DevSpace restart helper. Now I will shell back in. First we'll look at the processes. The restart helper is still PID 1, of course, but the joke worker is no longer PID 277. As you can see, it's been auto-recompiled, and it's now running as PID 452. Additionally, we can cat internal/joke/joke.go and grep for "tell", and just like that, our source code now says Chuck Norris. Pretty cool. So, to come full circle on that, what I will do is generate five more jokes for you, which will perhaps be funnier. That's a new one; I haven't seen that one before. Good stuff. All right, so that is a whirlwind tour of DevSpace, the power of its bidirectional file sync, and its flexible configuration for deploying to Kubernetes environments. Next, we will focus on security. As I showed you earlier, we have those images; I'll just pull them up again for your reference. As you can see, we have an unsigned image, and then we have these other images that were built using DevSpace. What we're going to do is generate an SBOM for one of these images, and to do that, I will use Syft. First I will output in JSON format, and we'll just take a look at what that looks like.
It's a machine-friendly format, but it contains the most verbose information, and that's what we will need in order to generate our vulnerability report with Grype. Syft and Grype are complementary tools from Anchore. They're super easy to use, and they have GitHub Actions that you can implement with just a couple of clicks to automatically generate SBOMs and run checks against all of your GitHub repos. So I recommend checking out Syft and Grype. All right, so we can see this joke server SBOM JSON file was updated. As mentioned, it's machine-readable rather than human-readable, so we're not going to dig too much into all of this metadata. Instead, we'll rerun that command using the table format; this time we'll see a much prettier representation of all the dependencies that Syft was able to detect in our Docker image. And here we go: pretty simple, dependency name, version, type. We've got some Debian packages and Go modules, et cetera. Then what we want to do is understand the vulnerabilities that are implied by those various dependencies. We can use Grype, and we'll actually just point right at that SBOM that we generated. You can also point right at an image, and it will invoke Syft and generate the SBOM as a convenience, but this is a little bit faster. As I mentioned before, Grype is comparing those dependencies and their versions against open databases, like the NIST database, to associate them with known vulnerabilities. We'll see a summary here shortly. And it's pretty bad. That's actually by design, and I'll jump over to the Dockerfile to explain why. As you can see, we have these dependencies, their versions, a vulnerability code, which you can look up online, and the severity. So this is pretty rough. Hopefully your images, at least the ones you're using in production, wouldn't look anything like that.
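As a preview of the fix, a hardened multi-stage version of such a Dockerfile might look like this; the module path and binary name are illustrative:

```dockerfile
# Stage 1: build with the full Go toolchain
FROM golang:1.21 AS build
WORKDIR /src
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /joke-worker .   # static binary, no libc needed

# Stage 2: ship only the binary on a minimal distroless base
FROM gcr.io/distroless/static-debian12
COPY --from=build /joke-worker /joke-worker
ENTRYPOINT ["/joke-worker"]
```

Scanning an image like this with Grype typically surfaces dramatically fewer findings, since almost no OS packages remain in the final layer.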
The reason why is that we didn't use multi-stage builds, which you should be doing in production. You can see here that this Dockerfile just uses a golang base image, copies in go.mod and the code, compiles that code, and we're done. What we should be doing is having a final build stage in our Dockerfile where we start from scratch, using a very lightweight, secure image like Google's distroless image, and then copy our compiled binary into that image, thus reducing our dependency footprint dramatically. If we had done that, the Syft and Grype demo wouldn't have been as interesting, but it would have been much more secure. So that's Syft and Grype. What we want to do next is sign our worker image and then demonstrate how we can enforce that the containers in our cluster are using signed images. To do that, I will use Cosign, and I will just enter the name of the image that I want to sign. We have to accept this disclaimer here for Sigstore, which I will do, and then we're going to use Google OIDC. We have successfully authenticated; what that means is that a signature was generated and uploaded to the Rekor transparency log. Now other tools can check that transparency log to validate that our artifacts are indeed signed. I'll show you what that looks like on the command line with Cosign. We can do cosign verify, plug in our image, and then we just have to provide some flags for authentication; for OIDC here, we're going to use my email address. We'll pipe the output to jq just so it's a little bit prettier. And there we go. I won't unpack this because it's not that interesting, but as you can see, it succeeded, and we have verified that this image is indeed signed. So we know the Cosign piece worked. Now, how do we enforce that we use signed images in our cluster?
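The enforcement side is a Kyverno ClusterPolicy with a verifyImages rule; its shape is roughly the following, with the repository path and signer identity as placeholders:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image-signatures
spec:
  validationFailureAction: Enforce
  rules:
    - name: verify-signature
      match:
        any:
          - resources:
              kinds: [Pod]
      verifyImages:
        - imageReferences:
            - "ghcr.io/example-org/*"        # placeholder repo path
          attestors:
            - entries:
                - keyless:
                    subject: "user@example.com"            # placeholder signer identity
                    issuer: "https://accounts.google.com"  # Google OIDC
                    rekor:
                      url: https://rekor.sigstore.dev
```

Any pod whose image matches the reference but lacks a valid Rekor entry is rejected at admission time.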
As I mentioned earlier, I installed Kyverno using Helm, and we have this admission controller here, which will reconcile various custom resources, one of which is the ClusterPolicy resource. Here in our repo we have a YAML manifest which defines a cluster policy. Essentially what it says is: for any Pod or Deployment resources, check the images used by all of the containers in the pods, and if they are coming from the Tyler Gilson repo, then use this attestor, the Google OIDC attestor, to check the Rekor transparency log and verify that the image in question has a signature. So what we need to do is apply that cluster policy to our cluster. Okay, it was applied, and now in K9s we can take a quick look at the cluster policies. We see here that it's being enforced, and we have our same YAML. Cool. Assuming everything is correct, what will happen now is that if we attempt to create a pod that uses an unsigned image, it will be blocked. I will do that by performing... k is just an alias for kubectl, so kubectl run, and we're going to run this unsigned image. Perfect: as we expected, the Kyverno admission controller blocked our request, because it was unable to look up a signature for that image. Now, just to prove to you that it really does work for a signed image, I will do the same thing, and in this case I'll pass in the image with the tag that we just signed. Perfect, the pod was created. It's actually going to crash because it doesn't have the right environment config specified, but that's not the point; it did get applied to the API server, which is what we want. So I'll just delete that. All right, we've been doing some development, so let's look at our git status. We have some files that have changed. Now let's simulate what happens if we have an unsafe commit: I'm going to echo my API key to a file.
And I'm not going to show you that file, because that would be unsafe. But what I will do is add foo.txt, and then I will perform a git commit. And it's blocked. That's what I was mentioning earlier about Gitleaks and pre-commit. I have my pre-commit hooks installed, which execute Gitleaks every time I run a commit. As you can see, they detected sensitive data in the diff for that commit and blocked it. So I will just do a git restore and remove that file. And that was good: we didn't commit the sensitive data. So we have finished our demos, and now I will just tear down the environment. We have this process here where we have DevSpace running; I will just exit out and run devspace purge. Before I do that, I just have to delete the ClusterPolicy. Now we can see here that the dad-jokes namespace has all of our services, and when we run devspace purge, it's going to bring everything down. Just like that, everything that we deployed in our local kind cluster is cleaned up, and we're back to a pristine state. So that is DevSpace. It's super powerful, and I've barely scratched the surface of what it can do. I encourage you to check it out and see about integrating some of these tools I've demoed into your workflow. We'll jump back to the slides for a brief moment for a final recap. Okay, some key takeaways. Developing cloud native apps is easier than ever, but in order to do it, you need to understand the ecosystem and the ever-expanding array of tools at your disposal. Using those tools ties into developer experience, which is fundamental for Kubernetes adoption. However, there are a lot of them, and it can be overwhelming; things can feel complex. So I encourage you to start small and focus on what makes your life easier today. And not only easier, but more secure, because security is extremely important, as always.
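The hook setup that blocked that commit takes only a few lines in .pre-commit-config.yaml at the repo root. The rev below is a placeholder; pin whatever Gitleaks release you've actually vetted.

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.2        # placeholder version: pin a vetted release
    hooks:
      - id: gitleaks    # scans the staged diff for secrets on every commit
```

After running pre-commit install once, any git commit whose staged diff matches a Gitleaks rule fails before the commit is ever created, which is exactly the behavior shown in the demo.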
And we need to be considering security at every stage of the development lifecycle, which ties into the whole shift-left mentality and bringing the inner loop closer in line with the outer loop, without making our lives unbearable as developers. And we should focus on making not only our development environment, but really everything about our software, portable, shippable, and shareable, because sharing is caring, just like those kittens think. Okay, one last thing before I let you go. Is there a simpler way? Yes: we have Spectro Cloud Palette, specifically Palette Dev Engine. This is just a screenshot of the same five-tier app taken from what we call PDE. As you can see, we have all the same layers, but they're configured using this nice user interface, and they leverage a powerful abstraction that we refer to as an application profile. This profile defines everything that we were showing in the demo through DevSpace: all the same environment config, the ordering of the tiers, the images, et cetera. But that application profile can be deployed into a number of different environments, where each environment might be a different Kubernetes cluster or a virtual cluster inside of a physical host cluster. Not only that, but if you make an edit to the application profile, which is the central source of truth, that edit can be applied to all of the different physical instantiations of your app. So it allows you to model your applications and treat that config as a single source of truth. We have a free trial; I encourage you to check it out. You can click the link here and see for yourself. Lastly, we covered a bunch of tools today: Git, Make, SOPS, DevSpace, pre-commit, and K9s. K9s was sort of demoed implicitly, but I did mention it. Be careful: if you use it too much, you'll forget how to use kubectl. But I find that it's a fantastic tool to speed up and enhance your ability to interact with Kubernetes.
And that's it. Lastly, again, my name is Tyler. Thanks for watching. Feel free to reach out on LinkedIn or Twitter. I'd love to hear from you. Thank you so much for your time.