Hello, everyone. Welcome to Cloud Native Live, where we dive into the code behind cloud native. I'm Annie, a CNCF ambassador and a senior product marketing manager at Camunda, and I will be your host tonight. Every week, we bring a new set of presenters to showcase how to work with cloud native technologies. They will build things, they will break things, and they will answer all of your questions. Join us every Wednesday to watch live. This week, we have Nick here with us to talk about Kubernetes-native pipelines for stateful applications. Another exciting thing happening in the cloud native sphere is that the KubeCon + CloudNativeCon North America early bird registration deadline is today. Get your tickets fast if you want the early bird rate. As always, this is an official live stream of the CNCF, and as such, it is subject to the CNCF Code of Conduct. Please do not add anything in the chat or questions that would be in violation of that Code of Conduct. Basically, please be respectful of your fellow participants as well as the presenters. So I'll hand it over to Nick to kick off today's presentation. Thank you.

Thank you, and welcome, everyone. My name is Nick. I'm a developer advocate at Ondat, and I've been working with Kubernetes for approximately the last five or six years. The topic for today is going to be focused on building native pipelines within Kubernetes for stateful applications. The idea is really to focus on the developer experience with a real use case application: we're going to start from developing the application and move from your local laptop to a Kubernetes production cluster. So maybe I can start sharing my screen and we can get started. Okay. So this is what we are going to build today. Hopefully 45 minutes to an hour will be enough. But here is the idea. When I started to work with Kubernetes, I was kind of confused.
How can I start developing my application and make it a good application for Kubernetes? What kind of Kubernetes concepts should I use? What kind of tools should I use to manage the lifecycle of my application from my local laptop to the staging cluster to the production cluster? And what are the key concepts when it comes to deploying and building stateful applications? So maybe let's start by defining what a stateful application is. A stateful application means that one or several components of that application are stateful, or in other words, they store some data to disk. It can be a database, it can be a caching solution, basically anything that needs to persist between usages or just to present some sort of data to, say, a front end. So you will need some sort of database. The idea is: let's start by developing that application. It's already there, but we're going to start with the code, then go into how we get this application inside a container, using a Dockerfile, things like that. And then from Docker, how can we move into a more highly available environment like Kubernetes? So this is the idea. We're going to start with an app that I call the Marvel app. Just to give you an idea — I think it's still open there — it's basically this app showing a bunch of characters on the screen from the Marvel APIs. I'm looking up the image, then showing the name of the character and the comic books where the character has appeared, and then one, two, three, four, five, six cards with this information. That's basically what the application does. It's a two-microservice application where the first part is essentially what you've seen: the front end, which is using Python Flask. The back end is a MongoDB database where all the semi-structured data — all the JSON information from the Marvel API — is stored.
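The front end's job, as described, is just to shape stored character documents into the six cards on the page. As a minimal, hypothetical sketch of that transformation — the field names `name`, `image`, `comics` and the six-card limit are assumptions drawn from the description, not the real app's code:

```python
def build_cards(documents, limit=6):
    """Shape MongoDB character documents into the card data the page renders.

    Each card carries the character image, name, and the comics the
    character appeared in; missing fields fall back to safe defaults.
    """
    cards = []
    for doc in documents[:limit]:
        cards.append({
            "name": doc.get("name", "Unknown"),
            "image": doc.get("image", ""),
            "comics": doc.get("comics", []),
        })
    return cards
```

In the real app this would feed a Flask template rather than return plain dicts, but the shape of the data is the same.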
To build this application in Kubernetes, and also to show you what kind of basic first-class Kubernetes concepts you can use, I've decided to build it in the following way. We're going to build a three-node MongoDB cluster inside Kubernetes, which is basically our stateful application. We're going to have a couple of pods deployed as a Deployment, which will be the front-end application. And then we need to populate that database with the Marvel information. For this, I've created a Kubernetes Job because, I mean, there are different solutions for how to do that, but the Job — just to show how it works — is a good solution, because the Job is going to keep running until it succeeds. Meaning that it's going to try to connect to the Marvel APIs, get the information, and store it in the database until it succeeds — so basically at least until the MongoDB cluster is available. Once the MongoDB cluster is available, the Job will succeed and populate the database, and as a result, the application will start working. That's the idea of running that application in Kubernetes. But for this, we need a proper lifecycle. So I'm going to start by showing you what tool I can use to facilitate the development of this application on your laptop. The expectation here is that as I save code in my application, things happen automatically, like building my Marvel container or my Job container as soon as I change some of the code, without doing anything else. It's sort of just monitoring the file system: when a file that I monitor changes, ideally the tool I'm using builds the container and deploys it into my local Kubernetes cluster, the testing environment on my particular laptop. For this, I'm going to be using K3d, which is just a wrapper around K3s, so you can think about it as just K3s.
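The "retry until the database is up" seeding pattern described above can be sketched as a plain Kubernetes Job. This is a hedged illustration, not the speaker's actual manifest — the image name, namespace, and `backoffLimit` value are assumptions:

```yaml
# Sketch of the database-seeding Job: Kubernetes re-runs the pod on
# failure, so it naturally retries until the MongoDB replica set is
# reachable and the Marvel data has been stored.
apiVersion: batch/v1
kind: Job
metadata:
  name: marvel-init-db
spec:
  backoffLimit: 10          # keep retrying while MongoDB comes up
  template:
    spec:
      restartPolicy: Never  # let the Job controller create a fresh pod per attempt
      containers:
        - name: init-db
          image: docker.io/example/marvel-init-db:latest  # hypothetical image
          envFrom:
            - secretRef:
                name: mongodb-credentials               # hypothetical secret
```

Once the final attempt exits successfully, the Job is marked complete and the front end has data to render.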
K3s is Rancher's very lightweight Kubernetes distribution; you can basically install it anywhere as a simple binary. So this, there on the top, would be the pipeline, right? You start committing code — not just committing, but just saving code on your laptop. Of course, at some point you want to commit it to Git for further usage, maybe when you want to deploy to production. But for the moment, what I want first is to have the right developer experience on my laptop. So I want to save code. Then, when I save code, the Docker images are built and eventually also pushed to a remote container repository like Docker Hub. And then, once the Docker image has been built, I want to use some sort of tooling to build the Kubernetes manifests and deploy them into my local cluster. But remember, we also have a stateful application component, which is our MongoDB. And this is where you may want to add a couple of extra features so that it represents the end environment, the production environment. You may want to test earlier in the code lifecycle: basically, if you want to run smoke tests including some of the infrastructure components that will be deployed in production, you can do it on your laptop, because it's just Kubernetes in the end, right? It's just software. So for MongoDB, we're going to be using Ondat, which is a solution that allows you to use the local storage of your Kubernetes nodes and aggregate it as a pool of consumable storage for your persistent volumes — and on top of this, add specific features such as replication, encryption, all those kinds of premium features you would want to have in production. By enabling this on your local cluster, you also get an idea of how your application will behave in production.
But again, the solution is Kubernetes native, meaning it's just using a StorageClass YAML, so you can control it as part of the manifest generation on your local laptop using Kustomize or Helm or whatever. So for today, the tools we're going to be using for the local laptop development are, of course, Git; the local Docker running on your laptop; Kustomize; Ondat as well; and Skaffold, which you can see on the top here — actually, the documentation is just there. It's really a nice tool, also open source, part of the ecosystem. It's a command-line tool that facilitates continuous development for Kubernetes-native applications. Basically, it handles multiple phases in the lifecycle of your application. So Skaffold is going to help automate the building of the container and the deployment of the container. For building, we can see here what is supported: you can use Docker with a Dockerfile, you can use Cloud Native Buildpacks, you can use custom scripts, and so on. So this is for the build phase; we're going to be using our local Docker socket. And when it comes to deploying — again, to our local laptop-based Kubernetes cluster — you can use kubectl, Helm, Kustomize, or Docker. We're going to be using Kustomize, because Kustomize helps you decorate your base manifests to match a particular environment. In our case, I'm going to have a dev overlay that I'll use to deploy and configure my manifests in my dev environment, and I will also have a prod overlay that will be used to deploy to the production cluster. But in the production cluster, we won't be using Skaffold or Kustomize directly to deploy; we're going to be using Tekton and Flux, right? But this is the second part. So those are the tools for the local cluster deployment.
Now, the second step as a developer: once you have it on your cluster, once the code has been committed to Git and your pull request has been validated by your peers, then it's about time to deploy — maybe not to production, but let's say our "production", which is probably the development or testing or staging area: the remote Kubernetes cluster that is going to be used for that. In our case, it's going to be GKE. And the idea is that we're going to be using, again, a Kubernetes-native way of doing things. We're going to be triggering a pipeline that does the same kind of things as Skaffold: we need to build the container, we need to build the manifests using Kustomize, and then we need to deploy those manifests with the right images into our staging slash production cluster. Okay, so how are we going to do this? We're going to be using Tekton. Tekton, for those who are not familiar with it, is also part of the CNCF ecosystem. It's Kubernetes-native pipeline software, meaning that every task you create within Tekton corresponds to a pod or a container — an action or a command that is run in a container. And when you have multiple tasks to run as part of your pipeline, those tasks will be sequentially executed by your Tekton installation within Kubernetes as multiple containers. So the whole solution, the pipeline itself, is completely running in Kubernetes. Again, the idea is to produce the right images. But this time we're not going to be using Docker, because we are running inside Kubernetes — the pipeline is running in Kubernetes, and it's not a good idea to mount the Docker socket in production, right? That's bad from a security perspective. As for typical ways to build containers in Kubernetes, there are several of them for sure, but Kaniko, which basically builds containers without using the Docker socket, is a good solution.
So we're going to have a task within Tekton whose goal is to build the container with Kaniko. We're going to have a second task which uses Kustomize to generate the manifests that we're going to push into a particular repository — exactly that particular GitHub repository there. I have a directory called target, and the manifests have been tested multiple times, so it should be working. This is basically the result: what we should see in the end, when we run it live, is a different image digest here. And the last part: once we have our manifests pushed to this repository — the image, of course, is also going to be picked up from Docker Hub — we use Flux, right? The application manifests are deployed into the repository I've just shown you, and then Flux, which is a GitOps solution, monitors that repository for changes and reconciles. Like any GitOps solution, the goal is to reconcile the state of the cluster with the intent, which is stored in Git. Our intent is to deploy the manifests stored where I just showed you, and the state of the cluster needs to be reconciled — that is, deploy the new manifests, the new objects corresponding to the manifests that have been changed in the Git repository. So as soon as the container image changes, our front-end container will be replaced. I won't change the MongoDB portion, because it's a bit longer to deploy, and the application is already deployed in the cluster. What we will do is make some changes to the application and show that this triggers the pipeline, which updates the image digest in the application manifest repository. And as a result, Flux will see that change and will replace the front-end container with the new code, right?
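The three-step pipeline just described — build with Kaniko, render manifests with Kustomize, push them with Git — could be expressed as a Tekton Pipeline along these lines. This is a hedged skeleton: the pipeline, task, and workspace names are assumptions, and each `taskRef` points at a Task that would be defined separately:

```yaml
# Sketch of the three-task Tekton Pipeline described above.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: marvel-app
spec:
  workspaces:
    - name: shared            # PVC-backed workspace shared across tasks
  tasks:
    - name: build-image       # Kaniko build, pushes the image to the registry
      taskRef:
        name: kaniko-build
      workspaces:
        - name: source
          workspace: shared
    - name: generate-manifests  # kustomize edit set image + kustomize build
      runAfter: [build-image]
      taskRef:
        name: kustomize-render
      workspaces:
        - name: source
          workspace: shared
    - name: push-manifests      # git-cli: commit rendered manifests to the target repo
      runAfter: [generate-manifests]
      taskRef:
        name: git-cli
      workspaces:
        - name: source
          workspace: shared
```

The `runAfter` fields give the sequential execution the speaker describes, with each task running as its own pod.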
So the first step is really to change some of the code on the laptop and show you the developer experience with Skaffold, checking the result on the local cluster. Then we're going to trigger, let's say, the staging slash production pipeline, and we will double-check that this time our GitOps pipeline picks it up and deploys it into the production cluster. So, I don't know — is there any question at this stage before we jump into the weeds? No questions from the audience so far, but thank you so much to the person who said they are listening — hello to you! Happy to hear from everyone else about which location they are tuning in from. But I might want to ask you a question as well: what would you say are the benefits of running stateful applications in Kubernetes? Yeah, sure. So basically, it's always the same thing when it comes to running things in Kubernetes: you can just leverage the basic features and characteristics of Kubernetes, which is all about scale, being cloud agnostic, being highly available, highly distributed — which is a perfect fit for any application, including stateful applications, because now, I would say, we have all the tools to manage those stateful applications. As an example here — I didn't mention it, but the way the MongoDB cluster is managed by manifests, by Kubernetes YAML, is by using the MongoDB operator. With the operator framework in Kubernetes, we can encapsulate the knowledge required to deploy an application on top of StatefulSets. We're going to be using StatefulSets because it's a stateful application, but we also need to make sure that MongoDB is properly installed with the right permissions, the right database size, all of that. And this is encapsulated into a custom resource that will be managed by the MongoDB operator. Great, and then there is a new audience question now.
So Laurentinus asks: the one that you mentioned, is that a blue-green or canary deployment? So it won't be a blue-green or canary deployment at this stage; what I'm going to show you today happens before that, right? I could add canary deployment as part of it, maybe by using Istio on top of that, but that's already a lot for today — that would be too much. So today it's basically just replacing the pod in production. It's like a kubectl replace, deleting the pod. Don't do that in production — you're absolutely right, you should not delete your existing containers. You should do blue-green or canary deployments, but for today, I'm just going to replace the front-end container. So in reality, if that were my production cluster, I would be a bad engineer, because I would cause some sort of disruption. Great. Okay, so let's get started. On the left here, this is my production environment. You can see from the timestamp that my application was deployed about 92 minutes ago, with the MongoDB already there. This is production, but the idea is that in the end, once Flux picks up our changes, what we expect is for this value here to drop to a couple of seconds or a couple of minutes as we change our code. In the development environment — my development cluster on the right — you can see my application is not installed at all. I have the MongoDB operator, so that when the MongoDB CRDs are pushed into — I mean, ingested by — Kubernetes, the MongoDB operator reacts based on the custom resource and deploys the MongoDB cluster. On the left, in production — yeah, I didn't mention it, but we're also going to be using some policy as code to verify that the parameters we set for our application are aligned with our compliance system. For example, I'm going to show you a couple of rules: we want the database to be, I think, less than 10 GB.
We want a special user to be created for managing the MongoDB database, things like that. And this is why we're going to be using Kyverno — I'm not sure how to pronounce it; if anyone in the audience knows whether it's "Kaiverno" or "Kiverno", please shout out. We will have the admission controller there, which is set to audit, so we're not going to prevent the application from being deployed; if it's not conformant, we're going to generate a report. I'm also going to show you how to use the CLI as part of the pipeline, if you want to fail the pipeline from the command line — as opposed to using an admission controller — before deploying the solution to the cluster. We also have the MongoDB operator that is there to react to the custom resource. We have Tekton installed in the cluster as well. And of course, we have the Ondat solution that will be leveraging the local storage to create the various PVCs and also add the extra features like encryption and replication. All right, so now let's start with the application itself. As I said, the idea is that we start with a pretty empty environment where I have my application. I've got my Python scripts; this is my front end, a Flask application. I'm not going to go too deep into the code, just explain what it does. Essentially, the front-end application's job is just to connect to the MongoDB cluster that is deployed in Kubernetes, get the different pieces of information I pulled from the API, and populate the different cards with them — all the comic books, so each comic card will be populated with the JSON information I have and rendered into an HTML page, right? The code is super simple. And of course, you want to have the Dockerfile in the appropriate directory.
So in the app directory, I have my Dockerfile that Skaffold will use — because remember, Skaffold is going to use Docker to build the application. Therefore I need a simple Dockerfile, and I'm going to be using Gunicorn as the web server, so I've got an extra configuration file for Gunicorn, plus the requirements — the dependencies for my application, Flask being there as well. So this is my application; it's basically encapsulated within that folder, the app folder. Then I've got the code that I'm going to run as a Kubernetes Job, which is located in this directory, marvel-init-db. The role of the code there is effectively to populate the database with the information. The role of the Flask front end is to display it; the role of the Job inside Kubernetes — the role of that code — is to connect to the Marvel API and populate the database. So you will find things here like a Mongo username, a MongoDB password, the replica set name inside MongoDB, the function to call the Marvel API, to make all the requests and store the responses into a results dictionary, and then store that dictionary into MongoDB using the MongoClient library — simply storing it as a JSON payload in a MongoDB document. All right, so nothing too fancy there in the application — again, Python — and for this I also need a Dockerfile to be able to build that particular container. So essentially my application is two microservices — not really microservices, because, I mean, it's quite simple — but there are two containers that will run in Kubernetes. The first, the init-db, will run as a Job, so it will run multiple times until it's successful, and the application will run as a Deployment.
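The seeding code's core step — flattening a Marvel API character result into the document stored in MongoDB — might look roughly like this. It's a hypothetical sketch: the function name and output fields are assumptions, though the `thumbnail` (`path`/`extension`) and `comics.items` shape does follow the public Marvel API response format:

```python
def to_character_document(api_result):
    """Flatten one Marvel API character result into a MongoDB document.

    The Marvel API returns an image as a path plus an extension, and the
    comic appearances nested under comics.items; here we pull out just
    what the front-end cards need.
    """
    thumb = api_result.get("thumbnail", {})
    return {
        "name": api_result["name"],
        "image": f'{thumb.get("path", "")}.{thumb.get("extension", "jpg")}',
        "comics": [c["name"] for c in
                   api_result.get("comics", {}).get("items", [])],
    }
```

In the real Job, each such document would then be written with something like `collection.insert_one(doc)` via PyMongo's `MongoClient`.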
I think in test we will run two or three front ends; in production, we will have maybe four or five different front ends, just to handle the potential load on our application. So that is the application. Then, for Skaffold — Skaffold, again, is super easy to configure, quite intuitive if you go to the documentation — there are a couple of things I want to highlight here. I want to build two artifacts. As I mentioned before, I'm just specifying a context, which is the name of the directory: the app directory, where I store my Flask application, and the commands I need to build the container. This is a build.sh, which is there. I'm using buildx because I'm running on a Mac M1, which is an ARM-based CPU, so I need to use Docker buildx to build my container if I want to build cross-platform, including x86. This is why I'm using a custom script. Typically, if you're not on an ARM-based laptop, you maybe won't have to use buildx. But this is why I'm using a custom script, which is good, right? It means that Skaffold is quite extensible; you can just specify the script that will be used to build your container. Same thing for the Marvel init-db container — I'm using the same script. Then local with push set to true: "local" means I'm going to use the local Docker socket, and "push" means I'm going to push the image to my Docker registry. So the only thing you really need to do at this stage is a docker login, so that your credentials are cached, and then you can start using Skaffold. And then you need to specify the overlay for Kustomize. Now, if I show you the dev overlay, this is where all the magic happens to move from a Docker container-based environment to a real Kubernetes environment where you need those different manifests. So I've got the base manifests there, which are like the naked application.
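The Skaffold configuration just described — two artifacts, a custom build script, local Docker with push enabled, and a Kustomize dev overlay — could be sketched in a `skaffold.yaml` like this. Image names, directory paths, and the overlay location are assumptions for illustration:

```yaml
# Sketch of the skaffold.yaml described above (not the speaker's exact file).
apiVersion: skaffold/v2beta29
kind: Config
build:
  local:
    push: true               # build via the local Docker socket, then push
  artifacts:
    - image: example/marvel-app
      context: app           # directory holding the Flask app's Dockerfile
      custom:
        buildCommand: ./build.sh   # wraps docker buildx for cross-platform builds
    - image: example/marvel-init-db
      context: marvel-init-db
      custom:
        buildCommand: ./build.sh
deploy:
  kustomize:
    paths:
      - kustomize/overlays/dev     # dev overlay decorating the base manifests
```

With this in place, `skaffold dev` watches the two contexts, rebuilds on save, and redeploys through the dev overlay.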
And in my kustomization, essentially, there are a couple of things: I want to create some Mongo config using Kustomize. Kustomize can dynamically generate things like secrets, passwords, credentials, and this is exactly what I'm going to do here. I'm going to use Kustomize to create all those secrets. I'm also going to populate the different ConfigMaps with the environment variables to connect to my MongoDB. I'm going to set a number here — I want to specify three replicas for my environment, the number of pods for the front end. In terms of my database, again, this is encapsulated in my custom resource, which is now a first-class citizen in Kubernetes: as soon as I installed the MongoDB operator, I could start using this custom resource. And I'm going to specify the volume here. There's a request from the audience to make the fonts a bit larger so that they can see a bit better. Larger, yeah. Okay. So now I'm going to show it. Yep. There's also an audience question, but let's sort out the font size first. Okay, is it better now? Yes, we got a confirmation — thanks, it looks better, people can see it better now. So, the question: "For MongoDB — when I started my MongoDB container and tried to use PyMongo, I get a timeout error for the secondary databases." Oh, okay. So, a timeout. That probably means you have to double-check — I don't know, out of the blue like this, I'm not sure. But typically, the error you may face is that maybe your Mongo database is not up and running properly. So you want to check: first, use a container with an image that has the Mongo client, and before using the Python library, try to just run the Mongo client — it's called mongosh — the shell container image in your cluster, and from the cluster itself, try to connect to the Mongo database to check if it's working.
If it's working, then you may have an issue with the way you connect to MongoDB, depending on whether it's a cluster or a seed cluster — there are different ways to connect to a Mongo database; you'd have to check the documentation for this. But as a first step, I would say: use the mongosh container image and try to connect from within the Kubernetes cluster first, before using the library and some code. Okay. So here, what I wanted to mention in terms of the storage for the database: this is where you specify the volume claim templates, so the type of storage I'm going to use. In that particular case, I'm going to be using the Ondat storage class, where I have defined all those extra features — encryption, replication, all of that. And then I'm also going to specify the size for my data volume and the size for my logs volume, right? Of course, the storage class here is for development, my local cluster, so maybe I don't need replicas — I don't want to enable encryption on my local cluster, on my laptop, and I don't want any replicas. If you want to enable a particular feature, the only thing you need to change is to go there, change it, and save, right? Skaffold will take care of the rest when you redeploy your application. Now, there are multiple modes you can run Skaffold in. Let's go back to the Skaffold part, which is there. You can use "skaffold build", which is going to build your images. You can use "skaffold run", which is going to build the images and deploy the different manifests to your cluster. Or you can use Skaffold in dev mode, which is probably the best mode — the main mode in which you should use Skaffold. So the idea is that one, right? We're going to check what's happening from there. It's building the images, and as you can see, a new namespace is now created here.
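The volume claim templates just mentioned live inside the MongoDB operator's custom resource. As a hedged sketch of what that could look like — the resource name, version, storage class name, and sizes are all illustrative assumptions:

```yaml
# Sketch of the MongoDBCommunity custom resource with volume claim
# templates pointing at an Ondat storage class.
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: marvel-mongodb
spec:
  members: 3                 # the three-node replica set described earlier
  type: ReplicaSet
  version: "5.0.5"
  statefulSet:
    spec:
      volumeClaimTemplates:
        - metadata:
            name: data-volume
          spec:
            storageClassName: ondat-dev   # dev: no replication/encryption
            resources:
              requests:
                storage: 5Gi
        - metadata:
            name: logs-volume
          spec:
            storageClassName: ondat-dev
            resources:
              requests:
                storage: 2Gi
```

Switching environments is then a matter of the overlay swapping `storageClassName` (and sizes) for a storage class with replication and encryption enabled.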
It's starting to deploy my application — seven seconds, you can see on the top right here. You can see that a Job has started. The Job will probably fail, because, as you can see, the MongoDB is a three-node cluster, so it won't be ready for a couple of seconds or minutes. The Job will have to run multiple times, but it doesn't matter: in Kubernetes, a Job will be re-run until it succeeds, right? Well, at least — I think by default it's 10 times, or until it succeeds. And the front end is already deployed, so that's fine; it's just our Flask application. Now, if we go back to Skaffold, the interesting part is that it's also displaying the live logs of your containers. For the two containers, the init-db and the Flask front end, the container logs are displayed here — what you see on the screen is the container logs. On top of that, I can start changing my code, and because it's located in a directory that Skaffold is monitoring, if I modify the code there and just hit the save button, all the containers related to those changes are going to be rebuilt and redeployed. In this case, I'm going to modify the page.html, and then we're going to change that code and see the change reflected in the HTML page live. So here you can see that the Job is trying to add data into MongoDB, but because the MongoDB cluster has not been fully deployed yet, it cannot succeed. You can see the Job must have failed once — you can see the error. Now we have the second instance of the Job running, but this time you can see my MongoDB cluster is up and running, so this particular Job should succeed. We're going to wait. Yes — now you see it's downloading, doing all these GET requests to the Marvel APIs and storing the results in the database. Once it's finished, we're going to test whether the application is working. Okay, so now it's done.
What we can do now is just do a kubectl port-forward for my application, which is running on port 8080, and go back to localhost:8080. So now this is my development cluster, my development application. Now let's say I want to change some of the code. I want to use a different syntax for "comic": I don't want to use "comic", I want to replace it with "comics" everywhere, right? So I'm finding all the instances in my code to replace with "comics". I'm going to kill my port-forward here and go back. And so the logs — you can see here, this represents the connection I've just initiated from my browser. So now let's save. What I'm expecting is for the container to be rebuilt and redeployed in the cluster. Before that, just to prove that I'm not lying, let's check the front end: its age is about three minutes forty, going on four minutes or something like that. Now I save — as I save, you can see I just hit Command-S, right? It's monitoring my environment, and now the containers are being built and again deployed into the cluster. So now, if I go there, you can see twelve seconds: this is my new deployment that has been deployed live in my development cluster. Again, we're going to port-forward, and I should see the new code deployed here. So I've got the right syntax, right? That gives you an idea of at least the capabilities for local development. And when you've finished testing everything, what you can do then is just go back to your "skaffold dev" process and hit Ctrl-C, and by doing so, it's going to delete your application. You can see here the dev namespace is being terminated, right? Now, I still have PVCs, so you may also want a script to delete the PVCs and other things that were provisioned by the operator, not by Skaffold. So now I'm back again to a clean development environment.
So now that I have made my changes in my dev environment, I want to use my production environment to deploy this application. Again, there are two components to this. The first component is Tekton. With Tekton, as I said, we're going to run a couple of tasks — Tekton has the concept of a pipeline, and a pipeline is composed of one or multiple tasks, and those tasks make use of resources. I have three main tasks I want to perform. First, I want to build my Docker image the same way Skaffold did it. In that particular case, I'm using a container which is Kaniko, and I'm running a couple of commands to build the image: I'm using the Kaniko executor, and then just the Dockerfile — the path to the Dockerfile I specified somewhere else, when defining the variables for Tekton to use — the destination, the digest file, all of that. This is going to be used by Tekton to produce my image. Once this image has been pushed to the Docker registry, I also want to use Kustomize to update my manifests — to generate my manifests with the new image, the new image that was built by my previous task using Kaniko. So the job of my Kustomize task is to use Kustomize to edit the image within the particular manifests and then generate those manifests into a special directory. For that directory, I'm going to use a workspace within Tekton, which is basically a PVC — it's just like a directory that I'm mounting inside the container that runs that particular task. And once the manifests have been stored in that directory, I want to use Git to update — to commit and push — the new manifests to my repository. But here, you can see that I don't have any git-cli task, for a simple reason — and this is why I really love Tekton as well: if you go to Tekton Hub, there are predefined tasks.
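The Kaniko build task described above — executor image, Dockerfile path, destination, digest file — commonly looks something like the following (this mirrors the pattern of the Tekton Hub kaniko task, but the task name and paths here are assumptions):

```yaml
# Sketch of a Tekton Task that builds and pushes with Kaniko,
# without ever touching the Docker socket.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: kaniko-build
spec:
  params:
    - name: IMAGE
      type: string             # full image reference to push to
  workspaces:
    - name: source             # PVC-backed workspace holding the repo checkout
  results:
    - name: IMAGE_DIGEST       # digest consumed by the next (kustomize) task
  steps:
    - name: build-and-push
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=$(workspaces.source.path)/app/Dockerfile
        - --context=$(workspaces.source.path)/app
        - --destination=$(params.IMAGE)
        - --digest-file=$(results.IMAGE_DIGEST.path)
```

Writing the digest to `$(results.IMAGE_DIGEST.path)` is what lets the manifest-generation task pin the exact image that was just built.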
If you look for git-cli there, you'll find some documentation, and you can just reference the task — you name it git-cli, and the only thing you have to do is set a parameter called GIT_SCRIPT and put your git actions, your git commands, into that particular parameter. So if we go back here, as part of the git script, this is what I'm doing. Remember, I've got locally the manifests that were created by the previous task. So I'm doing git init within that particular directory, I'm adding the origin, which is where I want to upload my manifests, I'm doing cd target, which is where I have my manifests, then add, commit, push — and the upstream repository will be updated with my manifests. And then finally, what's going to happen — this is the last part of the Tekton pipeline, right? — once Tekton has created those manifests and pushed them upstream to the repository, we have Flux with its Kustomization. Flux is going to pick up that particular repository, which I have defined as part of the prod Kustomization. It's going to monitor this Kustomization using a special source — a git source, a GitRepository custom resource. You can see here, from a Flux perspective, I've got sources. What cluster is this? Flux pipelines. Yeah, you can see the Kustomization there, and I have GitRepository objects as well, right? So I've defined this GitRepository pointing at the repository with the manifests that Flux needs to monitor. As soon as Tekton updates the upstream repo, Flux is going to pick it up, right? So what we're going to do is a flux get kustomizations -w to watch: once it picks up the manifests that have been updated, I should see a new line there, right?
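Roughly what the Flux side could look like: a GitRepository source pointing at the manifests repo, and a prod Kustomization that applies its overlay. The repo URL, path, and intervals here are placeholders, not the demo's real values:

```shell
kubectl apply -f - <<'EOF'
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: marvel-app-manifests
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/example/marvel-app-manifests
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: prod
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: marvel-app-manifests
  path: ./overlays/prod
  prune: true
EOF

# Watch reconciliations as Tekton pushes new manifests upstream:
flux get kustomizations -w
```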
So now what we're going to do is trigger the pipeline, right? I've shown you what this particular pipeline does. Again, because it's a Kubernetes-native solution, I can use kubectl to trigger my pipeline. For this, I'm going to create an object based on the marvel-app-run manifest. This is the one that references all the different tasks, the high-level ones, and this marvel-app-run will make use of the underlying marvel-app pipeline, the different tasks, et cetera. So let's trigger this. Create this — it's created. Now, Tekton gives you the ability to monitor live what's happening. The first part is building the container — this is using Kaniko. Second step, we're using Kustomize to generate the manifests. Third step, we're using the git image — the git task, sorry, the git-cli task — to push the manifests to the remote git repository. And finally, Flux is going to pick up those manifests. And the last step, really — I wanted to show you that while it's deploying, at the same time I have Kyverno monitoring the different objects being pushed into the cluster. Kyverno is a Kubernetes policy engine, meaning that I can specify my policies as YAML — as opposed to, for example, OPA Gatekeeper, which makes use of Rego. Rego is a different language; it's probably more flexible, for sure, but for straightforward Kubernetes policies, a simpler way to build them is to just use YAML and create your own rules. A couple of examples here: I've got a rule that says I need to have an admin user with at least this set of permissions. It is monitoring the MongoDBCommunity custom resource. So you specify the kind of resource you want the rule to apply to, then you set the message shown if it fails, and then you specify the pattern — for my encryption, for example.
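Kicking off and following the run looks roughly like this — the manifest file name is an assumption, and the `tkn` CLI is optional (a PipelineRun is just another Kubernetes resource):

```shell
# Create a PipelineRun from its manifest (assumed file name):
kubectl create -f marvel-app-run.yaml

# Inspect it with plain kubectl:
kubectl get pipelineruns

# Or stream the logs of the most recent run live with the Tekton CLI:
tkn pipelinerun logs --last -f
```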
For example, when you're in production, you want to have encryption enabled for your storage class. If the provider — like Ondat — offers this feature, you may want the pattern: I need to see encryption set to true in my storage class. Same thing for replicas: you need at least two replicas when you're deploying into production. And the maximum size is 10 gigs for my database in production, right? So it's going to give you these results. Just here — let's get the policy reports. You can see the PolicyReport: we have two passes here, right? And we also have a ClusterPolicyReport. Of the policies I've used, I've got four: two are cluster-scoped, two are namespaced. So I've got two passes for the cluster policies and two passes for the namespaced policies. I can also add the output flag — yeah, I know — and it will tell you which ones passed, and if one failed, it will tell you why it failed. And just to give you an example of something that is non-conformant — so here, let me check out the non-conformant version... okay. The results I showed you previously, the report, live inside the cluster, because I'm running Kyverno in the cluster as an admission controller. But you can also choose to stop and fail your pipeline if the manifests don't match those requirements. For this, you can use Kyverno as a command line too, right? So for example, here, I'm using Kustomize to generate the manifests — it outputs the manifests from my non-conformant overlay. You can see here it says prod, but these are the non-conformant manifests located there. Then I pipe the manifests — the output — into the Kyverno CLI, and it displays the results on the screen. Oops. What did I do wrong? Policy, policy. Okay, it's prod. Sorry, maybe this one. Yeah.
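A hedged sketch of one such production rule, validating the MongoDBCommunity custom resource. The field name `spec.members` (the replica count) follows the MongoDB Community operator CRD; the policy name and directory paths are made up for illustration:

```shell
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: mongodb-prod-requirements
spec:
  validationFailureAction: Audit   # report only; Enforce would block admission
  rules:
    - name: require-two-replicas
      match:
        any:
          - resources:
              kinds:
                - MongoDBCommunity
      validate:
        message: "Production MongoDB needs at least 2 replicas."
        pattern:
          spec:
            members: ">=2"
EOF

# The same policies can gate a pipeline via the Kyverno CLI:
kustomize build overlays/prod > /tmp/prod-manifests.yaml
kyverno apply policies/ --resource /tmp/prod-manifests.yaml
```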
So you can see here, in my non-conformant manifests, where I've deliberately set a size bigger than 10 gigs and disabled encryption — you can see that I've got three fails and one pass, and it tells you why each one failed. As a result, you can choose to fail the pipeline if you want, right? So this is the last part I wanted to show you: adding a policy-as-code element to it. Now let's quickly get back to our pipeline. The Tekton pipeline has finished. What I want to see now — you can see here — is another instance of the Flux reconciliation. Flux has picked up the new manifests upstream on my remote git repository. As a result, I should see — four minutes ago, this is when Flux picked it up. Let me check that I've removed and terminated the previous port-forward there. Let's go back and then use kubectl port-forward from there. So now this is running; this is my production environment that I want to test. Let's go back to localhost, and hopefully — you can see it has picked up my image with the change and updated my production environment with it, right? So I am now in a production environment where my application is running, but with encryption enabled, replication enabled, and more containers for the front end. The only difference is, as mentioned previously, I didn't do a canary deployment. I just deleted and replaced the containers using Flux, basically updating the application in place. Okay, so I think I'm done with the demo. Are there any remaining questions at all? Great, great demo, by the way. Thank you so much. While we wait for the audience questions to hopefully come in — essentially, now is the time for you to ask your questions.
Well, you could have asked them throughout the whole session as well, but now is your time, audience. So let's get those questions in. And here's the first one: where can I get this example on GitHub? Okay, we can post it. I can give you the link and then you can post it on the webpage where we put all the information related to the talk. I'll make sure to post all the different links to the repositories there. Great, perfect. Then Carlos continued: I like the Marvel app. Yes, it was a really nice demo — I agree with Carlos there. Keep the questions coming in; we have about seven minutes for questions, so there's plenty of time. Type away. But while we wait for the typing to start, I have a few questions as well. So, there are a lot of moving parts in CI/CD pipelines. How do you select the right tools? Yeah, so I would say: to find the right tools, you don't have to automate everything. Depending on your use case and how you want to improve your current processes, you can focus on some of the areas I've mentioned today. Maybe for you it's just that you want to be more agile when developing your application and testing on your local laptop — then pick up Skaffold, or try other tools with the same kind of qualities, to deploy from your laptop. If you're more focused on automating deployment to production or staging, then maybe focus more on Tekton — and I love Tekton because, as I said, there is this marketplace, Tekton Hub, which lets you, in five minutes, just use a container and do the task you need to do. So if what you want to automate is really related to your production, then maybe focus on Tekton. If what you want to add is more security, more compliance, then focus on policy as code. One at a time — use those building blocks in an isolated fashion.
And once you're happy with them, you can start combining them together. I mean, I didn't build this demo all at once. I started with the laptop part, then I added Tekton, then I did some things around policy as code, and then I started putting them together. So don't try to do everything at once — that would be my advice. Makes a lot of sense. So there are a lot of questions now; let's get through them. Carlos continued: which page will you add the GitHub example to? So, what page? How can we communicate this to the attendees? What would you recommend? I mean, if you have a page you can share right now, you can post it to the private chat that we have here on the production side, and then we can send it to the attendees via chat. Or, if you don't have it ready right now, there's a Slack channel where you can send it later on. Yeah, so there are multiple solutions. I'm going to link in the chat here my GitHub repository — this is where you will find all the repos I've used today, with their names. So if you watch the video on demand, you can pick up the names of the different repositories; this is where you will find all the code. And if you want to connect on Slack, you can also join the Ondat Slack — let me make sure I'm not mistaken — yes, the ondat.slack.com workspace, which I'm going to post here; you can find me there if you want, and I'll make sure to post the links there. If you want to talk to me, you can also find me on the CNCF Slack, I'm there. My name again is Nicolas, if you want to reach out with any other questions. Yeah, I think that should be good. That's good. And then we have four or five questions to go through in three minutes.
So let's be quick about it. Perfect — we have the resources ready. Next question: Tekton is an alternative CI/CD technology, correct? Being cloud-native to Kubernetes — thanks beforehand. Yeah, it is; Tekton is a CI/CD solution. What is specific to Tekton is that, as opposed to, you know, GitHub Actions or CircleCI or anything else, it runs itself inside Kubernetes. Every action you do in Tekton is a container, right? That's the only difference, but it is effectively a CI/CD solution. Perfect. Next: do you recommend taking the CI/CD pipeline images used in the tasks, copying them, scanning them, and hosting them yourself? The best thing is just to keep your remote registry up to date, right? Whether it's Docker Hub or, you know, a GitHub registry — store your images there. Yeah, do that, I guess it's better. On your laptop you can use a local registry and deploy your local images there as well, but I guess it's better just to push your images to a registry somewhere, right? Great. And then Robbie asked: what would you change to make this demo multi-cluster? Is it just adding additional clusters and Flux, and how does it happen asynchronously? Yeah, exactly. Multi-cluster would mean having different clusters configured to listen to that particular repository where you have your manifests. You would maybe have different overlays for different clusters, and then on every cluster you would have Flux — or it can be Argo, whatever — monitoring that particular repository. Yeah, that's definitely how I would do it. Perfect. And then the... Yes, just to add: it is asynchronous, meaning that when you define Flux, you tell Flux the interval time — it's polling, like, every 10 minutes. That is the default, but you can change it if you want. Perfect.
And then, with one minute left or a bit less, the last question of today: I always have difficulty setting up a graceful exit for stateful applications, mostly for DBMS types. What would you suggest for killing a stateful pod? To kill a stateful pod — normally, if you use an operator, you can scale down to zero, right? If you want to delete the pod, scale to zero; that is the best way to delete your pod. Don't delete it manually. If you trigger the operator to scale down the StatefulSet, then you make sure it's done properly. Yeah. Perfect. That was it for today — we are right on time with the one-hour mark. So thank you, everyone, for joining the latest episode of Cloud Native Live. It was great to have a session about Kubernetes stateful applications, and thank you for being such a great speaker as well. We also really loved the interaction and questions from the audience. And as always, we bring you the latest cloud-native code every Wednesday, so next week we'll have another great session coming up. Thanks for joining today, and see you next week. Thank you. Bye.