Hello, everyone. My name is Anand Francis, and I'm here to give a talk on code to deployment, along with my co-speaker Savita. With the exciting growth of developer tools, we can now boost developer productivity by enabling developers to deploy code to production quickly and securely. Today we'll look at an approach built on a stack of Tekton Pipelines as Code and Argo CD as the deployment tool, and see how that stack can be leveraged to ease the path from code to production. I'll start with a quick introduction about myself. I'm Anand Francis, a principal software engineer at Red Hat. I work on the Argo CD community project, where we focus on improving the performance and scalability of Argo CD. With that, I'll hand the mic over to Savita.

Hello, everyone. Welcome to the session, and thanks for joining us. I'm Savita, and I also work for Red Hat. I contribute to an upstream project called Tekton. There are multiple sub-projects in the Tekton org, and we try to contribute wherever possible. You can reach out to us on the upstream Slack channels — Kubernetes, Tekton, Knative — and on Twitter and LinkedIn.

All right, this is pretty much the agenda for today. I'll cover what cloud-native CI/CD is, with a brief intro, and then introduce Tekton. As I mentioned, Tekton is a big org with multiple sub-projects; in this session we're going to talk about Pipelines as Code — an overview of what it lets us do and how it helps us follow security best practices in our CI/CD. Then I'll share some of the comparative study we've done from our day-to-day use cases, and I'll also talk about Tekton Chains.
With Tekton Chains, we'll see how we can attest our CI/CD workloads. Then Anand will talk about the Argo CD part: how the CD side comes into play and how we can achieve it using Argo CD, and then the Argo CD Image Updater, which lets changes be deployed to the cluster dynamically. Finally, we'll talk a little about policy: how we can ensure security across the end-to-end flow using the Sigstore policy controller. And at the end we have a demo — we've recorded it because there are multiple components involved. Finally, we can take questions if time permits.

All right, let's move on. All of us know about CI/CD, so I won't go deep into that; I'm mostly interested in cloud-native CI/CD. We call a CI/CD system cloud-native if it obeys principles like running your workloads in containers on an orchestrator, supporting auto-scaling, not requiring a dedicated team to operate it, and following DevOps principles. If a tool, platform, or product obeys all of these, we can simply call it cloud-native CI/CD. Just as a comparison: traditional CI/CD is mostly designed for virtual machines, while cloud-native CI/CD is designed for orchestrators. With traditional CI/CD you need an ops team to handle maintenance, auto-scaling, and so on, whereas with cloud-native CI/CD, since it runs on an orchestrator, all of that comes built in. I've listed a couple of these points here. Now, why did we pick Tekton? Because it obeys all the principles of cloud-native CI/CD and is popularly used — mainly because it's very native to Kubernetes. That's one of the major reasons people are moving towards Tekton for CI/CD.
Tekton is a completely open-source project whose major focus is building CI/CD for Kubernetes platforms, and it is a project under the CD Foundation. As I mentioned for cloud-native CI/CD, Tekton is built for Kubernetes and scales on demand. It also supports securing your end-to-end pipeline through multiple plugins, and it is very flexible: you can plug components in and out wherever you want, because it is made of reusable entities.

Every project has its own core concepts, right? In Kubernetes, if you know Pods, Deployments, and Services, you can write your own application and deploy it on Kubernetes. Similarly, Tekton has its own core concepts, which I've listed here: Step, Task, Pipeline, TaskRun, and PipelineRun. Rather than just listing them, I'll walk through them diagrammatically to picture how things fit together. In Kubernetes, the Pod is the basic unit; in Tekton, the Step is the basic entity. A Step runs as a container, so if you want to compare with Kubernetes, a Step is similar to a container. In Kubernetes, it's not best practice to run a bare Pod on its own; we always run it through a Deployment. Similarly, in Tekton, Steps cannot run individually; they have to be grouped into an entity called a Task. A Task is a template that can contain one or more Steps, and each Task does one particular piece of work. I'll also tell you why I've divided the work into three Tasks — one for cloning, one for building, and one for deploying: it's because Tekton is built around reusability.
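As a rough sketch of the Step and Task concepts just described — a single Task whose Steps each run as a container (the task name, images, and commands here are illustrative, not from the talk's actual repository):

```yaml
# A minimal Tekton Task: a reusable template of one or more Steps.
# Each Step runs as its own container inside the TaskRun's pod.
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: static-checks        # hypothetical task name
spec:
  workspaces:
    - name: source           # shared location holding the cloned code
  steps:
    - name: vet
      image: golang:1.22     # illustrative builder image
      workingDir: $(workspaces.source.path)
      script: |
        go vet ./...
    - name: lint
      image: golangci/golangci-lint:v1.59   # illustrative linter image
      workingDir: $(workspaces.source.path)
      script: |
        golangci-lint run ./...
```

A TaskRun (or a PipelineRun referencing this Task) instantiates the template and supplies the workspace volume.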
What I mean is this: suppose we have a scenario where we build for Golang and also for Java. The build step changes, but the cloning and deploying parts stay the same. So in Tekton, if we divide the scenario into Tasks — cloning as one Task, building as another, deploying as a third — we can reuse those Tasks whenever we want. All of these can then be grouped together in an entity called a Pipeline. Again, it's a template that can contain one or more Tasks, and writing it as a Pipeline template helps us manage which Task runs when, and which Task runs only after the previous one has finished. And to actually run this template, we have a resource called a PipelineRun. Pipelines and Tasks are just static templates; they don't run anything by themselves. To instantiate them, we have the PipelineRun resource.

Moving on — as I mentioned, I'll be talking about Pipelines as Code today. Tekton has multiple sub-projects, and one of them is Pipelines as Code. It's basically an opinionated CI: its focus is to automate everything, so that when a user opens a pull request or pushes code, CI runs automatically and the user gets a status back on the source code management platform. That is the goal of Pipelines as Code. It works with different source code management systems — it can be integrated with GitHub, GitLab, and Bitbucket — and if your source code management system isn't directly supported, there is an incoming-webhook option as well. Some of the benefits: it follows the version-control mechanism; it helps with repeatability, since the same template can be reused across multiple Git repos; collaboration within the team is handled very well; and the major part of it is automation.
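To make the Pipeline and PipelineRun concepts from above concrete, here is a hedged sketch of a Pipeline that orders its Tasks and a PipelineRun that instantiates it (all names are hypothetical; `git-clone` is assumed to be a reusable clone task such as the one on Tekton Hub):

```yaml
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: build-and-push        # hypothetical pipeline name
spec:
  workspaces:
    - name: source            # shared between the tasks
  tasks:
    - name: fetch-repository
      taskRef:
        name: git-clone       # reusable cloning task (assumed installed)
      workspaces:
        - name: output
          workspace: source
    - name: build-image
      runAfter: [fetch-repository]   # only run after the clone finishes
      taskRef:
        name: build-image     # hypothetical build task
      workspaces:
        - name: source
          workspace: source
---
# The Pipeline above is a static template; this PipelineRun instantiates it.
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: build-and-push-run-
spec:
  pipelineRef:
    name: build-and-push
  workspaces:
    - name: source
      volumeClaimTemplate:    # workspaces are backed by volumes
        spec:
          accessModes: [ReadWriteOnce]
          resources:
            requests:
              storage: 1Gi
```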
By that I mean one complete flow with just a few clicks. Then there's scalability, which is essentially built in because we are on a container orchestrator, and change tracking, because we are using source code management.

All right, moving on — where should the templates live for Pipelines as Code? Pipelines as Code always looks for a folder called .tekton inside your Git repository. Here is the comparison we did with GitHub Actions. There are multiple solutions in the market that can achieve this, but why did I pick GitHub Actions? Because before starting with Pipelines as Code, we used to have GitHub Actions integrated with our GitHub projects in Tekton, and we would write workflows — say, creating a kind cluster, running E2E tests, and so on. But that only works for GitHub; if I wanted the same for GitLab, I would need to go for another solution. That's where Pipelines as Code helps us out of that scenario: one single platform for all the Git providers. Just to give a brief picture of the flow: we have a Git platform, the user opens a pull request, the platform runs the CI, and finally the status is reported back to your Git repo. Some of the best practices of CI are covered well by Pipelines as Code: we can cover the policy part; we can handle token generation for private and public Git repositories; and we can even control requests at a granular level — who can send a pull request, who can trigger CI, and so on.

All right, on to Tekton Chains — we are talking about a policy controller today, and Tekton Chains is the module in the Tekton org that does the CI side of that: signing and attestation of the payload, signing of the image — all of it can be handled using Tekton Chains.
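Tekton Chains is configured through a ConfigMap in its own namespace; a hedged sketch of what such a configuration might look like (the exact values depend on your registry and signing setup — treat the specifics below as assumptions, not the demo's actual config):

```yaml
# chains-config lives in the tekton-chains namespace and controls
# how Tekton Chains signs and attests completed TaskRuns.
apiVersion: v1
kind: ConfigMap
metadata:
  name: chains-config
  namespace: tekton-chains
data:
  artifacts.taskrun.format: in-toto   # emit in-toto/SLSA provenance
  artifacts.taskrun.storage: oci      # store attestations alongside the image
  artifacts.oci.storage: oci          # push image signatures to the registry
  transparency.enabled: "true"        # also record in a transparency log (Rekor)
```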
Chains follows SLSA Level 2, which is the next level up from the basics. It always watches for TaskRun completion: once a TaskRun finishes, if it built any images, Chains tries to sign them, and that signed image is what we use later for deployment. All right, I'll hand over to Anand; he'll talk about Argo CD.

So far Savita has covered the CI part: from code to image, that is covered by the CI tool. Now let's look at how we take it to production. The tool we're going to explore here is Argo CD. It's quite a popular tool, widely used, and it follows the GitOps principles. What do we mean by GitOps? GitOps defines four principles. We start with the desired state being stored in a Git repository. This could be any Git implementation, but the desired state has to be stored in versioned storage. Then we have the cluster state and an agent that observes it. The agent not only observes the state but compares it with the desired state defined in the Git repository and constantly reconciles the two. The four principles revolve around this GitOps engine. The first is the declarative nature: GitOps can be applied only to systems that support declarative configuration, and Kubernetes, being a declarative system, is well suited to it. The next is the storage, which must be versioned and immutable — that's why we choose a Git repository. The third is about the agent: changes are pulled automatically, meaning there is no manual intervention; as soon as you deploy the agent, it starts looking for changes in the Git repository and deploying the manifests. And the last is continuous reconciliation: any manual change made to the cluster state should be reverted.
Whatever is defined as the source of truth in Git is applied on a continuous basis. There are two popular implementations that follow these GitOps principles: one is Argo CD and the other is Flux CD.

This is a typical architecture for Argo CD. The core components are shown in the Kubernetes block. We have the repository service, which handles interaction with the Git repository. The user creates an Argo CD Application, in which they specify where the Git repository is located and which revision should be deployed. The repository service takes care of fetching the latest manifests from the Git repository. Next we have the application controller, which is the agent that does the continuous reconciliation. The rest are supporting components: the API server, for example, is used for interacting with the UI and the CLI. Argo CD supports deployment to multiple clusters — there is a concept of destinations, where you can have multiple destinations and select which cluster a particular manifest should be deployed to.

The next tool we're going to talk about is the Argo CD Image Updater, which is used for decoupling CI and CD. Generally there are two repositories: one for the source code and another for storing the Kubernetes manifests. To decouple them, we use the Image Updater. It's a controller that checks the container registry for new images that could be deployed onto the cluster, and you can define rules for what counts as the latest image that should be deployed. That way, you don't have to update the Git repository containing the Kubernetes manifests for every incremental image update — that is the advantage of the Argo CD Image Updater.
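A hedged sketch of an Argo CD Application wired up with Image Updater annotations (the repository URLs, image name, and alias are placeholders; the annotation keys are the Image Updater's standard ones):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
  annotations:
    # Watch this image in the registry for new tags.
    argocd-image-updater.argoproj.io/image-list: app=quay.io/example/my-app
    # Treat the highest semantic version as "latest".
    argocd-image-updater.argoproj.io/app.update-strategy: semver
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app-manifests  # manifests repo
    targetRevision: main
    path: deploy
  destination:
    server: https://kubernetes.default.svc   # in-cluster destination
    namespace: my-app
  syncPolicy:
    automated:        # sync without manual intervention
      prune: true
      selfHeal: true  # revert manual changes to cluster state
```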
Then, in the Argo CD Application, there is a concept called the sync policy, where you define whether syncing happens automatically or manually. With automatic sync there is no manual intervention: as soon as a newer image is available in the container registry, it is applied automatically. If you want some control over what gets deployed on the cluster, you can go for the manual approach, where the Argo CD Image Updater tells you a new image is available and you decide whether to pick it up or not. So the sync policy in the Argo Application gives you that control.

These are some of the update strategies — that is, how we decide that an image is the latest and should be deployed. The first is semver, semantic versioning: with versions of the form X.Y.Z, you can define a constraint such as X.Y. If an image is pushed with tag 1.2 and a new version 1.3 appears, 1.3 is treated as the latest. The latest strategy is based on the timestamp: whichever image was pushed most recently is treated as the latest. Digest applies when you use a mutable tag such as latest: multiple images can be pushed with the same tag but different SHA digests, and if you want to deploy based on the newest digest, you can go for that option. The last one is name, which is useful when your tagging scheme is based on a date or timestamp: the tags are sorted alphabetically and the latest one is taken.

There are two update methods in the Argo CD Image Updater. The first is updating the Argo CD Application directly, which is not a persistent solution — it is temporary, and if Argo CD crashes, you lose the changes made by the Image Updater.
The second option is persisting the image update in Git itself. Then, even if Argo CD crashes, when it comes back it knows there was a newer image and picks it up.

The last piece is Sigstore. Sigstore provides a set of tools for signing and verifying images. Once we start pushing images to the container registry, we want some control over what gets deployed on the production cluster, and the mechanism we use is signing and verification: you sign the images on the CI side and verify them during deployment, so you know that what you built is exactly what is being deployed, and nothing else. These are some of the Sigstore components. The first is Cosign, a CLI tool used for signing and verifying; Savita will cover it as part of the demo, where we use Cosign for signing in the CI process. The policy controller is what lives in the actual production cluster; there, you verify whether an image is signed. The remaining components are used for what's called keyless signing: if you don't want to manage your own keys, you can opt for that approach, where the keys are managed through a central certificate authority using OIDC. We won't be covering keyless signing in the demo; we'll use key-based image signing to verify the images during deployment. I'll hand it over to Savita for the demo.

All right. In the demo, a user writes a pipeline that clones the code, stores it in a workspace, and runs some static checks — linting, vet, and so on. Once that is done, we go for the build: in the build step we build the image and push it to the registry. This part we call the CI.
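The key-based verification mentioned above is enforced on the deployment side by the Sigstore policy controller through a ClusterImagePolicy; a minimal hedged sketch (the image glob and public key are placeholders, not the demo's actual values):

```yaml
# Admission-time policy: images matching the glob must carry a
# Cosign signature that verifies against the given public key.
apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: signed-images-only     # hypothetical policy name
spec:
  images:
    - glob: "quay.io/example/**"   # placeholder image pattern
  authorities:
    - key:
        data: |                    # placeholder Cosign public key (PEM)
          -----BEGIN PUBLIC KEY-----
          ...
          -----END PUBLIC KEY-----
```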
Once that's done, on the deployment side that image will be used. Before deploying, we verify the image — whether it was signed by a trusted party or not. If it wasn't, we reject it straight away and don't deploy; if verification succeeds, we deploy it to the cluster using Argo CD. Now, who did the signing here? That's where Tekton Chains plays its role: during the build process, the Tekton Chains controller keeps monitoring for the build task to complete. Once it completes, Chains picks it up and does the attestation and signing. I'll quickly show this in the demo.

All right, we've recorded the demo because it involves multiple components. Before that, let me show the setup. This is the project repo we've written. For time's sake, we have already installed Tekton Pipelines, Chains, the Dashboard, and Pipelines as Code, and we've put the setup instructions here; everything is created for Argo CD as well. If I go and show `get pods` — yes, Tekton Pipelines — all the pods are up and running. One thing to note: Tekton involves multiple projects, and deploying each one manually is hectic. Instead, we can go with the operator, which installs all the Tekton modules with a single release YAML.

OK, once that's done, I'll go back to the recorded demo. Everything is initially installed via the operator. Now what am I doing? I'm raising a pull request. You can see here in this repo, I'm sending a pull request with a deliberate mistake in it. After the pull request, the CI should get triggered. But let me go back — when I first opened this pull request, the CI did not trigger, right?
The CI did not trigger because there was nothing in the cluster to say where the request was coming from. Let me quickly go back to the documentation. This is the Repository CR — the custom resource we have in Pipelines as Code. When this custom resource is installed in the cluster and a pull request is sent from the corresponding project, it's what lets the controller know that the event is coming from that project, and in it we specify the URL the events come from. That was missing initially, so I create the Repository CR — you can see here it wasn't there before, so I'm creating the Repository for this KubeDay India project. Once the Repository is created, we can go back to the repo. One way to retrigger is to close the pull request and reopen it; another is to send a `/retest` comment, which starts the CI for us. And you can see here — the CI starts within a second.

Now, I just wanted to cover one more thing. As I mentioned, Pipelines as Code always looks for the .tekton folder. In this .tekton folder I've kept two YAMLs: one pull-request based, one push based. Whenever a pull request comes to your repository, only the pull-request PipelineRun gets triggered; for a push, the other one. How is that distinguished? One way is through annotations. Another way is through CEL expressions, which PipelineRuns support. I've specified clearly: run this PR PipelineRun only if the event is a pull request, and always target the branch called main.
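The two pieces just described — the Repository CR that registers the repo URL with the Pipelines as Code controller, and the event matching on the PipelineRun — might look roughly like this (the URL and names are placeholders; both the annotation form and the equivalent CEL form are shown):

```yaml
# Registers a Git repository with the Pipelines as Code controller,
# so events arriving from this URL are matched to this namespace.
apiVersion: pipelinesascode.tekton.dev/v1alpha1
kind: Repository
metadata:
  name: my-repo                # hypothetical
  namespace: ci
spec:
  url: https://github.com/example/my-repo   # placeholder repo URL
---
# .tekton/pull-request.yaml — triggered only for pull requests to main.
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: pull-request
  annotations:
    # Annotation-based event matching:
    pipelinesascode.tekton.dev/on-event: "[pull_request]"
    pipelinesascode.tekton.dev/on-target-branch: "[main]"
    # Equivalent CEL-based matching (either style can be used):
    # pipelinesascode.tekton.dev/on-cel-expression: |
    #   event == "pull_request" && target_branch == "main"
spec:
  pipelineRef:
    name: build-and-test       # hypothetical pipeline
```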
And if we want a PipelineRun to handle both pull requests and pushes, we can specify something like this as well. One more thing I want to bring up about this task: I don't have this task installed anywhere in my cluster, so when the PipelineRun starts, how does the task get resolved? If we reference a task like this, Pipelines as Code will try to download it from the Tekton Hub. What is the Tekton Hub? Just as Docker Hub is a registry for Docker images, in Tekton we have many reusable tasks that can be shared by multiple folks, so a hub is hosted where all those tasks are kept. By default, if you don't specify any path, it tries to download from the Hub, but we have the flexibility to specify the location to fetch the task from — it can be a private registry, your local cluster, or another Git repository. And this is how the tasks are written: one task, a second task, with the task reference specified as a ref.

One more thing I want to mention: how do I ensure that the static checks — I mean linting — happen only after the code is cloned? Obviously, if I don't have any code, what would I lint? The runAfter field is the entity that helps us design our pipeline — which task runs after which. And we can share data between tasks using workspaces; a workspace is essentially like a volume in Kubernetes.

All right, going back: the pull request CI has started, and we can easily view the running PipelineRun on the Tekton Dashboard, including each step as it executes. Initially it will fail, because I made that mistake while doing the PR, right?
We can also see the logs for why it failed: it's because I haven't handled an error. So what I'll do now is go and edit that. Before that, I just want to mention — as I said, Pipelines as Code reports the status back to GitHub, and this is how it looks: why it failed, what the reason was, how many steps executed successfully and how many didn't. The reason for the failure is easy to see right here.

Now I'll go back and edit the file to correct the mistake I made, and re-push the changes. Once I re-push, my PipelineRun triggers again, and this time the CI succeeds. With CI green, let's merge the pull request. After merging, the next PipelineRun should start — the push-based one. How can I check that? If I go back to the repository, there should be some indication of whether CI started: you can see this in-progress check here, which indicates that CI has started and is running. Let me quickly show the steps. This is the CI after the push: it has fetched the repository successfully, building the image succeeded, everything succeeded, and in the final step the image has been pushed to the registry. Now, how do I verify that it was pushed, signed, and attested? I can go to my Quay registry and see that the image was pushed a few seconds ago, and this badge indicates that it is signed by Cosign via Tekton Chains.

All right. Once the signed image is there, we can move on to the Argo CD part.
Let me quickly show that. When we deploy, it initially fails for me because I supplied the wrong public key. But when I update the cluster's Sigstore policy with the correct key, it works, and I'm able to see the deployment. So I'm editing the ClusterImagePolicy and replacing the public key with the right one. Once I've edited the public key — as Anand mentioned, Argo CD constantly watches for changes in the Git repository or the cluster and tries to sync — you can see that within a fraction of a second it picks up the updated changes and syncs. That means it can see the right public key, and it has started creating the application. All right. Do you want to add something? Yeah.

A quick note on how you can add multiple environments: you can use the same Argo CD Image Updater, but with different image tags. The first is an integration environment always looking at the master builds; the next is a candidate build that you want to promote to staging; and the last is the final released build image. So you can have different tags and use the Argo CD Image Updater to do the automatic deployments.

Thanks, everyone. This QR code is for developers.redhat.com, where there are a lot of resources for understanding the developer tools maintained by Red Hat. Thanks for listening patiently. Thank you.