So, a very good afternoon, everyone. Welcome once again to KCD Chennai 2022. I am Avi, and today I'm going to present a talk on creating serverless CI/CD on Kubernetes with GitOps. This talk will be beneficial for those who are trying to understand what serverless means on Kubernetes and how to set up CI/CD pipelines on Kubernetes for deploying serverless applications following GitOps principles. A bit about myself: I'm currently a software engineer intern at Red Hat India. I contribute to projects like OpenShift, Tekton, Keptn, etc. I like to engage with new people and contribute to various projects every month. You can follow me on Twitter, LinkedIn, or GitHub. I also write monthly Medium articles, so you can check those out too. Now, let's understand what we mean by serverless. Serverless is an abstract term and is not to be taken literally. In the same way that cloud computing does not involve the sky, serverless computing does not mean that the code is executing without servers. Basically, it refers to the experience a user or a customer has, and it represents a continuum of how closely one needs to work with hardware and infrastructure. In simple terms, it means there is much more focus on code. It's an event-driven architecture, and it's easier to manage because it's stateless and ephemeral. So, we see a gradual decrease in the concern for infrastructure and a gradual increase in the decomposition of the workloads. Now, let's understand the different categories of products you will find when exploring the serverless ecosystem. We see two categories, mainly: serverless 1.0 and serverless 2.0. Serverless 1.0 revolves around the products launched by the various cloud providers, such as AWS Lambda from Amazon or Azure Functions from Microsoft Azure.
So, the main issue with these kinds of services is that they are not portable enough, because it becomes difficult to transfer applications from one cloud to another. This is because the function signatures are different, and the way the zip file is constructed is also different for different clouds. This problem of portability is solved in serverless 2.0 applications because they follow a universal definition. That is, the applications are packaged as OCI-compatible container images. Each image exposes an HTTP server, which can be configured with the help of environment variables. Thus, any application written in Python, or shipped as any other binary, can be moved between serverless 2.0 platforms like OpenFaaS or Knative with ease. So, now let's move on to one of the applications we are going to use in our demonstration, that is, Knative. Knative is an open-source, enterprise-level solution for building serverless and event-driven applications. Originally developed by Google, Knative now has contributors from IBM, Red Hat, and VMware. The main purpose of using Knative is to deploy serverless containers on top of Kubernetes. Currently, Knative has two different components. The first one is Knative Serving, which is responsible for rapid deployment and auto-scaling of serverless containers. And of course, we have Knative Eventing, which is responsible for enabling developers to set up event-driven architectures. Now, let's understand the architecture of Knative. We have the compute cluster, on top of which we have Kubernetes installed. Before installing Knative, we need to have a networking layer. In this example, we are using Istio, but you can use other options like Contour, Kourier, etc. Then we install Knative itself. With the help of Knative, we will be able to deploy applications on Kubernetes without having to write the Deployment or Service YAML files. How does this work?
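To keep the upcoming walkthrough concrete, a minimal Knative service file might look roughly like this; the names, image reference, and environment variable here are illustrative placeholders, not the exact demo files:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: go-sample-app          # placeholder service name
  namespace: default
spec:
  template:
    spec:
      containers:
        # Image reference (with tag) that the CI pipeline will keep updated.
        - image: docker.io/example-user/go-sample-app:v1
          env:
            - name: TARGET     # illustrative environment variable
              value: "World"
```

Applying this one file is enough for Knative to create the underlying Deployment, Service, and autoscaling resources.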
So, for that, we have the diagram on the right. The most important thing in Knative is the Knative service file. In this service file, we have to mention the container image that should be deployed, as well as the environment variables, followed by the parameters. Whenever there is any change in this file, a revision is created, which is a snapshot of the application code. Now, whenever a new revision is created, the traffic is routed to this new revision, but we can still control the traffic split with the help of the Knative service file. Thus, with the help of a single YAML file, we are able to deploy applications on Kubernetes without having to write the standard YAML files that are otherwise required for deploying applications. So, we have understood how to deploy serverless applications on Kubernetes with the help of Knative. But now, there is a need to create a CI/CD system to automate this process of deployment. Here comes Tekton. Tekton is a powerful open-source, cloud-native framework for creating CI/CD systems. Originally, it was known as Knative Build, but it spun off as a separate project and is now one of the incubating projects of the CD Foundation. The main feature of Tekton is that it is cloud-native, so it can be deployed on any Kubernetes cluster across multiple hybrid cloud providers. Tekton is made up of two main components: Tekton Pipelines and Tekton Triggers. Tekton Pipelines is responsible for providing Kubernetes-style, YAML-based resources for declaring CI/CD pipelines. Tekton Triggers helps you create Kubernetes resources based on the information in event payloads. So, now let us see the architecture of Tekton Pipelines. The smallest resource in Tekton Pipelines is known as a step. A step is a job that you want to automate. A collection of steps makes a task, and a collection of tasks makes a pipeline. Now, to run a task, you need to have another resource known as a TaskRun.
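The step/task/TaskRun hierarchy just described can be sketched as follows; the task name and image are illustrative:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: echo-task              # placeholder task
spec:
  steps:                       # a task is a collection of steps
    - name: say-hello
      image: alpine
      script: |
        echo "Hello from a Tekton step"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: echo-task-run          # a TaskRun executes the task once
spec:
  taskRef:
    name: echo-task
```
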
Similarly, to run a pipeline, you need another resource known as a PipelineRun. Now, let us look at the architecture of Tekton Triggers. Tekton Triggers also has various resources. The most important one is the EventListener pod. This pod is continuously listening for events coming from outside the cluster. Whenever it receives an event, it sends the information to a TriggerBinding. The TriggerBinding extracts the important information and sends it as parameters to a TriggerTemplate. The TriggerTemplate in turn fires off a PipelineRun, which is in turn responsible for starting the pipeline. Thus, with the help of Tekton Triggers and Tekton Pipelines, we are able to achieve a full CI/CD system. So, now let us understand what we mean by GitOps and Argo CD. GitOps is a way of implementing continuous deployment for cloud-native applications. It focuses on a developer-centric experience when operating infrastructure, with the help of tools that developers are already familiar with, including Git and other continuous deployment tools. Argo CD is a tool that follows the GitOps pattern of using Git repositories as the single source of truth for defining the desired application state. The main feature of Argo CD is that it can automate the deployment of applications to specified target environments. So, now let us understand the CI/CD workflow of the demonstration we are going to see. Before that, let us understand what we are trying to achieve. Whenever a developer pushes some new code changes to the application repository, they should see the latest code changes reflected on the website UI. For that, we need to create the CI/CD system. First of all, whenever the developer pushes application code to the application repository, a webhook will be triggered, which will inform the event listener running inside the Kubernetes cluster that an event has occurred in the application repository.
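The trigger resources described above might be wired together roughly like this; the resource names, service account, parameter names, and pipeline name are illustrative assumptions, not the exact demo manifests:

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: github-listener            # receives the GitHub webhook
spec:
  serviceAccountName: tekton-triggers-sa   # assumed service account
  triggers:
    - name: on-push
      bindings:
        - ref: github-push-binding
      template:
        ref: github-push-template
---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: github-push-binding        # extracts fields from the webhook payload
spec:
  params:
    - name: git-repo-url
      value: $(body.repository.clone_url)
    - name: git-revision
      value: $(body.head_commit.id)
---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: github-push-template       # instantiates a PipelineRun per event
spec:
  params:
    - name: git-repo-url
    - name: git-revision
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: ci-run-
      spec:
        pipelineRef:
          name: ci-pipeline        # placeholder pipeline name
        params:
          - name: repo-url
            value: $(tt.params.git-repo-url)
          - name: revision
            value: $(tt.params.git-revision)
```
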
The event listener will then pass the payload to the trigger binding. The trigger binding in turn will extract the important information, like the clone URL and the commit ID, and pass it to the trigger template. The trigger template in turn will use this information as parameters for the PipelineRun. And as we know, the PipelineRun is responsible for running the pipeline. The pipeline will be running the three tasks listed here. The first task will be cloning the source repository; that means the application repository will be cloned inside the Tekton environment. After that, since the application repository contains a Dockerfile, we will be able to build a Docker container from it and push that container image to Docker Hub. The third task will be fetching the latest GitOps configuration details from the GitOps configuration repository. This repository contains the Knative YAML files as well as other configuration files. After fetching these files, our task will be to update the Knative service YAML so that it now contains the information about the latest Docker container that has been pushed to Docker Hub. After this change is made, the updated file is pushed back into the GitOps configuration repository. This entire process must happen automatically. On the other hand, Argo CD is constantly watching the GitOps configuration repository. Whenever it sees there is a change in the files in the configuration repository, it understands it is out of sync. So, it fetches the latest files from the repository and then applies them to the Knative environment running inside the Kubernetes cluster. So now, with the help of the Knative service file, which has been updated with the latest Docker image, the application developer can see the latest code change at the application URL.
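The image-tag update in that third task can be sketched as a single Tekton step that rewrites the Knative service file with yq; the file path, workspace name, YAML path, and parameter name are assumptions for illustration:

```yaml
- name: update-image-tag
  image: mikefarah/yq:4              # yq v4 image from Docker Hub
  workingDir: $(workspaces.source.path)
  script: |
    # Point the Knative service at the freshly pushed image.
    # NEW_TAG is the tag the build task just pushed.
    yq e -i \
      '.spec.template.spec.containers[0].image =
       "docker.io/example-user/go-sample-app:" + strenv(NEW_TAG)' \
      service.yaml
  env:
    - name: NEW_TAG
      value: $(params.image-tag)
```

A later step in the same task then commits and pushes the modified `service.yaml` back to the GitOps repository.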
So now, let's explore the codebase to understand how the configuration files for managing Knative, Tekton, and Argo CD are written. I've already started a two-node EKS cluster on Amazon with the help of the eksctl command line tool. Currently, you can see my VS Code screen, in which all the configuration files as well as the application code are present. Let's browse through each file. First, let's see the application code. Here we have a simple Golang application with a main.go file. In this main.go file, we have a hello world message that is going to be printed on the website. Apart from this, we also have a Dockerfile. This Dockerfile will be used to create the Docker containers in our CI/CD pipeline. Next, we have the Knative service YAML file. In the service YAML file, the most important thing is the container image. Here, you can see it is pointing to the go-sample-app image present in my Docker registry. Apart from this, you can also see a Docker tag mentioned. This is important because we're going to update this Docker tag automatically with the help of the CI/CD pipelines in the further steps. Finally, let's see the configurations for the Tekton pipelines and triggers. First, let's start off with the explanation of the task configuration. Here we can see there are two different files for two tasks. But according to the diagram before, we have three different tasks. The first task, that is, the cloning of the Git repository, is a very common task and is present in almost all pipelines. Therefore, rather than writing the task manually, I have downloaded it from Tekton Hub. Tekton Hub is a marketplace from which you can download pre-created tasks and use them directly in your pipeline. From there, I have downloaded this git-clone task and installed it with a simple command. After installing this task, I am able to use it directly in my pipeline.
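Once installed from Tekton Hub, the git-clone task can be referenced from a pipeline by name; a fragment might look like this (the pipeline name, parameter names, and workspace name are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: ci-pipeline              # placeholder pipeline name
spec:
  params:
    - name: repo-url
      type: string
  workspaces:
    - name: shared-data          # workspace shared between tasks
  tasks:
    - name: fetch-repository
      taskRef:
        name: git-clone          # the task installed from Tekton Hub
      params:
        - name: url
          value: $(params.repo-url)
      workspaces:
        - name: output
          workspace: shared-data
```
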
Apart from this task, I have written this builder task. The builder task is responsible for building, tagging, and pushing the Docker container to Docker Hub. It contains multiple steps. The first step is building the Docker container with the help of the Dockerfile already present in the application repository. Once the Docker container is built, it is tagged in the second step. The final step involves pushing the Docker container to Docker Hub. So, with this task, I am able to build, tag, and push the Docker containers to Docker Hub. The final task involves updating the GitOps configuration repository with the latest code change. This task also consists of three steps. The first step involves checking out the code from the GitOps configuration repository. The second step involves updating the Knative service YAML file with the Docker tag of the new container that has just been pushed to Docker Hub. We achieve this with the help of a utility known as yq. The final step of this task involves pushing the latest code changes back to the GitOps configuration repository. For this, we have already set up SSH keys so that the commit can be made automatically. Next, let's see the configuration for the pipeline resource. For that, we have defined the pipeline.yml file. In this file, we specify all the configurations required to set up the pipeline resource. We already know that the pipeline consists of multiple tasks, so in this file, specifically, we mention the tasks that are going to be executed in this pipeline. You can see here, the first task is the fetch-repository task. This task is using the git-clone task we downloaded from Tekton Hub. The second task is the build-and-push task, which is using the builder task we defined before.
And the final task is the update-docker-tag task, which is using the update-gitops-repo task we defined before. These three tasks are going to run sequentially, one after the other. To ensure this, we have used the runAfter parameter in this file. Finally, we are able to create the pipeline that sets up the continuous integration process for deploying our application. So, until now, we have set up the Tekton pipelines. But to automatically trigger these pipelines whenever there is a code push to the application GitHub repository, we need to set up the Tekton triggers. For that, we first set up the event listener. Whenever a user pushes some new code to the application GitHub repository, a webhook will be triggered. This webhook will send the information about the latest commit to the event listener. Now, the event listener has to understand this commit, and for that, we are using the GitHub interceptor. Once the event listener receives the payload, it will send that payload to the trigger binding. The trigger binding in turn will extract the important information from the payload, that is, the clone URL and the commit ID. Once it extracts this information, it will send it as parameters to the trigger template. The trigger template is responsible for starting off the pipeline with the help of the PipelineRun resource. Here you can see, we are defining the PipelineRun resource and also passing the parameters that are required to run the pipeline. Thus, with the help of Tekton Triggers as well as Tekton Pipelines, we are able to create the CI platform. Whenever a user pushes some code changes, the Tekton pipelines will run automatically, building and pushing the container to Docker Hub and also updating the Knative service YAML file with the latest Docker image tag. So, finally, let's see the configuration for Argo CD.
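An Argo CD Application of the kind used here can be sketched as follows; the repository URL, path, and namespaces are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: go-sample-app          # placeholder application name
  namespace: argocd
spec:
  project: default
  source:
    # The GitOps configuration repository Argo CD watches.
    repoURL: https://github.com/example-user/gitops-config.git
    targetRevision: HEAD
    path: .                    # directory containing the Knative service YAML
  destination:
    server: https://kubernetes.default.svc
    namespace: default         # where the Knative service is applied
  syncPolicy:
    automated:
      prune: true              # remove resources deleted from Git
      selfHeal: true           # revert manual drift in the cluster
```
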
For Argo CD, the configuration is very simple. The most important thing is the repository URL that Argo CD will continuously watch for any code change. Apart from this, we can also mention the destination namespace in which Argo CD will install the Kubernetes application, as well as some of the sync policies and sync options. Therefore, by applying all these configuration YAML files inside the Kubernetes cluster, we are able to successfully configure Knative, Tekton, and Argo CD to set up the entire serverless CI/CD system for deploying applications on Kubernetes. So, finally, let's see the workflow demonstration. On the current screen, you can see the application code repository. Here we have a simple Go application, and you can see it's going to print Hello World on the screen. Let me show you the initial website: currently, you can see Hello World printed here. Now, if I go back, edit it to something like Hello KCD, and commit it, then go to the Tekton dashboard, I can see a new pipeline has started to run. If I click on that, I can see the three tasks we had defined before: fetch-repository, build-and-push-image, and update-docker-tag. The fetch-repository task clones the application; then build-and-push-image builds the Docker container with the Dockerfile present in the application source repository and pushes it to Docker Hub. After that, the third task runs, which updates the Docker tag in the Knative service YAML file. Finally, all the tasks have been completed. Now, let's go to Docker Hub. Here I can see that a new container has been pushed a few seconds ago. Now, let me go to the GitOps config repository. Here also I can see a code commit was made 28 seconds ago.
If I click on the commits and see what change has been made here, I'll see the Docker container tag has been updated. This tag is exactly the same tag that has been updated here. So, now I can understand that the Knative service YAML file has been updated with the latest Docker tag automatically by the Tekton pipelines. This completes the three tasks of the Tekton pipelines. Finally, if I head back to the Argo CD portal, I can see the sync status is OK, and also that it succeeded a few seconds ago. And if I compare the head commit ID, that is C0CC, with the commit ID present here in the repository, that is also C0CC. That means Argo CD understood that a new commit had been made in the config repository; it pulled the new change and applied it to the Kubernetes cluster. So, now if I head back to the original website and click the refresh button, I can see it has changed to Hello KCD. By this demonstration, we can see that whenever a developer changes the codebase in the application code repository, the Tekton pipelines will run automatically, which will build the container and push it to Docker Hub. Then the Knative service YAML file will be updated with the latest Docker tag, and finally, we will be able to see the updated information at the website URL. With that, we have come to the end of the presentation. You can find the application code repository as well as the GitOps configuration repository at the following links. Apart from that, I've also listed the documentation of the projects we have used. If you have any queries or suggestions, you can reach out to me on Twitter, LinkedIn, or on the CNCF Slack. You can also scan this QR code to get all my social accounts. Apart from that, I'm also going to write a written walkthrough of the entire demonstration, so stay tuned to my Medium page. I hope you have enjoyed this presentation. Thank you all for joining me.