Thank you everyone for joining, and welcome to our presentation. I am Priti Desai, and I'm here today to talk to you about interoperability in CI/CD: RoboCat meets Octopus and Octocat. I have Jerop with me.

Hi everyone. My name is Jerop Kipruto. I'm a Software Engineer at Google, working on Cloud CI/CD, specifically Tekton. Before diving into interoperability, let's first look at the CI/CD process in practice.

Thanks, Jerop. Let's look at what the CI/CD process is in practice. As a developer in this world of microservices, when I'm adding a new feature to a service or an application, I create a pull request and the build process kicks in and runs the basic testing. It clones the application repo, builds the application, and runs the Golang tests, linting, integration tests, unit tests, and any other tests we might have. Once the testing is successful, Jerop can come and review the PR, and she says, okay, looks good to me; everything is good and we merge it. After we merge this PR, the changes are available upstream in either the main or master branch. That's how the continuous integration process works, where different developers come together, each working on their own features.

That was the continuous integration phase. Let's look at continuous delivery. After my whole team is done, that is, Jerop is done with her features and I'm done with whatever set of features we want to be part of the release, what we do next is create a release. The build process again starts running: it clones the repo and then runs validation. There could be multiple validation checks, such as security, compliance, even documentation checks. Once those checks pass, we build the application and the build publishes that image into the image repository. We have created a release, and that completes the continuous delivery phase.

Next is continuous deployment. Once we have that release image, it's certified and released, and we want to deploy it in the staging cluster so that it's available to the other developers or early adopters, or even deploy it in production so that it's available to the users. That is continuous deployment.

We just looked at the basic principles of CI/CD. Now let's begin putting these principles into practice by understanding how a CI/CD pipeline can be implemented. We are very fortunate to have so many different options, but the question is, do we have to choose one over the other, or is it possible to build our use cases using multiple of these systems? The clear question is: are these systems interoperable? Can they operate in conjunction with each other? Let's narrow down our options for the next 30 minutes and experience building a CI/CD pipeline using these three systems: Tekton, Argo CD, and GitHub Actions. Jerop is going to take us into the Tekton world, where she'll give us an idea of how Tekton is put together, and then we'll look at demonstrations of Tekton with Argo CD and with GitHub Actions. Jerop?

Thank you, Priti, for introducing the CI/CD principles. Next, we look at understanding Tekton. Tekton is a Kubernetes-native open-source framework for creating continuous integration and continuous delivery systems.
It provides Kubernetes-style custom resources for declaring CI/CD pipelines. Tekton is built on five core building blocks: one, the step; two, the task; three, the pipeline; four, the trigger; and five, the catalog. Let's explore each of these blocks.

The first is the step, which is equivalent to a container image: it executes a tool on the specified input parameters and produces an output. We can name the step to identify what it is doing, for example deploy-app, and specify the container image we want to pull. We define the environment variables that are accessible to the container, such as an API key. The script that is invoked has to be specified as well, as if it were stored inside the container image; in this case it uses Cloud Logging and the specified parameters.

The second is the task, which executes as a pod on the Kubernetes cluster. A task is a sequence of steps running as a sequence of containers. All the steps in a task have access to a shared workspace, which is mounted on the pod as an implicit volume. A task is a custom resource of kind Task, and it has a name so that it can be referenced and reused; in our case, deploy-to-my-awesome-cloud. The task specification includes a list of parameters, with defaults and descriptions; in our case, API_URL with a default of cloud.com. Then we have the steps: deploy-app, with the same image and script specified in there.

The third is the pipeline, which is a collection of tasks running as a set of pods. A pipeline is a graph, which provides the flexibility to organize the workflow based on the user's requirements. A pipeline combines tasks with parameters, results, and workspaces, and it also provides step- and task-level isolation if needed. A pipeline is a custom resource of kind Pipeline, just like the task, and we name it for future reference and reuse. The pipeline specification also includes a list of parameters, in our case API URL, cloud region, et cetera, and a list of tasks, like clone, build, deploy. So in our case here, we have git-clone followed by a build task, and then at the end we deploy the built application.

I can manually run my pipeline, but how do I automatically invoke the pipeline, such as when I push code, commit, or create a pull request? For that, we have triggers. The trigger binding extracts information from the event payload, and the trigger template provides the blueprint for creating a pipeline run. The event listener connects the trigger bindings to the trigger templates.

Let's look at the trigger binding itself. A trigger binding is a resource that specifies the fields in the event payload from which you want to extract data, as well as the fields of your trigger template that will be populated with the extracted values. In our case here, we have the git repo URL as a value that's extracted from the event. Secondly, we have the trigger template. The trigger template is a resource that specifies a blueprint for the resource, such as a TaskRun or PipelineRun, that you want to instantiate or execute when an event listener detects an event. It exposes parameters that you can use anywhere within your resource template.

Lastly, the piece that connects these together is the event listener. The event listener is a Kubernetes object that listens for events on a specified port on your Kubernetes cluster. It exposes an addressable sink that receives incoming events and specifies one or more triggers. Each of these triggers allows you to specify the trigger bindings to extract the fields and values from the payload, and one or more trigger templates that receive those values and allow you to create resources such as TaskRuns and PipelineRuns with that data. So here, we have the binding and then the template from above.
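To make these building blocks concrete, here is a minimal YAML sketch of a task with one step, plus the trigger binding, trigger template, and event listener that would start a pipeline run on a push event. The names, the image, the secret, and the deploy-pipeline reference are illustrative assumptions, not the exact resources from the slides:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: deploy-to-my-awesome-cloud      # named so it can be referenced and reused
spec:
  params:
    - name: api-url
      description: URL of the cloud to deploy to
      default: cloud.com
  steps:
    - name: deploy-app                  # a step: one container image running a tool
      image: alpine                     # assumption: any image with the tooling you need
      env:
        - name: API_KEY
          valueFrom:
            secretKeyRef:               # assumption: the key lives in a Kubernetes Secret
              name: cloud-credentials
              key: api-key
      script: |
        #!/bin/sh
        echo "deploying to $(params.api-url)"   # placeholder for the real deploy command
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: push-binding
spec:
  params:
    - name: git-repo-url
      value: $(body.repository.url)     # field extracted from the push event payload
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: deploy-template
spec:
  params:
    - name: git-repo-url
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun                 # blueprint instantiated when an event arrives
      metadata:
        generateName: deploy-pipeline-run-
      spec:
        pipelineRef:
          name: deploy-pipeline         # hypothetical clone -> build -> deploy pipeline
        params:
          - name: git-repo-url
            value: $(tt.params.git-repo-url)
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: push-listener                   # runs as a pod exposing an addressable sink
spec:
  serviceAccountName: tekton-triggers-sa   # assumption: a service account with trigger permissions
  triggers:
    - name: on-push
      bindings:
        - ref: push-binding
      template:
        ref: deploy-template
```

When the event listener's sink receives a push payload, the binding pulls out the repository URL and the template stamps out a new PipelineRun with that value.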
Instead of everyone creating their own tasks and pipelines, is there any way to share reusable resources across the organization? We have the Tekton Catalog, which can be shared across the entire organization and with the community, and as you can see here, there are already a lot of resources that have been contributed by the community. Next, Priti will look into Argo CD and how it interoperates with Tekton.

Thank you. Thanks, Jerop. We just looked into Tekton; let's briefly look into Argo CD. Argo CD is a declarative GitOps continuous delivery tool for Kubernetes. The main purpose of Argo CD is to sync whatever is defined in a Git repository and make sure those resources exist in the cluster. To define and implement a CI/CD pipeline using Tekton, we need to create quite a few resources: tasks, a pipeline, an event listener, and triggers. So we thought, why not treat the Tekton resources as code and create an Argo CD application for them, so that they can be made available on the cluster?

We looked at CI/CD in practice and those diagrams. What we have here is a very simplified use case of just build, test, and deploy: as a developer, I want to commit some changes upstream and make them available on the development cluster so that Jerop can access them. The build process clones the application repo, runs all the tests, builds and publishes the image with the changes, and then deploys it on the development cluster, so the changes are available right away. We are going to look at a demo of how Argo CD and Tekton together make it possible to implement that use case.

For the Argo CD part, first of all we have a GitHub repository that contains both the Tekton resources and the application. We create an Argo CD application to sync all the Tekton resources, so that the tasks, the pipeline, and the trigger are available on the cluster. We created one more Argo CD application for the service itself, to deploy that service to the cluster; this Argo CD application can be triggered by the Tekton pipeline so that it can sync the deployment. That's the whole idea.

To make this whole use case possible, this is how we have laid it out. The very first thing we need is the secrets: the Argo CD secret and, since I'm using Docker Desktop, a Docker Hub service account to push the images. Next, we create the Argo CD application with the Tekton resources; then we create one more Argo CD application for the deployment; and finally we need a webhook on the GitHub repository, so that whenever new changes are committed it can send the payload to the Tekton trigger. These four things are the basic configuration steps.
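The demo creates these applications with the Argo CD CLI, but declaratively the application that syncs the Tekton resources might look roughly like this sketch; the repository URL, path, and namespaces are placeholders rather than the exact values from the demo:

```yaml
# Sketch of an Argo CD Application that keeps the Tekton resources
# (tasks, pipeline, triggers) from the Git repo in sync with the cluster.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: tekton-resources              # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/my-app.git   # placeholder repository
    targetRevision: main
    path: tekton                      # folder holding the Tekton YAML
  destination:
    server: https://kubernetes.default.svc
    namespace: demo                   # namespace used in the demo
  syncPolicy:
    automated:                        # optional: sync automatically on new commits
      prune: true
      selfHeal: true
```

The second application, for deploying the service itself, would point at the service's manifests in the same way; the Tekton pipeline then triggers a sync on it.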
Now the workflow, which we'll look at in the demonstration, is this: as a developer, I make some changes to my application and push them. That push triggers the webhook, which sends the payload to the Tekton trigger that is listening for it. The payload data is mapped to the pipeline parameters, and since the trigger is configured to run that pipeline, it creates a pipeline run. After cloning, building, and publishing, the pipeline triggers the sync-and-wait task, which syncs the deploy application, which in turn deploys the latest image (or whatever image we have specified) from the image repository. That is how the entire workflow looks. Now let's look at that use case in action.

Like I mentioned, we have some configuration setup. We need to create the secrets for the Argo CD tasks and the service account for the publish task. So we create the Argo CD secret, and then the Docker Hub service account secret so that the image can be pushed to Docker Hub. Next, I'm using the Argo CD CLI; there is no application yet, so I create an application with the repository, the destination cluster, and the path. Now we sync this application so that the pipeline, the trigger binding, and everything else get created on the cluster. Next, we create the deploy app on Argo CD as well.

Here we'll see that we have the pipeline that was just created and then the event listener. The event listener spins up a pod in that particular namespace, so we have that service running in the demo namespace; let's port-forward so that we can access it. Next, we need to create a webhook so that the payload can be sent to the cluster. Here we create a new webhook on GitHub: we specify the URL, the JSON content type, and just the push event is fine for now. The webhook is created, and we are done with the basic configuration.

Now let's go into the application. This is a very simple application; we are just changing the version for now, making it 1.0.1. This is the diff, looks good. Now we can push. We push the changes so they're available in the repo, and this push triggers a run. Let's look at the pipeline: there was a payload that reached the pod, and we should see a pipeline run. We have a new pipeline run here. It has finished cloning the source and it's building the application; whatever we have in our Dockerfile, it's building that image. Next, it pushes that image to Docker Hub, and once it's finished pushing, it runs the sync on the deployment application. This will deploy that particular image onto the service, onto the cluster. This is the port that we can access, so let's look at the application. Yay, we have the application deployed.
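The sync at the end of that run is what hands control over to Argo CD. As a rough sketch, a Tekton task doing that sync might look like the following, assuming an image that ships the argocd CLI and hypothetical application and secret names:

```yaml
# Sketch of a sync-and-wait style task: the pipeline's last step asks Argo CD
# to sync the deploy application and waits for it to become healthy.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: argocd-sync-and-wait
spec:
  params:
    - name: application-name
      default: deploy-app               # the Argo CD application created for the service
    - name: revision
      default: HEAD
  steps:
    - name: sync
      image: argoproj/argocd:v2.0.0     # assumption: any image containing the argocd CLI
      envFrom:
        - secretRef:
            name: argocd-env-secret     # assumption: holds ARGOCD_SERVER and ARGOCD_AUTH_TOKEN
      script: |
        argocd app sync "$(params.application-name)" --revision "$(params.revision)"
        argocd app wait "$(params.application-name)" --health
```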
Now let's change this application. Once again, we go and update it: we update our file to the next version, our double-awesome application, and we change our deployment file as well to keep the same version so that it matches the application source. The diff looks fine. Now let's commit, "upgrading the application to our double awesome application", and push. Here, we should see one more run, and again, same thing: it builds an image with the latest changes and pushes that image to the repository. So now we have the latest image, and it is now triggering the sync on our Argo CD application. And that is all done, so we should now be able to access the application on the same port, and it has the double-awesome application. The changes that we just made are now accessible. So this is a very simple use case and demonstration where the changes that are being made become available, using Argo CD and Tekton together. I'll hand it off to Jerop for the next part, Tekton and GitHub Actions.

Thanks, Priti. RoboCat plays with Octocat: we're going to look at a use case that plugs and plays GitHub Actions with Tekton. In this case, GitHub Actions is used for triggering while Tekton pipelines are used for execution. GitHub Actions is an API for cause and effect on GitHub. Why GitHub Actions? Well, GitHub Actions makes it easy to build, test, and deploy code right from GitHub. So why not write the execution logic in vendor-agnostic Tekton pipelines and trigger them using GitHub Actions?

The use case we look at involves a couple of steps. First, a developer pushes code to the application repository. This change triggers a GitHub workflow, the specification in GitHub Actions for the execution that needs to happen, which in turn triggers a Tekton pipeline run. That pipeline run contains the pipeline and a workspace, and the pipeline is made up of a series of tasks: the first is cloning, the second is linting, then testing, building, and finally running the application with the change or commit that the developer just made. Most of the first four tasks come from the Tekton Hub, or the Tekton Catalog, which is shared by the community, while the last part, the run task itself, comes from the tekton folder within the repository. So you can fetch your tasks or resources from different sources depending on your requirements and your infrastructure.

Next, we look at this use case and see how it works. We'll start by going to the GitHub Marketplace and looking up Tekton. The first thing we find is the tkn GitHub Action. You can see here it specifies that this GitHub Action configures tkn, the Tekton CLI, in your environment for managing Tekton resources. The process for usage is that you just specify that you use jerop/tkn and specify the version as needed; this is after installing kind, or whichever other flavor of Kubernetes cluster you want to use, and then installing Tekton Pipelines in your environment. Let's go to the project itself. You can see those same steps and guidelines for how to use it specified in the repo, and the action YAML, which is the main part of this GitHub Action, takes a version input that specifies which version of the CLI you want to use.
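Pulled together, a workflow along those lines might look roughly like the sketch below. The kind action, the jerop/tkn ref and version, the file paths, and the pipeline name are assumptions for illustration, not the exact contents of the demo workflow:

```yaml
# Sketch of a GitHub Actions workflow: set up a throwaway cluster, install
# Tekton Pipelines and the tkn CLI, pull tasks from the Tekton Hub, and run the pipeline.
name: tekton-ci
on: [push, pull_request, workflow_dispatch]

jobs:
  pipeline:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Create kind cluster
        uses: helm/kind-action@v1.2.0       # assumption: any step that provides a cluster works here
      - name: Install Tekton Pipelines
        run: kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
      - name: Install the tkn CLI
        uses: jerop/tkn@main                # the Marketplace action; pin a released ref in practice
        with:
          version: "0.18.0"                 # assumption: version input as described for the action
      - name: Install tasks from the Tekton Hub
        run: |
          tkn hub install task git-clone
          tkn hub install task golang-test
          tkn hub install task golangci-lint
          tkn hub install task golang-build
      - name: Apply local resources and start the pipeline
        run: |
          kubectl apply -f tekton/          # golang-run task, pipeline, volume claim template, etc.
          tkn pipeline start <pipeline-name> --showlog   # placeholder name, plus whatever params/workspaces it needs
```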
So we'll go to the demo repo, where we show how this is actually used to test and build a Hello World Go application. The application simply says Hello World; in this case we can modify what, or who, we're greeting. And we have a simple test that verifies the function works as expected. Then we have a tekton folder with the Tekton resources, and a workflows folder which specifies the GitHub workflow itself and the events it will be triggered on.

Let's pull this repo locally, make a change, and see how that impacts things. But first we'll verify that things are how we expect, how we saw them in the repository on GitHub: the same Hello function, and you can see we have the same test, TestHello, confirming that "Hello, World" is what you would get. We can run this test locally and see that it's passing. You can see that the workflow is as expected: it acts on push, pull request, and workflow_dispatch, which is manual triggering. Then there is the series of steps: we set up kind, we apply Tekton Pipelines to the environment, we install tkn, and then we use tkn to install Tekton tasks from the Hub; let's see: git-clone, golang-test, golangci-lint, golang-build. Then there's one step for applying a local task.

For git-clone itself, you can see that it's there in the Hub as expected, so this is what we are pulling into the environment and installing. The second one is golang-test, right there, and we should be able to find it in the Hub as well; yes, we have golang-test there with all the parameters it expects. Then the linting task, golangci-lint, same case: we expect to find it in the Tekton Hub with specifications of all the parameters it expects or can take, and the workspace containing the source to build. And lastly from the Hub we're installing golang-build; similarly, we can see that this is a task for building Go projects, and we can see the parameters it expects.

Lastly, we have golang-run, which is not available in the catalog or the Hub, but we've specified it locally in our project. You can see here the labels, the description, the parameters, which expect a package, context, version, et cetera, then the workspace itself, which has the source code that will be run, and then the script, which contains the logic to run the application. All of these things come together in the workflow file; finally, we're going to start the Tekton pipeline, whose YAML is specified here as well and which connects all of those tasks together.

When we open the pipeline.yaml file, we'll see that we have the workspace where the code will be checked out, the results we expect, and then the clone, lint, test, build, and run tasks, all organized sequentially, each with a task reference to the respective task from the Hub or catalog, or to golang-run, which is specified in the repo itself. Lastly we have the workspace, which comes from our volume claim template, a persistent volume claim specified within the tekton folder. We'll start that pipeline, then list it and describe it, and validate that it does all the steps and tasks we're expecting.
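That pipeline.yaml might look roughly like the following sketch, with the shared workspace, the commit result surfaced from the clone task, and the five tasks chained with runAfter; the pipeline, parameter, and module names here are illustrative:

```yaml
# Sketch of the pipeline: clone -> lint -> test -> build -> run over one shared workspace.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: golang-ci-pipeline               # illustrative name
spec:
  params:
    - name: repo-url
    - name: package
      default: github.com/example/hello  # placeholder Go module path
  workspaces:
    - name: source                       # backed by a volumeClaimTemplate in the PipelineRun
  results:
    - name: commit                       # the commit SHA propagated up from the clone task
      value: $(tasks.clone.results.commit)
  tasks:
    - name: clone
      taskRef:
        name: git-clone                  # from the Tekton Hub
      params:
        - name: url
          value: $(params.repo-url)
      workspaces:
        - name: output
          workspace: source
    - name: lint
      runAfter: [clone]
      taskRef:
        name: golangci-lint              # from the Tekton Hub
      params:
        - name: package
          value: $(params.package)
      workspaces:
        - name: source
          workspace: source
    - name: test
      runAfter: [lint]
      taskRef:
        name: golang-test                # from the Tekton Hub
      params:
        - name: package
          value: $(params.package)
      workspaces:
        - name: source
          workspace: source
    - name: build
      runAfter: [test]
      taskRef:
        name: golang-build               # from the Tekton Hub
      params:
        - name: package
          value: $(params.package)
      workspaces:
        - name: source
          workspace: source
    - name: run
      runAfter: [build]
      taskRef:
        name: golang-run                 # the local task applied from the tekton folder
      params:
        - name: package
          value: $(params.package)
      workspaces:
        - name: source
          workspace: source
```

Because every task mounts the same workspace, each one sees the output of the previous task, which is how the cloned source makes it through lint, test, build, and run.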
Next, let's trigger this workflow with a git commit and push. We're going to modify Hello World and say hello to cdCon; we could say hello to anything, really, but let's say cdCon. We validate that that's the only change we've made, then we add the change and commit it, with "hello cdcon" as the message, and push the change.

We can see that that triggered an action and the workflow run is starting, and if we look at the code, we can see the commit SHA for the change that we made. Going back to the workflow run, you can see we checked out the code, we set up kind, then we install Tekton Pipelines, we install tkn, and we install all the tasks from the catalog. The git-clone task has been installed, and when you describe it, it has the same specification that we saw in the Hub; same thing with the linting task, the build task, and then the run task. You can see here that tkn was installed, version 0.18 for Linux, which is handled automatically by the tkn GitHub Action itself.

Then we see that the pipeline is running and we're showing the logs. The clone task has executed, and you can see the SHA is exactly what we saw in the commit in the repo, so the clone succeeded. We're linting, then you can see the tests passed, with the coverage details there as well; we built it, and then we ran it, and you can see "Hello cdCon": the application has been updated and the pipeline run is successful. When we describe it to get further detail, we can see all the tasks that executed, how long they took, and their status, and you can see that the commit SHA result has been propagated all the way to the pipeline run.

So this demonstration simply shows how Tekton can be used for writing the execution logic while GitHub Actions does the triggering, and how these two interoperate to solve your use cases. Thank you. Thank you, everyone, and if you have any questions, let us know.