Hello everyone. Is everyone set up? Everyone good? So welcome to this presentation, which is going to be about Tekton. We're going to do a deep dive on how Tekton works and what it is exactly. I know there were two talks yesterday, one of them mine, also about Tekton, but this one is going to explain and show you exactly how Tekton works, what the lifecycle of a typical web application looks like, and how we do CI/CD with that application. My colleague is the main star of the talk; he has a bit of a cold, but he really wanted to be here, so I'll let him do his magic. All right. Thank you, Shumal. Before I begin, a quick prerequisite check: how many of you don't know CI/CD at all? Everyone knows CI/CD, continuous integration, continuous deployment? Is there anyone who isn't aware of it at all? If so, we can give you a glimpse. Okay. And everyone knows Kubernetes here? Is there anyone who doesn't know Kubernetes at all? Okay, cool, because this talk is about doing CI/CD in the context of Kubernetes. So that's the prerequisite check. Now, before we begin: why Tekton, and why are we discussing this topic? The starting point is the challenges of cloud-native application development. There are of course various characteristics and definitions of what a cloud-native application is, but in a nutshell, these are applications that can be deployed and scaled independently; think of them as highly available applications that live somewhere in the cloud.
However, when we move towards cloud-native applications, we have to go through a lot of changes as well. What are those changes? One is the pattern: in order to ship these applications frequently and deploy them independently, we need to change the form of the application itself, from monolith to microservices, small services, whatever we want to call them, each in its respective form. The second thing that changes is the infrastructure. Early cloud-native development was dominated by VMs, but over time people have realized the advantages of containers, and that's where the whole Kubernetes and container ecosystem comes in. If you take these two things together: the number of applications has increased, so we need a very effective way to deploy them, and the form of deployment itself is changing, from VM-based to container-based. So if you take these two as the common denominator and ask what a cloud-native CI/CD experience should look like, containers become the central theme. From a cloud-native CI/CD system we expect two characteristics with respect to containers. First, the CI system should run on any kind of container platform, such as Kubernetes; it should be easy to run and deploy the CI system there.
The second characteristic we expect is that the CI system should give us some building blocks, at least to build container images for our applications. Instead of me writing and defining all of it myself, if it can at least give me those primitives, that would be really nice. And with containers there is an added advantage, a good side effect, that my colleague can speak about. You know the joke about the cloud: "there is no cloud, it's just someone else's computer." You can make the same joke about serverless: "there is no server, it's just somewhere else." But what you really get out of serverless is that you don't have a Jenkins, you don't have a server that stays there forever as a container, taking care of your jobs, maybe dispatching them with the Kubernetes plugin, and waiting forever, which is what we usually do. What we want to do with Tekton is to have ephemeral jobs, quick jobs, your tasks and your pipelines, as we're going to explain, that come up and go down quickly. You don't have to manage the lifecycle of long-running containers or anything like that, or the scaling up and scaling down. It's a little thing that does one thing and does it very well, rather than doing a thousand things together.
That's the meaning of serverless here. Serverless means a lot of things, and a lot of people implement it differently. There is a project called Knative, which is the Kubernetes way to do serverless, and Tekton actually came out of it: we used to be called Knative Build, so we were the build part of Knative. But we decided to move out of Knative, because CI/CD doesn't necessarily mean you need to install the full serverless stack, and you can use it independently of the Knative project. As I was explaining in yesterday's talk, a lot of the work we've been doing lately is to move out of the Knative platform and stand on our own. But now the focus is to explain what we do with Tekton, and what the meaning is of those small tasks that run really quickly, define clearly what you want to do, and go away when they're finished. Thank you. Just to add, in a nutshell, the idea here is that instead of running some long-lived process that takes care of executing your jobs, you can think of Tekton as a very dumb pipeline: the execution logic resides in your containers, or in the pipeline itself. We'll see shortly what that looks like, and by the end you should be able to relate to it. The advantage comes from two perspectives. One is operations: you don't really need to babysit the CI/CD system. When I say babysit, I'm not saying you're getting rid of operations.
No, those are still going to be there, but you don't need to worry much about them, or about how the system scales. The second is the developer perspective. In a cloud-native environment, developers or engineers, maybe people who are not DevOps experts, may end up owning the CI/CD. The good point for them is that they also don't need to worry much about operational hassles; they can focus on executing their CI/CD pipelines. Those are the two aspects of serverless, and that's where Tekton stands today. Tekton is an open source project hosted under the Continuous Delivery Foundation, and these companies, and more, are contributing to it. The intent is to provide the building blocks for building a CI/CD platform. We're going to see those components and blocks and how they help, but the idea is to be Kubernetes-native, or as close to Kubernetes as possible. The whole advantage is that once you're used to the Kubernetes ecosystem, it feels like a unified experience: you can pretty much leverage your Kubernetes knowledge and start thinking about CI/CD in the same Kubernetes terms. That's one part of it. However, a few projects already exist that solve a similar set of problems, so how and why did Tekton start at all? As already mentioned, the intent of Knative was to take your source code and run it as a serverless application deployment.
That was the intent, and some of the things that make Tekton stand out against other CI/CD systems are that it wants to be Kubernetes-native as well as declarative. What do we mean by Kubernetes-native and declarative? If you want to run multiple instances of an application on Kubernetes, you create a Deployment, and you just declare: I want to run this many replicas. It's then Kubernetes' job, something running inside Kubernetes, to maintain that many replicas. If we stick with the same analogy, Tekton gives you declarative CI/CD resources on top of Kubernetes, which we're going to see shortly, and we just say: hey, this is the CI/CD workflow I want to execute; how you execute it is your responsibility. It's a declarative way of doing things. The most important, most promising thing I see about Tekton is composability. To explain composability, take the Kubernetes Pod analogy: what is the basic building block in Kubernetes that everything is built on top of? If I want to run a general Deployment-style workload, I create a Deployment, but underneath it's spinning up a Pod. If I want to execute a Job, or run stateful applications, those are different forms of workload, but at the end, underneath, it's a Pod that is running.
It's just that the workload shape is different. In the same way, if you extend the analogy to the CI/CD context, this composability helps you build the basic building blocks of your CI/CD workflow. You can define those blocks first at the bottom level and then build something on top of them. Two things follow from that: these individual blocks can be tested individually, which really helps, and you can reuse those blocks across your whole CI/CD and DevOps workflow. We're going to see that power as well. So composability is the most important thing we're going to focus on in this talk. Instead of speaking much more about concepts, we'll start with an application: we'll do CI/CD for a minimalistic Wikipedia-like application, a wiki application, which allows you to edit articles and publish them. What we're going to do is build a CI/CD pipeline for this application.
Now, what would the CI and CD workflow look like? Here's the high-level workflow. Assume that whenever we introduce any new change in the source code, we want to execute certain tests in a certain order. In this context, we want to execute a lint test, then the unit tests. The lint test is static code checking, making sure the formatting is correct and so on; the unit tests are the usual unit tests we write for the application. The integration and acceptance tests are end-to-end tests: we exercise all the REST APIs and make sure they function correctly, and the acceptance test makes sure the application can actually serve, say, a million requests per second, so it's a kind of automated acceptance test in this context. Once all of that passes, we'll take the application and deploy it into a Kubernetes cluster. To do that, we're going to define some primitive build and deployment mechanisms, which we'll see later, but to begin with, we'll write the CI pipeline as the first stage. Before we dig into the system level, let's do a quick tour of how things look. (Can everyone see it, or shall I increase the font? Better? Okay.) I'll start with a Makefile. This is the application's source code, in general.
I just wanted to show that there is a Makefile, which already knows how to build the binaries, how to execute the tests, and so on, before we begin writing the CI/CD workflow. That's the one thing I want to highlight. The application also already has a Dockerfile and deployment manifests, but we'll come back to those in the context of the deployment use case. Going back to the workflow: for the lint test, we need the source code, and we want to execute the golint command, which either comes with the Go toolchain or needs to be installed explicitly. For the unit test, it's typically the go test command we want to run. But for the integration and acceptance tests, we need the source code, then we need to build a binary (or a container; in this application we're just going to build a binary), bring the application up and running, and then execute some end-to-end tests via some make target. This is the rudimentary procedure we want to automate for our whole CI workflow. If you can hold that in mind, how can I leverage Tekton to automate it? Taking the unit-test case as an example, we're simply going to say: this is the procedure I want to execute, and when I say procedure, I mean steps in this context.
So what are we going to do? To get the source code, we execute a basic git clone command in a container that has the git binary, and the subsequent step we want to run is just the go test command, which runs inside a golang container. The intent here is that we use containers as the primitive form for building our CI/CD system. Tekton provides something called steps: each step is an individual container, the atomic action we want to execute, and a step is nothing but the container specification coming from Kubernetes. So you can pretty much leverage Kubernetes constructs here. For example, if you want to git clone from a private repository, you can take the credentials from a Secret; you don't need to reinvent the wheel or learn a new system for providing secrets. Now, once that is done: steps don't execute by themselves.
Just like containers don't execute by themselves in Kubernetes, we need an additional construct, and that construct is the Task. The Task is the fundamental resource Tekton provides: it runs those steps together as an individual unit, and it's the basic execution entity in Tekton. The idea is that we want to execute the steps, i.e. those containers, but sequentially, not all concurrently. Okay, now we have defined our steps, our unit-test Task. Now we actually want to run it; how do we test this Task? For that there is one more resource, the TaskRun. It's the running environment, in a sense: to run the Task, I just create a TaskRun, in which we mention a reference to the unit-test Task. A loose analogy is object-oriented programming: just like a class is a blueprint and we create an object to instantiate the class, a Task is just a blueprint of the automation steps you want to execute, and the TaskRun creates the instance of the Task. We're going to see a small demo here, so you can see that in Tekton you have your Task and you have a higher level above it. All right. You can see we're defining the Task definition over here: we define the steps, instructing it, take my source code, execute this command script in this git container; next, use this golang container, execute the go test command, and make sure the workspace directory is used correctly, executing the command in that path. Once we have the Task definition, all we need is to create one more resource.
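The unit-test Task described above might be sketched like this in YAML. This is a minimal sketch: the repository URL, image tags, and the `unit-test` name are placeholders, and the exact `apiVersion` depends on your Tekton release.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: unit-test
spec:
  steps:
    # Step 1: fetch the source into the shared /workspace volume
    - name: git-clone
      image: alpine/git
      script: git clone https://github.com/example/wiki-app.git /workspace/src
    # Step 2: run the unit tests with the Go toolchain image
    - name: go-test
      image: golang:1.17
      workingDir: /workspace/src
      script: go test ./...
```

Each step is an ordinary container spec; Tekton runs them sequentially inside one pod, and all steps share the implicit volume mounted at `/workspace`.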
It's called kind: TaskRun. You can see the kind and apiVersion fields are pretty much like what Kubernetes gives us; we're just extending it into the CI/CD context. To run the Task, we create the TaskRun, which refers to the Task. One note on the oc command, just to set the context: I'm running Tekton on this machine on OpenShift, which is the Kubernetes distribution from Red Hat, and oc is the equivalent of kubectl, or rather a superset, so nothing different. Now we use the same kubectl-style command to create the Task definition, but before that, we'll keep a watch on some resources in Kubernetes, in this case pods, so we can see what happens as soon as we start executing the Task. So we create this resource, and as soon as we do, you can see something started on the right-hand side: it has created a pod. What happens is that Tekton converts your Task specification into a pod and container specification. The interesting thing is that you can pretty much leverage the Kubernetes and kubectl toolset and ecosystem to interact with the pipeline and build resources; you don't need to think far out of the box if you know the Kubernetes context well enough. Not expert level, just good enough. So it has created and executed this Task in its respective pod.
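A minimal TaskRun referencing that Task might look like this (the names are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: unit-test-run-1
spec:
  taskRef:
    name: unit-test   # the Task blueprint defined earlier
```

Creating this resource with `kubectl create -f` (or `oc create -f`) is what triggers Tekton to spin up the pod that executes the steps.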
But if you want to see what is happening inside the pod, we can again use the kubectl logs command. The problem is that you can see some tests have run, some git clone has happened, but we can't tell in which order things happened. To give a better experience, Tekton has its own component, the tkn CLI, which you can use to query Tekton's resources. Here, if you want to see the status of the Task, it shows the information in a comparatively better, pretty intuitive format. If you want to see the logs of the Task, we simply say tkn taskrun logs for the given TaskRun name; I don't need to figure out the name of the pod and everything. And you can see it shows the logs in a very decent way, because it associates each step name with its logs. So tkn is the better way to use Tekton, rather than going through Kubernetes directly; otherwise, there's no restriction on using the kubectl commands themselves. However, there are other ways to interact with Tekton too.
I'm running an OpenShift cluster, which has its own dashboard, and it also offers its own UI for interacting with the Tekton resources, the pipeline resources. You can go to the pipelines menu in the console and interact with all the resources there. One more option is the VS Code extension: you can install it and use it to interact with all the pipeline resources. One of the good things about the VS Code extension is that it has context menus: for example, if I want to see the logs of a Task, I can simply right-click and say show me the logs, or delete this Task, and so on. I'm showing these primitives because we're going to use them throughout the demo. So that's pretty much how we create Tasks and TaskRuns and how we test the individual components while working with the system. Okay, so we were on TaskRun, right?
Now we understand how to define a Task and how to test the Task itself. Next we're actually going to build the whole pipeline, at least a CI pipeline. The same way we defined that one Task, we're going to define the whole set of Tasks for the lint, unit, integration, and acceptance tests, with the same kind of Task definitions, and then we use the Pipeline resource to compose the whole pipeline. To do that, Tekton defines a new resource called Pipeline. The only thing we need to do is create our Task resources first, and then reference all those Tasks in our Pipeline definition. In our case, workflow-wise, we want to execute two Tasks, the lint test and the unit test, in parallel, and only once the pipeline has executed those Tasks do we want to execute the integration test and the acceptance test. To define all these task dependencies and the execution workflow, we leverage the Pipeline, and that's its importance: we can run multiple Tasks, but in a particular order. For simplicity we're going to skip inputs and outputs for now; we'll see them, but for the time being, let's leave it there. We'll resume the same demo and build on top of, or extend, whatever we did in step one. (By the way, the "step one, step two" numbering I've written here has no relation to the steps we define inside the Task; it's just so we can follow the tutorial later, creating the resources in an incremental way.) You can see that now we're going to create all the Tasks in a very similar way to the previous step. In this case, the lint Task is just executing the golint command.
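A Pipeline expressing that ordering, lint and unit in parallel, then the integration and acceptance tests, could be sketched like this (the task names here are assumptions based on the demo):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: ci-pipeline
spec:
  tasks:
    - name: lint                  # starts immediately
      taskRef:
        name: lint-test
    - name: unit                  # runs in parallel with lint
      taskRef:
        name: unit-test
    - name: integration           # waits for both lint and unit
      taskRef:
        name: integration-test
      runAfter: [lint, unit]
    - name: acceptance            # also waits for both lint and unit
      taskRef:
        name: acceptance-test
      runAfter: [lint, unit]
```

Tasks without `runAfter` (and without data dependencies) run concurrently; `runAfter` is how you impose explicit ordering.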
It's just invoking git clone and making sure it executes. One thing, for the sake of clarity: since I'm running this demo on OpenShift, by default OpenShift doesn't allow running containers with root privileges, and this make target executes some command that requires root privileges. For that reason, we're adding a securityContext and some additional specification here; otherwise, nothing specific. Once the Tasks are in place, we create our Pipeline, and in the Pipeline we just reference those Tasks and define the order and dependencies between them. Once that is in place, we create the pipeline itself, but before that, as already mentioned, it needs some security privileges, so I'm making sure configuration such as the service account and security context is in place. So: we've created our whole set of Tasks, we create the Pipeline resource, and in order to execute that Pipeline, we create a PipelineRun which, just like the TaskRun referred to the Task, refers to the Pipeline. Once that is in place, you can see on the right-hand side it has started executing all our Tasks in their respective pods: it started two pods, in essence executing two Tasks in parallel. To see the logs of the whole pipeline, we can again leverage the tkn command, which shows the logs for the given pipeline using the same command pattern as kubectl logs: tkn pipelinerun logs, or pr as the short form.
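The PipelineRun that kicks all this off is just a reference to the Pipeline, plus the service-account configuration mentioned above (the `pipeline-sa` name is hypothetical):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: ci-pipeline-run-1
spec:
  pipelineRef:
    name: ci-pipeline
  serviceAccountName: pipeline-sa   # SA granted the privileges the tasks need
```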
We're using an alias, and with the --follow flag you can follow the live logs of what is happening. That's one way of seeing the logs; and again, if you want to follow what's happening with your pipeline, you can go to the console as well and follow the logs there. It also shows a nice dependency graph of what your pipeline looks like, and you can follow the logs from there too. Nothing specific. Okay, so now you know how to compose a Task and how to compose a Pipeline. What we'll do next: here is the problem. If you look at the pipeline definition, every time we are repeating the git clone. That's not great technically; we just did it for the sake of testing, so we could test the individual Tasks independently. But here's the point: coming from any programming background, we know every programming language offers some primitives. For example, if I want to use a linked list in a program, I don't implement it myself; I just consume the list API from the language. In the same way, now that we're doing CI/CD, things like git clone and publishing artifacts are very common tasks, right? Can't we just expect something from the pipeline: hey, I'm just going to declare the git URL I want to clone, and you do it; I don't want to write all these git clone steps myself.
Can the pipeline offer something declarative that I can define once and reuse throughout my pipeline? That's exactly where something called the PipelineResource comes in. The intent of pipeline resources is to define, in the pipeline context, the set of primitive tasks we want to achieve. Resources can be defined in two forms: inputs and outputs. For example, to execute this Task, we can say the repository of my source code is a resource; I need it resolved first, so please provide this input for this Task before you start executing my pipeline or Task. An output is, for example: whatever I have built, I want to publish to some artifact repository, or say a bucket; we can define that as an output resource. That's where the PipelineResource comes into the picture. Now we're going to see the same unit-test Task, but instead of doing the git clone command ourselves, we leverage the resource, and we'll see how declaratively we can do it. If you look, we're defining the same Task, but the git clone step is gone; instead we just focus on executing the go test command, and instead of cloning the git source myself, I say that this Task needs an input of type git. How does the resource come into the picture? First, we write a resource definition.
This is the last building block of Tekton: it provides the PipelineResource, where we can define a pipeline resource of a given type, git in this case. The git resource accepts a few parameters, such as the git URL and which revision or branch you want to check out. Once we define those two things, we need to make sure we can reference this resource and attach it to the Task, and that's what we do in the TaskRun: we attach this input resource, the same resource we just defined. Now we create the same Task, but instead of doing the git clone ourselves, we run the same example with the git resource. So in this demo we're executing the same Task, but if you look at the logs, unlike our previous git clone command, Tekton actually does the git cloning first for your Task and then starts executing the Task itself. As a user, you don't need to care about how it does it; you just declare: I need this, you do it, figure it out yourself. That's the intent of resources. One more thing: suppose you want another VCS, another source control system, to plug in, and you don't want to use git; then you can't use the pipeline resource in this case. So what would you do? You would have a Task that uses your VCS in whatever way we don't handle, and you'd put that Task first, so it checks out your code first and gets it locally. For that, there is a concept that was just introduced (so it's not in our slides) called workspaces inside Tekton.
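A git PipelineResource, and a Task consuming it as an input, might be sketched like this. Note that PipelineResources only ever existed in the `v1alpha1` API and were later deprecated; the URL and names are placeholders.

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: wiki-app-git
spec:
  type: git
  params:
    - name: url
      value: https://github.com/example/wiki-app.git
    - name: revision
      value: master
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: unit-test
spec:
  resources:
    inputs:
      - name: source        # Tekton clones the repo before the first step runs
        type: git
  steps:
    - name: go-test
      image: golang:1.17
      workingDir: /workspace/source   # git inputs land under /workspace/<name>
      script: go test ./...
```

The explicit git-clone step disappears from the Task; the clone becomes Tekton's responsibility, driven purely by the declaration.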
So it's not in our slide called workspace inside tecton So inside a pipeline or a task you have a workspace and that workspace is all your shared drives Between all your tasks so you check out the way you want with your SCM your task at first And then you can go That's that's one concept and Now if you want to just reimagine your pipeline with the pipeline resource Instead of now defining the gate step all you are just going to say that I need this resource So whatever the task you want I need this resource instead of Writing this git step again again. The only thing is like Okay, in this case why we are doing the git clone again and again the intent is is not doing it again The only intent was to like test individual tasks with its own dependency That's all that's was the whole point. So for example if tomorrow something breaks Let's say this it we task is not working. I want to test it how I'm going to test it So the intent over is like you can just run this small amount of block and you can just figure out or debug it What's going wrong with that? Okay, either you can mock this dependency or you can just Just basically leverage the dependency which is mentioned in in your pipeline workflow, right? 
That's the only intent; it could be done in a better way as well, it's just done this way here. So if we go that route, we create the PipelineRun, and again it's just binding the resources. To redefine the same thing: we create the PipelineRun along with its own resources, and of course we modify the task definitions as well. If you look, we have modified all the tasks so that instead of writing the git clone, they leverage this resource. Then we define a pipeline, and in the pipeline we declare the resource: "I need this." In each task reference we say: "hey, this task needs to bind to the resources I have declared here." Once we've done the wiring and binding of these resources, finally we need to pass in the reference of the concrete resource we created. How do we do that? I'm not able to point at it properly on the slide, but essentially, in the PipelineRun itself we pass this resource definition, as we have seen here: the pipeline resource binds into your pipeline, and the pipeline binds it into your tasks.
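The wiring just described, declaring the resource at the pipeline level, binding it into each task, then passing the concrete resource in the PipelineRun, could look roughly like this. All names are illustrative, and the shape follows the `v1alpha1` API:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: ci-pipeline
spec:
  resources:
    - name: source            # declared once at the pipeline level
      type: git
  tasks:
    - name: unit-test
      taskRef:
        name: unit-test
      resources:
        inputs:
          - name: source      # the task's input resource name...
            resource: source  # ...bound to the pipeline-level resource
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: ci-pipeline-run
spec:
  pipelineRef:
    name: ci-pipeline
  resources:
    - name: source
      resourceRef:
        name: my-repo         # the concrete PipelineResource to use
```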
That's how the wiring is done. So we run the same pipeline, but with the resource. The good thing now is that we have gotten rid of all that git cloning from the tasks in the pipeline definition, and your pipeline is becoming more generic, one step at a time. For example, if you have another Go application with an almost identical workflow, you don't need to redefine the pipeline; you just create a PipelineRun with different resources. For this repository I create this resource; to test another repository with the exact same pipeline, I just create another resource and pass it in. That's how we generalize tasks and pipelines, and that's the composability and generality we mentioned: you define an entity, a building block, once, make it generic, and build something on top of it. That's how we get the most out of the system. And you can see we are just creating the same pipeline, only with a reference to the resource. Okay, that's about the CI pipeline. Now we are going to finish with the CD pipeline, and then we're mostly done with our demo. So we saw one part, how to write a CI pipeline; for the CD pipeline, to give a general schematic of the steps and tasks: we have one task called build-container, which takes your source code and containerizes your application, assuming your CI has completed successfully. Then, once your container gets published to a container registry, we deploy your container into your Kubernetes environment with
all its Kubernetes manifests. How do we do it? The general schematic is that we leverage the same resource model, except one thing changes: we define one more resource, this time of type image. Ultimately we are saying that as we execute the build task, it should produce a resource of type image, to model the behavior of publishing something from the task. Then we take the same image reference and deploy our application. What we have done is write the basic Kubernetes manifests, a Deployment and a Service, but in the Deployment spec we use the same image that we already published in the previous task: we pass that image reference as a resource reference, substitute the image URL in one step, and once that's done we just use the kubectl primitives. Things could be done in a different way; for the sake of the demo we are doing it the rudimentary, primitive way. So we define a deployment, and let's say that is the last step of your pipeline: that's the CD pipeline itself. Now, the thing here is, if you look at the pipeline definition, most of the CI-related tasks are still in that pipeline, but we have added an additional task, the build task, which runs after the end-to-end task, and then there is the deployment task.
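A rough sketch of this CD half, an image-typed resource produced by the build task and consumed by a kubectl-based deploy task, might look like the following. The registry path, task names, placeholder string, and the kubectl container image are all invented for illustration:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: app-image
spec:
  type: image
  params:
    - name: url
      value: registry.example.com/team/app:latest   # hypothetical registry
---
# Deploy task: substitutes the built image URL into the manifest,
# then applies it with plain kubectl -- the "primitive" approach
# used in the demo.
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: deploy-application
spec:
  inputs:
    resources:
      - name: source
        type: git
      - name: image
        type: image
  steps:
    - name: replace-image
      image: busybox
      command: ["sh", "-c"]
      args:
        - sed -i "s|IMAGE_PLACEHOLDER|$(inputs.resources.image.url)|"
          /workspace/source/k8s/deployment.yaml
    - name: kubectl-apply
      image: lachlanevenson/k8s-kubectl   # any image with kubectl would do
      command: ["kubectl", "apply", "-f", "/workspace/source/k8s/"]
```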
And the deployment task runs after the build task completes its execution; if any task that others depend on fails, the pipeline terminates its execution. That's the whole point: the deployment task runs after the build task. To give the references: the deployment task refers to deploy-application, and the build task refers to a task called buildah. Now, we have the task definition for deploy-application, but where is the buildah definition coming from? Don't you think building a container image is such a common primitive that any CI/CD system would need it? So instead of you writing it, what we are doing in Tekton is collecting all the common-use-case tasks and curating them under something called the Tekton Catalog. If you look at this definition, we are using a task that is present in the catalog, the buildah task, and the catalog already has a list of these commonly used tasks. How do we use it? We just kubectl apply the reference URL mentioned here, and then, to honor its contract, we pass it the expected resources and parameters. And that's mostly how you want to work: use tasks that have already been written by Tekton developers. I think we have a demo issue, and we are running a little short on time. I was going to explain Triggers, but I don't think we have time for that. We can maybe take a question or two, and if you have additional questions, we are right there at the booth and happy to answer whatever you ask.
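Installing a catalog task like the buildah task mentioned above is just a `kubectl apply` of its published definition. The URL below is indicative of the catalog layout at the time rather than a guaranteed path; check the catalog repository for the actual location:

```shell
# Install the buildah task from the Tekton catalog into the
# current namespace. (URL is illustrative.)
kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/master/buildah/buildah.yaml

# A pipeline can then reference it by name:
#   taskRef:
#     name: buildah
```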
So if you have a question... Okay, let's just summarize a few things. We are going to share this material, and it has links to all the videos and everything, so you can look at it later. To summarize: steps define the atomic actions we want to execute; to execute a group of steps we create a task, and if the task needs any resources, we give the resources to the task itself. To execute a task we create a TaskRun, and to execute multiple tasks in a certain order we run them through a pipeline. But this is not how you run pipelines in practice. It's fine for writing and developing your pipeline definition, but in a real use case you want to execute your pipeline on certain events, for example a change to your source code in your GitHub repository. That's where another building block comes in: Triggers. There are three concepts: the TriggerTemplate, the TriggerBinding, and the EventListener. The template defines how your PipelineRun will be created, as we explained before, with parameters that come out of the event. In this case, for example, GitHub sends a webhook, and that webhook is JSON with a bunch of information about the repo, the commit SHA, and so on, and you create your PipelineRun out of those values. The binding binds those values from the event to the template. And the EventListener creates a service that you expose publicly, with an ingress controller, and you plug your webhook into it so it can listen to the events coming from the webhook.
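The three trigger pieces described above fit together roughly like this. It's a minimal sketch with invented names, following the early Tekton Triggers API, so field names may differ slightly across versions:

```yaml
# TriggerTemplate: how to stamp out a PipelineRun, parameterized
# by values extracted from the event.
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: ci-template
spec:
  params:
    - name: gitrevision
  resourcetemplates:
    - apiVersion: tekton.dev/v1alpha1
      kind: PipelineRun
      metadata:
        generateName: ci-pipeline-run-
      spec:
        pipelineRef:
          name: ci-pipeline
---
# TriggerBinding: pull values out of the webhook's JSON payload.
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: ci-binding
spec:
  params:
    - name: gitrevision
      value: $(body.head_commit.id)   # field from GitHub's push payload
---
# EventListener: the service you expose (e.g. via an ingress) and
# point the GitHub webhook at.
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: ci-listener
spec:
  triggers:
    - bindings:
        - name: ci-binding
      template:
        name: ci-template
```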
Yeah, that's pretty much how it looks schematically: the webhook comes in, the EventListener receives it, it knows which resources it needs to spin up, the PipelineRun gets its resources, and then it executes the same flow as before, just triggered by the event. Of course we can branch and do many different things, as I mentioned: execute certain things if the branch is master, other things if the branch is not master, and we can selectively run some of the tasks in the pipeline. The last bit we want to cover before we end: where does Tekton stand today? Tekton is pretty new; it's just a year and a half old as a project. We are going to do the beta release this coming February, where you can expect a stable Pipelines API, though not yet for other components like Triggers. In the subsequent work we want to focus on providing better trigger and webhook support, and then we want to enhance our catalog and emulate something like a marketplace experience. It's very similar to what Docker did with containers: every time you need something in your container world, you pull it from a registry instead of writing everything yourself, and you build on top of it. We want the same experience here: you compose your pipeline from things that are already available in the marketplace, pull them down, and build on top of them. And the last thing: pipeline resource extensibility is coming. As already mentioned, PipelineResources have limitations right now, but we are addressing that issue as well. Those are a few things you can expect in the post-beta releases. And I guess we are done. Thank you so much for listening to us patiently.