Good morning everyone, and thank you for joining us for another episode of OpenShift Coffee Break. Today we have our usual suspects, Natale as our main co-host and myself, the other main co-host, and we have two special guests: Savita from Red Hat engineering, and Khurram. They both work on the upstream Tekton community and on OpenShift Pipelines. Last time, in the Tekton show, we covered the basics: we explained what Tekton is and why it was created. In this episode we are going to talk about how to trigger pipelines from a Git repository. Basically, whenever you make a code change, a git commit or a pull request or something like that, you want some automated pipelines within your workflow, and that's what we are going to uncover today. So Savita and Khurram, please introduce yourselves, and of course Natale too, but he is very famous already. Yeah, I am Khurram Baig, and I work on the upstream Tekton project and on the downstream OpenShift Pipelines. I mainly focus on Tekton Triggers, and I have also done some work on Tekton logs and metrics, as well as logging in pipelines. Okay, cool. Savita? Yeah, hi. I am Savita, based out of Bangalore, India. I am also a developer working on OpenShift Pipelines and the Tekton projects, and I am an upstream contributor to triggers, catalog, and plumbing. So yeah, that's about it. Cool. Thanks. Natale? Yeah, thank you everyone, welcome, good morning. I hope you had your coffee shot here at OpenShift Coffee Break. My name is Natale, I am a product marketing manager with OpenShift. We are hosting this show together with Jafar, and we are very happy to have Savita on the stream again and to have Khurram for today's talk. Jafar, I am really excited about Pipelines as Code; I am looking forward to seeing what it is and how it can be implemented on top of OpenShift. Okay. And just one clarification for our viewers: as you saw, Savita was already on a previous episode. Our goal is to have a series, so we will be running several episodes about Tekton and OpenShift Pipelines, and as we go along we will explore more advanced concepts. Today we will be speaking about how to trigger pipelines from Git events, and especially explaining what happens in the background, which is basically what our guests have implemented to make it work. That's the great thing: they have contributed, with other people of course, to building the pieces inside upstream Tekton that make these things work, so it's really cool to have engineers like them talk about how this works behind the scenes and how it evolves. Okay, let's get started. Who wants to go first and maybe give a quick reminder of the Tekton concepts? Yeah, I will go first and then demo. Hmm, I think we also don't hear you very well when you speak. Yeah, exactly. So Khurram, I'm sorry to ask, but would it be possible to disable the video stream to make sure we have enough bandwidth? Oh, okay. So you think I should save some bandwidth? Yeah, if you can just remove the video. We can see the screen now, that's fine, but if you can cut the video from the Zoom stream, that would help. Oh, you know, Wi-Fi. You cannot trust Wi-Fi. Always plug in an ethernet cable. Yeah, exactly. All right, so thank you, Khurram.
We can see your screen now; the floor is yours. Yeah, so we have already discussed what Tekton is, and I'm not sure whether we have gone through the various components of Tekton, the shared components that provide a Kubernetes-native CI/CD pipeline. The first component is Tekton Pipelines, which we have already covered, and which is packaged as OpenShift Pipelines along with Triggers and the operator. Next we have Tekton Triggers, which manages Tekton resources based on events; these events can be CloudEvents or webhook events, like GitHub's. Then we have the Tekton Dashboard, which provides a UI for handling Tekton resources. In OpenShift we generally don't use the Dashboard, because we already have the console, with the Tekton components integrated there. Then we have the operator, which facilitates the installation of these Tekton projects in an easy manner using the Operator Lifecycle Manager. We also have the Tekton CLI, which provides the tkn tool to handle all these Tekton resources, and we have Tekton Hub and the Tekton Catalog, which provide reusable tasks and pipelines. So let's go to the main Tekton Triggers concepts. Basically, Tekton Triggers provides some CRDs that extend the Tekton architecture by extending Kubernetes. One of these CRDs is the TriggerTemplate: a TriggerTemplate is just your pipelines and tasks in a parameterized form, and those parameters are based on events. Then we have the TriggerBinding: the TriggerBinding CRD extracts fields from the event payload into the corresponding variables that are referenced in the TriggerTemplate. The next CRD is the Trigger, which is just a combination of a TriggerTemplate and a TriggerBinding, so that we know where to take our variables from. And along with that we have something called an Interceptor; what an Interceptor does is take the payload and modify it. Okay, thank you, Khurram. So let me try to demystify this a little bit. Do you have more slides? Go ahead to the next slide. I have just this one. Yeah, so let's pause for a second on these concepts and try to explain why we are talking about this. Usually, as a developer, or if you are implementing continuous delivery, you want to trigger your pipelines from events: you make a commit, or a merge request, or something like that, and then you want some pipelines to get executed automatically. If we remember how Tekton works, you have this notion of a Pipeline. The Pipeline is the definition of everything that is going to run, the tasks and the steps and so on, but the Pipeline itself doesn't run. It's not an instance of something running; it's just a static definition. If you were in Java, you would have your class, and then you would have instances of that class. The class doesn't exist by itself at runtime; once you instantiate it you get objects, and that's what has a real existence. In Tekton, the same thing happens with pipelines: the Pipeline is the abstract thing, and then you have the PipelineRun, which is a running instance of your pipeline definition. Correct. So what we were missing was this: I have my pipeline defined somewhere, and when I have my Git events, I need something to create my PipelineRun automatically. Okay.
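As a reference for that class-versus-instance analogy, here is a minimal sketch of the two resources; the names, the repository URL, and the git-clone catalog task are illustrative, not taken from the show:

```yaml
# The Pipeline: a static, parameterized definition. Nothing runs yet.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
    - name: git-url
      type: string
    - name: git-revision
      type: string
      default: main
  workspaces:
    - name: shared
  tasks:
    - name: fetch-repository
      taskRef:
        name: git-clone          # reusable task from the Tekton catalog
      params:
        - name: url
          value: $(params.git-url)
        - name: revision
          value: $(params.git-revision)
      workspaces:
        - name: output
          workspace: shared
---
# The PipelineRun: a running instance that supplies concrete values.
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: build-and-deploy-run-
spec:
  pipelineRef:
    name: build-and-deploy
  params:
    - name: git-url
      value: https://github.com/example/app.git   # hypothetical repository
    - name: git-revision
      value: main
  workspaces:
    - name: shared
      emptyDir: {}
```

Triggers exist to generate that second document automatically whenever a Git event arrives, filling in the parameter values from the event payload.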
And the PipelineRun needs some information: where is the Git repository? What branch is it? What is my working context? And so on. It needs dynamic information, because we don't want static information in the pipeline's static definition. So we have what we call placeholders, or variables, that we define in the pipeline. We say, for example, this is $(git-url), this is $(git-revision), et cetera; these are all going to be variables. And at runtime, when we want to trigger the PipelineRun, we want these things to be filled in, right? We want them to get the proper values. I believe that's where the Triggers come into play. We have the Pipeline, we have the PipelineRun, and the Triggers are going to say: okay, when I have this Git event, please take this variable, this variable, and this variable, and fill them in, to create a PipelineRun. So what I'm seeing here on the slide is a shortcut: I have the Pipeline on the left and a PipelineRun with all the data already filled in, and now we are trying to explain what happens in between. All right, so please go ahead again, Khurram, and explain some of the additional concepts that do this magic, like how we get the information about which Git repo we are in, and so on. Yeah, so if we have a webhook event from GitHub, the information will generally be in the payload, or some of it might be in the headers, but generally we use the payload fields, and those payload fields are extracted into the parameters defined in the TriggerBinding. The payload might have, let's say, ten fields: it might have the commit ID, the branch, and so on, but say we only need the commit ID; then we will just have a parameter for the commit ID, and we will give it the proper path, body.commit-id or wherever the commit ID sits in the payload, so it gets extracted and passed to the TriggerTemplate, which has defined that it needs this commit ID. Okay, I'm not sure everyone can hear well, because we have some audio issues, so that's why I'm rephrasing. Basically, what you just explained is: we have something that says, here are the variables that interest me; I want the Git URL, the branch, the commit ID, and so on. And those things we define in the TriggerBinding. Correct. We say: these are the things that I want. And you have the TriggerTemplate that says: here is the data that you need to fill in; the Git repo and so on are going to be variables. And basically the TriggerBinding says: this data, which is my variable, actually comes from this piece of information, which is somewhere else in the payload. Is that correct? That's why we call it a binding: you say this variable is bound to that field. Yeah, the exact location of the field in the payload is what's defined there. And I would say it can be nested also; you can point to a properly nested field in the TriggerBinding. Okay, cool. Thanks. I hope this is clear; for the people watching, if something is not clear enough, please don't hesitate to ask questions in the chat, and we will try to answer as best we can. And then we have the Trigger CRD, which is a combination of a TriggerBinding and a TriggerTemplate, plus, optionally, an Interceptor. We have not discussed the Interceptor much, but an Interceptor is just what I said earlier: it modifies your payload,
or it can stop processing the event entirely. Let's say you have a push event but you wanted a pull request event; that kind of filtering can be done by an Interceptor. Then we have the EventListener CRD, which provides an event sink. Basically, all these TriggerBinding and TriggerTemplate operations are done by the EventListener: it extracts the parameters via the TriggerBinding and creates the resources from the corresponding TriggerTemplate. You can also attach interceptors in the EventListener to pre-process the event payload. And then we have a more advanced concept called the ClusterInterceptor. A ClusterInterceptor provides a cluster-scoped interceptor; behind it you can have a Knative Service which runs your business logic on the incoming payload. It's really just a Knative Service, nothing else, and it returns a 200 response. We already have some cluster interceptors available out of the box, like GitHub, Bitbucket, GitLab, and the Common Expression Language (CEL) interceptor, and if you need something in addition to those, you can define your own. Okay. This is funny because it reminds me of something: a few years ago I was trying to implement this kind of CI/CD with GitHub, and they used to provide a bot framework called Probot. Probot was basically a framework where you define these things; you say, if I have a push event, then do this. But for the "do this" part, you basically have to write your own code to extract the information from the payload, apply your own custom logic, and then interact with something else. So I was implementing a Node.js backend that waits for something to happen on the GitHub side and then does something: for instance, it could create a project in OpenShift, deploy an application automatically, and so on. But all of that you had to code yourself; it was just an empty shell. And what you folks did here, I think, with the EventListener, is implement this feature out of the box: you create your pipeline and whatever you need to interact with Git or GitHub, and then you instantiate your own backend that listens for these events and already has all the logic to extract the correct information and to create whatever it needs to run the pipelines in OpenShift. Is that a correct way to phrase it? Yes, and even complex operations can be performed by interceptors. Let's say you have a pull request event and you want an approval first, an "ok to test" comment, say, before anything runs; all of that can be done by interceptors, CEL interceptors, or a combination of those. Okay. That would be interesting, I think, for another episode where we go a little deeper into interceptors and explain how we can use them to implement what you said, like approvals and such, because I know that approvals are still in the works upstream; the structure to define how we do that is still being designed. So that could be interesting for an upcoming show: how to build those custom interceptors and integrate with some external service or something like that.
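To make the filtering idea concrete, here is a hedged sketch of a Trigger chaining the built-in GitHub and CEL interceptors so only newly opened pull requests get through; the trigger name, the secret, and the binding/template references are illustrative, and the `ref` syntax assumes a recent Tekton Triggers version:

```yaml
apiVersion: triggers.tekton.dev/v1alpha1
kind: Trigger
metadata:
  name: github-pr-trigger
spec:
  interceptors:
    - ref:
        name: github            # built-in interceptor: validates signature and event type
      params:
        - name: secretRef
          value:
            secretName: github-webhook-secret   # hypothetical Secret holding the webhook token
            secretKey: token
        - name: eventTypes
          value: ["pull_request"]
    - ref:
        name: cel               # built-in CEL interceptor: filter on the payload
      params:
        - name: filter
          value: "body.action in ['opened', 'reopened']"
  bindings:
    - ref: github-pr-binding
  template:
    ref: github-pr-template
```

If the CEL filter evaluates to false, processing stops and no PipelineRun is created, which is exactly the "stop the operation" behavior described above.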
Yeah, basically one more point I want to add here: when we have Triggers installed, by default we have three or four core interceptors, GitHub, Bitbucket, GitLab, and CEL. The main motive for introducing the ClusterInterceptor is that earlier, having only this fixed set of interceptors was blocking Triggers from growing; you couldn't make use of Triggers in a dynamic way. So that's the main reason for the ClusterInterceptor: it's a kind of plug-in mechanism, where anyone can write their own logic, plug it into Triggers, and get the benefit of this eventing mechanism. That's pretty cool. So if you have workflows that you have already automated in some way, if you are doing, I don't know, GitFlow or GitLab Flow or things like that where you automatically create projects or deploy applications, this can be a way to implement that type of behavior more dynamically. That's really interesting, and let's aim to have that dissected in another episode, because we like everything that can be automated. Thanks a lot for the explanations. So this slide summarizes, I think, what you have said. Yeah, please go ahead. Yeah, this summarizes what I've said: an event comes to the EventListener, the TriggerBinding extracts the expected parameters and provides them to the TriggerTemplate, the Trigger is the combination of the two, and in turn it creates the Tekton resources. Okay, so Tekton resources are things like the PipelineRun, et cetera; basically something that gets instantiated, something that will run. Yeah. In detail: let's say you have a Git event with a JSON payload, and it comes in through the route to the EventListener. Then it goes through the TriggerBinding, which defines each parameter; those parameters are in turn provided to the TriggerTemplate, which then creates the Tekton resources. Okay. And is it correct to say that the EventListener is an application itself that runs in a pod? Like, it's an event-based architecture where you have a pod that listens for something to happen, and when it intercepts that event, it does something. Is that correct? Yeah, that's correct. That's why we also call it a sink; internally it's actually called the sink. Okay. All right. And is it that part that processes the trigger bindings, et cetera? What fills in the information, is it the EventListener itself? I didn't get the question. Yeah, sorry. What does the EventListener do exactly? You have an application, it intercepts the events, and then it extracts the data from the payload; is that correct? Yeah, it extracts the data from the payload, applies it to the binding values, and discards all the other values that are not needed. Okay. So basically it's a Kubernetes pod: when we create an EventListener custom resource, it creates a pod which keeps running, and that's also why we get a URL on the EventListener, so that we can use that URL to send events to it.
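For reference, an EventListener tying these pieces together might look roughly like this; a minimal sketch, where the names and the service account are illustrative, and the service account is assumed to have RBAC permissions to create Tekton resources:

```yaml
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: github-listener
spec:
  serviceAccountName: pipeline      # assumed SA allowed to create PipelineRuns
  triggers:
    - name: github-pr
      bindings:
        - ref: github-pr-binding    # the TriggerBinding discussed above
      template:
        ref: github-pr-template     # the TriggerTemplate discussed above
```

Creating this resource is what spins up the listener pod and its service (conventionally named el-github-listener), which is the sink the webhook will POST to.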
So, before any event comes in, someone has to apply the TriggerBinding and the TriggerTemplate; these are static templates, we can say. Then, whenever an event comes to the EventListener pod, the actual work happens: the TriggerBinding fetches the dynamic values and passes them to the template, and the template internally creates the PipelineRuns and TaskRuns, I mean the Tekton resources. Okay. Thank you very much for reminding us of this point, because it's very important: that URL is actually what we put in the webhook definition, correct, on the Git repo side. You say, I have a webhook, you select whichever events you want to intercept, and you put in that URL, which is generated by the OpenShift router, to point to the EventListener for that specific repository. And just as confirmation: that event is an HTTP POST from a webhook, right? It's always an HTTP POST with JSON content, or some other form; that's the kind of event this component is listening for. And I have a question, I don't know if you are aware, about the next version of OpenShift: will there be an improvement in the pipeline UI? Right now the pipeline builder helps you create a pipeline in the OpenShift web console. Will it also become easy to create the trigger part? Today you have to write your TriggerTemplate and TriggerBinding yourself; I was wondering if there is going to be any UI help for creating triggers as well. So that's one more thing we will see in the demo section. Right now the OpenShift UI has basic templates for triggers, which give us the end-to-end flow, but the advanced use cases are still under implementation. To answer your question: right now the UI supports adding a trigger along with the pipeline. Yeah, and if I'm not mistaken, that relates to what you mentioned about the pre-existing pieces for GitHub, GitLab, and so on, because you can already say "I want to intercept this event from GitLab" or whatever, and it will create those EventListeners and friends with the correct information. Because if you have to create everything in YAML, it's a bit complex. So, since we are speaking about both Tekton and OpenShift Pipelines, and we rightfully have one foot in the community and one foot in the product: can these things also be done from the upstream UI, or is it something we add in the OpenShift console as added value? I haven't played with the Tekton Dashboard; I don't know what that UI does. The Dashboard UI and the console UI are completely different, and they also follow different architectures, actually. Okay, but the things Natale mentioned, like creating the EventListeners automatically from the UI: can we also do that upstream, or is it only in OpenShift Pipelines that we have this added value? I can just speak from my experience: in the upstream Dashboard you need to paste the YAML files. There is no direct form-based creation where a few inputs create all these resources; you need to paste the entire YAML. But in the case of the OpenShift UI, everything is automated. Okay, cool. So that's the difference, I would say.
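On OpenShift, that webhook URL typically comes from a Route in front of the listener's service (the same thing `oc expose service el-github-listener` would create). A rough sketch of such a Route, assuming the el-&lt;listener&gt; service naming convention; the port name may vary by Triggers version:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: github-listener
spec:
  to:
    kind: Service
    name: el-github-listener    # Triggers names the sink service el-<EventListener name>
  port:
    targetPort: http-listener   # assumed listener port name; check your Triggers version
```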
So that was a genuine question; I didn't know the answer. So this is basically something we do as part of the added value of the OpenShift console, or rather, we make upstream features even easier to use, in a more productized way. I think we have things like designing the pipeline visually as well; maybe you will show something like that during the demo. Okay, thanks for the information. All right, it's half past ten, I don't know how much time we have, so let's go ahead and get a concrete visualization of those concepts. Yeah, so here we have our Triggers EventListener CRD. We have kind EventListener, as well as the service account, which is used for creating resources, and we have a trigger reference; "trigger" is the name of the trigger here. Then we have the TriggerBinding CRD. You can see the kind TriggerBinding, and the next thing you need to notice is the params: the name is git-revision, and we have specified body.head_commit.id. That means this is the position where the commit ID is available in the GitHub payload. Body is your payload; if the value were in the header, you would have header.something instead. Okay. Yeah, sorry, just to make sure I understand correctly: in your payload, when you have your Git event, it has something called the commit ID, and under repository it has the URL, all inside your JSON payload. And basically what the TriggerBinding says is: extract this thing that you have in value. Yeah. And store it as the revision, is that correct? Like, this is going to be the parameter name. Yeah, revision, which will be used somewhere else. Okay. And with this you're kind of navigating the structure of the JSON from the webhook's POST HTTP request. And I guess those fields change, no, if you're using GitHub or Gitea or Gogs or GitLab? From experience I think they're pretty much the same, but sometimes they change; if you want to say something about that. Yeah, so by default in OpenShift we provide some trigger bindings, for GitLab and GitHub, and I think Bitbucket as well, and all those values are specific to each provider, I would say. And it's funny that you mention this, Natale, because at some point, eight or maybe ten years ago, there was a group of software vendors that said: okay, whenever I have an issue, if it's in JIRA it looks one way, if it's in GitHub it's something else, if it's in GitLab it's something else again. Can we come up with some sort of standard so that tools are able to integrate? The idea was to define pivot formats that any integration tool can attach to, and then every tool fills in that information. A standard started to be defined, called OSLC, Open Services for Lifecycle Collaboration. And it's exactly what you are pointing at: when I have a payload from GitHub, it's going to be one JSON; when I have a payload from GitLab, it's going to be a different JSON; and thus you need a specific implementation for every tool. The goal of that effort was to have one standard pivot format that you integrate with, so it's always going to be head_commit.id, say,
and whatever you have on the other side, if it's GitLab for example, gets transformed to fit into that field, so you don't have to implement it for every different tool. I'd be curious to see where those efforts stand today, and whether it would make sense, even upstream, to start defining something like "this is the standard shape we want for an issue", and then GitLab fills it in, GitHub fills it in, and so on. Maybe that's a conversation that could happen upstream. It's basically the same thing that happened with Tekton, right? Before, every vendor had its own definition of a pipeline, of what a step is, of what a task is, and none of it was compatible. Then, with the Continuous Delivery Foundation, they said: let's define some standards, and let's create these Kubernetes resources which are a Pipeline, a Task, a Step. Now every tool that implements Tekton uses the same standard; we no longer have each tool with its own definition or its own DSL for pipelines, like Groovy for Jenkins and YAML for GitLab runners. If we are using the Tekton side of it, it's going to be the same YAML, standard resources. So maybe it makes sense to go even further and standardize those payload bits as well, so we have one common payload that the different tools fill in the same way. Actually, there was a discussion about having TriggerBindings in the catalog. What will happen is that we will already get TriggerBindings for GitLab, GitHub, and Bitbucket; the pattern is already defined, and the values will be provided upstream. So in the TriggerTemplate we can just use the same values, whether the event comes from GitLab or GitHub. Yeah, okay. And it's already in the catalog. All right. So, just closing the parenthesis here: Natale, this was a project I had in mind for a long time, to have this unique standard. I even wanted to do it with YAML transformations, where you have the standard listener and then some transformation that generates the correct payload you expect: you intercept the GitHub payload, transform it, and generate one standard output that is always the same. And you see, today we have it here as an open source upstream project that we are talking about, which is very cool. And you know what, this is another example of Kubernetes as a standardization platform: everyone converging on the same standard, the same open source, the same project. This kind of convergence on the same thing is very cool. I think we can go to the next step, because otherwise we will run out of time. There is also some interaction in the chat; some people would like to know more about Tekton. We will share some resources to get started with Tekton after the live demo, and if you have any useful links, please share them with us so everyone can start learning Tekton. So here we have defined a TriggerTemplate. You can see some params that are available from the TriggerBinding: git-revision, git-repository-url, message, content-type. And in the resource template you use them like $(tt.params.message) and $(tt.params.contenttype).
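Putting the two halves side by side, a TriggerBinding that pulls fields out of a GitHub push payload and a TriggerTemplate that consumes them could look roughly like this; a sketch, with names and payload paths illustrative of GitHub's push event:

```yaml
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: github-push-binding
spec:
  params:
    - name: git-revision
      value: $(body.head_commit.id)      # JSON path into the webhook body
    - name: git-repository-url
      value: $(body.repository.url)
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: github-push-template
spec:
  params:
    - name: git-revision
      description: The git commit id
    - name: git-repository-url
      description: The clone URL of the repository
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: build-and-deploy-run-
      spec:
        pipelineRef:
          name: build-and-deploy         # the static Pipeline sketched earlier
        params:
          - name: git-url
            value: $(tt.params.git-repository-url)
          - name: git-revision
            value: $(tt.params.git-revision)
        workspaces:
          - name: shared
            emptyDir: {}
```

At runtime the EventListener evaluates the $(body...) expressions against the incoming JSON and substitutes the results into the $(tt.params...) references, which is the binding-to-template handoff described in the talk.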
And then we have the ClusterInterceptor, which we won't go into in detail, but what it does is modify the payload, as I said earlier, and this is how we define it. Behind it there is a service, a deployment with a pod running, which modifies the payload and returns a 200 response. So what you give it is the name, the namespace, and the path. Okay. So for another episode, I don't know when, let's keep this slide and try to implement a custom interceptor that integrates with ServiceNow to do an approval, for instance. Let's have that in the backlog. That's interesting. All right, thanks, and go ahead please. Yeah, so this slide also shows a couple more features. [Khurram's audio drops repeatedly here; the recoverable points are that the EventListener can be serverless, and that you can also have TLS on it.] Sorry. We have audio issues; it's dropping a lot, so please excuse me if I rephrase what you said. Basically, people who are watching can read the slide, and the most important or most interesting thing I see here is the serverless part. Is it correct to say, Savita, please confirm: when we spoke about the EventListener, we said there's a pod that is always running to intercept whatever happens. That's cool, but imagine we have hundreds of integrations, hundreds of webhooks; that means hundreds of pods that are running but just waiting for something to happen, consuming resources. What I understand now is that this can also be serverless: we can have something that doesn't exist as a pod, just a URL, and once the webhook fires, it actually creates the pod, does its thing, and then it's shut down. Shut down? I still have to work on my English. Okay, so that's very cool, because now we are saving resources, but we still keep the eventing side of things. Yes. Basically, until now, whenever we created an EventListener, it created a Kubernetes pod, right? Now we have integrated with Knative: with the help of a Knative Service, we are able to achieve the serverless behavior. And there is another option: we have custom resources in the EventListener, so we can specify a Kubernetes resource or a custom resource. This custom resource can be Knative, or it can be anything else that can be integrated with the EventListener. So along with the Kubernetes pod and the Knative Service, if users want, they can implement their own CRD and integrate it with the EventListener. Okay, that's cool, so they can create their own operator to do something and integrate with it. Yeah, we are running a bit short on time. Do we want to jump into the demo? I don't know how much time you need for it. Yeah, I think it would be better to jump to the demo so we can see all these things in action. Let's go ahead. Thank you very much, Khurram, that was very interesting, and it lays the groundwork for future episodes. Thanks a lot. Thank you. You made my life easier now for the demo. I'll share my screen; I hope it's visible. Yes, it's fine. Just, if you can, increase the font so we can see the demo better. Sure, sure. Hmm, clicking is not working in my mouse. Okay. Yeah.
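Picking up the two slide items that survived the audio drop, here is a hedged sketch of a ClusterInterceptor definition and of a Knative-backed (serverless) EventListener; all names are illustrative, and the schemas follow the Tekton Triggers v1alpha1 API:

```yaml
# Custom cluster-scoped interceptor: the EventListener POSTs the payload to
# this service, which runs your business logic and answers 200 to continue.
apiVersion: triggers.tekton.dev/v1alpha1
kind: ClusterInterceptor
metadata:
  name: my-interceptor
spec:
  clientConfig:
    service:
      name: my-interceptor-svc    # hypothetical Service fronting your logic
      namespace: interceptors
      path: /
---
# EventListener whose sink is a Knative Service instead of a Deployment,
# so it can scale to zero between webhook deliveries.
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: github-listener-serverless
spec:
  serviceAccountName: pipeline
  triggers:
    - name: github-pr
      bindings:
        - ref: github-pr-binding
      template:
        ref: github-pr-template
  resources:
    customResource:               # embed the sink's manifest here
      apiVersion: serving.knative.dev/v1
      kind: Service
      spec:
        template:
          spec:
            serviceAccountName: pipeline
```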
So, for the time being, I have just installed everything, OpenShift Pipelines, using the operator. Give me a second, sorry for the interruption. No worries. Okay, it's fine now. So everything is installed in this cluster, all the OpenShift Pipelines pieces. One way to confirm that OpenShift Pipelines is installed is that we can see a Pipelines section here, with Pipelines and Triggers, which shows this cluster has Pipelines installed. Now, without wasting time, I'll create a pipeline first, and then I will add a trigger to that existing pipeline, to show how we can add a trigger to a pipeline and create a pipeline run based on events. Okay, let me create a new namespace called demo. Yep, the project got created. I'm going to use the "From Git" import so that I can specify my own GitHub repo; this is the one I created for this demo. By default it selects the builder image as Go, and I'm not going to touch any of that. I will just add a pipeline: when I check this "Add pipeline" option while creating from this form, it creates a pipeline for me; I don't need to write the pipeline manually. The OpenShift Pipelines operator has integrated all of this; it's functionality of the operator. When I create this, a pipeline will be created. Let me go back to the Pipelines section and select the project I created; you can see a pipeline got created, and it's already running. In this dev console, as part of the UI, to verify that your application, your tasks, your pipeline actually work against the GitHub repo, the very first time you create it we trigger the pipeline run automatically as well. But later, when you edit this pipeline... okay, before that, I just want to show that a pipeline run was already triggered, and just as a refresher: this pipeline run contains three tasks, fetch-repository, build, and deploy, and these three tasks have several steps; those steps are what actually run the containers and do the work. Now, suppose I want to edit it. If I go back to the pipeline, I have an edit option, "Edit Pipeline", and I can add a task or change something. Once I do that edit, to pick up the changes I would need to re-run it, I mean start it again, but I don't want to do that start manually, right? As we discussed during the presentation, we don't want to start any pipeline manually; instead, whenever an event occurs, it should re-run, based on the events. To do that, we have the trigger concept, as we discussed. So far we just created a sample pipeline and got a pipeline run automatically; now, whenever there is a change to my repo, and my repo is this one, the one I gave while creating the pipeline, whenever some action happens on this GitHub repo, it should automatically trigger a pipeline run for me.
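For orientation, the pipeline the console generates for a "From Git" import has roughly this shape. This is a hedged reconstruction, not the exact generated YAML; the ClusterTask names and params are assumptions based on the three tasks named in the demo:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: demo-app
spec:
  params:
    - name: GIT_REPO
      type: string
    - name: IMAGE_NAME
      type: string
  workspaces:
    - name: workspace
  tasks:
    - name: fetch-repository        # clone the source
      taskRef:
        name: git-clone
        kind: ClusterTask
      params:
        - name: url
          value: $(params.GIT_REPO)
      workspaces:
        - name: output
          workspace: workspace
    - name: build                   # build and push the image
      runAfter: [fetch-repository]
      taskRef:
        name: buildah
        kind: ClusterTask
      params:
        - name: IMAGE
          value: $(params.IMAGE_NAME)
      workspaces:
        - name: source
          workspace: workspace
    - name: deploy                  # roll out the new image
      runAfter: [build]
      taskRef:
        name: openshift-client
        kind: ClusterTask
      params:
        - name: SCRIPT
          value: oc rollout status deployment/demo-app
```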
So, to do that: earlier there was no "Add Trigger" form here, but recently we added one, and here we have an option to select which provider we want, whether it's Bitbucket, GitHub, or GitLab. We support all of these, and they come built in with the operator, so by default all these resources are already created and we can make use of them directly. For my use case, I will use this GitHub pull request one, which is a TriggerBinding. By default it picks up the app name, which Git repo, which branch, and what the image name is; everything is taken care of by the UI itself. So my job is just to select the provider type and click the Add button. That's really awesome, because otherwise it would take a lot of work to create all the YAML, like figuring out the JSON payload and how GitHub represents the pull request in JSON, et cetera. Yeah, exactly. Now that I have added the trigger, to see whether those resources got created, I can go back to the Administrator view and choose the Triggers tab. You can see an EventListener got created just now; you can see the timestamp as well. And one thing I want to show: I have not created any trigger bindings myself, but the OpenShift Pipelines operator provides all these ClusterTriggerBindings by default, available at the cluster level. In Triggers we have two CRDs for this, TriggerBinding and ClusterTriggerBinding; these ship as ClusterTriggerBindings because everyone can use them across the cluster, and if someone wants them at the namespace level, they can create a TriggerBinding directly via YAML. We can also see a TriggerTemplate. But first, in the ClusterTriggerBindings, I want to show what this GitHub pull request binding looks like. If you look at the YAML, you can see the different parameters it's actually watching from the webhook: URL, revision, action, pull request number, full name, and so on, whatever the required information is. The same things are used inside the TriggerTemplate, so let's go back to the TriggerTemplate now. In its YAML you can see we use all those parameters from the trigger binding, and in the spec we make use of which repo, what the name is, and so on. And finally we use the pipeline we created initially; we are not creating any new pipeline here. Because we did "Add Trigger" from the existing pipeline, it automatically created this TriggerTemplate, added a resource template of kind PipelineRun, and that PipelineRun refers to the pipeline that already exists. So now I believe we are going to add the webhook on the GitHub side, right? Yes, but before that, one more thing I quickly want to show is what this EventListener looks like. This is it: it has the triggers, with the template and binding clubbed together so the information can be shared. Now, as I have created everything, this EventListener internally creates a Kubernetes pod, as we discussed. It keeps running, since it's Kubernetes-based, and it keeps watching for events.
It's continuously running, watching on the pod's port. Okay. And another thing: when we create this, it automatically creates a route as well. Okay, there is some error from the UI side. Yeah, just refresh the console and see if it works. Okay, sure. This is 4.8, right? Yes, 4.8. Okay. Cool. All right. Let's copy that URL, I think. When I click this URL directly, I get an error, because I have not passed any information, right? I should pass a properly formatted body; otherwise my EventListener will straight away reject the request, and that's why it gave the error. Now I will make use of this URL and go back to my repo. So this is the simple Go app which I used for creating the pipeline. I will go to Settings and add the webhook. Here is the webhook; I had already set one up for testing purposes, so I will just clear everything. Here we specify the payload URL, the URL of our EventListener route, and the content type should always be application/json, which is what it expects. Then we can select whichever events we want; right now I'm interested in pull requests, so let me individually select just the pull request events. Under pull requests there are all these events: whenever a PR is opened, closed, reopened, assigned, anything that happens on the pull request, the event should be sent to my EventListener. I will just update this webhook. The webhook is updated. I have already created a few PRs for testing purposes. So this is an existing PR, and any action on this pull request should automatically re-trigger the pipeline for me. Before doing anything, let me go back here and show that currently we have only one pipeline run in this demo namespace, as you can see. Now I will just close this pull request. After this, if I go and look, a new pipeline run is up and running, because an event came to the EventListener, it recognized the pull request event, the TriggerBinding fetched all the information and handed it to the TriggerTemplate, and finally that created the PipelineRun for me from the trigger template. If I go back here, this is the one I was talking about: in the TriggerTemplate we have specified a PipelineRun, and you can see that this pipeline run is executing the pipeline, the original one. Cool, that's really cool, and great to see it working live. I have a question on the pipeline run; can you please go back to the UI? Yeah, okay. The pipeline run, can you click on it? Yeah, sure. Is there something in the metadata? Okay, the trigger event ID. Say now I want to understand what triggered the execution of my pipeline: there was a commit ID, or a pull request ID, or whatever. Is this stored somewhere in the metadata of the pipeline run, or not at the moment? Every event that comes in is actually assigned an event ID by the EventListener; every event that comes from GitHub or GitLab is assigned an event ID, and it is propagated from there. Let's see it here. We have audio issues again; we couldn't quite understand.
Yeah, basically he's trying to say that each pipeline run we trigger has a unique event ID. And also, in the pipeline run we can see "triggered by", I mean who triggered this pipeline run. This information is important. Yeah. And if we go to the event listener... can you go to the labels? The labels will have more. Yeah, this one: the event ID. Okay. Yeah. So maybe that's something we can talk about offline, but what would be very cool is to be able to trace, from the pipeline execution, what started it on the Git repo side. If it's a PR, I click on the pipeline run and I have a direct link to the PR, to understand what code has been merged; that could be really nice. I think it's quite easy, because it's already in the metadata from the pull request; all we have to do, maybe, is add a label, as Khurram said, that links to the URL of the PR. Yeah, that would be really nice. Yeah, that's a good point; we can keep adding to and improving this. Your feedback is very valuable, and we can consider it. All right. And that's why we bring our engineers here, so we can have this type of exchange, along with people asking questions. So thank you very much. I believe we are already past the hour. Yeah, we are a little over time, but the morning is pretty free, so we can go a few minutes extra and have time to wrap everything up. Now, I put in the chat the link to our learning portal, because someone asked how to start learning Tekton. We have a scenario where you can start learning OpenShift Pipelines on OpenShift. Then, if you like, we have the Tekton Deep Dives, a series of online events where an instructor does a deep dive on Tekton, and the material for learning Tekton is the other link I put in the chat. Those are two valuable assets that Red Hat Developer offers to start learning OpenShift Pipelines and Tekton on top of OpenShift and Kubernetes. I just wanted to share our resources; I don't know if you folks have additional material we can share with people to start learning Tekton and all the things we have seen today. Maybe we can add all these links in the YouTube description so people can refer to them directly from there. That would be great. Yeah, very good. Okay, so thank you very much again to everyone; it was a great session. Thanks for being, as always, awesome hosts and asking great questions. In the next sessions, I hope we will be able to cover the things we discussed, like the suggestion on tracing back to the code; if it comes in one of the upcoming sprints, I will speak about it with Siamat as well. Well, I have a suggestion, Jafar: our show is bi-weekly on Wednesdays, so the next one would be August 11th. If we want to do another Tekton episode, we can use that show. We can try to prepare something, and we will see each other on August 11th. Yeah, perfect. And so, as a wrap-up, thanks a lot; we already have good topics to cover in the upcoming sessions.
For August 11, maybe we can speak about the new feature called Pipelines as Code, where we embed the Tekton definitions in the Git repository itself and automatically trigger them based on those events. That's another step further, but we wanted to show today's material first, because this is what runs now; Pipelines as Code is in dev preview, but we will definitely speak about it. And I'm very interested in the custom interceptor topic, because I like to tweak things. Me too, and I have some good ideas, because an interceptor can do what we said: it can pull the data from the pull request and add a label to the pipeline run with the URL. That's a very simple use case we can do already, and then we can look at integration with ServiceNow or whatever. So thanks a lot, thanks again. Thank you very much. Thanks to everyone who connected to the stream. Thank you very much, Natale, for setting this up and making sure that everything runs smoothly. And thank you, of course, Savita and Khurram; we hope to have you back in other episodes. I wish everyone a great day; we are going to end the session now. Thanks, and see you on August 11 then. Thank you. Bye bye. Thanks everyone. Bye bye.