Okay, so hello everyone and thank you for joining us today. My name is Jafar and I work as a tech marketing manager for Red Hat. Today we have some very esteemed guests from engineering, and we want to welcome you to this new session. Usually we pick different topics for the OpenShift Coffee Break, and this one will be a special episode because it's the first of a recurring series — a series about pipelines. Specifically, we will be speaking about Tekton, which is a Kubernetes-native way of doing CI/CD. Today I have Siamak, Savita and Nikhil from engineering, and we also have our lifetime guest, Tero, who will still be sharing coffee with us. So if you guys can make a quick introduction for yourselves — who wants to go first? I can start. Okay. Siamak here, I'm the product manager for OpenShift Pipelines, which is based on the upstream Tekton project, and at Red Hat I generally look after the CI/CD space on OpenShift. Cool. Yeah, I can go next. Thanks for having me here on the OpenShift Coffee Break. I'm Savita. I'm part of the DevTools group, and I work on OpenShift Pipelines and upstream projects like Tekton and Knative. I'm currently based out of the Bangalore office in India. Okay, thank you very much for joining. And Nikhil? Hello, I'm Nikhil Narbis. I'm also from Bangalore, India. I'm part of developer tools at Red Hat, and I contribute to the Tekton CD projects. I mainly focus on the operator and the productization part of OpenShift Pipelines, where we bring the upstream components like pipelines into OpenShift. Okay, thank you very much again. Hi, my status changed from co-host to permanent guest, since I left Red Hat last week. I'm currently working as a senior DevOps engineer at another company. So I will be providing the external view of things, so that not everything is just the Red Hat view.
So I will be challenging the engineering folks, see if I can keep up with them. All right, you're very welcome to do so. And of course, we see that you're still wearing a red shirt — we know that even if you work for a different company, you can never leave Red Hat. Correct. Okay. So, yeah, thanks again. The goal here today, as I mentioned, is to have a recurring series about Tekton. And of course, we will also speak about the productized version of it, which is OpenShift Pipelines. The goal is to have pragmatic sessions: as we move along through the different episodes, we learn new things about the Tekton concepts, and we also show you how to use them. It's not a master course, but it's basically showing you the different concepts, how to use them, how they work together. And because we have the great luck of having the engineering folks with us, we can also better understand what Red Hat does in the communities, how we develop things, et cetera. So Siamak, since you are the product manager for this, how about you tell us a bit more about how Red Hat got involved with Tekton, why we chose it, and what we do in the communities, as a starter. Sure, I'll be happy to talk a little bit about that. The continuous integration space is really not short of solutions. If you go as far back as 10 or 15 years, there is a very colorful landscape, a variety of different solutions that target continuous integration in different ways. They all want to solve the same problem: you want to automate the activities involved in the software development lifecycle, like building the binary of your application, testing it, perhaps running some security checks against it. There are more and more activities, and they all solve the same problem — you want to automate this — but there are different approaches. So it's quite a colorful space.
And if you go a couple of years back, there is also a large number of cloud services that do the same thing — look at Travis CI and CircleCI, GitLab pipelines, GitHub Actions, and so on. So there was really no shortage of solutions, if you will. However — so, Red Hat: I'm part of OpenShift, and OpenShift is the enterprise Kubernetes from Red Hat — there is a sharp, really increasing adoption of Kubernetes across the industry. Not everywhere, but it really grows double digits, and for a lot of organizations that are building cloud-native applications, Kubernetes is a given as the platform for deployment. So you see this growth of applications being deployed on Kubernetes, while the CI systems are completely oblivious to the platform, to the container platform. So there was this gap that was discovered in the Kubernetes community: the CI systems are very different from the infrastructure that the application is deployed on. When you're using cloud services, to an extent maybe you don't even want the CI to be aware of your environment. But as you get closer to deployment, or if you want to run those in-house within your own infrastructure — even if your infrastructure is a cloud, if you're the owner of the infrastructure — then that division starts to become problematic. And you can also notice there has been a lot of effort in bridging this gap: there has been a lot of engineering effort on how we can make Jenkins work much nicer on a container platform, and you can see similar efforts for other CI engines as well.
So, most of the cloud-based CIs do recognize enterprise needs, and they have a version that runs on Kubernetes — at least the pipelines can execute on Kubernetes, and there are ways to have the runners, for example, schedule the actual execution of the jobs on Kubernetes. There have been a lot of efforts to sit in between and bridge these two worlds together, which works to a certain degree, but there are still limits to that. The obvious ones: one aspect is how well the CI engine itself is suited for Kubernetes environments — and the more traditional and monolithic the solution is, the more difficult it is to run on Kubernetes. Jenkins is a great example. Everyone is a Jenkins user, including Red Hat, and running Jenkins on Kubernetes itself is difficult, because Jenkins was really designed for a different era — it was designed way before containers even. The limits that exist are still there. For example, if your application is deployed on Kubernetes, you keep your credentials as Secrets on Kubernetes. Perhaps you have a secret manager that is managing those centrally across multiple clusters for you, or you're using a cloud key manager of some sort. And then you have the CI engine that needs the exact same credentials, because it needs to go to the same Git repos or deploy to the same clusters — and you have a completely different way of redefining the exact same credentials. So now we have two different places holding the exact same credentials, and then you rotate them, update them, and so on. And when you have configuration in one world — Kubernetes — it can now conflict with the configuration in other places that is defined differently. So these different concepts create friction in how we bring these worlds together.
So this was recognized first when the Knative project was started by Google as a serverless platform on top of Kubernetes, and there it was recognized that we also needed a build system that is much closer to Knative, closer to Kubernetes. The existing CI systems that can build container images are really good, but all these gaps make them difficult to use within Knative. So Knative Build was formed to focus on this problem. And as use cases were collected on what Knative Build needed to do, quite quickly it was evident that we were not just talking about building container images — we were talking about continuous integration. There are so many other activities directly related to a build that need to happen right before or after building an image. So that was the pivoting point: Knative Build was broken out of the Knative project and became what Tekton is today. It became a project that focuses really on this gap between CI and Kubernetes — a CI framework that sits at the heart of Kubernetes and builds on the concepts of Kubernetes. So you build your pipelines with the same concepts that you're familiar with: you build a pipeline, you have tasks or stages of sorts, and so on. But you really build them with the concepts of Kubernetes as well — you have pods in it, you have containers, you have Secrets, ConfigMaps — so for someone that is on Kubernetes, this seems a very natural step. The learning curve is extremely low, because you are really reusing everything from your existing skills, and even constructs that you already have on Kubernetes, and building pipelines from them. So that was really the problem Tekton was trying to solve: to create a native CI that is built for Kubernetes, is familiar to Kubernetes users, and inherits the operational model of Kubernetes as well. So it becomes absorbed into the Kubernetes platform.
So that's the background of the project — the not-so-brief version. Yeah, but that's the genesis. It's interesting — I didn't know it came out of the Knative project. It's such a fast-moving project that it feels like ages ago, even though this was maybe one and a half or two years ago, if I'm not mistaken. It is a very fast-moving project, and on OpenShift we pride ourselves on wanting to make Kubernetes simple for developers — our goal is really to be the number one developer platform on Kubernetes. So we monitor this space very closely, and it was quite evident for us that Tekton is really the right direction for continuous integration on Kubernetes. We were already investing in Knative for serverless workloads, and we started investing in Tekton as well. We have two of the really sharp engineers on the Tekton team with me on this call. We work a lot upstream: we address issues, bring customer cases that are very relevant within enterprise environments, and work through them within the Tekton upstream project — creating enhancements, implementing them, discussing them — and then bring them down as a supported product to our customers. All right. I have an easy question. You mentioned that there are several different toolings to do CI/CD. What about those customers that have a zillion lines of Jenkins code — Groovy and scripting and everything? What is the way to move forward if they want to move? Is it lift and shift, or is it re-engineering? Or is it the same kind of shift as moving from VMs to containers — now moving from Jenkins to cloud-native pipelines? Are there any ground rules for how that should be done?
I would say — and this is the typical answer you don't want to hear from me — that it all depends, right? It depends on what the customer has done. But I would say this is similar to any other type of change we have seen as waves of technology mature and evolve, and you are left with the decision: how do I move from my existing way of working to this new way that addresses some of the challenges that I have? At the same time, there's a lot of existing investment, so there's this trade-off of how to address those challenges without standing still and spending a year just refactoring stuff. This is the same conversation as microservices versus monolithic applications, or containers versus VMs, like you mentioned — it's the exact same type of change. What we see at a lot of our customers is that they usually draw the line at some point: any new effort that goes into CI, or building pipelines, or building a platform that provides CI as a service to internal teams, gets redirected to investing in Tekton, while maintaining and keeping the existing investment. Over time they analyze and see where it makes sense to perhaps move some of those existing efforts to Tekton. But we don't see often that people stop and move everything they already have on Jenkins or some other platform over to Tekton — it's a more gradual movement. But absolutely, it becomes a pivoting point where no new investment takes place, for example, in creating pipelines through Jenkins, for the customers that have started off with Tekton. Okay, makes sense. I actually have another question for the engineering team. Since you follow the upstream closely, and the upstream is driven of course by the customers and the users — what are the customers and users actually demanding from the project?
So, Tekton has become quite wide. It has the luxury of having quite a large number of vendors and individual contributors in the community, each of them bringing their own use cases — the use cases of their companies or their customers — to the community, so it really grows in a variety of different directions. But I thought I could get help from Savita and Nikhil, who focus on two of the sub-projects of Tekton; they can talk a little bit about which direction they see things growing in their areas. Would you like to go first, Nikhil — or ladies first, actually? Savita? Yeah, I think we both can mention some, so maybe I can go first, and then Nikhil can talk about other points as well. So, as I mentioned, I spend most of my time working on Triggers, which is one of the sub-projects of Tekton CD — and also the operator, but maybe Nikhil can address most of the operator topics. So, coming to Triggers. Initially, when the Tekton CD project started, it started with the Pipeline sub-project. Once pipelines existed, every PipelineRun and so on had to be started manually. So the project got a requirement — use cases like: how do we start these PipelineRuns or TaskRuns dynamically, based on some events or some mechanism? For that, an event-based mechanism was conceived, and the Triggers project was started. Initially it started with the alpha API — that's how Kubernetes APIs work: first alpha, then beta, then GA. Triggers is still in the alpha state. It first started by adding basic integration with GitHub, because most of us are very familiar with GitHub as a source code management tool. From there it extended to the other SCM tools: GitHub, Bitbucket, GitLab.
And after that, one flow of the event mechanism started working. Later, once the basic flow was happening, we got a lot of requirements — user inputs like: okay, we now want customization of this event mechanism. We don't want to use just the basic inputs provided by Triggers; we want some customization on top of that. So it kept evolving this way, and now we have customization steps. Also, Triggers now supports a Knative-based event listener, because until now, whenever Triggers created its objects, the pod kept running even when no one was sending events — so resources were being wasted unnecessarily. During that time we thought, okay, let's use the Knative-based mechanism as well, so now Triggers integrates with Knative too. So this is the way we keep adding features: based on user inputs, based on use cases, and based on a lot of input from the developers and companies who have started contributing. Right now Triggers is in the alpha state; maybe in another month or two we will be moving it to beta. So that's how we contribute and add new features to Triggers. I think — yeah, Nikhil, can you add a quick comment in between? Yeah, one second before you do. Let's not forget that while we are all very well versed in Tekton — we speak about these different concepts, like event listeners and triggers and so on — this is a 101 session; we are basically introducing the topic. Although the audience might know it already, what I wanted to do first is a quick recap of those main concepts, just to set the floor so that everyone watching can understand what we are talking about. And then, if it's okay with you, we can go into more in-depth information about those concepts. Would that work for you?
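To make the Triggers flow just described more concrete: the usual shape is an EventListener that receives a webhook (say, a GitHub push), a TriggerBinding that extracts fields from the event payload, and a TriggerTemplate that stamps out a PipelineRun from those fields. This is a minimal sketch against the alpha API; all names here are illustrative, and since Triggers was alpha at the time, field shapes may have changed:

```yaml
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: github-listener          # illustrative name
spec:
  serviceAccountName: pipeline
  triggers:
    - name: on-push
      bindings:
        - ref: github-push-binding      # extracts values from the payload
      template:
        ref: build-pipeline-template    # creates a PipelineRun from them
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: github-push-binding
spec:
  params:
    - name: git-url
      value: $(body.repository.clone_url)   # field from the GitHub push payload
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: build-pipeline-template
spec:
  params:
    - name: git-url
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: build-and-deploy-   # one run per event
      spec:
        pipelineRef:
          name: build-and-deploy          # assumed existing Pipeline
        params:
          - name: git-url
            value: $(tt.params.git-url)
```

The EventListener materializes as a pod behind a Service; exposing that Service (for example via an OpenShift Route) gives you the URL to configure as the webhook target in GitHub.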
All right, so let me share my screen and show you those different concepts. Meanwhile, I have a question — maybe the audience doesn't know this. Is it correct that when an API is in alpha, the API might change, but after beta the API doesn't change anymore? And then of course at GA it's solid. That's good to know when customers start using Tekton: if they use an API which is alpha, there might be changes, even breaking changes. Yep, exactly. We kept Triggers in the alpha state for around — I think more than six months. Yeah, sure. So don't try to build production CI on alpha features — that's the message there. All right, so very quickly, I'm going to introduce the concepts to set the floor. As I mentioned, what the community wanted with Tekton was basically to standardize around concepts that everyone would be free to implement how they want, but to make those things work using Kubernetes-native resources. So with Tekton, we basically extend Kubernetes with some new concepts. We use what we call the custom resource definition mechanism, which allows us to enrich Kubernetes with new things. At the very base layer we add three things: the notion of a pipeline, the notion of a task, and the notion of a step. A pipeline is a sequence of tasks that can run in sequence or in parallel. Each task will run in a pod, and each task will contain several steps that will run as containers in the same pod. All of those things, of course, are going to be defined the Kubernetes way, mainly using YAML resources. And if we look at an example of a pipeline, we see here that we have several tasks that will, for example, clone the source code, then build it, and then deploy it.
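The clone-build-deploy pipeline described above can be sketched as a Tekton Pipeline resource. This is a minimal, illustrative example — the task names assume catalog tasks like `git-clone` are available, and the workspace and parameter names are made up for the sketch:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy          # illustrative name
spec:
  params:
    - name: git-url
      type: string
  workspaces:
    - name: shared-workspace      # storage shared across the tasks' pods
  tasks:
    - name: fetch-repository
      taskRef:
        name: git-clone           # assumed catalog/cluster task
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: shared-workspace
    - name: build
      taskRef:
        name: maven               # assumed catalog task
      runAfter:
        - fetch-repository        # forces sequence instead of parallel
      workspaces:
        - name: source
          workspace: shared-workspace
    - name: deploy
      taskRef:
        name: openshift-client    # assumed task wrapping the oc CLI
      runAfter:
        - build
```

Tasks with no `runAfter` relationship between them are scheduled in parallel; `runAfter` is what expresses the sequential ordering mentioned above.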
We can have conditionals, we can have retries, we can have some execution logic built within the pipeline. A very important thing is that, although all of those things happen in different pods or different containers, we can share data between all of those steps or tasks. For instance, say you wanted to clone your code: this can be done by a task in a pod, but this pod will write the data to a shared workspace — we call these workspaces. It's going to be, for example, a persistent volume that another pod can then mount. And if you now want to build the binary of the application using something like Maven, you find the source code already there. We can also use things like subPaths, to say: I'm going to clone my code into this specific folder, I'm going to store my Maven artifacts or cache in this folder. So you can basically share data between the different pods that play a part in your pipeline. Next are tasks. Each task is going to run in a pod, meaning that it's going to use a container image. The great thing is that there are existing tasks you can find — that we ship, for example, with OpenShift, or that you can find on a marketplace called the Tekton Hub. But if you are missing specific binaries or tools that you need to run your CI step, then it's very easy to build them into a container image. So that's one of the key skills. Tero, you asked how one should move from traditional CI to Tekton: one of the key skills is to learn how to write Dockerfiles or create container images and embed whatever custom tools you need to perform your steps. If you need a specific CLI and there's no image that already provides it, you can build your own image, embed all the tools within that image, and then reference it within your steps as the image that will run the commands. Okay. And the final concept is the step.
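The workspace sharing described above is bound when you actually run the pipeline. A PipelineRun supplies the concrete storage — for example a per-run PVC via `volumeClaimTemplate` — that every task's pod mounts. A minimal sketch, with names and the repository URL purely illustrative:

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-and-deploy-run-1    # illustrative name
spec:
  pipelineRef:
    name: build-and-deploy        # assumed existing Pipeline
  params:
    - name: git-url
      value: https://github.com/example/app.git   # placeholder repo
  workspaces:
    - name: shared-workspace
      volumeClaimTemplate:        # a PVC is created for this run and
        spec:                     # mounted by each task's pod in turn
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Gi
```

Inside the Pipeline, a task's workspace binding can additionally set a `subPath` so that, as mentioned above, the clone goes to one folder and a Maven cache to another on the same volume.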
And basically, the step says: I'm going to use this specific image, and I'm going to run this specific command. For instance, if you want to build your Java application, you can do a Maven package; or if you want to install your dependencies, you can do a Maven install, et cetera. So basically, you have a task that has several steps, and each step can run different commands with different container images. Okay, so the final concept I spoke about is how to share data between the different items: basically it's a persistent volume that is shared between the different parts of your pipeline. For instance, you will have a first task that gets some data and writes it to the persistent volume, and then another task that references the same persistent volume — we call it a workspace in Tekton concepts — and it will be able to find the data there. So these are the building blocks that you need to know about. I also have something I built to talk about the triggers, but maybe let's save that for a bit later, when Savita and Nikhil show us the demo and expand it with this notion of: I make a commit to my code, and that triggers the execution of my pipeline. All right, so that was it for the concepts. I hope it sets the floor for understanding what we are manipulating in terms of resources, and how they get executed on Kubernetes. Should we go back now to Nikhil to answer the question about the upstream? You mean the direction in which the upstream is going? Yeah, correct — let's finalize that question. Sure, definitely. Thank you. I can answer that question in two aspects: one is the general philosophy, and the second is the direction in which the work is going. So the general philosophy is this.
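The task-and-step structure just described — one pod per task, one container per step, each step naming its own image and command — can be sketched as a Task. The names and images here are illustrative assumptions, not a shipped task:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: maven-build               # illustrative name
spec:
  workspaces:
    - name: source                # where a previous task cloned the code
  steps:
    - name: package               # each step runs as a container in the same pod
      image: maven:3.8-openjdk-11 # any image carrying the tools you need
      workingDir: $(workspaces.source.path)
      command: ["mvn"]
      args: ["package"]
    - name: list-artifacts        # a second step, different image if you like
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        ls $(workspaces.source.path)/target
```

Because the steps share one pod, they also share the mounted workspace, so the second step sees whatever the first one produced — which is exactly the data-sharing story described above.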
So, it was kind of assumed that Tekton is a tool used for outer-loop builds — that is, when you want to build and publish something in the enterprise. At first, we did not have the focus of whether Tekton could be used for a developer's inner workflow: I'm building software, I want to run some tests, and then just deploy in my local setup. But right now, discussions are going on in that aspect — whether Tekton should consider this developer's local workflow as well. One aspect is that we are also thinking about ways to make the CI/CD pipeline a part of the application itself, so we're thinking about those two in different terms. The second aspect is the general work that is happening. Initially, a lot of focus was on the core functionality: how do we write a task, how do we share tasks, and the different features needed to share data between tasks or pipelines — such things. But now there is a lot more focus on supporting features as well. For example, as Siamak mentioned, in Tekton all the workloads run as containers. So when you run a pipeline, the tasks come up and run as TaskRuns, which are essentially pods. All the logs — everything — stay inside the pod. But if you want to retrieve those logs later, you cannot keep those pods forever. So now there are projects upstream like Tekton Results: basically a method to upload or save your logs and results somewhere, so that you can delete the TaskRun pods. And then there is a lot more workflow-related work happening inside the operator, because initially the focus was on just installing the components — you just want to install Tekton Pipelines or just Triggers. But now people want methods with which they can automate upgrades, configuration and customization. A good example: we have the operator project, which can be built for two different platforms.
That is, when it is built for Kubernetes, it has a certain set of behaviors and it provides the upstream Tekton Dashboard. But when we build the operator for downstream, it does not supply the upstream dashboard; instead, it provides some ClusterTasks that come with OpenShift. So in that aspect, there's a lot of focus now on the supporting features and workflows, rather than on the core workflows. Okay, makes a lot of sense. As I assumed, it's a really fast-moving project. Speaking of that — thank you very much — there's a question about pipeline resources. Because you guys have been involved in the upstream engineering discussions on all those things, could you tell us why they have been deprecated? I know there were some issues with them, some limitations. They have been mostly replaced by this notion of workspaces, where instead of referencing pipeline resources, you can say the code is in this workspace and just use that. Can you tell us more about why they've been deprecated and what's the new way of doing this? I can give an overview. Initially, it was thought that the way we share workflows would be through pipeline resources: there were types of pipeline resources like git, image, storage, cluster, et cetera. Then there came a point where people started to request more and more types of pipeline resources — and the types of pipeline resources are baked into the core Tekton implementation. That is, each time we want to add a new pipeline resource type, we have to add it to the core implementation. So instead of adding more pipeline resources, we thought about duck-typed pipeline resources: we could provide an interface with which users or other developers could create custom pipeline resources without having to make changes in the core implementation. But that did not take off.
Instead, I think when Tekton started becoming popular, people were more interested in sharing workflows using tasks and pipelines. In simple words, a pipeline resource is just a collection of steps: if you use a pipeline resource, it will add a few steps just before or after your defined steps when your pipeline runs. For example, a pipeline resource of type git is just going to add the mechanism to clone the Git repository. But right now, it is clearer for people to see that as a task, like the git-clone task. The pipeline that you define is what you see running — whereas with pipeline resources, you see these additional steps running, which can be a little unclear. So, two reasons: one, it was difficult to support more types of pipeline resources; and second, sharing workflows using tasks and pipelines instead of pipeline resources makes things clearer. All right, thanks. Yeah — the fewer concepts we have to manipulate, the easier it's going to be to adopt Tekton. So thanks a lot. I do have one question, though. I know you guys are working on the operator and extending it to make it easier to do things by instantiating Kubernetes concepts. One of the things that I think is useful when you are doing CI or CD, for instance, is the ability to say: I'm going to pause execution for some time, and then when I get someone to approve this execution, it will trigger a new pipeline or something like that. So, are there plans to have this type of generic approval concept added to Kubernetes as a CRD or something like that? Or can you tell us a bit more about how we are going to implement this notion of an approval gate using Tekton pipelines? If it's something that is in the works — and if not, we can speak about the other topics.
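To illustrate the shift described above: instead of a `git` PipelineResource silently injecting clone steps, the clone is now an explicit task in the pipeline, writing into a workspace that downstream tasks read. A minimal sketch, with illustrative names and assuming the catalog `git-clone` task:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: explicit-clone-example    # illustrative name
spec:
  params:
    - name: repo-url
      type: string
  workspaces:
    - name: src
  tasks:
    - name: clone
      taskRef:
        name: git-clone           # the clone is now a visible task,
      params:                     # not hidden steps added by a resource
        - name: url
          value: $(params.repo-url)
      workspaces:
        - name: output            # git-clone writes the checkout here
          workspace: src
    - name: build
      taskRef:
        name: maven               # assumed catalog task
      runAfter:
        - clone
      workspaces:
        - name: source            # reads the same workspace
          workspace: src
```

What you declare is exactly what runs, which is the clarity argument made above for retiring PipelineResources.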
It's an open source project — you can implement it yourself. Yeah. I can speak a little bit about that particular topic. All right. So this is a very common request, right? And it really comes from — like I said, a majority of teams have at some point been using Jenkins, right? And in Jenkins, this is one of the patterns you use: you add a manual approval there, and the pipeline will wait for things happening outside the system. I'm not saying this exists only in Jenkins, but it immediately comes to mind from Jenkins. So this is definitely a very common request, and there are discussions and requests about it within Tekton as well. In the Tekton community, some issues are created as this type of larger topic question, and manual approval is one of them. So there is definitely a use case for it and interest in the community. There is also interest from a lot of customers I talk to, because, like I said, they are coming from some other CI system that supports this. So at some point this will appear in Tekton as well, I believe. The timeline is not known, because the discussions in the community need to consolidate and come to a shape that fits the Tekton model really well. The original thought was that these are really modeled as different pipelines: at any point, if you have to wait for a long time and then do something else — are they really a single pipeline? There are two different sets of activities that need to happen, but you're modeling them as a single pipeline and connecting them with a manual approval in between. That was the original thought. But we do also recognize the case that, even though they are separate pipelines, having them together with manual approvals makes it easier to correlate these activities with each other.
So it's much easier to have insight into the flow — the things that happen and the phases they go through — within the system rather than outside the system. Auditability. Yeah, exactly. I do expect that this will appear in Tekton at some point, but we don't know yet when. Okay, thanks. All right, so thank you very much, guys. So I believe, Nikhil, you had something to showcase around what we talked about? Would you like to go over a quick demo of Pipelines running on top of OpenShift? Because I believe we add things on top of core Tekton to make a very nice, productized version within OpenShift. Would you like to show that? For info, we have 20 minutes. We have prepared a brief demo just to give an overview of the different things that are possible with Tekton Pipelines slash OpenShift Pipelines. So let's set up the screen share and then we will take you through the demo. Yep — I'll quickly share my screen. I hope it's visible now. Yeah, it is. I shall add one point here: just like Siamak said, the engine behind Tekton is Kubernetes itself — there is no separate CI/CD engine underneath. Exactly. So basically, we can leverage our Kubernetes knowledge: we interact with Tekton the same way we interact with any other Kubernetes resource. At the same time, one of our goals is to make sure that people get the feel that they are interacting with a CI/CD system, not with Kubernetes. That is where the developer console comes in. What we try to provide is a CI/CD experience instead of, you know, having to deal with endless YAML files. Yes, you can always edit YAML files, but what we want to show you is how Tekton CD, which leverages Kubernetes features, can be provided as an integrated system where you can forget that it is running on Kubernetes and use it as any other CI/CD system. Yep. All right, thanks.
Yeah, so as we talked about, there is the operator and the different Tekton CD components. For the installation, as Jafar said, we release OpenShift Pipelines as a productized, stable build. We recently released the GA, which is 1.4.1. So, this is a 4.8 cluster, and I am installing Red Hat OpenShift Pipelines from the OperatorHub. You can see 1.4.1; this is the GA release we did last month. So with a few clicks I am able to install the OpenShift Pipelines operator on this cluster. You can see there are a lot of channels, but stable is the one we released recently. So I'll click on install, and it starts installing the operator with the OpenShift Pipelines components like pipelines, triggers, and cluster tasks. All those things will be installed when we install through this operator. You can see it's still in progress; it will take a few minutes to install. I hope it will come up soon. It always installs into the openshift-operators namespace. So meanwhile, Nikhil, you wanted to add something about operators, like why we started using an operator. Why are we not installing pipelines and triggers directly and separately? Why do we go through an operator, and what benefits do we get? In simple terms, we use operators to encode human operator logic into software. What that means is that Tekton Pipelines, Triggers, and so on are upstream projects which need a lot of human operator manipulation, a system admin's job, to get them installed, to make sure they are configured properly, and to make sure they upgrade properly without breaking your workloads. That sounds like a lot of documentation and a lot of issues.
So instead of that, what we are trying to do is gather all the best practices and recommended practices around the lifecycle of Tekton applications and capture them in software as the OpenShift Pipelines operator. Then we are trying to take this operator to different levels of maturity, so that initially it provides installation and upgrades, then we'll start supporting metrics, and eventually we can support backup and recovery and such things. Yep. Thanks, Nikhil, for the brief information about the operator. You can see the operator is installed successfully; it shows up under installed operators. All of the Tekton Pipelines / OpenShift Pipelines components, pipelines and triggers, will be installed in the project called openshift-pipelines. So now we are ready with pipelines on our 4.8 cluster. Let's create one basic pipeline to show how a pipeline is created, as part of our demo. Today we are going to show a front end and back end application being integrated and deployed through a pipeline, and we are also planning to show how we can send an event from GitHub so that a PipelineRun will be created based on that event. So let's make use of the basic Add flow, From Git. I will be creating the front end, which is the UI. I will give the Git URL of the UI, and it should automatically choose the runtime builder, which is Python; our UI application is written in Python. You can see here we can choose either Deployment or DeploymentConfig; I'll go with the default configuration. Also, here we get the option of whether we want to create a pipeline from a template or not, so I will be choosing 'Add pipeline', and then we have 'Create a route to the application'. That means that for this UI pipeline, a route URL will be created so that we can easily access it. One more thing I wanted to show here is the visualization.
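For reference, the operator install that is done here through the OperatorHub UI can also be expressed declaratively as an OLM Subscription. This is a minimal sketch under the defaults mentioned in the demo (stable channel, openshift-operators namespace); the exact package and catalog names should be checked against your cluster's OperatorHub catalog.

```yaml
# Subscribes the cluster to the Red Hat OpenShift Pipelines operator,
# tracking the "stable" channel shown in the demo.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator-rh
  namespace: openshift-operators    # cluster-wide operators are installed here
spec:
  channel: stable                   # the GA channel from the demo
  name: openshift-pipelines-operator-rh
  source: redhat-operators          # default Red Hat catalog source
  sourceNamespace: openshift-marketplace
```

Applying this with `oc apply -f` has the same effect as the few clicks in the console.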
So this pipeline, when it gets created, basically has these three tasks: first it fetches the repository, then builds it, and then finally deploys it. So now let's create the pipeline. It starts creating the pipeline, and we can see a good view of it in the Topology section. So did you have any Kubernetes resources in your Git repo as well, or just the application source code? So yeah, the YAMLs are there, but it doesn't take those YAMLs; it just takes the code. Right, so it generated the deployment for your application and generated those manifests for you. Even if you didn't have them in the repo, that would have been fine. So it deploys the application, generates the Deployment, Route and Service, and also adds a Tekton-based pipeline for you that builds the application. Yes, exactly. Oh, sorry. I just created my pipeline in the openshift-pipelines namespace itself. So we can see a lot of things here, but that's okay. Here you can see a pipeline for our UI application got created. But we get an error, ImagePullBackOff, because in the operator we have a restriction not to create any pipeline examples or resources in the openshift-pipelines namespace, where the actual components are running. This is just a restriction we have added as part of our operator code. Nikhil, do you want to add something about why we added this restriction? Meanwhile, I'll create another namespace and create the pipeline over there. Sure, sure. As we're running short on time, if you have another cluster accessible, you can switch to that and show the final executed PipelineRun. Yeah, yeah, sure. Okay, so that's a good question, about why the openshift-pipelines namespace is different from other namespaces. Like I was mentioning before, the operator embeds a lot of operator knowledge.
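The three-task pipeline that the Add flow generates can be sketched roughly like this. This is an illustrative outline, not the exact YAML the console produces: the pipeline, parameter, and workspace names are assumptions, while `git-clone`, `buildah`, and `openshift-client` are ClusterTasks that OpenShift Pipelines ships.

```yaml
# Illustrative three-task Pipeline: fetch the source, build and push an
# image, then roll out the deployment. Names here are placeholders.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: ui-pipeline
spec:
  params:
    - name: GIT_REPO
      type: string
    - name: IMAGE
      type: string
  workspaces:
    - name: workspace
  tasks:
    - name: fetch-repository        # clone the application source
      taskRef:
        name: git-clone
        kind: ClusterTask
      params:
        - name: url
          value: $(params.GIT_REPO)
      workspaces:
        - name: output
          workspace: workspace
    - name: build                   # build the image and push it to the registry
      runAfter: [fetch-repository]
      taskRef:
        name: buildah
        kind: ClusterTask
      params:
        - name: IMAGE
          value: $(params.IMAGE)
      workspaces:
        - name: source
          workspace: workspace
    - name: deploy                  # roll out the freshly built image
      runAfter: [build]
      taskRef:
        name: openshift-client
        kind: ClusterTask
      params:
        - name: SCRIPT
          value: oc rollout status deploy/ui
```

The `runAfter` fields are what produce the left-to-right chain you see in the console's pipeline visualization.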
So one piece of that operator knowledge is that when you want to run a pipeline, or the Tekton application itself, it needs certain RBAC permissions and privileges so that it can run on OpenShift. If you take the upstream release and set it up yourself, all of this has to be done manually. What the operator does is create a service account called pipeline in all the namespaces, with sufficient privileges so that it can be used to run your CI/CD workloads. But the operator doesn't create this pipeline service account in namespaces with an openshift- or kube- prefix. openshift-pipelines is a namespace the operator will not touch, so it doesn't have that default service account which could build the image and then push the image. Now you can see the reason for not allowing pipeline resources to be created in openshift-pipelines. So for the time being, I have created a new project called demo and I have created the pipeline over there. This PipelineRun is in the running state. If you want to see which step is happening right now: fetch-repository is completed, which means the cloning of the code is done. Now it's doing the build; you can see the four steps there: generate, build, push, and digest-to-results. It creates the Dockerfile, builds it, pushes it to the internal registry, and finally we get the SHA, and that same SHA will be used by the deploy task to deploy the image. Meanwhile, if we want to see what it is doing, we can go to the Logs section and see exactly which step is running and where the progress is happening through these different steps. Meanwhile, until that finishes, I can show a few more things here. If we go back to Pipelines, we have all these options: once we create through the Add flow, we get a Pipeline and a PipelineRun running automatically.
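The per-namespace pipeline service account Nikhil describes is what runs the workload; a manually written PipelineRun can reference it explicitly. A rough sketch, assuming a pipeline named ui-pipeline exists in the demo project (both names are placeholders for this illustration):

```yaml
# PipelineRun in the "demo" project, explicitly using the "pipeline"
# service account the operator provisions in user namespaces
# (it is deliberately absent in openshift-*/kube-* namespaces).
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: ui-pipeline-run-
  namespace: demo
spec:
  pipelineRef:
    name: ui-pipeline
  serviceAccountName: pipeline
  workspaces:
    - name: workspace
      volumeClaimTemplate:        # scratch volume shared between tasks
        spec:
          accessModes: [ReadWriteOnce]
          resources:
            requests:
              storage: 1Gi
```

Creating the same PipelineRun in openshift-pipelines fails the way the demo shows, because no pipeline service account with image push privileges exists there.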
But say later we make some edits to the pipeline and want to run it again. In that case, we can click on Start, and we need to do that manual step to start the PipelineRun again. Instead of that, in order to avoid that manual intervention, we can add a trigger over here. What this trigger basically does is watch for events and trigger a PipelineRun for us. Maybe, as Jafar said, you can stay tuned for the next episode for more information about triggers and a deep dive into the trigger concepts and how the dots get connected from GitHub to the PipelineRun creation. For now I'll just show the workflow, not the concepts. You can see here we have a list of supported providers: Bitbucket, GitHub, GitLab. For the time being I am selecting the GitHub pull request review comment, so that I can send a review comment on an existing pull request. For that I am choosing the GitHub provider type for the UI, and you can see everything else keeps the default values. Once I add this one... yeah, meanwhile you can see the PipelineRun has succeeded now. Now let's go to the Administrator view; in the Pipelines section, since I have just added triggers, I can see an EventListener, a TriggerTemplate, and a ClusterTriggerBinding. The definitions and importance of these we can look at later. Now what I'll do is get the URL for the EventListener so that I can configure it in my webhook. This is the URL; I can go to the Networking section and the Routes and get it there. If I hit this URL directly, I'll straight away get an error saying that I have not sent a body in the expected format, because the pull request review comment trigger is expecting some content in the body. That's why, if you hit the URL directly, we get this error.
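The EventListener, TriggerTemplate, and binding objects that appear in the Administrator view fit together roughly like this. This is a minimal sketch, not the YAML the console generates; all resource names and the extracted payload field are assumptions for illustration.

```yaml
# Trigger plumbing in miniature: a TriggerBinding extracts fields from the
# webhook payload, a TriggerTemplate stamps out a PipelineRun from them,
# and the EventListener ties the two together behind a Service/Route.
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerBinding
metadata:
  name: ui-binding
spec:
  params:
    - name: git-repo-url
      value: $(body.repository.clone_url)   # pulled from the GitHub payload
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: TriggerTemplate
metadata:
  name: ui-template
spec:
  params:
    - name: git-repo-url
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: ui-pipeline-run-
      spec:
        pipelineRef:
          name: ui-pipeline               # placeholder pipeline name
        params:
          - name: GIT_REPO
            value: $(tt.params.git-repo-url)
---
apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: ui-listener
spec:
  serviceAccountName: pipeline
  triggers:
    - bindings:
        - ref: ui-binding
      template:
        ref: ui-template
```

Exposing the EventListener's service as a Route gives the URL that gets pasted into the GitHub webhook settings in the next step.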
Now, this is the repo, which is my own fork, so I have access to the settings. What I'll do is go to the Webhooks section and quickly add the webhook. Here I'll just replace the existing default values: I have pasted the webhook URL here, and the trigger expects the content type to be application/json. By default the push event is selected, but I don't want the push event, because I am interested in the pull request review comment event. So I choose that event here and add the webhook. Once the webhook is added successfully, you can see it here; no events have been triggered so far. So what I did for the time being is create one pull request and keep it ready. Meanwhile, I'll just watch the PipelineRuns here. If I go to the Pipelines section and PipelineRuns, you can see there is only one PipelineRun running. So now what I will do is add a comment over here. Let me go to 'Files changed' and add just a simple comment. Once I add this comment, the event will be sent to the EventListener, and then the EventListener will do its work... yeah, I think it's taking time. Okay, so is GitHub even working? Maybe try to refresh the page. Yeah, sure. So I'll just quickly refresh it. Let me go back to 'Files changed', comment, a single comment. Okay. So I think there is some issue. So, yeah, try to reply under the first conversation. Okay, let me try that way. So GitHub is playing with us today. All right, let me try to delete something, even. Okay. Wow. Nothing is working with the comment. Okay. So, for the time being: once we add this single comment, what should happen is that the event comes here and triggers the PipelineRun. That is what I wanted to show. It's fine. It's fine. We know it's not on our side that it's not getting triggered, so no worries. I can testify that it works.
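The earlier error from hitting the route directly makes sense once you see what GitHub actually POSTs for a pull request review comment: the bindings extract fields from this body, so an empty request has nothing to parse. A heavily trimmed sketch of the payload shape (the real event carries many more fields, and the values here are placeholders):

```json
{
  "action": "created",
  "comment": {
    "body": "a single comment"
  },
  "pull_request": {
    "head": { "ref": "feature-branch" }
  },
  "repository": {
    "clone_url": "https://github.com/your-fork/your-repo.git"
  }
}
```

A binding expression like `$(body.repository.clone_url)` resolves against exactly this structure, which is also why the webhook content type has to be application/json.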
So thank you very much, Savita and everyone, because we are getting close to the end of the show. What we will do for the next show is actually talk about the magic that happens to handle that event. So here, the pipeline with everything you mentioned, the EventListener, the TriggerTemplates, the TriggerBindings, and explain how all of that works together. So that's going to be the second topic. And as we go along with those sessions, we are going to extend that pipeline with more complex topics. So thank you very much, everyone, for your time. Thanks, Siamak, Savita, Nikhil, Tero. It was great to finally be able to start this show about Tekton pipelines and OpenShift Pipelines, and I hope to see you soon in the next sessions. All right, I'm going to stop the stream, and thank you everyone who has attended. Nice day, everyone. Thanks. Have a nice day. Bye.