Good morning, good afternoon, and good evening, and welcome to another episode of Working in OpenShift. This is so awesome. We have an open culture where everybody has something to talk about on top of the Red Hat portfolio.

Good morning, good afternoon, and good evening. You have lurched into another episode of the Level Up Hour. So, thank you for joining us. Please make sure to like, subscribe and share; let everybody know that we're out here on the air. I'm Randy Russell, Director of Certification at Red Hat, so shout out to all those Red Hat certified professionals out there. And I'm joined by my co-hosts, Jafar Charibi and Scott McBrien. Greetings, gentlemen. Today we have a very interesting episode in which we're going to talk about pipelines as code. And to help us untangle that is Chmouel Boudjnah, who is the lead architect and developer of, guess what, Pipelines as Code. So I think this is going to be a very interesting episode.

First of all, let's talk a little bit about pipelines as code. Jafar, I'm going to back it up a whole lot and say: I think most of us at this point are familiar with the concept of a CI/CD pipeline, continuous integration and continuous deployment, where you have your application, the new feature, the bug fix, whatever, and it is whisked seamlessly and effortlessly through test and stage all the way to its glorious end in production, right? So help me understand what we're getting at when we talk about pipelines as code.

Okay, sure. Again, I want to thank Chmouel for joining us here today; he will be explaining in more detail the new features we are offering on OpenShift. But to give some background: we all know that traditional CI/CD tools have been evolving ever since we started using those kinds of tools to provide scripted automation for your pipeline. Basically, you write what we call domain-specific languages, some specific scripting language like Groovy, or whatever language that particular CI/CD tool used. For example, if we speak about OpenShift and Jenkins, we provide support for Jenkinsfiles, which is basically a Groovy script that orchestrates your CI and CD steps throughout the different environments.

But the Kubernetes ecosystem really boomed, I would say exploded, and it started to become the de facto infrastructure to run containerized applications. So I'm saying Kubernetes is the infra; some people will throw rocks or tomatoes at me, but yes, we can consider it the new, modern infrastructure to run those applications. And something emerged about two years ago, which is: could we think about Kubernetes also as a CI/CD tool? Instead of relying on a dedicated CI tool, like Jenkins, GitLab, Travis or whatever (I don't want to call out those tools, because they are great and good at what they do), the goal was: can we have something that relies solely on Kubernetes as the CI/CD engine? Can we extend Kubernetes to understand new concepts of CI/CD? The introduction of what we call custom resource definitions in the Kubernetes API allowed us to create concepts that did not exist in Kubernetes before, in a simple way. And the key thing is that it is a simple way. So basically, you can now tell Kubernetes: here's a new concept, it's called a pipeline.
Here's a new concept, it's called a step. Here's a new concept, it's called a task. And so on. Basically, Kubernetes now understands how to trigger and run pipelines natively by orchestrating containerized actions or tasks, and thus we don't need those dedicated CI/CD tools anymore. One of the drawbacks, well, not drawbacks, but one of the things you had to take care of with those tools was their administration. Since you are switching to Kubernetes, that means less administration. That's the first benefit. The second is portability, because now you can have your CI running on any Kubernetes cluster that has those concepts installed.

It's based on an upstream project called Tekton. The great thing with Tekton is that all of those great CI/CD vendors and players came together to come up with some sort of standard: a standard way of describing, as code, what a pipeline is, which is what we call pipeline as code. How can we write YAML files that translate into pipeline execution? That was the whole idea behind Tekton: to provide a Kubernetes-native CI/CD ecosystem.

All right. So now that it's Kubernetes-native, you get the benefit of not having some of the integration points that you might have otherwise. You are consolidating down to a smaller number of technologies or products, and potentially a smaller number of vendors, which is also sometimes a benefit. And then it is actually something that's native. There's something ironic in this, in that I think a lot of the world has come to Kubernetes and to OpenShift with the desire to build out their CI/CD pipelines. It can be effective for that, but it has not actually been a native capability from the outset. So it's really interesting to me that the purpose for which so many organizations come to OpenShift is now going to be supported natively. Would that be a fair statement to make?

Yes, definitely. And the interesting thing is that Kubernetes is of course known for its scalability. You can now orchestrate thousands of tasks and pipelines because Kubernetes gives you that scalability. You could do that before with the traditional CI tools by using what we call agents, but you had to deploy the agents, administer them, upgrade them, and handle all of those kinds of topics. Also, say you wanted to integrate with a new set of tools. Either there was an existing plugin that you could take and use, but you were limited to whatever the plugin offered, or you were a massive Java guru and you could create your own plugin. I have some colleagues who did that; they are crazy guys, but very talented guys, of course. Now, with the Kubernetes and Tekton way of doing it, you create container images that contain all the tools you need, and then in the YAML steps you reference those images and say: this is the command to execute using this specific image. So it becomes much easier to extend the capabilities of your CI/CD ecosystem. Basically, as long as you can put it in a container, it's going to be fine with your pipeline.

It is. It is ready to go. So we actually already have a question in the chat, and Chmouel, I think a perfect introduction to your participation here would be to put you on the spot.
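To make those Kubernetes-native concepts concrete, here is a minimal sketch of what such definitions can look like, assuming the Tekton v1beta1 API; the names and the image are illustrative, not from the episode:

```yaml
# A Task is one of those new "concepts" taught to Kubernetes via a CRD:
# an ordered list of containerized steps. Any image carrying your tools works.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: say-hello
spec:
  steps:
    - name: hello
      image: registry.access.redhat.com/ubi8/ubi-minimal  # illustrative image choice
      script: |
        echo "Hello from a containerized pipeline step"
---
# A Pipeline is another new concept: it orchestrates Tasks.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: hello-pipeline
spec:
  tasks:
    - name: greet
      taskRef:
        name: say-hello
```

Applying manifests like these is all it takes for the cluster to "know" what a pipeline is; actually running one is then a matter of creating a PipelineRun resource.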
So the question that came in is: okay, how do we manage pipelines as code when we have hundreds of microservices? And in brief, the answer is?

The brief answer is that you're doing integration testing. But first, maybe we need to explain what pipelines as code is. Jafar has given a great introduction to Tekton Pipelines and how you specify pipelines as YAML and define your testing and the different steps. Pipelines as code itself is this: you have a repository with some code inside it, microservices for example, as the question mentions, and as part of the development process, when you're making a change to your code, you want your pipelines to live with it. You want changes to your code to be reflected in your pipeline. Historically, a lot of CI systems do it differently: you install your pipeline up front, and whenever there is a code change, it tests it; if you need to change the pipeline, you go to the server and change it there. With pipelines as code, the basic idea is to have your pipeline living inside your code. So whenever you are, for example, adding a new component that needs a different kind of testing, different steps inside your pipeline, the PR that you send picks up that pipeline change, and it takes effect together with the change to your code.

So the tricky part of the question is: how do you do that at that scale? As I said, the repository is tied very closely to the changes you have inside the pipeline; the pipeline reflects the code. How do you do that with multiple microservices? That usually goes beyond pipelines as code, into how you're going to test your microservices. There are different techniques for that, like integration testing, and when you have multiple microservices you usually use contract testing: you have contracts between the different components, the microservices, and you make sure they work with each other. So in your pipeline-as-code pipeline, you're going to have integration testing of those microservices together, and every time you open a PR against your code, it runs those integration tests, making sure your change doesn't break those contracts.

Yeah, thanks a lot, Chmouel. That's some great insight. In fact, this question alone could be discussed for the whole afternoon, because it's actually a very complex topic. I did a webinar on that specific topic, CI/CD for microservices, a few months back; we can try to find the link and share it afterwards. There are a lot of things you need to address, both from an administration standpoint and from a developer or DevOps standpoint. One of the key aspects, because you are managing hundreds of services, is that these new types of pipelines allow you some genericity, something that can be reused across similar applications. So for instance, say your application landscape is composed of 30% Java apps and 50% Node.js apps or whatever; I'm throwing out random numbers here.
You can try to factor the Java apps into a single shared pipeline that allows you to just point at a different Java repo, build the application with the exact same pipeline, and then deploy that microservice somewhere. What you really need to avoid at all costs, if you have 100 microservices, is creating 100 different pipelines. That doesn't scale; it's not something you will be able to manage. So the key here is to factor your pipelines and have something generic that can be dynamically adapted to your microservice. One pipeline definition can serve 30 microservices, for example. That's one of the key aspects. And of course there is what Chmouel mentioned: integration testing, contract-based testing, and using things like mocks when the microservices that you rely on are not yet available in the testing process. So yes, there are many, many techniques we could discuss. And Chmouel, I think you wanted to say something?

I just wanted to add to your answer that, as you say, there are so many things we could talk about for microservices, but there is also how you're going to structure your code. Are you going to have multiple repositories? Are you going to have a mono repo? Are you going to have multiple repositories bunched together? The complexity is handled in different ways. A lot of people go to a mono-repo structure to solve those kinds of problems, and if you start to have 100 microservices, you'll need to think about how you're going to move to a mono-repo structure, because the complexity can be really great if you don't.

Having 100 different examples of pipelines as code is basically, right, what you want to not end up with. Using the earlier example, say it's 60/40 Java to Node.js; what you're hoping is that in the ideal situation you have two approaches to pipelines as code, one for Node.js and one for Java. Maybe you have to subdivide it a bit more, but again, you don't want to have 100 approaches. And that might be the real challenge, in that in theory it would be possible to do that, right?

Yeah. One of the other key aspects of pipelines as code, or Tekton, is the reusability of tasks. You can define shared tasks that users can reference, so you don't have to reinvent the wheel for pushing an image to a registry, or for building code, or for doing integration testing, et cetera. It's all there, ready on the shelf. You just reference it and say: please use this task in my pipeline. That's how you get genericity, and that's also how you lower the administration burden when you need to make something evolve. If everything is spread out across 100 pipelines, then whenever there's a change you have to manually commit it to each and every one of those 100 pipelines. If you take the shared approach, shared tasks or whatever you want to call it, you can factor the change into a single place and it's reflected in the 100 pipelines that reference it. So there are many things you can use to, I would say, industrialize the way you build your pipelines. And actually I would be happy to have another episode around this.
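As a rough sketch of that "one generic definition, many microservices" idea, here is what a parameterized pipeline built from shared tasks could look like. This assumes the git-clone and buildah tasks are available on the cluster (for example from the Tekton catalog); every name below is illustrative:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: java-app-pipeline            # one definition reused by many Java microservices
spec:
  params:
    - name: git-url                  # each microservice only supplies its repo and target image
      type: string
    - name: image
      type: string
  workspaces:
    - name: source                   # shared workspace the tasks clone into and build from
  tasks:
    - name: fetch-repo
      taskRef:
        name: git-clone              # shared/catalog task, not reinvented per service
      params:
        - name: url
          value: $(params.git-url)
      workspaces:
        - name: output
          workspace: source
    - name: build-and-push
      runAfter: ["fetch-repo"]
      taskRef:
        name: buildah                # another shared task
      params:
        - name: IMAGE
          value: $(params.image)
      workspaces:
        - name: source
          workspace: source
```

A PipelineRun for service A and another for service B then differ only in the two parameter values, which is what keeps 100 microservices from turning into 100 pipelines.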
One of the other things we can use as leverage here is GitOps, which is also part of the picture. It can simplify a lot of those things, and it can also address what Chmouel mentioned, this notion of mono-repo versus multiple repos where every microservice has its own code. If you are using GitOps, you then also have to think about where you're going to store your GitOps resources and so on. So organization and standardization are key topics that need to drive this, and of course it's not going to come from just one person deciding something. It needs to be taken as a DevOps approach where everyone thinks about the end goals and the best way to standardize the code, the repos, the pipelines, the testing approach, et cetera.

I think we have spoken at length about that and covered the microservices. Let's back up a little bit. For the benefit of Scott and myself here, what needs to happen for an OpenShift administrator to make this capability manifest itself in an OpenShift environment? Or is this something that the developers simply start doing and it works as though by magic?

Of course, the end goal is to make it appear magical for the developers and make their life easier, and I guess Chmouel can talk in more detail about that end goal. But let's dissect it into several pieces. I will speak about OpenShift Pipelines, and Chmouel can speak about pipelines as code; I will just rephrase what he said to make sure everyone is on the same page about what it does. As an admin, the first thing is that you have to install the OpenShift Pipelines operator. OpenShift Pipelines is our downstream, productized, supported offering included in OpenShift, based on the upstream Tekton project we spoke about. It allows you to define pipelines as YAML, as Kubernetes-native resources. So the first thing you need to do is install the OpenShift Pipelines operator. Luckily, we made that very easy for our OpenShift users: as an admin, you can either go through OperatorHub and install it, or install it through the command line. I can show you that once we get to the demo section.

Then some things need to happen for pipelines as code. Just to make sure, I'm going to rephrase what Chmouel said. Once we have Pipelines installed, what we want to allow our users to do is ship the YAML of their pipeline with their application, without worrying about creating the pipelines in OpenShift beforehand. Because if you are using the traditional way, even if you commit and push your changes, nothing is going to happen unless you have a webhook somewhere that triggers a pipeline you have already configured before. It takes several steps before that can happen. So the traditional way of doing it is: I create my pipeline in OpenShift, using YAML or using the UI, because we have a UI builder where you can basically draw your CI/CD pipeline. Then I create a webhook that my code repository will call whenever certain events happen, like a git push or a pull request or something like that. And all of that needs administration; you need to configure all of those things. So now, switching to the pipelines-as-code way of doing it: Chmouel, if you would please explain how it works and what needs to be done, that would be good.
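For reference, the command-line route for installing the operator that Jafar mentions boils down to creating an OLM Subscription, roughly like the sketch below. The channel and catalog names here are assumptions and can vary by OpenShift version, so check OperatorHub before applying:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  name: openshift-pipelines-operator-rh   # the OpenShift Pipelines operator package
  channel: latest                          # assumption: verify the current channel for your version
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```

Applying it with oc apply has the same effect as clicking through OperatorHub in the console.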
Yes. So, as you explained, pipelines as code plugs into OpenShift Pipelines. It's a feature on top of OpenShift Pipelines that you can add so you can do your CI directly from your GitHub, or from another VCS provider hosting your source code. Currently, pipelines as code is a dev preview feature, so it's not yet integrated into the OpenShift Pipelines operator, but we are working toward that. The way it works today is that it's available inside the openshift-pipelines GitHub organization, where you have instructions on how to install it.

There are two parts to the install. The easy part is applying YAML files to the cluster to get the capability. The second part is telling your source code provider, say GitHub, that all the events happening inside your repo should go back to the OpenShift cluster running Pipelines. To do that, the main way is to create a GitHub App. That GitHub App is central to pipelines as code; it's how you manage all the interaction between pipelines as code and your source code. What I mean is: whenever a developer sends a PR, pipelines as code sees it, and on the cluster it sees that there is a .tekton directory inside your source code repo with a pipeline inside it. It checks whether the cluster wants to know about it, whether it is able to run it, and whether the user is allowed to do that. Then it runs the pipeline as defined inside your source code, at that point in time of your PR, and it reports back with a nice summary of everything that failed or succeeded, with links to the logs and to the console. Those are the basics.

There are multiple features of pipelines as code around that, to make it easier to develop a pipeline-as-code pipeline, which sits on top of OpenShift Pipelines. For example, we have what we call ChatOps capabilities. So when, for example, you have an issue with your infrastructure and the pipeline is failing for that reason, you can just comment /retest inside the PR itself and it will restart the process. You can do different things for allowing people, too. By default, we try to make sure that we don't allow just anyone to use your infrastructure, so not everybody can open a PR and have it run, but someone who is allowed can just comment /ok-to-test and that automatically allows it. There are other features as well to make it easy.
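One of those YAML pieces on the cluster side is the Repository custom resource that Chmouel comes back to later in the Q&A; it tells pipelines as code which namespace should handle the events for a given repo. A minimal, illustrative sketch (the exact API version may differ between dev-preview releases):

```yaml
apiVersion: pipelinesascode.tekton.dev/v1alpha1
kind: Repository
metadata:
  name: my-app
  namespace: my-app-ci                     # events from the repo below are handled in this namespace
spec:
  url: https://github.com/my-org/my-app    # illustrative repository URL
```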
Something that is not possible with vanilla OpenShift Pipelines is to use a task that is not a ClusterTask or a task already installed in your namespace. With pipelines as code, you are able to say: I want to pick up that remote task from somewhere else, or from the Hub, which is a catalog of all the tasks contributed by the community, and automatically use it inside your pipeline without having to pre-install it or reference it from somewhere else. That relates to the earlier question we were discussing: it also makes it even easier to share your different tasks, because you can now just use a simple URL and you don't have to pre-install anything. So those are some of the features you get with pipelines as code, and I think with the demo it's going to be clearer what the flow looks like, how you can open a PR, iterate on it, and then make sure it's green or not.

Yeah, sure. Thank you as well for the transition. Should we maybe go to the demo? It's going to be more visual, and afterwards we can talk about what happens in the background. A live demo? Yeah, exactly. What can go wrong? All right, so the first step is that I need to succeed in sharing my screen; that's the first step of a successful demo. Can you guys see the screen correctly? Yes, we can. Yeah, it's good. Let me try something real quick here.

So I have an OpenShift cluster here where I have a Node.js application; the goal is to keep it very simple. Let's have a look at this app. Our OpenShift starter app says "version 2.32, pipelines as code". This application runs in a namespace where, if we look at Pipelines, I have no pipelines defined. Traditionally, if I wanted CI/CD for this application in this project, or in another project, the first prerequisite would be to have a pipeline defined in the namespace, and then I would execute the pipeline by creating what we call pipeline runs. Referring back to what I said about having something generic: the pipeline definition is the generic part, and the pipeline run injects the data, saying here's the Git repo you should use as input, here's the output image I want you to generate, and so on.

Okay, we have our app, we see it's version 2.32 here, and let's have a look at our repo. To speed things up for the demo, I have already created a pull request, and the flow we're going to look at is this: I have this specific pull request that is open, and I can use it as a conversation with all the developers. Whenever I commit a new change to my application, I want some CI/CD process to be triggered automatically so we can iterate, and once we are all satisfied we can ultimately merge the pull request into the main branch. So I have a feature branch, and afterwards I want to merge my code changes into the master branch, or sorry, we should say main branch, but I haven't updated the terminology in this repo yet.
So, okay, let's make some very quick changes to our code. I'm opening up my IDE here, and you can see I have version 2.33, and I'm going to say it's the Level Up Hour change. Now I'm going to commit my changes here, call it "changes", I like how it sounds, and then I push my changes and wait for something to happen magically. As a developer, this is all I did. So let's watch what happens here as the commits get into the repo. There are some magical things happening in the background that we are going to talk about, and now you see in the GitHub interface I have an indication that pipelines as code has been triggered and that it's running a CI/CD pipeline. The interesting thing is that directly from OpenShift I can see the status, I can see that it's running, et cetera, and I can also look at the details directly from here. When it's finished, I'm going to have the summary. If I wanted to check the logs directly from my terminal, I could just run this command; I'm not logged in, so I'd need to log in there first. But let's go back to the UI and have a look at the pipeline.

Here, from the developer perspective, I can see that it's running, it was triggered recently, it's doing the build steps, et cetera. And just to make it easier for me to consult, I have added this notion of pipeline runs to the developer perspective. A very quick tip for people watching: say you are using a concept, image streams for example, that doesn't exist by default in the navigation bar on the left side. You can just say "add to navigation", and then in the developer perspective you have a new item that shows you the things you need. That's what I did for the pipeline runs, and I can see now that the pipeline is being executed here. For the sake of time, let's go back to some of the previously run instances. What happens is that once the pipeline has succeeded, pipelines as code updates GitHub with the results, the build times for each step, et cetera, and I have a direct link from here to the task logs, if they are still there and haven't been erased. So directly from GitHub I see the status and I get straight to the logs. Very nice feature, I love it, Chmouel, very good job, guys. It's much easier for developers to trigger the pipelines: I don't have to create my webhook in the repo and select the events and so on. So thank you for making our life, if I'm impersonating the developer, our life easier.

Other things exist in this extension, so let's go back to how it works. Within the repository of my application, I have some files that have been defined. I have the pipeline that you can see here, and it basically describes the steps that are going to be executed: the first task fetches the repo, the second task builds the application and stores the container image, and the last task deploys the application somewhere. Very simple steps in my pipeline. Now, whenever I trigger an event, in our case we were speaking about a pull request, right? So the pull request, sorry, it's a different one, I believe it's this one. The pipeline run injects the data that I need for the pipeline to run: where the repo is, which branch I'm going to pull the code from, what the output is, et cetera. Here I've done some of it statically, but what's very interesting here is: on what type of events am I going to be triggered?
Here in my code, in my application repo, I have a file that says: whenever you intercept a pull request for this specific repo (we'll see how that is configured), please trigger this CI/CD pipeline run, and the pull request needs to target this specific branch. You can also trigger on different events, like a push: if I'm doing a git push, then trigger the pipeline; if I'm opening a pull request against a QA branch, then trigger a subset of the pipeline; if I'm deploying to production, maybe do something more elaborate, et cetera. So basically you are defining the "when", meaning what type of events you react to, and the "what", meaning what happens in the pipeline. That's the magic behind it. There is some structure to it; we spoke about this notion of structuring your repo if you want this kind of magic, and what we decided is that you ship a .tekton folder inside your repo, and that is what gets triggered. Now, before all of this magic can happen from the developer perspective, some other things need to be done in the background, and that's what we are going to try to cover in the remaining time.

Just before that, some very nice features that I like, that Chmouel and the team built. We see that the pipeline has finished, everything looks fine, it's all green, all good. You see that it was run just now; we are not doing smoke and mirrors here, it's live. Now say I have published something else, or whatever, and I want to retest a specific PR. I just submit the /retest command, pipelines as code intercepts that, and it triggers the pipeline again, even if I didn't push a new commit. Very nice feature, I love it. Again, Chmouel spoke about some more advanced workflow features where you can approve PRs, the /ok-to-test command I think, or something like that. So yeah, I love it. It's advanced, and it's not something we provided with the previous tooling. We didn't have that two-way integration where, from OpenShift, you do some stuff in GitHub, for instance, and then get your results back in OpenShift itself. It's a two-way integration and I just love it.

So if we cut for a moment, I love the demo, but we did have a couple of questions come in that I thought we might want to turn our attention to briefly. The first question was: is it possible to deploy resources to multiple namespaces, or are pipelines limited to deploying resources in the namespace in which the pipeline is deployed? Great question.

No, I don't mind either way. Something we didn't talk about, because it's a bit low level, is how that works. To say that your repo is going to be handled by pipelines as code on that cluster, and in which namespace, we have what we call the Repository CRD, a custom resource called Repository. It says: I want to handle that repo, and whatever events come in to OpenShift Pipelines from that repo are going to run in that namespace. So when you start, it's only going to run in that namespace. But if you want to use different namespaces, you can have, as part of your pipeline-as-code pipeline, a deployment step or something else that deploys to another namespace.
But you only allow it to do that if you have rules in place saying that this user is allowed to deploy to that other namespace, because obviously you don't want to let it do everything or have access to everything on the cluster.

Yeah, that's a mix of things. You are going to use dedicated service accounts, for example, to do subsets of your pipelines. For instance, in a previous demo I was doing the build in a different namespace and then deploying in this namespace, but just for the sake of simplicity, and to avoid switching back and forth, I put everything in the same namespace here. What you have to do is give that service account the correct role to be able to do whatever it needs to do in that namespace. For example, if you want to build and push the image into a specific namespace, you have to give it the edit role or something like that, or if you want to deploy, you need to give it the specific rights using OpenShift role-based access control.

So I think the answer I'm discerning is that it is yes, but it's going to require a bit of user and access control, and you're going to want that, because to the earlier point, you don't want it to be the case that something can simply be leveraged, and the next thing you know there are things coming into your pipeline that you never expected to have coming into your pipeline. Especially with events coming from the internet, you really don't want to let everything happen. Yeah, that might be a problem.

So we had another question too, going back a little bit. The deliciously named bacon fork was posing the question of what kinds of things can you check for. This was while we were in the demo, so we might have to rewind our brains a bit here: what kinds of things can you check for? And I guess I would say it depends on the particular point in your pipeline that you're talking about.

Yeah, I'm not sure I completely understand the question, but basically you are defining in your pipeline whatever events or whatever tasks you want to trigger, and this is what is going to be checked within the pipeline. If for some reason a specific task fails, you catch it, because a pipeline is basically a set of checks and actions: I'm doing something, then I'm checking that what I wanted to happen actually happened; if it did, I go to the next task; if not, my pipeline fails miserably and says, okay, please go ahead and make changes. So every step is, I would say, a check in itself. Because it's using Tekton, you define whatever tasks you want, you reference tasks that you have built on your own or pulled from the public hubs, and then you use them in your pipelines. I hope I understood the question correctly, but that's my honest answer to what I understood.

You are sticking with it.

I was just going to add, in case you didn't understand the question, that maybe the question was about what kind of events pipelines as code can handle. In that case, we handle the push and pull request events; those are the two events we handle. And you can do that on different branches, like you said, main, master or whatever, and on release tags if you want. So if you want to launch a pipeline whenever you push a release tag, you can handle that event too. That's just to add to it, in case that was what the user meant to ask.

That was my question, and you know, you are on the air.
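Pulling together the pieces mentioned so far (the .tekton directory, the push and pull_request events, and the remote tasks), here is a hedged sketch of what a PipelineRun shipped in the repo can look like. The annotation names and the {{ }} variables follow the upstream pipelines-as-code documentation as of this dev preview and may change; everything else is illustrative:

```yaml
# .tekton/pull-request.yaml  (lives in the application repo, next to the code)
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: app-pull-request
  annotations:
    pipelinesascode.tekton.dev/on-event: "[pull_request]"    # the "when": react to PRs...
    pipelinesascode.tekton.dev/on-target-branch: "[main]"    # ...targeting this branch
    pipelinesascode.tekton.dev/task: "[git-clone]"           # remote task pulled from the hub, no pre-install
spec:
  params:
    - name: repo_url
      value: "{{ repo_url }}"        # injected by pipelines as code from the Git event
    - name: revision
      value: "{{ revision }}"
  pipelineSpec:                      # the "what": the pipeline itself
    params:
      - name: repo_url
      - name: revision
    workspaces:
      - name: source
    tasks:
      - name: fetch
        taskRef:
          name: git-clone
        params:
          - name: url
            value: $(params.repo_url)
          - name: revision
            value: $(params.revision)
        workspaces:
          - name: output
            workspace: source
  workspaces:
    - name: source
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 1Gi
```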
You do get to just speak up and ask; you don't have to type them into the chat.

Indeed, but I wanted to make sure that I didn't forget to ask it, and also, you know, it was kind of in the middle of Jafar's demo and I didn't want to interrupt his flow.

I did say I was going to show some of the cool things they built for the admin side of it, so there's still another demo that I wanted to show. I promise we will come back to the questions. I'll try to fit in whatever I can in the demo. You're going to have to talk really fast.

So, you know, rolling back to the beginning, I just have a more rudimentary question about the integration with GitHub. Did you guys set up a webhook, so that the operator is triggered by the webhook, or does the operator just do something on its own?

Thank you very much for the question, because that's exactly what I was going to show. So, the clever guys... I hope this is going to work, because I'm switching to a completely different cluster that doesn't have it installed, and I hope it's going to work. A new live demo. Yeah, a new live demo. Let's take some risks again. What could possibly go wrong? Let's go to that demo. We did have some more questions, and hopefully we can come back to them at the end. But I am here to see this.

So, first thing, as a prerequisite, as I said, we need to have the OpenShift Pipelines operator, and that needs to be installed first. You can check that, for example, from the console, under the installed operators, and I see that the Pipelines operator is installed. So it's already installed on this cluster, but pipelines as code shouldn't exist, or if it does, we are going to delete it, like savagely delete it. I'm on a different cluster, right, so let's check: no, that's not what I want to remove, I want to delete pipelines as code, and it's not there, so that's good. All right, that's what I thought, because I had taken care of that.

Now let's go back to our terminal. We have a nice command called tkn pac. tkn is the Tekton CLI, and tkn pac is, I would say, an extension of it for pipelines as code. One of the options you have is bootstrap, and bootstrap basically takes care of all of those background things that you mentioned, the things you need to do as an admin to enable the feature on the cluster. So let's try it and hope that Chmouel did a good job on it. No pressure. Yeah, no pressure. I mean, if it doesn't work, that's not my fault, right?

Speaking of that, I have to mention that Chmouel has been amazing in terms of support. Basically, we were on Slack, and I was trying some funky things, and I would ping him and say, you know, the application or this thing doesn't work. He says, okay, please hold on. He makes some changes to his code, pushes, and asks me to try it again 20 minutes later, and it works. Amazing. So basically he is your CI/CD pipeline. Yeah, exactly. I mean, he is making changes to Pipelines as Code using pipelines as code to do the development, so it goes much faster.

So yeah, we're going to create a GitHub App, called "plough app", and it says: okay, I have detected that there is a route that is going to intercept all my events, et cetera. And look at the cleverness of these guys: they have created an automated way for us to do this. Oh my god, I have to log into GitHub, and I have my two-factor stuff. So yeah.
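For anyone following along, the step being demoed is roughly the following; this assumes the dev-preview tkn-pac plugin is installed alongside the tkn CLI:

```sh
# Bootstrap pipelines as code on the current cluster (dev preview).
# Interactively, this checks/installs the pipelines-as-code components,
# walks you through creating the GitHub App (after you log in to GitHub),
# and stores the app's private key and webhook secret as a Secret on the cluster.
tkn pac bootstrap
```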
Okay, create the GitHub App for me, and whoa, it's there. Magic. So now I have my GitHub application that has been created for me, and this is basically what is going to intercept all of those funky events. First thing, and this is the conversation we had yesterday: before using the app on a repo, let's have a look at a few of its settings. Yes, it points back to the listener that we have on our cluster, right? And there is the "enable SSL verification" setting; I am not using trusted certificates on this homelab, so I'm just making sure that setting is off. Now, if I go back to the main app, I can install it and choose which repos I want to install it on. I can say push it to all my repos, or only to some specific repos, and then it's going to intercept those events. It's the equivalent of creating all the webhooks for the repos concerned, except that it's one single entry point that you create, and it takes care of that magic behind the scenes. So all went well for enabling the feature on the new cluster, and if I look for pipelines-as-code, I should have a new project here, and my running pod in there. Yeah, great. The interceptor is working fine, and now, if I use that application on the repo, it's going to create the pipelines on this new cluster even though I haven't created any pipeline definitions in my namespace.

Just quickly, because I know we're going to run out of time: with bootstrap, the thing that makes it easy is that it turns a lot of manual steps into automatic ones. It's going to create a secret on the cluster, and it's going to manage and do a lot of things for you. So it's a quick way to get you started. The end goal, as a product, is to make this part of OpenShift Pipelines, but tkn pac bootstrap is mostly there if you want to get started and be able to try it out. All the manual steps are also in the documentation if you really want to go that way, and hopefully, as we get to the operator, a lot of these steps will be handled by the operator. Some parts, like the creation of the GitHub App, we automated because before we didn't have this, and creating the app by hand is a really tedious manual process that people often got wrong, so we made it automatic. Now, literally, if you want to try pipelines as code, you just need to go to the web page, download the CLI, run tkn pac bootstrap, and as long as you have an OpenShift cluster and a GitHub account, you'll be set to go.

Okay, so let's make sure we've got the list together here. You're going to need the Pipelines operator, and then we're also going to need, what was that package again? The tkn pac CLI. Yeah, so if we can maybe drop that in the chat just so that people know, because I think there's going to be a lot of interest in being able to do this, being able to have that single view into this within OpenShift and to manage it within OpenShift. The more I think about it, the more beneficial it starts to appear. So we did have a couple of other quick questions. First of all, as always, there's the question about what the resource requirements are to implement this. Is it going to be part of my same namespace and cluster? So that's the beauty of it: there is no extra resource requirement.
There's no incremental resource requirement beyond the OpenShift deployment you already have. Pipelines as code is implemented on top of OpenShift Pipelines, so there are no extra daemons; it all uses internal components from OpenShift Pipelines, namely the Triggers component, Tasks, and Pipelines. You don't need special deployments running daemons and things like that. It only reacts to events coming from outside, and it uses what we call the EventListener, which is built into OpenShift Pipelines, to handle those events. So there is no extra resource requirement.

Great. And I think another question we had was: can you share with us a holistic view of each stage of the pipeline? I'm wondering if perhaps we have a link or web resource we can point someone towards.

Yeah, and sorry to phrase it this way, because I'm not a native English speaker, but I don't think the content of the pipeline itself is that meaningful here, because I've done a very simple pipeline to show the flow: build the code, build the container image, and deploy. Very, very simple stuff. But we have a lot of other, more sophisticated, meaningful Tekton pipelines that we could show you, maybe as another episode if it makes sense, where we do all the CI stuff: we use Sonar, unit tests, integration tests, code coverage, security code scanning, and all of those funky things.

So basically there could be a lot of different stages, and that's the beauty of this: you are literally defining in code what all of those stages might be. It might be that you have a very simple, naive, one-stage model for the purposes of a demo, or something a bit more than that, or you could actually have something with quite a few different stages because it's required by your particular app, or by governance or compliance, whatever kinds of things might be in the picture that need to go through a much more staged process. So I guess, in a sense, the answer to that question is that the holistic view is whatever you choose to actually build and put there. Exactly. Very cool stuff. Well, let's see, do we have any other questions or closing comments? Chmouel, you look like you've got a...

I just need to add something about the OpenShift console, and I don't think Jafar had a chance to show it because it's OpenShift 4.9. On OpenShift 4.9, in the OpenShift console, we have an integration, a visual integration, of the Repository resources of pipelines as code. So basically, if you have pipelines as code installed on OpenShift 4.9, you'll get