We are going to look at OpenShift Pipelines today, which is based on the upstream Tekton project, so you will see what it has in store for us. Why do we need it? People have been using Jenkins for quite a long time, and Jenkins is a great tool, obviously, but it was not made for containers — it is not ready for the container world we are in right now. It also needs a lot of attention from the operators, the build folks, who have to keep watching what is happening, whether extra capacity or configuration needs to be added. And it quickly becomes an overhead: if you have many jobs running, it starts to consume lots and lots of resources and the builds start to get very slow as well. If you imagine running container-native applications, where you want to build container images and deploy to Kubernetes, it becomes rather brittle and the configuration gets much more complex. And you have to keep adding plugin after plugin on top — what we call plugin mania. You probably have your own experience with Jenkins as well. So it was a great tool, but it probably needs to move to a container-native way, and that is what you get with OpenShift Pipelines. OpenShift Pipelines rests on three basic pillars. First, it is containers — everything runs as a container. Your tasks and pipelines are written as the same kind of YAML files you are used to; since you have already been introduced to CRDs, this is just another set of CRDs you are going to see right now.
It is all containers, and it is serverless — what I mean is that when a pipeline is run, once it is done, the service comes down. Every pipeline run starts a pod, executes your build, and comes down. And it is a natural fit for DevOps, because it is designed for microservices, containers, cloud native. So what does it have? It has standard Kubernetes-style pipelines — the CRDs based on Tekton I mentioned earlier — and it runs pipelines in containers; each pipeline step gets its own container, and you will inspect that when we run it. It has a powerful command-line tool called tkn, the Tekton CLI, which basically helps you see the logs, see the tasks, see how the pipeline is running, whether it failed, any configuration problems — we can debug everything there. It can build images with Kubernetes-native tools, for example Buildah, Kaniko, and Jib, and also tools like Podman. This no longer requires a Docker daemon. What we basically do is build the images using CRI-O. How many of you know CRI-O, the container runtime? It follows the OCI container runtime and image specifications, so images are built and run according to those specifications. OpenShift 4 no longer uses Docker; it uses CRI-O behind the scenes. Pipelines can be deployed on multiple platforms and can target virtual machines, serverless workloads, and so on — it is very easy to run. And finally, it gives you an integrated CI/CD experience. As of today we do not have a Tekton dashboard on OpenShift, but around the 4.2 release and above we should soon be getting a Tekton Pipelines dashboard as well.
So what is Tekton? A quick intro: it is an open source project providing a set of shared and standard components for building Kubernetes-style CI/CD systems. It is governed by the Continuous Delivery (CD) Foundation, with contributions from Google, CloudBees (the Jenkins folks), IBM, Pivotal, and many more. Great — so what are the concepts behind it? This is what we are going to see now, and I will show you examples of each one. The first is the step, which is the smallest unit of work you do within a build. Let us say, for example, I want to check out source from GitHub — that is one step. If I am a Java user, I run a Maven build to build the application — that is another step. Once I build the application, I want to turn it into a container image — that is the third step. And once the container image is built, I want to push it into a container registry — that is the fourth step. These individual steps compose what is called a task. In a task, the steps are lined up and executed sequentially, each within a container. Once all the steps are completed, the task is complete. And a pipeline is simply a composite of multiple tasks: just as a task has multiple steps, a pipeline has multiple tasks — task 1, task 2, task 3 make one pipeline. Then there are pipeline resources: each task, step, or even pipeline expects some input and output parameters. I need to pass some input — for example, I need to say, where is your GitHub repository?
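The step/task/pipeline hierarchy described above looks roughly like this in YAML. This is an illustrative sketch in the v1alpha1-era syntax used at the time of this talk, not a file from the demo repository; the task name and images are placeholders:

```yaml
apiVersion: tekton.dev/v1alpha1   # API version current at the time of the talk
kind: Task
metadata:
  name: build-and-push            # hypothetical task name
spec:
  steps:
    - name: maven-build           # step 1: compile the application
      image: maven:3-jdk-8
      command: ["mvn", "package"]
    - name: image-build           # step 2: turn the jar into a container image
      image: quay.io/buildah/stable
      command: ["buildah", "bud", "-t", "myapp", "."]
    - name: image-push            # step 3: push the image to a registry
      image: quay.io/buildah/stable
      command: ["buildah", "push", "myapp"]
```

Each step runs sequentially in its own container inside one pod, which is exactly the execution model described above.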
For a GitHub project, that would be one resource. What should the final image name be? For example, docker.io/ something / something — a fully qualified container image name. We need to tell it what the image name should be; that could be another resource. And then you might have one repository or one version for development, one for QA, one for production, and these keep changing. So we have something called pipeline resources, which let you plug these things in per pipeline run. I can say: for this run, use this pipeline resource; for that run, use that one. It gets matched up, so that your steps, tasks, and pipelines stay very generic — they are not tied to one specific application build. If you have a common build pattern, I can write the steps once and use them with any other application of that type, just changing the parameters. We will see exactly that: I am going to use the same pipeline and the same tasks, but just change the parameters and the repositories, and it will give you a different output. All of these are static resources — what I mean is, when I create a step, task, pipeline, or pipeline resource, they just sit there. There is no reaction within the system, no pods created, nothing running. I need a way to actually run the tasks — either individually or as part of a pipeline. That is what TaskRun and PipelineRun do: a TaskRun runs your task, and a PipelineRun runs your pipeline. That kicks off the series of steps, with a success or failure result at the end. Great — so what else do we have?
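The "final image name" input described above is itself modeled as a pipeline resource of type `image`. A minimal hedged sketch — the resource name and registry path are illustrative, not from the demo:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: myapp-image               # hypothetical resource name
spec:
  type: image                     # built-in Tekton resource type
  params:
    - name: url                   # fully qualified container image name
      value: docker.io/example/myapp:v1
```

Swapping dev/QA/prod targets is then just a matter of pointing the pipeline run at a different PipelineResource like this one.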
We will see that in an example; I explained it as part of the CRDs. We talked about tasks, and then about pipelines, which combine multiple tasks, express tasks in order, have input and output parameters, and link tasks together — the output of one pipeline task can be the input to another. You just carry it over, and the tasks can land on different nodes; it basically gets distributed as well. We talked about pipeline resources, and TaskRun and PipelineRun. There is also the task and pipeline catalog — reusable tasks which are defined and available; I just need to run oc or kubectl commands to install those tasks. Once the tasks are in the cluster, we can start running pipelines. So I will stop talking and just show you one little thing: what the dashboard might look like. This is a premature image, but at the end of the day you will see something like this on a dashboard on OpenShift soon, where you can also start and stop pipelines from OpenShift itself. Right now we do that with Jenkins; you will be able to do it with Tekton Pipelines soon — around 4.2 is what is expected, but anyway, watch out for the news to see what happens with that. So I will stop the talk; I would rather show you than keep talking. Let us get into a demo. Let me go here and stop these — just to save some shuffling of things. This is the Knative tutorial; I will close this as well, leave that there, then go here and switch the OpenShift project. Let us go to the console first and I will show you these projects. I will close the unnecessary windows for a second — I do not need this tutorial, or this, or this; I have a lot of tabs open, so I will just close them.
Okay, let us start with Tekton. You can find the Tekton GitHub organization at github.com/tektoncd, right here. You can download the Tekton CLI from there as well. The project itself is hosted at tekton.dev; you can go to tekton.dev to get into the project, and it leads you to the GitHub repositories too. What I am also going to open up for us is Red Hat Developer Demos. I have a small pipeline catalog for this demonstration — that is what this is; I am just showing you the pipeline catalog here. Go to Red Hat Developer Demos and you will find the pipeline catalog there. The demo sources I am going to run, the sample applications, are in a repository called tutorial-pipelines. This is the demo I have here; you can go there if you want to pull the demo resources. Anyway, I will ask the folks to send you all these links so you can have a look — that is where you can download the demos I am going to run right now. So, let us see what tasks are. The first thing I am going to show you — let me go to my offline copy of the repository — is a Quarkus task, fitting since you have been in Quarkus talks since this morning. It is a Quarkus task that builds the JVM variant, just a JVM-mode Quarkus application. As usual, we start with the CRD: the apiVersion is from tekton.dev, and the kind is Task. This is something you have to write. I give it a name, quarkus-jvm — I will come back to where we use this name. And then we start with the inputs.
We need to say what the task takes as input. The first one is a source; I call it `source` — you can give it any name you want — but the type is `git`. That means this input is a Git repository: it has a URL and a revision, whichever revision or branch you want to use, so that when I specify it, the task automatically checks it out. I will come back to this — you will see when we run the pipeline run how you actually map it. But notice that right now this is very generic; it is not tied to any specific build. It just says it needs an input of type git, which I map to `source`. Then it takes a bunch of other parameters, like any application would: a context dir, where it needs to change into before building; whether TLS verification should be done; a destination image name, which is simply what your final image name should be — I will come back to that in a second when we see the pipeline resources — and a few other input parameters. What does it produce? An output of type `image`, and the image type is nothing but your Linux container image. So I build the application, generate a jar, run something like a Dockerfile build, and it gives me an image, which I tag as this output image. So basically: my task takes an input, takes parameters, does the build, and produces an output of type image. And what are the steps involved? We had inputs, we had outputs; now let us go through the steps of this task.
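The inputs and outputs just described look roughly like this. This is a sketch in the v1alpha1 style; the parameter names are from memory of this kind of task and may not match the demo repo exactly, and the variable syntax (`${...}`) changed in later Tekton versions:

```yaml
# Inside the quarkus-jvm Task spec (illustrative sketch, not the exact demo file)
spec:
  inputs:
    resources:
      - name: source              # the git input described above
        type: git
    params:
      - name: contextDir          # directory to change into before building
        default: "."
      - name: tlsVerify           # whether to verify the registry's TLS cert
        default: "false"
      - name: destinationImage    # the final, fully qualified image name
        default: "${outputs.resources.image.url}"
  outputs:
    resources:
      - name: image               # the built Linux container image
        type: image
```

Because the repository URL and image name arrive through these typed inputs rather than being hard-coded, the task stays generic across applications.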
It is like any build: it has multiple steps. As we saw, tasks consist of multiple steps. I call this step `jvm-build`, and it is going to produce a jar at the end of the day. I use a builder image — this builder image has all the tools I need, for example Maven and GraalVM, or any other tools you want. If you are a Golang developer, you would have the Go tools installed; a .NET developer would have all the .NET tools to build your application; a Python developer, the Python tools. You can have builder images like that created for each stack, and I just say which image to use. The imagePullPolicy is Always, because while doing development or a demo, whenever I change the image I want it to keep pulling. Then I set a few environment variables that my builder image understands — in this case a Maven mirror URL, because I want my builds to be faster: I have a Maven repository set up, so I just point the build at that repository rather than pulling everything from outside. For Node.js you would do something similar with a local npm registry. Then it takes a few arguments — I run a script, basically a Maven run, within that container image — plus a security context, and then I allocate some resources. A build is going to be CPU- and memory-hungry, so I say: give it a maximum of six GB of RAM and four CPUs, and a minimum of four GB and two CPUs to run the build, so that it runs at optimal speed. You can change these parameters as well.
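Put together, the jvm-build step described above looks something like this. The builder image name and Nexus URL are illustrative assumptions, not the demo's actual values:

```yaml
# Sketch of the jvm-build step inside the Task's steps list
steps:
  - name: jvm-build
    image: quay.io/example/quarkus-builder   # hypothetical builder image with Maven/GraalVM
    imagePullPolicy: Always                  # re-pull on every run while iterating on the image
    env:
      - name: MAVEN_MIRROR_URL               # point Maven at a local Nexus mirror for speed
        value: http://nexus:8081/repository/maven-public
    resources:
      limits:                                # maximum the build may consume
        memory: 6Gi
        cpu: "4"
      requests:                              # minimum guaranteed to the build
        memory: 4Gi
        cpu: "2"
```

The requests/limits pair is standard Kubernetes resource accounting, which is why the build pod gets scheduled onto a node with enough headroom.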
So that is my first step, the JVM build. In the next step I do the container image build — I use Buildah here, which is a daemonless way of doing builds. I just use Buildah to build my application container, which produces a Linux container image from the output of the previous step. The nice thing about steps is that the data from a previous step is shared with the next step: every step sees a folder called `/workspace`, so I can find the files there that need to go into the image. The git input checks out all your source folders into the workspace, so I can refer to my source files as well. Typically you check your Dockerfile into the GitHub repository, so once it is checked out I know the path where the Dockerfile sits, and I can run the Dockerfile build from there, using the workspace to pick up the build data — the artifacts. That is what I do here. And once the build is done, the last step uses Buildah again to push the built image into a registry. If you are using OpenShift, you have an internal registry automatically, so I just push this image into the OpenShift internal registry. Those are the three steps I do as part of this particular task. So how do I refer to this task? Let us see an example. You saw the customer-preference-recommendation example this morning, those of you who were in my Istio session — we deployed customer, preference, and recommendation. I am going to take the same application, but I am not going to run anything manually.
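The build and push steps with Buildah might be sketched like this — the paths, flags, and privileged setting are illustrative of how Buildah-based Tekton steps were typically written at the time, not copied from the demo:

```yaml
# Sketch of the image-build and image-push steps (continuing the steps list)
  - name: image-build
    image: quay.io/buildah/stable
    workingDir: /workspace/source            # the git input checked out by Tekton
    command: ["buildah", "bud", "--tls-verify=false",
              "-t", "${outputs.resources.image.url}", "."]
    securityContext:
      privileged: true                       # Buildah in a container needs elevated privileges
  - name: image-push
    image: quay.io/buildah/stable
    command: ["buildah", "push", "--tls-verify=false",
              "${outputs.resources.image.url}",
              "docker://${outputs.resources.image.url}"]
    securityContext:
      privileged: true
```

The shared `/workspace` volume is what lets the push step see the image layers the build step produced — no Docker daemon is involved anywhere.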
I am going to use pipeline runs to build and deploy customer, preference, and recommendation. In this case I am also going to make recommendation serverless, again using pipelines — no manual work here. So where do I use this task? Since I told you tasks are reusable, I have a common pipeline defined here that refers to the task. But before that, I want to show you the resources. I mentioned pipeline resources earlier. If you are familiar with OpenShift, we also have something called ImageStreams here; if you are new to OpenShift, you can just ignore the ImageStreams. The first pipeline resource I define is named `tutorial-git` — I give it a name — and it is of type `git`; I say this is the URL to pull from and this is the revision to use. The git type is understood by Tekton pipelines by default — it is one of the built-in types. So whenever this resource is used for cloning, it goes to the repository given by the URL and checks out the version given in the revision parameter. Where do I use it? Here, in the pipeline run: when I say `app-git`, I am just binding that name to the `tutorial-git` resource — I am telling the pipeline run, for the resource called app-git, use this particular resource. And similarly I have an image resource defined in the same way; let me go to one of the images — everything else is pretty much the same.
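The tutorial-git resource just described would look roughly like this — the URL and revision values are illustrative stand-ins for whatever the demo repository actually uses:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: tutorial-git
spec:
  type: git                       # one of Tekton's built-in resource types
  params:
    - name: url                   # repository to clone (value is an assumption)
      value: https://github.com/redhat-developer-demos/tutorial-pipelines
    - name: revision              # branch or tag to check out
      value: v1
```

Any task input of type git that gets bound to this resource will clone that URL at that revision before its steps run.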
I say it is of type `image`, and the URL is the image registry — the local internal registry I was telling you about — something like tutorial/customer and so on. I also need to show you where exactly we map that, in `pipeline-deploy`, which is the name I have given to my pipeline. In the pipeline I declare a resource called `app-git`; that is what gets bound to the `tutorial-git` resource from the pipeline run, because from the pipeline run I can change it. That is what I mean by pipeline resources. Imagine I had put the actual resource inside the pipeline definition itself: then my pipeline would be tied to that particular resource. That is what Tekton tries to alleviate for you. Here, my pipeline is not tied to any resources; only my pipeline run is, because when I run it, I know which resources I need, what my output image should be, what parameters to pass. That happens only when I am running it, not when I am defining it. In the declarative state I just say: I use this `app-git` — a mnemonic, a pseudo-name if you like — and then I refer to that pseudo-name in the run's resources to say what it maps to. Say tomorrow I do not want to use tutorial-git but my demo-git instead: I just go to my pipeline run and change it; I do not need to change the pipeline, because the pseudo-name is already mapped. It makes your life easier — you can switch resources from dev to QA to prod without touching the pipeline definition. It is created once, and your pipeline runs can pass whatever parameters they want. Makes sense? All right. The pipeline also has a taskRef.
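The pseudo-name indirection described above can be sketched in two fragments — the pipeline declares only a name and type, and the run binds it to a concrete resource (names here follow the talk; field layout is the v1alpha1 style and may differ slightly from the demo files):

```yaml
# In the Pipeline: only a pseudo-name and a type, no concrete repository
apiVersion: tekton.dev/v1alpha1
kind: Pipeline
metadata:
  name: pipeline-deploy
spec:
  resources:
    - name: app-git
      type: git
---
# In the PipelineRun: bind the pseudo-name to an actual PipelineResource
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: customer-run              # illustrative run name
spec:
  pipelineRef:
    name: pipeline-deploy
  resources:
    - name: app-git
      resourceRef:
        name: tutorial-git        # swap to demo-git here, without touching the Pipeline
```

This is why the same pipeline can build customer, preference, and recommendation: only the run-level bindings change.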
This is where the task is referred to. A pipeline has multiple tasks, as I said; here I declare a build task whose taskRef points to `quarkus-jvm` — the same task we defined earlier. Referring to it means that task will be used here, and then I pass some parameters to it: TLS verify is false (it is not required here), some context dir, the Maven mirror URL, and so on. I also give it its input resources and its output. Whatever inputs, outputs, and parameters are defined in the task, I am just passing them here. Are you able to relate this? Any questions? Okay — I am coming to that in a second. Now, there is also a task called the OpenShift client task, which means I can run `oc` client commands, which you saw earlier. Again, it is an image with the oc client built in, and I use it to roll out the latest deployment config — meaning I create a deployment config without the deployment getting triggered; the moment the build completes and the image is pushed, OpenShift automatically starts the deployment. You will see that when I run the build. But some things need to run in privileged mode, and I need some cluster permissions to create a few things. That is what your service account does: I say, this particular service account has permission to do A, B, C, D, E. I will show you an example of what exactly that means. We always have to use a service account, because there may be reasons to elevate privileges — just like on Linux, where you typically use sudo for something like that.
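A pipeline task entry wiring in the quarkus-jvm task might look like this — a hedged sketch of the v1alpha1 shape, with parameter values as illustrative placeholders:

```yaml
# Sketch of one entry in the Pipeline's spec.tasks list
tasks:
  - name: build-app
    taskRef:
      name: quarkus-jvm           # the Task defined earlier in the talk
    params:
      - name: tlsVerify
        value: "false"            # internal registry, so TLS checks are skipped
      - name: contextDir
        value: customer           # subdirectory of the repo to build (assumed name)
    resources:
      inputs:
        - name: source
          resource: app-git       # pseudo-name, bound by the PipelineRun
      outputs:
        - name: image
          resource: app-image     # pseudo-name for the output image
```

Everything concrete — which repo, which image — arrives through the resource bindings and params, keeping this definition reusable.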
I may want to elevate to cluster-admin level for something, or say: this account has A, B, C, D permissions within the cluster. That is a big difference between upstream Kubernetes and what we have done in OpenShift: since OpenShift is enterprise-ready, we made sure you need permissions to do certain tasks. You cannot just do whatever you wish; we want to restrict that. That is what the service account is for. So how do we run it? Let us run this example and deploy the customer application first. I am going to follow the instructions from here. I am already in the project, and I created a service account called `pipeline`. If you look, these are some of the permissions I am granting to it: on this particular project, for this particular service account, it can use the privileged SCC — meaning it can elevate to privileged — it has edit rights, and it can run as a privileged user. The default service account needs similar treatment, and since I also want to run some Istio-related things under a service account, I need to grant those permissions as well. Then I have another set of pipeline roles — I will come back to what roles those contain when we do the Knative part; let me go and show you. And then I bind this to the namespace for this pipeline service account. So you can either use these individual `oc adm policy` lines to define permissions — those are all OpenShift-specific — or, let me go and see if I have it here for the Knative client, I can also define something like this declaratively. That is the way to define RBAC in Kubernetes: I say, on these API groups, I am granting a blanket permission.
I can do all of these things — get, put, delete, whatever verbs the Kubernetes API accepts. This is essentially creating a Role. So there are two ways: with `oc adm policy` I can grant things one by one, or I can define them in bulk, declaratively. It depends on the case. What I have done here is: when I deploy an application as a Knative service, this pipeline service account should be able to see what is already deployed, get things from it, delete things from it, and so on — that is why I gave it this pipeline role. You can follow either way; I actually prefer the declarative way because it is easier, and you know exactly what happens. If you are new to Kubernetes this looks complex at first, but it is a very nice mechanism and makes your life much easier. So I am just creating this and attaching it to this particular namespace, for the service account that needs it. I have already deployed Nexus, and I also created the tasks, just to save time. Let us see what tasks we have. I told you there is a client called tkn — sorry about my typo there. `tkn version` — I am on 0.1.2; there has probably been a newer version released since. First let me enable shell completion, and then: `tkn task ls`. This lists all the tasks I have already created — these are the task names. If you look in my demo repository, these are the tasks I created; I just used `oc create` to create all of them. There is the OpenShift client task, then for my Quarkus talk, quarkus-native and quarkus-jvm — the one we just saw — and then the Knative service create task.
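The declarative RBAC alternative mentioned above can be sketched as a Role plus a RoleBinding. The role name, namespace, and resource list here are illustrative of granting the pipeline service account rights over Knative services, not copied from the demo repo:

```yaml
# Declarative RBAC sketch: let the pipeline service account manage
# Knative serving objects in this namespace (names are assumptions)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pipeline-roles
  namespace: tutorial
rules:
  - apiGroups: ["serving.knative.dev"]
    resources: ["services", "revisions", "routes"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipeline-roles-binding
  namespace: tutorial
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pipeline-roles
subjects:
  - kind: ServiceAccount
    name: pipeline
    namespace: tutorial
```

Applied with `oc create -f`, this replaces a series of one-off `oc adm policy` commands with a single reviewable manifest.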
That is what we will use to create a serverless service. You will see I am not going to deploy any YAML for it; it creates the service automatically for you. I have all these tasks created already, so let us go to the next step. First, I create the resources for this application — let me check if I have them; I think I am just creating the resources right now: the ImageStream, the image resource, the GitHub repository reference, everything. Now I do `oc create` again — this time to create the pipeline itself. Not a pipeline run, just the pipeline: the definition of what my pipeline looks like. Sorry, I missed the `-f` there. Let us see — all right, our tutorial-deploy pipeline is created. Once I have that pipeline, I also want to create the Knative pipeline, which uses the kn deploy task. It is similar to the first one; the only thing is the last step changes, to deploy as a serverless service. Let me do that as well, and let us go back and see: good, we have two pipelines. One creates a normal OpenShift deployment; the other creates a serverless deployment on OpenShift using kn. I will come back to that pipeline definition in a second. So now we are good to go. Next, let us deploy our customer application — exactly the same customer application we saw this morning for customer-preference-recommendation. I am going to create the same one, but in this case it is built from source; I am not going to use an image that already exists. Let us deploy the application first. I will also do one thing here: `watch oc get pods`.
You see here — oh, I already have customer, preference, and recommendation running. Oh my God, I will delete them: apply the customer clean, then the preference clean — this is residue from the morning demos — and also clean recommendation, since I had two versions; I will do v2 as well. I will give that some time to clear off, and then do the app creation. Let me see which project I am in — tutorial3; let me go back to tutorial. Where is that — okay, I am back. I probably do not have routes here, so let me also do `oc delete route customer`. Now I have nothing; I am starting from scratch. I apply the app YAML, and it goes and creates a deployment config here called `customer`. I do not think I have any deployments left over. But we have zero of one pods, because there is no image there yet: the deployment is tied to an image in OpenShift, but the image has not been built — I am not referring to an external image. I want to build it from source via Tekton and hand that image over, so the deployment can start. So now, let us look at the instructions, so that when you come back to these demos you know what to do. I created the application; next I am going to create a pipeline run. Before I create it, let me show you what this pipeline run has in it. Go to customer, pipeline run. As I told you, the pipeline run reuses the pipeline, but the parameters change — I am going to use the same pipeline-deploy for customer, preference, recommendation, everything. The difference is that I tell it which deployment config to use and which tutorial resource to use.
For customer, for example, I am using the same tutorial-deploy pipeline. Let me bring that up on this side. You see this? I am referring to the same pipeline — if you compare the right and the left, it is the same pipeline reference. Once I do this, I say my deployment config name is `customer`; if you go to the console, the deployment config name is customer — that is what needs to be updated, the deployment triggered, after the build is done. That is what I am saying here. The application source directory: within this tutorial-pipelines repository I have a customer directory, so I am just saying go to that particular context dir and run the build within it. The trigger is manual, which means I do not have GitHub push hooks wired up. I am going to use the same `pipeline` service account here, and then I refer to the resources. Let me put the resources right below this so it is easy for us to compare. The first one: `tutorial-git` — the same reference we had earlier, the tutorial-pipelines repository at revision v1; that is what I am going to use here. And then `customer-openshift-image` — what does that contain? Where is that — okay, here: it points at the internal registry, the tutorial namespace, and within that the `customer:v1` image. So that is what the end result will be; everything is tied up now.
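Assembling the pieces just walked through, the customer pipeline run might look roughly like this. Field names follow the v1alpha1 style (where the field was `serviceAccount`, later renamed `serviceAccountName`); the pipeline and parameter names are reconstructed from the talk, not the exact demo file:

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  generateName: customer-pipeline-   # a fresh unique name on every run
spec:
  pipelineRef:
    name: tutorial-deploy            # the shared pipeline reused for all three apps
  serviceAccount: pipeline           # the SA with the privileged/edit grants shown earlier
  params:
    - name: deploymentConfig         # which DC to roll out after the build (assumed name)
      value: customer
    - name: contextDir               # subdirectory of the repo to build
      value: customer
  resources:
    - name: app-git
      resourceRef:
        name: tutorial-git           # repo + revision v1
    - name: app-image
      resourceRef:
        name: customer-openshift-image  # internal registry target
```

Running preference or recommendation is then just this same file with different param values and resource refs.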
And one little thing to note here. If you've watched since this morning, whenever I create any Kubernetes resource, the name has to be unique, right? The next time I do a create, it says the name already exists and will not create the object for you. But these pipeline runs can be run any number of times; it's not just one single run. So what I've done here is use a generateName. With generateName you basically say: start the name with the prefix customer-pipeline- and append some random characters to it. You'll see that in a moment; it's just like a pod ID, a suffix gets attached so that every new run has its own unique name. That way, if something failed in a previous run, I can go back and refer to the old logs as well, okay? So that's what I'm going to do. Let's go and create this pipeline run; you see it gets a name with some random suffix. And how do I see my logs? Here comes our tkn tool: tkn pr, the shortcut for pipelinerun, logs, follow from all containers. If you look, there are eight containers here, which means there are eight steps; I'll show you the step names in a second. I say follow on all containers so that all the container logs are streamed. From where? Okay, what did I do wrong? Sorry, tkn pr logs, and then follow, and it keeps following the log. Meanwhile, we can also try to see the names of these containers with .spec. I'll just go here and change this; I'm pulling this from my local Nexus Docker repository, from my tutorial pipeline repository, a parameter which we already passed. I'll give the name here. What happened?
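The two commands used here look roughly like this — the file name and the generated run name are placeholders:

```shell
# Start the run; generateName gives it a unique random suffix each time,
# so the same PipelineRun file can be submitted again and again.
oc create -f customer-pipeline-run.yaml   # assumed file name

# Follow consolidated logs from every step (container) of the run:
#   -f : follow the log stream,  -a : all containers
tkn pipelinerun logs -f -a customer-pipeline-abc12   # or: tkn pr logs -f -a
```

Without generateName, a second `oc create` of the same file would fail with "already exists", which is exactly the problem being avoided.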
The name is wrong; let me try get pods with a jsonpath of .spec.containers, then .name, just getting only the names of the containers. Is there something wrong here? Did I misspell containers, c-o-n-t-a-i-n-e-r-s? Let's see, get pods... okay, let's see if we can get this from the build pod here. Go to resources, pods; there is a build pod running here. Is this the one? No. I think I did this yesterday, so I'm just wondering what's wrong. Okay, the build is successful and the deployment is being created right now, but we can still look at the names. Since it's a big name, I'll go all the way here: get pods, the name, -o yaml, which gives you the whole YAML to read. I've got everything here, and then I want the path .spec.containers, all containers, just give me the names alone. All right, I don't know what's wrong with it. But look at this: we had multiple steps, we've seen them, the git task, jvm-build, build, push, all these things, and each step within a task gets its own container; that's the reason we give an image name per step. Tekton starts the container within the task's pod and prefixes the name with something like step-build, so you know which build step a given container belongs to, and you can go check its logs. But the tkn logs command does that work for us, because it gives the consolidated logs from all the containers. That's why we ran tkn pr logs with -f for follow and -a for all containers, so you don't need to go into these individual containers. So let's do this; we've done this one, and let's go and see what we have here.
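The container-name query being attempted here can be written with a jsonpath expression like this — the pod name is a placeholder:

```shell
# List just the container names of a build pod. Each Tekton step runs
# in its own container inside the task's pod, with names prefixed
# "step-", e.g. step-build, step-push.
oc get pod customer-pipeline-abc12-build-pod \
  -o jsonpath='{.spec.containers[*].name}'
```

A misspelled field in the jsonpath (for example `containers` vs `container`) silently returns nothing, which matches the empty output seen in the demo.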
We've got the pods running now; the customer pod is running, and the DeploymentConfig shows one of one pods. Notice that I've not created a service: in the morning example I did oc create deployment, oc create service, but here I've done none of that. I just created the DeploymentConfig, which is OpenShift-specific stuff, and if you look, the route also got created, so I have a URL to access this. I can go here and I get the same response back, well, not exactly the same, because we don't have preference yet; it's the response we got this morning when we deployed customer alone. So let's do the preference deployment now. Before that, let me keep this command handy; I'll just paste it here, because it's a pretty long command. Then what I do here is oc create -f preference. Now, you see that I'm not doing a preference pipeline creation, because I'm reusing the pipeline again. If I go to the pipeline run for preference, let's open that as well, it's exactly the same deploy pipeline, but I'm changing the parameters: use the preference folder to do the build, and use the preference DeploymentConfig to do the rollout. So first, create the DeploymentConfig as we did last time, from the preference app.yaml. If you go back to the console, you'll see there is one more DeploymentConfig called preference created here, but again zero of one pods, because we don't have the image yet. Then I do oc create -f on the preference pipeline run, and the pipeline run for preference is started. Let's go and see what pod name we have.
I'll just Ctrl-C, then watch oc get pods piped through grep pref, so I get only the preference pods. I get this big pod name; it's getting initialized right now, and I'm not sure I can get much from the pod during initialization. I also have the pipeline run reference here, so I say tkn pr logs -f and -a. And then let's see if we can get all the container names now. If you look at the container names, it's exactly the same: one, two, three, four, five, six, seven, eight. Nothing has changed, same containers, same type of application deployment. The one thing that differs is what I'm building: I'm building preference here, not customer. How did I change that? Using the parameters. It's the same repository; you could even change the repository if you wished, but in this case I kept the same one. So it's taking the same pipeline and building the preference application for me now, and once it builds the image, it deploys it, so you have the customer and preference applications rolled out. This morning we did all of that individually; I had the same kubectl commands repeated for every one of those tasks. Now I'm not doing that, because Tekton Pipelines lets me do it much more easily: I point to the source repository, tell it how to build and how to roll out a deployment, and I'm done; the image is pushed back. And where do we see the images? Go back to Builds, Image Streams. You get a lot of image streams here; for example, customer is here, preference is here. When you click on one and scroll down, you see the image SHA as well, preference at b3cd30-something. This is exactly the image that got pushed there.
To the internal registry, right? An image stream, you can imagine it as a view over your images and tags, like a view in databases: from multiple places you get one view. Similarly, you can have multiple repositories feeding one thing called customer in this case. So I've done this, my build is done, my preference is rolled out, and you can see the preference application running here. When I go here and call this, I get the next error, which says my application is not available: recommendation. Any questions so far? Because the next thing we're going to do is use the same technique, the same pattern, but do a Knative serverless deployment of recommendation. Before that, any questions? No questions? It's either good or bad; silence is a dangerous thing, because it could also mean this guy has been talking too much since morning and nobody can understand anything. I'll take it positively, that you understand, and you can reach out to me any time you have a question. So next I'm going to show you the kn deployment; it's a similar setup, same resources, but I'm going to do a Knative deployment. Let me show you what I have here. This morning we saw we had two versions of preference, V1 and V2. And what I said in the morning is that in serverless, if I change anything, any parameters, the image name, environment variables, it's going to create a new revision, because twelve-factor application principles are followed: any change in configuration triggers a new revision. That's what I'm going to do; I have V1 and V2 versions of this.
What I'm basically going to do with V1, for example, let's see the deploy pipeline first, kn-deploy. I have similar things: the git resource, the app image. But what I have here in addition is a service name; I'm giving the name of the Knative service that needs to be created, plus the source directory from which to build. Again I'm using the Quarkus JVM build and all the same parameters. The only big change is this kn service create step. kn is the Knative client, a command-line client to create serverless applications with Knative; I've not shown you that yet. You can go to this URL, knative/client on GitHub. It's still under development, it just had its 0.2 release, and there's a user guide you can follow to create Knative applications. I've already packed it into my tools image, so the command is available, and I'm just going to pass the necessary parameters so that it creates my Knative service for me. That's the only change: in the previous example we did an oc rollout of the DeploymentConfig, a kubectl rollout of the deployment, but in this case I just create a Knative service. Other than that, it's still the same build, same image push, et cetera. And what's the difference between the two pipeline runs? In the kn pipeline run I'm going to use the recommendation OpenShift image v1; I have two image versions.
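A kn invocation equivalent to what that pipeline step does might look like this — the image reference and namespace are assumptions for illustration:

```shell
# Create a Knative service from the freshly built image in the
# internal registry. Any later change to the image or configuration
# produces a new revision rather than mutating this one.
kn service create recommendation \
  --image image-registry.openshift-image-registry.svc:5000/tutorial/recommendation:v1 \
  --namespace tutorial
```

This replaces the `oc rollout` / `kubectl rollout` step of the earlier pipeline; everything upstream of it — git clone, build, push — stays identical.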
And then if you see the v2 run, I'm just saying use the v2 version of the image, which means that to the Knative serverless world this is a config change, so two revisions will be created automatically for me the moment I do the deployments. All right, let's do that. Let's go here and see what instructions I have. Recommendation, I'm going to take this version; it saves you a couple more commands. The moment I create it, it's off. Let's see the logs as usual: tkn pr logs -f -a... sorry, this gives me only preference; now I say give me only recommendation. All right, we have this. Then let me paste this here to see if I can get the same container names as well. Unfortunately I did not rename the push step to kn-create, so you pretty much get the same thing: the git step, jvm-build, build, push, the image-digest exporter. That's all; it's the same build, everything is exactly the same. The only difference comes at the end. I'll say watch oc get ksvc and wait for this to complete. It's building the image right now, and finally, at the end, it's going to create the Knative service for you; just watch the ksvc output. All the steps are exactly the same, which shows the amount of reusability we're getting, how much of the code is reused. There might be some redundant things we need to write, but that's okay compared to writing completely new builds every time; instead I'm reusing the tasks. If you imagine Jenkins, I cannot reuse the tasks like this, right?
Every time in Jenkins I have to recreate the same steps, or, if you're a very hard-core developer, you go into the Jenkins workspace, get the XML out and copy-paste the same thing again, right? None of that is required here: I just refer to the same task name again and again and things happen smoothly; I just change the parameters and it does the same build again. So the image is pushed, there you go. Now, I've not got a normal deployment; if you look, the deployment shows up differently here, because this is not a normal deployment, this is a Knative serverless service that's deployed here. And I also got the URL now to access the service. Let's click the URL and see what happens. There you go, we get recommendation V1. But let's try to tie it all up like we did this morning, and there is a surprise for you there as well. Let me Ctrl-C and wait for this to happen; it probably takes some time to initialize, and then it probably breaks as well, so we'll see. oc get pods for recommendation; we'll see it after some time. And here's a trick: if you want to see only the running pods, this is how. I'll repeat the command. oc get pods, or kubectl get pods, with a field selector that says status.phase equals Running. With that, it shows you only the running pods; before, we saw all the pods, including the completed ones, not terminated but completed build pods. Sometimes you need to filter the pods down to the ones you care about, in this case Running; that's what I'm doing there. I'll just watch the running pods alone: we have customer, preference and recommendation running here. We'll allow this one to terminate. But now what happens when I go back to my customer call?
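The running-pods filter mentioned above, spelled out:

```shell
# Show only pods whose phase is Running, hiding the Completed
# build pods left behind by the pipeline runs.
oc get pods --field-selector=status.phase=Running

# The same field selector works with plain kubectl:
kubectl get pods --field-selector=status.phase=Running
```

Field selectors filter on a small set of resource fields server-side; for pods, `status.phase` is one of the supported ones, which is why this trick works without any grep.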
I'm going to get another failure now, okay? The recommendation service is still there, but my application fails, not able to reach recommendation, and it gives you connection-reset errors and so on. I'll tell you why. What happens in Knative serving is that every time you call a Knative service by its URL, the request goes to a Knative serving pod called the activator, okay? The activator checks the Host header, which in this case is something like tutorial customer, tutorial recommendation, and so on. The moment it sees that, it says, okay, I have to activate the Knative service, which is dormant, and redirect the request to it. Now, the activator lives in the knative-serving namespace, but what we've deployed is in the tutorial namespace, okay? Let's go back to my source. In the preference source, the Java application properties file, I've not put any tutorial suffix; I just have "recommendation" here. What that means is it tries to look up recommendation in the same namespace as the caller, and it's not there, so you get some weird errors. I need to tell it explicitly to look in the tutorial namespace. How do I do that? First we need to find the URL for this service. You can do oc get ksvc recommendation -o yaml, and you'll see two things, including the URL. Anything that ends in svc.cluster.local means it's within that particular cluster; that's what Kubernetes says. The Kubernetes DNS, which runs inside the cluster, sees anything ending in svc.cluster.local and goes to find it inside the cluster, right?
Inside the cluster, where do I have to go? To a place called tutorial, which is your namespace; that's where this particular service, recommendation, resides. When you use only the short name like this, the lookup assumes everything is within the same namespace as the caller, and when it's not found there, you get those weird errors. All right? So let's go and update this. Now, the value to update lives in my application properties, right here. So, do I need to do a new build? Hmm? Sorry? No, and it doesn't happen automatically either. Here's a question for you, since you've been watching since this morning; it's a basic, fundamental Kubernetes thing: do I need to build and deploy again? No. I'll tell you why. This is MicroProfile, which is Java-centric, but just imagine that this property can come from an environment variable. If I make a config change via the environment, it triggers an automatic redeployment. So I'm going to take this property and turn it into an environment variable; I'll show you what I mean. I just need to capitalize it to normalize the name, and change the dots, because environment variable names do not accept dots, so I use underscores instead. Then I set the value to the full URL. So copy this, go to the preference DeploymentConfig in the console; under preference there is a section called Environment where all your environment variables are defined. Again, twelve-factor cloud-native application development: if I change any environment variable here, it triggers a new deployment automatically, and I don't have to change my image.
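The same override can be done from the CLI instead of the console; the variable name here follows the convention of upper-casing and replacing dots with underscores, and the exact property name is an assumption:

```shell
# Point preference at the recommendation Knative service in the
# "tutorial" namespace, using the fully qualified in-cluster DNS name.
# Changing an env var on a DeploymentConfig triggers a new rollout
# automatically, so no rebuild of the image is needed.
oc set env dc/preference \
  RECOMMENDATION_API_URL=http://recommendation.tutorial.svc.cluster.local
```

The key part is `recommendation.tutorial.svc.cluster.local`: the namespace is spelled out, so the lookup no longer depends on which namespace the caller happens to be in.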
I'm just adding an environment variable to override that value. Okay, that's exactly what I'm doing here: I go in and say the recommendation URL should be this fully qualified one. All right? Then I save. Meanwhile, let's watch the pods as well, all the running pods; we'll see the old one get terminated soon. Let me save this, and hopefully, if all goes well, I should see a new deployment getting triggered here. You see this? A new preference deployment is happening, because, again, this is a Kubernetes feature, and an OpenShift feature as well: in OpenShift I'm just doing it in the console, in plain Kubernetes you'd use the CLI to go and update it, okay? These are some of the additional things OpenShift gives you on top of raw Kubernetes, so it's easier for developers to adapt to Kubernetes. Again, this part is Java-specific. If you're a Java developer and you're interested in what this is, just go to MicroProfile Config, on its GitHub page or the MicroProfile Config page, and you'll see that everything I've done here is documented there; it's just an extra pointer for the Java developers. All right, let's see what happens to my pod. It's up and running now; let's see if I'm successful. And notice we don't have recommendation running anymore, because it's a serverless pod; it was terminated because it had no requests for quite a while. Let me go and refresh here; if all goes well, we'll see recommendation come up soon. Let me also do one more oc get pods -w. There you go, all right? Now it's exactly the same thing we did this morning, except this morning we ran it in a serviceful way.
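For the Java developers, the MicroProfile Config mapping referred to here works roughly like this — the property name is an assumption for illustration:

```properties
# application.properties (baked into the image at build time)
recommendation.api.url=http://recommendation:8080

# At runtime, an environment variable can override this value with no
# rebuild. Dots are not valid in environment variable names, so the
# MicroProfile convention is: upper-case and replace dots with
# underscores:
#   recommendation.api.url  ->  RECOMMENDATION_API_URL
```

Because environment variables take precedence over the packaged properties file, setting the variable on the DeploymentConfig is enough to repoint the service.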
Now I felt that recommendation is required only when I call it, so I made it serverless. Customer and preference are always-on, standing services, and recommendation comes up only when I need it, all right? Okay, we have just six more minutes, so I can show you one more thing. What I want to do now is run the kn pipeline for V2: I'm going to have another version of recommendation created. We had two versions of recommendation, and this morning we did two different deployments for them. Here, instead, I'm leveraging the Knative feature where, when I change the image name, which is a new configuration, it has to create a new deployment as per the Knative specification. It still keeps the old one, so we'll basically have two revisions, okay? So I'll just start this up, and I'll also do watch oc get revisions. I have only one right now; you see this wvf88j suffix, that's the only revision we have. Once this pipeline completes, we'll have one more revision up, with the same URL; there's no change in the URL, because it's just another deployment that's going to happen, all right? So, tkn pr logs -f -a, and let's see what happens. Maybe we need to wait some time for it to start showing me the logs. This other pod goes to termination again because it's not required anymore. This one is still in the init state; let's see, oc get pods, okay, so you've got this running again. This is the V2 deployment, and once the V2 build is successful, we'll have one more recommendation revision created, right?
You can also try the native task if you were at the Quarkus talk; there is a native build task available too, but I prefer the JVM one, because the native task takes approximately four to six minutes to complete, since it needs to build a native image out of your jar, okay? So I have this again, pretty much the same steps. Let's wait a few seconds while it finishes; if there are any questions I can take them, since we have just five minutes left. Any questions? Do you like this? Okay. Yes! Okay. Thank you. All right, so while we're waiting: for that question, I don't think I have a diagram right now, but I'll try to update my GitHub repository with a diagram so that you know how the pipeline actually works. I'll do that, no problem, thank you. Do you mind opening an issue there so I remember to do it? All right, this is done, it's creating another service. There you go: I got one more version created, a new deployment, recommendation V2. One second, I'll just finish this demo and come back. So let's look at the running pods again, status Running: we have the new version of recommendation running here as well. The deployment is still happening; it's taking some time to complete. Let's start polling; the other one should also come up soon while polling. But right now, when I poll, I only get responses from one revision, and I'm not sure whether that's the latest one or the old one; it takes some time. Now we see V1, V1, V1, because as per the specification only the latest revision gets the traffic by default. Okay, let's see what other revision we have there. oc get ksvc for recommendation, and oc get revisions: I'm still on V1, but I have V1 and V2 here, and both show the same generation. Let me poll again.
Ideally, I should see the latest revision serving up here. Okay, I have not changed the V1 code, right? I'm not changing the git repository, so it's not printing V2; it has deployed a new revision, but it's still printing V1 from the same source. But at the end of the day, you can see you have two revisions, and in this way I can take any application running in a serviceful way and move it to a serverless way as well, right? I think that's pretty much all I have. I'll update the diagrams as you asked for. Any other questions I can take? Otherwise, a big thank you to you all.