Hey, hi, Debiani, how are you? Hey, hi, Karan. How is it going? Great sessions up till now? Yes, the sessions have been going great so far: algorithms and drones and Jenkins and now Istio. And now you are coming up with yet another great session, on OpenShift Pipelines with Tekton, right? Yeah. Fantastic. So guys, please welcome Debiani Chatterjee from IBM. She is a developer advocate and does a lot of great work around the CNCF community, and she also mentors a lot of students and runs webinars and hackathons, right, Debiani? So maybe, Debiani, are you going to talk about any of that? Sure. In some time today, I think Daniel Krook will be joining you. I run part of the hackathons with IBM, a program known as Call for Code. He is going to give a deeper dive into what Call for Code is, but the preparation starts sometime in Q1 next year. We continue that up until July, which is roughly the last date to submit the ideas and code solutions that you have. Then in November the top teams get selected and the prizes get distributed. These projects are incubated by the Linux Foundation and IBM, and they move on as open source projects. So this is quite exciting. I have also been working with a lot of college students, talking about emerging technology trends: mostly things on cloud, a little bit about AI and ML, bits and pieces of things. Fantastic. Debiani, it looks like a lot is going on at your side of the world. Yeah, sort of, yes. Fantastic. All right, guys. So are we good to start, Debiani, with Tekton? Yeah, yeah, let's start. Can you share your screen, please? Sure, just give me one sec. You should be able to see my screen right now. Yes, we can see your OpenShift console. OK, great.
OK, then you can bring it up from here and I will go out of the frame. OK, thank you so much, Karan, for the introduction. So folks, I'll be running a demo today. We have a very short runway of around 30 minutes, and I'll be sharing a few links with you so that you can take these home and come back to them. I have my Hopin open as well so that I can share the links with you. Also, just to let you know, my network is a little unstable today, so I'm going to switch off my cam while I'm speaking so that there is no lag and there are no jitters. I'll switch it back on once the talk is near its end. So without further ado, here is what I'll be sharing with you. This is a very short and crisp definition of what Tekton does, some of its features, and some of the things you can take a look at. Since the demo will take you through from the beginning, we don't need to go through this right now; you can take it home and check it out later. I'll be adding it to this repository later on, and I've already shared the link to the repository. What I have with me is a cluster I created on IBM Cloud. It's a very basic cluster, which I have open here, and I have already logged into it. Before we get on to all of that: if you have a cluster with you right now and you want to check out how to do this, you can follow along with me. You can read some of these parts; these are basics about Tekton. Tekton started life as part of the Knative project, to figure out and improve CI/CD capabilities, and it was later donated to the Continuous Delivery Foundation. So Tekton, as we know it, is open source, it is governed by the CD Foundation, and it is Kubernetes-native.
There are a lot of features we could talk about. Things that come up, things that are improved, things that go to market: they are always faster and easier. It runs in a Kubernetes environment; it is Kubernetes-native, so agility and control are in your hands. You also get serverless behavior: Tekton uses cloud resources only when needed, which reduces cost and enhances your control. Now, Tekton's primitives, the base on which Tekton works, are tasks and pipelines, and they are implemented as custom resource definitions, declared in YAML resource files. Along with those you have pipeline resources, which are a set of objects used as inputs to a task and which can also be part of a task's output. So a task is the basic unit of work, and a pipeline is the sequence of tasks that need to be done. A task can have many inputs and outputs: an input could be a Git source containing your application code, and the output could be your application's container image, which can be deployed to a cluster, or something like a JAR file that you upload to a storage bucket. So what we'll do is use the OpenShift 4 cluster we already have access to, like I showed here. Another thing you have to have is the Tekton CLI; the link is here if you click on it. I already have it installed on my system. If you have that ready and the cluster ready, you can follow along. I hope you can all see my screen properly; I'm increasing the font a little so it is easier to read. I have gone here, where you can copy your login command in order to connect your terminal to the cluster. What happens with IBM Cloud is that this is also available as a Cloud Shell.
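As a concrete illustration of those primitives, here is a minimal sketch of what a Tekton Task looks like as a custom resource in YAML. The task name, step name, and image are illustrative assumptions, not values from the demo:

```yaml
# Minimal Tekton Task sketch (names and image are hypothetical)
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello-task
spec:
  steps:
    - name: say-hello
      image: registry.access.redhat.com/ubi8/ubi-minimal
      script: |
        #!/bin/sh
        echo "Hello from a Tekton step"
```

Each step runs in its own container inside the task's pod, and a Pipeline then sequences tasks like this one via their inputs and outputs.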
So you can also log in with the Cloud Shell if you don't want to do it on your local machine, and connect and do things there; it takes about 60 seconds to get the setup started. For me, I'm doing it on my local machine right now. I have the login command here. If you see here, this is oc login and so on, and I have logged in and have access to my cluster already. If you keep going back to the repository, you will see that all of these steps are described in detail so that you can go home and do them as well. Your login command will be available on your screen. Now, so far we have mostly been talking about the OpenShift Playground. Yesterday, when I wanted to access Katacoda, the OpenShift Playground, to give you a place where you could try your hands at this as well, I could not find the OpenShift Playground anymore. I'm sure something will come up as a free resource for you to try. Until then, you can always go to IBM Cloud, sign up, and try it there, or you could try the Developer Sandbox from Red Hat; just Google "Developer Sandbox" and it should show up. Once you are set with the cluster and done with the login, we can start by testing the environment to see if things are working fine. You have all the commands mentioned here, so you can just copy them and they should work. I have three nodes with me. We'll go step by step; I hope we can cover everything in the next few minutes. You get the namespaces here. You could also skip these; they are not mandatory steps, just a way of getting familiar with what we have. If we do oc get projects, we get a list of all the projects available to us. We have not created our project yet.
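The login and sanity-check steps described above can be sketched as follows. The token and server URL are placeholders you would copy from your own OpenShift console, not real values:

```shell
# Log in with the command copied from the OpenShift console
# (token and server are placeholders)
oc login --token=<your-token> --server=https://<your-cluster-api>:6443

# Optional sanity checks on the environment
oc get nodes        # the demo cluster showed three nodes
oc get namespaces   # get familiar with what already exists
oc get projects     # list projects; ours is not created yet
```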
And the first thing we will do is take a look at Tekton concepts, which again are here. Once again, I'm not going to run through all the concepts, because we have to cover the demo. In short, in order to create a pipeline, what we do is: we create custom tasks or install existing reusable tasks; we create a pipeline and pipeline resources to define the application's delivery pipeline; we create a persistent volume claim to provide the volume or file system for pipeline execution; and we create a pipeline run to instantiate and invoke the pipeline. These are the steps we are going to take today, and this is the gist of what was in the first exercise. In the second exercise, we install the operator, that is, OpenShift Pipelines. This was the Cloud Shell I was telling you about; you could use it as a terminal as well, and things would work for us. So I'm here right now; I'll go to Operators and check out Installed Operators. I have this one by Red Hat, which I have not installed. If I go to OperatorHub and search for Tekton or pipelines, I should be able to get what I want. My network is really not that good today, so please bear with me while things move at a tortoise's pace. We'll keep the rest of the settings as they are; we're not changing anything, since this is a very basic trial. While the installation is going on, we'll get back to the repository. I'm not taking you through each and every detail, such as the fact that we have to be on the administrator side of the console in order to do this. So we are installing the operator right now, and once it has been installed, you should see something here. While this happens, and it is almost, yes, it is done, what we'll do is create a new project.
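For reference, the command-line equivalent of verifying the operator installation and creating the project might look like this. The project name follows the "pipelines tutorial" naming mentioned later in the demo and is an assumption:

```shell
# Verify the OpenShift Pipelines operator finished installing
# (the exact CSV name varies by operator version)
oc get csv -n openshift-operators

# Create the project used for the rest of the demo
# (name assumed from the demo's "pipelines tutorial" project)
oc new-project pipelines-tutorial
```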
While we are talking about Tekton, I think most of us know that one of the things that makes OpenShift easy to use is the GUI that comes along with it. You can create projects, delete them, and do a lot of things not just from the terminal but from the GUI as well. So we have created the project now, and next we will get the service account. What the service account does is build and push the image; this will be used later in the tutorial. And if you are using any other method to deploy the same application, you can do it from the GUI as well, through the developer perspective. So we have the project created. Now we'll just check if we have it here. The operator installation is done; we'll go to the developer perspective and check from the project. It was called something like pipelines-tutorial, was it? We have the project with us, but we are yet to build anything into it. So what we'll do is create pipeline tasks, which are steps that will run sequentially. Keep checking back on the repository to see what an example of a Maven task would look like; there are many examples you can look at in the links given here. So now we will create the first task. Its definition gets installed from the repo, and we'll be needing it for creating a pipeline next. So that has been created. Next is the update-deployment task, and that is done as well. Now, if we want to check the tasks we have created, we can list them by running tkn task ls; this is where you need the Tekton CLI. And if we check what cluster tasks we have at hand, that gives you a list as well.
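The two task installations and the listing steps above can be sketched as follows. The file URLs follow the upstream openshift/pipelines-tutorial repository layout, which may differ from the exact repository shared in the talk:

```shell
# Install the two reusable task definitions from the tutorial repo
# (URLs assume the upstream openshift/pipelines-tutorial layout)
oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/01_pipeline/01_apply_manifest_task.yaml
oc create -f https://raw.githubusercontent.com/openshift/pipelines-tutorial/master/01_pipeline/02_update_deployment_task.yaml

# List the tasks just created, plus the cluster tasks that ship
# with the OpenShift Pipelines operator (requires the Tekton CLI)
tkn task ls
tkn clustertask ls
```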
So in order to do this, you should know that the various stages of the pipeline need a storage area to communicate with each other, and this is achieved through a persistent volume claim. So we need to create a persistent volume claim here. It has been created; let's see what the status is. This takes a few seconds, and usually it is pending at first. If we go to the admin view, we can check under Storage and Persistent Volume Claims; the volume is yet to be created, and if you see here, the claim is pending right now. Once it is done, the status will change. To see whether it is done or not, you can check it here, or you can run the command again and it will tell you whether the status is still Pending or not; the status changes from Pending to Bound when it completes. Now, a pipeline run is how you start a pipeline and tie it to the persistent volume claim and the parameters that should be used for that specific invocation. In the next exercise, once the PVC is ready, we'll use the tkn pipeline start command to link the source PVC volume to the shared workspace that we reference in the pipeline task definitions. So it is bound: you run this again and see the status change from Pending to Bound. Done. Let me just clear this out a little bit. The next exercise is to assemble a pipeline. A pipeline defines a number of tasks that should be executed and how they interact with each other via their inputs and outputs. So here we are creating a pipeline, and there are a few steps that happen. You might have noticed that there is no reference to the Git repository or the image registry that is pushed to in the pipeline. That is because pipelines in Tekton are designed to be generic and reusable across environments and across stages of the application lifecycle.
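The persistent volume claim for the shared workspace could look something like this minimal sketch; the claim name and storage size are illustrative assumptions, not values from the demo:

```yaml
# A PVC giving the pipeline stages a shared workspace
# (name and size are illustrative, not from the demo)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: source-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
```

Running oc get pvc afterwards shows the status move from Pending to Bound once the volume is provisioned.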
So pipelines abstract away the specifics: the Git repository and the image are introduced as parameters, and when the pipeline is triggered, you can provide different Git repositories and image registries to be used during the execution. We'll do that bit in the next exercise. The execution order of the tasks is determined by the dependencies defined between tasks via their inputs and outputs. So what we'll do is create the pipeline. This is done. Now we have to add a parameter. We'll go back to where we were before, to the dev perspective and Pipelines. We have build-and-deploy, and if you see here: fetch repository, build image, apply manifests, update deployment. These are the steps that are going to happen. Go to the parameters tab and we'll add a parameter here. Let's see what it was: it is TLSVERIFY, and we'll set it to false. Save and reload. Now we'll take a look at the list of pipelines; this is the pipeline that is listed. Now that the pipeline is created, we can trigger it to execute the tasks we specified in it. In order to do that, let's check this out. This is going to take some time; it takes our inputs. If you go to the repository, you will see that to track the progress you can run this command; what is given there is a sample of the command that needs to be run, so you need to adapt it according to the one provided here. Once run, you should be able to see output from it. Give it some time. You could also go and watch the Pipelines view here. These are the times when the application UI and the API build take a while; the task run takes around two to three minutes. So folks, while the task run is happening, I am guessing there will be too little time for us to complete all of this.
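Triggering the pipeline with the repository and image passed in as parameters might look like this. The pipeline, parameter, and workspace names mirror the upstream openshift/pipelines-tutorial example and are assumptions about the repo used in the talk:

```shell
# Start the pipeline, supplying the git repo and target image as
# parameters and binding the PVC to the shared workspace
# (all names below are assumed from the upstream tutorial)
tkn pipeline start build-and-deploy \
  --param deployment-name=pipelines-vote-ui \
  --param git-url=https://github.com/openshift/pipelines-vote-ui.git \
  --param IMAGE=image-registry.openshift-image-registry.svc:5000/pipelines-tutorial/pipelines-vote-ui \
  --workspace name=shared-workspace,claimName=source-pvc \
  --showlog

# Follow the logs of the most recent pipeline run
tkn pipelinerun logs --last -f
```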
So I will give you a few links that you can look at. What we are doing today, at the end of everything, is triggering the pipelines ourselves, triggering the builds ourselves; if you need to make any updates to the deployment, you have to trigger it yourself. But what if you want it to happen automatically? I'll give you a link for checking out GitHub webhooks and automating the deployments. Let's see; we are still running. Once the front end is deployed, we'll start another pipeline to deploy the back end. Once both the front end and the back end are deployed, you should be able to see the application here on the topology. The application is a voting application for the ever-running debate of whether we like cats or dogs, and it should show up here. Let's see how far we are. I am almost at my ending minutes; let's see how far we can get. If I'm unable to complete the whole thing, you can always come back to the repository and try it out yourself. So the build is continuing right now. It usually takes around two minutes, but this is taking a really long time. There are days when you have to do a demo or an event, and those are the days when things take longer and your network faces issues; my apologies for that, folks, especially to Karan. As long as it's not frozen, if it's moving, then it's doing something. Fingers crossed that it keeps running. I mean, that's the beauty of reproducible demos, which I'm a fan of, like the one you're showing right now, Debiani: you have a GitHub repository, and anyone who hops onto the repo can follow the instructions and end up running the same demo at their end, right? Exactly. So there's no secret sauce; it's not that this will only run on Debiani's machine. Yes, yes. So that's fine. So this is the link, right, Debiani, that you have already given to the audience? Yes, Karan.
It's your OpenShift Pipelines link that people can use. I will post this link once again so you can try it. And I see your message that the Playground is on again; that's really great news. So folks, you can go back today, give this a try, and see how things work. I will just stop sharing my screen. Right. So there are multiple options for the audience out here. If you need a free playground environment to play around with OpenShift, you can hop onto developers.redhat.com/learn, where you can get access to very short, ready-made tutorials by our great team. And talking about a great team, we also have a great speaker and presenter, Sebastian. Hey, Sebastian, how are you? Good morning. How are you all? Good morning. So you only just woke up? Okay. So Sebastian is our technical product marketing manager at the Red Hat Developers BU, and he is joining us from France. Yes. So Sebastian, how are things in France? Well, it's still early in the morning. Winter is slowly starting, but you know, I live in the south of France, so it's not as hot as in India, but it's pretty good weather here. Okay, thanks for joining in, Sebastian. I was just telling the audience that they can play around with our ultra-short, free, instruction-based courses on developers.redhat.com/learn. We have recently migrated from an old platform to a brand new platform. Yes, a lot of short, snackable, hands-on tutorials for you, so definitely check it out. And Debiani has shared the link to her step-by-step instructions on OpenShift Pipelines. I'm pretty sure this could also be run on any OpenShift platform you can get access to. That's the beauty of OpenShift: you can run it anywhere, wherever you get an OpenShift platform.
So you can maybe get an OpenShift platform from developers.redhat.com/learn and run Debiani's playbook there; this will basically work out. Yeah. Nice talk. And I also shared another link: we have a complete tutorial. So once you're done playing on the playground and you want to go further, we have a complete tutorial; I put the link there, and you can try it out. Yeah, a lot of great content around Tekton, and OpenShift Pipelines is based on, you know, Tekton. I'm pretty sure you will have a lot of things to uncover and learn from this talk. So Debiani, anything else you want to share with the audience around your topic? No, Karan, I think we are almost at the end, wrapping up today. It would have been great to complete this, but I think it would take another 10 to 15 minutes, and the network is facing some issues. Other than that, I think we are almost set. I have shared the link; if you try it out, that should be it. Yeah. Great things take time, Debiani. And I have one question. Like the ones you put up, cats versus dogs, or Vim versus Emacs: Tekton versus Argo, what's your take on that, Debiani? Are you following that? Not yet; I have not jumped onto the Argo wagon yet. I am still with Tekton. I think both can work together nicely. Yes, yes, yes. That's right: unlike Vim and Emacs, at least Tekton and Argo can work together. I have seen people using them very effectively in production. And at Red Hat, we use both, by the way. So yes, those are great tools. Surely give them a try, and maybe come back for another talk comparing both and seeing how they work together. Yes, definitely. All right, Debiani. Thanks a lot for spending time with us and showing us a live demo of Tekton and OpenShift Pipelines.
Thank you so much, Karan and Sebastian. See you guys. Bye-bye.