I have Cedric Clyburn, an OpenShift Developer Advocate intern, presenting on cloud native CI/CD in under 15 minutes. Take it away, Cedric.

Thank you so much. And it is true: within 15 minutes or your money back, that's a 100% guarantee on my part. So we'll go ahead and get started here, but I just want to say thank you all for being here. My name is Cedric Clyburn, I'm an OpenShift Developer Advocate, and I'm here to give you a hands-on intro to cloud native CI/CD with OpenShift Pipelines. Here's how we're going to structure this: we'll start with a brief overview of what CI/CD is, then we'll introduce some OpenShift Pipelines concepts, and then we'll actually build a real-world pipeline right here, in under 15 minutes, guaranteed.

A little bit of background: I'm a student, but I'm also getting some experience; it's my second summer interning at Red Hat, and I love it, I love the people, it's a great place. But if you were me 8 to 10 months ago, you're probably wondering: what the heck is CI/CD? I've gone ahead and pulled up the definition from the Red Hat website, and I believe it really encapsulates the process: CI/CD introduces ongoing automation and continuous monitoring throughout the lifecycle of apps, from the integration and testing phases (the CI part) through delivery and deployment (the CD part).

So CI/CD splits into two halves. CI is continuous integration, the automated process for developers like us: think of the building, testing, and merging workflow in a GitHub repo that we're used to. Twenty-five minutes ago I was doing exactly this in Jenkins for some of the Java projects I'm working on. It solves the problem of having too many branches of an app in development at once that could conflict with each other, it handles things like unit tests, all of that. The latter part, CD, refers to continuous delivery or continuous deployment, and the two often get used interchangeably, but the distinction illustrates how much automation is happening. Continuous delivery means a developer's changes to an app are automatically bug-tested and uploaded to a repository, whereas continuous deployment takes that next step and gets the change into production much faster, so that people like you and me can access it right after. It builds on the benefits of continuous delivery.

But let's get to the fun part: what the cloud native version of CI/CD is. It essentially refers to a few different aspects. Firstly, on the left here, containers: it's got to support apps that run in containers orchestrated by Kubernetes, or in our case, OpenShift. Secondly, it's got to be serverless, meaning it can run and scale on demand without needing a central engine to maintain. And finally, up here on the right, it's built with DevOps practices in mind, meaning teams can have their own delivery pipelines alongside the apps they build, without depending on other teams and having to wait, things like that. So our CI/CD system should be able to do all of this. And as you already know, there are plenty of options for your projects to choose from.
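To make the CI versus CD split concrete before we look at the options, here's a minimal, hypothetical pipeline definition. It isn't tied to Jenkins, Tekton, or any real tool; the stage names, commands, and registry URL are invented purely to show where continuous integration ends and delivery and deployment begin:

```yaml
# Hypothetical pipeline config, for illustration only; none of these
# stage names, commands, or URLs come from a real project or tool.
pipeline:
  # --- CI: continuous integration ---
  - stage: build            # compile the app on every push
    run: mvn package
  - stage: test             # unit tests catch conflicting branches early
    run: mvn test
  # --- CD: continuous delivery ---
  - stage: deliver          # bug-tested changes land in a registry/repo
    run: push image to registry.example.com/myapp
  # --- CD: continuous deployment (the extra step) ---
  - stage: deploy           # straight to production, no manual gate
    run: roll out to the production cluster
```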
For example, Jenkins, which we use here at NC State University: I have a love-hate relationship with this guy, though in the end he normally helps me out. But the one we'll be taking a look at today is called Tekton. It's a powerful and flexible open source framework for creating cloud native CI/CD systems. It's been around for a minute, it's super flexible, super easy to use, it integrates straight into Kubernetes, and it allows you to build, test, and deploy across cloud providers and on-premise systems. OpenShift Pipelines, which we'll be talking about today, is the native integration of Tekton into OpenShift, and it introduces a set of tools that are composable, declarative, reproducible, and cloud native, to make building pipelines as easy as can be.

So let's quickly go through the building blocks that Tekton provides, also known as the OpenShift Pipelines building blocks. These are the main concepts that make up OpenShift Pipelines: steps, tasks, and pipelines already make up the bulk of our CI/CD system.

Firstly, we've got a step, which is a single unit, a single operation that we perform on our code. Say, for example, a unit test on your Python code, which Tekton can perform inside of a container that you provide. The one in the top-right screenshot is building our app with Maven, and the bottom-right one parses our Python code. Super simple, super basic, and you can build upon these.

Secondly, we've got a task, which is a collection of those steps: one, two, three, four, five, however many you want, from unit tests to building apps. Tekton runs the steps of a task in sequence inside of a Kubernetes pod, and tasks can in turn run independently of pipelines.

And finally, the most important part, we've got the pipeline, which, as we talked about before, is a collection of tasks that can run in a variety of ways, with different conditions that specify when each task starts and runs. Pipelines are reusable across projects, which is definitely my favorite part: it saves you and me valuable time that we could be spending on pretty much anything else. This reusability works through input and output resources, everything from Git repos to PRs to images, all sorts of things you can use the pipeline with, so it stays flexible and reusable. Whether you're working on project A, project B, or project C, they can all run with the same pipeline. I think that's super awesome; it definitely saves me a lot of time.

And lastly, we've got TaskRuns and PipelineRuns, which simply define the execution of a task or a pipeline. I'm going to show you this a little bit later in the hands-on demo, but a run specifies exactly what to feed into the pipeline, the specific Git repository, anything like that. The PipelineRun and the TaskRun tell the pipeline what specifically to do.

So here we can see it all come together. Feel free to take a screenshot and check it out later, because these pipelines are honestly a little bit fascinating. But let's get to the fun part of actually working with OpenShift Pipelines. I promised you a demo, and we're going to do that today. As mentioned before, OpenShift Pipelines is the native integration of Tekton onto the OpenShift platform.
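To ground those building blocks in actual YAML before the demo, here's a minimal sketch of how the pieces nest, assuming the tekton.dev/v1beta1 API that OpenShift Pipelines used around this time. The task and pipeline names here are invented for illustration; they are not the ones from the demo:

```yaml
# A step is one operation in one container; a Task is a list of steps;
# a Pipeline is a graph of Tasks. Names below are illustrative only.
apiVersion: tekton.dev/v1beta1
kind: Task                      # Task: a collection of steps
metadata:
  name: test-app
spec:
  steps:
    - name: run-unit-tests      # step: a single operation on our code
      image: registry.access.redhat.com/ubi8/python-39
      script: |
        pip install -r requirements.txt
        python -m pytest
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline                  # Pipeline: a collection of Tasks
metadata:
  name: test-then-build
spec:
  tasks:
    - name: test
      taskRef:
        name: test-app          # reuses the Task above
    - name: build
      taskRef:
        name: build-app         # a hypothetical second Task
      runAfter:
        - test                  # condition: only start after "test" succeeds
```

Nothing runs until you create a TaskRun or PipelineRun (for example via tkn pipeline start test-then-build), which supplies the concrete inputs, the specific Git repo and so on.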
OpenShift Pipelines also offers all of these awesome developer features: it's Kubernetes native, it's serverless, it runs pipelines in isolated containers, and it even goes as far as having a Visual Studio Code extension you can install. So there are a lot of different features on offer, but to install it, you can simply go to your cluster and search for the Pipelines operator on OperatorHub. It's super easy to use. There's also a web interface for people who are more visual learners, but I'll be running the Tekton CLI, tkn, for the rest of this demo to show you how dang easy it is to make your own pipeline.

If you'd like to follow along with this demo and you don't already have an OpenShift cluster ready, I've got you, because we've got a hands-on lab that my team at Red Hat and I actually work on. It's over at learn.openshift.com, which provides a set of Katacoda scenarios that are completely browser-based; you don't even have to have anything ready, and they'll let you follow along and learn not just OpenShift Pipelines but Quarkus, serverless, AI, and a whole bunch more demos using OpenShift. So let's go ahead and hop in there. I'm going to stop sharing and reshare so I can switch tabs, but that's where we're going to be continuing: it's learn.openshift.com/middleware/pipelines if you want to follow along with this demo. Give me two seconds here and I'm going to hop in. I do like the pun. I'm going to share my screen from the learn.openshift.com side. That should be sharing right now.

Once you're on our learn.openshift.com site, you'll get a brief outline of what you're going to be doing through this workshop. Remember, you don't have to have anything installed; these are completely free. We've got a lot more that I work on as well, but this is one that I've contributed to in the past year.

The first step to working with OpenShift Pipelines, of course, is installing the operator. I've gone ahead and done this; it takes a couple of seconds. You can do it either from the web console, as we talked about before, going to OperatorHub and installing through there, or simply with a YAML file applied to the Kubernetes cluster. Then you can verify the installation with a quick script that runs every five seconds to make sure it's installed. Once you've installed it, great, congratulations.

Let me go ahead and clear this real quick. It's a little bit buggy. What we'll do from this point on is create a new project. Actually, I might need to refresh the page; I think sharing on Hopin can be a little bit tricky on Chrome, so give me two seconds here. You should still be able to see it. As I mentioned before, you've got the brief outline right here, and we'll go ahead and reinstall this real quick. Sweet, cool. It configures it for you. There we go, we can type. All right, cool. We'll log back into the cluster, install the operator through this super easy OpenShift object, and then verify the installation.
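For reference, that super easy OpenShift object is an operator Subscription. Here's a sketch; the package name and catalog source are the standard Red Hat ones, but the channel in particular varies by OpenShift version, so treat the exact values as illustrative:

```yaml
# Sketch of installing the OpenShift Pipelines operator declaratively;
# the channel name depends on your OpenShift version.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  channel: latest                          # pick the channel for your cluster
  name: openshift-pipelines-operator-rh    # package name in the catalog
  source: redhat-operators                 # default Red Hat catalog source
  sourceNamespace: openshift-marketplace
# Apply with:  oc apply -f subscription.yaml
# Then watch the operator's pods come up (the lab's verification script
# just polls something like this every five seconds):
#   oc get pods -n openshift-pipelines
```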
And the next step, of course, is super easy: creating our Kubernetes project, or our OpenShift project. You can do this either through the terminal here or through the web console, right up here, which brings you to a new page where you have your own personalized web console that you can use for up to 60 minutes, completely free. Again, you don't have to pay for any of this, which is super cool; don't tell anybody. We'll go ahead and create it once the operator has finished installing.

But let's take a look at our first task, which I think is super awesome to see live. A task, as we mentioned before, is a series of steps that run in a desired order, doing everything from a unit test to building something. And we've got pretty much the most basic task you can imagine right here, simply echoing "hello world" to the console. Let me create this project real quick, and let's go ahead and work with this sample task. It's already on the file system, in the directory task/hello.yaml. As you can see, we've got a little bit of data about it: the name of our task is hello, the step is called say-hello, and it's literally just running a UBI image and echoing "hello world" to the console. So all we have to do is run oc apply to apply it, and then tkn task start to start it and see it in our console. It's as easy as that to get started with these tasks. We'll give it a second here while it spins up the UBI image and runs the command, and we should see it in a couple of seconds. It's going to look a little bit like this. So we'll wait for this TaskRun; as I said before, a TaskRun gives you the specifics for configuring these tasks, pipelines, and so on. And there it is: we just ran our task right here and it gave us the output "hello world". Super cool to see that live.

Working with task resource definitions allows us to make custom pipelines and a whole lot more. Here's a great example of a task with a whole bunch of different parameters, everything from a manifest directory to the different steps that apply it. You can see the specific image we're using, an origin CLI image, and then arguments like we had before, where through the say-hello step it was as simple as echo "hello world". We're going to take this a little bit further by applying custom manifests to build our application when we start this pipeline. So we'll apply these three tasks, which I'd love to get into more detail on, but we've got to run through this fast and I don't want to take up too much of your time. We've applied three different tasks here, from creating a persistent volume claim to updating a deployment to applying manifests. And if we run tkn task ls, it shows us which tasks are active: the hello task from before, which we applied a minute ago, plus apply-manifests and update-deployment. This one here is apply-manifests; we can see it's in place now.
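For anyone following along, here's what that task/hello.yaml from a moment ago looks like, reconstructed from the description above (the exact UBI image tag in the lab may differ):

```yaml
# The most basic Task imaginable: one step, one container, one echo.
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello
spec:
  steps:
    - name: say-hello
      image: registry.access.redhat.com/ubi8/ubi   # a UBI image
      command: ["/bin/bash"]
      args: ["-c", "echo Hello World"]
# Apply it, then start it and follow the TaskRun's logs:
#   oc apply -f task/hello.yaml
#   tkn task start hello --showlog
```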
And now let's get to the final part: actually creating a pipeline. A pipeline is a collection of these different tasks, with different parameters we can use to start and stop them, things like that. The pipeline we're actually running combines an API and a UI for a voting application and pushes them out using the two tasks we had before, apply-manifests and update-deployment, to spin up a pod, actually two pods, that will be live on our OpenShift cluster and publicly accessible on the web. If you keep following along, within 10 minutes you'll be able to see this live. But before we wrap up, let's take a quick look at this pipeline and what things really look like when you're working with OpenShift Pipelines.

Here's what it's made of: you've got the same metadata as before, and our build-and-deploy pipeline is right here, with a bunch of different parameters and specifications. As I said before, these are completely reusable, so you won't see any hard-coded specifics in these pipelines: the deployment name is custom, and for the Git URL there's of course no specific URL baked in, but we know it's a string. And when you get down here, you can see all the tasks that are going to run, everything from working with the builder down here to creating a workspace and using the apply-manifests task from before. So there are a bunch of different steps, and the last part is using tkn pipeline start to trigger this pipeline with a bunch of custom parameters, from our Git URL to our image to our registry, and then actually deploying it live to a cluster, which you can do after this presentation. I highly encourage you to do so, because it's a great learning experience, and you can continue on to verify the deployment and actually have it live.
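For anyone who does follow along later, here's a condensed sketch of that build-and-deploy pipeline. The task and parameter names follow the public OpenShift Pipelines tutorial this lab is based on, but it's trimmed down here, so treat the exact values as illustrative:

```yaml
# Condensed sketch of the lab's build-and-deploy Pipeline; task and
# parameter names follow the public pipelines tutorial, values trimmed.
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  workspaces:
    - name: shared-workspace      # backed by the PVC created earlier
  params:                         # no hard-coded specifics: fully reusable
    - name: deployment-name
      type: string
    - name: git-url
      type: string
    - name: IMAGE
      type: string
  tasks:
    - name: fetch-repository      # clone the app source
      taskRef:
        name: git-clone
        kind: ClusterTask
      workspaces:
        - name: output
          workspace: shared-workspace
      params:
        - name: url
          value: $(params.git-url)
    - name: build-image           # build and push with the builder
      taskRef:
        name: buildah
        kind: ClusterTask
      runAfter:
        - fetch-repository
      workspaces:
        - name: source
          workspace: shared-workspace
      params:
        - name: IMAGE
          value: $(params.IMAGE)
    - name: apply-manifests       # the custom task from earlier
      taskRef:
        name: apply-manifests
      runAfter:
        - build-image
      workspaces:
        - name: source
          workspace: shared-workspace
    - name: update-deployment     # point the Deployment at the new image
      taskRef:
        name: update-deployment
      runAfter:
        - apply-manifests
      params:
        - name: deployment
          value: $(params.deployment-name)
        - name: IMAGE
          value: $(params.IMAGE)
```

Triggering it is then one command, tkn pipeline start build-and-deploy, with -p flags for the Git URL, image, and deployment name (plus a -w flag for the workspace); that command is what creates the PipelineRun with your concrete values.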
Unfortunately, I don't have enough time today to run that, but I do have enough time to thank you for being here. I appreciate your time, and I hope you learned something today about cloud native CI/CD: how to get started, how to create tasks, how to work with pipelines. Once again, I appreciate your time. My name is Cedric Clyburn; feel free to reach out. I just created a Twitter (I might be in college, but I'm just not too tech savvy with Twitter), so feel free to follow me, I follow back, and check out some of my other presentations. I've worked with OpenShift Pipelines a lot, but also other things like Odo, Odo's a favorite of mine, and getting started with OpenShift. But again, thank you so much for being here. My name is Cedric Clyburn, and enjoy the rest of your day at DefCon. I'm happy to stay on here and answer any questions if you all have any.

Yes, good. Okay, now don't just jump off before I ask you questions. Can't do that. So, not many questions yet; I dropped one in earlier. I don't know if this is a preview or not, but here you go: can you comment at all on something like Argo Workflows versus Tekton Pipelines? How do you see them fitting together in the grand scheme of things?

Yeah, wait, what was your last question? How do you see them fitting together in a system? If they fit in at all, or do they compete against each other? So yeah, I've worked a little bit on the workshops we have with Argo, and I honestly think they're great complementary tools. Personally, I've never used both of them at the same time, and I feel they have great applications for different purposes. If I want to be a little more nitty-gritty and see the specifics of what I'm trying to run, all the steps and everything, I would definitely go with Argo. OpenShift Pipelines is more for you if you're a CLI type of person, if you like working with YAML files right in the terminal. I was actually introduced to OpenShift Pipelines first in my CI/CD journey, so that's what I'm more familiar with, but I think they both have their distinct advantages and disadvantages. I'm a big OpenShift Pipelines guy, but there's love for both. Exactly, right, we're all just trying to solve problems. Yeah, exactly, exactly. And I feel Argo CD is great if you want more specifics, but Pipelines is very native to the whole OpenShift ecosystem; it's literally built in, they've got an entire web console within the OpenShift console. So it's easier to work with if you're a visual person and you just want the simple steps.

If you're still here: normally that whole presentation takes 50 minutes to an hour, half an hour of presentation and half an hour of lab, but I had to condense it to 15 minutes, actually 10 minutes beforehand, though they gave me a little more time, which I appreciate. So I tried to condense all the information, but there's definitely more out there, and if you search YouTube, my team has the full hour-long talk on using OpenShift Pipelines, which is a great introduction as well. So yeah, good question.

Do you have any links you can drop in chat for our audience right now? Yeah, absolutely, I can bring it up. It's not me specifically, it's Brian from our team, who I love and adore; he's a great talker. So I'll go ahead and throw that in there if you're still here. Two seconds. So yeah, thank you so much, and I do appreciate it. Yeah, outside of that, it doesn't seem like we're getting too many questions, so I won't keep you for too long.