Hello everyone. Welcome to the Jenkins Online Meetup. Today we will be talking about using the Tekton Client plugin with Jenkins — using Tekton from Jenkins. Our presenters today are Vibhav, James and Gareth. Thanks a lot for your time and for joining the call. Next slide please. Okay, so today we have three meetup hosts: Himadri, Kara and me. We will be doing all the logistics and helping with Q&A and the discussion. Let's go ahead, next slide. Okay, just a few words about the Jenkins Online Meetup. It's a meetup organized by Jenkins contributors for users. We mostly focus on case studies, war stories, presentations, live demos and live discussion. As you can see from this slide, we don't care too much about slides, but we really care about the discussion and your participation in the chat, and we will have Q&A there. If you want to know more about the Jenkins Online Meetup, there is a meetup page here, which is basically our landing page, and there is also an online meetup description on the events page with more information. Next slide please. Okay, a quick heads-up: there will be a few other events soon. The upcoming webinar is on May 27th, about the Kubernetes Operator for Jenkins; this time it will be in EMEA and Americas time zones. And as you might have noticed, we are reviving the series of Jenkins and Kubernetes meetups, so stay tuned for more announcements. There are also more Jenkins meetups coming soon. In June we will have cdCon with a lot of Jenkins content; it is a free-to-participate online conference, so please register, and we will send links later. Then on June 25th we will have the Jenkins Contributor Summit. One of the topics there will be Jenkins on cloud platforms, including Kubernetes, and we plan to have discussions about Tekton and Jenkins interoperability, so if you're interested in this topic, please join. And in September we will have DevOps World, again with a strong community agenda. Over the past two weeks we've been doing Jenkins online meetups describing the community agenda and the call for papers, so please take a look — it's still possible to submit your talks there. Next slide. So, since we are talking about Jenkins on Kubernetes, we are looking for speakers. Any topics related to Jenkins and Kubernetes are welcome: your case studies, automation, plugins for Kubernetes — we have dozens of them in Jenkins, and you might be working on something else — also any parts of the ecosystem like Jenkins Helm charts and operators, basically anything that is interesting, as well as integrations with Kubernetes tools like, say, Open Policy Agent, Prometheus and whatever else you have in your cloud native ecosystem. So, that's the quick introduction. If you want to speak, there is a link for online meetup speaking — I will post a link to the slides once I finish — and we invite all kinds of presentations. So, Himadri, Kara, would you like to add something?

Yeah, hello, hello everyone. This Jenkins online community is a really interesting community, and I also joined recently, so if anyone is interested in joining the community, please do — there are a lot of things to work on and a lot of things to learn from others as well, and it will be fun. So if anyone is interested, please do get in contact.

Hi all. I'll just say that we also have a Cloud Native SIG which runs every Friday, and I've put the link in the chat — it's on the jenkins.io site among our SIGs.
We discuss a lot of cloud native initiatives, and you can bring your own ideas. We've discussed the Tekton Client plugin there a lot, as well as other initiatives that are happening in the cloud native Jenkins space. So please do join us. And next slide please.

So — there should have been a Q&A slide, which apparently will just be the next slide. Okay. So, to quickly summarize how we do this meetup: speakers will do the presentation, and we will have Zoom Q&A, so at any time during the meetup please feel free to ask questions. Our meetup hosts will either process the questions offline or ask the speakers when there is an opportunity. After the meetup we will have an open discussion: once we finish with all questions, we will stop the recording and grant everyone voice permissions so that you can ask any question off the record — anything related to Jenkins, maybe something unrelated to your work. It's basically our after-party period, and of course we invite all speakers to stay as well. After the meetup, as Kara said, we have the Cloud Native SIG — please use the link shared in the chat — and there is also the Cloud Native SIG channel where you can ask any questions related to this presentation offline. There is also an interoperability SIG in the Continuous Delivery Foundation where both the Jenkins and Tekton projects are represented, and this is also a good channel to ask questions; I will add the links during the presentation. So, that's it: again, Zoom Q&A for questions, and we will have a lot of opportunities for discussion after the webinar. Yeah, the information is in the slides. If you could return one slide back — yeah, we just want to welcome Gareth, James and Vibhav. Thanks a lot again for joining the meetup and for working on the Tekton Client plugin. It's a great addition to the Jenkins ecosystem for everyone who wants to use Jenkins in cloud native environments, and we are looking forward to your presentation.

Okay, so thank you everyone for joining today's online meetup on using the Tekton plugin. My name is Vibhav and I work at Red Hat. I work on Red Hat's Jenkins images and OpenShift-specific plugins, and I also work on Tekton on the side. Over to you, James and Gareth.

Shall I go next? I'm James Strachan. I used to work at Red Hat a few years ago; now I work at CloudBees, mostly on Jenkins X and Tekton, but a little bit of Jenkins as well.

And I'm Gareth. I've never worked at Red Hat, and I used to work at CloudBees — I'm currently in between roles — but yeah, I worked at CloudBees on Jenkins X and Jenkins.

Let's move on to the agenda. So today we'll be going over a few things related to the Tekton Client plugin. First we'll understand what Tekton itself is — get a little bit of an introduction and get acquainted with some facts about Tekton. Then we'll see why we decided to make this plugin, and look at the differences between Jenkins and Tekton, some of the paradigms, and some of the problems we are trying to solve. Then we'll look at how to install Tekton and Jenkins so that we can use them with the Tekton Client plugin. And then we'll look at a demo of the Tekton Client plugin, which Gareth will give, and after that we will see what is next for the plugin itself. So yeah, let's start with understanding Tekton and getting to know it. Tekton is a CI/CD tool for Kubernetes, and it is used to create declarative pipelines with CRDs: each resource in Tekton is actually a custom resource definition in Kubernetes.
So it is built using the building blocks that Kubernetes itself provides — essentially a custom controller. And since it's Kubernetes native, it's built with everything that Kubernetes provides, and it's designed to provide serverless pipelines. This is done by using containers as individual steps within tasks, and we'll talk more about that later. It also has powerful user interfaces — it has its own command line tool and dashboard — and you can basically use anything that you can use in Kubernetes with Tekton, including RBAC. And there are a lot of integrations, such as custom tasks, with which you can directly use your own custom resource definitions and have Tekton watch them and do other things, but that's a different story.

Let's move on to some of the basic Tekton concepts we will be using today and which we'll need to know for using the plugin. The most basic element of a pipeline in Tekton is a step, and a step is basically an execution that happens in a container. So the parallels are: a step maps to a container, a task maps to a pod, and a pipeline is an orchestration of those tasks. When we define a task, we define multiple steps which run sequentially, and each of those steps runs in a container of its own. A task is basically a template from which we can build a TaskRun; a TaskRun is nothing but a pod, and when a TaskRun is created and its steps are running, those steps are actually running as containers inside the TaskRun pod. That's how the parallels are drawn from Tekton to Kubernetes. And when we create a PipelineRun — a pipeline being a graph of tasks — it orchestrates all those tasks and determines their order of execution. So these are the few concepts we'll be using today. PipelineResources were used for inputs and outputs to tasks and pipelines, but they are now deprecated.

Let's move on to the diagram to see what execution actually looks like. When we create a PipelineRun, it is created based on a pipeline which is already defined, to which we can give inputs and outputs and so on. Each task in the pipeline runs as a TaskRun after the PipelineRun is created; each TaskRun is nothing but a pod, and each step runs as a container. So a TaskRun maps directly to a pod in which the steps run as containers, sharing the pod's environment — so we can share things like workspaces. You can also have multiple TaskRuns share an environment by giving them a persistent volume, so they can pass resources between themselves and do things that need a shared context. So this is what it looks like when you run that kind of pipeline: these are the concepts that let us run pipelines on Kubernetes, and Tekton helps with that. What the Tekton Client plugin does is help you use Tekton for this, so users don't have to learn most of Kubernetes — all they need to know is how to define their pipelines and how the simple Kubernetes concepts work, and then they can run their pipelines on Kubernetes.
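To make those parallels concrete, here is a minimal sketch of the CRDs involved (names and images are illustrative): each step below becomes a container, a TaskRun of the task becomes a pod, and creating the PipelineRun is what actually starts execution.

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: build            # a TaskRun of this task becomes one pod
spec:
  steps:                 # each step becomes one container in that pod
    - name: compile
      image: golang:1.15
      script: go build ./...
    - name: test
      image: golang:1.15
      script: go test ./...
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: demo-pipeline    # a pipeline is a graph of tasks
spec:
  tasks:
    - name: build
      taskRef:
        name: build
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun        # creating this starts the execution
metadata:
  generateName: demo-pipeline-run-
spec:
  pipelineRef:
    name: demo-pipeline
```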
So let's move on to what it looks like right now to use Tekton via the Tekton plugin, versus what it used to look like running Jenkins pipelines on Kubernetes. James Strachan will take over.

Thank you. So you might be looking at Tekton and looking at Jenkins and thinking: what's the point, why is this different? It's the same but different. From 30,000 feet it's conceptually the same — it's a way of running pipelines on Kubernetes. It's one of those things where the devil's all in the details: the implementations are completely different. But from 30,000 feet, you can write a pipeline in Jenkins and you can write a pipeline in Tekton, and Jenkins can trigger the Tekton pipeline. Where things differ is how it all works under the covers. Now, as you probably know if you use Jenkins quite a bit, when you run lots of Jenkins pipelines, all of the pipelines run in a single process: the Jenkins controller. The Jenkins controller is where all the pipelines run, and it orchestrates all of the steps in all the pipelines using remoting, which is chatty. Even if you're using Jenkins with Kubernetes agents and the Kubernetes plugin, you're spinning up pods on Kubernetes which then chat to the Jenkins controller, and the controller is talking all the time: now run step one, now run step two, now run step three. So it's very chatty, and the Jenkins controller is a single point of failure. There are various other issues too: Jenkins Pipeline typically needs lots of Jenkins plugins to implement itself, so you need to align your Jenkinsfile, the agent and the controller so all the plugins are the right version and set up correctly for your pipeline to run. With Tekton, a pipeline is completely standalone. Each step of each task is completely arbitrary — you can use any image from anywhere on the internet for any step, and those images are only used by that one pipeline. So you could have 100 pipelines running in parallel, all with completely different "plugins", to use the Jenkins term, and there's no issue: no version conflicts, no plugin versions not matching other plugin versions. So it's really easy to put all of your versioning on the Tekton side of the house and have a very simple controller that doesn't need to change very much. In terms of becoming more cloud native, the more you can make your pipelines completely standalone, the easier it is to operate them over time. There's another big difference in the whole networking story: with a traditional Jenkins pipeline, if you're using, say, the Kubernetes plugin or Kubernetes agents, you need to install the JNLP agent into every pod, and then there's this constant chattiness between the controller and the agent telling it to run each step in turn. As an end user you then have to write pod YAML to define all of your container images, and your Jenkinsfile has to refer to the step containers in your pod YAML. So you're constantly working across two separate files, remembering which step runs in which container, and what was the image again, and where is everything installed. With Tekton, each step is completely standalone — it's basically an image and a command. So it's really, really easy to define multiple steps, get them all related to each other, in a single separate file.
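As an illustration of that two-file coordination with the Kubernetes plugin, here is a minimal sketch (image and commands illustrative) — note how the container name in the pod YAML has to stay in sync with the container() step in the pipeline:

```groovy
// Jenkinsfile using the Jenkins Kubernetes plugin: the pod YAML and the
// pipeline steps live side by side, and every step must name its container.
podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
    - name: golang            # this name...
      image: golang:1.15
      command: ['sleep', '99d']
''') {
  node(POD_LABEL) {
    container('golang') {     // ...must match this reference exactly
      sh 'go build ./...'
    }
  }
}
```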
Let's do the next slide — there we go. So generally, think of Tekton as similar to Jenkins pipelines; it's just another way of doing it that tries to reuse the Kubernetes platform more. With a traditional Jenkins pipeline, the Jenkins controller is invoking each step, checking what each step does, and so on and so forth. When it comes to Tekton, the Kubernetes orchestrator is basically running each pipeline: the Tekton controller turns each pipeline into pods, and once a pod is created, you're just using the normal Kubernetes orchestrator. So if a pod dies, Kubernetes will automatically restart it — your pipelines restart themselves magically. If the machine a pipeline is running on dies, it's started automatically on another machine for you. That's reusing, essentially, the Kubernetes orchestrator. So your pipelines are serverless in the sense that there's no long-running controller orchestrating them — Kubernetes orchestrates them — so there's no single point of failure per se for your cluster. Another big difference — and again, it's an implementation detail you don't notice from 30,000 feet — is that the traditional approach with Jenkins tends to put everything in one big container image, with all the tools you could possibly need. With Tekton we tend to avoid kitchen-sink images: each step will just use one particular image, like the upstream Maven image, or the upstream Node image, or the upstream kubectl image, and you tie them all together as multiple steps. There are similarities between the two in that there's a shared file system in a Jenkins pipeline and in a Tekton pipeline — conceptually similar — but in terms of getting the most out of Kubernetes, they're hugely different. Now, it's worth remembering, by the way, if you're already using Jenkins with Kubernetes — maybe the Kubernetes plugin or Kubernetes agents — none of this takes anything away. We're not saying get rid of what you have; we're saying consider using Tekton as a way of making your pipelines more self-contained, so each pipeline is truly self-sufficient and not coupled to anything else. It also makes your pipelines more reliable: with Jenkins it's quite easy to write a bad pipeline that takes down your entire controller, which then breaks all of your pipelines. In Tekton you could write the worst pipeline you could imagine, and if that pipeline dies, that pipeline dies — it doesn't affect any of the other pipelines. So this serverless model in Tekton does make it easier to develop pipelines that are independent of each other, and makes it easy to scale all of your pipelines without worrying about how many Jenkins controllers you have and so forth. Next slide. Oh, in fact, let's go over to you, Gareth, for how to install it.

Yep. So it's a slightly more involved installation than a standard plugin to get the Tekton Client plugin working. You obviously need access to a Kubernetes cluster to get Tekton installed and running. You don't necessarily need to be running Jenkins itself on the same Kubernetes cluster — it could be a different cluster, it could even be running on a VM somewhere; it doesn't have to be the same thing, it just gives you more options. So our recommendation is to install Jenkins and Tekton into different namespaces. You can just use the standard Jenkins Helm chart to install Jenkins.
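As a sketch of what that looks like with the jenkins/jenkins Helm chart — assuming the chart's controller.installPlugins list, and assuming the plugin's ID is tekton-client; check the plugin page for the exact ID and for pinned versions:

```yaml
# values.yaml (excerpt) for the jenkins/jenkins Helm chart
controller:
  installPlugins:
    # the chart usually wants name:version pins; pins omitted here for brevity
    - kubernetes            # Kubernetes agents (optional)
    - workflow-aggregator   # Pipeline support
    - git
    - tekton-client         # the Tekton Client plugin; dependencies are pulled in
```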
Keep it nice and simple. The Helm chart has the ability to specify additional plugins you want to install; they get automatically downloaded and configured, with all of the dependencies they need, so you can configure that section. And then you put Tekton into a separate namespace. Now, when you run the Jenkins Helm chart, Jenkins runs as a jenkins service account. What you need to do is give Jenkins permission to access the Tekton resources in the other namespace. If you go onto the Tekton Client plugin GitHub page — I don't have a link here — there is a doc with installation instructions which gives you the role binding you will need to create to allow that access. It's pretty straightforward, and you can apply it using Helm or just a straight kubectl apply if you need to. For some of the demos, if you just want to try it out locally and you've installed Jenkins manually, you'll probably be running as cluster admin, in which case you don't really need to grant any permissions because you already have all the permissions you need. Like I said, this is the standard Jenkins CI Helm chart. So this is kind of describing the environment that I'm going to do a quick demo on. Now — James or Vibhav, does anyone want to add anything, or have we got any questions to answer before we go into the demo?

We've got one question — I'm just answering it in the chat — which was about pipeline as code.

How about we do the demo and then we keep going with questions at the end?

Yeah, that's the best approach, because I believe this question will be answered by the presentation. And if anyone has more questions, please don't hesitate to ask them; I will make sure to process all of them. We have plenty of time today for that.

Cool. So I'm just going to stop sharing and share my desktop instead. I'm just going to show you this. So, is that coming through? Okay. Someone give me a thumbs up. Yeah, cool. Okay, great. So this is a local test server that I have running. It is using Docker Desktop for Mac with its built-in Kubernetes support — so it's not kind, but it kind of is: a local Kubernetes cluster with Docker nodes. What do you have to do to get it working? Well, the only thing you really have to do is install an ingress controller. That's just a point to remember, because by default everything is running on localhost but you can't access it — just a point to note there. So what I have — I'm going to show you some namespaces here. I have Jenkins installed inside the jenkins namespace and Tekton installed inside the tekton-pipelines namespace, and you can see the ingress, which is just an NGINX ingress controller running there. It's pretty straightforward. What we do is we tend to use a custom image for Jenkins, with the plugins downloaded and built in, so that we're not downloading on restart — it tends to reduce pod restart time and keeps the versions consistent with what we know is there. So if we go and have a look at Jenkins, just to see what it looks like — the jenkins namespace — you can see we've got a pod running. It's been running a little while and seems to be going okay. We've got some agents, and it's all configured in a StatefulSet, which is the new Helm chart version 3 way of doing it.
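The role binding in question is shaped roughly like this — a sketch assuming Jenkins runs as the jenkins service account in the jenkins namespace and the Tekton resources live in tekton-pipelines; the authoritative rules are in the plugin's installation doc:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tekton-access
  namespace: tekton-pipelines        # where the Tekton resources will be created
rules:
  - apiGroups: ["tekton.dev"]
    resources: ["tasks", "taskruns", "pipelines", "pipelineruns"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["pods", "pods/log"]  # needed to stream step logs back to Jenkins
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-tekton-access
  namespace: tekton-pipelines
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: jenkins               # the Helm chart's service account
roleRef:
  kind: Role
  name: tekton-access
  apiGroup: rbac.authorization.k8s.io
```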
But one thing to note is that, because this is running locally on localhost, I don't have any webhooks configured from GitHub to talk to this cluster, so I'm going to have to invoke some jobs manually. So, what I want to show you: I have a repository here — my main branch — this is my sort of test repo, and it is available on GitHub; I can put the links up, or you'll be able to see it. Inside the repo I've created a .tekton folder with a hello-world YAML, and that is a simple PipelineRun which has the pipeline and tasks embedded inside it. So if I open this up: I'm generating a name when it comes through. I have configured a workspace — a workspace is, as James was saying earlier, a shared volume that can be used between tasks, and it's kind of passed between them. There are some Tekton-specific things you need to be aware of in the order that tasks run, to make sure the workspace gets correctly passed between them, because Tekton will try to optimize the running of the tasks a little bit. And then in here I have a pipelineSpec. Now, I've defined a number of parameters — these are actually dynamically set by the Tekton Client plugin. It will clone the repo, have a look at it, and it knows whether you're building a branch or a PR, and it passes that information on to Tekton, so that you correctly build a PR or the main branch, or however you have it configured. This next task is using the git-clone task straight from the Tekton catalog — I've installed that task manually into the namespace, and then I can reuse it by referring to it from the fetch task. I pass in the repo URL and the SHA that I want to clone, and it will clone the repo for me at that particular point, either on the branch or on the pull request that I've submitted. In the next task I want to build the application. I've specified that it runs after the fetch-repo task and uses the same workspace, just to make sure that I have actually got the source there. And I'm using a golang image. Now, this is one of the really nice things about Tekton: I don't have to create an image based off, I don't know, Ubuntu, with all of my build tools and the right version of golang installed onto it. I can just use the proper upstream images that are official and supported, and I know exactly what's in them — here I use golang 1.15, and I'm basically calling make. This application is very basic — it just has one file in it, so it doesn't really do much — but it highlights how it all works together. And then I'm doing a Docker build, which I want to run after I've built the application. I'm using the Kaniko task. Kaniko is a sort of in-cluster Docker image builder, which means you don't have to mount the Docker socket, and it runs natively inside a Kubernetes cluster — it's a lot more secure than trying to use a Docker daemon. And just because I haven't got the credentials to push to my Docker Hub installed yet, I've put no-push as an extra arg, so I'm building the image but not pushing anything. Cool, that's what my pipeline looks like. So I'm going to move this for a second, just so we can see it working.
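Putting the pieces Gareth described together, the hello-world file is shaped roughly like this (a sketch reconstructed from the description; the parameter names, catalog task parameters and repo URL are illustrative):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: hello-world-        # let Kubernetes generate a unique name
spec:
  params:                           # values like these are injected by the plugin
    - name: REPO_URL
      value: https://github.com/example/hello-world.git
    - name: REVISION
      value: main
  workspaces:
    - name: shared-data             # shared volume passed between the tasks
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 100Mi
  pipelineSpec:                     # pipeline and tasks embedded in the run
    params:
      - name: REPO_URL
      - name: REVISION
    workspaces:
      - name: shared-data
    tasks:
      - name: fetch-repo            # reuses the git-clone task from the catalog
        taskRef:
          name: git-clone
        workspaces:
          - name: output
            workspace: shared-data
        params:
          - name: url
            value: $(params.REPO_URL)
          - name: revision
            value: $(params.REVISION)
      - name: build
        runAfter: [fetch-repo]      # make sure the source is there first
        workspaces:
          - name: source
            workspace: shared-data
        taskSpec:
          workspaces:
            - name: source
          steps:
            - name: make
              image: golang:1.15    # official upstream image, no custom tooling
              workingDir: $(workspaces.source.path)
              script: make
      - name: docker-build
        runAfter: [build]
        taskRef:
          name: kaniko              # catalog task; builds without a Docker daemon
        workspaces:
          - name: source
            workspace: shared-data
        params:
          - name: IMAGE
            value: docker.io/example/hello-world
          - name: EXTRA_ARGS        # not pushing until registry creds are set up
            value: --no-push
```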
I'm going to switch to the tekton-client namespace and watch the pods that are running in there, just so you can see them start up — hopefully this text is big enough for you to see; if it's not, please give me a shout. And in this window here, I'm going to watch the PipelineRuns — you can use "prs" as a short name for that. There should be nothing there yet. So, back to my Jenkins instance: discover my repo, which finds the main branch, and I'm just going to build now and have a look at the logs. This is a standard pipeline, nice and simple. I'll show you what the pipeline actually looks like that triggers this, but it could be a freestyle job as well as a pipeline or a multibranch job — it's just standard things to kick it off. You can see the pods have started and the PipelineRuns have been created and are in a running state. It has created three pods — well, actually four, because Tekton has an affinity assistant to help things run on the same nodes — but we have three different pods: one to fetch the repo, one to build the application, one to do the Docker build, and it's going through. You can see on the left-hand side the logs are streamed nicely. And that is pretty much it. You can see that the pipeline was very fast in terms of pod startup time — there were no extra JVMs or agents that needed to be started, so it was very quick. Obviously, the first time you run a pipeline it will probably need to download those Docker images and take a bit longer; once they're on the node, they're cached and it's nice and quick. And you can see here we have some nice reuse of the components. Cool. So I was just going to quickly show — just make this bigger — what the actual pipeline looked like for that. I'm using a standard Jenkins pipeline. I'm saying it can run on any agent. I need to put the checkout scm step in there, because otherwise I don't get all of the environment variables set — if you configure this as a pipeline job you do need it; if you configure it as a multibranch pipeline you don't — just something to be aware of. You need that step there to make sure all the right environment variables are set, which the Tekton Client plugin can then interrogate and pass on to Tekton as parameters. And then the interesting line is the Tekton create raw step. It creates a raw resource based off that hello-world YAML — here it's a file, but there are other input types it supports: you can inline YAML here if you want, or you can specify a remote URL if you want to use something straight from the catalog or somewhere else. And I'm specifying a Tekton namespace here to do this.
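The triggering Jenkinsfile is tiny — something like this sketch (step and parameter names are from my reading of the plugin's README; check the docs of the version you have installed):

```groovy
// Jenkinsfile: Jenkins just checks out the repo and hands the YAML to Tekton.
pipeline {
  agent any
  stages {
    stage('Run Tekton pipeline') {
      steps {
        checkout scm   // needed in a plain pipeline job so env vars get set
        // create the raw Tekton resource from the file in the repo;
        // inputType can also be 'YAML' (inline) or 'URL' (remote file)
        tektonCreateRaw(inputType: 'FILE',
                        input: '.tekton/hello-world.yaml',
                        namespace: 'tekton-client')
      }
    }
  }
}
```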
I've also got a couple of PRs that I can show on local branches — I'll just show the code, I think. So this is quite nice. One of the things that Tekton is really good at is that it allows you to build up, or compose, your own pipelines from multiple tasks that already exist. Somebody could create a task that is installed into your cluster, and that task can be reused by multiple pipelines — that's the basis of the Tekton catalog. So one of the quite nice things we have — I'm just going to show you this small piece here — is using jx-release-version from Jenkins X, which is a tool that interrogates git and calculates what the next version for a release should be, based on the conventional commit history. So this is a way that you could create a task and share it within a cluster if you wanted to. It produces a next-version result from this task that can then be used later in the pipeline — writing to the results file is the Tekton way of sharing state, almost, between steps. And the way I'm using it here is that when I want to build my Docker image, I can refer to that version and pull it straight out. The other one I'll go to is an example of a more complex Jenkinsfile that uses some of the standard pipeline features: if I wanted to run a different pipeline on a change request versus my main branch or my release branch, I can use the standard semantics in here to tell it what to run. So that is pretty much what I have as a demo. I'll stop sharing and switch back. Has anyone got any questions about the demo whilst I'm on here?

Yeah, there were some questions which seem to be related to the demo. The first question asked: if you want a pipeline-as-code approach, how should we implement it? I believe we answered this question during the presentation; if not, please follow up. The next question is: how should we decide the choice of tools — does it depend on where we are migrating from? I guess it's rather a question of whether we use Tekton, or whether we use a classical Jenkins pipeline, or maybe Tekton under the hood. So, what do you think about that?

That's a great question, I think. One of the things to think about is which Jenkins plugins you are using — if you specifically want to use a particular Jenkins plugin, that's going to dictate how much Jenkinsfile you use versus how much Tekton. The sweet spot of Tekton is "I just want to run, you know, make or Go or Node — I want to run a bunch of tools together", and then when that's finished, I might want to do something Jenkins-y in the Jenkinsfile. So getting that split between the Jenkinsfile and Tekton right is the thing. If, like a lot of pipelines, you're literally just doing Maven stuff or Node stuff or Go stuff or whatever, that can just be in Tekton — nice and simple and easy and standalone. But if you really want to use the Jenkins stash plugin, or the JUnit report plugin, or whatever ships with the Jenkins controller, then do that stuff in the Jenkinsfile. Sometimes, if you know Jenkins really well, you're probably going to do things more in Jenkins than in Tekton — it's one of those cases of finding the right balance and the right tool for the job.

Yeah, at least it provides some freedom. And if you ask Jenkins experts and heavy users, I think the actual recommendation is to do less in Jenkins — there was literally a presentation named like that by Jesse Glick at one of the Jenkins World conferences, 2017 I believe — because the recommendation is, if you use common build tools like Maven and Gradle or whatever, delegate more to those tools, and use Jenkins as the broker, as the specific integration.

This is still very early in the Tekton plugin's life — it's only just gone 1.0 — so I think we still need to build up this kind of best-practice recommendation of how we think you should be using it. But I think you're right: the more we can delegate operations to the underlying tools like Gradle, Maven, whatever, the more easily that lends itself to Tekton, with Jenkins as the orchestrator of those things.
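To picture the results mechanism Gareth showed a moment ago — the Tekton way of sharing state between steps and tasks — here is a sketch with illustrative names:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: release-version
spec:
  results:
    - name: next-version        # declared result, backed by a file
      description: the next release version, from the commit history
  steps:
    - name: compute
      image: alpine
      # each result gets a file path; writing to it publishes the value
      # (in the real task, jx-release-version computes this)
      script: echo -n "1.2.3" > $(results.next-version.path)

# ...and a later task in the pipeline consumes it as a parameter:
#   params:
#     - name: VERSION
#       value: $(tasks.release-version.results.next-version)
```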
So, you know: what do you need to do before Tekton, and what after? One thing we could improve is making it easier to get state from the Tekton pipeline to the Jenkins controller. In Jenkins we've often used stash and unstash and those kinds of things inside a step. It's really confusing the first time you use Jenkins: you might have a pipeline and think the pipeline is running on a node somewhere, but it's actually running on the controller, and you're never quite sure where a step is running — on the agent or on the controller — and where the state is, and you've got to stash and unstash, and it is a bit confusing. It would be kind of nice to have very simple canonical steps in Tekton land where we can just say "I've finished my Tekton pipeline, now upload all of this state to the controller", and after we go back to the Jenkinsfile we can then use any of the standard Jenkins plugins for JUnit reporting or something like that. So it'd be nice to polish the experience a bit more, so that after Tekton has run, you can use state from inside Tekton in Jenkins. But yeah, I think we're all gradually learning how to improve this space, so any feedback from users will be appreciated.

So, the plugin has just been released as 1.0. There are GitHub issues — the team will show the links later — so if you try it out and have any ideas, please use that as a channel for feedback; it will be much appreciated. There is another question: does the build log remain in Jenkins even if I delete the PipelineRun resource from Kubernetes?

Yes — that's the quick answer.

Yeah. And that's a great answer, thank you.

Although, if you then delete the Jenkins controller pod — well, the log is usually on a persistent volume in Kubernetes, so if the pod comes back and the same persistent volume is there, you still keep your log. It's worth saying that even if you're just using Jenkins — forget Tekton for a second — you should configure your Jenkins to back up all of your logs and artifacts to long-term cloud storage, like a bucket or something, because you can lose your Jenkins controllers and, if you're not careful, you can lose your persistent volumes. I've deleted clusters by accident quite a lot of times in my life, and often the persistent volumes stick around, but they don't always. So yeah, make sure you put everything in long-term storage if you can.

James, do you want to talk briefly about the catalog stuff, the JX catalog stuff?

Yeah, I can — I don't have a demo for that, mind. It's a slightly long story. So I've been on a very long journey over the last five or eight years of building and deploying pipelines across many repositories. We started off in the early days with Jenkins shared libraries across repositories, and then gradually kept iterating: we started Jenkins X and we started generating Jenkinsfiles; we started to use Tekton; then we tried using Tekton together with kpt from Google, which is a tool for sharing YAML across git repositories, which is pretty cool. And we've been on this long journey of continually improving how we can effectively share pipelines across repositories. So let me step back for a second.
Imagine you're a company and you have 10 microservices, and those 10 microservices all need pipelines for dealing with CI and pull requests and dealing with releases. If those 10 microservices are all, say, Node or Java, the pipelines are probably exactly the same. So you want to share those pipelines — you don't want to keep maintaining 10 separate huge chunks of YAML. But sometimes one of those microservices wants to do one thing differently: you want to do a step before you release, or after you release; you want to generate some extra docs or do something different. So you've got this classic problem: how do we customize a pipeline for one repository, but maximize reuse across all of the repositories, and not have a maintenance nightmare? We've been trying to work on this tricky problem for a number of years with various efforts. In the Jenkins X community, the approach we've come up with now borrows a trick from a tool called ko in the Go community for building container images, and a tool called mink from the Knative community, again for building container images: they use a magic image tag — in their case to do a docker build. So we decided, why don't we use a magic image tag so we can write canonical Tekton pipelines — 100% Tekton, no DSL, no wrapping, no code generation, vanilla Tekton — but post-process that Tekton and expand the magic tags. So we have a magic "uses" tag on images, which basically lets us reuse a whole task from a catalog, or just a specific step from a catalog. Now, this might sound a little bit weird, but the basic idea is: all of the pipelines we use in Jenkins X are pipelines as code, all stored in the git repository of the microservice, but we don't reference any specific image tags or commands in any of those steps — we just reference a version of a step in a library. This means we can override any step in any repository to use different versions, different images, different commands, whatever — we can override any step any way we like — but at the same time we can share a definition across repositories. So it gives us this kind of dual optimization: we can upgrade all the pipelines together without having to do a pull request on every single repository, but we've always got that get-out-of-jail-free card that we can change just one step in any one pipeline whenever we want, and once we're happy with it we can migrate it upstream to the shared library. It is a really thorny problem in CI/CD: how do you share pipelines everywhere, but use pipelines as code, and allow microservices the independence to do things differently, so you can keep things aligned and not have a maintenance nightmare? It's a super hard problem — unless you've tried to solve it before, you don't realize how hard it really is. So, to cut a long story short, we have this "uses" syntax, which is just a magic string on your images, and we've added support for it into the Tekton Client plugin for Jenkins. So if your Tekton YAML has this "uses" syntax, you can reuse those pipelines — any of the Jenkins X pipelines, you can just use them inside Jenkins via the Tekton plugin.
The only thing you need to do is, in the Jenkins configuration, enable the Tekton catalog flag in the Jenkins plugin, which triggers this post-processing step: it looks for any of these "uses" tags and then inlines whatever each step really is. So it's a form of reuse — you can think of it as kind of like Jenkins shared libraries, but slightly different: it's based on referencing files in git, using a git tag basically, with a very simple override model where you can locally override any of the values. So it's totally different from Jenkins shared libraries, but it's vaguely, conceptually like them — it's trying to solve a similar problem. Now, you might find this "uses" stuff a little bit weird when you first look at it, but the thing I really, really like about it is that once you get your head around it, it's mechanical and simple. There's no programming, there are no weird functions to test — it's literally just sharing YAML. And there are various command line tools in the Jenkins X community to visualize the effective pipeline that would run, so you can understand what's going to happen before it happens. I hope that helps. Please raise issues on the Jenkins Tekton plugin if you're at all interested in the "uses" stuff, or pop by the Jenkins X Slack channel and we can just talk about it — I'll post that in the chat. But it's worth remembering that none of this is necessary for the Tekton plugin in Jenkins: you can just use 100% vanilla Jenkins and vanilla Tekton without the "uses" stuff — it's completely optional; it's up to you whether to use it. I posted in one of the chats earlier, by the way — I saw a blog recently about a company that looked at Jenkins X and, for whatever reason, decided it didn't quite fit their use case, so they ended up building something very Jenkins X-like, and they hit this problem as well: how do we reuse pipelines across repositories, allowing customizations but maximizing sharing? They went a slightly different route from the "uses" approach: they went with Kustomize, which is a Kubernetes tool that lets you take some shared YAML and then customize it by overriding things — which is another way of doing the same thing. So if the "uses" thing doesn't gel with how your brain works, or your team, that's another option. I personally find Kustomize quite complex — I find it even harder to use than the "uses" stuff; the "uses" stuff, once you get your head around it, is really quite simple, whereas Kustomize can be a bit complicated. But you know, try both — or use one or the other. It's worth saying as well — sorry, I've been rambling on about this a bit — Tekton itself does have a reuse mechanism built in: you can reuse a task, so you can reference a task in git, which is great. The only downside with task reuse in Tekton is that you can't change it: you can basically just reuse a task and that's it. If you want to add a command line argument to the Maven deploy step, you basically have to fork the whole task somewhere else, make a new version of the task, and then reference the new version. And then everyone's got the wrong forks of everything everywhere and you get into this kind of big mess. So Tekton doesn't yet have a step reuse mechanism, which is what the "uses" stuff provides.
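For flavor, a "uses" step looks something like this sketch (the catalog path and git ref are illustrative; see the Jenkins X pipeline catalog for real examples):

```yaml
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: release
spec:
  steps:
    # the 'uses:' image is expanded at post-processing time into the real
    # image and command from the referenced catalog file at that git ref
    - name: build
      image: uses:jenkins-x/jx3-pipeline-catalog/tasks/go/release.yaml@versionStream
    # a local step can still add or override anything the catalog doesn't do
    - name: extra-docs
      image: golang:1.15
      script: make docs
```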
I hope Tekton does get one at some point — there are a few pull requests and extension proposals in the Tekton community to do that, and I really, really hope Tekton gets it eventually — but right now "uses" is the best I've seen. Sorry for the long answer.

Okay, so there are a few questions left. I suggest we first finish the presentation — there are a few slides left about the future of the plugin — and then it will be natural to answer the questions we still have on the list. And feel free to submit more questions.

So, what's next for the Tekton Client plugin itself? Right now, we have an idea for the Tekton Client plugin to have its own DSL for Tekton, so that we can just define the pipeline in the Jenkinsfile. We've started work on that and created the Tekton extension for it, and as you saw in the example, you can use it right now. In the future we're planning to extend this, to do some other things and see what else we can come up with — this was also an idea for GSoC, and we are still looking at how that's going to work. Then — Gareth, could you talk a little bit more about avoiding taking up an executor?

Yes, sure. So it's probably more of an optimization at the moment, but when we kick off one of these from Jenkins, you kind of need to run it through the pipeline on an executor, because it clones the repo and has access to the workspace. We're investigating ways of not doing that, so we can minimize resources as much as possible on the Jenkins side — because it's another JVM running somewhere, and we want to keep that as lightweight as possible so that we get better build density in our clusters. That's the general idea. The next one is supporting files with multiple types. So right now we support YAML; in the future we hope to support JSON properly — and, are we thinking of supporting XML? There's also having YAML with multiple documents in the same file. We currently say don't do that, because it makes it difficult to understand whether there are multiple PipelineRuns, or a PipelineRun and a TaskRun, in the same file — which logs should we be streaming to the console? It makes it a bit awkward, so we say don't do that at the moment, but it would be nice to be able to handle some level of it. And that probably leads on to the next point, which is naming conflicts between the two. The demo I gave uses the generateName feature in Kubernetes to always get a nice unique name, but there are times when you possibly don't want to do that: you may want to name your pipelines something a bit more predictable, similar to the JX way, where we actually name them based on the branch or the pull request number and the build ID as well. So that might be something we could look at.

If you enable the "uses" syntax of the Tekton catalog, the name is based on the repository and the branch and everything — we could maybe do that natively for all Tekton pipelines.

We could probably also use Jenkins X in the back, like what we're doing right now with the effective pipelines — that's a really good way to get the same context all over the pipeline. Yeah, so currently we do the Tekton catalog integration that way, and we'll also figure out how we can make that better. Then, after that, there is a GSoC project idea on a cloud events plugin right now.
And in the future, once that rolls out properly, we will think about how we can integrate it with the Tekton plugin — maybe, you know, being able to trigger Tekton tasks through Jenkins via cloud events, or whether that means we directly trigger Tekton resources through cloud events and not through Jenkins itself. So this is also something we are working on. So that's what's next. And we discuss all these things in our Cloud Native SIG meeting, which happens every Friday — the meeting is on the Jenkins events calendar, and you can check it there.

And thank you for joining. Thanks to all the speakers for the presentation — thanks Gareth and Vibhav, thanks James. It was a really interesting discussion with a lot of content. So, again, we are looking for feedback from all participants: we encourage everyone to try out this plugin and send feedback through GitHub issues. There are more demos and follow-up presentations coming soon, hopefully, so stay tuned. And there was one question regarding the roadmap — if we could return to the previous slide. The question is: is there a plan to add GitOps-pattern support to the Tekton plugin? I'm not sure of the answer, though.

Well, one way of looking at it is that the Tekton plugin can use pipelines stored in GitHub and trigger them — so you could store the Tekton pipelines in your GitHub repository and then use GitHub to manage the pipeline. That's a slightly different thing from what people usually mean by GitOps on Kubernetes, where they're using a git repository to effectively do the kubectl apply to deploy applications. I'm not sure the Tekton plugin is the ideal thing for that — I mean, you obviously could; you could make a DIY GitOps solution with the Tekton plugin — but people normally think of tools like Jenkins X or Flux or Argo for that last mile. Google Config Sync is another tool in that space; Fleet is another one — there's a bunch of tools like that. I think the Tekton plugin feels more about doing CI and releases, rather than managing production clusters. So you probably don't want Jenkins serving your production cluster just to be able to deploy your applications.

Although you would be able to use the published Terraform images — which is something we can't do in Jenkins at the moment, because they don't have a shell — so you would be able to create a pipeline to do a terraform apply, for instance, onto an environment.

Yeah, there is a point there: you need something like a CI environment to run your Terraform to make your cloud infrastructure, so the Jenkins plugin would be ideal for that — for defining your Terraform stuff, it would be ideal.

Yes, absolutely right. It's a slight tangent, but the GitOps approach has often been popularized with things like Flux and Argo and whatnot, and that's a slightly different use case from Terraform. With Terraform, you tend to run it once to set up your cluster, and then you might run it occasionally if your infrastructure changes, but it doesn't have to change that much. Whereas deploying new versions of microservices happens very frequently and very rapidly, so it's fairly common to run a deployment agent in your cluster to avoid having to pass root-access production cluster-admin tokens around the internet or whatever. I'm not sure people would want to run Jenkins inside production clusters. However, using Jenkins to run your Terraform is a perfect example where the Tekton plugin would be a real gain.
That would be awesome. Thanks for the answer. The next question: there are many stages in a PipelineRun — docker build, etc. Do they show as stages in the Jenkins UI? As I understand it, we only see the build stage in the Jenkins UI, which you might implement in the Jenkinsfile. So what about the Tekton stages, steps, etc.?

We could totally do that. We don't right now, but we could.

Yeah. Is it on the roadmap for the plugin, for users to see it?

Maybe we can raise an issue about this and then work on it later on. It would be interesting to see the Tekton concepts and everything in the Jenkins UI, so the user doesn't have to go back to the YAML to check things — they could see everything in a nice view.

Yeah, it'd be nice to basically make it look like Blue Ocean — a full pipeline with steps to work with. We're mostly just showing the logs right now; it'd be nice to publish the pipeline graph — Jenkins has an API for that.

And it would build a good story for pluggable log storage. You may have seen that we already have implementations — for example, you can store pipeline logs in AWS CloudWatch, or in other services — and there is an API under the hood which allows implementing other connectors, and potentially streaming pipeline execution data from other services to Jenkins and displaying it as a normal pipeline graph with steps. So that's actually why we have it. It's definitely something interesting for Tekton, and maybe for many other stories, like, say, Jenkinsfile Runner, which we were discussing recently. But yeah, as we said, please submit a ticket if it doesn't exist already, because it's definitely something to discuss for this particular plugin.

Something else that I actually failed to demo is that we support the Checks API as well. So when the pipeline runs, you get events going back to, for instance, GitHub — if that's how your repository is configured — and that works really nicely. So we could add support for more information going back through that: information about the tasks and the TaskRuns, and potentially logs, through that API. That's also an option.

Just to clarify, does this use the Checks API plugin in Jenkins, or does it use a Tekton implementation?

It uses the Checks API plugin in Jenkins.

Okay, so basically you can stream these events to any receiver, not just GitHub checks — for example, there is a project for Slack integration, etc. So any consumer of these check events can be used.

Yeah — at the moment we're only really dealing with "pipeline is running", "it's completed", "it's failed", but we could certainly add more detail around that.

That's great, thanks for the clarification. Yeah, there is another question: PipelineResources were struck out on one of the slides — are they removed from Tekton?

So my understanding is that they're still kind of there, but they were left in an alpha state — they weren't promoted to beta with the rest of the Tekton resources. So I think they're thinking of them more as an implementation detail rather than something that you actively go and deal with.

The plugin could come in handy in the situation where you want to use Tekton alongside Jenkins: there are some things Jenkins can do that Tekton can't, and you probably have a lot of stuff already set up, but you would like to start using Tekton.
So you can use the plugin so that, instead of switching between them continuously, you can manage everything from Jenkins itself, which would be nice. But it depends on the use case: if you keep switching back to Tekton because you have a lot of stuff over there and not much in Jenkins, it makes sense to just use Tekton; but if you only use a little bit of Tekton, with everything else in Jenkins, it makes sense to use the plugin, so you don't have to keep switching between the Tekton dashboard and the Jenkins dashboard.

Thank you. So the last question we currently had in the list was about PipelineResources, and we already answered that. There was another question, but James answered it synchronously. So, thanks a lot to everyone. Again, we encourage everyone to try out this plugin and send feedback. We will also be sending a follow-up feedback form after the meetup, with links to the recording and slides, and we will ask a few questions — so if you have a few minutes, we would appreciate it if you fill in this feedback form, and any additional information will be appreciated. Like we said in the discussion, there is a Cloud Native SIG meeting happening in just two hours, so if you want to join and have more discussion, you can join that meeting — it will be a rather technical dive, so any hardcore questions are welcome there, and you have two hours to try out the plugin if you want and then come and ask any questions. Okay. And yeah, right now we will stop the recording, but you're welcome to stay online — again, we will have something like 15 minutes for informal discussion: any topics about Tekton and Jenkins integrations, or basically whatever other topics you would like to bring up about Jenkins — you're welcome to stay and discuss. Thanks. Thanks again to everyone, and looking forward to meeting you on May 27th: we will have an online meetup about the Jenkins Kubernetes Operator, which is another ongoing project that provides support for managing Jenkins on Kubernetes in a declarative way — please join us. So thanks all. Any closing notes from others? Okay. All right, James — if not, thank you. And yeah, stay online. Thank you.