All right. Well, thanks everyone for sticking around for my presentation. I just have a couple of slides to go over, and then the rest will be me fumbling around inside of Jenkins. Now, you might have been wondering: is this title clickbait? No, not clickbait. We will run GitHub Actions on Jenkins by the end of this presentation. But first, let me tell you a little bit about the Jenkins Standard Library that makes it possible. It's a shared library, and it's open source and free to use. It has tests, example jobs, API documentation, all the good stuff. Its goals are to provide abstractions for very common CI problems, provide features not yet implemented in the community, and work around some of Jenkins' quirks. Don't mind the lack of stars; it's a very new project, and you are among the first to hear about it. But why not create a plugin? Well, the Jenkins plugin ecosystem provides many powerful building blocks that you can use in your pipeline to do certain tasks, but they typically don't abstract away complex problems, and they often require some boilerplate code before they can do any heavy lifting. If plugins are like Bash commands, then the Jenkins Standard Library is like the pipe operator, right? Extended functionality is great, but does this actually solve a real problem? In my day-to-day work, I meet many teams that use Jenkins as a CI/CD tool, and I've noticed most of these teams are building the same internal tooling in their shared libraries: things like a logging utility, wrappers around pip and npm. We are all constantly developing the same solutions in secret, and that's the problem I'm trying to solve. The Jenkins Standard Library gives teams a single place to iterate on common tooling. Taking inspiration from Python's batteries-included standard library, I want to provide utilities that allow teams to focus more on their application pipelines and less on yak shaving.
So, the shell step is an extremely important building block that is used in almost every pipeline, right? But that doesn't mean it couldn't be improved. It has features requested by the community, it covers very common use cases, and it has some quirky behavior. So let's jump into the first demo and see what improvements we've made on the shell step. Here's a pipeline job. It has our library imported and a couple of little helper classes; mainly we're just going to be looking at the Bash client. All we're going to do is echo "hello from the Jenkins Standard Library," and after we run that, we'll look at what the result object gives us. We're going to run this twice: the first time is just a regular command, and the second time will throw an error, and we'll catch that exception and look at what the exception gives us to play with. So it behaves pretty much like the built-in shell step, right? We still see the output printed to the Jenkins console. But the result object we get back has the standard output; it would have the standard error if there were any; it has output, which is both combined; and it also gives us the exit code. Then when we run the fake command, we see the error on the screen, and we get back the error, the output, and the exit code. For the second demo, we're going to run pretty much the exact same logic, but using the silent version. This one's pretty handy because, in a lot of the tooling you build in your shared library, you might not want all of this to go to the console output. I've logged into several different Jenkins instances over the years where the console output is so long that it slows down the front end and you can't find the useful information.
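A rough sketch of what that first demo looks like in a Jenkinsfile. The class name, import path, and result fields here are my reconstruction from the demo, not necessarily the library's exact API:

```groovy
// Sketch only: 'BashClient' and its fields are assumptions based on the demo.
@Library('jenkins-std-lib') _
import org.dsty.bash.BashClient   // hypothetical import path

node() {
    def bash = new BashClient(this)

    // Behaves like the built-in sh step, but also returns a result object.
    def result = bash.call('echo "Hello from the Jenkins Standard Library!"')
    echo result.stdOut              // standard output only
    echo result.stdErr              // standard error only
    echo result.output              // both streams combined
    echo "exit: ${result.exitCode}"

    // A failing command throws; the exception carries the same fields.
    try {
        bash.call('fakecommand')
    } catch (e) {
        echo "failed (${e.exitCode}): ${e.stdErr}"
    }
}
```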
So this is the exact same code running again, but now we don't see anything in the console except for what I've logged; it's hidden from users. For the third one, we're going to ignore errors completely. Again, we're running the same kind of code, but this time we don't have to set up a try/catch block; we've automated it. This is handy if you already know you're going to need to look at the exit code. And there we go: the output only contains the standard error, and we get our exit code back. The last little example is the same logic running again, but this time we're going to change the log level. We've been running with INFO; we'll change the pipeline log level to DEBUG. This is something we use throughout the library to change not just what's being printed but the actual functionality of whatever's being run, which makes it really handy when you're debugging either a new pipeline you're developing or a pipeline that has stopped working. So we see the debug output, and you get a peek of what's going on behind the scenes. One of the problems we're trying to solve is that most of the tooling we install relies on your shell sourcing .bashrc or .bash_profile. A lot of tooling you install these days, pyenv, rbenv, things like that, modifies your .bashrc and expects you to have sourced it whenever you interact with those tools, so the library handles sourcing .bashrc for you.
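The three variants from this part of the demo might look like this in pipeline code. Again a sketch: `silent` and `ignoreErrors` are names inferred from the demo, not confirmed API:

```groovy
def bash = new BashClient(this)   // hypothetical class from the library

// silent: same result object, but nothing extra goes to the console,
// which keeps long console logs from drowning out useful information.
def result = bash.silent('npm ci')

// ignoreErrors: no try/catch needed when you know you'll inspect the code.
result = bash.ignoreErrors('fakecommand')
if (result.exitCode != 0) {
    echo "stderr was: ${result.stdErr}"
}

// Raising the pipeline log level from INFO to DEBUG changes behavior
// library-wide, e.g. also printing the fully expanded command that ran.
```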
So what we've done is, not only are we showing the output of the command, we're also printing the actual command itself. If you were using Bash variables here, this would be handy, because it would interpolate those variables and expand them for you. And we get everything else we've always been getting, the standard output, things like that. Next up is HTTP requests. It's increasingly common to interact with various services via API calls in our pipelines, so, taking inspiration from Python's requests module, I created a simple clone that makes it easier to make simple HTTP requests. We'll take a quick look at some example requests before we get to the main event, which is GitHub Actions. The code here is again pretty much the same: we import the library and the classes that we need. We're going to make a GET request to httpbin, and we're going to pass a map of the parameters we want to use in our request. Some of these are nested, where one parameter is a list, and this one needs to be URL-encoded because it contains a space. We make that GET request and then inspect the response. I feel like my mouse is moving really slowly. All right, so we've made our HTTP request, and what we got back was a response body that's a JSON string, plus a map of headers. What's really cool is response.json, which is already a Groovy map: the string body that was returned has already been parsed into a Groovy map, ready for you to use. And if we look at the URL, you can see that it assembled the URL for us; we didn't have to build the query string manually. The parameters were a map, and it figured them all out and URL-encoded everything for us. The next example uses JSON.
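The GET example might be sketched like this, with names modeled on Python's requests rather than taken from the library verbatim:

```groovy
def http = new Requests(this)   // hypothetical class name

// One parameter is a list, and one value contains a space
// that has to be URL-encoded for us.
def response = http.get('https://httpbin.org/get',
    [tags: ['jenkins', 'ci'], q: 'hello world'])

echo response.body            // raw JSON string from httpbin
echo "${response.headers}"    // map of response headers
echo "${response.json.args}"  // body already parsed into a Groovy map
echo response.url             // fully assembled, URL-encoded request URL
```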
Not only can we receive JSON, we can send it: we're going to send a Groovy map of values, and it should take care of converting that to stringified JSON for us. So here we go: response.body is a string, and if we look at the data field from httpbin, it is our map, stringified. And we get all the same things: the response code, whether this is an OK response, and so on. That was a POST request, of course. The last thing is some authentication support; we make these requests and we get back "authenticated: true". So, why would you want to run GitHub Actions on Jenkins? Well, the first reason might just be that it has an amazing abstraction: you give an action a map of string inputs, and it returns a map of string outputs. Compare that to plugins: plugins can take many types of inputs and can behave in many different ways. Another reason is that actions don't need to be installed or configured ahead of time, and depending on who manages your Jenkins, or how it's managed, that could be a huge benefit to your team. Actions also allow developers to write in whatever language they feel most comfortable in, and that is probably the number one driver behind the huge explosion of Actions we've seen in the last year or two. So, act allows you to run your GitHub Actions workflow files locally. It even comes with advanced features, such as simulating GitHub events like push and pull_request to trigger the proper workflows. As soon as I found this amazing project, my first thought was: would it run on Jenkins? So let's try it out. Let's take a look at the repo we're going to be using. This is an example repo provided by the act team; it's a little Node.js application, and this is the workflow file we'll be running.
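Reconstructed from the description that follows, the demo repo's workflow file is roughly the following (the exact action versions are a guess):

```yaml
name: CI
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2      # check out the code
      - uses: actions/setup-node@v1    # set up Node.js
      - run: npm install
      - run: npm test
```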
It's a pretty simple one: it checks out the code, sets up Node.js, runs npm install and npm test. Rather simple. The code, again, just has the library imported. So far we haven't needed any plugins, except maybe the workflow plugins that come with Pipeline to run the shell step; no other plugins have been used so far. We'll check out the little GitHub Actions demo repo that I just showed, and I'm just going to run workflow. workflow is a really cheap and lazy wrapper I built one weekend: it calls act, you can pass it a string, and anything you pass will be handed directly to the act binary; it just returns whatever string output came from that command. So it's going to run, and it's going to work, but this build will fail, and that's only because my Docker container is conflicting with the npm server that's running: it's going to try to run this application and some kind of Node server, and they're going to have a port conflict. But here you can see that act is doing its job: it's checking out code and setting up npm, it pulls down the dependencies, then it runs some tests, and it fails on the test step. If we go up to the top, we should see the error. Yeah, there, right here: address already in use, 8080. All right. So, while running an entire workflow in Jenkins is cool, I soon realized that I didn't want to convert all of my existing Jenkins jobs to GitHub workflows. What I really wanted was the ability to take a single action or two and inject them into an existing pipeline. That would allow me to leverage the large number of GitHub Actions available without installing additional plugins or building my own solution in Groovy. So let's see this action in action, I guess. We've got the action step imported.
This bit right here is needed if you're doing some kind of Docker-in-Docker, where I'm inside a Docker container and passing my socket to GitHub Actions, which creates more containers. If you aren't doing Docker-in-Docker type stuff, you don't have to worry about this, but it is needed to make this setup work. So we have this stage called "Docker action," and if you've ever used Actions, this syntax is probably familiar to you: it's the same thing you would pass into a workflow step. I'm giving it a name, which is a generic name displayed when it's run. I'm giving it a uses, which points to the action and the version I want: basically a GitHub slug with the user name, the repo name, and then some kind of reference. And then, with "with," I'm passing any parameters or variables I want to use. And that's it: I call it, it returns a map of outputs, and we'll print that map.

All right, so, check that out. This is just awesome. Everything done through a pipeline library?

Yes.

It's beyond my expectations. Usually it's still a combination of plugins, for syntax highlighting, etc., but still, it's a nice concept.

Yeah, and no plugins, there are no plugins. So yeah, we check out the action; it's a Docker container, so we build that container locally. Then we have to map our current directory into the container; there's a whole GitHub specification document on how to build an action correctly. And then we just run it, "Hello Docker action," and we get this map back, and one of the things returned was an output called time. Now, this was pretty cool, and it didn't take me that long to build, but it turns out actions come in several flavors, so it's not just Docker that we have to be able to run.
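Put together, the stage described above might look like this sketch. The `action` step's exact signature is my assumption; the argument shape deliberately mirrors a workflow step, and the version ref is a guess:

```groovy
stage('Docker action') {
    // name: display name; uses: GitHub slug (user/repo@ref); with: inputs.
    def outputs = action([
        name: 'Hello Docker action',
        uses: 'actions/hello-world-docker-action@v1',
        with: ['who-to-greet': 'Jenkins'],
    ])

    // The action's outputs come back as a map of strings.
    echo "output map: ${outputs}"   // e.g. contains a 'time' entry
}
```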
We also have to be able to run JavaScript, so there's a JavaScript action. It's pretty much the same thing; it's just going to run the hello world action. By the way, the actions I'm running are the ones used in GitHub's tutorials on how to build actions. And yeah, we'll run this one; it should be the exact same experience.

One question: do you support container actions, where you need to actually build a Docker image to execute the action?

Yeah, that's what we just did; we built the Docker container.

Oh, so JavaScript, etc., is still packaged in a container? Because with GitHub Actions you have the option to run it without a container.

Yeah, here it's always a container. What I did for the JavaScript one is I made it behave like a container. So instead of actually installing Node locally, and I think they already have Node installed on their runners, I'm just injecting whatever the repo ships into my own Node container and running it there. So here we go, and it works. It behaves the same, except they print this event payload thing, which is not in the Docker version but is in the JavaScript version; it still gives us the time of the build. And then there's a third flavor: the composite action. A composite action is actually quite complicated; it's an action that can run other action steps. Mainly, when people use it, they're using shell and run, and they're running Bash commands. So in this example, we're going to pass it some Bash to run, and it's going to set an output in the map of outputs that's returned; the name will be test, and the value will be "some value," but it could be anything.
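A composite-style call might be sketched like this. The `::set-output` syntax was the real GitHub Actions mechanism for setting outputs at the time (since deprecated in favor of `GITHUB_OUTPUT`), while the step's argument names are assumptions:

```groovy
// Run a Bash snippet as a composite-style action and capture an output.
def outputs = action([
    name: 'Legacy script wrapper',
    shell: 'bash',
    run: '''
        ./legacy-build.sh                          # any existing script
        echo "::set-output name=test::some value"
    ''',
])

echo outputs.test   // 'some value', now an easy-to-use variable
```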
I actually really like this one, because you could take a really long, complicated legacy Bash script that your company has and add some outputs to it, because a lot of times that's all we want: there are multiple spots in the Bash script where we think, man, I wish I could get whatever this output is into an easy-to-use variable. Now that's a lot easier, because you can inject it into a map. So here we go. I think I ran the wrong one; did I run both or just one? Sorry about that.

Actually, an advantage of this approach is that you can also use Jenkins' parallelization features, because you still have full control.

Yeah. Maybe one question about the implementation: you run everything on the agent side, right? So when you execute within a node context, it runs on the agent, not on the controller?

Yeah, when you run a GitHub action, you execute it on the node, so it executes using the Docker engine on that node; it's not executing on the controller. It definitely works on agents, yes.

I was asking just in case, because not all Jenkins plugins behave that way; for Kubernetes and Docker it's a bit easier. But thanks for the clarification.

Yeah, so we ran the script, and we were able to set the output in the output map. Another thing I wanted to show: we'll run the first one, the Docker action, again, but we'll take advantage of that pipeline log level for debugging, because, again, most of the things we build respect it, and it can change the functionality of what's happening. For this one I haven't yet implemented the part where it won't delete the running container, leaving the container up for inspection, but that's something I want to implement in the future: if you run a build at debug level, it'll keep the container up so you can access it and mess with it.
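Running the Docker action at debug level dumps the action's metadata file (action.yml). For GitHub's hello-world Docker tutorial action, that file looks roughly like this:

```yaml
name: 'Hello World'
description: 'Greet someone and record the time'
inputs:
  who-to-greet:
    description: 'Who to greet'
    required: true
    default: 'World'   # used when the input is omitted, hence "Hello World"
outputs:
  time:
    description: 'The time we greeted you'
runs:
  using: 'docker'
  image: 'Dockerfile'
  args:
    - ${{ inputs.who-to-greet }}
```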
So what we're seeing in green is the library looking at the metadata file. The metadata file tells you how to run this action: it says, hey, I'm expecting these inputs, and here's the default value; here are the outputs and what they are; and here's how to run me and the args that I want. We're parsing this and handling all of it. Then here's the build log, the actual script that we run, and then the output. So if we come back and hit replay, we should be able to remove the who-to-greet input. I've actually never done this; I probably shouldn't do it live. But it should say "hello world," because we're parsing that file and handling everything, so it behaves exactly like it would if you ran it on GitHub Actions. Hmm, why did it do this? I'm having this weird sync issue, you see, where I run it but it shows a previous job. All right: "hello world." So we got the default value. And yeah, that's it. A big thanks to the Jenkins community, cdCon, and everyone involved for making these amazing events possible, and if anyone else has questions, we'll be taking them for the next ten minutes.

Thanks a lot for this presentation. I think it should go straight to the Jenkins Online Meetup, because I believe it would be a really good session for users. And I wish we could make this library more official, or maybe talk a bit with the GitHub teams, etc., because it's a really nice concept, and it actually brings a lot of power to Jenkins. So, for me, as an individual contributor, I would definitely be interested in promoting it. And I definitely see many users adopting this approach, even on the Jenkins infrastructure, because we have an open question, for example, of how to integrate Release Drafter with Jenkins pipelines. There's no solution yet, but I think we could just use your library to trigger this.
Though, actually, one question is about passing credentials and secrets. Do you just pass them through the environment, or do you have some kind of automation in your library?

I haven't looked into using credentials yet, but most GitHub actions are definitely going to want to hit the GitHub API and do things with it. So I haven't tackled that yet, but I was going to test it. I'm pretty sure for now you could wrap it in withEnv, right, or withCredentials.

Yeah, the Credentials Binding plugin together with masking. But ideally it should be inside the library, so that users don't have to wire it up outside, and they could pass it in a declarative way, basically in the GitHub Action step definition: you say you want to pass these secrets, and that's it.

Yeah, it's definitely something we could do. For something like the GitHub API, you would probably want to do something like action.credentials and set a credentials ID, or whatever they are; I don't know if they're integers or strings, but you'd have to set some kind of credentials ID, and then the library could inject it, because with the GitHub API it's just secrets. So yeah, we could inject the secret inside the library, for sure.

Yeah, that's really nice. And if somebody wants to contribute to this library, what would be the recommended way to do it? Do you have a license or contributing guidance? Well, I know the answer, because I asked about it. But yeah.

So, let me find it. I think I picked GPLv3; licensing is probably one of my weakest areas. I tried to pick one that made it easy for anyone to use and take, and I think the only downside to GPL is that you can't bundle it with other code.
So maybe that affects what you were saying about getting it integrated with another project, but we can probably just change it. And for contributing, there's a contributing doc. I just wrote it and I've never tried it, so I'm not exactly sure how close it'll be to your experience, but it should be pretty easy to get up and running. The project itself uses Gradle, but it also uses pre-commit and Python, so the testing setup is actually quite weird. I don't have a really strong background in Java, so I try to stay away from Gradle and Maven, but I wasn't happy with the two main testing frameworks in Jenkins: there's Jenkins Spock, and then there's Jenkins Pipeline Unit. Neither one of those would do the things I needed to do. For instance, I have the logging utility, and I needed to test whether the logging worked from a user's perspective. Neither of those returned the raw logging of the actual job; they didn't return the Jenkins console. And because I'm doing things that manipulate the console, I needed a different way to run it. So my tests use Jenkinsfile Runner, which I think you're definitely familiar with.

Yeah, kind of.

So what I've done is write out my tests for each package and each class, and then I start a Jenkinsfile Runner. I have this pytest fixture, as it's called in pytest, which is passed into my test. It's a function, and it acts like a decorator, so it returns a function, this run_test function. All it's doing is creating a container running Jenkinsfile Runner. So it creates the container, the container is running, and it passes that back to the test function. Here's run_test: I pass it the path to the job I want to run.
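The arrangement just described might be sketched in Python like this. The image name, entrypoint path, and CLI flags are assumptions, not the project's exact values, and the real project wires this up as a pytest fixture rather than a plain context manager:

```python
# Sketch of the test setup: one long-lived Jenkinsfile Runner container,
# plus a run_test() callable that runs a single job and returns its log.
import subprocess
from contextlib import contextmanager


def build_run_command(container: str, job_path: str) -> list:
    """docker exec command that runs one job inside an
    already-running Jenkinsfile Runner container."""
    return [
        "docker", "exec", container,
        "/app/bin/jenkinsfile-runner",  # assumed entrypoint path
        "--file", job_path,
    ]


@contextmanager
def jenkins_container(name="jfr-test", image="example/jenkinsfile-runner"):
    """Start the container, yield run_test(), then clean up."""
    subprocess.run(["docker", "run", "-d", "--name", name, image,
                    "sleep", "infinity"], check=True)

    def run_test(job_path: str) -> str:
        result = subprocess.run(build_run_command(name, job_path),
                                capture_output=True, text=True)
        return result.stdout  # the raw Jenkins console output

    try:
        yield run_test
    finally:
        subprocess.run(["docker", "rm", "-f", name], check=True)
```

A test then asserts directly on the console text, e.g. checking that a debug line appears only after the job switches the log level.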
All of my tests are actually defined as real Jenkins jobs that can run on any Jenkins. You can take any of my test jobs, throw them into your Jenkins after you install the Jenkins Standard Library, and they'll all run and pass. So I get the container running, and then when you call run_test, it runs the exact command on that existing container, pointing at the job path and running that single job. I'll show you the tests so that makes a little more sense, and then what the actual tests look like, because the Python is basically just passing in the name. I'll show you the logging one, so it makes a little more sense why I needed such a weird testing environment. Here, container is that function I just showed you, and what I'm passing to it is the path to the job I want to run; here it's going to run the logging example. The job output I get back is the actual raw Jenkins console, which a lot of the other testing frameworks wouldn't give me. So here I can say, hey, is this line here? Then I can change the level and ask again, and change the level again and ask again. That's what we're seeing here.

Good. This is a really nice implementation, because I work a lot on this kind of automation for pipelines and libraries, and I see that Victor is also quite interested in what you're doing, because Victor also did a lot of hardcore automation, including a Jenkins pipeline library. I really like this approach, though we're still yet to implement Jenkinsfile-Runner-based test automation. There was actually a session tentatively scheduled for today's summit about testing pipeline libraries, I believe with Jenkinsfile Runner, but we didn't manage to get that session running.
Yeah, I created this library about a year ago, but I didn't really want to do much with it unless I got it tested to the level where I felt comfortable telling people: yes, you can use this, it's legitimate, it won't break on you. I really struggled with the testing. I found Jenkinsfile Runner, but I struggled with getting it running, so I actually put this away for about six months, and then I had some time around the Thanksgiving holidays in the US, went back to work on it, and got it working. So this is that same job, the logging example, and I use these as both documentation on how to use things and as tests. And I've got this weird stuff, you know, like you have to worry about pipeline CPS, so I'm always exercising different CPS behavior, like shell scripts at the beginning and end of my tests, to make sure that anything I write is CPS-compliant and isn't going to mess anyone up. Here's the logging logic, and the logging logic is kind of boring; some of the other tests are a little better. If we check out the requests library, you'll see I make a GET request, but then I really start hammering into it: is the response OK? Is the JSON it returns the correct JSON? All of that, so it's kind of heavily tested. And then some of the code, like actions, which is the most recent code I wrote, I'm actually trying to write unit tests for as well. I've got the example jobs, which are like my functional tests, but then I'll come in here and create a little tiny test for an individual function: is this returning the right thing, does it throw the right error? If you put something in that you shouldn't have, we can test for that. I also added an if, because you can do ifs inside of GitHub Actions, so I'm testing that it skips jobs like it should.
So it's a weird testing workflow, but it doesn't take much to get set up if you have Docker. I think I've got everything in a requirements file inside the test directory. If you can run pip install in a virtual environment, you can run pytest, and it should run all these tests and they should all pass. And I have a Docker image, so by default it uses my image, but there are steps in the contributing guide if you want to build your own.

Um, maybe that's a conversation we can have later about Jenkinsfile Runner, because to this day I cannot get it running myself. I stole your stuff from CI. Don't you have a CI Jenkinsfile Runner repo?

We have some CI for integration testing; it is quite big.

No, no, you have one. Yeah. That's the Dockerfile I'm running; I basically took that Dockerfile, because I couldn't get it running myself. So this is that Dockerfile, but I've stripped out everything I thought I could.

Mm hmm. Yes, for that we have the lightweight container in recent versions, which is based on JDK 11, basically the JRE from AdoptOpenJDK, so the entire container with Jenkinsfile Runner weighs something like 140 megabytes. And it's the most stripped version you can get at the moment. Well, I know how to remove 20 more megabytes, but it won't be that pleasant for Jenkins stability.

Yeah, I stripped out a lot of that and got it down to like 700 megs, but I think it could be smaller.

I mean we should talk later about that repo and review the instructions in it, because I have some ideas on how to make it super easy for anyone to use Jenkinsfile Runner. I think it's kind of complex right now if you're not from the Java world: if you don't know what WARs are and you don't know Gradle, the build is quite complicated.
So, speaking of that, I was actually wondering how much of your framework we could reuse in Jenkinsfile Runner. Maybe you have seen there is a project called jenkinsfile-runner-test-framework, which is written in shunit2, basically. But it's quite heavyweight as well. If you want, I can just show it to you.

Yeah, sure, you can take over.

I shouldn't have any secrets; no, you can keep recording, I will just show my screen. Okay. So, again, I haven't cleaned up my windows, so if you see something weird, ignore it; there are just a lot of memes. So, I'm going to GitHub, and I have nothing cached in the browser, because I was testing one patch and had to reset it clean. So, yeah, there is the main jenkinsfile-runner, but there is also jenkinsfile-runner-github-actions, which is not very active, and the jenkinsfile-runner-test-framework. This was an original prototype created by some colleagues; we worked together in a team. This framework is built around Custom WAR Packager for images, and it uses shunit2, but it was built for the older versions of Jenkinsfile Runner, which are quite heavy. The framework is also quite heavyweight, and when I was rebuilding the unit testing, I moved most of the tests from this framework to just unit tests. So, currently, if you go to jenkinsfile-runner, what do I have here? Yeah, still an incubating project, but whatever. There is slim packaging now, and it's really slim: it gets you those 140-megabyte images. But you still cannot run tests, for example, for the vanilla package. What you can see here is probably not the state-of-the-art implementation, but still: there is a small test that connects to the Jenkins instance and verifies everything using the standard Jenkins Test Harness framework.
But you can also run Jenkinsfile Runner with all the features, like JCasC, etc., just by passing such configuration files. Obviously it can be verified, but it's a unit test at the moment. So for some kinds of tests, it might be easier to do it this way. But yeah, I understand that for real integration tests it might actually be preferable to have your framework, maybe some equivalent of jenkinsfile-runner-test-framework, a 2.0, which would actually be built around a more complete test framework, because this one needs to be reworked for the modern packaging. Custom WAR Packager is now optional for Jenkinsfile Runner, and Custom WAR Packager still needs to be updated to support the recent versions sufficiently. I cannot do that because of, let's say, non-technical reasons, but somebody could build it around the slim packaging. We run the Jenkins Plugin Manager inside the image; it's a CLI tool, so you could actually build a test framework around that and make it generic enough. What you did for your library is maybe something we could reuse, for example, for the Jenkins infra pipeline library. I'm not sure whether you have seen this project: for the Jenkins infrastructure, we have our own pipeline library which we use for testing Jenkins itself. It reduces our pipelines to one-liners for building standard plugins, and it's actually quite complicated inside at the moment. Victor has invested a lot of time in creating Jenkins-Pipeline-Unit-based tests, but that's Jenkins Pipeline Unit; we don't have integration tests at the moment. And I made an attempt to integrate the Jenkins Test Harness; I believe it was somewhere here. Well, yeah. Did I close it? Yeah. Anyway, there was a pull request which would integrate Jenkinsfile Runner into this. Am I blind? Oh, I'm blind.
Basically, the point here is that it required some patches on the library side, but it allowed actually running Jenkinsfile Runner tests, and I believe there was a demo for that. Yeah — a makefile which basically runs integration tests in this library. Here you can see that I'm using the Jenkinsfile Runner test framework, so it's basically built around shunit2. And this was quite a problem for me, because it was native Docker, etc., so it requires quite heavy resources to test, and I would rather prefer something lightweight, maybe even without Docker at all.

This is how the framework looks in practice. It runs some smoke tests and does some setup, and it was testing the buildPlugin step — basically building the image with Maven and the resolved tools inside to emulate our CI environment. You can see that I also had to skip the Windows build, so I had to patch our production test. But it was running, so I was able to do integration tests for a while using this old framework. And I think that with modern approaches it would have been much easier, because shunit2 is designed for testing bash, and it's not exactly the friendliest framework for testing in general. So yeah, I think your framework would fit this use case quite well.

So again, this was created quite a while ago. I was unable to finish it — again, due to non-technical reasons; I have to say that too often when I talk about Jenkinsfile Runner, unfortunately. But I'm sharing it because the pieces are there, and if somebody wants to take it over, you're welcome to do so. It's open source.

Oh, nice. I think I'm ready. I just want to say as well that I really like what we just saw — the way you present the idea of these objects that you can use to run GitHub Actions. And there's a question regarding how you foresee things in the future.
I really like the idea of the GitHub Actions support, and the more I hear from my users about how they use the CI, the more I realize the benefit of GitHub Actions is the usability — it seems like they prefer those YAML files, though I don't like them much. So, in terms of making usability easier: have you thought about files that live alongside the code, as GitHub Actions workflows do, with the pipeline consuming those files rather than defining them in the declarative file — sorry, in the pipeline itself? So the user could work in these files, which are probably easier to read from the user's point of view, rather than adapting the changes in the pipeline itself. Does that make sense?

Yeah, I think — if I'm hearing you right — you're talking about almost hiding Jenkins from the user and just having them worry about the workflow file. Yeah, I think that would work. I mean, that's kind of how this whole thing came about. I was working with a machine learning team that works on self-driving cars, and I think they were a little unhappy with their infrastructure team and some of the things in their Jenkins, so they were wanting to run GitHub Actions but were unable to. So I just started digging around, like, can I do this? Has anyone tried this? And no one was trying it, and it wasn't that hard to put together. So I think that's one use case: there are going to be people who feel like they're stranded on Jenkins and want to move somewhere else, and now we can bring some features to them that are easier for them to use. And then, second, maybe we just want to make it easier for users: we can create a generic library function that just checks out the user's code and then runs the user's action — that's all we have to do. And then they can trigger it from their GitHub and run the job.
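A sketch of what that generic entry point could look like in a shared library (everything here is hypothetical — `runGitHubWorkflow` is a stand-in for whatever the library's actual step is called):

```groovy
// vars/runUserWorkflow.groovy -- hypothetical shared-library step.
// Checks out the user's repository, then hands their workflow file
// to an (assumed) GitHub Actions runner step from the library.
def call(String workflowPath = '.github/workflows/ci.yml') {
    node {
        checkout scm                    // fetch the user's code
        runGitHubWorkflow workflowPath  // hypothetical library step
    }
}
```

With something like this, the user only ever edits the workflow YAML in their repo; the Jenkins side stays a one-liner.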
So I think that's part of it for me. I wanted to just reach out and leverage an action here and there, where maybe I couldn't find the correct plugin but I could find the action. And I was like, man, if only I could just run that action — I already have this pipeline built, it's already really nice and robust, but I just need that one action and I really don't want to recreate it all. Especially if it's for one thing that I know I'm never going to use again. So yeah, there are a couple of different use cases. And I certainly hope people use it, because right now it's just me and two friends. So I'd say — really great job; I'm willing to hear more about this.

Yeah, I can go on and on. We just had a really good conversation about Jenkins X and interoperability, and where all of this is going — where CI and CD are going, what they will look like in ten years — and there are definitely different opinions there. Some people on that call were like, we'll all be on YAML pipelines, real simple pipelines, in the future. And then there are people like me — I've worked with a lot of legacy stuff, a lot of complicated stuff, long-running ML builds — and I'm like, you know, I'm not sure I want to do everything in bash and YAML. I kind of like having this full Turing-complete language at my disposal, and I know I can build anything I need that I come across, and it won't take me that long.

And there's definitely a middle ground, and that's probably where GitHub Actions is: you define it in bash and YAML, but when you run it, all the steps are built in some high-level language that the developer wanted. And that's definitely part of its growth: one, it's very close to your source code, so it makes it easy to see the build and have visibility into the build; and two, any developer can write whatever action they want in the language of their choice.
But I don't think we're going down the road where, if Jenkins doesn't switch to bash and YAML, it's obsolete. Because if you look at vehicles, we've had over 100 years now for vehicles to converge on one shape, and they haven't. You've got different vehicles — cars, trucks, vans, buses — for different needs. And I think CI is the same: when you're doing simpler pipelines, like on Kubernetes where you can deploy with kubectl and your pipeline's not that complicated, you can probably do everything in YAML and bash. But I've had these really crazy pipelines where you're building applications and then triggering a bunch of other builds for a bunch of other teams in the same organization, to make sure their stuff works with your application. And once all those results come back, you're looking at test results and they have to meet a certain bar, then you're going out and making a ServiceNow ticket, and then you have to wait for the review board to approve your deployment a month later. I've built those kinds of crazy pipelines, and I would never try to do that in bash — and that kind of stuff doesn't really work with GitHub Actions. So, definitely, it's an interesting space right now.

Yeah, I have the same sympathy. Can I ask a small question? This GitHub Actions thing is really impressive, and I definitely can see a use case here, and your shared library is a gold nugget — I think I will look into it; I think I can use it right now. But in Jenkins logs there is usually a great wall of text which says things like "I'm going to execute the echo step, I'm going to execute the sh step" — yours looks very clean. How did you hide all that stuff?

Yes — with a plugin. So yeah, in the README I talk about how to make your Jenkins look like my Jenkins.
It's the Simple Theme Plugin, and it basically allows you to manipulate the CSS in the GUI. There are two classes, pipeline-annotated and pipeline-new-node; just set those to display: none and they disappear. So if you go to your Jenkins log right now and hit F12 to bring up the dev tools, you can set those classes to display: none and you'll get a really clean output.

Yeah, one of the things that kills me is opening up someone's Jenkins log and it's thousands and thousands of kilobytes long — maybe 30 or 50 or 100 — and the GUI nearly crashes, and you're trying to scroll through some giant CMake build looking for the one thing that's important. That kind of stuff kills me, so when I teach Jenkins I try to teach people: when you're building this pipeline, you're the user — make it nice for yourself, because no one else will. Build in usability features that help you debug: when you flip that log level to debug, your tools should behave differently.

So I have a bunch of stuff I haven't added to the library because of time. I actually just got the testing stuff working in February, I added a couple of things, and then I started working on the GitHub Actions stuff like two months ago. So I have a lot of stuff I want to add to the library — things like a Terraform client that deploys Terraform code, super sweet; or a Packer wrapper that lets you build Packer AMIs, and when you turn on debug and run the job, Packer will not delete the AMI — the running image — when it's done.
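To recap the log-hiding tip: the extra CSS for the Simple Theme Plugin amounts to something like this (class names as mentioned above; assume they may differ across Jenkins versions):

```css
/* Extra CSS for the Simple Theme Plugin: hide the step annotations
   and new-node banners in the classic console output. */
.pipeline-annotated { display: none; }
.pipeline-new-node  { display: none; }
```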
So if you have a failed build, you turn on debug, you run it a second time, and the image stays up so you can SSH into it and figure out why your build failed inside the Packer instance. It's just about making your tools really easy to use for us, because we are the users, and it should be a pleasant experience.

Yeah, I opened your repo on my personal computer and I put a star on it, because it's really great. I see the README section about hiding these messages. Unfortunately — I thought there was a way to hide output from selected steps. So from regular pipeline code I actually want these messages, but we have some pipeline functions that use maybe the echo step a lot, and you see this wall of text in the logs, and it becomes cluttering; it can get really confusing.

Yeah, so for that there is a plugin that was never published. I don't know if I'll be able to find it, or if you'll be able to find it, but there was a plugin someone wrote and never published that did exactly that: it was a Groovy closure, and it would kill all output to the console. I don't know the name of the plugin. I found it years and years ago, and because it wasn't published to Jenkins, and I didn't even know how to pull it down locally, build it, and install it into Jenkins, I never used it. But it did work, and there were issues where people were like, hey, you should publish this. I don't know the name; I'll search for it and maybe I'll find it later. It is on GitHub, right? Yes. So I can probably try to find it.

I'll start the countdown, because we need to drop, but yeah. Oops — did I interrupt everyone?