And we are live. Welcome to the February 2019 monthly meeting of the Jenkins Pipeline Authoring SIG. We are going to start with Austin Witt, who is going to present his work on Jenkins-Spock, a test framework for Jenkins Pipeline, which he presented at Jenkins World in San Francisco. And then we're going to talk about the ability to improve developer experience while working with Pipeline, which, and I'm probably going to butcher your name here and I apologize, Marcin added to the agenda. If anybody else has anything they'd like to add to the agenda, feel free to do so on the agenda/notes doc that I shared on Gitter and in the email. Oh yeah, that's right, Martin wanted to talk about something too, so I'll let Martin add something to the agenda. Bonjour, Martin. But we will start now with Austin, so I will stop presenting and we'll get to hear about Jenkins-Spock. All right, I've got to figure out: is it possible to share one's screen with Hangouts? Yeah, upper left, there's a green share-screen icon with an arrow. You can share the whole screen or an individual window or whatever. Okay. And then, all right, and presenting to everyone, no one else, and you're in the matrix, okay. Okay, so this should in a moment stabilize to looking at github.com/homeaway/jenkins-spock. Is that where we are? Looks like it, yeah. Okay. Can you bump the size up by a couple of notches? Command and plus, probably. Yeah, okay, there we go. I'm not gonna read you this. As was mentioned, this was presented at Jenkins World 2018, which means there's a video recording of the long-form, 45-minute presentation. I'm not gonna redo that presentation because you could just go watch the video. If you just search for "unit testing pipeline Jenkins World" in your favorite search engine, you'll find the CloudBees site with a link to the video to watch. I'm just gonna cut right to the fun code part of this, which is: so, you've got some pipeline code, you've written code to guide your application through its life cycle. You ought to test that code that you wrote. You're probably used to doing this for your application itself, but now you've written code for the pipeline, so you need to write some tests for your pipeline code as well. Fortunately, you can do that with Spock, the industry standard unit testing framework for Groovy. This library takes care of getting all of the Jenkins pipeline steps into scope as Spock mock objects, so that the pipeline code you've just written will just work: any pipeline step that isn't there, because it comes from a plugin or from somewhere on a Jenkins master, will get mocked, and instead of errors when you try to run your Jenkinsfile, you'll get interactions with mock objects that you can then make assertions about, stub responses on, et cetera. Luckily, this library, in addition to having pretty great (if I do say so myself) and thorough documentation for how to do each particular thing you might want to, also has some examples: how to test a whole-pipeline Jenkinsfile, how to test a shared library, and, well, we'll take a look at it, how to test that format where you've got a Jenkinsfile but you also break something out into a separate Groovy file. So we're just gonna take a look at those. So over here, I have cloned that. We're gonna go into jenkins-spock, we're going into the examples, and we're gonna start with the whole pipeline.
So this is a Python web application, and it has some unit tests for the Python code, but it also has a Jenkinsfile. Not the best view; let's take a better look. It has a Jenkinsfile, and it's got this function called deploy, which is just minimally filled out, not actually gonna work, but it tries to give you an idea of what an actual real-world deploy function might be like. It's got the pipeline proper, where it checks out the code and assembles the app. Yeah, let me hop in here real quick: again, zoom that up just a little bit. I know you're going through this, but for people to be able to read it. Okay. If it would help, what time am I supposed to end my part of this presentation? That would help me modulate my speed. How long do you think it would take if you were going at a speed that's comfortable for you? 45 minutes. All right, let's try to get to 30 minutes. Okay, at 10:30 I will stop. Oh, sure, okay. So this example is showing how to... but also, Austin, can you zoom this up a little bit? There you go. All right, I'm not looking at what it looks like on Hangouts because that does the matrix thing. So thank you. Like I said, I'll just tell you. Perfect, thank you. Okay, so this is a simple Python web application. It uses Flask, it says hello, and it keeps a count of the number of times. It has unit tests for it. It has a Dockerfile to assemble the thing. If you run make test, it will really quickly build the Docker image and unit test the Python code. Great. But we have this Jenkins Groovy code over here that is non-trivial: it's 50 lines that guide the application through its life cycle. We need to unit test that as well, because it's code that we wrote. So there's a deploy function that can deploy to one of many environments at our company. And then there's the mock pipeline, not mock, but a simple, kind of hypothetical pipeline here, where the code is checked out, the Docker image is built, the tests are run on the application, and we will send a notification to our Slack channel if the unit tests for the application fail. Assuming that they pass, we will push up our application Docker image, send it out to the test environment, and then, if and only if we've been building the master branch of our project, we'll also ship it right out to production. So there's a whole bunch of things going on in this pipeline, conditionals, expectations, that we might want to test. So what would that look like? Well, I spent time on this in the Q&A in the Jenkins World recording: there have been several other attempts to bring unit testing to Jenkins pipeline code before. Each of them has some trade-off or caveat. Either you have to write your code a certain way in order for it to be testable, or there are only certain kinds of assertions it can make about the code. At HomeAway, where I was working, we had the problem where multiple teams were producing shared libraries and Jenkins pipelines that were more or less entirely unique to their use cases. And my team, the DevTools team, needed to be able to offer some guidance as to "this is what you should be doing to guard the quality of all this new Jenkins code that you're producing as you move to Jenkins."
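(For readers following along without the video: here is a rough reconstruction of the kind of Jenkinsfile being described above, assembled from the narration rather than copied from the example repo. The deploy-tool command, image name, and Slack channel are placeholders.)

```groovy
// Hypothetical reconstruction of the example Jenkinsfile described above.
def deploy(String environment) {
    // Minimal stand-in for a real "deploy to one of many environments" function.
    sh "deploy-tool --env ${environment}"
}

node {
    stage('Checkout') {
        checkout scm                      // scm is a variable Jenkins injects
    }
    stage('Build') {
        sh 'docker build -t my-app .'     // assemble the application image
    }
    stage('Test') {
        try {
            sh 'make test'                // run the application's unit tests
        } catch (err) {
            slackSend channel: '#my-team', message: 'Unit tests failed!'
            throw err                     // still fail the build
        }
    }
    stage('Publish') {
        sh 'docker push my-app'           // push the application image
    }
    deploy('test')                        // always ship to the test environment
    if (env.BRANCH_NAME == 'master') {
        deploy('production')              // master only: straight to production
    }
}
```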
All right, so what do the tests look like? Well, we'll look at them. So we're using Spock, which is an industry standard, open source unit testing framework for Groovy. I think I have a link to it in the readme of jenkins-spock... hmm, I don't have a link to it in the readme, but it is in the presentation. Spock hosts an interactive website where you can put together specifications and code and play around with it without having to install anything. But the Spock unit tests are pretty idiomatic. There's a lot going on here, but let's look at just... If I may interject briefly: if you have any familiarity with things like RSpec from Ruby, Spock is directly inspired by that, the sort of specification model for tests rather than the traditional JUnit style of just running a bunch of code and checking what happened. Okay, so let's look at this one test here. The way Spock works is, for every piece of code you wanna test, you have a piece of test code named after it, ending in "Spec". So to test our Jenkinsfile, we have a JenkinsfileSpec. I'm gonna look at just one test here. Groovy lets you do strings as method names, so we've got a descriptive name for our unit test. We don't have to do camel casing or underscores trying to string together something useful; we can literally just type the assertion. We're gonna look first at the "when" part: when we run the Jenkinsfile. That's pretty idiomatic. Then we would expect... well, what we're testing is that Slack gets notified when the tests fail. We want to make sure that our slackSend step, that is, over here, when we try to run the tests of our application, if somehow they fail, we want to notify Slack. So we're gonna make sure that we see that step called. And our setup is saying that when we call sh, the shell step, in our pipeline script, with that thing that is us running unit tests, we're going to stub it to throw an exception, so that when we run our Jenkinsfile we get an exception, and then we assert that we did in fact see Slack notified. Now, this could be a lot more sophisticated, and probably would be in the real world. You'd maybe want to make sure the right things were sent to Slack. You'd maybe want to make this a little bit generic so you didn't have to update it in perfect lockstep with your Jenkinsfile. Maybe you would want to make it a little smarter, to know whenever tests ran instead of just this one invocation. But the core idea is there. So how do we get a Jenkinsfile to be able to run, and where are these pipeline mocks coming from? Well, those are the two main things that the Jenkins-Spock library does. So if we scroll up to our Spock specification: a normal Spock specification starts in this format, some class, WhateverYourThingIs, extends Specification. If you want to test a Jenkins pipeline, well, add the Jenkins-Spock library and then extend JenkinsPipelineSpecification instead. In setting up our test suite, which most unit testing frameworks have (again, JUnit and RSpec as well), we're going to load a pipeline script for testing, and it is gonna be our Jenkinsfile. That is a method from Jenkins-Spock that reads the Groovy text, parses it into a Groovy Script object, and sets up a mock for every pipeline step that exists, so that when the Jenkinsfile has something like stage or sh, something that we didn't write and that isn't defined in our code, it hits a Spock mock object that we can make assertions about later on.
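(A hedged sketch of the specification being walked through here. loadPipelineScriptForTest and getPipelineMock are the Jenkins-Spock methods described in the talk, and the import path is the one in the project's documentation; the exact stub and assertion are reconstructed from the narration, matching the hypothetical Jenkinsfile sketched earlier.)

```groovy
import com.homeaway.devtools.jenkins.testing.JenkinsPipelineSpecification

class JenkinsfileSpec extends JenkinsPipelineSpecification {
    def Jenkinsfile

    def setup() {
        // Parses the Jenkinsfile and auto-mocks every pipeline step on the classpath.
        Jenkinsfile = loadPipelineScriptForTest("Jenkinsfile")
    }

    def "Slack is notified when the unit tests fail"() {
        setup:
            // Stub the shell step: running the unit tests throws an exception.
            getPipelineMock("sh")("make test") >> { throw new Exception("tests failed") }
        when:
            Jenkinsfile.run()
        then:
            // Assert that the pipeline called the slackSend step exactly once.
            1 * getPipelineMock("slackSend")(_)
            thrown(Exception)
    }
}
```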
That is what getPipelineMock is doing: it is getting the mock object that was automatically created for us, which is why we need a method to do it, since we don't define it explicitly in our unit test; it's just there for every pipeline step that exists. Because the Jenkins pipeline script is a Script, you can get the binding of it and set variables on it as well. You would need to do this for variables like scm or env, things that Jenkins injects into your pipeline scripts that don't come from a step and aren't explicitly defined. So in this case, if we want the checkout scm step to not error: checkout is a pipeline step, scm is a variable that Jenkins injects. We have to be Jenkins and make sure we inject a suitable value for that variable, or else we'll get a null pointer. But that's really the extent of the idioms necessary to get started. You create a Spock specification off JenkinsPipelineSpecification, you load your pipeline script for test, and then in your Spock specification the only Jenkins-Spock-specific thing here is the getPipelineMock method. Everything else in this unit test is straight-up, regular, industry standard Spock syntax. So all the public documentation, all the Stack Overflow posts, they'll work for you. You have all of those resources to help you figure out how to write tests. The secret magic is limited to a few specific things like getPipelineMock. And you can use that to get pipeline steps. If you wanted to mock something yourself, you can also set mocks up yourself with the regular Spock idioms if you want. And I'm gonna jump over to the GroovyDoc for Jenkins-Spock, where we can see a little bit more about the secret magic that this all brings in to make testing pipeline code easier. Then we'll look at unit testing a shared library, which I think is the thing most people are most excited about, and then I'll be done. So we're gonna come over to jenkins-spock and jump into the GroovyDoc. All the magic happens in JenkinsPipelineSpecification, so I'll bring that up in its own window. So, we saw testing Groovy functions: you don't need anything, that's not really special, that's just stock Spock, but it's here for completeness. Testing pipeline scripts: we just saw that. Load your pipeline script for test, hooray. The other thing that you can do is mock pipeline variables. Those are the two main extension points that Jenkins pipeline scripts have: steps, obviously, but also global variables. Well, you can mock those as well. The syntax for accessing the mocks for method calls on pipeline variables is a little different: you write the access to the method as a string, and then that's your mock object. But it's explained here, and we'll see it when we get to the example for a shared library, so don't worry too much about that now. I am speaking a little fast; gonna slow down a little bit. There is a section explicitly on mocking variables that are coming in from the vars folder of a shared library. If for some reason something is not automatically detected but you still need to test it, you can also explicitly set those up. Jenkins-Spock tries to identify all the pipeline steps that exist, identify all of your code that's going to need to call them, and put those together so you don't have to do anything manually. But you can manually prepare additional objects in your test suite to call pipeline steps.
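(A small sketch of both idioms just mentioned: injecting binding variables, and explicitly preparing mocks. The helper names explicitlyMockPipelineStep and explicitlyMockPipelineVariable are taken from the Jenkins-Spock documentation; the variable values and step names are placeholders.)

```groovy
def setup() {
    Jenkinsfile = loadPipelineScriptForTest("Jenkinsfile")

    // Jenkins normally injects variables like scm and env into the script's
    // binding; in a test we have to "be Jenkins" and inject suitable values
    // ourselves, or "checkout scm" would hit a null pointer.
    Jenkinsfile.getBinding().setVariable("scm", [branch: "master"])
    Jenkinsfile.getBinding().setVariable("env", [BRANCH_NAME: "master"])

    // For anything not auto-detected from the classpath, prepare mocks explicitly:
    explicitlyMockPipelineStep("myCustomStep")  // a step that doesn't come from a plugin
    explicitlyMockPipelineVariable("docker")    // a global pipeline variable
}
```

Method calls on a mocked pipeline variable are then addressed as a single string, e.g. `1 * getPipelineMock("docker.image")("my-app")`, which is the "access to the method as a string" idiom mentioned above.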
You can mock pipeline steps that weren't automatically detected, and you can mock global variables that weren't automatically detected. Probably, though, that means you ought to change the way you've set up your project so that they are automatically detected, so that you can do less work. For some corner cases, the Jenkins singleton itself is mocked, as well as a couple of other things. And there's a warning down at the bottom that if you're using Groovy metaprogramming in your pipeline code, it will probably conflict with this, and you maybe should reconsider your design. And all of these have links to the rest of the documentation, with code examples where necessary showing how to use the library. So let's go and look at what it looks like to test a shared library. I haven't actually run the tests for the whole pipeline yet, I acknowledge that. We'll spend that time waiting on a test run of the shared library, since that should be more interesting. So here we have a pipeline shared library. We've got a textual resource. We have a regular Groovy class in the regular source, the com/mycompany path. We have a couple of global things in the vars folder. And then we've got some tests written for it all, a Makefile driving everything, and the magic that kicks off the Jenkins-Spock framework and takes care of making everything detectable is powered by Maven here, with a POM. At Jenkins World, somebody in the audience quickly threw together something with Gradle and said, yes, this works with Gradle as well. I have not done that myself and added Gradle-based examples, but... Did they do that while you were talking? Like, "oh, let me do that"? Somebody asked if it works with Gradle, since most things that work with Maven... Exactly. And I said that I wasn't sure. And one or two questions later, a guy pipes up: it does work with Gradle, I just did it. That was pretty cool. It's clearly not terribly difficult, but I don't have an example yet. There is an open issue on the GitHub repo asking for Gradle examples. If anybody here is familiar and comfortable with Gradle and you wanna send a pull request that adds these same examples but driven with Gradle instead of Maven, that would be... well, pull requests are happily accepted here. So let's look at this library. This is more or less the same thing as before, except we've got the pipeline for our application as a defaultPipeline.groovy, meaning that in our other application, instead of doing all this, we would just write defaultPipeline. And then that application has nothing to maintain; it's all centralized into the shared library, as one does. But if we do that, obviously the shared library is now responsible for making sure that the default pipeline works correctly, so the shared library has to have unit tests. And we've also got the deploy function in its own separate var called deployer, to further contrive an example for every extension point of a shared library, to prove that everything is testable. We have defined the deploy command as a constant in a real Groovy class in the library. And finally, the message that we send to Slack: we have said we're going to have a textual template for that over in the resources of our shared library. So that's all three kinds of extension points that a shared library can deliver. Can we unit test them without having to change around our shared library? Yes.
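(To make those three extension points concrete, here's roughly the shape of the pieces being described. The class, constant, and file names are guesses from the narration, not the actual example code. First, the regular Groovy class under src/ holding the deploy command as a constant:)

```groovy
// src/com/mycompany/PipelineConfig.groovy
package com.mycompany

class PipelineConfig {
    // The deploy command, defined once as a constant for the whole library.
    static final String DEPLOY_COMMAND = "deploy-tool --env"
}
```

(And the deploy function broken out into its own global variable under vars/:)

```groovy
// vars/deployer.groovy
import com.mycompany.PipelineConfig

def call(String environment) {
    // Shell out using the constant from the src/ class above.
    sh "${PipelineConfig.DEPLOY_COMMAND} ${environment}"
}
```

The third extension point, the textual Slack template under resources/, would be read by defaultPipeline with something like `libraryResource('com/mycompany/slack-message.txt')`, which is exactly the step that gets stubbed in the spec discussed next.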
So over here in the test folder, we've got DefaultPipelineSpec to test our default pipeline and DeployerSpec to test our deployer. What does that look like? More or less exactly the same thing. We still have the "Slack is notified when tests fail" test that we were looking at. It looks more or less exactly like it did before. The only difference is that instead of Jenkinsfile.run, we are calling the step from our shared library. What does it take to make that possible? Well, in the setup, we have to define that defaultPipeline step by loading a pipeline script for test: we load our shared library variable. Now defaultPipeline is going to behave just like it would in Jenkins. As before, we have to inject any environment variables or other variables that Jenkins would set, since we aren't in Jenkins here. And then we also are going to stub the libraryResource step. Well, we're being lazy and using the Spock idiom for "match everything", so that any time any bit of our code tries to get a library resource, it's going to get the string "dummy message", to mock out actually accessing our message template over there. And that's really it. Also, instead of Jenkinsfile, we've renamed it to DefaultPipeline. DeployerSpec is going to look more or less exactly the same. We have a Deployer now instead of a Jenkinsfile. In the setup, we load that pipeline script for test. And then we can call the deployer with an argument and make assertions about what should have happened. Cool. So, is it working? I guess I should show it actually working. Test our shared library. Notice it integrates nicely with Maven through the Surefire plugin, running each specification one at a time, giving you the test result output. If we had failures, we would see failures, but we don't in our examples. That's good. And that was our test run: our five unit tests against our shared library. So how does that work? Well, there is the JenkinsPipelineSpecification, as I mentioned, which is an extension to Spock. How do we get Spock to run? Well, we're using Maven to do two things... scroll, scroll, scroll... sorry, three things. Bring in the Groovy runtime, which we use with the GMavenPlus plugin to compile and run that Groovy code. This requires specifying, in the case of the shared library, where our sources are, so that they can be compiled and available for Jenkins-Spock. We also need to add our non-code resource as a resource. And the other really cool thing that's going on here is all of those pipeline steps that are getting automatically mocked: they're only getting automatically mocked because the Jenkins plugin class files that are annotated as Jenkins extensions for each of those steps were on the classpath when the unit test started, because we added them as dependencies in our POM. What that means is that if my pipeline code uses the stage step, I have to add the pipeline-stage-step plugin as a dependency. If I don't do that, then I will get an error message when I run my tests. We'll see it in a moment; it's actually quite a helpful error message that guides you towards fixing the problem. The other really cool upshot of this is that now you have a place to look to see every plugin that must be present on a Jenkins master in order for your shared library code to be able to succeed. If that list is not complete, your unit tests will not be able to succeed.
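(Since the Gradle equivalent comes up a couple of times in this discussion, here is a hypothetical build.gradle expressing the same three Maven mechanics. This is an untested sketch: the jenkins-spock coordinates are the ones published in its README, the Jenkins artifact repository URL is the standard one, and all version numbers are merely illustrative.)

```groovy
apply plugin: 'groovy'

repositories {
    mavenCentral()
    // Jenkins plugin artifacts are published here.
    maven { url 'https://repo.jenkins-ci.org/releases' }
}

dependencies {
    implementation 'org.codehaus.groovy:groovy-all:2.4.15'
    testImplementation 'com.homeaway.devtools.jenkins:jenkins-spock:2.0.0'
    // Every plugin whose steps your pipeline code calls must be on the test
    // classpath so its steps get auto-mocked, for example the "stage" step:
    testImplementation group: 'org.jenkins-ci.plugins', name: 'pipeline-stage-step',
                       version: '2.3', ext: 'jar'
}

// Shared-library layout: compile src/ and vars/, expose resources/ to the tests.
sourceSets {
    main {
        groovy { srcDirs = ['src', 'vars'] }
        resources { srcDirs = ['resources'] }
    }
    test {
        groovy { srcDirs = ['test'] }
    }
}
```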
So now, when somebody wonders, "I configured the shared library, but what steps does it need? Do I have all the steps that this library is going to try to call?", well, now you have a dependency specification that those consumers can look at to make sure they've got everything. All right, so our tests did not pass this time. Let's take a look at what happened. All right, we've got some errors. We see there are failures, but the cool thing is this error message, which is word wrapping. I'm going to zoom out just a little bit to try to stop the word wrap. Oh, okay, we'll keep it. So I'll just read this. This was one of our unit tests. It failed, and the error message is actually quite helpful: during a test, the pipeline step "stage" was called, but there was no mock for it. Now, we have the power in Jenkins-Spock to just create that mock and go on, but we choose not to, because that forces you to identify all the dependencies of your pipeline code. It tells you: one, is the name correct? Perhaps you made a typo. Two, does the pipeline step have a descriptor with that name? Perhaps somebody's got a pipeline step that is called one thing but available as another. Number three is probably the most common: does the step come from a plugin? If so, is that plugin listed as a dependency in your pom.xml? (When I add Gradle examples, we'll probably have to update this message to be a little more generic.) And if not, you can, in your test code, explicitly say: I know that this pipeline step will exist, but it doesn't come from a plugin, for the case where you've got something crazy going on. And that is that. Now that we've got this error message, we realize: oh, okay, yes, my code does depend on stage; I have to add the dependency that provides stage. And now, once again, the tests will pass. I think that is everything necessary to present to spark any further useful discussion. I am just about at the 10:30 that I promised I would stop talking at. So that's all that I have to say. Are there any questions? Or where do we want to go from here? If you're watching rather than a participant, feel free to ask questions in the Gitter channel. I am watching Gitter over here; I'll relay them. Anybody here who's a participant? Well, I mean, we can start by saying this is a really cool use of a standard library, extending it to do unit testing on Jenkins Pipelines. This is fantastic. I'm really excited about this. Well done, kudos, you know, all of that. Okay, well, I guess... shortcomings of this? You don't have to say any of the shortcomings; we can just be positive about it, can't we? Opportunities for the community to contribute, then. We don't have Gradle examples, though allegedly it works with Gradle as well; that might make it more accessible to more places. We also don't address declarative pipeline. This is all for scripted. Declarative pipeline is a single scripted step that takes in data, so the Jenkins-Spock framework tests code, not data. But people that write pipelines want some ability to make assertions that the thing they're sending into their declarative pipeline is going to behave the way they expected. That is a very different problem from unit testing code, because somehow you need to be able to make assertions about what some other code engine will do with input data, side effects and all that, not just a return value. Declarative isn't directly executable, so we would need to bring the declarative engine in, or something.
I haven't figured out a path forward on that. Yeah, that might be pretty hairy, because declarative is so heavily dependent on the CPS execution context that you might not actually be able to run it. But I mean, the core of it is a Groovy file that is basically run, you know, compiled down to CPS and run the same way a shared library would run; it's a GlobalVariable, capital G, capital V, internally. So you probably could find a way to pull it in and run it, but I'm not sure. Right, and the question is not how to do that, but how to offer some kind of pre-runtime confidence tool for declarative pipeline. Yeah, I mean, we've got validation already, but it'd be interesting to see how this could be mashed together with declarative validation to provide some level of confidence, while allowing you to do much more thorough testing of the underlying shared libraries for anything complicated. Because as long as you're not jumping out to script blocks all the time in declarative, the expectation that the pipeline will run all the steps that are defined in it is pretty solid, and that's one of the points of declarative: to make it declarative. But inevitably... sorry, go ahead. No, go ahead. No, yeah. Okay: inevitably, people are gonna be at companies where someone writes a declarative pipeline that's 300 lines long, and somebody else wants to assert that, all right, whatever they put in there, it is not going to try to deploy to production, and it will always run unit tests. Yeah, so to be able to set preconditions that trigger the appropriate when conditions and that sort of thing. Yeah, that would be interesting. That would definitely be a good addition. That's definitely being thought about, but I don't have a path forward yet. And if you go to the GitHub repo and look at the issues, even the closed ones, there are a lot of interesting discussions there about how to use the library in various ways, how to achieve different things. A lot of cool things that you could take a look at if you wanted to read more about people trying to use this. I have two procedural questions. How is this licensed? Apache, version 2.0. Fantastic. The APL is always a good option, in my opinion. I've just run into too many copyleft and bespoke weird licenses not to ask. Excuse me, the Jenkins Templating Engine is open source! I know, I know. That's thanks to me having run into a bespoke license and bringing that up. The second question is: have you considered moving this into the jenkinsci org on GitHub, so that it would be less of a single point of failure in the long term? I have considered that. I would love to do that. I started reading the documentation for how to get something moved to jenkinsci, and it's a non-zero number of steps. Drop me an email and I'll help you out. Yeah. There's no ideological or legal or procedural reason not to do that. It's just that it's got a non-zero number of steps, which means I have to execute those steps at some point. And it's especially hairy when it's not a plugin, because we have a whole process for plugins, but it doesn't quite fit for non-plugins. Drop me an email and I will put you in contact with the right people to help you out. Alrighty. I would say that, you know, last time we talked about the Jenkins Templating Engine: all of the libraries that we've developed for that, we test with Jenkins-Spock, and we've had a good time implementing it and getting some test coverage on those things.
So we appreciate your hard work. Yeah. And I, in my capacity as a CloudBees employee, have run into a couple of our customers using Jenkins-Spock as well, and recommended it to a number of others. So hopefully this will help with its adoption, and we can finally have a canonical, blessed, maintained, updated, et cetera, test framework for Pipeline, so that we don't have four or five divergent ones that are all kind of hidden away on their own, without wide enough adoption or contribution to really be enough of an answer for enough people. So thank you very much for presenting, Austin. I'm going to move on so that we can get through everybody else today. Can I throw in one more? Okay. Regarding the Gradle example, I would try to take a look at that, because Gradle is definitely a good alternative for building software. All right, that came in with a bit too much gain. Yeah, Austin, you may want to move your mic a little bit away from your mouth. But he was saying he's going to take a look at helping with the Gradle examples. Thank you. One final thing, an opportunity for community contribution: we don't have a solution for getting code-coverage percentage numbers out of these unit tests at the moment. And with that, I yield. That would be interesting. All right. Now I want to stop sharing my screen. You go to the presenting button. So next in the agenda is... let me share the agenda again. Where's my button? No, I don't want to share my cat picture. I think she's adorable, but irrelevant to this meeting. Not irrelevant in general, kitty; I'd better be sure to say that, otherwise... yeah, I'd wake up with claw marks. She's currently under the blanket in my bed right now, so she can't really hear anyway. Or comprehend English, I think. I'm not sure; she may just choose not to, a lot of the time. All right, so let's move on to our next agenda item. Marcin, am I pronouncing it right? Is it Marcin or Marcin? Marcin. Marcin. Marcin. I try to figure out how to pronounce other people's names accurately. So, Marcin, on this group, who wanted to have a discussion on the ability to improve developer experience while working with Pipeline. So the floor is yours. Yeah, so I started that thread because it's somehow a continuation of my remarks at the BoF at FOSDEM. As I explained on the Gitter channel, I was somewhat frustrated, having issues with a quite worse developer experience after migrating from Job DSL, as I... can you hear me correctly? Yeah, sounds good. And as I understand, it's not the same as Job DSL; the technology is different, and I like some benefits of Jenkins Pipeline. I'd like to talk about whether it is possible to do something, if there are some areas which could be improved to have, for example, better code completion in the IDE. Because currently some elements of development are somewhat problematic. And first of all, I wonder if I'm the only one who has that kind of problem. Because I'm a developer and I specialize in automated testing in general; it's my very specialization. And I'd like to try the framework that Austin showed before, because it's somewhat different from the Pipeline Unit that I was using before. So I hope to play with it and get some more experience with that.
However, there are more elements which are somewhat problematic, such as code completion, code compilation, or, as the thread was mentioning, even things like inclusion of shared libraries. I think there's no standard for how to do that, because in Jenkins you can just add a shared library which is implicitly or explicitly included, loaded; however, in the IDE it's somewhat problematic, because you need to do some assembling, some other stuff. Therefore, on the one hand, I would like to ask you whether I'm exaggerating the situation and you think it's not a problem. The second thing is: do you see options which could be improved, in various areas? And the third thing is about declarative and scripted pipeline, because I think that declarative seems to be more promoted, and more features, such as the ability to restart from a given stage, are available only for the declarative pipeline. And maybe it's not worth it to invest in writing better tools or better mechanisms for the scripted pipeline, because the declarative pipeline, which for me is less like programming, is something that will be promoted more and more, and in the end the scripted pipeline will be deprecated. So, if I may: I totally agree with you on the transition from Job DSL's developer experience to Pipeline's. Job DSL has an amazing developer experience. I've actually tried on multiple occasions to hire Daniel Spilker, the maintainer and primary author of Job DSL and the guy who wrote pretty much all the developer-experience-related stuff, but he's not been available. It's very frustrating. He's awesome. I'm terrible at that stuff, which is part of the problem. Then, on the declarative versus scripted thing, because that is a philosophical thing, and then we'll move on to the more specific things. My vision, which is probably the general consensus amongst the developers working on Pipeline these days, and CloudBees in general, is that the ideal scenario is: the Jenkinsfile is declarative, but if there's more you need to do that you can't do out of the box with just steps and declarative, Pipeline shared libraries are exactly the way to extend it. We're never killing scripted as the way to write shared libraries, because there is definitely a need to be able to do more complex logic and conditional behavior and loops and various other things than declarative will ever provide. I mean, there are things we need to improve in declarative that we just haven't had the chance to, because of other priorities from our day jobs, but that I'm hoping to get our product manager to sign off on in the next few months: among other things, adding the ability to pass a variable in some form between stages, so you can capture the output of a shell step and reuse that in the next stage. But we still wanna keep declarative declarative, the structuredness aspect of it. It's not built for programming; obviously, it's not programming. And I believe that it addresses maybe 75, 80% of Jenkinsfile needs when combined with shared libraries. I'd like it to get closer to 100%; it'll never entirely get there. And there are some things we can implement in declarative that are just really difficult to implement in scripted. I mentioned stage restart. That's because declarative is structured: because we know that anything that happens is within a stage, and we know what has happened before that and what could happen afterwards, we can say, ah, this stage is an atomic unit. We can restart this.
We don't have to save the program execution state and resume it, which is how a proprietary plugin from CloudBees, the Checkpoints plugin, works, for either scripted or declarative. Actually, really, it only works in scripted; in declarative it gets confused, usually. But that is a niche feature. It is not as useful as you'd think, and it is absolute hell to maintain and support. And so there are ways we can do approximations in pure scripted Jenkinsfiles of the same things that we can do for declarative, but they're only approximations. They're all a lot more complicated to implement, and they all have a lot bigger potential bug surface to them. And so if it can take us a week to build a feature that is a little bit more complicated but is exclusive to declarative Jenkinsfiles, or a month and a half to build it for scripted, we're gonna build it for declarative. That's kind of why declarative is there. But again, we're not getting rid of scripted. Steps, et cetera, will always work in both. The execution of declarative is on top of scripted behind the scenes. It's only rare edge cases like stage restart where there's an opportunity presented by the structure of a declarative pipeline that is just not there in scripted, that allows us to build new things. Obviously, I am very biased when it comes to declarative, not gonna lie, it's my baby, et cetera, but I've seen enough usage of scripted and declarative to feel fairly confident that we've made the right call there. Now, in terms of the other things that you brought up here, code completion, IDE integration in general: yeah, we suck on that. I tried to pick up some work somebody did on the GDSL, you know, code completion for IntelliJ, years ago, shortly after I joined CloudBees and was no longer just a user of and contributor to Jenkins but was actually getting paid to work on Jenkins. And I could not figure out for the life of me how to do it right. I think the right answer is that we need honest-to-God IntelliJ, Eclipse, and VS Code plugins that can take advantage of APIs on the Jenkins side that can expose things like global variables, like code completion, like declarative validation in real time, by talking to your Jenkins master and saying, okay, what steps are available here? Stop that, Austin, I can see you. Not necessarily requiring that, but I mean, we can't tell what steps are available unless we know what the master has installed. We can't tell what global variables, what shared library stuff would be available, et cetera, et cetera. I was gonna add to that: if you could hook into that Pipeline Syntax API from your IDE, to be able to ping it with, you know, "I'm using the archiveArtifacts step", and have it return for you the data around what the available options are. Yeah, that's exactly the kind of thing I'm talking about. Pipeline Syntax is generated off of resources that are in Jenkins plugin artifacts. You don't have to connect to a live Jenkins master to get that; you just have to know which plugins are on that Jenkins master. And what versions. Right, you need, like, a dependency specification or something.
Yeah. One way or another, without worrying about the exact mechanism, whether it's talking to a live Jenkins master, or whether there's some endpoint exposed from the Jenkins master that provides a specification of what's installed that you can download and configure in your IDE, or you could just manually configure the IDE to say "here are the plugins and versions that I have installed", or whatever it is. One way or another, an IDE plugin that knows what steps are available, what their arguments are, what the structure of a declarative pipeline can be, and what's eligible to be used in what place would go a huge distance. I don't know; as I've said in multiple talks now, I have no idea how to write one of those plugins. That's something where I have been periodically begging anybody I can find, to see if there's anyone who knows anyone who has experience with that kind of plugin. There is a VS Code plugin for declarative validation, for example; it does talk to the Jenkins master, but that's all it does. Well, the one thing to consider is that we have the thing that generates that sort of information for your editor, but it's very minimal. If we actually got that working a little bit more, then rather than writing a plugin specifically for Jenkinsfiles, we could have it generate something generic that existing tooling can use; that might be easier to achieve. Yeah, but I think the main point in terms of IDE integration is that we at CloudBees have not invested a lot in it, and the community has not magically done it for us. And it comes up as an issue... yes, Austin? I'm sorry to interrupt, but if I never interrupt, I'll never get to talk. Yeah, you raise your hand, that's fine. I have solved this a different way, by shipping all of my pipeline code as resources, the bytecode or text, in a Groovy plugin. I have regular code completion, hooking off of the Java and Groovy plugins for Eclipse, for all of my extensions to Jenkins. I have submitted a proposal to talk about this at this year's Jenkins World. I can't show you anything now, but I have been working this problem. I just can't show you. All right, well then, let's definitely have a talk sometime. I'm going to plug for things like... because there are things that matter to Jenkins, especially all of the stuff that's in Pipeline Syntax, all the validation, that does not manifest in the code itself; Jenkins has to know to run the validator. So there is still a place for a much better experience, with a plugin that's actually aware that it's working with Jenkins extensions rather than just generic code. So my way does as much as I could do with what was already there, but I don't want to say stop, because there's totally still more to do. I just wanted to throw that out there. That's cool. The reason why you can't show it is because it's internal? Yes, it is not open source, alas. We are working on that. Basically the same dance that we had last year, because, yeah, it was around a year ago that I first saw the Jenkins-Spock work, because HomeAway, you're still... yeah, HomeAway is a customer of CloudBees. So we heard about it through our support people's grapevine, and they were like, I think your pipeline guys might want to see this. And then we started harassing, Kohsuke in particular started harassing, Austin and HomeAway to open source it and to submit a talk for Jenkins World. So let me just...
Let me just jump in here for a second. We have an agenda here, and we're at five minutes till the hour. I think we probably want to move on to talking about Martin's question, just for a minute. Yeah, let me just throw something in here: we'll follow up on this subject at the next meeting. Okay. So, I mean, I think the summary here for Marcin is that yes, your concerns are completely valid, and yeah, if you're having the problem, it's probably everyone having the problem. So input and help are welcome, and you seemed uncertain as to whether or not there was a need: there is. So let's continue, and maybe also put this on the list of things to talk about next time. Martin, or Martin? Martin, hello, yes, thank you. I'll try to be quick. So, this morning at the GSoC SIG meeting came a question from a potential student regarding the advanced build discarder plugin. So, if I could share my screen, if you don't mind... do you see my screen? Let me stop sharing and present you there. Yes, we see. Okay. Yes. So the question came from the student and one of the users. What I'm showing here is a pull request to the Jenkins core. It has been active for a while, and we were trying to figure out why the build discarder is not discarding, or how we could configure the build discarder to be more aggressive about discarding builds in multibranch pipelines. And more specifically, one of the use cases: one of the users said he wants to keep the branches that are merged to production. He needs to keep them for audit purposes, but all the branches that are not merged in the end could be deleted. So, I'm not a multibranch user, so it's like you're getting it from someone who has heard someone who has heard someone talk about it. So let me just see if I'm understanding: they want to treat merged branches as non-orphaned and preserve them indefinitely, and they want to treat closed-but-unmerged branches as orphaned and have them go away. Is that right? I think so. And then probably, more fine-grained, how many builds to keep for the branches that get archived but not removed. Does that sound in the right ballpark? Yeah, something like that. Unfortunately, I was trying to get the user to attend one of the meetings, but he cannot. So I think that's along those lines. He also said, if he could set some flag in his own pipeline, this flag would say "this branch must be kept" or "at some point this branch can go away". Okay, so mainly we're talking, we believe we're talking, about the branch level rather than the build level. I think so. So that's the orphaned item strategy rather than the build discarder, for what it's worth. And I'm not sure off the top of my head how granular we can get with that, whether that's something that we can do out of the box right now, or whether it's something that will need to be implemented, to be able to do more than just "if it's open, keep it; if it's closed, throw it away after some period of time". If you could start an email thread, maybe on the SIG mailing list if not directly with the user you were talking to about this, I'd love to follow up. Okay. Okay, I'll try to move the discussion to the email thread right now. It's in the GSoC proposal for the advanced build discarder strategy plugin. Okay, I'll do that. Thank you very much. Thank you.
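(For reference, a hedged sketch of the distinction being drawn here, in Job DSL terms: the build discarder governs builds within a branch job, while the orphaned item strategy governs branch jobs whose branches have gone away. The job name and repo URL are hypothetical, and note that the fine-grained "keep merged branches forever, drop unmerged ones" policy being requested is exactly what this does not express out of the box.)

```groovy
multibranchPipelineJob('my-app') {
    branchSources {
        git {
            id('my-app-source')
            remote('https://example.com/my-app.git')  // hypothetical repo
        }
    }
    // Branch level: what happens to branch jobs whose branch was deleted upstream.
    orphanedItemStrategy {
        discardOldItems {
            daysToKeep(30)  // orphaned branch jobs disappear after 30 days
        }
    }
}
```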
Does anybody have anything else they want to talk about right now? All right, then: unshare you, and share me instead. Again, not my cat. There we go. So, there's my action item: for the next meeting, all of the CloudBees engineering and product org is going to be at an offsite in Spain during what would be our regular meeting time next month. So I was thinking we move it up a week and have it at the same time, same day of the week, but the first Wednesday. I'll send out an email and ping on Gitter to make sure, but I just wanted to bring that up in case anybody had thoughts right now. Otherwise, thank you all very much for attending. Thanks especially to Austin for presenting Jenkins-Spock, and to Marcin for bringing up his questions and concerns; hopefully we can follow up on that usefully by the time we have the next meeting. And thanks, as always, to Martin for keeping us looped in with what's going on over in the GSoC world. So thank you all very much, and see you next time. Thanks. Thanks. See you.