And we are go. Welcome to this week's SIG meeting for April 17th, 2020. I'm Liam Newman. We have Marky Jackson, Mark Waite, Steven Terrana. Is that right? Okay. And that is like "piranha" the fish. Okay. Devin Nusbaum and S Foster. This meeting is being recorded. If you don't want to be recorded, please feel free to drop off. And we are also, of course, following the Jenkins code of conduct, which basically just says everyone be excellent to each other. And let's get going. Let's see here. We have actually two open items. Marky, let's go with yours first. We have the roadmap item, which, oh, let me actually take a step back. Last week, we had a conversation about changing the meeting cadence a bit and doing an issues walkthrough on a monthly basis. When I say an issues walkthrough, we'll get more into that as we talk about the roadmap and the areas of focus. But what I wanted to do was, A, put that out there to everybody; B, see if everybody agrees with that; and finally, C, see what we would like to set that cadence as. Just to throw an example out there: we would do it the last Friday of every month and say, okay, we will walk the issues that were created based on the focus areas. We'll go through, and you give sort of an update on your issues. We'll call it sort of the state of the SIG: "I'm working on pipeline development in an IDE, in IntelliJ, and here's what I've worked on, here's what I'm going to be working on," and so forth. So I want to put that out there to the community: A, do you think that's a good idea; B, what cadence would you like for it. And then I forgot what C was, but I think you get the gist. Okay. Any comments on that?
So if there is no negative feedback, what I would like to move forward with is the next action item, which would be for me to update our SIG landing page to say that on the fourth Friday of every month we do a roadmap walkthrough. I'll put verbiage in there that essentially states: this is what we're doing on the fourth Friday of every month. And then I will go and update the events calendar, so everybody will know what we're doing on the fourth Friday of every month. If that is okay, I'll do it. If anybody has any concerns about that, voice it here and I won't do it. So just to clarify, you're envisioning that we'll actually go through JIRA issue reports as a group and review them together? Or is someone bringing a summary of those? Tell me more. So if you look down at where we discuss the roadmap focus areas, ideally what will come out of that is a story, a story that you'll be working on. I'm going to use an example. One of the stories that I find very interesting is better pipeline development in an IDE. A great example would be the GDSL in IntelliJ. That's something I would like to work on. So at this monthly cadence, I would come and say: for the last month, I had these tickets or JIRAs that I created for tasks, and this is what I got done. You're basically just giving an update. We don't go and dissect them in detail or do code reviews; we're not doing that. It's just giving sort of a SIG state of the union, if you will. Got it. Thank you. Okay. Thanks for the clarity. So if everybody's good with that and there's no negative feedback. Sorry, I was on mute. Just to be clear, it's basically like a roadmap update, which might also include looking at issues in JIRA that have come in that people think should maybe be converted to something that will be on the roadmap. But it feels like it's more centered around the roadmap than it is around JIRA. And I like that.
I was worried it was a review of all the issues that have arrived, and that doesn't feel like an effective use of our time. Agreed. And that's what we don't want it to be. We just want to keep it sort of on the fence, where we're focused more on the roadmap, but at the same time we're not turning a blind eye to people who are opening issues for things that are pipeline-authoring centric. So that may be the time where, if somebody raises an issue and says, I feel this belongs in this SIG, we can then say: it doesn't fall into these focus areas, but it would in this other SIG, could you address it there? So if nobody gives a minus one to that, what I will do is put together an email and send it out to our SIG mailing list. I'll wait 24 hours to see if I get any feedback from that. And then by the end of the weekend, or maybe I'll even wait till Monday, I'll go ahead and put a PR in to update the SIG landing page. And I will tag not only the copy editors but everybody in this SIG, so they have a chance to say that they were a part of that review process. Great. Thank you. You're welcome. Thank you. And I'll let you add the action item to, I guess, the bottom here. Is that where you want to put it? Yeah, whatever. Okay. There we go. I'll add that once we're done with the meeting. Cool. Let's see. So that was the new one. Was that the roadmap tracking that we're talking about there? Yeah. Okay. So we're talking about how people can bring in new items. That's the point at which we'll be doing sort of the state of the SIG, and if new things come in, that's where we'll start them up, right? That's correct. Okay. From last time, my task was to create JIRA queries, which I did not get to. An urgent issue came in that I have been wrangling for the past couple of days. I wasn't sure that I was clear last time, and I at least wanted to clarify here: what I'll be doing is making JIRA queries for pipeline authoring issues.
And it's basically a question of what falls under the charter. We went back and forth on this last time: there are issues in pipeline, in the engine, in the underlying structure of things. There are tons of CPS-related issues and other things that are related to pipeline but not to pipeline authoring. And the focus of the SIG is definitely authoring, not the engine overall. There are some things where those two interact, an interface between them, of course, and so it's basically a matter of best judgment about where those lie. But I just want to be clear that what I'll be putting together is a query for the authoring issues. I'm a plus one on that. Yep. And that'll be for next time, so I'll put that back on the action items again. The other discussion we wanted to have was around the roadmap focus areas. Mark, you want to pick that up? Yeah. So as we were talking about the roadmap, one of the things we were trying to do was really highlight the focus areas that pertain to pipeline authoring. And we really wanted to start to drill down. We did our personas, we've talked about the maturity model, and now we want to start drilling down into what those focus areas are. A couple of the ideas that we put forth are things like pipeline development in an IDE; that could be a focus area. Syntax improvements, pipeline testing. There may be one or two more that we could do. But what I wanted to do, as we address these focus areas, is start to say: okay, what is it that you may want to do in that focus area? Because that becomes part of what the roadmap is. For example, pipeline development in an IDE. I know that one of the deficiencies that currently exists is in the GDSL: syntax highlighting and things like that, when it comes to pipeline authoring, are lacking. So I see a lot of work that can be done there, not only in IntelliJ but in VS Code, and I'm sure Eclipse as well.
So in doing that, I saw that that would be a good focus area to put forth to the community, and everybody can say: yep, I would love that, I feel that that's beneficial to the authoring of a pipeline. And then we go down to some of these other areas: syntax improvements, pipeline testing, unit testing and things such as that, and then overall documentation. What I wanted to do was put these forth to the community and get a consensus on, A, is what we've highlighted here beneficial; B, is there anything we've missed; and C, no, we're going the wrong way and need to do something completely different. I wanted that to be more of a community-driven discussion. I will say these are basically just following along with the focus areas that are listed on the SIG landing page. So, I mean, I think there are at least two or three more there. But if anyone wants to sort of throw things out here... I'm looking at you, Mark, in terms of the documentation, of course. But I guess I'll start. There are at least two or three syntax improvements that I can think of, that people have asked for, that we can make to declarative pipeline, that will be a big boon for people. I know of at least one that was an improvement for matrix that I filed just the other day. And there are a couple of others in terms of execution ordering: doing a, what do you call it, switch-case, basically, for a set of stages. So you want one out of these three stages to run, and rather than having to do really complex when conditions to make them mutually exclusive, you simply say: switch on these things. So I'm just going to type those in right here. If anyone else has things they want to bring up, please go ahead. So for me, one of the consistent topics in documentation is the absence of examples, and specifically examples for the exact keyword that the user is trying to consume.
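To make the switch-case idea concrete, here is roughly what the current workaround looks like in declarative pipeline: each stage carries its own `when` condition, and the author must keep them mutually exclusive by hand. This is a minimal sketch; the stage names and the `TARGET` parameter are invented for illustration, and a built-in `switch` syntax for stages does not exist yet.

```groovy
// Today's workaround: exactly one of three stages should run, selected by
// mutually exclusive `when` conditions that must be kept in sync manually.
pipeline {
    agent any
    parameters {
        choice(name: 'TARGET', choices: ['dev', 'staging', 'prod'], description: 'Deploy target')
    }
    stages {
        stage('Deploy dev') {
            when { expression { params.TARGET == 'dev' } }
            steps { echo 'deploying to dev' }
        }
        stage('Deploy staging') {
            when { expression { params.TARGET == 'staging' } }
            steps { echo 'deploying to staging' }
        }
        stage('Deploy prod') {
            when { expression { params.TARGET == 'prod' } }
            steps { echo 'deploying to prod' }
        }
    }
}
```

The proposed syntax improvement would let the pipeline declare once that exactly one of these stages runs, instead of repeating the exclusivity logic in every `when` block.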
So the challenge there is that authoring those examples usually means writing an HTML page of in-plugin documentation, in the plugin source code. And we're already struggling to get people to write, without making them also do HTML markup on this documentation they're creating. So examples are a challenge, and so is better documentation. It's powerful that it's bundled in a plugin; that means, though, that it can't really be improved until a new plugin version is released with the revised documentation. So we've got areas to improve there. Now, how would we handle use cases that span multiple plugins? I could imagine that there are common scenarios, and this might exist already, but I could imagine going to the doc site and, I can't even think of a good example, but: here's a common task I would want to do, you know, it requires two or three plugins. Where do you store that sort of use-case example? Those are great, because we put those in as how-tos; we give them a page on jenkins.io, we author them in AsciiDoc, we can put pictures in. It's a much easier authoring experience, actually, if we can get outside of the single-keyword case. So, Steven, I think your point is very good, and those cases are very much suited to putting them on jenkins.io and hosting them there. Do we have a backlog of how-tos that we know come up a lot, that we would like to write but haven't gotten around to yet? I could see that being a good first use case for someone to get involved with contributions. Like, if I knew that there was a backlog of articles that could be written, I could take 45 minutes to write one up pretty quickly, or something like that. Good, very good suggestion. There is not such a backlog currently.
I think what we have to do is create ourselves an epic in JIRA, for pipeline authoring or for documentation improvements, and then we can start inserting items into that epic. That then is an easy place to hook into the roadmap and display on the roadmap: hey, authoring improvements are coming here. People then click through the roadmap link and see, yes, here's the current progress on that epic. I'll over-engineer anything, but I'd sort of be interested in, like: can we look at all the questions that are asked in Gitter and see if there are themes, and then prioritize our backlog of how-to articles based on how frequently people are asking for support on that topic? Right, and not just Gitter, but also the documentation feedback that we collect from jenkins.io, because it has a provide-feedback link. We also have GitHub issues enabled on jenkins.io now. Those are all good information sources for prioritization. So it would be nice if we had a little boilerplate for this, so that every time someone asks a question on Gitter or these other places, we could just drop in: okay, hey, can you add a JIRA for this? Because if it becomes something that we have to do, or that we have to follow up on to make happen, that's difficult. When you say... I think it's a really great... I'm sorry, Marky, go ahead. No, no, please go ahead. I was going to say that I think this would be a great opportunity to add new contributors, right? If someone asks a question in Gitter or through the feedback, and we take the time to give them a really detailed answer and help them get to a solution, we can then say: you know, this is a great opportunity to become a Jenkins contributor. Here's the ticket that corresponds to it, and here's how to actually write a how-to page.
Take the information that we provided in this Gitter channel, or whatever forum this is, and then format it as a how-to article, right? Because you constantly have this funnel of trying to lead people to become contributors, to help collectively maintain, right? And a lot of people don't know how to get started. So I could see a common entry point to becoming a contributor: you get a lot of help from the community, we have a how-to page for, literally, how to write a how-to page, we paste that to them after we're done helping them answer their question, and then say: here's a great opportunity to get started. If you have time, feel free to open a pull request that includes what we talked about here, so that others can benefit from this answer. I would like to add to that and say that if we did do something like that and the person did open a PR, I would love for that sort of experience to be funneled to another SIG, the advocacy SIG, because I think it would be amazing to reward somebody, let's say with a pair of Jenkins socks or something like that. We have that ability in the advocacy group to reward people, and when you post about stuff like that on social media and others see it, they're like: oh man, that was that easy for them to do; I want to get involved. And then we start really filling that pipeline. See how I did the whole pipeline thing? I'm wrapping that all up here. I love it. I think visibility on social media is a huge incentive for people. It's just sort of human nature that there sometimes needs to be something in it for them to get some involvement, right? And visibility through, like, the Jenkins CI Twitter account retweeting something you did might be enough to get people over that initial starting hesitation. Completely agree.
I know in open source we sort of advocate for the greater good, and you shouldn't need to get rewarded for helping out with the community, but I do think it pays dividends in helping people feel appreciated. I can't tell you how much I treasure my Jenkins socks, and the other community socks that I have. Socks are a very big commodity. There are the T-shirts too, definitely, and I have lots of those. But socks in hand, special case. All right. So we have Gitter and Stack Overflow, and actually now Gitter rather than IRC. We have pipeline testing on here. I know that that is, I think, near and dear to many of our hearts, and it has also been asked of me multiple times at conferences. It's also a hard and thankless job. I'm not sure. Again, maybe there just needs to be a how-to on this. I think this is like unit testing of the pipelines that you write; is that correct? Yeah. Yeah, that and shared libraries. I've used the Jenkins-Spock library with some success. It's a framework developed by, I think it was Austin Witt, back when the SIG actually got initiated, that does some really clever stuff under the covers to let you write Spock tests for your Jenkins pipelines. And you can test shared libraries. I've used that pretty extensively, and it works pretty well. Okay. Again, maybe at least having something that's very visible out there. I mean, I know that there have been presentations, but maybe we need to have an online meetup on it; get this over to the online meetup folks to make sure that it's visible and easily found. And possibly even put it on the authoring SIG page as an FAQ kind of thing: here's one of the things that people ask about the most in this area. Yeah, and definitely, like, forking it into the Jenkins organization. I don't know if that's happened yet. I don't think it has. I don't think that I'm the right person to go do that.
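For reference, a Jenkins-Spock test of a shared library step looks roughly like this. This is a sketch from memory of the jenkins-spock project's documentation; the `vars/deploy.groovy` step and the `sh` invocation it checks are made-up examples, and the exact package and method names should be verified against the project's README.

```groovy
import com.homeaway.devtools.jenkins.testing.JenkinsPipelineSpecification

// Unit-tests a hypothetical shared-library step in vars/deploy.groovy
// without starting a Jenkins controller: every pipeline step is mocked.
class DeploySpec extends JenkinsPipelineSpecification {
    def "deploy step shells out to kubectl"() {
        setup:
            def deploy = loadPipelineScriptForTest('vars/deploy.groovy')
        when:
            deploy.call('staging')
        then:
            // Interaction-based assertion against the mocked sh step.
            1 * getPipelineMock('sh')('kubectl apply -f staging.yaml')
    }
}
```

This is the "mocks everything" style of testing mentioned below: fast, no Jenkins instance, but it cannot catch runtime issues like serialization errors.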
We should probably reach out to Austin and talk to them about whether they're interested in something like that. Right. So, real quick on testing: we want to apply the same kinds of testing we do during software development to pipeline development, theoretically. So Jenkins-Spock is like unit testing. You don't have to run the Jenkins test harness or anything; it just mocks everything, and you can do interaction-based testing. I think the next level up from that would be real integration testing, where we create some helpers inside the Jenkins test harness to do integration testing. Right. So Jenkins-Spock is not going to tell me if I put something in my pipeline that's not serializable and I'm going to get an exception thrown during pipeline execution, or something like that. So, you know, having an automated testing pyramid. I love that you just throw out this really technical, very specific detail that someone who writes a lot of pipelines knows about, just, you know, the way you do; this has happened to me multiple times. No, that's actually something people run into quite a ways down the road, but it's totally valid. I wanted to note that this is the level of your expertise. I made the mistake once of writing... So, I'm a plugin maintainer. I wrote way too much code before executing it in a Jenkins pipeline, and I got one of those NotSerializableExceptions. And there are opportunities for improvement in the logs that are output, to tell you exactly what thing caused the exception. I had to crawl through way more code than I should have to find the root cause of that. I don't think that happens to people frequently enough to prioritize it, but that was a rough weekend for me. Right. Okay, continue. Sorry. So, integration testing helpers for the test harness, that kind of thing, right?
Yeah. Like, I could imagine a scenario where, instead of... you know, usually the people using that test harness are plugin developers trying to test the classes within their plugin. But I could see us creating a facade that leverages the same thing to create actual pipelines that you're running. So: some helpers to load shared libraries, and then use the test harness to actually execute a test pipeline, to see if it's going to work as you expected it to. Okay. Does that make sense? Is this resonating with anybody? Yeah, I get what you're saying. It's a little confusing to me, though, because I don't totally get what would be different from what already exists in the test harness today. And, like, is Jenkinsfile Runner or something what would be useful for this? I guess I'm kind of curious what you can't do today with the Jenkins test harness that this would enable you to do. So I think you can do everything you need to with the test harness today. I think my suggestion is more around exposing a user-friendly facade, or having some tutorials for people to understand how they would use the test harness to test their pipeline. Because if you go to look up how to use it, you're probably going to find yourself looking at unit tests for plugins. So this is sort of a different use case for a framework that already enables you to do these things. But maybe we create some more methods that are targeted at users testing their pipelines, as opposed to their plugins. Does that make sense? I mean, I wonder, though, if it's something more like Jenkinsfile Runner that's built almost the same way as the test harness but has some special methods or something. I don't know if you'd want to build this into the test harness itself, but maybe you could have some library that depends on the test harness and adds additional stuff. Right.
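For context, writing such a test directly against the Jenkins test harness today looks roughly like this. A sketch only: the class name, job name, and pipeline body are invented, and it assumes jenkins-test-harness plus the workflow plugins are on the test classpath.

```groovy
import org.junit.Rule
import org.junit.Test
import org.jvnet.hudson.test.JenkinsRule
import org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition
import org.jenkinsci.plugins.workflow.job.WorkflowJob

class PipelineSmokeTest {
    // Boots a throwaway Jenkins instance for each test method.
    @Rule
    public JenkinsRule j = new JenkinsRule()

    @Test
    void pipelineRunsGreen() {
        WorkflowJob job = j.createProject(WorkflowJob, 'smoke')
        // The pipeline under test; a user-facing facade could instead load
        // this from a real Jenkinsfile plus shared libraries.
        job.definition = new CpsFlowDefinition("node { echo 'hello' }", true)
        j.assertBuildStatusSuccess(job.scheduleBuild2(0))
    }
}
```

The facade being proposed would mostly hide the JenkinsRule plumbing and add helpers for loading shared libraries, which is the gap in today's plugin-oriented documentation.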
I think that's what he's talking about. Yeah. I think either is a good idea. I think we could create an extension of the test harness, so it's maybe packaged outside of it and consumes it. But Jenkinsfile Runner would work too. It might be less performant, because you'd have to spin up a new master for each pipeline you want to test. I'm definitely not an expert at this, so I defer to you to tell me which is the better idea; I'm just thinking through some of the integration testing opportunities. I think it just depends on how users are expected to use it. Like, are they running this as some standalone tool, or is this part of their code base and they're pulling it in as a test library? It just depends. I think the first story in that epic would be coming up with that design spec and talking through the user experience for it. And I think we just found the two owners for that. Yes. Possibly. Let's see. I was going to say, it's nice to meet you, Devin. You've reviewed one or two of my pull requests, so it's nice to finally talk outside of GitHub. Yeah. Well, the good thing about this is that it seems like there are a lot of feels, for lack of a better phrase, in the testing integration that I think would really benefit the overall community with some more discussion from an architectural standpoint. So I'm going to circle back around to one more thing that I want to talk about. This is where syntax and internals meet each other, and it's related to what I was referring to before. The most common complaint, I don't know if it's the strongest complaint, but it is the first complaint, that people have made to me for about three years running is the behavior of parameters in pipeline. Specifically, that when you add a parameter to your pipeline, it doesn't actually get added until you run your pipeline at least once. Declarative. Yeah, exactly. And it's a speed bump, in particular for new users.
Everyone's like: I added the parameter, why is it not there? Right? And the reason for it is pipeline internals, basically: the pipeline doesn't know that it needs to do something until you run it. And then it says, oh, wait, okay, now I'll create that thing. But you've already passed the point at which parameters would get collected. I wonder if you could first do a run that's like a dry run: when the workflow run kicks off, it could do a dry run and see if there's a properties block before actually executing the flow definition. Yes, that's the same thing I thought of. I've got some notes somewhere around for this. Yeah, do a dry run. Now, what declarative does, because it can actually analyze the whole pipeline as it starts, is say: hey, I noticed there's a new parameter; if there's a default for that parameter, it'll use it. So it gets that far without doing a regular run. It says, okay, I'm going to supply this default and run the pipeline, and then the next time you run, it'll ask for those parameters. The difficulty here is that, even with the dry run you're describing, once you kick off the run, you don't get back to the UI for it to be able to ask for parameters, right? So there's something to be done there. This is sort of a syntax thing, where the pipeline is not behaving in a way consistent with the syntax, but it's also an internals problem that we need to figure out, right? I wonder if you could augment the input step, because through input you can provide information to a running pipeline. I know the better solution would be to figure out parameters before you run the pipeline.
But if that's a huge refactoring, maybe you could augment the input step to pop up when that properties block is found, figure out what those parameters are, give the user an input field like an input step, let them fill it out, and then move forward on that first run. Does that make sense? Well, yeah, it might, and it has its own limitations, but that's an interesting point. Yeah, I think that fixing this is probably actually going to be extremely difficult. Right. Just because it's not the way Jenkins is designed to work. But one thing that could help, if so many people are hitting this and are confused by it: we could try to intercept the errors that are thrown when these parameters are missing in declarative, and check and say: oh, the properties step was used but this parameter isn't here, so this must be the first run before the parameter is provided, and show a little message that says, hey... Sorry, you're breaking up, Devin. Basically, the big idea is just to intercept and provide information to say: hey, this is normal, here's what to do. Because a lot of the time it's not a big deal if you know what's happening; it's just that if you don't know what's happening, you think you did something wrong. Right. I might also give a hot take: what if we just remove that functionality? If it's causing more confusion than value, why don't we advise that you use Jenkins Configuration as Code or Job DSL to configure your job parameters, and just take it out of pipeline, because it has so many issues associated with it? I don't know if I'm advocating for that; I'm just throwing it out there that, if we zoom out, there is another alternative here, which is to stop supporting that. I mean, I think that the parameters to a pipeline are not part of the pipeline. Yeah, like, don't try using properties from declarative.
I mean, there's still always going to be a use case for things like multibranch jobs, where something like the properties step is necessary in some cases, and those will still always have this weird behavior. Well, these are all interesting. Anyway, maybe not removing properties, but I wanted to bring this up as another item for the list of things to figure out or improve in some way, because it's a pain point for people when they're authoring their pipelines. And I think even if it's just a message, intercepting the errors and giving a clear message, that would also be a huge improvement, because then people know what happened, right? Or maybe even aborting the build: maybe you can configure, as part of the properties step, that if it's the first run, stop the build and give a good error message saying they just need to rerun so they can enter the parameters, and make that optional functionality. Yeah, or when it changes; that's another one. Abort, dry-run and retry, or even just abort. Basically, I would only do that if the parameter didn't have a default value, right? Yeah, and you could even pass it another block argument, something like abort-if-changed, so you have to explicitly say that you want that to happen. I remember there was a very long roundtable about this at DevOps World in Lisbon; I'm sure it wasn't the first one. Yes, that's what I've been referring to, yes. It came up at both; we had two different roundtables and it came up at both of them, yeah. Well, yeah, there was a pitch about piggybacking, if multibranch is involved in any way, off the fact that multibranch already pulls and indexes every single Jenkinsfile.
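To illustrate the first-run behavior being discussed, consider a minimal declarative pipeline (the `GREETING` parameter name is just an example): on the first build after this `parameters` block is added, the job does not yet prompt for the parameter; declarative falls back to the default value for that run, and only subsequent builds offer the parameter in the UI.

```groovy
pipeline {
    agent any
    parameters {
        // Not visible in the "Build with Parameters" UI until the pipeline
        // has run once; the default is used on that first run.
        string(name: 'GREETING', defaultValue: 'hello', description: 'What to say')
    }
    stages {
        stage('Greet') {
            steps {
                echo "GREETING is: ${params.GREETING}"
            }
        }
    }
}
```

This fallback-to-default behavior is what lets declarative get further than scripted here, but it is also the speed bump that confuses new users, since the first run silently uses the default.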
And at that stage, you could have an optional kind of: please parse my parameters and set them now, as you're creating the job, or something like that. During scanning, yeah. I'm not sure to what degree parameters can be dynamic; that might not be possible, I'm not sure. They can't; they're not particularly dynamic, and thank you for mentioning that. I keep thinking of this in terms of, okay, within this one job, but the way we actually suggest most people run pipelines is via multibranch. So that would let us have something that is running and doing things before that pipeline launches, and if you're using declarative, then you could actually know beforehand. So thanks for bringing that up. Were there any notes from that Lisbon session? There are; I've got them around on my computer, in my cloud somewhere. I just need to go find them. I was probably the one that said that, so thank you for remembering it. And now it's back in here; this is good. That was the key point. The beauty is that now that we've started to break out what syntax improvements look like, it makes it easier for somebody to take up that cause, and they don't have to go reinvent the wheel. Yep. Does anybody have any additional focus areas they think would be beneficial to the roadmap, in line with pipeline authoring, that are not represented here? I'm feeling like static code analysis might be an interesting one. A challenging but interesting one, because I see a lot of common mistakes that seem formulaic enough that you should be able to catch them through linting. The first example that always comes to mind for me is: someone opens a node block, and then they call fileExists, or they try to read a file in the workspace, but they never checked out from SCM or unstashed anything, and they have a distributed build architecture.
There's no guarantee that the file you're expecting is actually going to be there on the same node. First of all, does that example make sense? If I've got three build agents and I checked out my code in one node block, then I call another node block and try to do something with that code, there's no guarantee that those files are still there. I see that issue all the time, and it causes a really tricky problem for people to solve, because it'll work sometimes but not all the time. If you get lucky and the next node block uses the same agent as the first one, your pipeline is going to work; but if it uses a different one, it's going to fail. So from a static code analysis perspective: you call a node block and then you call a step that's trying to read a file, through fileExists or something, but you didn't unstash or check out... I don't know, I'm rambling at this point. No, no, there might be opportunities to identify best practices that are formulaic. There are going to be complex examples you've got no chance of catching from a linting perspective, but there are some for which I feel like we could define a quality profile in SonarQube, for example. I don't know. I think that's a good idea. An interesting one, certainly. I don't know if I have enough examples of that for it to be worth doing. So maybe the first step there would be thinking through what those rules would even look like, to see if there are enough of them to justify it. Like, we would have to have an opinionated quality profile, or a set of rules that we declare best practices, that we would want to warn people about. I think the person that takes up this work, at least for this particular thing, should start at an opinionated level. And then as they put forth to the community what their opinion is, that's where people will all of a sudden go: oh no, I've got feels about that, I want to do it this way. That's where it generates the conversation.
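The workspace-affinity mistake described here, along with the usual stash/unstash fix, looks roughly like this in scripted pipeline. A sketch: `build.sh` and the stash name are invented for illustration.

```groovy
node {
    checkout scm
    // Preserve what later node blocks need; workspaces are per-agent,
    // so files in this workspace are not guaranteed to exist elsewhere.
    stash name: 'sources', includes: '**/*'
}
node {
    // Without this unstash, fileExists('build.sh') only succeeds if this
    // block happened to land on the same agent and workspace as the first,
    // which is exactly the intermittent failure a linter could flag.
    unstash 'sources'
    if (fileExists('build.sh')) {
        sh './build.sh'
    }
}
```

A static-analysis rule here might be: flag any node block that reads workspace files without a preceding `checkout` or `unstash` in that same block.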
You can kind of do this through plugins. I looked into it a while ago. I didn't actually end up going with it, but you can extend some of the declarative model validators. So you could do something crude like, oh, you don't have a post cleanup step. I think the only option you have is to hard-fail the build, but it would come up: you need a cleanup step. And they'd go, oh, I'll add one of those then. I wonder if, for scripted pipeline, so I'm very biased towards scripted pipeline personally, I know that might be a less supported opinion in the community, but if you could start doing something like CompileStatic and then generate an abstract syntax tree from the scripted pipeline, you could do those same kinds of scans just by traversing the syntax tree. In general, I think Groovy CPS doesn't work with CompileStatic; it just fundamentally breaks. Well, that takes that idea off the table. Now, I will say, bear with me, if you are working in declarative, we do actually do syntax tree creation and munging. So it would be possible to do some degree of analysis into what's going on, right? It's just a question of how much. The fix that I put behind a feature flag for the code too large error that people have encountered works only in specific cases that I know are safe, because there was a limitation to how much I could understand. But the reason that works is because I'm doing syntax analysis of, like, okay, are there any places where someone is using a def variable? If so, I'm not safe to do the closure construction to make this happen, to fix the code too large error. But if they aren't, then great, and I generate completely different code with classes and closures to work around that code too large limitation in the Java class binary file structure. So, as long as we're in declarative, we can do some of that at least. That sounds like a great idea for a rule, right?
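As a sketch of what such a rule might look like as standalone tooling (this is a hypothetical, crude text-level check, not an existing Jenkins feature; a real implementation would walk the Groovy AST rather than use a regex):

```python
import re

# Matches untyped local declarations like `def foo = ...` at the start of a line.
# Purely illustrative: a real lint would parse the script, not pattern-match it.
DEF_VAR = re.compile(r'^\s*def\s+(\w+)\s*=')

def find_def_variables(pipeline_text):
    """Return (line_number, variable_name) pairs for untyped `def` declarations."""
    hits = []
    for lineno, line in enumerate(pipeline_text.splitlines(), start=1):
        m = DEF_VAR.match(line)
        if m:
            hits.append((lineno, m.group(1)))
    return hits
```

A rule like this would surface the "you used a def variable, consider strongly typing it" advice discussed here, since untyped variables are what block the code-too-large closure rewrite.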
Like, you used a def variable, you should strongly type your variable. Or actually, yeah, anyway, let's go ahead. Yeah, Devin, you were saying? I was going to say, one other thing is that if you don't really care about Groovy code, only pipeline steps and things like that, and you're okay with only trying to catch errors after the build happens, you can always look at the flow graph to understand what happened in terms of pipeline steps. And maybe that's good enough for figuring out some kind of lints that are specifically around step usage. That's interesting. I feel like people get in trouble when they write a lot of Groovy code, so there are a lot of places where that would be helpful. I do think that there are a lot of opportunities for people that are going and doing some more, let's call them creative, pipelines to identify some best practice violations. I'm definitely guilty of creating creative pipelines. We all are. But so, back to that idea of a dry run, right? I know with scripted you don't generate the same abstract syntax tree you get with the AST transformations in declarative. But if you were able to do a dry run, maybe on that first dry run you could do some metaprogramming or look at the flow graph or something to basically create an equivalent of an abstract syntax tree: we did a dry run, here are all the methods that got called, and stuff like that. So it'd have to be a little bit of a mature dry run, right? Like, if it's a closure, you're going to have to keep that closure instead of just seeing that there was an invocation of something that takes a closure parameter. But I don't know, that dry run might give you an opportunity to do the same thing for scripted. It definitely would not be a trivial implementation. Okay. So I think this is a good example of where you and I differ in our focus and our opinions, Stephen, and that's okay.
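The post-hoc flow-graph idea could be sketched as a tiny after-the-fact lint. This toy assumes you have already flattened one node block's flow graph into an ordered list of step names (the real flow graph from the workflow plugins is richer; the step lists below are simplified):

```python
# Steps that read from the workspace, and steps that could have populated it.
WORKSPACE_READERS = {"fileExists", "readFile"}
WORKSPACE_WRITERS = {"checkout", "unstash", "writeFile", "sh", "bat"}

def lint_node_block(step_names):
    """Flag workspace reads that happen before anything populated the workspace
    within a single node block."""
    warnings = []
    populated = False
    for step in step_names:
        if step in WORKSPACE_READERS and not populated:
            warnings.append(
                f"'{step}' called before checkout/unstash in this node block")
        if step in WORKSPACE_WRITERS:
            populated = True
    return warnings
```

Because this only looks at what actually ran, it works for scripted and declarative alike, at the cost of catching the mistake after the build instead of before it.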
I would rather people in general focus on declarative. But if where you want to go with things is, as you say, scripted, which is near and dear to your heart, then that's still totally valid within the framework of this thing, and I think we should still have it on the list. I would support that being on the list. I have very selfish reasons for it being near and dear to my heart. So I might also have to reevaluate some of my implementations to see if I can adopt declarative a little more than I currently do. And as I was saying, I don't know if you were here last time when I said it, where the gaps are for declarative is also what's interesting to me. So if there's something that you can't do in declarative that should be possible, that's definitely where the syntax improvements come in. So we're at about 10 minutes to the end. I think we've got a good list. Is there anything else that we wanted to... thanks for bringing up the static code analysis, because that is definitely also very high. I'm putting a note under pipeline testing, but it might even be on its own. That probably really needs to be its own thing in terms of authoring, because there's so much we can do to improve things there. So is there anything else people wanted to add? Foster? I threw down on the mailing list sort of a proposal for something on the job level of things in Jenkins, about managing pipelines at certain scales, and when you want to start extracting things into different pipelines rather than having a giant pipeline. Oh, right. I saw that and I did not get into the details of it. I'm grabbing it right now. Great. If you can drop a link. That was an interesting point, that we have this whole system for running things in Jenkins that was based around when it was freestyle jobs, where you had trees of jobs. So why are we not doing more to leverage that facility, right? Is that what you were sort of pointing at? Yeah.
In my specific use case, it becomes more of a parallel scaling problem rather than a length scaling problem. I just dropped a link in the Zoom chat. Apparently, it doesn't... I do think that does sort of align with... Go ahead. You go ahead. No, that's an interesting idea, like having an abstraction for being able to orchestrate multiple pipelines at the same time. Right. I try my best not to bring up the Templating Engine in this meeting, but when I think about it, you know, JTE lets you create pipeline templates, and each of those steps in your template could be its own job that gets executed. And then you could support declarative syntax inside those and leverage that same sort of framework to be able to orchestrate multiple jobs together at a higher level of abstraction than a single pipeline. Okay. So, Foster, what's your suggestion here? I mean, I guess you have a proposal. The suggestion is kind of to figure out what the philosophy is, whether this aligns with that philosophy for where declarative wants to go, what it wants to handle. Okay. I would like to discuss this further, but I think we should... Let me put this on the agenda for next time. People can read over it and kind of delve into it, but you can do a little bit of emailing back and forth. Sounds good. Yeah. I mean, it definitely sounds interesting. Off the cuff, I'm not opposed. It's more just like, okay, what falls out of this? And I don't know what that is yet. I added a link to this in the meeting notes. So if everybody can take an action item, we'll move this to the agenda for next week. Okay. This will be at the top of the agenda. Okay. I'm going to put that on. I think Oleg noted in the Gitter channel that he wasn't sure about this, because he thought there were ways to support that use case already. So he might have something to add in the future. That's fair. Cool. But at least, yeah, let's take a look at this for next time.
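For reference, scripted pipeline can already orchestrate separate jobs with the built-in `build` step, which is one way to approach the trees-of-jobs idea today (the job names here are hypothetical):

```groovy
// An "orchestrator" pipeline that fans out to smaller, independent jobs
// instead of one giant pipeline. `build` triggers a downstream job and,
// with wait: true, blocks until it finishes.
node {
    parallel(
        linux:   { build job: 'compile-linux',
                   parameters: [string(name: 'BRANCH', value: env.BRANCH_NAME)] },
        windows: { build job: 'compile-windows',
                   parameters: [string(name: 'BRANCH', value: env.BRANCH_NAME)] }
    )
    build job: 'integration-tests', wait: true
}
```

The open question in the proposal is whether something with a higher level of abstraction than hand-wiring `build` calls belongs in declarative itself.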
And is it Stephen Foster? It is, yes. Okay, Stephen. Okay. So I want to make sure we're calling you by your first name and not just calling... Well, we've got two Stevens, so... Yeah. I tried to unmute myself earlier, but I was having some Zoom troubles. Okay. Yeah. Okay. All good. We will review this. Does anybody have anything else before we close down the meeting for today? Awesome feedback from everybody. Yeah, this has been a really good meeting. This is what makes it really worth it, when we really start getting into the real technical weeds of what we want to do. Yeah. Awesome. Well, thank you all very much. The next meeting will be next Friday at 9 a.m. Pacific again. And I hope to see you all there. Thank you very much, everybody. Have a great weekend. Be safe. Bye. Cheers.