So yeah, thanks for joining the Jenkins contributor summit again. Today we have a discussion about the Jenkins delivery pipelines. Two days ago we briefly discussed the topics we would like to cover: first, the problems we have experienced; then what we could do, taking a look at JEP-229 from Jesse; and then looking into our pipelines and discussing how we want to improve them and what infrastructure we need for that. For me, the main outcome of this session would be a list of initiatives we would like to plan for the next year and eventually put on the roadmap. And if we could also build consensus about how we want to address the security concerns for delivery, that would be great.

Could you explain what the security concerns are? I thought that the JEP explains very nicely how the infrastructure works. You mean 229? Yeah.

Yeah, so long story short: currently developers release from their own machines or from their own automation setups. What that means is that we basically trust maintainers to build the components properly, including plugins and libraries available within Jenkins. Yes, the Jenkins core itself is built by an automation environment. But even for the Jenkins core, we include components, say Jenkins modules or certain libraries, which are built locally. And finally it means that, assuming a developer machine gets compromised, the Jenkins core or a Jenkins plugin may potentially get compromised as well. This is one of the main concerns which was raised at the previous contributor summits. And the idea was that we actually need to automate the build environment so that we could do releases from a controlled space, so that we do not rely on developer machines for doing that. JEP-229 addresses that, unless I'm missing something. Well, JEP-229 is for continuous delivery. But yes, it also provides automation, so JEP-229 can be one of the answers.
So JEP-229 basically says that we would be delivering Jenkins components via pipelines in GitHub Actions, right?

Yes. Yeah, the motivation section certainly talks about the use of local builds as opposed to builds via automation. It is just one particular approach to doing this. There are certainly lots of other approaches that are possible. This one seemed like the path of least resistance given the infrastructure that we had available to us, basically.

So, Jesse, just a quick question. If I understand all these concerns correctly, the idea is that we want essentially the developer-laptop-independent release environment of JEP-229, but perhaps some people might not be comfortable with the model of "if you merge a PR, it will automatically be released". Wouldn't that be something that can be configured in the GitHub Actions workflow definition, to say "don't trigger on merge" or something along those lines, so that we still have largely parity on the infrastructure and tooling and all of that, but it requires a click in the GitHub UI rather than just the merge?

Yeah. The default setup, if you go through the docs and literally copy and paste the default workflow definition, is going to have the behavior that you have a choice of manual deploy or automatic deploy upon push, but only if the changelog includes at least one change in a user-facing category, like a bug or enhancement type of thing. Some of the motivation there, I think I talked about that, is just that I at least have often seen the case where somebody files a pull request, it goes through a lot of review cycles and finally gets approved, somebody merges it, and then three and a half months pass and they post a comment on GitHub like: by the way, I actually needed this for some reason. Can you please do a release?
And then it's another process for the maintainer to go and look that up and get everything ready and do a release. So the continuous delivery part of the motivation is to avoid that delay. But yeah, you can simply comment out the push trigger if you don't want any automated releases, in which case you still have the workflow_dispatch trigger that GitHub Actions provides, where you go into the Actions tab of the repository, click on CD and click Run, and it starts a deployment at that point.

But if I understand correctly, if we put the configuration at the GitHub organization level, then the plugin maintainer does not have that customization on their specific plugin, right?

GitHub Actions, as far as I know, doesn't have anything that lets you do this sort of thing at the organization level. It's per-repository configuration. I also don't think... you can share configurations if needed, and share secrets. Well, yeah, but the point is that we don't want the secrets to be shared, for one thing, because we want fine-grained permissions control. And the other thing is that people need to be aware that this is happening. So setting it up at the organization level for the existing jenkinsci org, with the 2000 plugins that we have, is simply impractical. Yeah, it's definitely something you have to opt into, for several reasons.

So anyway, we went into details of JEP-229. Maybe before we proceed, it makes sense to ask whether everyone is up to speed with JEP-229, or should we go back and provide some introduction so that everyone can participate in this discussion? I read some of the stuff; there is plenty of stuff to read there. That's fine for me. Okay.

So for me, JEP-229 is one of the potential implementations we could use. One main concern is that doing releases from GitHub Actions is probably not something we would like to see in the Jenkins project. Well, it's the path of least resistance.
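To make the trigger discussion above concrete: a minimal sketch of what such a per-repository CD workflow could look like, with the push-style trigger commented out so only manual dispatch remains. This is illustrative only; the real JEP-229 workflow is more involved (it waits for the Jenkins CI check and consults Release Drafter), and the secret name here is an assumption.

```yaml
# Illustrative .github/workflows/cd.yaml sketch -- not the actual JEP-229 workflow
name: cd
on:
  workflow_dispatch:      # manual trigger from the repository's Actions tab
  # check_run:            # uncomment for automatic deploys once CI has passed
  #   types: [completed]

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy
        # MAVEN_TOKEN stands in for the per-repository Artifactory secret
        run: mvn -ntp -DskipTests deploy
        env:
          MAVEN_TOKEN: ${{ secrets.MAVEN_TOKEN }}
```

With only workflow_dispatch enabled, anyone with write permission on the repository can trigger a release from the GitHub UI, but a merge alone never deploys.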
Now, don't get me wrong: GitHub Actions is great for various kinds of automation. At the same time, we are talking about the Jenkins project, so I would rather see the final implementation done with Jenkins.

Are you saying for the operation of it? Well, it actually does use Jenkins for the CI part. The deploy phase deliberately bypasses any kind of testing, because it's only using a commit that's already been verified by the Jenkins CI, which is assumed to be doing the cross-platform testing, possibly work with Docker, whatever is normally defined in the Jenkinsfile. So it's restricted to just the actual deployment step, mvn deploy.

You certainly could set something up like this with Jenkins. I think the tricky aspect here is that we want to have a secret that's saved per repository, and we can't really use an organization folder the way we do on ci.jenkins.io for this purpose; I think it just doesn't work in terms of the access control. It's probably something where we could develop some sort of Jenkins feature that tries to mimic the access scoping of GitHub Actions, in terms of picking things up per repository and also using secrets per repository, or getting secrets from some other store that we would have to build for that purpose. I mean, it's certainly possible. It would just be a larger implementation effort to set something up like that, because of the specific way that we have this big organization with lots of repositories, each with its own contributors, and we don't have any kind of shared level of trust. The Artifactory permissions are scoped per repository as well, so it's set up so that you can only use your secret to deploy to your space in Artifactory. And then it's also making other assumptions about the particular version scheme and so on.
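The version scheme mentioned here derives the version from the git history rather than from manually bumped numbers. A rough sketch of that derivation as a tiny shell function; the exact `<revcount>.v<hash>` format string is from memory and may differ from the real tooling.

```shell
# Sketch of a JEP-229-style version string: commit count plus short hash.
# The "<revcount>.v<hash>" composition here is illustrative.
jep229_version() {
  local revcount="$1" hash="$2"
  printf '%s.v%s\n' "$revcount" "$hash"
}

# In a real checkout the inputs would come from git, e.g.:
#   jep229_version "$(git rev-list --count HEAD)" "$(git rev-parse --short HEAD)"
jep229_version 123 abc1234   # prints 123.vabc1234
```

Because the version is a pure function of the history, two different machines building the same commit agree on the version, and no "bump the version" commit ever has to be pushed.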
So a big part of the point was to not use Maven Release Plugin, which has its own complications in terms of pushing extra commits, a non-atomic flow, and that kind of thing. There are lots of other approaches you can take, with or without Maven Release Plugin, with different kinds of automation systems to actually do the deployment. This was simply the thing that was literally easiest to get up and running as quickly as possible, because the only infrastructure from the Jenkins project that it needs is the tool Daniel wrote to provision an Artifactory secret per repository on demand. And that's something we could just add into the permissions updater script that we already had, without building anything entirely new. That's right.

So yeah, while we're on GitHub Actions: another problem for us would be reporting, because GitHub Actions still doesn't provide the full reporting you may expect from Jenkins. So if we are about to extend pipelines with, for example, analysis tools or dynamic testing, reporting might be a concern.

It doesn't do any of that. That's all done by Jenkins. Yeah. I mean, you don't even have extra commits if you release this way. And you can attach statuses to any commit or pull request, or to the repository, so this does not have to happen during the release. In fact, we can run arbitrarily complex and long-running analysis asynchronously, and that doesn't have to delay the release process. People would probably riot if their releases took two hours because we added all of those tools in there. Fair point.

So how would this impact security releases, from a plugin maintainer's perspective and from the testing of them ahead of time, where those releases are normally staged in a different repo? Well, I guess this hasn't come up yet, because I don't think we've had any security flaws in the three plugins that are using it so far. But I think it shouldn't have any impact.
I mean, for security releases we're still, Daniel, correct me if I'm wrong, doing that manually, but that would just be an mvn deploy from someone on the security team, going to the cert staging repository instead of public releases. It is pretty similar. At the moment, where all of the regular releases are done with Maven Release Plugin, we also do the staging with Maven Release Plugin, but with slightly different targets, the stage targets, plus we override the destination repository and we do not push the commits. So we have the private staging repo in Artifactory and upload the artifacts there; we have the private cert GitHub repo, and at first you only have the tags and commits locally and you need to manually push them there.

So we probably want a more stable version of the standardized release environment than what we currently have to stage with, and we definitely want to continue staging, because it makes release days much less stressful. And we would just need to adapt: if a plugin uses this release environment, independent of whether it has the CD trigger or is manually triggered, I think we can probably pass the necessary command-line arguments to have a similar-looking release that we can end up publishing.

One of the potential concerns here, however, is the potential for conflicts, especially in the version history. Right now we ask maintainers during the staging period to not create new releases of their plugin, which at the moment requires a few deliberate steps on their part anyway. I would expect it to be more common for them to perhaps accidentally merge a pull request that looks good during that period and create a release. So the pattern of version numbers needs to be such that we have a reliable way to not end up with conflicts there.
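The staged security deploy described above would presumably boil down to overriding the deployment target on the command line. A purely hypothetical sketch, using the classic maven-deploy-plugin `altDeploymentRepository` syntax; the repository id and URL are placeholders, not the project's real values.

```shell
# Hypothetical sketch: pushing a staged security build to a private repo.
# "cert-staging" and the URL are placeholders, not real project values.
mvn -DskipTests \
    -DaltDeploymentRepository=cert-staging::default::https://repo.example.org/cert-staging \
    deploy
```

The point being that no tag or commit needs to be pushed anywhere public for the staged artifacts to exist, which matches the "tags and commits stay local" workflow described above.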
I guess, I mean, yeah, it's definitely true that you would have to either remember to disable the automatic trigger or just hold off on merging pull requests for a few days. That's something we should think about. I think the actual process should be slightly simpler, because we don't have the weird situation that we do with Maven Release Plugin, where we have staged "prepare for release" and "prepare for next development version" commits in the cert repo that are there just to produce a version, which you then have to merge with stuff in the public master branch. That part should be somewhat simpler: you would just be pushing the actual security fix commit to the public repo and merging it into the current HEAD, or it would be a fast-forward merge if there were no other commits in trunk at that point.

As far as the version number goes, it just encodes what the git history looks like, which is independent of where that history was built. So if you haven't done anything new in the public master, then the staged version is guaranteed to be newer. If you have done something else in the public master, well, it's the same as the current situation, except it's a little bit easier maybe, because once you merge in the security commit on top of that, that will generate a new release which includes the public changes from the past few days plus the security fix. And there's essentially random ordering between the staged security fix plus one additional commit, and the one additional non-security release in public, because it distinguishes by the SHA. Yeah, if your git history branches, then there is no total order between the two branches; you can't decide which version is newer than another. I mean, it's possible to override the version prefix when you're staging to cert, if that was useful for some reason.
The prefix doesn't help, because we need the sorting to work out with the update sites, unless we override how the version number works. Well, you can override the prefix on the command line. But that in itself isn't going to help, because when the security vulnerability is announced, people who have already upgraded, who've got a newer plugin version that had been merged to master without that security fix, can't upgrade to that security version: they'll end up with an older or incompatible one. They'll have to wait for a release that hasn't yet happened.

So basically, yeah, that already happens today. For example, if a plugin is on version 5 and we create the security fix 5.1, and at the same time version 6 is created, we will also need to create a version 6.1, or a version 7 with the security fix, afterwards. But Jesse brings up that the inclusion of the short hash in the version number prevents collisions with near certainty, because the security fix will necessarily have a different hash than whatever release is done in public at the same time. So that is actually less of a problem than it is today. The only problem is that it is much easier to get into that situation through accidental merges. Yeah, I think we would probably want to just include that in the security checklist for maintainers: disable the automatic trigger, or hold off on manual deploys. Yeah, the checklist that they already follow today.

Can I just ask one question? Yeah, go on. I was just wondering: what prevents us from using Jenkins instead of the GitHub Actions workflow is the way we manage credentials, right? Is it something that could be simplified if we had a specific Jenkins instance with JCasC, so we would not use ci.jenkins.io but a separate plugins release instance where people could provide their own credentials? Providing your own credentials? Sorry, it's not actually your own credentials.
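Coming back to the version-ordering point for a moment, the problem is easy to demonstrate with a plain version sort (GNU `sort -V` here is just a stand-in for whatever comparison the update sites actually perform):

```shell
# 5.1 (the security fix) sorts below 6, so anyone already on 6
# cannot "upgrade" to 5.1; a 6.1 or 7 containing the fix is still needed.
printf '%s\n' 6 5.1 6.1 | sort -V
# prints:
#   5.1
#   6
#   6.1
```

This is why the fix has to be re-released on top of every diverged line, regardless of which release process is in use.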
We create credentials scoped to that repository only. So it's better than personal credentials. Okay.

Yeah, I mean, you could build something like this. As far as I can tell, and correct me if I'm wrong, I'm surprised that you're bringing this up. I understand the optics of it, as Oleg mentioned to some degree, but we're shedding whatever services we can because the infra team is so small and already overloaded, and now we're saying: well, let's just host another Jenkins instance. To which I was responding: yeah, in a second VPN, because it's not going to be public, and it's also not going in the same VPN as everything else that we have. So I'm really surprised.

No, no. Basically the reason why I was mentioning JCasC is that you can easily isolate that and allow other people to manage the PRs. I mean, if you take the example of ci.jenkins.io, you rely on the infra team to do the changes. But in the case of release.ci.jenkins.io, for instance, people who have access to a gateway repository can manage the service. That's why I was suggesting another service. I'm not saying that we have to do that; I'm just thinking.

I mean, there are other problems. If you have it behind a VPN, then how do you expose the logs to the plugin maintainer in case something goes wrong, for example? It's already available with the GitHub API; you already have that information in the PR, for instance. We're releasing, so we don't have a PR there. Yeah, sure, sure. I mean, there's a lot of information, but it is much less accessible. And if the status links to a hidden, VPN-protected, private Jenkins instance, we would need to grant at least the maintainers access to that.
Additionally, even if we find a good solution for fine-grained authentication, like the Artifactory access tokens we currently attach to individual repositories, we also need to consider that if it's an instance that's supposed to be accessible to maintainers, we need to maintain VPN access for hundreds of them. Yeah, I see the point. And if you have several thousand of them, yeah.

I think there's a misunderstanding, because the GitHub Checks API, and Jesse will correct me here if I'm wrong, doesn't work on pull requests; it works on commits. So you can push a check result for the release process to the commit that you are trying to release, and it will then be visible in the GitHub UI on the Checks tab for that commit. Yeah, that would be the most straightforward way to do it. So there's no need to access Jenkins to see it. Well, if you need all the logs, then probably you will. But if you just need the last however-many bits of log, then that should work. And at the end of the day, if there's nothing secret in the logs, because we're hopefully masking all the secrets correctly, we could always just dump them in a gist and link the gist from the pull request. It's extra work, but it's, I think, a solvable issue. Okay, that's good to know.

But what I also wanted to mention is: it's not just managing access, it's also administering the instance, the service itself. What I mean by that is, if I look at pull requests to core or various plugins, I occasionally see failing builds. And then I jump to Jenkins and it's, well, the connection between the controller and the agent broke. And that's pretty annoying; in the worst case, someone will need to troubleshoot that. And that someone is going to be Olivier, who has a bunch of free time on his hands, it's incredible. So that's another concern, right? There's just more work that we would need to do for an otherwise very simple process. Because, I mean, I like Jenkins.
I like Pipeline, but it's not necessary for the one mvn deploy command that we need. Yeah, actually half of the pipeline, if you want to call it that, is running Release Drafter packaged as a GitHub Action. The deploy part, well, it runs mvn deploy and then it uses the GitHub CLI with the GitHub Actions token that you get for free with GitHub Actions. So half of the deploy script is running the GitHub CLI to do stuff in GitHub, like manipulating the release and so on. The deployment to Artifactory is a single line of this process. Most of the rest is really pretty normal usage of GitHub Actions for automating things in GitHub. Okay, thanks.

Yeah, I don't know what to say. It's just that we've been discussing trying to provide some kind of CD for plugins, or non-developer-laptop-based deployment of plugins, for several years, and nobody had time to do anything about it. The advantage of this is just that it uses some pieces that we already had lying around, with a relatively small piece of new infrastructure, which is the secret provisioning.

Quick question: is the idea to improve JEP-229 with all these ideas being written down? Well, they should be written down, maybe without enough of the context yet. I mean, it really depends on what the outcome of this discussion is. If it's ultimately "JEP-229 can do these things" and we just need an extra paragraph that acknowledges, yeah, you can configure your repository to not release on merge, then that's a possible outcome. If we decide to go an entirely different way, I see a JEP-235 or whatever coming up with a separate proposal. Sorry, what's JEP-235? Whatever the next free JEP number is. Okay, great.

Basically, I meant to say: that's if we were to go in a completely different way. I would rather start from 229, at least to get it running. I have reservations about using GitHub Actions.
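As an aside on the GitHub CLI half mentioned above: the GitHub-side work of such a deploy script might amount to something like the following. This is a hypothetical sketch, not the actual JEP-229 script; the tag, version, and use of `gh release create` are assumptions for illustration.

```shell
# Hypothetical sketch of the non-Artifactory half of a CD deploy step.
# GITHUB_TOKEN is the short-lived token every Actions run receives,
# so no long-lived personal credential is involved.
VERSION="123.vabc1234"          # computed from the git history

git tag "$VERSION"
git push origin "$VERSION"

# Publish a GitHub release for that tag (gh reads GITHUB_TOKEN automatically)
gh release create "$VERSION" --title "$VERSION" --generate-notes
```

The per-run token is exactly the piece that is hard to replicate outside the Actions ecosystem, which comes up again later in the discussion.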
At the same time, we can probably duplicate the flow on a similar Jenkins instance easily enough. But the technique itself, even if it's a GitHub Action, and I will be experimenting with it, should be okay. Yeah.

So basically my concern about duplicating it in Jenkins is the point that Daniel raised: if we have a lot of people who want to release something, this means we have less time to maintain Jenkins. Yeah, if the Jenkins instance is already pretty busy, it's just harder to work on it. So maybe GitHub Actions is more reliable, considering the number of people that would be able to maintain that service.

I mean, it's doing a lot less than the Jenkins builds are doing. The release usually runs in a minute or so, because it's not doing any kind of test steps. It's not running user-provided code, normally, other than Maven plugins. So it's basically just downloading dependencies, doing a Maven package, uploading, and exiting. And it doesn't need branches or anything like that. So it's probably reliable enough, I guess. At least I haven't seen any problems so far.

There is an annoyance, which as far as I can tell comes from a limitation, or multiple limitations, of GitHub Actions. It doesn't use a push trigger; it actually uses a trigger on a successful check, because it's waiting not just for the push, but for Jenkins to validate the push. You can set up the trigger to activate when there is a check, but you can't say which one, I think. And so it actually runs a bunch of times, and each time says: oh no, this is not the check I was looking for, and exits. And all of those exits show up as failed builds, because there's no way to mark a run as skipped or aborted or something like that, like you can in Jenkins.
And the same if you have a push that had only dependency updates or something like that: it goes halfway through, decides there's nothing interesting to release, and then the only thing it can do is mark itself as failed. So it looks ugly in the Actions tab, but I don't know of any better way to do it. It's just not expressive enough. But as long as you're not paying attention to that, it works.

So what I'm hearing is that there are actually reasons to think about alternative approaches other than "we're the Jenkins project, so we should use Jenkins for it". Yeah, well, it's convenient. The use of GitHub Actions is convenient because we're integrating Release Drafter, and because we have the GitHub API token: you get a special temporary token associated with each action run that you can use for write operations. Yeah, this is the biggest advantage. Which we use to actually publish the release on the GitHub Releases page. So that part would be hard to replicate, because I don't think you can do that from an outside tool. Yeah, it doesn't provide the same API. At the last GitHub Universe, that question was asked of the product managers: no, no plans there.

If you wanted, we could of course run a Jenkins Pipeline within GitHub Actions using Jenkinsfile Runner or a similar solution, but I'm not sure whether it provides any benefit in this case, because it's not supposed to be user-contributed build steps or anything; it's literally just mvn deploy.

There is no Gradle support in this currently. I suppose Gradle would be another topic entirely; you'd have to come up with something else. I don't think you could use much of this for Gradle. This definitely makes Maven-specific assumptions, at least about the version handling and all of that. We have around 50 plugins using Gradle at the moment, so it's quite a big chunk. I thought we were up to 100 or so. I can check, but yeah, I thought it's about 50.
So a quick search for build.gradle shows 74 matches, but not all of those are actual build.gradle files. How many of them also have a pom.xml?

I mean, I wouldn't actually be unhappy if this basically ended up deprecating Gradle support, because Gradle writes invalid POMs anyway, and that's really annoying to consume with tools that process all of the plugins and stuff like that. Invalid POMs? Yeah, they still have the assumption from the old plugin parent POM that is versioned like core. It just writes whatever, an XML string, inserts the core dependency, and calls it a day, basically. The JPI plugin or something? I think so, yeah. So that's really annoying, and I wouldn't be too unhappy if there was less of that. It also took them forever to add support for staging repos, so that was another complication. Obviously, I'm the only one who cares about that. And no Plugin Compatibility Tester support either? Well, it's a standalone tool; there were some patches which allow using PCT in principle, but somebody would still need to implement it. And until recently, it didn't check @Restricted either. So there are a few standards that we have in the project that basically just never made it into the Gradle tooling. Obviously, we don't use it, and so the maintainer is basically on his own there.

Have we officially de-supported Ruby plugins now? Maybe we didn't. We haven't formally deprecated them, but we have never accepted new ones. So there's Ruby to add into the mix as well. I don't know when the last time was that someone released a Ruby plugin, but... Yeah, they don't even work anymore, I think. I think we still have allow-list entries for them. I think Oleg and I, if I remember correctly, had a disagreement about how this should be rolled out, and I simply didn't have the time to do it properly. Maybe we could resurrect that; it has taken too long to finally kill the Ruby plugins. Right.
GitLab Hook still doesn't have the security fix, and that was by far the most popular one. So that would be doable. Still, I think we should get back on topic. I think it would be reasonable for us to start this with Maven support only and look into Gradle support later, or just say: well, if you want this, you're going to have to build your plugin with Maven. Yeah. I mean, some of the infrastructure, I suppose, you could reuse. You would need a different action for the deployment, obviously, because it does mvn deploy. But I suppose you could reuse the Artifactory secret injection; you could reuse the stuff that does the check for the Jenkins CI status, and Release Drafter, and all of that. So it would probably be something that could be built by people interested in Gradle tooling, with some of the pieces we have. I'm not going to do it. I have no idea what the equivalent in Gradle land is of Maven Release Plugin and how releases are normally cut. There is some kind of similar plugin. We don't use it anyway, but it provides a similar flow. So it would basically just be a deployment step in Gradle; Gradle is quite straightforward there.

We haven't talked about Jenkins core components at all. As far as I know, the same system can be used for any core component other than the parent POMs. A question there: how do we do backports? Backports for security releases or for LTS needs? I mean, either. Backports typically come up in the context of security updates; for example, when the weekly has a different version of Stapler than the LTS line. Now, if we were to use this release process for Stapler, or whatever; I mean, Remoting is also a candidate; whatever library we have to patch where the versions have diverged: how do the backports work? Is it just a matter of, well, we specify the full version in the POM, so it doesn't matter what the version sort order looks like; we just need to know that one is the mainline and the other is the backport?
I think it could be done through whatever branching policy, because you can define a branch pattern for backports, and you can say that if you follow this branch pattern, you will still get continuous delivery for this branch. I specifically mean the version number. The version number is the distance to the root commit, plus... okay, Jesse, you explain.

So there are two things: there's the version number, and there's the changelist pattern that gets injected for each commit. The changelist pattern you normally set to revcount.hash. And in the default trunk flow for a plugin that doesn't have any special needs, the recommendation is to also use that same string as the version number of the whole component. But you can also customize that so you have some sort of prefix before it, which we're actually doing in the JJWT API plugin. For that plugin, which is a library wrapper around the Java JWT (JSON Web Token) library, the primary component of the version number is the version number of the upstream library, and it's actually defined in such a way in the pom that the dependency on the upstream library picks up the same property. And then the Jenkins plugin packaging portion is like a sub-micro portion after that.

So I think you can do the same thing for backports; I just haven't tried it yet. If you're cutting a backport branch from a particular trunk branch point, so you have a bunch of trunk commits going and then a particular trunk release that you want to use as a base for backports, I think you could use that trunk revcount number as the major portion that you branch off of, and then add your regular changelist, revcount.hash, on top of that base. So it would work the same way as what we normally do, where 2.1.1 is backported fixes on top of 2.1, that kind of system.
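A sketch of that backport-numbering idea, with the caveat that the exact composition is speculative; nobody in the discussion had prototyped it yet, and the function and format here are illustrative only:

```shell
# Illustrative sketch: compose a backport version from the trunk release
# it branched off of, plus the backport branch's own revcount and hash.
backport_version() {
  local trunk_base="$1" revcount="$2" hash="$3"
  printf '%s.%s.v%s\n' "$trunk_base" "$revcount" "$hash"
}

# e.g. two backport commits on top of the trunk release "500"
backport_version 500 2 def5678   # prints 500.2.vdef5678
```

The trunk base has to be supplied manually, as noted below, since git itself has no notion of which branch is "trunk"; but the maintainer cutting the backport branch always knows which release they branched from.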
Or, you know, to make sure that the ordering works, we could also do trunk-count dot trunk-hash, and then another separator, actual count, actual hash. You can't do that in git, because there's no notion of what trunk is. Well, that would be the manually provided part, because I know which release I branched off of to do the backport. So I can use the entire version string, including the hash, as the prefix there. Okay. So basically we can override this even in the default case, on the command line? Sorry, in what? On the command line, when releasing. Right, I believe so. Again, I just haven't tried it yet, because we just haven't come across a case where something needed to be backported. But yeah, that's something I could prototype and make sure it works out.

Just because this came up: if we were using Jenkins for this, what would need to be different in this process for it to be hosted inside Jenkins? We would still use GitHub releases, because we like them, apparently; the version pattern would be the same, because that's the same mechanism; and we would still not use Maven Release Plugin. We would probably have some sort of marker file in the repo rather than a GitHub Action, and go from there. Right. So if we were to decide, well, we want to host it ourselves because there are benefits to doing that, we could essentially migrate all plugins off of what they're currently doing by replacing the GitHub Action definition in the repo with whatever would configure the corresponding process inside Jenkins. Yeah.

And in that case, I guess, rather than having an Artifactory secret uploaded to the repository, all the secret handling would be a private implementation detail of our trusted system. I mean, maybe it would just have a single... No, it would still have to have a per-repository secret for safety, but it wouldn't ever have to be saved to the repository as a GitHub secret. It would just be... Sorry. You would have to have...
GitHub write permission to the repository from this process, in order to run Release Drafter and publish the release. And then the physical deployment step would have to be done inside a container, in some sort of isolated environment that's just given a checkout of the right commit of the source code and the specific Artifactory token. It doesn't need to be user-pluggable code; it's just a stock command that runs. But you have to do it inside a container, because it is, you know, running whatever mojos and stuff are part of the POM definition. You can't trust that step. But yeah, it would be possible. You'd have to have some sort of marker per repository, or perhaps, like currently in RPU where we have cd enabled equal true, maybe this is something that we'd put into RPU rather than marking it in the repository. I'm not sure. Another problem with that, though, is: what is the trigger, right? With GitHub Actions, you have the choice of doing the manual trigger, and we don't need to build the GUI for that; it's just part of the GitHub Actions GUI and authentication, and someone with the right permission to the repository automatically gets the ability to run that action. If you were doing this elsewhere, then you would have to come up with some other means of doing a manual trigger. And even for the automated trigger, I guess you could get webhook events from GitHub, so that's less of a problem. Does that make sense? Yeah, thank you. Yeah, it just seems like GitHub basically does not let you easily leave their Actions ecosystem. Many things could be replicated elsewhere, including running a GitHub Action from the interface, but tokens and triggers would be more complicated.
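If the process were hosted inside Jenkins and driven by webhook events, the authenticity check on those events is the standard GitHub webhook signature: GitHub computes an HMAC-SHA256 over the raw request body with a shared secret. A sketch (the secret and payload values here are made up):

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, payload: bytes, signature_header: str) -> bool:
    """GitHub signs each webhook delivery with HMAC-SHA256 of the raw body;
    the result arrives in the X-Hub-Signature-256 header as 'sha256=<hex>'."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(expected, signature_header)

# Demo with a made-up per-repository secret.
secret = b"per-repository-secret"
payload = b'{"ref": "refs/heads/master"}'
header = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
print(verify_github_signature(secret, payload, header))  # True
```

This only covers the automated trigger; as noted above, an authenticated manual trigger would still need its own UI and permission check outside GitHub.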
Yeah, I mean, the question of triggering was one of the problems that we always had when thinking about CD for plugins: we could never figure out a good way to authenticate a manual trigger. That's less of an issue in JEP-229, since you normally can use an automated trigger, especially with the trick that looks for interesting changes in the release draft. But I think you still really want to have the option of a manual trigger. And especially for backport branches, you don't want an automatic trigger on those; I think you would want only a manual trigger for backports. A manual trigger could be implemented just as a tag push. Yeah, maybe. I mean, that's one of the possibilities we long discussed for plugin CD, but I don't think we ever came up with an exact proposal that would work. There were some ideas there. I mean, basically, a big part of the reason for GitHub Actions is not that it's some sort of advanced CI system; it's that it's integrated into the GitHub permissions model. That's what makes this easier. So, from my point of view, I think it's JEP-229, perhaps with a few modifications to make it easier for existing plugins to get started, or to make it easier to transition. Because, personally, I would be sort of very wary of a new process that automatically kicks off whenever I merge something, when perhaps I've not labeled the pull request adequately. So we might want to be mindful of people who don't want to immediately hand off everything to a fully automated process. But otherwise, I think this is a great design, and we could, or should, adopt it more widely. Yeah. I mean, I think probably one of the obstacles is that it's a change in version format, and people don't like change. I already had one complaint about it. Also, the new format is just not human-friendly.
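The "interesting changes" rule for the automated trigger, mentioned above, could look roughly like this. The category names and data shape are assumptions for illustration, not the actual Release Drafter configuration:

```python
# Hypothetical sketch: deploy automatically only if the draft changelog
# contains at least one entry in a user-facing category.
USER_FACING = {"bug", "enhancement", "feature"}

def should_auto_deploy(changelog_entries):
    """changelog_entries: list of dicts like {'category': 'bug', ...}."""
    return any(e.get("category") in USER_FACING for e in changelog_entries)

print(should_auto_deploy([{"category": "chore"}, {"category": "bug"}]))    # True
print(should_auto_deploy([{"category": "chore"}, {"category": "tests"}]))  # False
```

A pure-chore merge (dependency bumps, test changes) would then sit in the draft until either a user-facing change lands or someone triggers a release manually.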
So if you want to send a version to your fellow admin on the website, you would need to copy and paste it. Well, the portion that is actually interpreted for purposes of comparison is usually just an integer, actually. So we're basically adopting browser versioning. Yeah. So I've already had one complaint, actually, from the File Parameters plugin; you can go look up the issue. Someone said that they worked for a large company, and they had an unnamed internal repository from an unnamed vendor that had some sort of version parsing script that didn't like that version. You know, I wasn't sure what to say. It's parsed by Maven fine, it's parsed by Jenkins fine, but it wasn't parsed by that system. Sounds like they reached out to the wrong vendor there, I guess. I don't know. But yeah, those sorts of things will happen. You can use kind of SemVer-like versioning if you want; it's up to you, in that case, to increment the major and minor portions when you think it's appropriate. That would be Git commits changing those portions, and you could use the automatic piece as the patch component. I suppose it would be... I don't have too strong an opinion. I feel like for most plugins it probably doesn't matter; the point is just to push updates, not to convey meaning exactly. But some people are going to feel differently. So yeah, I think there would be pushback on the version numbers. I don't know. Well, I mean, as long as there's the option for people to stay with their 1.2.3, with the auto-generated stuff after it, I think that would be a viable solution. Because there's no going back from version 150 to single digits, based on version math. I mean, yeah, you could start at 1000.1 or something. I mean, you go from classic version numbers, to browser versions, and then you have just years. Yeah. Yeah, I think it would be
more visible if we were using this for Jenkins core components, maybe. I don't know; maybe nobody pays attention to the version number anyway. No, I doubt it. So, I mean, for core itself, there's a conversation on the dev list, I believe, to have date-based versioning or some other sort of versioning; that originally came from the fact that we're doing so many changes and we're still at 2.x. But for the components, meaning Stapler, Remoting, and so on, nobody even sees them unless they keep clicking on About Jenkins to see the license information, and nobody does that. Well, you would see it if you're managing static agents. Okay. Yeah. Well, it's not really all that visible. Even for SSH Build Agents it doesn't really matter; maybe it's exposed, but you transfer it via SSH every time, so you don't need to worry about the version as much. Still, back to topic. Do we need to discuss the implementation of CD, or perhaps manually triggered CD, outside of developer laptops some more, or are we largely in agreement? Sorry, could you repeat that, please? I think we have a consensus: basically JEP-229 with a few extensions. Okay. So, next topic, the checks. Or how do you want to proceed? Yeah, so I guess this topic is completely separate, because if we agree that there is a CI pipeline and there is a CD pipeline, then everything here is about the CI pipeline, except for the last bullet point about signing. Yeah. One may ask why we test artifacts different from what we actually ship, but in principle we do the same in other Maven flows. So for us, one of the concerns was about tooling. Currently, during the build, we run some static analysis tools, like SpotBugs, Animal Sniffer, and a few other Enforcer checks. But we don't really invoke security scanning. So how is security scanning currently implemented?
If you have Dependabot enabled, yes, Dependabot's infrastructure verifies dependencies, and sometimes it sends a notice to you as a maintainer so you can update. But it's not part of the CI/CD pipeline; it's basically a standalone process, and work could be done there. There are existing tools, like OWASP Dependency-Check, which can verify dependencies when you actually run the build. And pretty much the same goes for additional static analysis tools: for example, Jeff Thompson integrated FindSecBugs into the build flow, but there are more security analysis tools, and we could actually expand this tooling so that we get better checks and maybe discover additional issues when processing a pull request and when building the plugins. Yeah. So, I mean, it comes up in the release pipeline topic because it makes some sense to ensure that we don't ship completely broken and unsafe software. And, I mean, this is to an extent a problem in the project, which we can see if we look at the security advisories. I'm not a huge fan of failing builds, or even failing releases, when static analysis finds problems, especially if those are time-dependent processes. If I built something yesterday and it passed, and someone published a CVE, then the same thing that passed yesterday will fail today and perhaps even prohibit the release. I think that would be a problem. Even maintainers who accept the results of such scanners will be annoyed by not being able to ship a hotfix for a bug, because a dependency got a CVE and it's just the same dependency as it was yesterday. And it's JUnit. And it's JUnit with the local file inclusion thing. Yeah. So the pipelines that we build, and I'm definitely for adding more scanners, for example in the Jenkinsfile-based CI, should not fail the build, but just use the builds as an opportunity to run the scanners.
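The hotfix concern above suggests a policy along these lines: a purely illustrative sketch, not any existing tool's behavior, where a finding only blocks a release if the flagged dependency actually changed since the last shipped release; unchanged dependencies with new CVEs warn instead.

```python
def blocking_findings(findings, deps_changed_since_last_release):
    """Return only the findings that should block: those whose dependency
    was touched in this release. Everything else is warn-only, so a CVE
    published overnight cannot veto an unrelated hotfix."""
    return [f for f in findings if f["dependency"] in deps_changed_since_last_release]

findings = [
    {"dependency": "junit", "cve": "CVE-2020-15250"},          # unchanged since last release
    {"dependency": "commons-text", "cve": "CVE-2022-42889"},   # bumped in this release
]
print(blocking_findings(findings, {"commons-text"}))
# [{'dependency': 'commons-text', 'cve': 'CVE-2022-42889'}]
```

The division of the dependency set into "changed" and "unchanged" is the part a real implementation would have to derive from the previous release's lockfile or dependency tree.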
So we can add more scanners, more results, but perhaps not completely fail builds or block releases and such, which, based on GitHub statuses, is a problem, because you cannot have a successful PR build status if the scanners say no. So we would need to separate those out: there's the actual build, and there are perhaps other statuses. I think, yeah, it might be okay to mark a separate check in a pull request as failed under these conditions, even if the CI check passes. I don't know. You also have the possibility to just create a comment inside the pull request, saying that a dependency addition includes some vulnerabilities, things like that, like some coverage or quality plugins do. And the Checks API, by the way, also allows you to create a status that is neutral, so it doesn't count as a failed status for the whole pull request, but you still have a status that appears, that's not positive, that includes a warning message, something that would show up, I guess. Yeah, I think that would be great. I'm just thinking, whenever the topic of adding more checks to the builds comes up, of all the plugins that disabled checks and tests because the maintainers didn't bother figuring out what broke, and then you have these really trivial things, which should be in no Jenkins plugin, getting through. Additionally, there's a problem, perhaps, depending on how the scanners work. If they need configuration through the plugin POM, or are part of the Jenkins test harness, like the injected test is, then that relies on the maintainers regularly advancing the version numbers of the build tooling. The other potential way to implement security scans is the one where the scanner is essentially completely separate from the source code, like CodeQL, or the Jenkins security scan that I've been working on over the past few months, on and off.
That's just a completely separate process that also runs a build, but is not part of the regular build, and just attaches metadata to the GitHub repository. That seems far friendlier to plugins: they don't need a new parent POM update every month, because we can improve the scanner and update the scanner without having to update 1500 plugin POMs. If I remember right, GitHub added some way whereby third-party tools can inject warnings that are visible only to repository owners. Yes, that's basically what the Jenkins security scan that I'm working on is already doing, in a very limited capacity, since I'm not integrating with GitHub Actions yet. What I do is: I check out the repo, do the scan on the master branch, and then attach the warnings to the repo, and they show up in the Security tab of the repository. If this ran via GitHub Actions, which is definitely my plan, then it would be part of the checks on GitHub pull requests, and that isn't limited to CodeQL. Basically, any tool, I guess, could be wrapped so that it reports its output in a way that can be consumed rather easily on GitHub. There's a difference between the CodeQL stuff that shows up only to repository owners versus checks that appear to the public. Since I haven't done this integration step yet, I don't know whether it's possible to have security scans that only show for maintainers, but the GitHub CodeQL docs should tell us how it works for the general CodeQL case. What I've basically done is taken the CodeQL scanning library, forked it, and invoked it manually rather than through GitHub Actions. The value provided by the GitHub Actions integration is not something that we currently have. My point is that if you introduce something wrong in a pull request, then you want that flaw to appear in a check in the pull request.
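For reference, the non-blocking status mentioned earlier maps onto the GitHub Checks API: a completed check run may report `conclusion: "neutral"`, which surfaces on the pull request without counting as a failure. A sketch of such a payload (the check name and messages are made up; the field names follow the REST endpoint `POST /repos/{owner}/{repo}/check-runs`):

```python
def scan_check_run(head_sha, warnings):
    """Build a check-run payload: neutral when there are findings,
    success when the scan is clean. Neutral does not fail the PR."""
    return {
        "name": "security-scan",
        "head_sha": head_sha,
        "status": "completed",
        "conclusion": "neutral" if warnings else "success",
        "output": {
            "title": f"{warnings} warning(s)" if warnings else "No findings",
            "summary": "Advisory only; does not block the pull request.",
        },
    }

print(scan_check_run("abc123", 3)["conclusion"])  # neutral
```

Posting this payload would be a separate authenticated API call; the sketch only shows the shape of the data.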
It can be corrected; but if there was something already wrong in the master branch, you want to display that only to the maintainer, ideally. What GitHub does is track code changes over time. I expect that if you have the action run for the master branch commit and compare its result to the action run on the pull request head, you probably only get the diff. That's how I would do it; that's how I expect them to filter the relevant findings for pull requests. Do you also plan to have some maintainer report for master? That already exists; that's the thing I'm doing at the moment. I only scan master, and this is made visible to maintainers only, in the Security tab. The benefit there is also that maintainers don't have to change the code to make the warnings disappear; it can be managed through the GitHub UI. At least in my opinion that's a benefit, because I dislike all of the ignore-warnings annotations in the code that may be obsolete years down the road anyway. I would actually say the opposite: rather have the explicit annotation somewhere than use a GUI to dismiss warnings. Perhaps it's a matter of how many tools there are, and how frequently the tools change, because if that happens asynchronously with your code updates, like the independent CodeQL scan does, then all that happens is you need to amend the code again and again. Still, it's fairly convenient to get rid of findings, which means that personally I'm pretty okay with false positives, because they are so easy to dismiss, which makes even medium-confidence findings a reasonable compromise.
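The master-versus-PR filtering described above boils down to a set difference over findings. A sketch, assuming each finding can be identified by a (rule, file, line) tuple, which is a simplification of how real scanners fingerprint findings:

```python
def new_findings(master_findings, pr_findings):
    """Only findings introduced by the PR surface in the PR check;
    pre-existing master findings stay in the maintainer-only report."""
    return pr_findings - master_findings

master = {("XSS", "src/main/java/Foo.java", 10)}
pr = {("XSS", "src/main/java/Foo.java", 10),
      ("SQLI", "src/main/java/Bar.java", 5)}
print(new_findings(master, pr))  # {('SQLI', 'src/main/java/Bar.java', 5)}
```

Keying on line numbers is the fragile part: refactoring that moves unchanged code would resurface old findings, which matches the dismissal-tracking complaint raised just below.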
Yeah, I'm just saying, from my experience with the CodeQL checks versus, say, FindBugs warnings: from a developer perspective, it was much more comfortable to evaluate and then fix or dismiss FindBugs warnings, because that's just something I can rerun locally on my source tree until there are zero warnings left, however I deal with them, and then commit, and that's the fix for the warnings. Versus the CodeQL things, where I had to go through them in the GUI, and it wasn't exactly clear whether a warning would pop up again if I refactored some things and just moved the same code to a different place. It was kind of hard to see what had been dismissed, and to go back and undo that, etc. Okay, good point. Still, I mean, we don't need to decide on specific tools here, and there's probably also the topic of how suitable they are. I mean, OWASP Dependency-Check: can it handle plugin dependencies? Because if you depend on a slightly older release of, I don't know, the Script Security plugin, and the scanner tells you your plugin is unsafe, but it has absolutely no impact at runtime, that's probably not the kind of scanner we want to use in the Jenkins project for plugins. But the broad strokes of how scans would be integrated: does it seem like we're approaching some sort of consensus? I've been talking so much, but nobody else says anything; I want to hear from Oleg or, you know, others. From my point of view, if we are not deciding the tooling right now, that's perfectly fine. There are so many different scanners out there that it could be just a nightmare if we want to dig into the details here. We could just discuss tools which are currently at our disposal, which we can use basically at short notice, and this list is not infinite. So, yeah: there is CodeQL, which Daniel discussed; there are multiple analysis tools, like OWASP Dependency-Check and FindSecBugs; we also have access to Snyk as part of the Linux Foundation, which we can use. Well, that's basically it. In general, Snyk is free for open source, so no
need to go through the foundation or things like that. You could also add SonarQube, because they also provide some security scanning. It's not exactly the same kind of check, because it's mainly checking the code, not just the composition analysis like OWASP Dependency-Check or Snyk; that could be potentially useful, perhaps in addition to FindSecBugs, not sure exactly, but they do include a security scan as well. And there are other scanners that are possible to use for open source code in general; not sure exactly about the list, but I think at least two or three others are possible. Anything that works on binaries rather than source code would, I guess, have to be invoked either as part of the plugin build itself or as a downstream check or something, right? Because the rules for what physically gets packaged into the HPI are kind of complicated, and they've actually changed in the past few months. So deciding what actually makes it into WEB-INF/lib is not so easy to predict: you can't just look at the POM with simple rules; you really need to build the HPI and then scan that, if you're doing a binary check or dependency check or something. Well, in such a case it's applicable to a subset of tools. You can use SonarQube for such a scan; you can probably use Snyk for a scan of JARs directly; but definitely not OWASP Dependency-Check and other tools which operate on dependency trees. There's also something to mention on that topic: even if a lib that is included in the HPI is never used at runtime, some scanners will discover it, find vulnerabilities, and so block the production deployment for some user of Jenkins. So is that something we also want to cover? Meaning not pure security, but also the security sentiment of different users; meaning that from a security point of view there may be no need to change anything there, but for some customer it will have an impact on their deployment possibilities. I would say, something being packaged but not used, I mean, that
will show up as part of the hpi:package output, I think: it prints a list of all of the JARs it's including, and it uses a warning label for transitive dependencies. But I would say, if you're packaging something that you're not using and that has a security vulnerability, then maybe you should fix that; I don't think it's too common. Well, in some cases there are platform-specific bits being packaged. For example, you may have a DLL resource which is used on Windows; we have such an example in Jenkins, in the process management library, where two DLLs are packaged and used on that specific platform. But this is something that the users of the software will need to evaluate: whether they can safely ignore a potential warning. For the developers, that is still a dependency problem. I mean, there might be dependency problems like a giant library where there are, say, two classes that have very specific vulnerabilities, and they're used nowhere in Jenkins; I would not fault people too much for ignoring that for a while, or lowering the priority of the work to update that library. Otherwise, it's something that will need to be evaluated; and only if it's something that affects only specific users and environments is it up to the users or admins to decide whether they can ignore it or not. As developers, by default, it's not obviously safe to continue shipping it. At that point, would it be good just to leave it up to the users? That's an option. Sorry, could you elaborate? This use of the scanning tools, can that be left to the users' choice, as an option? I mean, then they will just report it to the maintainers and complain about it, and the maintainers say: well, we don't have access to the tool, I don't know how to use it, please go away and leave me alone. That's the situation today. Well, and it's not a great situation, right? We should do our best to do things like keeping dependencies updated, because people will complain if they're outdated. And so, giving the plugin
maintainers the tools they need to understand what users might be concerned about would definitely be a big step forward. And then it's up to the maintainers to say: yeah, this looks relevant, I'm going to update it; or: no, this is too much work, cannot do it right now, and defer it. On what we're discussing now, this is a point I made earlier: I do not want to have something like a dependency check fail the release builds, right? This should not be blocking, which would only result in people opting out of having these checks entirely. This is just about providing additional information to maintainers, without them having to actively seek out the tooling and figure out how to integrate it into the builds and their environments, so that they are given the information needed to say: well, this is a dependency I probably should update. It's another question of default behavior, because at some point maintainers may want to have strict checks. I agree with you that the default behavior shouldn't be enforcing, as long as we can provide some additional motivation for maintainers to actually adopt these checks; otherwise, something not enabled by default won't be used. So you mean something like, I don't know, exposing on the plugin site whether a plugin chooses to not release anything with outstanding warnings, or something like that? What do you mean? Something like that; like adding a badge: this is a particularly high-quality plugin, or it successfully evaded all of the static code analysis checks. Yeah, so we can send stickers to maintainers who enable these checks; I'm pretty sure we can find budget for that. But yeah, something along those lines: there should be motivation to enable these checks if they are disabled by default. I mean, the thing is, I don't want to disable them, right? The checks need to happen. The question is just whether you can create new releases while some checks fail. And based on what I've seen of some of the static code analysis tools: well, this check fails
for no reason that you can see. And again, for FindSecBugs or something, I think that's okay, because it's either going to fail or not depending on exactly what's in your source tree; it's not going to suddenly start failing for no clear reason. It's going to fail because you updated something or changed some code, so that's pretty different from the CodeQL checks, etc. So, we haven't talked about the recommended plugins in the setup wizard. I mean, that's one point where we can apply some level of oversight at a project level to plugins. Yeah; fortunately, we don't have so many plugins which would be recommended, mostly the historical ones. We could start setting some conditions for plugins which show up in the setup wizard. Whether they're recommended by default or not doesn't matter so much, I think; just for them to be listed in the setup wizard, or to be a dependency of something that's listed in the setup wizard, there should probably be some sort of minimum baseline of compliance with these practices that we define. I agree. Obviously, if we introduce these processes now, we cannot immediately start filtering, because the setup wizard would look fairly empty; I mean, it would basically be, I don't know, loxdli. But over time, I think we should, you know, ensure that the Jenkins that we ship by default has none of these very obvious problems. So, I just wanted to mention something else, related to what Jesse just said. I think it's helpful to think of these analysis tools as being on two axes. One is whether their output depends only on the component itself as input: like FindSecBugs, where, if you configure its version in the POM and you don't change the source code, the outcome is, as far as I know, always the same; whereas there are other tools, like OWASP Dependency-Check, which I believe can run as part of your Maven build, but it contacts a CVE database of sorts, and the outcome is time-dependent. And the other axis is
whether it's part of the local build that you can run in your IDE, what Jesse said, which is particularly convenient, or whether it comes from the outside, in an asynchronous sort of process. And the asynchronous, time-dependent kind is a big one, as is the always-consistent, locally run kind that depends only on your actual component and configuration. Something like OWASP Dependency-Check I would not want to add to the parent POM, enabled by default and failing the build on findings, because that has the bad behavior of changing whether something is buildable from one day to the next. So I just wanted to make that explicit. I think there is a reasonable expectation that if it's part of something you get from the parent POM, it will not change substantially over time, which also puts constraints on the kinds of changes you should make to the pipeline library repository: new kinds of scans, or things like that, should be opted into at the source level, I think, if they're going to go into that buildPlugin pipeline. I mean, it can be part of the pipeline; it just shouldn't be part of the main status, but a separate status, or possibly that too. Anyway, we will need a bunch of these tools, and we can enable them readily for consumption; the question is rather which tools we want to start with right now. Because we have CodeQL, which is mostly done by Daniel; we have the tools which are already integrated in parent POMs; but we so far don't really have dependency scans, and we should generate reports for the rest. I mean, if we consider the two axes I just mentioned, something like a dependency scan would probably be best introduced as a GitHub Action, or something that runs as part of buildPlugin but only adds some additional status, rather than being part of the regular PR merge status. Or is it merge?
I think it's PR merge. And, I mean, we can basically just add stuff: if someone doesn't care about a new tool being added, then they don't care about it, and otherwise it's a helpful service. I don't see why those shouldn't be opt-out, as long as they don't end up failing builds. They shouldn't, especially if they operate asynchronously, because, for example, OWASP Dependency-Check takes about 5 minutes if you don't have a cache, so that's out of the question, and tools like Sonar etc. also consume quite a lot of time. So I think, as long as our chosen solutions aren't too noisy with false positives... This is where we would need to consider plugin dependency definitions, because I think Dependabot struggles a bit with that. There's no reason not to just add whichever tool looks reasonable. I was going to ask: what's the difference between dependency checks and Dependabot security checks, as opposed to Dependabot regular checks for everything, just having it enabled for security? Well, Dependabot is invoked outside the pipeline, and they use a different database of CVEs. But in principle, is there any reason we haven't listed it as a tool that we have access to? I mean, I know some maintainers already use it on plugin repositories. You don't have a choice; it's automatic on GitHub. If a CVE shows up in something that the Dependabot parsing algorithm considers to be a dependency, and yes, that includes test dependencies and whatever, it's not very smart, it's just some Ruby parser of your pom.xml, basically, as far as I know, then it's going to show up in your Security tab if you're a repository owner. You can disable it at a repository level or at the org level, but by default it's there. On the org level, I recently made the huge mistake of clicking enable-for-all, and that was before the 400 JUnit dependencies in plugins got the security vulnerability report, so that was fun. But yeah, I could disable it, and on an org level we can also periodically just
opt everyone in again, even if they opted out. Speaking of dependencies: we talked a lot about Java dependencies, but we have some plugins, like Blue Ocean and others, which also include a lot of JavaScript stuff; and in some cases we have libraries including other JARs and DLL resources, which is probably the least important case. But for the NPM stuff, I guess we will also need to invent something, unless we proceed with the approach used by Ulli Hafner, where each JavaScript dependency becomes a separate API plugin. Well, at least Dependabot will pick that up; I think it looks for package.json or whatever it is. I've certainly seen that in proprietary plugins I have that contain both JavaScript and Java; it picks up dependencies for all of it. I think one potential problem there is the jslibs JavaScript libraries, but I think that's essentially long deprecated; js-builder, I think, was another approach, but I don't think those gained a lot of traction in the ecosystem, so it's not like we have hundreds of plugins with this problem. I think Ulli's approach makes sense; it's the same way we package reusable Java libraries. Which I actually think doesn't make sense: if there is a security vulnerability in that, and there's a breaking change because we haven't updated it, or there was a breaking change because whoever produces that library doesn't care about backwards compatibility, which happens, we're stuck: we have to do a massive fix to get everything fixed and released without breaking anything, which I don't think is really sustainable. You should have had Dependabot turned on to begin with. Even when you had Dependabot turned on to begin with: it works fine on the plugin with the API in it, because it's got no tests, because it's an API plugin; but it doesn't update the dependency elsewhere in Jenkins plugins, or it does and that build fails, but the plugin is already released, it's in the update center, someone's upgraded, and boom. I think it's the same kind of trade-off we have
for Java dependencies. Yeah, maybe the JavaScript ones break more often. I was going to say, yeah, there always seems to be a new shiny thing in JavaScript land, so it might happen more often, but I don't know; I have no metrics to say either way. Okay, so we are going slightly over time. Should we summarize the results of the discussion and what our next steps would be? The most obvious next step is for somebody to process this and draft action items. The question is rather: is anybody interested in working on the tooling in the short term? I would be most interested in continuing the work that I've been doing on the Jenkins-specific security scan, because there are just a lot of problems that many plugins have in common but that are specific to Jenkins. And while something like a dependency scan is superficially probably helpful, to, you know, make people back off with their own dependency scan results, in terms of security benefits in the actual code, I think doing the Jenkins-specific rules and rolling them out a lot more widely will probably have the best results in actually fixing security vulnerabilities in Jenkins. And taking that... it's all halfway done. That's generous, but thank you. You have working things; it's great. I mean, it works, and we have some rules, and we found some stuff. So there are two directions in which we can improve this: one is to improve the rules that are Jenkins-specific, and publish them as well, to allow others to use them; and then to properly integrate it into the usual GitHub pull request workflow, because right now it is, you know, just a daily scan that updates metadata attached to the repo for the latest master, and that's not really useful once it finds something real; or, you know, even to scan pull requests, to prevent you from introducing more problems. Best used with continuous delivery, right: ship it, and the next day you get a warning. But yeah, I think that's something we could definitely adopt. Another low-hanging fruit for us is OWASP Dependency-Check, if we really want to do
that, though again it was partially replaced by Dependabot, so no specific outcome right now. We could also try enabling Snyk — though it's actually already enabled for us. The problem was — okay, you access the most important thing for security, yes. But anyway, we have Jenkins here, so you can see there is a small number of alleged vulnerabilities. Where did they get 10,000 repos from? Don't ask. No, actually, the total is 7,347 repos — it's correct. Also 230,000 findings, of which eight are fixable. That sounds great. So these buttons used to work; now they don't. So let's wait until they actually fix it. But yeah, before that it was possible to go to Snyk and explore plugins and so on. And actually, as you may imagine, most of these issues come in through dependencies and through transitive dependency resolution. And yep, like we discovered with other tools, you just have a dependency on Jenkins core and then things start exploding, because you need to set up rules to ignore such things. So it's not particularly helpful as it is, and if somebody wants to work on that, it would require some time. I'm looking at Snyk anyway, at least to get it accessible to contributors, but I definitely don't commit to setting up all the rules there. So I think, James, you said that Snyk is free for open source — is that right? I might have said that, but I don't recall — certainly not in this meeting; I might have said it before. Okay, I'm definitely sure that we can get Snyk for Jenkins if you want. Well, we get it through the Linux Foundation anyway; we can get the Snyk sponsorship because Jenkins is also part of the Continuous Delivery Foundation. By the way, for release engineering at the Linux Foundation, Snyk is available via the Linux Foundation security system, right? So that's actually what they're using under the covers. So you don't need to go applying to Snyk for an open source license; you get that as part of LFX Security. The reason why you're not getting into that is that you need to be granted access, because due to the
licensing that Snyk has with the LF, we have to specifically grant people rights to be able to see the reports. Yeah, I got access a couple of weeks ago. Yeah, so something changed, and now you get this fancy warning. Okay, sorry about that — I don't have any control over that part. Yeah, that's fine, because they needed to redesign the user experience; there were some collisions between the Linux Foundation and the Snyk JavaScript, so that you're able to actually scroll through all the issues in the LFX Security front end. So maybe that's what they redesigned — I'm not part of that particular team. So, the reason I'm asking about Snyk is: if I look at the GitHub code scanners — I've used that so far for my custom CodeQL rules — there's a marketplace for GitHub Actions that run security scans, and there is something called Snyk Infrastructure as Code, and that's a workflow that can be set up. Is that different from the Snyk we have, or is that largely the same functionality just exposed to users differently? No, it's another product — Snyk has multiple products. At the moment there's also Veracode static analysis. I think, Oleg, you had great experience with that — with Snyk... Veracode? Oh yeah, I had a great experience with that when I was working on embedded software. When I was working on Java — thank you. So there's a bunch of tools that look like they are essentially one click and a commit to master away from being run against Jenkins plugins. Security — yeah, that looks to be largely the same list. You can see this, by the way, in every repo: if you click on the Security tab and then Code scanning alerts, you can set up actions there. So that might also be something for us: to evaluate which of these tools make sense and can handle Jenkins plugins, and then recommend their use, maybe set them up by default in the plugin template. There is quite a number of tools. Okay, so basically your suggestion is to just review GitHub Actions to see what we can use from there. I mean, that would
certainly have the lowest cost in terms of infrastructure we would need to operate. I agree it would be useful. Anything else we would like to raise? Are there next steps we should take for JEP-229? I mean, between us I think we're at a few dozen plugins — should we just go live and see what happens? Yeah, well, I can verify that the backport release flow actually works. I mean, I guess just take some stupid plugin like log-cli and try to backport something. I don't want to make something up, but just make sure that the version scheme works for backports and that you can do either manual or automatic triggers, or something reasonable, in that mode. With GitHub Actions, the documentation is really vague about whether some triggers only apply to the default branch, or only apply when the configuration file is in the default branch, or something — it's unclear what the documentation is saying, so that needs to be tested. And we can try using it on some sort of core component, I don't know. I mean, Stapler would be a good candidate. The problem there is that we still don't have it inside the jenkinsci org, so that makes — I don't know if that matters much, but it makes things a little bit more complicated. We can probably just move this stuff. It's also currently being released to Maven Central, which would not be compatible with this, so we would need to track down the five people left in the world who are still using it for non-Jenkins projects and tell them sorry. I don't even know how you find those people. I'm not sure that there are actually five people, I don't know. I mean, wouldn't it be easier to, I don't know, take version-number or something stupid like that, to try to start integrating that, applying the new release process there and seeing what happens? Yeah, that just doesn't get enough traffic — I mean, it gets like one change every five years — whereas Stapler people actually patch and are waiting for releases and things, so it seems more relevant. Although for testing the flow, we can take any library; there are enough de facto
requests you can create to test the flow. Yeah, Winstone is another possibility. Winstone would be a good one if we don't want to do Remoting — yeah, maybe hold off on Remoting especially. I can look into Winstone then. For this process, I think it would make sense to look into what the plugin template should look like — we have, I think, default workflows in the plugin template. Sorry — yeah, so, well, with the archetypes, all of the jenkinsci-hosting-specific stuff is currently not part of the archetype itself; it's actually injected by an archetype parameter. And I don't think we're ready at this point to turn on JEP-229 by default in the archetypes — that seems kind of bold, right? But we will need to think about what that looks like. I mean, it could be a completely commented-out workflow file or something, but — it's not just the workflow file; you have to have a different version format and things like that. Okay, can we create this GitHub workflow in a separate repository, or as part of jenkinsci in .github, and reference it from there? Why not? Because GitHub Actions — yeah, I don't think it lets you do stuff like that. As far as I know, it doesn't have very rich reuse options: you can create actions, but they can't compose, and you can't pull workflow definitions from other places. They've been trying to — there are hundreds of, you know, hundreds of questions about this, but no clear answers. So I believe they created template definitions. Yeah. I mean, we could include all of this stuff for JEP-229 in the archetype, commented out — that's one option. Another option is to have another flag in the archetype: currently there's a boolean flag to include the jenkinsci-GitHub-specific stuff; we could have another flag to include CD. So I think that would be useful, just to get a better idea of how that would look to maintainers. Because I think from a technical point of view we're there — there are already two plugins that use it in slightly different ways, at least, perhaps more: for example,
the jjwt-api I know of, and file-parameters. And now the question is how we make this accessible to maintainers who are not also, you know, the authors of this thing. Yeah, and Mark Waite, I think, was going to try it on some plugin he maintains — he can't remember which one — platformlabeler, probably, most likely. I believe he enabled it; we had a discussion about it several days ago. Yeah, I don't know if it's working for him or what. Let's take a look. It looks good — version 793-dot-nonsense. Yeah, so it's working. Mm-hmm, looks good. Thank you. Really, the first three releases at least work. Yeah. All right. So what you're saying is — I was just going to say: is that a valid version number as far as Maven and everything is concerned? I'd expect a dash, as opposed to a dot, after the 793. Even so, Maven is happy? It's the same — basically, Maven's happy with anything, because it goes: I can't parse this, it's a string; you have a version number that's a string; everything's a string. I'm not meaning happy as in, you know, it's not going to blow up — I know it's not going to blow up, and you can consume it and everything. It's: could you compare that to something else? Or is it just going to be a string comparison? And I think a string comparison against something that's not a string is a little bit interesting, isn't it? For comparison purposes it's the integer, the 793, that drives the comparison. It's basically a slight variant of the incrementals version format, but we're not using the -rc part, because it's a release, not a release candidate — basically, that's the only difference. And that's been used — both the Maven version parser and the Jenkins version parser are okay with it — and it has no problems I know of. I mean, there might hypothetically be problems once we look into backports and what that version format would need to look like to be compatible there and compare correctly. But yeah, it looks like a problem that we
can address and figure out how to solve. And then we'll need to document it, because maintainers should know the options that they have. I don't want anyone to say: I don't like having version 700-whatever, I'm not going to use this. We should advertise: well, if you don't like this version format, here's how you add a one or a two in front, right? Yeah, there's some documentation about this, but right now it's not on jenkins.io, it's in a separate repository, and it should probably be edited and cleaned up in various ways, made into a friendlier guide. I think that would be a reasonable next step for us: to basically promote JEP-229 in the regular jenkins.io developer documentation when it comes to releasing or publishing plugins, and have at least a minimal introduction there, plus references. Yeah, currently it's just a link on jenkins.io. Yeah, our documentation needs some more love anyway. So right now it's a bit scattered — we started moving the wiki documentation somewhere, and so on — and from a first-time contributor's standpoint it's hard to find the bits. Yeah, we can actually — publishing plugins, yeah. Yeah, but — style guide, what is it doing here? They added, like, a brief mention: click the link and it will all be explained. Or, like, click on style guides — oh, okay, plugin name. This is important for plugin hosting, okay. So anyway, that's a separate topic. So yeah, we can add more information there, we could refactor some of the pages to make it more visible, or we could just poke people on the developer mailing list, which is likely to be the most efficient way. Well, you need to do both. Yeah, I mean, you need to have a nice written guide first and then notify people of it. And I think for those of us who are maintainers, it would be useful if we just started using this in, perhaps, our less popular plugins. For example, I completely forgot that I'm the maintainer of the Solaris theme, and I think that's a plugin where I can just use it — rather than, you know, matrix-auth, which might be a bit much as the first test-run plugin for it.
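To make the version-format discussion concrete, here is a hypothetical pom.xml fragment in the spirit of the JEP-229/incrementals setup. The property name and the 999999-SNAPSHOT placeholder follow the incrementals convention as commonly documented, and the prefixed variant is an assumption — check the current jenkinsci CD documentation for the exact form:

```xml
<!-- Sketch of a JEP-229-style version setup (illustrative only). -->
<groupId>io.jenkins.plugins</groupId>
<artifactId>example</artifactId>
<!-- During development this resolves to 999999-SNAPSHOT; at release
     time the tooling substitutes something like 793.v1a2b3c4d5e6f,
     where 793 is the commit count and the rest is a commit hash. -->
<version>${changelist}</version>
<packaging>hpi</packaging>

<properties>
  <changelist>999999-SNAPSHOT</changelist>
</properties>

<!-- Hypothetical variant for maintainers who want a leading major
     version, e.g. 2.793.v1a2b3c4d5e6f:
     <version>2.${changelist}</version> -->
```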
And then, you know, go from there. Yeah, that's a good approach, and everyone has a few pet-project plugins, so we can start from there. Okay, any additional actions we would like to take? The documentation needs to cover both the version number formatting as well as how to do manual releases via Actions — I think those two might help in terms of getting widespread acceptance. Yeah. Probably we also need to do some formalities — like, I'm not sure whether we even want to spend time on the BDFL delegate process. I want to finally make this part optional in the process, and I think that we should do that. Isn't that basically obsolete anyway? It's de facto obsolete, because the benevolent dictator has fled the island — I don't know, something like that. The fact that this exists, and that this is basically opt-in, means we can just do this more as, like, a grassroots thing, because it doesn't need the superficial stamp of approval. Yeah, so it's definitely not blocking rollout; eventually we will likely want to accept it formally. Okay, sure, if you figure out how to do that. Well, I believe it's part of my responsibilities — I wanted to do it during the Christmas break; better late than never. Okay, I have to go — was there anything else we wanted to talk about regarding JEP-229? I don't think so, and we are already over time. I think we can just move forward to the developer mailing list, especially for the other tools which we didn't touch — if somebody wants to explore them and put them into the pipeline, that may help many other contributors. I mean, especially with GitHub Actions and the GitHub Marketplace and the GitHub security scans, it seems like maintainers can just enable whatever checks they like, and what we need is basically a blessed, or known-good, set of scans that aren't completely useless for Jenkins plugins — due to us doing weird things with the POM, and the runtime mattering more than the declared dependencies, and such. So I think we should definitely have a recommended set of
scans, which perhaps can make it into the archetype. But this is also a very low-barrier way for anyone in the project to contribute: enable this in your plugin and see what happens, and if something good happens, you can also document it and create some configuration templates. So I guess that's it for today, and thanks to everybody who participated in the discussion. Thanks for driving, Oleg. Thank you very much — not that much, mostly it was Jesse and Daniel today. Thanks, Oleg and Daniel, and everyone else. So I guess we'll meet again at four, so in 45 minutes or something — there is a closing meeting. Yeah, I guess so. All right, see you then. See ya. Bye-bye. Bye.
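As a closing illustration of the release setup discussed in this session, here is a minimal hypothetical sketch of a JEP-229-style GitHub Actions workflow. This is not the actual jenkinsci template — the real one gates automatic releases on user-facing changelog labels and wires up repository credentials and Maven profiles that are omitted here — it only shows the two trigger modes discussed: manual dispatch and release-on-push to the default branch.

```yaml
# Hypothetical cd.yml sketch (not the jenkinsci template).
name: cd
on:
  workflow_dispatch:        # manual release via the GitHub UI
  push:
    branches: [master]      # or gate this behind changelog-label checks

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          fetch-depth: 0    # full history, needed for commit-count versions
      - uses: actions/setup-java@v1
        with:
          java-version: 8
      - name: Release
        # Credential and profile wiring (settings.xml, deployment
        # repository, incrementals profiles) is deliberately omitted.
        run: mvn -B deploy
```

Maintainers who are not comfortable with release-on-merge would simply drop the `push` trigger and keep `workflow_dispatch`, which gives the click-on-the-GitHub-UI behavior discussed earlier.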
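On the version-comparison question debated above, a toy sketch may help. This is not Maven's actual ComparableVersion algorithm, which is considerably more elaborate; it only illustrates the point made in the discussion that for versions like 793.v1a2b3c the leading integer drives the ordering, whereas a plain string comparison would get it wrong:

```python
def leading_component(version: str) -> int:
    """Leading numeric component of a JEP-229-style version such as
    '793.v1a2b3c4d5e6f'. Toy illustration only; Maven's real
    ComparableVersion logic is considerably more elaborate."""
    return int(version.split(".", 1)[0])

versions = ["500.vdeadbeef", "793.v1a2b3c4d5e6f", "7.v00aabb"]

# Numeric ordering by the commit-count prefix, as intended.
ordered = sorted(versions, key=leading_component)
print(ordered)  # ['7.v00aabb', '500.vdeadbeef', '793.v1a2b3c4d5e6f']

# A naive string sort would wrongly place '7.v00aabb' in the middle.
print(sorted(versions))
```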