Welcome everyone. This is the Jenkins Governance Board meeting. It's April 3rd, 2023. Thanks for being here. Topics on the agenda for today include the claim from BMC to GitHub Trust and Safety, news, action items (several items there), and then I had put CDF topics and community activity. Are there any other topics that anyone wants to be sure we cover in this meeting today? Okay, then let's go ahead and get started.

So first: two weeks ago when we last met, we had Daniel Beck join us and share the work he was doing in handling a claim that BMC had filed with GitHub Trust and Safety, regarding private information that they said had been unwillingly disclosed in a repository of the Jenkins project. The claim was flawed at best, but we took action; you can see the actions taken in the entry from two weeks ago. Since that time, the claim has been rescinded, confirmed by both GitHub and BMC. The plugins that were distributed under an open source license have been restored to distribution. The few plugins that were not under an open source license have not been restored, because we don't distribute privately licensed plugins. Any questions, comments, or concerns on that one?

Okay, so the next piece is the news. Jenkins releases are upcoming: a weekly tomorrow, and the day after that an LTS. Kris Stern is the LTS release lead. As part of those releases, we've got a change of the PGP repository signing key for the Debian and RPM packages. The Debian repositories seem to be the most impacted and are getting the most questions. Thanks to Alex for his work helping to answer people's questions and resolve their concerns. We've announced it in multiple places, and yes, we know we need to get better at doing this. The last time we did it was three years ago, and we've not done this nearly well enough. We'll do a retrospective after we get through the work.
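For Debian users, picking up a rotated repository signing key generally means refreshing the keyring file that the apt source references. A minimal sketch of what that looks like; the key URL and filenames below follow the pattern Jenkins has documented for its stable repository, but treat them as illustrative and check pkg.jenkins.io for the current key:

```shell
# Fetch the new repository signing key into a dedicated keyring file
# (URL and filename are illustrative; verify against pkg.jenkins.io)
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key \
  | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null

# Point the apt source at that keyring via the signed-by option
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc]" \
     "https://pkg.jenkins.io/debian-stable binary/" \
  | sudo tee /etc/apt/sources.list.d/jenkins.list

sudo apt-get update
```

The `signed-by` approach scopes the key to this one repository, which is why rotating the key only requires replacing that single keyring file.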
Now, we've got a note from Oleg that he's changed affiliation. He now works for WireMock, and his availability continues about the same. Any questions or comments on the news? Okay. And we may come back to Oleg's change of affiliation; he just joined us.

So, on the action items: the top several have no further progress since our meeting two weeks ago, as far as I know. We've got EasyCLA, the sub-projects convergence, and the Chinese Jenkins site. Oleg, did you want to give any comments on your change of affiliation? Congratulations.

Well, I think actually nothing really changes for Jenkins, except that if you have any issues with WireMock test automation, we have Jenkins here and there. [inaudible] Great. Thank you. Thanks very much. Yeah, but really for me, I was unable to work on Jenkins as much as I would like recently, though that's not really related to my employer but to all the other stuff. So we'll see how it goes next. Thank you. And then the plan: a few talks about Jenkins, finishing some bits around Jenkinsfile Runner. And what else do I plan? Yeah, one month ago I gave a talk on open source roadmaps, and basically it was a retrospective on what went wrong with the Jenkins public roadmap, because personally I believe that it didn't fly. So if someone is interested, maybe at the next governance meeting I could do a recap, or maybe we could do it asynchronously. But yeah, maybe one of the action items, if someone is interested, is creating a new roadmap write-up using more convenient tools for the community members. Thanks. Okay.

Okay, I think that's it. Great, thank you. So next, I see Daniel Beck has joined us. Daniel, we had just covered the claim from BMC to GitHub Trust and Safety. Thanks very much for your work on it. Is there anything you wanted to note in addition to what's already been noted publicly? No. Okay, thanks. Thanks again for your effort on that. Thanks very much.
So I still have an action item to archive the governance meeting notes. Gavin Mogan has politely declined; he's not ready to continue that effort. I'll carry it forward; I've got more work to do there.

Basil, you had an update on the Build Monitor status. The Build Monitor plugin, right? Yeah, there's not much to share, but since I had brought this topic to the board earlier, regarding Jan Molak's request to keep the link to his blog inside the plugin, I thought I'd give an update since it's been concluded now. His repository has been transferred to the Jenkins GitHub organization. The build has been moved to Jenkins from GitHub Actions, and the release process has been converted to our standard CD process. So this project is wrapped up, and I was pleasantly surprised to also see that some of our front-end contributors even improved the display on the plugin website: the older, pre-CD versions of Build Monitor now appear at the bottom of the list, and the whole list is sorted correctly with the new versions at the top. So this effort has really been fully completed, with everything from the build automation all the way to the display on the plugin site. It now looks and feels like a full member of the Jenkins project; it's not otherwise visible as something that we transferred later on. So that's really all there is to say about Build Monitor. It's one of ours now. So that's it.

Thanks very much. Thank you for taking that through to completion. Much appreciated. On the CDF topics, I had three items: DigiCert signing certificate renewal progress, which I wanted to discuss; the project presentation on April 12th; and the LFX tools working group. Are there any other CDF topics, Oleg, that you wanted to add or be sure we included? Not really at the moment. So next week you will have a presentation on the status of Jenkins, right, Mark? April 12th. Yeah, is that next week? April 12th.
Yeah, I think, I hope that's, yeah, ten days. April 11th, maybe, because April 12th seemed suspicious. Okay, yeah, I'll have to look at my calendar to be sure. Let me double-check. It is, I show April... yes, April 11th, that's correct. Yep, you're right.

Yeah, basically I don't know of many other updates from the CDF at the moment. There was a cycle of deciding the CDF and Jenkins awards for this year, so the announcements will go live, I believe, in a week or two, but otherwise no big news. Ah, see, and I thought they were going to announce those at cdCon in May. Maybe at cdCon. Okay, so results were provided to the CDF, but the announcement has not been made yet, right? Yeah, maybe it's an internal announcement, because I'm on the outreach committee, so maybe there will be an internal announcement first, an announcement to participants, and then a public announcement at cdCon, as you mentioned. That makes sense. By the way, do you plan to go there? I will be there, and Alyssa Tong will be there. Okay, nice. And how about KubeCon? I will not be there. Anyone? Yeah, so I finally got the grant from the Linux Foundation to visit KubeCon. I will be there. I checked whether there would be any CDF events; apparently not, but there might be some Jenkins-related agenda. Great, thank you.

All right, so going back to the other CDF topic, so others are aware: the Jenkins code signing certificate, the certificate we use to sign the WAR file and the MSI installer, has expired. It expired March 30th. The expiration of the signing on the WAR file is not a terribly big concern, but the MSI installer is a big deal; most Windows users won't use an installer that's not signed. So I sent a proposal today to the board, the release officer, the security officer, and the infra officer asking that we suspend distribution, and got agreement from all of them: yes, we should not build any more MSIs until we get the new code signing certificate.
So what that means for users is they will install the old MSI and then do upgrades with the WAR file. Any questions there?

Okay, the other topic I had put on was the LFX tools working group. The Linux Foundation tools working group is looking at a potential replacement to supersede the DevStats facility that we've been using for a number of years. We use it to track participation and involvement in the Jenkins project at a coarse level. We're actively involved in those discussions, and we'll continue to be. Ultimately, the Linux Foundation expects to retire DevStats and replace it with this new thing. Any questions on that one?

My understanding is that we don't heavily use DevStats at the moment. What concerns me is that LFX Insights is not feature-complete if you compare it with DevStats, so I wonder whether there would be any impact. There should be no impact on automation, because what we had for automating community stats was quite weak and I don't think we use it anymore. But are there any other dependencies we might have on DevStats? Yeah, I've continued to collect data from DevStats every month for quite a while, and I've been using it to gauge how well or poorly we're doing. Am I doing it automatically or through scripts? No, manually; just once a month I collect the numbers. It shouldn't be a big deal. Right.

All right, so I think we've covered the CDF topics. The last topic on the agenda is... I had some questions about the key signing part, if we still have time to talk about that. Sure, go ahead. There were some people complaining that the existing stable repository won't be signed with the new key until we do the next LTS release, which is coming up this week. And I wasn't clear; I think the action was, we told those people to wait.
I'm not clear on whether that action was due to a lack of resources on our side, or because there was no technically feasible solution to the problem. I think both of those would be acceptable reasons to not do anything, but I'm not clear on which of the two was the case. Good question. Because I don't know very much about this, but it seems to me this wouldn't be an uncommon scenario: a key expires and existing old releases become unavailable. So I'm not sure what the standard practice is, but I think it might be the case that you could re-sign older releases with a new key and publish that. I'm not too sure about this, but if that's the common convention, then we may very well have been able, or may still be able, to do that. Of course, now with only two days left, it's probably not worth doing from a time and effort perspective, but if we're going to have this problem again going forward, that might be something to think about. Or I might just be wrong, and there's maybe no way to deal with this problem retroactively. But I was still a bit curious: do you have any insight on whether that was possible, or whether we just chose this path because we didn't have the time and resources?

Yeah, as far as I can tell from the comments by Michael Prokop, it is possible to sign old releases again with a new key without disrupting things, so we could have done it. It's also possible to sign packages with multiple keys, and between those two things, I think we could have made this process much, much smoother for the users than we did. Saying, hey, the Debian packages are going to be down for a week is really uncomfortable. Telling users they can't install during that one-week period is just terribly painful.

Yeah, and from my point of view, there's really nothing wrong with us making that mistake or not having the time or the resources to do that.
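For context on what re-signing would involve: in an apt repository, the detached `Release.gpg` and inline `InRelease` signatures can be regenerated over the existing `Release` file at any time, and gpg can emit signatures from several keys in one pass, which apt accepts. A rough sketch under those assumptions; the key IDs and repository path are placeholders, not the project's actual values, and this is not runnable without the corresponding private keys:

```shell
# Placeholder key IDs; the repository metadata itself is unchanged,
# only the signatures over the Release file are regenerated.
OLD_KEY=AAAAAAAA
NEW_KEY=BBBBBBBB

cd /srv/apt/debian-stable   # hypothetical repository root

# Detached signature carrying both keys, so clients trusting either
# key can still verify the repository
gpg --batch --yes -u "$OLD_KEY" -u "$NEW_KEY" \
    --armor --detach-sign --output Release.gpg Release

# Inline-signed variant used by modern apt
gpg --batch --yes -u "$OLD_KEY" -u "$NEW_KEY" \
    --clearsign --output InRelease Release
```

Because only the signature files change, the package files and their checksums in `Release` stay byte-identical, which is why this can be done retroactively without "disrupting things."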
That's completely understandable from my point of view, because nobody's perfect and nobody has infinite resources, et cetera. But I was a little bit confused about how this was communicated to users, because in many of these tickets we just told them to wait another week, and then when they complained, there was just no response after that. So I think we could have done a better job of acknowledging, from our side, that it could have been done differently. Not necessarily being apologetic, but I don't think we acknowledged that as much as we could have. I think users generally prefer when we're fully transparent with them about both our strengths and our weaknesses, because at least that creates a complete understanding, whereas if we're silent about it, they can be confused about whether it's intentional or not. So that's my only comment: we could have been a lot more open about the fact that we simply screwed up here. It's fine to screw up, but I don't think we were very transparent about it.

Right, good. Well, I think that's a topic for inclusion in the retrospective: what should we do better next time in order to not have this cliff every three years? Is there a way to do it so it's a much less painful experience for the users? Yeah, you're right. Any other concerns or comments from others? Alex, I know you were involved in those conversations. Are you okay with that as an acknowledgement? Hey, I know the infra team, we were surprised, right? It shouldn't have been a surprise. We knew about this; we've known about it for three years. When we set the key expiration, we knew it was going to expire, and yet we missed the opportunity to do it well before the .2 LTS. We should have made the changes already in the .1 four weeks ago, and we didn't. So yeah, there are plenty of mistakes hiding in this one. Okay. Any other comments on the expired PGP key?
How is this going to be handled in one, two, three, however many years? Yeah, so I don't know yet; we'll talk about it in the retrospective. One of the things Michael Prokop mentioned was that if you sign with multiple keys and you create a separate package that supersedes the keys, then you can do the upgrade much more smoothly. But I need to learn a lot more about Debian packaging before I'm ready to say, ooh, that's how we should do it. So the answer right now is: I don't know what the best practice is. I only know that the way we did it this time was particularly poor. It was worse this time than when we did it three years ago. Did that answer your question, Daniel? Oh yeah, go ahead, Mark. Okay.

Yeah, just a quick note on the communication thing. I've seen various tickets complaining that the stable pages are not updated on pkg.jenkins.io and get.jenkins.io, which is currently a limitation within the build cycle: we can't update the pages, except manually, outside of releases, because they are generated once the LTS release takes place. A few people pointed out that it is indeed a bad practice to not update documentation outside of releasing something. So that's possibly something we should consider changing in the future, or at least before the current GPG key expires. Right, yeah, that certainly is another flaw. That flaw we could have hidden if we'd done this earlier, but it's still a flaw, absolutely. Good, okay.

Any other insights, or Daniel, did you have any further comment there? Yes, I'm less interested in the technique of how exactly it will be solved. Rather, if I understand the current problem correctly, it was that we discovered too late that the key was expiring. The LTS release was already created, so there was no LTS release for which we could already use the new key to prevent the problem that happened.
It did not happen as much with the weekly release, because last week there was a weekly release, right? So I'm wondering, especially since I think we have several keys on a semi-regular schedule that need rotating: do we need some sort of calendar or other periodic warning system in infra? Obviously I'm not directly involved, but it seems like this is something that's missing; a calendar notification a month ago would probably have helped.

Right, and that's in fact exactly what the infra team uses. They have a calendar that carries these kinds of notifications; the early warning on this one was just not early enough, right? And that was part of the problem. I don't recall whether the PGP key was not on the calendar at all or whether the warning was not early enough; the DigiCert one was not early enough, right? For that signing certificate, we probably needed to know six months ahead that it was going to expire, because attorneys had to be involved. For this one, knowing six weeks ahead would have been enough, but I think at best its warning was set to arrive 30 days before expiry, not 60 or 90. So yeah, there is an alert system already in the infra team; they use a calendaring system that reminds them of these kinds of milestones.

Yeah, six months sounds right to me, because especially if you wanted to adopt the strategy you described earlier of signing an LTS release with two keys as a migration strategy, then that would need to be lined up in advance, because LTS releases are three months apart. And then doing the work to prepare for that would probably take around a month. So yeah, four to six months sounds about right for how long we'd need to orchestrate this successfully. Exactly. And I think we may want to consider, hey, should we rotate every year even though the keys expire every three years?
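As a sketch of the kind of periodic warning being discussed here, a small script could keep the known expiry dates and flag anything inside a set of lead times (180/90/30 days, rather than a single 30-day warning). The item names and dates below are placeholders, not the infra team's actual inventory or tooling:

```python
from datetime import date

# Placeholder inventory of rotating secrets and their expiry dates
EXPIRIES = {
    "debian-rpm-pgp-key": date(2026, 3, 23),
    "windows-code-signing-cert": date(2026, 3, 30),
}

def due_warnings(today, expiries, lead_days=(180, 90, 30)):
    """Return (name, days_left) for every item whose remaining lifetime
    has crossed at least one of the lead-time thresholds."""
    alerts = []
    for name, expiry in sorted(expiries.items()):
        days_left = (expiry - today).days
        if any(days_left <= lead for lead in lead_days):
            alerts.append((name, days_left))
    return alerts

if __name__ == "__main__":
    # Run daily from cron/CI; prints nothing until a threshold is crossed
    for name, days_left in due_warnings(date.today(), EXPIRIES):
        print(f"WARNING: {name} expires in {days_left} days")
```

Run daily, this nags repeatedly once the longest lead time is crossed, which matches the point above: a certificate needing legal involvement wants its first warning at six months, not 30 days.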
So that we've always got... there are all sorts of things like that that I think we will discuss in the retrospective, to try to find a better path forward. The last time we did this, three years ago, we knife-edge transitioned in a week when we did the weekly and the LTS within a day of each other. And that's the only reason we didn't get more complaints then: because we did it exactly in that weekly-plus-LTS transition period. But that's still not a very palatable thing, because it was still broken for a day. It's much more glaring when it's broken for a week, but it was broken for a day even then. So there's got to be a better way to do this. Anything else on either the DigiCert certificate or the PGP key expiration?

Okay, the next topic I had was the Artifactory bandwidth reduction project. JFrog, our sponsor of Artifactory for the Jenkins project, has asked us to reduce the bandwidth used by that server; the bandwidth it's using is enormous. Thanks to log files they provided, we found a host at Alibaba that was downloading 20 to 30 terabytes a month by itself. They have now banned that IP address, and we're hoping that's already a dramatic improvement. They've asked for further improvements, and we may have to make usage-model changes in the Artifactory repository, but we've made good progress thanks to that ban and the help of the Artifactory caching proxy. That's something the infra team has created to provide cloud-local caches of the artifacts instead of downloading them all from repo.jenkins-ci.org. Any questions on the Artifactory bandwidth reduction project?

Okay, so the next one is Google Summer of Code. We're delighted that we've got over 28 proposals already submitted to Google for Google Summer of Code 2023 in the Jenkins project. In fact, I think John Mark told me it was 38, which is double the number we had last year, and we haven't reached the endpoint yet. The endpoint is tomorrow, when applications close.
So we're delighted with the progress there. Let's see, 38 proposals. And we assume there will be even more before we reach the closing time tomorrow. Any other topics that need to come before the board?

Don't be surprised if you see me tinkering around with Launchable in the next few weeks. I'll be making a lot of little changes here and there to try to get all the data flowing in. You've probably seen me already doing that, but if you haven't, there are going to be a lot more of those changes coming. Thanks, yes. And just for everyone's background: Kohsuke Kawaguchi has provided access to Launchable so that we can do some experiments, and Basil started those. I've got this hope that we can find a way to reduce the cost of our infrastructure. Basil's got an even bigger-picture view: he would like to find a way for us to get feedback to developers much sooner, without enormous costs or enormous time delays. Any other topics for...

So Basil, did you want to talk at all about the prototype work that you and Tim Jacomb are doing, or is that a separate story? I'm going to be writing a document that goes into all of this in a lot of detail, because there are a lot of parts to it: how to define test suites, how to define builds, how to define test sessions, flavors. There are a lot of concepts in Launchable, and also a lot of different use cases, some of which apply to us and some of which don't, and the distinction can be fairly subtle. So I'm planning on writing a document that explains all of this, including what some of the benefits are. But to summarize: I think we can benefit in three places. I think we can benefit in BOM builds, to reduce cost. We can probably benefit in core builds, to reduce the number of Windows tests that are running and also potentially to run ATH subsets as part of core builds.
And I think we can also benefit in the ATH itself, just as we could in the BOM, by reducing costs there; the costs are not as bad in the ATH, but it's a similar structure. So those are some of the places where I see we could improve. But I'll have more in writing pretty soon about my current understanding. Every time I think I understand all of this Launchable stuff, a few days go by and I realize that I've misunderstood something and have to do it over again. So it's been a learning process for me as well. After reading the documentation, I thought I understood everything; then I started implementing it and realized that I didn't understand it at all. And now that I'm implementing it, I think I understand it, but I'm sure that a week from now I'll be going back and telling past Basil that he didn't understand it today. I'm getting feedback from the Launchable support folks as well the whole time. So we should be getting pretty close to a working prototype, and having this all in writing should help explain the benefits as well as the costs. That could inform how we all interpret the results and, ultimately, whether we decide to productize this or not.

Great, thank you. Thanks very much. Any other topics for today? All right, thanks everybody. Thank you very much for your time.
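For readers unfamiliar with the tool, the Launchable workflow being prototyped roughly follows a record-build, subset, record-tests cycle. A hedged sketch of what that could look like for a Maven build, based on Launchable's documented CLI; the build name, subset target, and paths are illustrative, not the actual Jenkins pipeline configuration, and this requires a Launchable API token so it is not runnable as-is:

```shell
# Illustrative Launchable cycle for a Maven project; assumes
# LAUNCHABLE_TOKEN is set in the environment.

# 1. Tell Launchable which source revision this build corresponds to
launchable record build --name "$BUILD_TAG"

# 2. Ask the service for the most informative subset of tests
#    (here: 25% of total test runtime)
launchable subset --build "$BUILD_TAG" --target 25% \
    maven src/test/java > launchable-subset.txt

# 3. Run only that subset, then report results back so the model learns
mvn test -Dsurefire.includesFile=launchable-subset.txt
launchable record tests --build "$BUILD_TAG" \
    maven ./target/surefire-reports
```

The cost savings come from step 2: only the selected fraction of tests runs on most builds, while step 3 keeps feeding results back so the selection model stays current.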