Okay, first of all, let me share in the Zoom chat the link to the collaborative notes. As a reminder, these notes will be published to GitHub; they can still be updated there, but not in the same collaborative way. Let me share my screen with the notes, and let's get started. Can you see my screen, and can you read it? Yes. Okay, let's get started.

So welcome everyone to the Jenkins Infrastructure weekly team meeting. Today is the 18th of January, 2022. Mark, Stefan, Hervé and I are present.

Let's start with announcements, if any. The security release happened last Wednesday, the day after the previous team meeting, and everything went very well on the open source side; for the rest, I don't know. We were able to release, in the planned time frame, one LTS, one weekly, and more than 20 plugins. So thanks to everyone involved in that process, directly and indirectly, and thanks for the support. That was quite a success.

One reminder for later, though: a weekly was accidentally triggered the day before, because we were all bitten by the fact that weeklies are triggered automatically each week, and if we disable the job in the UI, the automation might override that. We need to find a solution; there is an issue on the jenkins-infra tracker for it, so next time we should have one. Anyone with ideas and ready to implement them, let's go (one possible guard is sketched at the end of these announcements). We can talk about it later, but that's the reminder: next time we have to be careful, because it's not enough to disable the job. There were multiple culprits, so no problem, no one has to apologize. The security team is aware of it; they know it isn't one person who messed up, it's just that the team got bitten. These things happen, so let's improve together.

There is a weekly that should happen today; I haven't checked its status yet. It might break, because yesterday, with the help of the team, I merged fundamental changes to the pipeline that runs the release and packaging. And there might be, let's say, one-off weeklies between today and next week: if we see too high a rate of changes, we may want to trigger intermediate weeklies to validate the new process changes introduced in the meantime.

Are there other announcements? Okay, before we go on: I need someone to drive the meeting next week, because I should be off that day, so I won't be able to run and prepare it. Who's going to volunteer? We can share the burden between multiple people; we'll flip a coin later. I can do that. Yeah, thanks. Okay, any other announcements? Does someone have something to say? Okay, let's proceed.

First, a few minor subjects that we delayed during the past weeks because of all the activity. repo.jenkins-ci.org has a certificate that was only expected to be valid until March this year. Thanks a lot, Kohsuke, for dealing with JFrog: it took time to generate the required elements and send the new certificate, with its key encrypted, to JFrog. They applied it successfully and immediately, so everything went well, there was no service outage, and the new certificate is now in place on repo.jenkins-ci.org. So thanks, Kohsuke. We've also added an event to the team calendar (which is brand new, less than one year old) three months before the next certificate expires, so someone else will be able to take over. The calendar event has the link to the Google group with instructions, so the next person in charge will be able to find the history, know how to do it, and know what we should provide to JFrog support next time. So thanks again to everyone involved in that part: Mark, Kohsuke, and Olivier, I think.
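Coming back to the weekly trigger problem above: here is a minimal sketch of one possible guard, assuming a hypothetical marker file and schedule. This is an idea to discuss, not the team's actual solution; the point is that the scheduled job checks an explicit marker in the repository, so that re-enabling the job (for example by an automated seed job) is not enough to trigger an unwanted release.

```groovy
// Minimal sketch, not the actual release pipeline: guard the scheduled
// weekly behind a marker file, so pausing releases does not depend on
// the job's enabled/disabled flag (which automation may override).
pipeline {
    agent any
    triggers { cron('H 12 * * 2') } // hypothetical weekly schedule
    stages {
        stage('Guard') {
            steps {
                script {
                    if (fileExists('.weekly-release-paused')) { // hypothetical marker
                        error('Weekly release paused: remove .weekly-release-paused to resume')
                    }
                }
            }
        }
        stage('Release') {
            steps {
                echo 'actual packaging and publication steps would run here'
            }
        }
    }
}
```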
Next topic: Netlify. So thanks, Gavin. Gavin worked during the past weeks, including the Christmas holidays (at least for the European and American area), on the integration and partnership started with Netlify. We have an open source account, and we are able to provide website previews for jenkins.io, plugins.jenkins.io, and I think a third one; any of our static websites, really. We used Netlify before for status.jenkins.io, and we still need to migrate it to this new account. What this brings is that on each pull request, Netlify automatically generates the website and adds a link directly into the pull request, so the author or a reviewer can click and see what the website would look like if the pull request were merged, which is a really nice improvement.

So Damien, you said that we still need to migrate status.jenkins.io to the open source account; did I get that correct? Yes. Okay, thank you. I don't think there is a helpdesk issue for it yet, so we'll have to write one. status.jenkins.io was already hosted on Netlify in production and using the native Netlify preview sites, but under Olivier's personal account. Now that we have negotiated an open source account with multiple administrators (before, we were only able to have one admin), migrating the website to the multi-administrator account makes sense and will avoid any bad surprise on Olivier's bank account if something unexpected happens with status.jenkins.io.

Just to note, Gavin indeed did hard work on providing a safe mechanism to ensure that these deployment previews happen and that everything runs from infra.ci.jenkins.io. He took care of using the native GitHub Deployment object, which provides the checks that map to deployments: it lets us say that a given commit is deployed at a given place, track the prerequisites of a new deployment, et cetera. Also, the website is not built by Netlify: it is built by infra.ci.jenkins.io through a specific set of pipeline libraries (I don't know the implementation details), and then pushed with the Netlify command line directly to Netlify. The reason is that we are limited in the number of build minutes on Netlify CI: we get a lot for free, but given the high contribution activity on these websites, we would completely burn those free credits. Instead, if I understand correctly, Gavin implemented something where we build on our own machines, which we pay for, and we only use the Netlify previews, which are unlimited and free.

He also worked hard on the safety of that system. For instance, on jenkins.io, an external contributor will see a CI build on ci.jenkins.io that takes care of building the website, so as a contributor you get feedback that is publicly available. In parallel, infra.ci.jenkins.io also triggers a build, but only for the deployment part: it builds a copy of the site on its own, deploys it, and then adds the comment back on the pull request.

So Damien, just to be sure I've understood: there is now an infra.ci component for the www.jenkins.io site that wasn't there before, specifically doing these deployments, so we don't have the Netlify deployment credentials in ci.jenkins.io; he successfully avoided putting those relatively sensitive credentials in there. Exactly. Okay, excellent, thank you. Although it has an impact on infra.ci, I will talk about that in a subsequent task on today's task list. But yes, that's the idea. Gavin and Tim did really huge work on that part, and the improvement for contributors is huge. Are there any questions about Netlify? Okay.
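To make the flow above concrete, here is a minimal sketch of what such a preview pipeline could look like, with hypothetical credential IDs, labels, and build commands; the actual implementation lives in our pipeline libraries and certainly differs. The site is built on our own agents, and only the upload uses the Netlify CLI, so no Netlify build minutes are consumed.

```groovy
// Minimal sketch, hypothetical names throughout: build the static site
// ourselves, then push a deploy preview with the Netlify CLI.
pipeline {
    agent { label 'linux' } // hypothetical agent label
    environment {
        // The Netlify CLI reads these two environment variables.
        NETLIFY_AUTH_TOKEN = credentials('netlify-auth-token') // hypothetical credential ID
        NETLIFY_SITE_ID    = 'jenkins-io-preview'              // hypothetical site ID
    }
    stages {
        stage('Build site') {
            steps {
                sh 'make site' // hypothetical build command for the static site
            }
        }
        stage('Deploy preview') {
            when { changeRequest() } // only on pull requests
            steps {
                // Without --prod, "netlify deploy" uploads the pre-built
                // directory and returns a unique preview URL per deploy.
                sh 'netlify deploy --dir=build --message="PR ${CHANGE_ID}"'
            }
        }
    }
}
```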
A quick one: rating.jenkins.io. It was reported a few weeks ago that Apache server tokens were appearing on several services, including rating.jenkins.io. Hervé started the work and then finished it over the holidays, so now we can say that rating.jenkins.io is no longer publishing, either in the web page or in the headers, whether it runs Apache, which version, or on which operating system. It's a minor thing, but it's applied, and it was a good opportunity to automate: thanks to Hervé's work, all the Docker images are now automatically built and rebuilt, and we took the opportunity to add automated update pull requests for production deployment. Each time there is a new version of the rating image, within the next day a pull request is opened saying there is a new version of the container, do you want to deploy it? That one was missing. Good job.

Okay, if there are no questions, we can move on to DigitalOcean. Oh, by the way, on rating.jenkins.io: I'm almost sure there is an issue for migrating that service to Kubernetes, because it's running on a standalone virtual machine right now, which is of low value. That could be a nice topic for Hervé or Stefan in the coming months; there is no priority on it, but it's something to keep in mind.

Next topic: DigitalOcean. Hervé, the mic is yours. Yes, so I tried the DigitalOcean Terraform provider and started to experiment with it. I will use it to create a new cluster, which we will use for complementary agents for ci.jenkins.io. We intend to run this as an experiment for three months, to assess the ease of use, the associated cost, and the alternatives. Part two could be to add new agents as DigitalOcean VMs; that's not started yet. We could do these two parts simultaneously or one after the other; it's not decided yet.

As a reminder, the concerns here about Kubernetes are not directly related to infrastructure cost; it's about human and technical cost, because we have to keep changing all the images to not use the usual Kubernetes patterns but instead what the Jenkins Kubernetes agent plugin expects from the way you use it, that is, single-container pods, because there are issues in the implementation (they are public issues). That costs us real effort, which Gavin publicly questioned, for good reasons: if we have to build all-in-one images that take almost as long to start as a virtual machine, then maybe virtual machines would be better. They would run on private, isolated kernels, and they would let us run Docker on these agents without fear, compared to pods. We have plenty of virtual machine providers everywhere; it's just that we need one Jenkins plugin per cloud provider, whereas Kubernetes is one single plugin, right?

But being able to run Docker on a virtual machine seems attractive to me. Yes, so that's the balance. Gavin asked for good reasons, and I think he is right, even if it frustrated me at times, because we had been working on Kubernetes for a while. The reality is that we have to question it, or improve the Jenkins Kubernetes plugin as a community, because a lot of users are starting to complain about these weaknesses. We are kind of the first users, a quality gate before it's widely adopted, and we are surely not the only ones suffering from this with that plugin. So that's really something to push to, let's say, the cloud-native part of the community; I'm not sure what the good starting point would be, because I think some help is needed to improve that plugin. Thanks, Hervé.
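For context, here is a minimal sketch of the single-container pattern the Kubernetes plugin pushes us toward, with a hypothetical all-in-one image name: everything runs in the one jnlp container instead of being split across specialised containers.

```groovy
// Minimal sketch (hypothetical image name): one all-in-one container per
// pod, overriding the default jnlp agent image, which is the usage the
// Jenkins Kubernetes plugin currently handles best.
podTemplate(yaml: '''
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: jnlp
    image: example.org/all-in-one-agent:latest # hypothetical: JDKs, git, etc.
''') {
    node(POD_LABEL) {
        // No container('...') switching: every step runs in the single
        // jnlp container, which therefore has to ship all the tooling.
        sh 'java -version'
    }
}
```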
As a reminder, that's part of the partnership with DigitalOcean: we have limited credits in that area, and we will have to write a blog post once we have started, after one month of operation; that should be a good start. We can co-author it, but they asked us to do this, and we might want to put their logo somewhere; don't worry, Gavin will poke us when needed. One of the goals here is that if we start using those credits, it works very well, and DigitalOcean is happy after one or two months, we will be able to negotiate with them if we need more. So that's a great way to decrease our costs. And the blog post is on how we use it? Yes, exactly. The goal for DigitalOcean is to have users who are not part of their websites telling people, hey, DigitalOcean is a nice platform because of this and that; it's also a way for them to get honest feedback on real-life use cases. It's completely fair, yes, especially as they are giving us machines for free. I mean, that's fair: we try it and we say what we think.

Are there any other questions or points about the DigitalOcean topic? Just that the jenkins.io site does not yet have their logo on it, and we may want to change the layout to acknowledge that there are now different tiers of sponsors. Right now, a bunch of sponsors are all listed under free and open source licensing programs, where in fact for many of them the sponsorship isn't about licensing, it's about capacity, Netlify for example. So we've got some wording improvements to do on the jenkins.io site. Good point, we have to keep that in mind. Thanks, Mark.

Next topic. Stefan, do you want to present it yourself, or do you want me to give a summary? I'm sure you will do a better job. I'm not so sure, but okay. Stefan is working, as his first major task on the infrastructure, on merging all the Docker agent images we have on ci.jenkins.io into a single image that will have exactly the same tooling as the virtual machine templates. As a reminder, the goal is that developers shouldn't care whether their workload runs on ci.jenkins.io on a virtual machine or in a container, which means the container will need to have the three JDKs, the same version of Git, and all the rest of the tooling; except for a running Docker engine, it's totally possible to have everything in that one big image, following the single-container principle we just discussed for Kubernetes. Stefan's work is to use Packer to build these Docker images with the same scripts we use for the virtual machines, and the consequence is that we should be able to drop the docker-inbound-agents repository that builds with Dockerfiles; we should not need it anymore. As a reminder, this is only for ci.jenkins.io right now.

The work is almost finished as far as building the Docker image goes; Stefan did all the heavy lifting. The next step will be deploying these images so we can start using them. We have delayed the work on the Windows containers because of multiple issues: the goal was to deliver the Linux part as quickly as possible, because it already has value, and the Windows part can wait. So, Damien, for clarity: is it a Packer template that constructs a Docker container, or one that constructs a virtual machine? Both. I see, okay, thank you. Packer builds all the combinations at the same time: it builds an Ubuntu image on Amazon for virtual machines, on Azure for virtual machines, and against a local Docker engine for the Docker images, and it installs Docker in the virtual machines.
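To illustrate that multi-platform setup, here is a minimal sketch of a pipeline driving such a Packer build, assuming a hypothetical template directory and builder names (amazon-ebs, azure-arm, and docker sources sharing the same provisioning scripts):

```groovy
// Minimal sketch, hypothetical template path and builder names: one Packer
// template with several builders, so the same provisioning scripts produce
// the VM images and the Docker agent image in a single run.
pipeline {
    agent { label 'linux' } // hypothetical agent label
    stages {
        stage('Validate template') {
            steps { sh 'packer validate ./images' }
        }
        stage('Build all platforms') {
            steps {
                sh 'packer build -only=amazon-ebs.ubuntu,azure-arm.ubuntu,docker.ubuntu ./images'
            }
        }
    }
}
```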
So, if there are no other questions, next topic: the release process. Over the past weeks, between the security release and the Kubernetes issues, some changes were required, and some changes need to start being discussed, in the release process for Jenkins, the one we run for weeklies and LTS. The first one, which we merged today, is using one single-container pod per release for the whole release process. We had already done the Linux packaging section before the security release; now it's the whole release, so today's weekly will prove whether it works. I've personally delayed the Windows packaging part, see below. The goal is simply to apply the Jenkins Kubernetes plugin's recommended practices and avoid the timeouts we hit during releases. There is also an upcoming modernization of the content: at least using JDK 11 to compile the final WAR and release Jenkins with that version of Java. And finally, Windows packaging.

Right now, to generate the Jenkins MSI from the Jenkins WAR, even though it's only a one-step packaging, we are downloading 15 gigabytes of Docker image onto the Windows Server Docker container system. That's half of the time spent on a release, which makes neither the security people nor the usual release people really happy. It means we should be able to cut the release time in half if we can solve that issue, because for sure we don't need the whole 15 gigabytes. There are different paths, such as installing the required tools on demand, et cetera, but there is some work to be done here, for sure.

On that subject, thanks to Basil Crow, who started working on the jenkinsci packaging repository. That's the jenkinsci repository hosting the Makefile that defines how, from a Jenkins WAR file, you can build a Debian package, a SUSE package, a Red Hat package, or the MSI package. That logic is not in jenkins-infra, which means the jenkins-infra release process and the Docker images we build on the infra side only reproduce that process, and that process should already have been tested on ci.jenkins.io beforehand. So Basil did really huge work, and I think it's not finished yet, there are more steps, but the goal is that at least once per day we should have a full build of a WAR that validates we can run the packaging.

And I wonder, as a personal proposal, whether we shouldn't move all the dependency logic, the one that says you need these tools to build the RPM, these tools to build the Debian package, the WiX toolset to build the MSI, into that repository, so it's available to developers there. Our release and Docker packaging work would then only cover what is specific to our release process, meaning the GPG agent, the inbound agent, only the tooling specific to the infra. That should help the developer experience and avoid the chicken-and-egg problem we have today, which makes the documentation hard to write. So stay tuned on that subject.
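To illustrate the idea, here is a rough sketch of what a daily packaging validation job could look like, with assumed make targets, variables, and download location; the exact Makefile interface in the packaging repository may well differ.

```groovy
// Minimal sketch, hypothetical make targets/variables and WAR location:
// exercise the packaging logic daily, so breakage is caught before a
// release instead of during one.
pipeline {
    agent { label 'linux' } // hypothetical agent label
    triggers { cron('H H * * *') } // roughly once per day
    stages {
        stage('Get war') {
            steps {
                // hypothetical URL for the latest successfully built war
                sh 'curl -sSLo jenkins.war https://example.org/latest/jenkins.war'
            }
        }
        stage('Build packages') {
            steps {
                sh 'make deb rpm suse WAR=$PWD/jenkins.war' // assumed interface
            }
        }
    }
}
```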
Finally, there is a subject I want to start; we won't discuss it now. I will propose a discussion topic on Discourse about stopping the use of branches on the release repository. The goal would be one branch, one pipeline, one set of pod or agent definitions, and then each Jenkins release line (they are multi-branch today, but that's not the topic) would be one file or one set of parameters. That change would have an impact on the release process, the security procedures, and the way we build and release Jenkins. That's why discussion is needed before acting, because there might be consequences we haven't foreseen; I prefer asking the question first, discussing, and then seeing. The value would be that we wouldn't have to transplant and cherry-pick each change onto each stable branch; we would have only one process that we can run regularly and be confident in.

So the safety check I think we need there: one of the security team's strong requests, which we cannot satisfy today, is the ability to package a release in a staged fashion. Today we can't generate a DEB file or an RPM or an MSI that isn't immediately published and visible. Do I understand that correctly? Yes, and that's also one of the high values of this: once we have simplified that process, it will be an enabler for staging releases before the real release date. Got it, right. So today the tooling knows how to stage the Jenkins WAR file before release, but it can't do the packaging, and that creates this critical-path, long-lead-time thing where building the Docker images takes a long time, then building the MSI, et cetera. Okay, thanks Damien.

One last important topic: on the 13th of January, yesterday, and today, we saw download slowness on JFrog's repo.jenkins-ci.org. An issue has been opened on our helpdesk by end users. The consequence for end users is that every build of a Jenkins plugin that requires an artifact from repo.jenkins-ci.org is terribly slow, like less than five kilobytes per second of download when you are lucky. Let me find the issue. There it is. Okay, thanks. So we have asked JFrog support for feedback. Sometimes they ask us for more details, because I think they were running blind with our initial request, so we worked on an answer earlier today. We gave them proof, we gave them reproduction steps, and we gave them metrics, because not only are the downloads slow on ci.jenkins.io, they fail builds, since we reach the one-hour timeout on builds that should take five or ten minutes. It's easily reproducible on a local machine: when you start seeing these errors, run a Maven clean compile on your machine after cleaning up your local Maven artifact repository, and watch it try to download. We were able to reproduce it from a bunch of different ISPs, so it's not a problem specific to ci.jenkins.io. As for metrics, there is a ping endpoint that our monitoring checks, and its response time is between six and eight seconds when those slow downloads happen. So there's something on their infrastructure.

I think it was Jesse Glick who asked the question: a way to, let's say, mitigate the impact of this, at least on the ci.jenkins.io side, would be to think again about a service acting as an artifact caching system for our build infrastructure. If I understand correctly, it would be a kind of nginx proxy: we have to configure our builds to use it, and that system then acts as a man in the middle. If it has the file in its local cache, it serves it locally, which performs well; if it doesn't, it asks the JFrog system directly. So at least when that kind of incident happens, which is not that frequent but still annoying, the builds on ci.jenkins.io wouldn't be that slow, unless you add a new dependency, of course.
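Here is a minimal sketch of how a build could opt into such a cache, assuming a hypothetical proxy URL (artifact-cache.example.org); the proxy itself, for example an nginx cache in front of repo.jenkins-ci.org, is not shown.

```groovy
// Minimal sketch, hypothetical proxy URL: point Maven at a caching proxy
// that serves cached artifacts locally and only falls back to
// repo.jenkins-ci.org on a cache miss.
pipeline {
    agent { label 'linux' } // hypothetical agent label
    stages {
        stage('Build through cache') {
            steps {
                writeFile file: 'cache-settings.xml', text: '''<settings>
  <mirrors>
    <mirror>
      <id>jenkins-artifact-cache</id>
      <url>https://artifact-cache.example.org/repo</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>'''
                sh 'mvn -s cache-settings.xml clean verify'
            }
        }
    }
}
```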
So let's wait for JFrog, and depending on their answer and their ability to solve the issue or not, we can start a topic in that area, or not. Are there any other questions on that topic? We have reached the 30-minute mark of the meeting, so I propose that we delay the other subjects to next week. Ah, yes, because you're not here. Don't worry, you will have plenty of work. Ha ha. Are there any other questions or important topics to treat before we stop, anything we could have forgotten? Just that we've got to be sure we do the safety checks for the weekly release that you mentioned earlier. There were changes in the release scripts, right? So, detailed checks: I'll start those here shortly; I've got a release checklist that I run for weeklies to see that they're healthy. Okay, if that's fine, let's do this, the two of us, unless someone else is interested; or at least we will be available if you see anything in these checks that we can fix and iterate on. Yeah, I was just going to raise the questions in the Jenkins release IRC channel, and we can talk there as a way to discuss it. Is that okay, Damien, or would you prefer to pair? No, that's okay: let's start the discussion there and, by default, if no one is available, I'm happy to pair. Great. All right. Okay, I think that's all for today. Thank you. Thanks everyone. Thank you very much. Have a good update.