Hi everybody, welcome to this new Jenkins infra meeting. Before we jump on the few topics that I would like to introduce, we have a few announcements for the coming week.

The first one: tomorrow, with Garrett and Damien, we are going to do a cluster upgrade. We documented the procedure to upgrade the Kubernetes cluster, and I put the link in the notes, so feel free to review that document. We don't expect any issues for this upgrade, but we would like to formalize the way we do upgrades for the coming versions, so it's a perfect time to improve the process, and we'll apply the same process for the next versions. We still have a few versions to go: right now we'll jump to version 1.18, then we'll probably go to 1.19, 1.20, and then 1.21. So stay tuned.

The second announcement: on Thursday we're going to do a JIRA upgrade; there is a link to the maintenance window. Basically, the Linux Foundation will update JIRA and then we'll have to restart the service. We should not have downtime bigger than 20 minutes, but just assume that it's down for one hour. If something goes wrong, feel free to look at status.jenkins.io, as it will be the place where we post updates and news.

And finally, the third announcement: the Jenkins weekly release 2.288 has been released today. I'll talk a little bit about that later, but considering the challenges that we faced last week, I'm really happy that we could release the latest weekly without any issues. Any questions before I continue? Sounds great.

So today I would like to share something new. I've been experimenting a little bit with a new workflow to take notes and publish them. The current challenge is that we are using a Google Doc, and I've been wondering about a way to have those notes published directly on the jenkins.io website or on an infrastructure website. So I've been playing a little bit with HackMD, and I'm going to do a quick demo here.

Inside HackMD, we can specify who has access to the notes. Obviously everybody will be able to read the notes, but we would like only the infrastructure team to be able to edit them. If you are interested in participating and taking notes with us, feel free to ask and I'll add you to the team; the restriction is just to avoid spammers, which is why we don't want to allow everybody to write. But if you're interested, feel free to join.

We now have a few documents; we did some experiments, and I just started the meeting notes here, which is the one I was showing you. There are multiple views (it's loading): you either have the view to visualize the notes as we take them, just the edit view, or, in this case, the combined view, where you edit the document and it automatically renders on the right side. To me, the most important thing here is that you can directly push to git repositories: there are two buttons, pull from GitHub and push. I'm not expecting to use pull, because we'll just push a commit once and we are not supposed to modify the notes afterwards. So if we want to push the notes, we can just select the git repository. In this case, it's just an experiment:
I'm using the documentation git repository, setting the branch and the file that I want to create. I can either select an existing file or create a new one, for example meetings/2021-04-30.md, with the year first, then the month, 04, then 30, as a markdown file. I create this file, the new meeting notes, and push. The file will be available soon... it's thinking... and now the file is available on the git repository. It's loading... yes, the meeting notes are here.

The purpose here is just to improve visibility. I'm envisioning putting this under the jenkins.io git repository, but as long as I don't fully control the process, I prefer to experiment in this documentation repository. That's all I wanted to share; any questions so far? Starting from now, what I'll probably do is export the existing notes and put them inside the git repository, so we have everything in one place, and I'll reuse the same workflow next week. If you don't have any questions, I'm going to move to the next topic. Yes, sorry, for some reason I don't have the video enabled. Never mind, I'll look at that later.

So, Garrett and I identified a few issues regarding the update center: updates.jenkins.io and the mirror website, get.jenkins.io, are both affected. Garrett identified issues because he's currently working on a way to automate Jenkins Docker images that include the plugins, and he regularly faces cases where we can't download the latest plugin. So he came to me to see if we could find a way to fix that.

At the same time, what I noticed on mirrorbits, the service behind the place where we download the plugins, is that it takes a huge amount of time to build the list of mirrors, like several hours. That means that for the first hours, you are redirected to the fallback service, which shouldn't be used as a fallback for hours, maybe just for the time to sync the first mirror; I was expecting 30 minutes maximum when I put that service in place. I also noticed that we usually have the traffic redirected to China for a few hours before we effectively use all the mirrors. The way mirrorbits works is that it builds a checksum for each file, and for some reason only the mirror from China has the correct file with the correct checksum at first. But if I manually trigger a scan, it works immediately. So there is definitely something weird happening: either on the container side, which we have to investigate, or, as Garrett suggested, some issue with the proxy where requests are not correctly forwarded to mirrorbits. Either way, it's definitely something we have to improve. And it doesn't just affect the Jenkins project's own infrastructure: it affects every Jenkins instance. It's just that we are now facing the issue ourselves, because we regularly build instances that need the right plugin versions. If someone is interested in digging into that, I would be really happy to work with that person; otherwise I'll just have to find some time to work on it.
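For whoever picks this up, here is the kind of manual intervention I described, sketched with the mirrorbits CLI. Treat it as an illustration under assumptions: the mirror identifier is a placeholder, and the exact invocation depends on how our container deployment exposes the CLI.

```bash
# Re-hash the local repository so newly synced plugin files get fresh
# checksums; this is the step that appears to lag for hours on its own.
mirrorbits refresh

# Force a re-scan of a single mirror. In my tests, triggering this
# manually fixed the checksum mismatch immediately, while the automatic
# scan took hours.
mirrorbits scan some-mirror-id   # placeholder mirror identifier

# Check which mirrors are enabled and back in rotation.
mirrorbits list
```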
Any questions? Nope? Sounds good, let's continue.

We are currently working on the continuous delivery workflow for the jenkins.io website. To bring some context on why we are working on that: with the current implementation, we always push files to a network storage, and because we only ever push files, we don't necessarily clean them up. So we regularly face situations where we have outdated files, or files that should not be there anymore, and we get weird errors. Gavin came to me to see if we could find a solution and redesign the way the website is deployed, which is something we put in place around four years ago.

At the same time, with Update CLI we have an easy way, once we build and publish the container, to propagate the new version to a different git repository. So the idea was to change a few things. First, we don't build the jenkins.io website from trusted.ci anymore: it does not have webhooks, so from trusted.ci we were rebuilding the website every 30 minutes, which means we built it way too often. Instead, we will build the Docker image from infra.ci. Once the Docker image is built, we publish it on Docker Hub, and once the image is published on Docker Hub, we bump the Helm chart used to deploy the jenkins.io website. We had a discussion about either keeping the network storage or using a Helm chart, and my preference was the Helm chart because it's just more portable: we can redeploy the website anywhere, as long as we have a Kubernetes API.

Right now we are waiting for one last component, which Damien has been working on: including Update CLI in the chart library. Once it's done, we'll officially switch the workflow. And if it's successful, we may apply the same pattern to other websites, like the Javadoc site and other static websites. Gavin was interested in doing the same for the plugin site API. It depends how easy it is; it's still in exploratory mode.
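To make the new workflow concrete, here is a minimal sketch of the chart side, with hypothetical image and key names (these are not our actual manifests): infra.ci publishes the image, and Update CLI then bumps the tag below in the charts repository.

```yaml
# values.yaml (sketch) for a hypothetical jenkins-io chart: the whole
# site is baked into a container image, so no network storage is needed.
image:
  repository: example/jenkins-io-site   # hypothetical image name
  tag: "1.0.0"                          # bumped by Update CLI when infra.ci
                                        # publishes a new image on Docker Hub
replicaCount: 2
```

Because everything needed to serve the site lives in the image, the same chart can be redeployed anywhere a Kubernetes API is available, which is the portability argument above.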
I just have one question; maybe the answer is no. Is there a process today, or did you discuss one, for previewing a change when a contributor opens a pull request? Let's say I go on jenkins.io, I see something I want to contribute, I scroll down, I click to improve the page, it opens the GitHub web editor, I change what I want to contribute, click commit, which opens a pull request still in the web UI, and then the build does its stuff. Is there a process, now or planned, that adds a GitHub check or a link in the pull request that allows me to preview the change? Because as a maintainer, I can review the source of a pull request, but I also need to check the rendering: does it break the spacing, does it break a large code block, things like this?

So, we discussed that, and it is something we would like to have. We just agreed that the first step was to stop using the Azure File Storage, and once we have the new workflow, we would implement it. There are different ways to do it, and it definitely interests us. The thing to keep in mind is that we have two options. Either we deploy it on a Kubernetes cluster: you did a great job deploying a Kubernetes cluster for CI, so we now have a cluster that could be used only for CI environments, and that would be one option. Because the jenkins.io website is still built from ci.jenkins.io, which is an untrusted environment, I would rather use that new cluster you recently deployed. Otherwise, there were some discussions about using a third-party service, like Netlify or surge.sh or something else, just for the PR preview process.

On a general basis, though, I'm usually not confident relying on third-party services: either you rely on the free tier, and then you put scripts in place, you automate things, and you depend on a service that you're not paying for, so the day that service decides to charge you for whatever reason, you have to change the workflow. You've just delayed the work, in my opinion. So I would definitely prefer a Kubernetes-based solution that we can easily move between cloud vendors.

Yeah, but the cost of building such a preview service ourselves is really high, especially given the build time. Using Netlify only for pull request previews could be an intermediate step, in particular because Netlify publishes its Docker images: you can reproduce what Netlify is doing, because their internal Docker images are available by default. This is something Gavin has been pushing for quite a long time, and I've been pushing back because of the time I've spent creating and configuring the different accounts. So we may use it just for pull requests, sorry.

Yeah, that could be an intermediate step; it might not be mandatory. The goal I want to underline here is to help contributors and maintainers who are not part of the infrastructure team have a better experience when they want to contribute. The documentation is a critical point for the project, and from my experience, being able to preview something other than the code is really, really important when you're getting started, and for people who don't have a lot of time to contribute. That's a sensitive point. I still agree on not relying on Netlify for the production website, but even if there is a diff between the preview and production, you still have a preview, and that's what matters.

Yes. And I just realized that I was sharing the wrong video, so you didn't see my screen earlier when I was showing how I was working: I was in an old Hangout session where I was alone, so apparently you didn't see my presentation and my demo about the documentation and the way we take notes. No, we didn't. Actually, I think we did. Yeah, we did. Okay, perfect then.

So, back to Damien's point: preview environments feel brilliant to me as a way to reduce onboarding friction for contributors. I know we wrestled terribly with contributor onboarding during Google Season of Docs, because our writer had real difficulty getting the jenkins.io site to build on her local Windows computer. So I like the idea very much. And being able to just build the Docker image already simplifies that process. That's the easy part, though; the rest is complicated: creating a subdomain, adding the route on whatever proxy, even if it's automated, and ensuring that it's garbage collected. That's the worst point. These things are technically feasible, but really time consuming. Yeah, that's the kind of thing that can be automated anyway. Honestly, if you can point me to someone who did that correctly, including garbage collection, on something other than a public service like Netlify, I will pay for a restaurant for you, Olivier, because everyone has wanted to do that for a decade and I've never seen it done. Well, Jenkins X does that, and it has been working for years.

I mean, there are solutions that work. With a Kubernetes cluster, you just have to configure a domain: say that a wildcard domain under pr.ci.jenkins.io is redirected to a website built from ci.jenkins.io, and then you can use the PR name in the domain, and you get HTTPS by default. There are solutions to implement that.
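For the Kubernetes option, here is a minimal sketch of what one per-pull-request preview could look like, assuming a wildcard DNS record pointing at an ingress controller and cert-manager for certificates; every name in it is hypothetical:

```yaml
# One Ingress per pull request (sketch): PR 1234 becomes a subdomain
# under a hypothetical wildcard domain *.preview.ci.jenkins.io.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-io-pr-1234
  annotations:
    # cert-manager (or similar) issues the certificate, giving the
    # HTTPS-by-default behavior mentioned above.
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
    - hosts:
        - pr-1234.preview.ci.jenkins.io
      secretName: jenkins-io-pr-1234-tls
  rules:
    - host: pr-1234.preview.ci.jenkins.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jenkins-io-pr-1234   # service fronting the preview pod
                port:
                  number: 80
```

The hard part Damien calls out is exactly what this sketch leaves open: something still has to create these objects when a PR opens and garbage-collect them when it closes.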
I never said it's impossible; I said it's time consuming. That's why I want a solution that is not time consuming compared to the effort of just previewing PRs on Netlify. But that's okay, that will be another subject. Jenkins X does this quite nicely, but that means we would need to install a Jenkins X instance and maintain it. I don't say it's impossible or that it's a lot; I just say it takes time, and I would prefer having your brain and Garrett's brain dedicated to something more valuable than previewing a static website.

One more aside on the notes tooling, something Gavin was suggesting: an interesting thing with HackMD is that it supports Vim keybindings, but for some reason my session came with the default configuration, so I was just typing the wrong commands. Let me get back to enabling Vim mode inside HackMD... searching... sorry.

And the last topic I would like to mention before we finish the meeting: last week we had the security release, which was really cumbersome in many ways, first the release process and then the packaging process. We faced many WebSocket timeout and TCP timeout issues. We were able to fix those by switching the Jenkins agent connections from TCP to WebSocket, which drastically reduced the problem. But then we faced another issue: by default we always install the latest plugin versions available for the Jenkins version we have, and the latest SSH agent plugin release broke the SSH agent connection on Windows machines. So we had to pin the plugin to the previous version in order to finalize the Windows packaging.

All those issues took me quite some time to identify, fix, and work around; Garrett helped me a lot in the process, and we were able to deliver the security release, but it was painful. I sent an email to summarize the situation. I think the most urgent thing we have to do right now is remove the specific version we pinned for the SSH agent plugin: either there is an issue in the plugin that has to be fixed, or we have to improve our scripts that use the plugin, but we cannot stay forever on that pinned version, so we have to find a solution now.

That pinning is actually functionality that's not available in the plugin installation manager; it only works because we're using the uc command to pin that update. Ah, okay. So we couldn't use the --latest false option? It's about updating, not installing: it's not so much the installing of the plugin as the updating of the plugins.txt file. For these images, we're using GitHub Actions, running once every 15 minutes or so, that look for new plugin versions and, if there are any, create pull requests against the right repository. It uses a tool called uc, which is just a very small binary that runs as a GitHub Action, and it gives you the ability to pin a plugin to a particular version and avoid an update if you want.
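To illustrate the pinning Garrett describes, here is a hedged sketch of a plugins.txt excerpt; the plugin versions are made up, and the exact marker the uc tool recognizes should be checked against its documentation rather than taken from this sketch:

```
# plugins.txt (sketch): one "name:version" entry per line.
configuration-as-code:1.50
git:4.7.1
# Pinned entry: a marker tells uc to skip this line, so automated
# bumps keep the known-good version until the Windows regression
# in the SSH agent plugin is fixed.
ssh-slaves:1.31.2   # pinned (hypothetical version)
```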
If you want to look at the work that Garrett did recently: we have a git repository named docker-jenkins-lts. There you have the Dockerfile, which is pretty simple; we just bump the version on a regular basis. We also have the plugins.txt, where we specify the plugin versions, and we are using the uc tool made by Garrett. If you have a look at line 81 there... yeah, this one: you add a comment to pin the plugin, and in this case the uc tool will never update this plugin anymore.

And the process, I don't know if we can look at it from the Jenkins side... We have the container structure tests, where we just specify some tests. I'm not sure if it's clear here. No, it's not; it's all in the .github/workflows folder, right? Yeah. So there is one workflow that fetches the latest Jenkins version: we rely on the Jenkins Docker image built and published for the project, the LTS version on JDK 11, and this workflow fetches the latest Jenkins version. Then you have another one, which is just updating the plugins. Is it this one? Yeah. Garrett, you see here? Yeah; it kind of skips that if there are no updates. It skips what, Garrett? I'm not sure I understood. So when the uc command runs, by default it tries to locate a plugins.txt file in the local directory, and it will try to write to it by default as well. Then there's that action by Peter... something or other. Was it Peter Evans? Yeah, I should really remember his name. When that runs, it will create a pull request if there are local changes. Got it. So what happens here is: uc runs and optionally updates plugins.txt, and then the last step there, create-pull-request, submits a pull request with that changed file.

If you have a look at the Actions tab, you'll see that it runs every 15 minutes or so, and it already created pull requests when there were updated plugins. So the process is: from here we build for the Jenkins LTS, and we have another git repository for the weekly. The workflow opens PRs; if I go to the closed ones, you can see that we just update plugins, and if you look at one, you can see that the file was changed for a specific version: we just bump a plugin to a specific version.

Then, from the charts repository, we use Update CLI. For the Jenkins LTS, we say: return the latest GitHub release from the GitHub repository jenkins-infra/docker-jenkins-lts. Then we test that there is an image published on Docker Hub, under the Jenkins infra organization with the jenkins-lts name, with a tag matching the GitHub release returned by the first component. And then we update the configuration, the default Jenkins release in the config, with the new tag. So what we test is that the Docker image tag has actually been published, and then we can safely open the pull request in this GitHub repository.
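Putting the source/condition/target triple just described into one place, here is a minimal sketch of an Update CLI manifest; the resource kinds follow Update CLI's model, but the keys, file paths, and image coordinates are reconstructed from memory and should be checked against the real manifests:

```yaml
# updatecli manifest (sketch): bump the chart's Jenkins image tag only
# once the matching Docker image actually exists on Docker Hub.
sources:
  lastRelease:
    kind: githubRelease
    spec:
      owner: jenkins-infra
      repository: docker-jenkins-lts
      token: "xxx"   # injected from credentials in the real pipeline
conditions:
  imagePublished:
    kind: dockerImage
    spec:
      image: jenkinsinfra/jenkins-lts   # hypothetical Docker Hub name
      # the tag to check defaults to the source output, i.e. the
      # GitHub release returned by lastRelease
targets:
  defaultJenkinsRelease:
    kind: yaml
    spec:
      file: config/default.yaml   # hypothetical path in the charts repo
      key: jenkins.release        # hypothetical key holding the image tag
```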
And then you arrive at the pull requests in the charts repository: here is one, bumping Jenkins to the latest weekly version. We have a PR and we can see the file that was changed. Those are custom tags for the Docker image, because we want to know which Jenkins version is used inside the image and how many times we bumped the plugins.

So we are using semantic versioning. On that one, because it's a Jenkins version bump, I'm actually bumping the minor number; that's what I've been trying to fix with the release versioning this week. Okay, so that was an important piece: the minor number incremented because the Jenkins version incremented. Yeah. And then we're appending the Jenkins version as pre-release information into the semver, so it's still a valid semantic version; it's just pre-release data (for example, a tag like 1.2.0-2.288, where 2.288 is the Jenkins version; the numbers here are illustrative). I would have preferred to do it as build metadata with the plus symbol, but we can't do that with Docker images: you can't use the plus sign in a tag name. So we've got a number of workarounds around that.

I just want to underline that this is a really cool job, because it forces us to feel what our end users are feeling, by building an image almost every day and downloading the plugins. We started to see issues that are most of the time hard to reproduce: you try it, it works on your machine once or twice, and it's really hard to catch; now we see the same problems our users see. So not only does it help us and facilitate a lot of our tasks, it also improves and fosters empathy. I think it's a really good foundation that you built. Yeah, brilliant, thank you.

Olivier, I am a little concerned about the WebSocket timeouts. Have we had any further discussions with Jesse Glick or others about the root cause? Not yet. We want to update the Kubernetes cluster first, to see if there are any network or load issues, and then we will investigate a bit more. There are two main leads here, pointed out by James and Jesse when I asked them for help a few weeks ago on the infra.ci instance. First, there is a CloudBees support article, linked on the JIRA issue, that proposes different solutions, most of them optimizations, but we should be able to measure their effect. Second, we should be aware that we are using an old Kubernetes version on AKS, and it's hard to be sure whether the problem comes from the AKS control plane rather than from Jenkins. That's why we need to upgrade to a recent version and put some measurement in place, maybe using JMX on the JVM and exporting the metrics, because it looks like we are not monitoring the Kubernetes client inside the JVM. As Olivier said, the changes that he and Garrett applied for the release last week clearly improved the situation, even though there might still be some WebSocket timeouts from time to time. So we have to track this down, but it's not blocking anymore; we improved a lot. Let's see where we are next time.

What I find weird with this issue is that infra.ci has been running for almost a year now, and we only started seeing those timeout issues in the past few weeks. Last week it was really problematic: each time we triggered a job, the job was failing. Since Azure announced the deprecation of version 1.17, we are assuming that it's better to just move to a newer version before spending time understanding what's there. That's what we're going to do tomorrow.

Before I finish this meeting, I would be interested in feedback. Until today, we maintained a really huge Google Doc with all the notes we've been taking every week; we have something like 1990 pages of notes. I'm just wondering, if we use HackMD and push the notes into the git history, is it better to have one big markdown file, or one markdown file per meeting? That's the only thing I'm wondering right now, so if you have any feedback, I would be glad to hear it. For me, a markdown file per meeting, with a date stamp in the file name, is easier to deal with: GitHub still has good search facilities, and Google certainly has good search facilities, to find the right file. Okay, that sounds great. And we can also have a template for the meetings; we could put that in place. Thanks.
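Since we just agreed on one file per meeting plus a template, here is a strawman of both the naming convention and the template, matching the file path used in the demo earlier; the section headings are only a suggestion to refine when we actually add this to the repository:

```markdown
<!-- meetings/2021-04-30.md: one date-stamped file per meeting -->
# Jenkins infra meeting - 2021-04-30

## Attendees

## Announcements

## Topics discussed

## Actions
```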
And finally, the last question: is there any opinion on publishing the meeting notes on the jenkins.io website, or should we maintain another website for the Jenkins infra project, with the documentation and so on? How would this work with the jenkins.io pull-request-based workflow: would this directly inject the notes without a pull request? I guess yes, that's my guess, because in the end it's just configuration. Another option would be to create a temporary branch and open a PR, but I don't think a pull request workflow brings any value for that. Well, if someone were to commit directly, there are cases where jenkins.io can be broken by bad commits; usually they are data-related breaks: if I break a YAML file, I can damage the whole site. My initial thought was not to put this on jenkins.io but somewhere else, though I'm open to other options. Jenkins.io is already hundreds of megabytes of data around the project, so I'm hesitant to add more there, but open to suggestions. Okay; for the time being, I can just use the documentation git repository, and if at some point we decide to improve the process, maybe we move to jenkins.io or somewhere else at that time. For now, we'll just populate the documentation git repository.

Thanks for your time, have a great day, and see you later. Bye-bye. See ya.