Okay, everyone, welcome to the Jenkins Infrastructure public weekly team meeting. Today is the 12th of July 2022. So today we have yours truly, Damien Duportal; we have Hervé Le Meur, Mark Waite, Stéphane Merle, and Bruno Verachten from the community team at CloudBees. One, two, three, four, five, everyone is there.

Okay, let's get started with announcements. So it seems that the weekly is headed in a good direction. At least I can see it available on the jenkins.io website. Not sure about the whole checklist, but as usual, I assume in a few hours it will be validated. Is that correct, Mark? Sorry, ask the question again, I was working on the Jenkins weekly release. Oh, good, you're asking about that. I was multitasking, that's very dangerous. So yes, the weekly release has been published to Artifactory, so to the repository. The changelog is not published yet; I just completed an edit on it and we'll push it. And there's more to do on the checklist, so the checklist is not yet complete, but it's coming soon. No issues detected. Nice.

Right, are there other announcements? So for interest's sake, we are still in the phase of further adopting Java 11 and further updating dependencies. 2.359, for instance, has many JavaScript updates in it, of various component libraries. That's good, we like that. There may be some instability or some surprises there as those JavaScript libraries are updated. Previously, our Dependabot configuration was incorrect: it was not detecting JavaScript updates that were needed. Alexander Brandes corrected that mistake and we suddenly got a flurry of "you need to update this JavaScript, and this, and this". And those are all good things. Likewise, if not 2.359, then the next one will upgrade from Jetty 9 to Jetty 10, and the Jetty 9 to Jetty 10 upgrade concerns a key component of Jenkins. Jetty is the thing that provides HTTP services for us; it handles HTTP and HTTPS. And the move is a good thing, because the Jetty project has declared the end of community support for Jetty 9. They're still willing to support us, because the Jenkins project is a big deal for them, but we like that we're switching to Jetty 10. It's a good thing. That's all the news that I had.

Many thanks. So that means that for the upcoming two or three weeks, if folks see instability on infra.ci or ci.jenkins.io, it's not you, and don't hesitate to ask for help, because that might end up with reporting an issue on the weeklies. Right. Yeah. So we've actually got two installations, right? weekly.ci.jenkins.io is a publicly visible instance running the weekly that doesn't do much; infra.ci is only privately visible, but does an awful lot for us. Absolutely.

OK. In that area, based on the work that Basil and many other contributors did recently, I wanted to raise a subject. So that's half announcement, half ask. I want to try JDK17 on infra.ci as soon as possible. I'm not sure though whether this would impact weekly.ci, or whether we could avoid impacting it, or whether we should, because it does nothing. But my proposal would be to start switching our instances that follow the weekly release to use the weekly on JDK17. Basil told the community there is no known issue; I thought that we would have some issues. So if it's ready, let's get started as soon as possible, that will provide insights. Well, I would love to see weekly.ci.jenkins.io switch to Java 17.
I've got an unreported issue that I need to investigate further: when I switched my production instance from JDK11 to JDK17, I got surprising messages in my agent logs, but I can't duplicate it. And I've tried with a brand new installation; I still can't duplicate it. So nothing for you to note here, because I don't know of a repeatable issue. I just happen to have a thing that I need to investigate further when time allows. Once I have it repeatable or understand the conditions, I can then raise an issue for it. But I don't have any repeatable issue yet.

OK. The challenge for us will still be that we will have to add JDK17 next to JDK11 on our agents. For the virtual machines, it's not an issue because it's already the case, so we only need to update the EC2 templates to start the agent with OpenJDK 17 instead of JDK11. So for virtual machines, that will be really easy. However, that might require more effort to update our Docker container images. But that effort will be required no matter when we do it; if it's not now, it will be in two or three months. So I propose that we get started on this one. Is that challenge clear for everyone?

I like it, and I think we take it at relatively low priority. But yeah, eventually we certainly want Java 17. It was very good for us when we switched ci.jenkins.io from 8 to 11 a few years ago; it gave us lots of experience. And even better that we switch lower-impact ones first, learn from those, and then eventually get to the big one. Agents with both JDK11 and JDK17 — that was my announcement, because I'm motivated to do it even outside the priorities of our weeklies. I feel like we can gain a lot. And for me, it's a way to also give valuable insight, if possible, to Basil and to the community, and to prove that it works.

OK, can I let you take the lead for just two minutes, Mark, to take the notes? Sure. So what do you want me to do — I'm not sure what you want me to do, Damien? Sorry, sorry, that's OK, I'm taking it back.

So let's get started on what we did. A lot of issues, great job this week. So let's go one by one, in the order of the notes.

First, update the analysis-model and warnings-ng plugins on ci.jenkins.io. That was requested by Alex (NotMyFault). It's something I'm at the origin of: the reason is that ci.jenkins.io right now is not managed as code for the plugin part. I was sure I had started writing something, but I wasn't able to find it. So there is an issue to be written to get started on building a ci.jenkins.io Docker image with its plugins. That would mean building an image for ci.jenkins.io the same way we do for infra.ci and release.ci: a prebuilt image with an exhaustive list of the plugins and their pinned versions, and then a process to update them. I need to write an issue to explain what the downsides of doing that would be, because it will have an impact on the security process. So before jumping on implementing this, we first need to outline the advantages and think about how we could adapt the security process so they can pre-stage and test the image prior to delivery. And how do we fix the drift that inevitably happens when a security fix is applied to ci.jenkins.io? Because if we upgrade to an updated version of the core or a plugin, then we need to build an image before that plugin or core version is available publicly. That's the concept of staging.
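As a rough illustration of the pinned-plugins idea and the kind of drift check this process would eventually need (discussed further below) — a minimal sketch, not the actual implementation, assuming a hypothetical `plugins.txt` manifest and using the standard `/pluginManager/api/json` endpoint, which requires credentials and an unblocked remote API:

```python
#!/usr/bin/env python3
"""Rough sketch: compare the plugins pinned in a (hypothetical) plugins.txt
against what a controller actually reports, to spot drift such as a
security update applied outside the usual config-as-code process."""
import sys
import requests

CONTROLLER = "https://ci.jenkins.io"  # assumption: remote API reachable with credentials
PLUGINS_TXT = "plugins.txt"           # hypothetical manifest, lines like "git:4.11.3"


def pinned_plugins(path):
    pins = {}
    with open(path) as manifest:
        for line in manifest:
            line = line.strip()
            if line and not line.startswith("#"):
                name, _, version = line.partition(":")
                pins[name] = version
    return pins


def installed_plugins(base_url, auth=None):
    # /pluginManager/api/json?depth=1 lists every installed plugin and its version.
    resp = requests.get(f"{base_url}/pluginManager/api/json?depth=1", auth=auth, timeout=30)
    resp.raise_for_status()
    return {p["shortName"]: p["version"] for p in resp.json()["plugins"]}


def main():
    pins = pinned_plugins(PLUGINS_TXT)
    actual = installed_plugins(CONTROLLER)
    drift = {name: (want, actual.get(name)) for name, want in pins.items()
             if actual.get(name) != want}
    for name, (want, got) in sorted(drift.items()):
        print(f"{name}: pinned {want}, installed {got}")
    # A non-zero exit code could be what triggers an automatic pull request.
    sys.exit(1 if drift else 0)


if __name__ == "__main__":
    main()
```

Such a check is only a building block: the staging and security-process questions below are the real design work.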
So then, as soon as the security advisory is published, ci.jenkins.io must be updated as soon as possible; otherwise, someone will hack ci.jenkins.io by following the public security advisory. So time is of the essence there. And then, once we have an image that has been stored on a private registry and tested outside our usual public configuration system, we need a way to correct that drift — to say, "oh, I see that this image is not what you would expect by looking at the infra code, which is public." So we need, let's say, a clear automatic process that says "we see that there has been a security update" and automatically opens a pull request for us, so that the usual process can stay disabled during the window of the security process. I don't want to burden the security team with the configuration drift. And then, once they finish and say "OK, ci.jenkins.io is updated with the image", we need a way to correct the drift and go back to the usual process. That's the main challenge on that issue. The advantages of doing so are that we could stage and test the image prior to deploying it to ci.jenkins.io, we could then automate and audit plugin updates on ci.jenkins.io, and moreover we could provide easier rollbacks if needed.

So, just to be sure I understood: the concept is to switch from the current way of managing ci.jenkins.io, as an operating-system-installed package, to a Docker image, but with some way to stage those Docker image updates privately before we're ready to publish them publicly. And on public publishing, it detects the drift and says, oh, we need to update. Exactly. OK — except it's already a Docker image. Oh, it is already? Yes, the official one, with the tag lts-jdk11. OK, so it's not an image that we build ourselves including the plugins and any other components, job definitions, etc. I see. Thanks.

So right now I've asked Alex to open issues so we can at least trace the changes they want to make, because he wanted us to update some plugins to specific versions. So I took care of that. The goal is to have issues like this one until we are able to fully use configuration as code with automated or auditable content. Is that clear for everyone? Because I failed to share that with Hervé, so it ended up with the issue being closed initially, and it's my mistake because I did not share the knowledge enough. I just want to be sure that it's clear for everyone, so this doesn't happen again. Cool.

We had a "remove my account" request, as usual. We had to prove that the person saying "hey, I want that account to be removed" was the right person. So we had them prove that they were able to control both the GitHub account and the Jenkins account before removing it. But it went fine.

Remote access API on a non-ci.jenkins.io instance. Hervé, do you want to explain it, or do you want me to summarize? So, when we were working on the embeddable build status page, we discovered — the way I discovered it — that the remote access API paths are disabled on ci.jenkins.io, but also on the other VM instances managed by Puppet. When Daniel noticed it, he asked us to unblock this remote access API on the private instances, like cert.ci.jenkins.io for instance. So I made this configurable.
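As a quick illustration of those remote access API paths — a minimal sketch, with a hypothetical instance list, of how one could check which controllers answer on `/api/json` and which ones block it at the reverse proxy:

```python
#!/usr/bin/env python3
"""Rough sketch: probe whether the Jenkins remote access API (/api/json)
answers on a set of instances, to verify which ones block those paths."""
import requests

# Hypothetical list; the real one would come from the Puppet inventory.
INSTANCES = [
    "https://ci.jenkins.io",
    "https://weekly.ci.jenkins.io",
]

for base_url in INSTANCES:
    try:
        resp = requests.get(f"{base_url}/api/json", timeout=10)
    except requests.RequestException as err:
        print(f"{base_url}: unreachable ({err})")
        continue
    content_type = resp.headers.get("Content-Type", "")
    if resp.ok and "json" in content_type:
        print(f"{base_url}: remote access API is open (HTTP {resp.status_code})")
    else:
        # A reverse proxy blocking /api usually returns 403/404 or an HTML error page.
        print(f"{base_url}: remote access API looks blocked (HTTP {resp.status_code})")
```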
I've added a parameter to block or not the remote access API, and I set it to false by default and to true on ci.jenkins.io. Thanks for the explanation. Great job — that was a lot of Puppet, and you delivered it to production with no outage on ci.jenkins.io, so congrats.

Releases from meeting notes: here it's an automated way to generate the notes for this meeting by creating releases. Really useful; it avoids running a script locally. So thanks for that, it helps us prepare the meetings.

Docker Compose on ci.jenkins.io: a plugin maintainer needed to run Testcontainers, which is a tool driven by Maven or Gradle that runs during the integration test phase — once you have packaged your application, you run a set of test harnesses against the packaged application, and it uses Docker or Docker Compose to spawn ephemeral databases, services, whatever connects to the system under test. Which means they cannot use the default configuration, where plugins are built and tested inside containers on Kubernetes agents on ci.jenkins.io. So it was only a matter of using the correct label. But we also had issues because it was accidentally using ARM machines, so we had to fix the labels of the ARM templates, because with Docker on ARM, when you specify an Intel image and you are on an ARM Docker engine, it doesn't work as expected. So the label has been fixed on ci.jenkins.io and the user confirmed it was working correctly.

So, Damien, for clarity there, that means that the label "linux" in this case really means Linux on amd64. And that's OK — the semantics of the "linux" label on ci.jenkins.io — is there a place where we can record that? That would help me remember it, because I think of Linux on ARM as the same Linux as on Intel, as on s390x; but of course we don't have a "linux" label on the s390x agent or on the PowerPC agent, so we're consistent with that. It's just that I need to remember it. The thing is that the constraint is in the pipeline library: the pipeline library drives which labels are used. So we have to update the label collection, the documentation, and the pipeline library at the same time. So there is clearly some room for improvement, because yes, "linux" for me should not imply an architecture. So it behaved as I thought it would behave; however, I didn't know that the buildPlugin pipeline library was using "linux" while it should use "linux && amd64" by default. OK, so your view is that it would be OK if we ultimately said, sometime in the future, the "linux" label may in fact mean Linux on any architecture. Right now we have the short-term practical problem that "linux" is interpreted to mean amd64, and then we could adjust pipeline libraries to say "linux && amd64" for those cases where it is really required. Absolutely. OK, thank you. Thanks for the clarification. Thanks for the question.

We added Kevin to the jenkins.io triage team, like we did for Jean-Marc and Kristin recently for other repositories. In that case, that will help Kevin to contribute and continue working on the documentation SIG. So thanks to the people who did that one.

We had a failure on the weekly release last week, which was a consequence of the Kubernetes upgrade. So thanks to the help of Stéphane and Hervé, we were able to fix that. It was the same issue as we had before.
But we totally forgot that there is a persistent volume based on Azure Files which is mounted on the pod used to package and deploy the weekly release and the core releases. So the fix was literally the same, and we were able to cover all the other persistent volumes to ensure that the problem won't appear a third time. So thanks for your help, folks. It has been documented in the Kubernetes documentation, so thanks for taking that time.

Removing the Groovy tool configuration from cert.ci: that was a request from the security team. They want to remove the plugin, so we had to remove the tool configuration from the JCasC first. Done: plugin removed, and removed from Puppet. So that's good. Yeah, and we now have to add a removal scenario for tools. That's it, no problem. OK. There is a new request whose scope is removing the Groovy tool for the next ones — yeah, this one is good. And he opened requests on the jenkins-infra and Puppet repositories. OK, thanks for sharing that; so we still have some work to help him.

Grant permission to the update center: I don't know which one this is. It was closed without further comment. OK, so someone did not read the documentation; that's the whole decommissioning thing. OK. So Tim and Daniel took care of that contributor; the goal was to decommission a plugin.

502 proxy error when accessing the pull request view on ci.jenkins.io: that one is solved. It was a tricky one. The root cause was a plugin related to code coverage that was either locking or taking too much time. It has been fixed thanks to Uli's work, so many thanks to everyone involved. As soon as we deployed the new version of the plugin, the page rendered immediately. So that fixed the issue: the 502 came from a timeout reached when trying to load the page. I have no idea how to troubleshoot such an error. I know that we can take a thread dump like Daniel did — there is a page somewhere on Jenkins where you can see the thread dump when doing such an analysis — but honestly, when I read a thread dump, I mean, yeah, OK, I have a thread dump, I'm happy with that. That's why I say it's tricky: as Jenkins admins, I don't think we have the knowledge required to understand this one. So what that means is that when there is weird behavior, don't hesitate to ask others. Because as soon as we raise such issues to people who know, like Daniel and Uli, then they can start acting. That's what Uli expressed: if only someone would report such problems. So I think there is room for improvement there. Personally, I'm a long-time Jenkins user, but knowing that — it's like magic for me; how could I know? So I don't know how we could improve; any ideas are welcome. Because I feel Uli's frustration: having an issue telling him "hey, there is an issue with the plugin" helps him as a plugin developer, but the road to being able to say "oh yeah, it's a problem with that plugin" — for me, it's like magic. So thanks for Daniel's help. But yeah, that means: don't hesitate to ask for help when you see weird behavior on Jenkins. That's the takeaway for me; I don't know if you share it.

Is this a case where, some future day, we might be able to use historical traces — something doing observability — to associate a plugin upgrade with a slowdown in some operations? I mean, right now we certainly cannot. But I'm wondering if this is a place where — think about the trace facility that the OpenTelemetry plugin provides.
Could it be somehow correlated to a plugin upgrade and tell us: oh, we upgraded this plugin at that point, and that's where the traces got slower? If you have a regular process that reaches, in this case, that URL, then yes, you can — because not only the build time is traced, but every GET or POST request going to all Jenkins endpoints. Right, but the problem then is we would have had to be measuring that before the change and after. Okay, right. So the trace facility may not be the magic solution that I hoped for. Thanks. However, if we have a set of health-check GET requests that we run regularly — I mean, the list of pull requests, the main page of the core job — to check "do we have Jenkins project builds on that instance", that could make sense. But yes, it's not that easy: that could help, but not solve the problem. (A rough sketch of such a probe appears a bit further below.)

Hervé, you closed the issue about PowerShell and pwsh. As you explained, it has been done for the virtual machines built with Packer, which was our final goal, so it's done. It's hard to do inside the Windows containers and it's really time-consuming, so we didn't implement the Windows part, because we don't have the issue there now. We worked through the issue with the developers who had the problem, and that led to an update of the Pipeline PowerShell documentation. So thanks for that; it will help a lot of users, because I'm sure we are not the only ones bitten by the fact that the Jenkins pipeline engine has two different instructions for PowerShell depending on the PowerShell Core version. Adding something to the documentation is the real outcome of that issue, because on the infra side we don't have the issue anymore. I mean, we can change the pipeline library, we know it; but letting the rest of the users outside our bubble know is really, really valuable. So thanks, Hervé, for taking care of this one.

Next one, Docker Hub rate limiting: no more. We were upgraded last week to a team plan for jenkins4eval, used on ci.jenkins.io, and for jenkinsciinfra, used on infra.ci and release.ci. And I haven't seen any rate limiting since then. Sounds good. Let's see, when the next wave of Debian updates comes, whether it faces this new reality.

Finally, stop building pull request merges. That one was closed, has been reopened and re-closed, because on ci.jenkins.io we don't manage the job configuration as code, and some jobs were rebuilding pull requests even when only the main branch was updated. Sometimes that's okay for a plugin that doesn't have frequent merges, but on the Jenkins core or the ATH or the BOM, as soon as you update something on the BOM's main branch, each pull request is rebuilt immediately — because the goal of the pull request build is to build the virtual result of what it would be if it were merged, meaning the Git merge commit of the head of the destination branch with the source. That's the default behavior of all the CIs of the world. And in Jenkins, there is by default that behavior saying: if the source — the code you push on your branch — changes, it triggers a new build, makes sense; if the destination changes, you have to be sure that merging your change is still valid, and that triggers a build too. But the thing is that for the BOM or the ATH, it doesn't make a lot of sense and it costs us a lot of wasted cycles. So it has been disabled on one of our jobs; that was done by Tim, I think, manually, inside ci.jenkins.io.
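Tying back to the health-check idea mentioned above — a minimal sketch, assuming we would pick a handful of representative pages (the URL below is a placeholder) and keep an append-only latency log, so a slowdown after a plugin upgrade becomes visible in the history:

```python
#!/usr/bin/env python3
"""Rough sketch: measure the response time of a few representative Jenkins
pages on a schedule (cron, CI job, ...) and append the result to a CSV,
so a regression after a plugin upgrade shows up over time."""
import csv
import time
from datetime import datetime, timezone

import requests

# Placeholder URL: a real list would include the pages users complained about,
# e.g. a pull-request view or a plugin report page.
URLS = [
    "https://ci.jenkins.io/",
]
LOG_FILE = "latency-log.csv"  # hypothetical location

with open(LOG_FILE, "a", newline="") as log:
    writer = csv.writer(log)
    for url in URLS:
        start = time.monotonic()
        try:
            resp = requests.get(url, timeout=60)
            status = resp.status_code
        except requests.RequestException:
            status = "error"
        elapsed = time.monotonic() - start
        writer.writerow([datetime.now(timezone.utc).isoformat(), url, status, f"{elapsed:.2f}"])
```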
So, another room for improvement: being able to put all the ci.jenkins.io job configuration under Job DSL management would clearly help in these situations, because we could audit what has happened and we could remove the Job Configuration History plugin. That one would be a great win. So, another configuration-as-code issue.

So that's it for what we did. That's a lot of issues, not mentioning the ones we are working on. I'm taking them in the same order. Do you have any questions whatsoever on what we did during the past week?

First one: the download "latest" directory is out of date. We have to work on the update center scripts that run regularly. There is a script on the pkg/update-center machine which, every hour, takes the latest plugins, core packages and JSON changes and sends them to the reference mirror in Kubernetes, which feeds the mirror system; and then all the mirrors can start delivering these files. The thing is that the tool we are using is not rsync, and the target is not a Linux filesystem: it's an Azure Files share, which doesn't natively support symbolic links. And we construct the "latest" path on the update center — or "stable/latest" for the core — as a symlink to the latest version. But when you send that to a system that doesn't support symlinks, the link is dereferenced and becomes a plain directory. And since we do not override existing content, we only add new changes, so once the "latest" folder has been created at some moment in time, it is never updated again. So we need to change that logic: we need something specific that takes the latest of each changed plugin and core and refreshes the "latest" directory on the remote system (a small sketch of that dereferencing logic appears below). We fixed at least the initial issue manually on the filesystem inside the Azure Files share, to be sure that the core version there is the latest, as the user pointed out. There is still some work to do. I'm not sure — I will need help from Tim, Olivier or Daniel in that area, because I need them to tell me why the script avoids overriding it (it's explicitly avoided), and I might need their help. That might be postponed until after Stéphane finishes the update center work, because we might be bitten by this, but right after.

ci.jenkins.io agents are very flaky: we had issues with the ATH. For me that issue is closable. We had two kinds of issues: BOM builds and the ATH. For BOM builds on Kubernetes, we had issues with containers or instances on Kubernetes — all the spot instances in the region were consumed on AWS, which hit everything that uses a spot instance, whether Kubernetes nodes on AKS or virtual machines spawned by Jenkins. These machines were reclaimed or never started, leading to a high rate of build failures. So thanks to Hervé's work — he exchanged a lot with the people involved — we removed the spot instances for the virtual machines, but only for the big ones used by the ATH, because those instances were the ones hitting the threshold. So now there is no more impact on the containers, and the additional cost from that is absolutely sustainable, given what we consume compared to the number of builds that won't have to be retried. Good news: with this issue we were able to fully test Jesse's retry pipeline feature. It worked — some builds took seven hours instead of 30 minutes, but they ran to completion. So it worked as expected; developers don't have to retry builds. There were also a few minor things that we need to report, mainly that the EC2 plugin should at least print a warning saying "hey, that spot instance has been reclaimed"; at least that will help to diagnose this in the future.
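Going back to the download/"latest" problem above — a minimal sketch of the dereferencing logic the sync script needs on a target that cannot store symlinks; the paths are hypothetical, and the real script also has to iterate over the core packages and every plugin:

```python
#!/usr/bin/env python3
"""Rough sketch: when mirroring to a filesystem without symlink support
(such as an Azure Files share), resolve the 'latest' symlink on the source
and overwrite the 'latest' directory on the target instead of skipping it."""
import shutil
from pathlib import Path

# Hypothetical paths; the real ones live on the pkg/update-center machine.
SOURCE = Path("/srv/releases/jenkins/plugins/git")
TARGET = Path("/mnt/azurefile/plugins/git")


def sync_latest(source: Path, target: Path) -> None:
    latest_link = source / "latest"
    # On the source, 'latest' is a symlink pointing at the newest version directory.
    resolved = latest_link.resolve()
    destination = target / "latest"
    # The target cannot hold a symlink, so 'latest' became a plain directory there
    # and is never refreshed by an additive sync: remove it and copy the resolved content.
    if destination.exists():
        shutil.rmtree(destination)
    shutil.copytree(resolved, destination)
    print(f"refreshed {destination} from {resolved.name}")


if __name__ == "__main__":
    sync_latest(SOURCE, TARGET)
```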
So thanks, James, for the support in that area. For me, that issue is closable; I will take care of that afterwards.

Removal of the Embeddable Build Status plugin: almost there, Hervé. That triggered a new issue to discuss what we should use as a replacement. The status right now is that we are experimenting quickly to see whether we can have our own instance of shields.io providing these badges on the same machine as ci.jenkins.io. That's the scenario for now, because then we don't need to put any token anywhere and we can keep the remote API disabled on ci.jenkins.io to avoid script kiddies. So Hervé is evaluating that part, and the badge pull request is ready to go. So the question is: can we quickly spin up shields.io and propose a replacement to the 160-or-so developers, or do we have to tell them, "hey, sorry, that feature won't exist anymore"?

And on a related but not dependent story: yesterday in the Jenkins governance meeting, it was agreed that the Embeddable Build Status plugin will be listed for adoption — we sent a message yesterday inviting developers to adopt the plugin. If we don't receive an adoption request in the next two weeks, it will be suspended, because it includes a proprietary component. So the catalyst event was: hey, this plugin has a component that does not comply with Jenkins license requirements. And because of that non-compliance, it started this "shall we remove it?" discussion. And I suspect I may ultimately be the one who adopts it, actually, because there are cases where I need it — unless shields.io works. Yes: if you have a shields.io instance, you don't need this plugin. Yeah, and in my case, I'm not ready to start a new shields.io instance for my needs. Yes. But there's value in retaining the plugin, even if we don't have it on our CI. Yep. If your instances are publicly accessible, you don't need any plugin: you can already use the public instance of shields.io — and maybe they'll provide a private one. If you don't block the remote access API paths, like we've done on ci.jenkins.io, your instance should be able to use the shields.io public instance out of the box. Ah, good. So if I were to adopt the plugin, I may want to take your knowledge, Hervé, and include it in the plugin documentation, so that people know you don't have to use this plugin if you're hosting a publicly visible instance. Great, thank you. Yes. And even in the case of a private instance — yeah, we can discuss this later. So thanks, because that was a lot of hidden work; thanks for working on that.

Enable development integration — sorry? Yeah, I was saying, for a little bit of work, it was a great way to discover other things. No problem. Enable development integration in Jira: I failed to make this one go forward. I need to ping the right person in that area.

Evaluate retry conditions: there is one last plugin to be released before we close it, the Kubernetes plugin. So please don't upgrade the Kubernetes plugin on ci.jenkins.io until this issue is closed; Jesse is doing last-minute fixes and we are running an incremental version on ci.jenkins.io, which is not the latest. Hence the "please don't upgrade it".

Mark, that one, you started, but I think you lacked the time to work on it. Is there anything we can do to help in that area? At this point, I don't think so, because I think the best thing is for me to document in the runbook how to do it, and then later we add a new ticket that says, OK, now let's make it smarter. The agent is running well for me on my private instance; it runs great. It's just that
I haven't done the configuration work for the public instance. Would you mind sharing access to that machine with us? The goal for us won't be to take it over, but to test whether we can start a Puppet agent on it, because we have played around with Puppet during the past ten days. And it would be interesting to know if we can install it even without the official package, like we do for the ARM machines, for instance. Yeah, happy to grant access, you bet, absolutely.

Okay, upgrade to Kubernetes 1.22: what's the status, folks, on this one? I haven't followed the latest changes. The documentation pull request is ready to merge and we should be able to close this issue. Cool — so, seconded, closable. I'll take care of that once it's merged. Thanks for that documentation. I thought we had to file the 1.23 one before closing the 1.22. Oh, yes, yes, good point; I was waiting for the merge of the documentation to open the new issue. Okay, so we'll close it after that, then. Thanks.

And finally, the big one: migrate updates.jenkins.io to another cloud. You say it's a big one because it's mine? No — it's because it involves a lot of technology pieces and it's a big instance. Okay, I thought you knew, so I was winding you up. So, if it's okay — correct me if I'm wrong — the status is that you are working on the Puppet part. You started working on creating a role for that machine; that machine, as a Puppet agent, is connected to the Puppet server, which means that on Oracle everything is started: network and cloud resources managed by Terraform. And so now the next step is being able to install the same role as the current update center machine, and to manage the volume. Okay, yes, and manage the volume. Perfect. That's something that did not exist on the previous machine, because it only has one big volume with the system on it — slash on Linux is 1.2 terabytes — while on Oracle you get a root volume, which is 40 gigabytes, and a data volume, which is clearly better, because we can trash the virtual machine, start a new one and provision it, and we still have the real, persistent data on the volume. But we need to be able to mount that volume correctly, so we have LVM things that might or might not be usable from Puppet (a tiny sketch of that mount logic appears after this discussion). That's the problem Stéphane is working on. So congratulations on that one, because, yeah, Oracle Cloud wasn't the easiest one.

By the way, that means I can start importing resources. So Mark, you have two running machines there. We can keep one, but if you can stop them or merge the usages — unless you have some usage for them — in that case we will have to work on adding them to Terraform, if that's okay for you. I use those machines to test Oracle ARM instances, but I can certainly turn them off if it will help. No problem — if they're used, then let's keep them, but I will have to import and maybe move them. So I will have to bother you a bit about the usage, just for the migration. The reason is that, with Stéphane, we discovered that to apply security concerns with the Terraform management, we created some kind of namespaces — named compartments — and right now the existing machines are outside this compartment, they are at the root. So we will have to move some resources there, which might cause some outages. For the archives, we will manage that in the other topic; but for your machines, that might have some consequences. No problem if we need to just delete them, Damien; I'm fine with either. They're just a way for me to test more broadly and deeply when I test Jenkins, but I can certainly live without them.
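On the root-volume/data-volume split mentioned above — a minimal sketch of the idempotent "format once, then mount" logic the provisioning needs; the device name and mount point are assumptions, and the real setup would be done by Puppet, possibly through LVM:

```python
#!/usr/bin/env python3
"""Rough sketch: make sure the Oracle data volume is formatted once and
mounted, so the VM stays disposable while the data survives."""
import subprocess

DEVICE = "/dev/sdb"        # assumption: the attached data volume
MOUNT_POINT = "/srv/data"  # assumption: where the update center data should live


def run(*cmd):
    return subprocess.run(cmd, capture_output=True, text=True, check=False)


def has_filesystem(device):
    # blkid exits non-zero when the device carries no recognizable filesystem.
    return run("blkid", device).returncode == 0


def is_mounted(mount_point):
    return run("findmnt", mount_point).returncode == 0


if not has_filesystem(DEVICE):
    # Format only once: re-running this script must never wipe existing data.
    subprocess.run(["mkfs.ext4", DEVICE], check=True)

subprocess.run(["mkdir", "-p", MOUNT_POINT], check=True)
if not is_mounted(MOUNT_POINT):
    subprocess.run(["mount", DEVICE, MOUNT_POINT], check=True)
    print(f"mounted {DEVICE} on {MOUNT_POINT}")
else:
    print(f"{MOUNT_POINT} already mounted")
```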
No problem either way. Cool. I would love to see the imports if you can do that while I'm around. Yep.

So, is there any objection to moving all the issues of the current milestone to the next one? Oh, go ahead — except the two that should be closable. Okay. Just a look at the new elements.

So first — ah, I missed: infra team sync, next items from the notes. Right now, I don't see any emergency in the new issues. I have some to write that I will mention later, but today was too packed to get to that. Tomorrow, I will have some time with Olivier about the Datadog management; I hope it will clarify how it works. But we might have a task on the Terraform Datadog setup to change the templating: instead of having one monitor which is multi-valued across the 12 or 13 websites, we would use Terraform's templating to generate 12 or 13 different monitors, so we can finally disable one or the other, because it has been a pain over the past weeks. And the second part is trying to get away from the false positives that we keep receiving on PagerDuty and Datadog, which are mostly caused by the Datadog probes running on the publick8s Kubernetes cluster on Azure, which has network issues — that's the most probable cause. I cannot be sure, I don't have metrics or facts to confirm it, but as soon as we use other probes, it works as expected and we don't have false positives. And also the most prominent one, which is the update center showing high latency: I can reproduce that with a curl request from inside a container in that cluster, while I cannot reproduce it on virtual machines outside the cluster. So my gut there says: okay, let's start by having a more manageable, simpler Datadog setup, and then we can iterate. And for that, I need the help of Olivier. So the goal of the issue that will be written for that milestone will be to capture the takeaways of the discussion with Olivier, so we have actionables of our own. And ideally, I should not be the only person able to do that, given the constraint that some of you will be off in the upcoming days.

CI jobs as code. Do you have new issues that we should look at for the upcoming milestone, on your side, before I go over the most recent issues? One, two, three — okay. I just need help from all of you folks, including Mark, about how to evaluate Alex's request about Javadoc. So there is an issue: broken Taglib docs. I'm not sure who we should contact to diagnose that, because I have no idea how the Javadoc works. I assume it's a set of files that we serve from a static web server. I'm not sure there is anything we can do except saying maybe the errors are HTTP 404, but I don't know how we could diagnose that — or maybe Basil... So, Jesse Glick knows the details of this; I've done enough detective work on it. If you want, just assign this one to me for now. I think this is a low-priority thing. I'm glad that it's been reported, but the transition from Java 8 to Java 11, and again the transition from 11 to 17, tends to cause breakages in this area, because the javadoc.jenkins.io site is somewhat of a special case. It started originally as a single Javadoc site for Jenkins core; we extended it to also be a Javadoc site for all plugins and for Jenkins components. But in order to do that, we had to do some link trickery, some link magic, in its interactions. So there are some complications hiding here. And I think it is managed as code, but it will need some research to go find it. Okay, so we'll ping Jesse — you have been assigned; I won't put that on the screen.
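As a side note on diagnosing the broken Taglib docs — a minimal sketch that simply confirms which documentation URLs answer with an error; the URL list is a placeholder, and whoever knows the javadoc.jenkins.io layout would feed it the real paths from the report:

```python
#!/usr/bin/env python3
"""Rough sketch: report which documentation URLs return HTTP errors, to give
a concrete list to whoever investigates the broken Taglib docs."""
import requests

# Placeholder URL: the real list would be collected from the issue report.
URLS = [
    "https://javadoc.jenkins.io/",
]

broken = []
for url in URLS:
    try:
        resp = requests.head(url, allow_redirects=True, timeout=15)
    except requests.RequestException as err:
        broken.append((url, f"error: {err}"))
        continue
    if resp.status_code >= 400:
        broken.append((url, resp.status_code))

for url, status in broken:
    print(f"{status}  {url}")
print(f"{len(broken)} broken link(s) out of {len(URLS)} checked")
```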
Yeah, just assign it to me. Let's not ping Jesse with this one; I think we've got much more valuable things to use his time on than this particular thing. Okay. And we have the twin sister request to have Javadoc for LTS versions. I assume it's the same area of how we manage a multi-version Javadoc website with the Maven site generation; I have no knowledge about how to do that. Yeah, I'm not entirely persuaded we want this, so I'll want some discussion on this one before we say yes, we should do it. Just because I don't know that I want people to — well, yeah, let's bring some discussion into this before we ever consider putting it into our backlog. Okay, so is there any action that we should take, except maybe starting the discussion on the mailing list? I think that's — yeah, it's worth a discussion on the mailing list, because that's probably the place where the developer list could tell us: do they actually need access to Javadoc for LTS versions? Is there enough difference between LTS and weekly for them to care about having published Javadoc for LTS versions? Okay, fair. It's just that I want to acknowledge both messages to Alex so he doesn't feel ignored. Right. That's the only reason why I'm pressing here. Okay, I will give him an answer — for this one, to tell him to start a discussion instead, because there is nothing actionable; we cannot help him right now. And for the other one, I will let you take care of it. Okay.

Were there other new issues or things from you? None for me. Okay. Just one mention: thanks, Hervé and Stéphane, for the huge work you did on Puppet, because I did some unplanned work which is foundational work to keep the Puppet framework updated and cleaned up, so we can operate efficiently. The fact that we were able to deliver multiple elements to production during the past two days is proof that we are growing in that area. The fact that Stéphane got started autonomously on Puppet means that we are clearly improving compared to where Olivier and I were one year ago. So thanks for your feedback — keep giving me feedback, you're not annoying me even if I'm grumpy. Liar. That's foundational work, really foundational work. And during the weekend I was ill, so I had some time to kill, and I was able to upgrade all the gems to their latest versions — except Puppet, kept at the same version as production. So it's like four years of dependency jumps, and I was able to make it work with the unit tests. We will have to do it on production now; that will be the next step. I don't want to hear that French people are lazy when I hear a French guy working during his illness to do that kind of thing.

So that's all for me, folks. Many, many thanks for your work and your help. Keep up that pace. For everyone going into long weekends, I hope you won't do the same as me — again, I hope you won't be ill; take care of yourselves. Is there anything else you want to share, folks? Take care with the weather too, because it will be very hot. I'm stopping the recording.