Okay, hello everyone. Welcome to the Jenkins Infrastructure Weekly Team Meeting. Today is the 14th of June 2022. As attendees we have myself, Damien Duportal, Hervé Le Meur, Mark Waite, Stéphane Merle, and Bruno Verachten — I always fail to remember your handle, sorry about that, I will look it up myself. First things first, announcements. The weekly release process went well this week, which means all the work that Basil and the team did last week is being reused. Good thing. I assume there is a checklist that should be run later today; just before the meeting, I started the job on trusted.ci to build a controller image for this release. So, going well. The checklist was followed, no errors this week. Second announcement: we have been facing, for at least two days now, a major incident on repo.jenkins-ci.org. This is the most important Artifactory instance, hosted by JFrog. It causes different kinds of issues: contributors are having trouble downloading dependencies when trying to build plugins, jobs are slowed down on ci.jenkins.io, but also all the release jobs, security jobs, all the content building — everything that involves a call to repo. As always, we contacted them, maybe a bit late this time. Yesterday we decided it was just intermittent: it wasn't blocked, it was still usable. Today it isn't. So we contacted them today as soon as we saw the problem had come back. Thanks, Stéphane and Hervé, for opening the status page issue and communicating with the users about that. That's all on the announcements from me. Do you have other announcements, folks? Okay, so let's get started. A lot of work has been done during the past iteration; I will try to go as fast as possible through the usual day-to-day maintenance tasks. Basil has been removed from some plugins that he doesn't maintain anymore — thanks, Tim, for that. We were able to install the latest version of the Kubernetes plugin on ci.jenkins.io, which features the fix for when you suddenly have to spawn a lot of pods at the same time. That was creating some locks and was slowing down, or sometimes completely locking, the cluster. That was one part of the big outage on ci.jenkins.io; we had already corrected the root cause, but that bug was amplifying the symptoms. So I hope we'll consume fewer resources when we have a BOM build. So far, no more issues. If you see anything on ci.jenkins.io — a lot of pods at once, agents that look stuck or locked, or anything like that — don't hesitate to mention that ticket or reopen it. That's a win for contributors; thanks very much for the work on that, and thanks, Vincent. HTTP permanent redirects: that one seems to be solved. It involved different changes across the different providers, but as far as I know it now works as expected. Is that correct? That's correct as far as I know. I just added one more pull request to www.jenkins.io that replaces jenkinsistheway.io with stories.jenkins.io for each resolvable URL. That way the only remaining broken URLs are jenkinsistheway.io ones. I'm working with Alyssa Tong and Gavin Mogan to try to find the content that is missing and that would resolve those links. Some of the content was intentionally deleted; some of the content may just be missing because we didn't get a copy. So I'll continue working with them, and I think that ticket being closed is the correct thing. So many thanks, Mark, for managing that during the conference.
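As an aside on the link rewriting just mentioned: checking which target URLs actually resolve before rewriting can be scripted. This is only a rough Groovy sketch with made-up paths, not the script that was actually used for the migration:

```groovy
// Hypothetical sketch: check which stories.jenkins.io paths resolve before
// rewriting jenkinsistheway.io links. The paths below are illustrative only.
import java.net.HttpURLConnection
import java.net.URL

def pathsToCheck = [
    '/user-story/example-story/',   // made-up example path
    '/another-user-story/',         // made-up example path
]

pathsToCheck.each { path ->
    def conn = (HttpURLConnection) new URL("https://stories.jenkins.io${path}").openConnection()
    conn.requestMethod = 'HEAD'          // we only need the status code, not the body
    conn.instanceFollowRedirects = true
    conn.connect()
    println "${path} -> HTTP ${conn.responseCode}"
    conn.disconnect()
}
```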
Thanks, Hervé, for the work you put into that, and thanks also to Tim and Alyssa for the other tasks you did on it. Next, the issue about last week's weekly release that failed: we had to burn two version numbers. This is more an audit of the issues that we hit. To summarize, we made too many changes at a time: we removed JDK 8 entirely from the build environment of that weekly, and we also upgraded the image to Ubuntu 22.04, which had a lot of side effects. The takeaway for the infra team is that part of the release build is done remotely on the machine pkg.jenkins.io — you know, that machine on AWS that also hosts the update center and that we can't upgrade anymore. The reason, as explained by Olivier, is that for one of the two package types, Debian or Red Hat, adding new packages and updating the index requires the files from the previous publication. Which means that even if you have the createrepo tool in a container running somewhere on the release environment, you need a way to retrieve the gigabytes of files already present on production, have them accessible locally, run the indexing on them, and then copy these files back. It's not that big a problem: we could have a caching filesystem on the release environment and rsync in on one side, then rsync out on the other side. But we have to do this in order to decouple the release process from that machine. For now Tim removed the createrepo tooling from the container since we don't use it there — so there is no false expectation — and we will add it back later. But that means that, as a contributor, you cannot build all the packages of Jenkins if you don't have access to that machine. Since the package build produces a signed package, you already couldn't build a fully signed package anyway, right, because the signing key is deeply privileged and we don't want to give it to anyone. Correct. By default, the release process has a dummy GPG key that everyone can access, which allows signing, but it's not recognized by the usual certificate chain for the packages. What you cannot do is, once you have generated a package, update a local repository index: you don't have all the tooling that we use for doing that, even from scratch. What could we do about that? It would mean being able to do it as part of the release process. There are solutions in that area — Tim is aware of them — but no action is required from us for now, because we focus on moving the update center off that machine first, and then we can start discussing the future of that machine. Do you have any other questions about last week's release? No, thank you very much to everyone who was involved in getting it done. Yep, that was a long-running process. Actually, I take it back: there is one open question that Daniel Beck just raised, related to the upcoming security release, which will be 2.346.1 / 2.332.4. Yes, it's about whether we switched from building with Java 8 to building with Java 11 — I think the answer is yes — and whether that affects or puts a risk on the security release he's trying to do. For the security release it shouldn't, in the sense that — so Daniel asked about that — what happened is that after the weekly release, I worked on backporting the changes we did on the weekly branch into pull requests on the release repository. There is one for the 2.332 line, and there is the twin sister for the next LTS line that should start in September.
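To make the packaging constraint above concrete — the host name, paths, and agent label here are illustrative assumptions, not the team's actual release tooling — a rough sketch of the rsync-in / createrepo / rsync-out idea could look like this:

```groovy
// Hypothetical sketch of the decoupling idea discussed above: sync the already
// published package tree down, regenerate the index locally, sync it back.
node('linux') {
    stage('Update RPM index off the pkg machine') {
        sh '''
            # 1. Bring the already-published packages into a local cache (illustrative path)
            rsync -avz pkg.jenkins.io:/srv/releases/redhat/ ./redhat-cache/

            # 2. Add the freshly built package and rebuild the repository metadata
            cp ./build/jenkins-*.rpm ./redhat-cache/
            createrepo --update ./redhat-cache/

            # 3. Push the updated tree (packages + repodata) back to production
            rsync -avz ./redhat-cache/ pkg.jenkins.io:/srv/releases/redhat/
        '''
    }
}
```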
So it's okay for the other one, as expected, because it comes from the same weekly, and if I'm correct, not that line but the other line should drop Java 8 by default. 2.346? Yes, but dropping Java 8 is not until the September release line, which will probably be 2.361-something. Okay, so we also have to roll back these changes there. That one is tricky. Okay, can I ask you, Mark, to open an issue on the release repository for that topic? Yes, will do. So what happened is that the current LTS line, 2.332 — the one Daniel is asking questions about — will have a security update too, I think. For that one we're ready. Okay. And so, in order to have the security release, we have to backport, or rather roll back, some changes: we pinned the image to the latest one that still contains both JDK 8 and JDK 11, which in turn means we had to add back that path, because it is the default on that machine. For the pods that are started to run the release, we need to specify the Java binary used to start the agents, because it must be JDK 11 like on the controller, while Java 8 stays the default otherwise. And I also had to roll back the OpenSSL changes we did for the Ubuntu 22.04 move, which embeds OpenSSL 3, while on that image it's still OpenSSL 1.1 or so. I assume I will have to do the same kind of rollback on the other LTS line. And we can ask Tim, Alex and me to pay a beer to the others, because the three of us approved my pull request even though it was broken. No, thank you very much for the work you've done; we just want to be sure that the security team understands as well. Thanks. Okay, so we've got more to do; I'll create that issue. We really rely on this issue so we don't forget these backports. Is that clear for everyone? No questions. Okay. Thanks, Hervé, for dropping the dependency on MG: we now use fully fledged Docker machines everywhere instead, so no more maintaining that part. Great job — that wasn't the easiest one. Thanks, Stéphane, for bootstrapping a new Terraform project for the Oracle Cloud. While I'm there: we now have two child tasks, one which we'll see later. There is one that I want to take, which will import the existing resources. Mark, we saw with Stéphane that we have three machines there. There is one with your name that I assume you are using for testing Jenkins on ARM systems. Do you know what the third virtual machine is for, do you remember? I suspect it's also a test machine that I use, but I'll happily connect in and double-check. Would you mind stopping it, and deleting it if it's not used anymore, or at least stopping it so it doesn't consume credits? Because with the update center that should migrate to that cloud, we will start to be in a situation where — yeah, not too many machines at the same time, I would say. Okay, yes. Unless you have a use case, and in that case... I'm quite confident it's being used, but I'm happy to switch it off. I believe I use it to check that we're portable to Oracle Linux, but that's a relatively low-risk check that I can shift elsewhere. No problem; otherwise we can just open a helpdesk issue and start documenting the presence of this machine, we'll import it into Terraform, and that will be part of the process. Okay. The deprecation warning in the packaging pipeline — I don't remember that at all. Oh yes, that's a minor change we did on the release build template. You can look at it, it's really minor.
VM agents failing to come online: thanks, Alex, for opening that one. There were issues ten days ago, and again last week, with the Azure virtual machines attached to ci.jenkins.io that are mostly used for the test harness of Jenkins core and some builds. These machines were completely blocked. Tim did a revert of a recent feature of the Azure VM Agents plugin and was also able to fine-tune the configuration of the plugin on all our instances, so that the Azure machines are immediately thrown away once a build has used them. That was already the case for EC2, but not for Azure. Now they all have the same behavior and we haven't seen any more issues. The reason is that the feature Tim had to roll back was for the case where, after a build has used a machine, you wait some time before stopping it, just in case another build comes along and wants to reuse it. Not only do we not want that — we want the machine recycled immediately, we don't want reusability — but in that case the bug was triggered when you were waiting a few minutes, which was creating some blocks in the garbage collection of the machines. So thanks, Tim, for that, and thanks, Stéphane, for the help in that area. One issue has been closed as not done; I don't remember why. The sign-off on web-based commits — Hervé, can you remind us? It was about requiring sign-off on commits. Commits done via the web UI could be automatically signed, but as Tim said, it can be activated but there is opposition to it. And as there is only one repository concerned, namely the helm-charts one, we didn't pursue this modification at the organization level. Okay, makes sense, thanks. No questions. Okay, last of the done tasks: the GSoC plugin health scoring system. As part of GSoC this year, there is one project about plugin health scoring. They were asking us to create a repository; expect upcoming resource requests to create the parts of the infrastructure needed to support that project in the future. Thanks for taking care of this one. Any questions? Okay. Time to check the work in progress, folks. First one: Stéphane and I, pairing but driven by Stéphane, are working on migrating the update center to another cloud. That involves creating a new machine with Terraform in the new Terraform project, with a data volume. We are working on how to copy the content between the machines, then that machine will get its own entry and we will split the traffic between both once it's ready for tests. And if everything goes well, we can move everything to Oracle. The reason is that we are spending between three and five K per month on bandwidth on AWS, mostly caused by that service. With the update center on Oracle, we should be below one K. So that's quite the saving. No blockers; we are digging into the Terraform Oracle provider, which is not the easiest one. Thanks to Stéphane for the work on that part. So is it okay to continue that task next iteration, Stéphane? Yes, of course. It's a priority given the need we have for money. DigitalOcean: I should meet their team later today. We have created a note on HackMD for that topic — can I ask you to add the HackMD link inside the issue? I forgot to do it yesterday, I'm sorry. We covered different options. If we keep the same burn rate that we had, that should be one K per month, so asking for a one-year sponsorship would mean 12 K to use on their cloud. We also considered alternatives to the Kubernetes cluster.
Either we could put limits on the cluster to reduce the amount, or — and that's the big one to bring to them — we could gain almost 400 per month if we were able to scale up and scale down to zero, which is not technically the case today. And then, based on the amount of money they are prepared to give, we could also set up mirrors for Jenkins managed by ourselves, because they have data centers in Singapore and in India, in Bangalore. That wouldn't cost too much, even with the bandwidth. We could also use dynamic ephemeral virtual machines: they only support Linux out of the box, but still, we could use them for all the Docker builds on infra.ci or the machines for trusted.ci. We could use totally different setups or clouds than ci.jenkins.io to split the cost. So we have different options with different levels of cost. We'll see how they feel about continuing. The proposal is to ask for the same sponsorship burn rate, a.k.a. one K per month. That's already a lot for a sponsor, and it's a bit more than what they proposed initially, so it's an increase for them. We might want to prepare to write more blog posts or things about their setup in return. I should have asked for an answer already; I didn't, so I'm going to send an email to their open source program. The person in charge of that program looks really, really busy. Also, sending an official email on the mailing list will allow me to share most of the information with the rest of the team, so we'll have something written that we can use in the future. So that one stays the same for the next iteration. Upgrade to Kubernetes 1.20: Stéphane, what is the status? I did update the kubectl client, and I started the upgrade on the DigitalOcean cluster; I'm right there. Is it done, or...? No, it's not done, it's a work in progress. Are you sure? I think we merged something on Terraform. Okay, can you take care of checking that issue? Let me ask: are you going to work on that topic next iteration, on one of the other clusters? Yes, I hope so. Okay. Auto-notify people based on service routing rules: Hervé, last iteration for you, you still have one week to continue working on it. As I said last week, we should remove it from the milestone. Okay. Of course, if you find some time, don't hesitate to... Oh, okay, I need to remove it. Okay, clearing the milestone. Thanks. Provide PowerShell on the Windows Docker image used by ACI. Yep. Nothing has been done, but I thought about it: I think we should test for the presence of the powershell or pwsh executable first. Okay. I think we should raise an issue on the plugin in charge of implementing the PowerShell step, because as of today there are two keywords on Windows: you have pwsh and powershell. As said, it could really be a great help to end users if Jenkins could detect which one to use, because you either have one or the other. And pwsh is for smaller distros or older versions of PowerShell? It's the opposite — pwsh should maybe even be put forward in the documentation everywhere. I don't know, I'm not sure, but I think pwsh is PowerShell Core, the latest generation of PowerShell, while the powershell keyword, the full PowerShell command, calls Windows PowerShell, version less than six. Okay, so definitely pwsh is not less than six: pwsh calls PowerShell 6 and above, powershell calls the versions below. Okay.
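To make the two keywords concrete, here is a minimal scripted-pipeline sketch for a Windows agent. The 'windows' label and the probing via `where` are assumptions for illustration, not our current shared-library code:

```groovy
// Hypothetical sketch of the pwsh/powershell distinction discussed above.
node('windows') {
    stage('Run a PowerShell snippet') {
        // Probe for PowerShell Core first; fall back to Windows PowerShell.
        boolean hasPwsh = bat(returnStatus: true, script: 'where pwsh') == 0

        if (hasPwsh) {
            // The pwsh step targets PowerShell Core (6+), e.g. pwsh.exe on nanoserver-based images
            pwsh 'Get-ChildItem Env:'
        } else {
            // The powershell step targets Windows PowerShell (< 6), e.g. powershell.exe on servercore
            powershell 'Get-ChildItem Env:'
        }
    }
}
```

Either step fails if its executable is missing from the image, which is exactly why detecting the right one matters on nanoserver-based images.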
So clearly a topic to open on the community side, because that's a feature of the pipeline plugins — really important — and there isn't any action required for the infrastructure. So if it's okay, we will close that issue here. I'm not sure there isn't anything to do. What problem would it solve? Not a problem, but in our pipeline library we always call the powershell command. Okay, and I think it might not be available on some of our machines. Okay, so it's working today, it's available, but that might disappear, if I understand your explanation. But that issue was about the Docker Windows image: PowerShell is present on that image, but it's called pwsh. Yes, but it works with the keyword? No — it doesn't work with the powershell keyword, you have to use pwsh in your pipeline if you want to call PowerShell on this Docker image. Okay. So can I ask you to add a comment at the end of the issue explaining the next action, meaning we should at least update our shared library to use pwsh, or bat, which most of the others are doing? The initial problem that raised that issue was something JC and James Nord were blocked on, and they now use bat instead of powershell, which solved it on their side; that's why the initial reason for that issue is not valid anymore. However, as Hervé explained, there is something to do for the infrastructure — yeah, we were searching for PowerShell. Okay, can you add the comment just to explain, and then someone can take it? Yep, sorry. Is it worthwhile having nanoserver-based images if they don't include PowerShell — should we just acknowledge we're not going to? They do include PowerShell, but it's not called powershell.exe, it's called pwsh.exe, and it's a greater version than the one on Windows Server Core, which is called powershell.exe. I see. Okay, thank you, thanks for the clarity. That was a tricky one, honestly. But that's where we can help, because if we hit that issue, some users in the Jenkins community will hit it soon too. So better to act now and raise the topic on the community side. Another topic, the last one currently in progress: evaluate retry to improve the stability of the builds. Thanks, Hervé, for raising this one. The core of that request is that Jesse has been working for some time on a pipeline behavior that retries a stage or a full pipeline when the pipeline fails because of an agent failure. The use case is spot instances: when we have a spot virtual machine, it can be reclaimed in less than one minute, and the agent machine is stopped and removed, which means there is no way to keep or restore the same state. The pipeline has to know the reason why it failed, and when we detect such an error, it should be retried automatically once, twice, eventually three times. Because you don't want developers having to watch for "oh, my build failed, the reason is the agent, so I trigger a new build manually" — not to mention the issues we have with plugin maintainers who don't have access to ci.jenkins.io, or have access but cannot click "Build Now". It's also a use case for some Kubernetes situations, when the agent pod has been removed. This set of features should solve the issue we have almost every time we redeploy a controller and restart it: if there are builds running, they fail to continue or restart for the same kind of reason. So Jesse's working on that; it's open source work, and a sketch of what it could look like from a pipeline follows below.
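For illustration only: this is roughly what that retry behavior could look like in a Jenkinsfile, assuming plugin versions that expose the new retry conditions (the feature was still going through pull requests at the time of the meeting, so step and condition names may differ). The agent label and build steps are made up:

```groovy
// Hypothetical sketch: re-run the body when it fails because the agent went away
// (e.g. a reclaimed spot VM or a deleted pod), instead of surfacing the failure.
retry(count: 3, conditions: [agent(), nonresumable()]) {
    node('spot-linux') {          // illustrative agent label
        checkout scm              // assumes a multibranch/SCM-backed job
        sh 'mvn -ntp verify'      // the actual build steps do not matter for the retry logic
    }
}
```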
And we started by deploying that work to infra.ci earlier this morning. The goal is to try more and more use cases, so we'll check with Jesse and that will continue next week. The target is ci.jenkins.io. We don't want to rush a release, because these are the pipeline plugins: if we release that and it breaks, it can be sensitive. So we proposed our help to Jesse: we start on infra.ci, and if it works, we can extend the area of testing. The challenge is that we need real-life use cases; we cannot test on some isolated environment that just runs a kind of minimal example pipeline that does not map to the reality of end users. That's why we still want to go to production really soon, and we will install the pipeline plugins as an incremental build — meaning from a pull request not released yet — on ci.jenkins.io at some point in time, because we need feedback on that core behavior and it could impact a lot of users. So right now we're using a version of the pipeline plugins that is under development, as a kind of beta test, on infra.ci? Exactly, you're understanding correctly. A nifty trick that Jesse and I discovered — we didn't know it was possible — is that in plugins.txt, instead of the version, you can specify a URL to a plugin, or an incrementals version; you have different ways. Awesome, that's really practical. So thanks a lot, Jesse and Hervé, for working on that part. Hervé, yes? He mentioned the "reasons" discussed there, but I don't know, it's in his last comments, I don't see them. This one, just under your mouse. Yep. I don't know, sorry. What are these reasons, and where do we find them? I propose we discuss that after the meeting because it's part of the topic. The goal now is to say: okay, we have this, which means the takeaway for all of us in the infra team is that we might see some failing builds or weird behavior when redeploying new versions of Jenkins or plugins; if we see that, we have to report these changes on that issue. Okay, thank you everyone. Yes. Okay, so one last step: new topics. We have quite a few, so that will be quite busy. Wait — I'm trying to sort the issues on the milestone, but it looks like I can't. First one — thanks, Hervé and Basil, for raising it. We were blocking the Maven updates because 3.8.5 was having issues. Basil confirmed that we don't have that issue anymore, so we should be able to include the new 3.8.6. However, Daniel asked for waiting until after the staging of the security release. That's why I quickly added a comment yesterday — I'm sorry for blocking that, because otherwise you did good and we would have proceeded as soon as possible. There is no emergency, but it's still important to do it soon. So I propose that we move that issue to the next-week milestone and wait for Daniel and Wadeck, or at least someone from the security team, to tell us that we can go for that update, just to be sure we don't put the security release in danger. Does that look good to everyone? That sounds good to me, and I can confirm I'm using Maven 3.8.6 in my cluster successfully. And when we finally do get to that point — the transition to Java 11 is for the weekly of June 28th, so one week after the LTS — that would probably be a good time, if not before, to switch to Maven 3.8.6. Cool, thanks for confirming. So I've put that issue, the Maven 3.8.6 one, on next week, because we will have to update it in different areas.
Another topic that came back: James opened an issue asking us to enable again something that existed years ago, where Jira is connected to GitHub, so there is a kind of automatic mechanism to reference Jira issues from pull requests. I don't remember exactly — I tend to be scared by Jira. Based on the discussion and a trip down memory lane — I asked Daniel, Tim and Olivier — it's an integration to be installed, and we should be able to do it. However, we might have an issue regarding the rate limit on the GitHub API. Rate limits on GitHub usually mean you need a GitHub App to benefit from version four of the API, but the GitHub App that integrates Jira with GitHub is only for Atlassian Cloud users, which we are not, and we cannot move our Jira instance to Atlassian Cloud because of LDAP, which is not supported on the cloud. So we are condemned to use the old method with a technical GitHub user that has write access everywhere. That's not ideal in terms of security, but we don't have any other solution. And as Tim said, we should have a dedicated account, because the one used for this years ago was the Jenkins infrastructure bot and it was shared with other usages, so we were quickly able to hit the rate limits. That's why Tim suggests having a dedicated account just for that integration, so its rate limit is used only for the Jira–GitHub link. So yeah, is there someone volunteering for that one? Should I do it? We can pair. It's a user request that will help a lot of developers — I understand project management could be eased for them. I have a silly question, but isn't that the same thing as GitHub issues? It's the same feature, but for Jira. OK, because GitHub issues are already integrated in GitHub. Yes, but Jira issues are not integrated with GitHub by default, and using Jira for Jenkins core is still recommended and will stay. Oh, that's for the core. OK, yeah — we did import our Jira issues into GitHub for the infra project, but Jenkins core, a lot of plugins, and security still use that Jira project. And so there is that dissonance between things in GitHub and things in Jira that will always be kind of a mess. Migrating that project would mean migrating the issue ID numbers — that's a lot of issues. Also, Hervé and I are in the camp with strong feelings for GitHub, and some people have strong feelings for Jira, and those camps will never reach a consensus. GitHub Issues isn't at the level of Jira; there are functions in Jira which you can't really reproduce. I don't mean to hurt feelings or start a fight between the Jira and GitHub sides; it's just that I didn't realize they were using Jira for core and plugins, that's all. The takeaway is that there are many different use cases, and Jira or GitHub issues alone cannot fulfill all of them. So: two tools for different cases, and we need both. Hence the integration request. Yeah, OK. So unless someone is interested, I've put it under my name; mention yourself or assign yourself if you're interested in doing so. Put me on it if you wish. Yep. Mark, you opened an issue about the IBM Z agent that was... That one you can assign to me, Damien, if you're willing to, or I'll assign it to myself, because I think it's one where I need to get the runbook updated so that we can do it. There is no reason to put it on anybody else: I interact with IBM, I just need to be sure that I describe how this works.
And then we can consider your suggestion. I'm not a Puppet expert — I know you suggested, gee, should we puppetize this thing? — and my initial goal was just to get it online as cheap and quick as possible, because it's a one-off and will probably always be a one-off, given how expensive it is that they donate a Z-Series to us at all. The thing is, once you have initial access as a single human, the advantage of having Puppet is that it makes changes auditable if we need to install something at a larger scale — if we want to configure Maven and Java following the rest of our standards — and it also allows everyone on the team to maintain that machine in the future if we need to. That's the reason. So if you want, we can try. I understand it wasn't done because there weren't any packages back in the day, but it sounds like Puppet works: some people were using Puppet on CentOS Linux on that kind of CPU, so I assume a gem install of the Puppet agent should work. I would be interested to try it. So in terms of relative priorities, I'm assuming this is much lower value than most of the other things you're working on in terms of automating or helping automate. Correct. Super, thanks for raising the issue. I've started the Ubuntu 22.04 upgrade campaign as a reaction to last week's weekly release. That means we will have some consequences — in particular OpenSSL and curl, among others, were updated — which means we might have issues on some of our machines, so we have to keep track of that. It should be interesting on some machines, but yeah, be careful. My proposal is that we don't start working on that yet: we have the Kubernetes upgrades, the update center to migrate, and clusters to finish bringing under Terraform management. Is there any voice not agreeing with that priority proposal? OK, I think that's all — that was already pretty quick. I might pick up some minor tasks from the back burner if I find time, but I don't see other high priorities. Do you? I do not. OK. I think getting ready for the upcoming releases — the security release on the 22nd and then the Java 11 transition on the 28th — should both be our key focus, to be sure we don't block anything on those. Absolutely. First priority: enabling the upcoming releases. Second priority: migrating the update center to Oracle so we can decrease our cost on AWS, right? OK, any questions, anything unclear? One point outside of that: the repo outage, and the paging we just broke. Yes — that's why I made an announcement about the outage. Right after this meeting we will open an issue, because the outage yesterday did not justify an issue, it was intermittent; today it does, so we'll do it. Stéphane and I tried to disable PagerDuty, since the system is down and there is nothing we can do about it, so paging the person on call made no sense. And while trying to disable what we understood would have been the correct element, PagerDuty is now alerting us with even more alerts. So we have to better understand how it works and fix it. That's part of a knowledge-sharing session we started last Friday about which component on Datadog is doing what. We spotted a lot of issues to be treated, but yeah, we're getting there. Thanks, Stéphane, good point, good reminder. You're welcome. As a general matter, I propose the following improvement.
If we have to open an issue on status.jenkins.io, we should all expect each other to open a helpdesk issue associated with it, with the link in the message. Sometimes you open the issue on the status page to communicate, especially on weekends or during the night, of course; but then, when we are back to working hours, the rest of the team, or the person on duty, should follow up on helpdesk. I think we all forget to create one quite regularly. I'm not pointing fingers and I'm not saying it's a bad thing; I'm just saying we could improve in the future, because we would have an audit log and we wouldn't forget elements such as this one, like Stéphane reminded us. And could we do something so that a helpdesk issue with the correct label automatically triggers a new status.jenkins.io issue? I don't know if you have something like that planned, but it could be awesome. So, Stéphane, can you open a helpdesk issue describing what you expect, and I will add a status.jenkins.io label just to test? Just describe the feature request — please write it down. Writing things down allows us to share the knowledge, because there are people who cannot attend, or who don't want to attend, but can still contribute. That's why we need to write. Oh, yes. All right. You can also look at my existing issue and comment on it, and we can turn it into a proper issue, right? Yeah. So, Damien, that brings a question for me: this System/390 thing — should I flag it somehow on status.jenkins.io? And likewise the PowerPC agent that's currently offline — should that be on status.jenkins.io, or is it of too questionable relevance to bother people with there? Do you have a sense of whether I should have updated status.jenkins.io to show that the agent is offline? Are there unique jobs running on it? Yes, well, yes, there are, but I modified the job definitions so that they're no longer dependent on it. So I would say: who will be bothered by these machines being offline except the five of us? No one. So if it's OK for you, then we don't need to bother for these two machines. The rule of thumb will be: we open an issue on status.jenkins.io when someone outside the infra team will be bothered. OK, good — I like that guidance, that's a good guiding principle, thank you. Is that a clear rule that everyone agrees with, or do you have concerns? I think that makes sense: status.jenkins.io is a way to share status with other people, so yeah, I like that. That's all for me. Are there other topics that we forgot, that you want to bring up and discuss? Cool. So now you have to stop speaking until next week then. Thanks, everyone, for the huge work you all did and for the help. Take care and see you next week.