Hello everyone, welcome to the Jenkins Public Infrastructure Meeting. We are the 7th of February 2023; today we have myself, Damien Duportal, Hervé Le Meur, Stéphane Merle, and Mark Waite. And Ufti, my dog, who is playing behind me — sorry for the noise. Okay. First of all, an announcement: the weekly has been released, version 2.390. The tag has been created; we need to check the Jenkins controller image, and then the next item will be checked. I believe you have a subject to bring, Mark. Yeah, so this announcement topic is actually an alert that we've got conversations happening right now. Atlassian announced a year or two ago that their business model was switching: they were no longer going to support self-hosted instances of their standard Jira product; they were going 100% cloud for their standard product. And we use a donated license of that standard product, hosted for us by the Linux Foundation at issues.jenkins.io. In February of 2024 they will conclude that transition by switching off: they will no longer support hosted instances of their standard product. However, we've learned that they have a product called Data Center. Data Center is a hostable product, and we've asked them: will they please donate a Data Center license to us, so that we can continue using Jira rather than making the switch to GitHub issues? The security team would very much prefer to stay with Jira because their processes are strongly coupled to it, and there are a number of core maintainers who would prefer to stay with Jira because Jira has specific features that make large-scale management of projects easier than GitHub issues does. So right now the effort is to try to persuade Atlassian to donate a Data Center license to us. As a board member I have the action item, and I'm actively communicating with Atlassian right now, asking them if they would please be willing to donate.
I don't know if they're willing to donate or not. They've seemed to indicate they are, but it needs special approval on their side; this is not a common thing for them. They correctly want people to transition to their cloud solution. We have a problem using their cloud solution because it is limited to not more than 30,000 users on an instance, and we have well over 100,000. Their cloud solution also requires that you use an Atlassian account, and if we did that for Jenkins it would mean that instead of two accounts to be a Jenkins contributor — a GitHub account and a jenkins.io account for repository upload — we would now have three accounts. And for me that would be sort of a tipping point where I would expect many contributors to say: no, let's just switch to GitHub issues. So, thus, the request to Atlassian is happening now: please grant us a Data Center license. If they don't grant us one, we then have to have the discussion within the Jenkins project about what we want to do next, whether it's changing to Jira Cloud, reducing the number of users and using Atlassian accounts, or changing to GitHub issues, whatever it is. It will be a change; Data Center is the easiest for us because it doesn't require that we do any real change. Any questions? — That's clear. One question about the hosting: I assume that if we get a license, then it will be the Linux Foundation's area to host it, or to migrate from the current instance to the other one? — The Linux Foundation has indicated that they are already hosting one or more other Data Center licenses, so this is not something that's unfamiliar to them. Now, I have not received final confirmation from them that they are willing to continue that hosting, but I think if we bring the Data Center license they will; I will check with them on that as well. — Okay, that's good news then.
At least it is a path forward that does not require a migration yet. We may still have to do a migration in the future, but for the moment we're not required to. Okay. Is there any other question, or anything unclear on that topic? Okay. Do you have another announcement? Yeah. Okay, so let's move forward with the upcoming calendar. First of all, the next weekly meeting will happen next Tuesday — I believe that will be the 14th of February, is that correct? Yes. So, yes, Valentine's Day; any volunteer? The next LTS will be 2.375.3 and it will happen tomorrow, so please don't break anything on the Kubernetes cluster of the infrastructure. Just a warning note about this one: last week we saw that it took almost two hours to be able to spin up a Windows container in order to build the MSI package. It wasn't a failure per se, because the release finally completed correctly without reaching any timeout, but that might be interesting to watch when the package build happens. As an order of magnitude: once the build is triggered, you have two to two and a half hours of build and tests, and then the packaging happens. So when you see that the release has started, that means two hours later we will have to start watching the packaging. I understand Hervé and Stéphane might have personal appointments on that day, so to avoid any pressure on you: is it okay, folks, if I take care of following up on this one? I will try to record and report what happens. Is that okay for you? Okay. Security advisories: let's see if there are any announcements. There are only two old ones, so no new security advisories are announced. Next major events: FOSDEM just happened last week, and as far as I can tell the next one will be Devoxx France in April 2023. As far as I can tell we will have a Birds of a Feather session about Jenkins there — is that correct? I don't think it has been proposed.
I don't think so — I've not seen any announcement that the proposal was accepted. — It has been accepted. — Oh, good, nice. So if you want to discuss with us about the infrastructure, or Jenkins in general, you're welcome to join during that event. Unfortunately, Bruno and Adrien's talk hasn't been accepted. Yeah, that was a tight selection; the selection rate was similar to KubeCon's: less than 10% of the submitted talks could be accepted, they only had slots for about 10%. A hard selection job, right? — Well, that means it'll be a great conference, that's wonderful. — Absolutely. Okay, can we proceed to the next step? So, the tasks that we were able to finish during the past milestone — the ones really done, not the ones we did not fix. We had two password-recovery requests; of course no answer back from the requesters, like every time, so closed. One request for access to a plugin, which was handled in the RPU (repository-permissions-updater). And we were able to remove a bunch of unused plugins. I wasn't able to close the last one, about the ACE editor, but that has since been done everywhere, so that should count as another task done. Thanks, Mark, for pointing out that the index.html page of the redhat-stable repository on pkg.jenkins.io did not show the latest version available. There was also the question of since when JDK 11 was supported for each release line; we never had any feedback from the user, so I took it on me to close the issue. And the Jira topic that we discussed: the fact that we had to renew the license, and everything you explained about the product. — Well, that license renewal was the catalyst for it, but ultimately... yeah, the license renewal is done, that task really is done; the February 2024 thing came out of it. — Yeah, that's important to detail; I was a bit too quick on this one, thanks. Okay, so now on to the currently open issues. First of all, great job, folks: we were able to enable the artifact caching proxy for every plugin last week.
We haven't been aware of any failure yet. There is an increase in the memory used on most of the instances, which shows an increase of activity, but we haven't reached the memory limit. It means that the current amount of cached data is entirely held in memory; due to that, we weren't able to see any I/O activity or anything that changed the reads and writes on the file system, because the cache can hold everything for now. There has been an additional change that we pushed — I think Hervé took care of that on Friday — and it's there on DigitalOcean and Azure, but not on AWS yet: we are caching artifacts for one month instead of one day. That should decrease the amount of requests going to JFrog. And as pointed out, the next step is the BOM. Because in the metrics that we gather every week from JFrog, we saw the outbound IPs of our Kubernetes clusters, and our Kubernetes clusters are mainly used to run plugin builds on pods, and BOM builds. So the BOM is the next, let's say, culprit, because that one is also downloading an insane amount of plugins. That is the work Hervé is working on: the goal is to reuse the same logic inside our code and use it for both the buildPlugin and the BOM pipelines. Please note that we have an issue in progress in the ACP area: the AWS cluster, when applying the latest configuration I just mentioned about the cache, is trying to reschedule the pods, but we hit a problem. In AWS you are allowed to have a multi-availability-zone cluster — not multi-region, but multi-availability-zone — so we have virtual machines in different zones, but the persistent volumes are bound to a single availability zone. So if you spin up a machine in another availability zone, it cannot access the volume. The autoscaler should be aware of that topology, but it's failing because of an issue in the EKS product itself. So we are working on fixing that; there are multiple solutions.
However, the simplest one, which should not create mayhem, will be to have three new node pools, one per availability zone, then migrate our two instances one after the other and clean up the old pool. The goal is to avoid breaking builds of people using that cluster. I believe we should be able to close the artifact caching proxy activation issue, is that correct? Okay, is that okay for you? So I'll take care of closing it, just updating the title to mention explicitly that it's for the buildPlugin function. We might or might not create another issue for other areas if needed. Is that okay for you? Okay. Is there anything else about the artifact caching proxy that I forgot, that you want to mention or ask? — Just serious congratulations on a very smooth production implementation. For me as a plugin maintainer it looked like it was absolutely flawless, thanks very much. I saw no failures, I saw no surprises, it just worked and kept right on working. — Nice team effort; it keeps working as expected. Next step: the BOM builds. On to the other issues: ensure all GitHub Actions versions are pinned and tracked. Initial work has been done by Stéphane, thanks for that: you were able to enable Dependabot and its updates. And thanks, Hervé, for catching that I had enabled Dependabot with a wrongly named file almost two years ago, so it never updated — thanks for fixing the typo. We discussed and agreed at different moments that before closing that issue, we should check and have an exhaustive list of every repository having a .github/workflows directory but no Dependabot configuration, and then exhaustively cover all these elements, deciding for each one: if the repository is archived, nothing to do, and so on. I've removed the archived repositories from this list; for all the others we just have to work through the list. Okay. So, Stéphane, is it okay for you to keep going on this one? Thanks, and thanks for extracting the list.
That's awesome. Do you mind sharing how you extracted the list? That will be useful in the future if we want to do it again. — Yes: I have a checkout of all the repositories on my computer, and I've done some find and grep on it. You can find the .github/workflows folders, find the .github/dependabot.yml files, and then I merged these two lists. — Do you mind adding a comment explaining the method? — I will; no YouTube video though, it's quite manual. — No problem at all, it's just knowledge sharing. Anything that might look easy for you might not be for others who will want to help on the infrastructure; that's why I'm asking for this one. Which leads me to a fun point: there might be a case — I don't think it will happen here, but I just want everyone to be aware — where, if any of the repositories on Hervé's machine are not up to date with their remote, someone might already have pushed a Dependabot configuration in the meantime. — I'm sure that won't happen in this case: I checked the freshness of these clones before building the list. — Okay, so the list was up to date as of five days ago. Okay, so if you are able to share the method you used to batch this... — I clicked on these links, I manually checked them before. — We need to put a camera over his shoulder to know exactly how he's working, that's the only way! If we all agree, it's okay; I just want you to share that knowledge, so anyone else could do it next time, even in a rush. Even if it's manual, that's not a problem, but stating the steps you did helps whoever works in that area. Very good job, awesome. Next: renew the signing certificate for Jenkins and document it. We said we would do that next iteration, after the LTS release and before any upcoming updates. Mark, is that still okay for you? — Yes. — So we should start working on that area Thursday. Olivier pointed out to me this weekend that there is already a discussion on the jenkins-infra/release repository.
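Stéphane's manual find-and-grep method could be sketched roughly like this — a hedged illustration only, assuming local clones sit directly under one parent directory (the real list was assembled by hand):

```shell
# Sketch of the manual method: list checked-out repositories that have a
# .github/workflows directory but no Dependabot configuration file.
# Assumes each repository is cloned directly under the given base directory.
find_missing_dependabot() {
    base="$1"
    for repo in "$base"/*/; do
        if [ -d "${repo}.github/workflows" ] \
           && [ ! -f "${repo}.github/dependabot.yml" ] \
           && [ ! -f "${repo}.github/dependabot.yaml" ]; then
            # Print only the repository name, one per line
            basename "$repo"
        fi
    done
}
```

Running it against the directory holding the clones prints the repositories still needing a Dependabot configuration, which is essentially the merged-lists result described above.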
So the runbook will point to that area, which is great. — I'm embarrassed to admit I didn't find it in my notes, and I'm really thrilled that Olivier pointed us to it. So we did document it the last time we did it, just not in the place I thought to look. We should be okay. — Just a confirmation: I understand that it's only the private key, so we don't have to change the Jenkins WAR like it would be done on a user's site? When they get the Jenkins WAR, whether the old or the new one, it should be signed, and no problem for them, since the intermediate authority should not change, right? — That's my understanding: this should be, as it was last time, transparent to Jenkins users. There was no disruption to them; they didn't perceive it. Now, I suspect that if you attempt to run an MSI installer from seven years ago, it will tell you that the signing on that MSI has expired, and that may be the same condition that will occur for MSIs done with our current certificate after we've transitioned and it has expired — when it expires in 45 days or whatever the expiration is, but I'm not even confident of that. It's important that we get this updated, but as far as the last time we did it, there was no disruption to any users. — That should be a breeze, then. Okay, so this one too will move to the next iteration. Next: the issue about jenkins.io pages that shouldn't be accessible and indexed anymore. We opened the issue but weren't able to work on it. Hervé, do you remember if we were able to finally clean up and do what the initial issue asked? — No, nothing has been done. — Okay. So if we can work on this, it would be nice to focus on updating the initial requirement from Daniel, removing whatever pages he mentioned, and then we can take care of the rest later. Is that okay for you? Is it okay for you to take this issue in the coming week, or do you want me or Stéphane to do it?
Removing the pages is okay, but yeah — I wouldn't have put this one in the next milestone at all. — Okay, I agree; I like not including it in the next milestone. This one's harmless, right? It's a longstanding, multi-year thing. I agree, so I will move it to the next milestone then. Next: the frequent PagerDuty alert, "disk space is below one gigabyte". That one is a recurring, harmless alert that we all keep getting since the beginning of the month. We've documented the content and the reason. The first step we have to take is to ensure that we switch all of our controllers, when using virtual machines — particularly Windows ones — to immediately delete the machine once the build has passed: we want ephemeral machines everywhere. Once we've done that, we'll see if the alerts keep coming; then we will have to search in detail which job is impacted, if any, and decide if we have to increase the default disk size of the build agents. The main hypothesis — though we haven't diagnosed it, and it's not fact-proven — is that we have multiple builds landing on the same Windows virtual machine used as an agent, because these machines usually wait 30 minutes before being trashed by most of the controllers, and the reasons for that behavior are not valid anymore. So we should switch to trashing the machines immediately; we shouldn't have any single build that tries to generate 80 gigabytes of data, not even the Docker builds. That's why this might be a quick win. Stéphane, are you okay to work on that with me? — Yes, we talked about that already, that's why I'm fast to answer yes. — Cool, is it okay for you to take it in the upcoming milestone? — Yep. — Okay. Next: the repo.jenkins.io realignment. I didn't do anything on that topic because we were focused on FOSDEM and the ACP. Mark, you had feedback from JFrog on the fact that they cannot get the user agent, as seen in the metrics logs that we receive from them — that might be annoying.
It looks like they are not able to do the redirection on the Jenkins WAR, which is annoying. — Well, that I wasn't sure of, so I've asked that question again to clarify. I'm still working a little bit on the hope that they have some way to do a per-file, per-artifact redirect, because if they do, we could stop that server in China by redirecting the WAR files to get.jenkins.io. It'd be a lot of work, but a doable amount of work, and jenkins.war is a big file, so it makes sense. But I don't know; they haven't answered conclusively yet, and I think their likely answer will be "no, we can't do that". We'll hope, we'll see. — We might have to raise that point to higher teams at JFrog, because that WAR is literally in the release repository, which is not a mirror repository. And they cannot block IPs, they can block user agents, but they cannot give us the user agents, so we cannot diagnose what kind of usage is behind it, as Basil pointed out. We cannot be sure: is it a curl, a wget, a badly configured Maven? We don't know. But it's not a mirror, so even enabling authentication, or enabling the ACP — every piece of work we are doing — won't have any impact on that one, which is still terabytes of data per month. Which means we will have to raise that topic to JFrog people quite quickly. — Yes; that makes this research into how to stop that Chinese machine from doing what it's doing even more important. Unfortunately, I've not been successful yet in all my attempts to find whoever administers that machine in China. — Yep. And about the logs: we received them just this morning, they sent them about four hours ago. I've not yet uploaded them to the shared location; I'll do that, but I have downloaded them, so the first phase is complete, I just have to upload them for further analysis. — I looked at last week's logs and didn't see any change — not a big change anyway — so I'll have to check this week's.
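As an illustration of the kind of log analysis discussed here — a hypothetical helper, not the actual script the team uses — one way to spot the heaviest downloaders is to sum the response bytes per client IP from a combined-format access log:

```shell
# Sum response bytes (field 10 of the combined log format) per client IP
# and print the top five consumers, largest first.
top_downloaders() {
    awk '{bytes[$1] += $10} END {for (ip in bytes) printf "%s %d\n", ip, bytes[ip]}' "$1" \
        | sort -k2,2 -rn | head -5
}
```

Against a month of logs, the first lines of the output would point straight at servers like the gaming company's machines mentioned above.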
Me too; I want to see if that large gaming company has reduced its traffic like they think they have. It won't be eliminated, because it was Tuesday or Wednesday before they implemented their fix, but I would love to see at least a diminished bandwidth demand on our repository from that particular gaming company's server. And actually it was multiple servers; we found later that there were other servers in there doing the same thing, just not as visibly, not as much. Any other question on that topic? Next one: the Terraform module for AWS EKS. We found a weird Terraform behavior in the way we are using it and in the EKS module, which we fixed Friday. So now the problems have shifted to the multi-availability-zone thing I mentioned earlier; once we've fixed that, we should be able to close the issue. At least the core thing — the EKS Terraform module being bumped and us not being able to use some of its elements — should be fixed: we shouldn't have any more issues with the labels and the load balancer, unless those issues are reintroduced by the multi-availability-zone fixes, which could be a possibility. I will report every finding and action on that issue. The question is priority, because it's unsure whether the AWS cluster for the ACP — which is still working — can be updated and maintained; it's working, but it's not caching for one month yet. So I'm keeping this issue in progress; that's my job for these two days. Next: uninstall the deprecated jQuery and ACE editor. We have to do an exhaustive check, but that should be deployed everywhere. Thanks to everyone involved in that, because everyone did a tiny task on it, so that's cool. And a question from Hervé: this is the last one, where I have to ask you again what this person can do about the bzt tool for their plugin. You told me we could add something in the POM, but I don't know what, and I didn't find it when looking.
So, assuming that person will start using ci.jenkins.io, for Linux builds at least, we already have Python 3 and virtualenv installed on our virtual machines and containers. The only remaining step is installing the pip package bzt, and what I suggested to Hervé was to communicate to that person that they could configure their pom.xml to hook into one of the preparation phases of Maven, using the Maven Exec plugin bound to that phase to run the command "python3 -m pip install bzt --upgrade". So each time the Maven command is run on the plugin, it will automatically try to install bzt. And that command could be enabled only when there is a CI=true environment variable — which Jenkins sets, as far as I can remember, or any other variable that Jenkins sets up — so when a developer of the plugin runs it locally, it won't try this. Though I suspect it could also be made portable for developers, if bzt is needed locally. So, we discussed that, and I forgot that you had asked me for help on pointing out the correct Maven element, Hervé. Someone with Maven knowledge needs to help in that area so we can give the correct information to the user. That's the status — is that correct? — Yes. — So I will have to brush up my Maven skills: I know how to hook into the test and verify phases, but I need to remember how to hook into earlier phases. I will try to point you to the documentation, if that's okay for you. — Thanks. — So this one could not have been solved by them switching to containers; it sounds like you've got a better solution than containers, because the agents can already do Python and virtualenv. Okay.
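The conditional idea described above — only installing bzt when running under CI — can be sketched as a shell step. This is a sketch only: the actual proposal is a Maven Exec plugin hook in the pom.xml, and whether Jenkins sets CI=true is stated from memory in the discussion, not verified:

```shell
# Install the bzt pip package only when a CI environment variable is set,
# so local developer runs are left untouched.
maybe_install_bzt() {
    if [ "${CI:-false}" = "true" ]; then
        echo "CI detected: installing bzt"
        # python3 -m pip install --upgrade bzt   # the actual install step
    else
        echo "not CI: skipping bzt install"
    fi
}
```

In the Maven setup discussed, the same guard would live in a profile activated by the environment variable, with the Exec plugin running the pip command during an early lifecycle phase.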
So is it okay for you to acknowledge the message, explaining to them the issue and the idea? Namely: either they could hook Maven into a container-run process, or — since we already have Python 3 and virtualenv — if they know how to hook into the Maven lifecycle, they should be able to run the command. Just give them the general idea and tell them we will spend some time finding the correct pointers for them, but if they already know, they can get started with this. Is that okay for you? — Yes. — Since this person is maintaining the plugin, I assume they have at least minimal Maven knowledge; maybe they don't, but at least we have to acknowledge and tell them that this should be okay. — Sounds good. — So let's keep this issue in the upcoming milestone, answer the person, and then depending on our available time we will work on it in this iteration or the next one. Okay for you? Okay, so these are the work-in-progress items. Now let's check if we had any new incoming issues in the past days. Oh, something opened 21 minutes ago by Tim. — Oh yes, this is one where Alexander Brandes has been working on a project to reclaim ownership, so that someone from the governance board has ownership of every mailing list out there. I believe this is one whose ownership we have not yet reclaimed, and I don't know who owns it; I sent a message to a few people I think might be involved. Oh, he says... okay, so Olivier found who the owners are. Good, so we just need to ask Olivier to give the ownership, or to add additional owners: Mark Waite and Alexander Brandes. If either of us is added, we will make the list read-only. — So we'll ask Olivier if he can add you and Alex. — Right, thank you. Yeah, sorry — I'm impressed: I don't know how to find who the owners are, but he found the owner, and all we need is for them to grant us permission. If they grant it to NotMyFault and to me, we've both done this task before.
So, noted: ownership to Mark and Alex. Okay. Tim also just added a comment on the issue about migrating updates.jenkins.io from AWS to another cloud; we had put that issue on hold because of other topics. He mentioned a build that was broken with the infamous HTTP/2 frame error, which is a consequence of the underlying hypervisor having network issues — a domino effect up to the Apache server. We should be able to use a more recent Apache server to solve that, but alas, we are stuck on the Linux distribution: if you try to upgrade the distribution to 22.04 on that machine, it will break and never reboot, so please don't do a dist-upgrade. That means once we've finished the current cluster migration, we'll have to go back to that area as the priority after the ACP and networking work. But thanks, Tim, for mentioning it. Do we have other new issues? I don't see any other new issues. Do you have other topics you want to discuss or bring up? — If I remember correctly, you already created the milestone for next week — sorry, the next milestone — so there are probably some to-dos in there now. — Yes; since Hervé pointed it out, I remembered. Let's check — you're right, it was done. Both of you pointed it out really early to me, so I took care of not moving everything over, and I'm sure some of you already corrected it. The only two issues in there are issues that we explicitly decided not to work on due to FOSDEM, so that should be okay; I just have to update today's milestone to this one. Good point — better safe than sorry. Any other topic you want to bring, or anything? — So, I just have a few notes; let me open them... these are my personal notes, I don't want to show them on the shared screen. I'm saying things about my colleagues, so I don't want them to read it.
Now, jokes aside, a few elements that I learned, discovered, or discussed during FOSDEM — I will share them in written form too. I had an excellent exchange with Carlos Sanchez, the initial author of the Kubernetes plugin; Carlos has been a Jenkins contributor for years and is now working at Adobe. We discussed building Docker images with Kaniko versus the new Docker BuildKit. I showed him what we were doing publicly on the platform, especially with the official Docker image, and I pointed out to him that Docker BuildKit is able to spin up ephemeral pods, so you can run the whole thing on Kubernetes from A to Z. The only open question was about the shared cache: it looks like Kaniko is able to share the cache across the cluster. But Carlos was also starting to seriously consider the pattern that we have. Instead of trying everything on Kubernetes — which starts to be a nightmare, especially in terms of permissions, because running a container engine with unprivileged processes is a nightmare and almost impossible without putting your security in grave danger — consider doing what we are doing: having a virtual machine when you need the "docker" label on Jenkins builds, so you can reuse a cache. He's trying different things, but the message is: good job, everyone involved in the official Jenkins image and infrastructure, because the results look impressive. As a data point, he remembers having to wait two hours to be able to build a Jenkins core image; today, even rebuilding the whole set of image definitions — thirty-something of them — we are more around 10 minutes than two hours. So he said great job to everyone involved in that. Also, a big belated thanks from Karl Heinz of the Maven project, who saw that we are almost always up to date with the latest Maven version.
He pointed me to two new elements that have been supported by other Jenkins developers. There's a new environment variable that can hold the options passed to Maven; that variable will land in the upcoming Maven 3.9, which is in the oven. So we should update it across the platform — but let's wait for the Jenkins core developers' feedback before doing that. — I saw some pull requests updating it in the jenkinsci repositories already, and I'm already using it: I've confirmed that the ten-plus plugins I maintain all compile successfully with it. So it's looking very, very good, and I'll paste a link to the issue that Jesse Glick created to help us track the use of that new Maven feature. What Jesse was suggesting is that there's some benefit here; I'll paste the link into the notes. I think it's the same thing you were discussing, Damien. — Absolutely. So my proposal is that we wait — not the upcoming milestone, but two weeks — before starting to update to Maven 3.9 everywhere, just to avoid a surprise rollout if we find anything, especially because in the backlog we have the ACP things, we have an LTS tomorrow, we have the code signing to change before the upcoming weekly, and we have a lot of alerts and elements to fix. So I propose not being too greedy on this one — is that okay for everyone? — That sounds good to me; I think it's very reasonable not to rush. 3.8.7 has been very reliable, it's worked great; we're not harming anyone by letting the infra continue to run it for a week or two. — Also, Karl Heinz was happy to look at the public infrastructure code to solve some of his own issues, especially around the pipeline library. I pointed him to the org scanning and showed him the pros and cons.
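The new Maven 3.9 feature mentioned above is the MAVEN_ARGS environment variable, which holds default command-line options. A minimal sketch of how an agent could use it — the exact flags here are illustrative, not the project's chosen set:

```shell
# Maven 3.9+ reads default CLI options from MAVEN_ARGS, so common flags can be
# set once on the agent instead of being repeated in every invocation.
export MAVEN_ARGS="--batch-mode --show-version"
# mvn clean verify   # on Maven >= 3.9 this behaves like:
#                    # mvn --batch-mode --show-version clean verify
```

This is the benefit Jesse's tracking issue is about: Jenkinsfiles and tooling stop hard-coding the same flags everywhere.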
He also started to try the Helm templates that we are using, especially for defining Job DSL with less effort, because Job DSL is a nightmare to write. So, really interesting discussions; Karl Heinz was impressed by the whole community's progress over the past two years. Congratulations to everyone again. We also met Tracy Miranda and Alfonso from Chainguard. Hervé, I understand that you might be interested in the whole Wolfi topic, so there will be one area of work there. We need to discuss it all by email to get started, but the idea would be to check with them: it looks like Chainguard builds different images for Jenkins, at least weekly, with JDK 17. So it could be interesting to explore with them the opportunity to create a new official image based on Wolfi, at least for the controller, but also to study the area of the agents: how does it work, is it easy to build? One of our challenges will be controlling the Git version inside these images. The benefit for the infrastructure would be to think about whether we could use Wolfi-based images for some of our containers running in production: the Jenkins controllers, but also containers like the Jenkins plugin site, NGINX, etc. The goal is to have the ability to enable SBOMs and track dependencies and security vulnerabilities without paying the complexity tax of "oh, let's build packages with that new way of doing things", which is absolutely hard to maintain. So the goal is to explore that area. Hervé, you said you might be interested in this topic — is that still the case? — Yes. — Okay, so we'll start the email discussion. No immediate action for the infra topics, but feel free to work on the contribution part, because it would benefit users to have a Wolfi-based controller image. And finally, a discussion around the CDF — but that's the Jira topic, already discussed at large. We also discovered that Toshiba and a particle accelerator are running a bunch of Jenkins controllers, working with Jenkins, and they are really happy. How about that!
So that was a nice surprise. Finally — and I will communicate with you about this — Lori, from the CDF but also a JFrog employee, accidentally added me to an NGINX mailing list because of someone else named Damien with the same spelling. That Damien works for NGINX, and they are eager to hear NGINX war stories or user stories. I told Lori — especially if it helps JFrog internally in their battle for decreasing bandwidth and costs — that we could write about how we use NGINX for the ACP, if they are interested. For me, that would be deferred to Hervé, if you are okay, because you did the main part of the work; I think you should be at the center of it. We could also discuss the other areas where we use NGINX, even if it's simple — I'm not sure that part would be that interesting, but the ACP use case could be nice to present. And that would help JFrog indirectly, because they would have public marketing messages saying that Jenkins is doing things with NGINX to decrease the bandwidth, so the message would not only be the raw data they see at the end. Same question: are you okay with that area? — Cool, so I will get started on the email. These were all my FOSDEM notes. Anything else, folks? No question? Okay, so see you next week, and please don't merge anything risky, to avoid breaking the LTS. Bye bye.