Hello everyone, welcome to the Jenkins infrastructure weekly meeting. Today is the 14th of November 2023. Around the virtual table we have myself, Damien Duportal, Hervé Le Meur, Stéphane Merle and Kevin Martens. Let's get started with the weekly release, 2.432. The WAR and the packages are out, so the release looks good. Kevin, can you confirm that the changelog is released? The changelog has not been merged yet, but I can take care of that in a second. It must be merged: WAR, packages, and Docker images. Something new in that version is that we now provide Windows container images for the controller with JDK 17, and a bunch of new tags. Great work, Hervé; it looks like it works really well. Looks like you've added additional release notes on the Docker image part; I believe we can communicate this on the official changelog now. Another breaking change is that we have removed the CentOS 7 container image, as that operating system has long been deprecated, so for the Docker images it has been removed. Is there any question or comment on the weekly release? OK, that means we can proceed with weekly.ci.jenkins.io and then ci.jenkins.io. On the announcements, we have an LTS tomorrow. Oh. So please don't break the infrastructure tomorrow, folks. The LTS Windows container images: let me add that as an additional release note. Oh, sorry, I removed the "additional". It's hard for me; it's part of the game. The CentOS 7 container removal as well, I guess. Yeah, that's all, I don't have other announcements. Do you have some, folks? Nope. OK, so let's continue with the upcoming calendar. Next week, 21 November, we'll have a new weekly as usual. Tomorrow, the LTS 2.426.1: Mark Waite is the release lead, so if you have any question, please check with Mark; he will coordinate every element of the checklist. Any questions so far? Let's check on the mailing list for Jenkins advisories: we don't have an advisory announced publicly, so none. And a reminder of the next major events where you could cross paths with Jenkins contributors: we'll have a DevOps event in London on 5 December 2023, and the first week of February, I believe it's the 2nd and 3rd of February, in Brussels, we will have FOSDEM. Some of us should be there, so don't hesitate to comment. It's the 3rd and 4th. 3 and 4? Yeah, thanks. I believe on the 2nd there will be a Jenkins contributor summit in Brussels, prior to the first day. A Friday, I think, yes. Today I can't type properly. Are there other major events or comments, or can we proceed to the operational tasks? Nope. OK, so let's go. Step one, the tasks we were able to finish during the past milestone. First of all... oh, it's not the same order; I will take them in no particular priority order. We detected that ci.jenkins.io wasn't able to spin up ephemeral agents when the VMs were Linux ARM64 machines. We fixed that earlier today with the help of Stéphane. We saw that, despite the official specifications on the Azure website, these machines cannot ask for more than 100 gigabytes for their disk when we use an ephemeral disk. It's due to a missing feature in the Azure VM Agents plugin, where there are two kinds of OS disk placement: the ephemeral disk requested when creating the machine can be a system disk with everything within, which is what we have by default for Windows machines when asking for ephemeral storage. But in the case of the Linux ARM64 agents, on that particular hardware, we have limits. Theoretically we could get 150 gigabytes of temp storage, but the ephemeral storage allocated for the system disk is limited to 100 gigabytes, and the Jenkins plugin only allows us to specify the OS disk. So in the particular case of ARM64, that's what happened. We've decreased the size to 100 gigabytes, and that was also an opportunity for us to decrease costs, because we don't pay for ephemeral disks. However, we also detected that the trusted.ci Windows VMs and every kind of VM agent on infra.ci weren't using an ephemeral operating system disk. That means we were paying for disks while we had local NVMe storage available. We have changed this. That's not a lot, but we can still expect 100 to 200 dollars of savings monthly. So thanks for the help. Any question on this topic? Please note that by using the same kind of instance everywhere, the probability of hitting the Azure quota for that specific type of instance has increased. We didn't work on the quota yet because we will have to work on a new Azure subscription that will take care of that part, so no worries on this one. But it's worth mentioning, just in case we hit the quota earlier than expected.
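For the record, a minimal sketch of the sizing constraint we ran into. The per-placement caps below are illustrative assumptions based on what we observed on that SKU, not values returned by the Azure API, and this is not the plugin's actual code:

```python
# Hypothetical sketch: the azure-vm-agents plugin only lets us size the OS
# disk, so on SKUs where the ephemeral OS disk is capped we must stay under
# the cap. The figures are illustrative, not Azure's real limits.

EPHEMERAL_OS_DISK_CAP_GB = {
    # Assumed caps per placement for the ARM64 SKU we use (illustrative).
    "ResourceDisk": 100,  # ephemeral OS disk carved out of the local temp disk
    "CacheDisk": 0,       # no usable cache disk on this hardware
}

def validate_os_disk_request(placement: str, requested_gb: int) -> int:
    """Return a disk size that the VM creation call will accept."""
    cap = EPHEMERAL_OS_DISK_CAP_GB.get(placement, 0)
    if cap == 0:
        raise ValueError(f"ephemeral OS disk not supported on {placement!r}")
    if requested_gb > cap:
        # This mirrors what we did: lower the template down to the cap.
        return cap
    return requested_gb

print(validate_os_disk_request("ResourceDisk", 150))  # -> 100
```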
I've closed the issue about pull requests that keep building on ci.jenkins.io after being merged on Jenkins core. It's hard to reproduce and we weren't able to get a reproduction case, but most probably it was caused by the garbage collection, the mechanism that detects that a given pull request or pipeline job on a multibranch pipeline needs to be deleted. That part, when used with AWS S3 artifact storage, can tend to delay the moment when Jenkins starts to delete the builds. That could absolutely map to what Alex described, meaning 10 to 15 minutes for the system to pass and clean up the builds: you have to wait 10 to 15 minutes for the build to be aborted automatically as per the settings we have. Another working angle is that maybe we hit the GitHub API rate limit. That would have the same symptoms, because when you merge a pull request, there is a webhook to Jenkins and Jenkins starts scanning the repository; if it hits the GitHub API rate limit, it waits until the threshold is reset again, waiting before deleting because it cannot watch and see the deletion of the branch reference. I've closed it as not able to reproduce. Of course, we can reopen at any moment if we have a way to reproduce it or see it in action. Is there any question on this topic?
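As background on that second angle, here is a minimal sketch of the "wait until the rate limit resets" behaviour, using GitHub's real /rate_limit endpoint. The branch-source machinery does the equivalent internally in Java, so treat this as an illustration of the symptom, not the actual mechanism:

```python
# Minimal sketch: how long would a repository scan have to wait once the
# unauthenticated GitHub API quota is exhausted?
import json
import time
import urllib.request

def seconds_until_github_reset() -> float:
    with urllib.request.urlopen("https://api.github.com/rate_limit") as resp:
        core = json.load(resp)["resources"]["core"]
    if core["remaining"] > 0:
        return 0.0  # quota left: the scan can run immediately
    # Quota exhausted: the deleted branch reference won't be noticed
    # before the reset timestamp, hence the delayed build deletion.
    return max(0.0, core["reset"] - time.time())

print(f"would wait {seconds_until_github_reset():.0f}s before re-scanning")
```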
OK, next issue: the mirror status link on get.jenkins.io returned a 404 error. In fact, the status.html file was only present in two locations: on the get.jenkins.io reference file system, and on archives.jenkins.io. That file was a leftover of the former mirror system, named MirrorBrain. It used a mechanism named mirmon, which was used to monitor all the mirrors of a given MirrorBrain instance. Alas, that monitoring does not exist with mirrorbits. So that was a more than three-year-old file, and I removed it and updated the index.html so that the link is not shown anymore, avoiding the error. You can see the fourth bullet has disappeared, and with all these changes it clearly did not reappear. It means that file is alone there: we don't know where it was generated, but it's clearly not generated by the core releases. So, problem solved. Questions? How did you edit the index? I connected to one of the get.jenkins.io containers in the cluster. The mirrorbits one has read-write access while the Apache one doesn't, and mirrorbits scans that file system as its reference. So I went there, changed the file, and done. OK, so this isn't generated content? No, as I said two minutes ago, no. Otherwise we would have seen it updated, and I wasn't able to find a template. Maybe it's generated by something, but that something is hidden somewhere, in a cron job or a hidden script. And a second question: how about putting a link to status.jenkins.io? Sorry, your sound was cut, so I didn't hear it. How about putting a link to status.jenkins.io in that place? The goal of that page was to show the status of the mirrors; that's not what status.jenkins.io does. So that's why. Does it make sense? Yeah, not exactly: if we notice a problem in that system, it could go on status.jenkins.io, so it would be better than nothing. Not really, it's a status of the mirrors. Again, I disagree, because the title was "mirror status", that was literally the title of the link, and we don't have a way to monitor our mirrors today. Unless we had a dashboard on status.jenkins.io with a specific subpage, in which case it would make sense; but here, I don't see the need for that. That was something really old, so just to avoid people trying to check mirror status, that page should disappear in the long term. But those are different things: status.jenkins.io is about the platform, while the status.html here is about monitoring the mirrors, which are two different things for end users, at least. Does it answer your question, or do I have a sound problem? Mark mentioned that that page could move to jenkins.io in the future. That could be another solution: removing the index.html page and adding an HTTP redirect to somewhere on jenkins.io where we would have a page that says "here is the thing", and that would be a proper place to link to status.jenkins.io. Right now, editing an index.html file which is not generated makes me really scared. Any question? No. Next topic? Sorry? No, thanks. OK. Next task, and it was quite the task: we were able to finish the upgrade to Kubernetes 1.26, so we can start thinking about working on 1.27. One of the major elements compared to last week is that we had to move resources, because we were stuck on Azure: the locks we had added on the public IPs during the previous upgrade were preventing Kubernetes from upgrading, with an immediate and fast failure. For privatek8s, we experimented with deleting the locks, upgrading the cluster, and letting Terraform recreate the locks. But then we realized, with Stéphane and Hervé, that this might not be the long-term solution. With the notes that Hervé and the team took on the previous upgrade, we were able to better understand the solution: move the public IPs to another resource group, where we can have the lock without preventing the cluster from upgrading or managing its resources. It's just changing the resource groups: one with the resources we manage and eventually lock, and one that the AKS cluster manages by itself. Upgrading the cluster means not having that second resource group locked. The good news is that moving a public IP from one resource group to another does not shut down the service and doesn't change the public IP. Lucky for us. So we were able to move these public IPs and then perform the upgrade. Only one tiny piece of feedback: when you move Azure resources from one resource group to another, you have to wait. That was the most difficult part, being patient. I tried to move the three public IPs at the same time; let's say the system is eventually consistent. No, no: they lock the destination resource group, so if an operation is running on it, it's locked. So of course, it's consistent, one by one. Yeah, and it waited 10 minutes each before telling us, in the middle of an upgrade. We also had to update annotations. These annotations are recent in the Kubernetes AKS story: on a Service of type LoadBalancer, we can now specify the resource group and the public IP name instead of specifying the IPv4 or IPv6 address. That's way more efficient and recommended by Microsoft, because instead of the controller having to search for a resource with that IP associated, you have one level of indirection: it directly says the resource is there, let's use it. That's more efficient, and it avoids confusion when there are two IPs, one v4 and one v6, for the same load balancer, which is now what we do. So yeah, that's all. Any question? The logo is nice. Thanks Hervé for being the guardian of the proper logo. Any question?
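For illustration, this is roughly what such a Service looks like. The annotation keys are the ones documented for the Azure cloud controller, but the resource group and IP names here are made up, so treat it as a sketch rather than our actual manifest:

```python
# Sketch of the Service change: reference the public IP by resource group
# and name instead of hardcoding the address. Names are hypothetical.
import json

def lb_service(name: str, resource_group: str, pip_name: str) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Service",
        "metadata": {
            "name": name,
            "annotations": {
                # Resource group that now holds the (locked) public IPs.
                "service.beta.kubernetes.io/azure-load-balancer-resource-group": resource_group,
                # Lookup by name: one level of indirection, no IP scan.
                "service.beta.kubernetes.io/azure-pip-name": pip_name,
            },
        },
        "spec": {"type": "LoadBalancer", "ports": [{"port": 443}]},
    }

print(json.dumps(lb_service("public-nginx", "publick8s-ips", "publick8s-ipv4"), indent=2))
```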
Okay, work in progress, by order of priority: the update center. Hervé, are you OK to report, or do you want me to do it for this one? I can report on the crawler part. Yeah, let's proceed. So the crawler job is no longer a freestyle job, so we can use a Jenkinsfile. We had to add some configuration to the agent provisioning to be able to use it on the agent used by each job; that's the permanent agent of trusted.ci. The pull request is ready for the synchronization of the folder contents generated by this crawler job into the file share, like it's done for the update center folder on the machine hosting updates.jenkins.io. We have to try it to validate it, and then we'll be able to continue the process. OK, so that means we need to pair on this one, is that OK? Yes. OK, Stéphane, is that OK for you, since you were the last one working on that part, except that last Thursday, Friday and yesterday you were off? OK. So I'm removing Stéphane and adding back Hervé; it's a handover every two weeks. On my side, I've started working on fine-tuning the ingress. It's related to the topic we had about the status.html file, because mirrorbits tries to serve some files, but what's the difference between index.html and any other .html file that we have? In the case of the update center, it looks like every .html, .json and .txt file should be served by the mirrors, because they are copied by rsync, the AWS CLI, or azcopy to the mirrors. The only exception is the .htaccess files, of course, because they're not URLs: they're internal to Apache, not visible to the end user. So we don't have to care about those, except for not copying them to the mirrors. And we have the /index.html page that you can see on azure.updates.jenkins.io. That page is the only HTML file not copied to the mirrors. I believe that page should also move to jenkins.io at some point in time, and it is generated by the update center with HTML templating. So the next step will be to fine-tune the ingress to send everything ending in .html, .json or .txt to mirrorbits, except /index.html, of course, and the root path /. Is there any question on this? Does it make sense? It makes sense, yes. So, as written, the next steps, the crawler and the rest of the ingress fine-tuning, will be done by Hervé.
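As a rough model of that routing rule, here is a sketch of the intent, not the actual ingress resource:

```python
# Anything ending in .html/.json/.txt goes to mirrorbits (those files are
# copied to the mirrors), except the site root and /index.html, which only
# exist on the primary. .htaccess is never a URL, so it simply must not be
# copied to the mirrors in the first place.

MIRRORED_SUFFIXES = (".html", ".json", ".txt")

def served_by_mirrorbits(path: str) -> bool:
    if path in ("/", "/index.html"):
        return False  # generated page, only present on updates.jenkins.io
    return path.endswith(MIRRORED_SUFFIXES)

assert served_by_mirrorbits("/update-center.json")
assert served_by_mirrorbits("/stable/update-center.actual.json")
assert not served_by_mirrorbits("/index.html")
assert not served_by_mirrorbits("/")
print("routing predicate behaves as described")
```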
Both of us will have to do functional tests on azure.updates.jenkins.io. We'll trigger one update of the update center manually. The goal for us is to spin up ephemeral controllers, connect them, and try to build a Docker image with jenkins-plugin-cli using that update center instead of the default one. If it works functionally, we will have to work on the two update-center pull requests that were started, make sure we have a review from Daniel Beck, and get it into production to keep our mirrors updated. Once these two are finished, we can work in parallel on writing a JEP to explain and show our proof of concept, the pros and cons, and why we ended up with that technical solution, which will act as a reference guide, plus a bit of performance testing to be sure we understand the limits of the system. Any question? OK, so Hervé, you will work on this during the next milestone. Is that OK for you? OK, that will be the top priority. Thanks. Next topic: ARM64. What's the status, Hervé, and can you report, to hand the work over to Stéphane for the next milestone? It's the last item of the list in the main comment from Stéphane; you can click it, there is a link in the issue body. Let's stick to this one for now. plugin-site-issues had a problem with its image: the build was stuck. I restarted the build and the image is now ready. For the plugin site back end, the plugin-site API, I've moved the build process to infra.ci.jenkins.io and built an ARM64 image. So now we can proceed with the migration of the plugin site components, as plugin-site-issues is a component of the plugin site too, for me at least. So I'm in sync with the migration; it shouldn't be an issue. I've tested the plugin-site API on my Apple Silicon machine, with ARM64, and retrieved the categories from my localhost, so it looks good.
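A rough equivalent of that smoke test. The local port and the exact path and response shape are assumptions for illustration; adjust them to the instance you actually started:

```python
# Query a locally running plugin-site API build and check that categories
# come back. Port, path, and response shape are assumed, not confirmed.
import json
import urllib.request

def fetch_categories(base_url: str = "http://localhost:8080") -> list:
    with urllib.request.urlopen(f"{base_url}/categories") as resp:
        payload = json.load(resp)
    # Assumed response shape: {"categories": [...]}.
    return payload.get("categories", [])

if __name__ == "__main__":
    categories = fetch_categories()
    print(f"ARM64 build answered with {len(categories)} categories")
```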
The next steps will be announcing this and migrating them, and finally we can quickly migrate the ci.jenkins.io controller to ARM64. That will be all for this first migration part, I think. Cool, nice work. So is that OK for both of you to take care of the deployment, either pairing or letting Stéphane validate the pull requests and follow up? That will act as a handover until you can focus on the crawler. Is that OK for you, Hervé? Yes, sure. Stéphane, are you up to the challenge? Oh yes, perfect. Hervé did all the work. OK, so that means that in order to close that issue, we have three services to migrate to ARM64, and then we're done. Most of the work has been done; it's mostly Helm charts, Kubernetes management, and following the deployments. OK, so I'm assigning the issue to Stéphane. Stéphane, you will have to plan the date and open a status.jenkins.io announcement: there should be no incident, however, you never know. Yes, I've learned that. I'm wondering if you could even add it under the plugin site component on status.jenkins.io. Yep, makes sense. I didn't get that, sorry? plugin-site-issues is an API which is running on the cluster, but it's used only by the plugin site. So the three remaining services to migrate are the plugin site front end, the plugin-site API back end, and plugin-site-issues. I think you can announce the migration of all of them in a single announcement. Yeah, I agree. Thank you. Cool, thanks folks. I propose, Stéphane, that once this is done, we start writing a new issue for the next ARM64 migrations, the ones that aren't part of publick8s. Either you do it on your own or, if you want, we can co-write it. Yes, great. So the definition of done is having a new issue for the next steps, which we can add not to this week's milestone, but next week's. OK for you? Yes, perfect. Cool, thanks. Hervé, your turn: the new repository under jenkins-infra for the Jenkins contributor spotlight. Can you give us an update? No real news since last week, except that I'll prioritize this issue a bit more this week. OK. Oh, yes, one thing: I added branch protection on the main branch, so pull requests now require a review before being merged. Cool, thanks. Can you not forget to add a comment, just to be sure that it's written down, because I saw the messages in IRC. I wanted to add it later, when I add the required status checks, but I can add it in the issue. Yeah, just to be sure we don't forget to trace this. OK, I've added the issue to the next milestone. Now, Stéphane, your turn to talk about the Packer images, the all-in-one template, particularly focused on the goss test harness. Can you report? Yes, I'm currently moving all the rest... not "rest", sorry, I'm missing the English word. Remaining? The remaining, thank you, Damien: the remaining checks, the sanity checks that are right now in the shell script, are moving into the goss files, checking either the version, or at least exit code 0 for the tools where we don't pin the version. The next step, probably this afternoon or tomorrow morning, is to update updatecli to keep tracking the versions in those goss files. It's going well, I think, and the step after that will be factoring the goss files into one for Linux, one for Windows, and one for the common tools. And that's all. So I believe you will continue working on this next milestone. Yes, please. OK. May I ask you, after this meeting, to add a comment on the issue describing what has been done since the last comment, even if it's a summary of the issue, and even if you didn't do all the work, since, as I mentioned, we had to install azcopy and pause your work. Just a summary: what has been done, what is being done, the work in progress, and what needs to be done. That will act as an audit work log for us. Is that OK for you? Yes, of course. Thanks. You will be the first one helped by having written it. I know, I know.
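The goss checks themselves are YAML files; as a rough Python equivalent of the two kinds of assertion being migrated from the shell scripts (tool names and the pinned version are illustrative):

```python
# Exact version match for pinned tools, plain "exit code 0" for unpinned
# ones: the semantics of the checks moving into the goss files.
import subprocess

def check_tool(command: list[str], expected_version: str | None = None) -> bool:
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        return False  # sanity check: the tool must at least run
    if expected_version is not None:
        return expected_version in result.stdout  # version-pinned tool
    return True

print(check_tool(["git", "--version"]))            # unpinned: exit code only
print(check_tool(["git", "--version"], "2.43.0"))  # pinned: version must match
```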
OK, next issue: the Chinese website. That's a topic for Mark and Kevin. I need to find a way to add Kevin here, sorry, because I can't find a way to assign you, but we know you're there. We did a meeting last week about that topic, so I believe Mark and Kevin, you should be autonomous on that part unless you ask us for help. Is that correct as a high-level status? Yes. Cool. So Kevin, can you give us a quick summary of what that issue is about, just to share knowledge with Stéphane and Hervé? Yeah, so Mark and I met with Damien twice, and the three of us met together once. Damien explained the structure and back end of it, so that we understand where all of this traffic is coming from and where it's being directed. We've gone over the ingress setup and how everything is routed, and we've gone over possible solutions and how they can be combined. So at this point, Mark and I have been going through it, and I've been installing k3d and trying to get the Helm chart working so that I can do that end of it and test it. And then, yeah, at this point we're trying to get that all squared away so that we can determine the best selection of options to resolve this. Rewriting the path seems like a really solid option. We could also remove the redirection that's happening in the first place. So some kind of combination of these things will most likely happen, but we have to get to that point. Cool, thanks. Same as I asked Stéphane and Hervé: once you reach a state where you have a global view of the different solutions and eventually select one, may I ask you to add a comment on that issue describing the "what" before we jump to implementation? The goal is to have a self-audit log, for you at first, and for the team: in six months, when all of us wonder why we did things that way, we will have an audit log on that issue. Does it make sense, and is that OK for you? Yeah, no problem at all, I can definitely put some more info in there. I just saw that, so it didn't come across in my mind; sorry about that, I missed it. OK, I'll do it now. No problem, there weren't any expectations. That's the kind of exercise the three of us are trying to put in place, just for the sake of sharing knowledge here. Is that OK for you? Don't worry, it's hard for everyone, except Damien, who does it quite well, but it's hard for everyone. That's awesome. Yeah, no worries, thanks. So I'm adding this to the next milestone. Is that OK for everyone? Thanks for the report. Two new issues landed on that milestone. The first one: updates.jenkins.io is not accessible via IPv6, which is true. The current virtual machine doesn't have any IPv6 network address. As I said in the message, we are waiting for the migration of that service, the data-center work, to the mirrors, which already have IPv6. So the proposal here is that we don't spend time tinkering with the existing virtual machine, adding a new network interface, rebooting it, setting it up, and eventually breaking the network for a few minutes, compared to the effort of focusing on migrating to the new system, after which we won't have to care anymore. Is that OK for everyone, that we skip a short-term solution and add this to the backlog? Yes, yes. OK. So I'm removing the milestone here. We can... sorry? I was about to say we could comment on this issue to make a... yeah, you already did. OK. Yep, already done.
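A quick way to verify the complaint in the issue, using nothing beyond the standard resolver: ask for an IPv6 (AAAA) address of the host. An empty result means it is unreachable over IPv6, as described above.

```python
# Check whether a host resolves to any IPv6 address.
import socket

def has_ipv6(host: str) -> bool:
    try:
        return bool(socket.getaddrinfo(host, 443, socket.AF_INET6))
    except socket.gaierror:
        return False  # no AAAA record resolvable

for host in ("updates.jenkins.io", "azure.updates.jenkins.io"):
    print(host, "IPv6:", has_ipv6(host))
```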
And finally, Zbynek reported failures on infra.ci, and only on pull requests on infra.ci, for the plugin site, with the following error message. The failure doesn't happen on ci.jenkins.io, which uses a different Jenkinsfile and a different environment for the builds, and it doesn't happen on the master branch on infra.ci. There are just a few changes between master and the pull request, and I don't understand the problem. I tried to check the Gatsby.js documentation, and I need help: I don't want to spend any more minutes on that technical stack. Nothing makes sense, I don't understand anything, and I'm not at ease with that part. So I need help to understand why the heck that thing is trying to write outside the user's directory; it makes no sense to me. The documentation says everything should be done in .cache at the root of the repository, so I don't understand that attempted copy. It could be a plugin or something outside the Gatsby scope, but I don't know. So any help here is very welcome. A short-term solution to unblock our friends working on that part, who might not want to spend too much time on it, would be to merge the Jenkinsfiles used by infra.ci and ci.jenkins.io into a single one, so we would always build the same way. The short-term advantage is that it unblocks them: if it works on ci.jenkins.io, it will work the same on infra.ci, since we have the same machines. The problem with this is that it uses a Node.js Docker container to run the steps, instead of reusing the Node.js version we have in the all-in-one image. Today on infra.ci we build inside Linux containers, so that means we would have to use virtual machines for this: one step back. The rate of builds and pull requests is fewer than 10 or 15 per quarter, so that's OK in terms of billing; it will just be a bit slower. So my proposal is to wait until we have finished the LTS; if we don't have feedback from Alex, Zbynek or Kevin by then, we will merge to unblock them. Is that OK for everyone, unless someone wants to deep-dive on this? OK. So I'm removing myself here for now, and I'm adding it to the next milestone, because that's a real problem blocking them from delivering new features to the plugins.jenkins.io website. Any question? No question for me. Cool, so we can close the milestone. Now, do we have new issues? Mark, you opened a new issue about the snakeyaml statistics. Can you give us a summary, and what is the action expected from us? Yeah, I think this one is probably best to just assign to me. Thanks to the work of Zbynek Konecny, who identified the root problem, and you can also clear the triage label. The root problem is that the version string used by the snakeyaml API plugin embeds a dash, and that dash needs to stay relevant; but we use the dash in many other places to indicate the end of the relevant part of the version string, and therefore this choice of "it's still relevant after the dash" means something different. So I need to work on this, and I suspect it's mostly working with the maintainers of the snakeyaml plugin to propose a slightly different version number sequence for this plugin: instead of a dash, use another character. And thanks to Zbynek for doing the detailed look at what's going on, because he then proposed a fix; but when he proposed the fix, it tainted every other plugin's statistics with things that were completely unrelated. So it's a very obvious way: fix the problem, show that the fix is disastrous, and then take the fix out again. Any question on that topic, or clarification? Yes, I love Hervé's idea; please don't be shy, explain why you want an emoji instead of a dash as a separator. So, we tried making a tag with an emoji, and it works correctly: git and the forges accept them. So, could you eventually use an emoji as a separator for releases? Hervé, there's a part of me that says you're a terrible person for even suggesting such a thing. Yeah, you should reconsider your life choices for having suggested it. Again, I imagine the parsers for that, and the size of the character, and, yeah, everything that goes with that. Yeah, there are two types of people in the world: people who know how to parse Unicode characters, and buffer overflow errors.
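An illustration of the collision Mark described: statistics code that trims a version string at the first dash is fine for schemes where nothing meaningful follows the dash, but the snakeyaml plugin's scheme keeps meaningful content after it, so trimming destroys it. The version literals here are only examples:

```python
# Naive rule used elsewhere in the statistics: everything after a dash is
# irrelevant and can be trimmed away.

def trimmed_for_stats(version: str) -> str:
    return version.split("-", 1)[0]

print(trimmed_for_stats("2.426.1-rc"))       # "2.426.1" -> trimming is harmless
print(trimmed_for_stats("2.0-121.vabc123"))  # "2.0"     -> plugin release lost
```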
OK, so I've assigned Mark to that issue and added the milestone. So we can continue looking at the new issues. I've created three new ones that are already part of the new milestone. The first one is, and will be, important. We have checked and double-checked with Stéphane earlier today the amount of credit left on DigitalOcean. Initially, we wanted to check the outbound bandwidth that archives.jenkins.io and the builds we run on DigitalOcean are consuming. We have almost two months of credit left at the current rate, which is good news. But that means we need to contact DigitalOcean as soon as possible: first to summarize what we did and how we used their service, as they asked, particularly mentioning that we moved our Kubernetes to a high-availability control plane and that we started using outbound bandwidth with archives.jenkins.io; and then to ask them to renew, and eventually increase or decrease, the sponsorship. So for us, that means a bit of preparatory work for an email; I will ask you for a review before sending it. And if they decline, or don't want to continue with us, that means we will have to prepare to shut everything down in December and find solutions to host these services. Is there any question on this topic? Oh, thank you for doing that, and sincere thanks to DigitalOcean for their contribution. It's been an amazing journey. I hope they will renew for us, I really do. I think there's no reason they shouldn't renew for us, but if they were not to, we should still be deeply grateful for what they've done for us. They have been amazing. Absolutely. And just for the sake of numbers: we saw how many gigabytes are going out from the whole DigitalOcean platform that we use. We don't know the repartition between archives.jenkins.io and our builds, which send some data to ci.jenkins.io or AWS. But it's a nice 20 to 21 terabytes per month outgoing. – I was about to say: not gigabytes? – Terabytes, yes. Exactly. Just checking you were following. Nice. – So yeah, that's a lot of data, and it's included in the droplet billing, so it's awesome. I mean, for all their clients, it's really great. It's because they allow us 25 terabytes of data transfer per month, free. Awesome. – I am wondering if that limit is specific to us. – Yes. By default, it's 500 gigabytes outgoing per month for free, and then you pay 0.01 dollar per gigabyte. But in our case, they charge nothing. I don't know if it's because of the open-source sponsorship program, or because the outbound bandwidth measurement and billing are new this year and they keep them in beta for some users. Not sure why. But it's a good thing, apparently.
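Back-of-the-envelope, with the numbers just quoted (20 to 21 TB a month outgoing, 500 GB free by default, then $0.01 per GB):

```python
# Rough value of the waived bandwidth, using the figures from the meeting.
FREE_GB = 500
PRICE_PER_GB = 0.01

for tb_out in (20, 21):
    billable_gb = max(0, tb_out * 1000 - FREE_GB)
    print(f"{tb_out} TB/month -> ${billable_gb * PRICE_PER_GB:,.0f} of bandwidth waived")
# -> roughly $195 to $205 per month that we are not paying
```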
Second new issue: we have a few credits that have been granted by Azure as sponsorship. And after a long discussion with the Jenkins board, we have confirmed that it's too risky to pay the current Azure bills with these credits, because that would involve changing some legal terms and also migrating the subscription from one kind to another. Technically, they say it's OK, but that means reconfiguring everything related to Azure Active Directory: every token, every credential of the technical users, every permission would need to be redone. There might also be some hidden risks on the storage accounts. So my proposal to the board has been to not use the credits for paying the existing subscription, but to start working immediately on moving the ephemeral workloads to a new secondary subscription that will be paid with these credits. I've put down a few ideas already. We have two to start with: the ATH and the BOM builds. The ATH, because right now it's running virtual machines inside Azure; that's not a lot of money, but that will allow us to split the ATH virtual machines from the usual plugin builds bill, so we could have an exact cost of how much the ATH costs. And the proposal for the BOM is to run an experiment where the BOM builds run inside virtual machines instead of containers, as today. Eventually, we could also add an AKS cluster inside that subscription to run the BOM builds instead of AWS, or to move elements there if DigitalOcean doesn't renew, for instance. So we need to set up that subscription, which is under my name with the credits for now. I need to add everyone as co-admin, and then we should start adding that management to the Terraform project so we can create what is required to host these virtual machines and eventually clusters. – Any question, clarification? – Well, there is a question from Michel Martinot that arrived during this meeting that you'll get to answer, in terms of how we handle it. And there's a question from a person at Microsoft on that thread that hints to me that maybe they've got a better way to do it. If they've got a better way, we're open to a better way, but I think your approach here is exactly the right thing to do. This feels like the right thing to do, based on all that we know about the risks of the alternatives. – Yeah, I see the email. I believe both of them haven't read my previous email; that's why they're asking these questions, so yeah. But thanks, a good pointer. – Well, and probably, if they did read it, they did not comprehend it. That's what I saw in the reading of my mail message: they asked back a confirming question, "does that mean this?". So I'll give them a response to mine as well: yes, that's what I meant. – Absolutely. I might have only sent the email to the board and a few persons, that might be the explanation. Or I could have been too technical in my answer. So I'm going to answer them. But yeah, the answer is: let's get started with this, and it's a priority, because the goal is to decrease the November Azure bill. – Any question? One last item: the Scaleway sponsorship. I've edited the title of an old issue which was related to hosting the Kubernetes cluster on Scaleway for ci.jenkins.io. So, Stéphane, I'm going to assign this to... oh, no, I misclicked. Since you're in contact with people at Scaleway, the goal is to track it. – Add Bruno also, because he also has a contact and an email thread; the two of us have contacts there. – OK. I'm assigning this and you will take care of it. – I will do it. OK. – So the goal is to start the discussion with them, to see how much and in which form; we had a comment from Hervé about it earlier. So even if it's only one or two machines, that could be an alternative for a mirror, for a data center, or for moving archives.jenkins.io, I don't know. But yeah, the goal is to apply to the open-source program and see the next steps. I've added that issue because, being open, it will help us track the work on that part. Is that OK for you, Stéphane? – Yes, it's perfect. Thank you. – Cool. I don't have new issues. I don't think we have something on the backlog to look at.
We have one item left, the cyber bits one. Eventually, Hervé, you wanted to delay that issue. Is it still in the backlog, or do you want to add it for this week? How do you feel? – I don't know. – OK, so let's not add it; "I don't know" means no for me. No pressure, no stress. OK, I don't see other issues. Do you have another topic you want to bring up, either for the milestone or for discussion? – OK, then. I'll stop my screen share and stop the recording. For the people watching, see you next week. Bye-bye. – Bye-bye.