Okay, hello everyone, welcome to the Jenkins weekly infrastructure team meeting. We are the 22nd of August. Around the table we have Mark Waite, myself Damien Duportal, Stéphane Merle and Kevin Martens. Let's get started with the announcements. We were able to successfully release the weekly core 2.419 last week, no problem. Today, however, we did not successfully release version 2.420: we had a failure during the test phase. During the test phase? Yes, I haven't looked at it yet. Ideally we should relaunch it. I don't know if it's a consequence of using JDK 17 on the agent. I doubt it. Maybe the memory management: I haven't looked at the Datadog metrics of the pods. There are CPU and memory metrics; maybe the memory usage is different. Got to check. If it is a JDK 17 issue, resource limits. Or maybe it's a timeout, as Alex said. Because we don't have that error in the main build on ci.jenkins.io, so there is something different. Let's restart the build and see. Any question? Do you have other announcements? Feel free. No? Okay.

Upcoming calendar: next week will be 2.421, on the 29th of August, is that correct? We have a new LTS line that should be released tomorrow. Oh, and the security release: we did have a security release last week, and everything went fine as far as I can tell. We don't have a new security release publicly announced yet, so nothing here for us.

I have another announcement. We had to rebuild the Jenkins container images for all Linux variants, for the three past releases and the last weekly. It looks like we had a bug and an old tag of the container image was rebuilt, which had the side effect of overriding some of the usual tags, such as lts, alpine, lts-alpine, etc. Some versions were rebuilt and redeployed, which is really annoying. Since we now provide different operating systems and architectures than we used to with that old version, we decided to rebuild the whole set for each version to avoid any risk of, let's say, if you use AlmaLinux that's okay, but if you use Debian you get an old version. The consequence is that the checksums of these images have changed. Thanks to Mark and Kevin, we started a blog post to announce that to end users, but we need to update the blog post slightly because it's not only Alpine.

Well, and I think we've got users asking: could we also republish older weeklies to repair it? We had one requesting: 2.419 is the current, the most recent weekly; is it feasible for us to republish all the way several weeks back, or is that just infeasible? Technically yes, but we would have to do it in the right order, and that means republishing today's weekly last, because each build republishes the latest tag, saying "this is the currently built one, which is the latest." Oh, right, okay. So the answer is no for the older weeklies; they can use either 2.419 or the upcoming 2.420. And is there any way for us to discard a republished version from Docker Hub? I assume no; they treat it as "once you've published it, it's visible forever." Could we eventually modify the pipeline to not push latest under certain conditions? Clearly no; the only outcome of trying that would be way more mayhem.
Trust me, we simplified it as much as we could. That would have been a good idea if this publication were a usual image, but the controller image is not like the agent image: it's way more complicated, because it has five dimensions. That's why we publish a given release atomically. So I'm sorry for the end users, but I would be interested in the use case and the reason why they cannot use the latest weekly version; since it's updated once a week, they would only be a couple of weeklies behind. I mean, given we have a new LTS tomorrow, and given the number of the upcoming LTS, I would suggest they wait for Thursday and either switch to the new LTS or switch to the latest weekly. I think that is the correct answer: it may make them uncomfortable, but even so, we simply don't have a reasonable facility, in the time we have, to go back and republish those outdated versions.

Yep, so that means a given tag is atomic across OS and platform and must be rebuilt in the proper order to ensure latest... or at least to ensure the latest weekly is always rebuilt last. Exactly. Okay. That's a problem, and we still need to find a solution to protect ourselves from this scenario. About your question, can we discard: we can purely and simply delete the tags on Docker Hub, but that means the version will be gone. Yeah, and that I think is also a mistake, right? That would create a different class of mayhem. So yes, we could, but in either case the problem we have is best solved by users doing what we recommend anyway, which is to upgrade to the latest weekly. If you're on the weekly line, you should be upgrading to the latest weekly. Exactly. Yes, good.

Maybe we could publish the tags as immutable, so they cannot be overridden, for that kind of problem. The Docker registry does not allow that. It might eventually be possible on Artifactory, but even on Artifactory I'm not sure; it may only work for Maven artifacts. In any case, the Docker registry does not allow locking a given tag. And the thing is that we have a lot of restrictions here, but this is a place where any restriction only relies on Jenkins behaving properly, and this is really a Jenkins bug or something really weird. I will add messages, because Tim had proposals to protect ourselves even more than today. Right now we are not publishing the master branch, so that's a good thing. Now the problem is the old tags: we set up the job to not build tags older than three days, and still they were built, so I might be missing something, but that's really, really weird. As for Tim's suggestion, I wasn't able to find the multibranch job traits he was mentioning. There is a trait to filter branches or pull requests by name, and there is a field for filtering tags, but I wasn't able to find that option or that field on trusted.ci, so it might come from a plugin. I'm reasonably confident it is a plugin, because branch filtering traits are intentionally split into many small plugins. So the last option would be protecting ourselves at the pipeline level: we have the environment variable with the tag, so we could fail the pipeline. No, that one won't work, because the old tag's checkout doesn't contain the pipeline change. So that's a nightmare.
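For reference, the "don't build tags older than three days" rule discussed above boils down to a small age check on the tag being built. Below is a minimal sketch in Python, not the actual trusted.ci job configuration; the tag name being passed in by the job is an assumption, and it carries the caveat raised above that a checkout of an old tag would not contain such a change anyway.

```python
# Sketch of the "skip tags older than N days" guard discussed above.
# Hypothetical helper, not the actual job configuration.
import subprocess
import sys
import time

MAX_AGE_DAYS = 3  # assumption: same threshold as the multibranch job filter

def tag_age_days(tag: str) -> float:
    # Committer timestamp (unix epoch) of the commit the tag points to.
    out = subprocess.check_output(["git", "log", "-1", "--format=%ct", tag], text=True)
    return (time.time() - int(out.strip())) / 86400

if __name__ == "__main__":
    tag = sys.argv[1]
    age = tag_age_days(tag)
    if age > MAX_AGE_DAYS:
        print(f"Refusing to publish {tag}: {age:.1f} days old (older than {MAX_AGE_DAYS} days)")
        sys.exit(1)
    print(f"{tag} is {age:.1f} days old, proceeding")
```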
I would advise against removing the tags; it's the same kind of thing as removing Docker tags or removing tags from GitHub, especially when they are associated with a GitHub release. Right. Let's admit that we've got a problem and try not to make it any worse. I'm a bit sad about this; it's really not something we can trust. Yeah, we'll see what we can do, and I will report next week on that topic. Is that okay for everyone? Yes.

So, next major event: the Jenkins elections start in September. As a reminder, the nomination of candidates is a period of a few weeks where you have to nominate candidates for the different positions. So it's not only you who goes and says "I want to be..." — oh, self-nomination is allowed. Exactly, self-nomination is allowed. But the crucial thing for me is that the infrastructure officer will be up for consideration as part of this election; we elect new officers every year. Likewise the documentation officer. So Kevin and Damien, you are welcome to continue and nominate yourselves, or the rest of us will happily nominate you. But if you say "I'm not sure I want to do that," then you need to go looking for somebody who will take your place as a candidate. Don't tell my boss, but I will consider nominating myself again. Well, your boss will probably nominate you one way or the other, but only after checking with you. I will not check with him, I will just push him in. Don't forget to nominate candidates for every post, including yourself if you feel like it; that's the sign of a healthy community. That's not easy to say for me, as the current officer trying to stay officer, but yes, it's important to have people presenting the project with ideas and changes if needed. And in any case, don't forget, during the second period, between the 18th of September and November, to register yourself as a voter. The condition, the threshold: I suppose it's the same as last year, Mark? Yes, it is. You need to be a Jenkins contributor, and everyone who is on this call is a Jenkins contributor. Everyone. So please don't forget to register, and don't forget to vote; that's really the healthiness of the community. On the 11th of December we will have the results. Any questions, things too hard, things I forgot, things unclear on the topic of the Jenkins elections? No? Okay.

So let's continue. I take it back, we've got one more major event: Java 21 support. Java 21 arrives in our Jenkins containers today. And tomorrow. And tomorrow, right. Now, it's still an early access version, so "support" is too strong a term, right? Java 21 early access. How about "will be available in Jenkins"? Yeah, as early access. Good, perfect. It's expected to be available today and tomorrow, and we encourage people to test drive it; that's great news. As a reminder, for the infrastructure to start testing it on controllers in production, I would prefer to wait for the initial GA release, because there is no CVE process in place for it right now, so running this publicly might be a bit risky. Right, we absolutely want to wait until Java 21 is released, that's for sure. We'll wait for the official JDK 21 in October. September. September, yes. Before trying it on controllers. But of course we already have JDK 21 early access available, because we were able to build these images. Yes, nice one. Okay.

So let's start with the work we were able to finish during the last milestone. This was a two-week milestone.
A lot of tiny tasks, but not a lot of people available at the same time, so overall we had the workload of about one week.

Let's start with "Unable to release plugin: 401". That one was opened today. A contributor tried too early to release their plugin using the automated CD process, while their token wasn't created yet by the repository permission updater. So we explained and told them to wait three hours. No feedback yet, but everything looked good: they had a token, it was valid, I tested the token myself. It was just a matter of timing; they tried ten minutes too early. It's not the first help desk issue from this user. I've also added an issue in the repository permission updater to suggest adding an automatic comment on merge requests with an explanation and the timing. Good idea. I have a link; I'm ready to add a message in RPU to let the user know about the three-hour time window. That's really a good idea that will help, because it's not the first time and it's not really clear: there is a lot of explanation in this repository, a lot of things to follow. Absolutely, I think you're absolutely right. It's clear for us because we use it and maintain it, but for newcomers that will help avoid setting false expectations. Thanks, Hervé. Something else on that topic? Okay.

So next topic: switch to JDK 17 for the Jenkins controller and agent processes. It's done. We are using JDK 17 for all the JVM processes involved in running Jenkins. The release today was executed on an agent running JDK 17 but still using JDK 11 for building Jenkins; it's a development tool versus an execution tool. Haven't seen any error yet. Nothing else to say, except that we have a bunch of different agent definitions everywhere. Last time we were able to cut that in half; this year we were able to decrease that sprawl by 20%. I hope that for JDK 21 we will have a centralized location, at least one for Linux and one for Windows. So yeah, it's a minor personal failure: I was hoping we could finish the all-in-one image before JDK 17. We couldn't, for a lot of good reasons, so I'm just a bit sad, it was challenging, but everything is working well. I hope we will see operational benefits on the memory usage, but that has to be measured, so let's wait one week and see the results. Any question? Okay.

Next, a repository of a plugin that was archived because the product behind it was shut down two years ago, plus the Maven incrementals commands; so that one was not related to infrastructure at all. However, about the instructions, it's the same thing as for RPU, and that's a note for Mark and for Kevin: the Maven incrementals command instructions look good on paper, but there is still a bug. Not only will fixing the bug help, but the plugin documentation could also be improved by saying "if you have an error, look at the detailed instructions here" — just a link and an explanation — because in that case the user carefully followed the instructions from jenkins.io and still had an issue, because their pom.xml was missing something since they had played with it. That wasn't an easy one, but it was an opportunity to dig into how Maven works with its plugins. Might be a minor update. Any question?

JDK 21 support for developers to preview and try. Thanks, Stéphane. Notes, as pointed out by Tim, on the jenkins.io / Docker side.
Tim, based on the work done by Bruno and Stéphane on locating the proper Temurin 21 binaries, was able to find some hidden releases which are in fact official Temurin preview releases, with a fixed name on the release and the tag name, which will allow automating and tracking version updates until the final version is out. So Stéphane, that means, as we checked together, that right now for the infra we are using the nightly builds, while the preview Docker image will use the weekly builds; that's the only difference. We might want to put our hands on this one if we have time, but it's not a mandatory topic. In any case I'm not opening an issue; if we have time we'll do it, and at the official release we will have to do something anyway. Can we use the weekly builds with a fixed tag name, allowing tracking with updatecli? Any question on JDK 21? Nice job, Stéphane, because you had a lot of places to check for this one. A lot of Puppet manifests.

Not able to log in to Artifactory: that's the problem where some users need a manual change of their setup on Artifactory in order to use the LDAP password instead of whatever local password they have. The root cause is still not fixed; I will come back on that topic, it's still open, but for that particular user it was fixed. Right now it's still handled manually, case by case.

CD releases failing with an invalid token on trusted.ci.jenkins.io. That one happened last week, as far as I remember. It's because the administration token used by the repository permission updater job, which allows it to connect to the JFrog API to create and reset tokens and permissions on Artifactory — that top-level admin token — had expired. So of course the RPU build was failing with a 401, because it wasn't presenting a valid token. A new token was generated once we understood. I've updated the calendar, because the infra team wasn't in charge of that token last time: it was a token valid for three years, created before any of us was working here, by Daniel and Olivier, and they didn't have the calendar. Now it's on the calendar, it's only valid for one year, and I've added plenty of notifications. Also, as I showed Stéphane on trusted.ci, I've set the token ID on the credential, because it has an ID to map it to the administration side in Artifactory, and the expiration date is also on the Jenkins credential for that token on trusted. That's for you folks, so you can understand it if it fails in one year. Any question?

Finally, a leftover of the last Kubernetes upgrades: the administrative user used by the infra team to install Helm charts and applications on our clusters. That technical user is now defined as code on the six clusters. That means it's no longer just a script on my machine each time we recreate a cluster from scratch. Any question or comments on this one? Okay. We had three issues closed as usual with no answer back from the user.

Now let's jump to the big topics. The first one, and the top priority for now, is "Assess Artifactory bandwidth reduction options". Mark, can you give us a heads-up on this one, so I won't be the only one talking? Yeah, so on the Artifactory bandwidth reduction, we've got agreement from JFrog that if we can successfully stop mirroring Maven Central, that will be sufficient in terms of their expectations from us to reduce our bandwidth use. And the reason they were willing to do that is because our analysis showed that this accounts for the majority of the misuse of the mirror. And since it's the majority, they said that's fine.
It's also a change that we believe we can make without requiring new releases of POMs, without requiring new releases of plugins, without requiring changes to 2000-plus GitHub repositories. So we like that they were willing to compromise, and we're very, very grateful for that. We have more work to do on the assessment: we did an initial brownout, we found positive results in the ways we'd hoped, but also some surprising messages that need more investigation. We will report our latest results to JFrog in a meeting on Thursday or Friday, and so Damien and I both have some preparation to do to be sure we're ready for that meeting with JFrog. Thanks, Mark. Was there anything I missed, Damien? No, it's still the same; we did not have time to spend on that one either, so I hope we will have time on Thursday to prepare for Friday. Any question? Is it clear for folks? Okay, then let's continue.

A new issue that I've added is around the public cluster. We had failures on javadoc.jenkins.io and another website, I don't remember which one. In fact, whether we use Intel or ARM for these services, they all have the same problem: we don't define any anti-affinity, which means that in some scenarios all the replicas of a given service are scheduled on the same machine. Sometimes they land on different machines, sometimes on the same one. It's something that has existed for months — it has always existed, in fact — because our Helm charts don't take that into account. And what happened is that during a regular update process I wasn't careful, because I assumed the services were replicated, so I triggered the operating system updates for security fixes. That resulted in the service being completely down for the time the new machine took to start. It was only one or two minutes, but it was caught by the security team, because they were testing things last week ahead of their security advisory. So this issue has been opened to track that: it means we will have to update chart by chart to add anti-affinity, which should have the effect of spreading at least the ARM-based pods across two machines at the same time.

Okay, Damien, I need to be sure I understood what you just said. So today's configuration may be running a single pod to serve javadoc.jenkins.io or something like that? They are running two or more pods. Oh, they are already running two or more, but all these pods could be on one machine. Right. Okay, so we're already using two pods; it's just that sometimes the two pods may be on the same physical computer, which is not highly available enough; we need them on two different computers. And the chance of having the two pods on the same host is bigger with ARM, because we have a small number of those machines. We only need one for the compute power, but we should have two for that kind of problem. So what this really means is: if we're running the minimal case today, where we only allocated one ARM64 physical machine to run all of our pods, we will now be running at least two ARM64 machines, because in order to be able to reset one of them, we have to have a second machine. Makes sense. And you could say that we are saving a lot of money right now because we're using only one. Well, we're probably saving money; I'm not ready to sign up for "a lot" yet, but yes. So, yeah. I'm not sure if we are able to work on this in the upcoming week, but I think we should.
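For reference, the fix described here is a pod anti-affinity rule that spreads the replicas of a service across distinct nodes. Here is a minimal sketch of the kind of rule each Helm chart would need to render, shown as a strategic-merge patch applied with the Kubernetes Python client; the deployment name, namespace and app label are made-up examples, not the actual chart values.

```python
# Sketch: force replicas of one deployment onto distinct nodes.
# Deployment name, namespace and labels are hypothetical examples.
from kubernetes import client, config

anti_affinity_patch = {
    "spec": {
        "template": {
            "spec": {
                "affinity": {
                    "podAntiAffinity": {
                        "requiredDuringSchedulingIgnoredDuringExecution": [
                            {
                                "labelSelector": {"matchLabels": {"app": "javadoc"}},
                                "topologyKey": "kubernetes.io/hostname",  # at most one replica per node
                            }
                        ]
                    }
                }
            }
        }
    }
}

if __name__ == "__main__":
    config.load_kube_config()
    client.AppsV1Api().patch_namespaced_deployment(
        name="javadoc", namespace="javadoc", body=anti_affinity_patch
    )
```

In practice the charts might prefer the softer preferredDuringSchedulingIgnoredDuringExecution variant, so that a pool temporarily reduced to a single node can still schedule both replicas.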
Personally, I would advise, Stéphane, that once you have finished the ARM64 Docker image builds, you move to this anti-affinity work before going back to the migration of workloads to ARM64. Yeah, I agree; that way the migration will take advantage of it too. So, lower priority than the ARM64 Docker builds, but more important than the Intel-to-ARM workload migration. Okay, any question? So let's continue.

The Artifactory HTTP 401 errors. I had the meeting with support, as reported on the issue. For five days I checked daily, and Daniel as well: new users creating an account and logging in for the first time on Artifactory were still set up with the option disabled. I tested the morning before the call, two hours before, and during the call I'm sure something was done to our instance, because suddenly the right option became the default, which wasn't the case two hours earlier. So that's good news. Now new users get the proper setup, which means we only have to find a way to treat the other thousands of existing users and ensure they have the proper setup. I didn't have time to spend on this one, given that we only had one user asking about it this week, but we still need to write a batch script. Of course I asked them, and they said "oh, you have a nice API, you could use it." Yes. So that means some batch script will be written soon. I will be happy to share the burden or delegate it to someone, but only an administrator can validate and test it — unless you want to set up a full Artifactory instance on your machine, plus the LDAP and the connection between both, and the user test cases. So yeah, better to test in production. New users now have the proper configuration; we now need to batch-fix the existing users. Any question? Okay, I will continue working on this topic this week; I hope to get something going, it's a long-running thing.

Plugin site build commonly failing on infra.ci when accessing plugins.jenkins.io. Last time I checked, before you went on holidays, you had detected that every three hours we had a peak of HTTP 502 errors when the site was generated from the plugin site backend. I don't think either of us had time to dive into that topic, so we did not dig further beyond triaging the error. We are not sure if it's Fastly sending the error on its own, or Fastly answering with an error because of an error on the backend. So yeah, we need to dig into this topic; we are not at ease with the plugin site architecture, to be quite honest, and I'm not sure we have time right now. Hervé, I believe you made a proposal, and maybe it's already done: can you remind me if you already implemented a retry on the build? I think you're muted. Yeah, you're muted. Sorry. I don't think so; I can check that. The proposal was a retry in the pipeline, as a short-term fix. Is that okay? To check, and if it's not done, to add a retry on the build. I know it's always the same for builds like this one, where we generate something; same for RPU. I know we have a lot of developers who say "hey, we could implement the retry natively inside the batch jobs" and so on, but on the infrastructure side we don't really have the expertise to jump into those code bases, languages and domains that we don't always master. So at least in the short term, adding a retry: it's not the best, but one, two, three retries avoid the errors most of the time. The goal is to decrease the errors, making them a bit less visible.
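The short-term fix asked for here is simply a bounded retry with backoff around the failing fetch. Below is a generic sketch of that shape in Python, treating gateway errors such as HTTP 502 as retryable; the real change would live in the Jenkins pipeline, and the retry counts and delays are only illustrative assumptions.

```python
# Generic sketch of the "one, two, three retries" idea for transient 502s.
# Illustrative only: the actual fix would be a retry step in the pipeline.
import time
import requests

def fetch_with_retry(url, attempts=3, backoff_seconds=10.0):
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            response = requests.get(url, timeout=30)
            response.raise_for_status()  # raises on 502 (and any other 4xx/5xx)
            return response.content
        except requests.RequestException as error:
            last_error = error
            if attempt < attempts:
                time.sleep(backoff_seconds * attempt)  # simple linear backoff
    raise RuntimeError(f"failed after {attempts} attempts for {url}") from last_error
```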
Most of the time that's the problem anyway. But right now this one is visible, so let's add the retry and then diagnose. Any question? Okay.

The next major topic is the update center (updates.jenkins.io) migration. Hervé, your turn. So, if you can open the issue, I've put the latest status on it. Before my PTO, I noticed that the mirrorbits-based service we've put in place for updates.jenkins.io on the public AKS cluster was failing with a specific error in the logs. Looking in more detail at the mirrorbits source code, I noticed that a mirror we're adding needs to be able to respond to FTP or rsync requests in order to be accepted as a valid mirror by mirrorbits. So I first looked at mounting S3-like buckets as volumes, but most solutions use s3fs to mount S3 as a file system, which is a no-go for us, as the pod would have to run with a privileged security context. Then I looked at the previously decommissioned Azure mirror of updates.jenkins.io, which was using the same storage account as mirrorbits and had an rsync server running alongside it. I've transplanted this capability into the mirrorbits chart, so we don't have to use and maintain two charts that are almost identical. And when this service is deployed, I intend to use it to try to add the R2 bucket mirror, with its own URL, its FTP URL, and this rsync daemon URL running on another service, serving the Azure storage account content we are already using as the local reference repository for the mirrors. In doing so, we will have to ensure that the content on the Azure storage account and on the R2 bucket are updated in sync. And the next step will be to port the .htaccess redirections to get behavior similar to the current updates.jenkins.io.

So you mentioned rsync, and I'm not sure I've understood. rsync today is used by the other mirror repositories, like OSUOSL, Yamagata and others? rsync or FTP. Oh, some of them are synchronizing with FTP. Yes; I think the list of the mirrors is in the Helm chart, with the rsync or FTP address we set for each of them. Well, I'm sorry, I was thinking from the provider side, from the side that we provide. So I have an update site inside my private network, and it reads from an rsync server that is, I think, one of the mirrors of get.jenkins.io. But the central server is providing both rsync and FTP — is that what you're saying, Hervé? I'm trying to understand. For a mirror to be added to the mirrorbits list of mirrors, the mirror needs to be able to respond to rsync or FTP requests, so it can be scanned for its file list. Okay, so mirrorbits knows which repo is up to date for a specific file. Okay, so before mirrorbits is willing to offer a file from a mirror, it has checked to be sure the file is on that mirror and has the correct checksum. Got it. Yes — not exactly per file, but the first step when you add a mirror is to scan it, either over FTP or rsync, and if neither is available, it fails. And there is also the other direction of rsync you mentioned, Mark; Hervé had the same confusion initially. The mirrors can download the data we provide on get.jenkins.io, so the HPI files, using rsync, but they also need to provide an rsync server themselves: they are both client and server, because we need to be able to scan them, or FTP as the alternative, of course. So that's why we have this tricky topic today.
If we use an S3 bucket on Cloudflare, Amazon or somewhere else, we get a cheap HTTP server that we can replicate, and storage that costs almost nothing, but mirrorbits is not able to scan it, as Hervé explained. So in the case of the update center, the proposal I made to Hervé is: let's provide the rsync server on the reference, because we control the updates and it's atomic. That means we say "here is the update center", we copy it onto the reference, then we copy it to Cloudflare, to two locations, and once both copies have succeeded, we trigger a scan, telling mirrorbits "please scan that rsync or FTP endpoint." So if mirrorbits points to the rsync server on the reference data, it's not a problem: it will see the file is there, and the risk of discrepancy between Cloudflare and the reference is really low. That is not possible for get.jenkins.io, of course, because we don't control when a given mirror has run its rsync to fetch the data, so there we need to carefully scan each of the mirrors at any moment. It only works because we control the update of the mirrors in the case of the update center, and that is mandatory for us anyway, for security reasons of course. So it's okay to get started with this approach. We have other alternatives: we can stop using buckets and instead use a virtual machine, or whatever service somewhere in the world, and use those as mirrors, at least to move the current update center to Azure at first. Cloudflare is only there because it's cheap; it would have been a nice solution, and they have a footprint on the China network. But if we don't have an easy solution, or if it doesn't work as I said, then let's use a virtual machine with storage, a web server and rsync, on DigitalOcean and Azure, in different locations.

Also, a proposal I made to Hervé: given that mirrorbits is not updated very often and the code is quite simple, it would be a nice feature request to say, hey, why not add S3 as a scanning protocol? That would mean we could directly say "we have S3-compatible buckets" and mirrorbits could take care of the scanning itself, avoiding having to provide an rsync server for these buckets. All the S3 providers — AWS, DigitalOcean, Scaleway, OVH, Cloudflare — provide both the S3 and HTTP protocols. So that would be a really useful feature for mirrorbits, but it would need us to put our hands into the Go code of the application. Still, it could be really useful for the infrastructure. Does that make sense? Because that's not an easy topic at all, and it's a critical one. So good job on that one, Hervé. "Support S3 for scanning mirrors" would be the feature request.

Is there something else to add? Okay. So, Hervé, I believe you only work tomorrow and then you're gone on holidays; that means tomorrow you and I will have to do some kind of handover, if that's okay for you, so I can continue working on that topic, since it's a top-level priority. Is that okay for you? Or do you really want to keep the topic, in which case we can wait until you're back and see what we can do instead. How do you feel? We'll see tomorrow how we progress, but yeah, I think it's a good idea. Okay, I don't mind either; it's better if we can hand over and finish earlier, but maybe you want to keep the whole topic, and you might be more efficient when you're back from holidays. So I don't mind. Okay. Unless there is a question, nothing more to say on this one.
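To make that feature request concrete: "scanning" a mirror just means building the list of files (paths and sizes) the mirror serves, so mirrorbits can compare it against its reference repository. Mirrorbits itself is written in Go, so the real change would land there; the following is only a small Python illustration of what an S3 scan would amount to, with the bucket name and endpoint as placeholders.

```python
# Illustration of an "S3 scanning protocol": list every object (path + size)
# in an S3-compatible bucket so it can be compared with the reference.
# Bucket and endpoint are placeholders; a real implementation would live in
# the mirrorbits Go code base, not in Python.
import boto3

def scan_s3_mirror(bucket, endpoint_url=None):
    s3 = boto3.client("s3", endpoint_url=endpoint_url)  # works for R2, OVH, Scaleway, etc.
    files = {}
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            files[obj["Key"]] = obj["Size"]
    return files

# Example (placeholder bucket and account):
# files = scan_s3_mirror("updates-jenkins-io", "https://<account>.r2.cloudflarestorage.com")
```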
Next is the GitHub organization permissions managed by the Jenkins admins, where they are going to use our tooling for centralization. A word about two issues that are linked to each other: migrating cert.ci to a new network, and restricting it to the VPN users who need access to both trusted.ci and the new cert.ci. In order to work on this, I've started a Terraform code refactor, for numerous maintenance reasons: otherwise, creating all the proper resources, network security groups and cloud resources for those controllers would be too much wasted effort. Tim had already suggested that we should create a Terraform module, since it's several instances of the same topology; in particular, trusted and cert would have exactly the same topology at the network level. So, in order to achieve a proper VPN restriction, I chose to do the refactoring into a Terraform module. Now ci and trusted are using that module, so I should be able to spin up a new virtual machine for cert.ci and migrate its data onto it. Once it's done, we expect not only to secure access to trusted and cert.ci — it's already the case for cert but not for trusted — but also to be able to delete the legacy network, to avoid wasting resources on Azure. Is there any question on this one? So, next: spawning the new instance for cert, which is blocked by the cert.ci issue above.

Migrate the pkg origin service from AWS to Azure. I didn't have any time to work on it. It's the same idea as what Hervé is doing: I wanted to build on Hervé's work, especially the work he did on improving the Helm charts. Since Hervé will be on holidays and I will eventually take over the updates.jenkins.io work, I propose to move that issue to the backlog until the update center work gives us a clearer view. It's going in the right direction, but I don't want to follow Hervé's work and then have to change things two or three times because we had bad surprises on the update center. I prefer that we wait until we're close to a stable situation for the update center, thanks to that work. So yeah, I prefer having fewer things in flight, but doing them sequentially and properly. Is that okay for everyone? Back to the backlog, then.

Next, let's see: command prompts to avoid confusion between services. Stéphane, I believe you wanted to work on this one but you didn't have time. Exactly, I'm sorry. Do you want to keep it? Is that possible? Yes, please. Yes, it's possible; still working on it.

Next issue: ATH builds commonly unresponsive. Stéphane's idea, which is what blocks us from closing the issue, is to build the ATH on spot instances on the first try and then fall back to on-demand if the spot build fails. That's just a few lines of pipeline code. Stéphane, is it okay if I take over this one, for billing reasons on Azure? I just want to have it implemented before every developer comes back from holidays and starts peaks of ATH builds. Yes, and in fact you had planned to do it, so just show me afterwards; that would be perfect if you have time. Okay, I take it over. Only if it's not taking time away from the ARM64 builds: I want you fully focused on that one. Agreed.

Linux Foundation: Mark, did you have feedback from the LF about the caching topic? I have not asked them yet; sorry, no feedback because I've not asked, no request to the LF yet. Right, that's still my action item. No worries, it's not a critical item. Cool.

Stéphane, ARM64, can you give us a summary? Yeah, that's easy. I'm working on the shared pipeline library to be able to build our images on both AMD64 and ARM64.
So the proposal is not going further until I'm able to build all the images that we plan to migrate, on both platforms. But it looks good. Already migrated? Yes. Okay. I believe right now you are switching to using Docker Bake files. Docker Bake for Linux, at least. Yes, to allow ARM64. Windows on ARM64 is not a thing for our cloud provider, so that will be Windows on Intel, and Linux on ARM or Intel. The benefit of Docker Bake: it's a bit of a revamp and it changes the behavior of the pipeline library, but it means we can also build images for PowerPC or s390x if we want to run custom agents or custom services on such machines. Meaning, if we have a sponsor giving us that kind of machine, we can use them for production workloads, especially for HA services: we could have one Intel machine and then replicas on, let's say, cheaper machines in the future. Thanks. So you'll continue working on that; it's your top-priority item, is that correct? We need to discuss how we handle some names and such, but it's almost done for me. Okay, status: working on the naming convention. When you start talking about the naming convention of variables, it means you're almost there. Yeah, although last time I said I was almost there, it took me two or three more weeks, so I should be more careful.

Next, set up the MySQL instance: Matomo, still the same status. We still need to set up the MySQL instance and the images and so on. Since we plan to deploy it on ARM64 at first, the second task, around the images, is waiting on Stéphane's effort; but at least we can start setting up the MySQL instance. Finally, Hervé, I believe we can put your last topic back to the backlog; that's not a problem, I just wanted to be sure, but since you'll be on holidays, neither Stéphane nor I will be able to work on it. Oh, holidays. Okay.

That's all for me on all these topics; we should shrink the next milestone. Do you have new topics you want to mention here while I'm checking for new issues? Okay. We have an account blocked by the anti-spam system; that's a usual one, day-to-day operations. And the remaining issues are still on the backlog, which is okay. Nothing new for us. Just a word on Kubernetes 1.26: the deadline will be October 20, 2023, so that means we should plan the 1.26 upgrade in September, just as a reminder. That's all for me. Anything else from you, folks? Nope. Okay, cool. So then see you next week, Stéphane, Kevin and I. Enjoy your holidays, Hervé; take care, we want you fully rested and powered up in two weeks. See ya, folks. Take care and see you next week. Bye.