Hello everyone, welcome to the Jenkins infrastructure weekly team meeting. Today is the 10th of October 2023 and around the virtual table we have myself, Damien Duportal. I believe Hervé might not be able to join today. Oh, I'm trying. Mark Waite is here. Stéphane Merle. Bruno is not there, I believe he has a meeting. Bruno is there now. Kevin. Oh, perfect. Hello Bruno. Okay, let's start with announcements then. The weekly release is out. I saw the changelog update as well on the jenkins.io website. I believe everything is ready or already shipped. Correct. Well, at least I haven't checked for the Docker container tag, but the weekly bits are there. The changelog has been merged. I need to check that the changelog is visible. Has the container tag been pushed? Yes. Oh, yes, it has been. Packages, container image, and Stéphane and I saw the changelog on the web page earlier today. At least the first version of the changelog is online on jenkins.io. Great. Do you have other announcements, folks? I don't either. So no announcements. Tomorrow, whatever happens — tomorrow or eventually Thursday — we will upgrade curl as much as possible on production. That might require a lot of upgrades. That's something that was spotted by Hervé, I think two weeks ago. The curl author has cut a short release cycle and announced a release tomorrow with, at the very least, a high-severity fix. We'll have to wait, of course, for that new curl version to be packaged and made available, mostly in the Ubuntu packages. Then we will have to update all container images as much as possible where they run in production. That's a communication for the Platform SIG leads as well, because the same will need to be done for the official container images, most probably tomorrow. Regarding Jenkins itself, that means we will have to wait for next week's weekly and LTS releases. That's one week.
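As a side note, once the fixed curl ships, a quick way to verify that a container or VM is patched could look like the sketch below. The version number 8.4.0 is an assumption based on the announced release; adjust it to whatever the curl advisory actually names.

```shell
#!/bin/sh
# Hypothetical check: is the installed curl at least the fixed release?
required="8.4.0"   # assumed fixed version; confirm against the curl advisory
installed="$(curl --version | awk 'NR==1 {print $2}')"

# sort -V orders version strings numerically; if "required" sorts first
# (or ties), the installed version is new enough.
lowest="$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)"
if [ "$lowest" = "$required" ]; then
  echo "curl $installed is patched"
else
  echo "curl $installed still needs the upgrade"
fi
```

Run inside each production image, this gives a quick per-container answer while waiting for the Ubuntu packages to land.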
I believe it should be okay, but we will have to assess the issue, because if it's really critical, we might have to find a solution with the security team. I propose that we check with the security team once the assessment is available. Is that okay for everyone? Yes. curl, okay. Upcoming calendar: weekly 2.428 next week, Tuesday the 17th, and the next LTS, 2.414.3, the next day, Wednesday the 18th. Is that okay for everyone? Yes, and thanks to Kevin Martens who has submitted the changelog and the upgrade guide for 2.414.3. Thank you, Kevin, very much. And Kris Stern has done the initial merge of 2.414.3's backports. There may be, depending on the changes that arrived in this week's weekly, requests to backport additional ones. I'm not sure. Some of them looked possibly quite interesting. Nice. Thanks for the explanation. Great work, Kevin. Great work, Kris. Thanks for the huge work there. No security update announced on the mailing list. By the way, for both releases next week, as soon as they're available, we'll try to merge them. For the LTS that means during the Wednesday afternoon after it has been done, because Kris is based in Asia, if I'm not mistaken, and usually starts really early with our German colleagues such as Alex. So it's usually done early, during the EU morning. So in the EU afternoon, US morning, we are able to upgrade our controllers. That also means, Mark, that we might need to take care of upgrading the plugins on ci.jenkins.io and then trusted.ci eventually Monday, or most probably Tuesday next week. The goal, as usual, is to be sure that we have as few changes as possible pending when the weekly release is published. Okay. Next, major events: you will be able to meet Jenkins contributors during DevOps World at Santa Clara next week, 18 and 19 October.
And if you have a preference for Europe and the nice Belgian weather, there is 3 and 4 February. By the way, this Sunday I'm going to test a potential venue, in case we want to have a Jenkins contributor summit or any kind of event, more official or not. That's one hour and a half by car from Brussels, but you can go by train in one hour and then 20 minutes by bus, taxi or car. But trust me, the venue is really nice. Let me spoil what I have in mind: that would be a Jenkins event on top of Belgium, because it would be at the highest point. There is a big brewery restaurant, which is on top of Belgium, literally speaking. Think of the beer pipeline in Bruges. Anyway, let's start with the infrastructure work. So what are the tasks that we were able to finish during this milestone? There was a GitHub documentation issue on the Sonar plugin, taken by Hervé. It was this awful override at the update center level. That means when we are generating the update center index, there is a way to control the documentation and source URLs that are advertised by the index, which you can then see in your Jenkins plugin management section and also on the plugins.jenkins.io website. So for everyone: if we have a user or contributor complaining that they did everything properly on their plugin — they have a Markdown README on GitHub and they want the plugins.jenkins.io website to show the documentation from their GitHub repository instead of the old wiki — it means you have to look at the update center overrides, like in this issue. It's the second one in one week, I believe. So that's a quick and easy fix. Thanks to everyone involved on this one. Maven 3.9.5 is now generally available to developers on ci.jenkins.io and everywhere. Thanks Alex for raising the topic. Not much to say about this one. It was already tested, at least on Jenkins core, before the request, so our confidence is good enough, but maybe it breaks something. If you have an issue, don't hesitate to send us a message.
We had the Jenkins account issue where a signup was marked as probable spam by IP. The user was a human though. So I tried creating an account with this email and it worked from my IP. I believe theirs is an Indian ISP public IP, which is used a lot. So yeah, no feedback from the person, so I closed the issue. Last week, we had an issue with one of our Packer images, only on the Windows agents, that has been fixed with the 1.28.0 version of these images. The problem was rooted in something failing during the last step of the Windows provisioning, which is what we call sysprep. We have a script that cleans up any license, computer name, and temporary files on Windows to make it usable as an image, so then you can start new machines from that template. That step showed as a success, but Azure, when starting a virtual machine from that template, was reporting it wasn't. So we tried a new version of 1.28 that has new changes and it worked. No more problem, deployed everywhere. So the issue is closed. If I remember correctly, it's 1.28.1. Yep, true. Now 1.29 and soon 1.30. Three minor versions, that's nice. Another issue treated with the user: I've closed the issue about access to GitHub Packages in a plugin. This user from Red Hat wanted to access a dependency hosted on GitHub Packages, which is not possible anonymously, making the build break on ci.jenkins.io and on contributors' laptops, except theirs. They acknowledged the tip yesterday, so I closed the issue; now they have to test and give us feedback on whether it works or not. We have at least two other plugins already doing this, so that's why I closed the issue, and we'll see if they have any more problems, but no more action for us. Any questions? Finally, great job.
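For context on why such builds break: GitHub Packages rejects anonymous reads, so a Maven consumer needs a personal access token configured. A minimal sketch of the usual fix, assuming a Maven build — the `github` server id and the environment variable name are illustrative:

```xml
<!-- ~/.m2/settings.xml — credentials for a GitHub Packages Maven repository -->
<settings>
  <servers>
    <server>
      <!-- must match the <repository><id> declared in the consumer's pom.xml -->
      <id>github</id>
      <username>your-github-username</username>
      <!-- a personal access token with the read:packages scope -->
      <password>${env.GITHUB_TOKEN}</password>
    </server>
  </servers>
</settings>
```

On CI this means every build environment needs such a token, which is exactly why the dependency resolution fails everywhere except on the author's own laptop.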
Now, all of the replicated services on our public production Kubernetes cluster, publick8s, are running with a high-availability enforced setup, which means no more two replicas on the same virtual machine where, if the virtual machine crashes or is upgraded, we'd have a service cut. Also, we added not only anti-affinity, but also a guarantee that when we roll out a new version of a replicated service, we always have a replica running at any moment. So great job on that work. That was also a way for us to discover nice surprises in some of our Helm charts. For instance, I broke the public backend used for plugins.jenkins.io during at least 10 minutes last Friday, if I'm correct — that was the plugin backend system — because it used an exotic way of setting Kubernetes node selectors, and it had worked like that for something like four years. That has been fixed. But yeah, a nice way to clean up and improve the content and quality of our Helm charts, by the way. So that task is definitely closed, unless we missed a service, but it looks like it was exhaustive. So now the next step: this opens the door to going back to migrating services to ARM64, and opens the door to upgrading to Kubernetes 1.26, because now we've proven we can do these operations safely. Anything else to add on the tasks that were done? Any questions, clarifications? No, okay. So now the work in progress. We have a lot of work around the plugin site backend — or API, that's the same thing. We had the OOM kill on the pod, detected by Stéphane last week. So thanks Stéphane for the complete analysis. The idea is that we only have to upgrade the JDK it runs on to a recent Temurin version, even if it stays a JDK 8, because the Ubuntu 22.04 machines we use in production use cgroup version 2, which is the mechanism controlling the resources allocated to containers, and the old JDK version used in the latest image version is not able to read that.
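As a quick illustration of that root cause: which cgroup version a node uses can be read from the filesystem type mounted at /sys/fs/cgroup, and an up-to-date, container-aware JVM then derives its limits from it. A small sketch — the JVM diagnostic at the end is the usual flag dump, not our exact command:

```shell
#!/bin/sh
# Detect the cgroup version in use on a node or inside a container:
# cgroup v2 exposes a single unified hierarchy of filesystem type cgroup2fs.
fstype="$(stat -fc %T /sys/fs/cgroup)"
case "$fstype" in
  cgroup2fs) echo "cgroup v2 (unified) - old JVMs cannot read limits here" ;;
  tmpfs)     echo "cgroup v1 (legacy)" ;;
  *)         echo "unknown hierarchy type: $fstype" ;;
esac

# Inside the container, a recent Temurin JVM should then report limits that
# match the pod's resources, for example:
#   java -XX:+PrintFlagsFinal -version | grep -E 'MaxHeapSize|ActiveProcessorCount'
```

A JVM that cannot read cgroup v2 falls back to the host's total memory and CPU count, which is exactly the oversized allocation described below.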
So it was trying to allocate 130 GB of memory and up to 60 CPUs. So yes, at the moment Kubernetes was killing it every minute, and had been for almost six months. So thanks Bruno and Stéphane for opening the can of worms, because we realized that in fact the plugin site API had not been building since February. So we first had to fix the build, with the help of Hervé, so that we can then upgrade the JDK 8. That wasn't an easy one. But yeah, now what's the status? The build is now fixed. Oh, sorry for the uppercase. The build is fixed and container images are deployed to Docker Hub. So the next step for this one, to unblock the rest, is to deploy a first version of the plugin site API, which delivers commits from Gavin, me, Hervé and Stéphane since October 2022. We have at least one year of changes, if not more. So next step: deliver that version to production. The good point is that we know Fastly is in front of that system. That means we will have to check carefully the origin URL behind plugins.jenkins.io and test it, and if it's not working, immediately roll back — until Fastly's cache expires, you generally have one hour for that. The following step is blocked by the above: of course, once we have deployed the new version and we know it works as expected, then we will be able to deploy the newer version with the new JDK. Is there any question, comment, or clarification to add on the plugin site API? By the way, as raised by Hervé and then confirmed by Gavin, this service should be sunset soon. There has been a pull request for a few months that keeps only the frontend part, which would allow us to get rid of the backend part. There is still some work we need to assess: documentation team, we will need your help to evaluate these changes. I believe Bruno and you, Mark, might be of help here, as we need to check what is missing and what would happen if we switched immediately to Gavin's proposal.
But I believe it's a problem of images and absolute URLs, so it should not be that much work at first sight. It's just a matter of finding someone able to spend the required time on it. Damien, is this still the plugin site API, not the deletion of the jenkins.io site? I'm sorry, I'm a little bit lost. No problem. The plugins.jenkins.io website is composed of a backend and a frontend, and you have Fastly in front of it that caches everything. But still, we have three components: the plugin site issues service, the backend and the frontend. It's the backend we are speaking about. It's written in Java, JDK 8 only, and that's the one suffering the OOM kill. We will keep the frontend website, which is the origin for Fastly on plugins.jenkins.io — absolutely, we'll keep this one. I don't know for the issues service. I've added notes just to be sure there is no confusion, because it's clear in my mind but maybe not for the rest of the team. But yeah, that's a good question. Any questions, clarifications? Okay, next issue: we have a user unable to sign up with their email. I just posted the answer one hour ago. The email is considered a source of spam: the messages in the Datadog logs with their email say no, so most probably the email is blacklisted. So I've added a message for them saying you have to try another email. I'm not sure how acceptable that is, because it can be frustrating for the user, but I'm not sure we want to suffer tons of spam on the system either. So we'll wait for feedback from them, and next week, if we don't have any answer, we close the issue after one milestone. Okay for the user answer about changing their email. Next, the SendGrid email status issue. I believe you didn't have time to work on it. No, I'm sorry, Daniel, if you're watching. Do you want to plan working on it or do you want to move it to the backlog? I will move it to the backlog, got too much to do. Nice. Back to the backlog.
Packer: goss version tracking, and moving the sanity checks to goss. What's the status on this one, Stéphane? What I thought was a good move just crashed an hour ago. So still no real usage. We're trying to move all the sanity checks that we do in the provisioning process from shell scripts to goss, so it will be more portable and reliable. But for now, I have a problem with the user running those tests, and especially the path used to run those tests. I thought we'd found the solution, but it seems not. Still need to work on it. For now, I've started to move the az version test and the npm version test, but for now it's unusable. Okay, thanks for the report. Do we keep it on the milestone given your workload? Okay. Yes, that's a background task. Thank you. Okay, any questions or clarifications on that topic? Okay, so the next topic: migrate the Terraform states from AWS S3 to Azure buckets. I've started preparing the code for creating the new Azure buckets, but I haven't applied the changes and created them, so that will be the next step. I was traveling, and I wasn't really keen on manipulating Terraform states from my machine on a running train over a 4G cellular network, right? So the code is ready; I need to apply the creation of the new Azure buckets. I'm keeping this one on the milestone; I will be able to work on it. I don't know if there are any questions on this one. Mark, do you have any news about the Oracle Cloud project? Yes, the news is no news. I'll double-check while we're here, but we still can't pay them the last three invoices because they still show $0 due. Okay. By the way, everything has been cleaned up on Azure, the whole SSO. So I propose that we close that issue here and you can comment when you have news. Is that okay for you, Mark? Yes, absolutely. No more action required except waiting for Oracle. Thanks. So I will move this one to the closed issues in the notes. Here we are. Any questions or clarifications about the Oracle point?
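For reference, migrating a state to Azure usually boils down to swapping the backend block and re-initializing. A minimal sketch, with every name being a placeholder rather than our actual layout:

```hcl
# Hypothetical backend definition pointing at the new Azure storage container
terraform {
  backend "azurerm" {
    resource_group_name  = "example-states-rg"
    storage_account_name = "examplestates"
    container_name       = "tfstates"
    key                  = "example-project.tfstate"
  }
}

# After changing the backend block, Terraform offers to copy the existing
# state from the old (S3) backend into the new one:
#   terraform init -migrate-state
```

The `-migrate-state` step is the part that is risky to run from a train on 4G, since it reads the old state and writes the new one in a single interactive operation.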
Okay. Next step, Stéphane: speed up the Docker image pipeline library to create and push tags at the same time, and so on. What's the status, and do you plan to work on it? Yes, I plan to work on it, so keep it on for next week. We already prepared pull requests to disable the automatic tag build for one repository — the Confluence data one — for me to try the new version of the pipeline library that will do everything in one run, meaning the build on main will deploy latest, and the tag build will push the tag, and everything will be done in one run, one turn. We've started. Next step: write the unit tests. Yes, TDD. On the library, to describe the new expectations of the new pipeline. Is that okay? Yes, exactly. Okay. I think that's all. So that's already a lot, thanks. So we keep it on the upcoming milestone. Any clarifications, points, or things to add on that topic? No. Okay. Next topic: remove the account request field from the Jira login page. I didn't have time to work on it. So yeah, I think it goes back to the backlog. I don't have time to spend on this one and it's on Jira, so it needs a Jira admin. I will ping Daniel, Mark and Tim in a message saying I don't have time and I don't know how to do it, I need help from another admin, and then I will move it back to the backlog unless someone objects after pinging the other admins. Okay. Next topic, we have on the list the upgrade to Kubernetes 1.26. Stéphane, I believe you worked on it. What's the status on this one? Yes. I did the first little step, which is changing the CLI manifest for the kubectl exec credential type on the client side. We got a pull request dealing with the upgrade, and I think we did — yes, we did merge it. So we should have the new kubectl client side working and up to date. HA is now enforced on all replicated services, so the next step is reading the changelog and preparing the DigitalOcean cluster upgrade. We need a timeline.
I'm not necessarily the one to do it by default. I volunteer, but if anyone is interested to either pair or drive the topic... I would like to pair. Yep. I think they said something now. Same as you. You want to pair on this one? Not driving it, but pairing, yeah. That's a production system; we need to pair on this one to make it safe. So both of you can pair — I let you decide — or you can pair with me; we can be the three of us. Given the calendar, I propose that we only start reading the changelog, and we don't plan anything until after the next LTS. Is that okay for everyone? Let's plan for after the LTS. We can start on the changelog and prepare the DigitalOcean upgrade. Okay for everyone? Cool. So I'm adding it to the milestone this week, but only for the changelog and preparation part. Is that okay? Stéphane, what about ARM64? What's the status? I did prepare the rating service to migrate to ARM64, but then discovered that rating needed an upgrade. I forget why... oh yeah, because it wasn't building correctly on main and needed to be upgraded. So I did prepare a fix. It's done and needs to be deployed. And then I will be able to merge the other pull request that's dealing with ARM64. Also, explain the bug you found on rating.jenkins.io. I found the bug, but you found the solution to the bug. I'm sorry, I didn't hear the beginning. You changed the size of a field in the SQL directly? No — look at the pull request. I've updated the init SQL schema and I wrote the SQL query, with the transaction needed, to be applied to the database to increase the length of the IP field. Yeah, we need to be careful with that before applying it: the table can be huge. Okay, so yes, we discovered that it's not compliant with IPv6, only IPv4, because the field is too small: it's 30 characters and we need more. Thank you. Thanks. The fix in itself, I don't think that's the problem. It's more that we need to plan and verify that it works on a copy of the database first and then see what happens.
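For the record, the schema change itself is small: IPv6 addresses need up to 45 characters in text form (39 for the plain form, 45 with an IPv4-mapped suffix such as `::ffff:192.0.2.1`). A sketch of the kind of transaction involved — table and column names here are illustrative, the real ones are in the pull request:

```sql
-- Widen the stored client-address column so the IPv6 text form fits.
-- PostgreSQL flavor; wrapped in a transaction as discussed in the meeting.
BEGIN;
ALTER TABLE rating ALTER COLUMN ip TYPE varchar(45);
COMMIT;
```

The statement is trivial; the operational risk is that on a large table with indexes the ALTER can lock writes for a while, which is exactly why a rehearsal on a copy comes first.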
So not necessarily a full copy, but at least a copy with 10 or 20 records, and then we try the change. We need to be prepared for an interruption, because if the table is huge and there are indexes and everything, it can take time. We will have an interruption on this one; we don't have to be too clever. It's not critical data anyway, so. Yeah, okay. Yes, but still. Yeah, it was down for — I don't remember how long — but yeah. Yes, but the data wasn't lost. If we start by trying small things, we improve ourselves as a team when we are running production operations, the three of us. It wasn't off, it was working for IPv4. Rating was down for some time. Okay, you didn't know that. That's not an argument for not planning a proper production operation. I don't know. Okay. And I think that has already been covered — that was in response to Stéphane, not questioning your attention to production operations. So I propose the same: to postpone this one to after the LTS. Is that okay for you? Or is someone motivated to plan an operation this week? Okay, after the LTS. Thanks. So you're working on rating.jenkins.io to move to ARM64, is that correct? Oh yes, but I will pick a new one for the coming week. I don't know which one yet. The ingress controller, but the private one? So we can do a first step: if it breaks, it only breaks our internal usage. And if it works, then we can plan the public one after. Yes. Is that okay for everyone else? Private NGINX ingress controllers. Okay, that will be enough for ARM64, I believe. Oh, we mentioned... no, we'll see later. Sorry, I was diverging. Next point: Matomo. As discussed during a team sync, I believe the argument presented by Hervé makes sense: Matomo needs to be prioritized — not as much as updates.jenkins.io, but still. So that's why we're spending some time working on it. Because yeah, we don't have statistics on the website since July. Is that correct, Hervé? Since the end of July, yes. Yes.
That's because of Google Analytics version 4, which we are struggling to upgrade to. So the goal is to move to Matomo as soon as possible. Sorry, you were saying? Nothing particular, just that we can't update to the fourth version of Google Analytics, as we don't have the required level of permission. We can't upgrade. No stats since the end of July, so prioritizing it makes sense. Thanks for pushing on that topic. Status: we had a database instance, but we didn't have a database inside the instance. So that's the work in progress: adding the Matomo database with its own user. A pull request was merged, but it failed to apply on production. It looks like the instance's administrator user is not allowed to log in from outside localhost. So I'm searching for how to open it from just a few IPs or just from a few endpoints. I believe it's around the private endpoints; we had the same problem with PostgreSQL. So it's just a matter of digging the archives for how we solved the problem back then. I can reproduce the issue by trying to log in from the private VPN machine, which is used when you want to run the Azure Terraform from your machine: you need to open a tunnel through the VPN. That's an improvement for later, but I can reproduce the issue from the VPN machine, which will make it easy to test before running Terraform again. We also have the Helm chart. I've taken and refreshed the work from Gavin, who tried a few months ago to install the Bitnami Matomo Helm chart that allows spinning up Matomo. Right now, the goal is to insert the credentials, and the ARM64 image has been pushed for it. So the next step for me will be to try a first install on a local cluster, and if that works, and if there is no objection, I plan this week to run the first installation outside the configuration as code, but without an ingress. So it won't be available publicly: my test will only be with a kubectl port-forward to the internal private service.
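That ingress-less smoke test would look roughly like this — namespace, service name, and ports are assumptions; the Bitnami chart's actual service name may differ:

```shell
#!/bin/sh
# Forward a local port to the private Matomo service, then probe it.
kubectl --namespace matomo port-forward service/matomo 8080:80 &
pf_pid=$!
sleep 2   # give the tunnel a moment to come up

# Any HTTP answer on / means the pod is reachable without any ingress.
curl --fail --silent --show-error http://localhost:8080/ > /dev/null \
  && echo "Matomo answers through the port-forward"

kill "$pf_pid"
```

Nothing is exposed publicly at any point: the only path to the service is the operator's own tunnel.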
And as soon as I have something that looks like it's working, I will delete the whole namespace to clean up the resources, and I will submit a pull request for you to review a version that has been tested once. Does it make sense? And is there any objection or agreement on this one? That's a good move. Hervé, does it map to the problem you had when bootstrapping the updates.jenkins.io Helm chart? Sorry, I missed the last sentence. No problem. My proposal about creating a first installation manually — without the as-code part, without the ingress, and testing only with port-forwards — I wanted to map it to what you did with the updates.jenkins.io service without the ingress. Does it make sense for you? I don't see the direct link between updates.jenkins.io and this chart. During your first install of the charts, there was one week during which you had the draft pull request open with the code, but you were applying the code from your machine. Okay: to be tested without public ingress access at first. Cool. So that will be the next move, to start as soon as possible. Next: removed jenkins.io pages that aren't accessible. The last step is to find a way to delete the old pages on jenkins.io that are not generated properly. I believe we didn't have time to spend on this one, is that correct? I haven't worked on that, and we need to have some people checking the pages to delete before adding the deleted flag. I'm not sure if Kevin or Mark are still there; I believe you were involved. Okay, so I'm adding a note: we need help from the docs team to check the pages to be deleted. Did I capture it well? Should I add it to the backlog? I think we should remove it from our milestone until the checks are done. Okay, thanks. Planning for supported JDK versions in the Jenkins infrastructure: I've added that one back to the milestone, since Stéphane started to remove JDK 19 from the Packer images.
So that will be the next release, still work in progress, to be deployed on ci.jenkins.io. We have the pull request, Stéphane; that should be the next step, but we have to do it carefully. Yeah, not during the weekly release, so most probably tomorrow. The next step after that will be the Jenkins tool installers and the Docker inbound agent Windows container images for the infra. I saw a message about JDK 21. Yes. Yeah, finally available as of a few minutes ago. When you say available, does it imply every bit of the deliverables, or just the Intel ones on the website? Yeah, not everything is available yet; you won't find it yet for every platform. So yes, it's not for today: the first ones are available, but I think we will have to wait one or two days before everything is available for every platform. Hervé just checked during the meeting, and for example we don't have the Docker images yet. So yeah, we could install some things on the infra, but "it's urgent to wait", I guess. I mean, if we have the platforms we need, that's not a problem. So yeah, ARM64 and AMD64, I think they're available. So why not — and they are already providing some JDK 22 builds, by the way. After cleaning up JDK 19, we could dirty everything with JDK 22. Nope: the first point is that it's not an LTS, so for us it doesn't exist. Same as with JDK 20: it won't exist for us until there's a new LTS. Okay, but 19 wasn't an LTS either. To add to the debate, there wasn't even one plugin using JDK 19, so I don't think it was worth putting it in place only to remove it later. Right: the infrastructure JDK support timeline. Because before trying to jump on a non-LTS version, we need to say what we want to provide to our users and support. The problem is not installing JDK 22; the problem is supporting it, and having a way to help users when we need to remove it. As we said with JDK 19, we didn't find any usage. So yeah: what about the next edge versions such as JDK 22, JDK LTS support and end-of-life, etc.
We need to write a document and then send that document to the developers, to be sure no one has a strong opinion against it, before we remove anything. I'm still not sure about the process: do we need to play the benevolent dictators, in the sense that we are the people in charge of maintaining and providing security in the infra? The goal is not only JDK 19; the goal is the general support policy that we want to have on the infrastructure. That means: for the LTS channel and the non-LTS channel, do we support it, yes or no? And if we do, for how much time? What is the timeline? The goal is to be deterministic for the users. Whether we keep only the LTS releases available, or we also offer the intermediate ones, we need to bring the possible cost to their attention: if we provide a new Java version and they add it to their tests, that means we will have a lot more agents running. Yep, true. Thanks folks. Yeah, that will not be an easy topic, but at least we can continue on the JDK 19 removal. Is that okay for you if we start writing a first proposal draft, and then we review it before bringing it to the mailing list? Cool. Finally, the last topic I have, unless you have a question... nope: updates.jenkins.io. Hervé, your turn. That's the big one, but we treat it as the last story. Can you give us a status? We've lost Hervé. Okay, so we shared a bit of the work: I will share my part, and if Hervé comes back he will share his, otherwise I will try to speak on his behalf. No problem. Are you okay to share the status on updates.jenkins.io on your side? Yep, sure. Your side is in the chat, so I can let you talk about that. On my side, we've got every needed tool — sorry, two seconds — installed on the trusted permanent agent: the AWS CLI and azcopy. The first one to be able to sync the two Cloudflare buckets, and the second one to synchronize the Azure file share we are using as the reference for mirrorbits.
I've also got the credentials for AWS stored as a non-default profile. So when you execute an AWS CLI command, you have to specify the profile; it doesn't work by default with these credentials. For the azcopy credential, we have to use a SAS token, which is more secure than the account key used elsewhere, which is the storage account's primary key. So it's a more fine-grained token. I first wanted to pass it as code, by storing it in an environment file and sourcing it, but it doesn't work: I can't bind it as a secret in the freestyle trusted job. So I will do like the other credentials used by the update center jobs on trusted, which is: create and register them manually on the trusted controller, then bind them to the freestyle job manually. So first, I've pushed a pull request updating the publishing script used by the update center to also upload and synchronize the file share and the two buckets, but it's failing, and I'm currently debugging why, as local tests are not failing: a copy of the job works locally. So I'm not there yet. When that is working, we'll be able to test this new update center website with a local and a different Jenkins instance. And finally, we'll be able to remove the initial part of the publishing script that serves the update center virtual machine, and keep only the new service, which is hosted on the publick8s cluster. Nice. Just a note on performance: that will be the challenge once we have a way to validate functionally that we can copy the data. As a reminder, from Daniel initially, that job needs to run in less than three minutes, which includes the copy to the update center VM — the rsync that will be removed in the future, as Hervé mentioned — but also the copies to the Azure file share and the S3-compatible buckets.
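Given that three-minute budget, the copies will very likely have to run concurrently rather than one after another. One possible shape — every bucket, share, and host name below is a placeholder, and the Cloudflare buckets are reached through the S3-compatible API, hence the AWS CLI:

```shell
#!/bin/sh
# Run the three copies concurrently and fail if any one of them fails.
src="./www-content"
pids=""

aws s3 sync --profile updates "$src" "s3://example-updates-bucket/" & pids="$pids $!"
azcopy sync "$src" "https://example.file.core.windows.net/updates?${SAS_TOKEN}" & pids="$pids $!"
rsync -a "$src/" "updates-host:/srv/updates/" & pids="$pids $!"

status=0
for pid in $pids; do
  wait "$pid" || status=1   # collect each exit code; remember any failure
done
exit "$status"
```

Waiting on each PID individually matters: a bare `wait` would hide a failed sync behind the exit codes of the ones that succeeded.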
So we will have to find a smart way to parallelize these copies, with background processes, GNU parallel, or other shell techniques, but we will need a way to run the copies simultaneously. Have you got confirmation from Daniel that it must be under three minutes, or is that just the current time? I haven't asked; you can ask him. Yeah, I'll wait until we've got it working first. Okay, I don't want to challenge that constraint; I want us to provide something optimized, that's why I'm not asking him. If you want to challenge the three minutes, that's okay, but you'd have to raise it with Daniel. That's why I won't ask him yet. Perfect, thanks. About the three minutes: when you have a problem like this, either you increase the time you can afford spending, or you optimize. Those are the two obvious paths, so it's okay to take the other path if needed. Cool, nice job, that's a lot of work. So we're almost there; we're at the debugging part. On my side, on the Helm charts: first, the new Helm chart is deployed. The new Helm chart features what we call umbrella PVC and ingress. That's the parent chart that defines, once for all, the storage and the ingress. And then the subcharts — rsync (which is not used... no, it is used: the rsync endpoint for the mirror scans), the httpd server, and mirrorbits — use this PVC or ingress depending on their needs. Yes, great news. It broke the mirrorbits system though. Yeah, it breaks mirrorbits. So as we mentioned during our private team sync, right now, if we need to test the new service end to end, we will have to temporarily disable httpd as an ingress backend. The problem is somewhere around paths with regex and NGINX locations. Yeah. You know the adage: you have a problem, you add a regex, and then you have two problems. That's literally where we are. What is happening is that all requests are caught and sent to httpd, even if we specify something else.
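For that catch-all symptom, the usual suspects are path ordering and `pathType` on the shared Ingress. A minimal sketch of what the intended routing looks like — host, service names, and ports are illustrative, not our actual manifest:

```yaml
# Illustrative Ingress rules: the specific prefix must route to mirrorbits,
# and only the remainder should fall through to the httpd catch-all.
spec:
  rules:
    - host: mirrors.example.org
      http:
        paths:
          - path: /mirrorbits        # specific prefix, must win over "/"
            pathType: Prefix
            backend:
              service:
                name: mirrorbits
                port:
                  number: 80
          - path: /                  # catch-all: everything else goes to httpd
            pathType: Prefix
            backend:
              service:
                name: httpd
                port:
                  number: 80
```

With `pathType: Prefix`, the longest matching path is supposed to win; mixing regex annotations into the same Ingress changes how the controller generates its NGINX locations, which is one plausible way for every request to end up on the catch-all.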
So either we are specifying it in the wrong way — but I mean, Hervé did everything properly since the beginning, starting from an existing setup that worked as expected — so it's not obvious what we missed. Maybe I broke something that will be obvious on a second look, but right now I don't understand: the configuration on NGINX looks good and should do what it's told to do. So that's the status right now. I also have a WIP, but I've blocked that WIP on templatizing the mirrorbits configuration. I started working on it because Hervé needed to enable the download logs of mirrorbits, which are really verbose logs, so we avoid sending them to stdout. Finally, Hervé successfully found and fixed the issue, so that problem is not a priority anymore; that's why I've deferred it. But still, we have a lot of duplicated configuration between the different parts, and we want to be able to enable logs or GeoIP or whatever parameters in the .conf file. So the goal is now to avoid having everything in the secrets, and to have a template generated from the values of that chart. That will be easier, but that one is not a priority now. Is that assessment okay for you, Hervé, or did I forget something? No. Cool, so yeah. That's all on the chart: the priority is fixing the ingress and the annotations. So yeah, that's all for this topic. Do you have new topics? Okay, let's have a look at new issues then. I need to add to the next milestone the issue that I've moved from the plugin site API: it's me who broke the plugin backend website, and I've moved it here so I can provide a post-mortem of what went wrong and what the fixes were before closing it. The incident has been fixed since one week — well, four days. I don't see any new issues though. Do you have other topics to add? Okay, so let's see each other next week. Bye-bye, I'm stopping the screen share, stopping the recording, have a good week.