So, hi everybody. Welcome to this new Jenkins infra team meeting. We have a few things that we need to discuss today. The first news, and the most important one, is that we are now sponsored by Amazon. They give us quite a lot of credits, and this will be renewed every month. The good thing is we can start using it: in the short term we'll be using it for CI, for ci.jenkins.io and all the Jenkins instances that we are running, so it will just be easier to provision machines. Then in the mid term we should deploy as much as possible on Amazon, so some community services as well. It will be easy for some of them but not necessarily for all of them, but this is something that we have to keep in mind whenever we have to deploy a specific service. The idea is still to avoid being too locked into Amazon, because the sponsorship is only committed for one year, at least for now. We want to be able to adapt and to avoid the situation that we are in right now with the Azure account, where we need to find someone who can pay the bill. So yeah, it's really great news for the project. Do you have any questions regarding this?

Do you have an envisioned timeline, or a process that you're ready to describe, for a transition, an organized transition process? Or is that separate?

That will be discussed later, but I would expect the following in the coming weeks. Either this week or next week, I will configure all our Jenkins instances to be able to provision machines on Amazon. That's the first step, and it should drastically reduce the bill on Azure. The second objective is to move services. For example, let's say we are deploying a database on Azure; maybe we can deploy it on Amazon instead. Depending on the service it will be easy or not, it really depends, it will be mainly dependent on the service. But in any case, I will deploy a Kubernetes cluster on Amazon so we can start moving services one by one and try to reduce the load on our Azure account. I don't want to get rid of the Azure account, because it's useful for multiple reasons. The idea is just to be sure that we are below 10K per month. We have multiple resources provided by different people, but the main target is that we have to be below 10K per month. That's the goal.

So, cost-focused as the first objective, and then an orderly transition. Excellent. Thanks very much.

While we are speaking about sponsoring, another thing that I did not officially announce is that Amazon is also sponsoring some resources for the ppc64le architecture and for s390x. Those are the machines that you started working on, right? Mark, you look puzzled.

I thought it was IBM who did that sponsorship rather than Amazon. Did you say Amazon and I missed it?

I said Amazon, but I meant IBM.

Oh, okay.

IBM is sponsoring that infrastructure. This is something that I totally forgot to highlight last week. And we still have to configure ci.jenkins.io to use those machines.

Well, IBM needs to give us a different PowerPC setup. Right now they've provided us one that goes through an SSH jump host, and SSH jump hosts are not terribly convenient to manage as Jenkins agents. Yes, I've got it working, I'm running my builds and I like it. It's very fast, they've got a nice computer there. But that jump host agent configuration is harder to do.

Yeah, it's not totally working there, so we are still in the process with that. True. Good point.
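As a side note on why the jump host setup is awkward for agents: every connection to the build machine has to be tunnelled through the jump host first. Here is a minimal Python sketch of that extra hop using Paramiko; the host names, user, and key path are placeholders, not the actual IBM-provided machines.

    # Minimal sketch: reaching a build machine that sits behind an SSH jump host.
    # Host names, user, and key path are placeholders, not the real IBM-provided hosts.
    import os
    import paramiko

    JUMP_HOST = "jump.example.com"          # the SSH jump host (bastion)
    TARGET_HOST = "ppc64le-agent.internal"  # build machine, only reachable via the jump host
    USER = "jenkins"
    KEY_FILE = os.path.expanduser("~/.ssh/id_rsa")

    # 1. Open a normal SSH session to the jump host.
    jump = paramiko.SSHClient()
    jump.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    jump.connect(JUMP_HOST, username=USER, key_filename=KEY_FILE)

    # 2. Ask the jump host to open a TCP channel towards the target's SSH port.
    channel = jump.get_transport().open_channel(
        "direct-tcpip", (TARGET_HOST, 22), ("127.0.0.1", 0)
    )

    # 3. Open the real SSH session to the target, tunnelled through that channel.
    target = paramiko.SSHClient()
    target.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    target.connect(TARGET_HOST, username=USER, key_filename=KEY_FILE, sock=channel)

    # 4. Only now can a command (or an agent process) be started on the machine.
    _, stdout, stderr = target.exec_command("uname -m && java -version")
    print(stdout.read().decode(), stderr.read().decode())

    target.close()
    jump.close()

That double connection is roughly what makes the agent configuration harder than a plain SSH agent that Jenkins can reach directly.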
Now for a different topic. I don't know if you saw my email regarding mirrors yesterday. Basically, I deployed a new service called get.jenkins.io. It's a mirror; as I said, it's a simple mirror, so the files are automatically generated and uploaded each time we have a release. It's exactly the same content that you see when you go to the existing mirrors. The main difference is the way the files are served. As I said in my mail, there are two behaviours. We can either have the index just list all the files in a specific directory, or, for a specific file, let's say this one, either you click on the file and you are automatically redirected to the closest mirror, or you can specify some parameters and then you get different visualizations. For example, if you ask for the mirror list, you can see all the mirrors that have that specific file, and you have a map. But the most important thing is that you have the list of the mirrors and you have the hash, to be sure of the files you are getting from the mirrors. And most importantly, if the hash does not match between MirrorBits and the remote mirror, it will not give you that mirror's copy; it will instead fall back to a specific mirror.

It's working really well right now for the release artifacts. For example, if you look at the files that are generated for the update center, to make it a bit clearer, as you can see in this case the files do not match between my local MirrorBits instance and the different mirrors for some reason. So MirrorBits really becomes the single source of truth, and you can possibly increase download speed. It's really easy to deploy, but I still have to get better control over how often the files are uploaded to MirrorBits, and that's what I'm missing here. So if you have any experience with that...

The mirrors right now, and this is a service that I've been talking about for quite a long time, host the WAR files and the packages. The specificity of the packages is that they also contain all the metadata for the OS package managers, like the Debian and Red Hat ones, so we also have to generate those files on these mirrors. And then you have some package mirrors, and then you have the update center. Right now I'm really focusing on the mirrors to be sure that we can distribute packages. For the different views that you have here, you have the mirror list and you have stats, for example how many times a specific file was downloaded. Right now it's mainly me testing it; I don't think anyone else is testing it at the moment. And you also have the stats for specific artifacts. Do you have any questions regarding this service? I don't hear you. I feel like I'm alone.

So the idea is, I would like to take some time to do more testing, and if it's working perfectly, I would just replace the existing mirrors with this service. In this case the URL does not really matter; it's more about the service that we deploy, and we can easily switch it over later. That's one of the things I have been working on last week. The biggest challenge that I face right now is that it takes quite a lot of time each time we want to upload the files to that service, because I'm using Azure File Storage for that, and we have quite a lot of files.
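To make the hash check described above a bit more concrete, here is a rough Python sketch of the effect from a client's point of view: a file is only accepted from a mirror if its SHA-256 matches the reference checksum, otherwise the download falls back to the canonical source. The URLs and the way the reference checksum is obtained are assumptions for illustration, not the actual get.jenkins.io implementation (MirrorBits does this check on the server side).

    # Sketch of the verification idea: accept a file from a mirror only if its SHA-256
    # matches the reference checksum, otherwise fall back to the canonical source.
    # The URLs and checksum handling are illustrative assumptions, not the real setup.
    import hashlib
    import urllib.request

    ORIGIN = "https://get.jenkins.io"          # canonical source used as fallback (assumed)
    MIRRORS = [
        "https://mirror-a.example.org",        # placeholder mirror URLs
        "https://mirror-b.example.org",
    ]
    PATH = "/war-stable/latest/jenkins.war"    # example artifact path

    def fetch(url: str) -> tuple[bytes, str]:
        """Download a URL and return its content together with its hex SHA-256."""
        with urllib.request.urlopen(url, timeout=30) as resp:
            data = resp.read()
        return data, hashlib.sha256(data).hexdigest()

    def fetch_verified(path: str, reference_sha256: str) -> bytes:
        """Try each mirror; on checksum mismatch or error, fall back to the origin."""
        for mirror in MIRRORS:
            try:
                data, digest = fetch(mirror + path)
            except OSError:
                continue                       # mirror unreachable, try the next one
            if digest == reference_sha256:
                return data                    # mirror copy matches the reference
            print(f"checksum mismatch on {mirror}, ignoring that mirror")
        # No mirror had a matching copy: serve the file from the canonical source.
        data, _ = fetch(ORIGIN + path)
        return data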
Each time we trigger an upload, it takes between 10 and 15 minutes to upload and to check which files we actually need to upload.

The next thing that I have been working on is more related to the release part of the project. What was the status of that? I don't remember if I already shared this information. Basically, I created a new PR for this; I was wondering how to do it. The idea was to have a specific release line, so we can use the release environment to build and produce new releases. This is something that I would like to start as soon as possible, so for that I started working on the release environment with Jenkins. I just have to add it to the application.

I assume nothing new from CDF or the people doing the legal wrangling there?

Let me just repeat the question, because I forgot to start the recording again. The question was: what's the current status regarding the code signing certificates? I asked the same question yesterday and it's still the same. They are currently working and interacting with the legal department and with DigiCert, so it can take weeks or months, no visibility on that. That's why I would like to trigger a release on the experimental release line, so we can at least try to release even if it's not the official one. We keep the current release process, and then we temporarily have a second one just for validating that the release process is working as expected. I wanted to do a demo of what I coded recently, but because my new Jenkins environment is not really ready yet, I have nothing to show there. So I will do the demo next week when everything is ready.

Do you have any questions, or do you want to bring up a specific topic? From my point of view, we covered all the points that we had for this meeting, so feel free to raise a concern or whatever.

I'll probably attempt to schedule some time with you, not tomorrow morning but the next morning, my-time morning, to go through more of Datadog and the monitoring we're using there. It looked like some of the things that I was thinking we need are actually already covered in the monitoring. I just got alerts about long build queues, for instance a large number of jobs waiting to acquire a machine. It's already being alerted on, so I've got to learn more about it. I'll schedule a separate session, we'll record it and keep it for posterity.

So yeah, regarding the big job queue, this is something that I saw this weekend. Basically it happened when we tried to build all the Jenkins core PRs. I saw a lot of open PRs, and because we reduced the number of virtual machines that we can deploy at the same time, the job queue became really huge.

Well, and it did eventually drain. It took a day or two to drain, so it took a while, but it wasn't growing never-ending; it reached a peak and then reduced.

Yep. We definitely have some work to do on the monitoring, but from my point of view the focus will be to transition ci.jenkins.io to Amazon within the week, and to work on the release environment, specifically for Jenkins core, on the Jenkins instance. This will be the focus for the coming week. So I guess, if nobody else has a topic they want to bring up, we can stop the meeting here. On time this time. Thanks for your time.
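About the upload time mentioned at the start of this section: part of those 10 to 15 minutes goes into checking which of the many files actually need to be uploaded. Here is a minimal sketch of that kind of comparison, assuming an index of path-to-checksum is kept from the previous run; the helper names and index format are hypothetical, and the real target is Azure File Storage, which is not modelled here.

    # Sketch: decide which local files need to be uploaded by comparing them against
    # an index saved after the previous upload. Helper names and the index format are
    # hypothetical; the real service uses Azure File Storage, not modelled here.
    import hashlib
    import json
    from pathlib import Path

    def local_index(root: Path) -> dict[str, str]:
        """Map each relative file path under root to its SHA-256 digest."""
        return {
            str(f.relative_to(root)): hashlib.sha256(f.read_bytes()).hexdigest()
            for f in root.rglob("*") if f.is_file()
        }

    def files_to_upload(local: dict[str, str], previous: dict[str, str]) -> list[str]:
        """Return paths that are new or whose content changed since the last run."""
        return [path for path, digest in local.items() if previous.get(path) != digest]

    if __name__ == "__main__":
        previous = json.loads(Path("previous-index.json").read_text())  # hypothetical saved index
        local = local_index(Path("./mirror-content"))                   # hypothetical local tree
        print(f"{len(files_to_upload(local, previous))} of {len(local)} files need uploading")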
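And on the build queue alerts mentioned just above: the queue length that Datadog alerts on can be read directly from the Jenkins JSON API. Here is a small sketch of such a check, assuming a Datadog API key is available in the environment; the metric name and tags are made up for the example, not the actual monitoring configuration.

    # Sketch: read the current ci.jenkins.io build queue length from the Jenkins JSON
    # API and push it to Datadog as a gauge. The environment variable names, metric
    # name, and tags are assumptions for illustration, not the real monitoring setup.
    import json
    import os
    import time
    import urllib.request

    JENKINS_URL = os.environ.get("JENKINS_URL", "https://ci.jenkins.io")
    DD_API_KEY = os.environ["DD_API_KEY"]

    # 1. Ask Jenkins how many items are currently waiting in the build queue.
    with urllib.request.urlopen(f"{JENKINS_URL}/queue/api/json", timeout=30) as resp:
        queue_length = len(json.load(resp).get("items", []))

    # 2. Send that value to Datadog as a gauge metric.
    payload = {
        "series": [{
            "metric": "jenkins.queue.length",          # made-up metric name
            "points": [[int(time.time()), queue_length]],
            "type": "gauge",
            "tags": ["controller:ci.jenkins.io"],
        }]
    }
    req = urllib.request.Request(
        "https://api.datadoghq.com/api/v1/series",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "DD-API-KEY": DD_API_KEY},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        print("Datadog response:", resp.status)

    print(f"queue length: {queue_length}")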