Okay, hi everybody. Welcome to this new Jenkins infrastructure meeting. Today on the agenda, the first thing I want to announce is that we got a sponsorship from DigitalOcean for the infrastructure. Damien led that effort, and it probably helped that one of our contributors is a DigitalOcean employee now. That's really great. With that money, we'll add a mirror to our mirror infrastructure. We still have to decide where to deploy it; there are suggestions to deploy it in India or somewhere else in Asia to improve the situation over there, so that's something that needs to be decided. The second thing is that we'll have the ability to deploy a small Kubernetes cluster, and we are planning to use it for agents for our CI infrastructure, to reduce our dependency on Azure. If you are interested in working on that, I just created the DigitalOcean account, so this is a very new project we are starting.

On the agenda today we have a few things. The first one: yesterday, Monday, I did an upgrade of the Puppet version. We are using Puppet Enterprise, still on the free tier right now. The motivation for doing that upgrade yesterday was that we have to give back the Rackspace accounts, and we were working to recreate archives.jenkins.io on a different machine. We wanted to test Oracle Cloud with their Arm machines, because they are very cheap, it's Arm, and we get one gigabit of network bandwidth, which is the biggest network bandwidth for the size of the machine. But the limitation was that we had to move to Ubuntu 20.04, and for that we had to be on a recent Puppet Enterprise version, which is 2019.8. After a quick evaluation, it was not a big upgrade. I have some notes, and there is a link to the upgrade notes; it was pretty trivial.
In the notes I also collected some tasks that we'll have to do for the next major upgrade, but we are on track with that version, which is great. The next major version is Puppet 7, which should be ready, I think, in September, so we are not too far in the past with this Puppet code, which is nice. So right now I'm going to reorganize my notes.

What is the next step? We are almost ready to move archives.jenkins.io to Oracle Cloud. There are a few minor pieces of code that I would like to write first. The first one: there were some manual changes applied to the archives.jenkins.io machine, and we have to write the Puppet code for them. The second one is how we provision the data on archives.jenkins.io. I opened a PR for that; I don't think I put the link in these notes, but I can add it later, and I'm open to feedback. Just briefly, to understand the big picture: right now, each time we generate new packages, we push everything to a specific machine, and from that machine we push the artifacts to different locations. So that central machine is a key component of our infrastructure. What I'm proposing is that, instead of having that machine push artifacts to different locations, we have each mirror fetch its own artifacts on a regular basis. And if we want to trigger a sync, we still have the ability to run a script. That's what my PR is about, so feel free to comment there. Any questions so far? So yeah, I hope to migrate off that Rackspace machine by the end of the week, or at the latest next week. That's the target for me.

The next major topic I want to briefly talk about is the issues we had with Artifactory over the past few weeks. Last week, JFrog upgraded it; apparently the issue was related to the size of our database.
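The pull-based sync described above could be sketched as follows. This is a minimal Python sketch, not the actual PR's implementation: the upstream host, local path, and rsync flags are all assumptions for illustration.

```python
import subprocess

# Hypothetical upstream host and local mirror path -- illustration only.
UPSTREAM = "rsync://mirror-source.example.org/jenkins/"
LOCAL_PATH = "/srv/mirror/jenkins/"

def build_sync_command(upstream: str, local_path: str) -> list[str]:
    """Compose an rsync invocation that pulls artifacts from the
    central machine, instead of the central machine pushing to us."""
    return [
        "rsync",
        "--archive",   # preserve permissions, timestamps, symlinks
        "--delete",    # drop artifacts that were removed upstream
        "--partial",   # resume interrupted transfers
        upstream,
        local_path,
    ]

def sync(upstream: str = UPSTREAM, local_path: str = LOCAL_PATH) -> int:
    """Run one sync pass; meant to be called from a cron job or systemd
    timer, or manually when a release must propagate immediately."""
    return subprocess.run(build_sync_command(upstream, local_path)).returncode

if __name__ == "__main__":
    # Only print the command here, so the sketch is safe to run as-is.
    print(" ".join(build_sync_command(UPSTREAM, LOCAL_PATH)))
```

Running the same `sync()` entry point from both a timer and an on-demand script gives the "regular fetch plus manual trigger" behavior mentioned above.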
Yeah, just for context: several weeks ago, JFrog migrated the instance we were using from somewhere, I have no idea where, because I don't have access, and they moved us to the JFrog cloud. And I guess they undersized a few components, such as the database. So what happened is that they increased the size of the database, and we were then able to finalize the security release; that happened last week. But what we noticed in the process is that what used to take us one or two seconds took one hour, so we now have to investigate with JFrog to understand why it's taking one hour. That's the current state of that issue. What we want to be sure of is that the release process is not affected for the next security release; that's our target.

The next topic is about the container images, and I think Damien is better placed than me to give a quick update. Yes, so thanks mainly to the team's work, we started experimenting with building multi-architecture images. We are currently working on being able to build on an Intel machine only, but produce the images for Arm and other platforms, using Docker buildx capabilities. We are playing around with that; it mainly involves enabling QEMU. And the reason we want to do that on ci.jenkins.io is that we don't have a lot of the specific machines, like the IBM machines and the Arm machines, and we want to allocate those machines only for trusted.ci, where the images are pushed and where security is a concern. Building with QEMU in the general day-to-day pipeline is completely okay, even for executing the images and running the test harness. As you know, there are some issues with the test harness: there were bad tests that had to be removed, or parallelization to enable. There is still some work to do on performance; the team is working on that.
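The buildx-plus-QEMU approach described above could be sketched like this. It is a hedged illustration: the platform list and the push/load policy are assumptions, not the project's actual pipeline code.

```python
def buildx_command(image: str, platforms: list[str], push: bool = False) -> list[str]:
    """Compose a `docker buildx build` invocation that cross-builds one
    Dockerfile for several architectures on a single Intel machine.

    QEMU emulation must be registered beforehand, e.g. by running the
    tonistiigi/binfmt image or installing qemu-user-static, so that
    non-native build steps can execute.
    """
    cmd = [
        "docker", "buildx", "build",
        # One comma-separated list produces a single multi-arch manifest.
        "--platform", ",".join(platforms),
        "--tag", image,
    ]
    if push:
        # Pushing would only happen from the trusted environment;
        # day-to-day CI builds stay local.
        cmd.append("--push")
    cmd.append(".")
    return cmd
```

For example, `buildx_command("example.org/agent:latest", ["linux/amd64", "linux/arm64"], push=True)` yields a command that builds both architectures and pushes one manifest list.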
The main change is being able to move all the listing of images into a centralized file, instead of duplicating that information in the pipeline's build and test steps. Alas, that involves a lot of make and shell scripting, and the Docker buildx tooling is still a bit young and missing features, so we might want to give them feedback. We are working on that; expect something in the upcoming days. Also, thanks to the team, the JNLP agent images for Windows are now using Maven 3.8.1 on ci.jenkins.io, so the virtual machines and the containers built on CI are now using the same Maven version.

About that topic, there is something I want to raise; I can do it right now or a bit later. Okay, so right now we have a repository named packer-images, whose role is to build the operating system images for the virtual machine agents. That repository uses the same provisioning scripts to build both AWS and Azure virtual machines, for Windows and Ubuntu, and also Intel and Arm on certain clouds. My proposal is to build all the container images we are using, like the ACI images, with Packer as well, instead of separate Dockerfiles. The reason is that it's really an issue when a developer, a contributor, has a different environment where you don't have the same Java, or the same Maven, or the same curl, or the same shell or PowerShell version. So the idea is that, in that case, instead of using docker build or buildx or specific Docker tooling, we let Packer handle these builds. I used to do this in previous gigs. There is a cost: it's adding a new builder. Today, for building Windows virtual machines, we have AWS Intel and Azure Intel, so that would add a new one, which is Docker. Just for my own understanding, do you mean building the Jenkins agent images, or also, for example, our Terraform images? Do you mean every Docker image using Packer?
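The centralized image listing mentioned above could look something like the following sketch. The image names, Dockerfile paths, and registry are hypothetical; in the real repository this listing would more likely be a data file (YAML/JSON) consumed by the make and shell tooling.

```python
# Hypothetical centralized listing -- one source of truth that both the
# build and the test stages of the pipeline read, so the set of images
# is never duplicated across pipeline steps.
IMAGES = {
    "maven-jdk8":  {"dockerfile": "maven/jdk8/Dockerfile"},
    "maven-jdk11": {"dockerfile": "maven/jdk11/Dockerfile"},
    "terraform":   {"dockerfile": "terraform/Dockerfile"},
}

def build_steps(registry: str) -> list[list[str]]:
    """Generate one `docker build` command per listed image."""
    return [
        ["docker", "build",
         "--file", spec["dockerfile"],
         "--tag", f"{registry}/{name}",
         "."]
        for name, spec in IMAGES.items()
    ]
```

Adding an image then means adding one entry to `IMAGES`, and every stage that iterates over `build_steps(...)` picks it up automatically.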
No, only the images that are used for ACI, so the base images. Okay. That might have consequences, of course, because sometimes, when we use Kubernetes or ACI, we still need a specific Docker image with just the bare minimum: the most common use case is a pod with multiple containers, where you have one container per command-line tool at most, plus an additional container for JNLP. If we have such a use case, then we will keep using specific images with a specific scope. However, most of the ACI builds should be the same as the virtual machines. Most of the time we use ACI, the container is used as a virtual machine, because it has Java and SSH and PowerShell and a bunch of things; they are kitchen-sink images. So my proposal is about ensuring we always have the same base by default, because we can test Packer's output: when we update Maven, we update it in a single place, and the same goes for whatever other dependency we have. And we keep the specific images, like the ones we have on the infrastructure specifically for Terraform or specifically for Packer. That means deprecating the JNLP agent images on ci.jenkins.io. Do we keep providing them for the community? Because it's a big thing to deprecate them publicly; but we, in CI, on the infrastructure, would stop using these images. Yeah, if we go down that road, we should be clear about why we are deprecating the JNLP agent images. Here I'm not stating we should deprecate the images; sorry, maybe deprecating is not the correct word. It's that we stop using agents with these images. No, my point here is that it's really useful for the Jenkins infrastructure to use the same images the Jenkins community is using, because we test the tools, and that helps us identify issues. I challenge you to find a single test on these images, even in the community. No, I'm not saying that we have tests; I'm just saying that we identify issues and we try to solve those issues.
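Damien's proposal, building and pushing the container base images with Packer's Docker builder instead of a plain `docker build`, could look roughly like the following HCL sketch. The image name, repository, and provisioning script are made up for illustration; the real proof of concept may differ.

```hcl
# docker.pkr.hcl -- hypothetical sketch: build an agent base image with
# Packer, reusing the same provisioning scripts as the VM images.
packer {
  required_plugins {
    docker = {
      version = ">= 1.0.0"
      source  = "github.com/hashicorp/docker"
    }
  }
}

source "docker" "agent" {
  image  = "ubuntu:20.04"
  commit = true
}

build {
  sources = ["source.docker.agent"]

  # Same scripts as the packer-images VM builds, so Java, Maven, curl,
  # and shell versions stay identical across VM and container agents.
  provisioner "shell" {
    scripts = ["./provisioning/install-tools.sh"]
  }

  post-processors {
    post-processor "docker-tag" {
      repository = "example.org/jenkins-agent-base"
      tags       = ["latest"]
    }
    post-processor "docker-push" {}
  }
}
```

Here Packer starts the base container, runs the shared provisioning scripts inside it, commits the result, then tags and pushes it, which is the "packer build, packer push" workflow described below in place of "docker build, docker push".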
And so if we want to do something else, my point is just that we should be clear why. Okay. Something important to keep in mind: it's still building Docker images, and Packer is able to push these images to a registry. So the idea is that instead of docker build, docker push, it will be packer build, packer push. We keep the same workflow, and the images are still available to the public. Except that it's Packer which, instead of reading the Dockerfile, starts an image, provisions it, and then snapshots it, like it does for the virtual machines. And the size is not an issue because, I mean, Java: all the OpenJDK images are heavy. Either it's heavy because of a bunch of layers and the final size is big, or it's built by Packer and it's still the same. So my proposal is, before taking any decision, I will build a proof of concept to show you a concrete example. Definitely, thanks Damien.

Any other topic you want to bring up, Damien, Mark, or anyone else here, regarding the Jenkins infrastructure project? I still have one open. You're freezing for me, so I'm not sure if it's on my side or not. Is it working for you, Damien? Same for me. Yes, same for me. Zoom doesn't want Mark to ask questions. Yeah, so if Mark has connection issues, maybe he dropped from the meeting, and he's hosting the meeting. Okay, right. So I'm counting up to five in my head, and if he's not coming back, then I propose to stop the meeting here. One, two... Okay, so I'm the host now. Let me answer my question: I think we don't have any other topics. So feel free to speak up if anything comes to mind. We'll have another infra meeting next week, and there is a link in this document to the one for next week; feel free to add any topic you want to cover there. And yeah, thanks for your time. See you on IRC. Goodbye. Thank you.