So, hi everybody. Let's start this new infrastructure meeting. We have four topics to discuss. There is no real order, but the first one is the IBM infrastructure and the current work. As far as I know, maybe I missed something, but we still have access to the temporary machines. Jim, can you confirm this? Yep. The s390 machine is yours, and that's not temporary. You guys have that. Okay. Power is temporary. We're talking to legal, and you probably know how well that goes. Yes. So it might take a while before there is a dedicated Power machine for you, but the temporary machine you have is not going anywhere for the time being, so feel free to use it. Okay. But you can confirm the s390? The s390, it's in spec, so we can start working on it. Yeah, the s390 is yours. That's not going to change. It's hosted and already being cared for. Power will eventually be switched over to a full-time server. I created a new epic for this specific work, which is INFRA-2519, so if you are interested, feel free to follow that. Basically, what we are looking for right now is an Ansible script, or at least some script, that we can use to configure that machine. As far as I know, Mark White already did some experiments with that specific machine. Is that right? Yeah, I'm actually still using it, and I have learned several things from the experience. Jim, I think you had advised me earlier that we need to use Git LFS from the standard download site, not from the package provided by the OS, and I can confirm that, absolutely: the modern Git LFS is much better for the tests I'm running than the outdated Git LFS included with the OS. Also, OpenJ9 is crucial on s390. I haven't tried OpenJDK with HotSpot on s390, but AdoptOpenJDK's OpenJ9 performs very reasonably. The bundled JDK is completely unacceptable; it's really painful.
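The Git LFS advice above could be captured in the kind of Ansible script the epic asks for. A minimal sketch, assuming a tarball layout and version; the URL, version, and paths are placeholders, not the actual INFRA-2519 content:

```yaml
# Hypothetical Ansible tasks: install Git LFS from the upstream release
# rather than the outdated OS package. Version/URL are example values.
- name: Remove the distribution's git-lfs package
  package:
    name: git-lfs
    state: absent

- name: Create a staging directory for the upstream tarball
  file:
    path: /tmp/git-lfs
    state: directory

- name: Download and extract Git LFS from the upstream release
  unarchive:
    src: "https://github.com/git-lfs/git-lfs/releases/download/v2.10.0/git-lfs-linux-s390x-v2.10.0.tar.gz"
    dest: /tmp/git-lfs
    remote_src: yes

- name: Install the git-lfs binary system-wide
  copy:
    src: /tmp/git-lfs/git-lfs
    dest: /usr/local/bin/git-lfs
    mode: "0755"
    remote_src: yes

- name: Register the LFS filters globally
  command: git lfs install --system
```

Whether an s390x build is published for a given release is an assumption to verify against the upstream download page.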
Yeah, I think that does change once you get past Java 8. Anything above 8 should have the JIT built in, so s390 performance should be good. So if you switch to 11, feel free to use anything you want, because it should work. But for Java 8, it didn't get into the release, or I don't know what happened, so you have to use the Adopt version to get that performance. Mark, if you could share a script with all the instructions to configure that machine, that would be nice, so we can move forward. What was that infra ticket again? You said there is an infra ticket. Yeah, there is an epic; if you put the link in the Google Doc, it's INFRA-2519, and from there you should find the right ticket. That's an epic. Interesting. INFRA-2519? Yes. Oh, it would help if I typed correctly. Sorry, got it. Okay. So, once the agent is configured, we can add it to ci.jenkins.io. I put all the instructions in one of the tickets; it should be pretty easy to find. But yeah, nothing more to add here. The next topic is about moving ci.jenkins.io to AWS. The main work done so far was to create a specific AMI that we can use from ci.jenkins.io. So the main focus right now is to move all the agents, the virtual machines, from Azure to Amazon. I configured ci.jenkins.io with the right credentials. Tim and Alex Holt made the images for Amazon, so it seems to be working now. I configured one instance, and it was working a few minutes before the meeting. So I should be able to put more machines there and reduce the number of Azure machines that we provision. I'm not planning to remove Azure; I'm still planning to keep it as a fallback, but I would like to keep that number of machines as low as possible. Once the machines are working correctly, I will add the Windows machines, and I will probably also add ARM64, so we can use the specific ARM instances on Amazon.
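For context, attaching EC2 agents to a Jenkins controller comes down to a cloud definition like the following configuration-as-code sketch. Everything here is a placeholder, not the real ci.jenkins.io setup, and field names are approximate to the EC2 plugin's JCasC schema:

```yaml
# Hypothetical JCasC fragment for an EC2 agent cloud; all IDs are placeholders.
jenkins:
  clouds:
    - amazonEC2:
        cloudName: "aws-ci"
        region: "us-east-2"
        useInstanceProfileForCredentials: false
        credentialsId: "aws-agent-credentials"   # stored in Jenkins credentials
        templates:
          - ami: "ami-0123456789abcdef0"         # the AMI built for the agents
            description: "linux-amd64-agent"
            type: "t3.xlarge"
            securityGroups: "jenkins-agents"     # as noted above: a security group
            associatePublicIp: true              # plus public IPs is all agents need
            labelString: "aws linux amd64"
            mode: EXCLUSIVE
            remoteFS: "/home/jenkins"
            numExecutors: 1
```

This illustrates why the speaker calls agent migration "quite simple" compared with deploying clusters, which would need real Terraform code.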
So it could be useful if you want to do some testing on that specific architecture. Yes, Mark. And the master will likely continue on Azure for the foreseeable future? Not that I care, because you keep me hidden away with DNS references, but I assume the master stays on Azure for a while. Yeah, there is no plan right now to move the master. It's working, and the main reason is that moving the agents to Amazon is quite easy: we just have to specify the security group, and we have to use public IPs anyway, so the configuration is quite simple to put in place. If we want to deploy more resources, like a cluster or whatever, we would need to put some Terraform code in place for that, and it would require more work. So right now the focus is just to move the actual builds to Amazon compute. But the next step, definitely, I would be really happy if we could deploy a second Kubernetes cluster on Amazon and start using it. Thanks. Any questions regarding the AWS migration? Just one question: so we'll have ARM64 from AWS; there is no current cloud offering for ARM32, right? Yes, I think you're right. Okay. Yeah, a lot of people use ARM32; we were talking about it on IRC. Have you guys seen any usage of that? I guess we have some ideas, but I thought we could find some stats about that. Is that right? Is there a way we can get that information? We're not hearing you, but you're not muted, oddly enough. Better now? Yes. Okay. So, we could get some information from our stats, and if somebody spends time on it, I believe we could make this information public, at least for commonly used platforms. But right now you cannot get anything from the stats website. So you think there is data being gathered that might be available in the raw data store, Oleg, that could tell us if people are running on ARM32?
I believe so, because the platform is part of the metadata being submitted. Also, plugins like the support-core plugin collect this information, so you could extract it from that as well. Do you guys also pull anything from Docker Hub? I don't know what kind of stats they give you; is it just overall pulls, or do they break it down by architecture? We don't have official images for ARM right now on Docker Hub, so the statistics wouldn't be that high. I believe we have never promoted the experimental ARM images. But it would be interesting to publish an image anyway and see if people are interested. What would be the best way to see? So, going forward with ARM32, if we wanted to produce an image: I know there were talks, I think Oleg and Mark, you guys were talking about running some of the hardware from houses or apartments; it was on the mailing list a long time ago. Is that something we would do for ARM32? Like, we want to build on that architecture, and we get community support in terms of hosting or infrastructure for ARM32? Yeah, I think in terms of market interest, the interest is much stronger in the ARM64 stuff that Amazon is providing. I'm not worried about hosting for ARM32, but if there's a press for it and a community drive for it, we can see at that time. For me it was a fun conversation, but I don't think we'd be safe relying on hardware contributed from basements; typically we've tried to keep ci.jenkins.io very safe in terms of what it uses as resources. That makes sense. Yeah, and one of the other reasons why I'm not comfortable with machines hosted in one person's basement is that if that machine goes down, there is no way to replace it. So I prefer a more stable situation. Yeah, you're totally right. And I think ARM64, seriously, with all the new Raspberry Pis, I think three and up...
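Extracting the ARM32 numbers from the submitted metadata, as discussed above, would amount to a small aggregation over the usage records. A hypothetical sketch: the record shape here is a simplified assumption, not the real Jenkins usage-statistics schema:

```python
import json
from collections import Counter

def count_architectures(lines):
    """Count node architectures across anonymized usage records.

    Each line is assumed to be a JSON object with a "nodes" list whose
    entries carry an "arch" field -- a simplified stand-in for the
    platform metadata the usage-statistics service actually collects.
    """
    counts = Counter()
    for line in lines:
        record = json.loads(line)
        for node in record.get("nodes", []):
            counts[node.get("arch", "unknown")] += 1
    return counts

# Example with fabricated records:
sample = [
    '{"nodes": [{"arch": "amd64"}, {"arch": "arm"}]}',
    '{"nodes": [{"arch": "amd64"}]}',
]
print(count_architectures(sample))  # Counter({'amd64': 2, 'arm': 1})
```

Publishing such per-platform counts would answer the ARM32 question raised here without exposing any individual installation.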
Sorry, Alex and I were talking about 64-bit ARM, ARM64. So I don't really know where other people are pulling ARM hardware from, unless they're using some sort of cloud service. Yeah, so I think everything has been said on that infrastructure topic. I will just try to use the AMI on ARM64, and I will put new labels on ci.jenkins.io so we can do some tests from there. The next topic that I want to discuss is modernizing the mirrors. You may have seen this, but I deployed one mirrorbits service that we can use, and I'm currently working on it. The biggest limitation that I have right now is that mirrorbits needs to contain all the files that we provide, because it creates an MD5 hash for each file, and then, if a remote mirror has the same file with the same hash, it can redirect you to the closest mirror to get the file; otherwise it does not give you the file at all. While this works for most of the mirrors, it does not work for all of them, because some mirrors only keep the files for the last year or so, for example all the files that we can download from archives. So the plan is to deploy archives.jenkins.io on Kubernetes, containing every file, so that if a random mirror does not contain the file, that specific archives service always has it. That's the biggest limitation I have right now. The second thing that I discovered while doing some tests is that we use a lot of .htaccess rules on our mirrors, so I cannot use the NGINX container, or Traefik, or whatever; I need to rely on Apache. But yeah, if you want to follow the work going on there, just go on the jenkins-infra charts repository; I will put the link to that specific PR, and it's evolving right now. Once this specific PR is merged, we should be able to promote it and run more tests.
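The per-file hash check described above, where mirrorbits only redirects to a mirror whose copy matches the origin's checksum and otherwise needs a fallback that keeps every file, can be sketched roughly like this. The function and mirror names are illustrative, not mirrorbits' actual API:

```python
import hashlib

def md5_of(data: bytes) -> str:
    """MD5 digest, as mirrorbits computes per file when scanning its repository."""
    return hashlib.md5(data).hexdigest()

def pick_mirror(origin_hash: str, mirrors: dict, fallback: str) -> str:
    """Return the first mirror whose copy matches the origin hash,
    else the fallback (e.g. an archives server that keeps every file).

    `mirrors` maps mirror name -> hash of its copy, or None when the
    file is missing, as on mirrors that prune releases after a year.
    """
    for name, file_hash in mirrors.items():
        if file_hash == origin_hash:
            return name
    return fallback

origin = md5_of(b"jenkins.war contents")
mirrors = {
    "mirror-a": None,                           # pruned its old files
    "mirror-b": md5_of(b"jenkins.war contents"),
}
print(pick_mirror(origin, mirrors, "archives"))  # mirror-b
```

This is exactly why the plan needs an archives.jenkins.io that holds everything: when every regular mirror has pruned a file, the fallback is the only place the redirect can land.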
Any questions regarding the mirrors stuff? One time, two times, three times; apparently not. I'll just note it down and update the documentation; my connection is super slow, so I'll update the docs later. I will just continue. So the fourth topic, the last topic that I want to discuss, is the automated Jenkins release. I did some work to refactor release.ci.jenkins.io, which is the Jenkins instance used to trigger the release and the packaging process. There was one job running on that specific Jenkins instance which was used to configure the Kubernetes cluster. So I created a separate Jenkins instance called infra.ci.jenkins.io; that instance is allowed to manage all our Kubernetes resources. Now that the job is separated, I can be more restrictive on release.ci.jenkins.io and only allow people who should be able to trigger releases to have access to that instance, at least to trigger a job; everybody else will still be able to read the outputs. The next step now is to push artifacts, and to be sure that when we publish artifacts, we publish them on the experimental release line, so we do not affect the current release process. Once I've validated this, I will modify the publishing script to push artifacts to mirrors.jenkins.io, I mean, the service that we use to distribute packages. The idea is, once we publish to the experimental release line and validate that it's working correctly, we can start to use that to trigger releases based on the Jenkins Git repository and try to be as close as possible to the real process. Right now I'm still releasing versions on my own fork, to be sure that I do not affect the current release process. Any questions? Nope. Then let's continue.
And the last topic that I want to show you is a small demo of a tool that I wrote recently. I will just share my screen; it's going to be easier. Sorry about that. Can you see my screen? Nope. Not yet. Yes. Yes. So you can see it, okay. Basically, it was a small project to automate the deployment of our applications. The idea is to have a small CLI where I can specify some rules, and based on those rules, it can update a specific YAML configuration. I'll just show you the current output. Where is this? So the idea is a small tool that I call updatecli; we can either specify a directory or a configuration file. If we specify a directory, it will just look at every file inside and apply the different rules. In this case, I have three examples. The first one is to update every Jenkins instance that we have. We specify a source; in this case, I want to retrieve the latest version from a Maven repository, which is version 2.226. We specify a condition: I want to be sure that the Docker image with that specific tag exists; in this case, there is a version that we can use. And then it will update every target. A target is just a file located in a Git repository, and I have to specify a specific key and value. So in this case, it will check that the Jenkins master image tag has the right value. In the latest run, the version was already updated; otherwise, it would just change the value and push the new version directly. That's why you saw some PRs on the jenkins-infra charts repository to automate the deployment. Currently, the sources that I can use are either fetching a version from a Maven repository or fetching a version from a GitHub release. So, for example, the plugin site is also automatically updated, as long as there is a Docker image that contains the right tag. The configuration is quite simple. The configuration lives in the charts repository; I just created a directory called updatecli.d.
And so, for example, for the plugin site, I specify the source coming from the GitHub release: I want the latest version from jenkins-infra/plugin-site-api. I want to check that a Docker image exists with the tag retrieved from the GitHub release. And then I just specify every file that I want to update, with the key, the value, and a message which is used in the Git commit. I can specify that I want to update a Git repository located on GitHub, for example, but I can also just specify a plain Git repository. If it's GitHub, right now it also opens a PR; otherwise, it just pushes directly to a specific branch. And that's it. Right now, as I said, it's working for the Jenkins plugin site, but it would be nice to refactor the other Docker image tags that we have in the jenkins-infra organization to use GitHub releases, if possible. And otherwise, I'm really looking for feedback on this model, if you have any questions. So that was the last thing that I wanted to share. So, if you don't have anything else that you want to discuss... I have a minor question. Yes. Right now I'm working on the roadmap proposal for the Jenkins project. Okay. And maybe at the next meeting I would like to discuss what the roadmap items for the Jenkins infrastructure team would be. I can pull some from the December contributor summit materials. Okay. But yeah, we could discuss them a bit more. Also, an infrastructure-related topic is the roadmap engine, because one question in the proposal is whether the roadmap JSON files should be provided from jenkins.io as a static bundled resource, or whether they should be hosted in a separate repository and, for example, provided by a CDN like jsDelivr. So if somebody from the infrastructure team is interested in discussing this topic, again, I would suggest covering it at the next meeting. Okay.
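The plugin-site manifest described above, a GitHub-release source, a Docker-image condition, and YAML-file targets, would look roughly like this. A sketch only: the field names approximate updatecli's manifest schema and the file paths and keys are assumptions:

```yaml
# Hypothetical updatecli manifest (e.g. updatecli.d/plugin-site.yaml).
# Field names are approximate; paths/keys are illustrative placeholders.
source:
  kind: githubRelease            # fetch the latest release tag
  spec:
    owner: "jenkins-infra"
    repository: "plugin-site-api"

conditions:
  imageExists:
    kind: dockerImage            # only proceed if the image tag exists
    spec:
      image: "jenkinsciinfra/plugin-site-api"

targets:
  chartValues:
    kind: yaml                   # update a key inside a values file
    spec:
      file: "config/plugin-site.yaml"
      key: "image.tag"
    scm:
      github:                    # open a PR against the charts repository
        owner: "jenkins-infra"
        repository: "charts"
        branch: "master"
```

Run against a directory, the tool would apply every such manifest in turn, which matches the behavior demoed above: unchanged values are left alone, and new versions are pushed or proposed as a PR.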
So, the trade-off there between hosting it inside jenkins.io and hosting it in a separate repository: is that a bandwidth consideration or an access control consideration? So, how it happens right now: basically, I reused the engine from the ocean roadmap. Every time a user navigates to the page, JavaScript on the client side downloads a JSON file and renders it, and that's the page. There is no immediate concern about bandwidth or whatever, because we don't expect millions of users to go to this page. It's mostly about maintainability, because if the roadmap were a separate repository, it might be easier to maintain from the administration side: pull request reviews, visibility, because it would be a separate repository instead of a JSON file hidden somewhere in the huge jenkins.io repo. But yeah, that's a minor thing. How often do you think you will update that roadmap? I think it will be a moving target, with several updates every week; at least that's my wishful thinking, because we really want to have it dynamic. The main advantage I see in keeping it under jenkins.io is that everything is in one place; it's just easier to look at the JSON file and to update it. So personally I would be more in favor of putting it in the jenkins.io repository. That's my current position. I can see moving it outside if somebody votes for that, because from a visibility standpoint it would be preferable. Let's say that's how Jenkins Enhancement Proposals are implemented, in a separate repo; we have some issues with the process, but in principle I think it's better than having them as part of jenkins.io. But yeah, there are trade-offs. So right now I'm just doing the simple implementation, based off the ocean roadmap more or less, but in the future there might be proposals to move to a separate repo, and then CDN and other topics may arise.
Thank you. But yeah, it's not something that will need additional effort from the infrastructure team, because either you want a specific application for it, and then you provide, let's say, a container image that we can deploy in the Kubernetes cluster, and that's all, or we just put it in jenkins.io. Yeah. So we already use a CDN, for example jsDelivr, on the plugin site. Actually, do we use a CDN on the plugin site? We do use it. The plugin site serves images from it; I'm not sure whether that's still the case in the latest version, but when it started in September, all images were served from the CDN. And which CDN? jsDelivr, because we were serving Javadoc documentation, so we were using a CDN which provided out-of-the-box support for GitHub repositories; the entire JavaScript ecosystem works like that. That's why we didn't need anything from the infrastructure team; we just used the features provided by an existing CDN on top of GitHub. Okay, so that could also be an option. Yeah. I don't expect somebody to solve an issue with the Azure CDN, if you're concerned about that. No, we don't have Azure CDN. Exactly, and that's my understanding, so I'm not even touching that. No. I sent an email yesterday. The last update was four weeks ago, when they said I should receive the contract, and since then, nothing, so I tried to check the current status. Yeah, I sent an email yesterday, or the day before; I think it was yesterday. To see what the current status was. The reason why I don't want to use the Azure CDN, or the Amazon CDN, or whatever, is that if we use it for one site, we will want to use it for every website, and depending on the website, it can be quite expensive. Sorry. Getting a sponsored one is better. Yeah, that is something that I would like to have. Yeah.
But yeah, I'm still trying to find a solution with Fastly. Yep. Thanks, Oleg, for your intervention. Any last topic? Yeah, one question. I didn't attend last week's meeting, but I did see a topic about IBM sponsoring. Was that just concerning the s390 and Power resources? No, it was the PPC64 and the s390. Okay, which is, I mean, for me, sponsoring somehow; clearly sponsoring, you're putting money into the project through the compute resources that we use. So yeah. I was trying to talk internally about whether you guys could use IBM Cloud for some x86 resources, but that conversation has not gone anywhere. That's why I was wondering whether you wanted x86 resources. No, right now the main need is those architectures. Yeah, that's the current state. Thanks. Have a good weekend evening for some, and a good day for the others. Bye bye. Bye.