So, hi everybody, I'm starting the recording, and thanks for being here for this new Jenkins infra meeting. A quick follow-up on the last discussion. First, regarding the infrastructure sponsoring, we are still in the process of getting sponsored by Amazon. So right now, before working on ci.jenkins.io and jenkins.io, I'm just waiting for the sponsoring to be officially enabled so we can start talking about that. It's the same for Fastly: from the last discussion I had with Fastly, we have to sign a new contract, and they should send me one. But right now I still don't know the bandwidth we could get from Fastly or what the terms of the contract would be. So it's still ongoing and still under discussion. Regarding the Azure accounts, we managed to be at around 6,300 for the month. The billing closes next week, so we should be around 7,000 for this month, which is way better than previous months, which were around 17,000 for the Azure account. So we managed to reduce the cost, but we still have to do better.

Regarding the discussion we had last week about ci.jenkins.io, we mentioned the work that timja did with the Packer images: creating a very small virtual machine image to speed up the provisioning of those Jenkins agents. We spent some time working on those and we are almost ready to put that in place. The last thing that needs to be done is to create new credentials specifically for that and then configure ci.jenkins.io. So this is something that should come in the coming days. Otherwise, we did not spend more time on the JCasC configuration for ci.jenkins.io; the main reason is that we are still waiting for the Amazon sponsoring to be enabled, so right now we don't spend a lot of time on ci.jenkins.io. We also mentioned last week that we had to renew the Jira license; this is done now, so we should be able to upgrade it again.
This is something that we have to work on in the coming weeks. That's all for the follow-up. Do you have any questions regarding this? Nope, so I guess we can start talking about the agenda items. Where are my notes? So the first item is the update center and GitHub integration. Who brought that to the table? Was it Oleg? Might be, but I'm not sure. Was it recent? Just a second, I'm opening the notes. Oh yeah, it was planned for the previous meeting; I believe we discussed it and the changes are now integrated. Okay, so nothing more needs to be done for this. Those changes were the ones that allow us to use GitHub topics to manage the labels on plugins.jenkins.io, is that right? Yeah, that's right. So I'm not sure why it's on the agenda for this meeting, because it was brought up for the meeting two weeks ago. We have some issues with consistency; just a second, I'll reference the issue for that. But yeah, overall it works. Our main problem is eventual consistency, because once you set the label, it takes up to several days to have the plugin site updated, which is definitely not sustainable. We need to find the reason and improve that. Okay. But yeah, it's a separate topic.

Okay. So just to be sure, the next items in the agenda are still old ones. I don't know if Alex wants to talk about deprecating the Windows 2012 and the Windows Azure agents. Yeah, so we currently have Windows 2012 VMs set up for Azure, and I was wondering what the consensus is on deprecating those. We have Windows 2019, and it's a smaller VM image and faster. So I would like to deprecate the Windows 2012 instances and move to only using Windows 2019. To me, I thought that we were already deprecating those old Windows machines, so I think you can just remove the configuration from ci.jenkins.io. Okay, I will take care of that. And regarding Windows Azure agent testing.
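The topics-to-labels integration mentioned above could look roughly like the sketch below. The `jenkins-` prefix convention and the mapping rule are assumptions for illustration, not the confirmed plugin-site behavior; `fetch_topics` uses the public GitHub REST topics endpoint.

```python
import json
import urllib.request

# Assumed convention (for illustration only): topics prefixed with
# "jenkins-" become labels on plugins.jenkins.io, prefix stripped.
TOPIC_PREFIX = "jenkins-"

def topics_to_labels(topics):
    """Map GitHub repository topics to plugin-site labels.

    Topics carrying the assumed prefix are stripped down to the label
    name; everything else is ignored.
    """
    return sorted(t[len(TOPIC_PREFIX):] for t in topics if t.startswith(TOPIC_PREFIX))

def fetch_topics(repo, token=None):
    """Fetch topics for an "owner/name" repository via the GitHub REST API."""
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    req = urllib.request.Request(f"https://api.github.com/repos/{repo}/topics",
                                 headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["names"]

# Offline example: which labels would these (made-up) topics produce?
labels = topics_to_labels(["jenkins-scm", "jenkins-notifier", "java", "ci"])
```

The eventual-consistency problem discussed above would then live in however often such a sync runs, not in this mapping itself.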
So Mark, I think, has been the only one so far doing some testing. He's brought up a couple of things that I'm going to look into, so I don't think we're ready to move forward to general availability. Let's all continue looking at those issues. Thank you. Yeah. Now, the issue that I had flagged there may not be a general-purpose issue. I don't know how many other plugins mandate that they must have command-line Git installed on the Windows machine. Certainly the Git plugins have to have it; they can't run their tests without it. But we use JGit throughout the ci.jenkins.io infrastructure, so we've avoided installing command-line Git on a number of machines as a result of that choice to use JGit. Yeah, I know of other plugins that do use the Git command line, so I definitely think this should be resolved. I will look into it for sure. All right.

Okay. The next point is regarding the Jenkins infrastructure Docker builds. Jim, do you want to give an update here? Yeah. You might have seen on IRC, and Mark and you, I sent out the term sheet for the s390x resources. And I think Mark put that down for the governance meeting this week, on Wednesday I think, so tomorrow. So that's good. I'm still waiting on the term sheet for the POWER resources, but once we have those, you should have full access to pull those into your infrastructure. In terms of the PR, that's still open; I haven't done anything with it. I guess we really need to wait for these terms-of-use sheets, to get you signed off and get you access to start testing the whole PR that I put out. Okay. Just for context: Jim sent us a bunch of sheets that we need to sign. Basically, in the past it was always on a personal basis, where someone signed such a document. And one of the reasons to move to the CDF was to have a legal entity above us, so we are not personally responsible for that anymore.
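The command-line Git requirement discussed above could be checked up front on an agent with a small pre-flight helper like this one (a hypothetical sketch, not part of the actual CI setup):

```python
import shutil
import subprocess

def git_cli_available():
    """Return the git version string if command-line git is on PATH, else None.

    Plugins that shell out to git (rather than using JGit) need this to
    succeed on the agent before their tests can run.
    """
    exe = shutil.which("git")
    if exe is None:
        return None
    out = subprocess.run([exe, "--version"], capture_output=True, text=True)
    return out.stdout.strip() if out.returncode == 0 else None

result = git_cli_available()
```

A plugin test suite could call this in setup and skip (or fail fast with a clear message) instead of dying mid-test on a missing `git` binary.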
Obviously, the CDF, I think, is not really ready yet to sign that document. So the thing to discuss tomorrow during the governance meeting is basically: should I sign the contract, or should we ask someone at the CDF? But in this case, with this document, I'm not really worried about signing it myself. So yeah, we are just waiting for tomorrow. Could you please send a message to the developer mailing list? Because if you expect anything to be voted on at the governance meeting, there should be a dev-list discussion before. Okay, I can. In this case, it might be enough to just discuss it with the Jenkins board, even without a vote on the public mailing list. I'm not sure what exactly the objective would be there, but yeah. Yeah, so Olivier, if you're okay with it, I can do that, or you can do it, whichever you wish. Yeah, if you can start the discussion on the main dev mailing list, that would be really awesome. All right, will do. It will help a lot. Thanks. That's really it for me, for the Jenkins infrastructure Docker builds.

We talked briefly, I think, about whether we have access to POWER on either Amazon or Microsoft's cloud service; or not POWER, sorry, ARM. To be honest, I didn't check that; normally I think we should have ARM resources, but I forgot to check. So on Azure, they only have ARM64; well, they have other ones, but they're part of this IoT Edge product, and I've tried to figure out whether you can run general workloads on it, and there's no clear documentation on that. So I'm still trying to figure it out; I may contact someone via support and find out if you can just run general Docker-type workloads on there. Okay. So the main thing, though, is that they have ARM64 support, right? And that's usually the common one going forward, right? The Raspberry Pis and other ARM platforms, right?
I haven't really heard of that many people still using ARM 32, or, I don't know what the actual name of it is; ARMv7 32-bit, ARM32v7, that's what it is. Whatever we get, it would be an improvement compared to the current situation, because right now we don't test on ARM at all. Okay. So anything is really an improvement. And I think the official Docker pipeline for official images only has access to ARM64 too; I don't think they have access to ARM 32. I can double-check that, but it might be 64 only. So even if we only get 64, it'll be a good improvement. Yeah, I think we need to figure out whether we even need to support ARM 32; I don't know how many people would be using it. Okay, that's one thing we should look at too: how much effort do we really want to spend on ARM 32? Yeah. We're not officially, as far as I understood it, supporting any other 32-bit architecture, right? We've deprecated 32-bit Windows, haven't we? No, we didn't. Oh, we did not. Okay. So our documentation basically says nothing about the bitness of the operating systems we support. I still have an action item to create a job, or at least a documentation page, to document it. But yeah, right now, if you can run Java somewhere, you can expect that Jenkins runs there, which is obviously not exactly what would happen on embedded platforms. And also, this is more for Docker agents, not necessarily for running the master on that platform, so I think the support is a little bit different based on that as well. Now, going forward, we did talk at one point in the Platform SIG meeting about not supporting Win32 with the new installer. The new installer is 64-bit only. So that will come up at some point, for Windows at least. Yeah, and I think that's quite reasonable. You just described it: that would be the master running the Windows installer. That seems very reasonable to me.
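The ARM flavors being discussed here (ARM64 versus ARMv7 32-bit) could be told apart from an agent's reported machine string with a helper like this; the exact strings matched are assumptions based on common `uname -m` values, not an official list:

```python
import platform

def arm_flavor(machine=None):
    """Classify a machine string into the ARM flavors discussed here.

    Returns "arm64", "arm32", or None for non-ARM architectures.
    With no argument, classifies the current host.
    """
    m = (machine or platform.machine()).lower()
    if m in ("aarch64", "arm64"):
        return "arm64"
    if m.startswith("armv7") or m.startswith("armv6") or m == "arm":
        return "arm32"
    return None
```

A label-assignment script for agents could use this to decide whether a node gets an `arm64` label, an `arm32` label, or neither.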
Jenkins running in a 32-bit world is, for agents, maybe interesting; I think it's less interesting for masters. Yeah. And just changing the installer doesn't remove the support in principle, because one can still download the WAR file and get it running. Sure, it just won't be out of the box. So personally, I'm not that concerned about the Windows 32-bit state, though I do know that some people still use it. For ARM and for the new platforms, where we haven't had official support, even if you want to do that, you should rather do it incrementally. That's all I had to say on that. It's good to know that we are at least moving forward on it. Okay, sounds like the next thing that we have to investigate.

Is there any other topic that we want to discuss? So I can bring mine. Mark, you're muted. Yes, I wanted to talk about ci.jenkins.io monitoring, but I'll happily defer until after whatever topic you have. Mine is not a long topic; I just wanted to summarize what I've learned and make a proposal. So yeah, feel free to talk about the monitoring, and then I'll do my topic. Okay. So I'm going to go ahead and share my screen if that's okay, because it'll help me frame the conversation. Let's see, which screen am I sharing now? What do you see on your screens, everybody? Good. Okay, so you see the right page. Super. So I've spent a few days looking at monitoring and trying to understand our monitoring, and what I realized is that we've already got just about everything we need, conceptually, to do a good job of monitoring. My proposal is: let's take the systems and the concepts we've already got and extend them. Datadog is a world-class monitoring platform, we get it for free, and it works great; let's just keep using it. Yeah, nice piece of work. My only question is regarding Datadog, because I saw that the Datadog plugin got several updates recently; I think someone worked on that. I started looking at the Datadog plugin.
Because in the past, it only shipped really few metrics, so it was not that useful. And so I started investigating Prometheus: on release.ci.jenkins.io I am using Prometheus, so you can have a look at the kind of data that we have there. I think the main difference is that the Datadog plugin is just exporting specific metrics, whereas for Prometheus it's using the Metrics plugin, or something like that; there is one specific plugin that exports a lot more metrics. I can find the plugin after the meeting. Thank you. I hadn't thought about Prometheus; I think it's valid to consider. My thought right now was that I think we'll get faster and better results quickly if we stay with the solutions we've got, rather than making a shift of solution. I'd actually started my own monitoring system on my local network and was using it, but I realized Datadog is worlds and away better than anything I might consider recreating. It does amazing things in terms of dashboarding and measurements; I'm quite impressed, actually. Prior to my current employer, I was working for a company selling those kinds of solutions, and Datadog is an awesome competitor in that space. So don't get me wrong, Datadog is really awesome. And I think I sent you an invitation to the Datadog account, so you should be able to create dashboards or look at the monitoring, maybe do some other things. We can organize a session specifically for Datadog. The reason why I deployed the Prometheus plugin was because I was missing some specific metrics, which were provided by the Prometheus plugin. But the main difference is that those metrics are exported to Grafana, in a dashboard that you have to maintain yourself.
And obviously it's time consuming if you want to be sure that the service is always up and running. So right now it's more for testing purposes, but yeah, I would be curious to see how the Datadog plugin evolves over time. Great. Okay. Yeah. So one place where the Datadog plugin actually could offer something is this second item, the canary jobs. This was a concept that Tyler Croy and Olivier had started, in what are listed as infrastructure acceptance tests. There's a set of four or five right now, and they check that very specific things work end to end. I think that concept is a good one: we should teach those tests to notify Datadog so that they can raise an alarm if they start failing, so that, out of the hundreds and hundreds of jobs, those relatively few will raise alarms to us and we know something is wrong. Okay, that's one; I'll work with you, Olivier, to try to get it. The other is that, in choosing which checks we should make, we've already got 13 Jira tickets based on past outages, and I think that's a good beginning. So my thought is that I'm going to create an epic, if we don't already have one, that tracks those. And then the last, and actually probably the most important one, is to be sure that we've got more people looking. That, I think, is probably the single most important thing: the other things we'll automate, but human beings can do an awful lot to help us as we get surprises and learn how to monitor better. And that was all that I had. Questions or comments? Yeah, just comments. I don't know if you saw that there is a Jenkins infra repository where we automate those checks; basically we use it to automate those checks. So yeah, feel free to have a look at it. It's easier to first look at the dashboard, create those checks manually, and then export those checks in Terraform, so we have a way to share them.
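Teaching a canary job to notify Datadog, as proposed above, could look roughly like this. The check name and host are hypothetical, and the endpoint shown is Datadog's v1 service-check API; treat the whole thing as a sketch rather than the actual integration.

```python
import json
import urllib.request

DATADOG_CHECK_RUN = "https://api.datadoghq.com/api/v1/check_run"

def build_check_payload(check, host, ok, message=""):
    """Build a Datadog service-check payload for a canary job.

    In Datadog's convention, status 0 is OK and 2 is CRITICAL.
    """
    return {"check": check, "host_name": host,
            "status": 0 if ok else 2, "message": message}

def notify_datadog(api_key, payload):
    """POST the payload to the Datadog check-run endpoint (network call)."""
    req = urllib.request.Request(
        DATADOG_CHECK_RUN,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json", "DD-API-KEY": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Offline example: the payload a failing acceptance test would send.
# Check name and host are invented for illustration.
payload = build_check_payload("jenkins.infra.acceptance", "trusted.ci", ok=False,
                              message="end-to-end plugin download failed")
```

A canary pipeline would call `notify_datadog` in a `finally` block, so both success and failure reach the monitor that raises the alarm.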
And I also wanted to share that there is one plugin, called Metrics, that exports a few metrics, and this is the one that I think the Prometheus plugin is using. So basically, the Prometheus plugin is just a way to export those metrics to, for example, Grafana, right? So there is a bunch of such plugins, and one of the issues they try to solve is to bridge pull-based monitoring services and push-based monitoring services, because there are some periodic jobs which collect statistics and prepare them on demand, so that we can serve the data quickly to any service. Thank you. So I would just share my screen as well; I think that means I have to stop sharing. Okay, go ahead. Can you see my browser? Yes. Perfect.

So basically, with the recent issues that we had with pkg.jenkins.io, the update center, and mirrors.jenkins.io, I spent a little bit more time investigating different solutions that we could put in place to reduce the load on pkg.jenkins.io. Right now we have one machine, located on the Amazon account, and that machine is hosting the three services that I mentioned, plus the packaging jobs and tasks and so on. So that machine has a lot of things, is doing a lot of things, and is also quite outdated, because MirrorBrain has not been maintained for years now, so we have an old version of Python and so on. So I tried to find ways to split the different services running on that machine into different container services. But obviously, the main challenge that I have right now is that they are all kind of interconnected. The screen that you see right now is one of the services that we could use to replace MirrorBrain.
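The pull/push bridging just described, where a periodic job collects expensive statistics so that a scrape can be answered instantly, could be sketched like this (class and field names are illustrative):

```python
import time

class CachedCollector:
    """Collect expensive statistics periodically, serve them on demand.

    This bridges push-style collection (a cron-like refresh) with
    pull-style monitoring (a scrape endpoint that must answer fast).
    """
    def __init__(self, collect_fn, max_age_seconds=60):
        self.collect_fn = collect_fn
        self.max_age = max_age_seconds
        self._cached = None
        self._stamp = 0.0

    def refresh(self, now=None):
        """Run the expensive collection and remember when it happened."""
        self._cached = self.collect_fn()
        self._stamp = time.time() if now is None else now

    def scrape(self, now=None):
        """Return cached metrics, re-collecting only if they have gone stale."""
        t = time.time() if now is None else now
        if self._cached is None or t - self._stamp > self.max_age:
            self.refresh(now=t)
        return self._cached

# Demo with a fake collector (the metric name is invented):
calls = []
c = CachedCollector(lambda: calls.append(1) or {"executors": 12}, max_age_seconds=60)
first = c.scrape(now=0.0)
second = c.scrape(now=30.0)   # still fresh: served from cache, no re-collection
```

The trade-off is exactly the eventual-consistency one discussed earlier: a scraper sees data at most `max_age_seconds` old, in exchange for fast responses.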
So the idea here is that we have MirrorBits, which contains all the files that we want to provide through the mirrors, and you can also just browse the files on it. For example, as you can see here, I just synchronized one of the mirrors. So there are two possibilities, depending on whether we have mirrors available for a specific file. So let's say, for example, I click and select one; oops, this one is not working, let me take a different one. So basically, what MirrorBits does is create a hash of the file, and if a file with the correct hash is available on one of the mirrors, then it gives you the opportunity to download it directly from a mirror close to your location. And if the file is not located on the remote mirrors, it just gives you an error; so it does not work. Right now I have configured my MirrorBits with six mirrors, as you can see here, and it's only working over HTTPS right now, so we can really enforce that, for example. And it's really easy to deploy. The main thing is that if we plan to use MirrorBits, we have to modify the script that we use when we release a new Jenkins version, in order to push the new artifacts directly to MirrorBits, and then obviously we have to update the different mirrors. But that's one of the pieces of work that I did. MirrorBits also provides views: this one just lists all the mirrors that you can use; in another one, if you just put /stats at the end of the URL, you can see how many times each specific package was downloaded; and otherwise you have a list of the different mirrors. So this is one of the services. There is an open PR on the Jenkins infra charts repository. Right now I'm not sure yet about the way we push the data to that service: either it's pull-based, where we fetch the data from a remote mirror, or we push the data directly to the object storage when we release.
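The redirect decision described above, hash the canonical file and only send clients to mirrors whose copy matches, can be sketched roughly like this; the mirror URLs are made up, and real MirrorBits also weighs geography, which is omitted here:

```python
import hashlib

def file_sha256(data: bytes) -> str:
    """Hash of the canonical file, as the redirector would record it."""
    return hashlib.sha256(data).hexdigest()

def pick_mirror(canonical_hash, mirrors):
    """Pick the first mirror whose copy of the file matches the canonical hash.

    `mirrors` maps a mirror base URL to the hash that mirror reports for
    the file; a stale or missing copy disqualifies the mirror, and with
    no valid mirror the caller must fall back (or return an error, as
    described in the discussion).
    """
    for url, h in mirrors.items():
        if h == canonical_hash:
            return url
    return None

# Demo: one mirror is stale, one is in sync.
canonical = file_sha256(b"jenkins.war contents")
choice = pick_mirror(canonical, {
    "https://mirror-a.example.org": file_sha256(b"stale contents"),
    "https://mirror-b.example.org": canonical,
})
```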
So that's the main thing that I have to figure out right now. The second service that I worked on is a way to deploy more mirrors. It's just a Helm chart: basically, what it does is run, on a regular basis, rsync commands to download from a remote mirror, so that we can just have more mirrors. The main reason why I started working on this is that right now we have archives.jenkins-ci.org running on the Rackspace account, and we need to move that service somewhere else. Initially I thought that this could be just a simple mirror, and then I realized that archives has a lot more data than just a mirror. So I probably have to deploy and provision an Azure disk and move all the data onto that disk; I would probably move archives.jenkins-ci.org from Rackspace to Azure. One of the other services that is also running on the current machine, next to mirrors.jenkins.io, is repo. This is a different service that I also started working on. In this specific case, we just generate a website for the Red Hat RPM and SUSE packages, so people can add those packages directly to their operating system. But I still need more time to test how it's working, and whether it's working. But yeah, basically what I wanted to share is that there is some activity around the way we deploy pkg.jenkins.io, MirrorBrain, and mirrors.jenkins.io. So if you have any inputs or thoughts or suggestions, I think it's the right time. So Olivier, you mentioned repo, and I thought I saw what looked like an Artifactory image; is our Artifactory instance being changed too? No, no, sorry. I mentioned repo, but I meant to say pkg.jenkins.io. We are not changing the Artifactory; this is just a Maven repository that I deployed in the release environment, so I can just test, and it was easier to deploy than Artifactory with the setup that I have.
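The mirror Helm chart described above boils down to running rsync on a schedule; a sketch of the command such a job might build (the remote module and local path are hypothetical):

```python
def rsync_mirror_command(source, destination, delete_extraneous=True):
    """Build the rsync invocation a scheduled mirror job would run.

    -a preserves attributes, -v is verbose, and --delete keeps the
    local tree an exact copy of the remote one, so removed releases
    disappear from the mirror too.
    """
    cmd = ["rsync", "-av"]
    if delete_extraneous:
        cmd.append("--delete")
    cmd += [source, destination]
    return cmd

# Example with invented endpoints:
cmd = rsync_mirror_command("rsync://mirrors.example.org/jenkins/",
                           "/srv/mirror/jenkins/")
```

A CronJob in the chart would simply execute the returned command; whether `--delete` is safe depends on whether the destination holds anything besides the mirror (which is exactly the archives-versus-mirror distinction made above).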
But this one, I mean, we won't change repo.jenkins-ci.org. It's mainly pkg.jenkins.io, mirrors.jenkins.io, and the update center; but regarding the update center, I'm not there yet. The main reason why it's not easy to work on those is that we have quite a lot of scripts that are either uploading or downloading packages via rsync or SSH, with cron jobs and things like that. So before deploying, and for example before switching from MirrorBrain to MirrorBits, I just want to be sure that I don't break or lose data in the migration process. I have another question. So if you don't have any questions on this part, I would be more interested in this next part, Alex. I had a look at the release environment, and I fixed a few things. On the release process part, I had some resource issues, so I just deployed bigger machines; as you can see, it went from three hours to release Jenkins down to one hour thirty-five. So right now it's working again as expected, and if someone wants to test the output generated by this, I would be really happy. But more importantly, I worked on the packaging part, which is here. And in this case, I don't remember, Alex, maybe you remember: are the Windows artifacts signed or not, or what would be the best way to verify this? Let me take a look, because I think, when you download the MSI, you can right-click and look at the properties, and it'll tell you whether it's signed or not. Okay, sorry. All artifacts are likely signed. On the new release process? Yes; this is not Kohsuke's release process, this is the new one. So we just need to verify. I'll take a look; I have a VPN set up, so I can look at it. Everything is located in the artifacts.
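As an alternative to right-clicking the MSI's Properties, the signature could be verified in a script with `signtool` from the Windows SDK; this is a sketch, and it simply reports False when `signtool` is not on PATH (for example, on a non-Windows machine):

```python
import subprocess

def verify_msi_signature(path):
    """Check an MSI's Authenticode signature with signtool (Windows SDK).

    `signtool verify /pa` accepts any valid code-signing chain; a zero
    exit status means the artifact is signed and the signature checks out.
    """
    try:
        result = subprocess.run(["signtool", "verify", "/pa", path],
                                capture_output=True, text=True)
    except FileNotFoundError:
        # signtool not installed (non-Windows host, or SDK missing)
        return False
    return result.returncode == 0
```

A release pipeline step could run this against the freshly built MSI and fail the build if the artifact came out unsigned.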
And otherwise, the main thing that I've been working on, because we don't have access to the credentials yet, is the publishing part. One of the things that I'm really relying on is that, instead of using rsync over SSH to publish artifacts somewhere, I'm using Azure Blob Storage. I can just mount the Azure Blob Storage directly in the release pipeline, and so I just move files from here to there, and so on. And it's also the same Azure Blob Storage that I'm using in the other services, so I can mount the same files in multiple locations. It really simplifies the way we publish artifacts, because, for example, as soon as an artifact is generated on the Jenkins instance, it's copied to the Azure Blob Storage, and then MirrorBits can use the same storage, so the artifact is directly available in MirrorBits and also directly available on pkg.jenkins.io, for example. So yeah, that's what I'm working on right now. Otherwise, I think I covered all the things that I wanted to show you, so if you don't have any more questions, I propose to stop here. There is one last thing, sorry: Mark and I will do a small session about configuring SSH access and using the VPN for the Jenkins infrastructure projects. So if you are interested in participating, feel free to say so. The idea is that we need to do some knowledge transfer, to take some time to ask questions and to be able to answer those questions. Thanks, everybody, for your time, and see you next time.
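The blob-storage publishing flow described above, where a single copy into shared storage makes an artifact visible to every service mounting it, could be sketched like this; the directory layout and names are invented for illustration:

```python
import shutil
import tempfile
from pathlib import Path

def publish_artifact(artifact, blob_mount, version):
    """Copy a release artifact into the mounted blob container.

    Because the mirror redirector and the package site mount the same
    storage, the file becomes visible to them as soon as this copy
    completes; no rsync-over-SSH step is needed.
    """
    dest = Path(blob_mount) / "war" / version / Path(artifact).name
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(artifact, dest)
    return dest

# Demo against a temporary directory standing in for the mounted container:
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp) / "jenkins.war"
    src.write_bytes(b"war bytes")
    out = publish_artifact(src, Path(tmp) / "blob", "2.300")
    published = out.exists() and out.read_bytes() == b"war bytes"
```

The design point is the one made in the discussion: one write, many readers, instead of N push scripts kept in sync by cron.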