So yeah, can you start the recording, Mark? Hi everybody, welcome to this new Jenkins infrastructure meeting. For today's agenda, we already put some points in the Google Doc. The first one that I would like to highlight is that we still have a bunch of Helm v2 charts that we have to upgrade. The biggest and most annoying one is that we're still using the NGINX ingress, and in order to use the new chart location, apparently they suggest either deleting the existing chart and reinstalling it, or deploying a second one next to the first one so we don't have any downtime for the application. We are exploring the second option with Sladeen. I had a call with him last week, we opened a PR, and basically we tried to test his PR together and see how we could improve it. It's still a work in progress, but I expect to finish this either this week or next week, and then I'll be able to announce a maintenance window. The target is to not have downtime anyway. There's a deadline of November 11th, isn't there? Yeah, the deadline is coming pretty soon. We migrated all the other charts. The ones that we haven't migrated are Dex and the OAuth proxy, and we don't need them anymore because those were used for the election last year, and we decided not to reuse them this year. So I just have to remove them from the cluster. The next point, which I briefly mentioned, is the Docker Hub issue. The deadline is coming, although apparently they announced that they will delay the retention policy for a few months. We are still looking for alternatives; we may go down the route of paying for a subscription. But there was nothing really major here. The next topic is Docker images for Windows. I think, Gareth, you have some news here that maybe you want to present or mention. Yeah, sure.
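As a rough illustration of the zero-downtime option described above, deploying a second NGINX ingress controller from the new chart location next to the existing one could look something like the sketch below. All release names, namespaces, and values here are hypothetical placeholders, not the project's actual configuration:

```shell
# Hypothetical sketch only: names, namespaces, and values are placeholders.
# Add the new chart location for the NGINX ingress controller.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

# Install a second controller alongside the existing one, with its own
# ingress class so the two controllers do not claim the same Ingress
# resources.
helm install ingress-v2 ingress-nginx/ingress-nginx \
  --namespace ingress-v2 --create-namespace \
  --set controller.ingressClass=nginx-v2

# Once DNS points at the new controller's load balancer and traffic is
# confirmed healthy, the old release (installed from the deprecated chart
# location) can be removed:
# helm delete old-ingress --namespace old-ingress-namespace
```

This keeps both controllers serving during the switchover, which is what makes the no-downtime target plausible.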
So one of the decisions that came out last week was that we should be locking to the latest LTSC release for these. Rather than going with the 1809, 1909, and so on releases, go with 2019 LTSC. We have added a test phase into the Packer builds to validate that we can actually build Docker containers matching that version, which does seem to work nicely. Well, it works nicely for Azure; I am working on the AWS piece now. And then it's a question of whether or not we need to build our own version of the AdoptOpenJDK image, or whether that PR will get merged on the AdoptOpenJDK repo to actually produce 2019 LTSC versions. Thanks. Just for context, basically the challenge that we are facing here is that if you want to build a Windows Docker image, the Docker image version for Windows needs to match the host machine. That's correct, yeah. So that's the reason why we want to pin it to one specific version, so we don't maintain every Windows version. That's why we decided to go with LTSC 2019. Yeah, and even though LTSC will be patched, it shouldn't update that version number; it should be the same version of the kernel, really. So as long as Windows updates are applied to the base images, that should be fine. OK, perfect, thanks. So what's the current state of the specific tickets? What's our next step on that topic, basically? Because I saw a PR from Alex on the AdoptOpenJDK project, but I have the feeling that we are kind of stuck there; we don't have any control over that project. So if we don't get any adoption on that, getting that one merged, we think the best option is for us to produce our own image based on that version. Have we got a link to the PR? Yeah, that's something that I was thinking of; I'll add that in now. Oh, good, you provided us a link. Excellent. Thanks, Gareth.
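To make the version-matching constraint above concrete, here is a minimal, purely illustrative Dockerfile fragment pinning the Windows base image to the 2019 LTSC tag. The actual agent images and the AdoptOpenJDK layer discussed here may look different:

```dockerfile
# escape=`
# Illustrative only: pin to the LTSC 2019 base so the container matches
# hosts running Windows Server 2019 (LTSC), rather than tracking 1809,
# 1909, and later releases separately.
FROM mcr.microsoft.com/windows/servercore:ltsc2019

# A JDK layer (for example, an AdoptOpenJDK build targeting this base
# image) would be installed on top of this; details intentionally omitted.
```

Because Windows container images must match the host kernel version, pinning the tag is what keeps the image set maintainable across Azure and AWS hosts.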
So yes, if you could just put a link to the AdoptOpenJDK PR, that would be awesome. And you did, that's really great. Any question on the Windows Docker images? No. So the next topic is Artifactory. As far as I know, there is no news there; I think we are still at the current status, and they are happy with the current situation. Mark, do you have more information? Yeah, so they've started a new set of monitoring based on the change that Daniel made. The change that Daniel made was to switch off the use of our repository as a tool installer source. They've asked a second question: hey, what can we do to reduce the volume of data that you're currently storing? And the volume was on the order of three terabytes, so three or four terabytes; it's a large, large repository. Daniel replied: yes, we know we need to do that, we have not done that yet. So we have an action still to do, which is to look at which things are using the most space and identify whether we can do something that does not jeopardize the project and still reduces our space usage. For instance, the incrementals repository is one place where we expect we'll find some savings, by just declaring a retention policy for incremental builds that are more than a few months old, something like that. Does it make sense to keep them forever anyway? Yeah, so let me make a note here. So that's a further need; we're not done yet, and the action is on us right now, not on JFrog. OK, thanks. The next topic is the JIRA testing that is happening this week. Yesterday, the Linux Foundation gave us access to a public JIRA instance for us to do the tests. It's almost the same, except that the endpoint is obviously different. The data is something like two weeks old, because we back up and restore that data.
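As a first step toward the retention policy mentioned above, one way to find what is using the most space is Artifactory's AQL search endpoint. The sketch below is a hedged illustration only: the host, repository name, age threshold, and credentials are all placeholders, not the project's actual setup:

```shell
# Hypothetical sketch: list artifacts older than roughly 90 days in an
# "incrementals" repository via Artifactory's AQL API, as input for a
# retention decision. Host, repo name, and credentials are placeholders.
curl -u "$ARTIFACTORY_USER:$ARTIFACTORY_TOKEN" \
  -X POST "https://repo.example.org/artifactory/api/search/aql" \
  -H "Content-Type: text/plain" \
  -d 'items.find({"repo":"incrementals","created":{"$before":"90d"}}).include("name","size","created")'
```

Sorting the results by size would show which old incremental builds account for most of the three to four terabytes.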
We are still looking for any blocker for the transition. As far as I'm aware, we are tracking all the information in the Google Doc, but we haven't identified major issues. Do you want me to review them, Marc? The different issues that you noticed? I mean, those are small issues; I can just bring them in here and we can look at them together briefly. We've got Tim here, and he detected one that I had missed. So if it's OK, Olivier, I'll just walk us through this. So Tim, you found some plugins that need updates, and that makes sense. But those plugins were not available on the testing instance, so I'm just wondering where Tim found them. That was on the test instance, the one that we logged into at that Linux Foundation URL. OK, because I went to the same place and they were not listed. They were there when I first saw the screenshot; I can see if they're still there. OK. So, yeah, basically, about those plugins: in this case, we are using two plugins. Capture for JIRA can be deleted, so we don't need that anymore; I got confirmation from Ahno. And Suggestimate, I think, is still useful. Basically, what it does is that when you create an issue, it tries to detect whether someone already created the same issue, so it suggests other similar issues, which is useful from time to time. As for the renewal, I think we just have to push the update button in this case. If you look at the gadgets on the dashboard, something that I found weird is that some gadgets cannot be reused, like this one: "this gadget cannot be displayed". But if you delete it, you can recreate it and it works correctly. So I have no idea why this specific one is broken. There are a few gadgets broken on different dashboards, but again, I couldn't identify why it behaves this way, because I could delete and recreate them. If you go down again, because I listed a few things.
If you go up to the comment that I made at the beginning. Oh, yes. It looks like I cleared that comment. No, if you go up in the document, before Tim's notes; yeah, the testing notes. I put a bunch of comments there. So one thing that I identified, and that I'm wondering about, is the way we send emails. Right now we are using SendGrid to send the emails, and it's working correctly, but if we switch to the Linux Foundation, we may have to do something else. Something important to understand is that we enabled DMARC on our domain. So if we want to allow the Linux Foundation to send emails, then we also have to update the DNS records. That's something to keep in mind; it's not as easy as it looks. Otherwise, we don't send a lot of emails, so the basic SendGrid account that we have is more than enough. The second point, which I couldn't verify, is that I shared some credentials with the Linux Foundation so they could create Let's Encrypt certificates using the DNS method, and they should only have access to specific TXT records. I'm still wondering whether it's working for them, because in this case we are not using issues.jenkins.io, so this is something that I would like to validate. And finally, something that I also think we should change: by default, in the current settings, we use the jenkins-ci.org domain, while the default one should be jenkins.io. I thought that this one was the broken one? Yes, it's the broken one, because you have to configure that inside JIRA. At the moment, we have two HTTP proxies in front of it, serving issues.jenkins.io and issues.jenkins-ci.org. But jenkins-ci.org is a deprecated domain; the default one is now jenkins.io, and we keep jenkins-ci.org for backward compatibility. So ideally, we should switch to jenkins.io.
And basically, what happened is, years ago, someone made a contribution to the project to enable the jenkins.io domain in front of JIRA, but it never really worked. That's why, when you go to issues.jenkins.io, there are some broken links and things like that: JIRA doesn't like you using a different domain than the one configured internally. So for the issues.jenkins.io domain, it's a single global configuration change that we make in JIRA, and it should then work as expected. Yes. Basically, if you go to the JIRA configuration, you can specify the domain that you want to use. And if you have an HTTP proxy in front of it, your HTTP proxy needs to accept that domain. The thing is, we never really updated the JIRA configuration. OK. So is this something we would need them to agree to, changing the default domain, or is that something that's in our control? We could change it after the migration is complete. In some setups the domain is encoded in the Tomcat configuration; in our case, it's defined in the JIRA database, and we have an HTTP proxy in front. The reason why we never really officially switched to issues.jenkins.io is that we never took the time to clean that up and make sure it works as expected. OK. So this really is something we can coordinate with Anton and change, but we really can't prove it works until the final transition. But yeah, in this case, because they provided a different URL, a Linux Foundation endpoint, they can just put whatever they want there. OK. Otherwise, I just quickly documented the different ways that I tested the new instance. I haven't identified any major blocker here, so to me, I think we should plan for the next step. Great, excellent. Oh, and I need to include a link to that Google Doc.
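For reference on the domain discussion above: JIRA's canonical domain is its Base URL, set in the General Configuration screen and stored in the JIRA database, and any fronting proxy must present the same host. When JIRA runs behind an HTTPS reverse proxy, the Tomcat side is typically a connector along these lines; the host name and ports below are placeholders, not the project's actual values:

```xml
<!-- Illustrative sketch only: host name and ports are placeholders. -->
<Connector port="8080" protocol="HTTP/1.1"
           proxyName="issues.jenkins.io" proxyPort="443"
           scheme="https" secure="true"
           connectionTimeout="20000" />
```

With `proxyName` and the JIRA Base URL agreeing on a single domain, the broken-link behavior described above should go away.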
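Going back to the earlier point about DMARC and email sending: authorizing an additional party to send mail for the domain generally means updating the domain's SPF include list and keeping the DMARC policy record consistent with it. The records below are a hedged illustration with placeholder domains and policies, not the project's actual DNS data:

```
; Illustrative zone-file records only; domains and policies are placeholders.
example.org.         IN TXT "v=spf1 include:sendgrid.net include:mail.example-lf.org ~all"
_dmarc.example.org.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.org"
```

This is why letting the Linux Foundation send mail is "not as easy" as pointing them at the domain: the SPF and DMARC records have to change too.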
That Google Doc is intentionally not publicly readable, because I thought that one of us, while testing, might need to say something in there that was not for public consumption. So if you're not on that doc and you want to see it, just click to ask for permission. Basically, that's the way we work when we don't want something to be public: we usually ask the person to express their interest and then we just share access. It's not that we want it to be private; we just want to know who has access to the information. Great. So, the next step. Before we leave that topic: is everyone still OK with the week of November 9 as the week for the final transition? I assume we continue testing for a few more days, hoping not to find a blocker. If we don't find a blocker, we go with November 9. Is that OK? To me, that's perfect, because sooner is better with JIRA anyway. This basically gives us the next week to react to whatever happens, and then we can plan a downtime. We still have to check with the Linux Foundation how much time it will take to back up the current JIRA instance and restore the backup, because we must be sure that nobody updates the production JIRA instance while we are backing up and restoring. So the JIRA instance will probably be down for a few days, a few hours, sorry. At least a few hours. So we'll try to communicate about what will be done next week once we synchronize with the Linux Foundation. Great. The next topic is the Azure credits. The first thing is that we are still getting warnings that the bill hasn't been paid. I'm still waiting for an answer from the Linux Foundation, but that's the current state. Otherwise, if I understand correctly, you were able to submit a request to get credits from Microsoft, Mark? That's great.
Yeah, Kayla Linville, our Customer Success Manager at Microsoft, has submitted a request to her management asking that we be granted credits. I don't know if they will say yes, say no, or say some amount. I assume any credit that Microsoft is willing to give us will help offset the cost that the CDF is currently paying for our Azure resources. So I figure if they say yes to anything, it's a win. Yep, that would be really awesome, because we are not moving away from Azure anyway. Any other topic that you want to highlight? Nothing for me. Then I think we can close the meeting early; it doesn't make sense to keep it going for 30 minutes if we don't have anything more to say. So thanks for your time, and see you next time. Thanks, everybody. Thank you.