So let's start. There are a few things that we have to discuss today. I think the first one is regarding the Azure sponsoring. As you know, the sponsorship ended in November, so now we have to pay the first bill. The billing period only ended last Friday, so I don't have all the details yet; the only thing I know right now is the amount of money that we will have to pay. We still have to clarify who will pay for it. Normally it should already be settled, but I have not received any information yet, so this is something that we have to figure out. There are also some ongoing discussions about whether we should stick to Azure or migrate to Google Cloud, because we have an opportunity for sponsorship from Google. Those discussions are happening at the moment, but obviously I don't have more information to share. Do you have any questions or concerns that you want to raise about this topic? Just to give you a quick overview: right now it costs us around 20K per month, 20,000 dollars per month, to run the infrastructure. There are multiple places where we need to review and improve; several services are not optimized at all. For example, the rating application: I discovered last week that it costs almost 500 dollars per month because of wrong settings being in place. So there are definitely some optimizations that we have to make to the cost of our infrastructure. That's it regarding the Azure account. Regarding mirrors, we had multiple issues over the Christmas period. As you know, mirrors is getting quite outdated now. The last time we upgraded the Ubuntu operating system to the stable version, we also upgraded multiple services, like the PostgreSQL database used by MirrorBrain. That machine runs a lot of different services; it is also the place where we generate the artifacts for the RPM repositories. It has a lot of configuration, a lot of specific Python and Ruby environment setups, so we would have to take some time to refactor that machine in order to split the different services into different locations and reduce the load on it. What basically happened to that machine during the Christmas period is that we had three different PostgreSQL databases running at the same time: PostgreSQL 9.3, 9.7, and 11, I think, something like that. Apache is using the 9.3 one, but we had conflicts with the two other databases, which were putting load on the machine. So I had to turn off the two other PostgreSQL databases, and now everything seems to be working fine. But we still have regular issues on MirrorBrain when it comes under load. Another issue that happened, and that affected users during the period, is that while migrating jenkins.io to a new Kubernetes cluster, we accidentally enabled the HSTS setting, which automatically switches jenkins.io and all its subdomains from HTTP to HTTPS. Because mirrors.jenkins.io does not support HTTPS, a lot of people complained about that specific issue. We obviously disabled the flag on jenkins.io, but everyone who visited jenkins.io while the flag was on has to reset the setting in their own browser. So it's an issue that affected all the people who went to jenkins.io during roughly one day.
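For context on the HSTS incident described above, here is a minimal sketch (not from the meeting; the host is illustrative) showing how to check whether a site sends the Strict-Transport-Security header that browsers cache:

```python
# Minimal sketch: inspect the Strict-Transport-Security (HSTS) response
# header. Once a browser has seen this header on a domain, it upgrades
# every later request for that domain (and, with includeSubDomains, all
# subdomains) to HTTPS until max-age expires or the user clears the
# cached policy by hand. That is why disabling the flag on the server
# was not enough for users who had already visited the site.
import urllib.request

def hsts_header(host: str):
    """Return the HSTS header sent by https://<host>/, or None."""
    with urllib.request.urlopen(f"https://{host}/") as resp:
        return resp.headers.get("Strict-Transport-Security")

if __name__ == "__main__":
    for host in ("www.jenkins.io",):  # illustrative host only
        print(host, "->", hsts_header(host) or "no HSTS header")
```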
Olivier, when you say jenkins.io, in that case I didn't seem to be affected on the things that are already HTTPS, like wiki.jenkins.io or ci.jenkins.io, but mirrors.jenkins.io did seem to affect me. What are the other things that were affected? Mirrors I was aware of. So there is mirrors.jenkins.io, and there are other websites that do not support HTTPS, but I don't remember them by heart. Most people were affected by mirrors, though. Okay, thanks. On a personal tack, I attempted to go through the CDF security assessment for one of the plugins I maintain, and one of the questions they ask is about transmitting checksums over non-secured channels. Because updates.jenkins.io is not over HTTPS now, I think that one is still a no. So eventually, I assume we will want to go to HTTPS for mirrors and for all of our other properties? Or not really? So, yes and no. The challenge we have right now is that MirrorBrain, the tool we use for our mirrors, does not support HTTPS correctly, and it does not seem to be maintained anymore. Just before the Christmas period, I had a look at mirrorbits, which is a different mirroring tool; I can share the link. Ideally, I would like to take some time to see if we can use it to replace MirrorBrain, and in that case we would be able to use HTTPS for mirrors. Thanks for the clarity. So the tool that I'm mentioning, I'll share it on the chat, is this one, if you want to spend more time on it or have a look into it. It is already deployed on the community cluster, but I still have to configure it and see if I can build a Helm chart that we can use to deploy it. If it works correctly, then I would like to get rid of MirrorBrain. So maybe it will be a solution, but right now mirrors still do not support HTTPS. Thanks for the clarity. Another major initiative that we have to work on, and on which Tim Jacomb already helped a lot, is migrating the whole community cluster running on ACS to AKS, because the ACS service is deprecated and will be turned off by the end of January. Tim Jacomb already moved multiple services: plugins.jenkins.io is now running on the right cluster, javadoc.jenkins.io as well, jenkins.io as well, reports.jenkins.io, and the chatbot is already running on the new community cluster. So here is what remains. There is ongoing work to migrate accounts.jenkins.io, and there are three services still running on the old community cluster: uplink.jenkins.io, ldap.jenkins.io, and evergreen.jenkins.io. I'm planning to work on uplink and LDAP this week or maybe next week. Regarding Evergreen, I have the feeling that we already started a discussion asking whether anyone wants to maintain it or not, and nobody really complained or said anything. So I'm assuming that we just let Evergreen die on the old community cluster. And it's not only letting it die: if we disable the Evergreen parts on Azure, we also need to shut down the parts in AWS, because from what I've seen in the cost spreadsheet, a significant part of the AWS cost should also be coming from Evergreen. The current situation is that next week, on January 15th, we have a governance meeting, and the future of Evergreen is on the agenda. If nobody steps up as a maintainer, whether a company, an entity, or individual contributors, I believe the best resolution is to just shut down Evergreen.
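As an aside on the checksum question raised above, here is a minimal sketch, with hypothetical URLs, of why the checksum file itself must travel over a trusted channel: if both the artifact and its checksum come over plain HTTP, an attacker who can alter one can alter both, and verification proves nothing.

```python
# Minimal sketch, hypothetical URLs: verify a download against a SHA-256
# checksum. The verification is only meaningful if the checksum file is
# fetched over HTTPS (or another trusted channel); over plain HTTP an
# attacker able to tamper with the artifact can tamper with the checksum
# file in the same way.
import hashlib
import urllib.request

def sha256_of(url: str) -> str:
    """Stream `url` and return the SHA-256 hex digest of its content."""
    digest = hashlib.sha256()
    with urllib.request.urlopen(url) as resp:
        for chunk in iter(lambda: resp.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Hypothetical example artifact and its published checksum.
    artifact_url = "https://downloads.example.org/tool.tar.gz"
    checksum_url = "https://downloads.example.org/tool.tar.gz.sha256"

    with urllib.request.urlopen(checksum_url) as resp:
        expected = resp.read().split()[0].decode()

    actual = sha256_of(artifact_url)
    print("OK" if actual == expected else "MISMATCH: possible tampering")
```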
We can always recover it if needed. Yes. So it's okay if Olivier already preps for a decision from the governance board; the decision will be on January 15, and at that point he could then turn off the Evergreen services. I think we will need a blog post and a final message, because last time I heard there are several active users of Evergreen. I tried to confirm with Baptiste, but I don't have actual information so far. I would prefer to go through at least some kind of advance announcement; even if it's two weeks, it's better than nothing. Okay. So if we decide on January 15, there is a potential that the Evergreen infrastructure can be shut down by the end of January, as one way of reducing costs and reducing the admin effort on services. Good. Okay. So yeah, that's all for the ACS cluster. Another topic is that I now have more access on the Azure account, so I can now invite people to it. The idea would be to delegate access to specific services. The first thing is that because right now we only have the default Active Directory on the Azure account, we cannot have fine-grained permissions, like one person having access to one specific service; right now you either have access to everything or you have access to nothing. So the first question is: do we want to pay to have more control over the users, for example defining that someone only has access to one cluster? Or do we assume that if you have access to the Azure account, you have access to everything? That's the first question, because if we put rules like that in place, we have to maintain those definitions. The other thing is, I think it would be interesting to discuss how we decide who has access to the Azure account, because right now it's only Tyler and me, which means that if for some reason neither Tyler nor me is available, nobody can really debug or fix anything on the infrastructure. So I was wondering what the rules would be to invite people to the Jenkins infra account. So the most straightforward way is to just say that official roles, let's say board members and officers, are eligible to get access. They have CLAs signed, so why not? It immediately increases the number of people who can access it to ten people or even more. So definitely a good first step. I like that; I think that's simple. That would give Alex and you, Oleg, access as board members, and as an officer it would give me access as well. Also Daniel Beck, which is also critical because our security process is also dependent on Azure. Right. So Olivier, would it be acceptable to you to declare that officers and board members are granted access? I'm fine with that. The only thing that I would like to specify is that if a person does not think they need that access, then they do not need to have it. For example, I'm thinking of Ulli: if Ulli does not need access to the Jenkins infra account on Azure, the idea is not to have more people on the account than necessary. Well, in terms of controlling that, I'd propose that Alyssa Tong, Ulli Hafner, and Mark Waite not be granted initial Azure access. Right, let's have the people with more experience get it first. Okay, I'm fine with that. That gives you a smaller set. Yep, I'm fine. So in the first round, I would add Alex and Oleg. Maybe formalizing it a bit would be nice, but yeah, we can definitely do that after we have some clarity on the Azure sponsorship. Okay.
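To make the fine-grained-permissions option above concrete, here is a minimal sketch, assuming the Azure CLI is installed and authenticated with rights to create role assignments; all identifiers are hypothetical placeholders. It scopes one person to a single AKS cluster instead of the whole subscription.

```python
# Minimal sketch, assuming the Azure CLI ("az") is installed and logged
# in. Instead of account-wide access, this grants one person the
# built-in "Azure Kubernetes Service Cluster User Role" on a single
# cluster. All identifiers below are hypothetical placeholders.
import subprocess

def grant_cluster_access(assignee: str, subscription_id: str,
                         resource_group: str, cluster_name: str) -> None:
    """Create a role assignment scoped to one AKS cluster only."""
    scope = (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.ContainerService/managedClusters/{cluster_name}"
    )
    subprocess.run(
        ["az", "role", "assignment", "create",
         "--assignee", assignee,
         "--role", "Azure Kubernetes Service Cluster User Role",
         "--scope", scope],
        check=True,
    )

# Hypothetical usage:
# grant_cluster_access("someone@example.org", "00000000-0000-0000-0000-000000000000",
#                      "infra-rg", "community-aks")
```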
And then I will also plan a session where I can give you a quick overview of the different components and where to look for the different information on the Azure account. Okay. Otherwise, some updates on the plugin site. Gavin has been working on the plugin site to split the API and the front end into two different Docker images. He also worked on the UI, so now it will be easier to access the plugin site from a mobile phone. We are still discussing whether we want to deploy the plugin site on Azure or use Netlify to deploy the plugin API. So, unless you have an update here, I think we will just stick to the mailing list discussion. For me, the main question is when we do the migration, because there is a non-zero risk that things start falling apart after it, and it boils down to the capacity of the infra team. There's not a compelling reason for us to have to do that migration in January, is there, Oleg? Whereas we do have compelling things we've got to get done in January, like the ACS to AKS transition. Exactly. So, which migration are you talking about? Because the plugin site is already running on the new cluster, so the plugin site is already safe anyway. Yeah, for me it's just a concern in terms of capacity. So yeah, we still need to click on the button. Yeah, even if we run it as is on the Jenkins infrastructure on Azure, fine, but it may still require maintenance effort, because the plugin site APIs are consumed by other services; if it goes down, we may experience some issues. I'm happy to do it now if we have capacity, just to get it shipped. I'm worried about doing it now, even if we think we have capacity. I feel like we are light on capacity no matter what, and Olivier, any surprises, either in ci.jenkins.io or elsewhere, may consume a bunch of your time. I'd rather delay the plugin site transition if we possibly can. Well, what we could do is just set up a separate service, because the plugin site is quite self-sufficient. We could deploy the new version in parallel with the existing one on a new URL and then get rid of the old one; a smoke-test sketch for that follows below. I believe it's just one patch to the Helm charts, because everything is more or less in place. So it would be a safe change. I'm not sure the plugin site costs us so much money that we need to worry about the service. So yeah, deploying it in parallel would be my suggestion. Okay, thank you. So, from my point of view, we covered the different points that I wanted to discuss. I don't know if you have anything specific that you want to discuss; otherwise we don't have to keep the meeting going for another 30 minutes. So, thanks. Yep. I guess we can stop here. Thanks for your time, and see you around anyway. All right, thanks.
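Following the suggestion above to run the new plugin site in parallel on a separate URL, here is a minimal smoke-test sketch; the staging URL and the API path are assumptions for illustration, not documented contracts.

```python
# Minimal smoke-test sketch for a parallel deployment: before pointing
# the real URL at the new plugin site, compare an API response from the
# staging deployment (hypothetical URL) against production. The API path
# is an assumption for illustration only.
import json
import urllib.request

PROD = "https://plugins.jenkins.io"
STAGING = "https://plugins-staging.example.org"  # hypothetical staging URL

def fetch_json(base: str, path: str) -> dict:
    """GET base+path and parse the response body as JSON."""
    with urllib.request.urlopen(base + path) as resp:
        return json.load(resp)

if __name__ == "__main__":
    path = "/api/plugins?limit=1"  # assumed endpoint, for illustration
    prod = fetch_json(PROD, path)
    staging = fetch_json(STAGING, path)
    print(path, "->", "same shape" if prod.keys() == staging.keys() else "differs")
```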