Hello everyone, and welcome to the Jenkins infrastructure weekly team meeting. Today is the 8th of August 2023. Around the table we have myself, Damien Duportal, Hervé Le Meur, Mark Waite, and Stéphane Merle. We know one of us isn't here — he is on holidays, we're thinking of him. And Kevin Martins. Let's get started with announcements. The weekly release, 2.418: the build process failed during the "git push tags" step. It looks like a commit was pushed during the release. It's not the... oh, it's yours, Mark. Okay, so you're responsible for relaunching the release now. No, no. Well, because I don't have permissions on the VPN I can't relaunch the release, but I am happy to take the blame for the damage I did, and it was intentional: I wanted this change to arrive. Okay, let's restart it now. We'll do it live. That would be interesting. Okay. Now, has the... So the tag was not pushed. Therefore, is it safe to assume it will not reuse the tag? Or will it still use 2.418? Or is the tag 2.418 burned, so that we won't be able to use it? That's what I never remember. We can look at the steps that were already done. Okay. I'm sure we can improve this: we should be able to update the Git configuration so it rebases properly next time. Yeah, well, there's already a ticket opened on this one by Basil. So the question then becomes... Okay. 2.417 is still the latest version on the mirrors, so the WAR file has not been published to the mirrors. I'm pleased to see that. Yeah — it failed before that step. The question is: what is the latest version published in the Maven metadata on Artifactory? That's the real question. If we started to push Maven metadata or a Git tag, then we should consider this release burned and we need to increment. But otherwise it should be okay — it means we haven't pushed anything. Okay.
And so if I check the Maven repository under releases and go to the correct location, I should be able to confirm that, based on the POM of Jenkins core, it's using org.jenkins-ci as its parent. Okay. So org.jenkins-ci. And you're going to read the log to see if it pushed anything to Artifactory. Okay, so I can do a separate verification: org.jenkins-ci.main. Okay. The release was prepared, but it failed during the push-commit step. I don't see anything pushed, the release was staged, and the staged artifacts are verified. So we should be able to just restart the process. Can you confirm you see the same? I can confirm that I do not see anything in org.jenkins-ci.main:jenkins-core beyond 2.417. So I think we can restart. Perfect. So let's trigger a new top-level release build, and that one will start a new downstream release build. Let's look at the console output first — I want to see the parameters. "Weekly" is correct, so it will use 2.417 plus 1. Okay, good. Release process restarted, as nothing was pushed. So let's see in two hours if everything is working as expected. Just a reminder: next week, the weekly meeting is cancelled. I still don't seem to have the permission to delete the event from the shared agenda; may I ask you, Mark, to do it? Yes. So, next — this is next week's meeting. Exactly, the 15th of August. Right. Ah, yes, public holiday. Right, absolutely. Okay, the meeting has been deleted. Cool, thanks. So we'll see each other in two weeks — that means we will have a two-week milestone. Any questions or announcements? No. Okay, so let's proceed to the upcoming calendar. The next weekly will be released next week: version 2.419, on the 15th of August 2023. Next LTS — as usual, I forget, but let me copy-paste from last week. So later in August we will have the release candidate for the next LTS, on the 23rd of August. It looks like I will be the person from the infra team that day. Ah, no — Hervé
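As an aside, the "is this release burned?" check described above boils down to inspecting the Maven metadata published for org.jenkins-ci.main:jenkins-core. A minimal sketch, with an illustrative metadata document inlined (a real check would fetch maven-metadata.xml from the repository; the contents here are placeholders):

```python
import xml.etree.ElementTree as ET

# Illustrative maven-metadata.xml of the kind a Maven repository serves
# for org.jenkins-ci.main:jenkins-core (versions here are examples).
METADATA = """<?xml version="1.0" encoding="UTF-8"?>
<metadata>
  <groupId>org.jenkins-ci.main</groupId>
  <artifactId>jenkins-core</artifactId>
  <versioning>
    <latest>2.417</latest>
    <release>2.417</release>
    <versions>
      <version>2.416</version>
      <version>2.417</version>
    </versions>
  </versioning>
</metadata>
"""

def release_is_burned(metadata_xml: str, candidate: str) -> bool:
    """True if the candidate version already appears in the published
    metadata, meaning the failed release must be re-run with an
    incremented version number."""
    root = ET.fromstring(metadata_xml)
    versions = [v.text for v in root.findall("./versioning/versions/version")]
    return candidate in versions

print(release_is_burned(METADATA, "2.418"))  # False: safe to restart as 2.418
```

With this sample metadata the candidate 2.418 is absent, which matches the conclusion in the meeting: nothing was pushed, so the same version number can be reused.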
and Stéphane will be there. So no problem there. Security releases: do we have announced security releases? We don't. Good news. And I don't see any major event for now. Anything else, folks, to add? Yes — DevOps World. We lost Mark. What? Mark, are you back, or was it only me? You froze, and now you're back. Oh, weird. So I'll be out on the 14th of September for the DevOps World tour — that's the next major event in my world, in September. Cool. Let's look back at the release: the release started and the version is 2.418. Perfect, so we can continue the meeting. What are the tasks we were able to finish? First, we helped a lot of users to release their plugins by unlocking their accounts on Artifactory. So we have at least two closed issues for that separately, and two or three users on the aggregated issue. More on this in the work in progress for that area. We were also able to fix a minor security request from the Jenkins security team: there were cases where, even if you were reaching some URLs over HTTPS, you were downgraded to HTTP before being redirected to HTTPS again. That has been fixed. It was, let's say, a nice one, because we first had to search in Fastly, then at the Ingress level, and in the end it was the backend container running NGINX. We might have the same issue on other NGINX-based web servers. The trick was to set a directive on NGINX named absolute_redirect, which tells NGINX, when it redirects from the URL without the trailing slash to the URL with the trailing slash, to use a relative path — so the HTTP client rebuilds the full URL itself, which avoids downgrading the scheme or changing the hostname. Is there any question on this one? No? Okay. And I don't remember — I think, Hervé, you took care of the fourth one. Yes, that was an account issue. Okay, let's remove the triage label. The triage label is for any incoming issue that we want to triage during the next meeting. Any question on the tasks that are done?
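For reference, the NGINX directive mentioned above could look like this. The server block is a generic placeholder, not the actual Jenkins backend configuration:

```nginx
# Sketch only: with absolute_redirect off, the Location header of the
# automatic "add trailing slash" redirect is relative (e.g. "/plugins/")
# instead of an absolute http://host/plugins/ URL, so the client keeps
# the original scheme (https) and hostname when following it.
server {
    listen 8080;
    server_name _;

    absolute_redirect off;   # emit relative redirects

    root /usr/share/nginx/html;
}
```

The key point is that the backend container never knows it sits behind TLS-terminating layers (Fastly, then the Ingress), so any absolute URL it builds would default to http:// and cause the downgrade described above.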
Did I forget something, or are we okay? Okay, let's continue with the work in progress. First, a status on the Artifactory issues: I've contacted JFrog, because right now there is no easy way to fix the issue globally. I tried to run a batch on a subset of users, and the setting was changed back two or three days later, so I'm not sure what is creating that mess. I received a response from JFrog just before the meeting, and it looks like they want to find a time on my agenda to run a direct session. So they don't have any clue what is happening — really good news. So sorry for the people who are blocked by that. If you are a plugin maintainer and are blocked by this, please try as much as possible to switch to the CD process, which doesn't suffer from this because it doesn't use a username and password: it uses an ephemeral API access token, which is way more secure. Is there any question about the Artifactory problem? Is something unclear? Am I frozen? No, you're not frozen, and it seems clear to me. Cool. We have a user who wants to reset their password — I think it's an Artifactory issue. Oh no. Mark, what is the status for the SDLC ticket? Oh, you sent them an email and we are waiting for a reply? No, actually this can be closed: they replied to me, they've solved it. Sorry, I should have closed that. May I ask you to just drop a note, since it was assigned to you? I will happily close it. Yes, I shouldn't even have asked you to close it — if I'm going to solve a ticket, I should absolutely close it. Cool, thanks Mark. Another Artifactory issue: the user has a problem, but they didn't even share their username or their email, so there is nothing we can do about that. I will keep this issue open for the milestone, and in one week, if there's no answer, I will close it as usual. Maven 3.9.4 is currently being rolled out to the platform. The Windows agent container images are still missing that Maven version, but that will be done later today.
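For plugin maintainers, the CD process mentioned above is the JEP-229 continuous-delivery setup, driven by a small GitHub Actions workflow. This is a rough sketch from memory — the reusable-workflow path and secret names are assumptions, so copy the canonical cd.yaml from the jenkins-infra documentation rather than this fragment:

```yaml
# Hedged sketch of a JEP-229 style .github/workflows/cd.yaml.
# Paths and secret names below are placeholders from memory.
name: cd
on:
  workflow_dispatch:
  check_run:
    types: [completed]

permissions:
  checks: read
  contents: write

jobs:
  maven-cd:
    uses: jenkins-infra/github-reusable-workflows/.github/workflows/maven-cd.yml@v1
    secrets:
      MAVEN_USERNAME: ${{ secrets.MAVEN_USERNAME }}
      MAVEN_TOKEN: ${{ secrets.MAVEN_TOKEN }}
```

The point made in the meeting is that this path releases with short-lived tokens provisioned for the repository, so a maintainer's personal Artifactory username/password is never involved.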
And then we will be able to send an announcement to the mailing list, and done. Hervé, your turn. We have updates.jenkins.io: the goal is to move the update center to our Azure public cluster. What's the status on your end? So I've created the additional resources to run a mirrorbits instance for this service: namely a storage account to store the updates.jenkins.io content, all hosted on an Azure file share mounted as a volume — so we can use it both from trusted and release and from this service — and use the mirroring service to redirect to an R2 bucket where the same content will be exposed as a public bucket. In addition to this storage account, I have also created a Redis database and the CNAME record. I'm currently working on the deployment on the publick8s cluster. Works for us. A mirrorbits installation, then. Okay. Just a note while we're there, because I think I forgot to comment on your pull request: could you take care, for the initial installation, to mount everything read-only? Unlike get.jenkins.io, for the update center we shouldn't have to write to the Azure file share, whether from Apache or from the other components. Cool. Nice work — that one is not an easy one, so nice job. That was also an opportunity to onboard a new contributor to the infrastructure. Even if it was a small contribution, it was still a nice one: keeping the httpd Docker image we use for the mirrorbits installations properly updated. So thanks for mentoring that new user. Yeah, we got a bunch of minor but still good fixes on the mirrorbits Helm chart and the associated setup. Anything else about that topic? Okay, thanks. Next topic is on me: define the infra.ci admin service accounts as code on each of our clusters. So, DigitalOcean: I'm almost there. Initially I worked on low-level Terraform.
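A mirrorbits instance of the kind Hervé describes is driven by a single configuration file. This is a rough sketch with placeholder paths, addresses, and bucket URL; the key names are as I recall them from the mirrorbits documentation, so verify against the project README:

```yaml
## Hedged sketch of a mirrorbits.conf for the update center origin.
Repository: /srv/updates          # local content, i.e. the Azure file share mount
ListenAddress: :8080
OutputMode: redirect              # answer requests with an HTTP redirect to a mirror
RedisAddress: redis.example.local:6379
Fallbacks:
    - URL: https://updates-bucket.example.r2.dev/   # e.g. the public R2 bucket
      CountryCode: us
      ContinentCode: na
```

With this shape, mirrorbits tracks mirror freshness in Redis and answers each client with a redirect to the geographically closest mirror, falling back to the listed bucket when no mirror qualifies.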
And now I have extracted a Terraform module, because it looks like the definition of that service account, its permissions, and the definition of a secret token is the same for every cluster. The only thing that changes is the Kubernetes connection you use to bootstrap the initial user; after that it's the same, including the outputs — a kubeconfig that we should put on infra.ci. The goal of that user is to have a technical account used by infra.ci builds for Kubernetes management, and only that. On our own machines we should keep using the cloud provider's command-line authentication, and the Jenkins controller should keep using a user restricted to a few namespaces. But that infra-ci-admin service account is a cluster-wide administrator. So for DigitalOcean I was able to run the module successfully: I tested it locally and it worked as expected, connection and kubeconfig included. But now I'm waiting for a new deployment with the doctl command line installed in the Terraform environment. It was missing because we were previously using a direct connection to the DigitalOcean API, but now we have to use doctl for the bootstrap of that user, so it has to be added to the Terraform environment. So, almost there. Azure is work in progress: I was able to get a successful terraform plan, so the next step will be to open the pull request and see if it deploys as usual, because the local test with my user hits the same problem — it might need a bootstrap. And then the last candidate will be AWS — still to do, but with the Terraform module it should be easier. Is there any question here? Okay. Next one is pkg.jenkins.io. I've created a dedicated issue, because there was some initial analysis work to do, like I did for the update center, but this time for PKG. So the goal is to move the pkg.jenkins.io website — that is, the origin, the backend sitting behind Fastly — to publick8s. That's the same idea as the updates.jenkins.io work.
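The per-cluster module described above could be sketched with the Terraform Kubernetes provider roughly as follows. Resource names and the namespace are assumptions; the provider itself is configured by the caller with each cluster's bootstrap credentials (doctl, az, or AWS, depending on the cloud):

```hcl
# Hedged sketch: one service account, a long-lived token secret, and a
# cluster-admin binding — the three pieces that are identical per cluster.
resource "kubernetes_service_account" "infra_ci_admin" {
  metadata {
    name      = "infra-ci-admin"
    namespace = "kube-system"
  }
}

resource "kubernetes_secret" "infra_ci_admin_token" {
  metadata {
    name      = "infra-ci-admin-token"
    namespace = "kube-system"
    annotations = {
      "kubernetes.io/service-account.name" = kubernetes_service_account.infra_ci_admin.metadata[0].name
    }
  }
  type = "kubernetes.io/service-account-token"
}

resource "kubernetes_cluster_role_binding" "infra_ci_admin" {
  metadata {
    name = "infra-ci-admin"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }
  subject {
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.infra_ci_admin.metadata[0].name
    namespace = "kube-system"
  }
}
```

The module's output would then assemble the kubeconfig for infra.ci from the cluster endpoint, the CA certificate, and the token stored in that secret.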
The first point is that it looks like partial work is already done: during the release process, much already happens inside Azure, and the number of tasks run on the remote machine is really low. But we still need an Apache server. I'm in the process of preparing a mirrorbits system with the same, let's say, motivation as the update center: we want to serve the package index through redirections, so we can serve China or any location in the world that is not close, in terms of network latency, to US East. Once done, that might require changing the way we use Fastly for those websites: we might want to use Fastly as a destination mirror instead of a CDN in front of the web service, because we don't want Fastly to cache the redirection — the redirection depends on the client, not on the server. So we might need a change of topology. Right now I propose that the first step be to deploy the Helm chart with mirrorbits, start with Apache only, and then we'll see if we can change the topology of the service. That analysis work was also a great occasion to rethink some choices that were made for distributing Jenkins packages and Jenkins plugins. Somehow we have a split: what is distributed through get.jenkins.io is the binaries, while the update center and pkg.jenkins.io only provide indexes — text files, basically. That has consequences, and some of them might be opportunities in the future to merge these elements: either having one distribution service for Jenkins packages and one distribution service for plugins, or the update center could be merged with the HPI plugin distribution, given the way we are using this redirection.
My guess is that this will depend on how R2 is used in Hervé's work, because, again, the goal is to be sure we can provide mirrors that we control — and the mirrors we control could also hold all the HPI files and all the package data, which could serve as the last-fallback mirrors. So there might be an opportunity to simplify the installation. Given the priorities we have now, that issue will be about installing mirrorbits here. I will try to test whether I can reuse the same Redis instance as the one Hervé created, but with a secondary database: a managed Redis instance in Azure can support up to 16 databases, and given that we pay per instance and per amount of requests, that would be a way to avoid spending too much money. If it works, that mirrorbits won't be used immediately — I will try to scale it to zero, and we'll only use the Apache server. Then the next step will be to patch the release process so it also updates data on that new service. Is there any question on this one? So: use mirrorbits, but scaled down — only the Apache file service at first. Okay, Stéphane, your turn, unless there is any question on PKG. By the way, PKG and updates are part of the Ubuntu 22.04 campaign: I've created this split issue and removed the Ubuntu 22.04 issue from the current milestone. As soon as we've moved both services, we can close the Ubuntu 22.04 issue. Stéphane, your turn. Yes — I have arm64 on publick8s. Yes, I did migrate jenkins.io, and the version count should go down. Oh, there's a checkbox — you missed the checkbox. Sorry. Can you also add a comment when you do that, saying "I did that", before checking the checkboxes? Because a message with only checkboxes is hard to follow. Oh, sorry. So there is a zh version. Yes: the Chinese (zh) version and the jenkins.io version, which are part of the same Helm chart, have been migrated to the arm64 node pool. Sorry.
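The shared-Redis idea mentioned above (one Azure Cache instance, two logical databases) would translate, in mirrorbits terms, to something like the fragment below. The RedisDB key name is my recollection of the mirrorbits option and the address is a placeholder, so check the mirrorbits configuration reference:

```yaml
# Hedged sketch: a second mirrorbits instance pointing at the same Azure
# Cache for Redis, but at a different logical database. Azure Cache, like
# stock Redis, exposes 16 logical databases (0-15) on a single instance.
RedisAddress: shared-redis.example.redis.cache.windows.net:6380
RedisDB: 1          # e.g. updates.jenkins.io on DB 0, pkg.jenkins.io on DB 1
```

Since the two services' mirror state never overlaps, separating them by logical database keeps the data isolated while paying for a single managed instance.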
And I'm working right now on the shared pipeline library that builds our images, to make it build multiple architectures during the same process — so we have both amd64 and arm64 for the same images and can migrate them easily afterwards. Cool. Could you check — there is a web service related to plugins.jenkins.io on the cluster, and I believe it's also using an NGINX image, so that one could also be migrated to arm64 next week. What do you think? Okay, of course. Yes, if it's NGINX that can be done. I will be out of the office until Thursday, so... that's okay. That could be migrated as of today. Any questions, anything else on that topic? So, Stéphane, you continue working on this one, is that correct? Yes, but maybe as a second task, because we may have to switch to the JDK 21 work. Oh, yes, true. That's okay. Perfect. So, worst case, we can put that on hold — on hold until the JDK 21 deployment is done. Cool, thanks. Any other question? Okay, next item: Matomo. So here — is it okay for you, for the upcoming milestone, to share the burden? Are you still okay to try to create the MySQL managed instance on the Azure cloud? I started to look into it: we have to decide on a virtual subnet and everything like that for this flexible server. Yes — use the same architectural decisions as what we did for the public PostgreSQL database. As far as I remember, you need to create a dedicated subnet for the flexible servers; they need full control of that subnet. I went for the flexible server, as the single server is deprecated and will be retired soon. Yes, so you need the flexible one, with its own subnet. Correct — a flexible instance to create. And then, on my side, I'm going to look at why the builds of the Docker image and Helm chart and so on are failing or not. My secret plan is to see if we can use an arm64 Docker image and Helm chart. Is that okay for you? Any question?
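Coming back to the multi-architecture image work Stéphane mentioned: with Docker's buildx plugin, a single invocation can produce one manifest covering both architectures. This sketch only assembles the command (the image name is a placeholder, and it assumes a buildx builder with QEMU emulation is already set up):

```shell
# Hedged sketch: build the docker buildx command line for a multi-arch image.
# $1 = comma-separated platform list, $2 = image tag.
build_cmd() {
  printf 'docker buildx build --platform %s --tag %s --push .' "$1" "$2"
}

# Print the command that would build and push one manifest covering
# both amd64 and arm64 in the same pipeline run.
build_cmd "linux/amd64,linux/arm64" "example.org/jenkins-infra/nginx:latest"
```

Running the printed command from a Dockerfile directory pushes a manifest list, so the same tag resolves to the right architecture on both the amd64 and arm64 node pools.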
Clarification? No? Okay. Next: the plugins site issue, the 502s. Hervé, can you give us a heads-up on this one? I've looked at the Fastly admin interface, and I saw that every three hours we get these errors. I'm wondering if Fastly is caching these errors and serving them again to the clients. Looking into this, I also saw a possible integration between Fastly and Datadog. So we have to investigate, see if we can find the wrongly cached error, and add monitoring. Okay — these are two tasks, and we have to understand the problem first. I mean, Datadog is a really good idea that will help, but it won't give you the cause; it will just say there is a problem, and you already saw it. So: continue to investigate, and as an improvement, integrate Fastly with Datadog. That's what I always say about all the fancy integrations with whatever monitoring tool: you already spotted it manually; next time the integration will just help us catch the issue earlier. But right now we have the issue and we need to fix it, and that's the limit of what Datadog or any tool can do here. That's why I want the one-time fix first, and then the integration — unless we split the burden again. I don't mind working on the analysis; if you need help, your call. Mark, can you give us an update on the Artifactory bandwidth reduction option? Yeah. So we have good news and bad news. The good news is that the bandwidth reduction worked in the builds that we ran: we saw, and still see to this day, that Maven Central is being read directly for certain repositories, so that's a plus. I see in the DigitalOcean logs that artifacts are still being read from the central repository, not from our cache — that's the plus side. On the minus side, there were some failure messages that have to be diagnosed and have not yet been. Okay, so the bad news still needs to be diagnosed. Do you need help on that topic, Mark? At this point I think it's okay; we'll discuss with JFrog later today, actually. Any other question?
Okay. Next: ATH builds commonly become unresponsive. Stéphane, that will be for you. You had a wonderful idea that we should use, now that you have played with the shared pipeline library for the past two days. Yes, exactly — now you are the king of Groovy syntax. So you should be able to propose a pull request, if it's okay for you, on the ATH Jenkinsfile, that would allow running on a spot instance during the first run; and if it has to retry, then it will switch to an on-demand instance. I have already set up the labels and the templates on ci.jenkins.io, so you have the perfect ground for starting on this one. That should be three to four lines of pipeline changes. Oh, don't tell me that — I'll feel bad if I end up with ten lines. I didn't say "only"; I said it should be around that. Yeah, each time you say it should be easy, I die a little inside. I haven't said easy; I haven't used any qualifier. I said it should be around three to four lines. Okay, we'll see. A solution — you just have to find the question. Is that okay for you, Stéphane? Yes, yes. We agree on the priority: first Java, then this one or the ARM work? This one is not really top priority, and if you have to be out some days we will take over. So: JDK and Java first, ARM next, and this one third. Okay. Right now there is no more problem on the ATH; it's working as expected. I haven't checked the billing yet — I will add a message later today or tomorrow on the impact of switching from full spot to on-demand. Next step: VMs — improve common names to avoid confusion between services. I've done nothing on that, sorry. Okay, no one is assigned, so by default I consider you won't have time, and if I have time I will take it. If I don't, it will stay in that gray area: nothing done, still to do. Next: disallow issue creation in the event project type. Mark, I might need help on this one; I'm not sure what kind of... Oh, Daniel did something.
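The spot-then-on-demand retry idea could look roughly like this in the ATH Jenkinsfile. This is a sketch, not the actual change: the agent labels are hypothetical, so use whatever labels were actually defined on ci.jenkins.io:

```groovy
// Sketch only: run the ATH on a cheap spot agent first; if that attempt
// fails (for example because the spot instance was reclaimed), the retry
// re-enters the block and picks an on-demand agent instead.
int attempt = 0
retry(2) {
    attempt++
    // Hypothetical labels — replace with the real ones on ci.jenkins.io.
    node(attempt == 1 ? 'linux-amd64-spot' : 'linux-amd64-ondemand') {
        sh './run-ath.sh'
    }
}
```

The whole change really is a handful of lines because the heavy lifting (agent templates, labels) already lives in the cloud configuration, as mentioned above.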
I have no idea what this is — I don't understand the issue at all. It's something in general administration, so I'm not sure. Yeah, but given Daniel's comment that he's done the switch, maybe it is in fact done. I'm going to ask Alex if it's okay and if the issue is closable. It looks like it's been... Okay, I'm going to ask Alex, who will close it, since he's the requester. Great. Cool, thanks. Should be closable. AWS: decrease costs for summer 2023. So that one is an epic. I propose to remove it from the milestone, since it only depends on the PKG and updates changes, and there is a subsequent issue right now in the backlog around S3 garbage collection. So if no one objects, I will consider that issue an epic and move it outside. Is that okay for everyone? Yes. Considered an epic, moved out of the milestones, like the Ubuntu 22.04 campaign. LF status page: redirect cached for too long. This is my task, and no progress on it. Okay — Daniel asked "please ask the LF to investigate", but it's an investigation where I'm still not clear on the net benefit. Okay. Remove IP restrictions on bounce, or migrate to VPN: still to be done. Again, it's about ending and finishing the cleanup of the former networks, and allowing trusted.ci to be protected by the VPN instead of IP restrictions. Okay. So let's have a look at the new issues. The biggest one is JDK 21. I believe we don't have an issue yet. Nope, we don't. Okay. So Mark, can you give us a quick explanation, and then Stéphane or I — or both of us — will have to create an issue to track JDK 21. Right. So we've got a plan that has been discussed with the board and with the officers, suggesting that by October 31 we would like to have a Jenkins weekly release that supports Java 21, which matches the mid-September release of Java 21 by the OpenJDK project. So in order to do that, we need Java 21 early-access bits available in the Jenkins infrastructure. Just like we have Java 8, 11, 17, and 19 available now, we need 21 available. Okay.
So we'll need help, because I wasn't able to find an easy way to get JDK 21. It looks like Temurin is providing some pre-releases in a rather hidden way. Do we have details or help in that area? So, that's the only way I know of to get 21 right now. Once they've released on September 19, we should be able to get it the usual way from Temurin. But to get it now, what I've had to do is download it from jdk.java.net, where they have a download for AMD64 Linux machines. They also have it for 64-bit Windows machines. They do not have it available from jdk.java.net for arm64, and I don't expect them to, nor for s390x or for PowerPC or for RISC-V. It's only available for AMD64, on Linux and Windows. I don't think they even have it from jdk.java.net for the macOS platforms running on M1. Let me double-check just to be sure. Yes — just OpenJDK, AMD64, for the next distribution. That might be an obstacle. However, I think there was a pre-release program access... which there isn't. So, Stéphane, you will have to check with Bruno on how we were able to locate the Temurin early releases — dated from the end of July — of the new Temurin JDK 21, because the challenge here will be: can we make sure we use the proper Linux and Intel capability for the JDK 21 templates, given that we also have arm64 agent capabilities? So, that part — I'm not understanding why we need it. I mean, if Temurin is available now, we should absolutely take it, I agree wholeheartedly; but if it's not available, I don't see why we can't just take the binaries from OpenJDK's early-access process. It's not officially available from the Temurin page, but there are GitHub repositories which are delivering the binary releases of pre-release candidates.
It's just that they are not complete: AMD64 is on one of the pre-releases, which is updated sometimes; sometimes you only have Windows, sometimes you only have arm64, sometimes you have all the Linux builds. It doesn't make sense to me, but they don't publish all the binary bits on each pre-release candidate. So that might need a bit of research, because we want to install this at least on Linux, ideally on the whole platform, and we might need some tricks to bypass the usual Temurin release resolution that we use — they use functions that provide the direct links, so we might need to cheat a little bit, Stéphane. Okay. Maybe a first step would be to not install it on the agents, and instead use the Jenkins tool capability: define a tool that would only run on an AMD64 Linux machine, and we can get started with that. Yeah — the version on jdk.java.net is dated August 3, so five days ago. Okay, yeah, I think that's the latest on Temurin too. Yeah, it's their internal build number 34. So, as you can see, I'm opening on screen the pre-releases from the nightly builds of the Temurin 21 distribution. And — oh, that one is complete. Finally. Oh, good. Okay. All right. So we could even be using Temurin builds then — even better. Okay, great. Yeah, because then maybe we won't have to change anything. Right. Well, Temurin builds are much better for us than OpenJDK builds, simply because we will eventually switch to Temurin — why not switch now? So we'll need the Packer images, the Windows container image, the Windows VMs, and the Linux container. We have the tool. That's a really good start. Question: do we replace the existing JDK 19? Do we remove it? I don't think we can replace or remove it, because we've got builds that reference it, and if we were to take it away we would surprise the consumers whose builds would then break. Yeah, we need to announce the end of life and prepare it.
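One way around the per-release gaps in pre-release binaries is the Adoptium API, whose "binary/latest" endpoint accepts an "ea" release type. This sketch only builds the URL; the endpoint layout is how I understand the API v3 to work, so verify it against the api.adoptium.net documentation before relying on it:

```shell
# Hedged sketch: construct an Adoptium API v3 URL for the latest
# early-access (ea) JDK binary of a given feature version.
# $1 = feature version (e.g. 21), $2 = os (linux/windows/mac), $3 = arch (x64/aarch64).
adoptium_ea_url() {
  printf 'https://api.adoptium.net/v3/binary/latest/%s/ea/%s/%s/jdk/hotspot/normal/eclipse' "$1" "$2" "$3"
}

# Feed the result to `curl -L -o jdk21-ea.tar.gz "$(adoptium_ea_url 21 linux x64)"`.
adoptium_ea_url 21 linux x64
```

Whether a given os/arch combination resolves still depends on what the latest EA build actually published, which matches the incompleteness observed above.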
Yeah, I mean, Java 19 is out of support now — it's no longer receiving patches — so we should properly deprecate it. I think this is a good excuse to go through the deprecation process and declare end of life with an orderly end. Yeah, but maybe 21 first, and then deprecating 19. Exactly, right — so you have to add 21 first. So, Stéphane, that's your new top priority. It will start by creating an issue explaining what we want to do; you can copy-paste from the JDK 19 one if you want. That's exactly what I had in mind. Perfect. Stéphane takes it. You can already remove the triage label and add it directly to the new milestone, the one for the 22nd. Is there something else to add? Tuesday, the 22nd of August. Oh, sorry — that's the date; I thought you meant the JDK. Okay, good try. Is there something else to add on the JDK 21 topic? No. Okay. I believe there is something I want to add to the upcoming milestone: using JDK 17 as the default JVM for controllers and agents. I'm willing to take it, but anyone interested is absolutely welcome to join. Okay: JDK 17 for controller plus agents. Great. Okay, so I'm adding this one directly to the new milestone, removing triage. And I'm adding it because we will have a two-week milestone. Finally — sorry — do you have a new topic you want to add, Hervé? Blue Ocean, eventually, right? Yeah, I saw some requests around Blue Ocean, and some exchanges between developers, which reminded me that I have an old issue to migrate the Blue Ocean CI to publick8s and run it there instead of on a dedicated controller somewhere. Okay, so I'm assigning it to you. And, Mark, could you help us out — or help me out — on that one: if we just turned off that dedicated controller, would it critically damage Olivier Lamy and his work on Blue Ocean? We want to allow him to keep a working setup. Okay, so it would be harmful to him; so we absolutely want to do the migration, not just disable it. Got it. Thanks.
If that's okay for everyone, I propose that you create a new top-level directory — an organization-scan folder named blueocean — that will eventually contain the builds from this one. The reason I'm asking is to avoid having to trigger a scan of the plugins, core, or tools folders, because a scan there could be an annoyance, and I believe you might need to iterate quickly. So if there is no objection, I propose that you create a top-level directory directly on ci.jenkins.io. Good. Another point: I don't know if the Blue Ocean CI is publishing artifacts — Docker images, artifacts to the repo. If it does, we will have to port only the non-publishing part to ci.jenkins.io, because we must not publish anything from ci.jenkins.io at all. So if they publish artifacts, then you need to start the discussion on the issue. We don't know which ones; I believe it might be the Docker image, and ideally that image shouldn't be built anymore, as per the platform SIG. If you find anything, don't hesitate to challenge the idea that these artifacts should be deployed at all, because it should only be a set of plugins that are already built, either with CD or manually. The reason is that these are public controllers, so no credentials — we should consider these controllers under attack every day. So anything that could publish is dangerous. I would have used infra.ci if I had to publish anything, but first the question is: do we need to release, and why? It's good to do some cleaning. Exactly. No worries: if it tries to deploy anything from ci.jenkins.io, it will fail, because it won't find a credential. So you can start by migrating, and if you see a build failure during what would be the deployment phase, then you can ask the question. Cool, thanks. So: a top-level folder on ci.jenkins.io — it could be an organization scan, but I believe a folder will be easier. No deployment from ci.jenkins.io; we have to challenge any deployment of any artifact currently done on the Blue Ocean CI. Let's ask Olivier Lamy. Nice, thanks Hervé.
Something else to add on that topic? Do you have other topics you want to bring up, or should we go into the suffering of triage? Or we could finish a bit earlier. Yes — so, for certs: remove and remount the legacy Azure overlapping virtual network. That one will be the last mile, so I keep it in triage for next time. Same for the VPN, because you need to delete the VPN in order to remove the network. However, I'm going to add "migrate certs.ci.jenkins.io off the legacy network" in the upcoming days — the same as what we did with CI and trusted. This issue means at least changing the network, and I propose also changing the virtual machine used by certs.ci. We know from experience that we only need to keep the Jenkins home, and we will keep the whole machine only if we have system-level, let's say, customizations — because I believe the Jenkins security team might have played with some not-as-code elements. But the goal is to have a generation-2 virtual machine, which can be way faster. Okay. And what do we have in the triage area? Kubernetes 1.26: for later. Move the public IP to a distinct resource group: for later. Okay, I don't see anything else interesting — that's already a lot of tasks. So, is there something else you want to add, or can we proceed? We will finish three minutes early, but we won't be late. Okay, I'm stopping this. Yes? Sorry? I wanted to speak about Jitsi. Yes. We've tried the Jitsi conferencing with its integrated elements — the desktop tools we are using to discuss on Matrix/IRC. It works great. And I was thinking we might try it for the infra meeting too. It could be integrated directly in the Jenkins infra channel, for example, so everyone can join it easily: there is no login needed, no account. I still have to try it on my own account, but we can use a YouTube live stream to record the meetings. There are a lot of options, quite nice, comparing it with Zoom.
It has some benefits, I think. We can adjust the quality of the stream more easily, etc. So yeah, I'd like to test a bit more and then try a meeting on Jitsi instead of Zoom. Later, depending on how it works, it could be extended to other SIGs, maybe. It'll be easier than Zoom, especially with the current news about Zoom and their terms of service. Yes, Zoom is going to use AI to train their own LLM based on the content of the meetings. Yes, and that's already the case right now. Yes, that's already the case, and they don't care whether you accept or not. They have added a set of non-privacy-respectful elements about the meetings. Also, socially speaking, we are an open company and we want people to be at ease, and Zoom is currently calling on their employees to go back to working from the office. Yes, it's not a joke, I'm dead serious. So that starts to be enough information, without even mentioning their past with Chinese web security spying on the meetings. So yeah, I think I agree with Hervé: if he is able to provide the same feature set for these meetings that we have once a week, including recording, that could be a great, let's say, privacy- and user-respectful tool instead. The quality was good on what we tried. I propose that we try recording some private meeting with Jitsi; we have one on our private channel, Hervé. We have two prerequisites. The one that Hervé mentioned needs to be validated by the board, Mark, because it requires a YouTube account. There is a feature named live streaming, I believe. Am I correct, Hervé? Yes, you have to activate live streaming in the YouTube account, and there is a 24-hour waiting period before it is activated. The alternative could be using something like OBS Studio, or having someone record locally and then upload afterwards.
But we already have a Jenkins CI YouTube account, and it already has playlists. I've got permission to upload to it, and I believe you do as well, Damien, right? Yes. So is there a reason we couldn't just use that YouTube account as the live streaming location? I don't want to make that decision unilaterally, because for live streaming the terms of usage are a bit different than for just uploading. I don't mind, it's just that I don't want to take that decision alone, or just as the infra SIG. That's why I want a formal validation, in case there is an objection or something we didn't think about. But I don't see any problem with doing that. Okay. So the next board meeting is in two weeks. Okay, so we've got time to prepare: put an agenda item on the board meeting agenda, and the board meeting notes, as an experiment, are stored on HackMD right now. Nice. The thing I like about using YouTube for live streaming is that it cuts out the separate upload step later. Right, I think it's very desirable. I love the idea of live streaming it, so long as we understand that we are being live streamed, and we agree that if someone shares something sensitive, it has been live streamed and we will have to accept that it was shared. Yeah, good point. So don't screen share your credentials during a live stream. You never know, with the quality. Right, because we hope that image manipulation programs will not be able to improve the quality enough to recover them, at least. I think all the police series on television were able to zoom in on whatever bunch of pixels, immediately, and then it's flawlessly perfect. In fact, even if they're blanked out, they can do that. Of course, those are the best police shows. Right. I don't know if you saw it, but there is a project to de-pixelate screenshots: if you know the font, you give it an example of the font and the pixelated image, and it can export the text. It's not really readable, but you can decipher it.
I don't think it's a good solution yet. I have students working on a PhD who continue the work I did a few years ago with the first ChatGPT, where they try to insert weird data into the training models so that you can influence the way the system works, and they are using that one as an example, manipulating real-life data. So it starts to detect writing that is not the real writing but writing based on the training datasets. That's a really impressive program. I'm not sure the program I'm speaking of was using an LLM or anything like that; I'm not sure. Okay. So I did have a question before we end, and it's actually related to Jenkins infra. Damien, could you open up the weekly release process that's running and confirm it's still running? Yes. Just for our sake, to see that yes, it's still operational. Okay. And while I'm opening it, just one last thing: I think in order to use Jitsi for the Jenkins infra IRC channel, we might have to migrate from IRC to whatever the bridged Matrix room is. Otherwise, I'm not sure the Jitsi link will work. I'm not sure, that has to be confirmed; we might need to have that requirement, or whatever they propose. Hervé, you're responsible for checking. And I don't mind if we try the YouTube recording with either your account or mine; we just need a way to know what features we need to test. The process is still running. Good, very good. That's all I wanted to see. Perfect. It's preparing the release, so there is still one hour and 15 minutes left. Excellent, thank you, that's all I needed. Thanks very much. Thanks, folks. That means, Stéphane, you will update infra.ci only tomorrow with the weekly release? Yes. Anything else? Okay, I'm stopping the recording. See you in two weeks. Bye.
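The "is the release still running?" check above can also be scripted against the standard Jenkins JSON API (a build's /api/json payload exposes a boolean building flag while the build runs, and a result field once it finishes). The helper below is a hypothetical sketch for summarizing such a payload, not part of the actual Jenkins infra tooling:

```python
def build_status(build_json: dict) -> str:
    """Summarize a Jenkins build's state from its /api/json payload.

    While a build runs, 'building' is true and 'result' is absent;
    on completion, 'result' holds SUCCESS, FAILURE, ABORTED, etc.
    """
    if build_json.get("building"):
        return f"#{build_json['number']} still running"
    return f"#{build_json['number']} finished: {build_json.get('result')}"


# Illustrative payloads, as would be fetched from
# https://<controller>/job/<name>/lastBuild/api/json
sample_running = {"number": 2418, "building": True}
sample_done = {"number": 2417, "building": False, "result": "SUCCESS"}

print(build_status(sample_running))  # prints "#2418 still running"
print(build_status(sample_done))     # prints "#2417 finished: SUCCESS"
```

In practice the payload would come from an authenticated HTTP GET against the controller; only the two fields shown are needed for this check.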