Here we go. So hello everyone, welcome to the Jenkins infrastructure public weekly meeting. Today is the 23rd of August 2022. Hervé, may I ask you to share the collaborative notes linked inside the chat, please? Most of us should have access to the notes there; do not hesitate to help me. We have a guest today, Bob, and I propose we get started with you before we proceed with the rest. So let me take notes. Bob Miles, may I let you introduce yourself and explain a bit why you have joined us today?

Yeah, absolutely, thanks for having me. My name is Bob Miles. I'm the founder of Salad — Salad.com is our domain — and we're building a distributed cloud infrastructure product, starting with a managed container service. As a very early-stage startup we're exploring the different workloads, use cases and jobs we might be able to support with our infrastructure. What makes it unique is that instead of being powered by a purpose-built data center, we've got tens of thousands of nodes provided by gamers, with their PCs and GPUs providing the compute. We're very early stage, just bringing an MVP to market, and I'm reaching out to explore partnerships and provide compute in return for feedback on the new product we're building. Here at Jenkins, I know some of the work you do with batch jobs and CI/CD fits the characteristics of what might be compatible with our infrastructure. Obviously there's plenty of nuance to the infrastructure, and I've probably overstayed my welcome with the introduction, but I'm looking forward to learning whether we can provide some compute in return for some feedback. That's really my agenda.

Okay, thanks for the presentation. Just a quick turn of the table before we continue on that subject. Today we have, after Bob: myself, Damien Duportal, the Jenkins infrastructure elected officer — the community elected me as the person technically in charge of the platform, and I can speak on behalf of the community and of the Jenkins board. Then we have Hervé Le Meur — raise your hand, Hervé — who works with me as an SRE: same job, same role, same area. Stéphane Merle as well. The three of us are part of the community team at CloudBees, so we work full time on the Jenkins infrastructure; some other members are outside CloudBees and do the same work, but not full time. Finally we have Bruno Verachten. Bruno also works for the community team at CloudBees; he's a developer advocate for the Jenkins project. And we just had Mark Waite join us. Mark is our manager, a member of the community and an elected member of the Jenkins board. Hello, Mark.

So thanks for the introduction about Salad.com, Bob. If I understand correctly, your proposal is to ask whether the Jenkins project would try Salad.com's product and give you feedback on how it works, the product and everything. Is that correct?

Yeah. At this stage we're still learning which use cases and workloads are viable for our network, so I think you characterized it well. It's not ready today, but in the next month or two we're looking for people who are willing to kick the tires, use some of the compute and provide us feedback. We have more compute than we know what to do with right now, so I thought you would be a good community to explore whether you've got workloads that might be of interest to distribute.
So yeah, I'm happy to share a lot more as well, but I think you've characterized it well.

So as you explained, it means the compute of the network is not running on servers somewhere in a data center, but on personal machines that people lend to the overall network. Is my understanding correct?

Yeah, that's correct. If I may, I'll just take a minute to explain how it works. We've got an open source client that PC gamers download to their PCs, and they essentially opt in to provide us resources: compute, bandwidth and storage. We've been around for four years now, and we've primarily been monetizing the network with proof of work — crypto mining, you've all heard of that. We incentivize everyone on the network through a pricing or incentive model where they get games, gift cards and subscriptions, so it abstracts away the workload. That means we can distribute any workload across the network. After four years tapping into crypto, we now have these tens of thousands of gamers with their PCs and GPUs. We're using WSL 2, the Windows Subsystem for Linux, to standardize the compute environment across the network, and we're building a managed container service to support containerized workloads. Obviously it's not a purpose-built data center, as you described, with racks of servers optimized for certain workloads; there's a bit of nuance in that we've got consumer broadband to actually distribute jobs. And, as a bunch of engineers, you'll appreciate that if someone has physical access to the device, we can't provide the level of security that, say, AWS can. That's why we're leaning into more open-source-type jobs, those without really high sensitivity in their data sets. In a nutshell, we're early stage, bringing this MVP managed container service to market, and I'm exploring who the initial use cases might be, where perhaps we can trade free compute in return for feedback.

Okay, clear and high level for me. Sorry, Mark, go ahead.

So Bob, I had a question or two. It sounds like your compute model is something like the model used previously by the SETI project, where donors are willing to offer compute to a central resource: they receive a task, they do the compute, and the results of the task are sent back to the central resource. Is that a safe description or a safe analogy?

Yeah, it is, Mark. And just to build on that: SETI@home or Folding@home have a very different incentive model — you're kind of paid in warm fuzzies; it's all altruistic, charitable actors sharing their compute, so there's no reason for nefarious actors to defraud SETI@home. Here we've tapped into proof of work, which is trustless. It allows us to onboard nodes that have to spend 10,000 minutes contributing to the network before we characterize them as trustworthy or not — that's one data point in the algorithm we use to establish the reputation of a node. Then we unlock those nodes that have been on the network and built that reputation for different workloads, which is the product we're building today. So yes, same model.
But we're still about six weeks out from actually having an MVP for people to kick the tires on. So yes, that's the perfect analogy, actually, just a different incentive model. Thanks.

Okay, thanks very much. So ultimately you've got what we would consider untrusted compute that people are willing to donate, and I think that will be a different mental model for us: finding the tasks we can run on what we would classify as untrusted compute. I know you've developed a trust model, but for us it would look like untrusted compute. We've actually already got isolation levels in some of our systems that intentionally treat compute as untrusted, but this is another level of untrusted beyond what we already have. So I think we'd need some time to consider it, to ponder it, and see in which places we might apply this effectively and in which places the risks would outweigh the benefits of the additional compute you're offering. One other question from me: what is the typical duration of a contribution from one of these workers? We have jobs that take 30 minutes to execute on a multi-CPU box on one of the cloud providers. Is that way outside the range of what you're used to? Do you typically give them a one-minute task or a five-second task? How does your allocation work in terms of the compute time they donate?

There's a very wide spectrum. The one thing we can't provide is on-demand, six-nines-of-uptime availability, as you can imagine. But it's really interesting looking at the characteristics of the different nodes on the network. There is a good cohort, say 10 to 20 percent, who actually run our software 24/7. We characterize those individuals as gamers who perhaps have an extra, older-generation PC that they just leave running the whole time in order to generate, say, a Discord subscription or whatnot from the value of that device. So we certainly have devices with respectable uptime, and we can certainly accommodate 30-minute jobs. And the way we're designing the system, if a job does get interrupted or terminated, we can fail over to another node on the network and see that container execute through. So 30 minutes is definitely within the realm. One of the fun things we're working on is gamification: if someone is about to kill our software in the midst of a job, we can encourage them to give us five minutes to either take a snapshot or finish out the job. So that's another bit of nuance of our infrastructure you've stumbled upon there. Thank you.

Thanks very much. It sounds like in the future we'll have more conversations as you get closer, and we need to have some conversations within the Jenkins project about things that might or might not fit. Thanks.

I might have some questions, because I wasn't able to find any information about what kind of isolation is provided for the workload on these machines, and what kind of encryption is used to ensure that the owner of the underlying machine isn't able to peek at anything inside the workload — because a container is neither encrypted nor isolated.
And the third question I have for you is: do you have metrics about the electrical consumption of all this?

Yeah — so let me step back, I missed the first question, sorry, and then you can re-ask it. When it comes to the suppliers, the people running our software — in our Discord community there are 50,000 people, mostly gamers — we're very transparent that, hey, there is a catch to running Salad: it costs electricity. If you're getting rewarded for your compute cycles and there's no catch, then obviously something's wrong. So it's almost a marketing line for us to say there is a catch: you've got to maintain the cooling system of your PC, you've got to be aware of your electricity usage. It's fully opt-in and the community is very aware of that; we publish it all through the FAQs. We leave the decision about the economic rationality of running our software to the supplier: the onus is on them to decide whether they want to supply their compute resources or not.

On the second question, around the level of encryption: we're not employing anything like fully homomorphic encryption at this stage, beyond the compute environment itself. What we do have, outside of the virtual machine we spin up, is a bit like the gaming industry: certain watchers, as part of this reputation system, to see whether the host is running any nefarious software or intercepting any traffic from that container. That feeds into the reputation system we have for the nodes on the network, along with onboarding new nodes in that trustless environment where you only get paid through proof of work. There are also other signals we can use: we're working towards integrating sift.com, which provides a rating of the trustworthiness of an individual, and we're also working towards a Salad card, where you turn that latent value of your PC into essentially a credit or debit card you can spend online, and as part of that process there will be a KYC — kind of the equivalent of a "know your provider" for us. So there are multiple levels that will minimize fraud, but ultimately we do not recommend the platform for anything that has sensitive datasets. We're really focused on the open source community, on workloads, datasets and code that are going to be made public anyway. We feel that's our initial target market for our resources. And I'm sorry, I missed your first question.

Oh, you're on mute. Yeah, sorry, thanks. Sorry, I'm not a native English speaker so I might have missed something. Does that mean — I'm not sure I understand — there is no encryption, but it's compensated by the different layers of trust indicators you have put in your model. Is my understanding correct?

That's correct, yeah. Our head of engineering would be the first to say that because we don't have physical access to the machine, we cannot provide an SLA guaranteeing the security of the workload on those machines. So our party line is: do not distribute sensitive datasets, do not distribute secrets, do not distribute anything highly sensitive. I don't see a future for us supporting, say, HIPAA-compliant sensitive medical data workloads; that's just not in our future. We're targeting a very different type of customer who doesn't have those security concerns.
Okay, because the thing is that even without going to the hardcore medical or banking use cases, we have a simple example. All the data for the Jenkins plugins that are built and tested is public. However, if you want to be able to git clone the repositories without hitting the API rate limit of github.com, you need a key, and the whole CI system usually needs some kind of key to identify who is fetching the data. That key is sensitive, because if anyone gets it they can reuse it for whatever use case they want. It's a credential, a secret, which is sensitive by nature, even though the data itself is public and reachable by everyone. In AAA terms it's an accounting issue: not authentication, not authorization, but accounting, because of the API rate limit. GitHub puts rate limits in place precisely so that their service is not destroyed, same for Docker Hub; if they didn't enforce rate limits, a network like yours could overwhelm their service, hence the requirement for a token (a short illustration follows after the announcements). And part of our workloads are in that area: the data is not sensitive, but it still cannot run on a fully untrusted node. See what I mean?

Yeah, I've jotted that down. I know Daniel and Kyle, our head of product and head of engineering, have looked at the management of secrets and keys, and I know they had an answer to it, but I'd be lying if I told you what it was right now. So I'll have to circle back; perhaps I'll bring them along to a future meeting, because I know this question will come up multiple times. I'd love for them to join, perhaps next week as well.

Okay. And my last question was about the workload isolation — more out of curiosity than for security. Is it virtualization? Is it raw containers on the machines? What kind of technology is used there?

Yeah, so the vast majority of machines on our network run Windows, and now, shipping natively with Windows, we've got WSL 2, which gives us the ability to tap into the GPU and 3D acceleration as well. So we're using the Windows Subsystem for Linux to spin up a Linux environment, and that's the compute environment in which we run these containers. That's standardized across the network, despite the very heterogeneous hardware that powers it.

Sorry to interrupt: do you have any real Linux machines as well? Because gaming on Linux is a thing in 2022. Does your product work directly on Linux?

Yeah — because our software is open source, we actually had a community member cut a build for Linux. Off the top of my head I don't have the numbers of how many nodes on the network run Linux versus Windows; same with Mac, one of the community members created a version of Salad for Mac as well. But the only one we officially support is Windows right now, because it's the vast majority of the network.

Is there a way in the application to specify which kind of nodes we want to run the job on? For example, if we have some jobs that need a powerful machine, can we specify that when we send the job to build?

Yeah, this is something we're working through at the moment. We're lucky in that we're standing on the shoulders of giants, in a sense: we're essentially copying the pricing model of the managed container services of the hyperscale clouds.
And right now, the resource selection you can make is how many vCPUs and how much RAM you're looking for — what will that container consume. One of the things we're struggling with and thinking through right now is how to establish brackets for GPU resources. In the cloud it's easy because there are like three or four — the K80, A100, V100 — a set number of GPUs. For us, there's a list of five dozen different GPUs that make up the majority of the network. How do we let individuals select which GPU type they want? What dependencies do they need — do you need CUDA for your workload? And how do we provide that pass-through support from the container, from that Linux VM? All of these things we're working through at the moment; these are questions we're building towards having answers for. But hopefully that answers your question.

Yes, thank you. Okay, so that gives a clear idea of what it does and what it does not do — at least for me it clarifies things. As Mark said, we have to evaluate what kind of workloads could possibly run on such a model, given that most of our content is open source, and evaluate the risk with the security team as well. Is it okay if we continue communicating on the community forum where you posted your message a few weeks ago? I think it was three weeks ago?

Yeah, absolutely. And I'll just finish by saying I really appreciate the questions and your time and consideration. I look forward to continuing the communication on the forum for sure. Thank you.

Thank you very much, Bob. Is there any last question or feedback from anyone? Nope. Okay, so we'll continue with our usual meeting then. Thanks a lot, Bob. You're welcome to stay if you want to follow, or to leave — it's a public meeting, so it's your choice.

I'll stick around. Thanks.

Thanks. Okay, so let's get started with the announcements. Just a reminder, folks: I'll be on PTO again — these French people. Friday will be my last day, then I will be unavailable until the 7th, which means I need you, folks, to take care of the meetings — notes and recording — for the next two weekly meetings. I'll let you decide who is going to take charge and drive them. Chaos for everyone. No, that's my gift. Sorry, Hervé. Are any of you on PTO during the two upcoming weeks?

I'm out next Tuesday. I'll be just arriving back from Alaska on a grandbaby visit, so I might actually be available, but I'll probably be sleepy after multiple hours in an airplane; I don't think I should attend a meeting trying to run it. Fair. So it's the 25th, is that correct? Correct. Okay. Are there other PTOs?

Next announcement: we have a security advisory today, on plugins only. Damien, has it been published? I haven't seen it, and I'm not seeing the update center updated yet, so I think it's at least in progress — and saying "in progress" is certainly accurate; I've not seen the published email from the security team yet. Okay, I saw that the first part, on the open source plugins, was okay, so maybe it's in progress. Let's wait for it to finish; it can usually take a few hours — time for the US to wake up and get ready to start. So that's plugins only: low impact for us, no action expected from the team. Is that okay? Any other question on security topics? Okay.
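A quick aside on the GitHub rate limit point raised earlier in the discussion with Bob: the data is public, but anonymous API calls are capped per IP, which is why the CI system needs a token even for read-only access to public repositories. A minimal, generic illustration (the token variable name is a placeholder):

```bash
# Anonymous calls to the GitHub REST API are limited to 60 requests/hour per IP;
# authenticated calls get 5,000 requests/hour, which CI at scale relies on.
curl -s https://api.github.com/rate_limit

# Same check with a token (a read-only scope is enough for public repositories).
curl -s -H "Authorization: Bearer ${GITHUB_TOKEN}" https://api.github.com/rate_limit
```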
We have a weekly core release, like every week. It sounds like there was no issue during the build or the packaging. The Docker image has been published, and we should have pull requests to update our Docker images in the upcoming minutes. I don't remember the number of the release, and I assume we have the checklist. — 2.365, and the checklist is in progress, not yet complete, but everything looks good: there have been no failures on the checklist, the job completed successfully, the changelog has been merged but is not yet visible to users, and the Docker image is confirmed available. So all sorts of positive. — Okay. Just a note about security: everything looks good as well, I didn't see any issue, so yes, we can proceed. Any other announcement, folks? None for me. Okay, let's proceed. Next weekly release: next Tuesday, as usual. The next LTS is somewhere in September, I heard, Mark? — September 7, 2.361.1. — Nice, I will be back from PTO to work on that one. Very good. — And that is the first Jenkins release that will require at least Java 11; Java 8 is no longer supported. That's the first Jenkins LTS dropping Java 8. — Yep. Nice. The security release is today. Do we have any other major event in the upcoming weeks? Nope. Okay. Is it okay to proceed to the usual operational overview? Cool.

Four issues were closed last week. Earlier this morning we had an issue with the update center that was caused by me: when we refactored the Puppet configuration, we ended up providing the same SSH host key to both agents. The good point is that the controllers refuse to connect to an agent to run builds if the SSH host keys do not match, which is a recent behavior change in the SSH build agents and Git plugins; so that demonstrates the check works when there is a mismatch. It was only one configuration line away from being fixed. Thanks Alex for raising that issue, in particular before the advisory, since the fix helps plugin releases. Around 10 to 12 hours of plugin releases had not been processed, and they were then processed all at once, so everything is okay: no failures, and the builds are flowing again.

We also fixed an issue that took some time: the JDK 8 Docker images for the previous LTS weren't built, because we weren't strict enough about the process in charge of building these images. The images for the previous LTS should have been built not from the master branch but from their own stable branch of the jenkinsci/docker repository. So we made a patch, which should only last for the duration of that stable line, that builds only the JDK 8 images and copies, as-is, the JDK 17 images published by the manual job to the former JDK 17 preview tags. The outcome is that users of that LTS line using the JDK 8 or JDK 17 preview tags should now be able to use the latest patch versions. Thanks to everyone involved, because that was not an easy one. There has also been a proposal by Basil Crow about the way we could manage this repository in the future, which is really interesting; I think it's worth discussing and trying. The goal would be to follow the same model as the packaging and release system: the master branch tracks the weekly, where contributors' changes are merged if approved, and then maintainers of the repository backport to the stable lines — at least the two most recent ones for LTS — eventually in an automated way. That would avoid such issues and would ensure the whole community reviews each change to be backported.
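To make the proposed model concrete, here is a minimal sketch of what a maintainer backport to a stable branch could look like; the branch, remote and commit names are placeholders, not the repository's actual conventions:

```bash
# The contributor change is merged on master first (the weekly line).
# A maintainer then backports it to the stable branch tracking the LTS line.
git clone https://github.com/jenkinsci/docker.git
cd docker
git switch stable-lts              # placeholder: the LTS stable branch
git switch -c backport-my-fix      # placeholder working branch
git cherry-pick 0123abc            # placeholder: the approved commit from master
git push fork backport-my-fix      # then open a PR targeting the stable branch for review
```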
The underlying reason is that the latest OpenJDK minor patch update we shipped broke some things: it backported cgroup v2 support, so memory limits now work on recent host systems, but it also created memory issues with some of the Jenkins plugins. So it's rather a question of whether we should merge these updates right away on the LTS line or not. There is no doubt we had to do it for the weekly; for the LTS, however, it created, let's say, some mayhem for LTS users, which we don't want. That's a community discussion to get started, and it's important for us to understand the concern and what issues it could cause for us as well.

There was also a misdirected issue that we redirected to the correct forum. And finally, work around the accounts.jenkins.io web application, where we had, let's say, old instructions around spam that are not valid anymore, and some users were illegitimately blocked as spammers by the system. Thanks Daniel and Wadeck for updating the image, and thanks Hervé for helping me automate that application. We had not updated that application for months, so the goal was to bring it into our automation system. As a reminder, that application should die soon in favor of, say, a Keycloak system or another system that Gavin shared with us, because it's an old Java application and it's not rock solid enough.

Now, work in progress, unless there are questions on these topics. — A little addition: I've made a patch to the EC2 plugin to be able to use a custom Java path. — Absolutely, thanks for that contribution. I don't know if it's merged yet, but it has been tested by us successfully using the incrementals system, so great work on being able to specify a custom Java bin path. That one is foundational for the upcoming JDK 17 updates, because we need to be able, at any time, to separate which JDK is used to run the agents from which JDK is provided by default to developers for them to build against a specific version. For EC2 virtual machine agents, the option to provide a custom Java version — which should be the same as the controller's — wasn't present yet: it relied on some hackish way, or on the default Java installation, and since we have at least three JDKs installed on our machines, that can be a problem. Thanks for stepping up on this one; I hope it will be released soon. Are there other tasks finished last week that I could have forgotten and that are not tracked by an issue?

Let's continue with the work in progress; I'm taking them in the order of the notes. We had a new issue about an incorrect or missing Maven settings file, raised with James Nord. The first part of the issue, which was clear to me, is that we thought there was a difference between the Windows and Linux templates for Maven builds; that's not the case. Then there is a philosophical discussion about whether we should add the Maven repositories per pom.xml for each plugin, or define them once and for all at the system level. And finally it seems James is also asking us to use a catch-all mirror for Maven dependencies, so we control where the dependencies come from; a sketch of what such a mirror looks like follows below. Right now, as a reminder, when you build a Maven project on ci.jenkins.io, the dependencies are downloaded from a bunch of different servers. We tend to store at least the Jenkins dependencies inside repo.jenkins-ci.org, the JFrog Artifactory system, but other dependencies can be downloaded from other places.
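As a side note on the memory-limit point above: a container-aware JDK derives its default heap size from the cgroup limit, which is easy to check. A generic sketch using a public Temurin image (not our actual agent images):

```bash
# Inside a memory-limited container, a container-aware JDK reads the cgroup limit
# (cgroup v2 hosts need a recent enough JDK update for this to work).
docker run --rm -m 512m eclipse-temurin:11 java -XshowSettings:system -version
# The "Operating System Metrics" output should report a 512 MiB memory limit.
```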
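For reference, this is roughly what a catch-all mirror looks like in a Maven settings.xml — a minimal sketch with a placeholder URL, not the actual infrastructure endpoint:

```bash
# Minimal sketch: route every Maven repository through a single mirror.
# URL and file location are placeholders, not the real setup.
cat > "$HOME/.m2/settings.xml" <<'EOF'
<settings>
  <mirrors>
    <mirror>
      <id>catch-all-mirror</id>
      <name>Catch-all mirror for every repository</name>
      <url>https://artifact-proxy.example.org/maven/</url>
      <mirrorOf>*</mirrorOf>
    </mirror>
  </mirrors>
</settings>
EOF
```

With `mirrorOf` set to `*`, every repository declared in a pom.xml or profile is resolved through that single endpoint, which is what gives control over where dependencies actually come from.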
The goal on ci.jenkins.io is to let developers not be blocked by security systems, because ci.jenkins.io does not publish or deploy anything. These are not sensitive workloads, so even a supply-chain attack vector there would have no lasting consequence: all agents are ephemeral, so nothing can be persisted, and the exposure is limited to, let's say, the intrinsic usage of the controller. So the idea is that this issue might be merged into the one Hervé is currently working on about the proxy, because if we set up the proxy for all agents, we will finally be able to control which backend repositories are used. That part will be easy — it's only NGINX configuration — but I've asked James to clarify what he wants the infrastructure to provide, so we can evaluate the amount of work and the consequences. The main risk I see is that it could increase the bandwidth we consume from the JFrog Artifactory repository, which is already an issue today that we need to reduce. That's the risk, but James disagrees, so maybe I misunderstood something and there are some magical Maven things I don't know about; that's why I asked him to clarify, because I think he has a good set of ideas that could help us in that area. Any questions, anything I might have missed, or clarifications to make? Okay.

Next one: a user told us they are asked for a captcha code too many times. I'm flagging this one as a weird issue, and we should be careful with it, because the GitHub user was created today, yet there is already an account with some data that hasn't been used in some time. I wasn't able to understand exactly what happened, so I might rely on the security team for a second pair of eyes, or eventually ask the user to create a new handle. Mark, do you remember if the captcha blocking is only temporary, and whether it is by IP? — I don't know the details of the blocking the captcha does. — Okay, so we'll have to read the code of the accounts application. I propose that for this one, like the previous one, we keep it somewhere other than the next milestone and wait for feedback from the security team. Is that okay for everyone? Wait for security feedback.

Next one is publishing the acceptance test harness: adding a container image build, test and deploy on infra.ci for the Jenkins Acceptance Test Harness. Almost there, but it requires us to handle a case we never had before: that image lives under the jenkins namespace on Docker Hub, not under the infra one, so we need to adapt all the components to support that, in particular the credentials and the GitHub checks. The team is working with me on that one. We are actually discussing the GitHub checks, which is, let's say, some kind of nitpicking; the only remaining thing will be trying the deployment with a tag — which can only be tried when you create a tag on the repository — to see whether the full chain of credentials works, before closing. Any question on this one? Okay.

Stéphane, may I let you give a status update about collecting Datadog metrics for the ephemeral VM agents, so we can provide a dashboard to developers to see what went wrong during their build if the agent crashes or is OOM-killed? — Yes, that's a work in progress. For now I tried manually on a VM on Azure, and I'm in the process of adding it to our controllers as code; still a work in progress. — Okay, and the next step is EC2 virtual machines, is that correct? — Yes, first Azure and then EC2 VMs. — Okay, thanks Stéphane. — You're welcome.
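As an illustration of what that could look like on an ephemeral VM, here is a generic sketch driven from instance user-data, assuming the standard Datadog Agent install script and a placeholder API key; this is not necessarily how Stéphane is implementing it, and the script URL should be double-checked against Datadog's documentation:

```bash
#!/bin/bash
# Ephemeral agent bootstrap sketch: install the Datadog agent so host metrics
# (CPU, memory, OOM events) are reported while the VM is alive.
# The API key would come from a secret store, never hard-coded.
export DD_API_KEY="<placeholder-api-key>"
export DD_SITE="datadoghq.com"
bash -c "$(curl -fsSL https://install.datadoghq.com/scripts/install_script_agent7.sh)"

# Tag the host so a per-provider dashboard can filter ephemeral CI agents.
echo 'tags: ["role:ci-agent", "provider:azure"]' | sudo tee -a /etc/datadog-agent/datadog.yaml
sudo systemctl restart datadog-agent
```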
The next issue is providing Java 17 Windows agents. It's split work: Hervé worked on the Linux agent part first, because we want to have the same template whether we run a container or a virtual machine, and Hervé paved the way with Linux since we already have Linux container builds today. Hervé, what's the status on Linux? — I didn't have time to test it completely before my PTO, but it should be quite okay. — Okay, so Linux is to be tested and deployed if it works. Cool. On my side, I'm working on being able to build Windows containers with Packer. I've started the process and I'm dealing with nitpicking around PowerShell differences between the default Windows Server VM template on Azure or Amazon and the official container image for Windows Server: some tools are missing, the same way the Ubuntu Docker image lacks the sudo command, for instance. That's the second part; once both are okay, we can deliver the new Java 17 Windows agents, because Java 17 is already installed on our template. So: work in progress, nitpicking on PowerShell.

I reckon I need to describe the artifact caching proxy status. — Yes. In the past there was a proxy in front of repo.jenkins-ci.org to avoid consuming too much bandwidth from JFrog, and we are reintroducing it with multiple proxies, one in each provider — AWS, Azure, Docker. I've put one in Azure to test it, and I'm modifying the shared pipeline library to be able to configure Maven to use this proxy depending on the agent location. To know the location, I'll add an environment variable in the agent template so we can determine the provider. It's going well: I've made a test with the jenkins-infra test plugin, and ci.jenkins.io correctly retrieves data from my proxy. The next step will be to add user/password protection in addition to the IP whitelisting, then deploy a proxy in each provider and publish the shared pipeline library. — Thanks a lot. Any question? Okay.

The next work in progress is the jenkins_release Twitter account, an automated system which reads the RSS feed of each plugin release and creates tweets saying "hey, there is a new release of that plugin"; that's the idea of the account. I was able to recover it, thanks to the help of Kohsuke, so the whole infra team should have access to the credentials, or be able to reset the password using our private email address; the password has been shared in 1Password for now, so we no longer have to go through Kohsuke alone. So, account recovered, and I'm not the only one able to access it anymore. The next step is to understand what the automation system was — it was mentioned it was on dlvr.it — so the next step is, I assume, to understand what dlvr.it is: some kind of magical platform in the cloud that lets you publish automatically from events.
It could be a good opportunity, I think, to try some Twitter-from-RSS automation on this account, and it could then maybe be extended to the official Jenkins account, so people would know there is another way of publishing tweets: maybe opening a pull request, getting a review and a +1 on it, and then it gets published — I mean tweets for advocacy, so that's for Mark and Bruno there. If anyone is willing to help me, because I'm not really at ease with these kinds of platforms, I could absolutely use some help. — I have no idea how we'd authenticate, how we'd connect to Twitter: is it the same account, or is there a dlvr.it account that we also need to recover? And also to check what is going wrong. — You don't need to say more: sure, I'll check that. — Cool, thanks for the help; Hervé volunteers. So I will also add that issue to the next milestone, like all the previous ones, and I will add you to it, Hervé.

What's the status of retrieving access to the npm namespace? I don't remember. — I think we can remove it from the milestone, as I'm waiting for a GitHub support response. — npm support? — Yes, GitHub and npm, it's the same: npm has been bought by GitHub, and GitHub by Microsoft, so I think they share the same support. That answers my question. — Maybe it's an npm account? — No, it's linked to my GitHub account. There is a "jenkins" npm user with no packages and no activity, and I've opened a reclaim request with their support to see if we can retrieve this account; we would then be able to convert it into a proper organization so we can publish under a Jenkins npm organization. — Okay, so let's wait for their feedback. Anything else, Hervé?

Next: "weekly release build does not resume". That's an issue I took; it will be easy, but I stumbled across one question, Mark: we have to ask ourselves, do we really want release builds that fail to automatically resume using the retry feature of the pipeline? That's the question I need to ask the release team on that issue, because maybe we don't want it in that case. — Valid point, I think that's a good question to ask; we certainly shouldn't implement it without asking the question first. I hadn't thought of that. — However, maybe we could say we want an automatic retry if the agent fails. But even that I'm not sure about in the case of a release, because last time we had an agent failure it was for good reasons, so we didn't want to add a wave of builds waiting for a failing agent; there was some fundamental fix to deliver before being able to retry. — Yeah, release builds are far less frequent, right: once a week, as opposed to every ten minutes, every five minutes or every hour, which is what happens on pull requests. So I think it's fair to say no, we're going to intentionally not enable retry, because human beings need to analyze a release failure. — Absolutely. So if it's okay, I will ask the question and ping the team on the issue directly; the release officer will have the last word on that one, unless someone disagrees, and we can then start discussing. No problem? Nope.

Okay, last one: I've just started working on the Puppet side. That one is blocking the migration of the "updates" virtual machine from AWS to Oracle Cloud, and the issue blocks because of the mirrorbrain user, a remnant of a former system that was running on that virtual machine. That user owns the scripts that sync all the plugin updates to all the mirror systems and all the archiving systems. So there are two steps: one, putting back under automation everything that has been managed manually on that
machine for the three years before we all joined; and two, deleting that mirrorbrain user. That may mean having to create a new user to be used across all the rsync and SSH commands of all the release scripts, for core and for all plugins, so there is some work to be done in that area, and it could be risky for the releases. I've delayed these elements until after the security advisory, and I will try my best to work on it, but I think we have to expect some delays while I'm on PTO. The reason is that I don't want to throw my colleagues under the bus on this one — unless you are willing to try it, in which case it will be my pleasure, but I don't want you to die on the front line. So it might be slowed down by Damien's PTO. It also took some time because a bunch of Puppet refactoring had to be done first, which was required for this one — I call that foundational, silent work — but we are heading in a good direction. Required Puppet refactoring.

Okay folks, almost there, that's been almost one hour. Just a quick check of the tasks you think we should add to the next milestone, next to the ones already there, given your current workloads. As a reminder, there was a request last week to add valid SSL certificates for the private instances, which means switching these services to DNS validation instead of HTTP validation, because private addresses cannot be reached by the Let's Encrypt bots. That's absolutely possible; it requires creating specific credentials restricted to the DNS record zone, so that the token on Azure cannot be used and abused to generate certificates for another domain (a small sketch of the DNS-01 idea is included at the end of these notes). I feel like that one is a bonus unless someone really wants to do it: it's not an emergency and it's not that important, so I propose we keep it in the backlog.

There is one, though, that is important: ci.jenkins.io, despite what we said earlier, is currently generating data that is then consumed for building and deploying the jenkins.io websites. It should not, because that means there are credentials there that should not be there. They are isolated, following the state of the art of per-folder credential isolation, but we want these elements to migrate to infra.ci.jenkins.io, which is a private instance where we can store credentials safely. So I propose we move that issue to the upcoming milestone, and I'm happy to work with someone on it and pass on the knowledge so we can get started — it's not that complicated, but it may need a first step together. Is anyone interested in working on that with me and then taking over once I'm on PTO? — Do you think I will be able to? — Absolutely. — Then with pleasure. — Okay, we need to add it to the next milestone.

Are there other issues you saw raised in the past days that are important to look at for the upcoming week? Nope. Okay, so that's the end. I'm going to open and create the new issues, update and publish everything. Folks, you will have to manage the next weeks' meetings. I'm stopping the screen share, I'm stopping the recording. Bye bye.
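For reference on the DNS validation point above, a minimal sketch of how a DNS-01 challenge works for a host that HTTP validation cannot reach; this uses generic certbot manual mode with placeholder names, whereas the real setup would use an ACME client or DNS plugin driving the provider API with a token scoped to that single zone:

```bash
# Request a certificate for a private-only hostname using a DNS-01 challenge:
# Let's Encrypt never needs to reach the host, only to see a TXT record.
certbot certonly --manual --preferred-challenges dns \
  -d private.example.jenkins.io        # placeholder hostname

# certbot then prints a value to publish, e.g.:
#   _acme-challenge.private.example.jenkins.io.  300  IN  TXT  "<validation-token>"
# In an automated setup, a DNS plugin creates that record through the provider's
# API using credentials restricted to this one DNS zone, then cleans it up.
```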