So hello everyone, welcome to the Jenkins Infrastructure public weekly meeting. We are at the 29th of March 2022. Today we have Mark Waite, Stéphane Merle, Tim Jacomb and Hervé Le Meur, who just joined, and me. Announcements: we have a weekly release which is not released yet, because I'm the culprit. I merged a change this morning without thinking that it would trigger release.ci, and of course the release wasn't able to start, so it stopped. I've triggered a manual build and I'm going to follow up to see if it fails, but nothing else should be merged. Since I broke it, I'm responsible for ensuring it goes through full packaging. And I checked it just minutes ago, Damien; it's 90 minutes into its two-hour process. It's in the build-release phase, so about 30 minutes from now it should apply the tag, push, and so on. I'm not overly worried about it, and if you want, given the late hour of the day, we could just plan that I'll run the release checklist after it's done and dot the i's and cross the t's. Okay, so let's see how it behaves. Another announcement: what's the number of the release? I thought it was 2.341, correct? 341, thanks. There is a plugin advisory that should be published in a few minutes if it's not already the case; I haven't seen any message, so it's currently running. Yeah, it's actually been published. So the plugin advisory is published. Oops, I'm in the wrong location. No problem. Advisory, okay. Okay, so directly on jenkins.io, where is the advisory? Security, top right. Yeah, there we go, advisories. Nice, thanks. Okay, are there other announcements? Let's see, we've got an LTS coming, and the news is that Fonso is the release lead, right? I don't know that we need to announce that here, but the release candidate, the 2.332.1 LTS release candidate, is available for testing. Okay, available for testing.
And I've found no problems in the last week of testing, or in the three or four days of testing I've run. Cool, do we have a timeline yet or not? We do, it's on the calendar, just a minute. Okay, I haven't checked the calendar. Well, I should have had it in mind; just a moment, I can read it really quickly. It is the 6th of April, so April 6th. Okay, good to know. So, one, two, three, four, five: we still have a meeting before that LTS, so that will be the Wednesday after our next meeting, right? Right, correct. Okay, thanks Mark. Okay for you if we proceed? Yes. Actually, the next item on the notes list is an announcement that needs discussion, and I'm thoroughly jazzed about this one that Tim brings. Yep, so that is something that Jan and I have been working on over the last few weeks, called the design library. It's basically a rewrite of the UI samples plugin to showcase all the UI components and provide guidelines on how they should be used. But it doesn't really fit on jenkins.io, because it's dynamic: it renders the Jelly components and everything, so it actually needs to run in a Jenkins instance. Everything that it needs at the moment is released, well, it's getting released in the current weekly, but we'd like a place that we can link people to from the documentation on jenkins.io, from the mailing list and whatnot, just to send it out to people so that they can try it out and see it. The hope eventually is that ci.jenkins.io could serve that purpose, but given that it requires the latest weekly version, that wouldn't be practical. I feel like we'll get much better uptake and usage if we've got a service that we can send out. So there were two ideas. One was just a temporary service called designlibrary.jenkins.io, which is just a Jenkins with nothing else on it, really, other than the design library.
And then there was another one that Mark and Alex had, which was to create a weekly.ci.jenkins.io which just runs the latest weekly publicly, so you can see what it looks like. I don't know whether it would run jobs, or just have some example jobs or something, but just to show what the current weekly looks like. Yeah, Tim, I'm impressed. I installed it just this morning, and what a nice piece of work that design library is. I think we do want to put the weekly up; we already have the weekly running on one of our CI servers. And whether it's the design library and we accept it's temporary, or we do something a little more permanent, I think it's healthy for the organization to consider having something available that is publicly visible, secured of course, that uses the Jenkins LDAP and has the weekly installed on it. This instance wouldn't need a lot of resources anyway, since it runs only the design library. Well, I might ask for it to have at least one Windows agent and one Linux agent, just so that it could run some set of interesting jobs, but other than that I agree it's very light. And maybe it doesn't even need those; it's enough for me if it's got access to a Windows agent, even if it's purely ephemeral. Tim, maybe I'm wrong there, because if it were just the design library, the design library certainly doesn't require agents at all, does it? No, no, it doesn't. But if you want to have some jobs in the weekly, just as examples of some different types of jobs, that would be fine. I mean, that settles it; sorry, that's a nice idea. Since we already have infra.ci, this would be a publicly available instance hosted on Kubernetes with ephemeral agents, either Kubernetes pods or AWS machines for now. Does that sound good to you? Because it would be almost the same configuration as infra.ci, except we install it on the public cluster instead of the private one. Yeah, I don't see any blockers on that. It's more about the naming, weekly or design library.
Yeah, Stéphane, are you more prone to a temporary instance, only for the design library feature, that we then dump, or to something a bit more permanent that would be the weekly, or a preview, or something? I think a more permanent one would be way better, because that will show how it's evolving. Yeah, and that's almost nothing. No, it's just one VM. Well, we need something at least decent for the CPU. It won't be a virtual machine, for sure; it will just run on Kubernetes with hardly any resources at all, 500 MB of memory and half a CPU or something. One CPU and two gigabytes; if you go under that it might be tricky, because of the sidecar and the way plugins are loaded. And going under it, the cost difference won't be that much; the cost will come from the agents mostly. Yeah. And if we don't put much work on it right now, its initial purpose is just to be the host for the design library, right? And then if we decide, oh, we want a sample job or two or three, okay, we put some lightweight jobs on it. I'm thinking, what do you think about preview.ci.jenkins.io? So it's not strictly tied to the design library or the weekly; it could be used for release candidates of the LTS as well in the future, or whatever usage. But that may be too much. Yeah, I thought about that, but release candidates track the LTS line, going backwards in time, if we tied it to that. Yeah, correct. So, weekly or design library. In my mind I've thought about preview, and wondered whether people, when they see that it's running the weekly, may misperceive that the weekly is a preview. Because the weekly is not a preview; the weekly is ready to use, it just happens to be released every week. Yeah, I'm sorry Mark, but for me the weekly is a preview. I'm sorry, I cannot call the weekly stable, as a user. We'd have to update it every week, and I cannot see a large production system doing that; you'd like to leave those alone mostly.
Okay, but yeah, I'm inclined. I vote for weekly as well. So it seems like four to one. That can always be changed; it's not that much, it's a DNS entry and the rest is only configuration. So we can start tracking it as soon as it's opened. And as soon as we have the helpdesk issue, we jump on it. You've got it. You've got an issue? I've got more than one issue, trust me. So jump on it, it's open. Okay, so that's as soon as possible. And then, Tim, you can either work with us to add it, but yeah, the first one with time takes it. You can start it; that will be installing a new release on the public Kubernetes cluster. Yeah, it depends on how long it takes; someone could pick it up. I probably won't touch it in the next couple of days, but I can do it later on if someone doesn't get to it. Yeah, let's go that way. So, for everyone: the first person that has free time and wants to work on it must absolutely think about assigning the issue to themselves, so the others know someone started working on it. And eventually comment, just to be sure, so we can work asynchronously. Sounds good to you? That sounds good. Okay. Are there any other questions that need to be discussed on that topic? Cool, let's go. Thanks for the work, Tim; the design library is impressive. Okay, so what did we do this week? We have a lot of long-running tasks, so we only closed one. There have been a few helpdesk issues closed, so thanks to everyone for closing those issues on the fly. Hervé solved some issues with the account application, and Tim, thanks for helping there. It looks like the account application wasn't able to connect to the LDAP, with a different error each time. So, I'm sorry, but the solution is to delete the pods; restarting the application works very well. If anyone is able to compile the application on their machine, they are totally free to try fixing the issues.
But honestly, I tried to just do some minimal patching on it, and it doesn't build on my machine. It's using an ancient Gradle with some ancient Jetty plugin which doesn't exist anymore. And it's using Jelly as well; who is using Jelly today? Yeah, the bigger problem is the Jetty plugin that it uses, built into the Gradle build. To get it working you'd need to port it to an embedded Jetty container and rewrite a whole bunch of stuff. I spent 15 minutes on it alone and then said screw this, I'll use a better solution. So yeah, that's why restarting works better. As a reminder, we will want to have Keycloak, or the tool that Gavin pointed us to; we need a tool to be able to manage the accounts on the LDAP, and both tools look okay. But in order to have these tools installed and used definitively, we need the migration to the private cluster to be done first. That's a strong requirement in terms of networking, and that's why we need to restart as a temporary measure for now. Looks good. So, current work: we have the rating.jenkins.io migration to Azure. I'm taking them in order on the milestone, right? So Stéphane worked on that topic and is driving it. We were able to add a managed PostgreSQL database; thanks Tim for the insight about the Flexible Server instances, we weren't aware of those. So we now have a fully Terraform-managed Azure setup, with management of the network and everything, going in the correct direction. The next step, which Stéphane has been working on since noon today, is installing the Helm chart for rating.jenkins.io that Jeremy and Gavin worked on, which is already available in our charts. So now we have to install it and wire everything together. Once that part is done, we'll have to import the data from the old database and then switch the DNS. Is there any question on that topic? Okay. So that topic is a work in progress; I'm migrating it to next week's milestone. Yep.
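As an illustration of the managed-database piece just mentioned, a minimal Terraform sketch for an Azure PostgreSQL Flexible Server might look like this. Resource and argument names follow the `azurerm` provider, but every concrete value here, the names, region and SKU, is an invented placeholder, not the project's real configuration:

```hcl
# Hypothetical sketch, not the actual jenkins-infra configuration.
resource "azurerm_resource_group" "rating" {
  name     = "rating-example-rg" # placeholder name
  location = "East US 2"
}

resource "azurerm_postgresql_flexible_server" "rating" {
  name                   = "rating-example-db" # placeholder name
  resource_group_name    = azurerm_resource_group.rating.name
  location               = azurerm_resource_group.rating.location
  version                = "13"
  sku_name               = "B_Standard_B1ms" # burstable tier, cheap for a small app
  storage_mb             = 32768
  administrator_login    = "psqladmin"
  administrator_password = var.db_password # passed in as a variable, not committed
}

variable "db_password" {
  type      = string
  sensitive = true
}
```

Marking the password variable as `sensitive` also matters for the Infracost discussion later in the meeting, since sensitive values end up inside the Terraform plan.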
Okay. Next one: apply to the Docker open source program, which is closely related to the Docker credentials for VM agents. Last week we hit rate limits. We need to ask the Docker open source program if they can extend the sponsoring they are doing for us on the Jenkins Docker account to one or two technical accounts that we could use in the infrastructure. The main idea is to increase the rate limit of a single account, and ideally have an account that is only used for pulling images; then, even with a read-only token, we could safely share that credential and not be burned by the API rate limit. That task is on me: I have to contact them based on the information that Olivier gave us in the past few days. Okay. Still in the area of the Docker rate limits, there is also a pipeline library pull request by Stéphane that tries to map two credentials per Jenkins instance that we host, one for pull and one for push. So we are currently creating the accounts and creating an API token on each account, so the password itself is never used and we can recycle the tokens; we have two levels of credentials, and some already exist. Through the same syntax that we currently use, the library will keep using the push credential, so as not to break the release capability. That work has been paused because of the weekly release and today's security advisory, so we should continue working on it in the coming days. It doesn't yet include the idea that, for a given build on ci.jenkins.io, we could spread pulls randomly between two, three or more accounts. This is the first step; once we see how it covers the cases, and if the API rate limits come back, we can start spreading accounts on ci.jenkins.io. Any question on that topic? No. Okay. Next one: migrate the email aliases, for press. For this one we haven't heard from KK.
I can take the migration story from here if you want. Oh, cool, thanks a lot. So, for the email aliases for press, we haven't heard back from KK. Mark, do you mind contacting KK directly? He usually tends to respond to you. Sure, I'll ask. Yeah. Otherwise, as we said last week, we are going to contact Mailgun directly to see what the procedure is, whether they still have an account for us, or worst case whether they can just give us the list of emails. I'm not sure how that would happen, but I don't think there's anything in use. So what would these be useful for? The question is: what are the existing something@jenkins.io email aliases? We weren't able to extract them from the GitHub history, and Tyler and Olivier weren't able to tell us which ones exist. So Tyler, are there any? That I don't know. There are the MX records. The thing is, if we move the MX records, and that would be the worst case, we'd move them to whatever email system, something like the Linux Foundation's, as per the ticket. But yeah, we'd need to be sure, and we'd need a catch-all then. So let's ask KK, at least to be able to reach Mailgun and see the list of email aliases, and then move away. I'll ask him, but I doubt there's anything in use these days. I've seen SendGrid being used for election campaigns before, but no, I'm not aware of Mailgun being used at all in years. Yeah. And if Tyler isn't aware of the Mailgun usage, that means it's really, really old, and no one's sending emails there these days. Unless it's something for KK himself, I don't think it's a thing, but double-check with him. I wouldn't worry too much; I think it's perfectly okay if we set ourselves a timeout, and if we don't get an answer in a week or two we just proceed. The risk seems very low. So, is it okay? Last week we said this week was the timeout for contacting Mailgun, but I propose we extend one week, just to be sure given the direct communication, so that we can close safely. Worst case, we go ahead. Sounds good to you? Yes. Next topic.
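As a side note on the Docker Hub rate limits discussed earlier: the current limits can be inspected anonymously by requesting a token from `auth.docker.io` and issuing a HEAD request against the special `ratelimitpreview/test` image; the registry answers with `ratelimit-limit` and `ratelimit-remaining` headers of the form `100;w=21600`. A small sketch of parsing those header values (the actual fetch is left as comments, since it needs network access):

```python
def parse_rate_limit(header_value):
    """Parse a Docker Hub rate-limit header value such as '100;w=21600'.

    Returns (pull_count, window_seconds); the window part is optional.
    Docker Hub currently grants 100 anonymous pulls per 6 hours per IP,
    and 200 per 6 hours for a free authenticated account.
    """
    count_part, sep, window_part = header_value.partition(";")
    window = int(window_part.split("=", 1)[1]) if sep else None
    return int(count_part), window


# To obtain real values (needs network access):
#   TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
#   curl -sI -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest
if __name__ == "__main__":
    limit, window = parse_rate_limit("100;w=21600")
    print(limit, window)  # 100 21600
```

A dedicated pull-only account, as proposed above, would show up here with a higher remaining count than the shared anonymous pool.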
GCAWS on our images: that one was a minor one. I don't think you had time for that, Stéphane? I'm sorry. Don't be sorry, we didn't have time; there were more important topics. Do you think you will be able to work on it next week, or might that be too much, given we have rating.jenkins.io and the pipeline library? I'd like to, because it's a nice one. Okay, just don't over-commit. I'll do my best. I'm sure you will. Next: define credentials at folder or job level instead of the Jenkins instance level. The goal is to have a way, with infrastructure code, to avoid having credentials at the top level of a Jenkins instance. Current status: using my custom homemade Helm template, I'm able to convert a set of YAML definitions into something that generates valid Job DSL with credentials at the folder or multibranch level. And it sounds like, if you create a ConfigMap on a Jenkins Helm chart deployment with the correct annotation, the system that searches for ConfigMaps dynamically will pick it up; I just did it successfully one hour ago. So I'm pretty confident that for our infra.ci and release.ci we should be able to have a custom Helm chart that uses the official Jenkins chart and applies the configuration directly, for now. Because my template is not perfect, it doesn't cover GitHub organization scanning and some kinds of plugins, I propose that we start with this one. After a few weeks of usage, we can open the topic of contributing to the official Jenkins Helm chart. Given that it involves Job DSL, and there were some Job DSL pull requests that have been dead or rotting for a year, which I mentioned on the issue, thanks Tim for at least adding a label and pushing on that, I will contact the maintainer. Because some of the credentials need Job DSL to manipulate the XML configuration.
So that's a bit of a trash fire, and as I understand it we'd need to update some things in some plugins to be able to provide an easy syntax for that. So let's start with this approach, because it's blocking other requests. Right now I'm focusing a lot on it and I should be successful tomorrow. So, if it's okay for everyone, I plan to do the first deployment on infra.ci during the day tomorrow, unless there is an important task to run on it. I'll send a message on IRC, but that means all the jobs will be failing, because I will totally destroy infra.ci in the process; I cannot guarantee continuity of that service until it's fully done. So yeah, that will slow down our ability to deploy Terraform changes and Docker images. That's why I prefer asking whether there is something important for you to deliver tomorrow. Nothing that I can think of. Yeah, nothing that I'm aware of. Okay. Now, Infracost. Hervé, do you want to explain it? Yeah, I can. Can you open the issue? Let me get it on my screen too. So, I'm trying to implement Infracost, which is a tool to report estimated costs when there are changes in a Terraform repository. For now it can estimate Azure, AWS and Google Cloud. Their main method is estimating the cost from the Terraform plan. We decided to implement this method only on Azure, since AWS has sensitive secrets in its plan. And joining their Slack community, I noticed that we can also use an experimental feature allowing us to estimate cost from the HCL files directly, so no access to the plan and no access to any secret or sensitive value. I've implemented that on AWS, and I also want to add it on Azure too, so we can compare the discrepancy between the two methods. The Infracost engineers told me they weren't sure the results would be exactly the same, so it's experimental for now.
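Going back briefly to the folder-level credentials point: Job DSL has no first-class syntax for folder-scoped credentials, which is why the XML has to be grafted in through a `configure` block, roughly like this. This is a heavily hedged sketch; the element and class names mirror what a folder's `config.xml` typically contains, and every concrete value (folder name, credential id, username) is an invented placeholder, not the project's real template output:

```groovy
// Sketch only, not the actual jenkins-infra template output.
folder('example-team') {
    configure { node ->
        // Folder-scoped credentials live under FolderCredentialsProperty in config.xml;
        // the configure block lets us append that XML directly.
        def credentials = node / 'properties' /
            'com.cloudbees.hudson.plugins.folder.properties.FolderCredentialsProvider_-FolderCredentialsProperty' /
            'domainCredentialsMap' / 'entry' /
            'com.cloudbees.plugins.credentials.domains.DomainCredentials' / 'credentials'
        credentials << 'com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl' {
            scope('GLOBAL')
            id('dockerhub-pull')            // placeholder credential id
            username('example-bot')         // placeholder account
            password('{encrypted-secret}')  // injected at deploy time, never committed
        }
    }
}
```

This is exactly the kind of raw XML manipulation the discussion calls a trash fire, which is why first-class plugin support would be preferable.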
But yeah, my pull request has been merged, and we have to watch the next Azure and AWS pull requests to see the results and the reports. Infracost is a SaaS, but it can also be self-hosted if we want or need. Their documentation has examples for a lot of CI/CD systems, but their Jenkins example repository doesn't have any real example, so we will be able to propose our integration to them as an example later. Thanks very much. I just realized that we might have inverted AWS and Azure, because we don't have any sensitive Terraform outputs on the AWS Terraform project, while since this morning we have one on the Azure project: the output that exports the database credentials. The sensitive outputs are stored inside the plan, which means we cannot be totally sure that this data could not be exfiltrated to their SaaS. That's why we might have to switch that side to the HCL method. So we might have to change the pipeline library, but the impact is not that big, because it's only the database for the rating.jenkins.io migration we mentioned earlier; the impact is really low. That's why we selected Azure initially, especially because it's empty right now. Exactly. And we will rotate the credentials before going to production. But thanks, that's really nice, Infracost. Let's continue working on that and see the first results; we might have some findings incoming on Azure. Next topic, unless there is a question. Cool. You also worked on the Git settings issue on the Windows machines. Status: you solved the issue in the short term. Were you able to confirm with the people who opened the issue? Okay, but since James approved your pipeline library fix last week, I assume it should be okay. Could you just check in private on the CloudBees channels? Because, remember, there were only two people from CloudBees affected. So on the virtual machine agents we already have it, and there is a pull request for the containers. So, unless I miss something, I reckon you can confirm.
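On the Infracost comparison Hervé described, here is a small sketch of how the plan-based and HCL-based reports could be diffed. It assumes Infracost's JSON output format, which exposes a top-level `totalMonthlyCost` field as a string; the sample numbers below are invented:

```python
import json


def monthly_cost(report):
    """Extract the total monthly cost (USD) from an Infracost JSON report."""
    return float(report["totalMonthlyCost"])


def discrepancy(plan_report, hcl_report):
    """Absolute difference between the plan-based and HCL-based estimates."""
    return abs(monthly_cost(plan_report) - monthly_cost(hcl_report))


if __name__ == "__main__":
    # Invented sample reports standing in for `infracost breakdown --format json` output.
    plan = json.loads('{"totalMonthlyCost": "123.45"}')
    hcl = json.loads('{"totalMonthlyCost": "120.00"}')
    print(round(discrepancy(plan, hcl), 2))
```

Running something like this on the next few pull requests would quantify the discrepancy the Infracost engineers warned about.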
Now we have a long-running task that will be building our own Windows Docker images for the infrastructure, for ci.jenkins.io, instead of relying on the community images. We will build on top of the community images and add our own settings, because, as with the Linux images currently, we want some specific settings on our images that would impact everyone if contributed upstream. And we need to be able to build Windows Docker containers on infra.ci. Stéphane did all the hard work of providing Windows machines with Docker, and we have agents that run as ephemeral EC2 agents. So, Hervé, if you're okay to take that task, you can start building the two JDK8 and JDK11 Docker images, but on our own repository. Yes. Is there any other question on that topic? In the long term, I want to propose an updatecli manifest in the jenkins-infra repositories for these images, so they can be kept up to date more frequently. That would be a huge help. I didn't understand, Tim, the reversal of the release number in the pipeline; I asked you in a commit. I don't remember; let me try to find the link. The 328.9 that I put there, right? Oh, it's on docker-inbound-agent on the Jenkins CI; I've put it in the Zoom chat. So, docker-inbound-agent builds on top of docker-agent, so it uses a released version of docker-agent. If you go to the docker-agent releases on GitHub, Damien. Currently... no, not that one; docker-agent, not docker-inbound-agent. You'll see it's got 4.13-1, but the one below it is 4.11.2-5. That number there is what needs to be bumped if we ever need to build against a newer version of docker-agent from docker-inbound-agent. It's normally one, but occasionally we build against a newer version. Oh, the left-hand side is the remoting version, and the right-hand side is the image version. Yeah, so it's the last digit we were looking at. Yeah.
Okay, I see. So there are two components to the version number: one is the remoting version, and two is the build version, kind of. Okay, that's fine. It's like the release number a package gets bumped by when you rebuild the package, appended to the version of the application. Yeah, and it's kind of manually managed, because they are kind of unrelated images. It can be managed by updatecli, but apart from that... Yeah. Okay. Another thing is that we are dealing with Windows containers, and docker buildx and docker bake aren't able to handle them. When I looked at how it was done, I saw there were two ways of building the images: one from Linux with docker bake, and the other with completely different scripts. So, yeah. Yeah, I suspect the Windows ones don't get bumped. Yeah, that's why I wanted to make a PR for this, but then... it's probably a harder one. So an updatecli manifest there would be a nice contribution, to keep all the images up to date at the same time. Yeah, just create something like a properties file at the root, or something that both the Windows and Linux builds can read. updatecli can also manage it if you have two files, one for Windows and one for Linux, that are different: it doesn't care, it can update both from the same source. Sure. Okay, that's a good point. I assume the Scaleway cluster should be delayed? Yeah, we have to contact Scaleway to ask them for an account. Yes, exactly. Next: monitoring builds on our private instances. That's something that came from Daniel, the idea being to be sure we get alerted when some of the most important builds on trusted.ci are failing. We already have something like this for the update center: we have a Datadog monitor that checks the last build time in the JSON generated by the update center, and if it's more than one hour or two, I don't remember, it starts sending alerts, and at a certain threshold it starts paging.
So it would be the same idea: an external regular process that checks a JSON file somewhere. Daniel said, okay, we could have a cron job on that instance, and it would be the same for release.ci, that exports a JSON file with only a strict subset of information. Why not use an outbound notification? Because trusted.ci should be a private instance that does not exfiltrate any data, as much as possible. That's the reason. Daniel says it should be quite easy to implement, since we're already doing it; if we can't, we can still ask him to help us. Had another approach been considered, possibly just exporting the RSS, copying the RSS feed from inside the VPN to another location we could then monitor with an RSS reader? I do that with ci.jenkins.io, and I admit I don't think anything is being exfiltrated through the RSS feed. Right, as far as I know there's nothing sensitive in there, except the existence of the job and whether it passed or failed. That could be a nice one. And the only thing that reports is an external system that monitors it, right? We'd have to have some web server that... well, I'll reply to this one, and thanks for highlighting it. I think it's worth a reply, to have a conversation with Daniel; he may know a reason why RSS is a bad choice. Yep. Would you be able to help us on that part, Mark? Because I don't feel at ease with the RSS feeds in Jenkins. Yes, at least I can talk about how I use it and why it has helped me, and then we can decide whether it's an interesting technique or not. And if it is, and if you are able to share what you did, then we can take over. Does that sound good to you? Okay, yes. I'll assign it to you and change the milestone then, and we will update it as soon as we have more answers. Thanks Mark. I think we're done with that. So, Scaleway: I'm removing the milestone. It can close this week. Yeah, this week's milestone. There were two other topics.
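The update-center-style staleness check described above, an external process reading an exported JSON file and alerting when the last build is too old, could be sketched like this. The field name `lastBuildTime` and the two-hour threshold are assumptions for illustration, not the real exported format:

```python
import json
import time

STALE_AFTER_SECONDS = 2 * 60 * 60  # the "one or two hours" threshold from the meeting


def is_stale(exported_json, now=None):
    """Return True when the exported status file is older than the threshold.

    `exported_json` is a string like '{"lastBuildTime": 1648500000}' where the
    value is a Unix timestamp in seconds (an assumed, illustrative format).
    """
    data = json.loads(exported_json)
    now = time.time() if now is None else now
    return (now - data["lastBuildTime"]) > STALE_AFTER_SECONDS


if __name__ == "__main__":
    fresh = json.dumps({"lastBuildTime": time.time() - 600})
    print(is_stale(fresh))  # False
```

Since the check only reads a published file and makes no call into the private instance, it fits the constraint that trusted.ci should not push data out itself.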
The first one is from Archera. They pinged us about whether we were able to evaluate the security requirements. We haven't had the time, but yeah; I understand they won't have any bandwidth until the 14th anyway. Oh, Tim, an explanation, since this is one where I launched an exercise and didn't send you the invitation to the session: Archera is a company that does AI-based optimization of cloud resources, and they offered a free "hey, we'll support your open source project" effort to the Jenkins project, so we started a conversation with them. I'm sorry that this is a new one for you. What it was was an exercise: is there a way to get somebody who could help us find cheaper ways to do what we're doing on jenkins.io? They've got some techniques they were interested in, and the permissions they were requesting looked pretty simple and safe, but we don't want to do anything with it until the security team has told us yes, that's simple enough and safe enough. Don't say those words too loud, because my YouTube ads got stuck on Infracost for cloud and Kubernetes for weeks, and it got very annoying watching my YouTube videos constantly getting the same Infracost case study blared at me all the time. Oh, sorry, okay. All right, I will try not to use that word too frequently, lest your ads suddenly kick in again, thanks. So, my proposal is that we start sending the requirements to the security officer around the 13th or 14th, not because of an Archera date, but because I know that Wadeck is more than busy until the 12th included. Right, that makes sense. Damien, I think you might have to take a slot on their availability. Yep. And if it's okay, I will take care of synchronizing with Wadeck on that and sending them the requirements, unless someone else wants to. Okay. Going once, twice... okay, go. Hervé, on DigitalOcean. So it sounds like we have used half of the credits. Yeah.
We currently consume about $1,000 per month, so we might need to contact them to see if they are okay with continuing to sponsor us. But we must absolutely prioritize writing a blog post and adding their logo to our website before asking them for a donation. Actually, that's a great excuse. I thought we had their logo already, but if not, that's an easy one. So: logo on the website, because this is a great story to tell. Yeah, okay. So who is going to share the burden of writing a blog post? I can try, but I'd like some help. How about Mark and Hervé together? Yeah. Thanks. I wasn't ready yet! We could ask Tim, but he's pretending not to hear. No, Tim, this is... that was a completely unfair shot. I heard. I heard that we were volunteering Stephanie to... That's good, Stéphanie in French is a female name, so it's okay, not me. All right, we'll find a Stéphanie for you. Thank you. Okay folks, I think we have reached the end of the list of topics. Do we have other topics, anything important to know, any unavailability? No? Okay, so I hope everyone will continue to take care of themselves, and that we can continue to take care of the infrastructure. Have a nice day, have a nice week, and see you next week. All right.