Welcome everyone. This is the Jenkins platform special interest group. I'm Mark Waite, joined by Jim Crawley. Thanks very much, Jim, for being here; we're going to talk through some topics. Let me share my screen and let's look at the agenda. Okay, here we go. Oops, wrong agenda; this one right here. So Jim, can you see that all right? Yeah, I can see it perfectly. Excellent, thank you.

So we'll open and review the open action items. I'm not expecting Oleg Nenashev to join us; he's been ill this past week and not feeling well. Then renaming the agent Docker images, and Google Summer of Code projects; I'll give a brief report on that. Then we've got a question from a submitter, or rather a statement of a project that's in progress, related to a rework of the agent projects, that we'll talk about. Then Jim, you've got a topic on reducing the workload for publishing images and a second topic on how Git LFS is installed in the Dockerfiles. Yep. Great, thank you. Any other topics you wanted to add, Jim? No, that's it. Do you have any recap from the conference you went to, anything good to share? Oh, that's a good one too. Yeah, let's see, FOSDEM notes and comments. That's a good one to include in the list; it won't do any harm, so let's put it there at the end. Good, thanks. All right.

Okay, so let's go over action items, or is there anything else, Jim? No, that was it; that's the only thing I could think of extra. Okay, great. So I've still got the open action item (I hang my head in shame) to open a Jenkins Enhancement Proposal for the Docker operating system support rules. Likewise for Oleg on the Windows support policy. What has had some progress, and some non-progress, is code signing: we've changed blocking modes, so instead of being blocked on finding a vendor, we've got a vendor. What we're now trying to work through is the legal entity definition that would allow them to issue a signing certificate to something that is not a corporation and not a person. There are certainly organizations that have done this before, but they're working through legal discussions, and because it involves lawyers I don't know when it will complete. It's just ongoing.

Next topic: we'll skip this one. There's an ongoing question of whether we can do more to get rid of the use of the deprecated term "slave" inside the Jenkins environment, and one issue is that the agent Docker images actually use the word "slave" in their names. We'll discuss that in another session. I'm glad to note that there are Google Summer of Code 2020 project ideas being run from inside the platform special interest group. That includes the Git plugin project ideas and several others. On the Google Summer of Code 2020 timeline, we are now inside the period where we have submitted our application to the Google Summer of Code program. We're hoping to hear back, and the timeline says their answer will come in a relatively few weeks. The recording of the Google Summer of Code team meeting for the Jenkins project from just a few days ago includes a review of the timeline as presented by Oleg Nenashev. So this looks really good, and it looks like we're going to have an even better Google Summer of Code this year than we did last year. Awesome.

Next topic on the list. This one was a rework item that is included here for information. A user has submitted the following, and I'm going to increase the size of this so that we can read it together.
So they're using the Docker JNLP slave, SSH slave, and Docker slave images, and they need them with newer versions of OpenJDK, but those newer versions of OpenJDK are not yet included in those base images. He's proposing a significant rework of the structure of the projects to allow easier introduction of new images and to adapt more readily. Oleg said, hey, this looks good to him, and gave some initial warnings that it might be a problem in certain areas, but that it's a really great thing to be doing. It's been mentioned in other pull requests that are open, so there's some progress there. I haven't seen much recently on the project. So we're making progress. I think the platform SIG will need this work as well, because we probably will want to consider adding agents for PowerPC, for s390, and so on. So that's just for everybody's information. Jim, any questions there?

No, I actually haven't taken a look at this PR. It would be interesting to see what the rework actually entails. So right now this doesn't affect the main repository, right? This is just the Docker slave and Docker agent repositories? Correct, that's right. Okay, so this is just touching agents. But I think you've got a good point that it's likely worthwhile, as we're considering alternatives for the main Docker repository, to look at the ideas that are involved here and evaluate them. Would this help the main repository's build processes? Yeah.

One thing I do have to note, maybe from a month ago, and I know he mentioned Alpine a couple of times: OpenJDK, not Adopt but just the regular OpenJDK, did put out a beta image, or maybe a beta release, with native musl support, so that would be for the Alpine images. So let me make a note of that; that's news I wasn't aware of. Jim noted that the OpenJDK project has delivered a pre-release with musl C library support in OpenJDK. Yeah, and we had talks at Adopt about pulling that into our new images; I think it's for Java 14, I want to say. Okay, so they didn't do it for JDK 8 or 11, but rather for the new cutting edge. Yes. So it might not be exactly helpful for you guys right now, but it was definitely something I took note of, because Adopt did get blocked on pushing Alpine images to the official repository, one of the reasons being that we were using glibc on Alpine instead of using musl, and we didn't have enough testing at the time, but we're fixing that. Yeah, and so glibc is not the native C library on Alpine, right? They use musl as the native libc. Yes, yes; you have to go out of your way, and it's pretty intensive, to reverse all that and get glibc back onto it. Great. All right, thank you. Thanks very much. Yeah, it might not be helpful, but I'll find the link for you guys and post it in the agenda so you can at least look at it. Thank you. Thanks.

I think we're on to our next topic then: ideas for reducing the workload for publishing images. Jim, can you, for people like me who don't know the details of the image publishing process, share the story with us? I'll take some notes. I'm actually going to mute so that my keyboard is not terribly loud. Okay. I didn't think it was bad, by the way, but okay.
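As an aside on that libc point, a quick way to see which C library a base image ships is to look for its dynamic loader. A minimal sketch, assuming the stock Debian and Alpine images and the conventional loader paths:

```bash
# Alpine ships musl; its loader lives at /lib/ld-musl-<arch>.so.1
docker run --rm alpine:3.11 ls /lib/ld-musl-x86_64.so.1

# Debian ships glibc; its libc lives under the multiarch lib directory
docker run --rm debian:buster ls /lib/x86_64-linux-gnu/libc.so.6

# A binary linked against glibc will generally fail to start on the musl image
# unless a compatibility layer is installed, which is the problem described above.
```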
So, since I'm still new to the project, this might not be 100% coverage of the whole build process, but here's what I've found. I should have posted the PR link; I can push that to you on Gitter if you want to take a look at it eventually. What happens is that when we trigger a build, and I'm pretty sure it's a manual process, so Daniel Beck or whoever's in charge of the builds (I'm not sure who those people are) has to kick it off, it grabs the last 30 releases of Jenkins, I think; for the experimental builds, like the multi-arch ones, I think it grabs the last 20, but either way it grabs a significant number of Jenkins releases. It basically loops through and starts building images. Before it pushes the images it checks whether the tags already exist on Docker Hub; if they do, it does not push them, assuming that the images it just built are exactly the same as the images on Docker Hub. Once that's done, it goes ahead and makes more tags, saying, hey, this is the LTS image for Debian, and so on, and then it makes the multi-arch manifests and pushes them up.

So I guess what Daniel Beck was saying is that it takes a lot of time just to push out an update to a certain variant. If we need to update just Debian, it takes a lot of time, because there's no way to control that right now via the pipeline you have set up; there's no way to say, hey, I just want to rebuild Debian, or just Alpine, or just CentOS. It's literally: grab the last 30 releases and then go through every single variant that you support, which is very intensive. So he was saying that what he wanted was a quicker, more streamlined way of building, and my PR actually addresses a lot of the issues that he brought up in the Gitter chat. That's roughly how it works right now, and some of its downfalls. I can get you that PR right now if you want to take at least a little look at the synopsis of it. Do you want me to send it on Gitter, or do you want me to add it to the Google Doc for you? I think you're muted, Mark, by the way. Either is great; if you want to put it right in the meeting notes, that's probably simpler, whatever works for you. Yeah, I just put it down. Perfect.

All right, I wanted to open that up and take a look at it together; it's a good excuse for us to look at it. Yeah, I had wanted to get this in front of you guys at FOSDEM two weeks ago, or I guess it would have been four weeks ago, whatever it is, but now is a really good opportunity. I was hoping Oleg would be here to see it; he was the one also talking with Daniel Beck, and also Slide, about the whole build. So, some of the major changes I made. Currently you are using manifest-tool, the tool for building manifests for Docker; I think it was made by IBM, but in any case it was made before the docker manifest command actually got integrated into Docker. docker manifest is still experimental, still technically beta, but it's what we use internally and it's what a lot of companies actually use to build manifests. What I did was swap to docker manifest instead of manifest-tool. That puts us on a little better footing, in that it's just native Docker; we don't need to know another tool.
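For reference, here is a minimal sketch of the docker manifest workflow Jim is describing. It needs the experimental Docker CLI enabled, and the repository and tag names below are hypothetical, not the actual Jenkins tags:

```bash
# Push one image per architecture first, then stitch them together
# under a single multi-arch tag.
docker manifest create example/jenkins:lts \
    example/jenkins:lts-amd64 \
    example/jenkins:lts-arm64 \
    example/jenkins:lts-s390x

# Record which platform each entry serves.
docker manifest annotate example/jenkins:lts example/jenkins:lts-arm64 --os linux --arch arm64
docker manifest annotate example/jenkins:lts example/jenkins:lts-s390x --os linux --arch s390x

# Publish the manifest list; "docker pull example/jenkins:lts" then resolves
# to whichever entry matches the pulling client's architecture.
docker manifest push example/jenkins:lts
```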
And it gives us some cool extra things we can do later on: instead of hitting Docker's API, we can just hit Docker's command line and parse out the information we need. The API tends to change a lot, and we're actually seeing that at Adopt, while docker manifest should always point at roughly the same endpoint and, whatever they change underneath, give you roughly the same output.

Okay, so forgive my ignorance: a Docker manifest is some sort of a list of the contents of a Dockerfile, or something? Give me a basic tutorial; I'm not even familiar with what a Docker manifest is. It's very simple, and you're pretty much right on the money: it's really a metadata file. So for one manifest you have, say LTS, you might be producing an s390 image and you might be pushing an arm64 image, right? What it lets you do is say: I want one tag called LTS, and in this LTS tag, if someone pulls for s390 I want it to point to this specific image, and if someone pulls for amd64, point to that specific image. What happens in reality is that when I do a docker pull of Jenkins LTS, it goes up and hits the Docker API, looks at what arch I'm running, and points me to the right image, pulling down that specific arch's image. It's helpful because you don't need to worry about, hey, let me pull from an s390-specific Jenkins repository and so on; it's just one universal image. That's very important, at least at IBM, where we are doing constant builds on any type of arch we can get our hands on, any server we can get our hands on. We don't want to always have some sort of if statement saying, if this arch, pull this image. We just want one image.

Oh, that's what this is; thank you. So that means, for instance, that even in my world, where I don't have access to s390 but I do have access to arm boxes, this would allow me to say docker run jenkins/jenkins:lts on my arm box, and it would automatically detect, oh, you're on arm, I need to go get the arm image instead of the x64 image. Yep. And if for some reason they didn't push the arm image into the manifest, it would come back saying, hey, no image for that platform found. You can see that with official images: a lot of official images go through the Docker library build pipeline, which has Jenkins servers with access to all the arches and gets you a manifest file, but some of them don't build for arm. So if you go to pull, say, Docker's Ubuntu 16.04, it might not have been built for arm, and it will give you that little error message saying no image found for that platform. Got it. Okay, thank you. Sorry for the long side trip on Docker manifests; continue. No, no, it's good background info.

All right, so that whole first bullet is switching to docker manifest and getting rid of a legacy tool. The next major update is parallel builds. What I did is split up all the builds into basically three stages: publish images, publish tags, and publish manifests. Those are three different scripts that you can trigger at different points, or you can trigger just one if you want. They do what the names imply: publish images builds the images and pushes them out to Docker Hub, and publish tags does all the tagging mechanics.
So: hey, let me tag this image; let me tag this Debian one as LTS; let me tag whatever the latest image is pointing to; basically it generates all those tags. Then for the manifests, we pull all those tags, pull all those images, build the manifests, and push the manifest files up. I did that in a way where, if you have all these different variants, like Debian, Debian Slim, and Alpine, they can all build at the exact same time. The colossal script you were using before was one for loop, basically going through all the different possibilities, and wasn't parallelized at all. So this provides speed improvements, and it also goes back to Daniel Beck's point: hey, I really just want to do an update, or a security update, for Debian; I don't care about Slim, I don't care about these other ones, because they were never affected by the security patch; it's just Debian, I just want to build just Debian, how do we do that? This would solve that.

Okay. So to your point, you had mentioned that the previous, or current, build process collects the past 20 or 30 releases; is this still doing that? Yes, I kept it that way for easier integration, but in my script I broke it up so you can actually pass in the version. What it does is go collect the 20 versions, and that's the only thing it loops over. You pass in, hey, I want to build for s390, I want to build for Debian, and whatever the last parameter is, and then it loops over the versions, passing them in. It's very easy to modify it slightly, or just have a make command that passes in a certain version, and you can do it for just that specific one.

Yeah, and ideally, what Alex and Oleg and Daniel and I were talking about is that it would be really nice to set up GitHub Actions on the official Jenkins repository to kick off a Jenkins job, so that whenever we publish a new release of Jenkins, it triggers this build pipeline and does the release. That way the gap between when Jenkins gets released and when the Docker image gets released is a lot smaller. And that could be, hey, let's push it to the unofficial repository first, then one of us could come in and just make sure it works, make sure it's fine, and then we could easily re-trigger the build for the official Docker repository. That was one of a couple of ideas of mine, but this whole rework of the build pipeline really enables it.

The other major change is that instead of building with QEMU emulation, we are building on the actual platform. This would require a rework of how you have things set up, in that you'd need access to each architecture. I did that because QEMU emulation isn't 100%; it's pretty good, but I've seen some issues pop up on s390, and I think POWER has some issues once in a while when you're doing a lot of low-level C code or something like that, where QEMU doesn't emulate it perfectly. So now, with this requirement to build on the actual architecture, what are our alternatives to do that? I know we could borrow from the organizations that offer hosting.
There's one, if I remember right, in Oregon that offers hosting. What are some of the other alternatives? Does GitHub Actions provide multiple architectures for us already, or no? No, GitHub Actions only supports x86 currently; they also support macOS, and I think they might do arm, but the thing is, for Docker builds it's just x86 right now. There are other build pipelines, but I haven't looked into them all that much. What I was working with, Git Large File Storage, is using GitHub Actions, and the limitation right now is only x86 on Docker builds. Got it, okay, good, thank you.

All right, so this one is the change from QEMU to building on the actual architecture. To me that seems like a positive just in terms of reliability; as much as I love emulators, native building on the actual architecture seems more likely to be successful. Okay, good. And for that we do have an arch flag passed into the scripts. So if you have existing x86 infrastructure, you can just pass in the amd64 flag and it builds only that one. And then I was thinking, if you have a Jenkins agent on an s390 cluster, you could deploy workloads with the s390 flag, and so on. That's similar to what I was doing with the example I think I showed you on Travis; they had support for multi-arch, and I was doing that more as a public demo for you guys to see what the possibilities are.

Next, no more use of the Docker token. I guess you were generating some sort of authentication token; I got rid of the need for that because we're using the docker manifest command, where you don't need to generate a token to make any of the calls. Before, you were hitting the manifest API, I think, and trying to grab a bunch of stuff. Again, switching to the docker manifest command removes the need for that and makes things a little simpler. The only credentials you still need, and you needed these before too, are the Docker login and password to push and publish the images, and you can generate an API key for that, so it's not a literal password. Okay, so the usage of the Docker token was not for the push of the image? No, at least from what I gathered. Great, okay, thanks.

I also added, and this is a nice one, a Debian tag. In the tagging schema you had, a lot of the tags would be like jenkins 2.10-alpine or 2.10-centos, but there wasn't one for Debian; it was just 2.10. I think it's worth being a little more verbose and making sure it says Debian. I didn't delete any of the tags; I just added one more tag, an alias if you will, that points back to the Debian image, to make it more explicit for people who are automating things and maybe iterating over the variants or distros. It would be nice to have that. We're actually running into a similar issue at Adopt, where we just assume the default image is the JDK instead of the Java runtime environment, and I'm going back to them saying, hey, you need verbose tags; it helps clarify everything. Great, and that makes sense to me. So you didn't take away the LTS tag; you just added one that says this is LTS on Debian. Yes, and it points to the same image, I assume, as the plain LTS tag. Yeah, to the same one. Yep. It's just a way for me to be explicit:
yes, I know I'm using Debian, and I'll reference it in my FROM clause. Yep, exactly, 100%. That's all I did.

I also added a .ci folder. This is what I'm seeing in a lot of repositories I've been working with on open source teams: instead of putting all the CI build-pipeline stuff in the main folder of the repository, you keep the main folder uncluttered, with just the build files, the plugin scripts (which you use for some of the Docker builds), and everything that goes into the Docker containers, and you move all the CI/CD scripts into a folder. It keeps things a little cleaner and more organized. So I moved the publish scripts to a .ci folder. You see the same pattern with .github for issue templates; it's just a common practice, at least from what I've seen. It doesn't change any functionality; it's just a little cleanup.

Next is a force parameter to push the tags and images no matter what. One of the issues was that when we publish a build, the publish images step builds the images and checks whether the tags are already up there. Say there's a security patch and the tags are already up there; there's already an image called, say, lts-debian, but you just applied a security patch and you want to force that image up even though a tag with that name already exists. The force parameter pushes it no matter what, and it goes throughout all the different build scripts: you can set a force parameter for building images, a force parameter for the tags, and a force parameter for the manifests, so that you're always pushing when you have something.

Hmm, I'm surprised; I would have assumed there was already something like that, because when I run jenkins/jenkins:lts now versus three months ago, it was a much older version then, so something is updating those tags already. So, I forget; I'd have to look back at my notes from when I was in the thick of it, because it is a pretty intense build pipeline you have. There is a force parameter already, but I think it applied only to the tags or the images; it didn't apply to everything, like the manifests. Maybe the force flag was just for the images, but if it saw the same tags there and you really wanted to re-issue the tags to point to different images, that wasn't an option; same with the manifests, where it would just check, hey, the manifests already exist, don't go ahead and do it. The force flag wasn't there for those. So I was just making it consistent across everything. Thank you.

Yeah, and I guess this next one is the same idea: add additional checks to make sure images and tags are always up to date. That goes back to checking the SHAs of each image; basically comparing, hey, this image I just built has SHA such-and-such, and the image already up there has SHA such-and-such; are they the same or different? Okay, they're different, let me push the image, because it must have some sort of security patch or it must be newer. Okay, so this additional check would kick in when, for instance, a Debian operating system patch happens on the base image below us.
Right, and that would get us the updated base image included in a new image and could, if we were using force I guess in that case, push that image out. Yes, even for older versions. Okay, got it. And I tried to add as many checks as possible to short-circuit things: let's not waste bandwidth re-pushing things if the exact same SHA is already up there, but if the SHAs are different, we assume it must be newer, and if you have the force flag set it will always push even if it's the same. So I thought that was good; you already had some of those checks in there. I think they were just for the images, though, not for the tags or manifests, so I wanted to be a little more thorough in the checking and save bandwidth and time where I can.

The last thing is still ongoing: Git Large File Storage, which actually leads into our next agenda point, so let's keep it brief here. I was using multi-stage builds: I was compiling Git LFS from source and copying that binary into the Docker images, because they didn't release binaries for s390, they didn't release for POWER, and I don't think they had an arm release. Compiling everything from source added a lot of time, so I've been working with the Git LFS project on fixing that; the PR to add support for s390 has, I think, been closed now. So I have an update to this PR coming soon which adds those patches in; it saves time by not building everything from source and instead pulls from the releases on the GitHub page. Excellent, so, where was it, let's see: s390 and PowerPC. Wonderful, so they're already publishing those. That's really good. Yes, exactly.

And then there is a problem with Alpine. I actually have another PR, or issue, open with the maintainer, and he's already working on a patch. The problem with the Alpine variants is that the binary is statically linked against glibc, so when you run it on musl, it does not work. Welcome to the same problem OpenJDK has, right? Yes; there is one and only one true C library for Linux, until musl reminds us there is not just one C library. Yes, so I'm trying to get him to apply a patch for that, and we'll talk more about it in the second agenda item, because there are some different options we can go down. But this is the general gist of the whole PR. I know we took up a large chunk of time on this, but it is a good rework of a lot of things, and it's just the experimental images; nothing here impacts the official pipeline you have for the official images. This is just your Jenkins CI repository, I think, on Docker Hub; it does not affect the main one. I want to test how this runs on the experimental images before we make any big changes to the official ones.

Right, and that's great, thank you. So it looks like the next steps for us are: it needs more review. Let me get some notes here. I know I'm not a bash wizard by any means, and there is a lot of bash, so wherever we can get help and review would be great. So let's get this into the experimental repo so that we can do further testing. Yes. It will also need CI infrastructure for s390x and for ppc64le, and probably arm in there too. And for arm, yes. Okay. Right.
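To make that concrete, here is a minimal sketch of the kind of per-variant, skip-or-force publish loop Jim is describing. The script layout, variant list, repository name, and FORCE variable are hypothetical illustrations, not the actual scripts in the PR, and the remote-tag check uses docker manifest inspect, which needs the experimental CLI:

```bash
#!/bin/bash
# Hypothetical sketch: build each variant concurrently, skip the push when the
# tag already exists on Docker Hub, and override the skip with FORCE=true.
# (Jim's PR refines the skip further by comparing image SHAs; not shown here.)
set -euo pipefail

variants=(debian slim alpine centos)
FORCE="${FORCE:-false}"

publish_variant() {
    local variant="$1"
    local tag="example/jenkins:lts-${variant}"   # hypothetical repository and tag

    docker build -t "$tag" -f "Dockerfile.${variant}" .

    # "docker manifest inspect" succeeds only if the tag is already published.
    if [[ "$FORCE" != "true" ]] && docker manifest inspect "$tag" >/dev/null 2>&1; then
        echo "Skipping ${tag}: already published (set FORCE=true to re-push)"
        return 0
    fi
    docker push "$tag"
}

for v in "${variants[@]}"; do
    publish_variant "$v" &   # variants build in parallel rather than one long for loop
done
wait
```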
And so that needs discussion with the infra team. Yes. And I did keep the old experimental script in there, so you can still operate business as usual, but we can hook in this whole second experimental pipeline just to compare and contrast, and also to keep things working as they are right now. I thought that might make for an easier transition. Good, very good, thank you. All right, anything else on that topic before we move to Git LFS? Nope, that's about it.

Okay, so let's take up that Git LFS topic. Since I care deeply about Git, now you're going to talk about things I'm really, deeply, profoundly interested in. Go ahead. So, from talking with the maintainers of Git LFS: one of the big issues was that I had to get the build for s390 and the build for POWER going, and now those binaries are up there. One thing you were doing, and I guess it's the most recent change to those images, is pulling from packagecloud.io, which is where they officially release their binaries for Debian, and I think they also have a YUM repository for RPMs, right? But packagecloud.io does not support APK, so it does not support Alpine, and that's nothing Git LFS can do; it's just packagecloud.io. So one of the questions I had for you: in the PR I have open right now (I'm about to push some of the changes I've been working on), I've actually been pulling from the GitHub releases on all the experimental images, to make it more uniform, so that it's just one Dockerfile and we don't have to have one for POWER or one for s390.

The big thing with packagecloud.io, and the reason I'm not pulling from there, is that right now their GitHub Actions is what feeds into packagecloud.io, and GitHub Actions does not support s390 or arm or POWER; it's only x86. So the binaries that get pushed up to packagecloud.io are only for x86. Hence my hesitation, and why I went down the path of pulling from the GitHub releases: I have it set up with variable substitution, so it detects what architecture you're pulling for and goes to get the s390 binary, or the amd64 binary, from the GitHub releases page, because that page supports every single platform, and we don't need five different Dockerfiles just to support all the arches.

That seems very reasonable to me. The only reason we used packagecloud was that it was the documented technique from the Git LFS project, so pulling from GitHub releases sounds even better than using packagecloud. As someone who was deeply involved in getting Git LFS included in the images at all, I have no problem with that proposal; that sounds great. No dispute from me. It won't be quite as easy, I assume, in the script; that horrible script, well, horrible is the wrong word, that piece of script code we copied and pasted from packagecloud.io will have to be replaced with a different script that copies from GitHub releases, but you've already got that, right?
Yes, I already have it, modeled after your tini download; you're pulling tini (however it's pronounced) from its GitHub releases and verifying the signature file, and I went through the same process for Git LFS. I actually talked to the developer, and he is now documenting how to verify the binaries, because it was a little confusing: it's a signed hash, and not just one signed hash but signed hashes for all the different binaries. So you have to verify the signature with GPG, then cat out the SHA for whatever architecture you're on, and then run sha256sum to verify that whatever we're downloading is in fact the correct file, in case some malicious actor somehow got access to the repository. Right, and that's a really healthy process; we certainly don't want to risk having somebody contaminate our supply chain by slipping in a hacked version of Git LFS. Great.

Okay. So the only sore thumb, the thing that sticks out, is Alpine right now. They have binaries for Linux amd64, Linux s390, and so on, but they're all built, like we talked about, against glibc. The maintainer does have a PR open; I was actually working with him on it at about five o'clock last night, and he's obviously in a different time zone, so it looks like he signed off for the night. The PR is open, and I think it's going through their CI/CD pipeline, checking and verifying things, but that patch should make it not care which C library it's built against. In the meantime, for Alpine I'm just pulling from the Alpine release package of Git LFS, which is fine, but it's not maintained by the Git LFS project at all; it's some other person, I guess someone heavily involved in the Alpine community. And it lags behind in versions: the version they have is 2.9.2, which is one version behind the newest release, 2.10. 2.10 is in Alpine experimental, or edge, which ideally isn't the best thing to pull from, and enabling that in the Docker container is not great. So hopefully, with the Alpine PR, we get it supported as a release binary, and then we can use the exact same methodology: curl down the binary, curl down the signature file, verify it, install it, and we'll be good. Yeah, Alpine is basically the sore thumb here; it's not great. But like you said, the current workaround can be the package that the Alpine community, not the Git LFS project, provides. Yeah, so that's the whole big mess I've been dealing with.

Excellent. Well, that's marvelously progressing; thank you. Yeah, and that will be pulled into the PR you saw with the parallel scripts and so on; I just need to push the changes I made into that branch of the PR. It doesn't really change anything else; it just changes how we install Git LFS. Super, thank you.

Okay, so it looks like we've got some additional action items to capture into the action item list. One is, actually, maybe we put it on you, Jim. I'm going to phrase it this way: Jim, begin infra discussions
for agents to run ppc64le and s390x in the Docker build process. So for the experimental images first, right? I mean, yes; let's just get it into the experimental images and see. Yeah. And I had a proof of concept working where I had my own Docker Hub organization, pushed all the images to it, and it worked fine. So hopefully it won't be that much work getting it integrated. So, the Jenkins infra team: is that a different Gitter chat? It is, and actually it's a mailing list; let me just embed the link to it here. Let's go grab it real quickly: community, the infra sub-project, and mailing lists should be... here we go, Jenkins infra. No, that's the old list; let's look at the archive, which will tell us where the new list is after the mailing list migration. Here we go; this is the new list, and I'll embed this link. Awesome, thank you so much. Just send an email to that list and you can get the discussion started. Be sure you're clear in that email that the initial proposal is just for the experimental images, because, if I understand correctly, there's a different process for official builds. Eventually we'll have to deal with official builds too, but this is the experimental stage first. Yeah. Good. All right. And I had started those discussions some time ago but did not drive them to conclusion, so yes, we need to do it. Okay, excellent.

Then we've got: we need reviews; review the Docker build rework PR. And that one, you had it right here, right? Give me that URL. That one is really for Mark, Oleg, Alex, and Jim to review and discuss.

The only other question I had, maybe for you, or maybe I go back to the Git LFS maintainer, is that on some of the images, the command you copied and pasted for packagecloud.io installs Git LFS and then runs git lfs install. So it looks like it's initializing, I guess, the Git LFS daemon or its config files or something like that? It's not a daemon; it's just config file creation. Okay, yeah. So the question is: on some of the images, notably Alpine, we aren't calling that install. Additionally, it's calling the install as the root user, but then we create a Jenkins user and switch to it, so in that user's space, is the git lfs install needed? Alpine seems to function fine without it. And second, is it smarter to have it do the initialization as the root user or as the Jenkins user? I thought it was needed for both, but let's put an action item there; that's probably one for you, Mark, actually, let's put it on you. Yeah, I can follow back up with Jim on whether git lfs install is actually a requirement. Yeah, I had never used Git LFS before, so I'm still relatively new to it; I didn't know if you had prior knowledge, but I can also just go back to the maintainer. What I can tell you is that in my case it's cargo cult, meaning I learned to do it long ago and I don't bother even thinking about it now; I make sure I do it at least once every time I'm on a new machine. But is it really needed? I have no idea. It could be that it's entirely me having developed a conditioned response, oh yes, you need to do git lfs install, when it may no longer be required at all. Yep, that's something I can take as an action item to go find out. Great. Let me grab that... whoops. Excellent. Okay, good, I'll put that into the action items as well. Super. Thank you. All right.
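For reference, a minimal sketch of the GitHub-releases install flow discussed above, mirroring the tini-style signature check. The version, asset names, and checksum file name follow the Git LFS release conventions but are assumptions here, not verified against a specific release, and the signing key is assumed to be imported already:

```bash
#!/bin/sh
# Hypothetical sketch: fetch a Git LFS release binary for the current arch,
# verify the GPG-signed checksum list, check the tarball, then install.
set -e

LFS_VERSION=2.10.0
ARCH=amd64    # substituted per platform: amd64, arm64, s390x, ppc64le
BASE="https://github.com/git-lfs/git-lfs/releases/download/v${LFS_VERSION}"

curl -fsSLO "${BASE}/git-lfs-linux-${ARCH}-v${LFS_VERSION}.tar.gz"
curl -fsSLO "${BASE}/sha256sums.asc"

# Verify the signature over the checksum list, then verify our download
# against the line for this architecture.
gpg --verify sha256sums.asc
grep "git-lfs-linux-${ARCH}-v${LFS_VERSION}.tar.gz" sha256sums.asc | sha256sum -c -

tar -xzf "git-lfs-linux-${ARCH}-v${LFS_VERSION}.tar.gz"
mv git-lfs /usr/local/bin/

# "git lfs install" is not a daemon; it only writes the clean/smudge filter
# entries into the git configuration, so it should be run as whichever user
# will actually run git.
git lfs install
```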
Anything else on Git LFS, Jim? No, that's it; I know I spilled a lot. Hey, thank you for that, great results with your time, and thank you very much for your ongoing contribution; this solves real problems. And the Adopt contribution is going well too; they're pushing forward with that. We actually redid the whole test framework, and now we just need to hook it into the Jenkins pipeline, but we can test pretty much any of the variants, any of the base images. Hopefully, coming up soon, we'll see more of the variants being pushed into the official images for AdoptOpenJDK, which would be one step closer to actually starting the discussion for Jenkins and looking into switching to Adopt. But it's been slow progress over there; I don't know that much about the Java development side of things, since I'm running all these workloads and thinking, I don't know, I've used Maven once, but that's about it, so I'm diving in deep without quite knowing what I'm looking at. Welcome to the world of open source: we boldly go where angels fear to tread. Well done; that's great. Thanks very much.

So, a brief comment on the FOSDEM notes. I had good conversations with several platform and operating system teams that were there at the same time in Belgium. The things that were interesting to me: I talked with the CentOS people about getting an AWS image, and they are working on creating one. Until now I've had to do my testing with images that I found and got lucky on, because there isn't an official one, and they said, yeah, we know. I talked to the FreeBSD people about how to get more AWS image availability, and yes, images are available, so I can do more platform testing on FreeBSD using AWS images. I talked to the openSUSE team about their use of btrfs ("butter FS" or "better FS", I'm not sure how it's pronounced) as a, what would you call it, log-structured file system, meaning I can do snapshots. Yep. Go ahead. I've been running ZFS for a lot of my stuff, but I know btrfs has some advantages.

Well, you make a good point. One of the things I was mulling over in having this conversation with them is: how do we describe backup for a Jenkins user? A Jenkins user on a ZFS-capable file system should do snapshots. Right. Yeah, they should not waste their time creating a file-system backup when they've got snapshots built into the operating system. Yeah, 100%. And likewise, I confirmed with the btrfs people at openSUSE that they agree snapshots for backup are absolutely what they would recommend as well; it's their preferred way of doing it, and they've got a concept, like ZFS has, where you can ship a snapshot to another machine. So, good progress there.

Those were the key results for me. Oh, and I had a conversation with Jim Klimov about Jenkins-specific knowledge of ZFS. Because Jenkins was initially created long ago at Sun, it actually has some very specific knowledge about how to use ZFS very well, but it's outdated, because it hasn't been touched in a very long time and it was written against the old Solaris variants.
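As a small illustration of the snapshot-as-backup approach recommended above, here is a minimal sketch using ZFS. The dataset and host names are hypothetical; it assumes JENKINS_HOME lives on a dataset called tank/jenkins:

```bash
# Take a point-in-time snapshot of the dataset holding JENKINS_HOME.
zfs snapshot tank/jenkins@backup-2020-02-19

# List the snapshots that exist for that dataset.
zfs list -t snapshot -r tank/jenkins

# Ship the snapshot to another machine, the "send a snapshot elsewhere"
# idea mentioned above (remote pool and host are hypothetical).
zfs send tank/jenkins@backup-2020-02-19 | ssh backup-host zfs receive backuppool/jenkins

# Roll back to the most recent snapshot if a restore is needed.
zfs rollback tank/jenkins@backup-2020-02-19
```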
So it needs more work, but it is known to work with ZFS using, I think it's called libzfs, natively. Those things are there and are platform-interesting. That's all that I had from FOSDEM.

Well, I wish I had mentioned this earlier: some of the Adopt team went, and they had a booth at FOSDEM. I was hoping, oh hey, maybe you guys could meet some of the Jenkins team members, but I don't think they got around to it. Well, I was pointed to them and am embarrassed that I missed them. A colleague, Oleg, pointed me to them and said, hey Mark, the Adopt people are here, go talk to them. Unfortunately, he noted it on Sunday, and the Adopt team was only there on Saturday, so I missed them. Yes. But we will get them next year. It reminded me that when I go to these kinds of conferences (I'll be at SCaLE in Los Angeles in March, for instance) I'm going to go looking for other projects to be sure I have those conversations. Yeah, I mentioned it to Shelly, who is one of the leads there; she's the one I've been working with on the Adopt team. And I saw you joined the Adopt Slack channel too, which seems pretty good; she's pretty active there, and that's actually how I've been communicating a lot with the Adopt team. We actually have weekly, or bi-weekly, meetings now on Wednesdays. I can get you that link if you want. But yeah, I wish you guys had met up; that would have been a cool fusion of the two worlds I've been working on.

Excellent. Thank you, Jim. I think that covers the topics that I had, and we've just about hit our time. Anything else that needs to go on the list for today? No. Thank you for letting me talk; I know sometimes I ramble a little bit, so it's much appreciated. Thanks very much. I'm going to go ahead and stop the recording and we'll call this meeting done. Thanks very, very much, Jim. Thank you, Mark. I'll catch you guys later. See ya.