Hi, everyone. Welcome. This is the Jenkins Platform Special Interest Group meeting. It's the 16th of January 2020 — welcome to a brand new year. Let's start by looking at the agenda and going through our meeting. Here's what I've got. We'll talk about open action items first. Then we have an item on the AdoptOpenJDK HotSpot transition progress, OpenJ9 progress, and renaming the agent Docker images. Let's see — I guess we should assign people to these. AdoptOpenJDK: Alex, do you want to cover that, or is that a Jim topic? Is that me? Sorry, Alex, I didn't hear that — say again. Okay, I'm having a little difficulty hearing you. Is that better? Yes, that's much better. Sorry. So, back to AdoptOpenJDK — is that something you want to take, or should we just look at it together in the meeting to see where we're at? I think Jim. Okay, it's a Jim topic. Great. And Jim, if that's a surprise to you, I'll do some of the talking as well. It's all good. Great. All right, then renaming agent Docker images — Oleg, you've got that. Broadening the platform support for Docker images, which I think is a general expression of it — well, let's put that one also on Jim. Are you okay acting as the lead on that conversation? Yes, that's fine. Okay. Then Google Summer of Code: Oleg. QEMU and Docker manifest integration and the CI/CD pipeline: Jim, that's you, and hopefully we'll have you show us a demo there. And then "support Alpine for newer OpenJDK Docker images" — I'll have to take a look at that one. I think I'm the one who put it on the list, but I'm not sure I remember what it is; we'll talk to it when we get there. Anything else that needs to go on the agenda before we start working through it? Nothing specific — though there was a comment in one of the Docker image repositories about restructuring the packaging pipelines, so I guess it fits the broadening-platform-support topic. Okay.
Yeah, I'm not sure whether it's its own item. I think it's either here or under broadening platform support, or Jim may be talking about it in that one, because I think Jim was the one who raised the question. So one way or the other, we've got it in the topics. Okay. All right. So, on action items: I am sincerely sorry — happy holidays — I still have not opened that JEP for Docker operating system support; it continues on mine. Oleg, you've got the action item on the JEP for the Windows support issue. Yeah, it will take a while, so I will definitely need to deliver on that somewhat later, because we have just started refactoring the Windows service wrapper. Oh, okay, good. So this one is the WinSW project in Google Summer of Code — is that what you were saying? Do I understand it correctly? It's not GSoC. Oh, it's not? Okay. It might also come into GSoC for YAML support and other things, but right now it happens outside GSoC. Okay, thanks. So it's not specific to GSoC. Thanks. Yeah. For me, the main objective in this story is to specify which Windows versions we support — and hence, approximately, which .NET versions we support — and also which WinAPI versions we have to support, because we still have Windows process management issues in the libraries. So it's a bit far from my immediate priorities, but yes, I'm going to finish it as a JEP. Got it. All right. Well, this is an interesting time for it because, as an example, Windows 7 officially goes off support from Microsoft this month. Well, that's a good point — but what about embedded versions? When will they stop supporting the embedded versions? Oh, that I don't know. Good question. Yeah — maybe in ten years, maybe later. Right. At the same time, I'm pretty sure there are users who run Jenkins continuous integration on such platforms. Some may call them exotic, but I don't. Yeah.
So whether or not Microsoft supports the platform may not be as helpful a reference as I implied. Good point. Yeah — desktop operating systems are deprecated quite quickly by Microsoft, but if you talk about embedded platforms, there is long-term support for them. I believe even XP is still supported there. Got it. Thank you. Okay. Alex, anything you want to report on the Windows installer and code signing? We're still waiting on the code signing cert. Apparently the company that's going to issue the certificate is taking a long time to verify the new legal entity, so it's still in the pipeline. Yep. Alex, I'm not sure whether you're in the CDF Slack, but there is a thread in the operations channel which you can join, and there's recent information about that from Lawrence. I'll post the link. I'm in that chat — I saw the conversation between, I don't remember who, and Olivier about maybe switching to a different organization to get the cert. Pick your poison; they're all more or less the same. Great. All right. So that cert continues to be the challenge, and we'll keep working on it, because it's also blocking progress on the automated release process for Jenkins — we've got multiple reasons to get that signing thing fixed. Thanks. All right. So, Jim, should we take on AdoptOpenJDK HotSpot? Yep. Some updates coming from upstream — or downstream, whatever. My support for CentOS is being looked at right now in a PR against the unofficial AdoptOpenJDK Docker image library. Once that's in the unofficial one, there's — not exactly an official process, but it has to go through some testing to get into the official one, which will help us, because you guys support CentOS. Along with CentOS, it adds ClefOS, which is the s390 port of CentOS. Yeah. So that's really good.
In terms of the testing side, I have a PR open against the AdoptOpenJDK testing suite — I forget the exact name of the repo. I reworked all their Docker tests, which was a challenge, because this was me diving very deep into Java. They run a bunch of external tests — run WildFly, run Open Liberty; they run Jenkins as one of their external tests — really just common Java programs. And I reworked all the Docker images to be as slim as possible. Once that PR gets accepted, what we're going to do — right now they're only testing, I think, a Debian- or Ubuntu-based image, and we need to expand their test framework to support multiple bases. That will allow the AdoptOpenJDK Docker images repo to utilize those tests, and once the images go through the wringer on all those tests, they can get promoted to official images and thus eventually get back to us. So it's going deep down the rabbit hole, but it's going well — a little slow-moving, but it has momentum. That's great news; that's really wonderful. So what you're saying is that the existing AdoptOpenJDK Debian image we know we need, and there's really good progress on CentOS. Yeah — there are a couple of other base images AdoptOpenJDK supports right now, though I can't remember them off the top of my head, but they didn't support CentOS, and I know that's one of the ones you guys had. Thank you. Thanks very much. Any questions from the group for Jim on that topic? Yeah — there are going to be a good number more images. We may support UBI, which is Red Hat's Universal Base Image, which would be really cool, and a couple of other unusual ones out there. But I think UBI would be really interesting for more enterprise solutions. Good. Yeah, I know folks who are interested in a UBI image as well. Very good. Okay, thank you. All right — OpenJ9 is the next topic. Jim, that's you again.
Yeah, you guys might need to jump in here and fill in the blanks. But as far as I know, the PR got submitted on your end and accepted, and a couple of images got built. But I think there were a couple of issues that Erwin over on the OpenJ9 team brought up. Alex, I think you were working with him on that, or at least there was some back and forth in the Gitter chat — I don't know if I 100% followed everything. Do you have any more info? Yeah, so it is getting to the point where it's starting to build. But because we're using the AdoptOpenJDK images right now, it seems like they either removed some images, like the arm32v7 one, or something similar. It still shows up on Docker Hub, but when you try to pull it, it doesn't pull correctly. I haven't had a chance to dig in — we just got new silicon at my job, so these past couple of weeks have been pretty rough, but I'm hoping to get back and look at it again pretty soon. Are you guys pulling the manifest, or are you pulling the specific image — arm32 slash openjdk, or something like that? The way we currently have the multi-arch stuff set up is that a shell script modifies a Dockerfile, and then we build that Dockerfile separately. I'm sure there's a different way of doing it, but I'm not intimately knowledgeable enough about Docker to know if there's a better way right now. No, it's fine — that's the publish-experimental script, the thing I've been hacking apart. Yeah, so you guys are pulling from the arm32 one, not the manifest — which is to say the manifest might have been edited to no longer include Arm, or arm32 at least. If you're pulling directly from the per-arch users — with Docker Hub, the official Docker organization, what they did, which I think is kind of cool, is that whenever you push to official images...
All the multi-arch images — all the per-arch images, actually — have users. If you go on the Docker Hub UI, you can sort by users, and if you go to, say, the s390x user, those are all the official images that get pushed and belong to s390x. It's a little weird, though, if arm32 is not pulling and it's up there. Well, they say that the image is not supported on arm32v7, but the tags and everything are there, so I don't know if they just push it without checking whether it works. There's a warning right up top — "this image is not supported on the arm32v7 architecture" — but there are tags, and it tells you how to pull them and so on. I just haven't had a chance to go back and look at it. I don't know how many people are using arm32v7 with Jenkins; I would assume the majority of people are using arm64, but I could be wrong on that. No, I think you're right — even on the official images. When I was hacking apart the scripts, I was looking at what the manifest actually annotates the architecture as for Arm: there were like three different versions — arm32, arm64, and then I saw a v8, which seemed like yet another one. Arm64 seems to be the most popular, but it would be awesome to support everything. So — I thought arm32v7 was my Raspberry Pi, with one gig of RAM. That depends on which Raspberry Pi you have, because the Raspberry Pi 3 is 64-bit Arm. Okay, so my Pi 2 — I should be more specific — my Pi 2 I thought was arm32v7, and it's limited physically to one gig of RAM. My Pi 4 has four gigs of RAM, and I'm pretty sure it's 64-bit Arm. Yeah. Okay, got it. Correct — the Pi 2 is a 32-bit core, correct. All right.
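The shell-script mechanism Alex describes — rewriting the base image of a shared Dockerfile for each target architecture, then building the result separately — might look roughly like this. File names, image names, and the prefix convention here are illustrative, not the repository's actual ones:

```shell
# Sketch of per-architecture Dockerfile rewriting. Template name,
# output name, and image prefixes are hypothetical.
prepare_dockerfile() {
  # $1 = arch prefix on Docker Hub (e.g. arm32v7, arm64v8, s390x)
  # $2 = template Dockerfile, $3 = rewritten Dockerfile to build
  arch="$1"
  sed -e "s|^FROM openjdk:|FROM ${arch}/openjdk:|" "$2" > "$3"
}

# A build step would then use the rewritten file, e.g.:
#   prepare_dockerfile s390x Dockerfile Dockerfile.s390x
#   docker build -f Dockerfile.s390x -t jenkins:jdk-s390x .
```

The downside, as the discussion notes, is that pulling a per-arch user's image directly (rather than the manifest) breaks when that arch is dropped from the upstream image.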
You said you're getting some new silicon — new Pis or something to work with? When Alex says he's getting new silicon, that means his employer has just received silicon back for his job — it's nothing to do with this. Oh, that's why I haven't been able to look at it: I've been focused on my job stuff. No, no — Alex works for a company that actually makes silicon, which is wildly cool. Sorry. I have a couple of Pis lying around; I can try to bring those into work and see if you hit the error. There's a dreaded error I look out for all the time — I'd have to go find the exact text, but I know it off the top of my head, because it's the error I get whenever I try to pull down an image that's not built for s390: it comes up with some weird, obscure error message, and I'm like, yep, that's 100% an arch issue. So you might be running into that, where — like you said — it's tagged as working on arm32, but in reality it's not. Yeah. So — Alex, given your workload it may not happen anytime soon — the action item is that we need to investigate further. What action should I put here? You can put me down to investigate the architecture support. Okay — because we're going to hit the prep work, and I don't think OpenJ9 is available for all the platforms either, or all the architectures. No, I think Arm is actually lacking. So we need a way of filtering out different combinations of variant and architecture. You can put that on my plate: look into what is actually supported for the different architectures. Okay, great. All right, so — investigate architecture support and adding it into the build scripts. Have I phrased that well enough? Yeah. Okay, great. All right, thanks.
Okay — Oleg noted that his network is not stable enough for him to stay on the call, so we'll skip this one on the rename process. I think I've seen some progress already happening in various places and discussions, but it's best to leave that for him to summarize for us. Okay: broadening platform support for Docker images. Jim, do you want to take this one as well? Jim, you're muted. Sorry. Yeah, I'm trying to figure out what we want to cover there, because those topics are tightly coupled. So if it's tightly coupled to QEMU, let's get to that one — we'll just delete this one and come back to it. Let me take the Google Summer of Code topics and at least give a brief overview. I've put the Git plugin ideas under the Platform SIG because of their platform nature, and the plugin has two ideas. There are also EDA tool integration ideas out there, and a few others like that — interesting things that could well be of interest to the Platform Special Interest Group. Let's see — I should embed a link to the Google Summer of Code page; I'll do it separately after the meeting so it's in the notes. It's on jenkins.io under Google Summer of Code. Hey Mark, what is EDA? Oh, sorry — I should expand acronyms: electronic design automation. These are the kinds of tools that are in Alex's world all day long, every day — "I need to design this kind of silicon," or maybe it's physical layout, or all sorts of things like that, where you're trying to design either processors or boards. They generally fall under this concept of electronic design automation. Yeah, it's really awesome — so we're trying to get a Google Summer of Code intern to work on that. Right, that's the concept. Yeah, it's awesome, and there are others. Good — and Oleg's already started. Excellent.
Oh right — and Quarkus, which, if I remember correctly, is like Graal: compiled Java, if I remember right. So that's another excellent one. Since Oleg can't join us, let's go ahead to the next topic. Jim, we're up to you: QEMU and Docker manifest integration. Yeah — since our last meeting I really only got to work on it Monday, Tuesday, Wednesday, and then briefly this morning. What I've been doing is hacking apart the publish-experimental script — as far as I know, that's where you guys do the multi-arch builds — and I've been utilizing Travis. I know we talked a little bit about Travis in the past as the CI/CD pipeline. I built it in a way where it's not tied to Travis at all; we could easily integrate it with Jenkins instead, but that would be a little more work in terms of getting agents for the different platforms. The one nice thing Travis already does for us is give us access to s390, Power, and arm64 — I don't think they do 32-bit Arm yet, or whether they will I don't know — and Power, which is really nice. Do you guys want me to share my screen, or do you want me to click through it? I would love to have you share, if that's okay — I'm going to stop sharing and have you take over. Okay. Sweet. You guys can see my IDE? Yes. So I hacked apart publish-experimental; I left the normal publish script mostly unchanged. One thing you'll notice is that I made this CI folder. These are all just changes I made — obviously we'd have to go through all of it with you guys, since you know the whole Jenkins ecosystem better than me and how you want your repository designed. I moved the CI scripts out of the main tree. I see this a lot on a couple of other popular repos, where all the CI stuff is pushed into a folder away from the bulk of the code, keeping them separate so people don't get confused.
One thing I did — you'll see there is no publish-experimental anymore. I broke it up to be more modular: you publish the images first, then you publish the tags, and I'm still working on the manifest; I don't think that's all done. So there are these three stages, and that's important due to how Travis works. One of the things I was trying to do was make it as fast as possible, getting things going in parallel, and I needed it to be more modular, because before — as I understand it and traced the code — publish-experimental did everything: it went through that loop of pulling down versions (I think it pulls the last five), looped through them, built the image, built the tags, and built the manifest. So I broke it up. Publish-images — we can take a deep dive into the code later, but all I'm really doing, as the name implies, is going through and publishing all the images. Down here — this is probably, sorry Alex, what you were talking about — is the base image that gets replaced, where the whole arm32 and arm64 substitution comes into play. I also made some edits to the Dockerfiles. One big change was taking out the use of the QEMU binaries: I went fully focused on building on-platform instead of doing emulation, which can lead to some problems. So there was no need for the cross-build step — I forget what it was called; Alex and I were talking about it in the Gitter chat — but there was something in there for copying over the QEMU binaries. So, back to publish-images: it's mostly the same code you guys had, just restructured to be more modular. Down at the bottom, you can now pass in a variant, which you guys had, and also an architecture — we'll see in just a second why that's important.
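A tag composed from Jenkins version, variant, and architecture — the "most verbose" form the publish step pushes — could be sketched as a small helper. The repository name and hyphen separators here are assumptions for illustration, not necessarily the project's exact convention:

```shell
# Sketch: compose the fully-qualified, most-verbose image tag from
# version, variant, and architecture. Names are illustrative.
build_tag() {
  version="$1"; variant="$2"; arch="$3"
  if [ -n "$variant" ]; then
    echo "jenkins/jenkins:${version}-${variant}-${arch}"
  else
    echo "jenkins/jenkins:${version}-${arch}"
  fi
}

# build_tag 2.213 alpine s390x  ->  jenkins/jenkins:2.213-alpine-s390x
```

Keeping the default (no-variant) form separate matters because, as discussed later, the unadorned tags point at the Debian build.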
And down here I basically removed all the manifest and tagging work, leaving just publish-images. It does the whole verbose tag — Jenkins version, Jenkins variant, and architecture — because these published images need to have a default tag, so it's pushing the most verbose one. So Jim, what are some common values for variant, as opposed to common values for architecture — is architecture something like s390? Yeah. So if I show you my .travis.yml — that's how Travis knows how to run the build; it's just a little meta file — in here we have different build stages. I also edited the Makefile a little to add more targets, but in here you'll have make publish-image-variant, which lets you supply an architecture. In this instance we're running on-platform for s390, so s390 gets passed in: build the Debian variant for s390 and whatever the current Jenkins version happens to be. Does that answer your question, Mark? I think so, yes — thank you. Any other questions before I go a little further? No, please proceed.
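For reference, a .travis.yml along the lines being shown — one job per architecture and variant, grouped into stages — might look something like this. The stage names, make targets, and variable names are assumptions reconstructed from the discussion, not the actual file (`TRAVIS_CPU_ARCH` and the `arch` key are standard Travis features):

```yaml
# Illustrative sketch of a multi-arch Travis build matrix.
language: shell
services:
  - docker

jobs:
  include:
    - { stage: build and publish images, arch: amd64,   env: VARIANT=debian }
    - { stage: build and publish images, arch: arm64,   env: VARIANT=debian }
    - { stage: build and publish images, arch: ppc64le, env: VARIANT=debian }
    - { stage: build and publish images, arch: s390x,   env: VARIANT=debian }
    # ...repeated for the alpine and slim variants...
    - stage: publish tags
      script: make publish-tags

script:
  - make publish-image-variant VARIANT="$VARIANT" ARCH="$TRAVIS_CPU_ARCH"
```

Each `arch` entry spins up a native VM for that architecture, which is what removes the need for QEMU emulation.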
So in here, this is the first stage, which is build-and-publish images — more for the manifest still needs to come. Have you guys ever played around with Travis before? I have not, so this is interesting. Don't be shy, though; I'm confident there are others who have. Okay. So Travis is tightly coupled with GitHub — as you can see, I have a GitHub repository. If I go to build history, you can see all the times I've committed, and it kicks off builds. You can configure it to only kick off builds on certain branches, like master, so it's not building all the time — that's usually what you want in production, just building master, so really the only pushes to master would be via PRs. In this first stage you'll see a bunch of different platforms. The ones for Debian are the first four right here — you can see the variant — the next ones are Alpine, and the next ones are slim. Those are the three multi-arch ones you guys have right now. I know we're trying to get OpenJ9 supported too, but that wasn't in the folder for multi-arch images; I just based it on what you currently have. So what happens is these all kick off in parallel, start building the images, and they automatically get pushed to my repository over here. These should look familiar — it's the same tagging mechanism you guys use. And the really nice thing is that these are all basically spinning up VMs on s390, on Power, on AMD64, on arm64, so we don't need the QEMU binaries at all, which I think is a big advantage. After that — I'm still working on it — it will go ahead and publish tags. I split building the images from publishing the tags basically for speed. Publish-tags — I can show you the script — lets you say, okay, publish all the Alpine tags: the ones that go "alpine" plus the architecture. Which — just to confirm — these just point to the latest available build of Alpine, right? It's not an LTS build or anything; it's always the latest one. That's what I would have expected; I think that's the naming convention already in use — unless the version is called out, it's assumed to be the latest. Okay, good. One thing to note is that publish-experimental is for things that have not been fully tested.
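The manifest stage still to come would presumably stitch the per-architecture tags into one multi-arch tag with `docker manifest`. A sketch — with a dry-run switch so the logic can be exercised without a registry; the tag layout (arch as the last hyphen-separated segment) and names are assumptions:

```shell
# Sketch: assemble and push a multi-arch manifest from per-arch tags.
# Set DRY_RUN=1 to print the docker commands instead of running them.
publish_manifest() {
  tag="$1"; shift               # e.g. jenkins/jenkins:2.213-alpine
  run="docker"
  [ -n "$DRY_RUN" ] && run="echo docker"
  $run manifest create "$tag" "$@"
  for img in "$@"; do
    arch="${img##*-}"           # assumes arch is the last tag segment
    $run manifest annotate "$tag" "$img" --arch "$arch"
  done
  $run manifest push "$tag"
}

# DRY_RUN=1 publish_manifest jenkins/jenkins:2.213-alpine \
#   jenkins/jenkins:2.213-alpine-amd64 jenkins/jenkins:2.213-alpine-s390x
```

Pulling the manifest tag, rather than a per-arch user's image, is what avoids the broken-arm32 situation discussed earlier.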
Eventually those items should move into the normal publish script, and then it would also do LTS-based images. Yeah, I actually started adding LTS — I think that's one of the things that failed down here; I think one of these is LTS. I noticed LTS was not in the experimental script, so I added it into my publish-tags right at the bottom. Like I said, it does the image variants, then LTS Alpine — so it will go out and find the latest LTS version right here — then LTS slim, and then the latest tag, which I guess is associated with Debian, right? It's always the Debian build that is latest? That's my recollection: latest has been the default, and the first operating system supported was the Debian variant. Alex, is that consistent with your understanding as well? Yeah, I believe so. All right, good. I just wanted to make sure the default image was always Debian, and I made it flexible enough that if down the road we did switch to Debian slim, you can basically just swap them out right here and change what latest actually points to. Yeah. So this is basically as far as I got. I guess the major improvement is really building on-platform and taking advantage of Travis's platform support — but like I said, this is not tightly coupled to Travis at all, and what I imagine you could do in Jenkins is have agents for s390, for ppc, and for arm64. One other question I had: what is the whole build infrastructure right now, as it currently stands — what are you doing to trigger builds currently? Actually, Alex can give a better story about the current build infrastructure; you're closer to it than I am, and I'm uncomfortable describing it. Do you want to go ahead, Alex?
So, just as historical understanding: the reason we went the QEMU route is that we could not get agents in our infrastructure to build on those other platforms. One concern we have is that, in order to publish images, we would have to put credentials for specific Docker Hub repositories somewhere — that was one of the concerns about using another service: having those credentials outside the main infrastructure that we fully control. But I'm sure that could be revisited. If we can't get these types of agents, then we're either going to need to do something else or figure out if we can pay for some sort of service. So that's the historical background. We only have AMD64-based agents right now — that's where we're at with infrastructure. Oh — I can show you how I manage the Travis credentials; we'll see it up at the top. I know you guys had some sort of login-token mechanism, which I'm still trying to integrate, because one of the things I lack right now is checking whether the images are already up there. Let me trigger a build — I made some changes to publish-images to get latest working. With Travis, if I add a new change and push, Travis has a GitHub hook which starts a build. Yep — there's a new build. So right now it's going to be spinning up workers, and these just popped into existence right here. One thing to note, which was a little surprising to me: we have an enterprise Travis at work, and there's no throttling mechanism there — all these jobs would kick off at once, so you're not waiting around for workers. But on the free Travis, you'll see that only five boot up.
And then, as soon as one is done, it triggers the next one, and the next, and so on — so it cuts down on time. Arm64 takes the longest — about 20 minutes to build all the images for each variant — so that pushes the total build time to about 40 minutes this way. Before, I had it run in stages: okay, just build Debian, and it would take 24 minutes for the Arm one; then build Alpine, another 24 minutes, and so on — 24 minutes three times is over an hour just for building images, but this gets it down to about 40 minutes by doing it all relatively in parallel, which is pretty cool. But yeah, this is the workflow you get with Travis: you do a PR to say, hey, we're now supporting X, or you trigger it via webhooks, or you just come in and manually trigger it here, and it kicks off a build. One thing I want to show you is my secrets, up at the top. I'm still working on that whole GitHub token mechanism you guys have — whoever did that, I'd love to talk more about it. Right now I just added a Docker login step, and you might say: okay, he's using environment variables for the username and password. I'm trying not to expose my username and password, so what you can do is use Travis's really nice UI to manage your secrets: you enter them there, and they're hidden, and in the output of all the logs — let me scroll here — you'll see just normal Docker build output; right up at the top it's doing a docker login somewhere. Right around here, if I'm not blind... it's somewhere in here, but you'll notice "secure" right here. That's because this is my user — James Crowley at IBM — my Docker username, and also the repository or organization name.
So it will actually blank out any uses of passwords or usernames, because I have my secrets managed over here. That's one nice thing Travis has built in. But I understand the other concern — it's off-prem, so... I guess Travis could be malicious, but it's pretty widely used. Does that at least address the concerns with the passwords and such? I know it wasn't really a question. I've used Travis for another project I worked on, so — okay. The other thing is we always try to eat our own dog food: using Jenkins to build Jenkins, and to do all the infrastructure. That's one of the goals, so that we verify all those use cases are handled. Yeah — we have used other services; I think one project is using Jenkins right now because it's a Windows C#-based project. So yeah, it's definitely a possibility. That's why I made it flexible, not tightly coupled with Travis at all — it's really only coupled on the architecture detection. This is the only place you really see Travis variables, where you get the architecture, and you could derive it from the runner name if you really wanted to. But like we mentioned to Mark before, getting access to s390 and Power is no problem — I can get you guys access to servers that you can turn into agents. I don't know how you'd necessarily get Arm — maybe an army of Arm boxes. Arm is actually already available from Amazon itself now: AWS now has Arm machines. I'm not sure if they're the exact architecture we need, but they just recently announced it.
So — we're currently Azure-based, so saying it's on AWS isn't as much help as you might have thought, but knowing that one of the major cloud providers has started offering Arm machines at least gives me hope. Yeah. Anyway, I think that's what I wanted to share: moving away from the QEMU binaries, and making things more modular so you can run them all at once and speed up builds. The one big thing I was really confused about at the beginning of this whole process: in the original script you have this concept of getting the latest versions from Jenkins, which I guess takes the last five releases. That seemed a little odd to me, and I don't know if you're actually trying to change it. I'd imagine you'd probably want to move to more of a push model, where you see a release on the Jenkins GitHub or on the website and something triggers a build — so you're not building five at a time, you're just building whatever was pushed. Is that something you'd be working towards or want to support? I'm not familiar with that particular segment of code, and I don't know if Alex is either — it definitely predates my involvement with these scripts, so my apologies, I don't have an answer for you. I think the reason this was done — actually, I don't know why this was done; I was thinking of a different product. Well, I think this was done because if the Docker image changed, and there was an error in the actual Docker image rather than in Jenkins itself, they wanted to update the most recent versions that people would be using. Okay. Yeah — because I saw a couple of places, like in the publish script, where you're not pulling just five; I think you pull the last 20 or something like that.
And then you have checks in there, somewhere in the last 30, to basically say, hey, this version is published already, so skip it. It just seemed like a lot of redundancy: okay, we pulled the last 30 versions, let's loop through, say, 25 that are all already published, and only the last five aren't, so let me just go ahead and build the last five. It would be nice to have more of a push model, because otherwise you have lag time: say Jenkins drops something, are you immediately building that newest version or LTS that Jenkins just released? Or is it even a goal for you to stay relatively in sync, like, hey, a new Jenkins just dropped, let me kick off a build of the Docker image for it? So for the official repository, it is built on a time basis. That's another reason this check is in there: the way the current release setup is done, it's not fully automated. We have a project that is fully automating that release, and then we would be able to kick off Docker builds and things like that, but currently it's not that way. So that's another reason it's a time-based thing, and that it has to go pull the most recent versions. Okay. All right, sweet, thank you. That was one of the little confusions. I guess you probably do want to wait a little bit anyway to make sure everything's stable and all that, right? No, major bugs are not typically the concern. Typically we want the Docker images very, very soon after an LTS or a weekly is available. But Alex is right that the initial release of a Jenkins version, 2.221 as an example, is generated at an unpredictable time and done right now by a human being, Kohsuke, because of the code signing. Okay.
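The pull-and-skip flow discussed above — fetch the N most recent releases, skip tags that are already published, build only the rest — can be sketched like this. The function name and the idea of passing in a "published tags" set are assumptions for illustration, not the project's actual code:

```python
def versions_to_build(recent_versions, published_tags):
    """Given the most recent Jenkins versions (oldest first) and the set of
    image tags already published, return only the versions that still need
    a Docker image build. Order is preserved so older releases build first."""
    return [v for v in recent_versions if v not in published_tags]

# Example matching the scenario in the discussion: the last 30 releases are
# pulled, 25 are already published, so only the last 5 need building.
recent = [f"2.{n}" for n in range(192, 222)]        # 2.192 .. 2.221
published = {f"2.{n}" for n in range(192, 217)}     # 2.192 .. 2.216
assert versions_to_build(recent, published) == ["2.217", "2.218", "2.219", "2.220", "2.221"]
```

A push model would replace the `recent` list with a single version passed in by whatever triggered the build, making the published-tags check a cheap safety net rather than the main filter.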
All right, so those were the little areas I was confused about going in, and I didn't know if they were things you were working towards, or if there were other little gotchas like the ones you mentioned. So anyways, that's kind of it for my new code, and after talking with you I don't know how useful it would be, since you still have the whole problem of not having all the architectures; I guess I could add back a few of the new pieces. But I think moving to building on all architectures would be a good idea, even if that isn't practical right now. And I think the excellent question we need to hoist to the infrastructure team is: are we at a point where we could turn on other agents in the ci.jenkins.io infrastructure? And you noted that there are several providers that might be willing to provide access to s390x and ppc64le. On Azure, they do have arm32v7 and arm64, but no s390x or PowerPC. Oh, cool. Okay, so Azure has Arm, very good. All right, that's really encouraging. I don't know what their full support is, but it looks like you can create VMs and such on the architectures I mentioned. Oh, was it Arm 32 and 64? Both 32 and 64. Oh, awesome. That could at least do part of what we're looking for, and then we could maybe use Travis for the s390x and PowerPC. I mean, we have resources to give you access to a VM, and then you can make that into an agent if you want. So it sounds like we've almost got the platforms right there, if you're able to at least utilize the Azure setup for Arm. Yeah, I'll look into it further and do a few tests and such. Are those publicly facing servers? Yeah, they could be set up to be publicly facing.
It's hosted at Marist College; it's part of the LinuxONE Community Cloud, and for vendors we work with, or open source projects, we can basically give a long-term box to you. Right now the community cloud is set up with, I think, a 90-day or 180-day trial; I don't think you can buy anything there beyond that limit. But for, like I said, vendors or open source communities, we can give access long term, essentially indefinitely. Great, thank you very much, Jim. Thank you. So in terms of action items, it looks like you're going to continue evolving the publishing scripts. Yeah, I need to refine it a little bit and then finish the manifest steps. And I'll keep making it flexible so we can use Jenkins agents, where you're basically just pushing a workload to another machine. I probably won't couple it with Travis, so I'll give you that flexibility, but I'm going to continue testing on Travis because I don't have access to your build pipeline. So, Alex, is that reasonable to you? Are you okay with that? Yeah, any improvement is a positive, I think. Thanks very much, Jim, that's great. We have actually hit our time. I had one item remaining that I'm going to skip; we'll defer it to another time. Unless there's some urgent thing beyond that, I'd like to call an end to today's session. Any other topics we need to be sure we cover today? Thanks, everybody. Alex, thank you very much. Jim, thank you. What great results. It's so cool to see s390x and PowerPC; that's really wonderful. Thanks very much, and we'll meet again in two weeks. Thank you for letting me share. Thanks.
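The "manifest steps" in the action items refer to the standard `docker manifest` sequence that stitches per-architecture images into one multi-arch tag. A rough sketch of generating those command lines, assuming the common `repo:tag-arch` naming convention for the per-architecture images (the repository name and architecture list below are hypothetical, not the project's actual names):

```python
def manifest_commands(repo, tag, arches):
    """Build the docker manifest command lines that combine per-architecture
    images (repo:tag-arch) into a single multi-arch tag (repo:tag)."""
    target = f"{repo}:{tag}"
    arch_images = [f"{repo}:{tag}-{arch}" for arch in arches]
    cmds = [f"docker manifest create {target} " + " ".join(arch_images)]
    # Annotate each entry so clients pull the right image for their platform.
    for arch, image in zip(arches, arch_images):
        cmds.append(f"docker manifest annotate {target} {image} --arch {arch}")
    cmds.append(f"docker manifest push {target}")
    return cmds

# Hypothetical usage for a three-architecture agent image:
for cmd in manifest_commands("jenkins/agent", "latest", ["amd64", "arm64", "s390x"]):
    print(cmd)
```

Because each architecture's image can be built and pushed independently (on a Travis s390x worker, an Azure Arm VM, and so on), the manifest step is the only point where the pipeline has to see all platforms at once.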