Welcome. This is the Jenkins Platform Special Interest Group. It's the 21st of December. Thanks for being here. Topics I've got on my list: action items, Windows Docker images for LTSC 2019, the master branch, and, if we have time, I'd like to review a draft of the JEP that I've created in a Google Doc, trying to get some progress on that Jenkins Enhancement Proposal. Are there any other topics that you'd like to bring to the meeting? Oh, actually, there's one that I want to bring: the new ARM servers at Amazon. So a topic from me. Any other topics? Okay, great. So I have the action item to open a JEP for Docker operating system support. That's ongoing, and I've finally started the evolution of the document. I did a bunch of investigation, comparing and contrasting, a few weeks ago; we'll review that later today. CentOS options for AdoptOpenJDK. So Alex, you've opened a PR for this. Thanks very much. Could you share with us what you learned from that PR? It was pretty easy. I mean, it's basically just adding in the Adoptium (or AdoptOpenJDK, as it was previously called) yum repo. And it seemed to install the JDK just fine from AdoptOpenJDK. I did have to add the freetype and fontconfig libraries, which are required because it's not a headless JDK; I don't think Adopt has a headless option. So those were additional libraries. Those would be, you said, freetype and fontconfig? Ah, okay. Otherwise you get an AWT error when Java starts up. Got it. Okay. Excellent. So with that one, instead of relying on the CentOS-packaged JDK, we would then rely on the Adoptium JDK. Correct. Nice. Okay. And I saw here that it's using jfrog.io. So that's their official yum repository for AdoptOpenJDK? Yeah, that's from their install page. You go to their install page and it basically says to do this exact operation.
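A sketch of the Dockerfile change being described here, assuming the repo definition from the AdoptOpenJDK install page and the adoptopenjdk-11-hotspot package name; verify both against the actual PR:

```dockerfile
# Sketch only: register the AdoptOpenJDK yum repository on CentOS,
# then install the JDK plus the fontconfig/freetype libraries the
# non-headless JDK needs (otherwise AWT errors at startup).
FROM centos:8

RUN printf '%s\n' \
      '[AdoptOpenJDK]' \
      'name=AdoptOpenJDK' \
      'baseurl=http://adoptopenjdk.jfrog.io/adoptopenjdk/rpm/centos/8/x86_64' \
      'enabled=1' \
      'gpgcheck=1' \
      'gpgkey=https://adoptopenjdk.jfrog.io/adoptopenjdk/api/gpg/key/public' \
      > /etc/yum.repos.d/adoptopenjdk.repo \
 && yum install -y adoptopenjdk-11-hotspot fontconfig freetype \
 && yum clean all
```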
Obviously they're not saying to do it in a Dockerfile, but I just adapted the procedure and put it into the Dockerfile. Excellent. I can link their install page from the PR just so people can see it. That's great. Thank you. Okay. So that would shift us so that we would then be using AdoptOpenJDK for Alpine, for Debian, and for CentOS. And Windows. Oh, and Windows. Right. Exactly. CentOS was the one remaining platform not using AdoptOpenJDK. Okay. Good. Excellent. Thank you. So in terms of communicating that change, is there anything we fundamentally need to communicate, and do we have a vehicle to communicate it? I mean, there's nothing that's displayed to users when they run a new Docker image; they just get the new image, right? Correct. So I'm not sure if there's something we need or can or want to do. I know that it's mainly for running the controller. So part of me says that we're just packaging our recommended JDK with the controller image and it'll work regardless, but maybe we do need to let people know. I'm not sure. People are hopefully not doing builds on their controller. So. Right. Okay. Now, looking back in time earlier today, I saw that Oleg had applied a tag when he rolled us to Alpine 3.12. Would we be willing to consider, or should we consider, tagging and using GitHub releases, even though they don't fundamentally publish anywhere? Just in general, you mean? Yeah, wondering in the general case. So, hey, we're going to drop Debian 9 support in the not too distant future, and we will need some way of saying it. We could use a GitHub release and a tag, or we could do some other notification, blog post, whatever. Yeah, I think that sounds good. I mean, maybe we should do tweets on the Jenkins release account. I don't know if there's a manual way to do tweets there.
I know there are automated tweets, but yeah. Does that make sense? That one I didn't quite understand. Are you suggesting tweeting that we had done the release? So there's the Jenkins release Twitter account. Ah, okay. That does the tweets for things like plugin releases. I'm not sure how we would do it in an automated way, but your quote-unquote releases happen infrequently, so it might be easy if we just did a manual tweet every now and then. I think anything that just lets people know is fine. Okay. I think I'm understanding what you're suggesting. Good. All right. Yeah. So that one, I'm not quite sure how to do it, but I think it's worth consideration. Certainly we can consider a blog post for larger changes, like a Debian 9 to 10 upgrade, et cetera. Anything else you wanted to share on that? Sorry, I think that all came through. No, nothing else on that. Okay. All right. And plugin installation manager and update center. I like your suggestion from our last meeting that we would consider doing a shim replacement of install-plugins.sh. You warned that there are some behavioral differences. Are those a grave concern for you, or do they just need exploration as part of this proposal? I think it just needs some exploration. Basically, what we could do is create the shim and then just run the normal test cases that we have and see if they pass or not. Right. Okay. I can take a look at that. Okay. Great. Excellent. Okay. And then we had a topic there on refinements for parallelization and multi-arch CI. I assume this one is delayed while Jim is on other projects. Yeah, we're a month or more out. Okay. Great. All right. So the next topic was Windows Docker images for LTSC 2019. Gareth, do you want to take us on a tour of current status? Yeah. So the images are all there.
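Going back to the install-plugins.sh shim idea for a moment, a minimal sketch of what such a shim could look like, assuming the Plugin Installation Manager tool's jenkins-plugin-cli entry point and its --plugins and --verbose flags (flag names should be checked against the tool's documentation):

```shell
# Sketch: install a drop-in install-plugins.sh that forwards its
# plugin[:version] arguments to the Plugin Installation Manager CLI.
cat > /tmp/install-plugins.sh <<'EOF'
#!/bin/bash
# Deprecation shim: keep the old entry point, delegate the real work.
echo "WARN: install-plugins.sh is deprecated; use jenkins-plugin-cli" >&2
exec jenkins-plugin-cli --verbose --plugins "$@"
EOF
chmod +x /tmp/install-plugins.sh
echo "shim installed"
```

The existing docker test cases could then be run against this shim, as suggested, to surface any behavioral differences.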
We're only building JDK 11, and we now have an official release of AdoptOpenJDK on that version, so that's okay for the time being. In terms of the agent images, there are two open PRs that just need reviewing to add JDK 8. That should be for docker-agent, and I think the SSH agent. And then the other one is docker-inbound-agent, and that requires some... Well, the current pattern is to use the versioned agent release that seems to be built off the tag, which we're not getting for the newer images, and I'm not entirely certain what the best way forward for that is. So we did it this way. Go ahead. Sorry. I'll ask my question after you've finished. Excuse me. Yeah. So the tag that was created and built on trusted.ci, which is 4.6-1 of the agent, doesn't contain any of the LTSC 2019 images. So we're not getting the builds, or rather the Docker containers, for those. I think what we need to do is do a release on the GitHub side, and then a tag will be created that we can... Actually, it was 4.6-1. Yeah, there is a tag. So while Alex is doing that investigation, I need to be sure I capture this, because I'm still learning the structure. We've got a docker-agent repository that is the base image, and then we've got the inbound and the Docker SSH images that are both derived from that image. Is that correct, Gareth? Or could you describe that a little further for me? The SSH image is not. Oh, it is not. Only the inbound. I see. Okay. All right. So the pull request Gareth submitted is for the base image. But the issue you found is that somehow it doesn't yet include the LTSC 2019 images you'd proposed. Then there's the docker-inbound-agent image, and it's that pull request that's currently failing, because the base images it depends on don't exist yet. Okay. And this is one where we don't... Okay. Right. Good. Okay.
The 4.6-1 tag is building on trusted. I'm looking to see why it's not pushing those images. I was assuming it was because when the tag was created, those images didn't exist in that build. Well, I'm looking at the docker-agent one, so it should be pushing whatever it's building. Except I thought that if a tag exists already, we didn't do a re-push; we would only push a tag once. In the docker-agent ones, we don't have the same... Oh, we don't. No. We don't have the same scripts or anything like that. It just always pushes. So yeah, when I was looking at the logs for those today, I couldn't see any LTSC 2019 images being built at all. Yeah, that's interesting. But I can from the master build. My guess is that... Yeah, so 4.6-1 was released on November 4th, which was way before these new images. So we probably just need to do another release on GitHub, and that will create a new tag and we can build that tag. Okay, and the way we do that is go into the GitHub repository and just apply a tag and declare it a release. There's a draft one that has the commits, like the one from Gareth and some from Victor Martinez, so we can easily do a release. I can do that. Oh, okay. All right. Great. And that's in the docker-agent repository. So we probably want to get the JDK 8 versions in for those. Oh, before we do the release. Okay. So we've got this pull request here that's still pending before we ask for a release. Okay. And Alex, are you okay with my answer on why we're proposing JDK 8? Was that okay for you? Oh, you did. Oh, good. Okay, great. I just barely merged it, so we're good. Excellent. So I'll do a release there and then make sure it builds on trusted. Awesome. And Alex, help me with the version naming. So this will be a 4.6-2? Correct. Okay. And do those releases already include descriptions? Are we already using... Oh, yes. Okay, great.
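Mechanically, the release step discussed here comes down to creating a tag (the GitHub "draft a release" UI, with the Release Drafter notes, does the equivalent). The sketch below uses a throwaway demo repo rather than the real docker-agent checkout; and as noted below, trusted.ci receives no webhooks, so the new tag only shows up after a manual repository scan.

```shell
# Sketch: what "do a release" boils down to, using a disposable
# demo repo (NOT the real docker-agent repository).
rm -rf /tmp/docker-agent-demo
git init -q /tmp/docker-agent-demo
cd /tmp/docker-agent-demo
git -c user.name=sig -c user.email=sig@example.org \
    commit -q --allow-empty -m "Add Windows LTSC 2019 images"
git tag 4.6-2           # the GitHub release creates this tag
git tag --list '4.6-*'  # prints: 4.6-2
# On trusted.ci: open the multibranch job and use "Scan Repository
# Now" so the new tag is discovered, built, and its images pushed.
```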
So it's already using Release Drafter. Okay. So Gareth, what I think I'm understanding is that once we've got 4.6-2 tagged, trusted CI will detect the existence of that tag, build it, and push those images, and that should include the LTSC 2019 images as well. Or did I misunderstand? Yeah, I think that's it. We should get the docker-agent 4.6-2 versions up. And I'll probably need to rebase my PR on the inbound stuff with the right version. So that's cool. I'll watch those, yeah. I think I have three open PRs. One is sorted now. The other I don't know how to fix; let's just check it out. That's cool. Great. I got a 404 while doing the release. Let me check the connection. Okay. I'll fix it. Nice. I did a submit and it said 404. Oh, ouch. I need to remember to always copy and paste my Release Drafter notes and such somewhere else. Oh, and we've got one other person who's interested in joining this session. My apologies, I had missed that, and that's awkward. Shame on me. Just a minute, let me be sure that I get her the meeting invite. See participants, invite, copy invite link. 4.6-2 is released on the docker-agent repo. Once the tag shows up, I'll verify that trusted sees it and we can build it. I don't think that tags are automatically built on trusted. So just as an FYI, if you ever do this, you have to go into trusted and verify that the tag is built. I'm not sure what setup we need to change to fix that. I assume that's likely because trusted doesn't receive any GitHub notifications. Any, what do you call them, webhooks? Yeah, it has to go do a repository scan. Right. Okay. Yeah. Now, that I think is a separate topic, but there has been some discussion. Trusted is used to build the Docker images because of the credentials required. Is that one where we would be willing to consider, possibly in the future, moving it from trusted, which is behind that SSH tunnel, to infra, which is on the VPN and not behind the tunnel?
Or do you have a strong feeling that no, it really needs to stay on trusted? I have no personal preference. Okay. So you're open to either. Yeah. It would probably go to release.ci, wouldn't it? Oh, right. You're correct. release.ci would be the more sensible one, because we use it when we deliver releases. Yes. But it would make sense to still have it behind the VPN, or protected in some way. Yes. Let me just make a note of that here in the notes: consider moving from trusted.ci to release.ci. Simplify Jenkins releases, wider access, easier to diagnose, and delivering a Docker image is no less secure than delivering a Jenkins release. I mean, a Jenkins release is just as security-sensitive as a Docker image release. Is it releases.ci.jenkins.io? I think it's release.ci.jenkins.io, but you've got to be on the VPN in order to see it. Yeah, I'm on the VPN. Maybe I don't have access. Yeah, that I don't know. I would think you should. But then again, Olivier intentionally limits who can access release.ci.jenkins.io to a relatively narrow group. If we were to move Docker builds there, we would need to grant you access. It certainly would make sense. Okay, so do we have anything else with regard to Windows Docker images? We've got the 4.6-2 release that has been released just now, and Alex, you'll watch it on trusted.ci. Is that right? Yeah, I'll make sure it gets built. Okay, great. Thank you. I think that covers the Windows Docker images then. Anything else, Gareth, on Windows Docker images? Oh, that's everything. Okay. All right. At our last meeting two weeks ago, I had asked about ci.jenkins.io master branch Docker image building. The job's been failing for a while. Alex, do you have anything you wanted to share on that? Is that still a reasonable goal? Yeah, we definitely want to do that. We can disable the experimental publish if we want.
I think I'll go ahead and make a PR for that. And then we can just do some work on PRs and such, because I think all three of us have access, so our Jenkinsfile changes might get pulled in. Okay, great. So you don't object if the experimental... Oh, good. Thanks, Daniel. Yes. Yeah, absolutely. Good point. So we can use pull requests to allow us to further adapt the build process. Okay, good. Daniel, thank you for joining us. Anything else you wanted to comment on with regard to the future potential? No, this is... We will certainly have that discussion in depth, to be sure that we don't trample on the security build process. Not really. I think I recently saw a message around release CI failing because it couldn't install plugins; it might have been in the infra IRC channel. If that is the problem there, we need to be mindful that we're not ending up in some sort of, what's the term, circular dependency, where one part cannot be repaired because another part is broken and vice versa. Just stuff like that. But otherwise, we need to be prepared for how to handle outages of various sorts. Otherwise, I don't really have an opinion. Right now, what happens is, before we publish security advisories, I make sure that all of the updated components can be downloaded, which means packaging for the regular Jenkins WARs, the plugin HPIs, if any are included, as well as the Docker images. And notably, the Docker images are required because I update ci.jenkins.io before I announce the security fixes in an advisory. And ci.jenkins.io needs to be online for the site build on trusted CI to work. So there's even a dependency chain in the content publishing process there, and any changes here would need to be mindful of these dependencies. So if Docker images cannot be delivered, I actually cannot really publish a security advisory.
And my preferred solution would be if we had Docker image staging and just needed to copy the images from the staging repository in Artifactory over to Docker Hub or wherever. That would be ideal, because then even broken builds are not on the critical path on release day. But otherwise, no comments from me. Okay, so that concept of Docker image staging, I'm somewhat ignorant on that one. Would you be willing to have a further conversation on it, either today or in a later session? I'm interested in that. It would mean, I assume, changes to our Docker build process, but we're already building the images ourselves, and I think we intend to keep doing that because we need control of the images. So that seems reasonable. Alex, any insights from you? Or Daniel, first question to you: is that a topic we could discuss in more depth? I would be very happy to discuss this further. But as a quick note, I've talked about it in bits and pieces with Olivier in the past, and so far, I think it's been mainly a lack of capacity that prevented this from happening. But the staging that we're doing in the regular core build process works pretty well. We've always done staging, or at least for many years we were staging plugins. And right now the problem is, on release day, when I want things to go out quickly so that I can quickly release the security advisory, the need to build the Docker images immediately still adds some degree of uncertainty and fragility to the process that isn't really inherently necessary. So yeah, I would be very happy to talk to you about it. Unfortunately, I'm not that well-versed with Docker repositories, images, and such, so I don't know how much I can contribute to the implementation there. So I think what I would like to do is schedule a session for deeper discussion, because it feels like we need Alex, Olivier, Gareth, Daniel. Anyone else that needs to be involved in that discussion?
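To make the staging idea above concrete: release-day publication would become a copy from a staging registry rather than a fresh build. The sketch below is dry-run only, and the registry names are illustrative, not the project's actual Artifactory layout:

```shell
# Sketch: promote an already-staged image to Docker Hub on advisory
# day. Dry run: print the commands instead of executing them.
STAGING="staging.example.org/jenkins/jenkins"   # illustrative registry
PUBLIC="jenkins/jenkins"
TAG="2.263.1"                                   # illustrative version

for cmd in \
  "docker pull $STAGING:$TAG" \
  "docker tag $STAGING:$TAG $PUBLIC:$TAG" \
  "docker push $PUBLIC:$TAG"
do
  echo "$cmd"   # replace echo with eval once real registries exist
done
```

The point of the design is that the pull/tag/push promotion cannot fail for build reasons, only for registry reasons, so a broken build never blocks an advisory.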
I'm not from security, so I don't know. Okay, great. Alex, insights from you? Are you okay if we have that discussion? It feels like another thing along the line of the Docker image build improvements that we've been envisioning with Jim Crowley. Yeah, I agree. Okay. Oh, Tim, yeah, that makes sense. The release officer certainly should be involved. Okay, good. All right. Thank you, Daniel. Thanks for that quick trip back. So then the ci.jenkins.io master branch. Alex, anything further there? Oh, go ahead. Yes. So I did reach out. This is somewhat related; it's Docker security stuff. I reached out to the Docker library folks about the Jenkins repository, the quote-unquote official jenkins image, not our repository. And they removed the latest and some other tags. So Daniel, if you could take a look and see if there are other specific tags that you would want to remove. I asked them about removing the repository completely, and they said that that's generally caused issues in the past, so they were very hesitant to do that. But they were willing to remove tags. So I don't know if the old versions on there are fine to keep, or if we want to just whack everything if possible. I'm not sure if removing all tags is possible or not. To clarify, if the latest tag doesn't exist, then I need to explicitly specify a version when I pull, correct? That is correct. All right. And I think that's enough of a speed bump. So I mean, the docker pull jenkins at the top right there, the copyable command, that would break for them, correct? I believe so. I believe it will. Yeah. I think that's enough of a speed bump to make people look at the page. And it has the giant all-caps deprecation notice. I think that should be good enough to make them think twice about using the images there. Thank you very much for doing this. Yeah. Thank you. Well, and looking at it, the latest is 2.60.3, so it's only, what, four years out of date now? Yeah.
The last time this repository was updated was two years ago. And I just tried to do a docker pull jenkins, and it did say manifest for jenkins:latest not found. So it definitely will cause an error if they copy that command and try to run it. Yeah. So one thing that might be useful, and that may go too far into the weeds here, is to explain as part of the deprecation notice that this hasn't been updated in several years by now. Because the way it is phrased, unless you know what Jenkins versions mean, this could have been deprecated yesterday. Right. Well, and if we're asking them to make an edit, we could ask them to delete the reference to the latest tag, because it no longer exists, and we like that. Yeah. I think you reported that we can just file pull requests, too. Is that correct? Well, yes, but there is no Jenkins directory anymore. So that's why it's not been updated in two years. So I need to go back and look at the history to see why or when that Jenkins directory was removed. Okay. So they have a bunch of directories, and each directory is basically an official image. I need to look at the history and determine when that Jenkins directory was removed. Yeah, but not having the latest tag basically means people won't be able to just use it like they use absolutely everything else. So. Right. That provides a safeguard against blind consumption, right, against silent blind consumption. It's like, no, that doesn't work anymore. It just won't work for you. Great. Anything else on that topic, Alex? Sorry to kind of hijack that a little bit. Oh, no, that was great. What a great piece of news. Thank you. All right. So we've got the agreement that we're going to work on getting ci.jenkins.io master branch working. Anything else on that topic? Okay. So then the next topic, and this is a precious one. I propose we will stop in nine minutes, so let's use the next nine minutes for this specific item.
I've got a draft proposal for how I'd like to change how we manage Docker images, process-wise. And this is my draft proposal. It's particularly valuable, Alex, with you and with Daniel here, to hear your inputs and get your comments: oh, this is crazy, this won't work, et cetera. So the idea I had was, let's attempt to have our Docker images be treated closer to the way we treat plugins in terms of their management processes. One of the concepts is the notion of adopt-an-image, like we have adopt-a-plugin. I think we ought to use GitHub CODEOWNERS to assign owners to specific images in the Docker image repositories so that we know who is caring for a particular image. Thus, for instance, for my needs, I would ask to be one of the owners of the Debian and Alpine files, but I would not ask to be an owner of the CentOS files, because I'm not maintaining them. Go ahead. So if an image does not have a code owner, would it be an unofficial image? I was thinking it would just be up for adoption. I wouldn't demote it to unofficial; I would just make it up for adoption. But I'm open to arguments in either direction there. I didn't want, with this proposal, to change the status of the existing images dramatically, but I'm open to that suggestion. I thought up for adoption was a safe enough concept that it would allow people to continue using the images that are there, but knowing that, hey, this does not have an active maintainer. So Daniel, any insights from you there? Does the concept of up for adoption seem like it could be applied to this? Anything you've learned from the security perspective that we should be mindful of? When I look at plugins, it's fairly easy to tell whether they are maintained or not, because they are independently released. If there hasn't been a release in one, two, three, four, five years, the chances increase substantially that nobody's home.
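For illustration, the adopt-an-image idea above might look like the following CODEOWNERS sketch in the docker repository; the paths and handles are made up for the example, not actual assignments:

```
# .github/CODEOWNERS (sketch; paths and owners are illustrative)
# Each image variant lists its maintainers; a variant with no entry
# is effectively "up for adoption".
/debian/   @debian-image-maintainer
/alpine/   @alpine-image-maintainer
# /centos/  currently up for adoption (no owner listed)
```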
But that does not apply to Docker image variants, unless there is something incredibly obviously broken with them, because it is, to an extent, a reasonable decision not to track upstream very closely or automatically. And so how do you distinguish "the base image hasn't changed in a few months" from "nobody's maintaining this anymore"? And people do not announce when they leave; at least the vast majority don't. Some well-known contributors in the community, like Nicola, Grigory, Wasseno, or more recently... who am I forgetting? Not Chris. Chris is still around. The Swiss guy. Sorry, I don't remember the name. They announced they were leaving and were looking for new maintainers for their stuff. I would not expect something similar to happen with these images. So they would have a declared maintainer, but maybe move on, lose interest, and nobody can tell. Dominik Bartholdi, I think, is who I'm thinking of, who recently announced he no longer has time. And I'm not sure how to... I agree. I think that's a valid point. With plugins, it's obvious, because there is an explicit release, and the absence of releases hints at an inactive maintainer. Any suggestions on things that we might use here, where we've got what is basically a monorepo for all the Docker images, intentionally so? Would you consider, could we consider, separate releases for individual subsections? I'm not sure how to. I don't know. I'm just thinking about what I've observed, trying to reach out to maintainers after four years of nothing. And some were like, yeah, sure, I will help you, but I have no idea how to release anything anymore. And for some, well, the email bounces. There's the full spectrum there. And also, if you use a plugin, you have a relatively easy time telling in Jenkins that you're using it. That is less the case with Docker images or packages. You set it up years ago, and since then, I don't know. I'm lucky enough that I don't have to care about it because Olivier handles that.
But I think there's also the situation where someone set something up far in the past and nobody knows how it works. And with the more system-level configuration, as a user, the expectation for me would be that there is a fairly clear distinction between what's supported and official and what is not. Obviously, in the Jenkins project, that distinction may not make a lot of sense. But yeah, I'm just not sure. I'm sorry, I can't offer solutions, only problems. Actually, I think you lobbied toward one. Maybe back to the idea that Alex had offered: maybe we should, as part of this proposal, since it's a significant enough proposal, declare that when we implement it, if a Docker image does not have a specific code owner, we actually demote it from the jenkins org back to the jenkins4eval org. That would break people, but it would make it very clear that this thing is not a maintained image. But that's pretty dramatic, right? That will break people's Docker builds. Alex, did I understand your suggestion correctly, or your question correctly? Yeah. So the idea would be we have a set of specific images that are official images. Those are the ones that Daniel would have to care about when doing a security release, so we'd want to keep it fairly small. And then we have other ones that are unofficial, that are, you know, whatever anyone wants to put in there, right? I don't know if that meets security requirements, because if people are relying on them as quote-unquote official anyway, it's still a problem. That was kind of my thought, to reduce the number of images that Daniel has to care about for security releases. Right. As long as we tell people up front what is properly integrated into our release process and what is not, what we care about and what we don't.
And I think that gives users, or gives admins, the necessary information to judge for themselves whether they really want to operate Jenkins out of, I don't know, some weird distribution like CentOS 6 or whatever, when there may not be any security updates afterwards. That makes sense. Okay. Thank you. Yeah. All right. So thanks very much. Already, the insights... I had not considered the security implications of this JEP. So Daniel, thanks very much for your insights here. I will certainly copy you on the link to this and get further involvement. This is very much in a draft state right now. I intend to keep working on it, trying to look at the various compromises and alternatives and the impact of the proposals in each of these: the up-for-adoption proposal versus become-an-unofficial-image. And I think the place I'm supposed to put that is in the motivation section, or there's another section that describes alternatives and why they were not selected. So I'll continue working that through the JEP process. So one potential problem here, or something to be aware of: while this problem exists elsewhere in the Jenkins project in terms of plugin support, in a way it is less noticeable due to the support CloudBees offers to its customers, and the benefits that users of the open source plugins have just because CloudBees supports them for customers. Security fixes are made available to everyone the same way they are made available to CloudBees' customers. And all of the popular plugins are supported by CloudBees, which means that for Jenkins core and a lot of the most popular plugins, they are effectively security-supported by CloudBees. And only the less popular plugins typically are supported however the maintainers wish to support them, which is a very wide range.
And for the Docker images, since CloudBees' products are packaged independently, it is purely community support, in a way. So we try to get stuff fixed there, but there's not really a corporate entity that directly supports it. And so this is sort of a weird situation here, and also why this problem feels like it should have come up in the past, but in a way it didn't. So this is really a new problem, where we would need to figure out what it means for something to be supported or not, or to be in scope for the security team or not. Right. Well, and I think your example of CloudBees probably also applies to Red Hat, right? In their OpenShift distribution, don't they deliver a Jenkins component, which is probably Docker image based? So it's the same class of challenge there, where the community doesn't have at its foundation, at its base, that CloudBees is doing it commercially, or Red Hat is doing it commercially, using these exact images, right? Well, CloudBees uses a different image, and I suspect Red Hat creates their own image as well. I don't know how Red Hat does it. Previously, Oliver Gondža, a release officer, was a member of the security team and had the necessary access. I don't know whether he was working with the OpenShift folks, but they have never directly engaged with us, and right now we have nobody from Red Hat on the security team. So they are probably just downstream consumers of what we deliver. But if they consume basically the WAR and the plugins, then they benefit from the security fixes we deliver there. And if they do their own images and tooling around them, they are independent of what we deliver in terms of Docker images. Right. Okay. Thank you. Good insight. Anything else on this topic before we conclude our session today? Oh, I've got one more topic before we end: the next meeting. The next meeting would be scheduled for what day? January 1st?
Oh, that's a bad day. We will cancel. Next meeting canceled. I am not going to rouse myself on a bank holiday to do this meeting. Anyone object to that? I think we should just do it earlier, at 12.30 a.m. everybody's time. Why didn't I think of that? We will just have everyone join for their new... no, thank you. Okay. All right. So we will meet again on January 15th. Is that workable for everyone? Works for me. All right. Any other topics before we conclude? Did you want to discuss the ARM servers? Oh, yeah. Actually, okay. One-minute highlight. Amazon has rolled out their next generation; they call it Graviton 2, I think. The servers are now available, and they've got a promotional period going on where they're giving a deal, a lower cost, if you use one of those new ARM64 servers that they've created. I've been running one now in a spot instance for a week or so, attached to my CI server. Works great. So just be aware that newer ARM servers are coming online. And, for instance, Gareth, you and I had talked possibly about LTS release candidate test execution, and these Graviton servers might actually be quite interesting as a way to run LTS release candidate tests instead of running them on Intel. They are apparently about 40 percent better, price-performance-wise. So consider it for the future; not anything we need to take action on immediately. Just be aware that Amazon has rolled out even better ARM-based servers now. Cool. I'll have to go look at those. Yeah, I was really impressed. And they offer a discount right now for using them, which makes them really low cost. All right. Anything else? Okay. Thanks, everybody. Happy holidays and a happy new year. You too.