All right, I'm sharing my screen and here we go. Let's make it big enough that I can read it. Okay, Jim, you had a proposed topic. Yeah, I've been experimenting with Podman myself at work, and one of the things I wanted to suggest is maybe pushing the images also to quay.io, so people can pull them down from that kind of repository. We briefly talked about that when there were the rate limits and questions around Docker Hub. Okay, a good topic. And great that we've got Olivier here; he and I had a recent conversation about possibly using multiple container registries. So that's the idea, and the question is the use of multiple registries, strengths and weaknesses. Because I assume that Podman can use an image from Docker Hub as well, can't it? Yeah, it's already registered by default. When you go to pull down an image you can choose whether it searches docker.io or quay.io. I see. Okay, so I'm not sure if you want to talk about that now; let's go through the agenda first, prioritize it, and then I suspect we will get to it. Let's just be sure we've got everything on the agenda. Jim, were there any other agenda topics you wanted to be sure we covered? No, the suggested topics were wonderful. Okay, Damien, any key topics from you that are not already on the list here? I guess one for me was testing and validation; it's sort of a hotspot. I think you're making great progress there, but I'd love to have a conversation about that. Yeah, I see the topic right there. Okay. Olivier, any topics you'd like to add beyond this current list? No. Okay. All right, in terms of ordering, I would like to be sure we stay absolutely within two hours, and I'm willing to consider even less than that.
But let's put the highest-priority things first. For me, this Docker image support for multiple platforms is quite high priority, because I'm interested in arm64 and Jim's interested in s390x. I think testing and validation are both high value because of their relationship to building and delivering the images at all. So I propose we put those two pretty high on the list. Any objections? Okay, great. I'm going to move them to the top of the heap. This maintainers topic, I would like to be sure we get to it, but I'm less concerned. Personally, I would be interested to briefly talk about publishing images, because I think we can quickly cover that. I don't think it's a big change, but I just want to raise it. Okay, so publishing Docker images, good topic, and Olivier, we would have you take the lead on more details for it. Good. Anything else? Okay, great. So, first topic: Docker image support for multiple platforms. I think the crucial thing here is whether we have the equipment to do the builds. We have the equipment on ci.jenkins.io, but ci.jenkins.io is not configured to push to the production organization. The reason is that ci.jenkins.io is a public instance, and we don't want to take the risk of a zero-day or whatever, so we use a different instance named trusted.ci, which is only reachable internally. So basically, for arm64 it's pretty simple, because we just have to copy-paste the configuration to have agents on Amazon, which is fine. Regarding s390x, I'm not sure if we just reuse the same cluster, which I think is fine, because in the end we are only using Docker from there. Well, and the same... no, no, sorry, trusted.ci. Oh, is it trusted.ci that does the image builds today? Yes. Thank you.
Okay, maybe we could move that to infra.ci, because Damien did some work there to build and test images. I mean, you worked on the shared library that builds Docker images and so on. And infra.ci is configured to use that, but the specificity of infra.ci right now is that it's running on Kubernetes, which means we would have to add more configuration to provision instances. Again, it's not a big deal; we just have to clarify what we want. That's it. And I know that the s390x machine we're running has more than sufficient capacity to be added as an agent. It would be a static agent, just like it is with ci.jenkins.io. I assume that's okay, right, Olivier? Is there any issue connecting a static agent to the Kubernetes cluster? No. Okay, so there are challenges. Just to clarify, the Jenkins running on the Kubernetes cluster is the same Jenkins as everything else. The specificity is that we have access to the Kubernetes cluster, so we can schedule pods, which is faster than provisioning a Windows machine, for example. So basically we prefer to build images using Kubernetes pods, but obviously that does not work for different architectures, because the cluster does not provide nodes for those architectures. So it's just easier to provide a static configuration. Yeah. s390x has had official support in Kubernetes since 1.20. I don't know for ARM and PPC. arm64 should work, but I'm not completely convinced. The question is more: is it possible to add custom static nodes to the existing Azure managed cluster? The proposal here, and if you want I can start the topic with an issue, would be to have a Kubernetes cluster for specific architectures, maybe a second one. From Jenkins' point of view it's just Kubernetes and pod templates; that's the only resource we have to manage.
And on the infrastructure side we should be able to provide static Kubernetes nodes directly. Okay, so now you're educating me on something I wasn't aware was even possible: conceptually we could have a Kubernetes cluster that has multiple architectures available in the cluster. Yes. So s390x and PowerPC and ARM could conceivably deliver, what do you call them, nodes to the cluster, capability for that architecture. Wow. So on that topic, I have a few concerns. First, we have to double-check that AKS supports that, because AKS is the specific managed cluster that we are using in Azure. That's the first limitation. And if AKS supports it, we also have to be sure that the version we are using supports it. So to me it sounds easier to just configure Jenkins with the right SSH key and start the Jenkins agent directly on the machine, instead of trying to configure the cluster to have those nodes, especially considering that you won't be using a lot of agents on those architectures. Okay. So the idea of having a Kubernetes cluster, maybe not AKS, would be cool because it would avoid spreading the maintenance of the Kubernetes control plane. In the hypothesis that at some point we can reach an AKS cluster and add these specific nodes, the idea is that the cluster could be reused across our different instances. Let's say we could have a namespace for infra and a namespace for trusted, for instance. Both Jenkins instances should be able to reach the cluster and build in their own namespace, reusing the specificity of the architecture through taints and tolerations on the pod template of each, but we would have only one Kubernetes setup, which each Jenkins instance could connect to. That was the initial idea, but if we have to deploy our own control plane then it doesn't make sense and it's not a good idea.
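A pod template along those lines could pin builds to a given architecture with a node selector plus a toleration. This is only a sketch; the taint key `architecture` and the agent image are illustrative, not the project's actual configuration:

```yaml
# Hypothetical Jenkins pod template pinned to arm64 nodes.
# kubernetes.io/arch is the standard well-known node label;
# the "architecture" taint key is invented for this example.
apiVersion: v1
kind: Pod
spec:
  nodeSelector:
    kubernetes.io/arch: arm64
  tolerations:
    - key: "architecture"
      operator: "Equal"
      value: "arm64"
      effect: "NoSchedule"
  containers:
    - name: jnlp
      image: jenkins/inbound-agent:latest  # must itself be published for arm64
```

With a `NoSchedule` taint on the arm64 nodes, ordinary pods stay off them, and only pod templates that carry the matching toleration land there, which is the "don't schedule unless explicitly asked" behavior discussed below.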
And in that case the static... but that was the idea, depending on how we want to share that setup and reuse it. So, I haven't used AKS, but internally at IBM we've been testing multi-arch clusters on Kubernetes for a long time; that's what I do for the majority of my day job. So I can tell you it works well. Again, I don't know about AKS. The only thing you need to be careful of is if you have generic definitions when you go to do your deployments: you need to make sure that whatever image you're using is multi-arch, or that your registry is aware and knows to pull down the arch-specific containers. That's where a lot of people run into trouble. The idea here would have been to configure the nodes to add, I don't remember the exact name, but there is a taint you can apply to a node in Kubernetes: basically, don't schedule unless you are explicitly asking for that specific label. What Jim is suggesting is a really good suggestion. But yeah, as Damien is mentioning, right now we have that, but only for Windows: we have a specific toleration that is explicit. Then we need to ensure the images are built for these architectures, but at least for s390x and ARM we know we can have these images, and both can be generated from an Intel machine by cross-building with QEMU. This is a feature you have in Docker for Mac, for instance, but it's easy to run on Linux. For the PPC architecture I have no knowledge, so I totally rely on you there. I know in Kubernetes, you mentioned labels for Windows and such; there's also, I think it's still in beta, an arch selector you can utilize. I can look up the specific tag, or whatever you want to call it.
As for... what was the software you were looking at, Skopeo? Is that what you're talking about? No, I think what Damien was asking was just whether this same Kubernetes facility you've described as known to be working with s390x is also available with PowerPC. Yeah, as far as I know we have clusters with s390x, x86, and Power. Yeah. Again, I'm just warning that it sounds easier to just configure Jenkins to SSH onto the agent, the PPC agent. In any case, static SSH or Kubernetes, we need to do what Jim just said: we need to check for the base image's existence for each architecture. This will be a requirement, because the goal here is to build Docker images on these machines, so we need to be sure. It looks like Docker is supported on the three architectures as a standalone Docker Engine, but we still need a base image to start from. We don't already have the agent images for architectures other than x86, right? Oh, that's a good question. I thought we did; we have what's running for Windows, and for the others? Okay, so the official OpenJDK image is provided for Power, for IBM Z, and for arm64, so we should be able to start building and bootstrapping from the JDK. So maybe we should start, as you said, Olivier, to avoid a bootstrapping issue, with static nodes for producing the first version of the images, and once we hit limitations, configure Kubernetes directly for that. And I would add something: the setup of adding non-Kubernetes agents on infra.ci is something we need anyway, because currently we build Docker images using img, because we don't want to run nested containerization; even with Docker rootless there are still some hiccups and security concerns. And we are testing only the tar of the images using a specific tool.
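Checking base-image existence per architecture comes down to inspecting the image's manifest list (for example, the output of `docker manifest inspect`). A rough sketch of that check, assuming you already have the manifest-list JSON parsed; the sample data below is a trimmed, hypothetical manifest:

```python
# Sketch: verify that a manifest list covers the architectures we need.
# REQUIRED reflects the platforms discussed in this meeting.
REQUIRED = {
    ("linux", "amd64"),
    ("linux", "arm64"),
    ("linux", "s390x"),
    ("linux", "ppc64le"),
}

def missing_platforms(manifest_list: dict) -> set:
    """Return the required (os, arch) pairs absent from the manifest list."""
    present = {
        (m["platform"]["os"], m["platform"]["architecture"])
        for m in manifest_list.get("manifests", [])
    }
    return REQUIRED - present

# Hypothetical example of parsed `docker manifest inspect` output.
example = {
    "manifests": [
        {"platform": {"os": "linux", "architecture": "amd64"}},
        {"platform": {"os": "linux", "architecture": "arm64"}},
        {"platform": {"os": "linux", "architecture": "s390x"}},
    ]
}
print(missing_platforms(example))  # {('linux', 'ppc64le')}
```

A check like this could run early in the pipeline, so a build fails fast when an upstream base image has not yet been published for one of the target architectures.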
If you want to execute `docker run` commands somewhere, whatever the architecture, we need static agents; for security reasons it's not possible on infra.ci right now with Kubernetes. So it could be totally worth it to start learning and applying setups to have static agents, agents that are not on Kubernetes. So, specifically for that, we already have the configuration on ci.jenkins.io. We just have to export that to JCasC and then apply that configuration; we already have the configuration, we have the secrets in place and so on. Once you know where to look, it's pretty fast. Okay. So let's start with these machines and the base images for the architectures, and then we can go. Okay, so I think what I've heard is: we're okay with initially going with static SSH build agents for PPC and for s390x with the equipment we've already got access to, and the dynamically launched cloud agents that we're already using on ci.jenkins.io should actually be available as well from infra or from trusted, so long as we configure them. Did I understand correctly? Yes, exactly. Because for trusted.ci we just have to copy-paste the configuration, I mean using the web user interface; for infra we have to put that in the JCasC config file. Obviously I would prefer to have it in JCasC on infra, because we'll have to work on trusted.ci and ci.jenkins.io to put the configuration in a file anyway. If we already do that on infra.ci, we simplify future work. Great. Okay, so I think I caught the notes correctly. So it feels to me like I've heard a description of a path forward to give us s390x, arm64, and PPC support short term with static build agents, with the option in the future to become more flexible and use Kubernetes if we wish. Did I say it correctly?
Just something I want to be sure of in the notes: I see you have written "need to connect something as a static agent to trusted". Isn't it to infra? No, I thought Olivier had mentioned that today the Docker images are built on trusted. Yes. We may change that in the future, but today they're built on trusted. Okay, so then can you change the info on the parent bullet? Yeah, because we'll say publish from it. Oh, yes, you're right. Thank you. That's completely wrong. No problem. Thanks for the comments, Jim. Yeah, it's a node selector, and with it we can select the architecture a pod is scheduled on in Kubernetes. I think, depending on the version of Kubernetes you're running, it's out of beta, so you should be able to just drop the beta prefix and use the plain `kubernetes.io/arch` label. That is correct. In this case it's a managed cluster, so we have to check that they enable that feature. Yeah, and also something that I've noticed in the past, since I've been using AKS for a while, is that often when a new feature is available you need to re-provision the cluster from scratch. So depending on what you're looking for, we may have to replace it, which... I mean, everything is automated, so it's always a good time to test that we can re-provision everything from scratch, but yeah, it's not as simple as just flipping a switch. That's the story: trial by fire. And I should not have any more fires than absolutely necessary. Okay, so nothing else? I think we've covered Docker image support then. Ready to move to the next topic? Just one last thought: what I would do is first try to configure it on infra, and once we have the right configuration and we are sure that it works on infra.ci, we configure trusted.ci to use it. Okay, so we use infra as a testing environment to validate the change. I see. Okay.
The reason for that is, first, because trusted.ci is quite critical; it's a critical instance. And also we have to work on having the JCasC configuration anyway, so I think that would be a good way to proceed. Okay, so that means a static agent for s390x. And now, for my clarity: today, what we've got running on the s390x and on PowerPC uses a different user account for ci.jenkins.io than for my little toy tests that I use to test Jenkins separately. And I would assume we would use yet another independent account, so that it's not the same account being used by ci.jenkins.io, because we don't want those agents sharing a user account. Is that a reasonable, safe assumption? Yes and no, because we will be dealing with Docker commands, which means that even if you have different users on the host machine, they will share the same Docker daemon. Right, right, okay, which may mean it's not relevant; I hadn't thought of that. Of course these have to be able to run Docker. It's still better, at least for clarity, because you will have at least on the filesystem a clear separation between the workspaces of each agent; the ports and the processes will be separated. It's still good practice. But we have to assume that multi-tenancy won't be easy to get there. Right. But one of the crucial things is that Docker command-line access is required, and as an example, the agents that I run cannot run Docker on those machines. It was intentional, and I don't think the ci.jenkins.io agents have that either. So these are distinct in the sense that they need to be able to execute Docker commands. I mean, wouldn't it be possible to have one additional machine? Yeah, exactly. For s390x that's not a big deal; we can get you another cluster. For PPC it's probably the same thing.
If you want to go through the headache of putting KVM on top of either one you can, but I think getting another cluster would probably be easier for you. Because what I fear in this case is... I mean, the way I always consider ci.jenkins.io is as not trusted, just because it's publicly accessible, because whatever may happen, right? So I would not feel comfortable running a shared Docker daemon for both trusted.ci and ci.jenkins.io. Does that make sense? And plus you don't want someone hopping on, doing tests, and then screwing up the environment. So I'm fine to use infra.ci and trusted.ci, because infra.ci is only available from the VPN, so the risk is lower. Yeah, the thing is that we really take care of updating it all the time, we are taking care of that service, but you never know what can happen on the public ones. Great. I would add also that infra.ci is now completely working with the GitHub App setup, which means that a contributor can see, on the repository of the source code of whatever Docker image we want to build and share, feedback from the build, even if they cannot access infra.ci itself. The GitHub App setup with Jenkins provides the build outputs: anything that fails on the Jenkins build, you have the output on the GitHub check. That means these checks are available to any public contributor without requiring access to infra.ci, so we can have a completely welcoming setup for contributors while keeping the specific static agents on infra or trusted behind the VPN, not publicly available. So it's not mandatory to have these machines on ci.jenkins.io; I can totally set up the build and test on infra.ci, and then trusted, or wherever you want, for the publication.
It's a kind of intermediate, because these are specific machines and we cannot apply large-scale security setups like we can with Kubernetes, so in that case we could, let's say, compensate with this. Okay, so it seems like we've got an action item there: to address Olivier's concern, we do need to ask for a separate thing, a separate machine or a separate cluster, so that the untrusted environment, that is ci.jenkins.io, and the trusted environment, that is infra or trusted.ci, do not risk contaminating each other through their Docker commands. Now, ci.jenkins.io is not currently allowed to execute Docker commands on s390x or on PowerPC, Olivier, so the permission isn't there. But I think it's still a valid point to say that if in the future we decided to enable Docker for ci.jenkins.io on these machines, we would then be risking contamination between infra or trusted and the untrusted CI server. And so now the question goes the other way: do we really need to have ci.jenkins.io on those architectures? Well, if we don't, for me it sacrifices some of the tests that I've been running on those architectures to confirm that certain plugins that claim to support them actually continue to work. But the reality is we could say, let's drop those from ci.jenkins.io and just move them to trusted and to infra, because the one plugin that I'm testing there is actually not that critical; it was just an exploratory test of a platform plugin. And I will say Kubernetes would be a nice solution here, meaning that if you need to run a container on a specific architecture in the future, with all the images built on infra after a few months, then as soon as we're able to add Kubernetes clusters to ci.jenkins.io, it's easy to express it as a requirement.
I want to run that image in a restricted environment for that architecture. But this is the long-term future; it's different. Okay. So, Olivier, back to your earlier point: we could take s390x and PowerPC off of ci.jenkins.io. I don't know of anything critical to the project that depends on them being there; they were added as an experiment. We could easily remove them and move them instead to infra and to trusted. Then they are fully under our control, without the risks of execution from an untrusted CI server. Yeah, "untrusted" sounds so dangerous, but really it's just a publicly accessible CI server. Yeah, okay. But everything that is publicly accessible isn't trusted, right. Yeah, we want to be safe. Good. Anything else on this first topic, Docker image support for multiple platforms? Just one note: arm64 can be used with EC2 instances. I don't know for Azure, but there are EC2 templates. Which are we transferring? arm64? Yes, we have those. Okay, so this one will be easy. The Platform Labeler plugin, as valuable and important and critical as it is, already tests with arm64, and we know that ci.jenkins.io starts and stops those instances very well. And we already have Packer images for those. Oh, right. Yes, yep. One thing that I noticed: we have to update the EC2 plugin to a later version, because in newer versions you can always reuse the latest AMI available, something that I discovered quite recently. Go ahead, Jim. There are two other topics I wanted to mention right here; I think one of them is actually going to be covered later on. My first question was, for ARM, have we decided we're specifically just targeting arm64, the kind of new-ish ARMv8, not ARM32?
Are we just supporting that, or are we supporting the other versions of ARM as well? So the only ones that are interesting to me are ones that identify themselves specifically as aarch64 in Linux. Okay, yes. And given that today on Windows we officially declared we only deliver a 64-bit installer, and in Linux we are very blunt about "hey, you need to use 64-bit"... I actually know it works on 32-bit ARM, because I happen to have Raspberry Pis in my basement at work, but that's irrelevant here; for the project, 64-bit is all we need. Okay. Did you see something different than that? Are you aware of crucial use cases that we may have missed? No, no, I just know about older Pis, but even the older Pis are getting phased out by the newer ones. I think it's the 3B+ that switches over to the arm64 architecture. So the 3B+ and up would be supported, but people who have the earlier 3s and below are going to be on 32-bit. So I think just directing people to 64-bit is fine. Okay. The only other thing is, I think you have it down there under maintenance, but have we decided which images we are going to support with multi-arch? It was brought up multiple times in the Platform SIG meetings that we are of course doing the main Jenkins image, but I know there were talks of the agents and the other kinds of helper images. It looks like you have them right down there. Okay. I think those are crucial questions that we need to reach in this session, and having Olivier and Damien here together with you is especially valuable, because it helps us talk through process and identify how we think we should proceed. Awesome. Okay. So, testing and validation on multiple platforms: I think at least the topics that were of concern for me we've already covered in the earlier discussion.
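One practical wrinkle in the ARM discussion above is naming: Linux's `uname -m` reports `aarch64`, while Docker/OCI platforms call the same thing `arm64`. A small normalization sketch (the mapping is illustrative and only covers the architectures discussed here):

```python
# Sketch: normalize machine names (as reported by `uname -m`) to
# Docker/OCI platform architecture names.
DOCKER_ARCH = {
    "x86_64": "amd64",
    "amd64": "amd64",
    "aarch64": "arm64",   # Linux reports aarch64 for 64-bit ARM
    "arm64": "arm64",     # macOS and Docker use arm64
    "s390x": "s390x",
    "ppc64le": "ppc64le",
}

def to_docker_arch(machine: str) -> str:
    """Map a uname-style machine string to a Docker platform arch."""
    try:
        return DOCKER_ARCH[machine.lower()]
    except KeyError:
        # Deliberately reject 32-bit ARM (armv7l etc.), per the
        # 64-bit-only decision discussed above.
        raise ValueError(f"unsupported architecture: {machine}")

print(to_docker_arch("aarch64"))  # arm64
```

Raising on anything outside the table is a cheap way to encode the "64-bit only" decision in build scripts, rather than silently producing an unsupported image.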
So long as they're available in some place where consuming plugins or Jenkins core can decide to run tests, I think it's good enough for me. Anything else people are worried about? Damien, you've led some specific efforts here on testing and validation; is there anything we should be considering, or additional development efforts to talk about? Right now, the binaries we are using for building and testing in a constrained environment, to avoid using Docker, are Intel-only, but they're written in Go, so they could be cross-compiled. That means that right now we would need a full Docker Engine on these specific architectures for building and testing, and in that case we need all the safety features we spoke about: isolated machines, separated from infra, trusted, etc. Okay, so when you say Go-based tooling, what is that exactly? I'm thinking particularly about img, a binary compatible with BuildKit, and Google's container-structure-test tool, which is a command line that parses a YAML file and runs a set of checks on a Docker image. It can use only the tar driver, so it checks just the content of the image and the metadata if you don't have Docker or don't want to use it for security, and it can also connect to Docker to run a bunch of behavior-driven-development tests. So these are two tools that can be cross-compiled, because these architectures are officially supported by Go, so there should be no issue beyond the odd bug. The only thing is that we would need to start with Docker. Right. Securing this is mandatory right now, and we need to take measures accordingly. Great. All right. So, Jim, anything you're aware of on that topic beyond what we've noted already? One question I did have: is there any vulnerability scanning that we are doing with the images? That brings us to another topic. My bad.
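For reference, a container-structure-test configuration is just a YAML file of checks. A minimal, hypothetical example, roughly in the shape the tool expects; the paths, ports, and entrypoint are illustrative rather than the project's actual test suite:

```yaml
schemaVersion: "2.0.0"
# Hypothetical checks for a Jenkins-style image.
fileExistenceTests:
  - name: "jenkins home exists"
    path: "/var/jenkins_home"
    shouldExist: true
metadataTest:
  exposedPorts: ["8080"]
```

Run with the tar driver, the tool inspects the image archive without needing a Docker daemon at all, which matches the constrained-environment approach described above.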
That's the way; you just keep asking those questions, that's brilliant. We're not scanning yet. Yes, we need it. I should say it differently: no official scanning, because a number of us have running prototypes of scanning, and I've seen the output and want to improve it. Maybe you want to briefly talk about that now. So, as Mark said, people from time to time use scanning tools to identify vulnerabilities. Maybe one or two weeks ago we were discussing adding that to the process. So, adding it when building images, why not, but what I feel is: if you put such tools in place and nobody looks at the output and tries to really solve what we see, then it's useless. If you run such tools, you cannot fully automate the results, because you get reported vulnerabilities that are not really vulnerabilities, and so for each output you need to carefully read it to really say, okay, this one is dangerous, this one is not. And until now, I really don't know how to deal with that. No, it's not something I've done personally. I've seen a couple of repositories, you know, open-source communities, start implementing it. I don't even think AdoptOpenJDK has implemented it yet; they might have their own internal stuff, but not any of the more popular tools. Is there not a way... again, I've only really played around with it, and I'll grant it will slow down the build process, but is there not a way to build it into the CI/CD publishing pipeline where, you know, you build them all, then test them, and then if there is an error or there are concerns, open a PR or open an issue on the repository, automatically dump or link the log outputs, and tag a couple of the key members to address it, and if not, continue on and publish the images publicly?
Is that a workflow that you've seen, or that would be acceptable for you? So, I've definitely used Snyk to do scanning of images that I'm trying to be a maintainer of, and I can scan after the fact. I think it's valid to consider Olivier's observation that having a scan run that we then ignore doesn't help. So it may be part of the maintainer role: if you agree to maintain, you agree to read security findings, and you agree to think about the system and resolve them in the system. For me it just hasn't come to full fruit yet. The outputs that I found, in many cases, did not say that we had a serious problem; they said, oh yes, we found this issue in this particular corner, but I'm also not a security professional when it comes to deciding whether that's a real threat to this image or not. And here you have two things to scan: you have to scan when you build a new image, but you also have to check that your old, already-released images are not affected by a new vulnerability. Such features are usually provided by a registry, but that kind of feature does not come for free. I think Quay provides that, but it's not part of the open-source program. I thought that Linux Foundation security, LFX, also had something. We've got the benefit that Andrew Grimberg is online from the Linux Foundation; I don't know if he's actively listening at the moment, but I thought there was a security part. So, I thought there was a webinar recently, which I missed unfortunately, Andrew, on LFX security services. Is image scanning part of that, do you know? Potentially. They're currently using Snyk under the covers. Oh, okay, good. Cool. I do know that they're also looking at a few other technologies for that, but right now I think that's what they're using. That's what we're using too, so yeah. Good. Very good.
Thanks, Andrew; sorry to drag you out while you're in the vehicle. That's great, thank you. So, either way, you are actually looking for a solution to do this; we just need to figure out what workflow and what resources we can utilize that would actually work for you. But have you thought about blocking? Obviously, scanning after you publish the images is definitely an option, but I thought about just blocking the publishing if an issue is found. I was unwilling, because blocking, at least for me, based on some new, un-analyzed vulnerability reported by an automated tool is more than likely to leave us continually blocked. I looked at the output from Snyk for the images I was testing. There were always findings, and many of them fell in a category where they say: issue found, no known remediation. So they found something in a library that was in the image, but there was no known remediation, not even from the operating-system vendor. In that case, I'm really hesitant to block our image publishing for something that may not be relevant and may not even be reachable in the use of the image in our context. And the real challenge, that's why I said you usually need someone to look at the output, because sometimes, let's say you have a package with a known vulnerability, but it only affects you if you use it in a specific way, which we don't; it's just a dependency of a tool that we need. So blocking a job by default would just be a waste of time. But on the other side, if you just run security tools and don't fail when something is found, then you will not look at the output. Right now, that's the challenge that I have.
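The "continually blocked" concern above amounts to a triage policy: only fail a build on findings that are both serious and actionable. A sketch of such a policy; the report format here is hypothetical, since real Snyk or Trivy JSON differs, so this illustrates the idea rather than any tool's API:

```python
# Sketch: keep only findings that should block a publish.
# A finding "blocks" when it meets the severity threshold AND a fixed
# version exists (i.e. there is a known remediation we can act on).
def blocking_findings(findings, min_severity="high"):
    levels = {"low": 0, "medium": 1, "high": 2, "critical": 3}
    threshold = levels[min_severity]
    return [
        f for f in findings
        if levels[f["severity"]] >= threshold and f.get("fixed_version")
    ]

# Hypothetical scan report.
report = [
    {"id": "CVE-A", "severity": "critical", "fixed_version": "1.2.3"},
    {"id": "CVE-B", "severity": "high", "fixed_version": None},  # no remediation
    {"id": "CVE-C", "severity": "low", "fixed_version": "0.9"},
]
print([f["id"] for f in blocking_findings(report)])  # ['CVE-A']
```

Under this policy the "no known remediation" case (CVE-B) is reported but never blocks, which directly addresses the worry that un-fixable upstream issues would leave publishing stuck.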
It's really like having a lot of unit tests that fail where some are not relevant to your project, and each time you have to look at whether it's really a problem or not. That's what I fear about automating this. So I have a different view on that part, because it's like linting: do you want to make linting blocking or not? It's exactly the same kind of concern. What I don't know about the security scanners is whether tools like Snyk or Trivy let us first set a severity threshold, or ignore certain rules, because that's what you do with lint: if you want to ensure that lint is applied but you know that a specific rule doesn't fit, you can ignore it. Most of the time the pattern is that inline, or in a file, you explicitly disable that rule for a specific context, and you add a comment explaining why: for instance, okay, we know that this package is subject to that vulnerability, but we don't have any case where it applies to this image, so we disable that particular rule for that particular package. I don't know if the security scanning tools provide such an ignore feature. If they do, we should totally make the CI blocking and add ignore rules, because it will capture our knowledge, and it will give us something that behaves like tests: if tests are failing, you should not skip them; it means there is something to be done, either removing the test or fixing the problem. The second point here is, I don't know if we can do a diff between the issues that come from the parent image, which we cannot do anything about, and those that come from the instructions we run. When we build the official Jenkins image, we inherit from the OpenJDK image. If we do a diff, we can see what our image is adding in terms of security issues, and those issues are valuable, because they point us to something that we installed.
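For reference, Trivy does provide both of these knobs: a severity threshold and fail-on-findings behavior on the command line, plus an ignore file for individually accepted CVEs. A sketch of what a blocking scan step could look like; the image name and CVE ID are illustrative, not real findings:

```shell
# Hypothetical gating step -- fail the build only on HIGH/CRITICAL findings
# that actually have a fix available; everything else is reported but
# non-blocking. (Not executed here: assumes trivy is installed.)
#
#   trivy image --severity HIGH,CRITICAL --ignore-unfixed --exit-code 1 \
#       jenkins/jenkins:lts
#
# .trivyignore -- CVEs a maintainer has analyzed and explicitly accepted,
# one per line, each with a comment recording the decision:
#
#   CVE-2020-00000   # illustrative ID: package present but code path unused
```

This matches the lint-style workflow described above: the ignore file itself becomes the written record of each accepted risk.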
It can be a package that we installed but don't need in the end, so that could be an indicator, or it can be a real finding, and the goal would be to remove a lot of the noise so we only focus on what matters. So maybe, if it's possible, because again I don't know if technically we can do it; I don't know the format of the tools' reports. Having a diff between the FROM image that we use and the instructions we run, we could extract a list of security issues which is less noisy and more actionable. But we have to try it, because my knowledge is next to zero on this, and I really rely on Olivier, Mark, or any of you for these efforts. I haven't tried it, but I think that's ingenious. I think that's a very good idea, because what you just described is: hey, the packages we added in our Debian packaging, for instance when we add git or when we add GPG, we add them because Jenkins runs better with them installed. But that means we've now accepted them as an additional security liability over the AdoptOpenJDK image that is our base image. Yeah, I think that's a very good idea. We can ignore some issues in Snyk; I'm not sure at what level of detail, but that would be a good one. In Trivy you can ignore by level as well, I'm fairly sure. The proposal here would be the same as when we added linting to a project: starting non-blocking, of course, because we don't want to block our current capability to deliver. And we do a trial: for the next three months we run the security scans and see what we can do with them. If in three months no one, absolutely no one, either from the infra team or the community, did anything, then we can stop using it, because it means that what Olivier described is true: if no one has the time to check, read, and act, then it does not make sense.
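Neither of us knows yet whether the scanners support this diff natively, but since tools like Trivy can emit JSON reports, the diff itself is cheap set arithmetic: scan the base image, scan the final image, and keep only the findings that are new. A sketch of the core idea, with the scanner invocations stubbed out by made-up CVE lists:

```shell
# Sketch of the "diff against the FROM image" idea. In a real pipeline the
# two lists would come from something like `trivy image -f json <image>`
# piped through a JSON filter; here they are hard-coded stand-ins.

# CVE ids found in the base image (e.g. the OpenJDK image we build FROM):
printf 'CVE-2021-0001\n' | sort > base-cves.txt

# CVE ids found in our final image:
printf 'CVE-2021-0001\nCVE-2021-0002\n' | sort > final-cves.txt

# Findings introduced by our own Dockerfile instructions = final minus base.
# comm -13 prints only the lines unique to the second file.
comm -13 base-cves.txt final-cves.txt   # -> CVE-2021-0002
```

Only the remaining IDs point at packages our own instructions added, which is exactly the actionable subset described above.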
But it could at least be a side report, and then we can decide after a few months of trying, because maybe it will create new opportunities, new tests that we don't see right now. It's worth trying with the goal in mind that at some point it should become blocking; not at the beginning, but we should still think about what happens once it is blocking. Yeah, good. Those are valid questions. I think that's about it on scanning and securing the Docker images. It's a difficult topic that interests me; I've wanted to work on it for years and just haven't had the time. Right. Okay. All right. So, next topic then: publishing Docker images. I guess we want to merge this next point as well. So my point was to publish to quay.io. I'm not sure I understand. This is something I considered when I was looking for a Docker Hub alternative, I mean, when they stopped providing the free tier. But now that we have something in place, I'm really wondering why we would have to publish to quay.io as well, because Docker Hub supports multi-architecture images, and it supports Podman as well. So I'm really curious to hear about it from Jim. Yeah, so I'm just starting to dive into the whole Podman experience. Docker, I don't know if you know, dropped support for s390x and Power going forward. They still have, I think, the last version publicly accessible via their servers, and I think there are a couple of custom builds, but what's the validity of those? I think the last one was 18.09, so any of the Docker features from 19.03 onward, which I think includes buildx and a couple of other beta features, aren't going to work on s390x or ppc. It doesn't really matter too much if they're just remote agents.
They're just using Docker to run a few builds and such, so it doesn't really change the functionality. But internally, my team is looking at and evaluating switching over to Podman. And as I've done that, I'm starting to understand the ecosystem, and hence quay.io, which is one of the new parts of that ecosystem, being another Docker registry. I think right now there aren't too many differences between Quay and Docker Hub. Even when you go to pull an image, Podman has a command line option to search either docker.io, which is the Docker Hub registry, or quay.io, and you can pick one. But I think it would be nice to give users the option to pull from quay.io, and I think there are a couple of cool features being added to Podman and Buildah, which, I don't know if you've looked at Buildah, but it's the build engine companion to Podman. And I think those features are probably going to work with quay.io, or be loosely coupled with it. So I thought it would be interesting to explore pushing images there and getting a presence on quay.io. One of the things I haven't really looked into is: Docker Hub has the official images program, that whole official image repo. We don't use that for the official Jenkins image. Yeah, I know. I haven't really seen whether there is an equivalent or comparable thing on quay.io. So it might be good to establish some sort of presence on quay.io, marking down: hey, here are the official Jenkins images, before someone else might come along and do things that could confuse users. That was really my point. So, that's a good point; I could definitely register the name on quay.io.
My fear here is that it does not really add major value to maintain both registries. On our side, from a configuration point of view, it does not change much, because we can just add one additional credential and one additional tag, and that's fine. But ultimately we'll have to push artifacts to a new location, so it will slow down security releases, LTS releases and so on. Also, once we start pushing images to quay.io, the community will expect us to keep maintaining them. So if, for some reason, we don't want to use those images anymore, then we'll have to write a blog post to communicate about it. That's why we created the jenkins4eval organization on Docker Hub: so we could experiment with images. In this case, I just fear that we would be adding constraints without getting value from them. If tomorrow it appears that we need quay.io for supporting more architectures, maybe that would be a good reason to switch. But for now, as long as the community is happy with the images from the Hub, we have the process in place to push there. I think that's perfectly acceptable. I just wanted to throw it out there, but like you mentioned, I'd go register the Jenkins account over there, to at least have some sort of presence or username, so that if you do need or want to push there down the line, there won't be any hassle. Okay, that makes sense. Jim, you mentioned Podman as a crucial part of your ongoing exploration for image building, and Damien had mentioned img. I think people might have only been thinking about the Docker way of doing these builds.
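Mechanically, mirroring to a second registry really is just one more tag and credential. If it were ever done, a tool like skopeo can copy a multi-arch image between registries without a local daemon; a hypothetical sketch (the quay.io namespace here is made up, since no such organization was registered at the time of this discussion):

```shell
# Hypothetical mirroring step -- not executed here; assumes skopeo is
# installed and credentials for both registries are configured.
#
#   skopeo copy --all \
#       docker://docker.io/jenkins/jenkins:lts \
#       docker://quay.io/jenkins/jenkins:lts
#
# --all copies every architecture in the manifest list, which matters for
# the multi-platform images discussed earlier in the meeting.
```

The cost Olivier describes is not this command, but the ongoing expectation of maintenance it creates.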
Do we, as the container and platforms track, need to think about any changes specific to those tools? Or is it sufficient to say: okay, we will build an image, and it could be with Podman, with Buildah, with img, or with Docker, but people will be able to use the containers in any case? Or is that simplicity not actually there? No, I think if you're supporting the Open Container Initiative, OCI, you're fine. Docker has support for OCI and Podman does too; the whole purpose of OCI is being transportable across any kind of containerized platform. And as far as I know, the images you're building are OCI compliant, so there shouldn't be any issues. All right, thank you, thanks for that clarity. Okay, so publishing Docker images had one additional point for me, and I guess it's a separate topic: the speed at which we build our images. The security team is the very specific case: a security issue is detected and resolved, but now they need to announce it and then publish the image, and I've heard noise from the security team that they'd like our image publishing to be faster. Is that already covered in pending pull requests, so we just need to take them through the full review process, or do we need more focus on accelerating our build process? We need more focus, in particular because we don't parallelize a lot of things here. Also, a lot of the testing process is a mix of black-box and white-box testing: it's a bunch of Bats command-line tests that check the behavior of the image, while most of the time the issue could be caught with a static test, because most of the issues we have seen are a missing file, a file that changed location inside the image's file structure, or a missing metadata entry. That's the work we started on infra.ci on the JNLP agent images.
The idea here would be to combine parallelization of the build, enabling BuildKit, and maybe deferring the big test harness, which runs against a bunch of images, because it needs power and time. Most of the time we could ask: do we trust at least the first part of the test harness enough for it to run quickly and efficiently? If it passes, then we can publish, and run the full test harness daily, in the CI spirit. The slower test harness would be more about ensuring that we reach certain quality gates. But for security releases, they might not have that time. So in order to optimize, right now we need to start measuring the build time. We need to check the trends, maybe put them on the dashboard if that's not already the case, and start acting on the pipeline. Is it the build that takes the time, is it the tests, is it the publication, or all three? We need measurement, so that whatever we do to optimize, we can validate it each time. Without measurements we are blind, and it doesn't make sense to say it's going faster if we cannot prove it, at least to ourselves. So we need to put some focus on this as a next step, just to be sure that security releases can be accelerated. Okay, thank you, so there is work to be done there. Great. One question I had about accelerating the Docker build process: I don't really know much about how your machines work, and I know we talked about dedicated agents for s390x and Power. But I know one of the big things in Docker builds is caching of the different layers. And I think for the majority of at least the main Jenkins image that I've been working on, the beginning layers are pretty much the same.
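One cheap way to get the separation Damien describes is to keep build, test, and publish as distinct stages and time each one, so the per-stage trend is visible in the CI log. A hypothetical Makefile shape; the target names, image tag, and paths are illustrative, not the project's actual layout:

```make
# Illustrative Makefile layout -- separate, individually timed stages so CI
# can report where the minutes actually go.

build:            ## build the image only
	time docker build -t jenkins/jenkins:debian -f debian/Dockerfile .

test: build       ## run the bats harness against the built image
	time bats tests/

publish: test     ## push only after build+test, each already measured
	time docker push jenkins/jenkins:debian
```

With stages split like this, "is it the build, the tests, or the publication?" is answered directly by the timestamps in each stage's log.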
Then later on, when you copy the binary or do a couple of downloads, those layers might change, but if we had some sort of caching mechanism preserved between builds, that should speed things up a lot. And I know there are some issues with caching if done improperly, in terms of cache invalidation. So the main limitation we have right now is the script which builds the images: it does that in sequence, one image, then the second one, and so on. The good thing is that the same script runs on the same virtual machine, but it's definitely not optimized. It's typically the kind of example where someone worked on something, and we just kept adding new features to that script without taking the time to refactor it. So that's what we have to deal with today. Yeah, I do have a PR open for the multi-arch builds. You're right that the Jenkins part of the build script loops over building the last, I think, 30 releases of Jenkins, to keep those fresh as the OS refreshes, and those do cache very well, because after you build the first one, half of the Dockerfile really doesn't change. I guess it's really that first build that will take longer, and then each one after that should be fully cached. It would take a good amount of redesigning to really optimize it, but I don't know if that's where the time is going; like you just mentioned, we need reporting and tracking to see where the slowdown actually is. And the issue with caching is that we might have correctness issues. For a release, most of the time we like to build from scratch, to be sure that any package index inside the image is freshly downloaded, etc.
And cache invalidation depends a lot on Docker, at least on the content of the RUN instructions inside the Dockerfile. If we don't change an instruction, caching might make us miss package updates inside that layer. That's always the issue. Right now, I don't know if we're using BuildKit, but with it there is much less of a caching problem, and we could have some caching at least on the base images. Also, right now we have a different Dockerfile for each image, while if we could run the builds on one powerful machine with a single Dockerfile containing many multi-stage builds, with BuildKit enabled, it would be far more efficient than trying to distribute the build across multiple machines. BuildKit is really impressive in its capacity to parallelize, to pull different layers at the same time; it builds a complete dependency graph and runs as much of the graph in parallel as possible. I've seen a lot of improved times with it on most of my setups, including setups far bigger than this one. So that could also be an opportunity: a mix of caching and BuildKit. But again, as you said, we need to measure, to be sure that this is the part which slows down the builds. With the new Docker pull limits implemented this year, is that accounted for when we actually do the builds and publishing? Are you using a premium account to get around it? We have to pull down Alpine, Debian Buster, Debian Slim, things like that; are we going to hit those limits? I've been hitting limits with my free account. We are not, because we are sponsored; we are part of the open source plan from Docker. I was going to suggest, if we were hitting those limits, or also just to speed things up, caching those base images somewhere.
Like, okay, let's cache Debian Buster; but again, with caching you might miss updates. I haven't really found a good Docker cache for myself anyway, so I don't know what's out there. That's correct. Right now, in terms of measurement, I've put a link in the conversation to the GitHub Actions feedback for the official Jenkins controller image. It's nice because we have the detail of all the timings, but the first thing to note is that the build part is aggregated. I see a build step, but since the build is done before each specific operating system's tests, right now we are not even able to differentiate how much time it took to build the slim image versus how much time it took to test it. There is one generic-looking build step, which took nine minutes, and then we have 15 minutes per test, so we need to be sure that the tests are not implying a rebuild each time. And since we're using a Makefile, we might have other issues there. So that should be a first topic: be sure that we have a clear separation between build, test, and deployment. Maybe this is something we can contribute from infra.ci, because right now we have a big library with a lot of parallel builds, and by using declarative pipeline here and benefiting from the Blue Ocean visualization we could get detailed metrics. So there is some work to do on this. Nice. I think Victor already started some work there; I'm sure there is a pull request for that. Okay, but is any other discussion needed on accelerating our Docker build process? It feels like, Damien, what you described is: we need to measure. We've got preliminary measures already visible through the GitHub checks, and then we start looking at what we could do. You mentioned BuildKit; now BuildKit is not something that I'm familiar with, so is that a higher-level thing over Docker?
No, it's a feature inside Docker, only enabled by default since the December release of Docker, which we don't have yet on our machines. So depending on the version of Docker we're running, we might have to set an environment variable to the value 1; I think it's DOCKER_BUILDKIT. It's quite easy, and immediately your Docker engine will start using BuildKit. Why don't we have it yet? We reinstall Docker all the time... no, no, I know why. The reason is that we have a process to automatically update the Packer images we use on Amazon, but the EC2 plugin requires you to manually update the AMI, so we are running an old version and need to bump it. Don't worry, it's a two-step process. First, the shorter path: set the environment variable to 1 inside the Jenkinsfile, or better, in the Makefile that takes care of building the image. Once we do that, we can immediately measure the gain in time. We might or might not see one, I don't know, but that will come later. Where BuildKit is better out of the box is in evaluating whether a layer needs to be rebuilt or not; it's really efficient, we're talking about ten times faster. Then we can implement efficient caching, because the caching is different between standard Docker and BuildKit. We should consider BuildKit the new official and default way to build Docker images; it's a re-implementation of the Docker build engine. img builds on BuildKit, there is a specific mode to produce OCI images using BuildKit, and there are BuildKit builders that can run on a remote Kubernetes. There is a lot of work around this, but it's an internal feature, a re-implementation for efficiency. I do worry though.
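The variable Damien is describing is, I believe, DOCKER_BUILDKIT, which opts an older Docker engine into the BuildKit backend per invocation; the build command itself is unchanged:

```shell
# Opt in to BuildKit on a Docker engine where it is not yet the default.
export DOCKER_BUILDKIT=1

# The build command stays the same (not executed here -- assumes a running
# Docker daemon and a Dockerfile in the current directory):
#   docker build -t jenkins/jenkins:debian .

echo "$DOCKER_BUILDKIT"   # -> 1
```

Exporting it in the Jenkinsfile or Makefile, as suggested above, makes the switch a one-line change that is easy to A/B against the current timings.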
I haven't actually played around with BuildKit, though I did see it when I was exploring Buildah. But I do worry about s390x and Power: you mentioned the December release, and there might have been a couple of builds since, but it might not be available on s390x or Power. Good to know. Right, thank you. I can test that later today, at least poke around, because if it does all you say, it would be useful internally for me too. Okay. Any other topics on accelerating the build process? Okay, next topic then. This one's a little more painful: maintainers for existing Docker images. Before we continue, I just want to mention I also reserved the jenkinsci organization name, but the jenkins one is already taken. Okay. But that's the kind of thing we can ask the platform about; we can open a support ticket, and sometimes it works. I've already retrieved the Jenkins name on other systems before. Great. Okay. All right, so, maintainers for existing Jenkins Docker images. The problem statement is that we've got Docker images in various stages of being cared for, or not being cared for, and the proposed Jenkins Enhancement Proposal is that we use the GitHub code owners facility to declare maintainers for specific Docker images. And if there's no maintainer, we use an "adopt this image" process, modeled on the "adopt a plugin" process, so that people would understand. As an example: I'm not paying significant attention to the CentOS images; I pay a lot of attention to the Debian images, and a little attention to the Alpine images; so we need somebody else who's interested in CentOS to take on CentOS. That's the concept. And then the notion is we need that for controllers, which is the one I pay a lot of attention to, and for agents, where I don't pay nearly enough attention.
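The GitHub feature referenced here is a CODEOWNERS file: path patterns map image directories to the accounts responsible for them, and GitHub then automatically requests review from those owners on matching pull requests. A hypothetical sketch for a multi-flavor image repository; the usernames and paths are illustrative:

```text
# .github/CODEOWNERS -- illustrative mapping of image flavors to maintainers.
# GitHub requests reviews from the owners of any path a PR touches.

/debian/    @a-debian-maintainer
/alpine/    @an-alpine-maintainer
# /centos/  -- intentionally no entry: this image is up for adoption
```

A flavor with no entry is immediately visible as unowned, which dovetails with the "adopt this image" badge idea discussed later.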
And then infrastructure images, because those are also crucial to us and we use dedicated infrastructure images for specific tasks, if I understand correctly. Damien and Olivier, can you illuminate that for me a little more? So, leaving aside the renaming of the agents, which went from JNLP to inbound (for the SSH agents, I don't know if they have been renamed, but I would expect outbound), we have a dependency graph for all the agent-related images. We now have a base image that only specifies the minimum for an agent, meaning a jenkins user and OpenJDK, and that downloads a version of agent.jar; there is also an entrypoint script whose role is to start an agent: you pass it the parameters to connect to your master, and it downloads the latest agent. That's all. Then from this we have inherited images in other repositories for inbound and outbound: that means installing the specific scripts for the JNLP inbound connection, supporting WebSocket or JNLP, and in the SSH case, adding the logic to install the OpenSSH server, set it up, and add a script able to collect public keys from the environment at startup. So we have the common base image, the inbound image, and the SSH image. These three images, as well as the controller, must be maintained by the community, because we use them for the Kubernetes plugin defaults, for Docker agents, for a lot of setups. And then there is a bunch of images inherited from the older JNLP agents. They are in a jnlp-agents repository in the jenkinsci organization, and these images are published to Docker Hub: a library of ready-to-go inbound agent images with specific tools for a specific language. For instance: jnlp-agent-maven-jdk8, jnlp-agent-ruby, jnlp-agent-python2, python3. There are Windows ones too, like a Core runtime image. There is a whole bunch of images.
So the first issue with these images, and this is something I propose to add next to the code owners and the "adopt this image" process, is that these images on Docker Hub don't have any description. They don't have a readme, because Docker Hub doesn't pick one up from the source repository by default. And the publication process for such images, if we keep them, must include a readme with at least a badge that either says "we need a maintainer" or names the maintainer, and points to the source code. Because right now these images look really suspicious: browsing Docker Hub, I see an image with no readme, no information about the repository the Dockerfile comes from; it could be a Bitcoin miner for all I know. So the idea, for all the images, because the controller has one but for the others we have no check, is to be sure that we have a readme and a proper way for users to evaluate the trust they can put in the image. It's not perfect, of course; even a readme does not guarantee that the content of the image has not been tampered with by a malicious user, but still, in terms of image documentation, the user experience is mandatory. I was going to say, I know on a lot of Docker Hub repositories, when people post images, the readme usually links back to GitHub for more information. And if you can include a link, either in GitHub or on the Docker Hub page, to where the images are built, like a public build pipeline, that would help clarify and build that trust: you can see, hey, this is coming from Jenkins, here's the build, and here's the publish. But for that we can only provide a link to the instance that we use to test the build process, because in the end we build and publish from a private instance. Okay, yeah, that makes sense; that would be useful to see anyway.
And, I don't remember the names, but there are a few services that provide embeddable badges you can put in markdown: SaaS services that run the equivalent of a docker inspect and give you a web view of a given image on Docker Hub with an analysis of each layer; Docker also provides such a service. But adding at least the minimum, the link to the Dockerfile as you said, and eventually to the test process and the contribution process, is the baseline. Also, if an image is in the "adopt this image" process, there should be a badge or something really visible, a big yellow warning, that says: hey, this image is waiting for a maintainer; if you use it, contact us to help us maintain it, or risk having this image deprecated in the future. That could also be a place to surface security concerns, like scan results and trends. And there's another point here. We have the readme, which is part of the process for all images, but the issue with the JNLP agent images is not just that people use them. We already have one dimension of maintenance around the operating systems: the Jenkins controller can be built for different JDKs, different operating systems, different CPU architectures, and the same goes for the JNLP agents. We have another dimension of inbound versus outbound: is the agent expected to be connected to and started over SSH, or is it expected to use JNLP or WebSocket? And now with these images we have yet another dimension: do you want Maven 3 or Maven 4, Python 2, Python 3, Python 2.7, Python 3.whatever? That's a lot to maintain; it's an exponential effort, while the goal of using Docker images, or even better Kubernetes pods, is to have minimalistic and isolated images.
So there is that whole topic of: do we need these images? And if we do, there is no problem. When I say "we", it's as a community: do we have community users who rely on these images, and are they willing to maintain them? Because maintaining, testing, and checking security for all of these images has a cost. We can be sure that we need the controller and agent images, but do we really need these tool-specific ones? I think that's the question. Do we really need to maintain every image? It's the same for the default Jenkins image: we maintain an image for Debian, for CentOS, and for Alpine. What's the benefit of doing that? From a Jenkins point of view, I don't think it's the responsibility of the Jenkins project to maintain an image for every combination of scenarios. So I think having code owners would already simplify things, because something I find challenging is: we have many different images, and many people opening PRs, but following and reviewing PRs takes time. And once a PR is merged, people assume that we will maintain that image forever. It does not work this way. For instance, the last time we had to build the Docker images for the stable release, it took us about an hour. Again, do we have to build images for everything? Does that even make sense? So I would definitely be interested in having code owners for specific images, but more importantly, I would also like a procedure for removing images, because for instance when you go to Docker Hub, I think there are Jenkins Docker images where the latest tag points to a very old version. And we've had people in the past saying: look, I tried that Docker image and it was not working. And it was just an unmaintained image.
So I think we should have a more aggressive process to delete images that we don't want to distribute anymore. I don't have a solution, but I have certainly seen this even with the Jenkins images: our Debian 9 or Debian 10 controller image was based for a full year on a tag that the OpenJDK project had stopped maintaining, so we were locked backwards in time on a year-old JDK version. But I don't think there is a way to remove an image once it's published; it's published, it can be cached all over the world. You cannot remove the image from an end user's machine, but at least you can remove the tag: you cannot remove the image, but you can remove the tag. By removing the tag, a person using the digest will still be able to retrieve the image; a person looking for a specific tag, say the latest of that flavor, won't be able to download it anymore. But to me that seems too aggressive; I leave it to their judgment whether they want to keep using something that we're no longer actively updating. A Docker image is a snapshot in time, isn't it? Which they can keep with the digest. So when we deprecate, or delete because of a security issue for instance, we can have a message that says: okay, if you used an image on this list, and you want to keep using it despite the security or other alerts, replace the tag name with this digest. It only removes the searchability of the image on Docker Hub; users just have to switch to the digest and they can keep using that image with no problem. It's only the tag discoverability that changes in that case.
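For reference, the digest workaround Olivier describes looks like this from a user's side; the digest below is illustrative, not a real image reference:

```shell
# Before deprecation, a user records the digest of the image they depend on:
#   docker inspect --format='{{index .RepoDigests 0}}' jenkins/jenkins:alpine
#
# Afterwards, pulling by digest keeps working even once the tag is removed
# (illustrative digest, not a real one):
#   docker pull jenkins/jenkins@sha256:0000000000000000000000000000000000000000000000000000000000000000
```

So deleting a tag removes discoverability, not availability: anyone who pinned the digest keeps exactly the snapshot they had.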
That just feels different from how I'm used to thinking of code we've delivered in other places, where we don't actively prevent you from running an outdated Jenkins version even though we know it contains security vulnerabilities. Leaving the tags there seems harmless to me — we're not gaining much by deleting a tag, are we? Personally, the one I fear is the latest tag. Oh, okay. Yeah. Because for me the latest tag is confusing when we know we don't update an image anymore. I don't really care if we have images mentioning a specific version; I just want to be sure we don't have latest images that are not latest anymore because we no longer maintain them. I would say, having things written down: if the main README of the official image has a section saying "these images are known to have security issues," or it points to a file that lists all the affected tags, then that's really good enough. "These images are considered deprecated and won't receive any maintenance; these images are known to have security issues; we recommend you use this other one instead." So you don't remove the tag, but you share the knowledge with the community that they should not use it, and then it's, as you say, their choice to keep using it or not. Mark, didn't you put a proposal together for what platforms we support, and deprecation, and so on? I think it would be good to bring that up and at least look at it. Absolutely, and that's the general theme here — that specific idea, that concept. You're absolutely right, Jim, I think this is the right place for that, along with the ideas that have been added here.
"Adopt this image" and code owners — including them for agents, not just for controllers — feel like really healthy things to do. For me it feels like this should be one of those things: yes, we feel we need this. And it's an "adopt this image" process, a process so people know an image is up for adoption, so that they are intentionally prompted to think carefully before they choose to use that image. I think having a document that really spells out what the deprecation path is would help, especially when we're upgrading between Debian versions — stretch to buster, say. I don't know how that process works. With Alex, I think we talked about the new tagging scheme, at least for the main image, and the Windows images are starting to use it. We talked about, okay, the jenkins:debian tag — what does that point to? Because that changes over time, right? It's the latest Debian, so it points to stretch, and then we eventually need to move it to the next stable release. Just explaining how that works, and what the current one is, would be extremely helpful for the community. And it's a good place to add those deprecation notices: "Hey, this is deprecated. Here's the security concern. Please use this alternative image if you want to stay on Debian," or something like that. I want to add why we're talking about tags. The example I have here is that if you pull the jenkinsci/jenkins Docker image, you're running version 2.151 — a very old version. If someone is not aware that that image is not maintained anymore, they will just try to test Jenkins using the latest tag, and it's not the latest version; it's a very old image. And I have a proposal for this.
Like building a latest image just one more time which, on startup, begins with a sleep of 100 seconds and a message that says "this image is unmaintained and insecure; you should check another tag" — just so the latest tag keeps existing, but if it takes two minutes to start, you will be really compelled to use another one. Yeah, most people have to check the logs anyway to grab the API key or password, or figure out what it is. So it'd be cool to add that deprecation notice into those images somehow. I don't know how you'd go about doing it, but I like the idea: if we deprecate an image, let's rebuild it one last time and add the deprecation notice into that message somehow. I think that'd be really cool, and also a good way to communicate to users that they should probably move to something else. Well, we've actually already got — I think it's an interesting idea — a facility that will alert users of security vulnerabilities, right? The admin monitor that appears and says you're running a vulnerable version. That's already there, and we've got admin monitors that show you're running outdated plugin versions. So maybe we ought to consider a way of saying, hey, this Docker image is also somehow outmoded, and add an admin monitor for that. It's not quite as straightforward as Damian's proposal of just putting a 60-second sleep in with a big output to standard out, but I suspect it would be more likely to be seen. Then again, it may be as likely to be ignored as anything else we say. Yep. In that case, at least it won't be alert fatigue if we have different media to communicate on this. Okay, good. Any other conversation we need to have on that particular topic? I have two points here. The first one is an idea from Garrett — I'm not sure if Garrett is there.
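The "rebuild one last time with a deprecation notice" idea could look something like the sketch below. The base tag, wording, and sleep duration are illustrative, not an agreed-upon implementation; `/usr/local/bin/jenkins.sh` is the entrypoint script shipped in the official controller image:

```shell
# Write a one-off Dockerfile that wraps the deprecated image's startup with
# a loud warning and a delay, as Damian proposed.
cat > Dockerfile.deprecated <<'EOF'
FROM jenkins/jenkins:2.151
# Print a notice, wait, then hand off to the original entrypoint script.
ENTRYPOINT ["/bin/sh", "-c", "\
  echo 'WARNING: this image is deprecated and no longer maintained.'; \
  echo 'See the repository README for supported tags.'; \
  sleep 100; \
  exec /usr/local/bin/jenkins.sh"]
EOF

# The final rebuild and push would then be (requires Docker):
#   docker build -f Dockerfile.deprecated -t jenkins/jenkins:latest .
#   docker push jenkins/jenkins:latest
echo "wrote Dockerfile.deprecated"
```

The image still works for anyone who truly needs it, but the two-minute startup nudges everyone else toward a maintained tag.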
He is. We had a discussion about discoverability of the contents, and since we mentioned having a detailed tagging syntax, another complementary tool we have on Docker images is labels. There are two kinds of label schemes — metadata associated with Docker images that you can tune — one defined by the OCI, and another scheme. I don't see any labels on the official Docker images, but adding them would help discoverability, by implementing at least the standard OCI scheme: adding information like the Jenkins version, and the build date, time, and git commit are also metadata. Of course this metadata can be tampered with. However, it helps discoverability when you check the information about an image without running it. So we could also use this to complement the detailed tagging syntax: stating the base operating system, the package list or tools list that's installed — a bunch of metadata to explain the intention in a different way. That'd be awesome. Do you have a link to the OCI labeling standard? That'd be really awesome. I honestly haven't played around with labels much, but that's really cool. I know Skopeo — I think that's the name — is one of the tools from the Podman ecosystem you can use to interact with remote images without pulling them down. So I imagine you could look at the labels and decide, hey, this is an image I actually do want. Really cool. And maybe we could add something like an "adopt" label — hey, this image is looking to be adopted, or it's being maintained by community member X. Garrett, do you have the link to the tool you wrote to check the diff between labels? Yes, I'll put the individual links in — there's the Open Container spec and there's the Label Schema, for the two different ones. Thanks. And I'm taking the opportunity to also share with everyone the tool that Garrett has written.
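To make the label idea concrete, here is a minimal sketch using the pre-defined annotation keys from the OCI image spec. The version number, commit, and dates are illustrative values, not the project's actual metadata:

```shell
# Sketch: a Dockerfile fragment declaring standard OCI annotation keys as
# labels, so tools can inspect an image's provenance without running it.
cat > Dockerfile.labels <<'EOF'
FROM jenkins/jenkins:lts
LABEL org.opencontainers.image.title="Jenkins" \
      org.opencontainers.image.version="2.263.1" \
      org.opencontainers.image.created="2020-12-01T00:00:00Z" \
      org.opencontainers.image.revision="abcdef0" \
      org.opencontainers.image.source="https://github.com/jenkinsci/docker"
EOF
echo "wrote Dockerfile.labels"
```

In practice `created` and `revision` would be injected at build time via `--build-arg` rather than hard-coded, so every build carries its own date and commit.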
It's a command-line tool that we've started using on the infra; it does a diff between two images in terms of labels, which is really useful when you rely on labels to understand what the content of those images is. That's awesome. So maybe you can share the link — maybe a shout-out to that as well. Okay, sweet. One of the improvements I made in the build scripts — it's in my PR — was grabbing certain info from Docker Hub's API, and one of them was using a remote inspect to gain more information, because I don't want to have to pull images down. One of the things I was doing in the build script was avoiding pulling down images if I didn't need to, to speed things up. We do have some of that in the build scripts already, but I was trying to be more aggressive about checking whether we actually need to build something again. I think labels could be really useful if we start implementing them in the images. Yes, this uses the registry API to query some of it directly without having to pull down the image. Awesome. It's pretty quick. There are a few helpers in there as well: if you're doing a Docker build locally and you want to add all of the labels as build args, however you want to configure it. Thank you. Yeah, it's cool. And the latest thing I added was the ability to diff two namespaces — you basically just provide the namespaces and it tells you what images are in both and whether there's any difference between them. It's quite handy for debugging things, and for using Kubernetes. The other point I wanted to make is about the number of images we have to maintain — that's a topic I want to bring to GSoC. Right now, when we build all these images, it's as if we're building a set of meals that are already pre-baked.
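The "inspect without pulling" workflow mentioned above can be sketched with skopeo, which talks to the registry API directly. The snippet only constructs and prints the command, since actually running it needs skopeo installed and network access; the image reference is just an example:

```shell
# skopeo can read an image's manifest and config (including its Labels map)
# straight from the registry, without downloading any layers.
image="docker://docker.io/jenkins/jenkins:lts"
cmd="skopeo inspect $image"
echo "$cmd"

# With skopeo and jq installed, the labels alone could be extracted with:
#   skopeo inspect "$image" | jq '.Labels'
```

This is the same trick a build script can use to decide whether a rebuild is needed at all, by comparing remote metadata against what it is about to produce.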
And if we deprecate some of these images that are not strictly mandatory, something we could provide to the community is the recipe to build their own image. A lot of the images on Docker Hub have a documentation section in the README that says: if you want to extend this image to tune it, here is how. And there's a sample Dockerfile that says FROM whatever image, RUN apt-get install such-and-such packages. I feel like for all the JNLP agents, instead of maintaining images in the future if we don't have code maintainers, one solution on the deprecation path could be providing a Dockerfile generator, or at least some documentation that says: if you want to install some tools on your inbound agent, here is how to do it. Here is an example with Maven: FROM jenkinsci/inbound-agent, then RUN apk add maven on Alpine, or RUN apt-get install maven on Debian, or curl whatever version of Maven and check the SHA. And the idea for the Dockerfile generator would be a form: hey, I want a Jenkins agent, inbound or outbound; the operating system should be Windows, Debian, whatever; and I want Maven, Terraform, whatever. Select — and then, since it's only textual, it's a library of textual RUN instruction blocks that could generate the Dockerfile. This could be done in JavaScript on the static website; it could be done in numerous ways. So that's long term, but at least short term, having a first piece of documentation that says "here is how you can extend this image," because that's the information a lot of users need. They aren't really using these pre-built images as-is — what's the real need? They want to customize their agents and run them as Docker containers. So if we show them how to do that, they will take care of building, security scanning, testing, and maintaining those images on their own, because these are their specific needs.
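The "recipe instead of pre-baked image" idea boils down to documentation like this. The base tag and package commands are an illustrative sketch; a real recipe would pin versions and verify checksums as described above:

```shell
# A user-maintained Dockerfile extending the inbound agent with Maven,
# which is what the documented "recipe" would teach people to write.
cat > Dockerfile.agent-maven <<'EOF'
FROM jenkins/inbound-agent:latest
USER root
# Debian-based variant:
RUN apt-get update && apt-get install -y --no-install-recommends maven \
    && rm -rf /var/lib/apt/lists/*
# (On an Alpine-based variant this would be: RUN apk add --no-cache maven)
USER jenkins
EOF
echo "wrote Dockerfile.agent-maven"
```

With this recipe in the README, the Maven/Terraform/etc. permutations no longer need to be published and maintained by the project at all.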
Meanwhile, we still provide added value with the base image and complete documentation on how to extend it. Because it's not only about providing labels or built images — it's also about providing value by teaching people how to build the whole image. This is a proposal for at least documenting the base case and building something that could be useful. You're muted, Mark. Yeah, it looks like you are muted. Damian — I like the concept. It feels like it supports the idea of: if we have no maintainer, we mark the thing as up for adoption, and up for adoption means it's not actively being maintained. And then, even better, we have ways to hint: here's how you can overcome "up for adoption" by adopting it yourself for your specific needs; here are the steps. I like that. Good. Okay. So basically it's reducing the number of images we are maintaining by teaching users to build and maintain their own. Great — teaching them to cook instead of serving pre-cooked meals. Anything else on that topic before we go to what appears to be our concluding topic? We've got not more than 20 minutes; I've promised myself I will stop at two hours. So, the official Helm chart for Jenkins was one, and then I'd like a few minutes at the end to give you all a description of what I think I'm going to say tomorrow in summarizing this track. So, any topics you'd like to address on the official Helm chart? We put it on the list as a possibility — I'm not sure what you would like to discuss or describe there, or who proposed it. This was just me putting topics down as possibilities, so it's not something where I have passion or even ideas. So right now, we have two kinds of Helm charts in the project. We have the official Jenkins Helm charts — these are published via GitHub Pages, so it's just a GitHub Pages site where we publish. I think they have a process in place.
There was some work on that quite recently. I mean, I think it works, so I don't think there's much we could bring here — we might sign the chart, but I'm not sure of the added value of doing that. On the other side, we also have Helm charts in the Jenkins infra project; we maintain charts there as well. It's not a big deal for us at the moment. But we may — I got access to a Helm charts repository on the JFrog service, under repo.jenkins-ci.org, so we could push our Helm charts there. We could push everything under repo.jenkins-ci.org, but for the official Jenkins Helm charts the community is already using the existing endpoint, so I would not change the endpoint anyway. So I don't think we have much to do here. Sorry, go ahead. One thing I've noticed with Kubernetes and such: Helm charts are still very popular, but there's been an uptick in operators, and having a Jenkins operator might be kind of cool. Operators are more about managing the life cycle over time, whereas my understanding of Helm charts is that they're more of a single install — you can update it, but operators handle that magic for you. Yeah, but there is already a Jenkins operator. It was pushed by VirtusLab, and there was a discussion on the dev mailing list about it; basically, they proposed to collaborate on developing the Jenkins operator. I didn't know that, but that's cool. I know the operator is probably only pulling x86 images today, but it'd be kind of cool, as we make progress on multi-arch support, to update the operator to enable multi-arch support as well. So I put the links to the projects in the chat. Okay, thank you. So, something that came up on the cloud native track about an hour ago.
What I want to discuss is: when you're using this Helm chart and you're applying updates quite often, there is a period of time during which Jenkins is not available, and you're likely to miss webhook events coming from something like GitHub. There was some talk about whether we could create something external to Jenkins, but included optionally inside the Jenkins Helm chart, that basically does some kind of store-and-forward of webhook events to Jenkins. It would be fast to restart, no memory, nice and lightweight, and done. That's a nice-to-have but not mandatory. I've used the Helm chart three or four times over the past year, and each time I ended up having to search for the correct value, because the values had changed. There is a drift — it's classic for every Helm chart in the world — between the default values.yaml file, which is the source of truth for default values, and the documentation associated with it; the two tend to diverge. For the official Traefik Helm chart, which I'm still using, what we started two years ago was improving the CI process that was linting the whole chart, by adding a dumb shell script — and I'm sure we can find or build something — that checks that if a given value is defined in the default values file, the same value exists in the associated README, just to be sure the CI fails if you change a default value without updating the documentation. Because even the best person in the world, when exhausted, will change something and forget the documentation from time to time; it's not an individual's failing — any human can make that error. Having a tool that helps with that improves the user experience, because I've been pretty frustrated, and I know deeply how a Helm chart works. Someone new to that environment will just retain the experience that Helm and Jenkins are painful, and use something else.
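The "dumb shell script" CI check described above can be sketched as follows. The sample `values.yaml` and `README.md` are stand-ins for a real chart's files, and the check is deliberately naive (top-level keys only):

```shell
# Fail CI when a top-level key in values.yaml is missing from the README.
mkdir -p /tmp/chart-doc-check && cd /tmp/chart-doc-check

cat > values.yaml <<'EOF'
controller:
  image: jenkins/jenkins
agent:
  enabled: true
persistence:
  enabled: false
EOF

cat > README.md <<'EOF'
| Parameter  | Description         |
|------------|---------------------|
| controller | Controller settings |
| agent      | Agent settings      |
EOF

# Top-level keys are lines with no leading whitespace ending in ':'.
missing=""
for key in $(grep -E '^[A-Za-z0-9_-]+:' values.yaml | sed 's/:.*//'); do
  grep -q "$key" README.md || missing="$missing $key"
done
echo "undocumented keys:$missing"
# A real CI job would end with:  [ -z "$missing" ] || exit 1
```

Here `persistence` is defined in `values.yaml` but absent from the README, so the script reports it and a real pipeline would fail the build.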
So it's not mandatory, but it could be something — not that complicated (it's not easy, but it's not complicated) to implement — that would have real value for newcomers. I'm definitely in favor of such an initiative, because relying on outdated documentation is really frustrating. Anything that fails the build when the documentation hasn't been updated is good. And I have one last topic, brought to me yesterday by someone who dropped into the room — something I'd already discussed during the first of these events. They asked if we can use Kustomize — with a K — which is an alternative to Helm, to install Jenkins. Right now, I naively pointed them to the fact that Helm can export the result of a chart as pure YAML, so you can have a helm template command that outputs a bunch of YAML, which can then be piped to Kustomize. It's the same debate: should we provide support for every package manager? I don't know, but maybe mentioning the trick of piping Helm's YAML output to Kustomize could at least provide an easy entry point. I don't know all the pros and cons of using Kustomize, but it's been built into the kubectl binary out of the box for a few versions now, and it's pretty popular, so it would be nice to at least give some hints. Again, it's nice to have; it's not mandatory, it's not blocking, it's not something to prioritize — but is anyone willing to contribute on that part? At minimum it would be documentation, and maybe a few commands or one test in the CI. Like the previous issue, those are two easy ways for a new contributor to get started. And I assume, Damian, that the way that would evolve would be pull requests to the documentation that supports today's Jenkins controller Helm chart. Yes, and complementary pull requests to the "Jenkins with Kubernetes" documentation on the official doc site.
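The Helm-to-Kustomize trick mentioned above can be sketched like this. The chart repository URL is the official one; the commands that need Helm and network access are shown as comments, while the Kustomize file itself is written out:

```shell
# Render the Helm chart to plain YAML once, then let Kustomize own it.
mkdir -p /tmp/jenkins-kustomize && cd /tmp/jenkins-kustomize

# 1) Render the chart (requires helm and network access):
#      helm repo add jenkins https://charts.jenkins.io
#      helm template my-jenkins jenkins/jenkins > jenkins.yaml

# 2) Reference the rendered YAML from a kustomization:
cat > kustomization.yaml <<'EOF'
resources:
  - jenkins.yaml
# patches, namespace overrides, labels, etc. would be layered here
EOF

# 3) Apply with the Kustomize built into kubectl:
#      kubectl apply -k .
echo "wrote kustomization.yaml"
```

The trade-off of this approach is that chart updates mean re-rendering `jenkins.yaml`, so it suits teams who want Kustomize as the single source of truth.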
And that's the documentation one of our contributors created — I don't know the status, whether she's still working on it or whether there's another contributor — but that could be something where we add pull requests to both sets of documentation. Yep. Okay. By the way, I'm just wondering about the best way to highlight the Jenkins operator and the Jenkins Helm chart. Maybe we should put them on the download page. It's definitely not there today. So you're envisioning it would be someplace like here — bringing up this page — somewhere on this page? Yeah, why not: in the Jenkins downloads, with links to the Jenkins Kubernetes operator and the Jenkins Helm chart documentation. Yeah, we certainly have Docker there, and we have some rather exotic ones like OpenBSD and FreeBSD and Gentoo that are actually maintained outside the project, so we could easily add Helm charts. Helm charts and the Jenkins Kubernetes operator — and it would be the LTS, because by default I think we are using the stable version. Yeah, I think that's a very good idea, thank you. Make a note of that, because there's no reason we can't add the Helm chart to the downloads page. And that one actually is maintained — maintained, if you will, by the project; it's not like the FreeBSD package or the OpenBSD one, it's really maintained by people who are part of the Jenkins project. Good. Okay, I like that. And today I think the Docker link just takes us to Docker Hub, so we could have a link that does something similar and navigates to the Helm chart. Good. Yeah, I think it's charts.jenkins.io. Okay. Yes, so it's just the chart endpoint that you need to know. Okay — and there isn't a centralized repository of Helm charts any longer? No. Okay. Nice. Any other topics we should be discussing here? You've been heroic at almost two hours of working on this track. I think we covered a lot in those two hours. So, in tomorrow's closing session —
I intend to present a summary from this. I've got, let's say, 10 to 15 minutes — actually more like five to seven minutes. What I was thinking of doing is putting yellow highlights on the things that I think should be mentioned. So I'd like to mention this one that we intend to do. Okay — I don't want text color, I want the background color; how do we do highlighting in Google Docs? Okay, sorry, you're watching me learn how to use Google Docs; that's really awkward and embarrassing. This is one that I would like to include. Okay, so then I didn't think we needed a separate mention of this; for me it's sort of part of "Docker image support for multiple platforms." We believe we need more work on securing and scanning, so that I think should be mentioned. Now — I don't know that we're ready to mention publishing Docker images. Olivier, are you okay if we hold on that one? Should I say that we're going to be discussing it further? I think we need to discuss it further. Okay. This one, though — accelerating the Docker build process — I think it's safe to say yes, we feel we need to work on it. And then there's the concept of the platform support Jenkins Enhancement Proposal I'd like to talk to. And — is there anything from the Helm chart topic that I should include in tomorrow's summary? I think that we have a Jenkins controller Helm chart and also a Jenkins Kubernetes operator. And — yeah, that's all. Okay, so mention that the Helm chart and operator exist. Okay. And for me, reducing the number of images is sort of a natural outcome of going through the Jenkins Enhancement Proposal (JEP) process for "adopt this image" and code owners; we will naturally be highlighting that. Okay — anything else that I should be mentioning tomorrow as the voice for this track that I've missed?
No, sounds great. All right, and you are welcome to chime in tomorrow and correct me or give updates. We're again going to do it as a Zoom meeting, not as a webinar, so that you will all be able to make your voices heard. Intentionally, this is part of the contributor summit, not just a presentation to the whole world. So I'll see you all tomorrow for that; looking forward to it. Awesome. See you tomorrow. Thanks. See you tomorrow. Bye bye. Thank you.