Hello everyone, welcome to the Platform SIG meeting. Today we have several topics to discuss. Just to provide some background: we have had multiple discussions about offering multi-architecture Docker images for Jenkins, so we decided to have a meeting in order to... Sorry, something is weird with the sound, I don't know if it's me. Could you please mute yourself? I'm not sure who it was. No, it wasn't her, somebody was talking. Yes, so I muted her for now. Okay, do you see my screen? There you go. Yes. Okay, sorry for the mess. Let's go back to the SIG. So we had a number of discussions. Just a second, I'll share the agenda with the entire SIG, so you're able to edit it if needed. Okay, so over the last half a year we have had topics about multi-architecture Docker images. It was mostly about ARM, but also Windows, PowerPC architectures, et cetera. There are still open discussions, and we agreed to have this meeting in order to sync up on how we approach that and where we are. There is a thread in the Platform SIG mailing list about that, called "multi-architecture Docker image". We had a discussion there, and thanks a lot to Mark for participating and to Durgadas for starting this thread. But there are also several other platforms, so I wanted to have this meeting just to sync up on how we approach the thing in general, and I have invited several people so that we can discuss it. We have Carlos on the call; he's one of the maintainers of the official Jenkins Docker image. I also invited Nikola, who is the second maintainer, and probably I'm the third maintainer, I'm not sure. But anyway, we have some people who are related to that. We also have Olivier, who is a member of the Jenkins Infrastructure team, so he can provide us some insights about the current implementation and how we could approach that. So that's it. We also have a number of other people joining.
So regarding the agenda for the meeting: we start with multi-architecture Docker images, and then, if we have time, we discuss other topics. We still have the Chocolatey packaging, the platform components we have, et cetera, so that's something we could discuss. And if you have any questions, there is a Platform SIG chat; if you cannot join the call because there are too many people, just drop your question there and watch on YouTube. We are still figuring out how to host more people; probably we will be switching to another platform. Okay, so I've tried to post all the links here. Generally, we have three pending discussions about three platforms. These discussions have stalled a bit, but I think we have some representatives; at least I've invited everybody who participated in these stories. So, Durgadas, would you like to start with your case? Maybe we start with that and then proceed with other topics? Yeah, sure, Oleg. Am I audible to everyone? Yes. Okay, great. So the main idea is Linux on Z, right? We have various customers, mainly of the enterprise type, who have been requesting support for Jenkins on Linux on Z. The idea behind that was that they did not want to mix various platforms, due to some management decisions or business strategy. So the question is whether we could have a multi-architecture Docker image for Jenkins. If you look at most of the popular open source packages that are supported by the Docker official images group, they are multi-arch in nature; in the last few months most of the packages have been converted to multi-architecture. Either some of them are cross-compiled binaries, or they have been supported by adding the platforms onto the existing CI. What we learned after various tryouts with Jenkins is that it supports only x86, basically the AMD64 kind of architecture.
Whereas if you try to pull those images on other platforms like Z or even Power, it doesn't work, because of the OpenJDK 8 base image, which needs to be pulled separately on those platforms before the image can be built. So if we can come up with some solution here for how to approach this problem and make Jenkins multi-arch, that would be awesome. Okay, thanks for the introduction. Maybe I'll share my screen again. I've just started making notes in the document, so if you want to add something to the doc, just do that. So we had a discussion in the mailing list; it's here. Actually, we have one official image, jenkins/jenkins. This image is built on trusted.ci, which means that we need custom infrastructure to build and test this image. Mark, would you be able to help me with notes, if possible? Yes, I can type notes. Okay, thank you. Then I will just show you how it works. Generally, what we have: there is this Jenkins Docker image repository, and there are a number of scripts, for example for building and publishing this image. Effectively, we have a Jenkins job on a Jenkins instance which gets triggered; it picks up these scripts, builds the images, and then uses the publish script in order to deploy them. Everything happens within Jenkins. And actually, we have two Jenkins instances. One is ci.jenkins.io, this one; it's a public instance which builds all our plugins and runs continuous integration. So this is public, but this instance doesn't deploy anything. In order to deploy things, we use trusted.ci. There were several reasons for that, but if I recall correctly, the main reason was code signing and preventing conflicts between release flows and continuous integration, which consumes a lot of resources. Maybe Olivier and Carlos can quickly verify.
Mark, you should have permissions, by the way. What was the question? What was the background for using trusted.ci? Sure, it's something that Tyler set up. I think the reason was just that we were already using that machine to build Docker images; at the moment we're not signing any packages related to the Docker image, so I don't think there is a real reason to use trusted.ci anymore. Yeah, you should log in and then you should be able to edit; I mean, reopen the document. My apologies, I was signed in with the wrong account; I need to use my public account instead of my work account. Sorry about that. Here I come. Yeah, that's why I joined using both accounts. Got it. Okay, sorry about that. So these are some historical reasons. But currently there is a hook which triggers a job on trusted.ci when everything is released, and we use these hooks in order to deliver weekly packages and also LTS packages. So these scripts are being used, but it happens on Jenkins. If you open these packages, these are the tags: LTS, weekly tags, et cetera, and everything is directly mapped from this repository. So Dockerfile, Dockerfile-alpine, slim, they're all here. This is one of the images we have now. Actually, when we were working on the Java 10 hackathon, this was a kind of obstacle for us, because we needed to offer continuous delivery so that testers at the hackathon would be able to get the images quicker. So what we set up... sorry, it's not the experimental update center, it's just a separate image, jenkins/jenkins-experimental. If you take a look here, it has some old packages like JDK 10, JDK 11. So it's pretty much the same case as Durgadas described: pretty much the same image, but another parent image. I still don't understand why Docker doesn't allow automating that, because it's a common use case, I guess. But that's what we have.
So what we did: we use Docker Hub automated builds here. These builds happen automatically when somebody pushes to feature branches of the Jenkins Docker repository, in the case of jenkins-experimental. And probably we could start from something like that for multi-architecture images, because obviously, if you want to push official images to jenkins/jenkins, there need to be a lot of discussions and a lot of approvals; but if you start from experimental, it's much more relaxed. So my question would be: is that something you would consider, or do you prefer to go straight to the official repository? So I think your question is for Durgadas, right? For me at least, Mark Waite here, I would prefer to stay with jenkins-experimental so we can be more fluid. Pull requests to the master repository are currently very carefully vetted before we allow them in, because we're changing 100,000 or more potential installations. Yeah, so it's rather a question of permissions, because if you want to add a label there, there is no way in Docker Hub to prevent other instances from uploading another label. So it means that, assuming we have another CI instance which does builds for ARM or for Power or whatever, if something goes wrong on that instance, all the official images can be overwritten. So it's not great. What if we used the deprecated Docker Hub image and their ability to do multi-architecture, effectively as an experiment? Admitting it's still deprecated, but with that, I mean, Durgadas had mentioned that there was some infrastructure already provided by Docker for doing multi-architecture. Can we borrow their work? Yeah, if I understand it correctly, we would do it via the official images. There is no need to create multiple Dockerfiles like this latest pull request does, which would be a pain to maintain. Yeah. The problem with official images is that they need to be manually maintained.
Yeah, so you mean that due to current limitations, we would have to create a dozen such files here, right? That's what the pull request does. Yeah. But I think for us to build images for different architectures, we would need many more architecture-specific Dockerfiles. Maybe we could just do a Dockerfile generator, because from what I've seen there are not so many differences between the images. Well, what the pull request does is add per-architecture profiles to the Docker image. Yeah, so this one. So I want to add something here. Basically, we don't need a separate Dockerfile per architecture, because what I have seen is that the Jenkins binary works perfectly well on various architectures, even on Power, without any modifications; it runs pretty smoothly. As I mentioned, the bigger reason why we are having issues is the base image, that is OpenJDK 8 in this case. And since we are just building on an x86-specific base machine and pushing, that's the main reason why we are unable to support various architectures. So we don't need various Dockerfiles like this pull request depicts. We could just have the same Dockerfile that we have running for x86, as long as we make it run on that particular platform's VM. That's the main issue we would have in trusted.ci or any of the CI that we use. And then we push the resultant image back to the hub, and I think we should be all good. Yeah, right. But in order to do that, we need agents with these particular architectures available on trusted.ci. That's correct, yeah. And here it becomes a bit tricky. I guess Olivier could clarify; it would require serious vetting by the infrastructure team and by the security team.
Yeah, I only see two options: one, getting the machines, or two, using the official images to build maybe jenkins-experimental; and it looks like the second option is a bit easier. Well, I would definitely be interested to have Power or something like that in the Jenkins infrastructure. Some mainframe resources also wouldn't hurt, but I guess it would be complicated. Currently, all our infrastructure is in the cloud. For ARM we can probably get cloud hosting; for Power I guess it would be close to impossible, right? Except through the program we were talking about with IBM. Okay, that's right. So I don't see how we can get that. At the moment we don't have those architectures, so I'm not sure how we can have them. My understanding is that you can use QEMU to do some of that. I mean, I've barely started to get into Docker, but I was looking at multi-arch stuff a little bit, and from what I saw, you could use QEMU and binfmt_misc to do some of the work for other architectures. Yeah, you mean using QEMU to build Docker images, right? Yes. Well... all my experience says that it's not going to work. So I'm not sure how QEMU would work here, because if we are trying to cross-compile using the QEMU emulator or any of the cross-compile options, that basically works mostly on the binaries. We don't have problems with the binaries here, because it's mostly Java-based, so it doesn't have as many issues as we would see with some of the C/C++-based libraries. So even if we use QEMU, I don't think the problem would be solved, because of the OpenJDK base image. So if we want to give it a shot, we could start with s390x: we can provide a VM on which we could try out this process and see how it works.
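For reference, the QEMU and binfmt_misc approach mentioned here usually works by registering user-mode emulation handlers on the build host, so that foreign-architecture binaries (and therefore Docker builds) can run under emulation. A rough sketch, where the helper image and base image names are commonly used examples rather than anything agreed in this meeting:

```
# Register QEMU user-mode emulation handlers via binfmt_misc on an x86 host.
# multiarch/qemu-user-static is a commonly used helper image for this step.
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Once registered, a foreign-architecture image can run (slowly) under
# emulation on the x86 machine, e.g.:
docker run --rm s390x/ubuntu uname -m

# ...and a Dockerfile whose FROM points at an s390x base image can be
# built on that same host with a plain `docker build`.
```

As the discussion points out, emulation is slow and does not address the missing-base-image problem for the JDK, which is why native VMs per architecture were preferred here.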
And then maybe any team, any platform who is interested... I'm pretty sure that, just as we on the Linux on Z team are interested, Power will be another team which would be open to add their support as well, by providing a VM for the architecture of interest. So we could start with s390x and then maybe extend it to whichever platform anyone is willing to maintain and take forward. Yeah, right. If we talk about experimental Docker images, would that be fine with you, or do you prefer official ones? We are fine with jenkins-experimental as well. I think it's always good: we have a much more relaxed environment to start with, so that we can tweak the process and get it right, and once we have everything in place, we are all satisfied with how it's working. So jenkins-experimental, yeah, that's absolutely fine with us. Yeah, so then my proposal would be to start with experimental. Then what else can we do? We could potentially connect the machine to ci.jenkins.io, so that we don't connect this environment to the main Jenkins; probably start from there. Another option would be to actually somehow get you credentials to deploy the stuff. The thing is that if we want to publish on Docker Hub, we will have to provide the credentials for that; that's what I was talking about. So currently we have jenkins-experimental as a single image. What we could do is create a jenkins-experimental organization, and in this organization we could create separate repositories, for example one for s390x, one for Power, and so on. In this case we have several separate repositories, and we can use Docker Hub permissions to grant access only to a particular repository. I'm definitely feeling more comfortable with that approach. Okay. So yeah, that would be my proposal.
We create a separate organization because, even now, if you take a look here, it looks a bit messy: there are Jenkins core packages, but there is also a Blue Ocean package, jenkins/blueocean, because we used a separate tag for that. If we create an organization, we would have more freedom there, and we can give permissions. So, for example, instead of just connecting an agent, Durgadas can get permissions, and then he will be able to deploy on his own. And once we figure out the process and the requirements, we may promote it from experimental to the common jenkins/jenkins image. Would that work for you? Sounds right, yeah, good to start with. So another option — I'm just spawning a new proposal here, we can think about it. Recently, and I'm not sure how many people have followed this, there is a new multi-arch process provided by Docker, which uses the manifest. Everyone uses a registry, right? Docker Hub itself is a registry. There is a feature called Docker manifest, wherein what we could do is have just one image name, say jenkins-experimental. Once we build a particular image on various platforms, we tag it with those platforms, say jenkins-experimental-s390x, jenkins-experimental-ppc. We push these to Docker Hub, and then we create one manifest and tag those images against their particular architectures, so that any customers or users who are interested don't need to pull a platform-specific image: they can just pull by the manifest name, that is, jenkins-experimental. That command itself will look at the underlying architecture and pull the right image. So one doesn't need to know the namespace, one doesn't need to explicitly request the architecture. So what I can do is write up what I just described: I can put those commands and how they work
in the Platform SIG group, so that we can correlate and then see whether that fits into our process, if that is something we want to look at. Well, one advantage of that approach is that the manifest doesn't have to be in the same organization as the images it references. So you could put the manifest into the Jenkins organization and have it reference an s390x image that is in experimental, for people on that architecture. Who was that? David. Yeah, thank you. Well, it's something we could try. So effectively, it still means we start something from here, then we add a manifest and get there, right? Yes, correct, yeah. Okay, sounds good. So Carlos, if you want to speak about your proposal, just go ahead and I will make notes. Sure. So what I would propose is that we start by adding the specific VMs into the CI; that would be the first step anyway, because we need something to build on. Once we have that, we could modify the Jenkinsfile, which today builds based on Debian and Alpine — that is how we are grouping things and building our Dockerfiles, building the Docker image. What we could do is add another group for the supported platforms: we could say amd64, maybe x86, s390x, ppc as the supported platforms. Then we tweak the Jenkinsfile so that, based on the architecture, we make sure we run that particular command or set of jobs on the right agent. At the moment the agent label, I think, is debian or alpine; we could extend it to use the platforms. And then, once it is built on those platforms and we have those images tagged, we could use the Docker manifest commands within the Jenkinsfile, in the publish stage, and push those images to Docker Hub. Yeah, so the problem with that is that we need to add specific infrastructure to our CI. I do not say it's impossible, but it really increases the scope.
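The manifest-based flow described above could look roughly like the following. The repository and tag names here are purely illustrative (Docker Hub namespaces and tags would need to be chosen), and `docker manifest` was an experimental CLI feature at the time, requiring `"experimental": "enabled"` in the client config:

```
# 1. Build and push one image per architecture under distinct tags
docker push example/jenkins:latest-s390x
docker push example/jenkins:latest-ppc64le

# 2. Create a manifest list that groups the per-architecture tags
docker manifest create example/jenkins:latest \
    example/jenkins:latest-s390x \
    example/jenkins:latest-ppc64le

# 3. Annotate each entry with its OS and architecture
docker manifest annotate example/jenkins:latest \
    example/jenkins:latest-s390x --os linux --arch s390x
docker manifest annotate example/jenkins:latest \
    example/jenkins:latest-ppc64le --os linux --arch ppc64le

# 4. Push the manifest list; `docker pull example/jenkins:latest`
#    then resolves to the right image for the local architecture
docker manifest push example/jenkins:latest
```

Note that, as David points out, the manifest list and the per-architecture images it references do not have to live in the same organization.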
I mean, that's why I tried to get Olivier here, so we understand the feasibility of such an approach, because the infrastructure team may just say, hell no. I mean, if you already have that instance, I think it could be possible to add an agent with a specific label, so we only build what's needed on that agent. So at the moment, I don't see any big no. For ci.jenkins.io I definitely don't see a big no; and for trusted.ci, I would first like to see it working on ci.jenkins.io. Okay. So it means that we would need credentials for deployment on ci.jenkins.io, but it doesn't seem to be a blocker. Yeah, we just need to ensure that we use different credentials from trusted.ci. Yes; at least if you are speaking about credentials to publish on Docker Hub, if I'm understanding correctly, we still need a different organization so we can split, so we can use different credentials. Right. Because the reason why we have trusted.ci is that we confine the credentials to publish the image to trusted.ci; that's the only reason why we have it. And we try not to put critical credentials on ci.jenkins.io, because that instance is public. Yeah, totally makes sense. Okay, so Carlos, would you also like to speak about your proposal? Yeah, what I said before is that if we cannot get the infra and the VMs for CI, then we can use the Docker official library. The Docker library just allows us to keep the Dockerfiles simple, and it is they who provide the build machines for each new version that we want to publish. And the other thing we need to consider is that the agent images should also be there for different architectures; that's probably more important than the Jenkins master. Could you say that again, please, Carlos? I missed it, maybe over the typing. What was more important? Agent images. Agent images. Ah, thank you. Got it.
Yeah, so whatever we do, we can do for agent images as well. Agent images in this sense are not much different from the Jenkins master image: you just replace the FROM at the beginning of the Dockerfile, and you are done. So it's something we can do, and probably something we will need to do. And do you consider agent images? So when you are saying agent images, are you also saying that in the FROM we give the particular platform-specific base image? Yeah, right. So, for example, there is the Docker agent image, or the Docker JNLP agent. These images are pretty popular, for example with the Kubernetes plugin. So if you want to run a Kubernetes cluster on Power, then you will likely need these images specific to the platform as well. Okay. Yeah, actually, there is just a shell script there; and if you take a look at the Dockerfile, it's not that impressive, actually. So it's much easier to just generate this file instead of doing something else. It inherits from the docker-slave repository; that one is a bit bigger, but still not that impressive. So these files could potentially just be generated. If we have whatever CI set up, we can get it done. Yeah, I think it makes sense; I think we will likely need it eventually. Who was that? Cindy? Yeah, that was Cindy. Hi guys. So I think if we find a generic way, adding agents to this formula will be easy. Yeah, it will be easier, because we will just follow the same approach. Actually, there are only three main images we offer, like this one. There are also some images, for example for the Remoting Kafka plugin or for the Docker Swarm plugin, but they're out of our main scope for now. So this is what we could do. So if I understand your proposal correctly, Carlos, you propose to give permissions and to use external infrastructure to build these things, right?
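The "just replace the FROM and generate the files" idea above can be sketched as a small script. Everything here is illustrative: the template stands in for the real agent Dockerfile, and the per-architecture base image names are assumptions, not the actual images discussed:

```shell
#!/bin/sh
# Sketch: generate per-architecture Dockerfiles by swapping the base image
# in the FROM line, as discussed above. Image names are illustrative.
set -e

# Stand-in template (the real one would come from the Jenkins Docker repos)
cat > Dockerfile.template <<'EOF'
FROM openjdk:8-jdk
COPY jenkins-agent /usr/local/bin/jenkins-agent
EOF

# Rewrite the FROM line for a given architecture and base image
generate() {
    arch="$1"
    base="$2"
    sed "s|^FROM .*|FROM ${base}|" Dockerfile.template > "Dockerfile.${arch}"
}

generate s390x   s390x/openjdk:8-jdk
generate ppc64le ppc64le/openjdk:8-jdk

# Show the rewritten base image of one generated file
head -n 1 Dockerfile.s390x
```

The rest of each Dockerfile stays byte-identical across architectures, which is exactly why a generator (rather than a dozen hand-maintained files) is attractive here.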
I guess submitting pull requests to the Docker library, to the Docker Hub distributions. And remember, if we are going to do weekly images, they are probably not going to be happy with that, because for them it's a manual step. But at least LTS, or less frequent images, could be done there. Yeah, so with this approach I have one major concern, about webhooks. I mean, in order to integrate properly, we need webhooks, because otherwise you can barely get the information about when Jenkins is actually released. And so far, we do not offer our own webhooks which would say that the release is done. What we could do, of course, is to just hook onto the jenkins/jenkins deployment: Docker Hub allows creating webhooks, and that means that when the image is deployed, we know that everything is in place, that the weekly release is deployed to Artifactory, and that everything is ready. So that could be the best workaround. But maybe we need something Jenkins-specific to send webhooks. Any thoughts on that? Am I the only one who hits this issue? So if we are hooking up with Docker official, if we are trying to go down the official library path, do you think there will be some discussions needed with the Docker official images group? We might not have that much freedom to release as and when we want. So just to explain: the jenkins/jenkins official image is maintained by the Jenkins project. There is also an image maintained by Docker, but it's long dead, I guess. Yes, so there is an image, it's maintained by somebody, but it's not maintained by the Jenkins project, and you may see that it's outdated. So when we talk about official images, we talk about official images shipped by the Jenkins project, and they are here. So it won't require involvement of Docker; it will require involvement of the infrastructure team. But Olivier has access to Docker Hub, I have access to Docker Hub, so we can set it up.
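For context on the workaround mentioned above: a Docker Hub webhook fires an HTTP POST to a configured URL on every image push. The payload shape below follows Docker Hub's documented webhook format of the time; all concrete values (names, URL, timestamp) are illustrative, not taken from the real jenkins/jenkins repository:

```json
{
  "callback_url": "https://registry.hub.docker.com/u/example/jenkins/hook/abc123/",
  "push_data": {
    "pushed_at": 1417566161,
    "pusher": "exampleuser",
    "tag": "latest"
  },
  "repository": {
    "repo_name": "example/jenkins",
    "namespace": "example",
    "name": "jenkins"
  }
}
```

A downstream multi-arch build could listen for this, inspect `push_data.tag` to distinguish weekly from LTS tags, and trigger its own per-platform builds, which is the "hook onto the jenkins/jenkins deployment" idea described above.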
I thought Carlos's proposal there was to use the Docker official image, because that would allow us to use their infrastructure for building on other platforms. Right, that was my understanding. Oh, so you mean not this image, but shifting back to that one? Or creating a new one, which would be very radical. But in this case we would just need a proper Dockerfile, right? Or something more? All right, what would be needed to use the official Jenkins library entry? So they have a separate docker-library project where they keep pointers to the Dockerfiles and other metadata, like the platforms that you want to build on. Yeah, right. And anything to match up to that. Mm-hmm. Yeah, so there is amd64; let's take a look. So effectively, it points to the Jenkins Docker repository. Right, so it's entirely possible it can use exactly the same Dockerfiles that you're using for jenkins/jenkins; there's just another level of indirection. And that's where the interaction with Docker comes in: when you want to make updates to that, to point, for example, to a newer version of the Dockerfile, that requires a pull request to be approved by the official images team at Docker. Yeah, so for us it would be a bit complicated, because we have frequent releases. How many pull requests would we need to create each week? Even if we automated that, it would be a bit of pressure. I don't know why they need to validate each PR. Well, because they effectively consider themselves the curators of those images. Yes, but they do not verify that each project is correct. I mean, in the case of Jenkins, I guess they should delegate the responsibility to the maintainers of the master branch of the Jenkins Docker repository. Yeah, it may be worth having that discussion with Tianon, because if it literally is only the version of the binary that you're updating each time, then, as you say, they don't really have any input on that. Yeah.
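For reference, the docker-library "pointers plus metadata" mentioned above are plain-text stanzas in Docker's official-images library repository, one file per image. A hypothetical entry for Jenkins might look like this (all values illustrative; the real file would pin an actual commit and the agreed tag scheme):

```
Maintainers: Example Maintainer <maintainer@example.com> (@example)
GitRepo: https://github.com/jenkinsci/docker.git
GitCommit: 0123456789abcdef0123456789abcdef01234567
Directory: .
Tags: latest, lts
Architectures: amd64, s390x, ppc64le
```

The `Architectures` line is what makes Docker's own build farm produce the per-platform variants, and bumping `GitCommit` for every release is the pull request churn being discussed here.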
So I'm probably not that familiar with this official library approach. Could we just do that for the multi-arch images only? I mean, we keep using jenkins/jenkins for amd64, and for all the other architectures we use the library version. Do you see what I mean? I don't know if I explained it clearly enough. I mean, my concern there would be how Docker is going to take it, where you're effectively just using them as a convenient build engine rather than really using their official image mechanism the way they intend. The other option would be to have auto-builds there — does that work for multiple architectures? I don't think it does, unfortunately. No, it doesn't; the official images don't use their own auto-build mechanism. So David, I think what you're saying is that you're worried Docker may object to us using them as a build service for experimental images, but it feels to me like that's exactly the purpose of this kind of thing. We do need to retain control of the jenkins/jenkins build, at least for the near term. We may, in the future, switch. But why not give them the chance to bring us back into their fold with this experiment? If they persuade us that it's worth going all the way back, all the better. But this is our chance to try to go back, if they can demonstrate that they'd be prepared to accept updates on a sufficiently frequent basis. So Carlos, were there any barriers that you saw in your proposal? I'm trying to understand what risks might be hiding there. Is there a lot more work to do in that one, rather than approaching it the way Durgadas has suggested? Well, if we do official images, you have to submit a PR every time there is a new version. And I don't think they like the fact that we release every week; that's why we switched away in the first place, because it was a lot of manual work for them.
So it seems reasonable for us to ask them if they're willing to support it. If they say, no, you're too noisy as a release engine, you release too frequently, then that probably blocks that entire path, right? Well, it would also be a problem for us: it creates a lot of manual work for people, even if we automate something. It could be an option for LTS and for agent images, because they are not released that frequently. Yeah, but still pretty often; and note that for LTS releases we often have security releases, and for security releases the delivery time is critical. So if we are, for example, one day behind with these Docker images, it's a major problem. I would rather prefer to keep the continuous integration, and at some point continuous delivery, infrastructure controlled by us. Although the guys at Docker are usually pretty good at accepting pull requests, they are all in a far-away timezone, and you, at a minimum, have to wait for them to wake up in the morning. Yeah, and we had a previous issue where Tyler could not update the Jenkins infrastructure until the Docker image was published. You need to release the WAR file, then you need to notify about the security issues, then you have to wait for Docker for a day. So we released the Docker image before he could update the Jenkins infrastructure; it was a problem. Yeah, right. But if we talk only about experimental images, we are fine; if we start doing official Jenkins images this way, it likely won't fly. That's why I was proposing that idea: to keep using jenkins/jenkins for the official image that we publish to people, and to use the library only for the experimental image. If those don't have the latest security fixes, I guess it's less important. Yeah, but at the end of the day, I don't think your customers are going to be happy with that. But it's a stepping stone, to at least prove that it can be made to work. I don't know whether it's still worth doing.
I think we give it a start; whatever works as stepping stones. We take it one milestone at a time. As long as we first have a platform-specific image somewhere on Docker Hub which any users or customers can pull and start using, I think that would be our first milestone. And then we iterate and improve, so that they are more and more satisfied as time goes by. But firstly, at least, we need to get it out there in some way or other, so that they have something to consume. Yeah, right. So I believe this way would probably be the most straightforward: we can deploy it quickly, we don't have a dependency on the infrastructure, and you would be completely managing the release process. We'll probably need to find a way to deliver webhooks to you reliably, but it's something we can do. Then this process could be another option, assuming that we have CI configured. And this one is probably the trickiest process, because it also depends on the Docker Hub team. So if we start from something, I would rather vote for starting from here or here. And if there is interest, we could be doing releases in parallel. Right. So Carlos's proposal would definitely be very good, as long as Docker is very open to all the PRs and quick with things, and we have the control. But that seems very unlikely based on how we have seen things going, right? It's a stricter process there. So Oleg's proposal, I think, is something where we can quickly at least get the image onto Docker Hub. But eventually we might need to align with how the rest of the open source teams are publishing their images, by following the manifest approach. So Oleg's proposal would be a birthday cake to start with, and then, if we can implement the proposal that I have put forward, that would surely be the wedding cake.
OK, so something like that: start from this one, then from yours, right? Yeah. OK, regarding that one, does it make sense to start working on it in parallel? Sure. Carlos, what do you think? I have no personal experience working with the official Docker library, so I'm not sure what the downsides would be here and how long it could take. I think it could be interesting to try Carlos's approach, but it would depend on who does it and how long it takes, of course. Carlos, are you around? I see in the chat that Carlos had to step out. OK. Yeah. So Oleg, I think we don't lose any flexibility by noting that Carlos's proposal is available; I don't see much benefit to starting it in parallel. The thing for me is that your proposal and Durgudas's proposal let us start quickly and get feedback now, and we're not blocked from switching to Carlos's technique later. We can still switch. I don't mind giving Carlos's proposal a go. I mean, particularly if we're happy to use that existing deprecated Jenkins image, then I can try getting a pull request in against it that would pick up a newer version of the Jenkins Dockerfile and add in the multi-architecture support, and just see what Docker says to that. Maybe they turn around and say, what are you doing, putting updates on a deprecated image? Right. I would bias towards talking to Docker first, or maybe use that as your way to talk to them, because I don't think we want changes coming in on that deprecated image. It'll create more confusion for users, who would think, oh, the deprecated image is no longer deprecated. Right. Yeah, and again, I'm second-guessing them, but I can't see them liking the idea of us having an experimental image there, because that really does seem like we're just using them. Yeah, right, so if we have an experimental image, in this case it's not really having an experimental image.
It's more having an experimental process to release an image. Whatever you call it, I mean, I do not mind. Oh, sorry, I hate these notifications. So here we have long-term support and weekly. If we have a number of experimental images, we can just put experimental here somehow; I mean, it could be three columns in general. And here we can explicitly list all supported platforms. So something like that we can do in order to raise visibility, and I don't see much problem with that. Yeah, one option would be to publish, in those experimental images, only the platforms other than x86, which would ensure that your average user doesn't pick them up mistakenly. Yeah, right. So from the jenkins.io website, I'm happy to do the work, get it there somehow, get it documented a bit. Currently, if you click Docker, you just get to this image. Instead of that, we could redirect to a page which describes how to do it and what the restrictions are, or just redirect to whatever images we create in this format. And in such a case, another benefit is that whoever needs the platform maintains it. So for example, if IBM creates this image, then IBM gets control over this image. We somehow align and maybe coordinate our work, but in such a case it's obvious who maintains it, and that may be helpful. So do we agree on creating this organization, Jenkins Experimental? Yes. OK. I can do that. Does it require a JEP? It probably should, just to alert people that, yes, this is a decision we've made. OK. I just have a question about the Jenkins Experimental organization: how many people will have access to it? I mean, how many people will be able to push images there? Because my fear is that, OK, we just create an experimental organization, one person will push images for one architecture, and then someone else would like to push different images, again and again.
And so we arrive at the same situation that we have at the moment, where we have one organization and we don't want to share the credentials with too many people. Yeah, right. But my thinking is that if someone is responsible for one architecture, maybe we can just create an organization just for that architecture, jenkins-whatever-the-architecture. Yeah. Actually, you do not need separate organizations to give access to particular images; you can add collaborators. OK. So I would rather prefer to have a single organization, because if we split into multiple organizations, I guess at some point we'd get a message from Docker as well about what we are doing. So yeah, I would rather prefer to have experimental as a single org, and then we grant specific access per repository. OK. Do we need a custom adoption process there, or are you fine with the default one? I mean, the default one is something like pinging the mailing list, with two weeks for comments. Yeah, maybe we need to bump it to something more. OK, something like that. So this I can probably do by next week. Anything else as takeaways? Yeah. So which image do you want to start with first, this one, or this one, or both? Personally, I would go with the s390x one. Yep, this one. I thought that was the one that Durgudas was most interested in. That's correct, yeah. I'm still interested in it. OK, so I create this and these permissions. So it will be our reference implementation for the JEP. OK, and we need to think about webhooks. So do you need any kind of webhook to trigger the build from your side, or can you just poll the repository or poll the artifact storage, or how would you like to trigger the build? Durgudas, I believe the question is for you. Yeah, yes. Sorry, can you just repeat? I just couldn't get the last part. OK, so assuming that we have this repository and we have a build process set up somewhere, how do we deliver the notification to you that the thing needs to be released? OK, so what are the available ways, right?
By which we can do the delivery, which is what we are trying to figure out. Is it via some kind of webhook that we need to set up, because that would be one of the ways, or do we start with something simpler, through some channels, and then move to a more complex way of integrating with webhooks and so on? So probably I could just add an extra item to define a way for you. Sure, yeah. Something that would be very easy to start with, so that at least we can make progress, and then we can make it more robust, right, so that others can follow. Yeah, right. So I think that should do the job. Any other extra items for now? Yeah, David, I can add an extra item for you to explore this option if you want, but I'm not sure whether you want to commit to that. Oh, well, yeah. I mean, if we decided we'd like to try the other route, then let's just hold off on that, I think. Do you want to do that? I don't mind either way, and it sounded like the mood was to stick with trying to do it in Jenkins Experimental. OK. So I would love to have it going in parallel, David. I was concerned that it just wouldn't fit with their process. And yeah, I would love to use the Docker resources to do it. Yeah, OK, I'll take a look at it. I don't want to dissuade you from looking at it, because I think it would be great if we could run it in the Docker environment, if they were willing to meet our needs with regard to how responsive they need to be; past experience worried us that they may not be willing. Right. OK. No, sure. In that case, I'm happy to take a look at it. OK. Things are evolving, so it's still interesting to see whether it's better now or not. Yeah, certainly. I mean, I previously owned an official image and we put updates in a couple of times a month, and that was not problematic. But it was still a manual process and there were still delays in that process. OK. So yeah, I think it looks like a plan.
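The "something simple to start with" for delivering release notifications, discussed above, could look like the sketch below: a handler that inspects an incoming webhook payload and decides whether a downstream multi-arch rebuild should be triggered. The payload shape, the repository name, and the tag field are all assumptions for illustration, not the actual Jenkins infrastructure contract:

```python
import json

# Hypothetical release-notification handler. A webhook delivers a small JSON
# payload; we rebuild only for tagged releases of repositories we watch.
# Repository name and payload fields are made up for this sketch.
WATCHED_REPOS = {"jenkinsci/docker"}

def should_trigger_rebuild(raw_payload: bytes) -> bool:
    event = json.loads(raw_payload)
    repo = event.get("repository", "")
    tag = event.get("tag", "")
    # Ignore untagged pushes and repositories we don't care about.
    return repo in WATCHED_REPOS and tag != ""

# Example: a tagged release on a watched repo triggers a rebuild.
print(should_trigger_rebuild(b'{"repository": "jenkinsci/docker", "tag": "2.204.1"}'))  # True
```

The same decision logic works whether the notification arrives via a real webhook endpoint or via periodic polling of the repository, which is why starting with the simpler channel doesn't block switching to webhooks later.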
So I'll do this part this week, and this part this week. Yeah. I just need your Docker Hub accounts. Sure. OK. So yeah, once we start from that, we also have our main Windows topics open. But if we start from here, then we can discuss with other contributors how they approach it, so we get more images. Sure. OK. I guess that's it for now; we are going a bit over time. So I guess probably we need to set up the next meeting. There are some action items in our agenda which I marked as tentative, so maybe we could just follow up in the Gitter chat, or have another meeting, maybe next week. Yeah, next week would be fast for me, but I can make time available; two weeks out might be a little easier. Yeah, right. Or we can just process it on the mailing list. So the only thing with time pressure here is the Java 11 hackathon. Oh, right. My update is pretty simple: most likely I'll have to miss the hackathon, so if you're ready to drive it, then you have it. Something like that. And whether we meet next week or in two weeks, either way those are both before the hackathon, so either is great. Yeah, and for the rest, we just go to the mailing list. So here, the update is that there is an OpenJDK 11 release candidate, so I will update our experimental images and verify that they work on the RC. That's it. And here, my action item is to submit the process. I referenced it in several threads, but I still haven't submitted the JEP draft; this thread reminds me of it. So yeah. And Alex is not around, so we move the next item to the next meeting anyway. OK, then that's it for today. So thanks a lot, Durgudas, Cindy, for participating. And thanks a lot for your willingness to work on this topic, because I think it's really important to have broad platform support available for Docker. Everybody is moving there, so we should make some progress in this direction as well.
And we will do our best to help you with this work. Great, our pleasure. Yeah, and also thanks to everybody else for participating. You have your action items; I will deliver on mine. And we have the Platform SIG mailing list, so we sync up there. Thanks, Oleg, very, very much. OK, thanks everybody. I'm stopping the broadcast. Thanks to everybody who watched it, by the way. All right.