Hi all, welcome to the Cloud Native Special Interest Group meeting. Today is May 90th, and we are actually reconvening after a one-year break in Special Interest Group meetings. So it's great to see so many people on the call, and hopefully we will have even more participants in the next meetings. Today's agenda is to discuss how we restart the Special Interest Group, how we establish regular meetings, and to have introductions. Another topic I put on the agenda is the Jenkins Kubernetes Operator. Last week we had a great online meetup by Tomasz, and we could use this venue to discuss what we could do to help the Jenkins Kubernetes Operator project, how we could adopt more features there, and how we could deliver changes upstream. Okay, so regarding the Google Doc and how we usually handle that: we have a Gitter chat, and in this Gitter chat we post the meeting link and meeting notes for every meeting. So you can go there and find the Google Doc, and I'll also post it in the Zoom chat. Thanks so much. So yeah, we have a chat where we can have all these discussions if needed. How do we usually organize the meetings? We have a running Google Doc where we take meeting notes, and everyone is welcome to participate. This document should be open for comments, so if you want to add something, just put it there, and I think we can just begin. Since we have a lot of participants on the call, my suggestion would be to have quick introductions and after that proceed with the common agenda. Who would like to go first? I can go. Okay, so this is Marky. I'm the one behind the CDF Jenkins Zoom, so everybody on the CDF Zoom, that's actually me. My name is Marky. I am one of the Jenkins core maintainers, as well as a Summer of Code org admin and lead of the Pipeline Authoring SIG. I am also a member of the Kubernetes org, as well as one of the release managers for the versions that get released. I'm happy to be here.
Thank you so much for bringing this back to the forefront, Oleg, and welcome everybody. Thank you. So I could also do a quick introduction then. My name is Oleg Nenashev. I'm one of the Jenkins core maintainers, I'm also a Jenkins board member and the leader of a few special interest groups. One of them is the Platform special interest group, and when the Cloud Native SIG was running in its previous iteration, I was helping to organize the meetings, etc. I'm really happy to see this special interest group back, because for Jenkins, being friendly to users running in the cloud is really important, and this meeting is a great opportunity to discuss challenges, coordinate projects, and see how we could help each other and the Jenkins users who run on Kubernetes or on other cloud platforms. So nice to see you all here. Anyone else? Hi all, I'm Sumit. I'm actually working on one of the GSoC projects, which is the external fingerprint storage, and Oleg is my mentor along with Andrey and Mike. So that would be one of the founding projects, I think, for this SIG — the external fingerprint storage project. Happy to see you all, and I'll keep you updated on my progress on that project. Thank you. Akram, sorry for the previous introduction. That's okay. So my name is Akram Ben Aissi. I'm working for Red Hat. I'm the maintainer, with Shavad and Vibhav, of the Jenkins images for OpenShift, and also the maintainer of three plugins that we offer for Jenkins on OpenShift, and a contributor on the Jenkins operator, which is the topic of this call now. Thank you. So hi all, I'm Sladyn. I've been an active Jenkins contributor; I've worked with Jenkins in the past on the Jenkins Community Bridge project, and I'm also a student in the Google Summer of Code, working on the Jenkins custom distribution service. So I'm really excited to be part of this special interest group and to help in whatever way possible.
So thanks everyone for coming here, and we'd like to move this group forward. Thank you. So I can go next, maybe. I'm Zered, I'm from Red Hat too. I think Akram said almost everything about what we are doing at Red Hat related to Jenkins. So yeah, we are happy to see this community grow a bit. Around the operator, we started working on it a bit on our side, so we're happy to join. Okay, so I can go next. This is Vibhav. I work with Akram and Javi on Jenkins for OpenShift and the plugins, and I'm also a contributor with them. And on the side, I also contribute here and there together with Akram. So yeah, that's it for me. Thank you. And hello, I'm Tomasz from VirtusLab, and I'm basically the guy behind the Jenkins Kubernetes Operator. Okay, thank you. So yeah, we have a lot of people who work on the Kubernetes operator, and this is great. So thanks a lot for joining the meeting. I know that the Kubernetes operator has its own meeting, but right now we have a few conflicts we need to resolve in our calendars, so it's great to have this additional call. I'll share my screen again so we can see the agenda. You can see that there are some meeting notes; please feel free to add anything that is missing, because it's basically a collaborative document. I'll quickly speak about Cloud Native SIG 2.0, especially for those who didn't participate in the first edition. There is a thread in the developers mailing list about it, but we can summarize the status here. The previous SIG, when we started it, was built around Jenkins architecture changes. So if you go here, you can see that there are projects like Pluggable Storage, and also Cloud Native Jenkins, which boils down to stateless Jenkins, Jenkinsfile Runner, and other such components. Then there is Configuration as Code, and the Jenkins Kubernetes Operator, which was added there. But at that moment, the special interest group was not that active.
So although it's formally listed here, I don't think that there were active discussions at the SIG venue at that point. So this is what we had in 2018–2019. Some projects made good progress. For example, if you go to Pluggable Storage, you can see that pluggable storage for logs was, to some extent, delivered. We also submitted some other projects and discussed changes in other areas, but at that point we didn't go further. Same for Cloud Native Jenkins: we had Jenkinsfile Runner and other components, but all of it is still under development. Configuration as Code evolved pretty well, but I would say that at the moment, Configuration as Code rather has its own special interest group. It's registered as a subproject within Jenkins, and this project basically has its own meetings every two weeks — the next meeting is tomorrow, by the way — and it evolves pretty well. But again, this project was quite detached from the special interest group. We even had a discussion about moving it to the Platform special interest group half a year ago. We didn't do the move, but I think that in its current state, it rather deserves to be a separate SIG. And the Jenkins Kubernetes Operator — the people on the call know its current state. We had a meetup last week with a summary, and thanks to Tomasz and all the other contributors, the project evolves actively. So it's great to see its evolution; a lot of changes are incoming, and hopefully there will be a next release soon. I tried this project last week, and it works pretty well. So this is what the scope of the previous SIG was. And one of the reasons why this SIG wasn't as active as we would like is that it was heavily focused on the architecture changes. Pluggable storage, stateless Jenkins — all of them presumed major changes within Jenkins, and it was quite difficult to find contributors who were interested in this area.
So for the new edition of the SIG, my proposal was to switch more towards use cases for users, and to not necessarily be cloud native. My proposal is to focus on making Jenkins cloud friendly. You don't have to make Jenkins itself cloud native to get benefits from running in the cloud and from cloud native environments. At least that's my approach, and I would be happy to discuss it. So what do you think about that? What do you think about the scope of interest for this special interest group? I am a plus one. Sorry — I'm a plus one, it does make sense. But how will this work? You have mentioned integration with cloud native CI/CD engines like Tekton and Jenkins X. How do you see this being implemented? Yeah, so it's a good question how it would be implemented. I can just describe my personal vision. Jenkins Pipeline is basically an engine within Jenkins, and even now we have multiple implementations of this engine. For example, when you run with the sandbox or without the sandbox, effectively you have different pipeline engines under the hood executing your pipeline. And there is no blocker to making these engines extensible and, for example, making Tekton one such engine. So, for example, you would be able to run Tekton pipelines but at the same time get all the benefits of multibranch pipelines and all the harness around the core engine which is available in Jenkins. At least in theory. So I think that's how the support would come: a Tekton-based pipeline engine under the hood. Sorry, I might have missed the question. My question was: doesn't Jenkins X do that already? So the current state is that Jenkins X is a separate project. Jenkins X is a project under the umbrella of the Continuous Delivery Foundation, actually the same as Jenkins and Tekton. So if you go here, you can find that there are four integrated projects and one incubated.
And all of them basically operate in the continuous integration and continuous delivery space. Moreover, there are some high-level abstractions. For example, in the CD Foundation there is the Interoperability special interest group, which is specifically dedicated to ensuring that the services can interact with each other. And for this particular project, integrating with Tekton and Jenkins X could be a good subject for that SIG. Why I put it in the Cloud Native SIG scope is because Tekton and Jenkins X are cloud native CI/CD engines, so for me it would be useful to have them in the scope. But yeah, full disclaimer that right now there is no real effort planned in this direction. If we have contributors, we will be working on that; if not, then not. So yeah, maybe it's an aspirational goal, but I think that in principle it would be really beneficial to Jenkins users. And, for example, if you go to the Jenkins roadmap — this is what we are currently building; this roadmap is a draft — there is a section for Jenkins on cloud platforms. And here, for example, we have Tekton support as something in the future without specific dates; well, there are no dates there at all. Right now it's rather a subject for discussion, but if you're interested in contributing to this project, I think it's something we could discuss at the SIG meetings for sure. Any other comments? Can we mute that one? I'm not sure who it is, but everybody seems to be muted now. Okay, well, anyway. So yeah, this is the scope. And again, it's not that we would be working on and discussing all these areas at once; it's a scope of interest. So depending on the meeting schedule, we can set up topics. For example, today we discussed the Jenkins Kubernetes Operator, which is here. Next meeting, we can take a look at another topic.
For example, Jenkinsfile Runner or the Kubernetes plugin, and go on that way. But still, the SIG could be a good umbrella for all these projects. I think this is a great idea; I like adding this to the scope. So yeah, if you're fine with that, I'll probably, after the meeting, maybe later this week, submit a patch to this page just to adjust the areas for improvement and the scope of interest, and maybe tweak the projects a bit. But yeah, I think it would be a good start to just start working on this page. And a topic which is quite close to the areas of interest is ongoing projects. Because, as we already discussed, Tekton would be a great initiative in principle, but there is no ongoing work. But there are some projects which are currently ongoing, where we could do some collaboration even now. And we could use the SIG to promote these projects, to help find contributors and to resolve obstacles in the Jenkins community. So I started by preparing a list of projects which are closely related and currently ongoing. Two topics are more or less no-brainers for me because they come from the previous iteration of the SIG: those are the Jenkins Operator for Kubernetes and Jenkinsfile Runner. But there are a lot of other projects we could discuss, and I suggest we quickly go through them if everyone is interested. Okay, so the first project on the list is a project by Sumit about external fingerprint storage. We started it as part of our GSoC effort. And if you go to the Cloud Native SIG 2.0 page, there is a pluggable storage page. So yeah, there are a lot of storage types, and fingerprints are here. Why is it important? Because if you want to do CI/CD, you likely want a kind of traceability and observability for your artifacts inside Jenkins or outside Jenkins. And if you move outside, especially if you want to build a system on top of Jenkins, having an external storage would be great.
And right now we have an ongoing discussion about how it would be implemented. Maybe, Sumit, you would like to summarize the current plan. Hi, so basically what we are targeting under this project is, first, that we're trying to add an extension point which allows the core to support external storages. Second, we are trying to build a reference implementation for Redis at the moment. And third, once we can do both of these things, we can actually trace these fingerprints across different Jenkins instances. So that is the core idea of our project. And right now we are in the first week of it, so as we develop it, we'll post more updates. Oleg, do you have anything else in mind that I should be talking about? Is that good? I think it's a good summary. Thank you. So what's the opinion of other participants regarding pluggable storage? And if so, what's your opinion about this particular topic? Can I ask you to elaborate on what it is supposed to be in terms of usage? I'm just now reading the page about it. It's about tracking usage of artifacts, credentials, files and so on. What do we mean by tracking? So basically in Jenkins there is a database called the fingerprint storage. Well, it's basically a bunch of files on the file system right now; it's not a real database. And it stores different metadata about artifacts, and this metadata is extensible. So, for example, you can trace Docker images, you can trace artifacts. And this data ends up accessible to Jenkins users and Jenkins jobs so that they can do something with it. For example, you can trace usage of your artifact or your container image across multiple jobs if needed, or within your delivery pipeline if it spans multiple jobs. Okay, so let me try to give a simple example.
Imagine that I built a simple Java web application and I referenced some Maven artifacts in my pom.xml or something like that. The Jenkins pipeline will trigger the build of my Java application at some point. Will any of my Maven artifacts, for example, be referenced in this database? And would it be the same for anything that I build — every class, for example, that I build or compile? It really depends on the implementation. For artifacts, when you publish an artifact, it can be automatically added to the fingerprint database. And what that means: you have one job which builds your artifact and another one which deploys it to a server, for example a new release of your library or your service, and then you can trace this dependency. Or, another use case: staging, so you can pass your artifacts through environments and again trace usage, trace build results and test results within these environments, and query this data within Jenkins, so you can quickly see what happened with this artifact. Okay, and what is the fingerprint that is taken? Is it like an MD5 or SHA-256, something like that, of every file? Yeah, so I'll probably dive into the code a bit. In Jenkins, there is a Fingerprint class inside the Jenkins core. And this Fingerprint class is basically an item which has an ID — basically MD5 at the moment. And it also has an extensible set of — sorry, facets. A facet is basically an extensible object in Jenkins, and any plugin, any step, can put additional information into the fingerprint. So you create an artifact, you have a unique ID, and then any instance within Jenkins can put additional details there, which can contribute to the user interface and which could be queried in the future. And this is basically the abstraction. So it really allows you to do everything, as long as you have a unique ID. Okay, do you already have an idea of what the default implementation would look like?
I mean, which kinds of files will it be tracking? I'm just asking because of performance. For example, let's say you have a very naive approach, like: I would track all the classes and all the things that I built during my pipeline. It would take a huge amount of CPU to compute this all the time. I don't have an answer about this right now. Sumit, what do you think? Yeah, just for example, for a simple web application: let's assume that I want to know which class of all my dependent Maven jars I used. So I would get every jar extracted, get the MD5 for every class inside, and I would store it in Redis — sorry, in the file system or whatever. This could be very CPU consuming, and probably, for a bigger application, it will be storage consuming as well. So what will the implementation be? Will it be distributed? Will it run inside the pipeline? Will it be synchronous or asynchronous? Yeah, these are the kinds of questions I'm asking right now. I'm just trying to understand what kind of usage we have for this. I see a lot of data and CPU cycles, but right now I don't know what kind of usage we can have. So basically, from what I understand: whenever we want to fingerprint a particular file, that is an extra build step that we add on our own. And fingerprints by default are very small in size, so they don't take a lot of space. I think Jesse ran some tests for that. And in the current implementation, all these fingerprints are stored as-is in the Jenkins home, right? That's not a good way to be storing fingerprints, because we are dependent on the storage disk of the Jenkins home. So by taking them out of there into an external storage, I think that is an extra step that is much better, that is actually solving a problem.
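The Fingerprint abstraction discussed above (an MD5-based ID plus a pluggable storage backend) can be sketched roughly like this. This is a hypothetical illustration, not the real Jenkins API: the interface and class names are invented, and an in-memory map stands in for a Redis backend.

```java
import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

// Hypothetical extension point: a fingerprint storage keyed by an MD5 hash.
interface FingerprintStorage {
    void save(String id, String metadata);
    String load(String id);
}

// In-memory reference implementation, standing in for an external store like Redis.
class InMemoryFingerprintStorage implements FingerprintStorage {
    private final Map<String, String> store = new HashMap<>();
    public void save(String id, String metadata) { store.put(id, metadata); }
    public String load(String id) { return store.get(id); }
}

public class FingerprintDemo {
    // Compute the MD5 hex digest used as the fingerprint ID.
    static String fingerprintId(byte[] artifact) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5").digest(artifact);
        StringBuilder sb = new StringBuilder();
        for (byte b : digest) sb.append(String.format("%02x", b & 0xff));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        FingerprintStorage storage = new InMemoryFingerprintStorage();
        String id = fingerprintId("my-artifact-1.0.jar".getBytes());
        storage.save(id, "built by job #42");
        System.out.println(id.length());      // 32 hex characters
        System.out.println(storage.load(id)); // built by job #42
    }
}
```

The point of the extension point is exactly what is described above: the core only sees the interface, and a Redis (or Azure, or AWS) plugin can supply the implementation.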
And as far as the database goes, it's easier to configure a database to have, you know, replica sets or any other such things that we would want. So maybe — I don't know if I answered your question or not, but that would be my take on it. Yep. I would also say that the implementation is yet to be seen. What we mostly target in the pluggable storage stories is offering extensibility APIs. For example, with what we did for the artifact manager two years ago: we created an extensibility API, and then, for example, the Microsoft Azure team, who have experience working with Azure storage, created a reference implementation — well, an implementation of the APIs for Azure; same for AWS, etc. Our main goal was to first provide the architectural capabilities so that users could do so. And to answer your specific use case: it really depends on what you run and how you index fingerprints. If you want to index every class in your build, you totally can do it in Jenkins, but it comes at a cost, and Jenkins won't index it like that by default. So it's your educated choice as a user if you want to do that. We have Jesse on the call. Yep. Hi. Just here to see if there's anything I should add, but probably not. Okay. So, yeah, we spent some time on this project; I suggest that we slowly move on. We could have a special session at the next meeting about the storage, because it's an interesting topic and we could do a deep dive. And I guess it would be really interesting for us to meet other participants in this project, because we really seek feedback from users so that we can plan our work. Among the other projects we have on the list, one is Jenkins on Kubernetes. Just to explain the history: right now we have a lot of stories about running Jenkins on Kubernetes, but basically no documentation at all. No solution pages, no guidelines, nothing.
And in the Documentation SIG, we identified it as a priority project. We started by doing online meetups about Jenkins on Kubernetes, but we also want to have some documentation and some solution pages so that we can share this experience. And we want to integrate this knowledge and make it accessible to Jenkins users. Obviously, the expertise of Cloud Native SIG members would be essential if we want to deliver on that. So it's a kind of joint project between the two SIGs. And hopefully we will have contributors for that even during the next week: we have a UI/UX hackathon, and this project is proposed there. And what else? We also have plugin projects. For example, Marky, who's on the call, is a maintainer of the Kubernetes plugin and the Prometheus plugin, and these plugins actually match the SIG's scope. Marky, what do you think about covering these topics periodically at the SIG meetings? Yeah, I would love to do that. So yeah, the exact list of plugins is yet to be seen, but I think this area would be really interesting for potential Jenkins users. And we can also see how we could collaborate, because, for example, the Jenkins Kubernetes Operator uses the Kubernetes plugin, so probably we could find a lot of collaboration opportunities there. I'll skip JCasC for now, because it's a subject for longer discussions. So, we have 20 minutes left. Should we just switch to the Jenkins Kubernetes Operator, or would you like to discuss the logo? Because we don't have one. Yeah, so right now what we have is this fancy icon; it looks especially fancy when you zoom in or when you post it on Twitter, because it's something like 128 pixels. So my proposal was to reuse existing artwork, and the best-matching thing is this logo. I would suggest something like that, but if you have ideas about what it could be, I'm happy to discuss them and possibly put up something else. So just think about that.
My plan is to start a mailing list thread about updating it, so if you have any ideas, please share them in this thread. And sorry for all these edits. I think now we can actually switch to the Jenkins Kubernetes Operator. Last week we had a presentation by Tomasz, and all materials are already published, so you can find them — if the comments ever load. Here you can find a video recording of the presentation and demo by Tomasz. And, just a second — you can also find the slides, etc. One of the topics we discussed with Tomasz is what the opportunities would be for the SIG to help, and how we can generally help the project move forward. Because right now, as part of their roadmap, there is an old Jenkins Enhancement Proposal which documents what has been implemented and what needs to be implemented. And I think it's a good opportunity to discuss how we could help move this proposal forward, bring it to a good finish, and maybe switch to the next iterations and help in other projects. So Tomasz, would you like to quickly talk about the current status and what challenges you see? Yes. Okay. So the current status of the Jenkins Kubernetes Operator is that we managed to run Jenkins on top of Kubernetes with a bunch of Jenkins plugins, for example the Kubernetes plugin or the Kubernetes secrets plugin, which integrate nicely with Kubernetes itself. We use the operator pattern for that. We have some issues with Jenkins itself. In my opinion, the most important thing is that Jenkins is, I would say, a stateful application, so we have to care about the state. With the current Jenkins architecture, we have to make a backup of some files from the Jenkins home directory, and we have to know the really deep internals — what, for example, a given file does, whether we have to keep it or can skip it; we don't know.
So this proposal for Pluggable Storage is very interesting for us, because we could move the storage, or state, to another place and make the Jenkins provisioned by the operator stateless. Then we could run multiple instances of it and also make it highly available, because with the current architecture there can be only one Jenkins master, and we cannot spin up another Jenkins master without tearing down the previous one. So this is the most important thing, I would say. Another issue is that installing plugins could be improved. There is a download center, which only has the information that, okay, you have Jenkins with version X and you can install these plugins. But there is no such thing as a dependency tree; in my opinion, we need another API for plugins for this use case. From the operator perspective, I want to know whether the user wants this plugin in this Jenkins and, after making some API calls, verify whether this plugin can be installed on this Jenkins. And I want to inform the user: sorry, it's not possible, because there is a missing dependency or it's not supported in your version. We faced a lot of issues related to that. For example, the user puts in the plugin list they want installed on Jenkins, the operator tries to download all these plugins and puts them in a directory, but after starting Jenkins, Jenkins says: I cannot run these plugins, because some dependency is missing. When we have some kind of API like this, it will be very user-friendly, because we can tell the user up front that this plugin won't be installed on this Jenkins. So: the state, the plugins. We are, I would say, very heavy users of the Kubernetes plugin, so I can say that this plugin evolves very well. There are a lot of diagnostic logs there, so the user has more information about what's going on, because in Kubernetes there can be a lot of issues, for example volumes that cannot be mounted or basically don't exist.
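The dependency resolution Tomasz is asking for could be sketched as a transitive closure over plugin metadata. This is a hypothetical illustration: the plugin names and the dependency map are made up for the example, and this is not the real update center format.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: given plugin metadata with declared dependencies, compute the
// full transitive set of plugins that must be installed together.
public class PluginResolver {
    static Set<String> resolve(String plugin, Map<String, List<String>> deps) {
        Set<String> result = new LinkedHashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(plugin);
        while (!queue.isEmpty()) {
            String current = queue.poll();
            if (!result.add(current)) continue; // already visited
            for (String d : deps.getOrDefault(current, List.of())) {
                queue.add(d);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Hypothetical metadata; a real API would serve this per plugin version.
        Map<String, List<String>> deps = Map.of(
            "kubernetes", List.of("workflow-step-api", "credentials"),
            "workflow-step-api", List.of("structs"));
        System.out.println(resolve("kubernetes", deps));
        // [kubernetes, workflow-step-api, credentials, structs]
    }
}
```

With an API serving this metadata, the operator could reject an impossible plugin list before starting Jenkins, instead of discovering missing dependencies at boot time as described above.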
This information is now available in the log, so that's fine. You don't need another person's expertise; you have it in the log, so I am freer from supporting people running Jenkins on top of Kubernetes. And I think those two things are the most important for me. Yeah. So if you don't mind, I'll start with installing plugins, because you're totally right: there is no API right now for dependencies. And my question to you was whether you need the API or whether you need tooling — for example, does a CLI tool make a difference for you? Because we have a project called the Plugin Installation Manager Tool. Basically, this is a new advanced tool which supports managing plugins, and it already includes some features which are not available, for example, in the plugin management scripts within Docker, like updates and security updates. And technically we could integrate dependency tracing there as well. Yes, I saw this project. It's for sure better than using install-plugins.sh from the Jenkins Docker image. But still, in some use cases, this tool has to download the plugin first, unpack it, and read the dependencies from there. If we had an API for that, the servers which host the plugins would be more relaxed. So yeah, even Jenkins itself would benefit from this kind of new API. Yeah. So right now we have a plugin site, plugins.jenkins.io, and this site actually has two parts: backend and frontend. The backend part can already supply the APIs; the problem is that the APIs are not really documented. So, for example, what you see here — these dependencies — used to come from the backend. Right now we have a static site, so it's just generated ahead of time. But still, we could expose these APIs if needed. There is a project — just a second — called plugin-site-api. Again, it's totally undocumented.
But in principle, it has APIs which might be needed for your use cases. And you can deploy it locally; it can cache the update center file and provide additional search queries if needed in the future. So, for example, there are a plugins endpoint and a plugin endpoint, and this endpoint returns a lot of information, and this information includes dependencies. The question is: does this API run somewhere on the internet, exposed so that I can just use it, or do I have to run it in Docker or something like that? Yes, right now you would have to run it. If needed, we could discuss creating a service for that, but there is one downside of running it as a service, because this plugin site API gets its information from the Jenkins update center. And let's assume you have a custom update center in your company; then you won't be able to use the same plugin site API — instead, you would have to run your own instance and connect to that. So in principle, having a public instance with this API would be doable, but in practice, for some cases, you would have to run an instance anyway. I see. So yeah, that's interesting. I would try to use it and see what API and information I can get, and test whether that will be enough for the operator, basically. Before that, we had some discussions, for example, about having a GraphQL engine for this component, etc. Later, we moved on, because we decided that we would rather generate a static plugin site. And if you go to this service, it's fully static now; it runs behind a CDN, so it's also extremely fast. We don't regret this decision, but the API service is still available if needed, so technically we could add additional functionality there. Sorry, I have a question regarding the plugins installation. Right now with the operator, there is an issue where some users complain about every startup of Jenkins using the operator.
Right now, we download the plugins that the user would like. In OpenShift, for example, with the OpenShift image, we have a different approach, which I think fits better with the Kubernetes approach and the container image immutability principle: we build the image with the set of plugins that the user wants, and then this image is stored in the container registry, the Docker registry. So every time we want to redeploy the exact same image, we just deploy that image, and we don't have to care, at every deployment or every startup, about re-downloading the plugins, checking the dependencies and so on. This is done once. I wanted to share this view and get some feedback from the group here: for this approach of downloading the plugins on every startup, what is the use case behind it? Is it something that we would like to continue on Kubernetes? It's the subject of a continuous holy war. For example, I personally am also a big fan of immutable images, and I'm submitting a talk to DevOps World describing how to do it with Jenkins and Jenkins tools. But there are still some use cases where you might want to install plugins on demand. The first use case is when you have users who, for example with the Kubernetes operator, define a pipeline and also define which plugins they need. So basically you apply the custom resource, and then the operator magically provisions a Jenkins instance with the required set of plugins and executes that. You will have to install the plugins, but assuming that you have a local update center, it can be done quicker. Another use case is to avoid rebuilding and provisioning multiple images. It might have value, but it really depends on what you want to build. Okay. Regarding the first scenario, I'm not sure I get it properly. Are you talking about a pipeline where we decide to install a specific plugin while we are running the pipeline, or before starting the pipeline, something like that? Yeah.
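For reference, the declarative plugin list being discussed is what the operator's custom resource exposes. The following is a sketch from memory of the operator's v1alpha2 API; the field names and versions are illustrative, so check the jenkinsci/kubernetes-operator documentation for the authoritative schema:

```yaml
apiVersion: jenkins.io/v1alpha2
kind: Jenkins
metadata:
  name: example
spec:
  master:
    # User-requested plugins; the operator fetches these on startup,
    # which is where the dependency and re-download questions above arise.
    plugins:
      - name: simple-theme-plugin
        version: "0.5.1"
```

Applying such a resource is the "apply the custom resource and get a Jenkins with the required plugins" flow described above; the OpenShift approach instead bakes the same list into an image once.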
So imagine you as a user are able to define not only a pipeline, but also to say that you need these plugins for your pipeline. And you wouldn't have to go to your Jenkins admins and ask them to install, for example, the HTML Publisher plugin or another plugin, for what it's worth. Technically, this use case would make sense: dynamic installation of the plugins which you need. For example, we support this use case in Jenkinsfile Runner, where a user can define a list of plugins and they get installed automatically. I don't use this mode, but it's possible. So probably my knowledge about Jenkins here is limited, but let's try to follow the HTML Publisher example. Just imagine that the user, in his pipeline or his configuration, says: I want HTML Publisher version 1.x, and the running Jenkins instance already has 1.8. Okay. So it will want to kind of upgrade the HTML Publisher plugin of the running Jenkins instance, and in most cases that would require a Jenkins restart, right? Am I correct? Yes, if you run on the same instance. But, for example, if you use the Kubernetes operator, if you follow Tomos's example about availability and provisioning instances on demand, again, you can provision your Jenkins instance with the required set of plugins so that you can execute this payload. Yeah. But in this case, we just return to the same original problem. If it's just about instantiating another Jenkins instance from an existing pipeline, we'll be starting a new image. And so we can just consider that this image should have existed before, or we can decide to create this image for subsequent restarts. I mean, if we pin the plugins and have multiple runs of the pipeline, we will not build the same image over and over again, knowing that we'll be using the same version of the plugin set inside. So yeah, this is kind of the inception. But yeah, probably, as you said, it's a long debate.
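For the operator-driven use case discussed above, the plugin set lives in the custom resource itself. Here is a rough sketch of what that could look like; the field names follow my recollection of the jenkinsci/kubernetes-operator `v1alpha2` API and the plugin IDs/versions are illustrative, so check the operator's own CRD documentation before copying this.

```yaml
# Sketch: a Jenkins custom resource declaring the plugins the operator
# should provision the instance with (illustrative values).
apiVersion: jenkins.io/v1alpha2
kind: Jenkins
metadata:
  name: example
spec:
  master:
    plugins:
      - name: workflow-aggregator
        version: "2.6"
      - name: htmlpublisher        # plugin ID of HTML Publisher
        version: "1.25"
```

Applying a resource like this is the "apply the custom resource and the operator provisions an instance with the required plugins" flow mentioned earlier.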
Yeah, it will probably come back often, because we had this discussion in the past and some users are coming with it again. Okay, let's put it up for further discussion, also because we are probably running out of time and there are other topics too. Yes. I have a few more questions regarding plugins. From the operator perspective, the user defines the list of plugins they want to install, and the operator tries to install that. But the user wants exact versions of those plugins, and the only way I found to achieve that is to install the plugins before starting Jenkins, with a CLI tool or the install-plugins.sh script, and then start Jenkins. When I try to install plugins directly from Jenkins itself, it always installs the latest version. And the question is, why can't I specify a previous version of a plugin when installing from Jenkins itself? I guess it's a user experience aspect which could be addressed, because that's how the current Jenkins plugin manager works. Speaking of that — yeah, this is a real problem for us, because we have to restart the whole Jenkins to install plugins. But if we had the opportunity to install a specific version from Jenkins, through the Jenkins API, so that the operator can use it, it would be super easy for us. In some scenarios, we wouldn't need to restart the whole Jenkins master and could just install plugins on the fly. Yeah, in some cases it will work. Well, you can install a new plugin dynamically; you cannot update a plugin in a running Jenkins. So in order to support that, you just need a proper REST API or CLI, and they could definitely be added. I'm not sure whether we have support for that in the default update center, but yeah, basically it's a small matter of programming. If you want to add this API, I think it would be totally doable. And I think this behavior comes from what we get from the download center; I think there are only the latest plugin versions there. I debugged this one year ago; maybe it has changed.
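The "exact versions before startup" flow mentioned above typically relies on a pinned plugin list file. A sketch of such a file, with illustrative plugin IDs and versions:

```
configuration-as-code:1.46
kubernetes:1.25.7
git:4.2.2
```

Each line is `<plugin-id>:<version>`. A file like this is fed to install-plugins.sh (or to jenkins-plugin-cli with a plugin-file option in newer images) before the controller starts, so every restart reuses the same pinned set instead of resolving latest versions again.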
But yeah, I see that the root cause of that is the latest versions in the download center. It depends on the update center, because even in Jenkins we actually have multiple update centers. So, for example, if you go here, you have the experimental update center, the current update center, and the LTS update centers, which provide different sets of plugins. And it would be totally doable to have versioned update centers and support versions right inside these files. So yeah, you would have to do it right. If you go here, you can see that there is just a link to some version, which is the latest version supported by this LTS baseline. But if needed, Jenkins could download another version, because all versions are still available in the archives; it just needs logic to retrieve that. And it's not only a problem with this particular version; it's also a problem with dependencies, because you would need to resolve dependencies as well, and most likely you would get into conflicts there. So it's doable, but it requires some design. Yes, I see. And if we got this new API, we could even check whether a plugin can be installed on your Jenkins or not. So yeah, this would be very much more user-friendly for Kubernetes operator users. Yeah, we could add these APIs. And if anyone wants to contribute, it's nothing like rocket science. In Jenkins, we basically have two classes, UpdateSite and PluginManager, and both of these classes already have APIs for different use cases. So we could just add the REST API there, and also include all the dry-run logic, et cetera. So, for example — well, it's definitely not a good example, but yeah, all the APIs are already there, and we can add more APIs if needed for these use cases. And I believe that, if we added this feature — for example, yeah, pre-validated configuration — it's an already-existing API which, well, is not a client-friendly API, because it's designed for Jenkins internals. But we could expose an API which would be consumed by CLI clients if needed.
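The resolution logic sketched in words above — update centers list only the latest version per plugin, so anything else has to come from the archives — can be illustrated with a few lines. The metadata shape here is an assumption loosely modeled on update-center.json, and the data is illustrative.

```python
# Sketch: deciding where a requested plugin version would come from,
# given update-center-style metadata that lists only the latest version
# per plugin. The dict shape and values are illustrative assumptions.
UPDATE_CENTER = {
    "plugins": {
        "htmlpublisher": {"version": "1.25"},
        "git": {"version": "4.2.2"},
    },
}

def resolve_source(name: str, wanted: str) -> str:
    """Return where a requested plugin version would have to be fetched from."""
    entry = UPDATE_CENTER["plugins"].get(name)
    if entry is None:
        return "unknown-plugin"
    if entry["version"] == wanted:
        return "update-center"
    # Any other version is not listed here and would need archive logic
    # (plus its own dependency resolution, as discussed above).
    return "archives"

print(resolve_source("htmlpublisher", "1.25"))  # -> update-center
print(resolve_source("htmlpublisher", "1.8"))   # -> archives
```

This is exactly the "it just needs logic to retrieve that" step: the lookup is trivial, while the hard part left out here is resolving the dependencies of an archived version without conflicts.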
And yeah, you can file feature requests so that we can prioritize these APIs and implement what we need — or somebody else could implement them; we just need to report these issues and see whether anybody would be willing to contribute to this area. Tomos, if you would be willing to report the issues for that, it would be nice, so that we can process them and maybe facilitate contributions around that. Yeah, sure. What is the process for creating the issues — is it a Jira ticket or a GitHub issue? A Jira ticket. Jenkins core lives in the Jenkins Jira, and we don't have a plan to move to GitHub issues, at least not in the foreseeable future. Okay, thank you. Okay. So, yeah, we're already running over time, sorry for that. My proposal for the next meeting: I plan to start another Doodle so that we can vote on a regular meeting time. I want to start with something like meeting every two weeks. Again, we will set up topics for particular meetings, so there is no need to participate if you're not interested in a topic. Let's say next time we do external fingerprint storage — everybody is welcome to participate — and then in two weeks we do Jenkinsfile Runner or the Jenkins Kubernetes operator again, so that we keep running these meetings and inviting contributors who are interested in the particular agenda. Does that sound good to you? Okay, then I will just get it implemented. So, yeah, thanks to everybody for your time. I will publish the recording of this meeting, and if you have any feedback about what topics you would like to consider, just put them in the agenda document. So here is what I will do: I will put a section for the next meeting, so if you want to discuss something, you can just propose a change there and we will coordinate what we discuss — for example, external fingerprint storage. So, yeah, that's it for today. And thanks a lot, Tomos, for sharing the feedback, because, yeah, I knew about the pluggable storage story.
I didn't know about the plugin manager aspects, but it's great to have such insights. Yeah. Thanks, all. Thank you, guys. See you. Bye. See you next time.