Thanks everyone. This is the Jenkins Platform Special Interest Group meeting. It's the 21st of May 2020. We're delighted you're here, and we remind you that we're governed by the Jenkins Code of Conduct: be kind, be nice. Thanks for being here. I'm going to go ahead and share my screen so that we can talk through the agenda and be sure that we've got the right topics on it. So we'll review the open action items. I put a topic on for the Google Summer of Code projects, and we can look at that briefly, but the more crucial thing that I wanted to be sure we talked through is the Git Plugin Performance GSoC project. Jim had notes on the moving of the PowerPC virtual machine. And then I have a concluding item on Docker images and Alpine. Are there other topics which need to be discussed? Oh, Oleg had suggested we may want to discuss Java in 25 years; I wasn't sure what the specific details were that he wanted to talk about there. No, I just wanted to highlight it in a special interest group. Oh, okay, good. All right, so there is no specific topic. Super. Hello, Oleg, thanks for joining. Thank you very much. Any other topics we need to add here? I asked the custom distribution service team whether they plan to join, and they definitely plan to join, so you might have a discussion about it. Okay, great. So I think we've put the Google Summer of Code projects in as line items. Very good. Any other agenda items that need to be added? Nothing from me. Okay. So I still have this open action; I can get with you separately. I have misplaced my CDF Zoom account invitation, apparently, so I reused this Zoom account today; Mark, I'll connect with you again on that. I still have the open action item to open the issue for Docker operating system support. Oleg, do you want to share with us how things are going on the Windows support policy? Okay.
I'll just share my screen for a second. Okay, and I'll stop the share. Okay. So I'm not a host here. Oh, so I need to grant you host; things have changed. Okay, let me make you a host, it's easy to do. There you go. I'll need to do the same with Rishabh. Okay, so this is my screen. Yes. Yeah, so a few weeks ago I started a discussion about the Windows support policy. Why is it important? We have a GSoC project related to Windows services and the YAML support there. To do that, we need to define our .NET support policy, because currently we support .NET 2.0, which is pretty old. You can barely find libraries for it, it's definitely a major overhead to maintain, and we would like to drop it. So that's the story behind why we started this discussion now. Unfortunately, the student is unable to join today because there are some events in India, but hopefully he'll be able to join the next meetings. So what do we have? We have a proposal for a Windows support policy. There was a discussion on the mailing list about what exactly the support policy would be. Originally we started from a list of the platforms we wanted to support, but we got a lot of feedback about what exactly we maintain versus what we would actually be committed to, et cetera. So instead of that approach, I proposed an alternate option, which is similar to how our web browser support is organized. Just a second, web browsers. So basically we introduce a number of support tiers with clear expectations for each tier, and we also track changes, et cetera. So this is our browser support policy, and this is our Windows policy, which is currently in a pull request. Yesterday we had a Jenkins governance meeting and we got sign-off to proceed with this. So assuming that there is no negative feedback at this meeting, I will just go ahead. So here is what we have: we have four levels.
The first level, full support, is basically the latest 64-bit supported versions of Windows and Windows Server, plus the versions we use in Docker. Level 2 is basically whatever is supported by Microsoft in 64-bit. So this is what we support and what we intend to keep supporting. Level 3 is supported, but on a best-effort basis, and there are interesting things here: 64-bit versions which are no longer supported by Microsoft, and 32-bit and other architectures. For example, some people still run Jenkins on Itanium and other things. That goes to tier 3 because, although these are important cases, we definitely have no opportunity to test them. Also non-mainstream versions like Windows Embedded, for basically the same reason: we have no opportunity to test changes. If there are suggested changes which do not impact compatibility for Level 1 and Level 2 support, of course we can accept them. Also preview releases, basically because preview releases can change; whatever channel and whatever preview state, they are not described by the Windows support policy, and there are documents linked below. And also additional engines like emulation, Wine, ReactOS, et cetera. All of that is something we could technically test, but in some sense it's the Windows API, not really Windows. If Jenkins works there, it's fine; if not, life happens. And unsupported is basically platforms where we know there are serious limitations. Here we have Windows XP below Service Pack 3, which went end-of-life 10 years ago or so. It comes from the .NET Framework question, because we propose to bump the minimum requirement to .NET Framework 4.0. There are a lot of reasons for that, including a better TLS implementation and just a wider set of libraries which support this version. So we would like to move. I added Windows Phone mostly just for fun, but theoretically you could run Jenkins there.
And the other category is Windows platforms released before 2008. It's just a cutoff; the date is set that way because nobody really wants to support all the old platforms. But generally this policy is up for discussion. So, what do you think about such a policy, and are there any additional things to keep in mind? For me, I think this is the right approach. I did see comments from James Nord on phrasing, but I wasn't overly concerned there. I think it's good that the pull request is getting reviewed; that's great. I need to review it, I haven't checked my inbox notifications from yesterday. "Best effort", yeah, for me "best effort" seems a common phrase, so I'm not sure, but that's relatively minor. Okay, so we'll clarify it with James in the review. My main point here is that some support policy is better than no support policy, because right now we don't have anything to comment on; there was just a status quo. For example, I personally used to maintain some libraries like the Windows process management library, et cetera, and there were problems with them. We had problems with updating JNR and JNA for Java 11. Again, some versions stopped working on older Windows versions, and since there is no support policy, it's hard to really say what is supported or not. So I would be happy to at least have something. I guess my action item is to review the comments from James, and if there is no negative feedback, I will probably merge this week. Thank you. Thanks very much. Thank you too. So, yeah, I'll stop screen sharing and make myself host again. Okay. Right, so I've got to somehow reteach this thing to allow others to share their screens, or just switch to the CDF account. Yes, I will do that, I apologize. Okay, so we've still got the open item to review the Docker build rework PR, and that one continues. I'm actually scheduled for some time with Alex tomorrow so that he and I can discuss further there. Jim, apologies.
We continue our focus on core release automation and keeping other things going. We will get there. No, it's all good, I understand completely. The core release automation is really, really cool; I definitely want to see that come out. So the next topic was the moving of the PowerPC VM. Jim, do you want to go ahead and tell us what's happening there? Yeah, so I was informed by Raphael, the POWER contact, that they are moving data centers, or hosting platforms, for their community open source access to POWER machines. I think they're moving from some sort of IBM Cloud product to OpenStack, and I guess the VM formats don't transfer, so they're spinning up new VMs. There's about a 90-day countdown for the life of the VM you currently have; actually a little less than 90, since the 90 days started back last week when I was talking to them. I did talk to the POWER contact, and he said it's possible to spin up the new VM in tandem with the old VM. I know we talked about how you didn't really have that much to transfer over, since you have init scripts, but both VMs should be up at the same time, so if you need to transfer anything over, you can. This is going to be a more permanent home for the community open source VMs. The s390x VM is staying the same; it's not changing, this is just about the POWER machine. Great, so we'll be notified of the address of the new machine and we can just connect to it? Yeah, I'm going to ping Raphael today, and then I'll probably just send an email to you and Olivier about the credentials. Perfect. Yeah, so I might need your SSH keys again; I think we have them in an email chain, so it shouldn't be too much of an issue, but I'll message you if I do. Yeah, let me know if you need them. It's just a case of us needing to share a public key with you, so no large threat there.
And yeah, that sounds great, thank you very much. Thanks again to IBM for being willing to host this s390x machine and the PowerPC machine. Thanks very much. And just to confirm again, are you planning on transferring things over, or do you have those init scripts you were talking about? We won't transfer anything that I know of; I have nothing to transfer, and I'm not aware of Olivier having anything to transfer. We intentionally try to act as though the agents on ci.jenkins.io are ephemeral. Many of them actually literally are ephemeral, and for the others we want to act like they're ephemeral even if they're not really. Okay, all right, sweet. I just wanted to give you a heads-up on that issue, so I'll let you know when the new one is up. And once we've started using the new one, we'll just notify Raphael and you that we're done with the old one, and he'll turn it off. Yeah, he'll probably turn it off before that 90 days if you're done with the old one. Right, there's no reason for us to have him wasting power on a machine that we've transitioned off. Great. Okay, thanks. Anything else, Jim, on the PowerPC transition? Nope. Okay, Rishabh, you're next with the Git plugin performance improvement project; let's see if I can figure out how to let you share the screen. Okay, stop sharing... multiple participants... here we go, all participants can share. Try it now, Rishabh. Yes, I can do it. Okay, so let me introduce Rishabh. He's a Google Summer of Code student working on a project that I find very fun: the Git plugin performance improvement project. Rishabh, go ahead.
So, one of the things we're doing to improve the performance of the git plugin is evaluating git operations in terms of their execution time, for both of the existing implementations: the native git implementation, command-line git, and JGit, which is a pure Java implementation. We're using JMH to evaluate the performance of these implementations. JMH is a Java micro-benchmarking framework; it's easily integrated into a Maven project, and that's how we're using it. The current plan: the first step is to select a git operation to benchmark, and we test it in an isolated environment provided by JMH. The third step is to take the testing from a local machine to the existing Jenkins infrastructure, to create a comprehensive report where we run our benchmarks in different setups and environments. The next step, which is under discussion, is to use Jenkins telemetry to gather user feedback on the performance enhancements. That is something I'm going to talk about after I discuss the strategy we've used for micro-benchmarking. So I'd like to share the little experiments I did with git fetch; I selected git fetch as the operation to benchmark, and I'm going to share my results for that. Before I do, a little background on what I was doing. The aim was to test a git operation purely on the basis of its implementation, removing the noise added by the external environment and JVM optimizations while testing the operation. For the testing environment, I have a macOS machine. I used the remote git repository as a local file, so that I don't interact with the network while running the benchmark. The process overheads, like declaring variables or initializing the repository, were all taken care of by JMH; the benchmark only measured the single operation, which was git fetch. The test parameter we chose was the size and structure of the remote repository.
I had four sample repositories to test git fetch with, and I'll share the details of the repositories as well. So, as you can see, there are two bar graphs here, and the first one is from a vanilla benchmark. I call it the vanilla benchmark because it's not using any kind of benchmarking framework; it's just me using System.nanoTime() to measure the execution time of git fetch, done in the form of a JUnit test. So these are the results. It's pretty obvious here that CLI git, the native git, performs better than JGit for every repository, in every scenario. And I'd like to show the repository sizes and structure before the results: these are the four repositories I chose. The sizes are 0.3034 MB, then 5 MB, then 93 and 324, along with the number of commits and the structure shown here. So that's what is happening with the vanilla benchmark. With the JMH benchmark there was a clear difference, which I suspected was because of the JVM. What JMH does is warm up the JVM for us over multiple iterations, trying to simulate the environment the git operation would actually run in. To confirm that the difference was because of the JVM, I ran another test: JMH also gives us an option to run the benchmark in a different mode, called single-shot mode, which basically involves running the benchmark without warming up the JVM. And when the JVM was not warmed up enough, I could see that JGit was not able to perform better than CLI git. From these results, I was able to gather some observations. The biggest thing I found was that JGit was able to perform better than CLI git under the condition that the repository size is lower than, let's say, 5 MB.
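As an illustration of the "vanilla" approach described here, a minimal System.nanoTime() harness might look like the sketch below. This is not the project's actual JUnit test; the timed loop is just a placeholder standing in for the git fetch invocation.

```java
// Hedged sketch of a "vanilla" benchmark: timing a single operation with
// System.nanoTime() rather than a benchmarking framework. The Runnable is a
// placeholder where a `git fetch` call would go in the real test.
public class VanillaBenchmark {

    /** Runs the task once and returns the elapsed wall-clock time in nanoseconds. */
    public static long timeOnce(Runnable task) {
        long start = System.nanoTime();
        task.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long elapsed = timeOnce(() -> {
            // Placeholder workload standing in for the git operation under test.
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) {
                sum += i;
            }
        });
        System.out.println("stand-in operation took " + elapsed + " ns");
    }
}
```

As the JMH comparison in the discussion shows, such raw nanoTime measurements include JVM and process overheads that a framework tries to control for.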
And since we can fairly assume that in Jenkins the JVM will always be warmed up by the point that we are running git operations, it's a fair conclusion that JGit will perform better than CLI git when the repository size is less than 5 MB. So this is one of the nice results we had from the experiment. The other is a clear observation that with the benchmarking framework, the execution times are reduced in magnitude, which I think means it's doing what it's supposed to, and that System.nanoTime() measurements involve some kind of process overheads and things I'm probably not aware of. So yeah, these are the results. And the fourth step I was talking about was using user feedback. We have two scenarios under which we could possibly do that. The first is that we include System.nanoTime() timing for some git operations we want to test, and for a one-week trial we gather data from users: for both implementations, execution-time data for their real use cases. Then we would have good reference data for planning our benchmarking strategy. The second scenario for how we could use Jenkins telemetry is to first add the performance enhancement; that is, whatever conclusion we have from our study using the existing Jenkins infrastructure, we encode in the git plugin. Then, after adding the performance enhancement as an option, what we want is to gain user feedback on whether users are using that option or not, and secondly, whether the performance enhancement option is actually working in the intended way or not.
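The warm-up effect described above can be seen even without JMH. The following stdlib-only sketch times the same method on its first (cold) call and again after repeated calls, by which point the JIT compiler has typically optimized it; this mimics what JMH's warm-up iterations and single-shot mode compare, without JMH's statistical rigor. Numbers will vary by machine.

```java
// Hedged sketch illustrating JVM warm-up: the same deterministic workload is
// timed cold, then again after many warm-up iterations.
public class WarmupDemo {

    /** A deterministic workload; returns the sum 0 + 1 + ... + 4,999,999. */
    static long work() {
        long sum = 0;
        for (int i = 0; i < 5_000_000; i++) {
            sum += i;
        }
        return sum;
    }

    /** Times a single invocation of work() in nanoseconds. */
    static long timeOnce() {
        long start = System.nanoTime();
        work();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long cold = timeOnce();          // first call: interpreted, un-optimized
        for (int i = 0; i < 100; i++) {  // warm-up iterations, as JMH would do
            work();
        }
        long warm = timeOnce();          // usually much faster once the JIT kicks in
        System.out.println("cold: " + cold + " ns, warm: " + warm + " ns");
    }
}
```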
So this is one of the open questions we have, and we're discussing it currently. Any questions from you, anything you would like to ask? So, Rishabh, one of the specific reasons that I was hoping for feedback here was in terms of what's allowed and not allowed with telemetry. Oleg, I think you've had past experience with telemetry questions. What are we generally allowed to collect from our users? I know it has to be for a limited time. So for telemetry there is a policy which covers what you can collect, but basically, as long as the data is anonymous, and as long as you reduce the traffic so that it provides only the basic metrics, you can run with it. But to be honest, I'm not really sure that using Jenkins telemetry right now is a good step, because you are doing a lot of profiling in the first three steps, while Jenkins telemetry could be used to collect repository stats or whatever. In practice you will see a lot of companies operating with extremely big git repositories, so you cannot use telemetry to just collect information about common operations. You will see short operations and long operations, but mostly because of the data sizes. You can probably collect information about particular operations, but to be honest, I'm quite skeptical about doing telemetry. So one of the thoughts on telemetry had been to actually send back, along with the duration information, repository size information: an ls-remote took this much time, and based on the local copy of the repository we know that it has this many objects in it, or something like that. Do you think that would be allowed from the telemetry perspective, or is that too much at risk of not being anonymous anymore? I think it's possible in principle. It needs wider discussion in the community, but I don't see a reason to say no to that. But again, what decisions would you make based on this data?
So, for me at least, the decision was trying to understand what the actual sizes of repositories are and the distribution of sizes in the community, because your assumption might have been that we're going to find many users with large repositories, while mine was that most repositories will be under the five-megabyte size. But what's your target audience? We're trying to improve the plugin performance, and if the repository is five megabytes, most likely this repository has no real issues with the performance of the git plugin. The theory is that this project rather targets users with big repositories and the specific use cases where git plugin performance could be improved. So if we use this telemetry in order to capture those use cases and to discover where the git plugin becomes slow, it's a good thing. But you need to filter the data properly, and before you start investing time in that, you should be sure that the data you collect will actually help you with decisions. It's not telemetry for telemetry's sake; it's telemetry to help the project. And, for example, the same could perhaps be achieved, instead of implementing telemetry, by running a survey: asking Jenkins contributors to submit their feedback on git plugin performance issues, et cetera. Maybe that's something which could be more effective; I'm not sure, but it's something I would recommend the team to consider. These are great technical steps, but there are also non-technical steps we could take. For example, there is already great data collected for this project, including some metrics, et cetera. So what if, for example, you create a first blog post, announce the project, share such metrics, and ask readers to provide feedback, for example in a Google Form, asking basically: do you have git plugin performance issues?
And could you just provide some details, like the repository size or whatever you want to collect, so that we get information from users in the field? You can also invite them to join the project meetings and probably provide more feedback. So, for me, such a way might be efficient and less time-consuming than implementing telemetry. Good, thanks. I had not considered the idea of a survey, and certainly with 250,000 installations, even if we only get feedback from a tenth of one percent, that's still quite a volume of people giving feedback and interesting data, though asking the question would bias towards people with an interest giving us answers. Yeah, I like that. And you already have questions which you want to answer; I've seen the original proposal, and there were already these questions to discover. So if you reframe them somehow, maybe that would be a good initial step, and in parallel keep doing the cool things on the list. Great, thank you. Thanks very much, Oleg. Yep, thank you. Yeah, maybe another thing to consider is how you do benchmarking. Because, for example, if you use Java 11, you have Java Flight Recorder embedded, so users can perform low-cost profiling. And again, if there are users who have performance issues and who are willing to migrate to Java 11, they could just provide us profiles, even from production instances. Or, if we can reproduce particular cases, we could capture them, for example from ci.jenkins.io. I'm not sure whether ci.jenkins.io has any performance issues with git right now, but if it does, we can just enable Flight Recorder there and collect some information. So, I was not aware of Flight Recorder. I think that's an excellent idea, Rishabh, from Oleg: to consider the Java 11 world and using a profiling tool like Flight Recorder. That sounds really interesting.
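For reference, enabling Flight Recorder on a Java 11+ JVM, as Oleg suggests, could look roughly like the sketch below. The flags and tools (`-XX:StartFlightRecording`, `jcmd`, `jfr`) are standard JDK options; the file names and the Jenkins WAR invocation are only illustrative.

```shell
# Start a 2-minute recording at JVM startup (JDK 11+, where JFR is built in).
java -XX:StartFlightRecording=duration=120s,filename=git-perf.jfr -jar jenkins.war

# Or attach to an already-running JVM by PID, then inspect the recording
# with the `jfr` tool shipped in the JDK.
jcmd <pid> JFR.start duration=120s filename=git-perf.jfr
jfr summary git-perf.jfr
```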
Well, you can use it on Java 8 if you're an Oracle customer, or there are other profiling tools; for example JetBrains provides some enterprise ones. I'm not sure whether they're accessible with a student license, but if you want to get them, I know the contacts. If you target Java 11, it's basically a part of OpenJDK. I'm pretty sure it's part of AdoptOpenJDK, and I've seen an announcement somewhere that it's also available for OpenJ9, though I might be wrong. I just saw it on Twitter a while ago. Excellent, thank you. Marvelous to have had you here, Rishabh. Are there other things that you wanted to present to us? No, thank you for this discussion; I think I've got great points to explore more options. Right, and I assume we'll put Java Flight Recorder as a topic on our next office hours. As part of your exploring in the coming week, it'd be great to hear what your experience was there. Sure, I have added it. Excellent, thank you. All right, then I'm going to switch back to my screen. Thanks a lot for this project; it should be really interesting to users. If we can improve the behavior, especially for the embedded Java client library, then it would be awesome. Well, I am thoroughly excited by Rishabh's work so far and delighted to work with him; it's been a lot of fun already. We're getting ready for the git plugin 4.3 release. We hope to do it by the end of the month, at about the time of the 2.235 release as an LTS, so we're going to be busily working; lots to do. Thanks very much. We had put the custom Jenkins distribution build service on as a possible topic. Oleg, I don't detect them present, so I'm going to take that off. Oh, Sladyn and Krishna are actually there? Oh, great. Okay. So, Sladyn, Krishna, would you like to share your screens, or do you want to just give us a status? Yeah, good.
Probably give me access for a couple of minutes; maybe I could share something. Okay, I think I set the controls so you can share. Okay, tell me if you can see my screen. Yes, we can; go ahead. Yeah, so with the custom Jenkins distribution build service, what we plan to do is provide an out-of-the-box solution so that users can configure plugins online and choose which plugins they want. Based on those configurations, we generate a packager configuration, so that the Custom WAR Packager builds it, and the user can just directly download the WAR file and use it as is. We're also planning to provide users with the ability to configure their JCasC files right off the bat. So I'll just give you a demo of what the project is all about. What we plan to do is list all of the plugins that Jenkins has; for now, since this is a prototype, I've just listed a couple of plugins. Users will have the option to edit the configuration of each and every plugin: they can edit the version of the plugin and add its configuration. If you click Add to Configuration, the plugin gets added to the configuration with the latest version; if the user wants to select a particular version, he can definitely do that. If he wants to edit the configuration, to specify anything custom, he can do that as well. When you click Edit Configuration, what we plan to show in the future is something like this; this is just an example of the SSH configuration. He can set the system message, the number of retries, the launch timeout in seconds, and so on. I'll just enter dummy values, maybe 12 and "hello world", and then hit Submit, and you'll be taken to this page.
What you see in this configuration is your JCasC YAML; obviously it's just a prototype, so it's not fully generated, but you should get to see whatever values you entered previously, and that will get you ready for using JCasC right out of the box. And what you see at the end of the editor here should be your packager configuration, the one the Jenkins Custom WAR Packager uses to generate the WAR file. As you can see, this is just an example; it picks up certain values and displays them over here. After this, the user has the chance to edit the version, so if the user's not happy with the WAR version, he can maybe edit it on the go. And after changing it, he's provided with a set of options. Although these are just dummy buttons right now, some of them do function: if you hit Download JCasC, whatever YAML you added over here gets downloaded, and as you can see in the bottom left corner of my screen, I have a JCasC YAML downloaded. Then you can even select to download the WAR file right off the bat, and you can download the Dockerfile as well. The WAR file generation takes a lot of time currently, but you can definitely do that as well. Then there's Add More Plugins to the Configuration: if you hit that button, you're taken back to the initial plugin list page, where you can configure things again and add more plugins if you're not happy with the initial configuration. And he can also create a pull request. So, if you see right here, what we plan to do is: if someone in the community creates something which is widely used, say you create something like a Kubernetes-plus-AWS configuration and you want the entire community to have a look at it, you just create a pull request, which gets created in our Jenkins repository. You describe the PR, select the branch, and hit Submit.
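To make the demo concrete: the dummy values entered earlier (a "hello world" system message, 12 retries, a launch timeout) might come out as a JCasC file roughly like the sketch below. This is a hand-written illustration, not output of the prototype; the exact attribute names depend on the SSH Build Agents plugin version, and the agent name, host, and credentials ID are invented for the example.

```yaml
# Hypothetical jenkins.yaml the service could generate from the demo's inputs.
jenkins:
  systemMessage: "hello world"
  nodes:
    - permanent:
        name: "ssh-agent-1"          # example agent name, not from the demo
        remoteFS: "/home/jenkins"
        launcher:
          ssh:
            host: "agent.example.com"
            port: 22
            credentialsId: "ssh-agent-key"
            launchTimeoutSeconds: 60
            maxNumRetries: 12        # the "12" entered in the demo
```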
For now it uses a sandbox repository, which is my own and private to me. So if you have a look here, you should see the new pull request; just hold on a second. Yeah. So with whatever pull request description and branch name you entered, it will just create a pull request with all of the packager configuration and your JCasC. The JCasC YAML here is empty, it doesn't get added to the files in this prototype, but in an ideal scenario you would have both of the files available. And then maintainers can go through it, provide suggestions and such, and then merge the pull request into the main repository for everyone to use. Apart from that, one of the major functionalities we plan to have is prebuilt configurations. Once you have added your configuration via a pull request to the repository, it should show up right here. These are just dummy sections created for now, but you can have a look at them later: the plugins configured, the most used, the highly rated, so you can order them by usage. We also provide a search bar, a dummy bar right now, where you could search for a particular plugin right off the bat. And as you can see, View This Configuration will take you back to this page; now the name of the YAML file has been added here and you can see the entire configuration. This has also been picked up from the repository, so again, whatever configuration you add will just be picked up from the repository and shown here. And then users can do whatever they wanted to do before: download the WAR file, the Dockerfile, the JCasC file, add more plugins, or even modify the entire existing configuration. So yeah, those were probably most of the functionalities; there is a lot of work still to be done. That was probably my demo. Krishna, if you want to add anything, you can. Thank you. Right.
So the main thing I guess we want to show here is that the whole point of this is to be able to automatically generate a configuration for your Jenkins instance, with the inputs provided on the UI. Sladyn has this really nice example, and I guess we're looking for any type of feedback, or anything else that could be beneficial, from you all here at the platform SIG. So, any questions or comments about what we've been doing? Thanks a lot for this demo; it looks really great. We have three months ahead, so I'm looking forward to seeing the final version of the service. Yeah. So the concept seems to be, and I obviously should have understood this before, crowdsourcing good or interesting configurations for a Jenkins setup, so somebody could submit a proposal to have a GitLab-focused one, or a GitHub-focused one, or a Bitbucket- or Gitea-focused one. That looks really exciting; thanks very much, Sladyn. Any other feedback? I could give some technical comments. Oh, go ahead, Oleg, please, yes. So, yeah, I'm really looking forward to seeing how it works in practice, and it would also be great, for example, to have a discussion about this project next week, because I believe there would be a lot of users who would be interested to see it, since the initial configuration of Jenkins is definitely not the best part of the user experience, and such a service could help with that. One of the things about this project is that it definitely needs more contributors who will try it out and provide feedback. And from what I understood, one of the problems is having somebody to help with the front end, because right now we have a lot of Java experts. Sladyn, could you please describe how the front end is implemented? Yeah, so currently the front end is just vanilla JavaScript. I think I still have control of the screen, yeah.
So, yeah, currently it's just vanilla JavaScript and a bit of HTML. So this is probably just a set of HTML and JavaScript files, because it's a prototype currently. So what we plan to do, as Christian suggested, is probably use a front-end framework like React or Angular, so that it makes things easier. You know, for example, if you're actually using the demo, if the user edits a particular configuration or adds a particular plugin, the configuration that they have already edited should probably be cached in the browser on the client side, and we would not have to make constant API calls to keep retrieving the configuration and presenting it. So it would be much easier if we were not using native, vanilla JavaScript and were using a dedicated front-end framework. If we were using that, it would probably help in, you know, organizing stuff, keeping it more clean, and making it easier to contribute to the project. So yeah, my initial implementation was just using HTML and JavaScript. It could be done using that as well, but if we were using a front-end framework, it would help in multiple ways. So contributions are welcome. Okay. So we have a few examples in Jenkins. If you would like to build a static site or whatever, you could take a look at the Jenkins plugin site. Also, we have a few other services, but most of our front ends are written in Java, and you know what that is like. But having something in React is definitely reasonable in this project, or whatever other technology you prefer. And definitely we could find some assistance from contributors, so let's try to find more people who are interested in these technologies. Yeah. And also, we have CSS libraries, for example, for the Jenkins website, also for Jenkins itself, etc. So when it comes to styling, I think we could improve the look and feel so it looks like a Jenkins service. But it's definitely not an immediate goal. Thanks.
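The client-side caching idea described above can be sketched independently of any framework. In this sketch, `fetchConfigFromServer` is a hypothetical stand-in for the real backend API call, and a plain in-memory `Map` stands in for whatever browser-side storage the real implementation would use:

```javascript
// Sketch of client-side caching of configurations, so the UI does not have
// to make constant API calls back to the server for every view or edit.
// fetchConfigFromServer is a hypothetical placeholder for the real request.
const configCache = new Map();

async function fetchConfigFromServer(name) {
  // In the real service this would be an HTTP request to the backend.
  return { name, plugins: [] };
}

async function getConfig(name) {
  // Serve previously fetched or locally edited configs from the cache.
  if (configCache.has(name)) {
    return configCache.get(name);
  }
  const config = await fetchConfigFromServer(name);
  configCache.set(name, config);
  return config;
}

function editConfig(name, changes) {
  // Keep the user's edits on the client side until they download or submit.
  const current = configCache.get(name) || { name, plugins: [] };
  configCache.set(name, { ...current, ...changes });
}
```

With a framework like React, the cache would typically live in component state or a store, but the principle is the same: fetch once, then read and edit locally.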
So, any other feedback from the group here? So, Sladyn, I assume that the eventual destination for this would be something connected to jenkins.io, but during the development phase you develop it independently. I would love to participate, because I've got some configurations that I run already as part of my testing. And so if you copy me periodically on email or remind me, "Hey, Mark, there's something that would be interesting to try," I would love to be one of your experimenters. I'm not terribly focused on your specific project, but I would be interested in borrowing your work to help me on my projects. Yeah, definitely, sure. I'll copy you on all of the emails, whatever we're doing. So we initially do not plan to host it, but if we have a POC working with maybe an initial set of features, maybe we could host it during phase one. Yeah, I would say the same about this. And yeah, I guess we will meet frequently at the platform meetings. Great. I also kind of want to mention, too, there is a Gitter channel for the project where we're pretty active and talking about the design. And if you're interested in following along, you can pop into our channel and ask questions at any time as well. And I'll post it to the platform SIG Gitter if you're interested in joining or working on it. Well, the Gitter channel may be enough, because that lets me then decide: hey, when I reach a personal point where, oh, I need something like such and such, I drop into your Gitter channel and see, hey, where are you? How close are you? Could I leverage that for now? Thanks. Yeah, I will put the link to the Gitter channel. Anything else? Thanks. All right. Anything else on the custom distribution packaging service GSoC project? That's all right. Thanks. Yeah. So, speaking of custom packages, last week we had a code dive session, so you can already find the recording on the Jenkins YouTube channel.
It's worth watching if you're interested to know how the engine for building Jenkins WAR files and Docker images works. Hopefully it will be completely different in one year, but let's see. I think it's so cool that it exists. The notion of creating a single WAR that contains everything, all of my plugins, that's an amazing piece of work. Thanks, Oleg. But it wasn't me. It was a part of Jenkins for a long time; I just applied some secret knowledge and other things to package things correctly so they run. We still need some patches, for example in the JCasC plugin, to run from a WAR file, but that's a detail. Excellent. Yeah, I'm just pinging Sladyn a bit: if the service eventually supports Jenkinsfile Runner, I think it would also find an audience. Yep. All right. So let me take back sharing then, and let's look at other topics. We're almost out of time. We had one last topic that I'm aware of. The Docker images on Alpine are under evaluation right now, because we saw that it looks like Java 8u252 is not available for Alpine on the version that we've got. And so we're evaluating how we get to a current version of Java. Right now we're running an older version of Java 8; I think 8u212 is what's currently bundled. And we need to bring it up to current. So one of the ways that we're considering doing that is switching to AdoptOpenJDK. Thanks, Jim, for leading that effort, and those discussions will continue. So if I recall correctly, the issue was that there are no AdoptOpenJDK images for Alpine and Java 8. It's that there's no official one, because there is an unofficial image for Java 8 on Alpine, right? It's difficult to define what "official" means, because, for example, the current image we use, openjdk, well, it's kind of official. But it's official not in the sense of being supported by the OpenJDK project. It's not supported by the OpenJDK team or whatever. Actually, it's somebody in the Docker community packaging it periodically.
It's always behind. But whether you want to call it official, you're welcome to do so. The recommendation from the OpenJDK team, though, was actually to not use this image. Okay, got it. So we're already in the danger zone with the current image. We're not likely making it any more dangerous by using an image that the AdoptOpenJDK team actually tests and uses and delivers. Yeah, so I'm totally willing to just bite the bullet and switch all our Alpine images for Java 8 to AdoptOpenJDK. There are some requests from Alex already. I was also about to do so; I just didn't want to stomp on other things like the UX hackathon. But personally I would just go ahead and switch to AdoptOpenJDK. And Alex and I are scheduled for some time tomorrow to discuss that. And I've invited Olivier to join as well, so that we could have a conversation about that. Yeah. You're welcome to join, but it's not required. I would love to have your insights; I'll send you the invite and let you decide if you want to join or not. Okay. I also can put some pressure on the Adopt team to see if they can push a release of those official images for Alpine for Java 8. That would be very kind of you, Jim, to ask for Adopt support. I would understand if they said no, but that would be very attractive. I think an update is supposed to be coming soon. I know I'm working on a pipeline that will connect AdoptOpenJDK Docker images and our AdoptOpenJDK testing Jenkins servers. And once that pipeline is connected, all our nightly images that get produced will be tested, and we'll have a much faster pipeline to do PRs against the Docker official images repository. So we can pump out a lot more updates and a lot more base images. Like I've been telling you guys for a long time, it's coming. So what we could help with: regardless of the official status, we could introduce a new tag for AdoptOpenJDK Alpine.
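As a sketch of the kind of switch being discussed, the change in the Jenkins Docker image would essentially be a base-image swap in the Dockerfile. The tags below are illustrative and would need to be checked against what the openjdk and AdoptOpenJDK projects actually publish:

```dockerfile
# Current style: community-maintained openjdk Alpine image,
# stuck on an old Java 8 build (around 8u212) -- illustrative tag:
# FROM openjdk:8-jdk-alpine

# Proposed switch: an Alpine image built and tested by the
# AdoptOpenJDK team -- illustrative tag:
FROM adoptopenjdk/openjdk8:alpine
```

Shipping such an image under a new, additional tag, as proposed below, would let users opt in and report problems before it becomes the default.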
So, for example, start not from the default tag but from an additional tag, and in such a case we don't really care whether it's official or not. So we start shipping these images, we announce it, we ask Jenkins users to provide feedback. Maybe we'll discover some glitches here and there, but I think it would be reasonable to start the adoption. Good idea. Right, and it feels like those ideas are worth discussing in tomorrow's session with Alex; that way we get all of us on the same page. Thanks. Anything else with regard to Docker images and Alpine? Okay. The last topic we had was just a highlight: yesterday was the official 25th anniversary of Java. And so the Jenkins project tweeted, among lots of different things, congratulating a very long-lived project that certainly has had a great impact on the Jenkins community. So, yeah, there are no specific details here, just a highlight. Yeah, Java celebrates a date, so why don't we celebrate as well. And thanks to Mark for helping yesterday, so that together we delivered this post. Basically, well, we just discovered that somebody was celebrating on Twitter and joined the party. Yeah, I know. So, definitely something to highlight. Oleg's image and slide work is impressive to me. I'm delighted; he always reminds me that we can use people who think graphically, in images. That's great. That covers all the topics for today. Any other topics? Okay, then we'll conclude the platform SIG meeting. A recording will be posted in roughly an hour. Thanks very much, everyone. Thanks. Could you do the recordings for yesterday? Because I haven't published anything yet.