Welcome everyone to the Jenkins Platform special interest group meeting of July 23rd. This is a special edition that will focus on Google Summer of Code. First, let's take a look at action items, and then we'll talk Google Summer of Code. I'm going to share my screen so that we can see the agenda. Right, let's go larger. Action items: I had the action to switch the meeting URL to the CDF Zoom account. It is switched; we're using it. I believe the Google Calendar has been updated, but I'll check that after this meeting, and I'll double-check that it's on the platform page; we're now including these regularly in the platform pages. I have the action to open a JEP for Docker operating system support. And as far as I know we've still got the Docker build rework PR and the Alpine image update PR open. On that one, I've been doing interactive testing with it, and it's been working quite well. Oleg, you've got an item on the .NET Framework 2.0 change. Do you want to go ahead and talk to that one? Yeah, sure. One major announcement: we finally integrated changes based on the recent Windows support policy. We dropped support for .NET Framework 2.0 from the recent weekly release, and going forward .NET Framework 4 is the minimum requirement. What it means for users is that there might be some extra upgrade steps if they want to keep using .NET Framework 2.0. I have just submitted a blog post about that; it is still in drafts, but if somebody is interested you can take a look. It also unblocks updates. For example, we're working on YAML configuration support and a new CLI in the Windows service wrapper. The current baseline in the master branch has already dropped support for .NET Framework 2.0, so in future versions we will be able to update, and it should work smoothly for Jenkins users. So, one major update. Again, there will be a blog post, because it's a breaking change.
So I would appreciate reviews after the call. Another issue worth noting is the MSI packaging problems in the Jenkins weekly releases. Just to explain the context: we experienced multiple issues over the past week. If you go to the Jenkins changelog, you may see a warning that the MSI package is not available. It happens for two reasons. Firstly, we hit an issue with password expiration in the official Docker agent images; it was a critical bug for Jenkins users as well. It's now resolved, thanks to Alex for the patches and the release. You won't see it here; it's only in the agent changelog, I mean the Docker agent part. And even with this patch we are still not able to release MSI packages, because there is another issue with code signing and packaging for Windows. We traced it back to breaking changes in the container environment we use. Basically it's a bug in the containers shipped for Windows, and we cannot do much about that. If you're interested, you can join the Jenkins IRC channel to discuss it. But basically it looks like we will be waiting for Microsoft to release a fixed version, so most likely there will be no MSI releases for a while; hopefully only until next week. I'm not sure what's going to happen with the next LTS release; currently the scheduled LTS release is mid-August. Hopefully by that time the issue will be fixed. If not, we may have to ship it without the MSI installer as well. It would be unfortunate, because the new MSI installer actually includes a lot of patches and updates by Alex, so we would be interested in shipping that. Or maybe it still makes sense to do it in the .1 LTS release; let's see. The workaround is trivial: instead of using the MSI package, you just download the WAR file and replace it manually. It will work. I guess that's it on platform-related topics. All right, then let's move on to the Google Summer of Code project reviews.
So let's see, we've got the Git plugin performance improvement project. Which others should be on the agenda? Officially we have three projects which are part of this SIG: the Git plugin performance improvements, then the custom Jenkins distribution build service, and the third project, Windows services and YAML support. But we also have other students on the call. We have Sumit, who is working on external fingerprint storage, which is arguably a part of the Jenkins platform as well, and we have Kezhi, who is working on the GitHub Checks API. If they're interested, I think we can talk about their projects as well. Any particular order you prefer? No; you have one of them on screen, basically, so we can go with that. All right, I'm going to mute myself so that my clickety-clack keyboard sounds do not disturb, and if others would like to share, you're welcome to: just call me out, I will stop sharing, and you can take over. Let's go ahead and get started then. Rishabh, would you like to start with the Git plugin performance improvement? Sure, I'll start. Mark, you have to enable screen sharing. And yes, screen sharing works now. So for phase two, one of the major deliverables we had for the Git plugin performance improvement was to implement the insights we gained from the benchmarks we ran throughout phase one. What we've done is create a class inside the Git plugin which is currently called the GitRepoSizeEstimator class; the name might change. This is the architecture, and I'm going to explain what it does. This class will enable the Git plugin to recommend the optimal Git tool for the current repository size: if we have the size of the repository, we can tell which implementation we should use. The rule we use to decide was derived from the benchmarks we executed during phase one.
So, from the start: the class can be instantiated in two ways. The first is from a multibranch project, using an SCM source object; or it can use a remote URL, the repository's URL. Both are possible. The first thing it does is check for a cache: for a multibranch project, the .git repositories are stored as a cache. So we estimate the size of the repository using that cache, and then we apply a rule to recommend the Git tool the plugin should use. One thing I forgot to mention is that we will not check out any repository. The aim of this class is to estimate the size without checking out the repository, because if we had to check out the repository first, the whole purpose of improving performance would be defeated. The second option is using other plugins to find out the size. How are we doing that? We have exposed a new extension point, currently called the RepositorySizeAPI, which can be extended by plugins like the GitHub Branch Source plugin, the GitLab Branch Source plugin, or the Bitbucket one. Those plugins can issue an HTTP GET request to ask their hosting service for the size of the repository. Once we have that, we can use it when we don't have the .git repository cached in the project; then we apply our heuristic, and finally we recommend the tool the plugin should use to perform the Git operations. Right now the status of this class is that the pull request has been raised and is under review. I have written some of the test cases; some I still have to write. That is the progress with this class. The second thing I'd like to discuss is the benchmarks, since the second objective was to expand the benchmark study we were doing.
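The flow described here (estimate the size from the cache or from a provider plugin, then apply a rule derived from the phase-one benchmarks) can be sketched roughly as follows. This is an illustrative sketch only: the function names and the 5 MiB threshold are invented for this example and are not the plugin's actual code.

```python
def recommend_git_tool(repo_size_kib, threshold_kib=5 * 1024):
    """Pick a Git implementation from an estimated repository size.

    Mirrors the heuristic described in the talk: JGit tends to win on
    small repositories, command-line git on everything else. The 5 MiB
    threshold is an illustrative value, not the plugin's tuned one.
    """
    if repo_size_kib is None:
        # No estimate available (no cache, no provider plugin): keep the default.
        return "git"
    return "jgit" if repo_size_kib < threshold_kib else "git"


def estimate_size_kib(cached_git_dir_bytes=None, provider_size_kib=None):
    """Prefer the local .git cache; fall back to a provider's reported size."""
    if cached_git_dir_bytes is not None:
        return cached_git_dir_bytes // 1024
    return provider_size_kib  # may be None if no provider plugin is installed
```

The point of the two-step design is that the estimate is always obtained without a checkout, exactly as described above.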
Initially we were only taking the size of the repository as the parameter for judging the performance of git fetch. Now we have tried multiple parameters: the number of branches in the repository, the commit history, the commit size, and the number of tags. While running these experiments, we made sure that the size of the repository did not change, because if that changes, we cannot infer anything from the tests. The first test you see here varies the number of branches per repository: there is one sample base repository with just one branch, and then it increases to 10, 100, 2,000, and then 5,000. The most obvious thing in the graph is that the time taken by git fetch increases as the number of branches increases; that much is expected. As for the difference between the implementations, git and JGit: JGit performs better than git for fewer than 100 branches. I think that is the only valuable insight from this benchmark. The next one: here we kept the size constant and varied the number of commits, from 100 up to 1,000 and 5,000 in similar steps. We can see two noticeable points. The first is that there is not much difference in the performance of git fetch as the number of commits varies; with git it is almost the same, a difference of about a millisecond, and with JGit it is the same story. The second point: again, JGit performs better than git in all cases for this parameter. So, that is also something interesting. The last test is tags.
Here we did the same thing: we kept the size constant and increased the number of tags. One of the major findings of this test was that increasing the number of tags affects the performance of git fetch much more, magnitude-wise. If we compare branches and commits, the difference in performance is not that large, but with tags, you can see that for 5,000 tags, which is a very high number, the cost is almost half a second. That is how much time it takes when there are too many tags. And again, there is a difference between git and JGit: JGit performs better in all cases. From these tests, what we can infer is that JGit can perform better than git; we have now seen several conditions where it does. Before these tests, the known condition was that the repository had to be very small, like 5 or 10 MB or less, for JGit to perform better than git. But here we can see that for certain other parameters, JGit will also perform better than git. The bigger question is: which parameter affects performance the most, and will that parameter overshadow the others? In other words, how strongly does each parameter correlate with the performance? That is a question I have not figured out yet; we could discuss it with the mentors as well. Apart from these, we have improved the benchmarks themselves, in the sense that we added a validation class, an initial check, which verifies the validity of the operations being performed. And yeah, that's it; that's what I wanted to say. Thank you. Thanks for the update. The improvements look pretty nice. I'm looking forward to seeing them released. So most of the changes are in the Git client plugin, right? Or are they in the Git plugin?
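As an aside, the experimental discipline described above (vary one parameter, hold everything else fixed, repeat runs to control noise) is what the project's JMH benchmarks automate. A heavily simplified stand-in looks like this; the real benchmarks live in the Git client plugin and use JMH, not this toy harness:

```python
import time

def best_time(fn, repeats=5):
    """Run fn() several times and return the fastest wall-clock duration.

    Taking the minimum over repeats is a crude way to reduce warm-up and
    scheduling noise; JMH does this far more rigorously with warm-up
    iterations and forked JVMs.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best

# Compare two implementations on identical input, the same way the
# benchmarks compare git and JGit while holding repository size constant.
data = list(range(10_000))
t_a = best_time(lambda: sorted(data))
t_b = best_time(lambda: max(data))
```

The key property carried over from the talk: both candidates see exactly the same input, so any timing difference is attributable to the implementation, not the workload.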
The class we have developed will be in the Git plugin, not in the Git client plugin. The benchmarks are in the Git client plugin. Yeah, the benchmarks live in the Git client plugin. I'm looking forward to seeing that. For me, the biggest repository I check out more or less regularly is the Linux repository, and it's really big, so it would help a lot. Yeah, you must not use JGit for that repository. I don't think that two-gig monster is a beautiful thing. It's impressive how pretty that repository is, but never, ever touch it with JGit; JGit is just not ready for that repository. Okay, so thanks a lot. Any questions you would like to ask the Platform SIG, or do you need additional feedback? The feedback I would need relates to the class I've created: if you could review it, that would be great. And one more thing, and I'm not sure it's a question, but one thing I'd like to say is that to make that class useful for the Git plugin, I need to go to the other plugins, like GitHub Branch Source, GitLab Branch Source, or Bitbucket, and either encourage the developers there or create the extensions myself, so that when we're using a multibranch project with those plugins, my class is able to derive the information we need to recommend the best implementation. That is something I will have to do. I've decided I will do it for the GitHub Branch Source plugin; I also raised a discussion thread on the Jenkins developer mailing list. So I'll start with that. Thank you. Should I stop sharing my screen? Thank you. Yeah, I think next up is the custom distribution service. Yeah, so I can share my screen. I don't have a slide deck ready; I hope I can save that for the demos. But I do have a couple of updates that I would like to share with the Platform SIG.
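For illustration, the kind of lookup a RepositorySizeAPI provider could perform for GitHub might look like the sketch below. The GitHub repositories endpoint does report a size field (in kibibytes), but the function names here are hypothetical, not the plugin's actual implementation:

```python
import json
import urllib.request

def parse_repo_size_kib(body):
    """Extract the 'size' field (reported in KiB) from a GitHub
    GET /repos/{owner}/{repo} JSON response body."""
    return json.loads(body).get("size")

def fetch_repo_size_kib(owner, repo):
    """Ask the GitHub REST API for a repository's reported size.

    A RepositorySizeAPI-style provider could do something like this;
    note it needs network access and is subject to API rate limits.
    """
    url = f"https://api.github.com/repos/{owner}/{repo}"
    req = urllib.request.Request(
        url, headers={"Accept": "application/vnd.github+json"}
    )
    with urllib.request.urlopen(req) as resp:
        return parse_repo_size_kib(resp.read())
```

GitLab and Bitbucket expose comparable metadata endpoints, which is why the extension point makes sense across all three branch-source plugins.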
As we saw in the last demo, the custom distribution service was able to generate configuration packages given a set of plugins: you could provide the service with a couple of plugins, and it would generate the configuration package that you need in order to build the WAR. For the phase two milestones, one of the major items was being able to generate the WAR file, and another was being able to share community configurations; those were the two major ones. In addition, a third was being able to make pull requests using a bot. I'll talk about some of them in detail without taking up too much time. I'll start with the WAR download feature. One of the major things we added this time was the WAR download: you can now hit the WAR generation endpoint, and it will download the WAR file for you for a given configuration. So you can generate the configuration, provide it to the service, and the service will generate the WAR file and download it as well. There are a couple of limitations: we do not support Configuration as Code, so if you provide a Configuration as Code section in your configuration file, it will break, because we have no support for that as of now. Maybe that's something to write in the README later on, when the project gets self-hosted. But for now, there is no Configuration as Code support; that is one caveat of the WAR download feature. The next feature we added was the community configuration page: users are now able to share all of the configurations they have developed. For now it just points at a local repository.
Let me find the custom distribution service repository; give me a second. Until then, it's just hosted locally on my account, and I hope it gets hosted later under the Jenkins organization, so that everyone can find all of the community-shared configurations, with their own release cycles and so on. Another feature this update supports is that you can add your own URL. If you or your company have a URL you want to share configurations through, or provide configurations from, we support that: you just change a bit of the environment file configuration, and you can point it at a repository where you host all of your configurations. User docs have not yet been added, but they will be soon, so users can see what steps to follow to customize this. For now, if you run it locally, you can definitely store your configurations in that community configuration repository. That was the second update. The third one, which was a major goal but wasn't completed in this milestone, is being able to create pull requests automatically using a custom distribution service bot. As the mentors decided, we do not know yet whether this will be hosted as a web application or a web service, so the pull request bot doesn't quite make sense as of now; there's no point in having a bot if the service is not hosted. So this was another update we tried to add, but as the mentors decided, we will defer it for the future, for when the service is hosted somewhere. Anyway, that was it. Another update was some minor search functionality: you can search plugins, you can search community configurations, and so on.
Those were some of the minor updates. And the last update is that Docker Compose now works. There are still changes being made to it, but you can definitely get quick-started with it. So that was the last update, and the project now seems to be in a condition where it can be run by the community itself. There are some changes still in progress, as you can see, such as connecting the back end and front end using the new Docker Compose config, but you can still test out a number of the features now. So these were the major updates, and we have achieved almost all of the milestones for phase two; the project looks to be in good condition. I'm open to any questions. Thanks a lot for the hard work. It's great to see that this basically creates a new level of services and user experience on top of the tools we discuss in this SIG, like the Custom WAR Packager or the Docker images, etc. Basically, this project aggregates many other projects we have in the SIG and provides them as a service or as a self-hosted application. I'm really looking forward to trying it out. Maybe we will even be able to run it as a kind of image controller in the future, especially if there is support for Kaniko; then we would be able to build packaged images on demand by using the tool as a service. I'll probably also add an external fingerprint storage configuration there. Definitely. Thank you. Any other questions? Yeah, I'm done. Good. Okay, next up: YAML configuration support for the Windows service wrapper.
The main tasks for this phase were YAML configuration support, the new CLI, and schema validation. On the first two: I have completed the YAML configuration support, and it has already been merged into the version 2 master branch. The new CLI is almost finished; there are some ongoing discussions about it, and it will be merged in. The next step is bringing these updates into version 3, so I think I might be able to merge the new CLI into version 3 before phase two finishes; I don't know yet. There are a few ongoing updates at the moment. There are extensions on Windows, like the runaway process killer; I shared that it has already been removed from this list. However, in the YAML configuration support I still have to add support for those extensions as well, so that remains to be done. Also, at the moment I'm writing the user documentation for the YAML configuration support. Talking about the YAML support: as we discussed in phase one, we now provide those configurations in a more structured way, such as the log settings, service account, download, on-failure actions, and environment variables. Those are now handled in a more structured way, both in the configuration format and in the codebase. I have also published a YAML "all options" sample file, where all the configurations are exercised, as a test file, so a user can use it as a reference for the configuration format. It has not been merged yet, and there is another pull request created with further updates for the YAML configuration support. Talking about the new CLI: the redirect command has been removed as part of the post-phase-one updates. Those are the major updates on this project. There are two open pull requests at the moment which I have to complete, one of them for the new CLI; it is under review, and I'm not sure whether we can merge it in phase two, so we have to discuss that.
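To give a flavor of what such a YAML service definition can look like, here is an illustrative fragment. The field names follow WinSW's existing XML schema (id, name, description, executable, arguments, log); the exact YAML layout in the released version may differ, so treat this as a sketch rather than the authoritative format:

```yaml
# Illustrative sketch only; consult the project's documentation for the
# authoritative schema.
id: jenkins
name: Jenkins
description: Jenkins automation server
executable: java
arguments: -jar C:\Jenkins\jenkins.war --httpPort=8080
log:
  mode: roll
```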
The YAML configuration support pull request has been merged, but I opened another pull request with a few more updates for it, and it's under review; I think we can merge it before the phase two presentation. Those are the updates I had to present. Any questions? Yeah: basically, this project is one of the reasons why we did the groundwork for the .NET Framework support in the Jenkins core, because YAML configuration support is highly demanded by configuration management tools, so landing it will help Jenkins administrators a lot, especially if they use various tools to deploy the service. And I'm really happy to see that we're getting close to the first release with this support; hopefully it will be out by next week. Okay, shall I stop sharing my screen? Thank you. Mark, you don't use Windows services, right? Not yet. I've got to get down to that level; I'm still logging in on a desktop and running from a desktop. So you're right, I need to use services more. I like the new MSI installer and how it handles the service support, so this looks like a great project. Thanks very much. So, external fingerprint storage is our next topic, is that right? Hi, can you hear me? Great, awesome. I don't have a presentation set up, given the short notice for today's meeting, but I'll just talk about the project and what we did in phase two. As a quick recap of phase one: what we're basically building is an external fingerprint storage engine for Jenkins, so that all your Jenkins fingerprints, instead of being stored on the physical disk, can be stored in external storage. As a reference implementation, we built the Redis fingerprint storage plugin around it: you configure Redis, and your fingerprints are automatically saved in the external storage.
That is what we did in phase one, but a few features were still missing, and we targeted those in this phase. One was that, earlier, the plugin was configured directly at installation. We have now refactored it to use a descriptor implementation, so that the user can go to the Jenkins configuration page and choose the external fingerprint storage engine they desire. If tomorrow we get another fingerprint storage plugin, say Postgres or MySQL, the user can just install that plugin and choose which one they want. That was one of the features, and it was released in Jenkins core 2.248. The next feature we targeted was fingerprint cleanup. As context, fingerprints are automatically deleted on a periodic basis whenever the builds they refer to are no longer present on the system; when those builds are gone, the fingerprint is supposed to be deleted. But this functionality was not yet available for external storage, so we extended the fingerprint storage API to support it. External fingerprint storage plugin developers now have methods through which they can wire their storage engines into this cleanup facility. That was also released in 2.248. The Redis fingerprint storage plugin implemented this API, and we used cursors to optimize the cleanup, so that we don't have to traverse all the fingerprints in a single call. Then we targeted migration. Migration has not been released yet, but the PR is there.
Fingerprint migration, basically, addresses this: when a person configures an external fingerprint storage, they might already have fingerprints on the old physical disk, and those were previously untouched. So we have introduced a lazy migration system, whereby whenever an old fingerprint is referenced, it is automatically transferred to the external fingerprint storage as and when it is used. That is the fingerprint migration we targeted, and we also improved the testing for our plugin. I think that's about it; that's what we targeted. And we might be looking at a new reference implementation in the coming weeks; we'll see. You said you were considering potentially a new reference implementation. Do you have any hints about which storage back end you'd be using, or is that still to be determined? We have actually determined it: we are doing it for Postgres. In the last meeting we discussed that we can offer certain optimizations when it comes to Postgres, so the API can be built around that. Postgres is also a challenge because it has a relational structure, whereas Redis is basically a key-value store, so it's a new challenge for us, and it will serve as a different kind of reference implementation for plugin developers working with relational storage. Awesome. Any more questions? Just one comment from me. Firstly, it's great to see the work progressing so fast. We integrated the key API changes, the APIs for fingerprint cleanup and so on, into 2.248 several weeks ago. All of this API is still in beta, but it looks pretty solid, so hopefully in one of the next LTS releases we will be able to say that this API is generally available. One great thing is that now everybody can develop their own storage implementation.
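The lazy migration scheme described above reduces to a simple read-through pattern. Here is a language-agnostic sketch in Python; the real Jenkins FingerprintStorage API is Java and its method names and signatures differ, so treat this purely as an illustration of the idea:

```python
class InMemoryStorage:
    """Stand-in for a storage back end (disk, Redis, Postgres, ...)."""

    def __init__(self):
        self._data = {}

    def save(self, fp_id, fingerprint):
        self._data[fp_id] = fingerprint

    def load(self, fp_id):
        return self._data.get(fp_id)

    def delete(self, fp_id):
        self._data.pop(fp_id, None)


class LazyMigratingStorage:
    """On read, fall back to the legacy store and migrate the record to
    the new store on first touch, as in the scheme described above."""

    def __init__(self, new, legacy):
        self.new, self.legacy = new, legacy

    def save(self, fp_id, fingerprint):
        self.new.save(fp_id, fingerprint)

    def load(self, fp_id):
        fp = self.new.load(fp_id)
        if fp is None:
            fp = self.legacy.load(fp_id)
            if fp is not None:
                self.new.save(fp_id, fp)    # migrate on first reference
                self.legacy.delete(fp_id)   # old copy is no longer needed
        return fp
```

The attraction of this design, as opposed to a one-shot bulk migration, is that old fingerprints are moved only when they are actually used, so configuring a new back end costs nothing up front.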
In this project, Sumit works on reference implementations, but as for other storage back ends, we actually invite users and Jenkins adopters to implement something for their own needs. For example, if you want to keep the data in Elasticsearch, or in whatever database is available in your cloud, like DynamoDB, you can just take this API and write the appropriate plugin. I guess in the next phase we may also have an external fingerprint storage API plugin which provides some basic shared API. Sorry, I didn't get the last part. In the next phase, we may also have an external fingerprint storage API plugin. Yeah, actually that is something I want to discuss in today's sync-up; we have a sync-up today. Basically that is tracing, essentially, but we hit some roadblocks along the way for tracing, because it also needs a use case, and we tried starting threads on the developer mailing list, but it's July and we did not get many use cases. So we'll see what happens. Thanks for this work. Excellent results, and impressive that it's in a weekly release already. Since you mentioned general availability of the API: what are the criteria for deciding when it's declared generally available? We follow the Jenkins Enhancement Proposal process, and there is already a proposal submitted as a draft, JEP-226. The process is basically: ship the API, which is in beta, ship the reference implementation, then get feedback from adopters and from core maintainers, and, if everything is fine, accept the JEP and make the API public. I don't expect that to happen for the September release; most likely there will be a three-month lag, and maybe sometime in December we will make it GA. By December we will hopefully have enough field feedback about the feature. We can look forward to that. Thank you.
Okay, and beta APIs are quite popular; for example, the Artifact Manager on S3 plugin uses beta APIs from the Jenkins core. Right. Beta is not a mark of shame or of ignorance; it's one of the ways to deliver experimental changes, and it works quite well. Excellent. The GitHub Checks API is next; we have Kezhi on the call. I'd like to present that in our next demo, but for now, we have implemented the consumers: the integration into the warnings checks and the coverage checks, and that works fine. In case I missed it: you've implemented the coverage checks, but could you say again what the other one was? Yeah, we consume the general API, implemented earlier, from the Warnings NG plugin and the Code Coverage plugin. There is one disappointing aspect: we can't send the trend graphs, the trend diagrams, from Jenkins to the checks page. Those diagrams in the Jenkins pages and views are HTML-based, but what the checks page needs is essentially a link to an image, so we didn't implement that feature. Instead, we provided some markdown-based trend charts, and that works fine as well. It's a great start, and maybe in the future we could have images too, because there are other use cases where images would be useful. At the previous Cloud Native SIG meeting we had a discussion about Jenkinsfile Runner. It's an engine which executes a pipeline and then stops, obviously without providing a web UI. If it were able to dump reports as HTML or as images, that would be really helpful as well. And maybe at some point we could have this feature as part of the Jenkins charting framework, because we are now consolidating the charting around ECharts; then we could probably add some logic for generating images and downloading them.
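Since the checks output can carry markdown but not attached images, the markdown-based trend idea mentioned above boils down to rendering the series as text. The format below is invented for illustration and is not the plugin's actual output:

```python
def markdown_trend(history, width=20):
    """Render (label, value) pairs as a markdown table with a crude text
    bar chart, similar in spirit to the markdown trend output described
    for the checks integration (whose exact format differs).
    """
    peak = max(v for _, v in history) or 1  # avoid dividing by zero
    lines = ["| Build | Issues | Trend |", "|---|---|---|"]
    for label, value in history:
        bar = "#" * round(width * value / peak)
        lines.append(f"| {label} | {value} | {bar} |")
    return "\n".join(lines)
```

For example, `markdown_trend([("#41", 2), ("#42", 4)])` produces a two-row table whose bars scale with the issue count; such a fragment can be embedded directly in a markdown-capable check summary.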
One thing I'm wondering: would it be feasible to get those images from the REST API, if we implemented that? It's technically feasible. One problem is that it might produce a lot of traffic: you would need to cache these images somewhere, and maybe put them on a CDN. For example, consider the tools that attach images to issue comments on pull requests; there is no magic, they just upload the image somewhere using an API and then use the link. So maybe we could use an image hosting service like GitHub's for that. If not, the REST API definitely makes sense, but in that case you will need to think carefully about the implementation and the load it would put on Jenkins instances. It's definitely an excellent feature, because if you have the REST API, you can generate an image, download it, and then host it somewhere with enough redundancy. If you're interested in implementing this feature, I think it will find a lot of users on its own. Right now I personally just take screenshots of graphs when I need to share something, but if there were a button to download an image from the web interface, which basically boils down to the same REST API, it would be really helpful. I've fallen in love with a Chrome extension that takes pictures of my Chrome web page, the same technique you're describing, and I would love what's being proposed here, if it could eventually become available. Thanks for the update, and thanks to all the students. Unfortunately, we didn't have many participants on the call today because it's summertime; everything is super slow, but that gives you a lot of opportunities to develop things. If you have any questions or comments, let's discuss them; we still have 10 minutes until the end of the meeting. If you need any assistance, or if you're looking for ideas, this is the place to discuss them. Thanks again to everyone for the demos.
So Oleg, I have an open question for the students who are still on the call: are your mentors actively engaged? Do you feel like things are working? If not, take that to Oleg or to the org admins separately. We want to be sure that the mentors take good care of you and that you feel well supported by your mentors and the Jenkins project. We're doing weekly check-ins with mentors. This year it's a bit informal, but we still try to contact everyone every week. If you need something, now is definitely the time to let us know, because next week is the evaluation. And if you haven't voted yet in the Doodle for demo times, please do, because we need to schedule the meetings. This phase we will most likely run the demos again as an internal event, but for the next coding phase we will definitely be doing a Jenkins Online Meetup, multiple meetups, I guess. So stay tuned, and thanks to everyone for the great work; the whole program is working really well this year. I'm looking forward to seeing these features released. Thank you, everyone. Thanks, all. Excellent. Let's go ahead and end the recording. We'll post the recording in the Platform SIG playlist. Thanks, everybody.