Today is July 29th. Again, we are doing the first part of the coding phase two demos. I'll do a really quick introduction and then we will spend most of the time on demos and discussions. Just to introduce Google Summer of Code: GSoC is the world's biggest open source mentorship program. It has thousands of students each year, and the Jenkins project is proud to participate. It's our fourth year, and during the previous years we had a number of great projects. This year we have seven active projects, six in Jenkins and one in Jenkins X. That's basically the summary. Just to clarify, this year we have two organizations participating in GSoC. One is Jenkins, an umbrella organization for the Jenkins and Jenkins X projects. But our own umbrella organization, the Continuous Delivery Foundation, also participates in Google Summer of Code, with projects for Spinnaker and Screwdriver. And if you go further, even the Linux Foundation participates in GSoC. So yeah, there are a lot of organizations this year. Today we present the projects in the Jenkins organization, and you can find the list here. There is the custom Jenkins distribution build service, the machine learning plugin, Git plugin performance improvements, YAML support for Jenkins Windows services, the GitHub Checks API, external fingerprint storage, and also a project for Jenkins X. So these are the projects we have. Right now we are three months into GSoC: we had a month of community bonding and two months of coding, and basically all projects are ready to be presented. We have three demos to show today. If you're interested in GSoC, we have a public mailing list, we have Gitter, and we have office hours. During the summer timeframe we usually do them on demand if somebody is interested, but if you want to participate, we're happy to host them. And also, every project has its own channels.
So how to find them: you can go to our projects page, and here you can find all the information about Google Summer of Code in Jenkins. For example, if you're interested in the custom Jenkins distribution build service, you go here, and you can find the project details, references to materials, plans for each phase, and also communication channels. All our projects would appreciate feedback and evaluation by users, so if you're interested in them, please use these channels to contact the teams. Okay. What else do we have? Yeah, Jenkins X operates in its own community channels, so if you're interested, please join their Slack; it's not on Gitter, where the majority of our communication channels are. Okay. Before we start these demos, I would like to thank all participants in GSoC: all the students, mentors, org admins, and also all the reviewers and other community members who participated this year. We got a lot of feedback from reviews, for example from Jenkins core reviewers and plugin maintainers, we also got a lot of feedback on the developer mailing list, and hopefully we will get a lot of feedback from users. So thanks to everyone who participated in GSoC this year. Let's start with demos. Today we have three demos: Git plugin performance improvements, the GitHub Checks API, and external fingerprint storage. The next three demos will happen tomorrow at the same time, so if you're interested to know more about the other projects, please join us tomorrow. For each demo we will basically have a small introduction, the demo, and then discussion and Q&A. If you have any questions, please feel free to ask. We are doing this meeting in Zoom, so basically everyone can unmute themselves and ask questions. And basically that's it. If you're watching this recording and you want to ask any questions later, please use our Gitter channel or the project channels, which will be communicated during the presentations as well.
So that's it for the introduction. Does anyone have any questions or comments? Okay, then let's proceed with the phase two demos. I will just stop sharing my screen. And the first presentation is by Sladyn. Sladyn, are you ready? Oh, no, actually it's not Sladyn, it's Rishabh. Yeah, it's the Git plugin performance improvements project. Sorry. No worries. I hope you all can see my screen. Yes, we can. Okay, welcome everyone. This is the phase two review for the Git plugin performance improvement project. I'm Rishabh Budhouliya. Before starting, I'd like to thank my mentors in this phase. We've tried to design a new feature and implement it, and their advice has been really helpful to me, and I'm glad that I'm working with them. So, a brief summary of what we have done in the project. The singular aim of the project is to improve the performance of the Git plugin. The essence of phase one was to differentiate the performance between the Git implementations we have inside the Git plugin, which are command-line Git and JGit. We used micro-benchmarking principles to do that, and we used JMH as the framework; it provides us the environment to design, implement, and analyze benchmarks. So we implemented a module inside the Git client plugin to do that. Now, one of the major experiments we did was to compare the performance of the git fetch operation for Git and JGit. What we found was that there is a strong correlation between the performance of git fetch and the size of a repository. What was not that obvious in the results was that JGit's performance behavior changes after a certain repository size. If the repository size is, let's say, less than 50 MB, JGit would perform better than Git; but for a larger repository, let's say 400 or 500 MB, JGit's performance would degrade exponentially.
We found that out during phase one, and we also fixed the double-fetch issue in the checkout step of the Git plugin. Now, the second phase was about implementing the insights we've gained from the benchmarks inside the Git plugin. And to do that, we've created a new functionality called the Git tool chooser, which is basically going to try to recommend the optimal Git implementation for a particular remote repository. The second thing we wanted to do was to expand the scope of the benchmarking to multiple repository parameters, like branches, commits, or tags. And we wanted to see the consistency of our results across multiple platforms. So I'll start with the Git tool chooser. As I've explained, it's basically going to recommend a Git implementation which is optimized on the basis of the repository the plugin is using. And what does it need to do so? Either your Jenkins instance has a branch source plugin like GitHub, GitLab, Bitbucket, or Gitea, or you have a multibranch project. If you have either of those, you can use the functionality to improve performance. How? This is a two-part answer. The first part: from the insight we've gained from the benchmarks, we now have a size rule. For a particular size, we know which implementation is going to perform better than the other. The second part of the how is the architecture of the class. If you have a multibranch project within the Jenkins instance, we can use the cache stored in the workspace to estimate the size of the repository and then recommend the optimal Git tool, which is the implementation.
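The two ideas behind the chooser (the size rule from the benchmarks, plus a size estimate taken from a local cache first and a provider's REST API as fallback) can be sketched roughly as below. The 50 MB threshold and all names here are illustrative assumptions for this transcript; the real chooser is a Java class inside the Git plugin, not this code.

```python
# Illustrative sketch of the Git tool chooser logic; threshold and
# function names are assumptions, not the plugin's actual API.
SIZE_THRESHOLD_MIB = 50  # below this, JGit tended to win in the benchmarks

def recommend_tool(size_mib):
    """The size rule: JGit for small repositories, command-line Git otherwise."""
    return "jgit" if size_mib < SIZE_THRESHOLD_MIB else "git"

def estimate_size_mib(remote_url, workspace_cache, api_lookup):
    """workspace_cache: {url: size_mib} standing in for the multibranch cache;
    api_lookup: callable standing in for a branch-source plugin's REST call."""
    if remote_url in workspace_cache:   # multibranch project: use the local cache
        return workspace_cache[remote_url]
    return api_lookup(remote_url)       # otherwise ask the provider; may be None

def choose(remote_url, user_choice, workspace_cache, api_lookup):
    size = estimate_size_mib(remote_url, workspace_cache, api_lookup)
    if size is None:
        return user_choice              # no estimate: keep the user's configured tool
    return recommend_tool(size)
```

For example, with no cache entry and an API lookup reporting 450 MB, `choose` would override a user's JGit selection and return `"git"`, matching the behavior described in the demo.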
If you don't have that, if you don't have a multibranch project in your instance, we can use the repository URL, and then we depend on other plugins to ask for the size of the repository using the REST APIs provided by GitHub, GitLab, or other Git service providers. So if you have that in your environment, we ask them for the size, they provide us the size, and if we have that, we would again provide you the recommendation. So I'd like to show you how that is going to happen. This feature has not been released yet, so this demo is going to be on my local machine. I created two projects; I cannot show you a live demo, because profiling to see the performance results would take time. In both of the projects, as a user, I've chosen JGit as the implementation I want to use for a repository, Ruby, which is around 400 to 500 MB. So what is the difference between the projects? In this project, I'm not using the Git tool chooser; it's not there. And for the second one, we're using the Git tool chooser. Now, what is the difference in terms of the expectation for a user? In the second project, the Git tool chooser class would recommend Git instead of JGit, even though you've chosen JGit, because internally it would calculate that for this particular case, a repository size such as 400 MB, Git would be a much better implementation than JGit. So what kind of difference are we seeing in performance? I profiled this Jenkins instance using Java Flight Recorder; I attached it to the Jenkins instance. This is the performance thread for the project where we don't have the Git tool chooser, and what you see here is the thread execution for git fetch. It is taking around five minutes to execute that step, which is the majority of what the checkout step takes.
Now, with the introduction of the Git tool chooser, what you see is that the fetch is going to take less than two minutes. So this is what is going to happen if we include this functionality within the Git plugin; but this only matters if you chose JGit as the implementation to perform the Git operations. So I think this is what the Git tool chooser wants to do. Now, there are some challenges which we've faced and are still facing. The first is that, and we've discussed this, we still want to decide whether to give the user an option to enable this feature at the global configuration level, or at a much tighter scope at the project level. The second is that, since we depend on other plugins to get the size, we have exposed an extension point which, upon implementation, can communicate with the REST APIs of those providers; we still need to implement it to have support across GitHub, GitLab, Gitea, and Bitbucket. The other challenge we have is that JGit doesn't support LFS checkout and shallow checkout, so we need to make sure that we don't recommend something which would break existing use cases. Now, the second part of this phase's progress: we wanted to expand the benchmarking experiments we were doing. As of now, we've mostly tested Git operation performance against the size of the repository. In these experiments, what we tried was to keep the size of the repository constant and vary the number of branches, the number of commits, and the number of tags. With the first experiment, where we vary the number of branches, what we see is similar to what we saw before: JGit's behavior with the number of branches mirrors its behavior with repository size. JGit changes its nature after a certain increase in the number of branches, as you can see here.
But we can also see that for less than 100 branches, the performance overhead is negligible, because the execution time we are measuring here is in milliseconds per operation. So in terms of the whole plugin's performance, it would not make much difference when we are talking about less than 100 branches. If you're talking about more than that, at some point it might add half a second. So we're not thinking of using this as a parameter to gain any actionable insight. The second experiment is with the number of commits. What we can see is that neither implementation is affected too much by the increase in the number of commits; as you can see, both of them almost follow a constant line. JGit is performing better than Git, though, so that is something. The third experiment was with tags. With tags, what we see is that the correlation is much stronger: the amount by which an increase in the number of tags affects the performance of git fetch, for both of the implementations, is much higher than for branches or commits. For 1,000 tags or more, there is almost half a second added to the git fetch operation. So we would like to add this to the current tool we have. We're not sure how we're going to add it right now, because we need to check whether the repository-size experiments are significant enough that we can leave this parameter out, or whether we should include it. That is something we have to explore. Those are the results with those parameters. Another experiment we did was to check whether the benchmark results we've gained on a single platform are consistent across multiple platforms. We needed to see that, and it's important for us. So we compared the performance of a git fetch operation using a 400 MB repository. For the platforms, we used Windows, FreeBSD 12, an IBM workstation, and s390x.
The most important observation here, and I'd like you all to concentrate on the second graph, is the red line. This red line marks the difference in performance between JGit and Git. And if you observe, this line remains almost constant across all the platforms, whether it's FreeBSD, an IBM workstation, or Windows. So that tells us that if we have computed our benchmark results on the basis of, let's say, a Linux instance, our estimation, our recommendation, would not vary across multiple platforms, which is a great thing, I think. The next phase: the most important thing for us is to release the features we've added in phase one and phase two, and that includes solving the current challenges we have. We need more test cases and more support from other plugins; we need to implement the extension points we've provided. Apart from that, we'd like to explore other areas of the Git plugin where we can improve performance, and if we have time, we might implement git clone inside the Git plugin. Currently, we do a git init plus git fetch step instead of git clone, so we might look into that. We haven't discussed it much; we'll work on it after the start of the next phase. So, yes. And this is it from my side. Any questions? Thanks for the summary. It looks like there is a lot of research going on for this project. My main interest would be the implementation and what will actually ship to Jenkins users. I actually have a question. Yes. My question is: when you compared fetching versus the number of branches, it wasn't clear to me which one affected performance the most. It was one of your slides. Yes, this one. You're comparing branches and tags, yes. You said one of them has a lot more impact on the performance, but it wasn't clear to me which one.
With tags, we are seeing a greater impact. Here's how we can see that: the y-axis is the execution time for git fetch, in milliseconds per operation. For branches, for a practical number of branches, let's say a hundred or anything less than 500, we're not seeing much of a performance overhead; it's less than a quarter of a second. But with tags, if we increase the tags past a point, we would see an overhead of half a second or maybe more. To be more sure about it, I would actually like to calculate it. In the research, I've used the Pearson correlation coefficient, which quantifies the linear relationship, if there is one, between the two parameters we're discussing here, the first being the tags and the second being the git fetch performance. It would quantify the relationship between them, and then we might be able to say more confidently what kind of impact it is showing. Yes, Mark. Thank you. I think it's maybe important to note that those are to be compared with the impact of size, which was from the earlier phase. It's not exactly a comparison with size, but size has been the primary thing that Rishabh has been looking at, and this is looking at additional factors. And in terms of platforms, primary development has been on Linux and Mac, so those other platforms were tested to make sure that what we were seeing on Mac and Linux also held on Windows and PowerPC. Thanks. I think that's it from my side. Any other comments or questions? So yeah, I would still be interested to know more about the plans for release, so maybe clarify what the next steps are, because it looks like a great addition. As a user, I am definitely looking forward to trying it out. One more question here. You said you had removed the duplication in fetching.
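As a quick aside, the Pearson correlation coefficient mentioned above is straightforward to compute directly; a minimal sketch follows, where the sample pairing of tag counts against fetch times is made up for illustration and not taken from the project's benchmark data.

```python
# Pearson correlation coefficient between two equal-length samples,
# e.g. number of tags vs. git fetch time in ms per operation.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)  # +1: perfect linear increase, -1: decrease
```

A value near +1 for (tags, fetch time) would support the claim that tag count linearly drives git fetch cost.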
Was that a coding phase one activity? Yes, it was; I was recapping what we've done for phase one. Okay. And the next one you said: possibly look at git clone versus, implementing clone versus letting it do a bunch of fetches. So you're also thinking there might be performance improvements there? We haven't compared git clone's performance versus git fetch, but actually that's an interesting thing we could do. Right now we were just thinking that we say we clone a repository, but we actually perform a git init plus git fetch there. So that's something I would explore; I haven't compared their performance. Yes, Mark, yes. We have anecdotal information from one or more bug reports in JIRA which claim that the choice to use git init plus git fetch is actually less efficient than using git clone. Now, the benchmarking that I did two or three years ago on that bug report did not support the assertion that the bug report was making, but we have users who say clone is faster than an init plus fetch. I don't have evidence of that, and Rishabh's benchmarks have not even attempted to test that, but we have at least one user who says no, it's much faster if you just use clone. So that's something we could definitely do before thinking of implementing git clone. Thank you. Thank you, Mark. So Rishabh, to Oleg's earlier question on the release plan: we certainly will do a release including the changes. We're excited by them and looking forward to them. A portion of the changes will probably release within the next week or two, and the full set, if not by the end of the project, then shortly thereafter. Sure, that'd be great, Mark. I'm looking forward to it. Okay, if there are no other questions, thanks a lot for the presentation, and I suggest we move on. The next presenter is Kezhi, and he'll present the GitHub Checks API for Jenkins plugins. Okay, so I'm going to share my screen. Can you see my screen now? Yes.
So hello everyone, I'm Kezhi. I'm going to talk about the GitHub Checks API plugin, and my mentors are Uli and Tim. First, about the features from phase one: we have a general API, now hosted in the Checks API plugin, and we also added an implementation for the GitHub Checks API, hosted in the GitHub Checks plugin. We have now released both of the plugins; you can search for them in the plugin manager. We released the 0.1 version and are looking forward to our first user installations. In phase two, we actually used the Checks API in practice, and first we used it in the warnings plugin. We use it to report the quality gates: as you can see, there are many green checks and a red X here; they represent the quality gates from the different tools. And here are the messages, like the new issues, or no new issues, and the total. If you want to see more details, we'll show you the severity and the issue statistics, and the next thing is annotations. Here you can see a CheckStyle warning: it's the parameter number check, the severity is normal, and there's the message. In the raw output you'll see some raw messages from the tool's report; normally it's XML, so you'll see some tags here. And let's go to the link. That's the checks page, and you'll see the two different tools here. We will only show the annotations for the new warnings, for the new issues, because if we showed the total issues there would be too many; with CPD, for example, sometimes more than 100 for a big project. And if you want to see the annotations on the code, you just click this button here and you'll see these annotations alongside the code, for CheckStyle or PMD. Those annotations look just like review comments. And we have already merged this feature in the warnings plugin.
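For readers unfamiliar with the underlying API, the annotations just described map onto the payload shape of GitHub's REST API for check runs; below is a rough sketch of that mapping. The field names follow GitHub's documented check-run schema, while the warning record shape and function names are assumptions for illustration, not the plugin's actual code.

```python
# Map a static-analysis warning to a GitHub Checks API annotation and
# wrap a tool's warnings into a check-run "output" object.
def to_annotation(warning):
    """One warning becomes one annotation on the changed file."""
    return {
        "path": warning["file"],
        "start_line": warning["line"],
        "end_line": warning["line"],
        # GitHub accepts "notice", "warning", or "failure"
        "annotation_level": "warning" if warning["severity"] == "normal" else "failure",
        "message": warning["message"],
        "title": warning["check"],
    }

def to_check_output(tool, warnings):
    """Build the 'output' section of a check run for one tool."""
    return {
        "title": f"{tool}: {len(warnings)} new issue(s)",
        "summary": "No new issues." if not warnings else f"{len(warnings)} new issue(s) found.",
        # GitHub limits one request to 50 annotations, so a real client
        # would send larger sets in batches.
        "annotations": [to_annotation(w) for w in warnings[:50]],
    }
```

This also illustrates why the plugin only publishes annotations for new issues: publishing every existing issue of a large project would require many batched requests and flood the diff view.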
If you want to try it, you just need to update the warnings plugin to 8.4.0 and also install the Checks API plugin and the GitHub Checks plugin, then just use the warnings plugin in a project that uses GitHub branches, maybe a multibranch project or a GitHub organization project. But if you don't like this feature, if you don't want to see so many warnings or issues for your code, you can definitely disable it: you can skip publishing checks, just like other options, and you can also skip it in the pipeline script instead of enabling this feature. The next part is the Code Coverage API plugin. We use the checks there first to report the coverage trend. In the message part, you'll see the line coverage against the target branch, normally the master branch, and you'll see the branch coverage against the last successful build, and there will be links to the reference builds. You also get a coverage healthy score; you can control this score by setting the threshold for the coverage when configuring the plugin. Then we also add some details about the different coverage elements, like report or group, but the most useful ones, I believe, are line and conditional. Conditional is just branch coverage, which is called conditional in the Coverage API plugin. And you'll see the trend; this trend is compared with the last successful build. Here is a short message in the details. The target link goes directly to the reference build, and here is a link to the coverage action page. The Coverage API reports coverage in a recursive way, but I believe it would be too complicated if we used such a recursive style in the GitHub UI; there would be too many of those reports. Okay, so now I'm going to show the demo. First I'm going to show, with a reference build, how the warnings plugin works for these checks.
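The coverage comparison just described (current build against a reference build, plus a threshold-based health check) can be sketched as below. The numbers and the scoring rule are illustrative assumptions, not the Code Coverage API plugin's exact logic.

```python
# Sketch of comparing line/conditional (branch) coverage against a
# reference build and checking a configured health threshold.
def coverage_pct(covered, total):
    """Coverage as a percentage; an empty metric counts as fully covered."""
    return 100.0 * covered / total if total else 100.0

def coverage_delta(current, reference):
    """current/reference: {"line": (covered, total), "conditional": (covered, total)}.
    Returns the percentage-point change per metric versus the reference build."""
    return {
        metric: round(coverage_pct(*current[metric]) - coverage_pct(*reference[metric]), 2)
        for metric in current
    }

def healthy(current, threshold_pct, metric="line"):
    """The health check: does the chosen metric meet the configured threshold?"""
    return coverage_pct(*current[metric]) >= threshold_pct
```

For example, a build covering 80 of 100 lines against a reference covering 75 of 100 would report a +5.00 point line-coverage trend and pass a 75% threshold.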
I'll delete this build to ensure those issues are still new compared with the last build. On the GitHub side, you will see some checks are queued; those checks were made by my previous builds. So just wait a moment; in the console you can see those processes running. Meanwhile, let me talk about the plan for phase three first. We'll add pipeline support, we will also allow re-running builds through the Checks API, and some other tweaks for this plugin. So let's see whether it has completed. Now it's collecting the Java warnings. The Java warnings check succeeded now; you'll see this was newly made and there are no new issues. And CheckStyle just finished, and it still shows those new issues. So, any questions or any suggestions for our plugins? Anything on the UI or functions? Yeah, you said that you're gonna be working on rerun. Does that just rerun the pipeline or the job, or is that something else? The whole job, just the whole build. Cool. Yeah, for that we already have a plugin which is based on comments; there is a GitHub comment plugin, if I recall correctly. But that plugin isn't very active at the moment. If you plan something more complicated, yeah, it would be awesome. And for the record, even from this demo it looks great; I'm looking forward to updating my instances. Actually, besides a rerun, you can trigger many actions through the Checks API, like maybe automatically formatting code: you can add that action, the users just click it in GitHub, and you implement those actions in Jenkins. Yeah. It's really nice. Good job. In terms of where it's at: warnings-ng was released this morning with this, and code coverage is still in a pull request waiting for review at the moment. But you can use the warnings-ng side of it already. Yeah, thanks to Tim, we already started a discussion about adopting this feature on ci.jenkins.io.
So it will hopefully soon be available to Jenkins plugin developers and contributors. Thanks to Uli and Tim for working on the pipeline library patches. It was a long road to get this pull request merged, but hopefully we will get it over the line so that we have something to show to Jenkins contributors as well. So now I'll stop sharing my screen. And the other thing: what pipeline support means here is free-standing steps that users can use to add their own checks, so inside of their pipeline or their pipeline library they can easily interact with the Checks API. Yeah, warnings-ng is key to all the static analysis features in Jenkins, so just by supporting that one plugin, you support a huge number of use cases right away. There is code coverage too, yeah. Yeah, the pipeline support will be nice for some of the custom use cases too. I agree, people still want to have access to that; it's a long-awaited feature. For what it's worth, Kezhi has already documented it in the phase one blog post, right? So if you open the blog post, you can see there is a sample of how to do that, but some optimization of course would be helpful. Okay, any other questions or comments? Then thanks a lot for the presentation. And yep, if anyone is interested, please try out the feature so that the team can get more feedback and do more testing during the next phase. Thank you. Sladyn, sorry, Sumit. Yes, yes, yes. So, is my screen visible? Yes. I have a bit of a mess on this desktop today. So, thanks everybody for joining this presentation. Today we'll be presenting the external fingerprint storage project, which is one of the projects under GSoC and Jenkins this year, and I'm very glad to be a part of it. I want to extend my thanks to all the mentors, Oleg and Raymai; they've been awesome in helping me out with this project. So I'll begin this presentation.
We have a number of topics on the agenda. I'll start with a small personal introduction. I'm Sumit, one of the students for this project, and I'm currently pursuing a bachelor's in instrumentation and control engineering. I started contributing to Jenkins in December 2019, and I started with the fingerprinting engine, which is what led me to being a part of this project. I'll just do a quick phase one recap, because I think there are new people here too, and that will help everybody get familiar with what exactly fingerprints are. File fingerprinting inside Jenkins is a way to track which version of a file or artifact is being used inside the Jenkins ecosystem. As a small example, say team A builds a.jar and team B builds b.jar, and b.jar has a dependency on a.jar. Team B finds that there's some issue in b.jar caused by a.jar, so team A needs to fix it, and team A needs to figure out which particular version of a.jar team B is using. The fingerprinting engine allows this version tracking to happen across jobs and builds: you can fingerprint your artifacts, or any files related to the artifacts that are being created by builds. I'll show a small example of how this UI for fingerprints looks. Over here I have two jobs, A and B, and what B does is copy the artifact that A produces. If I trigger a build of A and go to "See fingerprints", I can see it's producing the artifact a.txt and that its usage has been in job A's build number three. And if I trigger a build for B and go to "See fingerprints", I can see that it has a.txt, whose original owner was job A's build three, and I can see all the builds where this particular version of the artifact was used. So that's just a small intro to the fingerprinting engine inside Jenkins. We saw the UI; now, what we did in phase one.
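For intuition, a Jenkins fingerprint is essentially the MD5 checksum of the file, so two builds that archived byte-identical artifacts share the same fingerprint ID and can be linked together. A minimal sketch (the function name is mine, not Jenkins'):

```python
# A fingerprint is the MD5 digest of the file's bytes: identical
# artifacts always hash to the same ID, which is what lets Jenkins
# track one artifact version across many jobs and builds.
import hashlib

def fingerprint_of(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()
```

So if job A archives a.txt and job B copies the same bytes, both builds record the same fingerprint ID, which is exactly the cross-job usage view shown in the demo.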
The disadvantage of the current fingerprint engine is that it stores these fingerprint files on the local disk. As we move towards a cloud native Jenkins, we want to externalize the storage of fingerprints. So the main idea behind this project was to provide an API that plugins can implement to support different types of storage, like a Redis plugin or a MySQL fingerprint storage plugin, so these fingerprints can be stored inside those instances, and the dependence on Jenkins' disk storage lessens. We built the Redis fingerprint storage plugin in phase one, we created that API in Jenkins core, and we released it in Jenkins 2.242. We also have a JEP for it, JEP-226, where all the design decisions are listed. So what did we do this phase? One of the stories we targeted in this phase was the fingerprint cleaner. What used to happen with local storage is that builds get deleted from Jenkins, and if a fingerprint no longer points to any build, it does not make sense to store it; we need to delete that fingerprint, because otherwise it's just occupying extra space. So there is a periodic job on Jenkins which cleans up these build-less fingerprints. But that capability was not exposed to the external storages. Now this feature is implemented: we have introduced new methods in the API for plugin developers. The plugin has to implement a method to iterate over its fingerprints; this method will be called by Jenkins core, and it's up to the plugin to clean those fingerprints, using the clean-fingerprint method that core provides.
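The cleanup contract just described can be sketched roughly as below: the storage iterates over its fingerprints and deletes any that no longer reference an existing build. The class and method names here are illustrative assumptions; the real API consists of Java methods on Jenkins core's fingerprint storage classes.

```python
# Sketch of build-less fingerprint cleanup for an external storage.
class InMemoryFingerprintStorage:
    def __init__(self, fingerprints):
        # fingerprints: {fingerprint_id: set of (job, build_number) usages}
        self.fingerprints = dict(fingerprints)

    def iterate_and_cleanup(self, build_exists):
        """build_exists: callable telling us whether a (job, build) still exists.
        Returns the IDs of the fingerprints that were deleted."""
        removed = []
        for fp_id, usages in list(self.fingerprints.items()):
            live = {u for u in usages if build_exists(u)}
            if live:
                self.fingerprints[fp_id] = live  # trim usages of deleted builds
            else:
                del self.fingerprints[fp_id]     # build-less: delete the fingerprint
                removed.append(fp_id)
        return removed
```

Run periodically (as Jenkins does for local storage), this keeps the external database from accumulating fingerprints that no build references anymore.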
We released this feature in Jenkins 2.246, actually 2.248, because the 2.246 build had some problems. So yeah, it was in 2.248. This fingerprint cleanup API is then consumed by the Redis fingerprint storage plugin, which is the reference implementation that we work on simultaneously. Inside the plugin we used cursors: basically we now need to crawl the entire fingerprint database inside Redis, and we used cursors because we get the added benefit that they don't block. It's not a blocking operation, and it's better than doing something like a fetch-all. That's how we implemented cleanup inside the Redis fingerprint storage plugin. We also gave users the ability to disable fingerprint cleanup, because these fingerprints are now in an external storage, and external storage is often very cheap, so it can make sense to skip the extra performance overhead. It's up to the users: if they want, they can disable fingerprint cleanup. So fingerprint cleanup was one of the stories we targeted. Another story we targeted was fingerprint migration. Earlier, with the Redis plugin, or in fact with any storage plugin whatsoever, the question was: what happens to the old fingerprints that were already in the system when you go ahead and install the Redis fingerprint storage plugin? They used to remain on the local system, and that was a drawback. Now we have implemented migration. How we've done it is basically a kind of lazy migration: whenever these fingerprints are used, we transfer them to the new external storage. So we don't create a huge performance bottleneck by transferring all the fingerprints from the local storage to the external storage in one go. That's fingerprint migration; it's not yet released, it's still under review in Jenkins core.
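The lazy migration just described can be sketched as a move-on-first-access rule: a fingerprint only leaves local storage the first time something reads it. The function name and the dicts standing in for the two storages are assumptions for illustration.

```python
# Sketch of lazy fingerprint migration: migrate on first access
# instead of one big up-front transfer.
def load_fingerprint(fp_id, local, external):
    """local/external are dicts standing in for the two storage backends."""
    if fp_id in external:
        return external[fp_id]               # already migrated
    if fp_id in local:
        external[fp_id] = local.pop(fp_id)   # move it on first access
        return external[fp_id]
    return None                              # unknown fingerprint
```

The design choice here is to amortize the migration cost over normal fingerprint lookups, so installing the storage plugin never triggers a long blocking bulk copy.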
And then there was the fingerprint storage descriptor. Earlier, as soon as the Redis plugin was installed, the storage was changed by default; there was no option to toggle between storages. Now we have introduced a fingerprint storage descriptor, which allows the storage plugin to be configured from a dropdown. You can even have multiple storage plugins installed and choose the one you want. That was some refactoring that we did, and it was released in 2.248. We also improved the testing for the Redis plugin: we introduced connection tests, authorization tests, and web UI tests, and we made sure configuration-as-code support works, so you can use JCasC to configure the plugin; we introduced those tests as well. As for achievements: as I said, the cleanup API and the storage descriptor were released in 2.248, and the plugin's 0.1-rc2 release has also happened, so you can now install the plugin directly from the update center; the plugin is on plugins.jenkins.io as well, and we have had two RC releases. So I would recommend everybody to go ahead and check this plugin out, and let us know if you face any bugs or issues. Next, I'll move on to the demo, so we can see how everything I talked about works. What I'll do is quickly create a new item: a job called demo, a freestyle project. I'll add a build step to execute a shell command, something like echo Hi > demo.txt, and then I'll add a post-build action to record fingerprints for the demo.txt file. I'll hit apply and hit save. So now I have this job. At the moment I don't have an external storage configured, so this is the local storage.
So if I start a build for this... and just a quick question, you can see the screen, right? Awesome. Let's go ahead and see where the fingerprint is. Right now in my fingerprints folder I have two fingerprints. Let's look: this is the demo fingerprint that just got created, and we can see that it was used in build one of demo. Now I'll take you to the configuration. If we go to the configuration page for Jenkins, I have the plugin already installed. Inside the fingerprints section is one of the implementations I talked about, the descriptor we made: the fingerprint storage can now be selected right from this menu. Before I actually configure it, I'll start a local Redis server on my machine. I have the server running here, and I'll connect a command-line client to it. If I check which fingerprints it has, it's empty right now. And if I do a test of the connection, I get a success. So now I can go ahead and hit apply and save, and the external fingerprint storage is configured. So ideally, what should happen, what we want to happen: if we go back here, I can still see this fingerprint in the local storage. As soon as I run this build, this fingerprint gets used, so it should get migrated to the external storage. I'll hit build, and I get build two. If I quickly check the fingerprints here, everything is working fine: I see that two builds have used this particular fingerprint. And if I go here, notice that the fingerprint got deleted from local storage; I have just one fingerprint now, which is from earlier. And if I hit the server, I now have an entry for this fingerprint: if I do a get, I can see this fingerprint in the Redis server.
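The migration behavior shown in the demo, where a fingerprint moves from local to external storage the first time it is used rather than in one bulk transfer, follows a lazy-migration pattern. A conceptual Python sketch with illustrative names, not the plugin's real classes:

```python
class LazyMigratingStorage:
    """On access, move a fingerprint out of local storage into the
    external store, so migration cost is spread across normal usage
    instead of one big bulk copy. Illustrative model only."""

    def __init__(self, local, external):
        self.local = local        # dict: fp_id -> data
        self.external = external  # dict: fp_id -> data

    def load(self, fp_id):
        if fp_id in self.external:
            return self.external[fp_id]
        if fp_id in self.local:
            # migrate on first use, then drop the local copy
            self.external[fp_id] = self.local.pop(fp_id)
            return self.external[fp_id]
        return None


local = {"demo.txt": {"builds": ["demo #1"]}}
external = {}
storage = LazyMigratingStorage(local, external)

storage.load("demo.txt")  # first use triggers the migration
print("demo.txt" in external, "demo.txt" in local)  # True False
```

This matches what the demo shows: the old fingerprint stays on disk until a build touches it, at which point it appears in Redis and disappears locally.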
So this is exactly what I talked about when I mentioned the migration we have implemented. The third thing was cleanup. If I go back to the configuration page, this is the option for disabling fingerprint cleanup. At the moment cleanup is disabled, so I'll go ahead and enable it, hit apply, and hit save. Right now no cleanup should happen, because that fingerprint has two builds associated with it. So what I'll do is delete these builds; I'll delete build number one as well. Now if I go back, there are no builds associated with that fingerprint, and if I go ahead and query, it's gone: fingerprint cleanup happened. Just a small side note: fingerprint cleanup normally happens once a day, but for demo purposes I decreased the interval so it's now happening every 10 seconds; that's why it happened so quickly. So that's the fingerprint cleanup API I talked about in the presentation. And that was about it for what we did this phase. The next step is working on a new reference implementation. As you can guess, we're going with Postgres this time, and a new set of challenges comes with it: until now we stored these fingerprints as blobs, which is easier than a relational database, and we want to decouple that. Basically, what we are trying to do is define some sort of schema for these fingerprints and store them in a relational database. What this allows is, first, that you can use this Postgres plugin directly, and second, that new plugin developers who want to use a relational database can use this reference plugin to build more plugins.
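The blob-to-schema decoupling described for the Postgres work can be illustrated with a toy relational layout: one table for fingerprints, one for their build usages. This is a hypothetical schema, sketched with SQLite only for portability, and is not the plugin's actual design:

```python
import sqlite3

# Toy relational layout for fingerprints: instead of one opaque blob,
# split the data into a fingerprints table and a usage table. That
# split is what makes queries like "by timestamp" or "by job" cheap,
# which file-based blob storage cannot answer without loading
# everything. Hypothetical schema, not the real Postgres plugin.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fingerprint (
        md5       TEXT PRIMARY KEY,
        filename  TEXT NOT NULL,
        timestamp TEXT NOT NULL
    );
    CREATE TABLE fingerprint_usage (
        md5   TEXT REFERENCES fingerprint(md5),
        job   TEXT NOT NULL,
        build INTEGER NOT NULL
    );
""")
conn.execute("INSERT INTO fingerprint VALUES (?, ?, ?)",
             ("abc123", "demo.txt", "2020-07-29T10:00:00"))
conn.executemany("INSERT INTO fingerprint_usage VALUES (?, ?, ?)",
                 [("abc123", "demo", 1), ("abc123", "demo", 2)])

# A targeted query over usages, no full scan of all fingerprints needed:
rows = conn.execute(
    "SELECT job, build FROM fingerprint_usage WHERE md5 = ? ORDER BY build",
    ("abc123",)).fetchall()
print(rows)  # [('demo', 1), ('demo', 2)]
```

The hard part mentioned later in the discussion, fingerprints carrying arbitrary facet data without a fixed structure, is exactly what does not fit neatly into fixed columns like these.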
So that's our idea for the Postgres plugin. Something further away may be tracing: this plugin and the API are made in such a way that multiple Jenkins instances can be configured against a single storage, and that grants us a huge opportunity to trace fingerprints across Jenkins instances. That is something that might be worth exploring further down the line. So that's about it from my side. Before we start the Q&A, I have some links on the last page if anybody wants to check them out. I'll open up the discussion for Q&A.

Thanks for the presentation. Are there any questions? Okay, so my classic question: does anybody use fingerprints on their instances? Maybe you should. I think this is a great improvement to the fingerprint storage system, because when we talk to developers, many people say they don't use fingerprints, but when we look at usage statistics, many people actually have file fingerprints or credentials fingerprints enabled. And I believe that with this storage and user experience we can actually provide great traceability and observability features in Jenkins. So for me this project looks really interesting, and I'm happy to see how it works. I already migrated my personal instance to this fingerprint storage, and yes, I do use fingerprints; it works really well.

Are there any things you have learned from this that we should apply in general to the externalization of other storage components as well? Certainly there are lots of places where Jenkins stores things that we would consider externalizing. Are there any lessons you'd share? Thankfully, you've got Oleg as your mentor, so he's had lots of experience in that space. Right.
So with cloud-native Jenkins, there is an initiative where a lot of stories like this are waiting to happen. I think the answer to that question is probably that it's still young. One facet of the answer is that, yes, we figured out how to make these APIs, and as we develop more plugins and add more features, we realize how well or how badly our original API was designed; so that API can act as a reference for future externalization stories. But another facet is that all these stories are unique in their own sense. Some of them are more difficult to implement; with certain components, configuration for example, you need them at startup, so that's another challenge. All of them have separate challenges associated with them. But as far as learning goes, I think we made a decent API, and time will tell whether it holds up well or not. Thank you.

On the status table: in progress, or coming soon. I was just saying that this all sounds really good; it's just that this table needs a good update. Yeah, it's on my list. I actually started updating the cloud-native materials; we restarted that in May, but right after we started I think we went on a kind of summer break. We have a few meetings planned for August, so stay tuned; all these materials will be updated. Taking configuration as an example, I would rather say that we have Jenkins Configuration as Code, so I'm not sure whether we really want to invest in pluggable configuration storage. It's a subject for discussion, but other stories still need to be implemented. And for me, fingerprints is actually a great story because, firstly, it's isolated, so it can be done in a feasible amount of time, as Sumit demonstrated during this project.
It still gives us a lot of insight and experience in how this could be done: as plugins, with a database, with API changes. So architecture-wise, I think this project is already a total success. And thanks to Sumit, we already have everything landed in Jenkins core, so now it's a matter of the reference implementation and of the additional features we could get out of it. For example, querying fingerprints for data, say querying by timestamps or by particular events, is just not possible with file system storage unless you load all the data into memory. But with external fingerprint storage it becomes possible, and hence we can explore how to utilize it in Jenkins. And bonus points for multi-Jenkins-instance support, which is also available with external fingerprint storage. Thank you.

I had one more question, but we can get to that discussion later. I think even if you didn't clean up fingerprints, if you were to remove old builds, you would lose some of that traceability, right? So we would need some pluggable external storage for builds in order for that to work for pruned builds? Well, it's yes and no, because it really depends on how you implement the storage. With the current design, fingerprints remain available while there is at least one reference; if there is no reference, the fingerprint can be garbage-collected, but if there is a reference, all the information remains available. And as part of the GSoC application period, there was already an API added by Sumit to make it possible to prevent deletion of fingerprints; for example, if you want to revive the Docker Traceability plugin. Once I do that, I will definitely happily use all these APIs to provide a better experience to users. Nice.
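The deletion-prevention mechanism mentioned here, where extra data attached to a fingerprint (a facet) can veto its removal, can be sketched as a filter in the cleanup loop. This is a hypothetical Python model of the idea, not the real Jenkins facet API:

```python
# Model of facet-based deletion blocking: cleanup removes build-less
# fingerprints unless one of their facets vetoes the deletion.
# All class and method names are illustrative.

class Facet:
    def blocks_deletion(self):
        return False

class KeepForeverFacet(Facet):
    # e.g. a traceability facet that must outlive build pruning
    def blocks_deletion(self):
        return True

class Fingerprint:
    def __init__(self, fp_id, build_refs, facets=()):
        self.fp_id = fp_id
        self.build_refs = list(build_refs)
        self.facets = list(facets)

    def deletion_blocked(self):
        return any(f.blocks_deletion() for f in self.facets)

def cleanup(fingerprints):
    """Delete build-less fingerprints unless a facet vetoes it."""
    return [fp for fp in fingerprints
            if fp.build_refs or fp.deletion_blocked()]

fps = [
    Fingerprint("a", []),                        # build-less: removed
    Fingerprint("b", [], [KeepForeverFacet()]),  # build-less but protected
    Fingerprint("c", [("demo", 2)]),             # still referenced: kept
]
print([fp.fp_id for fp in cleanup(fps)])  # ['b', 'c']
```

This is the shape of the answer given below as well: attach a facet that blocks deletion, and even the periodic cleanup will leave the fingerprint alone.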
So if I understood correctly: if you had Job A with 3,000 runs, pruned after 500, but Job B referred to it and only had five runs, all still present, then the traceability from the first run of Job A, that fingerprint, would still be there? Yes, it will still be there. Some data will be there because a fingerprint stores its own data, which makes it challenging, for example for a relational fingerprint storage, because a fingerprint basically stores arbitrary data without a fixed structure; hence keeping it in a relational database is not trivial. I'm looking forward to seeing how we resolve this during the next phase. Well, we already have a good design for that. And just to add: you can attach facets to these fingerprints, and a facet can decide whether it wants to block deletion of the fingerprint. If that happens, even cleanup won't delete it. So having a facet that blocks deletion is one way to ensure that a fingerprint never gets deleted.

Any other questions or comments? If not, thanks to everyone; we finished this meeting on time. Just to repeat what we discussed at the beginning of the meetup: if you have any questions, we have a Jenkins Gitter channel where you can ask, or feel free to contact us using any other community channel. And we ask all students to update the project pages so that all the materials and recordings are linked from there and participants can easily discover them. Okay, any closing comments or questions? Looks like not. Then join us tomorrow; we will have another session with three presentations. Thanks to all the students, mentors, and other contributors who work on GSoC. It's just the middle of the project, but we can already see great demos by all the students, and it's a pleasure to see how the program evolves this year.
Thank you to everyone, students, mentors and contributors. See you tomorrow. I'll stop the recording. Bye.