The recording should be on right now. So, okay, we'll get started. Hi, everybody. Welcome to the Jenkins online meetup. This is Wednesday, October 5th, 2022, and this is Jenkins Google Summer of Code 2022. This is the final phase, where our GSoC contributors will show or demo what they've done this past season and what they've learned from it, too. Sorry about that. Okay. So the Jenkins online meetup is a community-driven virtual meetup group. We discuss anything and everything Jenkins — case studies, success stories, lessons learned, et cetera — and it is a channel for Jenkins developers. So we're always looking for speakers; if you have a story to share with us, please submit an application through the links there. Some housekeeping notes for Q&A: please put your questions in the Zoom chat or the Q&A window, and one of us will respond as soon as we can. With regards to GSoC discussions, we have a Gitter channel for that, and we're also available after this event via Gitter or Discourse. And of course the code of conduct applies here, and it simply means being nice and respectful to each other. So, Jean-Marc. Okay. Thank you, Alyssa. My name is Jean-Marc Meessen, located in Brussels here, and on behalf of the Google Summer of Code org admin team, which is represented here — we had Alyssa, Kris Stern, and myself — we are very happy to present this last online meetup, where this year's GSoC contributors will share the results of their work during this summer. So very happy with that, and with all the work that has been done. Today we'll give a small overview of what GSoC 2022 at Jenkins was, and then we'll move to the different project presentations and demos. After each presentation, we'll have a dedicated Q&A time for each project, and at the end we will have a global question-and-answer session for the whole program. Next slide. So Google Summer of Code at Jenkins is already starting to be a long, long story.
So it's the fifth year that our project is participating, and this year's edition has continued this tradition very strongly. We had four projects with four new contributors, and nine mentors to help them through the summer, and all four projects reached a happy and successful conclusion. So these are the four projects that were worked on during this summer. If you want more technical, in-depth details about all these projects, have a look at the URL at the end of this slide, where you can read and follow the links to the various things that were produced by the projects. Next slide. So these are the participants in this edition. Very happy to have spent this summer together with them. Hrushikesh let us know that he was not available today, and he's very sad about that; he will be represented by his mentor. Thank you, Mark, for jumping in and representing him. Next slide. So we're going to start here with the first presentation, and the first in the row is Dheeraj. Start sharing your screen, and the floor is yours. Thank you. I think you might be able to see my slide. Full screen. Awesome. There you go. So hi, everyone. My name is Dheeraj Singh Jodha, and I'm representing our project, titled the Plugin Health Scoring System. My lovely mentors are Adrien, Jake, and others, and it goes without saying that without their help and support, we would not be at this stage of the project, nor be completing GSoC successfully. So this is how we are going to be spending the next 15 minutes: I'll start by telling you briefly about the project, then we'll go through a very short, quick demo, then we'll see the achievements of this project, and I'll share the next steps that we have ahead of us. Then I'll share my learnings, which will be followed by Q&As — please keep your questions ready. So, about the project: what we are trying to do here is to calculate a health score for plugins.
And we try to do that by taking the composite of two things. First, we want to find out how well a plugin performs on a set of predetermined probes. Second, we want to combine that data with the weights for each of those probes. When I say a set of predetermined probes, I mean anything from, let's say, the number of open issues a plugin has in its repository, to something like whether the given plugin repository has a Jenkinsfile or not, or the latest release date of a plugin. So these are the probes that you can have. And by weight, the weight of a probe signifies how strongly that probe affects the health score of a plugin. For example, if we think that having a Jenkinsfile in a plugin's repository is extremely important and crucial, then you would give this probe a very high weight value, so that it affects the health score accordingly. So this is what the architecture diagram we designed looks like. At the top, you can see the Jenkins update center, and these yellow boxes are the processes which contain the core logic of how things work. At the bottom, you can see the Postgres database, which captures all the observations and data related to the plugins. And on the top right, you can see the two areas where we want to display the health score in the UI: plugins.jenkins.io and the plugin manager of Jenkins. Just for your reference, the left half of this architecture diagram is what will be visible to you during the demo, because that is where we are in the current state of the project. So when we think of evaluating a plugin, what we mean by that is that you would need to set aside at least one hour and go through the manual steps of looking at the plugin's repository and all its files, checking the values in the pom.xml file, and so on. It's a very manual task, and you have to multiply that by the 1,800+ plugins in the Jenkins ecosystem.
And that's the amount of time that you would need to set aside in order to evaluate all the plugins in the Jenkins ecosystem. So I think it's safe to say that no one would voluntarily take up the task of manually evaluating the state of the plugins. But there might be some group of people who would want to automate this manual process — and that group of people is us. So this is what we are doing as part of this project, and of course our end goal is to deliver a better, improved user experience for Jenkins plugin users. We want to make this whole process community driven, and by that we mean that in the future, we want to see people coming into the project, more profiles using the framework we have developed, and participating in the surveys where we decide the weight given to each probe. So we want to make this whole process fair and open as part of this community-driven initiative. So, it's time for a quick demo. I've started my project already — if you want to know how to do that, you can refer to the README file of the repository. On localhost:8080/probes, you can see the list of probes currently running live in the project. We made this page to give anyone a peek into the state of the project. The first probe is the deprecated plugin probe, which tells you if a given plugin is deprecated or not. The second one gives you the latest commit date for a plugin. The third one tells you whether the given SCM link of a plugin is valid or not. The fourth one, finally, tells you whether the given plugin is up for adoption. If you want to view the deployed version of this page, you will need to visit plugin-health.jenkins.io/probes. You can see there are only two of the probes listed there.
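The four probes just described could be modelled roughly like this. The real project is a Java application, so the class shape, field names, and the plugin record below are purely illustrative assumptions, not the project's actual API:

```python
# Illustrative sketch of a probe and its result; the real project is Java,
# so these names and shapes are assumptions, not the project's actual code.
from dataclasses import dataclass

@dataclass
class ProbeResult:
    probe: str
    status: str   # "SUCCESS" or "FAILURE"
    message: str

def up_for_adoption_probe(plugin):
    """Succeeds when the plugin is NOT marked as up for adoption."""
    if plugin.get("adoptme", False):
        return ProbeResult("up-for-adoption", "FAILURE",
                           "plugin is looking for a new maintainer")
    return ProbeResult("up-for-adoption", "SUCCESS",
                       "plugin has an active maintainer")

# Hypothetical plugin record, shaped loosely like update-center metadata:
print(up_for_adoption_probe({"name": "git", "adoptme": False}).status)  # SUCCESS
```

Each probe in the engine would follow the same pattern: inspect one fact about the plugin and record a status plus a human-readable message for the details object shown in the demo.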
So we need to update that, but this is where you can see it — this is the deployed version of the project. Now that we know the list of probes currently running live in this project, let's see the magic of the probe engine, which runs these probes and populates the database with its observations. For that, I'll take you to my IDE screen. On my database, I'll run the simple query select * from plugins, and you will see records in this format. In this first record, let's focus only on the details object. These are key-value pairs: the SCM probe has a status of success because the plugin's SCM link is valid; the deprecation probe is also a success because this plugin is not deprecated; up for adoption is a success because the plugin is not up for adoption; and last commit date gives you its last commit date, as promised, with a status of success. So this is the state for one plugin, and similarly you can find these observations for all the other plugins as well. That's how the database gets populated by the four probes currently live. Now, let me take you back to the slides and share with you our achievements as part of this project. First of all, we have the probe engine running live in the project. I like to refer to it as the heart of this project, because it's the single most important thing: thanks to it, all the probes are running and the database is getting populated. This was designed and developed by Adrien, so a big thanks to him. And all the probes are now getting listed on the UI for anyone's reference. So let me talk to you about the next steps — what do we have next for this project, if you're curious or interested? We want to extend the probe engine.
We want to improve it, because there's always room for improvement, and we want to make it compatible with different kinds of probes, because people can get very creative in the future. So that's something that will be happening. We want to broaden the suite of probes by adding more and more of them, so that the health score is calculated from a very smart list of probes. And finally, we want to generate the health scores, display them on the relevant UI, and document the whole process so that it's easier for anyone to understand, and anyone can contribute by looking at the documentation. On the right side of the screen, you can see a prototype shared with me by Jake. On the left side of this image, you can see filters that you can use to filter by health score. It's a prototype, and this is how the Jenkins plugin site is going to look. On the right side, you can see the plugin cards, and 93/100 is sample mock data for a health score. So this is how the UI is going to look with the health score on the plugin site. If you click on any of those plugin cards, you would be redirected to a similar kind of page. For example, for the Kubernetes plugin, you are seeing a health score of 93%. Someone would get curious: why does this plugin have a 93% score? So we would display a detailed breakdown of why it has 93% — these are the probes that contributed to this score. And the plan is that if you click on any of these probes, you would be redirected to a very helpful how-to guide, which will tell you how you can improve the score for that given probe. So this is the imagined end result of the health scores in the UI.
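A score like the 93/100 above is, at heart, a weighted composite of probe results. Here is a minimal sketch of how such a composite might be computed; the probe names, the weights, and the scoring rule are my own illustrative assumptions (the actual project is written in Java and its scoring is still being designed), not the project's real implementation:

```python
# Hypothetical weighted composite score: pass/fail probe results combined
# with per-probe weights. All names and numbers here are illustrative.

def health_score(probe_results, weights):
    """Combine per-probe pass/fail results into a 0-100 score.

    probe_results: dict mapping probe name -> True (passed) / False (failed)
    weights: dict mapping probe name -> relative importance
    """
    total = sum(weights[name] for name in probe_results)
    if total == 0:
        return 0.0
    earned = sum(weights[name] for name, passed in probe_results.items() if passed)
    return round(100.0 * earned / total, 1)

results = {
    "jenkinsfile-present": True,   # repo has a Jenkinsfile
    "scm-link-valid": True,        # SCM URL resolves
    "not-deprecated": True,        # plugin is not deprecated
    "recent-release": False,       # no recent release
}
weights = {
    "jenkinsfile-present": 3,  # deemed crucial, so weighted heavily
    "scm-link-valid": 1,
    "not-deprecated": 2,
    "recent-release": 2,
}
print(health_score(results, weights))  # 75.0
```

Raising the weight of a probe pulls the composite score toward that probe's result, which is exactly why the community surveys over the weights matter.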
So I wanted to take a moment and share with all of you my learnings, because this program has been very, very interesting for me and a great learning experience. Through it, I got to learn and improve my interpersonal skills, because this is an open environment, and being part of an open-source project is different from any kind of closed-source or corporate environment. We worked remotely, so this was an opportunity for me to improve my asynchronous communication as well. There were lots of weekly meetings — planning and discussing the next PR, working on it, asking doubts, and moving one PR at a time. This was an extremely interesting and great thing to learn. As part of the whole journey, I also got to increase my knowledge of the system-design aspect, because my mentor Adrien especially worked on lots of interesting PRs, and I got to review them and ask him so many questions — that's how I learned how you design something and how it comes to reality. Most of them were related to Java, so that eventually led me to increase my Java knowledge as well, by using development best practices. A big, big shout-out to Adrien for helping me out here. Now, if you're interested to know what's next for this project and you want to ask us any questions — like, hey, how does this work, I want to contribute, or anything — these are the links that you can refer to in order to get more context about the project: the Gitter channel here, and the project repository that you can take a look at. So this brings me back to the slide where, if there are any questions, this is the best time. Don't forget, if you want to ask questions, you can use the Q&A option at the bottom of the screen; type your question and we're going to answer it.
So Dheeraj, from my side: were there any things where you were concerned about scalability, given that when you're doing these kinds of probes, you're sweeping across 1,800 repositories in Git? Were there any scalability issues that you had to worry about? Yes, that's a good question, thanks for that. About the scalability concerns: we were running some specific kinds of probes which dealt with hitting the GitHub server. Those probes were communicating with the GitHub server via its APIs, and since, as you said, there are 1,800+ plugins, when we run the engine there are going to be lots of requests to the GitHub API server, which easily shoots past the rate limit set by GitHub. So that was an issue we found: for GitHub-related probes, there's going to be a problem here. That was one of the concerns we had, so we started using JGit for that. That's one thing we want to make sure of for the GitHub-related probes. For the audience to have access to the various links, Dheeraj's presentation is going to be included in the main slides, so they will be available. Yeah, we will make them available via the Meetup page at the end of this meetup. Okay, so one more minute to ask questions. Okay, so thank you very much, Dheeraj, and thank you very much for sharing your experience and what you learned — that project was indeed a very large one. And thank you to the mentors, who did tremendous and very substantial work on it, just by the sheer size of the project. So thank you to all. We are now moving to the next presentation: Yiming worked on GitHub Actions using Jenkinsfile Runner. Yiming, the floor is yours. Okay, let me share my screen. Okay, so let me start. Hello everyone. My name is Yiming, and I'm the Jenkins GSoC contributor for Jenkinsfile Runner as GitHub Actions this summer. I'm happy to give the final presentation today about this project.
I'd like to give a brief introduction about myself first. I'm a graduate student at Carnegie Mellon University; my major is electrical and computer engineering. In this project, my mentors are Kris and Abhyudaya. Our functional interface is pretty simple in this project: you only need to point out the running options and the relative paths of the Jenkinsfile, the JCasC YAML file, and the plugin installation list file in your repository. As some people might not be familiar with the tools behind our actions, I listed some basic comments about them. Basically, we used the Plugin Installation Manager to install the specified plugins, and Jenkinsfile Runner to run the Jenkins pipeline. So the Jenkinsfile Runner Action for GitHub Actions provides a customized, containerized environment and useful GitHub Actions for users to run Jenkins pipelines inside GitHub Actions. In more detail, using these actions, any GitHub project which has a Jenkinsfile can execute its pipeline as a workflow on a GitHub Actions runner. It aims at applying Jenkins in GitHub Actions in a function-as-a-service context. This feature is based on Jenkinsfile Runner, which is a command-line tool for the Jenkins pipeline execution engine. Here is the basic architecture of these actions: I combined Jenkins core, a minimum required set of plugins to run the pipeline, the Plugin Installation Manager, and Jenkinsfile Runner into a base image. Then I set up the entry-point shell scripts to start up the container workflow. In the second phase of this coding project, I developed the GitHub Actions which don't require a container runtime, so users can directly run the pipeline on the host machine, and all the dependencies are downloaded for each run. The coolest part of the runtime actions is that users can then run on all of the runners provided by GitHub Actions, which are the Linux, macOS, and Windows runners.
Previously, with the JFR static image action and the JFR container action, these actions could only be run on Linux runners. As you can see, there are some differences among our current actions, but the root cause of these differences is the starting time of the Jenkins container. In the JFR container action, the Jenkins container starts up before all the GitHub Actions execute and ends after all of them have results, so all the GitHub Actions can have an influence on the container. This means you can use other actions from the marketplace to set up the environment. But in the JFR static image action, the Jenkins container's lifetime is scoped to the action: it starts up right before the JFR static image action starts and ends right after the JFR static image action ends. In this case, the Jenkins working container has strong isolation from the host machine, so users cannot use other actions to set up the container environment, except actions/checkout. When it comes to the runtime actions, they don't require a container context, so they don't have the related problems I described before; you can use them with other actions from the marketplace. So how can you use our actions? I provide five Jenkinsfile Runner actions in my project now. They are called the JFR plugin installation action, the JFR runtime action, the JFR setup action, the JFR container action, and the JFR static image action. You can reference these actions by their URLs directly in your workflow definitions. Besides the repositories of these Jenkinsfile GitHub actions, I also provide central documentation, which includes a user guide and a developer guide. And I also provide a demo repository, which shows users how to use these actions to compile projects in different programming languages. What's more, it also covers how to use Jenkins secrets and environment variables, and how to play with other plugins via JCasC.
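As described above, the actions are driven by relative paths inside the repository checkout. A tiny pre-flight check along these lines can catch a missing input before a workflow run; the helper name and the default file names (Jenkinsfile, jcasc.yml, plugins.txt) are my own assumptions for illustration, not part of the actions themselves:

```python
# Hypothetical pre-flight check for the inputs the JFR actions expect:
# relative paths to the Jenkinsfile, the JCasC YAML, and the plugin list.
# The helper and default file names are illustrative assumptions.
from pathlib import Path
import tempfile

def check_jfr_inputs(repo_root, jenkinsfile="Jenkinsfile",
                     jcasc="jcasc.yml", plugins="plugins.txt"):
    """Return a dict of input name -> whether the file exists in the checkout."""
    root = Path(repo_root)
    return {name: (root / rel).is_file()
            for name, rel in [("jenkinsfile", jenkinsfile),
                              ("jcasc", jcasc),
                              ("plugins", plugins)]}

# Example against a throwaway checkout containing only a Jenkinsfile:
with tempfile.TemporaryDirectory() as repo:
    Path(repo, "Jenkinsfile").write_text("pipeline { agent any }")
    print(check_jfr_inputs(repo))
    # {'jenkinsfile': True, 'jcasc': False, 'plugins': False}
```

In a real workflow the same idea could run as an early step, failing fast with a clear message instead of letting the pipeline launch die on a missing file.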
So now I'd like to show you a live demo of how to compile a Spring Boot demo project using the JFR setup action, the JFR plugin installation action, and the JFR runtime action. So let's see. This is a Spring Boot project; as you can see, there's nothing special here, and you can use the Spring Initializr provided by the IDE to generate this example. And let me see. We usually put our workflow definition under the .github/workflows folder, and there's a YAML file here — this YAML file is your workflow definition. What I define here: firstly, as we use the runtime actions, we can run it on the different kinds of runners provided by GitHub Actions — Ubuntu, macOS, and Windows. Firstly, you need to call actions/checkout to set up your workspace. Secondly, you need to call the JFR setup action to set up the Jenkinsfile Runner action environment. If you want to install extra plugins, you need to provide a plugins text list file, and you need to use the JFR plugin installation action to download the extra plugins. Finally, you only need to use the JFR runtime action to launch your actual pipeline — and don't forget to provide the relative paths to your Jenkinsfile and your JCasC file. What I do here is use the Jenkinsfile to compile the whole Spring Boot project using Maven and JDK 8, so I need to use JCasC to install Maven and JDK 8. As I need to install JDK 8, I also need to provide an extra plugin, because you need to install AdoptOpenJDK — the plugin's name is the AdoptOpenJDK plugin. Yeah, that's it. So all I need to do is commit my changes here. Let's see. You can see the new workflow is running here, and we might need to wait one or two minutes. Let's take the ubuntu-latest runner as an example. You can see the workflow is running using our actions.
The setup step is related to the JFR setup action, and now it is using the JFR plugin installation action to install your extra plugins. And now you can see it is using the JFR runtime action to compile the whole Spring Boot project. Yeah, it has almost reached the end now. So we can see the pipeline reaches the end successfully. Now, back to the slides. Currently these actions are still a work in progress: although they have the basic functions to run the pipeline defined by users, the pipeline result lacks readability for users. I'd like to show you some logs from the pipeline. As you can see, most parts of the log are useless to users, so users need to scroll down to find the actual pipeline log here, which I've circled in red. Furthermore, in a classical Jenkins server, users can visualize their pipeline in its different steps, but this function is not supported in the current Jenkinsfile GitHub actions. So currently the readability of Jenkinsfile Runner as GitHub Actions is pretty bad. I expect to solve these readability problems in the near future, or maybe find an alternative method to solve them. In the end, I really appreciate the help from the Jenkins community. My mentors Kris and Abhyudaya always gave me valuable advice. Jean-Marc always liked to hear my personal feelings and opinions during the project development phase. Others also helped me find the correct path in this project at the beginning of the coding period. And finally, thanks to Alyssa for coordinating so many online meetings, because I don't think it's an easy job. So thanks for listening. Thank you very much, Yiming, thank you. So, just opening for questions, if there are any questions. So, for me, GitHub Actions have been complicated when working with credentials.
Yiming, have there been any things that you learned about operating in a credentialed environment in GitHub that you want to share with others, or were there insights you gained when you were dealing with credentials in GitHub? So usually, what I did — let's see, this is my personal project — you need to go to the Settings page here, and, let's see, there is a Secrets section here. You can click it, and you can update or set new repository secrets over here. So if, for example, you want to use any extra secrets, you need to set them up here and call them in your workflow definition by using the environment variables provided by GitHub Actions. Or, if you want to use them in your Jenkins pipeline, you need to map these secrets into your Jenkins running environment. These secrets are pretty safe, I think: for example, if you try to echo one of the secrets, GitHub Actions will try to filter it, so contributors or other people cannot see the actual content directly. Yeah. Thank you. Thanks very much. Very good. Okay, we're running out of time, so I'll just give 30 more seconds if somebody wants to ask a question. Meanwhile, I'm going to thank Yiming for the very interesting work done on that, and for this very powerful demo of your work. I'd also like to thank the mentors: Kris Stern, who did that on top of being an org admin during the summer — so thank you very much, Kris — and Abhyudaya, whose name I can't pronounce completely. Kris, you wanted to add something? No, it was a good job, well done. Well done indeed, yes, indeed, Yiming. Alyssa, behind the curtains, did a tremendous job in keeping all this on the rails and getting all the presentations done. Yeah. Thank you, Yiming. So the next presentation is Vihaan; the floor is yours now.
Thank you, Jean-Marc. Welcome to the presentation, everyone. I am Vihaan, and I was a contributor at Google Summer of Code 2022. I'll share my screen and then we can get started. I hope my screen is visible — the presentation also, it's showing? Great. So my project was about improving the pipeline steps documentation generator, and I was mentored by Chris throughout the project. Here is a brief overview of what I did in coding phase 1. These are some of the tasks that I explained in the midterm meetup, and I'll just go through them briefly. The beginning of the project was more UI oriented, and my task was to make using the documentation more handy. For that, I worked on improving the sidebar scrolling behavior of the Jenkins documentation, which I can perhaps demo over here. This is an older deployment of the site, which shows the state of everything before the Google Summer of Code project was done. This is one of the pages in the steps documentation itself, and this particular feature applied to the entire website: as you can see, the sidebar scrolls along with the main content, and when you are visiting a page with a long scroll, this is very hard to navigate. What I have done is make the sidebar stick alongside the main content, and then both of them can scroll independently — that is what is meant by this independent scrolling. Secondly, the change was to add a search filter. As the name suggests, this is very trivial: what it does is basically complement the browser search feature with a separate, dedicated search bar. What this provides over the browser search is that it filters the content on the page itself. The content you see here is dynamic, and if I enter some string that matches it — SCM, for example — then you can see all the steps or plugins that have that particular keyword in them. This feature also came in handy for one of the changes I made in coding phase 2, which I will get back to in some time. The third task was to separate the
declarative steps from the main class. Pipeline steps are of two types, declarative and scripted, and the docs for the steps dedicated to declarative pipelines are generated using separate functions, not the regular ones. We decided it is better to separate those functions from the main class — the pipeline step extractor class — hence making the main class more readable and bringing some more modularity to our code. The fourth change was to shift the parameter data types on our main website. I will show a quick example of what I mean by that. Here, as you can see, for each of the parameters — for example, for ClearCase SCM, this is the parameter class under the checkout SCM step — this gives us some parameters, such as branch, label, extract, config, and all of them have particular data types that a user needs to know when they want to write a pipeline script. This data was present on a new line after the help text. Here, as you can see, there is no help text available, but if you scroll down a little bit, you see some things such as a name here; for the data type of the name, you scroll through the help text, and when the text ends, you see the data type over here. This has been resolved by moving the data type in line with the name of the parameter itself. So here, as you can see, the parameter names are in line, and this also reduced the overall scrolling length of the website, thus reducing the content that we have. Moving on, the last thing, but not the least, was to release the pipeline metadata utils. This was a tool that was under development, and what it does is provide users a plugin manager which they can use to query their plugins. For example, for my project I have to extract data about the steps, so there is a file which is known as the plugin help text, and for every plugin we query, we form a data structure — a map, a descriptor — and we store data for that using a hyper-local plugin
manager, and that has been made more portable through pipeline metadata utils. That particular feature, which was previously specific to our project, can now be used outside it as well: it is a Maven artifact which can easily be used as a dependency in your POM. I will link all these things on my project page, and I am also writing a blog, to be released after the meetup, which will contain all the important links. Moving on to coding phase 2: coding phase 2 was a more specific, focused part of the project, and the main task was to make the pages more lightweight — to reduce the content on those pages. That was the main goal, and that was the thing I was searching for solutions for. Before that, there was a small change: labeling the deprecated plugins on the Pipeline Steps Reference page. This was done because earlier there was no way to identify whether, for a pipeline step provided by a plugin, the plugin itself is deprecated or not. For example, if you want to use a step and only afterwards realize the plugin is deprecated, that won't be very useful for the user. The only way to know if the plugin was deprecated was the plugin site — clicking through to that plugin's page and seeing whether it's deprecated or not. So I thought that the ability to see the list of all deprecated plugins that provide pipeline steps could be helpful for other pipeline developers. What I did was use the update center, as mentioned before in this project, to see if a plugin is deprecated or not. I have a list of all the plugins that provide pipeline steps — I have a map of that — and I can simply iterate through the keys of that map and see whether there is a deprecation entry set to true in our update center file. If that is the case, then we add a deprecated text label near that plugin's name itself. So this is very helpful when
you want to see all the deprecated plugins at a glance. If you type "deprecated" into the search filter, you can see that these four are the plugins that are deprecated and also provide pipeline steps to us, and we also get the steps that have the text "deprecated" associated with them. So both these things are possible; this is basically a superset of the previous query. Okay, so moving on to the main idea of the project, which was to separate the configured parameters into a new AsciiDoc. The project was about moving some of the largest sections of the steps documentation elsewhere, such that a user is not bombarded with a large amount of information on a single page. That makes a page very hard to navigate, and the content becomes of little use when the information is not organized well. So this was the task to accomplish in this project. Some of the observations I made before I started coding were that there are around 8 big plugin pages that contribute about 85% of the total size of the AsciiDocs. The total size before the project was done was around 16.7 MB — this is basically all the AsciiDocs that we see on the Pipeline Steps Reference, all the AsciiDocs that come within that. Here are some examples: workflow-multibranch had a size of 4,967 KB, which is around 5 MB; pipeline-groovy was 2.2 MB; and workflow-scm-step was around 0.4 MB. These are actually very big if you consider how lightweight AsciiDocs are in general — they usually keep pages small in size. So what was the issue behind that? Here are some problems we were facing because of these larger pages. We have a JavaScript that runs and collapses these documentation lists: if you are seeing these collapsed sections over here, this is happening dynamically after the AsciiDoc is loaded, and a JavaScript function collapses these lists into these sections,
and that was taking a huge amount of time when it ran on a page of that size. For example, the workflow-multibranch page took around 20 seconds in my browser to load and become stable, and even after it had loaded it was crashing the browser many times. The overall experience was not very smooth; it was a very laggy experience, I should say.

Here are some useful findings from our investigation. Some of the documentation was redundant; that is, many of the texts within the same plugin page, or across multiple plugin pages, had the exact same content. We know that redundancy is not good in any kind of documentation, so this was something that needed some kind of remedy from our side. Secondly, we realized that if we simply separated every parameter and choice onto a new page, it would not help the redundancy issue much, because we would just be shifting content from the main page to a separate page of its own; it would not be dealing with the redundancy at all.

I'll show an example of the redundancy in the previous deployment that we had. For example, if I go to MultiSCM over here... here it is. As you can see, the list MultiSCM contained had the exact same SCM classes as we have on the main page itself. It's basically a recursive list: MultiSCM contains MultiSCM again inside itself, but the recursion ends because the depth threshold has been set so that it does not increase by more than one. Still, every single parameter, such as GitSCM, is present again. So this is one example of redundancy within the same page itself, and it needed some sort of solution.

One approach we considered was to deal with it inside the AsciiDoc generation process itself: when we are converting those Java maps to AsciiDocs, can we maybe identify those parameter blocks, see whether that content is redundant or not, and decide if we want to separate that content
from the page where it is supposed to be loaded onto a new page? But what I realized was that identifying a particular parameter, or the number of lines within a parameter, is not easy at that stage, because here we are dealing with Java maps and the generator is calling a recursive function, as I mentioned before. If we visualize it as a tree, measuring the depth of the tree is not an easy task in this case.

So instead we thought of having a post-processing sort of layer, which would iterate through the freshly generated AsciiDocs and identify list blocks within them that can be separated onto new pages. This is the approach we thought would be feasible, and it is what we went with.

Here are some implementation details of what we did. processAsciiDoc was designed as an intermediate layer between generating and exporting the AsciiDoc: before the project writes its output, such as the allAscii zip file, this layer deduplicates the content, abstracts the shared parts out, and then writes the final result. For every configured class in our configuration file, we iterate through the entire documentation and move all of its occurrences to one specific place, a folder named "params". Each configured class is just a single file now, and all the other files reference that single file at that location. In that way, we have cleared out the redundancy completely.

("Okay, you have a minute and a half left." "Yeah, sure." "Sorry to push you.") Okay, so this is just a flowchart of what I described. And finally, here are some results I would like to share. The size of the final AsciiDocs has been reduced by more than 2 MB; that is all the redundancy we are getting rid of. And these specific pages, on their own, have been reduced a lot in size. Earlier, if you remember, workflow-multibranch was 5 MB in size; it's
now just 0.7 MB. All of this can be seen on the main website itself; it's all in deployment, and you can see the new behavior. For example, I was looking at MultiSCM before: as you can see, it's just a link now, and when we click through this link, the whole documentation is localized in a single place. The GitSCM over there is the same as the GitSCM over here, if you see what I mean; it's a single AsciiDoc that is referenced from two different locations. In that way I was able to remove the redundancy altogether.

And we have a configuration file in which the maintainers can configure the parameters; this is what it looks like. For example, I have listed GitSCM over here and MultiSCM over here, and this has made the pages very lightweight. Currently we have around 36 parameters configured, and as we add more, the effect of the project will become more and more evident.

These are some future-scope improvements, which you can go through in my slides and on my project page as well. I would say the main goal for a future GSoC could be to integrate the Snippet Generator with jenkins.io. This was something we went over multiple times, trying to find implementations for; due to the lack of time we weren't able to, but I'm sure it would be a very valuable addition in the future for us.

Okay, so I'd just like to acknowledge all the people who helped me throughout: Christian, Jean-Marc, Kevin, Alyssa, and the entire Jenkins Docs community. Thanks a lot, and please let me know if you have any questions. (I'm sorry to have rushed you; it was a very dense presentation.) Thank you, Vihaan. Mark, you had a question? I did, actually. So, Vihaan, there were some analytical things that you did there; is there any guidance you'd give us? On the page load time and the expansion time: were there techniques you used to identify the hot problems there that others of us should
know about? Taking a 10x reduction in the size of a page is pretty amazing, well done. Were there things you used to find those pages initially? Anything you wanted to share there?

Yes. Most of the configuration was a manual process, and hence this is also a drawback of the project that I wrote into the scope for improvement. I had to manually figure out sections of the AsciiDocs that had more than 10,000 lines, and configuring those parameters is what I did for now. But in the future, we could set a threshold: if any parameter block exceeds, say, 10,000 lines, move it to a new page. That could be an expensive thing, though, considering this is built on a weekly schedule, so we thought that balancing the manual work against the build times was the best way to go for now.

Okay, thank you, Vihaan, and thank you to the mentors who participated; a very interesting project, like the others. We'll try to move on and keep the timetable honest. So, Mark, can you share with us the work that Rushikesh did under your mentorship? The floor is yours. (You're on mute.)

Rushikesh worked on Git cache maintenance, and we're very grateful to him. His family schedule didn't allow him to attend today, so I'm going to present a condensed version of his work. Thanks to Rishabh for being a mentor as well. The challenging task hiding in all this is: how do we deal with things that are somewhat contrary to the nature of Git? Git, as envisioned by Linus Torvalds and maintained by a great team of engineers, focuses on fast, immediate operations, and is willing to sacrifice larger, long-term operations for the benefit of short-term ones. For instance, it intentionally chooses not to garbage-collect on every operation a user does; therefore things accumulate over time when we have long-living historical repositories that need maintenance. Rushikesh, as our contributor,
provided a great start on this, allowing us to have Jenkins automatically maintain its Git caches. The challenges hide in the fact that Git, by its nature, focuses on fast operations for developers, not on long-term storage and fast access for big, bulky processes like a Jenkins process. The caches in Jenkins sit behind the controller, and they remember things about particularly large repositories. The Linux kernel, as one example, is over two gigabytes as a repository; we certainly do not want to copy two gigabytes every time we need to ask a question of it. We can cache it, and that caching helps, but those caches may become dirty, untidy, or suboptimal over time. This cache maintenance work helps us improve that, with automated garbage collection, automated prefetch (so that when the time comes to ask a question, much of the question has already been asked), commit graphs, and loose-object maintenance. So the Git caching and Git maintenance improvements help users by getting work done faster.

I'm going to show you a brief screenshot of what Git cache maintenance looks like. When you install the Git plugin, after this is released, you'll have a Git maintenance button here in Manage Jenkins, and here's the configuration page. It tells you the Git version you're running, and here are the cache operations that you can perform. I've set this one to run once a day (what's shown here is just another way of saying once a day), and all of the syntaxes used for Jenkins cron jobs, and Jenkins jobs in general, are available here. The results are shown in a nicely presented table. Here's a good example: a 160-megabyte repository that I use and depend on, and when I expand it, we see all sorts of results. Here's the cleanup that ran to garbage-collect this thing in seven seconds. That garbage collection means I waste much less time later when I access that
repository. Those kinds of operations are exactly what automated cache maintenance does for users. There's a lot to be gained from this: users define the schedule on which maintenance is performed, the maintenance gets executed, and they get the benefits. On the flip side, there were some real challenges for Rushikesh. He was learning things that were outside our expertise as mentors; I don't know how to build really good Jenkins UIs, and Rishabh, I think, felt similarly. He also had to handle a wide range of Git versions, from the ancient Git installed on CentOS and Red Hat 7 all the way to the most modern Git versions installed on Windows or the BSDs; all of them have to be handled by this one set of code. We're really grateful for what Rushikesh did. He did a great job handling the challenges, and we look forward to merging his work and delivering it in an upcoming release of the Git plugin. Jean-Marc, that's really all I had.

You really impressed me with a very condensed, rich presentation of Rushikesh's work, so thank you very much for jumping in. Congratulations to Rushikesh and the mentor team, as well as all the other participants who presented their work here; it was really a great summer of code. On behalf of the org admin team, thank you to the very impressive and formidable GSoC contributors, and to the mentors; without the mentors' help this would not have been possible. It was a learning process, and like Mark said, we all learned from it, working together. That was really, really great. Next slide.

So we're getting prepared for the next edition. Google has not disclosed exactly if, when, or how the next edition will happen, but it will happen, and our project will apply; this is a decision made by the Outreach SIG. That means we need mentors and we need project ideas; there will be an announcement for that. If you want to join the org admin team, let us know. I think Chris will join again next year, so the team is nearly complete (thank you, Chris, for joining), and Alyssa is on
board too; I need her help, as she's been key in the startup phase, and I will also continue. But if somebody wants to join the effort as an org admin, just send us a message; we don't want to work as a closed club, and everybody can join. So I'm calling out to the community through this online meetup, and there will be other announcements: we need mentors, we need project ideas. It's a very interesting experience for the contributors, and it has a big impact on the Jenkins project. Here are some resources; they will be available on the slides, and this is basically what I have.

Maybe, Alyssa, you can jump in on that: do we wrap up the meeting, or do you want me to say a word on this slide? Yeah, you can give a couple of seconds on it. So, DevOps World was supposed to be last week, but because of the hurricane in Florida (that was a bit of fun) it was cancelled, and they will schedule an online virtual event that is TBD at the moment. And then of course FOSDEM is another annual open source event that we go to every year. It is going to be in person next year, taking place February 4th and 5th in Brussels, and we are planning to be there. It will be at the main university in Brussels, so I can go to the conference on my bicycle. A small detail here: I was ready to set off for the conference, but was not blown away by the wind and came back home in one piece; so it was for a reason that DevOps World was cancelled, the hurricane was there, and you have seen it on the news. Hacktoberfest is really going strong now: yesterday we had the kickoff live stream of Hacktoberfest, so join us, and we will have another community-oriented live stream next Tuesday. It is announced on various channels, so I will not hijack the online meetup to promote that event. As a final word, I say thank you to everybody: to Alyssa for organizing this online meetup, and to all the participants in this very
rich and very fun summer, and for the time we spent together. And I give the final word to Alyssa.

Sure. Thank you, GSoC contributors, and thank you especially to the mentors as well. I felt this project went really well this year, and if there is room for improvement, I will start a document; I would love to get your feedback on how we can make things better for next year. Your contribution has been invaluable, so please know that, and you are always welcome to come back and continue to contribute as well. So thank you very much. With that being said, we will end this call. Thanks, everybody.