The recording has started. Am I, are you still able to see my screen? Yes. You need to put it full screen. This is perfect. Okay, I can't see the time. Okay, we're actually on the hour, so let's get started. Yes, let's go.

Hi, everybody. Welcome to the Jenkins Google Summer of Code 2022 Jenkins Online Meetup. This is our coding phase one midterm demo and status update from GSoC contributors. About the Jenkins Online Meetup: this is a virtual meetup group intended for users and developers around the world. The aim of this group is to conduct regular online webinars about all things Jenkins, so we are always looking for Jenkins stories, how-tos, and best practices. If you're interested in being a speaker, please reach out to us on the Advocacy and Outreach Gitter channel. If you have questions throughout today's webinar, please put them in the chat window; our speakers will get to them after their presentation, or, if need be, our mentors will be online to help answer questions during the presentation as well. You can also find us on Gitter and Jenkins Discourse after this event. I would like to highlight that the code of conduct applies here; if there are any questions about it, it's basically about being nice and respectful to one another. On the agenda, we will quickly go over what GSoC is and what Jenkins in GSoC is, and then we'll go straight into the project demos and presentations by our GSoC contributors.

This is Google Summer of Code: a global online program focused on bringing new contributors into the open source software development space. GSoC contributors work with an open source organization, in this case Jenkins, and they spend about 12-plus weeks on a programming project under the guidance of mentors. Jenkins has been participating in GSoC for six years, and we have four project ideas, which our GSoC contributors will now present: what each project is all about and how it has progressed. Let's dive right into it. Jean-Marc?

Yes, okay. Welcome, everybody online or listening to this recording. On behalf of the org admin team, I'm very pleased to introduce the presentations of the various projects participating in this edition of Google Summer of Code. Next slide: we start immediately with the first presentation, from Yeming, about the Jenkinsfile Runner Action for GitHub Actions. Yeming, the floor is yours.

Okay, let me share my screen. This one. Okay. Just make it full screen there. Okay, let's start. Hello everyone, my name is Yeming. I'm the Jenkins GSoC contributor for Jenkinsfile Runner as GitHub Actions this summer, and I'm honored to give the midterm presentation about this project today. I'd like to give a brief introduction about myself first. I'm a graduate student from Carnegie Mellon University; my major is electrical and computer engineering. I'm a newbie in the Jenkins open source community, and I'm grateful that Jenkins gave me the opportunity to realize the idea in this project.

The Jenkinsfile Runner Action for GitHub Actions provides a customized, containerized environment, packaged as GitHub Actions, for users to run Jenkins pipelines inside GitHub Actions. In detail, with these actions, any GitHub project that has a Jenkinsfile can execute its workflow on the GitHub Actions runner. It aims at applying Jenkins to GitHub Actions in a function-as-a-service context. This feature is based on Jenkinsfile Runner, a command-line tool for the Jenkins pipeline execution engine. So, look at the picture.
Here is the basic architecture of these actions. I combine Jenkins core, a minimum required set of plugins to run the pipeline, the Plugin Installation Manager, and Jenkinsfile Runner into a base image, and then I set up an entry-point shell script to start up the container workflow.

So how can you use our actions? I provide two Jenkinsfile Runner actions in my project now: one is called jfr-container-action and the other is called jfr-static-image-action. You can access them in your GitHub Actions workflow definition using the following URLs. As you see in these high-level diagrams, they are similar to other GitHub actions: the basic example uses actions/checkout to set up the workspace and then uses our Jenkinsfile Runner action to run the Jenkins pipeline. It is pretty simple.

Since I provide two actions, there are some subtle differences between them. The root cause of the difference is the starting time of the Jenkins container. With jfr-container-action, the Jenkins container starts up before all the GitHub Actions steps execute and ends after all of them have finished, so all GitHub Actions steps can influence the container. This means you can use other actions from the marketplace to set up the environment as well. With jfr-static-image-action, the Jenkins container's start is postponed: it starts up right before the jfr-static-image-action step starts and ends right after that step ends. In this case, the Jenkins working container has strong isolation from the host machine, so users cannot use other actions to set up the container environment, except actions/checkout, which has more native support from GitHub Actions.

Now look at our basic function interface. It is pretty simple: you only need to point out the running options and the relative paths of the Jenkinsfile, the JCasC YAML file, and the plugin installation list file in your repository. The command input means how you want to run Jenkinsfile Runner; the default command is run, and the other supported one is lint. The Jenkinsfile input means the relative path to the Jenkinsfile in your repository; the default value is "Jenkinsfile". The third one is plugins.txt, the relative path to the plugins list file in your repository. And the final one is the JCasC input: the relative path to the Jenkins Configuration as Code YAML file. As some people might not be familiar with the tools inside our actions, I listed some basic commands about them. Basically, we use the Plugin Installation Manager to install the plugins specified by the plugin installation list file, and Jenkinsfile Runner to run the Jenkins pipeline.

After these explanations about the actions, I want to share some demos so you can apply them in your own GitHub workflow definitions. I already added some hyperlinks, so you can click them. These demos aim at applying Jenkins in a real FaaS context, where the one-time-use Jenkins controller won't store anything in the file system. The first one integrates actions/cache to cache installed plugins. actions/cache is a very popular action from the GitHub Actions Marketplace for caching your dependencies. By default, the plugins specified by plugins.txt are downloaded from the internet every time you run the workflow. To accelerate the workflow, you can run actions/cache first and pass its cache-hit status into our action as an input; a sketch of such a workflow follows.
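As a rough illustration of what such a workflow can look like (the action path and input names below follow what is described in the talk, so treat them as approximations and check the project READMEs before copying):

```yaml
name: jenkinsfile-runner-demo
on: push
jobs:
  jfr:
    runs-on: ubuntu-latest
    steps:
      # Set up the workspace; checkout is the action both JFR actions support
      - uses: actions/checkout@v3
      # Optional: cache downloaded plugins, keyed on the plugin list
      - uses: actions/cache@v3
        id: plugins-cache
        with:
          path: plugins    # illustrative cache location
          key: ${{ runner.os }}-jfr-plugins-${{ hashFiles('plugins.txt') }}
      # Run the Jenkins pipeline with Jenkinsfile Runner. The talk also
      # mentions passing the cache step's cache-hit status to the action;
      # the exact name of that input is not shown here.
      - uses: jenkinsci/jfr-container-action@master    # path as named in the talk
        with:
          command: run             # default; `lint` is the other supported mode
          jenkinsfile: Jenkinsfile # relative path in the repository
          plugins: plugins.txt     # plugin installation list
          jcasc: jcasc.yml         # Jenkins Configuration as Code file
```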
One thing to notice is that this feature is only available with jfr-container-action, because with the other action you are not allowed to integrate other GitHub actions with the Jenkinsfile Runner action. So, next one. The second demo is about uploading your pipeline log to the GitHub Actions pages by integrating actions/upload-artifact. As you can see, after these two actions run, they leave the pipeline logs in your workspace, where you can access them, so you can use the actions/upload-artifact action to upload the logs from that path. As you can see, the only difference between the two actions here is the path of the pipeline logs. This one has, let me see. You can download or delete those artifacts on the related GitHub Actions pages; you can see the artifacts at the bottom.

The following examples are still in progress, but the experiments are successful, so you can play with them; you can directly copy the Jenkinsfile or JCasC configuration, and I will put them in the official demo pages later. The third demo example is caching the Maven dependencies by integrating the Jenkins Job Cacher plugin. This one is pretty similar in function to actions/cache; the difference is that it uses a Jenkins plugin. In this case I use AWS S3 as an example, but you can use any similar object storage service. First, you set up the cache in the Jenkinsfile and validate the cache freshness with a specified key; for example, in a Maven project you can cache your Maven dependencies keyed on your POM. Then what you need to do is install the Job Cacher plugin via plugins.txt, configure the AWS access key and the S3 storage path in the JCasC file, and then declare your AWS keys as environment variables in the workflow definition. You can also find the example in the link.

The next example is uploading artifacts to AWS S3 by integrating the Jenkins Artifact Manager on S3 plugin. The function of this demo is pretty similar to actions/upload-artifact, but you can use your own cloud services. As you can see, the steps are pretty similar to the previous example: all you need to do is manage the artifacts in the Jenkinsfile, configure the AWS access key and the related storage path in the JCasC file, and then declare the environment variables in the GitHub workflow definition.

And the final example is about triggering builds on another, permanent Jenkins agent. I think this demo is pretty interesting, because sometimes you might need to collect results from builds on different agents: maybe you don't want to build the results on only one machine, and you need another agent to build your job again to see the differences. That's just my guess. The previous examples only give single-agent setups, but you can combine them with this one. First, you need to create the Docker agent on your machine, and remember to set up the related security policies accordingly; if you don't know how to set up the Docker agent, you can click the link below, I think it is the second URL. Then you need to call the different agents to process your build steps in the Jenkinsfile. Finally, you set up the agents in the JCasC file and declare your machine access key and the public DNS name as environment variables in the workflow definition, as in the sketch below, so the Jenkinsfile can use SSH remoting to access the other, permanent agent. Let me see.
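For the examples that need credentials (the AWS keys, or the agent access key), the workflow side of that wiring might look roughly like this; the secret and variable names are illustrative, so use whatever names your JCasC file references:

```yaml
      # Step fragment from a workflow like the one above. Secrets are stored
      # in the repository settings and exposed as environment variables so the
      # JCasC file and the Jenkinsfile can pick them up inside the container.
      - uses: jenkinsci/jfr-container-action@master
        with:
          command: run
          jenkinsfile: Jenkinsfile
          plugins: plugins.txt
          jcasc: jcasc.yml
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```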
I also gave some JCasC examples here that you can directly copy. In our project, I have to clarify that the command-line Jenkins pipeline execution engine is powered by the Jenkinsfile Runner project, and the plugin installation part is powered by the Plugin Installation Manager; thanks to these projects that I built on. So that's it. Thanks for listening.

Thank you very much, Yeming, for this very interesting presentation. I see that you've done quite some work experimenting with different techniques and the potential of it. Thank you very much. Just checking the Q&A. Okay, no, that's not the right button here. No questions in the Q&A. So, back to the main presentation. The next one is the Plugin Health Scoring System, with Dheeraj. The floor is yours now.

Thank you. I'll now share my screen. You should be able to see the screen, right, the front page? Perfect. Awesome. So let's move forward. This project is called the Plugin Health Scoring System; it's myself, and my mentors are Adrien, Jake, and Aditya. With their help, we were able to do a lot of work on this, so let me present how the project looks so far. This is the agenda: we're going to talk about what the project is about, then we will go through a demo, then I will talk about what I learned, then there will be some discussion on what is next for this project, and then questions and answers, if there are any.

Now, the Jenkins community, as we know, has around 1,800-plus plugins. That's a lot. And not each and every plugin is at the same maturity level, right? Some might be at the utmost bar that we have for plugin maturity; some might not be at that level. So there are disparities between them. What we need is a system in place that tells us exactly where a given plugin stands from this maturity perspective. We are aiming to do that by putting a numeric score on a given plugin; that score is going to tell you how mature the plugin is. In simple words, we are just going to rate a plugin. It's going to help the users, it's going to help the maintainers of the plugins, and it's also going to help the new contributors who are interested in coming and contributing to the Jenkins ecosystem.

Having said that, let's quickly jump into a demo. I think you should be able to see the IntelliJ screen, right? My IDE, are you able to see the IDE screen? Yes. Yes. Awesome. Thanks a lot. We'll be building and running this project, and just before that: this is how the README file of our project looks. I'll be following these simple commands that we have here to set up and get the project running. Going back to IntelliJ, I'll run the Maven package command, and we'll be skipping the tests. Okay, it's going to generate a jar file for us, and then we'll run the docker compose up command to start the services. Talking about services, we have two of them here: this is the Java application service, and this one is the service for our Postgres database. Both of them work together in parallel. And now you can see we have a message saying "Started plugin health scoring", so both services are running right now; the exact commands are sketched below. But you shouldn't just take my word for it.
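A sketch of those setup steps from the README (exact flags assumed; check the project README):

```sh
# Build the application jar, skipping the tests, as done in the demo
mvn package -DskipTests

# Start the two services: the Java application and the Postgres database
docker compose up
```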
So I'll show you that they really have started. Let me go and click on this connection button. What we are doing right now is connecting to the database that we just started with docker compose up. We set up a connection by filling in some details, and now I'll run some queries to see what's inside the database. There's basically one table, called plugins, and if I query it, you can see these are the fields it has. If I query it and rearrange the columns, it looks like this; I just wanted to organize them well. For a given plugin, we have its name, we have its SCM URL, we have its release timestamp, and we also have a JSON object, in a column named details. This detail is fetched from the Jenkins update center, which is hosted and available for everyone to view.

Now, there's a very nice thing we can already do with this database: you can see that some of the fields in the SCM column are missing a value, which means those plugins do not have a valid SCM assigned to them. We can query and find out how many plugins do not have a valid SCM at all; if I run this last query, it says there are 52 of them. So that's one insight we got just by having this database in place. And we're going to have lots of very important details inside this database, so that we can run very specific queries to find out what's going on across the plugins as a whole. That is how this is going to look as we move forward.

Having said that, let me quickly show you the flow, how the code looks, without going too much into detail. This is the import-plugins runner file, where it all starts. We call this method, readUpdateCenter, which, as its name suggests, reads the update center, using this update center URL to access it. Then we use Jackson's ObjectMapper API to read the values, and we have these specific records in place. These records are written by us to tell Jackson exactly what we want to fetch from the JSON object we get from the update center: we fetch plugins and deprecations, and for those we have two more records specifying exactly what we need in each of those entries. Moving back, we then store those results (it actually returns a list of plugins) in the database using this saveOrUpdate method, which just calls the plugin repository's save method. That's the first and most important flow of the project.

Now, moving back to our slides: we are done with the demo. These are the key details: getting the plugin list and storing it in the database, and having an open structure to record details. As I told you, we have the details column, which stores a JSON object, and it's a JSON object because that way it's flexible. In the future we might have many different plugins whose details can be stored inside that JSON object according to whatever their properties are. That was a very conscious decision. This list can go on and on, but I've put in a few of them.
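Going back to the import flow for a moment, here is a small, purely illustrative sketch of the Jackson pattern described above. The class and record names are mine, not the project's; only the update center URL is the real, public one. (Record deserialization needs jackson-databind 2.12 or newer.)

```java
import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.net.URL;
import java.util.Map;

// Records tell Jackson exactly which fields to pull out of the JSON.
@JsonIgnoreProperties(ignoreUnknown = true)
record UpdateCenter(Map<String, PluginInfo> plugins) {}

@JsonIgnoreProperties(ignoreUnknown = true)
record PluginInfo(String name, String scm, String releaseTimestamp) {}

class ReadUpdateCenter {
    public static void main(String[] args) throws Exception {
        UpdateCenter uc = new ObjectMapper().readValue(
                new URL("https://updates.jenkins.io/current/update-center.actual.json"),
                UpdateCenter.class);
        System.out.println(uc.plugins().size() + " plugins read from the update center");
    }
}
```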
So, what did I learn? Since we have CI in place for the project, whenever there's a new contribution, say a PR, the CI starts working, and we have some integration tests as well, so all of them run against it. It builds and runs the integration tests to check whether the contribution is up to the mark or whether it's failing something, and if it is failing, we get reports: SpotBugs reports and Checkstyle reports. So these are the things I learned, along with best practices and writing clean code; they've really helped me with this project so far.

Let me talk about the next steps, what we have coming for this project. We are now working on designing the probe engine. This probe engine is going to hold the logic for each and every probe: how it works, and how it helps analyze a plugin's repository, let's say. Then we're going to have a process that applies weights to the probe values to generate the final health score we're looking for. That final health score is going to be delivered via a JSON file, just like the update center file is hosted; the scores will be taken from that JSON file and rendered on a UI, say the plugin site or the Jenkins plugin manager. That's a decision to take later. So this is how the plan looks. If you want, you can check out these resources to know more; they contain everything. And yes, if you have any questions, let's address them.

There is one question that appeared, I don't know if it's in the Q&A or elsewhere: a very intriguing decision, to store the information about the plugin health as a JSON field. What were the arguments for and against, and what were the alternative options for that field? So I can try answering in a few words, and then I'll pass it to Adrien, who can shed more light on it. We chose JSON because, when we talk about details, as the name of the column suggests, we don't know what details a plugin might have; each plugin has different properties, and not all of them are built the same way. So we need something flexible enough to store anything, depending on the plugin's properties. That's one thing. I'll pass it on to Adrien if he wants to add anything else. Adrien, go ahead.

Yeah, can you hear me? Yes, loud and clear. So, that's a very good question. The idea of having a column in the database per probe could have been a solution; I didn't explore it much. I went with a JSON column because I wanted to be more flexible, not having to restructure the table each time we create a new probe. I wanted one column that would be able to hold all the probe results. Also, be mindful that not all the probes, like I said, will be applied to every one of the plugins. For example, we have in mind a probe that will determine whether a plugin is using a Jenkinsfile or not, and then, if the plugin is using a Jenkinsfile, we can have a secondary probe:
is it using a custom build instruction, or is it using the recommended build configuration from the infra team? And then you can see that we won't apply that secondary probe to plugins that don't have a Jenkinsfile, so we would have a lot of columns without any usable data. Also, a probe result is something a bit more complex than what a varchar column per probe could hold. That's why I went with JSON. In the end, maybe we could have used a NoSQL database entirely, because the main data point of the project is stored in a NoSQL kind of column. But after discussing with the infra team: they have knowledge about PostgreSQL, they have a way to provision it easily on their cluster, and they know how to manage that kind of database. And I know the JSON data type in PostgreSQL is also quite performant: you can have a good index on that data. So even though the query system might not be as easy with JPA and such as with other column types, it's totally feasible to write custom queries, with either PostgreSQL functions or the native syntax for querying JSON objects. This is why I wanted to go with a JSON column. In the end, the way we are storing the data in the database is not strictly the core product of this project. We will be able to review it, of course, in six months or a year; changing it might be difficult, but the database is not keeping data for the long term. We can re-analyze the update center and all the plugins from scratch whenever we want, so nothing would be lost. I don't see any problem having to start from scratch if we wanted to move to NoSQL or to a column-per-probe result-storage system, but for now I don't foresee issues with the JSON approach.

Thank you very much, Adrien, for this very detailed explanation. I think a lot of thought went into that decision; very, very interesting. I'm looking forward to seeing the results and how it progresses. There was another question. This one is for Dheeraj; we're running out of time, but the question was: do we already have something to share about the different probes that will be used to decide what a mature plugin is, and how this will be measured and consolidated? Do we already have a good view on that? And don't forget to mention the survey that's going on. Yes. So, we do have a survey going on, to find out the weights for the different probes we are planning to have. To answer the question, do we have anything just now? Not really; there's no working probe right now. But moving forward, let me give you an example of a probe. One probe can be: does the given plugin have a Jenkinsfile or not? And to answer how we'd do that: it's just going through the plugin's repository and checking for the presence of a Jenkinsfile. Another can be checking whether the given plugin is on the recommended Jenkins baseline version or not. That can be another probe. So we are going to have this list of probes, and there are going to be specific functions that address those probes, like scraping the repository and the pom.xml file and checking the version: does it match the recommended version or not? If not, the probe doesn't pass. Then, based on the weight the probe has, we assign a score for that probe; this happens for each of the probes, and combining all of the probe values gives the final health score.
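Purely as an illustration of that weighted combination (my sketch, with invented names, not the project's actual probe engine):

```java
import java.util.List;

// Hypothetical shape of a probe result: did it pass, and how much it weighs.
record ProbeResult(String probeId, boolean passed, double weight) {}

class HealthScoreSketch {
    // Weighted share of the passed probes, scaled to a 0-100 score.
    static double score(List<ProbeResult> results) {
        double total = results.stream().mapToDouble(ProbeResult::weight).sum();
        double earned = results.stream()
                .filter(ProbeResult::passed)
                .mapToDouble(ProbeResult::weight)
                .sum();
        return total == 0 ? 0 : 100.0 * earned / total;
    }

    public static void main(String[] args) {
        // A plugin passing the Jenkinsfile probe but not the baseline probe
        System.out.println(score(List.of(
                new ProbeResult("jenkinsfile-present", true, 2.0),
                new ProbeResult("recommended-baseline", false, 1.0))));
    }
}
```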
So, was I able to answer? Yeah, great. I am keeping an eye on the clock so that we keep this meeting on the rails. Thank you very much for this explanation. Two things I take away from it: first, we're building for the future, with the ability to refactor what we're working on, in the database design and in the probe design; and second, this is work in progress, just the start of the story being written here. There will be details, maybe on the meetup page or elsewhere on the various channels, about where the survey is. Don't hesitate to give your opinion on what, for you, makes a trustable and healthy plugin and what doesn't; we're listening to the community. Dheeraj, thank you very much for this very interesting presentation and for the interesting challenges that you're facing in this project. Well done. Well done.

So let's move to the next presentation: Rishikesh is going to explain his work on automatic Git caching. Can you see my screen? Yes, perfect. Go ahead. Let's get started.

Yeah. So, hello everyone. It's indeed my pleasure to be here today; I'm going to present my work on automatic Git cache maintenance, GSoC 2022. So here we go, let's get started. I'll take a few moments to introduce myself. I am Rishikesh Rao, and I belong to the land of unity in diversity, that is, India. Currently, I'm pursuing my third year of engineering in computer science. My hobbies are coding, gaming, and watching movies. The things that interest me the most and keep me occupied are blockchain, distributed systems, and system design.

Let's have a glimpse of the components of today's presentation: where it all started, and how it's currently progressing. Git is the most widely used version control tool; many of us use it in our day-to-day lives. Any changes made in our project are stored using a blob, a tree, and a commit, which are known as Git objects. As the software progresses through its development lifecycle, the number of commits grows, and with it the size of the Git directory. That increase in size directly affects the performance of Git commands if the repository is not maintained properly. An unoptimized repository mainly contains many loose objects, no commit-graph, and inefficiently packed files.

My project is automatic Git cache maintenance. It means I have to optimize the Git caches on the Jenkins controller. What exactly are caches? Caches are directories on the Jenkins controller. They are created by the git plugin when a Jenkins job is configured, typically a multibranch pipeline. Each cache contains a bare Git repository: a clone of the original repository that contains only the contents of the .git folder.
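In other words, each cache is the kind of clone you get with the `--bare` flag (the repository URL here is just an example):

```sh
# A bare clone has no working tree: the data that normally lives under .git
# is the whole clone. This is what each Jenkins cache directory holds.
git clone --bare https://github.com/jenkinsci/git-plugin.git git-plugin.git
```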
Multibranch pipelines often include many branches, which would result in checking out the same Git directory multiple times; to avoid this duplication, caches are used. What are the advantages of caches? As a cache only contains the Git directory, it reduces network I/O and local disk usage. The disadvantage: currently, in the git plugin, there is no way of periodically maintaining the Git caches. The caches have to be maintained manually using a script, so they remain unoptimized. Hmm. Is there a fix for this? Yes, there is. In Git version 2.30, the Git developers introduced a new API: the git maintenance API. The API comprises various tasks, and each task has its own specialty. The aim of my project is to integrate this API into the git plugin and the git client plugin. The integration of the maintenance API provides a mechanism for administrators to periodically schedule maintenance tasks, using cron syntax, on the Jenkins controller. A few maintenance tasks, such as gc and commit-graph, are also implemented for older Git versions, since we support Git versions as old as 1.8, which was released all the way back in 2013.

So let me show you a glimpse of my project. Can you see the terminal on my screen? Yes, maybe. Is it easy for you to make it a little bit bigger? Oh, wait, I'm not sure how. Yeah, I'm not sure. No? Okay, go ahead. Yeah, so what are caches? Caches is a directory present on the Jenkins controller. If I go into the caches directory, you will find Git repositories; these are bare Git repositories. If I move into one of the caches, you will see it only contains a Git repository. These caches are unoptimized right now. We can tell they are unoptimized by looking into the objects folder: if I look into the objects folder, you can see so many directories present in it, and each directory contains various files, known as Git objects. These are the various Git objects present inside the Git caches; each Git object can be a tree, a commit, or a blob. Let's see how much space this directory takes. Okay, this directory takes around 127 MB. Can we do better? Can we optimize this and reduce the space this directory takes? Let's try it out by running the Git cache maintenance on Jenkins.

Let me start the Jenkins software by running this command; this command compiles, builds, and, yeah, starts the software. Yeah, I think the project is up and running. It's running on localhost:8080/jenkins. This is the Jenkins UI, and if you click on the Manage Jenkins section, you will see a Git Maintenance system configuration. When you click on the Git Maintenance system configuration, you will see the UI. The UI is not that good right now, so I'll have to improve on that; bear with me. The current Git version on my computer, which is being used to run the maintenance tasks, is 2.37.1. In this demonstration, I'm going to go through two maintenance tasks: the commit-graph and loose objects. I'm going to schedule a commit-graph every minute. Okay, so every two minutes; yeah, this expression schedules a commit-graph every two minutes. We schedule a prefetch every hour, and a GC every week; it is recommended to use GC with care, because it takes a lot of time to run. Loose objects, let's schedule it every minute.
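These scheduled jobs correspond to the standard `git maintenance` tasks, which you can also run by hand on any of the cache repositories with Git 2.30 or newer; a sketch:

```sh
git maintenance run --task=commit-graph        # write/refresh the commit-graph files
git maintenance run --task=prefetch            # prefetch new objects from the remotes
git maintenance run --task=loose-objects       # prune and pack loose objects
git maintenance run --task=incremental-repack  # repack without a full GC
git maintenance run --task=gc                  # full garbage collection (slow, use with care)
```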
Okay, so Jenkins warns that it is recommended not to schedule any task every minute, as it is resource-intensive. That's just a warning, but let's do it for the sake of this demonstration. And incremental repack: let me run it every 61 minutes. Okay, wait, what? At 61 minutes Jenkins is throwing an error: you can't schedule a maintenance task every 61 minutes. Let's change it to daily. Yeah, so Jenkins is happy. Let's save the data. And yeah, it has been saved successfully. We can verify that by looking into the logs; there is a system log for the Git maintenance, and yeah, the data has been saved successfully. We can also confirm it by looking into the XML config file stored by Jenkins. This is the data that has been stored successfully: each maintenance task along with its cron syntax, and whether it's configured or not.

Now, before clicking on the Execute button (the Execute button schedules the maintenance tasks), let's check whether these maintenance tasks have any effect. We'll go into the caches directory, into the git client plugin cache, and look into the info directory. Right now there is nothing there other than an exclude file. So let's schedule these maintenance tasks. Okay, the maintenance tasks have been scheduled. Let's look at the logs: the maintenance tasks are scheduled for execution. How does this work internally? Every minute, Jenkins checks whether each configured cron expression is due. If it is, the corresponding maintenance task is scheduled for execution; that is, Jenkins takes that maintenance task and runs it on all the caches. Let's refresh this page. Yeah, as you can see, the commit-graph task has been added to the maintenance queue, and loose objects has also been added to the maintenance queue, because its cron expression says to run it every even minute: two, four, six, eight. Currently on my computer it's 9:18, so that's why the loose objects task has been added to the queue. The commit-graph maintenance task is being run right now, so let me refresh. Yeah, the commit-graph maintenance has run successfully. Let's look into this folder. Something unexpected. Yeah, something unexpected, one minute, let me look into it. (It's never a good omen if a demo works immediately. I'm looking forward to the finale, Rishikesh; I don't want to pressure you, but you have two more minutes max.)

Okay, so, yeah, I'll wrap up this demonstration by talking about the commit-graph. If you look at the commit-graph folder, you will find two commit-graph files which have been created by this command. So now, let's look at the performance optimizations we get. If I run this command, which disables the commit-graph and runs the git log command to get the latest 20 commits. Okay, it takes around... there are things going wrong on my side, don't worry. Okay. Okay, one minute. Let's go into this directory and into the info directory. Yeah, into the commit-graph. And now let me try the same command. Okay. Yeah, with the commit-graph file turned off, this command is taking some time to fetch the latest 20 commits; I think it will take around 10 seconds to execute.
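What the demo is measuring can be reproduced with stock Git; a rough sketch:

```sh
# Without the commit-graph: force Git to walk the object database itself
time git -c core.commitGraph=false log --oneline -20

# Build the commit-graph files (what the commit-graph maintenance task does)
git commit-graph write --reachable

# With the commit-graph enabled, the same traversal is dramatically faster
time git -c core.commitGraph=true log --oneline -20
```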
Yeah, so as you can see, it has taken 16.895 seconds. Now let's try running the same thing with the commit-graph turned on. Wow, it has taken 0.025 seconds. That is around 600 times faster; that is one performance gain we have seen using a commit-graph. And one more thing I want to show: whoa, where are all those directories that were present in my caches? All of them have vanished. And what is the size of this directory now? Before, it was around 128 MB, and now it is 8.1 MB, so you can see the directory has been compressed by around 15 times. There are various other maintenance tasks as well, but as we are running out of time, I think I'll stop here.

This is an impressive way of doing maintenance, and a nice way to conclude your presentation, Rishikesh; that was impressive. Thank you very much. This was a very interesting, lively presentation, and you dared to do a full live demo of your product, so congratulations for that. I'm sorry to rush you a little bit. I'd now like to give the floor to Vian for the next presentation.

Hello everyone. It's great to be here, and I'll start in a moment, once I share my screen. So, I am doing the project for the Pipeline Steps Documentation Generator, and my mentors are Christian and Hership. Let me give some brief context about my project: what is it about? The Pipeline Steps Documentation Generator is a tool that generates the documentation for our Pipeline Steps Reference page. What it does is query all the Jenkins plugins for the documentation of their steps, format it as AsciiDoc, and then feed it into jenkins.io. That way, users can see the entire steps documentation in a single place and use it the way they want.

Now for the rationale behind this project: why is it important? You can currently find around 600 plugins on the Pipeline Steps Reference page, and these plugins provide more than 1,500 steps. If we dig further, these steps have several parameters, each with its own hierarchy and its own help text, and this makes the entire documentation enormous. On user feedback: there has been a lot of feedback about this page, and it is not easy to find a particular piece of information in all this documentation. For instance, if you look at the Pipeline Multibranch plugin page and at its AsciiDoc, it comes out at around 150,000 lines, and that is not good for the developer or the user; both of them will find it very hard to deal with. Even the loading speed of the page becomes quite low. So we had to deal with that, right? And while we were finding solutions for this, we also came up with some additional improvements that were not initially part of the problem statement, but that we thought could benefit the website.

So I'll start with the work I've done during GSoC. First, the things I did during the community bonding period. Like everyone, I spent some time studying the code base of jenkins.io and the PSDG repository; these two repositories are my project's main concern. I created a pull request on jenkins.io to fix a navbar issue, and this actually led me into understanding the CSS and the layouts that are at work on jenkins.io and helped me work towards my first pull request.
Then, as my mentor Christian suggested, it was better to create an epic on the Jenkins Jira to keep our work organized, and this really gave a corporate kind of feel to the project's organization. I listed all the tasks under that story, and it's working really well for us. And finally, I created designs and wireframes for the UI and layout parts to get some feedback from the community, so that they could understand what I was talking about and get the idea on paper.

So, let's begin with coding phase one. The first improvement I made was changing the sidebar scrolling behavior. This was a completely UI-based change: the sidebar used to scroll along with the content. Let me just move to that page. This was the UI that was present earlier for the Jenkins documentation sidebar, and as you can see, the sidebar scrolls along with the main content. So if someone scrolls down and then wants to move to a different page, they'll have to scroll back up, or maybe just press the Home key, before they can see the sidebar and click the link they want. But this is not an effective way to work with things; you really need both of them visible together, and that is the standard convention followed by most documentation sites nowadays. So I went ahead and did that, and the updated look is something like this: the content and the sidebar scroll independently. When you scroll one of them, the other stays stationary, and if your screen is smaller than the height of the sidebar, the sidebar itself can scroll, so you can still navigate through the entire content. The major point about this change is that it affects the entire Jenkins user documentation, not only the Pipeline Steps Reference, so the whole documentation gets the benefit.

Moving on to the next change in the presentation. The second thing I proposed, using the wireframes, was listing the plugins in the Pipeline Steps Reference; that is, listing all the headings you see over here inside the sidebar, so that users could see which plugins are there and click any one of them to reach its individual page. But after I implemented it, we found that it might not be very useful: the page was becoming more crowded, and we felt it was dragging us down more than it could benefit us. From the community's feedback, we found that including a search filter would be much better, and that that was the way to go. So we scrapped the plugin-listing change and moved on to working on a search filter. That search filter was more successful, and the change itself was much more effective than the list could ever be. I'll show you what I mean. For example, say you want to search for "git", just to see all the steps that plugins provide for it, taking it just as an example. As you can see over here, the page has 101 browser search results for that, and if you want to toggle between them (here in one line there are two places where "git" is present), you'll basically have to navigate a lot in order to see all the search results. Seeing the relevant results in one place is not possible with this method. But it is with the new search filter I've created.
And it's very simple: you can just type "git" over here, and you will immediately see only the results that have the keyword git in them, whether plugins or steps. For example, this step has the word git in it but its plugin doesn't; the plugin shows anyway, so that you can at least see which plugin the step comes from. But it's not the other way around: if a plugin mentions the word git in its name, its steps won't be shown, only the plugin will be. For example, if I type "SCM step" over here, it will just show me the plugin and not the steps that come along with it. So this gives us very minimal data on every page, and it essentially makes it very easy for the user to filter out the data they want to see. That was the first filter improvement.

Moving on to the next one. We have all these pages in the before-and-after section, if anyone wants to see the previews: the "after" is basically the jenkins.io website, since all these changes are merged and live, and the "before" links are previews deployed on Netlify. The next change was separating the declarative steps from the main class. What happens is that the generation of declarative steps is separate from the generation of the scripted pipeline steps, and because of that, we felt it would be useful for developers to see these two as separate classes. So why not separate the declarative steps into their own class and then use it in our pipeline, instead of going through the main class? This makes it much easier to refine or restructure the way the declarative steps are generated. Currently, they appear only under the Declarative heading on this page, so if we want to remove them in the future, or give them a separate page of their own, it will be much easier for us. And the biggest benefit was that it made the main class of the step extractor much cleaner; that way, the main class keeps only the relevant functions that produce the bulk of the documentation. So those were some more iterative changes.

Then came the change to shift the parameter data types. What this is about: whenever you visit a parameter's documentation... for example, let me open the SCM step plugin, look at the checkout step and the SCM parameter within it, and open one of these dropdowns. As you can see, these types are presented on a new line: the branch parameter is there, and then you have its type. There is no help text over here, but still the data type sits on its own line. Another example: open the GitSCM class and look at the branches parameter. Here, for name, we cannot see the data type yet; if we expand it, you have to scroll down through the entire help text, and then you find the type sitting somewhere over here at the very end. But with this new change, you find the data types inline with the names of the parameters. For example, let me open ClearCase again, and as you can see, the difference is clear: the data type is present inline. And looking at the GitSCM name parameter, the String type is present right here, and you need not even expand it in the first place; you only expand it if you want to see the help text. So at all levels of hierarchy, those trailing types have been replaced by inline data types.
This ultimately reduces the height of a page, the scrolling height. If we expand many of these, you can see that the scroll bar size itself differs between the two pages: the scrolling length of the new page is much smaller, which in turn reduces the size of the AsciiDoc, and the loading speed and everything improves as a consequence. So that is all about the parameter data-type shifting.

Then came the last but not least task of coding phase one: releasing the pipeline steps doc generator's utils. What this provides is a localized plugin manager and a mock Jenkins instance that you can load in your program and use to query the plugin manager, that is, to query different plugins through it. The way we use it in the pipeline steps extractor is: we take the plugins, feed them into a reactor, and the reactor initializes them. Once that is done, you can find the components that have the type StepDescriptor inside a plugin, you get a list of those, and from that list you can generate the documentation you want. But we realized this can be used not only for extracting the step help but for other things as well. For example, if you want to use it in a REST API documentation project, you can take this artifact and use it the way you want: the plugin manager gets initialized, and then you can query it however you need. There are many options available; we currently use only three of them for the pipeline steps extractor, but there are many more, because the hyperlocal plugin manager itself inherits from the main plugin manager. You can find this artifact on the Jenkins Maven repository. It has been released, and we are able to use it in the pipeline steps extractor now, so we have removed the local dependencies and are using the published version. You can find it over here; I've linked it in my presentation.

Some things I did as part of this release process: first of all, completing the Java code of the hyperlocal plugin manager; then we moved it to Java 11 (it was on Java 8 earlier); and then I wrote some tests to check that the green path of the hyperlocal plugin manager works fine. For those, I initialize the plugin manager using the reactor that is used in the pipeline steps extractor, query the plugin manager in the tests, and assert on those cases to see whether everything works. And finally, we released this as a Maven artifact, which can be used as a dependency by simply copying this snippet and pasting it into your POM. (And we are running out of time, so you have one more minute to conclude. Sure, sorry.) We are using this in our pipeline steps documentation repository. So that was all about the changes I made up to the mid-term evaluation.

These are some tasks we have proposed for coding phase two; many of them came from the community. First of all, the task would be to label deprecated plugins on their respective steps pages, as Mark suggested; that would be the first thing we do. Then we want to separate the deprecated steps from the advanced steps: currently, among all the advanced steps, some are optional and some are deprecated, and it's really very hard to distinguish between them. So that's something we want to do.
Then comes a bigger change, in which we want to break down these pages with larger amounts of information into smaller ones, so that we can solve the ultimate issue that was proposed in the problem statement. And yes, any more changes about the layout, the UI, or the documentation are welcome; you can get in touch with us in the docs Gitter channel or at the European edition of the docs office hours, where you can find me. Any feedback is welcome. And these are some links; you can find them on my project pages. That was it from my side. Any questions? Please let me know.

Great. Well done, Vian. I'm sorry to have had to rush you at the end. Great, great work. To conclude, Q&A can go through Discourse or Gitter afterwards if there are any questions. I just want to share the upcoming Jenkins events: the Jenkins community, the Jenkins project, will participate in the SCaLE conference in Los Angeles at the end of this month. Mark Waite and Alyssa will be there, and probably Kohsuke, the original author of Jenkins, will attend too and be present at the booth. The big event that we're holding at the end of September is DevOps World; I'm not going to explain more about what it is, because we're completely running out of time. And be prepared for information, and a call for volunteers, for this year's edition of Hacktoberfest in the Jenkins community. So stay tuned for details. For people on the West Coast, we're looking forward to seeing you at the Jenkins booth at SCaLE, and stay tuned for information about the two other events. I think that's it, Alyssa; did I wrap it up? Thank you very much for everybody's attendance. The word is back to you, Alyssa.

Thank you to our GSoC contributors, and thank you to our mentors; we couldn't have done this program without all of you. So, well done. We're out of time, so we'll see you next time. Bye bye, everybody.