on the cloud. Okay, so good morning, good afternoon, good evening, I don't remember; it's midday somewhere around the Earth. Welcome, we're here together to discuss and give details about the plugin health score and about displaying the plugin health score. My name is Jean-Marc Meessen, I am an org admin for Google Summer of Code with Jenkins, and we also have Alyssa and Bruno who are helping me with that. The two key persons of this session are Adrien Lecharpentier, it's a French name, and Jake Leon, who is in the United States and is also a key actor in this project. Thank you very much for joining us in this discussion. We'll go through the two projects one after the other. I'll give the word to Adrien and Jake, I don't know how they have organized themselves. First a presentation of the projects so that we get introduced, and then we'll have a Q&A session. The maximum time we're going to spend on these two projects is one hour, so at the top of the hour, that will be half past ten p.m. in India, I will call it to a stop. Each project gets maximum half an hour, and I'll start giving signs when we're getting out of bounds. That was the introduction, and I'll now give the word to Adrien.

Okay, so thanks everyone for joining. It's nice to see so many people interested in those project ideas. Let's start with the probe addition idea. If you are not familiar with the plugin health scoring project, the idea behind it is to provide key information to plugin maintainers, but also to plugin users of the Jenkins ecosystem, about the health state of each and every plugin that is available in the update center.
The idea is to find key details that we can measure and assess to determine whether a plugin might need some attention, is in good shape overall, or is in perfect shape all around. That's the idea of the project, and around it we have two ideas for GSoC. One is about adding new probes, because the project is built on two key features: probes and scoring implementations. The probes are there to gather data about a plugin. They are not there to make any judgment. I had that question quite a lot on Gitter, and I thought it was clear in the documentation, but my bad, it's not. So the probes are there to fetch data on key details about plugins. For example, we currently have a probe to gather the number of installations of a specific plugin. We have, thank you, Jake, a probe for JEP-229, which checks the usage of said JEP, that is, continuous delivery. We have Dependabot, and so on. A JEP is, like in Java, an enhancement proposal: in Jenkins it's a Jenkins Enhancement Proposal. Yeah, I think that's what it's called. So we have here the list of currently implemented probes, and the idea is to add new ones, because we want more details about plugins, to make sure the score we compute for each plugin is as close to reality as we can get. So we need to get more data about each plugin. And once we have the data, we have the scoring processes, no, the link, yeah, that one. The scoring processes are in fact where we make a judgment call on each plugin. We use the probe results that are generated by the application of a probe to a plugin, and a scoring process can use one or multiple probe results to generate a score for a plugin. We then take an average of all those scores, and that makes the overall score of a specific plugin.
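To make the probe/scoring split just described concrete, here is a minimal, hypothetical sketch in Java, the project's language. The type and method names are illustrative only and do not match the real plugin-health-scoring API: a probe only records a fact, and a scoring implementation turns probe results into a judgment.

```java
import java.util.Map;

// Hypothetical sketch of the probe/scoring split; names are illustrative
// and do not match the real plugin-health-scoring API.
public class ProbeVsScoring {
    // A probe result is just a recorded fact about a plugin, with no judgment.
    record ProbeResult(String probeKey, String value) {}

    // A scoring implementation makes the judgment call from probe results.
    // Here: a Boolean-style deprecation score, 1 if not deprecated, else 0.
    static float deprecationScore(Map<String, ProbeResult> results) {
        ProbeResult result = results.get("deprecation");
        return result != null && "not-deprecated".equals(result.value()) ? 1f : 0f;
    }
}
```

So a probe like the installation-count probe would only store the number; deciding whether that number is good or bad is left entirely to a scoring implementation.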
So for example, on the Git plugin page we can see the score is 96 out of 100, because the plugin only has 0.75 on the repository configuration. If we scroll down, we can see the probe results for each of the probes that are implemented and have run on the plugin: the number of pull requests, the Jenkins versions, whether or not the plugin is deprecated, sorry for that, the last commit date, JEP-229, and so on. So the main idea of this first project for Google Summer of Code is to add more probes, and to use those probe results inside scoring processes. The new probes can contribute to an existing scoring implementation, one of those four, for example, or we can create a new scoring implementation that uses the new probes to illustrate a specific aspect of a plugin. For example, the deprecation scoring implementation only uses the deprecation probe result, because that makes sense; it's a one-to-one relationship. But the repository configuration scoring uses four, and soon enough five, probe results, because in the repository configuration we want to assess the presence of a Jenkinsfile, the usage of Dependabot, whether Dependabot is used correctly, and whether JEP-229 was configured on the repository. Soon we will also have code coverage, and we will also add the presence of a contributing guide to that scoring implementation, because that's part of the repository configuration. So this facet is mostly about adding new probes, but it's also about including those probe results into scoring implementations, whether a dedicated new one or an existing one, and explaining why it makes sense to join one or the other. What else can I say about that idea, Jake? I think you covered it pretty well.
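The 96-out-of-100 figure comes from averaging the per-implementation values with weights, the coefficients shown on the site. Here is a hedged sketch of that arithmetic; the values and coefficients below are made-up example numbers, not the project's actual ones.

```java
// Hedged sketch of the weighted average behind the overall score.
// Each scoring implementation yields a value in [0, 1]; its coefficient
// says how much it weighs in the final score. Numbers are examples only.
public class WeightedScore {
    static int overall(float[] values, int[] coefficients) {
        float weighted = 0;
        int totalWeight = 0;
        for (int i = 0; i < values.length; i++) {
            weighted += values[i] * coefficients[i];
            totalWeight += coefficients[i];
        }
        return Math.round(weighted * 100 / totalWeight); // score out of 100
    }

    public static void main(String[] args) {
        // e.g. security = 1, deprecation = 1, adoption = 1, repository = 0.75
        float[] values = {1f, 1f, 1f, 0.75f};
        int[] coefficients = {100, 100, 80, 100};
        System.out.println(overall(values, coefficients)); // prints 93
    }
}
```

With these made-up weights, a 0.75 on one implementation pulls the total down to 93; the coefficients actually used on the site are what produce the 96 seen for the Git plugin.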
I'll ask: does anyone have questions about what Adrien just went through, either about the history of the project or what's expected of this project?

I have a question. Yes. Is the value, I mean the plugin health value, going to be reflected in the update center, or are we going to create a separate system which will be pulled by the plugin site? Okay, so that's more for the second idea. Oh, nice. So let's table that question for the second part of the meeting, so we can have a specific discussion about how we display the scoring. But yes, there is a plan to have different ways to display the scores, the numeric values of the score for each plugin.

And I have another question. Can we navigate back to the website? I see a lot of tabs here. Jake is trying to break the world record for number of open tabs. Yeah, so what is the difference between value and coefficient here? The coefficient is how important the scoring implementation is for the overall score of a plugin. We can see, for example, that having a security issue is pretty important for the score of a plugin, which makes sense, and that having a good repository configuration is as important as a security issue. As for the value: for the security implementation, the value is mostly a Boolean. Either you have security issues or you don't, so you get a zero if you have a security issue, or a one out of one if you don't. For the repository configuration, it depends on how many of the different aspects we want to see inside the repository are actually configured for the plugin. For adoption: is the plugin marked for adoption, yes or no.
And then we evaluate the time between the last commit on the repository and the last release of the plugin, because you can have plugins with many commits that are never released, and we want to assess that as well. So adoption is like the repository configuration: the value of the score depends on different elements. For security and deprecation, it's Boolean, so it's zero or one. Another way to think of the coefficient is as the weight, right? The weight of the importance that we put on it.

So can I ask one more question? Yeah, I have one more question, then we have Udit who raised his hand. Go ahead. On the security aspect of a plugin, correct me if I'm wrong, but security issues are not disclosed publicly, no? So how do we, I mean... That's a good question. We are only assessing disclosed security issues. A plugin could have an actual security issue that only the security board, the reporter, and the plugin maintainers know about, but the project is not assessing those issues. We are assessing security issues that are public: there's a security advisory that was published, listing a security issue on one specific plugin, and that security issue is not resolved.

To shed some light there for people who are not used to that process: normally, security issues are not disclosed, to give the owner time to fix them, and we give them an appropriate time. If we have no response or nobody is dealing with it after a certain time, we have to publish the issue. We don't explain how and where; we just say there is a security issue logged. That's what we call a security advisory. Exactly. Is that okay? Thank you. Yeah.

Okay, so Udit, you have a question. Yeah, about this project: is this going to be another plugin which a user can install in Jenkins to check plugins' health scores?
Or is it going to be a core feature through which the end user automatically knows what the score is? You got my question, right? I'm not sure. Do you mean: is the plugin health scoring project part of Jenkins core, or is it separate? Yeah. Okay. So no, the project lives on its own; it's separate from Jenkins. We have plans for different ways to display the score to users and to plugin maintainers, either on the plugin site, which is the second idea for this year's GSoC, or elsewhere. We also have plans to display those scores inside the plugin manager. On the Jenkins instance, when you want to install a new plugin, that is the plugin manager, where you have the list of plugins you have installed or that are available in the update center, and we have plans to add the score value there as well. But the project itself is not a plugin for Jenkins. It has a separate life from Jenkins; it's just contributing to the health of the Jenkins ecosystem.

Okay, got it. Another question about it: is the foundation for this feature done, or are we going to make it from scratch? No, it is done. Yeah. We already have a repository of code that was started last year with Dheeraj, as a GSoC project as well. On the project page, oh, we don't have the link to GitHub, but we have the link to the issue tracker. No, not that one. If you scroll just a little bit, yeah, this one. And here you have the code of the project. Okay, got it. Plugin health scoring. Maybe we should update the project page. Yeah.

Any other questions on this project? I have another. Go ahead. So the weights of every probe, it might matter; it differs from person to person how much weight someone wants to give to one probe versus another. So how are we deciding for them? What's the decision process behind the scoring, the weights and so on? Well, we did conduct a survey.
And I guess what we tried to do is capture what the community thought was important for a plugin. With some feedback from the surveys we conducted and other things, we thought this was a proper distribution, at least to start with. As time goes on and best practices change, things will change; this is a living thing, and as best practices change, we'll change the values and the weights of certain things. But a lot of what we have right now makes perfect sense, at least to me, and I think it should make sense to others. If a plugin is deprecated, obviously that weight should be rather high. Same with adoption: if nothing's getting released and nobody's there to do the work, that's obviously a bigger problem as well. Security is self-explanatory. And you can see why the repository configuration might be weighted a little lower, just because, at the end of the day, these are what the Jenkins community believes are best practices, but they are sort of nice-to-haves, if you will. I don't know if that's the proper way to put it, but that's how I view it. Adrien, do you want to shed any more light on that?

No, I totally agree with what you said. The repository configuration is mostly good things to have, but the most important thing is a plugin that is moving forward, that has a maintainer, and that has no security issue. In the end, not having a Jenkinsfile, for example, is not great, but it's not a problem for the plugin itself. It would be better with a Jenkinsfile, that's for sure, but to be honest, as soon as a plugin has an active maintainer, a Jenkinsfile tends to follow, I can almost guarantee that. Yeah, it might help.

I mean, just to speak from the user or product perspective about the goals we wanted to accomplish with this project:
Obviously, the overarching goal is that we want users to get better plugins. We want users of Jenkins to have quality plugins, and we want to help them make decisions when picking a plugin to use, because currently, without these scores, it takes a good hour out of your day if you get asked to install a plugin, and real effort to decide whether that plugin is healthy or not. What we've tried to do is encapsulate that thought process and express it programmatically. So we want users to have better plugins, and we want them to be able to decide quicker and easier which plugins to install. You go and look for Kubernetes plugins, there are seven of them: which one do I use, which one is good? Obviously, this doesn't answer anything functionality-related, whether it's functionally going to do what you want, so there's still a little digging. But then, of course, the second part is that we want to help maintainers make better plugins. A lot of the time, whether you just adopted a plugin or just became a new maintainer, this can serve as guidance: okay, what can I do to make this plugin better? I maintain this plugin and I just got a 65 out of 100; what can I do to improve it so more people use it, and all that good stuff? So the two main beneficiaries of this are definitely going to be the end users and the maintainers of plugins.

I'd like to ask a simple question, rephrasing or trying to reformulate. Did I understand correctly that we currently have a framework to collect the data, which is what you've shown here, and that this year's project is to add additional probes, additional measurements? Is that correct? Yes, correct. And are there ideas of probes that we could add, or are people free to propose new probes? Can you elaborate a little bit? So yes, there are ideas of probes, but we also welcome any ideas that come with the proposal for the GSoC project.
It's not mandatory to have your own idea, but it's a good thing, a welcome thing, to see probes that we might not have thought of, or to exercise your critical thinking against the project itself and say: this is already implemented, but I don't think it's implemented the correct way. Okay, then explain why in the proposal and we can have a discussion, that's for sure. If you have other ideas of probes, things that seem interesting to you, that's also welcome. But we already have some ideas listed on the page.

Yes, Udit. I just would like to add a little detail here. I strongly recommend that people interested in this watch very carefully the series of videos that were made about modernizing a plugin. I think it's referenced somewhere here, but it's a good reference, so you know what the things to look for are. Anyway, Udit, you can go ahead. Thanks, Jean-Marc.

Hi. Can you explain the part about the probes? What are probes, basically? Does a probe specifically target one aspect, like the deprecation or the security issues, or does a single probe check every feature or part of the plugin and give out the score? How does a probe work? Can you give me a bit of an idea about it in words?

Yeah, let's take an example. Let me share my screen; I can go to GitHub and explain one probe if that can be useful. Let me see, Firefox, let's share this one. All right, so normally you can see an empty Firefox, right? Yes. Okay, perfect. So let's go to the GitHub page and go to the probes. If I understood your question correctly, you're asking: is each probe responsible for checking one detail of a plugin? Well, we don't have one probe that checks Jenkinsfile presence, Dependabot configuration, and so on and so on; we would end up with a 1,000-line or 2,000-line or 10,000-line class.
What we have here is an engine, the probe engine, which is already coded and which uses all the probes that are implemented and found in the project; they are then executed in a given order. The order is basically that annotation, and we can order them thanks to it: not scheduling in time like a timer, but ordering how they execute one after the other. And the probe code is really short, most of the time. Here, for example, for JEP-229, we are checking the presence of a specific file. How do we know which file to check? You read the JEP-229 documentation and you see that inside the .github/workflows folder there should be a CD workflow file for Jenkins. So basically, we have a repository of code, we look inside the repository for a .github/workflows folder, and inside that folder we check whether or not we have the specific file. That's how it works for that one.

If we look at another one, for example the known security vulnerability probe, it's a bit more complex. But even though it's a bit more complex, it's still short, because the code of the probe is only those lines. The others are just decoration: they decorate the probe to say whether the probe should be executed each time the probe engine runs, to describe to the user interface what the probe is doing, and to say how the probe result is represented inside the database. When we start the execution of the probe engine, we fetch the update center that is published at that time, and in that update center we have a list of warnings. The list of warnings is basically just a map of plugin names and the reason why there's a security advisory on them.
If I recall correctly, the content of the map has the plugin name as the key, and the value is an object that represents, for a certain version range of a plugin, a security warning. And what we do here is basically say: okay, let's look at the security warnings we have and figure out whether a security warning applies to the current version of the plugin. If it doesn't apply to the current version, meaning the latest version available in the update center, it means the plugin doesn't have an active security warning, so there's no security issue on that plugin. That doesn't mean there never was one; it just means the latest version of the plugin doesn't have any. In that case, if we don't find any issue, it means the plugin is fine, it doesn't currently have any issue. And if we do find security warnings for the plugin, it's a failure, because the plugin has a security issue.

Okay, got it. Thanks, Adrien. So each and every probe does its own job, and at the end all of them together contribute to a score, right? Yeah. The probe engine runs all the probes: we go through every plugin, and on every plugin we run all the probes, in the specific order. And we persist the result of each probe execution in the database, inside a column called detail, which is a JSON object.

Adrien? Yes, Jean-Marc. Adrien, Udit, we're going to pause the questions on this topic here. Yes. To have enough time to discuss the user representation of that data; and with the time left at the end, we can cover questions on the two projects. Is that okay for you, Adrien? Yeah, sure. Okay, so who's going to... Jake, do you want to explain that one? Yeah, of course.
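Before moving on, the pieces Adrien walked through can be tied together in one self-contained, hypothetical sketch: small probes that each check one detail of a plugin, and an engine that runs every probe on every plugin in order, collecting results per plugin. All names and types are illustrative and not the project's actual API; in particular, the real security data is a richer version-range structure than the simple set of affected versions used here, and the real engine persists results as a JSON "detail" column rather than keeping them in memory.

```java
import java.nio.file.*;
import java.util.*;

// Hypothetical sketch of the probe engine pattern described above.
// Names and types are illustrative, not the real plugin-health-scoring API.
public class ProbeEngineSketch {
    record Plugin(String name, String version, Path repository) {}

    interface Probe {
        String key();
        boolean apply(Plugin plugin);
    }

    // Like the JEP-229 probe: is a specific workflow file present in the repo?
    static Probe workflowProbe() {
        return new Probe() {
            public String key() { return "jep-229"; }
            public boolean apply(Plugin p) {
                return Files.exists(p.repository().resolve(".github/workflows/cd.yaml"));
            }
        };
    }

    // Like the known-security-vulnerability probe: success means no published
    // warning applies to the latest version (affected versions simplified to a set).
    static Probe securityProbe(Map<String, Set<String>> warningsByPlugin) {
        return new Probe() {
            public String key() { return "security"; }
            public boolean apply(Plugin p) {
                return !warningsByPlugin.getOrDefault(p.name(), Set.of())
                                        .contains(p.version());
            }
        };
    }

    // The engine loop: every plugin, then every probe in its declared order.
    static Map<String, Map<String, Boolean>> run(List<Plugin> plugins, List<Probe> probes) {
        Map<String, Map<String, Boolean>> details = new LinkedHashMap<>();
        for (Plugin plugin : plugins) {
            Map<String, Boolean> results = new LinkedHashMap<>();
            for (Probe probe : probes) {
                results.put(probe.key(), probe.apply(plugin));
            }
            details.put(plugin.name(), results);
        }
        return details;
    }
}
```

The design point from the discussion is visible here: each probe stays tiny because it checks exactly one detail, and the engine, not the probes, owns the iteration order and the persistence of results.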
I think this one is a little less complicated, I hope. As you know, we have the plugin-health site to check individual plugins and things like that. However, we want to make these scores more apparent, more digestible for end users and maintainers alike. So the idea is to display the scores on plugins.jenkins.io. Currently, if you go there and check out a plugin, you notice there's quite a bit of unused space on the plugin cards, and we think, with the wireframes we'll go through in a second, this would be a great place to display the health scores of those plugins. That's what you see in this wireframe here. It comes with filtering based on those scores, and of course some explainers as to how the scores are produced and what goes into them, the different probes and all that. The second piece, beyond the composite score on the card itself when you first get to the site, is an explainer within each plugin. So essentially taking the results that you saw previously on plugin-health, you know, it has 16 pull requests, the Jenkinsfile is found, all that good stuff, the details that help people understand the importance of the score, why it is a valuable score, and what goes into it, and providing that on a plugin's details page. You can see in this wireframe that it lists out essentially what we have on the plugin-health page: the details of the individual probes. And I think that is the scope of this project. There are other things we would like to do, but for the scope of this project we'll leave it at displaying the scores on plugins.jenkins.io. So essentially, the more I think of it, converting the plugin-health site we have right now into viewable information on plugins.jenkins.io. Did I miss anything, Adrien? No.
I would just add that the contribution for that idea would be partly on the plugin-health-scoring repository, but also on one or two other repositories on GitHub, the plugin-site and plugin-site-api. The plugin-site-api project generates the data for plugins.jenkins.io, and the plugin-site repository is where the UI of plugins.jenkins.io is made. So in that project idea, the contribution will be in potentially three different places. What are the technologies? The plugin-site-api and plugin-health-scoring are in Java, and plugin-site is in ReactJS.

Okay. Questions on this presentation, and eventually on the other project that was discussed? We have 20 minutes for questions.

Can we come back to my question about the update center? What is the idea behind that? So the data of plugins.jenkins.io comes from the plugin-site-api, not directly from the update center; the content originates in the update center, but goes through the API. So the UI can either fetch the data on its own from the plugin-health-scoring project, because we have an API there, or we can create new APIs to be able to see the score or a score representation of a plugin. Or we can say, no, that's the job of the plugin-site-api, and then the plugin-site-api will have to communicate with plugin-health-scoring before it can serve the data to the UI part of the project. Is that answering your question? Yeah, that is answering my question.

So I have another one, which is linked to this. I explored the plugin-site repository and I also contributed to it, and I found that the plugins are coming from Algolia, which is the search engine for that site. So what I'm guessing is happening is that the plugin-site-api is feeding Algolia, and the data is coming from there. Am I right, or am I not? I'm sorry, I didn't fully understand what you said.
Could you repeat, please? I'm sorry. Yeah, in the plugin-site there is a search bar which is powered by Algolia, and the data comes from Algolia. Yeah, that's right, search by Algolia. So it's not coming directly from the plugin-site-api; is Algolia a medium between the two? No, no, here you are talking about the search engine. Yeah, the search engine. But you won't have to change the search engine. Can you click on any plugin, Jake? What we want here is to change that web page to have the score displayed on it. You won't touch the search engine. Just adding a tab to this page, correct? Yes, correct. Adding a tab to this page, but also essentially replacing the bottom section, the shortened list of contributors and this little periodic-table square. This will also show the composite score, the total out-of-a-hundred score the plugin got, and then here we'll have a tab for the health score that goes into the details of why it got that score. Okay. So Algolia is only there to filter the screen, correct? Okay.

Other questions? No questions? Hello. Yes. Hi. What is the difficulty of this project? I guess this should be beginner level, right? Or what do I need to learn? According to me, this project has beginner-level difficulty. Am I correct, or do I need to learn most of, like you said, the data analysis and data representation skills? You're getting me now? Okay. So if I understood correctly, you're asking whether the data representation is up to you, and whether in that case it's your responsibility to know how to represent the data. Is that what you mean? Yeah, kind of. And the other part I was asking: I don't have much knowledge about it right now, but I think the difficulty for this project might be beginner level, or is it intermediate? What do you think?
According to the required skills, the knowledge written on the site, Java is fine for me; I know core Java, I've worked with it professionally. Second part: can you open the screen where that is written? Which screen? On the project page. On the project community part? No, no, the project idea. Project idea, okay. So a lot of work has already been done; the framework is already there. Thanks. Skills to study and improve: data analysis, I don't know, this might be a sticking point. Can you explain what part of data analysis we are expecting here? Is it just getting data from the repositories and showing it over here, doing all that processing and showing the plugin health score? Is that the only data analysis part, or is there something else I'm missing?

So on this project idea, you don't have to compute the score of the plugins; fetching details on each plugin is already done. The data analysis part here is about where you find the scores to display them on plugins.jenkins.io: how to get them, how to transport them to the UI of plugins.jenkins.io. I think that's what we meant by that. Okay, then.

I want to add something here. I think, scroll up, more, yeah, here. In this filter part, I think we have to add one more sort option, so I think that's where the data analysis part is, according to me. To be able to filter the results. Yeah, filtering the results, that's a good point. Aren't we going to use the existing filtering technique? Because we're going to filter on the scores, right? I want to see the best plugins, or whatever we decide; I don't know if we're going to give it a letter grade or just a number. And since the data is coming from a different source, there will be some changes in the implementation as well, that's what I think.
On the API part, whether it's inside plugin-site-api or not, I wouldn't call fetching the data "data analysis". But once you have the data in the UI, then there is some implementation work around it. If you scroll just a little bit, Jake: when you go to a specific plugin and to the health of that specific plugin, some data are numeric values, others are dates, others are just Booleans. So you won't have the same data representation for each one of them. Well, the term data analysis can be understood in different ways depending on the world you are in. So there's no big data or... No, no, sorry, we don't have the buzzword big data here. We are just a bit over 2,000 plugins; we are far from having something we could call big data. Does that answer your question, Harsh? I think it was Harsh who asked the question. Yeah, it's all clear.

Any other questions? Other points that aren't clear until now? Dheeraj, do you have any words of wisdom or any additions to this conversation, since you joined this call? If you're not familiar, Dheeraj was the contributor last summer who worked with Adrien and me to build the architecture and the foundation for this project. And he's still willing to work on the project, so that's a good sign. That's a good sign, yes. Definitely a good sign, because the project is very interesting. Since we started this last year, the idea has been extremely creative and very useful as well. The description of whom we are going to be helping is what caught my attention last year, and that's how I got interested. During my GSoC period, Adrien and Jake were extremely kind to mentor me, and that's how we set the foundation of this project and started coding. That was the most challenging part, to be honest.
And the most interesting part as well, because it helped me learn more about Java and about system design, and those two things are extremely useful to me even now. After the project, I still contributed, I think, one or two pull requests; I'm also doing a full-time job and trying to balance things. But I'm extremely interested in being part of this project now and in the future as well, because, as I said when I started speaking, this project is extremely interesting and it has a very bright future. That is why I came back this year: to help Adrien and Jake, and to help with any questions that potential contributors might have, or anything else I can help with. So if you have any questions as potential contributors, feel free to reach out to us on the chat channel. We would be very, very happy to help.

I want to add just a little thing, a sales pitch for these two projects. In my previous life, I managed a big Jenkins system and also helped customers manage Jenkins. It would have saved quite a lot of energy to have a simple dashboard when you, as a system administrator, have to make the decision: can I use this plugin? Plugins are the power of Jenkins; they really make Jenkins. Jenkins is a butler that serves artifacts and features, and those features are provided by plugins. Plugins have sometimes been written fire-and-forget: I wrote this in a weekend, there it is, and then I moved on to something else. Can you put all your production on such a plugin? What is its quality? This is where a lot of effort is required to make these decisions, and it's scary. I've been talking to admins; I had that question too: can I trust this one? And this is a very useful, important tool to help these administrators.
And also, as Jake explained, it's a very good guideline for people maintaining plugins, to know where they need to do the housekeeping to keep things up to date. This is really why all the projects of these years are very useful. I can already tell you, I'm already using these plugin health scores in my day-to-day job. It's already useful, but we need to add more substance to it; it's only starting. It's already useful, but without the user interface you really need to be a specialist to start using it.

And you can imagine, with something like this — I mean, first, we want more probes, right? As Jean-Marc said, to beef it up, because the last thing we want is no faith, or a loss of faith, in these numbers. I think having a wide array of data points feeds not only its power but its credibility as well. We want people to trust these numbers and put some weight behind them, so that they actually have some value. That's an important part of this. I'll leave a few moments in case somebody wants to jump in with a question.

I have another question. What we are seeing is that you are searching the GitHub repositories of those plugins to get the data out of them. Can these plugins also be located on GitLab or other SCMs?

No, they are not. There's a rule in the Jenkins ecosystem saying that for a plugin to be promoted and published by the community, it needs to be inside the jenkinsci organization on GitHub. That is not true for all plugins: from times when that rule wasn't applied or wasn't followed very carefully, some plugins are located outside the jenkinsci organization on GitHub. But overall, all plugins are on GitHub, and normally inside the organization. And just to shed some light on what you just said: we are not only looking at the source code, so not all the probes are about the repository.
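The hosting rule just mentioned — community plugins live inside the jenkinsci organization on GitHub — is exactly the kind of simple check a probe could perform on a plugin's SCM URL. A minimal sketch, assuming a hypothetical `ScmOrganizationProbe` class (not the project's real probe API):

```java
// Illustrative sketch only: a probe-like check that a plugin's SCM URL
// points inside the jenkinsci organization on GitHub, following the
// community hosting rule described above.
public class ScmOrganizationProbe {
    // Returns true when the repository lives under github.com/jenkinsci.
    static boolean isInJenkinsciOrg(String scmUrl) {
        return scmUrl != null
            && scmUrl.toLowerCase().startsWith("https://github.com/jenkinsci/");
    }
}
```

A plugin hosted on GitLab, or on GitHub outside the organization, would fail this check and could lower the plugin's score accordingly.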
Some probes, which are not implemented yet but are among the ideas, would list the number of tickets or issues that are open for a plugin. Those are potentially in GitHub, but also potentially in Jira, in the issues at issues.jenkins.io. So we are not looking only at the repository; we are not looking only at the code. We are looking at a plugin overall. Even the number of installations is not really about the code per se; we are asking: is it used?

Yeah. I like to think of it in two buckets. Things really fall into the ones on the code and repository side, and then the ones more on the maintainer side, or the activity side, related to the people involved. Those are the two kinds of buckets we're looking at. And getting data from more places will only be helpful.

Okay, Jake, I'm going to interrupt you, sorry for that. We have two minutes left and I would just like to conclude. First, for any questions that are left — and there are probably a heap of them — use the Gitter channel or community.jenkins.io to discuss with the people there. The next important thing you need to do now is really understand what the project is about, and we expect you to produce a proposal. Checking the date here: we're talking about the beginning of April, so you have the complete month of March to work on your proposal, your application. This document is aimed at describing what you can do, who you are, and why you are the right candidate to work on that project idea. You need to show that you understood the problem, and that you come with experience or with novel ideas and have a good grip on it. What we will be looking at during the month of April is the likelihood that contributors will complete the project in the assigned time and eventually achieve the stretch goals — so it's not limited to just the initial scope; the scope can change.
We also want to see that you're the right person, that you will achieve the goals, and that you will be able to work together with the mentors to make it a successful summer. It is very important — experience has shown that the sooner you start working on that document and making it available for review, the better. Once you've submitted the document, the mentors are going to review all the documents; I will read all of them. If at that moment I don't understand what a contributor wanted to share, or they're not making their point correctly, or they could have presented it another way, then it's too late and you've missed the opportunity. So make your document available early. The community is there to help you, not to judge your work. I want everybody to learn something from this experience and to grow.

We're one minute over time. I'm sorry, especially for the people in Asia — it's very late, and I apologize for that. We are on the hour on my side, so I'm going to conclude here, but it has been a very interesting and fruitful meeting. Thank you very much, everybody, for attending and for the very good questions, and I'd like to thank Adrien, Jake, and Dheeraj for their explanations and for being there to support. If necessary, at the project level, you can organize other online meetings; I will not always organize them. We had this startup meeting here, so feel free to request and organize more. Don't forget: everything is public, no one-to-one communication. We want to be fair with everybody; it's a competition. Okay, we're good. Thank you everyone, have a good evening or a good rest of the day, and see you.

Thank you. Looking forward to reading your proposals. Bye.

Bye everyone, thank you.