Okay, good morning, afternoon, evening, everybody. Thank you for joining. This is the GSoC workshop for the plugin health score project idea and clarification questions. On the call currently we have Adrian and Jake, who are mentors for the project, and Dheeraj, who is one of the contributors interested in that project. And I myself, Jean-Marc Mason, am org admin and mentor for another project. So without further ado, I propose that we start. Dheeraj has new questions. How do we start? I leave the word to Jake, Adrian and Dheeraj: how do we proceed? I think we can go ahead, and Dheeraj, I don't find the new questions on the initial document. Maybe I'm not looking at the correct one. But I guess we can go ahead and let Dheeraj ask the questions. I can share the document, maybe. I think I have the same. Yeah, I'm looking at the same one, but maybe mine needed a refresh, I don't know why. That's fine. Sorry, everyone, for that. Okay, I'll stop sharing, then. Do you want me to share the document? Yeah, that might be good, just so we can all look at it. I'll be note taker; I'll take Mark's place today and be note taker. Okay, so I'll share, then. Would you like me to share the screen? You want to share the screen, Dheeraj? Yes. Okay, go ahead. Is it working? Yes, it works. So I guess for the first question, the idea, like we said last week, is that the rules would have weights, which express how important each rule is at the time. I guess we want the opinion to be...
To get opinions from everyone in the community. So that would be through the developer mailing list, I would say, to get feedback from all active maintainers, or all maintainers at all. Is there a better way to get that? I don't want to say we have a PR to review and things like that, because it can be more difficult for people to understand and make a point in a pull request. I guess emails would be better, but that's only my opinion. Did you have something in mind, Dheeraj? Yes, similar to this, we're trying to understand the process of getting feedback or responses from the community. As we discussed last time, we would be getting their opinions on how much weight we should put on a particular parameter. So maybe another way of getting that info from them would be to make a Google Form and give it to them. In the form, they would be asked questions like: how important do you think the presence of a Jenkinsfile in a repository is? Mark it on a scale of one to five: least important, slightly important, very important, and so on, plus a neutral option. All the responses would go to a spreadsheet, like Excel, and from there we can process them. Yeah, I do like the idea. We would need to have that in a timely manner, so there isn't too long between asking the question and getting the results. But that's a very good idea, a kind of feedback that would be better than emails, I guess. We could always publish the question on an email thread and publish the responses on the same thread, but collect them in a form. That's a very good point. One side note: I don't think we should have a neutral option. I do like the one to five, or zero to five, or something like that, but I don't feel like having a neutral answer is a good idea, because we want people to take a side.
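The form-to-weights idea above could work roughly like this minimal sketch. The rule names, the 1-to-5 scale, the example votes, and the simple averaging scheme are all illustrative assumptions, not the project's actual design:

```python
from statistics import mean

def weights_from_survey(responses):
    """responses: {rule_name: [votes on a 1-to-5 importance scale]}.
    Returns rule weights normalized so they sum to 1.0."""
    averages = {rule: mean(votes) for rule, votes in responses.items()}
    total = sum(averages.values())
    return {rule: avg / total for rule, avg in averages.items()}

# Illustrative responses; the real rules and votes would come from the form.
survey = {
    "jenkinsfile-present": [5, 4, 5, 4],
    "dependabot-enabled": [3, 2, 4, 3],
    "up-for-adoption": [2, 1, 2, 3],
}
weights = weights_from_survey(survey)
```

The normalization step is just one way to turn raw averages into relative importance; any monotonic mapping from average vote to weight would fit the discussion here.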
And if we offer a neutral answer, someone might think, "I don't know, so I'll pick neutral," and then we will have all or mostly neutral answers. But we want people to take a side. Also, we have to remember that this will only be for the first set of weights, because in the long term, as we said last week, the weights of the rules will evolve over time. For example, at the start, having a Jenkinsfile is very important. I already saw that, out of roughly 1,800 plugins, only 600 of them don't have a Jenkinsfile. But at some point, more plugins will have them, and we will be able to say it's now more important than ever, or maybe less important than in the past. The weights would also evolve with time because we could say that now having Java 11 support is important, but later having Java 17 support is important. And we could tie that to when we want the entire project to run only on Java 11, or only Java 17, or any other version. So we could say that, for now, it's a good thing to have Java 17 support, and maybe in a year or 18 months we will say that Java 17 support is, not mandatory, but very important, because we are about to move the whole project to Java 17. So I think we need to build in the notion that the weights of the rules will evolve with time. But I really do like your idea of having a form and a survey and building the weights based on that. It's an easy way to log the data, essentially. My one concern with the survey is getting people to take it. I guess it would be the same as responding to an email, but in my experience, that's always the tricky part of surveys: getting people to actually take them. Yeah, I agree. I think what we could do is try to keep the survey small, not to have like 20 rules to weight at a time.
But if we can keep the survey small and quick to answer, maybe we can get more feedback. If I may suggest two things: we can set the system in place and do a quick run of it, but I wouldn't like the contributor to be blocked in development by that. So it's a task we can do on the product itself, and it's important to have a general point of view from the community, and we can put something in place to do that regularly. But during the summer, we need to move ahead. Yeah, I agree. And if we don't get any answers during this Summer of Code, we can go ahead with all the rules having the same weight by default, something like one each, and if we get some feedback at some point, then we can readjust them. And if we do things correctly, once we change the weight of a rule, we would automatically trigger a new score computation with the new weighting. That should be simple enough, if we do things correctly. For sure, if we don't have any answers, we can go with all the rules having the same importance at the beginning. We will get some strange results, I'm sure of that. But it will also be a good way to show that answering the survey is important, and to check rules that we think are simple. Like, I'm sure that everyone here thought that more than 80% of the plugins have a Jenkinsfile. I can assure you it's not so; I was surprised. So that will show us things, and it's an easy win for any plugin maintainer to raise the score of a plugin: just add the Jenkinsfile. It's pretty easy. The contribution guide that Mark and you, Jean-Marc, built shows the way to add it. Yeah, it's really easy and well documented.
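A minimal sketch of that default-weights scheme, assuming a rule simply passes or fails, every weight defaults to 1, and re-weighting just means re-running the computation (rule names and numbers are made up):

```python
def compute_score(rule_results, weights=None):
    """rule_results: {rule_name: bool (passed or not)}.
    weights: {rule_name: float}; defaults to 1.0 for every rule, so all
    rules count equally until survey feedback says otherwise.
    Returns a 0-100 score."""
    weights = weights or {rule: 1.0 for rule in rule_results}
    total = sum(weights[r] for r in rule_results)
    earned = sum(weights[r] for r, passed in rule_results.items() if passed)
    return round(100 * earned / total)

results = {"jenkinsfile-present": False,
           "dependabot-enabled": True,
           "documentation-migrated": True}

equal = compute_score(results)  # equal weights: 2 of 3 rules pass
# Re-weighting is just a second call with the new weights:
reweighted = compute_score(results, {"jenkinsfile-present": 3.0,
                                     "dependabot-enabled": 1.0,
                                     "documentation-migrated": 1.0})
```

Because the computation is a pure function of results and weights, triggering a fresh scoring run after a weight change is just calling it again, which matches the "automatic recomputation" point above.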
So anyone can do that, and it's an easy win for anyone, for any new contributor. For sure. Yes. Did I answer your question? Because I think we went all over the place. You definitely answered my question, and I got what to expect and what to do if something unexpected happens, like not getting any responses. So my next question would be on the frequency of these weight changes. As you said, we can change these weights at the moment when we feel that some Java version or similar should be preferred. So that frequency would not be too high, right? It would be similar to an election, happening at a specified interval. If you make it too frequent, it would look like we are letting people customize it as they want, and that's not fair, right? I would personally see it every six months or every year. So I didn't really think about the frequency of the scoring and the weight adaptation. I don't think that during this summer we will change the weights a lot, not every day, but I guess at least once a week, because either we get feedback or not, and stuff like that. I don't want to say that we start the Summer of Code with a set of rules weighted at specific values and finish the Summer of Code with the same rules and weights. But in the long term, I guess six months is too long, I would say. I would say every LTS, so every 12 weeks, we can review the weights. It doesn't mean that the weights of the rules would change, just that we would reassess them. That's a good idea. Because within an LTS period, we have seen in the past that we were introducing a few changes in core. I'm thinking about Jarai Devon, I'm thinking about Guava and things like that. Those can also be the source of new rules, let's say, and we can plan from that.
And again, my idea of 12 weeks is just to review them; it doesn't mean that they would change. And for sure, we don't want them to change every day, because we want some stability in the scoring. That's for the weights of the rules. The generation of the score itself is another question, and I think it should happen as soon as possible, as often as possible, but that's not part of this question. How would we take care of notifying maintainers that the rules are changing? I think it could be through the mailing list. The developer mailing list is, when you are maintaining a plugin, surely a mailing list you are subscribed to. To be honest, I don't really see any other asynchronous communication channel than the mailing list. These are things we need to work on. I guess this is kind of dependent on how we display the information, right? It probably wouldn't make much sense to notify people on the plugin site, but we could put some sort of badge or notifier where developers work. I don't know if that would happen in GitHub or wherever, but some sort of badge that says, hey, check out the new weights and criteria. A question that I have: in the various proposals made up to now, is there a way for people to look at what the components and the weights used were? Can this be seen by everybody? Is there a display? I think that's part of the question on data presentation, which comes a few lines later. I think there are two different places where the score should be displayed. One is the plugin manager, so on each controller, because we want to help users when they choose a plugin to install. It's useful to have the score on that page. But yes, we have to discuss whether it's expensive, how we fetch the data, and what we include in that data.
And the other one is the plugin site. The plugin site is, I think, the only place where we should have the full details. On the controller, we shouldn't have each rule and the score for each rule. It's not that we don't care about that; that's badly phrased, sorry. It's just not the most important thing there. If we want to understand why a plugin has a specific score, the plugin manager already has a hyperlink that takes you to the plugin's page on the plugin site, with the plugin details. And on that page, we can show all the details about the score: why it is an A plus, why it is a B, and so on. We could even be more specific, because I'm saying letter grades here, but we could have the percentage and say the plugin is an A plus with 95% or 99% of rules validated, and here are the rules that are validated and not validated. From past discussions I had with Daniel, the update center JSON file that is fetched by controllers and used for the plugin manager is already quite heavy, already quite huge. So I don't want to put more data in it than is required. I'm not even sure we should put the data in that particular file. It should be somewhere, maybe in that file, maybe in another one that the controller fetches, I don't know, but I don't want it to be overcrowded. I don't want it to be a pain to parse for anyone, for the controller and so on. So that's some detail we need to look into a bit more. But what did you think about that idea of having two different places, and not having the same details in each one? I really liked that idea. I mean, I think it brings down the cost, right?
And if all the controllers are fetching is just the composite score for each plugin, I think we could keep it rather small. And of course, maybe just one link somewhere within the instance that says, hey, if you need more details, go check out the plugin site, where you can actually view all the details of the plugin. It could even be the badge of the score that we put in the plugin manager on the controller; we could have a link there saying "more details" that puts you on the plugin site on the correct tab, or something like that. If you go to plugins.jenkins.io and open any plugin, there are already a few tabs there to see the dependencies of the plugin and things like that. And it's already used in that kind of way, because on the controllers you don't see the contributors of a specific plugin, and you don't see its dependencies, but you do see those on the plugin site, on the plugin's page: the releases, the issues, the dependencies, and the documentation. So we could have a new tab saying "score", have a badge somewhere on the page showing the score, and more details there. I think that would simplify the controller and also not limit us, because if we put everything in the controller, we would have size constraints and we would need to make some sacrifices on which details we show. If we get rid of those size constraints, then we can be a bit more generous with the kind of data we show. Yeah, I really do like that example. Yeah, I think this is exactly what we showed last week, or what I shared last week.
And I think it describes exactly what you were saying there: having a new tab that would give you all the details of the plugin health score. And this is what would be linked from the controller, I believe, as you were saying. Yeah, that's exactly what I was mentioning. We could also have links, for each rule, to examples of how to fix it, how to improve. For example, you have the number of open PRs there, if I read correctly. Yeah, it says number of PRs open, current, and then a rolling average. Exactly. It's just an example, but we could have links to the open PRs. For a plugin that is up for adoption, that's an easy one: an easy link to explain what it means that your plugin is up for adoption, and how to resolve that. We spoke about Dependabot; we could say, if Dependabot is not activated, here is the documentation on how to activate it, and then the plugin would have a better score. Now I need to defend the contributors a little bit: we can do that in various iterations, so it's not something that needs to be done all at once. We need to prepare things so that we can add it later and know how to add it. I would not recommend that the contributor go and attack a big, big mountain. Oh, no, no, no. There is also an idea I discussed with Bazon about this project: the tool that we build should maybe, when a plugin is not validating a rule, try to open a PR to make the plugin validate that rule. Yeah, exactly. It could be automatic when it's an easy fix. For example, for keeping the parent POM up to date, we can make the upgrade, test it, and if the build passes, then we can submit the PR. But if the PR is not green out of the box, it's more difficult, and that's not something we can do for old plugins.
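The automatic-fix flow described here could be sketched like this. `mvn versions:update-parent` is a real Maven Versions Plugin goal and `gh pr create` is a real GitHub CLI command, but the overall flow, the injected `run` callable, and the branch name are assumptions for illustration, not the project's actual tooling:

```python
import subprocess

def shell_runner(repo_dir):
    """Real runner: executes a command in repo_dir, True on exit code 0."""
    def run(cmd):
        return subprocess.run(cmd, cwd=repo_dir).returncode == 0
    return run

def try_parent_pom_bump(run):
    """Attempt the 'easy fix': bump the parent POM, build, and only open a
    PR when the build is green. `run` is injected so the flow is testable."""
    if not run(["mvn", "versions:update-parent"]):
        return "bump-failed"
    if not run(["mvn", "verify"]):
        return "build-red"  # not green out of the box: leave it to a human
    run(["git", "checkout", "-b", "bump-parent-pom"])
    run(["git", "commit", "-am", "Bump parent POM"])
    run(["gh", "pr", "create", "--fill"])
    return "pr-opened"

# Stub runner where only the verify step fails, so no PR is opened:
outcome = try_parent_pom_bump(lambda cmd: cmd != ["mvn", "verify"])
```

Injecting the runner keeps the decision logic (only submit when green) separate from the shell commands, which makes the "is this an easy fix?" gate easy to test.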
Dheeraj, you wanted to say something? Yes, thank you. Would it help if, for a particular parameter, let's say updating the parent POM, when it's listed on this page, we link it to the adopt-a-plugin tutorial that Mark and Aaron are working on? When it gets published on the site, we could attach that particular section's link to this parameter. Since it's explained there very well, even new contributors would know how to do it. Yeah, that's the idea, yes: to have the documentation directly in the report, saying the plugin is not validating that rule, here's why, and here is how you can fix it. Having all those details in one place makes life a bit easier. And even for a new contributor: someone who wants to install a plugin sees the score is good but not great, looks into why the plugin has a not-so-great score, and sees that it's because the parent POM is not up to date. Maybe that generates a new contributor, or the incentive for a new contributor to attend to that plugin, because from time to time it's not that difficult to just update the parent POM. Of course, it's not always the case; that depends. Did we cover your new questions, Dheeraj? About data presentation, yes, one part of it, and about delivery, yes, one part of it. Let me add one point to the previous section, where we were discussing how maintainers would be notified that the rules have changed. Since Adrian said we can change the rules maybe with every LTS release, what do we think about notifying them via the changelog, as an entry saying here are the new rules, please go check them out? Or would it not be that effective? I like the idea. You mean the changelog of the LTS? Yes, which goes on the jenkins.io website. There we can have an entry about the scoring changing.
Yeah, we could have an entry in the LTS changelog explaining how the scores are now evaluated. That's a very good point. But I think the maintainers of the plugins should be notified in advance. I'm sorry, I didn't think about the frequency of the weight reevaluation before, so I'm thinking on the fly here. But if we change the weights on the LTS, even though I still like that idea, it means that one day a plugin could be an A plus, and then just after the next LTS release it's a B minus. The maintainers wouldn't have a lot of time to readjust and adapt to the new rules or the new weights. I don't think we want to put the maintainers against the wall and force them into that situation. So I think we need to be ahead of time with the rules. We need to be at least one quarter ahead and say: for the next LTS, in 12 weeks, this is how we want to score or weight this specific rule. And maybe we should have a way to say: if you want to evaluate your plugin against that, here's what you need to do. Then they could get a score with the next set of rules or weights, so they have 12 weeks, basically, to prepare. And again, I don't think we will change the weights every 12 weeks. We need to reevaluate them, meaning maybe we will say, okay, we looked into them, it's still good, it's still what we want, so we don't touch them. So maybe they won't change every 12 weeks; it's just that every 12 weeks we looked at them.
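The idea of letting maintainers preview their score against next quarter's weights, a quarter ahead of the LTS, might look like this sketch (the rules, weights, and report shape are invented for illustration):

```python
def preview(rule_results, current_weights, proposed_weights):
    """Score one plugin under the current weights and under the weights
    proposed for the next LTS. rule_results: {rule: bool}."""
    def score(weights):
        total = sum(weights.values())
        earned = sum(w for rule, w in weights.items() if rule_results[rule])
        return round(100 * earned / total)
    return {"current": score(current_weights),
            "next-lts": score(proposed_weights)}

# Illustrative: doubling the Jenkinsfile weight drops this plugin's score,
# which is exactly what the maintainer would want to see 12 weeks early.
results = {"jenkinsfile-present": False, "java-17-ready": True}
report = preview(results,
                 current_weights={"jenkinsfile-present": 1,
                                  "java-17-ready": 1},
                 proposed_weights={"jenkinsfile-present": 2,
                                   "java-17-ready": 1})
```

This is the "production versus pre-production" split in miniature: the same scoring function run on demand with two weight sets, so a weight announcement can ship with a concrete before/after number per plugin.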
We have a good idea of the rule changes. Or, I guess, if we evaluate rules somewhere in the middle of the cycle, with the weeklies, let's say at the sixth week, that gives them five or six weeks to adjust their plugin prior to the next LTS, where it would really take effect. Does that make sense? Yeah, but again, I don't think the rules should be tied to a specific LTS; the LTS is just a milestone in time. We can have a rule that says the plugin needs to require a recent enough Jenkins version, that's for sure, and that also evolves every 12 weeks, because the LTS changes. But I'm not sure we should say: on this new LTS, this will be the new scoring system. I think we need to be open and say to the community: for the next LTS, we decided to reevaluate the weights of the rules, here are the new weights, and this is how it could affect plugins. And we should have the production generation of the score and a different pre-production one, so that maintainers can evaluate their plugin with the new weights, essentially run it on demand. Right. Yes, I think so. Jake and Adrian, I'm looking at the clock. These are important discussions, and I think it's important that submitters know about them. But I want to be sure, because this will probably be the last workshop we are able to organize before you have to submit your proposals: are there questions or doubts related to things that need to be in your proposal? Are there subjects you want to explore? Yes, yes, I do. Two of them, actually. The first one is regarding the same topic. And Dheeraj, just for fairness, I would then also like to listen to Aditya, if he has questions.
So we need to use our time wisely now. Dheeraj, go ahead, what is your question? And don't go too deep into the answers. I think Aditya is one of the maintainers. I'm sorry, mentors. Aditya, I think, is one of the mentors, right? Yes. Ah, okay, good. I forget the names, I'm sorry. Welcome. So Dheeraj, go ahead, and if Aditya wants to add things, he can raise his hand. Okay, I misunderstood that. Go ahead, Dheeraj. Yes, thank you. So I understood the discussion about how we are going to display the score on the plugin site, as shown on this screen. My question is about another place to display things. It may not work; it's one of those ideas you have at 3 a.m. that don't work out, but I just wanted to discuss it with you very quickly. I've written some parts of it in the proposal; let me scroll to it. Yes, this one. So it would be a new website, whose whole aim would be presenting and comparing ways of improving the health score of all the plugins. When you visit it, it would be fully score-related, with bulletins and real-time kinds of things going on, saying: congratulations to contributor XYZ for increasing the health score of the Parameterized Trigger plugin from this to this; this plugin saw a 15-point increase in its health score thanks to this. My aim here is to make it more contributor-centric, so that contributors can see it too, which ultimately could drive more contributions from their end. But I understand the negative side of it: why do we need to build all of this over again? So I'm curious to know what you think. I think, Jake, you had a point about that, the fear of making it a bit too much of a game? Yeah, gamification.
Yeah, certainly the gamification aspect. And with having it in the plugin site versus something like that, I think we kill two birds with one stone, right? We're taking care of the maintainers and the end users; the end users get a glimpse into the plugin health score as well. I don't know how we would sell it, or whether users would even be allowed to go to a health.jenkins.io. I think we would be adding something that's outside of their normal user flow. If we keep it within the plugin site, that's already kind of part of the user flow; it's already ingrained with them and it feels a little more natural. And I tend to agree here about having another portal for a person to go to and see that kind of detail. Even though it's good to see a plugin going from a D to a B plus or something like that, the side effect is also showing plugins going from a B to a C because we changed the weights. And that's not a shame. I don't want that screen to start feeling like we put shame on a plugin because of something that is out of the maintainer's control, or maybe because the maintainer is starting a new job, has changed focus, and is not focused on Jenkins anymore. That can happen, and it's not something we should pour disgrace on. So, and it's a personal feeling here, I don't really like the idea of showing that plugin X gained 15 points or things like that. Even though I do see that it can be rewarding, I don't feel like it will be, in the long term or even mid term, a good thing for the community.
And as Jake said, by having the score, the health details, on the plugin site, we already address the users and the maintainers of the project. So I feel like it's easier. It's not like reducing the scope; it's just that we're focusing the scope, and we will give the right feedback to anyone who wants to see those details. And with Jean-Marc trying to keep us on track here, I won't go too deep into it. I agree and disagree with showing the delta of the plugin score. I think that is useful information for an end user. It may have negative consequences for the maintainers, but I think it is rather useful information for the end users. That's all I'll say on that point. So maybe we should explore the delta of the plugin score and showing that; we can explore that, for sure. Yes, that makes sense to me, because it's better to use the plugin site, since it fulfills all our purposes, whereas introducing a new workflow for everyone to get accustomed to takes time. So the plugin site as the data presentation area makes more sense. My next question is about what we have discussed already on the update center: since it's very large, we do not want to overburden it with all the parameters. My question here is about what you shared in the email. You said that if we want the score to be visible on any controller, it should just be added there. I just need some clarity on that. Yeah, that's my fault; I'm sorry, I don't have a strict answer, only a complicated answer on that. I feel like we need to have the data on the controllers. No, I don't just feel like it, I know we need it. The problem with the current size of the update center JSON file is that it's already big, and if we add data to it, it can start to be a nightmare.
So I need to discuss with more people to see, about the email that you sent, whether you got answers, because I think you proposed that we would add a lot of things to the scoring data. I think if we just add a few characters for each plugin, meaning "score: 97" or "score: 48", then it's limited. It should be enough to display the score efficiently on the plugin manager page, and we already have enough details in the update center to be able to link to the plugin site. So I feel like we should add it to the update center, but I need to speak with Daniel to make sure that I'm not saying anything incorrect, or anything that for him is a big mistake. I need to check with him. Sure. So one more small question on that: you were suggesting not to include this kind of sub-information, right? The entire parameters object that you have here? I wouldn't add that. I would add a key, "score", and then its value; not more. Yeah, just the overall composite score, right? Exactly. The details would be displayed on the plugin site. The data would be based on the same source of information in both cases; it's just that on the plugin manager, I don't think we need all the details. Okay, yes. So just to make sure I'm not confused: I got what you're trying to say here, but my concern is that on the plugin site, if we want to display information like this, we do have the score for each particular parameter, so we need to publish it in this JSON file, right, in this form? If we do not have it here, how would we present it on the plugin site? I'm sorry, but the JSON that you're showing here, is that the update center JSON? Yes. Okay. We would have a different file, I think, that the plugin site would use to generate its page. That still needs refining, but yeah.
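The two-payload split discussed here could look roughly like this; these field names and shapes are illustrative assumptions, not the actual update-center schema:

```python
import json

# Slim entry: what every controller would fetch for the plugin manager.
# Only the composite score is added on top of the existing metadata.
update_center_entry = {
    "name": "parameterized-trigger",
    "version": "2.43",
    "score": 87,
}

# Detailed entry: a separate file only the plugin site would fetch,
# carrying the per-rule breakdown and fix-it links.
plugin_site_entry = {
    "name": "parameterized-trigger",
    "score": 87,
    "details": [
        {"rule": "jenkinsfile-present", "passed": True, "weight": 1.0},
        {"rule": "dependabot-enabled", "passed": False, "weight": 0.5,
         "how-to-fix": "https://example.org/enable-dependabot"},
    ],
}

slim = json.dumps(update_center_entry)
detailed = json.dumps(plugin_site_entry)
```

Both payloads derive from the same scoring data; the slim one just adds a few characters per plugin to what controllers already download, while the detailed one can grow freely because only the plugin site fetches it.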
And when you open the tab with the details, it would then download that JSON? Is that the mechanism? To be totally fair, I don't know exactly how that would work, but the idea is, yes, the plugin site would have a different set of data than the plugin manager. Yes. So you're saying we'd have different files, just like we have on the update center, where each URL maps to a different thing we want to display. There would be two files published: one would have just the health score for the plugin manager, and another path would have the detailed breakdown of these parameters, which can be used to publish the details on the plugin site. Did I get it right? Yes, something like that. Okay. I'm really sorry for not having a firm answer here, but that is what I'm thinking of. I don't have the details in my head yet; I don't have the complete implementation. That's also part of the GSoC. I think what Dheeraj could mention in his document is that the actual implementation needs to be reviewed or further explored. You could say this would be one of the prototypes; is that something you can do? Sure, yes, I can do that. Because things still need to be thought about, and we have time for that. Okay, we have five more minutes to go. Yes. I don't have a specific thing to share, but I was researching the plugin health score calculation algorithm. I read a bit about weighted normalization techniques for its dynamic nature. I tried to read some papers about it and watched a video today, but it was too much machine learning. So can you give me some info on that? Yeah, sure. I don't think we'll be able to do that in five minutes, actually. I'll share some blogs, maybe, or some videos, or maybe we can set up a call later; that's not an issue.
But since you asked about the algorithm, actually, I wanted to share that I was thinking of one, and I had created a flow diagram kind of thing for it. So if it's possible, I'll share the link here and you can maybe open it up. OK, will that be all right? It's a very simple algorithm, so I don't think it should take more than two or three minutes to go through. So do you want me to stop sharing? You can share. No, it's fine. OK, yeah. So let me know which link I should open. Yeah, one second. So actually, I pasted it here; for some reason, I think my system got overheated. Can you scroll down? It might be breaking somewhere here; if possible, you can paste it further down in the same document. Yeah, sorry, in the Brainstorm section here. I'm not sure why this is happening. Mysteries of computing. After all these years, I'm still wondering. I see a cursor at the end, but I'm not able to see any data. Yes, same thing. Yes, I think I see the background. Yes. There, OK. Yeah, I think so. OK, yeah. So you can hear me, right? Yeah. OK. So I thought about the various probes that we discussed in the last meeting, and I thought that we can divide them into three types. One is booleans, like whether a file is present or not. Another is unbounded non-booleans, or I should have said integers or floats, but unbounded variables, for example the number of stars on the repository; that's something I think Mark suggested. And there could also be bounded non-booleans, though here I don't really have an example of a bounded non-boolean. So for booleans, I think the scoring can be as simple as the weight times plus or minus 1, as Dheeraj has suggested in his proposal. And since the absolute value of the weight would be less than 1, the weight would probably lie between 0 and 1.
So if we multiply that weight by plus or minus 1, the resulting range will be minus 1 to plus 1. For bounded variables, since they sit inside known boundaries, we can rescale them to minus 1 to plus 1 as well. For the unbounded case, I was thinking of some transformation using a function like sigmoid or tanh; these functions are known to bring an unbounded value down to a bounded one. Sigmoid maps to 0 to 1, and tanh maps to minus 1 to plus 1. So we can use either of them to bring the values down to the ranges we want. And then we can simply take a weighted average, where by weighted average I mean 1 over the number of booleans, times the sum of the scores we got from the booleans, plus 1 over the number of bounded non-booleans, times the sum of the scores we got from those, and so on. Why I did this is because, in all three cases, we'll get something between minus 1 and plus 1, but we also don't want to give equal weightage to every individual probe across the three classes, because the number of booleans could be five or ten while we have, say, fifty unbounded values. That shouldn't mean the boolean values get drowned out. So if we divide by the number of booleans present, or the number of unbounded non-booleans present, it normalizes for the size of each category. And finally, since this value can again be scaled to 0 to 1 very easily, we can just multiply it by 100 and show it as a percentage. So this is what I came up with. Yes, it makes some sense for now, but I think I can read more about these, experiment, and ask you questions on the chat channel to get more clarity on the algorithm. Yes, definitely. Thanks a lot for sharing, because this is extremely helpful. Okay. No problem. We're reaching the end here.
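The three-category scheme described here can be sketched end to end. This is a toy illustration, not the project's agreed algorithm: probe names, weights, and the tanh scale are all made-up values, and the bounded category is omitted since no example probe was identified for it.

```python
import math

# Hypothetical probe results and weights, for illustration only.
boolean_probes = {"jenkinsfile_present": True, "security_scan_enabled": False}
unbounded_probes = {"github_stars": 340, "open_pull_requests": 12}
weights = {
    "jenkinsfile_present": 0.9,
    "security_scan_enabled": 0.7,
    "github_stars": 0.5,
    "open_pull_requests": 0.3,
}

def score_boolean(value, weight):
    # True -> +weight, False -> -weight; with weights in (0, 1],
    # every boolean score lands in [-1, +1].
    return weight if value else -weight

def score_unbounded(value, weight, scale=100.0):
    # tanh squashes any real value into (-1, +1); `scale` controls how
    # quickly large counts saturate and would need tuning per probe.
    return weight * math.tanh(value / scale)

def health_score(booleans, unbounded, weights):
    b = [score_boolean(v, weights[k]) for k, v in booleans.items()]
    u = [score_unbounded(v, weights[k]) for k, v in unbounded.items()]
    # Average inside each category first, so a category with many
    # probes does not dominate a category with few.
    per_category = [sum(b) / len(b), sum(u) / len(u)]
    combined = sum(per_category) / len(per_category)  # still in [-1, +1]
    return round((combined + 1) / 2 * 100)            # rescale to 0..100

composite = health_score(boolean_probes, unbounded_probes, weights)
print(composite)  # a value in 0..100
```

The interesting property is the per-category averaging: fifty unbounded probes contribute the same total influence as five booleans, which is exactly the size-normalization argued for above.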
A question for Dheeraj: do you feel equipped to finish your submission? Yes, totally. I think so, and this will work. Does somebody want to add something? We're at the end of this meeting, over time. So I thank everybody for the contributions, the note taking, and the interesting graph from you, Aditya. And well, we'll close the call now. Yes, thanks a lot everyone. Thank you everyone. Bye-bye. Bye.