Welcome. It's late on the 31st here — April 1st, 2022, India Standard Time — and this is the git cache maintenance project ideas brainstorming session. We're recording. Thanks, everybody, for joining. In our last session we started a bunch of discussions about various topics around how the user interface should work and what that would mean. I'm open to questions and discussion here; we've got a bunch of good questions in the comments on this brainstorming document. If any of you have specific questions you'd like to ask, whether outside the document or covered in the document, now is a great time to ask them, and let's talk through them. We will limit ourselves strictly to 60 minutes because I'm getting tired and it's been a very long day. All right — any specific questions? I can certainly give you an overview of the UI. Oh, you had a question — go ahead. So, basically, on my proposal there was a comment by Kalle — and I might be saying his name wrong — I'll link it in the chat here. In the previous session we had talked about the git prefetch feature, and he had some insight on how we could use it, in which cases it could be useful, and in which cases it might not be suitable. So I just wanted to put that out there — he had given me these suggestions. Okay, so this is where he and I are going to disagree, and I'm going to try to justify why I disagree. Very good question. Kalle Niemitalo is a well-known and very skilled user of Jenkins; however, I think he's missing the context. What he's saying in his first comment is: hey, I don't think prefetch would be useful, because you're already being notified every time there's a change to the repository. And then he says: oh, it could be useful for a repository used via git clone --reference — which is close to how the caches on the Jenkins controller work. They are not reference repositories, but they are bare, and as bare repositories they can then be used for other purposes. So let's talk through this a little. A cache on the controller is being used to answer questions about the content of that cached repository. If we prefetch content into it, that means we don't have to require that everything use webhooks — that's the first point, because not everyone can use webhooks. There are plenty of people whose systems are behind a firewall and who are unwilling to configure webhooks. They don't get notified, and prefetch would let them bring down that data as it's detected, every hour or two. Now to his second point, that it could save network bandwidth: it actually doesn't alter network bandwidth usage, because the prefetch is fetching objects that would be fetched eventually anyway. What the prefetch does is save time during the fetch that acts on those changes, because the objects are already there. And his point that it's not safe to remove objects from a reference repository is a valid and interesting case. If someone is using a reference repository, that's a different circumstance, and he's right — that case is out of scope of this project.
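As a concrete illustration of the kind of cache being discussed — a bare repository on the controller that is refreshed on a schedule rather than by webhooks — a minimal sketch might look like the following. The cache path and remote URL are only examples, and using a mirror clone is an assumption for the sketch; the prefetch task itself is standard Git.

    # Hypothetical cache location on the Jenkins controller (path and URL are examples).
    CACHE_DIR=/var/lib/jenkins/caches/git-abc123

    # The cache is a bare repository, not a reference repository.
    git clone --mirror https://github.com/jenkinsci/git-client-plugin.git "$CACHE_DIR"

    # On a schedule (say, every hour or two), pull down new objects ahead of time.
    # prefetch stores what it brings in under refs/prefetch/, so nothing that reads
    # the cache sees branch refs move until a real fetch is requested.
    git -C "$CACHE_DIR" maintenance run --task=prefetch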
We're not worrying about these things being used as references; they're being used as caches. So, Ariane, did that answer your question, or am I not yet persuading you why I think prefetch is useful for a Jenkins cache? Okay. So, would I have to implement some safeguard for that scenario — would I have to disable garbage collection? No, because these caches are typically not used as reference repositories. The caches are hidden from Jenkins users; they live on the controller. A reference repository on a Jenkins controller should not be of any benefit, because people shouldn't run jobs on Jenkins controllers. The first goal is: don't run jobs on the controller. And since you're recommended not to run jobs on the controller, you shouldn't even be able to use the caches on the controller as reference repositories — a reference repository has to be on the same file system, on the same computer, as the repositories that reference it. Now, there are things we have to do in the prefetch to safeguard it. We don't have to disable garbage collection on a repository that's being prefetched; prefetch itself does things to avoid updating the refs without the user having requested that they actually be updated. I think you may be aware of that from our discussion last time. Prefetch has a very specific description on the git maintenance page — let's find it... there it is. Prefetch — and you're welcome to tell me, "Mark, I already know this, you don't need to show it to me." Yeah, I think it would still be helpful, though I think it only shows the delta of the newly collected information. The crucial thing here, for me at least, is what happens when it does a prefetch: it modifies the refspec to bring everything that's being pulled into a location where other commands won't find it. That's why they say here that this is done to avoid disrupting remote-tracking branches. When a prefetch runs, to the user nothing has changed in that repository. If the master branch on the remote received new objects, then when I do a prefetch on my local copy I can't see those objects in the usual way — say, in a git log of the master branch. Prefetch effectively hides them: it pulls them anyway, but hides them under refs/prefetch. That way it's not bringing in an update to the master branch that I did not explicitly request. Have I explained that, or would you like to ask questions about it? Oh yeah, I remember it from the previous meeting. Okay, great. So the idea is that objects that would have been pulled by the user asking "give me the latest changes on the remote-tracking branch" are already there locally, just sitting in this refs/prefetch location, and when I ask for them, they also get recorded under refs/remotes/origin. And the prefetch would also make the fetch command faster? Exactly — that's in fact the benefit: by having done the prefetch, objects that would have had to be fetched during the git fetch requested by the user are already there. All right. Anything else with regard to prefetch?
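A quick way to see the behavior described above — prefetched objects arriving without the remote-tracking branches moving — is to compare the refs before and after the task runs. This is a minimal sketch against any clone; the only assumption is a remote named origin.

    # Remote-tracking refs before the prefetch.
    git for-each-ref refs/remotes/origin | head

    # Pull new objects into the object database, hidden under refs/prefetch/.
    git maintenance run --task=prefetch

    # The remote-tracking refs are unchanged...
    git for-each-ref refs/remotes/origin | head

    # ...but the prefetched tips are visible here, and the objects are already
    # local, so the next "git fetch" only has to update refs, not transfer data.
    git for-each-ref refs/prefetch/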
So, back to the question from Kalle — or the observations from Kalle — did that address your question? Yes. Okay, good. All right. There had been some discussion here relative to the UI, and I wanted to double-check that everybody was okay with the concepts. Are you, in your draft, thinking about how the user will interact with the maintenance system, with the maintenance configuration? I was thinking that we can have the options that you have, and I was thinking of adding options such as comparing a run with previous runs, or saving a particular run so that it can be compared with later on. Interesting, okay. So you said the idea was to compare with a previous run. What sorts of things would you envision the user might do with that, and how might that comparison be presented? Maybe the user wants to manually edit some of the tasks and how they run, and change some of the config values. What this would allow them to do is have a specialized setup for the specific repository they're using, and see how git gc and the maintenance commands can help them optimize their time. Okay, now I think I'm seeing it — so, for example, a very large repository may need different options for git gc. Interesting, okay. Good. As we were discussing earlier, what I had in mind in terms of UI was that in an organization where people are using Jenkins, only the administrator — not everyone — would have the option to change the strategy for these tasks. If I want to run my build, I don't know or care how the optimizations are working within the Jenkins system, right? From an administrator's perspective, I understand that we're providing a page where we can give some heuristics on the basis of which they could create a strategy and run these individual maintenance tasks. If that is the case — if we're providing this page for the administrator — then, as far as I'm aware, and please correct me if I'm wrong, how would we show the run history? I mean, if you're thinking that based on previous performance we want the user to choose how to update their strategy for running these tasks, how would we do that? As far as I understand, those are two separate areas of the Jenkins UI. On the global settings page, I can't see my previous builds in the configuration — there's no history available to me in that context. Yeah, good question, Rishabh. So this was a concept that I wasn't entirely settled on at first. Okay, I'm going to bring up an example just to talk through what I think your thinking is. The maintenance tasks, for me at least, I don't think of as jobs in the sense of Jenkins jobs. I think of them more as — the results of the maintenance work may come out in some sort of log, like this git polling log — because I was assuming the maintenance tasks aren't the same as a Jenkins job. Is that consistent with what you were thinking as well? Yes — they would be an abstract concept, maybe hidden from a regular user. I don't think it's something everyone has to worry about, right? Not everyone is going to be aware of how git gc works, even if they're developers using Git to build from a repository.
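For a sense of what a per-repository "strategy" could map to underneath whatever UI the plugin exposes, Git itself already lets maintenance behavior be tuned per repository through config. A hedged sketch — the specific values are examples only, not recommendations for the plugin:

    # Inside one particular cache repository: enable the cheap tasks...
    git config maintenance.prefetch.enabled true
    git config maintenance.loose-objects.enabled true
    git config maintenance.incremental-repack.enabled true

    # ...and tune collection behavior for a very large repository, e.g. raise the
    # loose-object threshold that auto gc uses (5000 is an arbitrary example).
    git config gc.auto 5000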
Yeah, so my assumption had been that somehow we would need, accessible from this maintenance UI, access to a history of these logs. Today the git polling log, as an example, only shows one — it shows the most recent. If I click "Poll Now", it updates the polling log, but I don't have any history. So the git polling log is not quite the concept I had for tasks. I think these maintenance tasks have some history and some configuration that defines them, but I'm not sure they can be called a freestyle job, because I don't think they are. So it seemed like: do they need to be some kind of different, new job type, or a new lightweight task type? Was that the sort of question you were asking, or am I off base there? Yeah, I just wanted to understand: we were talking about how we want the user to use the job history to determine the strategy — to update or modify the strategy they have for these maintenance tasks — and the schedule and the strategy for those tasks are configurable at the administrator page level. What I'm not able to picture is how we connect those two things. Yeah, okay, so let me see if I can capture that. Since tasks are a new concept, how do we connect them — how does the administrator see the results of those tasks and configure them? Is that the kind of question you're asking, or am I not getting it yet? Yes, yes. So my thought had been: if we've got a task definition in the UI, in this scribbled section here, would we have a selection in one of the columns on the right that says something like "show me the history of this task"? A page would appear that is the history of the task, and that page may look something like this build history page. Let's see, can I show that... where's the history... conceptually... here we go. It could be something like the history of the builds. And from there, okay, I can pick one and look at its log, its console output, to see what happened. That was my idea, but I'm open to other ideas. Is this something that we should maybe leave open in the proposal — see how people want to present it? I would think so, because I think how the administrator interacts is an important part of the project. Yes, and a small part of that question is: is this feature really focused on administrators, or is it available for everyone who accesses the Jenkins instance, and do we want to hide that information from some people? I was assuming we wanted to hide it — that this is purely for administrators. If you don't have administrator permission, you can't do anything with this page, or it's just not available. Now, I'm open to being wrong about that, but my thought was that this is purely an administrator function, because one of the actions might conceptually be "delete the schedule", right? That stops the system from doing any maintenance, and your users might suffer if some malicious user said, "I'm going to stop the maintenance of all of our git caches." Or increase the frequency of prefetch or garbage collection. Yeah, there you go.
"Hey, we've got the Linux kernel repository here and we're going to garbage-collect it every three minutes." Yeah, and the answer is no, that won't work. So, yeah, good point. Yep. I wanted to discuss the order of execution of the git maintenance tasks. We stopped right there last time, after prefetch and incremental-repack. Last time when we were trying the loose-objects task — I don't know if you remember — the loose objects didn't get deleted. Okay. I've gone through it since then. Mark, can you open a big repository in your terminal? Sure, you bet. You want to see a big repository — let's see if I've got a copy of the Linux kernel; that's probably big enough, I assume? Yeah. Okay, let me get a new terminal window, just a minute. linux-stable... let's see if I've got one readily available. I don't see it here; let's go to a different computer. Oh yes, there we go. Okay, so here is an older copy of a Linux kernel. Can you have a look at the loose objects present in it? Sure, you bet. All right — okay, not many loose objects in this one, but go ahead. No worries. Can you run git maintenance with the loose-objects task? Ah, okay — then let's do it with one that's not quite as big as this one; if you're okay with it, let me go to a slightly different repository. So let's go here. Okay, this one has more loose objects. All right, and if I remember right the directory is relatively large — 180 MB, if I read that correctly. All right. Okay, so: git maintenance run --tasks=... and is it loose-objects? Okay — maybe it's --task, not --tasks. Okay. Yeah, now can you have a look at the pack files? The loose objects don't get deleted right now — you have to run it once again. Okay, so let's look at the packs. And you want to see the date stamps on the pack files? There are two new files: a loose-*.idx and a loose-*.pack. Now, can you run the loose-objects task once again? Sure. And now can you have a look at the loose objects? I don't think you'll find any. Okay — right. That's what we saw last time when we tried it and the loose objects weren't deleted on the first run. So basically, what I was thinking is that the order of the maintenance commands should be prefetch, loose-objects, and incremental-repack, because I'm not sure whether incremental-repack considers the loose objects when it packs files. It's better if we go this way, is what I was feeling. Okay, so talk me through that again — what you were thinking was that the order should be, you said, prefetch, then loose-objects? Initially it was incremental-repack second, but now I suggest we have loose-objects first. Okay, so: task loose-objects. And the reason for that is that what loose-objects does is collect them — it collects all of them and puts them into a pack file. Okay, so: copy loose objects into a pack. All right. And then the thing about incremental-repack is that I haven't found anywhere on the internet whether it considers loose objects behind the scenes. So it may not matter if incremental-repack is placed before loose-objects, but it's safer if you put loose-objects before incremental-repack so that we don't miss out on them. Okay.
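The two-run behavior observed here matches how the loose-objects task is documented to work: one run writes a batch of loose objects into a new loose-*.pack, and a later run deletes loose objects that are already covered by a pack. A minimal sketch of the same experiment, on any repository that has loose objects:

    # Count loose objects and packs before doing anything.
    git count-objects -v

    # First run: batches loose objects into a new pack (loose-*.pack / loose-*.idx),
    # but deliberately leaves the original loose files in place to avoid races.
    git maintenance run --task=loose-objects
    ls .git/objects/pack/

    # Second run: now that the objects exist in a pack, the loose copies are removed.
    git maintenance run --task=loose-objects
    git count-objects -v   # "count" should now be (close to) zero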
And the idea then is, if we do an incremental repack — so let me do an incremental repack here — and now what it's done is... I've still got my loose objects. Okay. Obviously I'm going to have to read what incremental-repack does... so incremental-repack repacks using the multi-pack-index feature. Okay, multi-pack index, right, all right. Ah, okay, so this is about combining multiple smaller packs into a single larger pack for efficiency. Got it. It would also be easier to search for any git object using the multi-pack index, because all the objects are sorted, so we can use binary search — that reduces the lookup time as well. Okay. So your recommendation was loose-objects, then incremental-repack, and then prefetch? Prefetch first. Okay — then loose-objects, then incremental-repack. Right, okay, so: task prefetch to retrieve new objects, then loose-objects to copy the loose objects into a pack, and incremental-repack to combine the smaller packs into larger packs. And then I was thinking about commit-graph, and then gc. And commit-graph is creating a database — a record of the commits that are available in the object database, right? Yes. So, yes, okay — it updates the commit-graph file incrementally. Last time we had a discussion about the commit-graph, where on every fetch it keeps updating the commit-graph, which sounded like a very big task, right? I've gone through the web and read that the commit-graph updates incrementally, across multiple files. It's not like it's going to rewrite the data for all the commits at the same time, so it's not a huge process; it won't take that much time. Good — so it's not as expensive an operation as we thought it might be. I've shared a link there, on the right side. Yeah — whoops, did I miss it? Oh yes, here it is. I was reading this earlier today; thank you very much. So this is showing how the commit-graph may start as a single file and grow into multiple files. Good. Okay. So if you run the commit-graph task on that same repository, you should find a commit-graph file. Okay, let's do that. And where does it go — is it under objects/? Under objects/info? Okay, info. Do you have a tree command? You mean like tree, to list everything? Let me see... no — here, let's do it this way. Okay, so now we can go exploring. It should be under objects — maybe objects/info/commit-graphs. Yes, yes. Okay. Can you see that there's a commit-graph-chain file, which links the commit-graphs together? One commit-graph keeps building as you keep doing fetches, which doesn't disturb the other commit-graphs — so you won't be modifying or rewriting the data for all the commits, which means it isn't a huge task. Well, and that matches: what we see here is that this file was created on March 31, and I assume it's the second file listed in the commit-graph-chain. Yes.
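Here is a minimal sketch of the inspection that was just done on screen — writing the commit-graph incrementally and looking at the chain file that links the layers together. Nothing here is project-specific; these are standard Git paths and commands.

    # Write (or incrementally extend) the commit-graph for this repository.
    git maintenance run --task=commit-graph

    # The split commit-graph lives here: one or more graph-* files plus a
    # commit-graph-chain file that lists them in order.
    ls .git/objects/info/commit-graphs/
    cat .git/objects/info/commit-graphs/commit-graph-chain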
Mark, can you check in your global config whether you have fetch.writeCommitGraph enabled, set to true? Sure, let's check that: git config --list. Okay, I don't see it there. Now let's check --global. And you were looking for writeCommitGraph? Yes. Okay, I don't see one there. And let's check --system. No, I don't have one. Which git version are you using right now? I think this is 2.35 — yeah, 2.35.1, so current. So what Rishikesh is talking about — sorry if I'm not saying his name right — is writing the commit-graph incrementally after every fetch. That is a recent change, as far as I read, which is what I pointed out last time: if we don't have that property enabled, in the older versions it won't incrementally update the commit-graph every time we perform a git fetch. The way it worked earlier, as I read, was that gc was tasked with updating it, and if the time period between the gc and the fetch is long, then when you perform the git fetch it's going to take more time — it's going to be an intensive operation — because gc was supposed to have done it. Recently, the author who actually developed the commit-graph — from what I read on his blog — said that in recent versions they included this configuration, called fetch.writeCommitGraph, and if it's enabled, it incrementally updates the commit-graph on each fetch, so you don't have to bear the whole cost of doing it once in a while. Yes, you're correct — absolutely correct. It's even stated on that document, on that website I shared. Yeah — for versions greater than 2.24, writeCommitGraph is enabled by default. So we have to consider cases where the git version is less than 2.24; we'd have to enable it from Jenkins. Ah, okay, so what you're saying is the default setting on older versions is suboptimal. Yes. Okay. All right — well, it would depend on a lot of factors whether it's suboptimal. It could be suboptimal if gc is the only thing updating it; we don't know what kind of activity that repository is going through, or whether the fetch is actually going to cost a lot in terms of the commit-graph. But yes, it could potentially be, and that is what the author says — that is why they introduced the feature. Okay, all right. And the feature notes that it may be expensive — so is it safe to say that the fetch may be more expensive than it would be if the commit-graph had been kept updated? Is that the right way to say it, Rishabh — am I missing something? Yes. Now, Mark, if you run the git gc task on that same repository, the loose objects do get deleted — they get packed. Okay. And I think this one had already had that done... the pack file... oh, I see what you're saying: these loose files here, those would be removed. Yeah. Now, I don't promise that git gc will run terribly fast here. Wait a second, I made a mistake — the maintenance manual says to use this instead; let's run --task=gc. Oh, okay, it's not bad. And now if we look at things: it packed everything into a single pack, and how big is that — it's only 17 MB, so it's actually using the reference repository quite well, because the repository that this thing represents is over 100 megabytes. That's good. Okay, great. And there are some loose objects again. Okay. So what do we do — run it again? No, no — because they'll be handled in the future, right, the next time maintenance runs after a fetch.
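A hedged sketch of the two things that were just done at the terminal — looking for fetch.writeCommitGraph in each config scope, and running the expensive collection through the maintenance front end rather than bare git gc. Whether the setting needs to be set explicitly depends on the Git version in use, so treat the last line as an example rather than a recommendation.

    # Look for the setting in each scope; no output means it is not set there.
    git config --system --get fetch.writeCommitGraph
    git config --global --get fetch.writeCommitGraph
    git config --local  --get fetch.writeCommitGraph

    # Run the full collection through the maintenance front end, as the manual suggests.
    git maintenance run --task=gc

    # On older Git versions one might opt in explicitly (example only):
    git config --global fetch.writeCommitGraph true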
Yeah, so the sequence we described was: prefetch to retrieve new objects — so I did the fetch — then we can conceptually say, okay, fine, we're going to run loose-objects to form them into a pack, then incremental-repack, and commit-graph to keep it updated, and, less frequently, a gc. And that's the one — the gc task — that runs less frequently because it's so expensive. So, Rishikesh — oh, go ahead, Rishabh. No, sorry, I didn't mean to interrupt; I was just going to ask a question, please go ahead. So, I just had a suggestion, looking at these experiments we've been doing. In the proposal, would it be beneficial for the students who are writing the proposals to start with a repository of their liking and define the parameters of that repository which are going to be affected by the maintenance tasks — the size, the number of objects, the number of loose objects and pack files, the number of references the repository has — and then define the strategy, whatever they think is the best strategy, to run these tasks, run them, and show how that has affected the repository, describing how each of the individual tasks affected the parameters that were defined at the start of the experiment? Instead of just describing how these commands work — because that is something anyone can Google and find out, right — choosing your own type of repository, running the tasks on it, and showing how it would actually work would be a good experiment. But it's something I wanted to ask the other mentors as well: is it something we should expect in a proposal? Good question. So the idea is: if the proposal said, hey, I have these sample repositories — a few sample repositories; let's say jenkins-bugs as my example, and others — then the idea you were suggesting is to run a series of comparisons of the operations and their impact on the repository. Yes. So, the idea being: okay, if a bunch of new commits arrive — and in this particular example repository, a bunch of new commits seem to arrive pretty regularly — then I do the prefetch, and what's the impact? What happens from the prefetch — oh, we got this many new loose objects, this many of that — and then, all right, now we do loose-objects, and what happens then; and likewise an incremental-repack, and the same story — what's the result of that? Is that sort of what you were describing? This would be much more valuable for me, individually, when I look at a proposal, than definitions of what git prefetch or loose-objects or incremental-repack do, because that is something I can go to the documentation and see anyway. And it would probably be beneficial for the people writing the proposal, too, to come up with the strategy they think is right — because even though we have to talk about the user interface, before that we need to know what kind of strategy we're going to choose for the maintenance tasks. It could be one aspect of the proposal; I'm not saying it has to be there.
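For anyone trying that experiment in a proposal, the parameters mentioned above can all be read from standard Git commands. A minimal sketch — run inside whatever repository you choose, and the set of metrics is only a suggestion:

    # Snapshot the parameters the maintenance tasks will affect.
    snapshot() {
      echo "== $1 =="
      git count-objects -v                              # loose objects, packs, sizes
      git for-each-ref | wc -l                          # number of references
      du -sh "$(git rev-parse --git-dir)/objects"       # total object store size
    }

    snapshot "before"
    git maintenance run --task=prefetch           && snapshot "after prefetch"
    git maintenance run --task=loose-objects      && snapshot "after loose-objects"
    git maintenance run --task=incremental-repack && snapshot "after incremental-repack"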
Well, and in terms of sample repositories — if I recall correctly, we've got in the git client plugin one or two samples actually coded in the source code: here's a large, a medium, and a small. So see the git client plugin source code for the URLs of some example large repositories. Now, I don't know that they were chosen as highly active repositories; they're large but relatively quiet, if I remember correctly, whereas the Linux kernel is both large and very active. And this jenkins-bugs repository in my GitHub account is quite active — I tend to clutter it with a lot of junk. Yeah, and this is where we would leave that to the student, right? How they choose the repository, and based on what parameters, is something that would be really interesting. The activity would determine the commits and how things are being updated, and the size would be more related to the objects and the management of objects. That is what I was thinking. Right. I like that — I think that's a good suggestion for consideration: as candidates are putting in their proposals, consider that idea. Should we look at the strategies and the alternatives? I think that's worth discussing. Yeah. Okay. We've got about 15 minutes left. Are there other topics on your mind? I had another doubt, regarding the execution of git maintenance. We are running it globally, right, over many repositories. So are we going to execute it serially, or perhaps create multiple threads and run it over the various repositories in parallel, or sequentially, one after the other? I like that question very much — let me make some notes on it, because for me that's part of the global configuration and how the user experiences it, and it may have very different answers depending on some things. So: how do we decide the scheduling of the maintenance tasks? The crucial question is, can we schedule maintenance tasks in parallel, and what if that overloads the controller? Because a git gc of the Linux kernel will, by default, take every thread, every core on that processor, to perform its job, if I remember correctly — git gc is designed to be massively parallel. So if we schedule too frequently, we may consume the controller's processor, and it may not be able to do its real work because we're so busy doing garbage collection. So do you recommend executing in series? That's what I'm leaning toward, but I'm not sure how to think about it yet. The question might be: how do we allow the administrator to choose, and how do we measure and report the impact of maintenance? Go ahead. Yes, what I wanted to say was that it depends on what our priority is. With parallelization, what are we going to achieve? Let's say the tasks, run for all of the repositories in a serialized fashion, would take X amount of time; if we do them in parallel, it would be X minus some amount of time. Is that our priority, or is the priority to make sure that we do not, as you said, overload the controller, and avoid the risk of actually sabotaging the real work — which, technically, is what the user has come to Jenkins for? That's a good point. I think you're right: the first priority is to do no harm, to use the medical phrasing — don't harm the controller with these tasks. There's going to be some cost; it's not free to run a garbage collection. So I think that would argue for serial tasks first, until proven otherwise, because that's the lowest risk.
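To make that trade-off concrete: the lowest-risk version of "run maintenance globally" is just a loop over the cache directories, one repository at a time, so a single expensive gc never competes with another. A minimal sketch — the cache path is hypothetical, and capping pack.threads is only one possible way to further limit CPU use:

    # Hypothetical location of the controller's cache repositories.
    for repo in /var/lib/jenkins/caches/*; do
      # One repository at a time; an interrupted run stops cleanly between repos.
      # pack.threads=2 (example value) keeps a large repack from taking every core.
      git -C "$repo" -c pack.threads=2 maintenance run --task=gc
    done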
So with serialization, I believe, if the administrator is looking at how the maintenance schedule is working on a repository — let's say I feel like, oh, this is not working the way I wanted it to — I can stop it. The cost of stopping it at that point would not be as large as if ten repositories had already been run with that schedule, whatever I chose and thought would be perfect but in practice isn't. When you said "stop", I was envisioning that you were referring to interrupting a running maintenance task, but I think you were describing changing — redefining — a schedule? Yes, what I meant was: as an administrator, in my mind I had a frequency and an interval for how the tasks would run, but in practice I've never tried how those tasks would actually perform the maintenance. With serialization, I would get to see that on the very first repository they run against; with parallelization, I won't have that control — it's ten jobs at once. So, for me, it seems like it would be safer if it runs as one job at a time: I get to see it, and I can stop the whole process if I want. And yes, I also meant interrupting — in a serialized fashion I get the option to interrupt it between two repositories, when it's about to move on to the next one. Okay, so serial execution — I think you're saying serial execution may be easier to interrupt, to safely interrupt. Good. Okay. Did that capture what you were thinking, Rishabh? Yes, I think so. Also, another doubt, regarding testing: are we going to use the test-driven development approach — write the test and then implement? How are we going to proceed with that? Good question — preferred development technique. So, Mark has a strong bias, and Rishabh has firsthand experience with Mark's strong bias towards test-driven development. I like test-driven development. I like it because it helps me do a better job, personally. It's: write tests as you go, use the tests to explore, don't run from the tests, and don't be afraid to discard tests when they're no longer helping you. So, Rishikesh, did that answer your question? Yes. Now, this question also highlights a point of hypocrisy, if you will. The git plugin and the git client plugin are difficult to test. Why are they difficult to test? Because they were initially created without tests. And I wish I could say it were different than that, but — I've actually done a 30-minute talk on the history of the git plugin and the git client plugin. For the first 18 months or two years of the life of the plugin there was not a single automated test — not one. They were evaluating prototypes very rapidly, and they didn't get any value out of tests, so they didn't write tests. But that means the structure of the code is not always well suited to writing automated tests. So that's me acknowledging my hypocrisy — though I was not the person who wrote the plugin in those first two years.
If you look at the history of the blame on files in the git plugin, most of the blame on tests carries my name. So I wrote a bunch of tests, but initially there were no tests for at least the first 18 months of the life of the plugin. So, Rishikesh, what I hope that says to you is: we try to be pragmatic. Sometimes it's exceedingly difficult to write an automated test, and then we talk to each other about it: okay, shall we spend the time and energy to write this test? So there may be places where we are not able to write the test. Yes — shame on me for admitting the truth, but yes. Okay, would you say that I'm being honorable here — am I saying things that are accurate? Yes. Yeah, and I definitely agree with that. That was one of the things I learned — the first thing I learned was test-driven development. It's something I didn't consider when I used to give estimates; it was all about writing the feature and how much time that would take. I never considered how much time it would take to test — and it takes considerable time to think about the state space of your feature, whatever it is; even if it's a single line of code, how is it going to affect the users? Especially for a plugin that is distributed to such a wide audience, it is necessary to have those sorts of principles for development. And I would say, while writing the proposal, it should be considered that when you give your estimations — just as I, as an amateur developer, never thought about how much time it would take to test the features I thought I was going to deliver — you need to factor that in as well when you think of estimations. Good, good point. And just so everyone's clear why I think that's so important: within the first 30 days of a release of the git plugin there are 90,000 installations using it around the world, and 100,000 installations probably represent ten times that many users. So we could harm close to a million people if we do something badly wrong. So it is not uncommon for us to be very careful about what we allow into a git plugin release. Yeah — I have to think every time before I code. Right, and part of open source project coding is exactly that: you've got a lot of people who depend on you, so be careful. Not that we stop changing it, but we're careful — we are very careful. All right, we've got two minutes left. Anything else in the last two minutes before we end? I had found the Jira issue where this project idea was initiated for the very first time. I just wanted to share that link and ask whether there was any discussion from the users — from the people who were having this issue. I noticed there were some reports that the data was not being cleared for the master and the slave repositories — for both of them. So is that something we can ensure we address in our implementation? This project is actually not giving real consideration to the agents — what is called a "slave" there — because the cache management is only happening on the controller, the thing that's called the "master" there.
This won't help agent-based environments; it will only address things on the controller. Was that your question, Ariane? Yes. Yeah. So now, this comment from Kalle saying git has an automatic gc: while that's correct, I've not found the automatic gc to be sufficient to solve the problems we had, for instance on ci.jenkins.io. Automatic gc is too lightweight. It correctly realizes that if it performs a gc during a fetch, it's going to slow down the fetch dramatically, and so automatic gc tries to stay as light as it can. Our intention with maintenance is to invest the energy to make things better — to do more improvements, and to spend more time doing those improvements. Yeah, good pointer to this one. All right. Thank you. We've hit our time; thanks very much for your patience. If you're interested, we could meet again in two weeks. Let me double-check — I think that's after we've entered the period when applications are being accepted, but before applications close. So if you're interested, we could plan to meet again in two weeks, on the 15th. Are you interested? Yes. Okay, so we will plan to, and it's on the calendar. We'll plan to meet in two weeks, and yes, reviews between now and then as well. Thanks, everybody — thank you for your time. Thank you. Thanks, Mark.