Welcome. This is the 24th of August, 2022. This is Google Summer of Code, Git cache maintenance. Rishikesh, what topics do we have? We have a lot of topics to discuss today. Mark, can you share your screen and open the Git plugin? Can you run the Git plugin? Sure can. You bet. So you want the current build running? Yeah, the latest one. Okay. Hang on just a minute then. I'm going to share my screen and let's start that. So, share screen. Here we go. Share. Okay. Good. So first, let's go get the latest build. And I assume no changes in the Git client plugin, but I'd better double check that I've got no further changes. The changes you had made were already enough. Let's see. And the pull request. Do you remember the pull request number? Well, I'm not sure. That's okay. I can certainly find it. Let's guess 1310, and we'll go to GitHub and find it. Nope. I added a few tests also. Very good. Okay. So let's go get this build. All right. What we want is this link address, and we're going to go here: Jenkins, Manage Jenkins, Manage Plugins, Advanced, and we paste that URL. Deploy. And now, just for safety, I think we should grab the Git client plugin, most recent pull request version, just in case I'm outdated there. Following the same pattern, let's go find your build, which is right here, 862. Okay. And this one: advanced URL, deploy, and a restart. Now, this Jenkins controller is relatively busy at the moment, so the restart may take a little while. We released a security fix for the Git plugin today, and so my controller has been busy making sure that the security fix is applied, that it's on all the right branches, et cetera. What was the security fix concerned about? Passwords used with the withCredentials wrapper in a pipeline step were not being masked. Relatively low risk, but it was a security fix nonetheless. Not being masked?
Yeah, so they were being displayed as literal text in the build log. And the build also. Okay, coming soon. Really, truly, believe me. So I added tests in the Git client plugin. I've tested all the maintenance tasks for Git versions greater than 2.30. For maintenance tasks on versions less than 2.30, I was not able to write tests, because the way they work is completely different from how it works in versions greater than 2.30. If you look into GC for versions less than 2.30, we are using the git gc --auto command, which works based on the status of the repository; it has internal logic for deciding whether it needs to execute or not. So even though I run a git gc --auto command, there's no guarantee that it's going to run a full GC. Testing that is something I couldn't do, so I have to look into it, but the other tests have been written for versions greater than 2.30. We should see those automated tests here in the commits or in the files changed. If we look slowly, slowly... well, either GitHub's not very fast or my computer's not very fast. I see this little blue line advancing, but nothing that hints it's actually doing the work. All right, while we were waiting, Jenkins came back. Let's check that we've got the right plugin versions. Okay, it's 3.12.0 and 4.12.0. Okay, those aren't obviously out of range. What would you like to show us, Rishikesh? I would like to show the table which I have created. So, okay, for this we go to Git maintenance, and if we set this one to every minute; commit-graph is pretty lightweight. Yeah, I don't run the prefetch, because prefetch has an issue, as we have seen. Ah, okay. Yeah. Do you want to run any of the others?
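The version split described here — git gc --auto for Git older than 2.30, the git maintenance tasks for 2.30 and newer — amounts to a simple version check. The class and method names below are illustrative sketches, not the plugin's actual API:

```java
import java.util.Arrays;

public class MaintenanceSelector {
    // `git maintenance` (commit-graph, prefetch, incremental-repack, ...)
    // arrived in Git 2.30; older versions fall back to `git gc --auto`.
    static final int CUTOFF_MAJOR = 2;
    static final int CUTOFF_MINOR = 30;

    /** Compare a "major.minor[.patch]" version string against the 2.30 cutoff. */
    static boolean isModern(String gitVersion) {
        int[] parts = Arrays.stream(gitVersion.split("\\."))
                .limit(2)
                .mapToInt(Integer::parseInt)
                .toArray();
        if (parts[0] != CUTOFF_MAJOR) {
            return parts[0] > CUTOFF_MAJOR;
        }
        return parts[1] >= CUTOFF_MINOR;
    }

    /** Pick a maintenance command line for a given task name. */
    static String commandFor(String task, String gitVersion) {
        if (isModern(gitVersion)) {
            return "git maintenance run --task=" + task;
        }
        // Legacy path: gc --auto decides internally whether to do any work,
        // which is why its effect is hard to assert in an automated test.
        return "git gc --auto";
    }

    public static void main(String[] args) {
        System.out.println(commandFor("commit-graph", "2.36.1"));
        System.out.println(commandFor("commit-graph", "2.25.0"));
    }
}
```

The point of the sketch is the asymmetry: the modern path maps one task to one deterministic command, while the legacy path collapses everything onto a command whose behavior depends on repository state.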
Yeah, you can run the GC, or incremental repack every two minutes. Oh, okay. Yeah, save the configuration. Oh, no, did I make a mistake? No, it's fine, it's fine. Okay, okay, data saved, and now execute. So we have to wait for a minute, and then you can see the results in the table. Okay. Behind the scenes, this isn't very optimized, because I don't know exactly how to read the files. Basically, what's happening is, whenever I try to add a record into the file, I'm loading the entire file, adding the record to that linked list, and then writing the linked list back into the file. I'm not sure if this is the right way of doing it, or whether I have to read every line and then just append to the end, because I wasn't finding that kind of implementation anywhere. And I'm not aware of a way to append to Jenkins serialized XML files; I think you have to completely overwrite. So the technique you're using is right, as far as I know, because an XML file commonly has a beginning tag and an ending tag, and if you were to append, you'd have added something after the end tag. So I don't expect... okay, so no data available. I didn't refresh this page because I was not able to get the data. And I guess it's possible that it hasn't run yet. Shall we look at the logs just to be sure? Okay, so let's go look at logs, system log. Okay, and the logs will probably be cluttered with all sorts of interesting things because... okay, so I'm not seeing anything in that log. Is there another place where I should go? Should I create a log recorder? Yes, yes. Okay, new log recorder, let's call it maintenance, or maybe git maintenance, and we'll go for the task executor. This one. Yeah, we can add another one. Okay. And the task scheduler. Okay. And all log levels? Yeah. Anything else you want to add? I think that's fine. Okay, why didn't it finish the task? Oh, can we go and look at the table? We sure can. So dashboard, Manage Jenkins, Git maintenance.
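The load-everything, append-one-record, rewrite-everything approach described here is forced by the file format: a serialized XML document cannot simply be appended to past its closing tag. A minimal sketch of that pattern, simplified to plain text lines rather than the plugin's actual XML serialization (file name and record shape are placeholders):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedList;
import java.util.List;

public class RecordStore {
    private final Path file;

    RecordStore(Path file) {
        this.file = file;
    }

    /** Load every existing record; an absent file means an empty list. */
    List<String> loadAll() throws IOException {
        if (!Files.exists(file)) {
            return new LinkedList<>();
        }
        return new LinkedList<>(Files.readAllLines(file));
    }

    /**
     * Append one record by rewriting the whole file: read everything,
     * add to the in-memory list, write the list back out. This mirrors
     * how a serialized XML document must be handled, since nothing can
     * legally follow the document's closing tag.
     */
    void append(String record) throws IOException {
        List<String> records = loadAll();
        records.add(record);
        Files.write(file, records);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("maintenanceRecords", ".txt");
        RecordStore store = new RecordStore(tmp);
        store.append("git-client-plugin.git,commit-graph,OK");
        store.append("git-client-plugin.git,gc,OK");
        System.out.println(store.loadAll().size());
        Files.deleteIfExists(tmp);
    }
}
```

The cost is O(file size) per append, which is tolerable as long as the record list is kept bounded (the retention discussion later in this session).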
No entries yet. Do you have any caches on this? Should have many of them. Let's double check. If I look, yeah, it has quite a number, maybe 10 or 15. Let's check that they have some... whoops. So there are some things here that have content, like this one here. Let's just sort by... okay. So there are definitely repositories that are non-empty, and it looks like some of them have commit graphs. So what was the logs thing just now? Okay, so to ask your question again: the logs, like, it doesn't display... the logs also aren't showing anything. Okay, here we go. Yeah, so: found a modern Git version, running a commit-graph, unlock the cache. So it looks like it's doing its task, but then this isn't running right, and I'm not sure why. Can we run the maintenance thing from the terminal by running that mvn command? Sure, you want to run this command? No, no, the mvn hpi:run, you know, to start it. Okay, so you want to run mvn hpi:run? Yeah. So you want to run a fresh Jenkins from mvn hpi:run? Yeah, because the maintenance tasks are running here on the screen. It's showing it's running, but I'm not sure why it is not writing to the file. Okay, because this thing is sitting inside of a Docker container, it's more challenging for me to get into it. Would you mind if I do that from another system? Yeah, that's fine. Yeah, because I feel the file path, the location of where the file is, is the problem here. So, Rishikesh, can't we check the file where you're actually storing the data? Yeah, that is also an option. Yeah, can we? Okay, well, let's try reading that. I think the file's name is the maintenance task configuration. Oh, no, which one? The maintenance records XML file. Is it at the parent directory? Oh, yeah, it is. No, no... okay, so you're saying that there should be an XML file here, and yet I see queue.xml and workflow, flow execution, and content mappings, and I see a configuration, but I don't see any data file.
The maintenance records XML file. Is it in a subdirectory? It is actually in the plugin's directory, you know, it's not in the Jenkins directory. Wherever that plugin is, I just stored it in that folder while I was developing it. So, you know, the git plugin's root directory: that was where I stored it. Okay, well, so let's go try to find it. And you said the name was something A-I-N-T-E-N, like that. Log, okay. And you boldly gave it a space in the file name. Oh, I'll fix that later. Okay, good. All right, so here, is that what you were looking for? No, no, that's not the file. Oh, the file's name is maintenance records, in camel case: the maintenanceRecords.xml file. And if you look into the code where I created the file, it's stored exactly in the git plugin's directory, so it's not in the Jenkins folder. Well, the directory I'm looking at right now is the Jenkins home directory. Yeah, I didn't store it in that directory, because I wanted to ask where I should store it. So I just stored it in the git plugin's directory. I'm not sure what I mean when you say the git plugin's directory. Oh, like, when I was developing this feature, I have a git plugin directory, right? So in that directory only, yeah. Okay, so you explicitly stored it to slash home slash ruxi 20 slash something, something slash something. Okay, so we will certainly never find it here then. Okay, got it. All right, so I'm looking in the wrong place. But then you probably didn't put it in my git plugin directory either. So how would we find it? Go modify the source code? No, I didn't catch you. Like, don't we have to build this thing again so that the file gets created again and we can read it? We can certainly try. So let's... okay, so I think what you're saying is: let's build this and run it, right?
Yeah, yeah, because I was not sure where to put it in the Jenkins home directory, so I just put it there as a temporary location. So if we build the plugin, the file gets created at that point, or at the point when we actually enter data into it. Basically, I have a check to see if the file exists or not, and if the file doesn't exist, I'll create it. So the thing that we probably need to do as a temporary measure is depend on that. Nope, that didn't do it. Don't we have that incremental build, so I think it would do it, right? Yeah, well, let's see; when I tried to do that, using the incremental build that's on that branch, RC3100, which is, I suspect, out of date. Yeah, we're now at 3232 42. And it's now building, because it probably didn't publish that one to the incrementals, since it wasn't up to date with the master branch. So it will be a while before that's ready for an incremental. I'm not sure how to go find an incremental; maybe hang on and I may be able to find one. Okay, so there's an incremental that I was using. We could try that one. I don't know if this one actually has your changes in it, Rishikesh, given it's 3.12.0. Let's try it and see. Okay, is this one finished? It is not, and it's probably 30 minutes away. Okay, so how do we get this so we can do some quick diagnostics here? So here is the branch that you're working on, and when I did a maven clean, it creates... If you want, I could share my screen. That may be the best. Yeah, let's do that, because I'm obviously not being successful here. I'll stop sharing and let you share yours. Okay, see my screen? Not yet. But do I have share access? You do. You should, anyway; the security panel says participants are allowed to share screen. We just saw something blink there. Rishikesh, are you still there? His screen froze. Yeah, I think we may have lost him. He'll be back.
And in the interim, we're busily building the git client plugin, so we'll have an incremental that we can use. I'm not able to share my screen. I think there's some issue on mine. Have you given permission to Zoom? Yeah, I have. Yeah, yeah. But, you know, my app is crashing and then I'm getting this. Well, so let's see what alternative we could do. We could certainly try to build. I mean, I built and pushed 3.12.0, or built 3.12.0-SNAPSHOT, and it goes into my repository. But then why wouldn't it find that? Like, if you run an mvn hpi:run command, it's not running because we don't have that jar file, right? No, it's not running because it can't resolve the declared dependency on the git client plugin. So let me try it again just to be sure. Let's do a mvn clean install -DskipTests, and see if we can just compile the plugin without using hpi:run. And it says could not resolve 3.12, or maybe 6. That's okay. Using the version of the POM that's on the tip of the branch, it says could not resolve dependencies for 3.11.1-rc3100-something-or-other, and it tried in repo.jenkins-ci.org public. Didn't it try in... well, I guess public is the right place. Okay, so I think right now the place where I'm stuck, at least, is that I'm waiting for the incremental build to be published for the git client plugin, and it is probably still 20 minutes away from being published, because it's got to rebuild itself based off of the master branch. Now, how can I use a local snapshot dependency? It just seems like that should work, shouldn't it? 3.12.0-SNAPSHOT, because that should resolve locally. And now it seems to be resolving it. So, Rishikesh, we may be able to do a mvn hpi:run. It says it did it and installed it. Are you okay if I share my screen and we'll try it again? Yeah. Okay, so here is the build that I ran, and now I'm going to just do this hpi:run.
No need to skip tests, no need to do anything except that, right? Yeah. Okay, now I need a tunnel that goes to that computer, and here is that tunnel. So it will be localhost 8085. Okay, opening my web browser now. Okay, here it is. Now, this one has no caches. We can fake a cache by... Oh, right, right. Okay, good suggestion. Okay, let's do that. You can create a cache directory in the work folder. Okay, so make the cache directory. You can fake... I think it's caches. Oh, is it? Yeah, yeah, yeah. Now, does this need to be a bare repository? Oh, yeah. It's fine. It's fine. So is it okay if it's not bare? No, no, no, let it be bare. Okay, all right. So we now have a directory here, git-client-plugin.git. So now we can... Okay. Git maintenance. Oh, interesting. Did you see that? Yeah, yeah. I don't know, in these versions it's not supported, but in the normal one it's coming. Oh, this is so old. Okay, got it. Right. So notice that it's running an ancient version, 2.332.4. Save, execute, right? Yeah. Can we have the logs as well? Yes. Ah, and there's an entry. Ah, finally. Very good. Yeah. So this is how data is appended into that file. So the reason why it didn't work before is because the path to where I am writing is different. Like, if you go into that folder, you'll find the maintenance records file in the git plugin directory. Oh, that's... you are very bold; you went up several levels. Okay. All right. Great. Yeah. So this is the place where I stored it. That's right. I guess it was convenient. Interesting. Okay. Very good. But it makes it easy for you to diagnose and debug. Okay. So we have a record there. And so if I create more directories, for instance, like this and like this and like this, we now have four directories, and we would expect eventually that those directories will be touched. Yes. And here we see an incremental repack. So I could give it lots more work to do by... what shall we do?
Something very large, like jenkins.io? The thing is, when you clone it, the repository is already optimized when you clone it from GitHub. Right. Yeah. I was actually trying to be a little more unkind here and was going to make it... I give up on that one. I need to find something that I can clone more easily. So how about the JUnit plugin? Okay. So the question is... how's it doing? It's already been through four passes on the git client plugin. It doesn't seem to have detected my others yet, though, Rishikesh. Yeah, it would take time, because I think we added them into the queue, right? So one minute, two minutes, three minutes; they all have the previous... so if we wait for another minute or so, I guess we will see those as well. Okay. Good. And now that's interesting. So in this case, the repo size... it may have been that the commit-graph command initially run was somehow seeing an incomplete repository, and this one then sees the complete one. Very interesting. Okay. And then there are the search functionalities as well, where you can search, and those things are working. So if I search for 18. Oh, very nice. 56. Or this very magical number. That's great. I look for commit, true. Apparently, everything matches true, because every line has a status of true. Very nice. So that search facility is a natural part of the data tables that you included? Yes. Yes. Uli, Dr. Hafner, will be so pleased. Well done. The thing about it is, what exactly is happening right now is I'm loading the entire data set, like reading that XML file into a list and then displaying it. There's no way of lazy loading it, you know, fetching only chunks of five or ten records. Assume there are 200 or 300 records; all 200, 300 records are delivered to those tables.
Well, but I think that's very practical, because when you read them, you're also discarding outdated records every time you rewrite the file, aren't you? Yeah. So you're not allowing it to ever really grow large. No, no, I didn't get you. And the reason why the other caches aren't appearing, I figured it out: because, you remember, we have a static HashSet in the abstract Git SCM class that reads all the caches when we start the Jenkins controller. Okay. And we only add the caches from the UI. So if we restart this Jenkins instance, only then will we be able to see those caches. Okay. So, restarting. So there is no way for Jenkins to poll the updates in a file, right? This has to be an operation where the plugin updates the file, and when we refresh, we'll be able to see the results for those repositories. Yeah, yes. There is no way of polling right now. I don't know how to do that polling mechanism. I tried looking into it; there was this Ajax request, but I couldn't get the data from the Java file. That's okay. Yeah, so now I think we will be seeing other plugins as well. Well, it may be... I don't think there's any change here, so I assume we may have to wait one minute before the commit-graph will run again. Yeah. Now, let me check as well: did I make some other mistake? Hey, those all look like bare repositories. Ah, there we go. Here's the Elastic Axis plugin. So yeah, we've got the Git plugin. Yeah. Yes, and the Git plugin. Very good. Well then... and none of these, because we're not doing... oh, now, oddly, here there's no entry for garbage collection on any of these, even though this was trying to run it every third minute. There's a second page as well, Mark. Oh, I need to look at more pages. Very good. Oh, but still there is. Oh, right.
Well, okay, so is it that it hasn't completed the incremental repack and the commit-graph? No. Now it's at 17. I think the reason behind this would be... can you scroll up? Because every first minute we are adding a commit-graph, and every third minute we are adding a GC. So I think there's a clash, and only the commit-graph is being added into the queue because of our cron syntax, because if you think about it, the GC also is added into the queue, but only the commit-graph is getting the chance to execute. So you're thinking that if I schedule commit-graph at every seven minutes, that would give GC an opportunity to execute? Yeah. Or let's see. One, two, three. So if I want a distinct bit every time, then what I need is two, four, eight. No, no, but you said you think they're colliding with each other's definition? Yeah, because what exactly has happened here: this commit-graph is being added every minute into the queue, right? And GC also is added. So the commit-graph is being executed first, even though the GC is present. So I think GC is not getting an execution; it's in the starvation stage. Okay, so let's go ahead. Now, I was just saying: let's say I have four repositories, and while the third commit-graph operation is running, my GC for the first repository has come into the queue. Now, once these commit-graphs are over, shouldn't the GC start to run, and then the other commit-graphs get into the queue? But it depends on the way the data is added into the queue. So basically, what I am doing is just iterating through all the caches, like iterating through all the maintenance tasks, and then adding them. So if you think about it, first the commit-graph is added at the first minute, then an incremental repack is added, then again the commit-graph is added.
Okay, and then again, a GC also is added. So if you think about it, if I'm adding it every alternate minute, I feel the commit-graph is the only one staying in the queue. And yet we're definitely seeing incremental repacks. And right now we've got 27 entries, so it's not a trivial amount. Well, how about a different approach? Let's attempt to garbage collect every minute. Now, Rishikesh, I thought you had said that there was some issue with GC, or was it... no, there's an issue with prefetch. Okay, so this should have redefined it so that it will garbage collect every minute. And we could even go so far as to say, hey, let's not incremental repack and let's not do commit-graph. Sorry, say that again, Rishikesh, I missed it. We can see GC, Mark. Oh, you can. Oh, well, that's... oh, there we go. Okay. Very good. So, okay, there's still an open question for me on how we assure that all the tasks get run. So if I now put commit-graph every two minutes and incremental repack every three, I know there will be a collision between incremental repack and GC. And now I don't have a way... the table doesn't give me a way to sort by execution sequence, right? So I can't see which things executed most recently. Can I? Oh, I can see the duration. So the Git plugin spent 600... is that milliseconds, 600 seconds? So it spent 600 milliseconds running GC, whereas the node label parameter plugin only spent 98. But go ahead. No, no, Mark, I'm sorry. Please continue. No, you were saying something. Yeah, and I apologize; now I don't remember. It's clearly getting late for me, and I'm not thinking as clearly as I should. Rishabh, you go ahead. I wanted to ask what this previous execution column is signifying. I mean, I see a constant... the last executed date, you know, that date and time would be displayed. This is a random number I put.
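One way to see the scheduling concern discussed above is a toy model: each minute, every task whose cron fires that minute is appended to the queue in a fixed iteration order, but only one queued entry executes per minute. In a FIFO queue the less frequent task does still run, but the backlog grows without bound and its executions fall further and further behind, which looks like starvation in the table. Task names and periods below are illustrative, not the plugin's actual scheduler:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class StarvationSketch {
    /** A maintenance task scheduled every N minutes (toy model). */
    record Task(String name, int everyNMinutes) {}

    static List<String> simulate(List<Task> tasks, int minutes) {
        Queue<String> queue = new ArrayDeque<>();
        List<String> executed = new ArrayList<>();
        for (int minute = 1; minute <= minutes; minute++) {
            // Enqueue every task due this minute, in fixed iteration order.
            for (Task t : tasks) {
                if (minute % t.everyNMinutes() == 0) {
                    queue.add(t.name());
                }
            }
            // Only one execution slot per minute: the backlog grows when
            // more tasks are enqueued per minute than can be drained.
            if (!queue.isEmpty()) {
                executed.add(queue.poll());
            }
        }
        return executed;
    }

    public static void main(String[] args) {
        // commit-graph every minute, gc every third minute.
        List<String> run = simulate(
                List.of(new Task("commit-graph", 1), new Task("gc", 3)), 9);
        System.out.println(run);
    }
}
```

If the real executor prefers some task type over queue order rather than draining FIFO, the delay becomes outright starvation, which would match the behavior observed in the demo.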
So actually, there's a timestamp there. Okay. When was it last executed? Okay, got it. And that makes sense. Okay, so we're now at 37 entries, and we definitely have GC. So I think if we refresh the page, you get the first five entries as the latest ones. Okay, good. And without any sorting, these would be the latest ones. Okay, so it performed a commit-graph. And okay, that's a little surprising, that it would show multiple GCs one right after another on the same repository. Huh. Okay, how about let's look for GC. And there are already 20 entries with GC. Good. Here, I was thinking, if you look at this example, the rate at which the file is growing is very, very fast. So is there any way of me cleaning this file? Because I didn't add any mechanism for cleaning, you know, having a fixed size; data would only be added into the file, and there's no way of restricting it. But tell me... because you're rewriting the file every time you add new data, and you've got some disposal process for the data, don't you? So are you saying you're only keeping so much, or are you keeping data infinitely? Yeah, for now, it's like infinity, because I didn't add anything for removing the old data. That mechanism has to be added, but I was not sure how to proceed with that. Well, but isn't the removal of the data just a matter of deleting it from the linked list? And then when it's saved to disk, it will be gone. Yeah, but then how many records do I store? That was my question. Assume there are many, many caches, and if we have a fixed size of, like, 100, I think we wouldn't even display data for some of the caches present, because all of them together would cross 100, for example. So what would be the fixed amount, or what would be the fixed size?
Yeah, good question. So do we ask the user to give us a value? Do we just choose the value ourselves? Is there a way for us to club these together, to only publish the record for a repository once the batch of tasks designated for it has been executed? Then we publish that, instead of publishing each entry of the repository with each task. Sorry, ask your question again, Rishabh. So my question is: let's say I have a repository, the Git plugin repository, and I have commit-graph and I have GC. The first commit-graph and the first GC, that is the batch of tasks, the first series of tasks, that are going to be run for this repository. So once that is done, is it possible for us to then show one result, instead of showing each record? Because with this approach, we don't have the control there, which is what Rishikesh is trying to say, right? If you're going to delete entries after a fixed number of rows has been reached, you cannot make sure that each repository present in the cache is going to be displayed on the table. Because it is very possible that, since commit-graph was running every minute, and there are, let's say, 20 repositories, the table would be filled within, let's say, 10 minutes with 100 entries. And then you have to delete, because that is how your disposal strategy has been set. So, go ahead. My question is: how does the user make sense of this data? In the sense that, if we are able to batch... I mean, if I'm able to see, for a repository, what tasks have been run, and it can be multiple tasks.
And along with that, the count of the number of times that task has been run; that would consume less space within the table, is what I'm trying to say. Within a row, if I'm able to show more data... I don't know if that's possible or not, but I guess that would make it easier for us and easier for the user, because right now, when we've collected, let's say, 67 entries, how do I make sense of this data? Right. Well, could I try a different analogy? For me, it would be nice if we had some form of a sequence number to tell us which thing was executed first and which was executed later; whether that's this previous execution column or something else, it might help people comprehend what the sequence was. But in terms of "shall we limit to a fixed number of records": what if we used a different limiting algorithm, and said we will limit to not more than N records per combination of repository name and task? So think of the repository name and task as a job in Jenkins. It's not, but think of it that way. And we say, I'm going to keep five, or I'll keep seven, no matter how many there are. So if there are 10,000 cached repositories, we'll keep 50,000 records, because we need to keep some record for every one of those repositories and every task that they ran. If we use that technique, Rishikesh, that means you've got to do something more sophisticated as you delete things from the linked list. But I think iterating a linked list and discarding things from it is not that painful. So what you're saying is: for each repository, we would hold, like, five records of each type. So assume the Elastic Axis plugin: five commit-graphs of theirs, and then five GCs. That is what you're saying, right? That's what I was thinking. Does that sound reasonable to you?
Does that sound like it might work for the user? Yeah, that sounds reasonable. We could store it in a hash map as well, you know. Oh, right. Certainly there are other data structures that make that style of storage much easier, aren't there? Yeah. So that sounds very reasonable to me, because we think you care, as a user, about the task that's being performed and the repository where it's being performed. Now, is there a way, with this very elegant data table, to do some form of parent-to-child collapse, where all commit-graphs for a single repository are grouped together automatically as parent and children? I don't know; you can look at the data tables and see what Uli has made available. I'm not sure if it's got a grouping concept or not. It has some concept of a collapse; I've seen it. Oh, it does. Okay. Can you explain what that collapsed grouping feature was about? All I was thinking was: at the moment I see many rows with Elastic Axis plugin GC, and for visualization purposes, it might help me if those were an expandable thing, where this shows up as one, and older copies of older results of the same thing are hidden under it, as a collapse and expand. Now, that is completely not required, right? It's just: as a user, it might be easier for me to understand what's happening if I collapse and expand to see what the history looks like. That actually makes sense. That would even be easier to read. I would try implementing that, see how it works out. Yeah, and it's perfectly understood if the ultimate answer is, hey, that doesn't work, or that just doesn't make sense, that's a bad user experience, don't do that. Then I completely understand that as well. This is actually really quite impressive. I mean, look at this.
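The retention scheme proposed above — keep at most N records per (repository, task) pair rather than a single global cap — can be sketched as a prune pass run before each rewrite of the file. The record shape and class names here are simplified stand-ins, not the plugin's actual classes:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RetentionSketch {
    /** One maintenance record (illustrative stand-in for the real record class). */
    record Record(String repo, String task, long timestamp) {}

    /**
     * Keep at most maxPerKey records per (repository, task) pair,
     * preferring the most recent ones. This guarantees every cached
     * repository and every task stays visible in the table, unlike a
     * single global cap that a frequent task could monopolize.
     */
    static List<Record> prune(List<Record> records, int maxPerKey) {
        Map<String, List<Record>> byKey = new LinkedHashMap<>();
        for (Record r : records) {
            byKey.computeIfAbsent(r.repo() + "|" + r.task(),
                    k -> new ArrayList<>()).add(r);
        }
        List<Record> kept = new ArrayList<>();
        for (List<Record> group : byKey.values()) {
            group.sort((a, b) -> Long.compare(a.timestamp(), b.timestamp()));
            int from = Math.max(0, group.size() - maxPerKey);
            kept.addAll(group.subList(from, group.size()));
        }
        return kept;
    }

    public static void main(String[] args) {
        List<Record> records = List.of(
                new Record("git-plugin.git", "gc", 1),
                new Record("git-plugin.git", "gc", 2),
                new Record("git-plugin.git", "gc", 3),
                new Record("git-plugin.git", "commit-graph", 1));
        // With maxPerKey = 2, only the oldest gc record is discarded;
        // the lone commit-graph record always survives.
        System.out.println(prune(records, 2).size());
    }
}
```

Since the file is rewritten wholesale on every append anyway, pruning at that point adds no extra I/O, only the in-memory grouping.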
I can sit here and search, and there it is, and now the 25 applies to my search. Uli and his students have done amazing things here. This is great. So as a deliverable, we've achieved what we want to show to the user, and I think what we've recently discussed is an optimization that we could perform, if that is possible. Yes. Yeah. Right. Yeah. There are a few things: if you go into that terminal, the one from which you started this, you'll find git version calls. Can you open that terminal? Yeah, you will find these. Okay, this thing keeps happening, because when I want to get the Git version of the underlying computer, to check whether I should run legacy maintenance or the normal maintenance, I need to call the underlying Git for its version. So this keeps happening. Is there any way of stopping it? Because I couldn't find one. I think there must be, but we'll have to look at it and see; it's not immediately obvious to me. Can't we somehow remember that we found this version before? Is there a way to remember that? Okay, there we go. That's kind of elegant. We see... okay, there's my queue; now we watch to see when it moves. The reason why I didn't create a field to remember it is because in Jenkins we have a way of changing the Git executable, right, the underlying tool. So assume, when I want to run the maintenance task on the next cache, I would be using the version set in the UI, the global configuration. That was one of the reasons why I didn't store the Git version. And that makes sense, at least at some level, because on my controller, I don't know why I would, but I could have multiple command line Git versions installed, where I've got several different command line Git tools for some specific need.
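The concern raised here — don't cache the version because the Git executable is configurable — can be reconciled with avoiding repeated `git --version` calls by keying the memo on the configured tool path: the same tool is probed once, while a changed tool gets a fresh lookup. This is a hypothetical sketch, not the plugin's code; the probe is injected so the idea is testable without running git:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class GitVersionCache {
    // Cache keyed by the configured git executable path, so changing the
    // tool in the global configuration still triggers a fresh probe.
    private final Map<String, String> versionsByTool = new ConcurrentHashMap<>();
    private final Function<String, String> probe;

    /** probe stands in for actually running `git --version` for a tool path. */
    GitVersionCache(Function<String, String> probe) {
        this.probe = probe;
    }

    String versionOf(String gitToolPath) {
        return versionsByTool.computeIfAbsent(gitToolPath, probe);
    }

    public static void main(String[] args) {
        int[] calls = {0};
        GitVersionCache cache = new GitVersionCache(tool -> {
            calls[0]++;              // would be a `git --version` subprocess
            return "2.36.1";
        });
        cache.versionOf("/usr/bin/git");
        cache.versionOf("/usr/bin/git");     // served from the cache
        cache.versionOf("/opt/git/bin/git"); // different tool, fresh probe
        System.out.println(calls[0]);        // probed twice, not three times
    }
}
```

A wrinkle the sketch ignores: if someone replaces the binary at the same path mid-session, the cached value goes stale, so an invalidation hook (or caching only for the duration of one maintenance run) might be the safer variant.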
Okay, one, two, three, four, five, six. So I think it just completed more. And here we go, 112 records. Yeah, so that was one thing I wanted to discuss. There are a few other things. One thing is about documentation: do I document the code which I've written, and when do I do that? The answer is yes. The documentation should go into this location here, this README. And given the nature of this, that it's got what I'd call a very nice UI component, you should probably take a screenshot and embed it. Just as this picture has a screenshot, you should probably put a section in there that describes it and has a screenshot: look, this is how it looks. Now, go ahead. Regarding documenting the code, do I do that as well? You know, the methods, and the parameters a method takes, through Javadoc? Javadoc is highly recommended for public methods. So yes; otherwise, somebody else has to do it, and if it's me, I'll just make wild guesses at what your intent was. I'll write those wild guesses, and people will then complain: Mark, you made a wild guess and you were wrong. That was something I wanted to ask. Any suggestions on improving the UI? I don't have any. I find the cron syntax a little bit challenging, but it's very much the way Jenkins does things, so you're absolutely consistent with the rest of Jenkins. Cron syntax is how it's done. I'd just love to have a calendar picker, all sorts of exotic things like that, but the problem is none of them are functionally rich enough to replace cron syntax, because with cron I can say daily, I can say hourly, and so on. Now, I guess there is one thing: it would be nice if we could get online help available that would coach users on the cron syntax, because we don't have help icons here.
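As a minimal illustration of the Javadoc style recommended above, here is a hypothetical helper that decides between the legacy path and `git maintenance`; the class, method name, and version cutoff of 2.30 are taken from the discussion in this call, but this is a sketch, not the plugin's actual code.

```java
class MaintenanceSupport {
    /**
     * Decides whether the legacy maintenance path should be used.
     *
     * @param gitVersion the detected command-line Git version, for example {@code "2.29.0"}
     * @return {@code true} if the version is older than 2.30, where the
     *         {@code git maintenance} command is unavailable and
     *         {@code git gc --auto} is used instead
     */
    public static boolean useLegacyMaintenance(String gitVersion) {
        String[] parts = gitVersion.split("\\.");
        int major = Integer.parseInt(parts[0]);
        int minor = Integer.parseInt(parts[1]);
        return major < 2 || (major == 2 && minor < 30);
    }
}
```

Documenting the parameter and the return condition this way is exactly what saves the next maintainer from the "wild guesses" mentioned above.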
And maybe a help icon even for the commit graph: what is a commit graph, and how does it help them? What is prefetch, and how does it help? Because we can certainly describe in the online documentation, in the README here, what these tasks are, but the user is accustomed to reading the help from a question mark right next to the field. I tried adding the help files, but I was facing some kind of problem while adding them, so I couldn't add the help files here. Yeah, and we may have to request assistance from someone else, because my success rate with adding help is far less than 100%. I have to work very hard to find the right place to put it in the UI elements. Also, there are these commands; I was thinking of suggesting that users use commands such as hourly and daily, because the underlying architecture of hourly is: assuming I schedule commit-graph hourly and GC also hourly, both of them don't run at the same time. There's a random time selected within each hour, and both of them are scheduled in such a way that Jenkins is not overloaded. So I was thinking of adding that into the README as well, so that it will be beneficial. And I think that is a very wise thing for you to recommend, especially because it avoids the risk of a typographical mistake which causes the tasks to run much more frequently than the user wanted. It's not free to run these operations, right? The execution time is a reminder to us: even on a perfectly packed repository, and one as small as the Git plugin, something under 100 megabytes, it still takes 600 milliseconds. Actually, we ought to, just to be absolutely obnoxious: --bare, --reference. Just a minute... bugs. Okay, all right, so cloning now.
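The load-spreading behavior described above comes from Jenkins's extended cron syntax, where `H` substitutes a stable hash for a fixed value, so two tasks scheduled "hourly" land on different minutes instead of colliding at minute 0. A few examples of the kind that could go in the README (the exact README wording is up to the author):

```
# Jenkins cron syntax for the maintenance schedule fields.
# 'H' is a stable hash, so two "hourly" tasks run at different minutes.
H * * * *      # once an hour, at a hashed minute
H H * * *      # once a day, at a hashed hour and minute
@hourly        # Jenkins alias for "H * * * *"
@daily         # once a day; Jenkins hashes the exact time
```

Recommending the `@hourly`/`@daily` aliases, as suggested above, also sidesteps typos like `* * * * *` that would run a task every minute.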
Now, just to show you how embarrassing this is: it's 105 megabytes. So when that one runs, and now that you've set it the way we want, we'll see it once we restart. Yeah, so regarding the prefetch, is there, like the way you just commented, what about that? Because we have private repositories as well, right? So how do we proceed with those? Oh, right, so prefetch. Well, how is the cache being populated? You don't know the credentials for that repository, right? You simply cannot, because they must not be written to the disk; if they're written to the disk, that's actually a very bad choice. So maybe the answer is that we just skip prefetch if it fails due to credentials, because we can't do a prefetch without authentication, and in order to have authentication, we would have to record somewhere the credentials that were used to access that cache. Is there any way of knowing beforehand that it's a private repository? Because once I call the command-line Git and start scheduling and running the prefetch command, I have no control over the process. Okay, so a technique you can use: you could make a call to ls-remote, and ls-remote will fail if it's a privileged repository, a private repository. So you're saying a git ls-remote? Yes. In the Git client plugin, there is a method that invokes ls-remote, and if you call that method, it will fail for you if the repository is private and you have not provided credentials. Okay, that kind of makes sense. Then, using that method, I think I can skip the prefetch maintenance for private repositories. Yes. Yeah, I think that was it. All right. I am going to get some sleep. Oh, go ahead.
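The skip-on-failure idea above could be structured like this. The probe is passed in as a `Callable` so that the real implementation can plug in the Git client plugin's ls-remote call; the class and method names here are illustrative, and the only assumption is that an anonymous ls-remote throws for a private repository.

```java
import java.util.concurrent.Callable;

// Sketch of "probe before prefetch": attempt an ls-remote style call without
// credentials; if it throws, treat the cache's upstream as private (or
// unreachable) and skip prefetch instead of failing the maintenance run.
class PrefetchGate {
    static boolean shouldPrefetch(Callable<Void> anonymousLsRemoteProbe) {
        try {
            anonymousLsRemoteProbe.call(); // succeeds only for public repositories
            return true;
        } catch (Exception e) {
            return false;                  // private or unreachable: skip prefetch
        }
    }
}
```

Keeping the probe injectable also makes the skip logic testable without network access, which fits the earlier discussion about which parts of the maintenance code can get automated tests.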
Yeah, we can read the first page and you'll see whether we've got the cache. Yes, let's do it. Okay, Git maintenance. And let's look for jenkins-bugs. There it is. And the execution time for that GC is longer than any other GC, right? Yeah, it's around 10 seconds. Okay, so if we sort by execution time, very good. All right. And that is definitely, let's look at that, a no-op. Because if we look in jenkins-bugs' Git objects directory, there's nothing in objects except pack, and in pack there is exactly one pack file and one bitmap. Now, there isn't a commit graph yet, is there? Yeah, so it's run GC twice. Now, that's a little surprising: it's run GC twice but has not run commit-graph. That may be back to my collision. Sure. Okay, so jenkins-bugs. Whoops. So let's make it two, three, five. Save. Okay, now we refresh, and now in jenkins-bugs... So I think with this we have achieved the deliverables we wanted. There are a few other things which can get better, so I would be working on those. The tests for the Git client plugin have also been written, just not for the legacy commands; those are not working, so I didn't write the tests for those. Okay. Yeah, I'll look into that as well. Very good. Excellent work, Rishikesh. Really good. So we will plan to meet again next week. Now I have to warn you: next week I am arriving home about 12, maybe 24 hours prior to our scheduled meeting. I leave Alaska on an airplane to return home after having visited my grandchild; my new grandbaby was just born in Alaska. So when we meet a week from today, I may be even less functional mentally than I was today. I apologize for that in advance, but I may be very sleepy. We can schedule the meeting for the next day as well, on a Thursday, or on a Friday.
So for me, I'd prefer Tuesday, and then let's see if we need it. Sorry, for you, what is Wednesday? I must talk in your time zone. So the Wednesday morning meeting actually works quite well; it's just that if we find, on a Wednesday when we're meeting, that I'm not useful, then I may say, okay, let's try for another day. Yeah, because if we have the meeting, we can discuss and get things done. So it's fine if it's on a Monday or a Tuesday or a Thursday or a Friday. That's up to you. Great. Very good. Well, Rishikesh, thank you very much. I'm going to go ahead and, I assume, we'll call an end to our session. I hope to post the recording tomorrow. Thanks very, very much. Thank you.