Welcome, it's the 1st of September 2022. This is the git cache maintenance Google Summer of Code meeting. So, Rishikesh, the first topic I'm aware of is that we need to be sure everybody's okay with extending the end date a little bit. Did you agree with Jean-Marc on an amount? Tell us more about it.

First I thought of discussing what the pending items are, the things we still have to work on. A few of them: first, the prefetch command isn't working; there is an issue with private repositories, so that has to be fixed. Then I need to write a few tests; there are pending tests for some of the functionalities. The Javadoc, the README, and a blog post are also pending. I think that's the pending work.

Okay. So the prefetch one, I'm not sure how that is going to be solvable, thinking about the layering of things. We would have to somehow remember the credential ID that was used to access the cache, and that may actually not be what the administrator wants. They may say, hey, I don't want to have something that is remembering credentials for the cache.

I was thinking of skipping the caches for private repositories. But then I don't know how I would detect whether it's a private repository or not.

I think the answer there is pretty simple: if the repository uses the SSH protocol, it must be private. If the repository protocol is SSH, you can't access it without an SSH key. Now, Rishabh, help me — is that correct? If I attempt to access a public GitHub repository with... let me do a quick check just to be sure, but I think that's accurate.
Yes, from my experience as well: when I work with private repositories, it's always over the SSH protocol, and you have to set up your own keys; the key needs to be shared to be able to establish a connection. I was trying the HTTPS protocol and it says it's kind of expired on GitHub — they removed password-based authentication.

So, the protocol is definitely not expired. HTTPS is actually... there were periods where GitHub said it was their preferred protocol, because there are elements of HTTPS that are much more scalable than SSH. For an organization like GitHub that wants to do massive-scale delivery, HTTPS has much better-known quantities in terms of how you scale HTTPS connections and how you scale the TLS processing. But the specific use of a username/password pair — using your literal GitHub password — yes, they correctly have deprecated that and said stop doing that. What you do instead is generate a personal access token, or some other form of credential, and use that.

But that could also be used for private repositories.

Yes, and in fact that's a pretty common usage. But the problem then is that the cache maintenance code has to know what those credential IDs are, and it doesn't really have a context in which to obtain those credentials. So because it doesn't, I'd say prefetch is ignored, or cleanly skipped, for private repositories. If people come back and say, hey, I desperately want prefetch for private repositories, it'll take some more design work, because we would have to have some way of capturing which credential was used to create the cache, remember that, and then reuse that remembered information.

So how do we plan on skipping it? Is there any mechanism, any command?
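The heuristic discussed above — treat SSH-protocol remotes as private and skip prefetch for them — could be sketched roughly as follows. This is a hypothetical helper in Python for illustration, not the plugin's actual (Java) code:

```python
def is_ssh_remote(url: str) -> bool:
    """Heuristic: an SSH-style remote URL likely requires credentials,
    so cache maintenance could skip prefetch for it."""
    if url.startswith("ssh://"):
        return True  # explicit SSH scheme
    # scp-like syntax, e.g. git@github.com:jenkinsci/git-plugin.git
    head = url.split("/")[0]
    return "@" in head and ":" in head
```

Note that, per the discussion, an HTTPS remote is *not* assumed public by this check alone — a token-authenticated HTTPS private repository would slip through, which is why the reactive "try and skip on failure" approach below is still needed.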
Try the request, and when it fails, ignore the failure. Okay. Now, Rishabh, does that seem reasonable to you — just attempt it and let it fail?

Yes. I mean, I also don't see any other way, because we can't determine whether it's a private or a public repository. So all we have is a reactive measure. I agree.

So that is one of the major pending tasks. The other tasks, I think, are minor: the Javadoc, and updating the README as well. Some tests I'm finding difficult to write — the git legacy maintenance test, for one; I tried testing it but wasn't able to. So those are the pending items. Last week I did a lot of work on improving the UI, the table. I spent a lot of time and effort thinking about which data structure to use, so I think we can have a look at that right now, if you want.

Okay. You want me to bring it up in my Jenkins instance?

Yeah. I updated the root path in that XML file as well, so I think it should work.

Okay, well, let's go get it then. I'll just share my screen and we will capture it, so that we can see all the steps. Sharing my screen now. What you should see is — whoops, that was not what I wanted — what you should see is the git client plugin. Let's go to ci.jenkins.io, and I'm going to grab the git client plugin build from your pull request — that's pull request 862. Okay, so this is the one we need, and what we need is this file. Then we go to my Jenkins controller, and the git client plugin from that build is being deployed now. All right, and now let's go get the git plugin.

Last time, if you remember, we were scheduling maintenance tasks every minute and every two minutes, and one of the maintenance tasks wasn't running at the right time — it was a bug. And that was the same bug I faced during the presentation.
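The "attempt it and let it fail" approach could be sketched like this — a Python analogue, with a hypothetical function name; the real plugin would go through its own Git client API rather than a subprocess, and `git maintenance run --task=prefetch` requires git 2.30 or newer:

```python
import subprocess

def try_prefetch(cache_dir, run=subprocess.run):
    """Attempt the prefetch maintenance task on a cache directory; treat
    any failure (e.g. a private repository with no usable credentials)
    as a clean skip rather than an error. Returns True if prefetch ran."""
    result = run(["git", "-C", cache_dir, "maintenance", "run", "--task=prefetch"],
                 capture_output=True, timeout=600)
    return result.returncode == 0

class FakeResult:
    """Stand-in for subprocess's result, so the idea can be shown without git."""
    def __init__(self, returncode):
        self.returncode = returncode

# A failing prefetch (e.g. auth error, exit code 128) is silently
# treated as "skip this cache", not raised as an exception.
skipped = try_prefetch("/tmp/cache", run=lambda *a, **k: FakeResult(128))
```

The injectable `run` parameter is only there so the reactive behaviour can be demonstrated (and tested) without touching a real repository.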
So, actually, I had created a global Calendar object, but I have to check the date and time every time I evaluate the cron syntax. I created that Calendar object only once, so it was not re-created every single time — that was the reason.

Okay, and that bug, as far as you know, is fixed?

It's fixed, yeah.

Okay, well, let's do it. Let's see a demonstration and see how it looks. Also, the UI needs to be a little more friendly — little help tags. What is GC, what is prefetch — those things are kind of missing. Right. Interesting.

Okay, here we go. Starting now. This instance has probably several thousand jobs by now. First, let's check to be sure that the plugins that we hoped would be installed have in fact been installed. [Checks the installed build numbers.] Okay, so we have the expected versions. Now, if we look at maintenance — here are the old definitions.

Yeah. Which ones would you like, to demonstrate the maintenance tasks?

Sure. And I think you already have a file, right — the maintenance XML file? We have to delete that, because I updated the data structure within it.

Okay, so let's go delete it then.

Yeah, the maintenance records XML. Yes, that one.

Okay. Now can we refresh this page? So you're okay if I remove it?

Yeah, you can. It's fine, just refresh this page.

Okay. So now we have no data. Now, can you click on Execute? So, Save and then Execute.

Right. Okay, executed. So now, every minute it should do a commit-graph, every two minutes a garbage collection, and every three minutes an incremental repack.

Exactly — every three minutes, because there would be a commit-graph in the queue as well. This is the data, and then if you click on Expand...

Oh, here.
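The scheduling bug described above — a time object captured once at construction and reused for every later cron check, so later checks compared against a stale timestamp — can be illustrated in miniature. This is a simplified Python sketch of the failure mode, not the plugin's Java code:

```python
from datetime import datetime

class BuggyScheduler:
    """Captures 'now' once at construction; every later cron
    comparison uses this stale timestamp -- the bug."""
    def __init__(self):
        self.now = datetime.now()      # created only once

    def current_minute(self):
        return self.now.minute         # never advances

class FixedScheduler:
    """Re-reads the clock on every check, as the fix did."""
    def current_minute(self, clock=datetime.now):
        return clock().minute

buggy = BuggyScheduler()
# With an injected clock, the fixed version reflects the time of the check:
fixed_minute = FixedScheduler().current_minute(lambda: datetime(2022, 9, 1, 10, 42))
```

The symptom matches the transcript: a task scheduled for "every N minutes" never fires, because the comparison time is frozen at whenever the object was first built.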
Oh yeah — in the next maintenance run you would see those as well.

Okay. All right, so now — interesting, it shows 60 tasks. This is the current time. So it has executed commit-graph very quickly, but it's not yet run any... oh, there it is — now it's doing an incremental repack.

And you can click on Expand as well; you'll find the commit-graph data and the incremental repack. So here, basically, I'm storing five records for each maintenance task, so there would be around 25 records for each cache. This is what I was working on; it took a lot of time to figure out how to get the data. And the repo name has to be updated, because right now this is the internal hash that's being used.

So will you provide additional data — that is, the remote repository URL — or is that separate somehow?

We could provide the remote. I wasn't sure what to do with that, so I didn't proceed with it, but if you want, we can go ahead with putting in the remote URL.

For me at least — let's test with Rishabh — but for me, the repo name is helpful if I need to go into the directory. However, it's not helpful to know just the repo name; what I care about is, for a particular remote repository, how long did it take to perform its tasks? Okay, so we've got commit-graph.

Just to check my understanding of this Expand feature: if a particular task is executed on a cache, and I expand, I can see the multiple tasks associated with that cache?

Yeah.

So if each cache has its associated history of all the tasks that were executed, why do we have this column called Tasks there?

This one shows the latest task that has been executed on the cache, and when you expand it, it shows you the history — the previous tasks that have been executed.

So then there is no entry in the main table for this cache with a task called incremental repack?
Oh, there would be an entry within the table, right? What I'm trying to say is that when incremental repack ran on this cache, we would have made an entry within this table, right?

Yeah — that is in the sub-table right now. If you click on Expand for that cache, the incremental repack would be stored there.

My point is that once incremental repack runs on this cache, it would make another entry within the table — I'm not talking about the sub-table, I'm talking about the larger table. It would make an entry there as well?

Yeah, yeah. When it ran, it would make an entry there, and when a new maintenance task is executed on that cache, that entry would be replaced and the old one added into the sub-table.

Okay. That entry doesn't exist now... oh, well — actually, here it is. So here we see it just got a fourth entry, for commit-graph. So this one was there before, this was there before, and I believe this is a brand new one — no, the other direction: it's this one that's the new one. Before, we had only rows two through five, and this one has been added.

So if you refresh now, I think we would have one more — an incremental repack here — because that is the latest maintenance task run on all the caches. And it has also been added into the sub-table.

So now if I search the table, would I be able to find an entry for this cache with GC — in the main table?

No, no. You have to go into the sub-table to see the data for the previously executed maintenance tasks.

Got it. So once more than one task has run on a particular cache, you show the latest task in the main table, and then within the sub-table I can see the history.

Yes.

So that is why there are 56 entries, then.
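The table's data structure, as described — one "latest" record per cache in the main table, and a bounded history (five records per task) in the expandable sub-table — could be sketched like this. Field and class names here are assumptions for illustration, not the plugin's XML schema:

```python
from collections import deque

class MaintenanceRecords:
    def __init__(self, history_per_task=5):
        self.latest = {}    # cache -> most recent record (main table row)
        self.history = {}   # (cache, task) -> deque of past records (sub-table)
        self.n = history_per_task

    def record(self, cache, task, duration_secs):
        entry = {"task": task, "duration": duration_secs}
        # The previous "latest" row moves into the expandable sub-table,
        # capped at the last N runs per task...
        prev = self.latest.get(cache)
        if prev is not None:
            self.history.setdefault((cache, prev["task"]),
                                    deque(maxlen=self.n)).append(prev)
        # ...and the new run becomes the single row shown in the main table.
        self.latest[cache] = entry

records = MaintenanceRecords()
records.record("git-abc123", "commit-graph", 1.2)  # hypothetical cache dir name
records.record("git-abc123", "gc", 7.5)
```

With roughly five maintenance task types, the "five records per task" bound gives the ~25 records per cache mentioned in the discussion.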
I've got... there are 56 cache subdirectories in my folder. Is that right?

Yeah, that's correct.

Now, I'm a little surprised that you're bothering to do any work on the @tmp folders. Have you seen that they tend to be long-lived?

I'm not sure about that, because when I tried it, I didn't have any tmp directories.

Okay. I think they are empty... well, okay, one of them has something in it. They are in fact mostly empty. That's interesting. Why are these directories created? I don't know the answer to that. I would assume something in either pipeline execution is causing them to be created, or cache operations in the git plugin, but I don't know why they're being created. This one, for instance, has a passphrase — has a password in it. I'm going to move this off screen and see if I can read that file, because that's an interesting piece of data. Okay, and now it's been removed — it's only short-lived. Oh no.

So if, as a user, I want to understand what kind of optimization the maintenance plugin has performed, I would have to go to the cached repository and just check the size of the repository?

Yeah. Mark, can you expand?

Certainly. Which one would you like expanded?

Any repo with a very large size. You can sort it — sort by repo size.

Here's one. Okay, now that's an interesting choice of sort. Wait, so basically this is a string... I think this is sorting based on the first character. It's doing an alphabetic sort on the string, not interpreting it as a size. So we'll remember that as a bug.

Okay, so let's expand it — this adoptopenjdk one, because it is 750 megabytes. Or let's expand this one. Why is it not expanding? Okay, back here: repo size, 750 megabytes. I'm not getting it to expand. Okay, so let's sort by repo size, or by name. Now if I expand — no.
Okay, that's strange. It's like I've lost the ability to click Expand. Oh no, that works. Okay. So when you're sorting, it's not expanding?

If I sort, then I can no longer expand. If I sort twice... Time for more testing — Mark, be a good tester. Okay, I can expand here without sorting. Now if I sort by repo size, I can still expand. If I invert the sort by repo size, it doesn't expand. Is that because it's non-zero? No — this one is 25 meg and this one is 160 meg, and I'm definitely able to expand them. And here we see a garbage collection that is non-zero; that's what, seven and a half seconds? And here's an eight-second garbage collection. Good.

So the chevrons — the greater-than symbols — are the number of times it's been run? This thing will just keep growing?

Oh no, no. Actually, I tried removing that, but it doesn't go away; I don't know how to remove it. It's nothing related to the size of the table.

Oh, okay. So this is just a bug.

That is just a bug. I tried removing it, but I couldn't. So, yeah.

And this looks like you had to create your own table inside the table?

Yes. The DataTables API doesn't have a concept of expandable rows — it has the mechanism, but you have to insert your own HTML code into it. So I inserted my own table HTML into it, and that's how I'm displaying it.

I see. Okay. So I sort by repo size, and now let's go to 50 entries sorted by repo size. We're going to choose the one that has 754 megabytes, and it won't expand. So let's sort by repo name — and it doesn't expand. Let's try refreshing the page. Now give me a hundred entries, and there's adoptopenjdk. Oh, but it still doesn't expand. Maybe that means there's nothing to expand?

Oh no — did you sort it again?

I did not think that I had sorted it, but let's try this. What I've done is I've just refreshed the page.
So that should have stopped all sorting, right? Then I'm going to increase the number of entries to 100, and that will bring it onto the page somewhere. There it is. And when I click Expand — no action.

Okay. Is it taking time to load that data?

Could be. Let's see — if I turn on the debugging console, would I be able to see that?

Can you go to the Network tab in the debugging console?

Sure. Network. Okay. And now you want me to reload the page? Okay, page reloaded. And as far as I can tell, I think it's done, right?

This was the maintenance data. Can you scroll down and now expand?

Okay. And now I have to reposition so I can get access. How do I do that?

You can move this. Just change that, and now click Expand — that worked.

But there's nothing in the Network tab. So it looks like it's doing the expand and contract entirely inside the webpage. Is there a request going out? I don't see any here, on the Network tab. I could be wrong, though. Can we at least look into the XML records file? Then we would know, right? Here we go — let's get a reasonable editor. Okay, so we can search for that file.

So you want to look for the specific cache that we were seeing would not expand — except that they all seem to not expand. If I just take one of them, like this one... okay, so here is one entry for it.

This repo name would have maintenance data, and inside that you would find commit-graph, right?

Right, and we see the data here. But when I click — exactly — the expand isn't working.

Right. I think it could be a UI issue.

Okay. Yeah, now if I refresh the page, then it's no longer on the page. So now we can use the search — yeah, you can use the search there. See, but even then, for row 11, Expand is still not active. For earlier rows, it is active — interesting. So here Expand works, and here it works, but here, for this 25-meg repository, it doesn't.

So I think it's not working for rows after the first 10.
So if you go into that — yes. Okay, so... oh, okay — it's rows after 10 in the original rendering. No, let's refresh the page. Now, is there a way for me to... there's probably not a way to pass an argument in the URL to tell the page to give me 50 entries. Okay, so that works on the first row. One, two, three, four, five, six, seven, eight, nine, ten — try that one, it expands. And 11? And 12? 12 does not expand, 11 does not expand, 10 expands. So, as you were suggesting, it may be that only the first 10 rows get expansion data.

Okay. Now back to that — oh, it won't help us, because the large repositories are not in the first 10. We've got 754 megabytes — that is actually a pretty large repository — but it's at number 11 in the list right now. Although, wait a second, that doesn't help us anyway, right? Because this is actually not a cache managed by the caching system; it just happens to be a folder that I created in there as a hack. Right — because all the folders managed by it should be git-dash-something. Yeah, right.

All right. So, Rishikesh, in terms of the tasks yet to be performed: I think we also need to release a version of the git client plugin with your API in it. In order to do that, it would be good to have Javadoc for those API entry points. If you could take the time to create the Javadoc — let me look to see what it looks like now. Or is it already documented? No, it's not. So if you can provide some Javadoc for those. You don't have to Javadoc the tests.

But I didn't test the legacy git maintenance, for a few reasons. Basically, GC and the legacy git maintenance run based on when they're required. So even if I trigger a GC, it's not compulsory that it would actually run a GC — basically, only if the cache isn't optimized would it run a GC.
So I have no control over when a GC actually runs; that's the reason I wasn't able to test GC. The same goes for commit-graph. In previous versions of git, a commit-graph is written only during GC, and there's a configuration — you have to set it in the git global config file to enable the commit-graph. So if that isn't enabled, I have no control over the commit-graph either.

Okay. So then you attempt to launch the commit-graph command, and if it fails, just don't worry about it?

Basically, what happens when you run commit-graph is that the command runs — the underlying git software runs it — but it doesn't perform any action. It just exits without throwing an error.

Oh, okay. All right. So I won't even know whether the commit-graph has actually been written or not. Okay.

So now tell me more about the choice to use --auto for garbage collection. Was that based on an online recommendation? In the past, I've biased away from --auto myself, just because if I run GC, I want it to actually run.

We discussed it in the community bonding phase, and I asked the question as well: do we do a GC with --auto or forced? You said we would prefer going with --auto, because it would be safe and we wouldn't disrupt the caches.

Oh, right, right. Because git gc --auto does not acquire a hard lock on the repository — is that right? Yeah. Okay, got it. Thank you.

Okay. So here I've written the tests as well, for all of them except prefetch and the legacy git maintenance.

Very good. Okay. Well, calling the legacy maintenance — I am not clear on the difficulty of calling it. It seems like you certainly can call it. You may have difficulty deciding whether it executed or not, but even there, there's this logger, so you could certainly check that the message was logged. Okay.
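The reason `git gc --auto` is hard to assert on in tests, as described above, is that it only does work when git's own thresholds say the repository needs it — by default roughly 6700 loose objects (`gc.auto`) or too many packfiles (`gc.autoPackLimit`, default 50). A toy model of that decision, simplified from git's actual heuristics (which estimate loose-object counts by sampling one fanout directory):

```python
def auto_gc_would_run(loose_objects, pack_count,
                      gc_auto=6700, auto_pack_limit=50):
    """Mimic the *shape* of git's `gc --auto` decision: do nothing
    unless the repository looks un-optimized. Defaults mirror git's
    documented gc.auto / gc.autoPackLimit defaults."""
    if gc_auto == 0:
        return False  # setting gc.auto=0 disables auto gc entirely
    return loose_objects >= gc_auto or pack_count >= auto_pack_limit
```

This is why a freshly-optimized cache makes a triggered `--auto` GC a silent no-op: neither threshold is reached, so the test can't observe any effect.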
Now, when we do a launch command, does it also log the command being launched? I would assume it does. Let's see, where is launchCommand... Or, if it doesn't, you could always write a message saying, hey, GC completed, or something like that. Well — or search for "executed successfully".

Okay, I can search the logs.

Right — it would be in the logger. Well, so there is a listener in this, and I assume we are passing in the listener when we — or is the listener global here?

The listener is being passed from the git task listener.

Okay. So do we have a listener in our tests? I don't see one immediately; let me do a quick check. I mean, checking for a logged message is certainly not as good as a direct assertion... Yes, we do. Okay. So we've got this null task listener. If, instead of a null task listener, you were to use a real task listener, you could then assert that things were written to the listener. Am I being clear?

No, I didn't get you.

Okay. So what we've got is — this test, let's see, we're in GitClientTest, and this is the one where you're running the maintenance task test, right?

Yeah.

Okay. So in this we have a listener. At the moment it's null, so the logged output is going nowhere. You could change this to define a real listener, and then use assertions on the contents of that listener. And I think you can find examples of that style of test already in the code.

So how would we find it? Something like git grep "assert.*listener"... Oh, it would help if we spelled it like that. hasChanges... there we go. No, maybe not — there it's polling the listener.

Wasn't there a way for us to get the job logs from the Jenkins instance?

Yeah, I remember that. I wrote some tests where I had to assert certain logs, and I did that using the logs that are printed during a build; I was able to access those. I don't remember whether it was via a listener or not.
Right, right. At least that's what I thought I recalled — that we had a way to ask the listener for the data — but I'm not finding it on my initial search. So let's do some other searching. Because there's definitely... all of the listener hasChanges... Okay, let's check in the git client; maybe that's where it is. No... Okay, this one: listener.getLogger(). So I think this listener.getLogger() will let you read the contents that were logged to the listener. That's what I think. So in your case, at least, it's worth exploring.

Okay. So the listener is an object in the class?

It is, yeah. Listener is a... let's see if I can find the different types of listeners — if I use regular expressions properly... Okay, yeah, there it is. I think the one we want is this LogTaskListener, because I believe the Jenkins Javadoc for LogTaskListener shows a way to get the logger, and then you can read from the stream. I think the writer receives the output of the build log.

I'll have a look into it.

At least it's worth an exploration. I can't promise it will turn out the way I hope, but I thought we had already written tests that know how to read from the build log and assert on its contents. We have — I have such tests. Okay: "recording log handler to allow assertions on logging." So this is definitely one, and I'm pretty sure it's used somewhere. Yes — here we go. So let's check these two, the build data logging one. What it does is reconfigure the logging — it sets logging up a certain way. In this case it's not even using a listener; it's using a real logger. So this one, maybe, is what I was thinking of. But in the worst case, Rishikesh, you could do that as well: add log statements and assert on the log statements.
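The testing technique being discussed — swap the null listener for one that records output, then assert on what was logged — can be shown in Python terms. The Jenkins equivalent would use a real TaskListener or LogTaskListener in place of a NullTaskListener; `run_maintenance` here is a hypothetical stand-in for the code under test:

```python
import io
import logging

def run_maintenance(logger):
    # Stand-in for the code under test: the real task logs its progress.
    logger.info("garbage collection executed successfully")

# Attach a handler that records everything, instead of discarding it
# (the analogue of replacing a null task listener with a real one).
logger = logging.getLogger("maintenance-test")
logger.setLevel(logging.DEBUG)
captured = io.StringIO()
logger.addHandler(logging.StreamHandler(captured))

run_maintenance(logger)

# The assertion style the transcript describes: check the message was logged.
assert "executed successfully" in captured.getvalue()
```

The point is the indirection: when a task's effect (like an `--auto` GC) can't be observed directly, its log line is still a checkable side effect.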
You know — say, hey, at FINE-level logging I expect to get this; and during normal operation you're not logging at FINE level, so it's okay.

So, back to the tasks — or, I guess, we started this conversation with: do you need extra time to be able to complete successfully, and can we give the extra time? Jean-Marc's answer to me was yes, we can give it, within a limited amount. So, in your view, will the extra time benefit you? Do you need it?

I'm not sure, because we have a few pending tasks: the Javadocs, the prefetch, and writing these tests. And every week a bug keeps popping up. So I'm not sure, but I think it would be fine if we go with one extra week; I think that would be sufficient.

So one week sounds great to me. Rishabh, are you okay with going one more week?

One week — how does that work with the evaluations we have? I mean, what evaluations would we then submit in the next round?

No — as I understand it, they will allow us to extend the date, and then we just submit one week later. And the presentations we give within the Jenkins community — Rishikesh would do the presentation, potentially during that period, and that's okay.

Yeah, I don't have a problem.

Now, what I was trying to see is the timeline. Here we go — the 2022 program timeline. As listed by them, the final week for this is scheduled September 5 through September 12 — so, two weeks away. Next week would begin the final week, and we could say, hey, we're going to extend. This would be the final week, and I think what Rishikesh was saying is: could we take one more week and extend to September 19. Rishikesh and Rishabh, back to the two of you: are you okay with doing a one-week extension?
Would you rather — okay. I would have increased work commitments after mid-September, that is, 15 September. But that shouldn't affect Rishikesh extending it for a week. I might not be able to join the last week's meeting, but apart from that it shouldn't be a problem.

And I think that is okay. Because I've got increasing pressure at about that same time, as it approaches the date for DevOps World, and I've got major things I'm delivering at DevOps World. So extending by one week is workable; extending anything more than one week, for me, is not. But I think we could do one week, and we'll just go from there. I don't feel nervous, like, ooh, we must have that week — but I think it would be a good thing for Rishikesh to have the week.

Because right now, I think we are in a good state — things are working as expected. But if we have that extra week, I think we could finish off better, wrap things up in an even better way, try to fix all the bugs.

Right. So, I mean, we're at a point now where it would be good to get a release of the git client plugin and git plugin out with the changes in them, so that we can say, hey, we're set. For me, next week is probably too fast for that, because there are still some diagnostics that Rishikesh has to do. The following week, though, would be really good.

Is there a way to write Javadocs — I mean, is there a standard or something I need to follow to write them?

Certainly. You'll see examples of Javadoc in the code, so let's go look at it — check this out. All right. Okay, so for instance, let's look at something that would need Javadoc, like the cron-related Javadoc.
That one's not really intended to be extended — we're not intending for somebody else to extend it — but GitMaintenanceSCM, for instance... and here you can see the example Javadoc in AbstractGitSCMSource. Let's see if it's good or bad. Okay, so it definitely does have Javadoc, and you can see how it's described: it's got an opening sentence that describes it, and an @since that declares when the API was added. In your case you would give it a version number — you'd probably say "@since TODO", and then I'll insert the version when we release it, or when we're about to. Does that answer your question, Rishikesh?

Yeah, yeah. So I'll get this done in detail. I'll get the Javadoc for the git client plugin ready this week. I'll also try to integrate prefetch — I tried looking into it but was facing some difficulties; I will try. Last week you recommended using the git ls-remote command, if I'm not wrong, to check whether it's a private repository or not?

Oh, and you certainly can do that — that's a good idea. git ls-remote, if it fails... git ls-remote is certainly much more lightweight than doing git fetch; it does much less work. So if ls-remote fails — yeah, that's a very good idea.

And I'll try writing the test for the git legacy maintenance as well. So this week — I think by this weekend — the git client plugin work would be ready, and then I'd also look at the errors in the UI, like the bugs we've seen. Other than that, are there any UI changes that need to be made?

I did not see any, but all the experimenting I did was with what we've got now, so let's do some... Well, I apologize — I'm running out of capacity. I flew in from Alaska this morning and I haven't slept yet, so I'm now approaching 20 hours awake. So I have to get to a hard stop here pretty quickly. Let's take a look at this before we're out of time.
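The `git ls-remote` probe suggested above could look like the following sketch. The real plugin would go through its Git client API rather than a subprocess, and credential handling is deliberately omitted — the whole point is that an unauthenticated probe fails fast on a private repository:

```python
import os
import subprocess

def remote_is_reachable(url, run=subprocess.run):
    """Probe a remote with `git ls-remote`, which only lists refs and is
    far cheaper than a fetch. A non-zero exit (auth failure, missing
    repo) means prefetch should be skipped for this cache."""
    env = {**os.environ, "GIT_TERMINAL_PROMPT": "0"}  # never hang on a password prompt
    result = run(["git", "ls-remote", "--exit-code", url],
                 capture_output=True, env=env, timeout=60)
    return result.returncode == 0

class FakeResult:
    """Stand-in result object, so the behaviour can be shown without a network."""
    def __init__(self, returncode):
        self.returncode = returncode

# An unauthenticated probe of a private repository typically exits non-zero:
reachable = remote_is_reachable("git@github.com:example/private.git",
                                run=lambda *a, **k: FakeResult(128))
```

`GIT_TERMINAL_PROMPT=0` matters in a server context: without it, a credential-less HTTPS probe can block waiting for interactive input instead of failing.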
So the one worry was: Expand works there, but has these extra chevrons — the extra greater-than characters. But then, when I sort by repo size, the expand — oh, repo size. As long as it's in the first 10 rows, it works. And if I make it 100 entries, it still doesn't expand anything beyond those first — oh no, interesting. So if they were in the first 10 before, then they expand.

I'll look into that. Other than that, finally, about the help section — the help button isn't working. I tried adding those. I thought I'd add some descriptions and all, but I wasn't sure whether that would come out clumsy.

Well, certainly I would love to have a help icon over here, because users won't know what commit-graph is or why they should care. And GC — if we could call it "garbage collection", they may understand it better than seeing "GC". Okay.

Any other topics we need to go over before we end today?

I don't think so — I think that's it. Those are the topics I wanted to discuss.

All right. So do we need to meet again later this week, or are you okay, Rishikesh, going forward? And we'll meet again next week, same day?

Oh, about that — I can send a message, right? If we need to meet, I'll send a message; otherwise, the same time.

Absolutely, you bet.

So, for my clarity: as far as I can tell, the git client plugin changes you've made have been quite stable, right? You haven't had to make any significant increases to the functionality there. So I think it's safe for you to mark that pull request as no longer draft, add a Javadoc comment or two, and then we're probably ready to release it. And I need to do a release of the git client plugin anyway, because I need to release — I'll show you what I need to release — JGit version 5.13.1. Oh, it's not even listed here.
I've got to get the changelog up to date. So there are a number of things pending in the git client plugin that need to be released, including a new version of JGit. So your timing is good. Are you going to cover the legacy tests as well?

Yeah, I'll work on the legacy tests as well.

Great. Anything else before we close for today? All right, then let's go ahead and — oh, I see, Rishabh, you had included a pointer on how to assert the content of log messages.

Yeah. If you look here, before we actually write the test — setting up repositories, right, this function — we are creating a log handler and a task listener for the test. If you go to the setUpRepositories function... search for "setup"... okay, here we go. In the first five lines, you see, essentially we're creating a log handler and then adding a listener, and this is used across the tests, I saw, to assert on and match messages printed by the associated functions.

Good. Okay, so this is something, Rishikesh, that you could use as an example.

Yeah. Great. All right, any other topics we need to review?

Can you upload the recording for the session?

Yeah, I will. As soon as we end here, I'll try to stay up long enough to see the recording through, and then I'll upload two of them, because I didn't upload the one from two weeks ago either.

All right. Anything else?

That's it.

Okay, thanks everybody. Thank you.