Hi, everyone. We will discuss what we're going to present for the final presentation, and our work product, which is going to be evaluated by the mentors. Firstly, I would like to discuss the results we've got. Justin gave a brilliant idea: I had results on my local instance on the macOS operating system, and Justin suggested doing the same thing on multiple platforms so that we have a larger variety and a larger amount of data, which is always a great thing. So I created three pipelines. The first one checks out the Spark repository; Spark is around 500 MB. The second testing project checks out TensorFlow, which is around 800 MB. The third one checks out the git plugin, which is around 20 MB. What I wanted to see was where the threshold is. Okay, there is one more thing I should mention before talking about the results. The size rule we have within the Git tool chooser right now is 5 MB: within the 5 MB limit we recommend JGit, and above 5 MB we recommend Git. Lately, in the benchmarks, I saw a lot of results where for repository sizes like 20, 40, or 50 MB, JGit was performing better than Git, and by some margin; that margin can be, say, 40 milliseconds or maybe 80 milliseconds. So there was a considerable gap, not in terms of real-world performance, but for a theoretical experiment the gap was there. So the jar which is uploaded on this instance has an increased size limit of 50 MB instead of 5 MB. Let's just start with the git plugin and how the Git tool chooser has done there. The assumption here is that the user has chosen JGit, and now the projects have been checked out. 
And I just want to show one more thing before showing the results, because I want to confirm that my process of creating and performing this experiment is right. This is the Jenkinsfile I used to do this. In this Jenkinsfile I am not performing anything, it's kind of an empty step, but it checks out the repository. I just wanted to check out the repository and do nothing; I did not want a Maven build process or anything. Since I had a project with a matrix which runs on multiple platforms but builds with Maven, I just removed that step. Is this wrong? It's not wrong, but it's got a problem: I would expect you to get a failure on Windows, where it will say it can't find `sh`. If you change it to echo and say `echo hello world`, then you've got a platform-independent step. Other than that, this looks great, and I can tell you CentOS 8 is always cloud-hosted, Debian 10 is almost always cloud-hosted, and the Windows computers in my environment are all modern; they are all within the last three or four years. So those three are quite predictable. FreeBSD 12 is older hardware, but given that you're probably network-bandwidth-bound, it may be fine. And what if I remove the `sh` step so it's just an empty step, would that also be okay? I would just put the echo in, because I'm not sure that declarative pipeline will accept an empty step, and it's cheap. I think that's what I tried first. Okay, I'll do that; that explains some of the builds here. So now what you're seeing here is the third build. The third build is done without JGit, one second, just give me a second. Oh, and is this doing something that will allow it to wipe the workspace, so it's not reusing a workspace? Yes, I have added the additional behavior. Wipe workspace and force re-clone, that behavior? Yes. Great. 
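A minimal declarative Jenkinsfile along the lines described above might look like the following. This is a sketch reconstructed from the conversation, not the exact file used in the project; the agent label values and the stage names are illustrative assumptions.

```groovy
// Sketch of the test pipeline discussed above: a matrix that runs on
// several platform labels, checks out the repository via the git plugin,
// and does nothing else except a platform-independent echo step.
// The label values here are assumptions, not the project's real labels.
pipeline {
    agent none
    stages {
        stage('Checkout timing') {
            matrix {
                axes {
                    axis {
                        name 'PLATFORM'
                        values 'centos-8', 'debian-10', 'freebsd-12', 'windows'
                    }
                }
                agent { label "${PLATFORM}" }
                stages {
                    stage('Checkout') {
                        steps {
                            // checkout scm clones the repository; the job is
                            // configured with the "wipe workspace and force
                            // re-clone" additional behavior so no workspace
                            // state is reused between builds.
                            checkout scm
                            // echo instead of sh, so the step also works on
                            // Windows agents where a bare `sh` step fails.
                            echo 'hello world'
                        }
                    }
                }
            }
        }
    }
}
```

The echo step stands in for the Maven build that was removed; it keeps the declarative pipeline valid on every platform while the only measured work is the checkout itself.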
Yes, so, okay: the second build is without the Git tool chooser and the third build is with the Git tool chooser. Now with CentOS 8 we do not see any change. And note that the measuring unit of the metric is now seconds, not milliseconds. In milliseconds we would automatically see the differences I've seen with the benchmarks, but since we're talking about real cases and we want to show real results, we will talk about seconds, because that is what the user might see. In that case, on CentOS we don't see any difference; on Debian we do see a one-second difference. With FreeBSD, I think the build failed. I'm not sure why this happened: it took 10 seconds, and it should not take 10 seconds for any machine to clone the git plugin. So what I can be sure of is that there will not be more than a second's difference when the repository is less than 50 MB. One might ask, in our final presentation, what kind of performance improvements there are. The bigger performance improvements, the more attractive results, are from the area where a person chooses the JGit implementation; there we are able to provide a better implementation and a much more visible result. But in general, to the best of my knowledge, a user would go for the default implementation, and I assume the default implementation will mostly be Git, because Git is commonly installed on machines. So in that case, for most users, changes in performance will not be noticeable. They might see a second's difference, they might not. I might need to perform this experiment repeatedly so that I can see. 
That is when I can be confident: okay, I know that within 50 MB there is a reduction of one second, or maybe half a second, something like that. With Windows we also see that it's the same. In the benchmarks I created and ran, there's almost half a second's difference; it ranges, but if we average the results it varies from half a second to, say, three-quarters of a second. That is why it's unlikely that we will see any noticeable performance change there. But your measurements show that even for a repository that is relatively small, like the git plugin, there is a benefit in terms of clone time to use JGit; that's good. Yes, there is, because the vital insight we derived from the benchmarks was that for a small repository, because the JVM heats up and JGit is a Java implementation, JGit performs better. That advantage makes JGit a better contender for smaller repositories. But in general cases Git will always perform better, and for large repositories it performs better than JGit by a rapidly growing margin. When I was creating the presentation, I was thinking about what I would answer to a person who asks what would happen for most of the users. We have great results for JGit, and I'm going to show them to you, but I was worried whether that would be enough for us. So, this project was focused on repository sizes less than 50 MB. Now, let's look at something drastically larger. This is almost an 800 MB repository. Here, the fourth build, the latest build, is without the Git tool chooser, and the previous one is with the Git tool chooser. If you look at any of the platforms up to FreeBSD, there's a 50% reduction in the execution time of the checkout process. 
Yeah, it's more than that for this one; for FreeBSD it's actually more than that. But with Windows I saw this and could not understand it: the time actually reduced without the Git tool chooser, which I did not understand. So I am actually not very confident in the results I have; I may have to run some tests, particularly on Windows, to see if it shows this behavior consistently. Rishabh, on the Windows machines, is the Git CLI installed, just to confirm? I guess it is, and we can check. However, it's possible that from one agent to another, from build three to build four, there is all sorts of potential variability, because he's testing in a live environment that has four or five Windows computers. And then we can check that the recommended tool is Git and that it is using Git here. With the results we've seen, and don't forget, it's now been a month or more since we talked about it, but in your final presentation, don't forget to highlight that avoiding the second fetch may already be a substantial improvement for some users. Just dodging the second fetch: we had reports from users that it was slow, so even if they get no other benefit, they benefit because we're doing one less operation. Yes, I have included that in my presentation, because I was desperately looking for points to show what we've done, so I did include that. So I think this result somewhat confirms that there is roughly a 50% improvement when we're using the Git tool chooser, at least in this particular case, and I thought we'd have the same result with the Git tool chooser testing project which is checking out the Spark repository. Actually, it's not the same here. What we can see is this: the tenth build is with the Git tool chooser, and the latest build is without it. 
So, for CentOS, the time reduces by ten seconds or whatever it is; it's not even ten seconds, it's less than that, so this is, I would say, surprising. Because in the benchmarks I performed throughout the project for three months, I saw improvements on the order of 150 to 160% when we shifted from Git to JGit for large repositories, like 500, 600, 700 MB. That reduced to 50% here. When I'm actually doing this end to end, this is not just the Git fetch operation in isolation; this is the whole git plugin performing a complex operation. So I somehow justified the reason for not seeing exactly the same result as in the benchmarks with the fact that this is now not a single isolated operation, but I'm not sure why we would have less reduction here. Again, this is why I'm actually not very confident in the results. I am confident that there is a reduction and that it's around 50%, because my local instance is giving that for the five or six jobs I've seen, and it's almost consistent with that result. But on this instance I'm not seeing that, so maybe I need to run more jobs and then add those results to the presentation. Actually, I would just admit that in a very repeatable environment you've confirmed these results; however, in the wild, in environments that have widely variable equipment, the results are also widely variable. I'm not overly concerned that you need to run it again. I think large variability is predictable in this environment. This is not too different from running these tests on ci.jenkins.io, where you don't know what class of agent you get; you just get an agent that has the label. Okay. Then I can include this, and I can add a note that the environment we're testing in can include a lot of variability. So, okay. I think these are the results we have for the Git tool chooser. 
And this is what I will be presenting, in a different format: I was thinking of comparative bar graphs, and I'll ensure the bar graphs are on the slide only. So, okay, I'll go through the agenda once. I just wanted to ask: with whatever testing we've done, do we have any case where compatibility is breaking, where a use case is breaking after adding the Git tool chooser? Do we have any cases like that, or are we left with some kind of testing? There is certainly still more testing to do, but the testing I think needs to be done is not so much about compatibility as about the change that using JGit brings on the master or on static agents. On the master and on static agents we're now running in-process, and we're relying on JGit to do a good job of garbage collection. Right. If it has a leak, it could critically damage the Jenkins controller, the central master. I'm not aware of any leak; I've not detected one. But that's one of my concerns: if a controller that previously was managing a thousand repositories or a thousand jobs suddenly starts exercising a memory leak in JGit that didn't exist before, because they were using CLI Git, that would be a serious problem. Those are the only kinds of things; I haven't seen anything compatibility-wise, and if we do, we'll fix it in an upcoming release. That answers my question about increasing the size limit to 50 MB, then. Right now it would be safer to release it with the 5 MB limit, because if we do have something like a catastrophic memory leak, the 5 MB limit would not cover as many projects as a 50 MB limit would. With the 50 MB limit we would maybe cover most of the cases. 
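The size rule being discussed can be sketched as a simple threshold check. To be clear, this is a hypothetical illustration, not the actual GitToolChooser code from the git client plugin; the class name, method name, and MiB units are assumptions made for the example.

```java
// Sketch of the size-threshold rule discussed above. NOT the real
// GitToolChooser implementation; names and units are illustrative.
public class GitToolChooserSketch {

    // Threshold from the meeting: 5 MB in the released version, with a
    // pending pull request raising it to 50 MB after the release.
    static final long SIZE_THRESHOLD_MB = 5;

    /**
     * Recommend an implementation from an estimated repository size.
     * Below the threshold JGit tends to win (the JVM is already warm,
     * and no external process is forked); above it, command-line git
     * wins by a margin that grows with repository size.
     */
    public static String recommendTool(long repoSizeMb) {
        return repoSizeMb <= SIZE_THRESHOLD_MB ? "jgit" : "git";
    }

    public static void main(String[] args) {
        System.out.println(recommendTool(3));    // small repository: jgit
        System.out.println(recommendTool(800));  // TensorFlow-sized: git
    }
}
```

Raising the constant to 50 would widen the set of repositories that get the JGit recommendation, which is exactly why the meeting decides to ship the smaller, better-tested value first.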
It depends on the use case, but I have the PR ready to increase the size to 50 MB. I wanted to ask: do we want to do that right now, before releasing, or should we do that after releasing it? I like your argument that we wait until after release. You can queue the pull request, and it can certainly still be submitted, but I think let's go with the current, tested values. The bigger danger here is that within two or three weeks this will probably be deployed in 50,000 installations. That's only 25% of the install base, but it's 50,000 installations, and those installations will have conditions that you and I haven't even dreamt of yet. Okay, so we'll go with that then. And do we have any other requirement for the release? I'm not aware of any, unless Fran, Justin, or Omkar have any; I think we're ready to deliver a release. Going to need a celebration, that's the only other requirement. Okay, so I think we have some time left, so we can talk about the presentation. We have discussed the experiment I've done for measuring the time. I was also thinking: do the mentors see any other way we can show the results? Is this the best way to show the results, as near as a user would expect once they have this feature on their systems, to just measure end to end the time taken by this process? For me, I think that's closest to what the user wants. At some point in the future, not for the presentation, I think users would appreciate an online guide which would tell them, in addition to these results: if you use a reference repository, things get this much better; if you use a narrow refspec on a repository, things could get this much better. 
So there are other things that are candidates, but for the purposes of this project I think presenting the time to do the operation is the right kind of result. At least for me as a user, it tells me what I wanted to know: what's the benefit from this, what are the risks of doing this, and how do you get the benefit. Do you need to do an extra thing here and there to get extra benefits? Well, actually, Justin's got a very good point there: most users do not have JGit enabled, so they will get no benefit from this, because I suspect most installations never even enabled JGit. In order to get the benefit, we need to remind them: you need to enable JGit. But then, if someone doesn't even have JGit and they have Git, what benefit would they get if they already have the performance? Small repositories clone a little faster. Yes. And for me, one of the surprises was: I had a pipeline that uses a pipeline library, and it would intelligently choose JGit to clone the pipeline library, and then intelligently choose Git to clone the repository. So I got a little bit of a benefit already, because the pipeline library was pulled in with JGit, and then the whole big project came in with command-line Git. And I sense we haven't seen what the switch looks like; I have put up a screenshot for it, but still. So do we need a demo for this? I haven't planned a demo; I was thinking of just showing the results and how it's going to impact the user, because I am not sure how we could demo this performance improvement. But Justin's point that the user should know how to enable or disable it, and which configurations would provide better results, that much we can show in the presentation at the meetup. So, how much time... oh, go ahead, Justin. 
No, I was just going to say, I don't know that it would necessarily need to be a demo; it could also be a screenshot. I think you had documented some stuff in the README or somewhere like that, but anyway, the documentation would be useful, I think, for the long term. For this purpose, maybe a screenshot, or it could be a demo, or something like that. But maybe Mark has an idea too. Yeah, I like it: showing the documentation is also a good excuse to brag that, yes, we're doing documentation as code; we wrote the documentation before we released the product, which hardly ever happens in most software. Okay, there it is. Yeah, this one. Unfortunately, this is a place where you're really begging for the new tables-to-divs UI, because you notice that the help buttons are completely invisible on the right-hand side of your screen, because of the table layout used here; it's a terrible thing on this particular page. But I think the screenshot that you embed in the documentation is perfect. Show it and bring up the documentation; we should brag to people: yes, guess what, this is documented, please note you can read about it. Okay, I'll do that. Okay, so. One thing that I'm wondering about, and maybe this is a question that would come up too: I wonder if it would make sense, or be possible, to have JGit enabled by default. If this proves to bear fruit for tons of people, is that something that could be done at some point, to automatically enable JGit in Jenkins as it comes? You are such a brave person. Oh my sakes, you are a brave person, asking Mark Waite to change defaults as shipped. I love brave, that's great, but I'm not ready to sign up for that. I don't have as much courage as you when it comes to the open source community. The number of times people have complained to me about any change of defaults... it's like, no, I had this awful, evil default, but I love it, it's my favorite. 
Maybe that's the answer if that question comes up too: a lot of people have strong preferences, and they may prefer not to have JGit enabled. Yeah, they have to make an active choice. They must choose, and if they've chosen not to enable JGit, we won't override their choice. That's right. And on top of that, I was thinking that maybe we can also add a feature where we don't even ask the user for the implementation; we ourselves figure out what is available and then recommend the best thing there is. But I think that is not... we shouldn't do that. You win extra points; you're even more bold than Justin is, that's great. Yes: good morning from the log. Warning, you did not have JGit enabled. Are you sure about that? We'll turn it on for you; not having it won't help you here. We also deleted the old one you had. Exactly, because it's an old, old thing. Although, truthfully, there is something to be said for things like: we probably ought to warn people, oh, you're running CentOS... sorry, CentOS 7, your command-line Git implementation is an old boat anchor, you should upgrade. CentOS 7 is a real anchor; CentOS 7 is running a Git from six or eight years ago, and you know the Git community has improved performance in that much time; you can be confident of that. But no, that's out of scope here as well. Yes, I think that's a great thing for the future. It's called an administrative monitor, where we warn the administrator: you know, you're running an ancient version of Git on this computer; you could have much better performance if you would just do this. And from a user's perspective, I assume that the users of the Jenkins product would mostly be developers, so I'm not sure what percentage of them are aware of the two implementation options we have within the git plugin. 
And that is why we always have the default option. I just want to make sure that I understand the default option. Does it mean that if Git is installed, Git will always be the default, but if it is not installed, the default would be JGit, and the user will not get to know; it would just be the default from their perspective? No. As far as I know, it will never use JGit unless you have enabled JGit. The previous behavior, anyway, was: if I don't have command-line Git installed, it will attempt to call command-line Git anyway, and it will then fail with a terribly ugly failure message that says "git command not found". It's not nearly smart enough to say, oh, I didn't find command-line Git, I'll try JGit. It's absolutely dumb. If you ask to use the git plugin and you don't have command-line Git installed, and you did not enable and choose JGit, you will just get a hard error message. Okay. Is that safer? No, it's just consistent with the old behavior. I'm not sure I'd call it safer at all, but it's certainly a lower change for users. It will not surprise them: oh, I forgot to install Git, so they must go install Git. And that means the usability of JGit is, I'll not say remote, but limited to people who know about JGit or might have a use case for it. Right: you need to have read the documentation and seen, oh, there's a JGit implementation, wow, should I try this? Yeah, that's correct. Which, honestly, I guess if you're looking for these kinds of performance improvements at a broader scale, you're probably dealing with someone who's an administrator of Jenkins, so they probably will have read the documentation. In most cases, Joe User, who's got a little Jenkins on his laptop with five projects, probably doesn't care about it quite as much. Yeah, and anyone that's dealing with multi-hundred-megabyte repositories... 
We hope will eventually ask themselves the question: could I go faster? Is there something I could do to go faster? And they'll start exploring, and that's a whole new theme and a whole new topic of what things we could do to help them go faster. Okay. So, let me quickly discuss the structure of the presentation. I was thinking to introduce the project and what we wanted to do. And I have a question here. Right now our performance improvement is focused on, and limited to, the checkout step. What I wanted to ask was: whenever there's a checkout step in any Jenkins pipeline, is the git plugin the exclusive plugin which will be used for that? I wanted to confirm. If they are using a Git repository, then yes, the checkout scm step will use the git plugin. Okay. The second part of that question: when we're scanning repositories, sorry, scanning branches for a particular repository, at that point there's a fetch step, and then I think we scan the branches and then build the branches, depending on what the user wants. So that scanning process is not the exclusive responsibility of the git plugin, is it? That's correct. In fact, it's recommended that, wherever possible, they not use the git plugin to do scanning, because the higher-level providers (GitHub, Bitbucket, Gitea, GitLab) can ask the questions more efficiently than the low-level provider can. So the preference for a user should be: if you're using one of the things that has a higher-level provider, use the higher-level provider; it's more efficient. Okay. So yes, the git plugin can be used for scanning branches, but it's recommended that they please use the higher-level provider. It'll give them better results. 
The git plugin does not know how to do REST API calls to GitHub. Right. It doesn't know how to do REST API calls to GitLab, and those REST API calls can be dramatically better than cloning an entire repository just to get its information. Yes, that is what the git plugin does, right? Right. The git plugin is like a stone knife, whereas the higher-level plugins can use lasers and other really effective cutting tools. Okay. So, after introducing the project, I was thinking about going through the process of how we reached the current conclusions, before introducing the Git tool chooser and the results. For most of the projects and students, a demo is a huge chunk of their presentation, because a demo is something they have to do for their plugin, but in my case I don't have a demo. So I was thinking to use that time to show, without going into too much depth, how we reached this point: the results we've used to enable the Git tool chooser. I was thinking of talking about the parameters we chose to study the dependence of the operations and their performance on, ranging from the size of the objects the repository contains to the number of branches, commits, and tags, and what we got out of them; just the one-line conclusions, not the problems we had with them. And then I would talk about how we've also benchmarked those results on multiple platforms, so we know that the Git tool chooser will not perform unexpectedly on one of the platforms; we know it's platform-independent in terms of the benefit it gives. We did see that with one of the benchmarks I presented in the phase two presentation. 
So I was thinking of showing that as part of the process of how we reached the results, and I think that gives the user more confidence that whatever the Git tool chooser is doing is legitimate. Then I was thinking to visually explain how and where we've improved. I was thinking to show that if you're checking out a repository from one of these providers, in the SCM checkout step, we've done two things. We've introduced a new feature, the Git tool chooser, which I want to market as a feature that takes the responsibility of choosing the right implementation away from the user and lets the system decide; I wasn't sure if that's the right framing, but actually, it is. And the second thing we've done is remove the second fetch, which is redundant in most cases. And I would add that it was requested by users and it would benefit those users. I would refine this diagram, but this is how I was thinking of visually explaining what we've done. Then, for the results, I would show the graphs; I haven't shown you any graphs here, just the builds, but I would put all of that into graphs. I was thinking the best thing to do would be to show multiple repositories varying from a small size to a large size, to show the performance improvements from a small repository up to a big repository, and in that process explain that for a small repository JGit performs better than Git, for a large repository Git performs better than JGit, and that decision to switch is taken care of by the feature we've implemented. 
After that I would include slides on the challenges we faced and the future scope, which is the last thing I want to discuss in this meeting: the extension support. I think Mark and I both tested the GitHub Branch Source Plugin extension implementation. It is providing us the information we need, with credentials or without, but it hasn't been merged into their plugin, so it's officially not available to the user. So I wanted to discuss what I should say in the presentation, because ideally we want to provide that support for GitHub, GitLab, Gitea, and Bitbucket, but for GitLab we've had some issues, and there is actually a roadblock in implementing the extension: we have implemented it, but the way the credentials are passed is something that is not currently possible with the git plugin. Do we want to go in there and discuss the issues we've had with those extensions? Okay. At least for me, I would say we talk about what more is needed, and what's needed is an extension implemented for GitLab, for Bitbucket, for Gitea, probably for Tuleap. There are several branch source providers that could provide this information from the REST API, and it may be you that does it, or it may be them, but we've now got an API; they should provide the data. Okay, but why would they do it? I'm not sure, because the information the git plugin requires actually means nothing to them in terms of the functionality of their own plugin. Oh, except I think it does. Imagine you are Atlassian, you're providing the Bitbucket plugin, and you learn that the users who are using GitHub are getting better performance because the GitHub branch source implemented this API. 
You are now at a competitive disadvantage, because somebody's got a better implementation than yours. Now, for an open source thing like Gitea, down at the very bottom, it's harder to say that, but for GitLab, certainly, GitLab's primary competitor is not really Jenkins, it's GitHub. Right. And if GitHub is doing a better job than GitLab on the Jenkins implementation, users may shy away from GitLab and towards GitHub. Okay. I don't know that you want to say that in an open source presentation, but I'm just thinking that for them as providers, they may say: look, I don't want to be behind these other people; I want to be at least as good as they are. Okay. So that means we will discuss the extensions and the support we have right now, and what we need in the future to make this feature fully useful for every provider and every user. Okay. I will add that, and I think that's it; that's what I would present at the meetup. And then I also have to present at DevOps World, and there I was thinking to be more concise and include fewer details about the implementation, or just talk about the general improvement we've done. That's what I was thinking to include there. So, they're giving you 10 minutes at DevOps World, is that right? Yes, it's a lightning talk. This slide that you've got visible right now is a great opening slide for that. I think it's not the right slide to open the GSoC presentation, but this is a great slide for a DevOps World thing, because it grabs them immediately: it's got pretty colors, it's got logos. You may want "Git tool chooser" in bigger words somewhere, and the redundant fetch removal in bigger words somewhere, but this is a great choice as an intro slide for a 10-minute lightning talk. Okay. So I think that's it. Do we need anything more? 
So from the GSoC team, I got a mail that I need to send them the link for the work product. And in our case, I would assume the work product is the pull request for the git tool chooser, which is now merged, because I think sending the git plugin's URL would not be the right thing; that is not exactly the work product. The work product is the performance enhancement within it. And regarding the work product, is there anything else the mentors would like, apart from what has been done here? Are there any more requirements? So, I think you've got an interesting challenge. You may want to send them a link to a document which shows links to the various work products, right, because we have one work product which is the git client plugin implementation that had a release. We have another work product, which is git plugin pull request 931 and the git plugin 4.4.0 release. We have another work product, which is the GitHub Branch Source pull request that is pending. And each of those is in fact part of this work. Right. So for me, if their request is a single link to the work product, then that probably needs to be a document which actually has links to all the real products, you know, the pull requests, the conversations, because saying it's just the git plugin is not right. Your project has changed much more than just the git plugin. Okay, I'll make a document for that, maybe a GitHub gist; I could do something like that and show them. Okay, I understand. Either that, or you know what you could do, Rishabh: what if you put it on the Jenkins.io page that describes the project? Okay, I could update the project page. Right, because I haven't done that. Yes. Right. And that's a great excuse to update the project page, and that's a place where we can put links to multiple pull requests and to plugin releases. Yes, git client plugin 3.4.2, git plugin 4.4.0, and all sorts of things like that. Yes, great idea. I like it.
I'll update the project page first, because I have to write a blog as well; I haven't done that. So, yes. Okay. So I guess this is it for the presentation, and so, Mark, will we release the plugin today or tomorrow? Is there a time? So, I've got a day full of work for my employer, so it will probably be after 12 hours. So thus, Rishabh, I'm authorizing you, for once in the last four weeks, to sleep. Sorry. Rishabh and I spent all day Saturday, all of my Saturday, working on this, testing and exploring it. And at the end of it, it's 4pm my time and I realize, oh, it's about 4am Rishabh's time and he's obviously not slept all night long. So yeah, sorry. It will be at least 12 hours before I get to releasing it. Think 12 hours, because it may only be eight, but it will be after my working day today, when I can get the time to release the plugin. That's fine. That's okay. So, okay, I think we have everything covered here, and the presentation is on Thursday. Okay, then we'll meet there. I hope I give a great presentation. Yeah, thanks everyone. You will be great, Rishabh. I am thrilled, and, now, we may create a firestorm if we missed something in our testing, right? And that happens sometimes. Git plugin releases have on occasion created firestorms. So don't be dismayed if we missed something in our testing and people come back and say, how dare you? That's okay. There is something where I am a little concerned, and that is related to how the git tool chooser will resolve the implementation when we're talking about expanded paths. Because we are processing paths, and while we talk about the implementation, it's actually the executable's path that we are passing when it's git, which is something like /usr/bin/git on a POSIX system.
So, there was a check which I added before the final pull request, to infer the implementation when there's an expanded path instead of just "git". But that was making one of the builds fail on your system, and then I removed that check. And surprisingly, everything is working fine; even the expanded-path case, that particular use case, is working fine. So I'm actually a little anxious about that, because I've removed the check, and my unit tests are fine, and whatever we've tested is fine, but I am still a little skeptical. I think we would possibly have issues where the implementation might somehow get changed, but I'm not sure. This is me being a little negative and skeptical about the whole feature. But yes, I'll do some initial smoke testing as well before the release, because if you feel concerned, it's a good place for somebody else to look at it too. Yes, if I find some disaster, I'll let you know. Yes, please; we have to fix that. Okay. Thank you everyone. So, shall we be covering more test cases with more versions of git, maybe after the merge? That's certainly one possible future activity: comparing git 1.8, ancient history, to git 2.28, a modern recent release, to see if there's a difference. There are more operations that we could optimize, like ls-remote. Right. Rishabh correctly focused on the big win; the big win was checkout, the clone. You know, the big win is clone, it's a network operation. But there are other operations where we could actually prove conclusively that JGit is good enough and silently replace it. So yes, you're right, there are many more things we could do in the future. Mark, I think we did benchmark git ls-remote.
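The expanded-path concern above, deciding which implementation a configured tool path refers to, amounts to something like the following check. This is a hypothetical sketch of the kind of logic being discussed, not the actual plugin code (and the real check described here was ultimately removed):

```python
import os


def implementation_from_path(tool_path: str) -> str:
    """Guess the implementation from a tool path such as /usr/bin/git.

    Hypothetical illustration only: an expanded executable path is mapped
    back to an implementation name by inspecting its basename.
    """
    name = os.path.basename(tool_path).lower()
    return "jgit" if name.startswith("jgit") else "git"
```

The worry in the discussion is exactly that a mapping like this can silently misclassify an unusual path, which is why removing the check without failures was surprising.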
And I think I have a study where I experimented quite extensively with git ls-remote. Oh, good. I don't remember what conclusions we derived from that. I'll look for that document and we can discuss it on the chat platform. I'm not sure where it is; I'll send the document once I find it. Okay. Thanks everyone. Thanks again for giving much more time than was allocated. Bye everybody; the recording will be posted.
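For reference, the future ls-remote comparison mentioned above could use a minimal timing harness along these lines. This is a sketch, not the benchmark actually used in the project; the repository URL and run count are placeholders:

```python
import statistics
import subprocess
import time


def median_runtime(cmd, runs=5):
    """Run a command several times and return the median wall-clock seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)


# Placeholder usage: time command-line git against a JGit-based equivalent.
# median_runtime(["git", "ls-remote", "https://github.com/jenkinsci/git-plugin"])
```

Since ls-remote is network-bound like clone, the median over several runs is a more stable comparison point than a single measurement.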