Okay, just one second. So the first agenda item for today, after the demo: I wanted to tell you that I have raised the PR for the blog post. I'd like you to review it so we can publish it as soon as possible. Apart from that, what I'd like to discuss from the demo is Oleg's comment about the scope of our benchmarking strategy. His suggestion, as far as I could understand it, was that we should also consider the dependencies the Git plugin relies on, like the Credentials API, when we're benchmarking the plugin. I'd like to discuss that, because we can use this meeting to plan what we want to do for the next phase. So first, let's discuss what Oleg was saying, Mark. I actually haven't explored the dependencies of the Git plugin, so that much I could understand, but could you explain more about how we could use his suggestion in our project?

Yeah, I think so. Audio check first: can you hear me okay? I'm not sure if my internet is well behaved. Okay, great. I believe his concept was that there is already a layer in Jenkins at which things operate, for instance in a job, and you see it when you run a job: the output shows where it does a git version command, a git config of something, and so on. I think what he was expressing is: could we benchmark the aggregate of all those things together and get useful information, with the intent of looking for things we could stop doing or do differently? As an example, in the checkout process, when you watch the commands being reported, one of them is a git config. So it's actually calling command-line git to do a git config. That is an obvious target for a JGit transition so that we don't have to fork a process; Java is great at opening files and writing content to files. But it's not clear to me whether that would actually help at all, and I think that's what he was looking to explore: is there an interesting difference if we were to put a benchmark at the level of something above, in a pipeline step? So, checkout scm: if we benchmarked checkout scm in a pipeline and said, hey, let's compare checkout scm for CLI git against checkout scm for JGit. For me, though, the number of variables there explodes. It becomes even harder to decode which variables are significant and which are insignificant.
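A minimal sketch of the kind of isolated, single-operation benchmark being contrasted here could look like the following JMH skeleton; the class name and setup are illustrative rather than the project's actual benchmark code, and the fetch bodies are left as placeholders rather than guessing at the git client plugin's API.

```java
import java.io.File;
import java.nio.file.Files;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.TearDown;

// Illustrative skeleton only: each @Benchmark method isolates a single Git
// operation so the measurement is not mixed with the other steps a pipeline
// checkout performs.
@State(Scope.Benchmark)
public class GitFetchBenchmark {

    File workspace;

    @Setup(Level.Trial)
    public void prepareWorkspace() throws Exception {
        // Fresh temporary workspace per trial so every measurement starts
        // from the same state.
        workspace = Files.createTempDirectory("fetch-benchmark").toFile();
    }

    @Benchmark
    public void fetchWithCliGit() {
        // Placeholder: invoke the command-line git fetch implementation here.
    }

    @Benchmark
    public void fetchWithJGit() {
        // Placeholder: invoke the JGit fetch implementation here.
    }

    @TearDown(Level.Trial)
    public void removeWorkspace() {
        // The placeholder benchmarks leave the workspace empty, so deleting
        // the directory itself is enough here.
        workspace.delete();
    }
}
```

Keeping each implementation in its own @Benchmark method is what keeps the variable count small, which is the trade-off being weighed here against a higher-level checkout scm benchmark.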
As far as I've researched micro-benchmarking, the general advice is that micro-benchmarking means isolating a single point in your operation and testing that, because if we try to benchmark the whole operation, the number of unknown variables grows. I did consider timing the whole process as well; I discussed it at one of the meetings about the scope of the benchmarking strategy, that maybe we could also write some benchmarks for the Git plugin itself, because that information might be more useful for users. The benchmarks in the git client plugin measure the same low-level operations, and the way git fetch works does not change much, so those benchmarks serve their purpose once: we run them, we gain insights, and then we use those results. But if we had benchmarks at the level of Git plugin operations, which involve a combination of Git operations, like the SCM checkout, then if we could benchmark that on each build a developer creates against the master branch, it would be very useful to them: they would see whether a change increases or decreases performance. But at that meeting the mentors decided that, for the current scope, that could be something we do once we achieve what we want right now, not something we're looking for yet. So if it's something we want to consider, maybe we can, but right now I think the first issue is that I need to be more confident that the benchmarks I write for a single operation give good results, and that we're actually able to use those results in the plugin. Once we exhaust those avenues of exploring single operations, and if we're not able to find any useful insight to improve performance, maybe we can move up a level. Would that be a good strategy?

That suits me just fine. I think you got it. What I took from Oleg's suggestion was that if we had completed the other objectives we had set, we could go to a higher level. I don't think we've completed those other objectives yet, and I think there's much to be learned still at the level that had been defined. Not to discount him; it's very dangerous for me to ever discount the genius I work with. He is a delight; Oleg is absolutely brilliant. But I think we had a good plan. Let's continue that plan, and as we learn more we may broaden it and say, oh yes, he had the right suggestion, let's go to a higher level.

Okay, that sounds good. So the second thing from the demo: the PR for the fix for the redundant fetch issue was merged, but Mark also requested an opt-out global switch, which would be helpful for some users if our logic turns out not to be correct. Before discussing that issue, I'd also like to ask: should we start tracking our issues with JIRA?
Why I suggest this is because when we were discussing this issue, I mistakenly thought that when we talked about the global switch, we meant the switch we would implement for the performance improvements once we have them. So when you asked, after merging the PR, for an opt-out switch for this fix as well, I was not aware of it; I forgot, and I did not track it. One more thing with this issue: sometimes we discuss what we're going to do and I raise a PR, but maybe not everyone is involved with that PR. So, for everyone here to follow what is happening with an issue and its sub-tasks, and so people can go and update it, I could track things better. It might be a good idea, from the second phase, to track the issues via JIRA. But if there's any concern with that, or something you'd like to discuss about it, please do.

I love that you've suggested using JIRA; that's great, and if you think it will help you in any way, you have my wholehearted support. I flinch only because the Git plugin is among the highest owners of open issues in JIRA, behind only Jenkins core, the Maven plugin, and Blue Ocean. So you're not helping my warm fuzzy feeling that says, oh, the plugin is improving. But you're proposing a really good way to track our work, so no problem, let's use JIRA. Absolutely.

Okay, Mark. So, about the opt-out switch I was talking about: I implemented it and raised a PR for it, which Mark has seen. But with the implementation there is one problem: it's not persisting across builds. I configure it for one build and start the build, and the switch works as it should. But once the build is complete and I go back to the Jenkins configure page, the checkbox should still be checked if it was checked the first time, and that's not happening. I did not get a lot of time to look into it; do you have any pointers, Mark? Is it something in the code? Do I have to add anything else?

Yeah, I think if you were to do a side-by-side comparison between the pull request that I just merged from Bart Deverent and your pull request, you'll see a very simple difference in terms of the API naming convention. That naming convention is the crucial thing here, because a mismatch breaks the coupling between the user interface and the getters and setters. And, tragically, it's absolutely silent when the connection is broken; it doesn't show you anything, it just says, oh, you tried to set something and Java ignored you completely.

Oh, I think that's the fault, yes. I think the mistake is that I was confused between "redundant fetch" and "second fetch". Because if we are giving an option to the users, I feel we should not call it a redundant fetch, since it is not redundant for them. It should be called a second fetch, but we had been using "redundant fetch" for a long time, so there's a confusion in the code. I'll change it; I'll look it up.

Okay. Well, your attention to the fact that we don't rename things casually is actually quite important.
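A rough illustration of the naming convention being described, assuming the switch is exposed as a global setting: Jenkins form data binding silently ignores a getter/setter pair whose names do not match the field referenced in the Jelly view, so the value is never persisted and the checkbox comes back unchecked. The class and option names below are illustrative, not the plugin's actual symbols.

```java
import hudson.Extension;
import hudson.model.Descriptor;
import jenkins.model.GlobalConfiguration;
import net.sf.json.JSONObject;
import org.kohsuke.stapler.StaplerRequest;

// Illustrative only: this class and the "allowSecondFetch" option name are
// stand-ins, not the plugin's real code.
@Extension
public class ExampleGitConfiguration extends GlobalConfiguration {

    // The Jelly view must reference field="allowSecondFetch"; the getter and
    // setter below must match that name exactly, or form submission silently
    // skips them and the checkbox state is never saved.
    private boolean allowSecondFetch;

    public ExampleGitConfiguration() {
        load(); // restore the previously persisted value at startup
    }

    public boolean isAllowSecondFetch() {
        return allowSecondFetch;
    }

    public void setAllowSecondFetch(boolean allowSecondFetch) {
        this.allowSecondFetch = allowSecondFetch;
        save(); // persist the new value to the global configuration XML
    }

    @Override
    public boolean configure(StaplerRequest req, JSONObject json) throws Descriptor.FormException {
        // Binds submitted form values onto this object by matching names.
        req.bindJSON(this, json);
        save();
        return true;
    }
}
```

With the field, the Jelly reference, and the getter/setter names aligned, checking the box, saving, and reopening the configure page should show the checkbox still checked.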
We haven't yet released the redundant fetch removal, so we can still freely rename things. Once we've released something, a public symbol in Jenkins becomes, tragically and unforgivably, part of the public API, even if you didn't want it to be. Your naming mistakes, my naming sins from four years ago, are still very much in the code, and they have to be, otherwise we break compatibility. So this is a great time for us to choose a good name like "allow second fetch" and get rid of bad names. Don't be shy. Once we release, we don't get that forgiveness anymore; we're stuck.

I understand that. Okay, so I'm going to do that now. After that, I was thinking about planning what we want to do for phase two. Alongside the planning, I wanted to discuss what we did right in phase one, what we did wrong according to you, what we'd like to continue, and what we'd like to drop; something in that format. Let's start with what you liked, because I think your review is more important for me to move forward; then I can give my review. Yeah, Mark?

I was delighted with your progress and with your engagement. Really thrilled and excited to watch the progress, excited to see the engagement. Very positive; love it. And that was shown by pull requests, by looking at code, and by your willingness to do interactive testing at the places where interactive testing actually was more effective. I like that. It's uncommon for a student to realize that interactive testing is sometimes the most valuable thing you can do at certain points in a project. Everyone says, oh, but I want to write code; no, sometimes we test. Well done.

That's an important thing. One of the important things I learned is that just writing a fix is not enough; compatibility is an essential thing to remember while you're writing anything for a plugin or a utility that is used by so many people.

Well, it's a fair point that in different environments the compatibility requirements are different, right? If you join an employer doing Linux kernel work, they care much, much more about compatibility than we even dream of. If you're writing brand-new blue-sky code, compatibility is not relevant; go as fast as you can. Okay, Justin?

Yeah, I would echo Mark. You're very enthusiastic. You take a task and take the bull by the horns, if you know that idiom. It's good to see that you take stuff on, do some research on it, and present some options, which is nice for us to hear. In any software development project, if you're working on a task, showing the other people on your team the options is always helpful, because they may come up with other things, but they also get a head start in thinking about the different options because you've laid some of them out. So that's nice. And you did a great job on the presentation.

Yeah, I'm trying. Thanks, Justin. Okay, Omkar, would you like to say something?

Yes, so I jumped in pretty late, but it is true: the last four weeks have been amazing. Your work has been more than pretty great, and yesterday's presentation was amazing.
And the best part, I felt, is that you're able to go with an approach, and if you find anything wrong, you're open enough to expose that fault in the approach. That's the best thing I've found so far. I look forward to getting more engaged with you in phase two; it's definitely exciting.

Okay. So let's go to the problems you think I should address. That's the most important part; I don't know why I orchestrated it in this order, but I actually want to know what my mistakes were during phase one that I should look into in phase two. Things like maybe the speed with which I'm taking on tasks, or the documentation, or anything else you feel.

One of my worries was whether we as mentors were redirecting you a little too comfortably: hey, let's go off plan a little bit, because we think this other idea looks interesting and attractive. And I wonder if, as mentors, and this is not your fault, this is really a mentor choice, it's very easy for me to get interested in the most recent shiny thing and go after it, when the plan we assembled was probably better than anything recent and shiny. So I think it's a point for us to be careful with each other, to make sure recent things don't receive undue focus compared to the big picture of where we were going.

Okay, that's great feedback. Justin, Omkar, anything?

Yeah, I would agree. I think I've probably given some suggestions that may have pulled you off course, but I think you justifiably took the suggestions and we stayed on schedule, so I think we were good; I guess we didn't go too far. The only other thing I can think of is also schedule related, and I'm not sure it's necessarily on you, but have we done a check-up on where we are on schedule, and whether we need to readjust any of the dates or the projects? Maybe that would be worthwhile doing periodically.

Yes, I think that's an important thing to do, and we haven't done it. Should I open the proposal where I actually proposed the goals and the way we were going to work, and then compare it with what we have done? I generally keep following that: I follow the agendas for the meetings and I also check against the proposal. So I think we are definitely on track, but I'm not sure of the timeline, like exactly which milestone we need to achieve by what time. We are on the correct track, though.

I'd say, if someone is tracking it, and that was already you, that's already good. We just need to check the dates, that we're not running too far behind schedule. And it's a software project, so you're not going to be 100% accurate about what you thought a month ago, or maybe even a week ago, about how long things actually take. That's just reality, so sometimes you need to readjust. That's where that cadence comes in: are we in the right spot?
Or do we need to readjust? It seems like we're moving pretty well.

Okay, yes. I like the idea of looking at the timeline; I tend to ignore timelines. So we may want to make it a systematic thing, maybe one of the sessions each week, where we remind ourselves the timeline was this: are we okay with deviating from it or not? Because I've been totally oblivious to the timeline, thinking instead about tasks and what I think we should get done. But you put effort into thinking about the timeline, and we would honor that effort by looking at it and deciding: shall we accept this as a deviation from the timeline, or shall we redefine the timeline, which is perfectly okay?

So, my expectation for phase one was that I would fix the redundant fetch issue, because that was something I attempted to fix even before GSoC, before the community bonding phase started. What I thought was that I would write the benchmarks, I had one benchmark, I would write more, I would try to discover more performance-related issues, and side by side try to find out how to implement fixes in the plugin. But I think where I've lagged is that I've spent too much time on benchmarks. My goal was to find a difference, even if there isn't one, and I'm not sure how helpful that is. Maybe it's helpful up to a point, but if I overdo it, I may try to see results where they don't exist. That's one concern I have with myself, and I'm going to change that when I'm doing the benchmarking. I also think I need to put a stop to how far I go with the results I have and, in parallel, focus on what's more important: having a system to add the existing results into the plugin. That's going to take a lot of time, because when I was planning the timeline I was not aware of how long a task would take. I thought coding a certain task would not take too much time, and I was not aware how much time the work after coding would take, that is, testing it and taking it to production. That whole process I was not aware of when I was writing the timeline. And with the git fetch issue, what I've seen is that it takes considerable time in this plugin, for this repository. So there has to be a fine balance between how much I research and how much I code; I feel this project is roughly half research and exploration and half using that to code things, so it's really important that I balance both. I think I could not balance that this phase: coding was less, and research, or figuring out ways to consolidate the results I have, profiling, and looking into how the operation works, was more. I do explore the code and try to understand as much of it as I can; it's not that I'm not reading code. But I believe I should have found my way to the heuristics. I was thinking that I would at least have a prototype by the end of phase one, which I could not, because I spent too much time on benchmarks.
Because I feel that even after spending a lot of time, the clear observations we have are from the benchmark results of git fetch, where there is a difference between JGit and CLI git, and with ls-remote there's not much of a difference. I spent much more time trying to find more things. To a limit that's good, but not at the cost of not providing a solution, which is ultimately what we want for this project. So that's one of the biggest concerns I have. With tracking, I think that would help me stay within the line and hold back how much I research for the benchmarks. So yes, that's what I feel about phase one.

Yeah, I think some of what you said at the beginning is just the nature of it: you're teeing everything up, bootstrapping all of this to gain advantages later. So it's probably a lot of "nature of the beast", and you don't know what you don't know.

Yes, that's true. Maybe with all the experience I've gained through phase one, it'll be easier for me in phase two when I'm exploring benchmarks or coding.

Yeah, I'm very interested in the idea of that sizing heuristic being a good coding task to include in phase two. I think it has real potential: it's real code, and it's something you can do while we're making progress on other things, so I like that a lot. Balance that, which probably has very specific, concrete pieces, with exploring whether there are any other operations besides fetch where JGit is substantially slower or substantially faster than CLI git. Right now you've benchmarked two or three, and one of the benchmarks says yes, clear difference, and the other says no, not a clear difference. The question is whether we should continue writing more benchmarks, or whether you'd rather focus fully on the heuristics, using what we've learned from that one benchmark. I don't have a good answer on doing both versus doing one; I'm open to either process, whichever you prefer.

Ideally I'd like to do both, because if we don't explore, then I'm not sure how much we could cover by the end of the project. So I would not like to stop researching different areas of this plugin, but then again I'll focus on the coding task as much.

So the last question I have: I'm going to either add the size estimator part of the code to GitSCMTelescope, or create a new class; I have to give more time to that thought. But since I would have to design a class, I wanted to ask what good design principles I should read about, or whether you could give me advice on how to design a class where I have heuristics, I'm not very sure of the thing I'm delivering, but I have some kind of an estimate. This is a very new type of functionality for me, so if there's any advice, or things you'd like me to explore before I write this class, that would help; I think if I read some good principles and then try to apply them, it'll be easier for us to review it and the process will be faster. I think that's the right way to do it.
I wish I could say I'm a good designer, Rishabh. I'm not. I'm hoping that Justin and Omkar are, because my usual technique is to do something very badly and iterate on it until it becomes somewhat less bad.

That's what I was thinking too, but I thought maybe there's something essential to know when we're trying to create a class like this.

GitSCMTelescope, for me, feels like a great pattern and a great place for you to explore. I think that is exactly the concept, and if you read the SCM API documentation as written by Stephen, he has a page for consumers of the SCM API. You can read more about his strategy and why he did GitSCMTelescope, what his concept was. That consumers documentation on the SCM API is a really, really good read, and there you're reading from somebody who actually is a good designer, as opposed to listening to me, who, as we know, is not. I'm pretty solid at maintenance programming and pretty solid at testing; I'm not really great at designing new code.

Super good. I think what you hit on is kind of what I was going to say too. Different code bases sometimes tend toward different practices; Java generally has good practices, and there are books you can read on this. But I'd agree with Mark: take a look at what you have for this, take a look at other classes in here that do something somewhat similar, and you'll potentially be able to borrow some of those ideas, but also combine that with iteration. As many times as I've tried to design classes, I usually find something when I'm testing or writing unit tests that makes me change things anyway.

Okay, that sounds good. I'll start by exploring that first: the GitSCMTelescope class and the consumers documentation. Then maybe I'll come up with a prototype and we can discuss the design. Maybe it's not the right time to ask the design question.

I wish it were; I wish I had the answer. But anything I give, I would be making it up, and I'm sure what I would be making up is of much, much lower quality than what Stephen has already described.

Okay, but I think it's great that we already have a class designed for a similar purpose, so I should look at that first. Okay, just one last thing before ending the meeting. I was always planning to document the process we've gone through for the benchmarking, and I never did it. What I was thinking now was to create a document that basically acts as a repository of the results we have, a public document with the results, the observations, and the process we've gone through for each of the operations. Even if we implement some features using those observations, it's important to be able to see where the observations came from, so that if there's an issue in the future we actually know how we got to that point. I would consider that a very essential task.

Contributing.adoc is your destination. Yes, Mark. Great. Okay, thank you all for giving your time.
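As a sketch only of the kind of size-estimation heuristic class discussed above: the class name, the threshold, and the direction of each recommendation are invented for illustration and are not a design decision.

```java
// Hypothetical sketch of a sizing heuristic; all names and values are invented
// and do not reflect the plugin's eventual design.
public class GitToolSizeEstimator {

    /** Size in KiB above which this sketch would recommend command-line git (illustrative value). */
    private static final long LARGE_REPOSITORY_THRESHOLD_KIB = 5 * 1024;

    public enum Recommendation { JGIT, CLI_GIT, NO_PREFERENCE }

    /**
     * Returns a recommendation based on an estimated repository size, or
     * NO_PREFERENCE when no reliable estimate is available. Where the estimate
     * comes from (for example, an SCM-specific source) is intentionally left
     * out of this sketch.
     */
    public Recommendation recommend(Long estimatedSizeKiB) {
        if (estimatedSizeKiB == null || estimatedSizeKiB <= 0) {
            return Recommendation.NO_PREFERENCE; // no estimate: do not override the user's choice
        }
        if (estimatedSizeKiB > LARGE_REPOSITORY_THRESHOLD_KIB) {
            return Recommendation.CLI_GIT; // illustrative choice for large repositories
        }
        return Recommendation.JGIT; // small repositories: avoids forking a process
    }
}
```

Returning no preference when there is no reliable estimate would keep the user's explicit tool choice authoritative; whether something like this belongs in GitSCMTelescope or a new class is the open question raised earlier.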
Just a second: are we planning to include that OS-level testing, the environmental testing Mark mentioned yesterday, in phase two?

Yes. With benchmarking, as I've already mentioned in the presentation, we will widen the scope. I haven't discussed the benchmarking plan in detail; I think we can discuss it in the next meeting as well. With benchmarking, I want to focus on the repository structure, that is, the commit history, the number of branches, and other parameters, keeping the size constant if that's possible. The second thing would be different platforms: I haven't focused enough on how differently the operations behave on Windows versus Linux. I'd like to focus on that, and once I have observations from that experiment, I would move on to the suggestion from Mark to run it on the project infrastructure, on ci.jenkins.io. Yes, Mark?

Aren't we already running the JMH benchmarks that you've created on different platforms on ci.jenkins.io now? We can't see the results nearly as conveniently as what you've shown in your environment, since we don't have the JMH plugin and we don't have that benefit, but they are running. So I think we've already made progress on it, and yes, we can make more.

So the progress I would have to make is to analyze them. I did not take both of those side by side and analyze them; that's something I missed. Yes, I would do that, but Mark, it's a small thing, I'll discuss it in the Gitter channel. I think the time is over. Thank you guys, thank you so much.

All right, thanks everybody.