Welcome. This is the 22nd of March 2022, and we're having a Google Summer of Code brainstorming session on the git cache maintenance project idea. And we should really call it that, to make sure it's correct. So the idea is that the Jenkins git plugin has many caches that it maintains on the controller. And those caches, by their nature, sometimes become suboptimal, because git operations are not focused on maintaining long-term optimization; they're focused on short-term performance. And so this idea is: hey, let's find ways to automate the process of maintaining those caches and keeping them healthy.

So, I was thinking and scribbling about something. Now, you'll notice my lovely user interface picture; this is such a beautiful picture, I know you all wish you did user interface pictures like this. The idea is on the Manage Jenkins page. So let's bring up a real Jenkins and look at it, so that we can see how real it is. Okay, so on the Manage Jenkins page here today, there are these things like the label implications, and like Configuration Slicing, and like Configuration as Code; each of them is its own sub-page, if you will. And I was thinking: maybe this git cache maintenance belongs in some sort of a sub-page of Manage Jenkins like this. So that was the first idea. Now, to the rest of you, does that make sense to you, or is there something you would recommend instead: "no, it'd be better if we did it this other way"?

Mark, one of the questions... Yeah, I think definitely we should have a separate page, because I was going through the git maintenance documentation yesterday, and I saw that there is a lot of behavior that is customizable, right? And we would want the user to be able to have that in a separate page instead of doing it in, let's say, Configure System or the global configuration. But my biggest concern, which I saw in the document as well in your ideas, was that having a page where it would be global settings, right, would be like a system-wide configuration where all of the repositories would have the same configuration for maintenance.

This is at least my assumption: I think what you're highlighting is that there may be cases where I need to do repository-specific configuration. For example, I know the Linux kernel needs different cache maintenance operations configured than every other repository in my system, because that Linux kernel repository is enormous. Is that sort of what you're alluding to, Richard?

I have two concerns there. One is: how is my git executable chosen when I'm running this command? I mean, I was looking at git maintenance start, and when you do that, on whatever repository you're doing it, it's going to pick a git executable on the basis of that repository. In, let's say, a system where we have multiple executables, then how is that going to happen, considering the fact that different git versions are going to limit or, you know, give us the ability to perform various tasks?

Good, good point. Okay, so let me... for those who don't know, Rishabh is showing his incredible value here, having done that project two years ago. So, if JGit is selected, let me highlight this one: if JGit is selected, then the controller process address space, the controller memory footprint, will increase.
So JGit is performing the operation inside the controller JVM, as one example, right? Whereas if command-line git is selected, then a separate process is run and the memory footprint shrinks, or rather, the memory footprint is not inside the controller's JVM. So now, back to your question: how is the git executable chosen? Wouldn't you think that would need to be some sort of a global setting, saying "I want to use CLI git" or "I want to use JGit"? Tell me more of your thinking, Rishabh.

Yeah, I agree. When we're talking about a global configuration, we need to make sure that we are consistent with what we choose in, I believe, the Global Tool Configuration page, when we're trying to choose the git version and the type of git implementation that we want to use.

Okay. So when I think about Global Tool Configuration, what it presents to me is possible git implementations, but it doesn't really choose one. Right? It presents: I've got one I named "git windows", I've got another one I named "git-2.11.1". Any one of them I can choose, but none of them is selected, if I recall. Maybe I'm wrong; maybe "Default" is selected as the default, I don't remember. Good question. Alright, so it's a valid thing to say: I think if we're setting it on the page, then we would expect all tasks on the page to use that version, that git tool. Okay.

All right, so before we go further with that: any questions from others around that topic of how the git executable is chosen?

I was wondering: is it possible that we can have something that uses both JGit and the CLI?

Good question. Okay, so let's put that down: are there cases where it would be useful or helpful to use both JGit and CLI git? Let me give a hypothetical. A hypothetical would be something like what Rishabh's project did two years ago, which was: what if JGit is significantly faster at some operation, and what if CLI git is significantly faster at another? Rishabh found by benchmarking that with large repositories, CLI git is significantly faster for fetch operations. Rishabh, did I say that correctly? Yes, yes.

So it's a good question: should we consider the potential that we might need to do some performance-based selection? This repository we know is this size, and we've got in our toolbox both JGit and CLI git version such-and-such, and we've run benchmarks previously that tell us, with that repository size or some such characteristic, we should choose this implementation. I think it's a valid thing. Now, for me, performance optimization is usually a late-stage thing: the implementation is working and delivered first. So, for instance, we didn't do Rishabh's project until the git plugin had existed for over 10 years, I think 2007 to 2020. Yeah, it was an over-ten-year-old plugin before we actually applied Rishabh's optimization. So I'm not terribly worried about this optimization, but I think it's a valid question to ask. Did that address your question?

Yes, and sorry, I wasn't sure: was that Hrushikesh, or was that... Oh, me. Okay, thank you. Thank you.

All right. So we've talked about choosing the git executable and possibly choosing to mix implementations. Any other questions that we want to raise around those sorts of topics?
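For reference, the git maintenance registration flow mentioned above looks roughly like this (a sketch; the cache path is a hypothetical example):

    # Inside one of the controller's git caches (hypothetical path):
    cd /var/jenkins_home/caches/git-3fd0a2
    # Record this repository in the global config key maintenance.repo:
    git maintenance register
    # Also install the platform scheduler entries (cron/systemd/launchd):
    git maintenance start
    # Or run a single task on demand:
    git maintenance run --task=gc

Note that whichever git binary runs these commands is the one that does the work, which is exactly the executable-selection question raised above.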
Yeah, the second concern that I have with global configurations is about the tasks that are going to be performed by git maintenance. Some of those tasks are correlated with the size of a repository. So there is a possibility that I don't want to run gc, let's say, for a huge repository on the interval that I've set in the global configuration, because I know that for that repository the gc operation will take a lot of time.

Do you want there to be an override somewhere? I think that would be possible, I mean, having a global configuration and then a way to override that configuration per repository. And I think that's a very good question: what mechanisms can we give the user to provide finer-grained control of the maintenance tasks? Because I think you raise an excellent point. Garbage collection on the Linux kernel repository takes a very long time; it is very CPU-intensive and very memory-intensive. With command-line git it will use every core on your system, and if I remember correctly, it's willing to use almost as much memory as you give it. The Linux kernel people are not at all shy about using memory; they think memory is something that should be used.

It's a very good question: what might we consider? So, one might be override rules, maybe, or override settings based on repository size. I mean, we could call a user-provided shell script to decide if this thing should be run or not. We could call a user-provided Groovy script; since this is system-level stuff, it could be doing system-level Groovy. There's a little bit of danger hiding there, but we could. Any other ideas on mechanisms to provide fine-grained control of the tasks? How about this: we could just have a repository-based exclusion list. So: if the repository URL is git://kernel.org/.../linux, don't gc, or maybe only gc monthly, something like that. Other ideas?

So do we absolutely restrict the user from gc, even, like, once a week or twice a week, or do we just strongly warn them that it could be eating a lot of memory? Good question. For me, I've generally preferred, in the past anyway, to allow the user to choose to do it, and where necessary offer them a warning, or even better, offer them hints if things are going badly that would tell them why things are going badly. So should we prevent users from doing certain tasks? My thought was no, but I'm open to differences there. Right, the git maintenance man page definitely says: we intentionally do not run gc as part of maintenance, but we allow you to decide that you will run gc. That may lead us to this next question: which tasks should we enable by default, and how would we decide?
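On the override mechanisms just brainstormed: command-line git itself already exposes per-repository knobs via the maintenance.<task>.enabled and maintenance.<task>.schedule config keys, which could sit underneath any Jenkins-level exclusion list (a sketch):

    # In the Linux kernel cache only: never run gc as a maintenance task
    git config maintenance.gc.enabled false

    # Or allow it, but only on a weekly cadence instead of the global one
    git config maintenance.gc.enabled true
    git config maintenance.gc.schedule weekly

Valid schedule values are hourly, daily, and weekly, so the "gc monthly" idea from the example above would have to be scheduled outside git maintenance itself.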
So, I was reading about commit-graph as a task, and I got to know that there's a setting which is not enabled by default, called fetch.writeCommitGraph. The way commit-graphs work right now is that whenever gc runs, it updates your commit-graph. And the amount of time that it takes to update the commit-graph depends on the number of commits that have happened in your repository. So if you have an active repository, and this setting, fetch.writeCommitGraph, is enabled, then each git fetch has to update the commit-graph, and that could potentially slow down the git fetch operation itself. And I believe this is not enabled by default, according to the man page. So we need to look at the individual tasks that we are enabling by default and then see how they're going to, you know, affect the existing user behavior, or whether they're going to affect the existing behavior at all.

Well, now to take that theme: how could we make the information about that task available to the user? What if we gave them an entry on the UI, something like this: "update commit graph", down here, and one of the data points we show them is a trend graph that shows how long that ran on their repositories. And hopefully they look at the graph and say, oh, wow, here's this repository where... no, no, that's maybe not good enough, is it? Because your point, Richard, was that if I don't update the commit graph, I may get slower performance from git fetch.

Yeah, I mean, if I don't enable this setting, and I have a large, active repository, then there is a potential of slowing down the git fetch operation itself. That is what the man page says.

Can the git prefetch command help in this case? Because prefetch would, you know, do the fetch operation beforehand.

Well, I thought that Richard's concern, the way he was describing the commit-graph, let's see, git commit-graph... I thought it was that when the fetch is performed, then it does the update of the commit-graph. Did I understand that correctly, Richard? Yes, so if you search for fetch.writeCommitGraph... There's a setting which is not "fetch"... no, it would be written in camel case. Yeah, okay. Okay, here's commitGraph. Interesting. Okay, so maybe I'm on the wrong page, Richard. So here's this core.commitGraph; is that what you were referencing? This is what I think enables commit-graph as a global setting. Okay. Okay, so what we're looking for then is use of the word "write". Let me send you a link. Oh yes, absolutely. Oh, it's described in git-config, not in git-commit-graph. Okay, very good, thank you. All right. And it says: set to true to write a commit-graph after every fetch that downloads a pack file. If the split option is used, it will write a small one, and occasionally they will merge, and writes may take longer. Interesting.

I believe, I'm not 100% sure, but there is a way to chain the commit-graphs so that, you know, they take the delta; they don't update the whole commit-graph on the basis of every fetch if you have that option, but if you don't, then they're going to write it every time, which is a costly operation, right? See, I don't know how costly it is, but I think it's worth us just doing some exploration; they chose to disable it by default. So it's certainly a cost that I'm not paying at all right now. Right, when I do a fetch, none of the git repositories that I handle are doing this. And yet, I do git log --graph all the time. And it says... wow, I probably should turn on fetch.writeCommitGraph, so that my log --graph calls are faster.
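The setting being read there is a one-line config change; a sketch of trying it on a single repository (no Jenkins assumptions, just command-line git):

    # Write a commit-graph after every fetch that downloads a pack file,
    # so history-walking commands like `git log --graph` stay fast:
    git config fetch.writeCommitGraph true

    # One-time rebuild of the commit-graph for all reachable commits:
    git commit-graph write --reachable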
I mean, my point was just that we need to look at each of the tasks and the settings that they provide, and then think about what strategy could be implemented: which ones to enable by default and which ones not to. Right, right, very good. Okay, the idea being: hey, should commit-graph be enabled by default? Now let's look at git maintenance. Okay, so git maintenance... this one. So commit-graph is enabled by default, right, and it's scheduled to run hourly. So if you register the maintenance, it will run every hour.

I think this might be something that was implemented in one of the later versions of git. I read this one article where it mentioned that version 2.24 of git introduced a new thing in the commit-graph called a generation number. It significantly reduced the number of commits that it needed to read through; I think it used something like Kahn's algorithm and computed the in-degrees while it was traversing the graph. But after the generation number was implemented, it got a lot more efficient. So before that version, I think the commit-graph was very inefficient; so if we were to run it for something like CentOS, I think for those cases it might be inefficient.

Oh, that's a good insight. Okay, so what you're saying is, there may be versions of command-line git where these settings should be quite different. Ah, okay, that's very wise, because, to your point, commit-graph may not even be available in some of the command-line git versions that we run, and may not help even if it were available, right? Because if I'm using a command-line git implementation and that implementation doesn't know anything about commit-graph, it certainly can't use it. Interesting. Good. Okay, very good.

And earlier it used to do a commit-graph update while it was doing the gc tasks, so the rationale there was that, compared to the gc task, the commit-graph won't take, you know, much of the operational time, so they clubbed them together. Okay, and that makes sense to me, at least. It's like, yeah, garbage collection is very expensive, right? It's doing recombining, and then it does this large compression operation, and compressing files is almost always very expensive. So, yeah, that makes sense: you could easily hide a small operation like commit-graph inside all the time you're spending doing garbage collection. Good. Okay.

All right, well, so then... go ahead. No, I just... yeah, so we probably need to decide what kind of strategy we're going to implement on the basis of that. Yeah. So, for me, on the task selection: I'm going to propose an idea, and let's test it as an idea, and then we certainly can throw it out. My initial thought was, let's test the task selection priority; here's my proposal. Okay, so I think prefetch has the most... my words, Mark thinks prefetch has the most opportunity to improve things, because of what it avoids. How would you say it... it reduces network traffic, which is very, very slow compared to disk traffic. Right, so one of the best things we can do is do less network traffic. This thing, when I'm doing a fetch from a repository that has already done a prefetch, avoids a whole bunch of network traffic, because the work has already been done; it's already been pulled in.
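A sketch of what prefetch looks like in practice, assuming a remote named origin:

    # Fetch from all remotes, but store the results under refs/prefetch/
    # rather than refs/remotes/, so remote-tracking branches are untouched:
    git maintenance run --task=prefetch

    # Inspect what prefetch stored:
    git for-each-ref refs/prefetch/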
I think this one should be priority one, the first choice: make sure that works and that we get good results. Now, if we're doing prefetch, then the next question is: okay, now we're potentially every hour, or on some interval, bringing in things that come in as loose objects; they come in without necessarily being well placed inside the repository. So should we then consider other things as second, as later priority? And now this is where I don't know which of the next ones should be preferred. Any insights to offer, anyone?

I feel the incremental repack should be placed after the prefetch. Can you tell us more? The incremental repack, basically, I feel it works like a, you know, B-tree, where all the objects are placed in a sorted manner in the MIDX file, and each object refers to a separate pack file. So it would be easier to search through the commits if we have incremental repack as the second option; that is what I feel. Good, okay, and I think that's a testable idea, and that feels reasonable to me.

So for that I just have one question about prefetch: what is it exactly? Is it just getting the updated references into a separate directory, or is it actually downloading the objects not yet present in the local repository? My understanding is it's getting the objects. So my interpretation of the way this is described is that it's doing the equivalent of a git fetch --all, but placing the refs in a different location, so that the repository state, for instance the master branch pointer, is actually not updated. It says this is done to avoid disrupting the remote-tracking branches. My interpretation of that is that prefetch does the fetch and hides the result of the fetch locally, in a way that git can find it, but does not update the remote-tracking branches. And for me this was an "oh, that's smart", because I would have just done a fetch. But the problem with doing a fetch is that somebody else may be depending on that cache staying in its current state, having its remote-tracking branches stay as they are. The thing that owns the cache thinks it has control over when the remote-tracking branches are updated. Did that answer your question, Rishabh?

Yes. I mean, so the subsequent question is: if objects are going to be downloaded and they are loose, then would we not want to run something that is going to put them into a pack? Well, I think that git fetch will, on later versions of git, actually place objects into additional pack files. So there's, if I remember right, this thing called the multi-pack index, which allows git to use multiple pack files. And if I understand correctly, that's a relatively recent, like within the last two or three years, feature of git. Does anyone else have experience with the multi-pack index that they can enlighten the rest of us with? Let's see if I can find multi-pack... oh, here we go. Look, okay. Okay, so: incremental repack. It uses the multi-pack index to repack the objects: first by calling expire to delete unreferenced pack files, and then by calling repack to combine several pack files into a single bigger one.
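Those two steps map onto plumbing commands; roughly what the incremental-repack task runs (a sketch, with an illustrative batch size):

    git multi-pack-index write        # consolidate pack .idx files into one MIDX
    git multi-pack-index expire       # delete packs whose objects are unreferenced
    git multi-pack-index repack --batch-size=2g   # merge small packs into a bigger one

    # Or let the maintenance task drive it:
    git maintenance run --task=incremental-repack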
So this feels, for me... okay, back to our question. I think this is what Rishabh was lobbying for, right, that incremental repack is a really good choice, very close to prefetch in terms of its value.

Mark, I'll share a link with you, okay? One minute, can you open it? Sure. So are you sharing it through the Gitter chat or through... oh good, you did. Okay, here we go. Perfect, here it is. Git 2.20 introduced a single file that consolidates all of the index files. This actually gave me an overview of exactly how this incremental repack works using the multi-pack index. Both of the commands, that is, the expire and the repack commands, are explained in this.

Okay, so this article... let's be sure we include a link to it. Okay: see the Stack Overflow article for incremental repack details. Very good, excellent, okay. Excuse the muting. Okay, very good. So what this is telling us is: it's not especially healthy to have many pack files. Yeah, okay, here we go: this is talking about, let's say, the Linux kernel and multiple pack files. They can cost time, but we may not be able to repack into a single pack file, because it just takes too long or consumes too much space. And so what this is offering us is the multi-pack index, and we get that by doing the incremental repack. Correct, Hrushikesh? Yes, yes, Mark. Okay. Good. All right.

Okay, which feels like it gives us a strong reason to say yes to prefetch and yes to incremental repack; those should both be on, just as they are in the git maintenance defaults. And now I assume we've got a challenge there; maybe I should make a note here: we need to assess the operations, and their results, based on the different versions of command-line git, right, because the multi-pack index looks like it requires at least git 2.20. Okay.

Sorry, one question that I have is that there is also the loose-objects maintenance task, right? So, I guess this is more of a confusion for me: if we're doing a prefetch, are we introducing more loose objects into the local directory, or are we introducing more pack files that need to be repacked? And I think that with current versions of git... we could test it really quickly, if you're okay with me running a test. Let's just go do a quick look to see.

So, I happen to have a repository that is rather large. Let's go look at it, just to see. Yeah, there we go. This directory has a copy of it, and let's see what a mess it is. Okay, so here's something; now what's in its pack directory? Yeah. Okay, here's a terrifying example: this is a 100 or 150 megabyte repository; we use it to test all sorts of awful things. And what you see here is an embarrassing number of pack files. Right, in an ideal world there really should be an .idx and a .pack and that's it, and this has many, many more than that, and it's got all sorts of loose objects. Now, if I do a git pull... and let's count the number of those there are. So we have 62 files in that directory right now. Let's see if it has to bring anything in; okay, so it's bringing in some new content. It added four more files. So I think that indicates it did add new packs, not just new loose objects. Did that address your question? Yes. So we should be able to see that by doing this... yeah, notice here: something was changed February 12, and then there are four more things from March 22.
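For anyone reproducing that inspection, counting packs is a one-liner (a sketch; the .git path assumes a non-bare clone):

    # How many pack files does this repository carry?
    ls .git/objects/pack/*.pack | wc -l

    # Sanity-check the multi-pack-index, if one exists:
    git multi-pack-index verify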
And yeah, so here's a good indicator: notice the size of this monster. This file is 77 megabytes. It's embarrassing; nobody should put a 70-megabyte file in a git repository, that's sick and wrong, but that's what this one has done. This pack file is enormous, and there are other pack files that are pretty large; this one looks like it's 25 MB. So this is a big repository, and I suspect if I run git gc it will run for 15 minutes or more.

So, did that address the... yes, we're confident we want prefetch and we want incremental repack. And Rishabh's question, I think, was: do we also want to make loose-objects a standard part of it, like git maintenance does? Right, because git maintenance has chosen to run loose-objects daily, less frequently than prefetch, but much more frequently than gc. Yes, and if you're choosing to perform loose-objects, then we would do it before the incremental repack, right? To have more pack files first, and then repack them. Good, good point. Yeah, let's see: it says it cleans up loose objects and places them into pack files.

So, okay, I'm going to try something here in that repository. So it's got some stuff in objects; are they loose objects? I don't think that... yeah. How about maintenance --task equals... oh, come on, there's got to be a way to do it. There it is: loose-objects, the loose-objects job. git maintenance run... oh, thank you. Right, of course; clearly I don't have enough experience with this, do I? Okay, git maintenance run, maybe it's --task... oh, yep, okay, that was it. And now, what did it do? The loose objects are still there. Except, did we get a new entry? Let's see: ls -altr .git/objects/pack. We had 66 before, and now we have... oh, and look, there it is: a loose-….pack. Okay. So, back to their comment: they said, hey, we're going to do loose-objects, and it's going to create a new pack file, but it did not, apparently, delete all the other things; it left them around. So there's a pack file for the loose objects, but the loose objects themselves still seem to be there. Interesting. Okay, cool. Now, I assume git must be able to use that loose pack. Okay, good.

Rishabh, back to your question: are we answering the question that you had about how we approach it? Yes, I think we would want to run prefetch, then loose-objects, and then incremental repack. I see what your point is. Okay, but now let's test that: they run incremental repack and loose-objects daily, but they run prefetch hourly by default. So should we be considering the same idea, running prefetch 24 times more frequently than incremental repack?
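The loose-objects experiment above can be reproduced like this (a sketch):

    # 'count' in the output is the number of loose objects:
    git count-objects -v

    # Pack loose objects (up to a batch limit) into a new pack file;
    # as observed in the demo, the originals are not deleted immediately,
    # a later run prunes the ones that are now packed:
    git maintenance run --task=loose-objects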
I have a doubt here: would this incremental repack even consider loose objects as part of it? I thought that it did not. Didn't it say that it only does the multi-pack index: it expires unreferenced pack files and then combines pack files? So I would think incremental repack does not do loose objects. That would support Rishabh's argument that we should do loose-objects and incremental repack as two steps, close to each other, one right after the other. Is that what you were asking? My question is: would the multi-pack index file consider the loose-objects pack file which has been generated? Oh, oh, I see what you're saying.

That's a good question; I don't know. Well, let's try it and see. We've got a task here; so, we just did loose-objects, now let's do incremental-repack. Now it added three more files: this multi-pack-index, and two more pack files. Now, I don't remember seeing a multi-pack-index in the list at all before; let's see, and the count of files went up by three. So we got a pack, and the multi-pack-index did not exist before I did git maintenance run --task=incremental-repack. So I've been running with a suboptimal setup, because I wasn't using git maintenance at all. Oh, this is really great, thank you. You're all wonderful to be teaching me more about git, thank you.

Okay, so I think what that's saying is: man page review indicates that loose-objects should be run as well. And there it's daily, and this one, incremental repack, is also daily, while prefetch is currently hourly in the git maintenance defaults.

There's also another command, directly below the incremental repack, called pack-refs, and that collects the loose reference files. All right, next, so this one... oh, okay. Now, we saw that loose-objects created entries in the pack directory, loose-dash-something; what you're thinking is that this may actually create them as real packs. Is that correct, what you're saying? Yes. So let's try that: pack-refs. Okay, that was very fast, and it still seems to have left the loose objects in there, if I'm seeing that correctly; there are definitely loose objects in that directory. Okay, so what was pack-refs doing? It says it collects the loose references, that is, branches and tags by default. It's not for the objects; that's what's specified when you open that link there. Okay, so this optimization is not for the objects, right; the words "reference files" are very important, I think, is what you're saying. It used to store one file per ref in a directory, and if we look, I think I can see that. Yes, now if I go... let's go up; we've already packed this one, so let's try a different one. What's a good bug report... how about... okay, so I'm not seeing... oh, if I look in tags, there are a bunch of tags. Now, if I go back to the master directory, I might expect that those would somehow be fewer, because it's made a small database of those, stored in some other location. Is that what you're telling me pack-refs really is creating, instead of one file per ref? Now, it is stored in the directory; there should be a file called packed-refs there. Right, so if I look there... and here is this thing that is some sort of a better representation than a single file per tag, or a single file per tag plus a single file per branch. Good. Okay.

All right, so, for me, pack-refs... now, my repositories typically don't have an enormous number of references. That 100-megabyte one that you were seeing probably has several thousand, might be as many as 10,000. But most Jenkins plugin repositories have far fewer than that; they have on the order of hundreds, maybe. Interesting. Okay. So, back to the question... now, wait a sec, they don't even list pack-refs here as a task. Yes, it's not in the incremental strategy. Oh, okay. So you think there's a reason for that. Interesting. It's not in the incremental strategy.
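The pack-refs behavior just demonstrated, as plain commands (a sketch):

    # Before: one file per ref under .git/refs/heads and .git/refs/tags
    find .git/refs -type f | wc -l

    # Pack the loose refs (branches and tags) into the single
    # .git/packed-refs file:
    git pack-refs --all

    # Afterwards, packed-refs holds the consolidated entries:
    head .git/packed-refs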
Okay, so does that suggest that we should probably only use pack-refs in very special cases? I mean, it is saying that it speeds up operations that iterate across many references, and I don't know how many of those we actually have. Okay.

So I was just saying that the operations that we would want to optimize are things like git fetch, I mean, the operations where there is significant network bandwidth usage. So I believe the priority of the tasks should be tuned toward optimizing those operations. Right. And this one, pack-refs, is not, as far as I can tell, a network-related one; it's not going to help network performance, it's not going to reduce network traffic or spread it out. So for me: assume it's not part of our default set. Is that a safe way to say it? Yes.

And could we do a benchmark on it? I mean, we already have the benchmarking framework within the git client plugin repository. So this is a test that maybe could be an improvement in the proposal, but it's an experiment that could be done; it should be interesting to see. Yeah, the challenge for me would be how we would do that benchmark, because of what it's doing.

Okay, so here we go: a repository with too many refs should pack all its refs with --all once, and then run pack-refs every so often. So I assume that would be git pack-refs --all, and then every so often run this. I think this would be useful if your git repository has too many branches. But how... I mean, if you have a lot of branches, what does it affect? Does it affect the git fetch operation time? How is it affecting things? Because if we go through the refs folder, we'll have a lot of branches, right, so searching through that, I think, would take a lot of time; if we do a pack-refs, everything will be put into one place. So, the reason why I'm stressing that is: when we did the benchmarks on git operations, what we found out was that the time it takes for a git fetch to happen is a function of the size of the objects that you have in your repository, rather than the number of commits or the number of branches or tags. That is what we found at that time. Okay, so if you think there is a way for us to demonstrate that the number of branches is going to affect the network-intensive git operations, then I think we should definitely consider it.

Yeah, and I think that's a valid point: pack-refs may be a later optimization that we consider. Or, what if we said: hey, one of the measures we take of repositories is the number of references? I don't know how we would get that, but if we computed the number of references, and the number of references was beyond some certain threshold, like they say here, right, "a repository with too many refs"... So if we did some measurement periodically and said, this repository has this many refs, some arbitrary number, 100,000; if it has more than that, we will at least once do a git pack-refs --all, and then automatically schedule it to run git pack-refs once a week, something like that. I don't know what that threshold would be, and I'm not sure how we would obtain that threshold and identify it, but it could be a way we handle this comment, what the documentation says. Yes, I agree. Interesting.
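If we wanted a measurement to drive that threshold, counting references is cheap; a sketch, where the 100,000 cutoff is just the arbitrary number from the discussion:

    # Count every reference in the repository:
    ref_count=$(git for-each-ref | wc -l)

    # Hypothetical policy: pack refs only when the count is very large
    if [ "$ref_count" -gt 100000 ]; then
        git pack-refs --all
    fi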
Okay, well, this has been a most effective session; thank you very much to everyone who's been here. I had wanted to limit us to an hour. What I'd propose is: would you like to do this kind of session again? Are you willing to have these kinds of discussions, and would it fit for you if we did it next week, or early next week? Would that be okay? Are you interested in that, or is this not interesting to you, and you'd rather just focus on other things? What's your feedback?

It's good to have these sessions, Mark, because, you know, I'm learning a lot about what exactly is required and how to proceed with the implementation. It's better if we have these kinds of sessions. Okay, good. Well, so, others... I like that; that's great for me. Do others have the same feeling? Yeah, I agree. The session has been really helpful in understanding and knowing how we can proceed forward and how we can look at things.

What I'd propose is, let's plan for an hour a week, if that's okay, and it would actually be a little better for me if we were willing to do it on the day when I'm already doing office hours. So would you be willing to do it Fridays, rather than on Wednesday like we're doing this one? We would just do it right after the Google Summer of Code office hours, or is that not a convenient time for you? So we would basically make GSoC office hours 90 minutes for you instead of 30. For me it's fine, Mark; I'm ready for it. That's good. And Rishabh, how about you? And Hrushikesh, would Friday work for you? Both of those times are right in the middle of your working day, and I apologize for that; India time and Rocky Mountain time in the US are different enough that it's always going to be complicated. You're okay? Yeah, it should be okay for me.

All right, so then let me take the action item to schedule recurring sessions immediately after the Asia GSoC office hours, and we'll try to meet weekly to discuss. So that means, let me double-check my calendar in just a minute to be sure I've got the right date... that means we would next meet on Friday, the first of April. Is that okay, or do we need to meet sooner than that? What time is that? So, Friday, April 1, it would be at 3:30 AM UTC, which is about 30 minutes prior to this time... no, no, I take it back, it's 30 minutes after this time. So what time is it locally for you in India right now? Okay, so it's 8:30 AM now. So the meeting then would go from 9:00 AM India time to 10:00 AM. Does that work okay for you? Yeah, it's fine. Okay, great. So that's the plan for that, and then we'll try the same thing the following week, and let's make some progress.

Thanks very much. I'll upload the recording of this, probably 24 hours from now; I'm a little behind schedule on recordings right now. Thanks, everybody, for your time. Thank you so much. Thank you. Thank you.