Anyway, I don't know what you guys talked about for the last five minutes, but I don't have much of an update. It seems like Josh and Derek are done with filming in Kansas. He had some observations about the performance and demo-ability of the unit right now, and I reminded him that it's a work in progress. Actually, all of his complaints are already on our timeline, which is good; we're all aware of the issues that are happening. In particular, he called out skill interactions, like one skill stopping and another one starting, that kind of stuff. Derek kind of gave us the rundown on that yesterday. So, well, you know, I get it, it's frustrating, but we're planning for it. Better than it was, right? I hope so.

I did file a sort of blanket bug report, because I didn't have the time to break up all of his issues into several Jira tickets, which would have been the correct thing to do. So instead I put together one Jira ticket, assigned it to Chris, and said, hey Chris, figure out which of these are yours and your skills', and pass it on to the next guy. You guys can just circle that around until all the issues are off that list. That went to Chris first. Some of those may be legitimate bugs. Some of them may be things we already know about; at least one is about a skill that isn't even in our essential skill list, so who cares. And some may be things that we're going to do but just haven't gotten around to yet. Exactly. Yeah. Some of them are definitely skill interaction things, or maybe even "it's just designed to work that way and maybe our design is a bad idea." So anyway, take a look at them, see what you think, extract out the tickets that you think pertain to your parts, delete them from the list, and pass it on.

Okay, so that's pretty much it for me. We're starting to see resumes come in for the job position, which is great, and we've had at least a few people complete the test, so I expect we'll maybe get a few more over the weekend, or maybe a bunch more; we'll see how much time people have. So that's good. We need to start budgeting some time for you guys to review those tests. I understand that kind of eats into your dev time, but you can manage that as you see fit. We'd like to respond expeditiously; I don't want it to lag for, like, a week, for example, but obviously it's not a drop-everything-right-now thing. Let's see. Yeah, we're getting pretty close to announcing our production partner; we've been plowing through the details of that contract. So that will be good to get out there, good news for everyone. And, okay, I think that's it.

Yeah, so let's go over the question. Yesterday we talked about how the Voight Kampff tests are still flaky; more specifically, seemingly random tests are failing every single time the suite runs. And that's not new, it's been going on forever. I mean, it feels like we got it cleaned up for, I don't know, a week or something, and then it started failing again. I think there's something endemic there, something underlying, because the things that are failing are pretty much random.
But they do appear to be skills like the timer skill, where the interactions we're trying to test are probably much more difficult than a quick one-and-done kind of interaction. Anyway, the reason this came up for me is that I'm trying to get the timer skill moved in, so I spent my morning on it. There were a couple of the timer VK tests that were broken; I fixed those. And I'm starting to look into the actual VK tests themselves. I did find a bug in VK, and there's a PR running right now that just failed for the first time on a whole new set of things. It's difficult, and it may be a hard one to pin down since the failures are seemingly random, but I'm going to spend a little bit of time to see if I can figure it out, because if we can't get this thing to run reliably, and only fail when it's supposed to fail, then it's not the tool we need it to be.

I agree with that. Yeah, that needs to be our priority; the whole point of that system is to make sure we only go forward, not backward. And I'm not sure it's actually the code that's failing. I mean, my timer tests all pass, but they're sometimes failing randomly in VK, so something's going on underneath the covers that we need to figure out; sometimes they fail in the CI too. My guess is, well, it does feel like they've gotten a whole lot more stable. We no longer get, well, maybe I should reserve my judgment, but it seems like it's roughly a three-times-fail thing to get a go. And that was just in the timer skill, and that was before we had all these changes included. But even then it was all in the one skill, as opposed to, say, telling the new skill to stop failing one out of 10 times, and asking the time failing one out of 20 times, and so on. I do feel like it's gotten better. It's still there, though, and I know that people in the community are super frustrated by it, because they're just like, what is the point of this thing if it blocks every PR that we want to put forward? So we can promise everyone listening from the community, we are very frustrated as well. But, as Michael said, it's about making sure that we're going forward and not adding regressions as we go. And once we do get it solid, it'll let us move a lot faster.

Yeah, it does need to be a priority; we can't really accept PRs unless they pass, right? So if the system is broken we have to go down that side road and get it fixed. I'm on the side road now. Okay. I made some progress today just by fixing one bug, but there's more to be done, I think. I mean, I was aware that there were issues with the system, but I didn't realize they were endemic. I thought it was like an issue would pop up and then you guys fixed it; I didn't realize we were in a state where you had to run the test five times to get it to pass, or something like that. You know, I thought a few weeks ago that we had it; this is the reason the XDG PR finally went in, is that we got the tests to pass.
Now, I don't know if that was a fluke or not, but they're just not reliable. And if it was the same scenarios failing over and over, I'd think, oh yeah, something's broken here. But this is just breaking here, then breaking there, two breaks this time and one break that time. That's not a problem with the code being pushed. Yeah, so my guess is the framework is broken somehow; there are bugs in the framework. My guess is it's all around message bus handling, and waits, and race conditions, that kind of stuff. We already fixed some of those. Well, you know, I do recall a discussion a while back that we may actually be revealing race conditions in the code which are legitimate bugs; there shouldn't be race conditions, in my humble opinion, the system should be well defined. But it's not necessarily just race conditions in the code, because there's a whole different message bus mechanism in the VK tests that does some different things. There are waits in there that maybe wouldn't normally be put in, and sleeps and stuff you wouldn't see in normal operation, so I don't know if it's apples to apples necessarily. But a lot of the problems I see come up when we start doing things like waiting.

And then it comes back to the whole sequential-versus-asynchronous thing too, because really all the VK tests kind of have to be sequential: you can't say "did this work" until this event pops up, or this thing is spoken, or whatever. So I spent a good part of yesterday looking at videos and reading articles about how to test event-driven architectures, because it's actually not an easy problem to solve. What we're doing right now is certainly one way to do it, which is to make it synchronous instead of asynchronous, but it's not necessarily the best way. Yeah, and if you've set up a lot of givens for a scenario, then you're kind of constraining it: okay, we need these initial conditions, so you have to finish this other thing first. So that forces it to be synchronous, but there may be a better way to define it. I mean, if you think about it, some of our stuff is synchronous: if you do a request, then x, y, z and a, b, c often need to happen in sequence. We just use an asynchronous architecture to do it, but it's a synchronous process. Yeah, which goes down a rabbit hole that we won't go down today.

Okay, so are you working on any specific areas, or are you mostly working on the VK test suite? Well, it started as getting the timer stuff in, you know, and I had some failures, and now I'm kind of looking in general at the VK test suite, and maybe mostly the helper functions we use, looking at those some more to see if there's anything in there that might be causing problems. It's entirely possible one little line fixes it all. Like the fix I just submitted; we'll find out. A one-character fix. Yeah, love it. Not even changing the character, just bumping it from here.

Yeah, I have been spending a fair bit of time responding to people in the community, in line with the frustrations I just talked about. People, particularly the open voice crew, are pretty frustrated about what they see as a lack of progress.
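A minimal sketch of the wait-on-an-event idea from the sequential-versus-asynchronous discussion above: instead of sleeping for a fixed interval and hoping the skill has finished, a test step waits for a specific bus message with a timeout. The `FakeBus` class, the `wait_for_message` helper, and the `timer.stop` / `timer.stopped` message names are invented stand-ins for this sketch, not the actual VK test API or real message types.

```python
import queue
import threading
import time


class FakeBus:
    """Invented stand-in for a message bus client, used only for this sketch."""

    def __init__(self):
        self._queue = queue.Queue()

    def emit(self, msg_type, data=None):
        self._queue.put((msg_type, data or {}))

    def wait_for_message(self, msg_type, timeout=10.0):
        """Block until a message of msg_type arrives, or raise on timeout.

        This is the pattern that replaces a fixed sleep: the step returns
        as soon as the event fires and fails loudly if it never does.
        """
        deadline = time.monotonic() + timeout
        while True:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                raise TimeoutError(f"no '{msg_type}' message within {timeout}s")
            try:
                received_type, data = self._queue.get(timeout=remaining)
            except queue.Empty:
                raise TimeoutError(f"no '{msg_type}' message within {timeout}s")
            if received_type == msg_type:
                return data
            # unrelated bus traffic: ignore it and keep waiting


# Usage: a "stop the timer" step waits for the matching event rather than
# sleeping and hoping the skill has finished.
bus = FakeBus()
threading.Timer(0.2, bus.emit, args=("timer.stopped",)).start()  # simulated skill
bus.emit("timer.stop")                                           # the "when" step
data = bus.wait_for_message("timer.stopped", timeout=5)          # the "then" step
print("timer stopped:", data)
```

The same shape, emit and then wait against a deadline, is what most write-ups on testing event-driven systems suggest in place of sleeps, since it removes the race between the test harness and the thing being tested.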
And I think we need to spend some time unpacking it with them, because there must be more to it than just "these are not the features you should be pushing." What I'm thinking at the moment is that I'll do a blog post. I think we should put out a blog post that covers what we've been saying in the community, but does it in a much more "here, on our blog, is a thing we can point to" way: we really need to get this solid, stable, robust foundation in place first, before we put lots of shiny new features on top of it. And, just like with the VK stuff, if we can't trust the foundation, then every time we add things we're creating more seismic shifts, potentially adding more bugs, or even just adding more complexity, which makes the existing things harder to look at. Yeah, it's going to feel like slow progress in the interim, but it's going to create a much more reliable system that people can build upon.

I'm curious, are they expressing frustration at the same sorts of bugs that we are trying to fix right now, or that we have in our backlog for the next couple of sprints? Or are they not seeing those bugs, and that's why they're frustrated, because they don't see that we're actually working on something that seems important to them? I think a lot of it comes down to communication. In their view, our internal team is working over here behind closed doors and not communicating. But I have to push back on some of that too. Like Chris's PR around using a particular way of emitting a message on the bus: that turned into "you're going to deprecate the skill API." I know there was some conversation around that, but they jumped very quickly to "you're pulling out code," which was never suggested and isn't going to happen. Or at least, if it did happen, we would have a big conversation about it and talk about why we were doing it, all that sort of stuff. So at the moment they're feeling like there is a lack of communication, and I feel like we've lost some of that trust with a particular group in the community, and we need to build that back up. I think I've said it on here before, but I want to try and pull back a little bit from active development, even though that feels like it moves us forward, and spend more time trying to facilitate this conversation. So yeah.

So there's that, but there's also the "here is a great feature, why hasn't it been merged? Why don't you even spend time looking at it?" And that might be a large feature or a very small feature. But all of that takes time, because we want to review things properly, we want to add tests to them, we want to add documentation for them. That all takes time.
And even if it's a relatively simple coding change, maybe it defines a particular interface that skill developers will use. Then we want to properly think about what that interface looks like, not just put the first thing that comes to mind into the code, because that then becomes the defined interface, and if we want to change it later, that's a pain in the butt for skill developers. So yeah.

Regarding that process, I guess I've been making an assumption which I now want to question. If a community member makes a PR for a particular feature, is the next step they're expecting that we will look at it and comment on it, or does the community at large actually weigh in on those things before we take a look? It's kind of a combination, but we haven't prioritized reviewing PRs in our time, so it has been largely the community commenting on them. I think the disconnect there is that the community can comment on PRs till the cows come home, but if we don't act on them, they don't ultimately get acted upon. Yeah, I guess my question was really more along these lines: when our team submits a PR, the way things are right now, that might be the first time the community sees that piece of work, right? It's their first exposure to it. So what we get when we submit a PR is the community reacting to our work. And then we work through that process. But it's not a one-to-many relationship, it's many-to-many, right? Any member of the community can submit a PR. So my question was whether the expectation is that community member A submits a PR, and then the rest of the community not just can but will look at it and render an opinion as to whether that new thing they've defined meets some standard of goodness, or whether there's an alternative way to do it, or there's no test for this, or whatever.

When a community member submits a PR, unless you're looking for it, you're probably not automatically notified that something somewhere has changed, right? So the only community involvement, I believe, is when community members know each other and communicate, like is the case with hockey and Jarvis, for example. But I don't know that if somebody else out of the blue submits a PR, anybody but us would become aware of it, or that they would get any feedback. So I don't believe there's an inherent expectation that the community would review the PR before it made it to us. I think the concern is more that they don't believe PRs are making it to us fast enough, or that we're feeding back on them in a positive manner. And it's nuanced, because not all PRs are created equal and not all of them deserve to make it into our code line. To the best of my knowledge, I've never seen anything nasty, nothing that says "this PR sucks and we're not going to allow it because it's bad code." But the question needs to be asked: how do we respond when that happens?
Because I haven't seen it, and I have seen PRs that certainly could be considered worthy of that kind of response. So I think this is a bigger issue of how we engage the community at a PR level. How do we communicate that some PRs aren't necessarily aligned to our roadmap and therefore aren't finding a high priority in our review process at the moment? I think a lot of it falls back to what Giesel is mentioning, which is that we're kind of singularly focused right now. We're trying to build a strong foundation, and unless a PR specifically goes towards that activity, it's probably going to be placed on the back burner for the immediate future. And perhaps he can communicate that in his blog post in a polite way.

Yeah, well, we've talked about this, maybe not in any of the recorded conversations, but there's definitely a desire, my desire, to share our roadmap with the community. And Giesel has made some efforts in that respect, experimenting with GitHub and some of the candidate features, outlining sprints and trying to give some visibility into what our priorities are in the near term. But I definitely want to expose more of that. We've got, I don't even know off the top of my head, like 10 or 12 sprints that are loosely defined right now around specific areas of concern, in terms of what we think the priorities are. I think there's no harm, and certainly a lot of good, in sharing at least the outline of what those sprints are, so that people know where certain things fall into our story. They can either give feedback on the list itself, or start to make plans around it: oh, okay, this is where my current submission fits in; or, oh, they need help in this area, I've got some ideas, maybe I can help out.

That's a good start, as long as the process is such that the PRs that are submitted get some sort of feedback that says, we're not looking at this right now because it falls under sprint 27, that's going to happen in late October, and we'll get back to you then. And then somebody should be tracking that, right, so that in October we do get back to them and say your PR was approved because it's great, or it was rejected, or we want to modify it, or whatever. There's a process behind that, right? Well, that's one of the benefits of using GitHub projects: once we do open it up, and one of my questions is whether we're happy for it to be open now, because it's never going to be "finished." But I figure we just open it up, as long as you're happy with the basic structure, and give it a go. Then people can see when tickets, PRs, issues, whatever, get added to that project board; their PR will say "this was added to sprint 25," and they can see that as well. It doesn't mean we don't also provide a comment that says, hey, we're not yet ready for this, we're going to push it, but we will get to it. But there is also that automated layer, which will be helpful.

Well, just to bear that out specifically for the relevant issue at hand: do we not have a GUI-related sprint coming up? Well, we're in one right now. And was the PR in question not a GUI-related PR? What are you talking about? The one that we discussed yesterday, with the unhappiness from one of our friends.
Oh, yeah, that one. So you're talking about the notifications stuff? I think that will come into the skill interaction sprint. Perhaps they were confused: they saw that we have a sprint for GUI and thought it was GUI-related. Well, no, it's a little bit separate, because they proposed that before we defined all of our sprints. It just fortuitously fell into that sprint, was what I was getting at. Yeah, yeah. Well, we looked at it during the GUI sprint and didn't believe it was applicable there, but interactions are coming up and we believe it's applicable for that one. Well, no, we'll detail that when we get to it. Whatever. I'm not saying that exactly. Yeah, yeah. I'm just saying that by categorizing PRs into potential future sprints, we open up that line of communication. Yeah. And also, it's not just "here is our list of what we're going to do"; it's "please suggest, please add things onto this if you think they're relevant," because it's entirely possible we've missed something.

Yeah. Something I'd like to ask: we've got, in this Confluence document, a list of all the sprints you referred to, the dozen or so that are loosely defined, maybe not that many but a bunch. Are all of those represented in the GitHub projects? No, I think I've just done the top five or whatever, the 30s and 40s, because they're a little bit more defined and written up. But I can extend that to the others for sure. Yeah, I think it's worth putting them all in there, so everyone's got at least a broad overview of things, even if they're not very well defined. They can see, oh look, they plan on working on this far into the future, but not until these other things are done. And, oh hey, look, skill settings is on there; that's a thing I think everyone will appreciate getting some work on. Yeah, sure. And subscriptions is in there, which I'm sure no one but us cares about, but hey, we've got to do it. And hey, if you want new features, then help out with some PRs and bug fixes, help us get it stable, and that'll help move everything along too. Right, which nobody really has the option of doing right now, because they don't know what we care about, where we're going next, or where the bug fixes are needed. Obviously, if somebody fixes a bug and submits it at any given time, then depending on how much work it is to verify that it's a real bug fix and a good bug fix, those could be going in at any time, I think. But if you're talking about being proactive, like "let's find bugs in the system" or "I've noticed things but haven't really reported them," then if people see that that's on our roadmap a few weeks or a month in the future, they at least have the opportunity to chime in on it. Yeah. I thought I'd try a renewed effort of going through and tagging things with "help wanted" and the like.
But it also comes back to what we talked about, whenever that was: if there are issues in a public repo, then we should really be logging that ticket in the GitHub repo rather than in Jira. Yeah, absolutely. Until we decide it's important to get those systems to coordinate, I think the one level of duplication we keep across GitHub and Jira is just the definition of the sprints themselves. Community bug reports stay in GitHub; we can start using GitHub for our own bug reports too, but our internal tasks can stay in Jira if it's easier for us to track them there. I definitely don't want to get into trying to replicate things across both until we have an automated way of managing that. So I'd like to keep the sprints defined in GitHub at a high level, with a clear definition of the scope and the intent and that kind of thing, and then let the community assign their PRs or their bugs against those broad things, and we can start to manage that. Okay. Well, I also did a big translation push yesterday. Sorry, that was many, many, many emails. No, it's good, I was happy to see it.

All right, Ken. So, regarding bugs: in a previous life, and I'm not saying we've reached this point, I ran a company where we had hit critical mass on bugs, and what I did was develop a system with bug bounty points and swag. Sometimes that works. Just throwing that out there, if that becomes an important activity. Today I went through my open tickets from the previous sprint and tried to close out all the Jira tickets. Most of the outstanding stuff was associated with wiki, or misinterpretations that were primarily corrected by the confidence level fix PR, so I reassigned those to the reporters to verify that once that gets pushed, those things work. And then everything has a story, right? The last bug that was open was related to wiki performance and timing out, and I spent the day remembering that I had already submitted a PR to fix this a month ago. And I don't care, I'm just saying I can do it again; there's a decision that needs to be made.

The problem with the existing wiki skill, if we recall, is that it's doing too much. It tries to support disambiguation, it tries to support "more," it tries to support auto-suggest. It fires up parallel tasks to hit wiki twice, once with auto-suggest on and once with it off. And then what it does is it goes and gets summaries first, selects one of those, then goes and gets the page detail, and yada yada yada. So the performance issues in the wiki skill are endemic to its design, and I've already gutted it like a fish once and fixed it. The problem is that with this architecture it responds anywhere between seven to ten seconds later most of the time, once in a great while five seconds. And there's an issue where you'll hear it read the correct answer and then say, "I don't know."
And that's probably due to the fact that the underlying framework doesn't deal well with having sent out requests to participants and then getting back a response from Wolfram, a response from DuckDuckGo, and responses from wiki: it doesn't really know how to differentiate those, because they're keyed on the same value, which is the skill ID. That's probably what's causing that problem. But the bottom line is that when I submitted that PR it was rejected, because it changed and altered the existing behavior, and the community didn't like that; they wanted more functionality. And if you recall, I made it a configurable parameter. So you can either live with the skill and its performance and its timeouts and associated behaviors, or you can gut it like a fish, get rid of that backward compatibility, and it'll come back in two to three seconds; your call. So I gave the ticket back to you. And that's the end of the tickets I believe I had open for that sprint, so hopefully tomorrow I can get back to the stuff I was working on.

When you say you gave it back, to whom? Chris, I guess, because he's the originator of the ticket. My point is, here are your two solutions: you can live with this architecture and this performance, or I can gut it like a fish and you lose the backward compatibility. Tough decision. Well, does this have anything to do with the state stuff? No, no, this is inherent in the architecture of how this wiki skill was implemented. It's a multi-fetch kind of process: you go and get the titles, then you use the title to get the page, then you extract the summary and the image. I don't do that; I go to a different endpoint and grab just the abstract, and I'm done. I don't support disambiguation or "more" or double fetching and all of that, so it's just a different architecture.

Well, okay, so this is where regression tests come in, right? There should be an objective test as to whether the new system performs better than the old system. And it may be that the test is in terms of... I've got the tests, in both cases, to pass 100%, and added a bunch, so there are now something like 40 or 50 wiki tests; they just don't hit some of the things that cause the occasional timeouts. It's not a test issue, and it's not a qualitative "which performs better" issue. It's that you can't have it all. Derek wanted the abstract, so I went to a different URL to get that in a single shot. The other one doesn't get the abstract and does multi-fetches. It's just a different architecture and a different design; they both functionally work. One is slower than the other, but it has more features and functionality. So you're saying they would both pass the same VK tests as they exist right now? Yeah, I've had them both pass 100%. Okay, so then the question I would ask... They can't possibly both pass the same tests if, with one of them, you can say "hey, read more, tell me more," and with the other you can't. We don't have any VK tests for reading more. And that's where I was going: for the reasons people want to reject this PR, I think what they need to do is submit the tests, or the test scenarios, or at least communicate them.
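A rough sketch of the two wiki designs being contrasted here, under stated assumptions: the endpoint choices (the MediaWiki opensearch API and the REST page-summary endpoint) and the function names are illustrative, not the actual skill code. The point is the shape of the work: the existing design makes several round trips (two parallel searches, with and without auto-suggest, then another fetch for the page detail) before anything can be spoken, while the rewrite asks one endpoint for just the abstract.

```python
from concurrent.futures import ThreadPoolExecutor

import requests

SEARCH_API = "https://en.wikipedia.org/w/api.php"
SUMMARY_API = "https://en.wikipedia.org/api/rest_v1/page/summary/{}"


def multi_fetch(query):
    """Shape of the existing skill: two parallel searches, then a second
    round trip for the chosen page, before anything can be spoken."""
    def search(auto_suggest):
        params = {"action": "opensearch", "search": query, "format": "json",
                  "redirects": "resolve" if auto_suggest else "return"}
        titles = requests.get(SEARCH_API, params=params, timeout=5).json()[1]
        return titles[0] if titles else None

    with ThreadPoolExecutor() as pool:                   # two requests in flight
        candidates = list(pool.map(search, (True, False)))
    title = next((t for t in candidates if t), None)
    if title is None:
        return None
    # second round trip: fetch the page detail for the chosen title
    page = requests.get(SUMMARY_API.format(title), timeout=5).json()
    return page.get("extract")


def single_fetch(query):
    """Shape of the rewrite: one request for just the abstract, nothing else."""
    page = requests.get(SUMMARY_API.format(query.replace(" ", "_")), timeout=5).json()
    return page.get("extract")


if __name__ == "__main__":
    print(single_fetch("Grace Hopper"))
```

The "seven to ten seconds" versus "two to three seconds" difference quoted above comes from the number of round trips rather than any single slow call, which is why the trade-off is architectural rather than something to tune away.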
The community was pretty clear that this altered the behavior, and I specifically went out of my way to make "more" a configurable parameter, if you recall, to retain that backward compatibility. Okay, but that's a red herring; it doesn't matter whether it's on or off, the performance is inherent in the architecture of the skill. Are your changes in a PR somewhere? Excuse me? Are the changes in a PR on the skill? Which change? Your fish-gutting. That was a PR submitted probably two months ago. Right, so you're talking about the old one. There were several old ones that went through a lot of iterations, which is why I wouldn't recommend going back and trying to piece it together. Just give me a decision. If you want to live with what you have, then what we have now is good enough. If you want me to gut it, then I'll gut it; it'll take me less than a day and I'll push up a new PR, but then you have to protect me from the flak I'll get from the community for breaking backward compatibility.

Okay, so I think maybe we can just pose this as a question to the community and say, hey, we've got a potential fix for this issue, which is that it takes too long to respond. Can you characterize the performance difference in rough terms? Yeah, three seconds versus ten. Okay, so we've got a proposed change that gives us this performance improvement, and it passes all the VK tests, identically, as the VK tests are written now. So the question for everyone out there is: are you (a) willing to accept a drastic rewrite of the skill that improves the performance by 3x and retains the correct answers in the VK tests as they are currently defined, or (b) would you like to add more VK tests to cover scenarios that are not currently tested, and make sure the submitted change also covers those new tests? That sounds reasonable. And whenever we think about changing behavior... well, the way the tests are defined now there's no change in behavior, right? So we need to define new tests that would identify the offending change in behavior; otherwise we can't do regression testing. Does that sound reasonable to you? Yes. I fall on one side of that fence, so it sounds reasonable to me; I don't have an opinion one way or the other. It's a decision that's above me, and I will, as a good soldier, do what I'm instructed to do. If we want to maintain backward compatibility, then leave it as is and I have nothing to do. If we want to improve the performance and get rid of it periodically timing out, then I have work to do. It's that simple.

Okay. Well, at least as an experiment, I think it would be interesting to pose this to the community and see, because from where I'm sitting, we have not identified all the VK tests; we've identified the VK tests that we care about. We could probably sit here forever and define more and more of them, right? That's the nature of it. But... So, there's an old saying we had at IBM, which is "elegant sucks." And part of this falls on me, right? I looked at the skill, and it was written very elegantly, and I really didn't want to gut it, which is why I went ahead and said, okay, I'll take this old one, patch it up, and fix it to pass the bugs that were reported in Jira.
So, you know, I'm as guilty as anyone, because I looked at it and said, well, it's elegant, but it probably wasn't designed to fit into this architecture. Because I really believe the foundational part of the problem is that it decided, "I'm going to fire off two parallel requests" (even though those fire off more like four): one with auto-suggest on and one with auto-suggest off. I had to disable the auto-suggest one because that was giving us the problem where "what's an automobile" came back with an answer about "cat"; that's due to auto-suggest being on. But I left it in there, and then today I realized the system doesn't know how to deal with that. It has one key, "wiki skill" or "skill wiki" or whatever the key is for the skill ID, and it's confusing the responses. It gets one back in time, starts reading it, then it gets done, reads the other one, says, oh, here's the wiki skill response, and, oh, this one timed out. Well, yeah, because you just read the dialogue for twenty minutes. So the architecture probably isn't designed to handle a skill reporting multiple responses back. What's that, the common query? Yeah, when common query sends out "who's out there, do you want some extended time to answer, do you have an answer, here are your answers," and the same skill responds with two different answers, I think it gets confused. So when I saw that code I was like, geez, this is so elegant, I don't have the heart to get rid of it, so I just set auto-suggest to false on both requests. In retrospect I should have taken it out. But again... Or, I mean, it sounds like that's a failure to properly define common query. Well, maybe it's the skill's responsibility to figure out which of the two responses is the real one before handing it over. Yeah, I don't know. All I'm saying is that part of it falls on me, because it was a very elegantly written skill; it has classes that encapsulate just a string for the summary, or a list of strings, and stuff like that. Like I said, I know better, but I didn't want to be discouraging to the community either, because it is basically a community contribution and I want to encourage that sort of thing. So some of it falls on me, but that's all water under the bridge at this point. It's real simple: if you want it to be fast and not time out as frequently, I can do that. If you're comfortable with it the way it is, then it sounds like we reach out to the community, ask them to provide some VK tests that exercise the additional functionality they're concerned about losing, and we move forward. I'm cool either way.

All right, guys, I will leave that with you to handle as appropriate. Anything else we should discuss? No, that was it for me. Cool. All right, well, thanks. I think a lot of our discussions recently have involved an element of the community and their feedback, which, now that we're recording, hopefully the community will be able to see. And this has been going on for a long time; I totally get the frustration, because it's been useless to them.
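A toy illustration of the common query collision described above, with invented field names rather than the real common query internals: if responses are tracked in a dict keyed only by skill ID, a skill that answers twice (once per parallel search) silently clobbers its own earlier answer, whereas keying per request keeps both and lets a selector choose explicitly.

```python
def collect_by_skill(responses):
    """Key answers only by skill_id, as described above: a second answer
    from the same skill overwrites the first, so one of them is lost and
    can later look like a timeout."""
    answers = {}
    for resp in responses:
        answers[resp["skill_id"]] = resp
    return answers


def collect_by_skill_and_search(responses):
    """One possible fix: key on (skill_id, search_id) so both answers from a
    skill that fired two parallel searches survive, and the best one can be
    chosen explicitly, e.g. on confidence."""
    answers = {}
    for resp in responses:
        answers[(resp["skill_id"], resp["search_id"])] = resp
    return answers


incoming = [
    {"skill_id": "skill-wiki",    "search_id": "autosuggest-on",  "confidence": 0.7,
     "utterance": "An automobile is a wheeled motor vehicle..."},
    {"skill_id": "skill-wolfram", "search_id": "default",         "confidence": 0.6,
     "utterance": "A car is a road vehicle..."},
    {"skill_id": "skill-wiki",    "search_id": "autosuggest-off", "confidence": 0.4,
     "utterance": "Cat (disambiguation) may refer to..."},
]

print(len(collect_by_skill(incoming)))             # 2: the two wiki answers collided
print(len(collect_by_skill_and_search(incoming)))  # 3: both wiki answers retained
```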
And so it's my goal to try to raise the visibility of that, and Giesel is taking some more time to focus on it now as well, so I think that's good. I look forward to smoothing out these processes in the coming weeks and months. Anyway. All right. Well, thanks everyone. We'll talk again tomorrow.