Okay, welcome to the 28th September 2020 Mycroft check-in. Okay. So we are now officially halfway through our sprint. Let's do a check-in, see if anything has become a roadblock, check in with everyone, and see how we're progressing. And then at the end, I'd like to do a quick review of the current milestone that we're heading towards and see if there's anything we can do to focus our efforts more tightly on the target. So I'll start with Gez today, because Gez is in the top center position.

Cool. So I did some comms stuff, mocked up that partnership page that we're working on, which may be ready to go soon. I'm awaiting an email from Johnny, but maybe not. I also started a blog post outline for our October Mark II update, since we're already getting some things to talk about there. The 20.08 change that Chris did looks good. And I've done that ticket around removing wait_for_message, just so that we can test it by having something that is definitely going to fail if the 20.08 image doesn't exist. So I just need to get those merged in, and that will be a pretty firm test on that. And I want to get back to exposing the logs in the debug console next, before anything else, I think. So that's where I'm at.

Any hiccups or emergencies that have come up?

No, just lots of reviews and things like that, but nothing blocking.

Great, great. Chris V?

It's going awesome. Before I start, I want to say: Go Chiefs, Ravens suck. So things are going well. I have all of our third-party UI packages upgraded to their latest right now, and I'm going through all the UI components and making sure I didn't break anything by doing that. I also talked to Derek this morning. We have a little more clarity around the first version of the tagger UI. He made a few small changes, and he may have another small round of changes after our discussion. But once I'm done making sure the Angular and related software updates work, I can start looking at that.
You led in by saying that sometimes when people come from Baltimore and their team is playing Kansas City, where they live now, they could be conflicted. It's good to see that you definitely have an allegiance.

I've been living in Kansas City now longer than I ever lived in Baltimore, so. So I had an interesting weekend, but just to cover real quickly the tickets I'm working on: I closed a couple of tickets to do with our reassignment for yes/no and for the initial switches, since all that's been fixed and addressed. On the enclosure ticket I'm working on, the reason I was having such difficulty getting the LEDs working last week is that I had to install a bunch of stuff as sudo, and it wasn't clear at the time what needed to be sudoed and what didn't. It's become clearer now that just the LEDs require sudo; the switches don't, the volume doesn't. So that's something that had to be addressed. I also had a system-wide parameter set in the virtual environment to allow system-wide installs to work, but that was causing some trouble with clashes between modules. I turned that off and was still able to get it working with sudo. A lot of the problem I was having was that a lot of the sockets, like the message bus and so on, were opened under sudo last week, and then when you try to close them or take them down, you have to be sudo. And since the startup was run as sudo but the stop wasn't, there were some problems I had to go back and clear up: clean things out, remove caches, and so on. But I'm back to normal, where it's working. A little concern with the hardware, though. So over the weekend, as I think I pointed out in the thread some of us might have seen, my particular SJ201, the second one I got, is defective. We knew that. It's obviously not showing the I2C device, which was the first trigger for that.
But more importantly, my friend came over on the weekend and noticed that one of the two speaker drivers was on at full power constantly, and he was freaking out. He was like, that's going to burn something out, turn it off, turn it off. So I basically unplugged the one. The other one is okay, but I had to unplug that. Now I don't know what that might have taken out or how it got like that, but I do know that since then I've had the old SJ201 plugged in. The new one never worked; the old one always worked. But recently I took it from just being an external connection with the amplifier to actually plugging it into the pins. And ever since I did that, it doesn't record. It thinks it is, but it doesn't actually record any sound. And I verified this by taking it back off and connecting it to the ReSpeaker, which was there originally and still is, and that works. So it's a little bit disturbing that it might have taken out the mic channels as well. I don't know exactly what happened, but that's where it's at. And I'm afraid to plug any more back into the pins, because we're getting short on SJ201s. I don't have any working ones, and I've got software that expects to be able to do power-on self-tests with new drivers, and I don't have working SJ201s. I know Kevin will be back near the end of the week, and I can certainly get through between now and then, but that's something I think we need to address and look at, because that's a little bit of a concern. Other than that, like I said, I got the sudo stuff for the LEDs sorted, and that seems to work. I've got a couple of things to finish off on that, and I should have the enclosure stuff integrated and ready for a pull request before the next meeting on Wednesday. I'll assume it works once I have working hardware. And then I'll give it to Chris V and he can do the code review on it. So yeah, that's what I've been working on.
The power-on self-test is going to be interesting too, because we're going to have to be really creative, right? I'm going to have to do things like turn on the speaker, turn on the mic, play something while something's recording, and make sure that we actually got something recorded. Because other than that, what am I going to do? There's not going to be somebody sitting here watching the LEDs blink, you know what I mean? So power-on self-test for audio and visuals is tricky. That being said, it's not impossible; people are doing it. So I'm just going to have to buckle down. But that's in the near future. So that's where I'm at. I'm going to try to button up the enclosure code, then fix, or rather add, the code that reports during Mark II device bring-up, the initial device pairing, and then handle the case where maybe I can get some additional information I can stick in there, so that we can know whether it's a Mark I, a Mark II, a Mark II with ReSpeaker, a Mark II with SJ201, a Mark II dash three, maybe somebody's Linux box, all the different stuff, so that we have a better handle on where we're running. So that when we look at reported issues, maybe we could even align them and say, well, that's on an unsupported environment.

Okay, yeah, thanks for that update. I'll address some of those hardware issues later in the meeting, but for now let's go to Derek.

Yeah, so today, like Chris said, I met with him on a GUI for the tagger. There are a couple of things outstanding, so don't mark it completely done, but that should be wrapped up pretty easily this week, and I'm not blocking him, which is good. Other than that, a couple of quick updates for Michael's pitch deck, and then I've just been continuing my work on the first 3D-printed FDM version of the SJ240, as we're calling it, and balancing some of that with going back and forth on sourcing stuff.
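The play-while-record self-test Chris describes could be sketched roughly like this: generate a known test tone, play it through the speaker while recording from the mic, then check whether the tone's frequency shows up in the recording. This is only an illustrative sketch, not Mycroft code; the function names, sample rate, and threshold are placeholders, and the detection is a simple single-bin DFT energy check.

```python
import math

def make_tone(freq_hz, duration_s, rate=16000):
    """Generate a test tone as a list of float samples in [-1, 1]."""
    n = int(duration_s * rate)
    return [math.sin(2 * math.pi * freq_hz * i / rate) for i in range(n)]

def tone_present(recording, freq_hz, rate=16000, threshold=0.1):
    """Single-bin DFT check: measure the energy at freq_hz in the
    recording and compare it to the total signal energy. Returns True
    if the tone dominates enough to say the mic heard the speaker."""
    n = len(recording)
    if n == 0:
        return False
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / rate)
             for i, s in enumerate(recording))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / rate)
             for i, s in enumerate(recording))
    bin_energy = (re * re + im * im) / (n * n)
    total_energy = sum(s * s for s in recording) / n
    return total_energy > 0 and bin_energy / total_energy > threshold
```

In a real self-test the recording would come from the microphone while the tone plays through the amp; here the same check can be exercised offline by feeding the generated tone straight back in.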
The latest thing that I've been spending a little more attention on was the camera module, which isn't necessarily the highest priority in terms of the first spin here, but we've got a lot of backers that selected that. So I've got to figure out how that's going to work. The camera version that we were looking at originally was the Pi Camera version 1, so I've been tracking down the best way to get one, price-wise, and then how it's all going to come together. So that's what I've been up to.

Okay, great. So Josh, do you want to say anything? Stuff you've been looking into?

Sure, so a couple of quick things. I loaded up a Pi 4 to use as a video conferencing system. It's still a little laggy and there are tweaks that need to be done, but the first time I tried it, everything kind of died and it did not work well. So I spent a little bit of time digging into the Pi 4 and how to improve its performance, and discovered that the thing's pretty severely underclocked out of the gate. By default, they run it at 700 MHz, and the thing will run pretty reliably and pretty easily at 1700 MHz without overheating. I did hook a heat sink to it. The other thing I discovered is that the amount of RAM allocated to the GPU can be varied, and the default allocation is dynamic and does not do it justice. So as a result, out of the gate, if you want to use it for video conferencing, and I can only assume for other high-intensity computation, it's not really solid. But once you tweak the memory and you tweak the clock speed, you really can get pretty decent performance out of it, other than that I can't get Bluetooth to do what I wanted out of the box. I'm pretty happy with the Pi as a video conferencing system. The other thing I've been playing with is a technology from Google called Coral. Google has developed a series of chips that run TensorFlow Lite and what are called AutoML models.
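As a side note, the clock speed and GPU memory split Josh mentions can be inspected on a Pi with `vcgencmd measure_clock arm` and `vcgencmd get_mem gpu`. A small hypothetical helper for parsing their output might look like this; the function names are illustrative, not from any Mycroft tooling.

```python
def parse_clock_mhz(vcgencmd_output):
    """Parse vcgencmd measure_clock output, e.g.
    'frequency(48)=1500398464' (a value in Hz), into MHz."""
    hz = int(vcgencmd_output.strip().split("=")[1])
    return hz / 1_000_000

def parse_gpu_mem_mb(vcgencmd_output):
    """Parse vcgencmd get_mem output, e.g. 'gpu=76M', into MB."""
    return int(vcgencmd_output.strip().split("=")[1].rstrip("M"))
```

On an actual device you would feed these the output of `subprocess.run(["vcgencmd", ...], capture_output=True, text=True).stdout`; the persistent tweaks themselves (`arm_freq`, `gpu_mem`) live in `/boot/config.txt`.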
And what they're targeting is, well, a number of things, but one of them is industrial automation: basically building inspection algorithms that use machine learning at the edge to classify images and do inspections of PCBs or whatever you want. So I got one of those from Mouser in a USB form factor, but they do make it in a chip form factor for sub-$20, and it is demonstrably better for running TensorFlow Lite models, like by a factor of 30 over the existing software. And so as we step into the Mark III, I think it's something we should look at as a feature on the next device: that we integrate one of these TPUs onto our daughterboard so that we can do TensorFlow models in the field. In the meantime, these USB sticks are only 60 bucks. And so as we're doing development on the Mark II, as it becomes our default platform and we start looking at new applications, it's really easy to add that feature to an existing Mark II by plugging it into a USB 3 port. And then all of a sudden you get this massive improvement in your machine learning models. The one caveat is that those models do need to be compiled down to take advantage of the processor, so you do need to compile with the appropriate flags. But I've been really happy with it. And I've been doing image recognition on high-resolution images using an off-the-shelf model from Google and getting five-millisecond classification times on 8 MB JPEGs. So I mean, there are a lot of really cool things that facilitates. Other than that, I'm mostly jealous that Ken got two Mark IIs and I got none. And then I've been on with the patent attorneys, and I mean, it's really clear that we're going to win the litigation, and it's really clear that the unified patent IPR is going to be successful.
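For reference, a rough sketch of what classification through `tflite_runtime` with the Coral Edge TPU delegate looks like. The model path is a placeholder, the `classify` and `top_k` helpers are illustrative, and the block assumes the Edge TPU runtime (`libedgetpu.so.1`) is installed and the model has been compiled for the Edge TPU, which is the compile-flag caveat Josh mentions.

```python
def top_k(scores, labels, k=3):
    """Return the k (label, score) pairs with the highest scores."""
    ranked = sorted(zip(labels, scores), key=lambda p: p[1], reverse=True)
    return ranked[:k]

def classify(image_array, model_path="model_edgetpu.tflite"):
    """Run one inference on the Edge TPU. Requires the tflite_runtime
    package plus the Edge TPU shared library; model_path is hypothetical."""
    from tflite_runtime.interpreter import Interpreter, load_delegate
    interpreter = Interpreter(
        model_path=model_path,
        experimental_delegates=[load_delegate("libedgetpu.so.1")])
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    interpreter.set_tensor(inp["index"], image_array)
    interpreter.invoke()
    out = interpreter.get_output_details()[0]
    return interpreter.get_tensor(out["index"])
```

The same `classify` sketch runs unchanged on the CPU if the delegate is dropped, which is what makes the "plug a USB stick into an existing Mark II" upgrade path attractive.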
And it's really clear that we have a stellar shot at recovering our money from their shell company, if their shell company even exists at the end of this. So, I mean, there is an ass-kicking in the works on the patents, but it is expensive. That said, we are committed to spending as much money as it takes to kick these people to the curb. And for those of you who saw American History X, just envision the scenes that sent our good friend Ed Norton to prison. That's exactly what I figuratively intend to do to our friends.

Well, Josh, let me just say that I'm proud to be associated with a group that decided not to cave and just send over 30K to have them leave us alone, but instead gave like a quarter of a mill retainer to our attorneys to fight it, because that's really what it's going to take long-term to get these guys to stop their nonsense. That being said, on the TensorFlow side, the TPU: what is the cost of those, the USB 3 TPU?

They're 60 bucks; you can buy them from Mouser and elsewhere. And then in the chip form, the surface-mount form, it's like 21 bucks one-off, and it goes down from there. So, as to whether or not our friends at Google are subsidizing that, I don't know. I suspect they probably are, so that they can become the standard, but it's a feature that we should look at. And we don't have to marry Google to make that happen; both Intel and Nvidia are making similar products. So it's not just Google.

Michael can lay one out for us eventually, when he has some free time on a weekend. Now, the thing is twofold. The reason I've been apprehensive about TensorFlow Lite is that I have a friend who did some work with TensorFlow Lite, and he said it has some bugs that he's kind of waiting to be corrected before he moves forward with it in a big way.
The other thing I noticed, and this is common with ARM processors, is that all four of the cores will basically reduce down to their minimum frequency when idling. And then I noticed that when Precise gets loaded and kicks off, it'll actually run them up to their max, which is 1.5, across all four. Somebody had mentioned that they thought we were only running on and consuming one core, but my experience has been, and you can see it with top as they ratchet up, that Precise is using about 50%, 45 to be technically accurate, of all four cores simultaneously. I did not see it using just one. So that was my experience. And it ratchets up as Precise runs. So it'll try to clock down; that's really something all these ARMs do that we don't need, because we're not worried about conserving battery. So yeah, forcing them all to max frequency: I don't even know if it would get that hot, but I don't see any issue with that, and it would certainly be better for us. But yeah, I'm a little apprehensive about TensorFlow Lite right now.

So running them at a fixed frequency does a couple of things. First of all, on a chip like this, an inordinate percentage of the power used is just from the clock line itself. Basically, power is consumed on an integrated circuit when something switches from a one to a zero, so the thing that's switching the fastest is the clock line, and there's a ton of buffering on that throughout the chip to make sure that it stays synchronized. But you won't really overheat the chip or get it towards its max temperature unless the data pipelines are actually being used, because the clock line will be running and, sure, that's using more power, but if the data lines aren't actually switching, then you really won't be using a lot of power. So I think you're right.
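The per-core usage figures Chris describes reading off top can also be computed directly from two snapshots of `/proc/stat`, whose per-CPU lines list jiffies spent in user, nice, system, idle, iowait, and so on. A minimal illustrative sketch (the function name is made up, and this is not anyone's actual monitor program):

```python
def core_usage(stat_before, stat_after):
    """Per-core busy percentage between two /proc/stat snapshots.
    Lines look like 'cpu0 4705 150 1120 1644064 ...'
    (user nice system idle iowait irq softirq ...)."""
    def per_core(text):
        cores = {}
        for line in text.splitlines():
            fields = line.split()
            # Skip the aggregate 'cpu' line; keep cpu0, cpu1, ...
            if fields and fields[0].startswith("cpu") and fields[0] != "cpu":
                nums = [int(x) for x in fields[1:]]
                idle = nums[3] + (nums[4] if len(nums) > 4 else 0)
                cores[fields[0]] = (idle, sum(nums))
        return cores

    before, after = per_core(stat_before), per_core(stat_after)
    usage = {}
    for core, (idle_a, total_a) in after.items():
        idle_b, total_b = before[core]
        d_total = total_a - total_b
        d_idle = idle_a - idle_b
        usage[core] = 100.0 * (d_total - d_idle) / d_total if d_total else 0.0
    return usage
```

On a live system you would read `/proc/stat` twice, a second or so apart, and pass the two texts in; here it is shown on canned snapshots.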
I think we can probably try to set the frequency to a fixed higher frequency and get something that's a lot more dependable, or predictable, in terms of its performance. I think that's a good idea. The other question I have, though, is: can we restrict a process to running on a particular core? That's something we might want to look at. It's certainly not a priority, but it would again help with the predictability of the process.

That's a good question. I don't know the answer to that. I just thought that somebody had made a comment last week that we were only supposed to be running Precise on one core. That was something that came up when talking with the guys here. But clearly that wouldn't work if it's consuming 50% of all four cores, you know? Basic math says you're not going to be able to run it on a single core without degradation.

But at what speed, though? That's what it really comes down to. In other words, you can see on top that it starts ramping up, it spikes, and then it starts coming down. When you finally get a chance, I have a little monitor program I can share that'll show you the percentage usage of each core. It runs them all up to 1.5, like right away; they're all at 1.5 at 30%. Yeah, so two cores is going to be the minimum, it looks like, but I don't know for sure. Maybe there's some wasted idle cycling in there. I just don't know enough.

I strongly suspect that training Precise using TensorFlow Lite, or a machine learning framework that's a little more modern, will be helpful. I mean, Precise was originally built three years ago, and that field is moving with a rocket engine underneath it. I suspect that it just comes down to expertise. Let's get the data in order, and then, if necessary, I suspect there are people in our community that can probably do it in an afternoon.
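On the open question of restricting a process to a particular core: Linux does support this through CPU affinity, via `taskset` on the command line or `os.sched_setaffinity` in Python's standard library. A minimal sketch, with a hypothetical `pin_to_cores` helper; pinning Precise to, say, core 3 would be `pin_to_cores(precise_pid, {3})`.

```python
import os

def pin_to_cores(pid, cores):
    """Restrict pid (0 = the calling process) to the given set of CPU
    core numbers. Linux-only; equivalent to `taskset -cp`."""
    os.sched_setaffinity(pid, cores)
    return os.sched_getaffinity(pid)

# Demonstrate with a no-op: re-apply the process's current affinity.
current = os.sched_getaffinity(0)
pin_to_cores(0, current)
```

Whether pinning actually helps here is a separate question: as noted above, a process eating ~45% of four cores will degrade if squeezed onto one.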
We will ask and see if we can get some folks to help us, but first we have to be able to provide them with training data that's pristine; that's the piece that we need to provide.

Yeah, that's what we're working on actively, right? So that's good. I reposted the article, by the way, on my LinkedIn. Did I read that article right? That we're just a couple of percentage points away from the same performance that our competitors were getting, and they're a little bit larger than us? Yeah. That's really encouraging.

They were comparing our stuff to... well, and our stuff's available, and you can use it without having to sign an LOI and go off and talk to the wizards, right? And it's not even our latest and greatest. So yeah, that's encouraging. Well, Nora, are we doing any of the transfer learning stuff that we should be doing at the top of that model in order to narrow it down to the specific user?

So we've... And if I read the article right, that was with a custom wake word they did for that particular experiment, right? I didn't do the wake word; they did it for our model. Yeah. So who knows how they did it? Because we haven't really... Yeah, that's a formula for failure, too. Okay. So yeah, it's encouraging to know we're only that far behind even with that problem working against us. Excuse me, I'll be right back.

What was that article? Josh, where do I see it? Let me see. I got it from a number of different people, but one of them, I don't know, Daniel Pomp, I think. No, I think he just wants to see where it is. I got it from Mattermost somewhere. Yeah, hold on, I'm getting it for you. Here's a PDF version of it in the chat. Yeah, I got it. A number of people flagged it for me. Our investors are awesome; they're a giant conduit of information about all kinds of things. Well, I thought it was very encouraging, considering it's three years old, and God knows how that model was created.
And you're talking about those guys being able to throw a ton more resources at the problem than we can, so all good.

Yeah, I was able to train models for Precise in a very short period. I think I spent a couple of days on it before I was able to train custom models and stuff. But the challenge, back in January, came down to getting the data pipeline squared away. And I didn't do any edge work; I just implemented it as it was originally done, plus whatever fixes it needed. And then I checked my own code in and approved it, which, I'll admit, people didn't like. I think we're really at the point where the model is pretty good. I can't speak to the internal algorithm, but from an outside perspective, looking in and talking to Matt, it seems like the de facto way people attack the problem. I really suspect it's a matter of data at this point.

Yeah, so sorry about that interruption there. I think this really highlights the fact that what we're trying to do right now is get an instance of the whole system up and running, from the hardware through to the submission, review, tagging, and training process: get all of that up and running as a solid system. And then I think we'll see rapid, rapid improvements in the software once we get to that point. So that's really where our focus is. So before jumping into the milestones, I'll just give you a quick update on things from my end. I was hoping to get the next spin of the SJ201 out over the weekend, but there was a little technical detail I didn't know about, in that some of the mounting holes were moved, which seems like an innocuous change. But it turns out that was the biggest change on the board: Kevin said something like almost every net on the board had to be re-laid out because those mounting holes moved, which theoretically isn't a problem.
From a logical perspective, it's not a problem, but from an electrical point of view, you've got to go through and review all of the ground planes and make sure none of the clocks are routed next to signals and things like that. So it's kind of a bigger deal now. So we're going to have a bit of a design review process tonight and make sure it's not going to cause any problems. And as a consequence of that, we're going to do a more limited run than I was hoping for. We'll do enough that, if they do work, and there's a very good expectation on our part that they will, everybody in our company will at least have two devices to work with. And I want to use the same process we used last time, where we kind of do it half there and half here. We're going to use a PCBA factory that can do all the boards, source all the parts, and that sort of thing, so that we know that if these work, we can just go back to them and say, send us 500. And we'll fulfill our Kickstarter, on the dev board versions at least, and start that pipeline. So that's where we are with the Mark IIs. And on the component side of things, the changes are really, really minimal. We're fixing values on two capacitors, and we're swapping out the sound card for a different, much simpler part that we've already tested, with Kevin wiring it up externally. So all the changes we're making on that side are pretty simple, and I have pretty high confidence that this is going to work.

One last thing on that hardware. My friend, and I'm no aficionado, he looked at the board and he said, dude, who laid out this board? And I said, well, that guy Kevin you talked to. He said, that's a damn good job, for what it's worth.

Well, I was also looking at the TPU stuff that Josh brought up. XMOS, the company making the audio front-end chip that we're using, also has a TensorFlow accelerator that they're touting.
But like the Coral system, they're not really generally available yet. So I've applied in both cases to their programs to see if we can get an engineering sample and that sort of thing. I don't want that to become too much of a distraction, but it's definitely something we'd be looking at for the Mark III, and the sooner we can get into that, the better. Any other questions? All right. So as far as the milestone goes, we've got three tracks that we're working on. We've got the hardware track, we've got the software track, and we've got the update track. There's the core software as a unit that does something useful, the voice assistant stuff. But then there's the overarching question of how we deploy this, how we do updates, and that sort of thing. So that's the other major track. And as part of the core experience, we've also identified the Wi-Fi setup as a key issue that we need to get right, because that pertains to the user's out-of-the-box experience. Plugging it in and setting it up, that's got to be good. So as a sort of minimum viable product, we need to get all of those things into a solid state, because if we can at least do updates, then we can rapidly improve the software. So I want to make sure that as we go into our next sprint, we're focused on those issues. Obviously, we've been focusing a lot on hardware, and Josh has been doing some investigation into the various third-party solutions that can help us with the firmware update process. Some of those include Wi-Fi setup and some of them don't. But regardless, I want to make sure that in our next sprint those two items are part of our regular planning process.

Now, the Wi-Fi setup is an issue for a consumer product or an appliance. But for a developer kit, is there any reason why they can't plug in a keyboard, for example, and edit their wpa_supplicant file themselves?
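For the keyboard-and-edit-the-file route, a `wpa_supplicant` network stanza is simple enough that a setup script could template it. A hypothetical helper, purely illustrative; note that in practice the `wpa_passphrase` tool also hashes the passphrase rather than storing it in plain text, which this sketch does not do.

```python
def wpa_network_block(ssid, psk):
    """Render a `network` stanza for
    /etc/wpa_supplicant/wpa_supplicant.conf (WPA-PSK networks)."""
    return (
        "network={\n"
        f'    ssid="{ssid}"\n'
        f'    psk="{psk}"\n'
        "    key_mgmt=WPA-PSK\n"
        "}\n"
    )
```

A first-boot script on a dev kit could append this stanza to the config and restart networking, which is essentially the manual process a developer would do by hand.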
I mean, what are we shooting for here for the developer kit?

For the developers, we need to make it clear what they're getting into. So if they're developers who are just saying, look, just give me a piece of hardware, I don't care about the fancy stuff, right? Then sure: here, download this image, flash it to your USB drive, and plug it in. That can be our update process, right? But we've still got some stuff to do until we get there. For example, the firmware isn't all on the USB drive; some of it's in the EEPROMs on the XMOS chip and things like that. So we need to make sure that we can update that stuff reliably, because we don't want to be breaking people's devices, even if we can just ship them a new USB update. So there are a couple of minor things; I don't think that's a huge concern. But yeah, if we can tell the developers what they're getting themselves into at any particular phase, I've got no problem with releasing those dev kits without necessarily 100% of those features. But certainly before we go into any kind of beta program with end users, we need to have that stuff sorted out. And because I think that's going to be the long pole, I want to start focusing on it now. We can see the light at the end of the tunnel for the hardware, right? These other processes, we haven't really begun in earnest. So I want to get those onto our development roadmap.

Michael, I'm assuming I'm the only person that's had the issue where, when he plugged his SJ201 in, his recording stopped working, correct? Because I would be apprehensive about ordering runs until we had one, maybe monkey-wired, to make sure that that's not the case. Like I said, it didn't happen until I actually plugged it into the edge connector.

Going solely by what you just told me, that sounds like a software problem.

But it's not, because I can just unplug our board and plug the same things into the ReSpeaker array and it works fine.
That's the first thing I thought too: software.

All right, well, the ReSpeaker array and ours are both behind USB. Kevin's able to do recording just fine, but you're saying, as long as it's not plugged into the Pi...

I'm saying the SJ201 used to work fine with recording until I plugged it directly into the edge connector on the Pi, and now it won't record anymore.

Oh, so it's a permanent change? It seems to be. Does it work now? No. Okay, so you think you might have fried the mics or something?

I mean, if Kevin's testing it plugged into the edge connector on the Pi, then I would be comfortable with that. But if he's testing it by, like, plugging it into his Mac, then it's not apples to apples, would be my only concern.

Yeah, sure. Can you file a bug report on that? I will. Yeah, I'll talk to him about that tonight. So one thing to keep in mind is that all of these devices were hand-assembled at some point. Kevin soldered half of these things by hand, including the highest-density, or highest-pitch, chip, which is the XMOS. It's a real bugger to solder. So all of these are going to be assembled at a professional fab house next time. The other thing to keep in mind is that we don't really have a test jig for these, right? We're not building a test jig until we know that we can at least get them working, so that'll be one of the next things we work on. If we're going to make 250 of these, or 500, or 10,000, we need to have a test jig so that we can test them when they come off the assembly line.

I think that's what Kevin thought it might have been: a cold solder joint or something. I told my friend, and he wanted to throw it in the oven at 425, and he said that would fix it. But then I said, well, what about the plastic connectors? And he said, yeah, they'll be fine. So I had to grab it from his hand and not allow him to cook it.
That probably would be okay. I mean, if it's not working, we won't be able to use it anyway.

So the one thing I do want to point out is that I still haven't been able to get the audio on this conferencing system working properly. If you plug it in, that should work. Yeah, eventually I will. But it now lets you bond your phone to the thing, and I have a mute button on my headphones, so we'll use this for the time being. I do need to make a little arm that swings down to bring it to the level that I'm at, but that is a project for another day.

I would like to point out that even if the SJ201 were perfect today, and even if we could do a couple of the big rocks that we need to do, like Wi-Fi setup and updates, we still can't provide, after five years, a user experience for the top eight skills that we could stand by in a demo for a company like Walmart or Target. On the software side of things, everything from the wake word spotting, which is continuing to have some accuracy issues, to, as far as I know, the still unresolved music problem that our friends at Spotify caused us this past couple of months. We're not ready to go. And unless we solve those problems, it doesn't matter if we spend a hundred-and-something thousand dollars on shipping dev kits, yeah.

No, I share your concerns, Josh, but I guess I've been comfortable in believing that once we get the enclosure-level stuff and the hardware and all the system-level things going, then we'd have the time to revisit the skills. And I've always looked at the skills, being applications that run on top of our framework, as somewhat trivial to correct once the framework is perfect. So I've always felt kind of comfortable that we're not addressing the top eight skills right away, because they're a trivial issue once all these other things are out of the way. I would think.
Well, I wouldn't necessarily say they're trivial, but definitely by getting a limited number of units out into more people's hands and setting up the proper feedback and error collection systems, we'll be able to do a lot of this UX work, right? Some UX work is obvious: it's clearly broken and we should fix it. But a lot of it is also just using the thing, which is why we all want to have two of these in our environment, right? Like one for dev and one for work, one for home, basically two different rooms in your house at this point. We want to have these things around so that we can work on them and use them and eat our own dog food. And when we do a limited release to the developers who are most interested in working on these things, we'll get even more feedback on that sort of thing, and hopefully even assistance in improving those issues. And yeah, I'm really looking forward to that phase, because that's actually really fun. I like doing that design work, the feedback, and the usability testing. But yeah, we need to have a platform on which to do that stuff first.

Okay, well, you know, we still need to make a decision between some of the update stuff and a few other things, so we can start chasing through that.

Sure, yeah. I'm not talking about changing anything about what we're doing right now, right? Just that for our next sprint planning, which we'll kick off next Monday, we should be keeping that in mind. Those are some priorities that we don't want to become blockers for us actually shipping products.

I don't like sounding like a broken record, but some of those I feel are dependent on which GUI platform we go with. Sorry, a topic close to my own heart. I'll tell you what, the Kivy image is working well. The only concern, obviously, is the community and the existing skills out there.
Maybe we could even figure out a way to somehow build the conversion in. But I just don't feel like we have to make a decision on that just yet. I mean, I thought it was great to get it out there and get feedback and discussions, but, you know, I kind of agree with Josh. If we can get this going, then the next thing is we really probably should attack those skills and make sure they're usable. And then we can come back maybe and look at that. Do you believe that the performance of some of the skills and some of the problems that we're having are due to the selection of GUI? No, I think it's... Go ahead, go ahead. Okay, all right. Well, I do think that there might be some latency issues, or some discrepancies between the two, in that there are some extra flourishes in the Qt version that may be causing it to be a little slower, but those could possibly be turned off, just like extra animations and such. But it's been on my plate to do what was supposed to be simple: fire both versions up as they are, take a video, bring that video in, and create a comparison for you all to look at, to see the state of where they're both at. But I've been blocked by numerous hardware issues. I really only have one of the ReSpeaker-based devices, and lately it's given me trouble. I've finally come around to thinking it is hardware, after being very, very stuck on the idea that it wasn't. It was like, okay, I got this to work at one point, so it can't be hardware, but, you know, this thing's unreliable. So I guess that's a blocker, yeah, and this is back to the SJ201s. Once we all have two of them, we should all be able to say, okay, this hardware is working, it's good, we've got past that goal.
Let's fire up the Kivy and Qt versions and everybody take a look, because I would guess that really only Gez on the call right now has a current perspective on the comparison of the two. Yeah, and this is something that the community can help us out with, because very early on, and this may very well be outdated information, they understood that the Qt API relied on hardware acceleration and needed a lot more processing power to execute, regardless of the flourishes and animations and the sort of things that you can actually do with it, right? So I guess the performance can be associated with two different things, right? There's a thing that drives me crazy in that a lot of people design UIs with, like, pan animations as a way of making it look nicer, but it actually ends up, in my opinion, impeding the performance of the UI. So that's one issue. We can separate that off and then focus on just the responsiveness of it as a UI, right? And how much of the system these things take, you know, in terms of memory, CPU cycles, and that sort of thing. Getting a benchmark of how many resources the Kivy system requires versus the Qt system is, I think, ultimately where we need to go. It'll be nice to have a visual comparison like you're talking about, Derek, but I want to go under the hood too and look at, you know, actual MIPS to do a blit or whatever, right? All of that stuff should be pretty straightforward at a certain level, but I honestly have no idea how the Qt system works and what kind of overhead it might have. So if there's anybody out in the community who can enlighten us about that, or at least give us a heads up, that would be awesome.
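The memory side of the under-the-hood benchmark being discussed could start with something as simple as sampling /proc. The following is a minimal sketch, not anything decided in the meeting: it assumes a Linux system (such as a Raspberry Pi OS image), and the PID would be that of whichever GUI process, Kivy or Qt, is being measured.

```python
"""Minimal resource sampler for comparing GUI stacks side by side.

Assumes Linux (/proc filesystem). Point it at the PID of the Kivy
or Qt GUI process being measured; the processes themselves are not
named here.
"""
import os
import time


def rss_kb(pid: int) -> int:
    """Resident set size in kB, read from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0


def peak_rss_kb(pid: int, seconds: float = 10.0, interval: float = 0.5) -> int:
    """Sample RSS once per interval and return the peak observed."""
    peak = 0
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        peak = max(peak, rss_kb(pid))
        time.sleep(interval)
    return peak


if __name__ == "__main__":
    # Smoke test on our own process; substitute the GUI's PID in practice.
    print(f"peak RSS: {peak_rss_kb(os.getpid(), seconds=1.0)} kB")
```

Running one copy against each GUI's PID while driving the same interactions gives the memory half of the comparison; cumulative CPU time can be read the same way from /proc/&lt;pid&gt;/stat.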
Well, isn't there also, and this is something I didn't realize until I actually set up my new Pi 4: the instructions Derek gave me required me to put an SD card in and put Raspbian on it so I could change the EEPROM to boot off of the USB drive. Okay. When I did that, what was really cool was I could use the touchscreen like a touchscreen. I don't have a mouse, I just have a keyboard, but between the keyboard and my finger, I had a complete UI. Now, I'm looking and asking, how is that going to work using either Qt or Kivy? Do they have drivers for the touchscreen? Because if not, we're engendering a hell of a cost burden for no functionality, and that doesn't seem like a wise decision. Well, I can answer that. Both of them work with touch. Qt just has it; it's part of the GUI already. So you can enter your Wi-Fi password, everything. You've got touchscreen on Qt by default. Right, and it's got a nice menu you can pull down and do all kinds of settings and stuff. Kivy, we're not doing anything with it, but it actually works, because I was screwing around with trying to exit to a terminal on the device itself instead of being shelled in. If you hit F1 in the Kivy version, it's going to bring up the Kivy menu, and the touch is active. You can actually touch that menu and it's responsive. So if you're in Kivy and you hit F1, you're going to get a menu and you can test it and see if it works. So both work without any extra setup or drivers. That's a nice thing about using that DSI display: it's just supported. So I guess my thing is, we've talked about this stuff, and Derek's been trying to do that side-by-side, at least from an end-user UX perspective. But I just wonder if we need to get some of those tasks on the board of actually doing the baseline tests.
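The claim that touch on the DSI display "just works" can be spot-checked from the kernel side, before either toolkit even starts. This is a minimal sketch, assuming Linux; "ft5406" (the touch controller on the official Raspberry Pi DSI display) and the generic "touchscreen" string are heuristics, not an exhaustive check.

```python
def has_touchscreen(devices_path: str = "/proc/bus/input/devices") -> bool:
    """Heuristic check for a touchscreen among the kernel's input devices.

    Looks for the generic "touchscreen" string or "ft5406", the touch
    controller on the official Raspberry Pi DSI display. Returns False
    when the listing is missing (non-Linux or no input subsystem).
    """
    try:
        with open(devices_path) as f:
            text = f.read().lower()
    except OSError:
        return False
    return "touchscreen" in text or "ft5406" in text
```

If this returns True before any GUI launches, touch events are coming from the kernel, and whether Qt or Kivy reacts to them is purely a toolkit question.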
Whether that's me, if I'm the only one that actually has two running devices, or they are ATS devices, but they're working, or someone else. Yeah. If you want to propose putting that into the next sprint, and the community is really pressing for an answer there, then sure, we can try to prioritize that. It is something we need to figure out anyway. Well, I mean, the community are asking me about it obviously, but also, when we talk about things like the Wi-Fi setup, that is inherently tied to which platform we're going with, because they both use different systems. I don't agree with that. For the Wi-Fi setup stuff, you can just assume that there's a command somewhere in the interface that lets you put that chip into access point mode, because every modern Wi-Fi chipset has that capacity. So the Wi-Fi setup stuff is entirely UI. And I think we've got it, or we did. It was pretty solid for the one that we were working on last year. Like, we'd gotten through it, I don't know, a hundred times. We just need to get that codified and into the current software. We talked about the process where it goes into a mini-server mode and then you can connect to it from your phone. Yep. Yeah, that's trivial. Well, you say it's trivial, but I've been waiting for a working version since 2018 or 2017. So if we can just get through the "hey, we can set this thing up, get it on the network, connect it to the person's account, play some music, ask it how tall the Eiffel Tower is" experience end to end without it falling flat on its face, we could probably get some retailers to give us some pre-orders. But as it stands now, we can't get to that point, and that's all software. We could use a laptop to do that; there's a variety of different form factors we could use to at least go do the demos. But yeah.
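The "command somewhere in the interface" for access-point mode does exist on images that ship NetworkManager, as nmcli's hotspot subcommand. Below is a minimal sketch of the setup flow under that assumption; the interface name, SSID, connection name, and passwords are placeholder values, not anything from the meeting.

```python
"""Sketch of the phone-based Wi-Fi setup flow via NetworkManager's nmcli.

Assumes NetworkManager is installed on the image; "wlan0", the SSID,
and the passwords below are illustrative placeholders.
"""
import subprocess


def start_setup_hotspot(ifname: str = "wlan0",
                        ssid: str = "Device-Setup",
                        password: str = "setup-me-now") -> None:
    """Put the Wi-Fi chip into access-point mode so a phone can connect."""
    subprocess.run(
        ["nmcli", "device", "wifi", "hotspot",
         "ifname", ifname, "ssid", ssid, "password", password],
        check=True,
    )


def join_user_network(ssid: str, password: str) -> None:
    """Tear down the setup hotspot and join the user's own network."""
    # "Hotspot" is the connection name nmcli creates by default.
    subprocess.run(["nmcli", "connection", "down", "Hotspot"], check=True)
    subprocess.run(
        ["nmcli", "device", "wifi", "connect", ssid, "password", password],
        check=True,
    )
```

The mini-server piece, serving the page where the phone submits credentials, sits between these two calls, and nothing here depends on which GUI toolkit renders the on-device side.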
And I get that we need that for any voice-only device, but if we have a device with a touchscreen, do we actually want to make users go off and grab their phone to set the device up? What do we want to provide? I agree with you 100%. If we've got a touchscreen, why are we making life difficult for ourselves? Okay, so this is getting a little off topic, but I don't understand why the choice of Qt versus Kivy has anything to do with Wi-Fi setup. It doesn't. I mean, in my view, who's arguing for Kivy other than Chris Vader? And I love you, Chris. You're awesome. Let me just start with that. Well, it's not just Chris. I mean, I really want to know the resource usage. Okay, so from a resource perspective, it's going to be a challenge. And I think the reason we selected Kivy for all the demos we did two years ago was, number one, Chris Vader could work on it, right? We had somebody in-house who raised his hand and said, I'm going to do this. That was the big one. And the other one was resources. But whether or not there was any science behind the resources claim, I don't know. I'm suspect of anything I was told in late 2017. Yeah, I mean, I can't imagine that there would be much of a difference unless there's just some inordinate overhead of memory allocation that Qt requires, or there's a bug in it that cycles in a loop too often or is too resource-intensive in that sense, you know. Ultimately, when it comes down to it, they're doing the same thing. They're doing blits and rendering text, and all that stuff is really straightforward. So if the only other question is processor time, and we put that one to bed... are you still with us, Chris? Guess what?
So we get with him in detail and say, hey, man, the only thing standing between us and adopting your stuff that you've been working on for the last four years is processor time. Can you assure us that this is going to run on a Pi 4, and better yet, can you show us a demo? Otherwise we just say, we love you, brother, but you're going to have to learn Qt, we're going to have to port this stuff across, and the decision's made; let's move on. I mean, I don't think it's necessarily that easy, because the other reason Kivy was chosen as a system is that it's Python-based rather than C-based, right? And I don't know enough of the details to make a good argument. I tend to take exception to that. At the end of the day, all of these are going to be C-based down low enough, and then what they expose or extend upwards may be more Python-friendly. That could certainly be the case. I don't know enough about it, but I would say that if you had a working system of each in front of you and you ran top on them in a terminal and then put it through its paces, just visually you'd be able to get a really good idea of what's going on. And if that wasn't that different, then you could get down to a gnat's ass and actually write some code, but I don't think that would be necessary. Yeah, I just think side by side with top, or there's a better version of top out there, I forget what it's called, you'll be able to see what it's consuming. And then going through the paces with our code and Precise running, since that's really what it's going to be doing, and seeing if there's anything perceptible that's different. I don't think that's that tough once we have working Kivy and Qt systems side by side. Okay. We're rehashing this. We've been over this a bunch of times, and we've actually taken all these notes before.
So, you know, I guess I'll leave it to you to decide what's the right time to tackle this, insofar as it's affecting the community, right? It's not a critical issue for us just yet. So let's try to focus our efforts on the things that are going to move us forward. Okay. Any other things people want to talk about today? Okay. Great. Well, it seems like everything's sort of proceeding apace. I am saddened to hear about your hardware problems. It could be self-inflicted, I don't know, but I really haven't been doing a lot of messing with the hardware other than plugging the edge connector in and out. And remember that until recently it was simply running off an external power unit, so it can't have anything to do with that. The only potential difference, which I've run by Kevin and he doesn't see as a big deal, is that he soldered over the USB power jumper, and he was under the impression that should be fine just the way it is. So I'm at a loss. Yeah. But it sounds like that's about to start moving up your work list, right? Although you're checking in your enclosure code now, right? So you kind of stopped working on that for the time being? No, I'm going to go ahead and proceed with the pull request for Chris V on Wednesday with the stuff in there working, assuming, in the absence of working hardware, that it should be working, and then we'll take it from there. But yeah, the sooner I can get a good working piece of hardware, the better. And I'll create a ticket and follow up with Kevin and ask if he actually plugged the SJ201 in and pulled it out, and whether it did anything to the microphones. Okay. Yeah. So we'll get you a fully working prototype. Just the SJ201. I have two Pi 4s here. I have everything else I need. I just need an SJ201.
I mean, I don't mind having one of them using the old amp and one of them using the built-in amp. That's fine. That would be what I would have, right? Okay. Well, I'd like you to have one that's just a Mark II, and then the other one can be the bare SJ201. Yeah. So really, just a working SJ201 would do it, because even the one that's kind of defective with the mic, I could plug into the ReSpeaker if I wanted to and get by. So just a good working SJ201 would be fine. And like I said, I'll follow up with Kevin and make sure this is not a potential issue. Cool. All right. We want to get to the point where all the SJ201 devices we have are as identical as possible. Oh, no. When we get a finally working version, we're going to chuck all of the old ones in the garbage. I may insist that you send them back instead, so we don't have them sitting around. They go up there on the shelf. My first PC came from a friend who worked at the warehouse that was supposed to destroy PCs for IBM on contract. Nice. Is there anybody you don't know, Ken? No, well, yeah, I won't go there. Tell them we'll drop by tonight. I was just going to say, I've never met the cheetah with chief. On that, I guess we'll call it a day. Thanks, everybody. We'll talk again Wednesday. See everybody on Wednesday.