Who's taking lead? Well, I guess I'm here still. You're not allowed to talk. Welcome to Dave's DevSync. It's Thursday, the 14th of October; it's still 2021. How good is that? Michael's not talking much today, so let's start with Derek.

All right, so yeah. I did a little bit of work on the alarm skill this morning because Chris noticed some screw-ups. I hadn't really taken 24-hour time into account for the alarm, so I had to scale some things up. That was about all I could do on the GUI side of the skill today. I had a bunch of requests from the marketing side for some new ads going out starting next week — about five ideas there that I started working on right after we talked. Joshua has been in town, so we met briefly today. And I've been trying to get the 3D prints put together to bring along next week, so I can convert some of our dev kits in Hawaii to 3D-printed versions. So it's been all over the place today. We had a real quick manufacturing discussion as well. So, over to you, Ken.

Okay, there's no feedback, right? You're okay with that audio level? Okay, good. So I have the alarm bug-fix PR ready to go, but I don't know which branch I'm supposed to be branching off of. Is this the alarm skill that's out — is this 21.02 or whatever? Or is it somewhere else? What's the branch I should be generating pull requests against for the alarm skill?

I've been committing PRs to 21.02, and once those PRs are merged, I've been bridging them back to the skill-control branch, because skill-control is the branch that's on the Mark II.

So should I go ahead and create the PR against the skill-control branch?

Either one — skill-control or 21.02 — will get the alarm fix across.

Okay, I'll go ahead and put it against skill-control. That was the first question I had, because that PR is ready to go.
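The branching flow being described — the fix lands on one branch, then gets bridged to whatever branch the Mark II actually runs — can be sketched roughly as below. The branch names `skill-control` and `21.02` come from the discussion; everything else here, including the fix-branch name and repo layout, is hypothetical, and this uses a throwaway local repo purely for illustration.

```python
import os
import subprocess
import tempfile

def git(repo, *args):
    """Run a git command in the given repo and return its stdout."""
    result = subprocess.run(
        ["git", "-C", repo, *args],
        check=True, capture_output=True, text=True,
    )
    return result.stdout.strip()

repo = tempfile.mkdtemp()
git(repo, "init")
git(repo, "config", "user.email", "dev@example.com")  # identity needed in a fresh repo
git(repo, "config", "user.name", "Dev")

with open(os.path.join(repo, "alarm.py"), "w") as f:
    f.write("# alarm skill\n")
git(repo, "add", "alarm.py")
git(repo, "commit", "-m", "baseline")

# Stand-in for the device-facing branch that's actually on the Mark II.
git(repo, "branch", "skill-control")

# The bug fix is branched off skill-control, so the PR targets what's deployed.
git(repo, "checkout", "-b", "fix/alarm-24h", "skill-control")
with open(os.path.join(repo, "alarm.py"), "a") as f:
    f.write("# handle 24-hour time\n")
git(repo, "commit", "-am", "Handle 24-hour alarm times")

current = git(repo, "branch", "--show-current")
```

The PR would then go from `fix/alarm-24h` into `skill-control`, and get bridged to the release branch (or vice versa) once merged.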
I fixed all those bugs. Derek, I reached out to you earlier — I reassigned them to you for verification, but you can't verify them until they actually get deployed, so I'll ping you when it's time. Cool, yeah, I'll take a look.

And what else did I do today? I ran the alarm VK tests. There's a lot failing, but something went on there, because I yanked all the snooze tests out and put them in a separate VK test file so we wouldn't run them. You could just rename them 'feature-not' or something, right? Because we don't have snooze — that's a feature request. So something happened with those VK tests; a bunch of them are failing, and it seems like most of them are snooze. That's what I was working on right before the meeting.

I reviewed Chris's PR today and approved it. I'm working on isolating this mysterious volume-change issue. What I can tell you is that I was not able to reproduce that behavior with this build using the alarm skill alone. I noticed in your video that you ran the timer skill before it, so I'm going to try the exact same steps and see whether the timer skill is going off in the background and lowering the volume, because I do see that it's active quite frequently — it periodically shows up in the log saying it's looking for any active timers and so on. It could be a byproduct of that. That's what I'm chasing down; it's the last remaining bug.

There are two feature requests for the alarm skill. How do we indicate a feature request versus a bug in JIRA? Task versus bug — make it a task. Task versus what? Bug. Okay. Okay, good. I didn't know that. So I'll probably be buttoning up all of the alarm skill, because I did need to test some issues, and I fixed them.

Other than that, non-work-related: I reached out and had a very good conversation today with a visually impaired person through the Broward school for the visually impaired.
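The idea of pulling the snooze tests out so the Voight-Kampff run skips them — "just rename them" — could look something like this. The `.disabled` suffix and file names are assumptions for illustration; the real skill test layout and the VK runner's discovery rules may differ.

```python
import os
import tempfile
from pathlib import Path

def quarantine_feature_tests(test_dir, keyword):
    """Rename behave .feature files whose name matches `keyword` so a
    runner that collects *.feature no longer picks them up.
    Returns the new file names."""
    renamed = []
    for f in sorted(Path(test_dir).glob("*.feature")):
        if keyword in f.stem:
            target = f.with_name(f.name + ".disabled")  # e.g. snooze.feature.disabled
            f.rename(target)
            renamed.append(target.name)
    return renamed

# Demo with throwaway files (real VK tests live in each skill's test directory).
demo = tempfile.mkdtemp()
for name in ("alarm-snooze.feature", "set-alarm.feature"):
    Path(demo, name).write_text("Feature: placeholder\n")

renamed = quarantine_feature_tests(demo, "snooze")
```

Renaming rather than deleting keeps the scenarios around for the day snooze graduates from feature request to feature.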
I actually hit seven, because on the menu options it was seven for technology, and that put me through to who I thought was at the school. It turns out they're the vendor that handles technology for the visually impaired — setting up Braille machines and things like that. He's completely blind; his name is Jose, and we had a great conversation. I told him what we would have, and he was really excited, and he showed me the crap they have to put up with. And I said, well, that's pretty crappy, right? You can say 'next, next, next, tab' and tab through a page. Why can't you just say, for a wiki article, 'show me the table of contents, read the table of contents, jump to this section'? And he says, Jesus, if you could do that, it'd be freaking great.

So we possibly have an opportunity to make millions of people's lives better. I'll attack that on my own time, but it's something I'm interested in, because it seems like a natural fit. It's something I've thought about since I joined the team: we have a really golden opportunity to improve the quality of life of a certain sector of our fellow citizens, and we could do that. I could probably even throw together a couple of Mark IIs out of the spare parts I've got laying around, put them in over there, and let them try it. I think that's a different product on the Mark II — a very specifically devoted UI, a very specific product solving a very specific problem — but our hardware can handle it and our framework can probably handle it. It's just a unique set of skills. Obviously the screen doesn't do them any good, right? But the Mark II would handle a lot of great stuff for them and improve the quality of their internet experience, which he told me is pretty crappy, to be perfectly honest.
So I thought that was interesting. He's going to reach out and send me an email, and we're going to meet in person over the next month or so, once I get back from Hawaii. So that was the exciting non-work stuff.

And yeah, at night after these meetings I'm still getting back on my Jetson Nano, trying to get the GPU working so we can do local TTS and STT without an internet connection at some point — and I'll put Mycroft on it as well. What I'm really excited about is probably buying a second one and plugging an SJ201 into it, because it's pin-for-pin compatible with the Raspberry Pi 4. And so that's my status update.

Yeah, cool. Chris, you there? So I submitted an alarm skill PR for the UI. As both Ken and Derek alluded to, I've addressed those issues, so if you want to take a second look at it — I did do a couple of commits. And I'm looking through comments on the pairing-skill PR I have out there. There's some sort of — I don't know if it's a race condition, or message-bus messages stomping on each other, or what it is — but I thought I had fixed it so you get to the home screen as soon as you're done with pairing, and it's not always working like that. And Ken pointed out some issue where the home screen showed up first, which I haven't seen yet, so I'm going to try to recreate your steps, guys, and see if I can get that too. I thought I was done with the pairing skill, but not quite; I'll probably look at it this afternoon. I took a very long lunch today because I drove to Lawrence to hang out with Eric and Josh — that's been a good part of my day so far — but I'll get some work done on the pairing skill this afternoon.

Cool. Yeah, I've been continuing a little bit on the wiki skill, but didn't get too much time on it. I moved on to the GUI for it, because that's a bit more exciting.
There's a lot of good stuff happening in the Home Assistant skill from the community there. The new additions have been support for covers — things like automated blinds, basically anything that can close over something — and binary sensors, as opposed to range sensors, I guess. And the CI stuff, which has been a huge PR and a huge amount of work from Tony. So just doing a bit of a shout out there, because it's a whole bunch of work that's been going on, primarily by Tony and a guy called Matthias. Shout out to them. I've been reviewing those things.

There's some stuff going on in the Mimic Recording Studio from Thorsten, who's continuing his enormous TTS journey. And this week I also had a chat with a master's student — there's this quite cool program in Australia called the Master of Applied Cybernetics — and he's looking for a placement to finish his master's. He's a web dev by day and is obviously interested in this whole area of AI and ethics and language and culture and all those sorts of things. We had a good chat about where his skills and interests might fit in, and it seems like the whole Precise training loop and web interface would be a perfect area for that. He's going to put together some stuff in writing around what he hopes to get out of the whole thing. If we keep an eye on that and try to line it up with him when it potentially comes around, it could be a really good extra bit of resource for us — and good for him as well, so it's a mutually beneficial thing. I think that's about it.

I actually have a web-based GUI that lets you build your models on your local computer, in a browser. It's been on the back burner for almost a year now; it was a side project I did back when I was working on that stuff.
So if we had a web dev who could take that and professionalize it, it would be a relatively valuable tool, because then our community could train their own models, improve the wake words, and customize them for themselves. And some of them even have GPUs, so they might be faster than our systems. That was something I was excited about at one time.

Yes — are you still running the Tacotron TTS on your Mark II? Yeah, the Coqui one. Yep. My unit's probably dead because I'm running the... Is that me cutting out, or Ken? Well, I'm hearing it, so I assume it's Ken. Okay. Cool.

I think I heard your question, Ken. I am still running the Coqui TTS. It's good, but it's got a whole lot of interesting quirks that I'm sure they'll be keen to iron out. I've been meaning to reach out to them, actually, and find out the best place to report those things. So if you... Anyway, it's not about that. I dropped out, but that was it. I was just curious, because I've been running it and it sounds really good.

Yeah, there are some interesting ones. Like — oh, that actually wasn't too bad — sometimes when it says 'Nelson Mandela', it says... it says 'Nelson Mandela'. There are some gotchas where, if it hits certain mixes, it goes off into the weeds — it's probably branching into some strange place — and it just babbles on. But because of the female voice, I'm used to that, so it didn't bother me. Okay, I hope we're not recording anymore. All right, now we have to censor Ken. But that's it. I was just curious, because it works really well. I like it, and I'd love to see it running locally.

Yes. There's also Larynx. The crew at Rhasspy — or at least Michael — built Larynx, and the whole purpose of it is that he wants it to be able to run TTS faster than real time on a Raspberry Pi. I haven't really played with it, but that could be one to look at.
Does he use the GPU? Oh, he can't — the Pi doesn't have a working GPU for this. Does the Pi not have a working GPU? Its GPU is a disappointment; it's not what you think. It's a graphics GPU — in fact, the only code that supports it is specifically for 3D graphics rendering — and for some reason you can't use it with TensorFlow models and the like. The support's not there. It may be there in the future, but it's not there now. That's why I bought the Nano: you're dead in the water trying to do this stuff locally without a GPU.

Oh, that also — I don't know if you guys saw, but Pantacor have been testing out the 5.10 kernel on their systems. I'm hoping we'll be able to have a test of that for the Mark II shortly, which will add GPU support — at least for the graphics, if not for TensorFlow. It will render the QML faster.

That's what I wanted to say, actually: the GUI, the UI, is just top-notch. I'm really impressed with the work; it's looking really good. I saw it on the alarm skill and the timer, and it's really coming together very well. Yeah, I feel like there's a whole lot of polish that people are not necessarily going to see, because it's just the way that it should work — but they would see it if it wasn't there. So yeah, I think we're doing the right stuff. It's coming along great.

I'll give kudos also to the design framework we came up with. Putting a screen together is now a very easy thing to do. The second I put the grid lines on and count where things go on the screen, it's cake. Do you actually have a thing that shows the grid lines on the screen so you can shift things around? There is a way to do it. If you're in the design tool you can — at least I have it running on my Mac. You just go 'view grid lines' and they show up. Yeah, thanks. I think that's one thing we really need to do.
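The grid-line workflow described here — count grid units on the mockup, place elements to match — is easy to mirror in code. A minimal sketch, assuming a 16 px grid unit and an 800x480 screen (both are illustrative assumptions; the framework's real unit and the device resolution may differ):

```python
GRID_UNIT_PX = 16  # assumption for illustration; the design system's real unit may differ

def grid(units):
    """Convert design-grid units to pixels, so margins and sizes in QML
    can be written the same way the mockups count grid lines."""
    return round(units * GRID_UNIT_PX)

# e.g. a card inset 2 grid units from each edge of a hypothetical 800x480 screen
margin = grid(2)
card_width = 800 - 2 * margin
card_height = 480 - 2 * margin
```

Expressing every offset in grid units rather than raw pixels is what makes "count where the things go on the screen" translate directly into layout code.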
I haven't really written up a good post on it, but I think maybe we're getting close to the point where that's needed, or we should get it into the docs better. I might do a good blog post on how to use it — I've kind of created some templates. It'd be cool to do a video, particularly with you, Chris, looking at how you take some of these designs and turn them into QML, and do that as a little screencast for our YouTube channel. I think that'll help a lot of people who want to add goodies to their skills but just don't really know where to get started beyond some of the base templates like show-image and show-text and that sort of thing. Yeah, it took me like an hour to do the alarm skill. Yeah, cool. So it's pretty straightforward, which is nice. Cool. All I have to do now is pair it so I can test it. And that's kind of everything. Maybe I should just kick the wiki skill over to you too, then. Or you can learn how to do it. At the moment, I'm just stealing your stuff. What's that? At the moment, I'm just stealing your stuff and sticking it on there. Yeah.

I was treated to a presentation of the new video player — music player, I guess media player — that OVOS is just getting ready to release. Jarbas demonstrated it to me, and I thought it was really top-notch. I mean, I have some concerns about the underlying architecture, but the UI — yeah, yeah.

We should also shout out to Aditya and the Blue Systems crew for putting in some good contributions. The volume slider that pops up when you change the volume is really nice. You can slide it back and forth; it works really well. You like that? Yeah. My first inclination was, geez, you don't have to show it. Well, maybe we could have a setting where you can show it or not show it.
But we're also talking about — and this is where skills interaction, I think, is going to come into play — if you are actively listening to something, maybe it's not needed. But if the system is idle, maybe you'd want to show it, right? That's the distinction: if you're listening to music, the volume is going to change and you're going to be able to tell it changed. But I think that's a discussion more for skills interaction. If the system is totally idle, then yeah, 100% I want to see it.

Yeah, it would be less of an interaction thing and more of a skill-state thing from my point of view. Because if the system were idle, then you'd basically want to go into an interactive volume-setting mode, rather than just a 'set the volume' kind of command, right? That seemed to be what you were talking about there, Derek.

Yeah — I mean, if it's idle and you say 'set the volume to seven', it's not doing anything else. And it gives you multiple ways in, a multimodal interaction, right? So you're like, oh, I don't actually like seven, that's not quite loud enough. Then you have the opportunity to slide it up to eight — which may be faster at that point, if it's within reach on your desk, than saying 'change it to eight' or whatever. So at system idle, I really like it. As for whether we want to do it while listening to music, I could see an argument against it there for sure.

If you're on the latest build, you should also notice that when the wake word is recognized, the gutter — our first use of the gutter — is now visible. I did notice that. I've actually been appreciating the more frequent updates, and I did notice that the other day; it was really cool. And that's another one from Aditya.
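The volume-overlay distinction being debated above — suppress the slider during playback, show it at system idle — boils down to a small state check. A sketch, where the state names and function are purely hypothetical, not any actual skill API:

```python
def should_show_volume_overlay(system_state):
    """Show the on-screen slider only when there's no audible feedback:
    during playback the user hears the change; at idle, the overlay *is*
    the feedback and invites follow-up multimodal adjustment."""
    return system_state == "idle"

# At idle, "set volume to seven" produces nothing audible, so show the
# slider; the user can then drag it to eight instead of speaking again.
show_at_idle = should_show_volume_overlay("idle")
show_during_music = should_show_volume_overlay("playing_music")
```

Framing it as a function of skill/system state (rather than a user setting) matches the "skill-state thing" point: the same volume command renders differently depending on what the device is doing.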
So shout out again to Aditya for that. It showed up over there first — we'd been talking about it too, but they had it over there first, so they brought it over for us. Yeah. And the idea, as Derek was telling me, is that it will mimic what the LEDs do. So it's the same experience whether we have LEDs, a screen, or both — it'll all work the same. Eventually, that is — this first piece is just the wake word, but essentially whatever the LEDs are doing, we want to do with the edge of the screen too. That way, in the future — because a lot of smart displays don't actually have LEDs, right? They use the screen entirely for their feedback — you've got both bases covered. Right. Awesome.

Okay, well, yay to everyone. I love all the shout outs; it's good. Michael, did you have anything else that you wanted to raise? No — apparently I'm still recovering from my surgery a couple of days ago, so I'm trying to take it easy. Cool, cool. I did do a blog post for the skills interaction stuff, so if you haven't seen it or can't find it, ping me. Great. Cool.

All right. Well then, everyone — actually, I'm not going to come tomorrow, just so you know. I'm going away for the weekend, so tomorrow is my Saturday. Seems reasonable. Can I work five days a week, please? All right, I'll see you all next week. Bye.