All right, welcome to the September 9th Mycroft Software Developer Sync. Woo! Woo! All right, we had a nice long weekend there, and we're back. We've already done sprint planning for this week, so today we're just going to discuss any hiccups or things that have arisen since yesterday, and/or other issues of concern. So I guess we'll just go around the clock here, starting with Gens.

I don't have anything blocking me. If we have time, we can talk about either the RCS No stuff or, you know, adding video support, or one of the other things in there. But I will set up a separate meeting for the RCS No stuff anyway. Okay, there's nothing I need to talk about.

Okay. Ken.

Cards tomorrow: I checked my tracking number, so we're heading back to the house in the morning to get those, and then I'll start working on the enclosure code for that. I had a really good conversation with Chris today regarding history: how we got where we're at, why I'm seeing what I'm seeing, and what we can do moving forward. I'm trying to build out the existing enclosure code we have now, which supports the ReSpeaker array, such that it will continue to work in the absence of the new hardware, so that we don't have to have yet another configuration image for ReSpeaker versus our board. It's quite easy to do, because if we don't have our amplifier, for example, we'll get an exception trying to hit the I2C device; I can simply trap that exception and say, okay, we'll turn the volume up or down using the normal means, and things like that. So depending on how different the interface Kevin gives me on those cards is, regarding the USB control channel, we should be pretty good to start building something a little more robust, so that not everything is a one-off. One of the things I'd also like to do moving forward, but not right away, obviously, is to address one of the complaints I heard from the community: we don't really allow people to select their devices.
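The fallback Ken describes (trap the I2C failure when the amplifier isn't present and fall back to normal software volume control) could look roughly like this. This is a minimal sketch, not the actual enclosure code; the class, register address, and helper names are all illustrative assumptions:

```python
class VolumeControl:
    """Prefer the hardware amplifier; fall back to the software mixer."""

    def __init__(self, i2c_bus=None):
        # i2c_bus is None when the new board (and its amp) is absent,
        # e.g. on a ReSpeaker-only build.
        self.i2c_bus = i2c_bus

    def _set_amp_volume(self, level):
        # Hypothetical I2C write; raises OSError if the device is missing.
        if self.i2c_bus is None:
            raise OSError("amplifier not present on I2C bus")
        self.i2c_bus.write(0x2F, level)  # 0x2F: assumed amp register

    def _set_alsa_volume(self, level):
        # Stand-in for an ALSA mixer call (e.g. via amixer or alsaaudio).
        self.level = level
        return "software"

    def set_volume(self, level):
        try:
            self._set_amp_volume(level)
            return "hardware"
        except OSError:
            # No amplifier: use the normal means, as described above.
            return self._set_alsa_volume(level)
```

With this shape, the same image runs on both hardware configurations; the exception path simply selects the ReSpeaker-era behavior.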
We just kind of force the devices we want, even in a laptop version. So if your machine has three different sound cards, Core kind of doesn't really give you an option to say, use this one versus that one. Or, for example, on a Pi 4 on the Mark II, we have three potential output display devices. So it would be really nice to be able to allow the skills to select one of the three, or all three of them. Maybe they want to play the audio out of the surround sound on the HDMI TV, versus the speaker array, or versus the mic array we're going to have. So that's kind of the stuff we were talking about: how did we get where we're at, what's the difference between the different images, and why, historically? I think we're good, except we have to support the Mark I, which is kind of a problem. So I don't know what we're going to do there; the Mark I is kind of an issue. But the point is, I had good conversations and started looking into that code today and understanding the skills. I understand now why we have so much of our code jammed into this skill. It's bad. And like I promised, I'm not going to gut it like a fish, but moving forward, we probably shouldn't have an individual skill for each device. What we should really have, much like Linux the operating system, is a call to get (or maybe just during instantiation, get) the capabilities of each module. So for example, the display module says, hey, I have LEDs, or I don't have LEDs. And the skill can query the capabilities and say, okay, well, I can't put all the LEDs on this device, it doesn't support that; or I can. The way it is now, it's not doable that way, but moving forward, that would be really, really nice. The thing that Chris and I discovered, which was more alarming, and which is something we're going to be drilling down into a little more moving forward, is why our images are so different. For example, my Mark II reports headphones and a ReSpeaker array.
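The capability-query idea above could be sketched like this: each hardware module reports what it can do at instantiation, and skills ask before using a feature instead of assuming it. All names here are illustrative assumptions, not existing Mycroft APIs:

```python
class DisplayModule:
    """A hardware module that advertises its own capabilities."""

    def __init__(self, has_leds, outputs):
        self._caps = {"leds": has_leds, "outputs": outputs}

    def get_capabilities(self):
        # Skills query this rather than hard-coding per-device behavior.
        return dict(self._caps)


def show_listening(display):
    """A skill degrades gracefully based on what the device reports."""
    caps = display.get_capabilities()
    if caps["leds"]:
        return "animate LED ring"
    # Device reports no LEDs: fall back instead of failing.
    return "show on-screen indicator"


# Two configurations, one skill codepath:
mark2 = DisplayModule(has_leds=True, outputs=["hdmi", "touchscreen"])
laptop = DisplayModule(has_leds=False, outputs=["screen"])
```

The point of the design is that adding a new device means declaring its capabilities once, not writing another one-off skill.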
Chris's reports an ALSA device and a ReSpeaker array. Why is that? My kernel command line brings in a bunch of stuff for the BCM2835; his doesn't. How did we get there? So that's part of what we'll be drilling down into a little more moving forward. In other words, we really would like to see all of us on a modular system, so that the dmesg output, the kernel command lines, and all the hardware reporting are consistent. It should be consistent, and it isn't, and why that is is the question to be answered. And there was something else we had discussed, Chris, that I was going to follow up on. Oh, what was it? I'm getting old; this is why old people keep notebooks in their top pocket, for short-term memory. Yeah, so that's basically it: why are we seeing differences, and, you know, can we get rid of that and make sure that everybody is on the same platform? I have no idea what build or what image this is. Oh, I'll remember what it was in a sec. It's not the Blue Systems image we're using. The Blue Systems image that Ake worked with them on is an Ubuntu install. The current image we have is an old Picroft-based image. So that's our current image right now; I just wanted to bring that up. So anyway, that's where I'm at. I'm not blocking anything. I'll have my boards in the morning, and hopefully soon I'll have the new boards integrated into our enclosure code in a little more modular manner, in an attempt to start being able to support some of this.

Okay, well, that's fascinating. So fascinating. So I'm going to move on to Chris V. quickly, because there's a lot of noise here.

Yeah, so I'm not really blocked right now. I'm trying to change the ownership of the new wake word directory on the upload server to the precise user; it was my user, and that doesn't work out very well with my script. So that's still running, and probably has been for half an hour, with some million files it's trying to do.
So once that's done, I'll be able to finish testing the move script, and then I'll move on to the deletion logic we just talked about. And of course I spent a good part of my late afternoon, late morning, early afternoon with Ken today. So that's right.

Okay. So you're making progress on things, but you actually have a blocker on the username stuff?

It's only blocking as long as this command runs. I'm changing the ownership of these files right now, but there's a million of them, so it just takes time.

Gotcha. So it's not so much that somebody is blocking you as that this process that is running is blocking you.

Yeah. My script fails if the files are not owned by the precise user. I'm working on that now. And I also spent some time today on the Precise box, kind of making it more homogeneous with how our servers on DigitalOcean are set up, as far as users go. I gave Ken a username; you know, that kind of stuff, little things, so that when I'm on this box, it looks and feels just like any other server.

Okay. All right. Great. Derek.

Yes. All right. So forgive me if it gets loud here. Mostly I'm just trying to wrap up some more hardware stuff for Ken and Chris V., and also balancing a little bit of work on the wireframes for what will be the new tagger site. I did have a question on that. I don't think there's anything that's blocking me there, but are we still... okay, so what I got from talking with Josh was that we basically have seven different tags in that UI. Is that still kind of up in the air, or have we locked those down at this point?

Well, I think you should anticipate that those tags are going to change over time. Right. So don't count on those staying the same. Don't count on those tags even remaining there; we might eliminate them from the UI at some point. So there's kind of two users of the system, right?
There's the people who are using it and helping us tag things, but another user of the system to think of is our internal team. And from their point of view, you know, they're going to be setting up: oh, these are the tags that I care about. And so they may want to expose through the UI only, say, a certain three of the twenty tags that are possible at a given time, right? Until we catch up, until we get enough things tagged with those attributes. So, I mean, think about it being flexible from both sides: the people who are tagging, and the people who are collecting, you know, directing which data to collect.

Okay. Yeah. So one thing I was thinking about is trying to be a little bit more binary in the responses. A lot of the suggestions you had, Josh, were like, some of them are "yes" and "yes, but". So for example: yes, the sample contains the wake word "Mycroft"; yes, but the sample contains multiple samples. So we could split that out, instead of trying to do some of those kinds of multiple things. On one pass you're tagging, okay, this includes "Mycroft", but we don't care at this point whether it's multiple; you're just tagging whether "Mycroft" is in there. And then later it's like, okay, all of these samples you're listening to have "Mycroft"; now you're just tagging whether it's multiple or not multiple. So, you know, it's more steps, but it's binary.

I think we want to do A/B testing, right? So, you know, I would make each system as user-friendly as possible, but I wouldn't try to guess which one is the right way to go. Yeah. And I think, even with something like a pitch tag, you know, we want more than high and low; like, you know, a masculine, feminine, and neutral kind of thing. So, yeah.

Right. Right. I feel like there are going to be others that you can't really do in that binary-tree style.

Well, you can always do it.
You can turn anything into a binary. Like, does this sound like a male voice? Right. You can say... right.

Right. Yeah. That's kind of what I was thinking too: you could go that approach where it's super simple and binary. Like, does this have multiple, or, you know, on the first pass, does this have the wake word? Yes or no, that's it. That's all we care about; we don't care about how many times they say it. So anyway, I've been working on thoughts like that, and I'll just continue on that path, keeping it open. We don't have the site yet, and then we can talk tomorrow and get into it more.

Yeah, another thought I had, since it's going to be so dynamic, or can be so dynamic, is maybe making each tagging event its own task, right? So you don't have five or ten things on one screen. You're tagging the pitch in one step, or you're tagging whether there's a wake word in the next step. And it keeps it more interesting for the user, because you're going back and forth between different things to tag. And, you know, depending on what we want tagged, we could present whatever question we need tagged at the moment, rather than showing a whole screen of things that may have been tagged partially or not. So that's just something that entered my brain recently that I wanted to bring up.

Yeah. Yeah. That makes sense. I've been thinking a little bit along the same lines. Okay. All right. So yeah, we can talk more about that tomorrow.

So that's what I've been up to.

Okay. Thanks, Derek. I was just going to call on Josh, but he just disappeared. So, let's see. I don't think I have any updates since yesterday. I've been very busy with things not directly related to software development. So, yeah, I guess we can make it a short meeting today. Oh, there's Josh. Josh is back.
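The "binary passes" idea being discussed could be sketched as a small question-routing function: each pass asks one yes/no question, and later passes only apply to samples that earlier passes qualified. Tag names and questions are examples only, not the real tagger schema:

```python
# Ordered binary passes: (tag name, question shown to the user).
PASSES = [
    ("contains_wake_word", "Does this clip contain the wake word?"),
    ("multiple_utterances", "Does it contain the wake word more than once?"),
]

def next_question(sample_tags):
    """Return the first unanswered question for a sample, or None if done.

    sample_tags maps tag name -> bool for answers already collected.
    """
    for tag, question in PASSES:
        if tag not in sample_tags:
            return question
        if tag == "contains_wake_word" and not sample_tags[tag]:
            # No wake word present: downstream passes don't apply.
            return None
    return None
```

Because each call yields exactly one question, the UI naturally becomes one tagging event per task, which is the "its own task" flow Derek describes.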
Josh, would you like to... is there anything you want to give an update on, or any issues you want to raise?

No significant issues. I've got my printers running for enclosures, and I'm ready to jump back in whenever I get an SJ201, ready to start pitching in there. And I'm looking forward to seeing the UX stuff from Derek. I would caution: there's a temptation to simplify that interface to the point where it's really binary, and that's where the variation piece of the wake word tagging becomes important. I'm not sure what the answer is, but I suspect that some variation in the questions we're asking, and in which pieces of the tagger we're working with, is going to be beneficial; but too much variation may end up with a bunch of errors. In other words, if we're asking a variety of different questions in a variety of different ways, and it's a task people aren't hugely engaged in, people will make a bunch of mistakes. But if we're asking two simplistic questions, or we're always asking the same question, people will zone out and stop, you know, contributing or whatever. So I think striking a balance there is going to be really important, and that's something that I think is on your plate more than anything else.

That's a really good point. I mean, I think we should think about the user experience not just moment to moment, but the overall experience: how long do we expect them to sit down and tag things for? You know, is it two hours? Probably not. Is it even half an hour? Probably not. Maybe, you know, we set it up so that we expect people to log in for five minutes and tag as many things as they can, and then that's their session, right? And if we think about it as a flow like that, then maybe we can help find the balance between that monotony and accuracy. But I still think A/B testing is going to be a big part of that as well.
Yeah, and in the long run, you know, if we can come up with a system that's robust and really does keep people engaged and contributing in a positive way, I think that's really the innovation that helps us accelerate going forward: having the community engaged in this training mechanism, and then starting to do novel things with the data, right? And, you know, throwing new and novel questions in there will help us to build a machine learning algorithm that's really powerful. That's really where I'd always envisioned the company going, and it's really great to see everybody working on that now.

I had another thought here on that. I've brought up A/B testing a couple of times, and I want to make sure that when we're talking about the architecture of the database and the source of the samples and the source of the tagging information, it should be clear. We talked about having a session as one of the things that we can track. We should be able to get at which version of the UI, if we're running multiple versions, the user was using when they submitted a sample, so that we can do some correlations with the accuracy of the data through one method versus another method. That sort of thing.

That's a really good point, Michael. Yeah, you're going to have to change the database schema a little bit to handle that, Chris.

Maybe. I don't think so. I don't think we will. I just think it's in the implementation of the GUI: when it's submitting the data, it has to just include that information in its session info.

Yeah. Now, there's a meeting scheduled for tomorrow with no subject. Is this the meeting where we're going to talk about the tagging? Is there not a good subject on that? That's probably my bad. Oh, yeah. That's not a meeting? Was that kind of for you and I?
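Chris's point that no schema change is needed can be made concrete: the GUI simply folds the UI variant into the session info it already submits with each tag. A minimal sketch, with field names that are illustrative assumptions rather than the actual tagger schema:

```python
import json

def build_submission(sample_id, tag, value, session):
    """Serialize one tagging event, carrying the UI variant in session info.

    The backend stores the session blob as-is, so A/B correlations
    (accuracy by UI version) can be run later without a schema change.
    """
    return json.dumps({
        "sample_id": sample_id,
        "tag": tag,
        "value": value,
        "session": {
            "id": session["id"],
            # e.g. "binary-v2" vs "multi-tag-v1" during an A/B test
            "ui_version": session["ui_version"],
        },
    })
```

Since every submission is stamped at creation time, a later analysis can group tags by `ui_version` even if the UI variants have long since changed.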
No, tomorrow's is for talking about this design. That's the one that's at one o'clock your time, right? Yes. Okay, I'll add a title to that. I'm sorry, that's my bad. That's okay, but it's to talk about the tagger, right? Yeah. Okay. Awesome. All right. Any other things we want to talk about? We can make this a short one.

Yes, what time is one o'clock Central where you live? To me? Yeah. God, no. No, I don't think I'm going to be there. I suspect you can leave Guest off of that invite list. Yeah, I think you did already. Yeah, yeah. Yeah, 3:30, 3:30 a.m. I'll see you in my dreams. You'll just be coming home from the pub, eh? Yeah, exactly. Yeah, yeah. Yes.

I did want to catch Josh up real quick on the discussion we had before we started recording, actually. There's an open source company I talked to that does kind of exactly what you were requesting yesterday. They're targeting IoT devices, mostly routers, actually. But they said, and I'll quote this: one-click convert from Debian to Android; a push can completely change the OS; if it fails, it retains the last update and rolls back to the last known working. Anyway, I thought it was interesting.

I'm sorry, can you step back? One-button convert from Debian to Android?

That's what he said. I mean, they can push firmware updates; they can do everything down to the kernel level on a device. It's a couple of former Canonical guys. I'd love to have a conversation with them and see.

I'm highly skeptical of that, given that there's no Python interpreter for Android that we've been able to identify.

It's not for us, specifically. It would just be that they've shown, on a device capable of running both Debian and Android, that they can remotely...

Oh, shift between images. Yeah, I mean, I suspect that the way they're doing it is exactly what I... there are four partitions. One's a boot partition.
You've got two operating system partitions and a config partition, and when you recycle the device, you just change the pointer, right? And you mark which partition to boot. But he mentioned other ways of flashing just the firmware, if you just want to update the firmware outside the OS, and this whole containerization of stuff that they've got. But it sounded exactly like what you were asking for.

Yeah, let's talk to them, and then let's also ask them the question that we get all the time: who are your direct competitors? Because if you're proud of your product, you should be able to point at your competitors and say, hey, here they are, and this is why we're better. Let's have a conversation with all of them. And maybe this company is better than their competitors and we can work with them. I'd be really excited to have that particular problem fully solved so that we can stop messing with it, because I honestly think that, of all of the things that are tripping us up, this is really high on the list, and nobody in the company is spending time on it, or has the time to spend on it, or is inclined to spend the time on it, depending on who you are at the company.

Well, that's the Pantahub company that you talked about, Derek, right? Yeah, yeah, yeah, we can get a follow-up with them next week. They're called Pantahub? Yeah, P-A-N-T-A, Pantahub? Yeah. Yeah, P-A-N-T-A. I'm excited about their name; I think they'll win just on the brand. All right, great. I thought it was a fortuitous kind of coincidence, so we'll see if anything comes from it. All right.

Okay, well, hey, one last thing. I'm printing this enclosure and lid, Derek. Is there any reason, when I switch from one resin to another, that I need to rebuild the scaffolding, or is it just a matter of changing colors and resolution?

No. Well, in Formlabs, they might make you redo the scaffolding if you're changing resin. They let you import the other one, so we'll see how it goes.
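The partition scheme being speculated about (two OS slots, a boot pointer in config, rollback to the last known-good slot on failure) can be sketched as a tiny state machine. This is purely illustrative of the A/B-slot idea, not how Pantahub actually implements it:

```python
class SlotUpdater:
    """Two OS slots with a boot pointer and last-known-good rollback."""

    def __init__(self):
        # Slot A ships with the factory image; B starts empty.
        self.slots = {"A": "debian-v1", "B": None}
        self.active = "A"
        self.last_good = "A"

    def push_update(self, image):
        """Write a new image to the inactive slot, then flip the pointer."""
        spare = "B" if self.active == "A" else "A"
        self.slots[spare] = image
        self.active = spare  # next boot uses the new image

    def report_boot(self, success):
        """After reboot: either commit the new slot or roll back."""
        if success:
            self.last_good = self.active
        else:
            # Failed boot: point back at the last working image.
            self.active = self.last_good
```

The key property is that a bad push (even a Debian-to-Android swap) never destroys the working image, because it only ever writes to the inactive slot.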
Well, yeah, just thinking, you know, it's an optical thing, so if the color changes, you might actually change how long the laser needs to hit it.

All right, how many babies are on the line? You got yours there, Guest? I think it's just mine. Has your baby met Derek's baby yet through the video chat? You guys should introduce them. Virtual play dates. There you go. It's definitely a thing.

Thanks, folks. All right. Thanks, everybody. We'll talk again tomorrow, and then we'll be back here on Friday.