All right, welcome to the August 8th, 2020 Developer Sync Meeting. So how's it going? Awesome. All right, great. So we done? All of it. Time to ship it. Excellent. OK, so Chris has been working on the schema for our new and improved wake word tagging system. And Ken's been working, I assume, on the wake word training system. So yeah, let me just go around and get an update on how things are going there. So let's start with Ken. OK, let me see if I can drag and drop, just a minute. I made a little video for everybody. Ooh, fun. Yeah, so let's move this guy out of here. Well, can I just drag and drop you over here then? How do I do that? Let's do here, OK, and this. Well, I don't think that worked. Yeah, I only did that once during my VC call today. I guess I'll just cut this section out of the video. Welcome back. I don't think that went as expected. Did the video make it into the chat room? No, but you disconnected. Yeah, and it just started playing the video when I dragged it into chat. All right, well, I will describe what said video demonstrates. And I am planning, actually, on doing a YouTube video. But what I did is I created the first rough draft of the Precise Studio. And it is a UI. So basically, if you check out the code for Precise, you'll get a new directory called train. And under there, you'll have a UI directory. In there, you just type precise-studio and hit Enter. And then you go and bring up a browser of your choice. And on localhost:8000, you'll have the Precise Studio. Let me bring it up real quick, and I'll explain what all it has and what all it does. But yeah, of course, I'll have to run it, bear with me. But basically, it allows you to create and do everything from a UI instead of having to do anything off the command line. So that's annoying that that video didn't work, because it demonstrated everything I'm going to show you, but OK, or tell you.
So when you bring it up, you have a menu that allows you to go to models or test data sets or training data sets. There's the measure accuracy page, which you guys have already seen, record samples. So the concept here is it's running locally on your machine. And you're creating models on your machine for whatever you want, and you can install them there or whatever. Now, part of this page, one of the options is record samples. So when you go there, it'll say, OK, press record, and then you can record, and then you can say stop, and then you can say save it. And you can build up your samples really quickly using your browser. Then you go into training data sets, and you say, OK, move this file from samples over here, move this file from samples over there, and you construct your data sets all using the UI. And then you go into your model directory, and you say, create a new model and use this training data set, or change the training data set for the model. OK, train the model, then you can test the model against the test data sets and get numbers. And the last feature I'll be adding is where you can actually do an interactive test. But ultimately, the goal would be that you could click a button and say download it onto your Mark II. And it would replace your wake word with the custom wake word you created. So you could actually create a custom wake word, not being a programmer, and probably do it in about 15 minutes. And then you could iterate through and add more samples or delete samples or balance or whatever you want to do. So I'll have a video. This code should be done by next week. And I should have a video by this time next week on YouTube that demonstrates how to use it. And then we can figure out what to do from there. But it's pretty much the culmination of what I'm working on, which is for non-programmers to be able to create custom wake words using Precise and deploy them to either their Mark II or their Mark 1 or their Mycroft Core installation.
So that's what I've been working on. I was going to do some more models, but I haven't gotten enough feedback from the models I have put out yet. Although I did get positive feedback from Gez. Michael, have you installed the new model and how is it working? I have not had a chance to test the new model. Okay. Well, once I get some more feedback... let me know, but I'm pretty much going to make a bunch of models this weekend anyway. But I wanted to get the actual Precise Studio going so I could use it to do them and make sure it can handle large data sets and things like that. So yeah, that's what I've been working on. The NAS issue, this is kind of the hemorrhoid of my life lately, it's not going quietly into that good night. So I got an email from Josh. It says, whatever you do, don't bork the NAS. And here's how you can get into it. Which made me think, I think what I really want is to ask Josh to have somebody physically go over to the data center and copy the NAS directory onto a thumb drive and get it away from all that water. And then once he's done that, I will go and execute my script that will begin moving those files out into the directory structure that Chris V and I had agreed upon, because that's going to be a destructive move. It's going to move them from that main directory into their new subdirectories. So if something could go wrong, if we have a backup, that would be wonderful. The assumption is it won't take 26 days, even though that's what the original estimate was, because the belief is that as the subdirectory shrinks in size, since I'll be moving files out of it, it will perform faster as I go. So I don't really know how long it's going to take. I know just straight copying, it took four days to get about half of the files out. And part of that was that the problem was, as the target directory was getting filled, it was taking longer and longer, right? Because now it had two really big directories to deal with.
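The destructive move Ken describes, draining one huge flat directory into agreed-upon subdirectories, could be sketched roughly like this. The function name and the bucketing callback are hypothetical, not the actual script:

```python
import shutil
from pathlib import Path


def move_samples(source: Path, dest_root: Path, bucket) -> int:
    """Move files out of one large flat directory into subdirectories.

    `bucket` maps each file to its target subdirectory name. Because
    shutil.move is destructive, a backup of `source` should be taken
    before running anything like this.
    """
    moved = 0
    # Snapshot the listing first so we never iterate a directory
    # while mutating it.
    for entry in sorted(source.iterdir()):
        if not entry.is_file():
            continue
        target_dir = dest_root / bucket(entry)
        target_dir.mkdir(parents=True, exist_ok=True)
        shutil.move(str(entry), str(target_dir / entry.name))
        moved += 1
    return moved
```

As the source directory shrinks, directory listings and lookups get cheaper, which is the reason Ken expects the move to accelerate as it runs.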
So the assumption is if I move, that will go faster. But that'll be destructive. And so I'd love to get Josh to have somebody back that up to removable media somehow before I start this destructive move. So that's kind of my status. And hopefully this time next week, I can point you to a link on YouTube that will show you what I've done. And then Gez will actually hopefully check out the branch and test it himself. So that's where I'm heading. Okay. Yeah, that sounds really great. I was gonna say we can... I've sent the model to a couple of people, but we should just post it in the Precise channel. I think you can say anyone that feels like testing it out, let us know. Okay, now I have a page that I did for Josh, which is for the project that's ongoing, which was a description, a page that describes how to install them, and here they are, the .pb file and the .pb.params files. And then Josh's feedback was you should probably put this on GitHub. So it sounds like maybe I should create a wiki page on GitHub, Gez, and put that up there. And then we can post the link to that in the Precise chat room. Yeah, you can put it in the Precise repo wiki, or if it's worth putting in the main documentation, then we can do that. Okay, I'll put it in the repo for now. Chris V, I believe you had a question or two, probably what did I write this in? Well, that, and did we let Derek have a shot at designing the UI for this stuff so that we're consistent across our web applications? Right, so the first question is, it's just Python code using the built-in simple HTTP server. So you don't have to install anything. When you check out the branch, it comes as part of the Precise codebase. And I didn't want to have to put a bunch of onerous requirements in there for, you know, web servers and things like that. So it's just a Python script. You just run it from the command line, ./precise-studio.py, and it runs.
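The no-dependencies setup Ken describes, the standard library's built-in simple HTTP server on localhost:8000 with nothing extra to install, could look something like this minimal sketch; `serve_studio` and the `ui` directory name are assumptions for illustration, not the actual script:

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler


def serve_studio(directory: str, port: int = 8000) -> HTTPServer:
    """Serve the studio's static UI files on localhost using only the
    Python standard library -- no web framework to install."""
    def handler(*args, **kwargs):
        # SimpleHTTPRequestHandler can serve an arbitrary directory
        # via its `directory` keyword (Python 3.7+).
        return SimpleHTTPRequestHandler(*args, directory=directory, **kwargs)

    return HTTPServer(("localhost", port), handler)


if __name__ == "__main__":
    server = serve_studio("ui")  # hypothetical UI directory name
    print(f"Studio running on http://localhost:{server.server_port}")
    server.serve_forever()
```

Passing `port=0` lets the OS pick a free port, which is handy for testing; the default of 8000 matches what's described in the meeting.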
The answer to your second question is, yes, I would love for Derek to have at it. And once it's done, which I'm anticipating is sometime next week, I will walk him through getting it running and then get all of his feedback and implement it, because it is basically just plain vanilla right now. So yeah, it could use some love, some CSS love and some UI love and UX love and all that. So I hope that Andrew... On that love, one of the things we do now is Material Design. I don't know if the Python is going to support that or not. So we'll have to see. Is the intention... so, this is just running locally right now, right? For you. Yeah. In other words, the intent is to enable the masses. Everybody can create their own custom wake words. Really where I'm going with this is I'd love to be able to get a Mark II and be able to have it installed on the Mark II. Because I look at the Mark II as an appliance, right? I would think most people are never gonna SSH into it or get into it, it's a consumer device. So anything that I'm looking at Mark II related would be external to the Mark II and would allow you to modify your Mark II. So this would run locally. You basically check out the branch that I haven't created yet, which is just a branch of the Precise codebase. You run the Precise setup.py. And then once you have Precise running, then you would just run this by saying precise-studio, and then you would bring up a browser and go to localhost:8000. And everything it's doing, it's doing on your local machine. So it's your personalized version of your files. You're not sharing with anybody. You're creating your own wake words locally and then you can push them wherever you want. Yeah. So it's a local, yes, local studio, right? I think, what's that? So that doesn't make a difference, because it's not really necessarily one of our, you know, internet web applications at that point. It's just something local, right.
But it only runs locally for now. So I think that may change. No, agreed, but it could use some UI attention. It's very vanilla. So at a minimum CSS styling, right? To match what we have, but nothing elaborate. It's really just straightforward, five functions. Well, it sounds like there's, yeah, there's some core capabilities here that we're gonna need for our eventual web collection and tagging system, right? So the ability to collect samples and review them and sort them and tag them, I think, it sounds like you have those functions in there. And I think that, you know, do you have a migration path for taking what you've done now and turning that into what will be our eventual web application? So I don't have a roadmap, because I don't know where we wanna go with it. I don't know if it's something that we wanted to sell like a developer studio, like a .NET developer studio. Maybe it could become the beginnings of that. Maybe piggyback a skill developer studio onto it so that you could develop skills on your local computer using a UI and then download them onto your Mark II, wake words, whatever else. So I didn't know if it was ever going to migrate anywhere beside your local computer. And so I don't really have a migration path to move it anywhere else. As far as the tagging goes, that's strictly a personal issue regarding how you choose to name your files when you record them. Remember, you're recording them in a browser and then you're saving to your local machine, so you control what they're called and where they go. Okay. Well, it sounds like we should have a meeting to discuss the roadmap for this, because this I think will be a really useful tool that a lot of people want. But, you know, let's figure out how it's gonna fit into the overall, you know, grand plan of involving the community in collecting wake words and tagging and whatnot. Yeah, this is kind of aside from that, because it doesn't really touch anything that the data flow work covers.
This is just for you to create custom wake words locally. But we can talk about it, because once I have it up and we can see it and I can post a video like I said on YouTube, then we can discuss where we wanna go with it from there. Yeah. Okay. Well, and we also need to discuss what's the priority of, you know, enabling people to create their own wake words. I think that it is a useful feature, but I'd rather do it in the context of our overall system rather than, you know, it being something that only developers can really interact with. And then, you know, if you're creating your own wake word, versus, you know, if it's a low-level tool that we're expecting only developers to use, then that's one thing. And we can, you know, spend a little bit of time on it and wrap it up and put a bow on it and set it aside. But I think that we can do more than that. I think that we can make a tool that everyone can use. But in order to do that, we need to figure out how it fits into, you know, the Mycroft ecosystem. You know, is it a new skill? Do we package it as a skill that people can install to train new wake words? Or do we make it a part of Selene, you know, or expose it as a separate service through Selene? You know, that kind of thing. Like what is the architecture overall for the data flow? And what's gonna be useful for our end users and, you know, how to best serve them? Because, you know, if I'm creating a wake word, I'll want to deploy it to all of my devices, right? Presumably, or potentially, right? So, you know, we just need to think about those things. Absolutely, yeah. And like I said, I think it'll be... So it's gonna be hard to make money off of it, maybe. So, you know, we give people too many tools to be able to do things themselves. And that may minimize our ability to say, oh yeah, we can create a wake word for you, right? Yeah, yeah, yeah.
Again, I think it'd be best that next week when it's done and ready to show, everybody gets a look at it, and then we can all step back and say, okay, here's what I think about that. Okay. Well, why don't we prepare? Can you, you think you'll be ready to do a demo on Monday? No, I was thinking more like next Friday. Okay. Well, let's... But I can do a demo next Friday for sure. I can commit to that. I just don't wanna overcommit and underdeliver, and Monday's a bit tight. Right. I guess what I'm concerned with right now is, you know, this seems to be a little bit of a sidetrack from where we're going with the wake word, you know, with the overall system. And so if it's a matter of like you don't have enough direction, you know, then I think we should have a discussion about that. But if you think that this is a really useful tool and it's worth, you know, spending an extra week on it, then I'd still like to have a discussion about that as well. So... Yeah. Yeah. Yeah. Yeah. Yes. Do you want A or A? So... Yeah, exactly. But what I would say is that I'm... Yeah, I think it's very useful. The thing is, I'm in a pseudo holding pattern on the NAS stuff, and also a little bit of a pseudo holding pattern otherwise. I'm ready to implement the third piece of the roadmap for Precise, but I'm in a holding pattern until Chris has made some progress and we get the data moved out. And that third bullet point, I guess, or third project, is the triggering of, okay, new data has been gathered, a level has been hit, a trigger has been generated, create a new model and test it and decide what to do with it. So I'm ready to do that, but I'm blocked a little bit. So that's why I kind of said, hey, rather than just building models, let me just do a really easy non-programmer interface so everybody could build models. And that's where this came from. It's just kind of the culmination of what I had learned. So even the hyperparameters are exposed in the model training process.
But anyway, that's neither here nor there. We can talk, if you'd prefer, maybe Monday or Tuesday about the direction and stuff, just to make sure I'm not going too far astray, if that makes sense. Yeah, okay, let's do that. Okay. Okay, Chris, there... you're not here. Okay, that was easy. I guess I was muted while telling you that you're muted. I think it's been a while since I've done that. But mostly good this week. So the process status, the readiness check PR. Okay, I kind of redid that a fair bit. Essentially, I had some ideas around a different implementation. So all the statuses are now implemented as IntEnums, integer enumerators. So they're comparable against each other. So it just makes it a lot easier where, if you have a status of ready, then it also clearly means the process has already started and the process is alive, if the process is ready to do things. Anyway, I put a link to the PR and documentation in the ticket on JIRA. I added public documentation to it already, because it just seemed way easier to get it to everyone. But yeah, so that's feeling pretty strong now. And so hopefully Chris can have a look and we'll get that in very soon. And what else? Also been working with Åke on the plugin, the audio system plugin changes, and he's just put forward some initial documentation on that, which is great. So hopefully we'll have that in before 20.08 as well. We were talking about the structure of repositories for those plugins, because I am conscious that we don't want a separate repo for every single plugin and extend our already copious amount of repos. So I might put some detail around that somewhere and get everyone's feedback on that. What else has happened? Been working with El Pacino on the Georgian voice. We were having some weird issues with commas, and it turned out that half the commas were a completely different random character.
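The ordered-status idea described above can be illustrated with Python's IntEnum; because IntEnum members compare as integers, "ready" automatically implies the earlier states. These state names are illustrative, not necessarily the ones in the actual PR:

```python
from enum import IntEnum


class ProcessStatus(IntEnum):
    """Ordered process states: a later state implies the earlier ones."""
    NOT_STARTED = 0
    STARTED = 1
    ALIVE = 2
    READY = 3


def is_alive(status: ProcessStatus) -> bool:
    # IntEnums compare as integers, so READY also counts as ALIVE.
    return status >= ProcessStatus.ALIVE
```

This is why a plain `status >= ALIVE` check is enough: callers never need to enumerate every state that implies liveness.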
So normalizing your data always helps, but that just wasn't really expected. But anyway, that's probably the worst of this week, but progress there. Actually, I also worked on the Qt side, had a quick look at booting the Qt image from USB and couldn't get that immediately working. So I'll go back to... I'm gonna have a chat to the Blue Systems folks and Åke and stuff and see if they have any ideas. So I'm not wasting too much time on that, because someone might already know it. And also looking at screen rotation so we can go for 316 or just sort of back and forth, which seems very easy to do. That's it, I think. Okay, thanks. Yeah, I've been having some problems with my Mark II GUI interface. The device itself seems to work fine on the audio side, but the GUI is not updating reliably. So I'm gonna take that offline. But Derek, what's the good word? Hey, so for Ken, I would love to take a look at that UI once we decide the priorities there. And I think what Chris was talking about earlier, whether it's local or it's on the web, whatever, we all kind of want everything to have a similar look and feel experience. So, yes, I would gladly help out when the time is ready for that. Did that movie come through? Oh, I didn't see. Oh, okay. It's posted on the chat. Yeah, it's on the web now. Oh, it looked like a link to a local file to me. Ah, all right. Well, I was hoping that. All right, like I said, I'll put a video together. But yeah, definitely Derek for sure. Cool, so what I've been working on is still mostly the hardware role, with the project rollover almost officially done. I have the next three prototypes ready to go. I didn't quite make it today to get them shipped. So I will ship them tomorrow. We were actually waiting until yesterday for one more. We didn't have some heat sinks. So I wanted to make sure we got those and keep it consistent with the rest.
So what I've been doing in the meantime is just working on what would be the first enclosure that we use for testing, for the first kind of validation tests of the SJ201 design. I've laser cut one of those today and I'm starting to put one of those together. So I'll probably send the first one over to Kevin. But Kevin also has access to all of this stuff to make them himself. So I'll also give him the instructions to make subsequent ones for more testing in the future. But I'll give him the first one, because he's busy with other things right now. Yeah, and that's really kind of taken up most of my time. He's working on that. I don't know, do you wanna give a quick update, Michael, on where Kevin's at? Sure, yeah, I can do that. So Kevin has finally received all of the parts for the SJ201. He's tested the top half of the board and that seems to be working okay. He's found a couple of bugs, things that are fixable in place. So that's good. So basically he's verified that the voltage regulators are working and the USB sound card is recognized by Windows. So that's a good start. The more interesting parts with the XMOS chip and that sort of thing are on the to-do list. Those are some of the parts that he has to hand solder on himself. So he'll keep us apprised of that. But so far, so good. It's coming along. Yeah, and so once I kind of wrap up this laser cut design, I'll start back on the detailing of the actual enclosure design. It will be the kind of pipeline to production. So yeah, that's me. Michael, I have a question about core functionality very quickly. Sure. When a Mark II is brought into the room and it is connected to the same network, or via a USB cable, does it function as a speaker and microphone for the host operating system? I think that would be a really cool feature. I don't think that that is part of the existing core functionality right now.
Okay, so we'd have to probably develop a... well, we could develop a device driver, but the slicker way to do it would be to piggyback the generic USB audio class driver stuff and just report over the USB port as if we're a speaker and a microphone. Exactly, yeah. You don't have to have a device driver on the host, right? Right, exactly. Well, you can't do that right now. But if you unplug our little USB connection to the Pi and just plug it into your computer, wouldn't that work? Yeah, well, there's some internal software routing you'll have to do, right? So, yes, the Mycroft does appear, if you plug it into the USB port, as a USB sound device with input and output capabilities, but that doesn't mean that Mycroft core is actually paying attention to that at all, right? Well, understood, but the point is, worst case, you could just not run Mycroft core and use it as a speaker and a microphone. I believe so. Very cool. Okay, so Chris is back. Let's see if we can get to him before the internet does. Okay, hi, everybody. I don't think my parents' network is used to my level of internet usage. I just rebooted the router. Hopefully that'll get me going for a little while. So, lots of good in the last few days. I published the newer version of the schema using the input from last meeting. Haven't heard any feedback yet, so I'm hoping that's a good thing. But I have started coding based on that schema. So I have all the DDL written, I've got my local database with the new... not the whole new schema, right now I'm just building what I need for the collection piece, which is the wake word table and the sample table. And so I've got all the DDL built for that. And I changed existing code that used the old wake word table to use the new one. And that's all committed.
And right now I'm working on the endpoint on the device API that will replace the current endpoint that's being used to upload the audio, and it has authentication and all that good stuff in it. So that's almost code complete. I'll have to write some tests around it, et cetera, but making really good progress on that. So after that API is written, then the remaining thing will be the script that copies, well, that moves the data from the machine where it's stored to the NAS. And then that'll be most of what this first sprint is. I also reorganized the sprint in JIRA. I know we haven't even looked at JIRA much recently, but when I originally put the sprint together in JIRA, I put it together as moving the entire Precise API over at once, but I think it made more sense just to move over what I needed for this piece and then move over the rest when I'm ready for the tagger. So I rearranged the tasks there a little bit to make more sense for what I'm actually doing right now, which is the upload piece. So I did some of that and that should all be up to date now. So yeah, lots of good stuff. I don't think there's anything really bad to report, or ugly. So it's not really blocked by anything, except for if anybody has any other feedback on the schema. I know Ken just said something about putting the paths in a separate table so that we don't have to repeat the path for every single row in the sample table. I may go ahead and do that. But yeah, unless there's anything else, I'll just continue to go forward coding with what I have now, and with luck, in the next few days I'll be at the point where maybe we can start talking about the taggers. Quick question, Chris, the Mycroft core code won't have to change, right? You'll just be backward compatible with it. It won't have to, it will, okay. It won't have to change. I think for 20.08, I'd like to see it change. Actually, no, it will have to change.
I may have to support both for a while, because the interface in core to deal with the public API is in a whole different part of core. Basically, this API call is just stuck in the audio service right now directly, instead of using the API mechanism in core, which is what the rest of the public API uses. So if that's gonna break, I don't know if we released that as a minor version, if that's gonna impact compatibility or not, but since we've got the major version coming up, 20.08, maybe we just include this as part of 20.08 using this new API instead of the old API. Any thoughts? It'll be coming up very quickly, just in the context of, we're talking like three weeks. Yeah, I shouldn't have any problem meeting that date. And I can do the changes to core myself. They're not rocket science, not really hard. It's just, it is gonna be calling a different API with some different arguments. Yeah, I don't think that core stuff would be difficult, just in terms of having the backend services ready. Yeah, so what I'm hoping is, for this part of the project, the upload part, once I have everything tested, I could do a release of Selene that will go up with the core release, basically. We can coordinate it, either that or we can support both API calls initially and then remove the support for the old API call after we're happy that the new one's working. Just never mind, scratch that, that's not gonna work. So yeah, we'll have to coordinate moving 20.08 and the newest version of Selene together. Well, I don't think that's gonna work either, right? We need to be able to, like, if we're gonna change an API such that, yeah, we're talking about, you know, you're talking about changing the Selene API, right? Yeah. So we need to be able to keep those backwards compatible. Yeah, that's kind of the point I was surfacing. That's a problem.
Yeah, so the point I was surfacing, and all the points you guys made are valid too, is if you're planning on adding authentication, then you're basically deprecating the existing API. I'm deprecating the existing endpoint, yes. Well, yeah, I mean, the existing endpoint, therefore, all core, all users that have core checked out will have to upgrade or their code will break. That's the only warning I was throwing out there. Now, this is in terms of submitting samples, right? Yeah. So yeah, so we're gonna have to put out, I mean, maybe we'll have to push out like a minor update that checks to make sure the API still works, and then if it doesn't, then basically just disables that function, right? So that... Well, I guess if we disable that, because yeah, we could just say that if you're running an old version of the software, maybe that endpoint that's being called right now redirects somehow or something like that, we could redirect some. I don't... Well, basically we're saying that, like, if you have an old version of the software, you can't upload sample files, right? Is that the issue? Yeah. That would basically be it. Now we could just put some minor update into, like, 20.02. Well, as long as that code doesn't crash on the check, and otherwise just says, oh, I got a bad return code, big deal, and logs it, I think we'll be fine. Yeah, exactly. That's my point, is that we need to expect that if it tries to upload a file and the endpoint doesn't respond or whatever, then core at least doesn't crash, it just fails gracefully and says, okay, well, I couldn't upload that sample. Yeah, this shouldn't be a breaking change that causes things to crash. No, it shouldn't. And I don't know what adoption rate we have for new major versions, so whether that's gonna really hinder our wake word collection capabilities or not. Yeah, I'm not too worried about it.
I think that the important thing is that we establish a process for doing this gracefully, right? So if we're gonna break something, if we're gonna break an endpoint, we first test that we can break that endpoint gracefully without causing problems. So I mean, we should break it now. We should break it for 20.02 now and test that it's working fine. Okay, and working fine just meaning we get a return code, maybe something loggable in a logger, and that's fine. Exactly. Okay, I'll do that first. I'll just, you know, check core and see how that works. It may be fine already. I mean, it may not be a change we have to make, it may just do that now. If that's the case, then we will be fine. When we move to the new one, we'll just stop getting submissions from other people. I think it's actually gonna be a bigger deal than that, though, because I imagine that it wasn't set up to have the service just disappear. But I'd also like the user to be informed, like, hey, you know, the wake word collection service has been disabled on your device, you know, some sort of notification somehow, whether it's through Selene, you know, like an email notification, I don't know if we do those kinds of things now, or through the device itself saying, you know, wake word collection has been disabled because of an API change, please upgrade to the next major version. You know, something like that, but not every time the user submits a sample, for example, like some kind of logic there, right? That could get annoying, yeah. Yeah. Yeah, so all I was getting at was that this is something that needs to be thought through.
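The fail-gracefully behavior being agreed on here, that an upload attempt against a dead or changed endpoint must only log and never crash, might look like this sketch; the function name and URL are made up for illustration, not the actual core code:

```python
import logging
from urllib import error, request

log = logging.getLogger(__name__)


def upload_sample(url: str, audio: bytes) -> bool:
    """Try to upload a wake word sample; on any network or HTTP failure,
    just log a warning and carry on. A disappeared endpoint must never
    crash the client."""
    req = request.Request(
        url, data=audio, headers={"Content-Type": "audio/wav"}
    )
    try:
        with request.urlopen(req, timeout=10) as resp:
            return 200 <= resp.status < 300
    except (error.URLError, OSError) as exc:
        # Bad return code, refused connection, DNS failure: all the
        # same outcome -- log it and report the sample as not uploaded.
        log.warning("Sample upload failed, skipping: %s", exc)
        return False
```

The boolean return lets the caller decide whether to notify the user once (for example, "wake word collection has been disabled, please upgrade") rather than on every failed sample.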
And what would be ideal is if, since you already have the user account coming up in the request, the new API were able to handle backward compatibility by detecting, if you're planning on changing the payload, by detecting the old payload and doing the authentication behind the scenes using the account ID, rather than just breaking. Now, what I would say though, just to add to what you were getting at, Chris V: push comes to shove, if you have to throttle or shut down the sample collection for a period of time, I wouldn't lose any sleep over it, because I have about 1,500,000 samples to go through. And so that will keep us busy for a while, and chances are by that time you'll have it figured out. So I wouldn't let it bother you that, you know, hey, the old stuff is not making its way into the endpoint. That would be okay. But I was kind of hoping we could just have an API that says, hey, this is an old-style request and he's not sending an authentication token, but I do know his account ID and yada, yada, yada. Anyway, it's up to you. I just wanted to throw that out there, that the migration path is something to be considered if we pride ourselves on only breaking or deprecating things at a major release level. Well, and in regard to this, even if we break things, we can't ever have a situation where we need to push out an update of any kind and an update on Selene at the exact same moment and hope that they synchronize perfectly. That's just never gonna happen, right? Yeah, and just on your point about authenticating with the account ID, that's not gonna work either, because the authentication mechanism for the device API right now is based on device ID, not account ID. It's got a different mechanism than the rest of Selene does. It's a legacy thing that we never changed. So the way it is right now, authentication just wouldn't work, period.
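Ken's backward-compatibility idea, one endpoint accepting both payload styles during a migration window, could be sketched like this. Note the conversation immediately flags that the real device API authenticates by device ID rather than account ID, so this is only an illustration of the general pattern, with hypothetical field names:

```python
def authenticate(payload: dict, valid_tokens: set, known_accounts: set) -> bool:
    """Accept both payload styles during a migration window.

    New-style requests carry an auth token; old-style requests only
    carry an account ID, so those are authenticated behind the scenes
    instead of being rejected outright. The field names here are
    hypothetical, not the actual Selene schema.
    """
    token = payload.get("access_token")
    if token is not None:
        # New-style request: validate the authentication token.
        return token in valid_tokens
    # Old-style request: fall back to the account ID lookup.
    account = payload.get("account_id")
    return account in known_accounts
```

The appeal of this pattern is that old clients keep working until they upgrade, and the fallback branch can simply be deleted once traffic on the old payload dies off.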
Unless we changed it to, again, send up the device ID, and that would be a breaking change that not everybody would have, so yeah. I do also want to say that we have a pretty good update rate, right? We do? Yeah, yeah. I mean, the people that I see hold back are projects like Big Screen. If I was in Big Screen's position, trying to keep things as stable as possible, I'd obviously sit back for a little bit, just because making that change is more work. But in terms of users, you know. Like, Mark, aren't you a Picroft user these days? Yeah, I think we primarily see most people update. I mean, there's still people that run old software, and so it's pretty useful that we have maintained a good upgrade path. So even if you haven't had a Mark 1 plugged in for a couple of years, if you plug it in, it should be able to update itself all the way to 20.02 or 20.08 or whatever. But yeah, I think as long as it's not crashing everything, which I can't imagine it would, then yeah, I don't think we should be concerned. Okay, and that's something else we're gonna have to talk about. I think we talked a while ago about deprecating, like, 19.02 at some point. How does deprecating that work for people who are still running 19.02, for example? Does that just break them and they have to burn an image, or what happens if that's actually the case? I don't know what the answer to that is, but if we are gonna start deprecating really old core versions, we're gonna have to be able to handle that somehow. Well, I think the rule has to be something like, first we have to give them warning. And secondly, if we're gonna break something in terms of not supporting a service anymore, where their 19.02 installs just stop working, we have to ensure that at least the upgrade path still works, right? So if they haven't upgraded yet and they have to go to at least 20.02 to access the system, then that's fine, as long as they can update to 20.02, right?
So we need to have some, yeah, I'm not even sure how to put it. I mean, we need to have the institutional instinct to keep people online as long as possible, and to keep things working unless we've very explicitly decided they're not going to be supported. So yeah. And it's probably gonna be a while before we have to worry about deprecating an old major version, but I just wanted to kind of put it on people's minds. Yeah, and along those lines, Michael, regarding this issue: if the old API endpoint doesn't get destroyed, and if we don't want to maintain the exact same DNS, which I think is like training.mycroft.something, then Chris could stand up the new API and the two of them could peacefully coexist. And then the upgrade to core would go to the new API, therefore not breaking any of the old devices. And when they upgraded, they would move to the new one. So all I was getting at is we need to think it through. That's all. Yeah, no, I'd rather not support two endpoints that do the same thing, if we can avoid it. Well, I'd rather support two endpoints than have a bunch of our devices stop working. So, you know. One way or the other, we need to make sure that we're not borking people's devices. I agree with that 100%. I guess at some point, what do we do if, in this case, we're re-architecting something? Are we completely limiting ourselves if we're doing too much re-architecting? In this particular case, I don't think it's that big a deal, because, like Ken said, if the only result is that people can't upload, you know, we're not recording their words or whatever, then that's not a loss to us. And we just need to make sure that they're informed. And it's really no loss to the user at this point either, because we haven't really exposed them to the data.
The only thing they can do with that data right now is delete it. So it's not really much of a loss for the user. Under the new system it would be a different story, but of course under the new system this isn't an issue, so. So the point being that, for this particular case, as long as devices don't stop working, if we wanna stop supporting that endpoint, I think it's fine. Yeah, and to be clear, what we're talking about is, like, a POST request failing. That's it. It's just a simple POST request. That's it. Yeah, so I think. Yeah, but if I turn off that old URL, I'll have to find out if it takes a while to get, like, a 502 or a 504 or something like that. Yeah, yeah. You'll have to make sure that's not gonna cause problems. Well, and I think we do turn that off in 20.02 before we upgrade to 20.08, so that if people are sticking with 20.02 and doing those minor point releases, they're not arbitrarily trying to post audio files to an endpoint that doesn't exist, because that's just pointless. But, you know, if there's someone sitting on 18.08 or something and they're uploading files into the void, then they should try and update. Okay, I'll take a little detour starting now. Since we have limited time before 20.08 comes out, I'll take a small detour and make sure that we're not gonna break anything if we turn that off. And when I need to make a change somewhere, I will. Chris, where are you deploying this new backend to? It's on a different server than the current server, correct? No, it's going in the same place. It's the same API; it's going to be part of the same API that the device uses for anything it communicates with in Selene now.
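To investigate the concern raised here, that hitting a turned-off URL might hang for a while before producing a 502 or 504, a bounded-timeout probe along these lines could be used. This is a sketch, not existing project code, and the URLs are placeholders:

```python
import socket
import urllib.error
import urllib.request

def endpoint_alive(url, timeout=2.0):
    """Return True if `url` answers with a 2xx status within `timeout`
    seconds; return False on any error (connection refused, gateway
    errors, timeouts, malformed URLs) instead of hanging or raising.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.getcode() < 300
    except (urllib.error.URLError, socket.timeout, ValueError):
        return False
```

A client using a bounded timeout like this fails fast whether the old endpoint returns an error code or simply stops answering.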
No, no, no, what I'm saying is: the existing machine where this data gathering is running, you're planning on creating your server on a different server than that server, correct? Yeah, I mean, it'll be running on the same server that runs the public API now. Right, which will be different than where the server is today. Right now, this is running on Iron and Wicked, and the existing device API runs in the cloud. Right, so you will need to figure out with Josh how to expose the NAS to your server, if your intent is to be able to move those files each evening onto the NAS. Well, I mean, SSH is available, which means SCP works. I'm sorry, what's that? SSH is available, which means that SCP works, so that was my plan; we'd just use SCP to copy them over. Yeah, that'll probably work. Just so you know, I'm planning on asking them to expose that NAS to Lambda2, since it makes no sense to have to SCP the files from one server to the other when Lambda2 is the training server, right? It's the machine that needs the files, and it doesn't have a mount to where they are. So I actually have to pull them out, zip them up, SCP them over, and use them. But I'm gonna ask Josh to put a mount to that NAS on Lambda2 to forgo that. So I was just thinking you might wanna do that too, but yeah, you could SCP them too. That's fine. So Ken, that's not really, a mount really won't be useful to you until the directory structure issue is sorted out, right? Yeah. In other words, right now Lambda2 is where the training gets done. That's the big, fast, whatever, Lambda server. And it doesn't have access to the data we have. Right. So I actually had to pull it off of the NAS on the existing machine and copy it over. Right, but if it was just a mount, it would be incredibly slow, right?
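The SCP-over-SSH plan mentioned here amounts to assembling and running a command like the one this helper builds. The host name and destination path below are made-up examples, and in practice a nightly cron entry would invoke the resulting command:

```python
def build_scp_cmd(files, remote_host, dest_dir):
    """Build the argv list for copying `files` to remote_host:dest_dir
    over SSH with scp. Example host/path are hypothetical:

        build_scp_cmd(["a.wav"], "lambda2", "/mnt/nas/samples")
        -> ["scp", "-q", "a.wav", "lambda2:/mnt/nas/samples"]
    """
    return ["scp", "-q", *files, f"{remote_host}:{dest_dir}"]
```

Separating command construction from execution (for example via `subprocess.run`) keeps the nightly job easy to test without a live SSH connection.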
Because now you're not just dealing with the fact that it's got a million files in one directory, but you're also dealing with the fact that it's not actually physically in the machine, so you're going over the network. Right, but the assumption is that eventually that mount will be where the files are placed each day, and then each night a cron will move them into their corresponding proper subdirectory. And so the assumption would be that those files would also be on that mount, and that that mount would be where the entirety of all this data lives, since it's, you know, rapidly approaching two terabytes, and that mount has eight terabytes. So short of firing up a new server somewhere with four or five terabytes, I was trying to figure out how we can reuse the existing resource, even though it happens to live under a bunch of water. And so, yeah, that's where I was going with that: it's wonderful that we have a big drive that we can store the files on; it's a shame nobody else can get to them. Yeah, okay, well, I get it. It's a write-only storage device, in fact. Yeah, yeah, I know. So you've got two problems to solve. You need to get it into this directory structure so the performance of the machine itself is reasonable, and then you need to make it accessible to the Lambda training server. Exactly. You're gonna work that out with Josh. Yep. Okay, great. One extra thing this whole discussion has reminded me of is that I'm gonna prioritize the 20.08 work, particularly over the next week, because I think it'll be worth doing an extra point release for 20.02, so that people that are staying on that have all the latest updates there, and then we can do the breaking changes for 20.08. And, yeah, this will be my first one, so I will be asking for help at points, but I just wanna make sure I'm not leaving it to August 31st or whatever. Yeah, I think there's a 20.08 out there, right?
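The nightly cron job described, moving files out of one huge flat directory into proper subdirectories, could be sketched like this. Hashed two-level sharding is one assumed layout for illustration; the real scheme (by date, account, wake word, etc.) may well differ:

```python
import hashlib
import os
import shutil

def shard_path(root, filename, depth=2):
    """Map a flat filename to root/ab/cd/filename using the first bytes
    of its md5 hex digest, so no single directory accumulates millions
    of entries."""
    digest = hashlib.md5(filename.encode()).hexdigest()
    parts = [digest[i * 2:i * 2 + 2] for i in range(depth)]
    return os.path.join(root, *parts, filename)

def move_into_shards(inbox_dir, root):
    """Nightly job: move every file from the flat inbox directory into
    its corresponding shard directory under `root`."""
    for name in os.listdir(inbox_dir):
        dest = shard_path(root, name)
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.move(os.path.join(inbox_dir, name), dest)
```

With two levels of 256 buckets each, a few million samples average out to well under a hundred files per directory, which keeps directory listings and backups responsive.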
I think, so, I mean, do we have any breaking changes yet to be made? I've got a few that I want to make. Okay. We're just starting, but I'll be listing them out. So, yeah, the plan is to list those out so that we know what we wanna do, discuss whether or not we're gonna do those, and then get the point release out, so then we can get back to the breaking changes, yeah. Okay, and I assume he's been in touch with you? He's offered to help with this release. Yes, and I have graciously accepted his assistance. Okay. But should we have either a separate meeting, or devote an upcoming DevSync to what's gonna go into 20.08? I think that you should have a separate meeting for that. Okay. And, yeah, I guess, how about you schedule that? Cool. And so, someone pointed out, I think Chris pointed out, that we haven't really been following the ticket system, and I think that that's a mistake. I do wanna keep these meetings more efficient, but I think that we also need to keep track of the work that we're doing. And so I wanna keep those instincts going, to log what you're doing or what you're going to work on in the ticket system, right? Because it's gonna be a lot more important when we start to have more people on the team. But already I can see that there's been a little drift here, at least from what I expected, and so I wanna make sure we stay on top of that. So let's make sure that we're logging the work, at least for the upcoming week, into the system so we can keep track of it. If you don't know where to put it, then we should have a discussion about that. But maybe we need to have more frequent meetings, to be honest. I think these meetings going over an hour isn't super useful or comfortable for anyone. But there's a lot of stuff to talk about, and I assume you guys are in communication throughout the week, outside of these meetings. But maybe that's not enough.
I mean, it's a little bit tricky timing-wise, because of all the different time zones we have to cover. So maybe we should have a discussion about how we can address that. Maybe we need to have some meetings that don't necessarily include everyone, but at least keep people up to date on what's going on. So, yeah, I don't know if anybody else has any feelings about that, but I wanna keep the momentum going. And I don't want us to lose two or three days because something's out of sync; that's kind of a big deal for us right now. So, I don't know, does anybody else have a sense of whether or not this is a real issue now, or if this is just my perception because I've been more focused on the fundraising side of things and less on the development side for the last little while here? It might be a combination of things. Maybe it's worth having a separate meeting, aside from this update that we publish to everybody, that just goes through the board. It would be very task-oriented, not a lot of questions or these kinds of discussions, but would make sure the board is where we think it should be and reflects what updates we've made and what progress we've made. Yeah, I think that makes sense. Maybe on Mondays we should review the board, and then throughout the week a couple of times we could just have quicker check-ins. We'd just say, hey, how are you doing? Is there anything blocking you? That kind of stuff. And you don't necessarily have to refer to specific tickets at that point in time, but at least once a week check on the pace of getting through the work that we are expecting to get through. But throughout the week, still touch base to make sure that things aren't getting held up. So yeah, maybe that's the way to go.
Instead of having two meetings a week where we basically do the same thing in both meetings, maybe we could have a meeting on Monday, and then a couple of quicker check-ins on maybe Wednesday and Friday. Yeah, I'm not sure the community really cares about our JIRA board. So that part could be an unrecorded part of the meeting, and we could do all this discussion outside of what we're sharing. Yeah, maybe not. I think certainly the planning aspect of putting those sprints together and that sort of thing might be of interest to people, but yeah, they do tend to be a little bit drawn out and somewhat tedious. Maybe we can work on that too, somehow. Yeah, I think the key comes back to not just doing it in those check-ins or in the meetings, right? So it's like, if you guys wanted to meet and I couldn't make the time, I would just commit to making sure that those tickets were absolutely up to date each day before you guys met. So yeah. They're also very painful now because we have a lot of different things going on. The more focused we are on one or two tasks, the more we can just talk about those one or two tasks, but it seems like we're kind of in a lot of different places right now, so there's a lot more to talk about, right? Well, I think this is actually gonna be a problem forever. There are a lot of pieces here, right? And we need to find a way to be able to communicate effectively about it. As the team grows, we're gonna get somebody in here who's dedicated to watching the overall process, instead of having me being kind of split and Chris kind of doing some of it and that kind of thing. We'll have somebody who's dedicated to just making sure that we have a plan and we're sticking to the plan, and if the plan needs to change, that we do it in a thoughtful way. Right now, we're small enough that if we were all in the same room every day, we could draw on whiteboards and whatnot.
We'd be having a bit of a different experience than we are right now. But I don't think we're really losing a lot in terms of efficiency by being distributed. I do think that we're losing a little bit, though, and so I just wanna tighten that up a little if we can. Make sense? Okay, so with that being said, today's Thursday. Let's meet again on Monday, and let's plan to go through the JIRA tickets. I don't wanna make that into a meeting where we discuss status so much as one where we discuss plans. So let's just make sure that, prior to the meeting, all of your status is up to date and you've put all the tickets that you anticipate working on for that week into the system, so we'll have everything there in the meeting and we can just discuss, okay, well, what are our priorities? And how are we dependent on each other's work, and that sort of thing. So, all right. Sounds good. Cool. Yep. Awesome, well, thanks. It's a lot of good progress. I'm excited about the new wake word collection system and getting the system in its entirety up to speed, and I'm really looking forward to trying out the new wake word model that Ken sent me. I did do a little bit of testing with my wife, and the hit rate on the current model is abysmal, so it'll be really easy to tell if there's a difference with the new one. Should be pretty easy. Yeah. So, all right. Okay, thanks everybody. I'll keep you apprised of any hardware updates over the weekend as we get them, too, because I know that's of interest to everybody. Absolutely. All right. Have a good evening. Have a good evening. Goodbye, guys.