Welcome to The Homelab Show, episode 24. It's funny, I have to look over at it. Like, I just said it, and then I had to look over before I got it wrong. We're going to be talking about Mycroft today, because voice assistants are cool, but I think some people, rightfully so, are nervous, because having a listening device in your room that sends everything out to the cloud somewhere seems like something you'd be a little bit concerned about and want to be cautious about. But that's where Mycroft comes in. So we figured this is right up there with the self-hosted podcast idea, because it's self-hosting your own system to be able to do this. So that's pretty exciting for sure. I have Jay here, so welcome, Jay, and of course Josh from Mycroft. So we're excited to get this started. Someone said there's no audio. No, other people said it's fine. All right, sorry, distraction, I was making sure everything seems to be working. We can all hear each other, so it should be working perfectly fine. Sorry about that. Yep, yeah, it scared me. I mean, it's probably a Linux PulseAudio issue, honestly, if I had to guess. So we're not going to go there. We're not going to talk about that. No problem. All right, I see people saying we have audio. We were making sure of it, because, for those who only listen to us on podcasts, we do this live as well, so we have both audiences. All right, now the first thing we have to do is thank the sponsor of the show so we can get some bills paid here, and that is Linode. We have been a Linode user for a long time. If you've been on the website, The Homelab Show, it is all hosted over on Linode. Jay takes care of all that, setting it up and configuring it, which also means, if there's ever a problem, I just blame Jay. It's easier. Yeah. And the last time there was a problem, it was a typo. So the thing is, it happens to all of us. Nobody's perfect. I don't care how long you've been working in the industry, you'll make mistakes.
That's never going to stop. That's just how it goes. But using Linode as a sponsor was not a mistake, because it's been working out very well for me. On LearnLinuxTV, the entire web presence is on Linode, so I've had quite a bit of familiarity with Linode. They've been a sponsor of that channel for quite a while now, so it was a natural fit here, because yes, this is a homelab podcast, but I think the beauty of homelabbing is that you can decide what you want internal and what you are willing to have in the cloud. And that is a decision you make for yourself for your homelab. Maybe you have a VPN server on Linode, or a Syncthing node with, what was the name of that feature that Syncthing has? Oh, yes, private encrypted nodes. Yes, that's another feature, if you want something in the cloud. And I've been working on, and of course everyone's been asking when they're going to be ready, some new WireGuard videos. And I think I'm going to use Linode for all of those, because I like building my own VPN servers. You can't trust these companies, but you can trust yourself. Well, you can't trust yourself not to make typos sometimes, but at least you made the typo, not another company. Yeah, and you know what you can do, since you are the VPN provider at that point, you can manage the logs yourself. And let's be honest, I think the best way to manage logs for a VPN provider is to pipe them over to /dev/null, which you could do on Linode. So that way all the logs go into a black hole, and you don't have to worry about news of some provider making a mistake, because you are the provider. You created the server, so you call the shots on it. Yep. And you'll be the first one to know if you get a takedown notice. It'll come to you. They will knock on your door. Hopefully not. I'm not relevant this week, for those of you following the news. But anyways, before we get off topic, welcome, Josh.
Let's start talking about, well, actually, Jay has to announce something real quick. I don't want to skip it. Yeah, I want to spend a moment to just make a quick announcement. I'm probably going to do a live stream. I don't think I'm going to do it this week, though I still might; I might do it next week. But as of October 1st, LearnLinuxTV will be my only job. Up until that date, I've basically been working two jobs. LearnLinuxTV has become, I mean, I don't want to call it a job, because it's a passion, but there's a lot of hours spent wearing two hats. And it's finally at the point now where I've given notice at work. It was really hard, because the company that I work for is amazing. How do you leave an amazing company? It's really hard to do. But I had to, because I'm only one person and I can only do one gig. So LearnLinuxTV will be my only job, which will hopefully mean more content. There probably won't be any difference until mid-October or something like that, but it's happening. So at the same time, I'm excited and a little scared, but I think it's going to go great. Yeah, and well, technically, you kind of had a gig before, because you did publish a book or two. So there are some other little side hustles Jay's got to keep the education content flowing. Yeah, and I consider the books a part of it. But this also gives me more flexibility with books, and I won't have to spread my time so thin. I mean, there are some days I wake up at eight in the morning and I'm not done until 11 at night. So I'll be happy just being done at five, honestly. Yeah, it's hard, because I do the same thing. I brought up WireGuard because I've been playing with so much WireGuard stuff, diving deep into it, getting ready, because it takes a lot to make tutorials really concise and short. You have to spend a lot of time understanding all the edge cases to create the most concise level of documentation.
So it takes so many more hours to create concise documentation than to kind of ramble on about all the things I tried to get there. And then you heard me complain last night, off the air, about the current video I'm working on and how I can't reproduce the problem. It's working fine. So how do I make a video about a problem if I can't reproduce the problem? Anyway, long story. Yeah, it'll be a new adventure for me. So I'm really excited for it, and I think there are big things coming. So, yay. Yep. All right. Now on to the personal assistant that you can self-host. This is cool. And, you know, I don't think it's ever accidentally gone off during a Homelab Show, but certainly when Jay and I are talking, there's a little box behind him, and it has accidentally just started talking. I mean, that's... And you'll see. Yeah, people that have watched my YouTube channel have seen it in the background for quite some time. I bought it, I think, two years ago or more to do some videos about it. I didn't get around to it then because I had two jobs, right? So it's really hard; I have to be very selective about the content that I do. But now that won't be a problem anymore. The things that people have been asking for will get done, because I'll have the time. And for a year, people were like, well, you have Mycroft, do some videos on it. And I'm like, yeah, but I have other things going on. And then eventually I did this April Fools' joke, which I hope people saw, where Mycroft was the star of the whole video. It was something that was like a guilty pleasure for me, where I just did something completely different from anything on my channel ever. And I labeled it as a Debian review. You got a Debian review in that video. It just wasn't me reviewing it. It was Mycroft.
So there's going to be a lot more Mycroft content on my channel this year, hopefully maybe even as soon as this coming November. So it's something that I've been using, but I think what we should do is take a step back and let Josh from Mycroft introduce himself, let us know his role there, and he can let people who have not heard of Mycroft know what it is and what the problem is that it's trying to solve. Sure thing. Well, thank you very much for having me on, Jay and Tom. It's nice to be here. Congratulations, Jay, on joining the world of the self-employed slash unemployed. For me, that leap took place quite a while ago, but it was really similar to what you're talking about: I had a passion. In our case, we were building a makerspace, and this was before Amazon Echo, before Google Home, and when Siri only existed in the Apple ecosystem. And I got some friends together, and we decided, hey, we want to build a voice assistant for our makerspace. We want to build that Jarvis experience from the original Iron Man. And from that lowly beginning, it ended up being my full-time job. My title here at Mycroft is actually Founder at the moment. We found a CEO who has a lot more experience than anybody else at the company and brought him on a couple of years ago. So I get to go out and talk to folks like you, and to customers and the community, and talk about what makes Mycroft special. So that's my role here now. Yeah, the self-employed thing is great. You know, it's really an exciting adventure, so I'm excited to see you headed in that direction. Yeah, the whole concept of voice assistants, like I said earlier, is a really interesting one. When I was looking at anything sci-fi as a kid, and by the way, today is the Star Trek release anniversary, they talked to the computer and things like that. That was part of it.
I think the futurists really predicted it to be here sooner than it was. And of course, no one saw it coming to become like a war between large companies over how much data they can have on us, with them maybe not being the best stewards of all that data. So that's why I think Mycroft is so cool: it's not that the concept of voice is the problem, it's that who has control over it can be a bit of an issue. But I think that's what you guys really solve. Yeah, I mean, when we got started, it was more about being open than it was about being private. But as you guys know, openness and privacy go hand in hand, right? If you can't see what's going on inside the black box, there's a pretty good chance that whoever owns the black box is keeping an eye on you, even if it's just log files, right? And so when we got started, it was about openness. It was about having a device that you could do whatever you wanted to do with. You know, you could install third-party hardware on it, for example, putting a Z-Wave or a ZigBee USB stick in the side of it and using it to directly control IoT devices. That's something that didn't really exist when we first started shipping the Mark I. Being able to change the wake word, something simple like that that you would expect to be able to do. This is your computer, right? In your home, you should be able to name it Bob if you want to name it Bob. And that really didn't exist. And then from there, it gradually became apparent that privacy is really key in this technology. I mean, we all carry around mobile devices that have two live microphones connected to an operating system that in many cases we don't fully control, connected to a network connection that we also don't fully control.
And, you know, that monitors us in most cases nearly 24 hours a day. Adding always-listening microphones to your home that then send those same companies even more data about you is really a challenge for a lot of people. And that's something that we're really excited to help people solve here at Mycroft. I'm excited about that. I've had some situations, and I think many people have experienced this, where even without an automated assistant, some weird things happen. Like, I was mentioning to my kids that we're going to a pharmacy or something like that to pick up a prescription. And then I'm waiting in line to get it, and I look at my phone and just pull up Facebook randomly, and there's an ad for a pharmacy. I'm like, what? At first I'm like, yeah, that's a coincidence, but it keeps happening. And then I have a conversation about ADHD, which I've been open about having, but then I see ADHD ads. I'm like, okay, this is not cool. I could go off on a tangent on that, but I won't. I think it matters to give people control, where they could say, yeah, Siri looks cool, and Amazon Echo, that looks nice, but I'd rather just do it myself, which is kind of the heart and soul of this podcast: people want to do their own thing and make their own decisions. And now they can have an automated assistant that's a part of that decision. Well, I think the other thing is that being open source brings more community involvement into this. So it's not like I'm only limited to what the developer allows me to have access to. It's like, hey, I can look at it. I can tinker with it. I can start modifying it and do a deeper level of integration, or, as generally goes with lots of open source things, it gets contributed back. Like, hey, I started with this. Here's my contribution back. Here's a module or an add-on. Maybe someone else would like to have these features.
But I want to get into it, because we've been talking around this, so let's talk about how Mycroft actually works, because obviously it's not running in some cloud somewhere. It's running locally, and Jay has, over his shoulder there, a physical box that it's running on. So let's talk a little bit about some of the hardware and software that runs it and how it does all that. Sure. So Mycroft is primarily based in the world of Raspberry Pi. There are a lot of folks who have it running on desktops, and it can really run any place; the folks over at RISC-V are actually converting it to the RISC-V architecture. The idea behind Mycroft is that you can run the entire stack locally. So when you look at a voice assistant experience, it really consists of four major parts. The first is the wake word spotter, and that's, in our case, a little neural network that sits there and runs on the device, attached to the microphone. You were talking about PulseAudio at the beginning of the podcast, and yes, we have much experience with PulseAudio and trying to get everything to play nicely together. So yeah, that's been quite the experience. Linux audio can be quite challenging for sure. It can be, and it means that if you're producing something that's consumer-ready, you do have to take a lot of control of that experience, because most consumers don't want to shell into their device to fix a setting or to chase down a problem. And Linux actually is great at that, we've found. And so, yeah, the wake word listener runs locally. It connects to the audio bus, and it listens for just the wake word, right? So we have a sampling of, I don't even know how many samples, it's millions and millions of samples of people saying, hey, Mycroft, and we get those samples from individuals who make an active decision to donate their data back to the community.
So by default, if you install Mycroft on a Raspberry Pi, or on a Mark I, or if you've got one of the Mark II dev kits that we've been shipping, it keeps nothing and it sends nothing, and we don't log anything. We do the absolute bare minimum that we can possibly do in terms of data retention to make that service work, right? But for people who do opt in, we're able to grab those wake word utterances and put them into a training algorithm and then improve the wake word spotter over time. So it's not only open source from the standpoint of the software, but we also make that data set available to the public so that they can improve things too, because in the modern world, source code is really only part of the solution to providing a modern experience for a web service, right? For almost any web service you use out there today, machine learning becomes a part of that process, and as an open source community, we need to be able to solve both of those problems in an open and transparent way. So anyway, we have a neural network that listens to the microphone. It listens for the wake phrase, which by default is, hey, Mycroft, but you can actually train your own wake words. And then it puts the device into listening mode, right? And it listens until it hears silence, or until it times out, and then it sends that data up for transcription, right? Now, you can run this entire thing locally, that's without a doubt, but for the transcription piece of it, we're really heavily dependent on technologies like the Mozilla voice stack, because building a full-language transcription engine actually turns out to be a significant challenge. I think the guys from the Mozilla project have started a new company called Coqui, which also does full-language transcription and, as I recall, is open source. So the audio sample gets sent for transcription and turned into text, and then it gets sent to an intent engine.
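To make the flow Josh just described concrete, here's a rough sketch of the four-stage pipeline: wake word spotting on-device, capturing the utterance, transcription, then intent parsing. This is a toy illustration only; every function name here is made up, not Mycroft's actual API.

```python
from typing import Optional

def wake_word_spotted(audio_frame: bytes) -> bool:
    """Stand-in for the on-device neural wake word spotter."""
    return audio_frame == b"hey mycroft"

def transcribe(utterance_audio: bytes) -> str:
    """Stand-in for the speech-to-text step (remote by default)."""
    return utterance_audio.decode()

def parse_intent(text: str) -> dict:
    """Stand-in for the intent engine: pull out the action and objects."""
    words = text.lower().split()
    return {
        "action": "turn.on" if "on" in words else "query",
        "object": "lights" if "lights" in words else None,
    }

def handle(audio_frame: bytes, utterance_audio: bytes) -> Optional[dict]:
    # Nothing leaves the device until the wake word is spotted.
    if not wake_word_spotted(audio_frame):
        return None
    text = transcribe(utterance_audio)  # only this segment is ever sent out
    return parse_intent(text)           # a skill would then act on this dict
```

The key privacy property is visible in `handle`: audio that never triggers the wake word spotter is simply dropped, so only the short segment between the wake word and silence ever reaches a transcription service.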
We have two intent engines. One of them is a neural network-based engine called Padatious, and the other one is a known-entity rules engine. And what the intent parser does is grab that text and figure out what you're trying to do by extracting objects from it, right? So if you say, turn the lights on in the kitchen, it looks at it and says: turn lights on, okay, that's an IoT intent. The objects are lights and kitchen, and the state is on. And it returns that as a JSON structure, which an IoT skill on the device then uses to go out to the network and perform the action. If it's not an action you're asking for, so if you're saying, you know, what's the height of the Eiffel Tower, it goes out to, in that case, probably either Wolfram Alpha or Wikipedia, gets the data, and then creates a verbal response. And then on the Mark II, it also puts up what's called a card that might have a picture of the Eiffel Tower, right? And so all of that stuff takes place locally. By default, we host a transcription engine, or rather we host a proxy for a transcription engine, so it sends us the audio sample. We send that up to a third-party provider to transcribe it, from our IP address, right? Not from yours. It gets transcribed, the text comes back, and then we nuke the log. And then, of course, a lot of the services you're accessing are also online. So if you're asking for the height of the Eiffel Tower, it went out to Wikipedia and got the data. However, if you were just asking to turn the lights on, and you're running an instance of Mozilla's Common Voice or Mozilla's speech transcription engine somewhere on your network, you can run that entire experience without sending any data at all to the internet, which is pretty exciting. I hope that makes sense. Yes, no, it does. And I want to unpack a couple of things. We'll start with the neural nets and the programming of those.
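As a concrete example of the JSON structure Josh mentions, the kitchen-lights utterance might come out of the intent parser looking something like the structure below. The field names are my own guesses for illustration, not Mycroft's actual schema.

```python
import json

# "Turn the lights on in the kitchen", parsed into a JSON-style structure.
# Field names are illustrative only, not Mycroft's real intent format.
intent = {
    "intent_type": "iot.control",
    "action": "on",
    "entities": {"device": "lights", "location": "kitchen"},
}

# A skill would receive something like this serialized payload and act on it.
payload = json.dumps(intent, sort_keys=True)
```

The point is simply that by the time a skill sees the request, the free-form speech has been reduced to structured fields it can match against.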
Like you said, source code is one side, but it's the training sets that you use to build it, and it takes a lot to train them. I don't write code like that, but I've been to some deep presentations on it, and it's actually kind of interesting, watching some of the most basic demos and how they keep stacking on top until you build this incredibly complicated training set. It's interesting, and the amount of data it takes to get that accurate is kind of cool. I also like that you said that by default it's not sending that data, because, as Steve Gibson says, and I've always liked his phrase, the tyranny of the default: if you leave it at the default, it's kind of a problem, because it will just automatically send it, and opting out is sometimes buried. So the fact that you have opt-in for the trigger word samples, for improving it, that's actually pretty cool. And the last thing is the connectivity. Those engines that it does send data to, you act as a proxy to each of those, you said, and it's your IP address each time that actually does it, and then you say you nuke the logs afterwards. Yeah, so the best speech transcription out there, by far, is the transcription engine that Google provides, because they have access to so much data. And so the question becomes, okay, how do we use that engine without giving away all your secrets to Google? There are two things that we do to protect users from that. Number one, we're doing the wake word spotting on the device in the home, right? So none of the audio, unless it's been woken up, is being sent anywhere. It just runs through the neural network, and then the data just disappears. The only segment that we actually send up for transcription is between when it senses the wake word and when it times out or detects silence, right?
And so we send that up to our server, and then our server sends it to a third party. By default, we are using Google for that, but of course the only thing Google has visibility of is the audio sample itself and our IP address, right? It has no idea that it's coming from somebody who's using Mycroft at McMurdo Station, Antarctica, or somebody who's using Mycroft in Stockholm, Sweden. And we eventually want to be using a speech transcription engine that we actually run, right? And so we've done a lot of work with the Common Voice folks over at Mozilla, trying to improve those tools, but as of today, or the last time we checked, the Mozilla tools didn't have full-language speech transcription that was accurate enough for these technologies to work. You know, the key enabler that made it the time for voice assistants today, instead of 1985 or 1968, was speech transcription engines that are north of 95% accurate, because anything below that and the thing just doesn't work; it becomes so frustrating for people to use that they don't want to use it. Yeah, they'd rather just type it at that point, because computers rely on that level of accuracy. They have to understand it. I think the thing that not everyone realizes is, when you mentioned intent: Google has done such a good job that people don't realize how good their AI systems are at understanding intent. I had seen a demo, and this goes back a number of years, where if you ask it something like, hey, how old is Clint Eastwood, or some actor, and then the next question is just, how tall is he? The intent engine goes, hey, you just asked about this person, so you must also be asking about the height of that person. So it's this whole natural language thing, trying to figure out the object and then the intention of people, because that's how we speak. We don't always restate each thing.
So we can say, turn the lights on in the living room, and then we might say, and in the kitchen. When you can start getting to where it's so seamless that human language doesn't have to be as precise, where it's more natural, I think that's where these things are converging, slowly, but that's when the average person who's not listening to this podcast, the non-techie user, really starts to enjoy these things. And that's really hard to convey to people. If you're not familiar with how coding and programming work, you don't realize how hard it is to understand the intention and to parse out the objects of language to really understand what someone wanted to say. And I think that's going to be a challenge for the foreseeable future, getting all that to work together, I'm assuming. Like you said, figuring out those objects to figure out intent can be tricky. Yeah, and one of the cool things about the modern world is that we don't have to reinvent the wheel. You have the team at Coqui and the team at Mozilla working on open source speech transcription. And then you've got Alex over at Rasa, right? Rasa NLU. They were based in Europe; I think he moved to San Francisco, got a bunch of funding. But it's an open source natural language processing engine, right, that allows you to do exactly what you said: to create stateful conversations, where it keeps the state of the conversation and allows you to add follow-on questions, right? And in that case, once again, it uses machine learning to figure out what the intent was, as well as object parsing and things like that. And so, yeah, one of the inspirations behind Mycroft, the original Mycroft stack, was like 75 lines of Python, if that; it might have been 30 lines of Python. And it was sending everything out to third parties just to make it work. In that case, we used Wit.ai, which was later bought by Facebook, for the NLU.
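The follow-on question trick Tom and Josh describe, "How old is Clint Eastwood?" then "How tall is he?", boils down to keeping conversational state between turns. Here's a deliberately crude sketch of that idea; it is not how Rasa or Mycroft actually implement it, just an illustration of state carry-over.

```python
class DialogContext:
    """Toy conversational state: remember the last named entity
    and substitute it for pronouns in the next utterance."""

    def __init__(self) -> None:
        self.last_entity = None

    def resolve(self, utterance: str) -> str:
        words = utterance.split()
        # Swap pronouns for whoever we were just talking about.
        resolved = [
            self.last_entity
            if w.lower() in ("he", "she", "it") and self.last_entity
            else w
            for w in words
        ]
        # Crude heuristic: capitalized words after the sentence-initial
        # word are treated as a name worth remembering for the next turn.
        caps = [w for w in words[1:] if w[:1].isupper()]
        if len(caps) >= 2:
            self.last_entity = " ".join(caps)
        return " ".join(resolved)
```

Real NLU engines do this with trained entity recognition and slot filling rather than string hacks, but carrying state across turns is the same idea.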
But yeah, it went from that to what it is today, where we actually have the wake word spotter that we built, we do some of the NLU stuff, and we're probably looking to add Rasa to that stack sometime soon. We do the speech synthesis, so now it can swear. If nothing else, we're the only voice assistant that you can actually make swear at you. Well, that's a reason to need it right there. There you go. And so the idea was that we could use third parties for a lot of these pieces and just tie them together into one experience. And then as those third-party technologies become more open and more capable in the open source community, we can include them as options. And then it becomes up to the person using the technology. I hate the word user because it has a lot of implications that are closer to the opioid crisis than to technology. So it's up to the customer who's making use of the product to decide which ones they want to use, right? There are speech synthesis engines from the bigger players that have better prosody, cadence, and tone than our engine, the Mimic 2 engine that we built. And if they want to use those, that's great. They make the decision as to what the privacy trade-off is there, and what the trade-off is with usability. And we don't have a vested interest in whichever one you use, right? We want you to have the best experience possible. So by default, we want to ship something that's as private as it can be while providing a great experience. And then maybe you want to open that up and build a Rasa server for the NLU so that you can have it do cool stuff around medical, right, like if you're a hospital. And you want to use the Alexa speech synthesis engine for speech synthesis. And you want to get rid of the wake word thing entirely and use a push button instead of a wake word.
You can do all of those things inside the stack, which is really what we wanted to do from the get-go, right? Facilitate people doing all kinds of cool stuff. I think what's cool, too, is, as we said in the beginning, being very open source means we can look inside of these devices, so to speak, at the software level and understand exactly where the data's going. The devices made by all the other companies are, as you said, black mystery boxes. We don't get to know what they do. They tell us they do one thing, and then later some security researcher goes, hey, did you know there was this extra thing inside of here? That has actually happened, where they found something else inside of there, and the answer was, well, it's not activated yet. We're like, wait, it's what? I think, was it the Nest that they found a microphone in that we didn't know was there? Yeah, and it's like, that's interesting. That wasn't on the box. Yeah, and Facebook just last week got fined by Ireland, like 200-plus million dollars, for how they used WhatsApp data. I mean, WhatsApp, an app that was really designed for privacy from the get-go, being used in inappropriate ways. And when it comes to VPNs, there's another example: Facebook bought a VPN app called Onavo, right? And then turned around and used the VPN data from their customers to better target Facebook advertising at them. And eventually, when they got busted, they had to shut it down, right? And so, with the big Silicon Valley companies, I'm sure you've heard this, right? If you're not paying for the product, you're probably the product. Yeah, absolutely. Yeah, they monetize our existence. It's as simple as that; that's their business model. Yeah, the thing that I don't understand about the companies that do the things you were describing is that they get caught every single time.
And if you use common sense, it's like, yeah, you have your general people that use computers and they're not really looking for anything, but there's always that person, right, who's doing a packet capture. It happens every single time. There's a 0% possibility that somebody somewhere isn't going to do a packet capture and find out where those packets are going, because that's what people do in this field. They want to learn this stuff. And here we have an open platform where we don't really even have to packet capture. I mean, you can, but you can also look at the source code and find out exactly where things are going, which is really great. Yeah, that's exciting. And, you know, for us, the goal is not only to change this from the technology side of things. We're technologists, and we started this in a makerspace. All the original Mark II dev kits, we printed them on the SLA printer behind me. We're young and scrappy. But one of the things that we want to do as an organization is take that and turn it into something that's financially meaningful, right? Partially as a demonstration for the public, but also as a return for investors: that you can respect privacy, that people are willing to pay for privacy, right? Mycroft is based on a character from a Heinlein novel, and in that novel, one of the phrases they use is TANSTAAFL: there ain't no such thing as a free lunch, right? If you're not paying for the product, you're the product. Another great example of that is Amazon Prime, right? You pay for Amazon Prime for shipping. I don't know what percentage of the world does, but I'd strongly suspect all three of us in this conversation do. Yeah. And all of a sudden it includes streaming media, right? That just came out of left field.
Like, I paid to ship packages, and now I can watch movies on the TV. How do those things connect? And the way they connect is that what you watch on TV, and how you consume media, tells a company a ton about you as a consumer, right? So the fact that I love The Expanse, that I've read all of the novels, that I've seen the entire Expanse TV series on Amazon Prime, tells the Bezos crew a lot about me, and it means that the ads I see all over the internet, the ads I see on my phone, are targeted to me as an individual, because they've learned about me through streaming media. It's not about entertaining me, it's about obtaining data about me. And this same person wants to put an always-listening microphone in every room of my house, right? And for me, that's just a non-starter. I already sacrifice a lot of privacy with my phone, and frankly, I'll switch to a pure Linux phone as soon as one comes out that has the competitive features that I need, right? And that same statement is why we're bringing a smart speaker to market, right? Because there are a ton of people out there who are like, hey, I love the idea of a smart speaker. I can play my music, I can listen to the news, I can ask it general questions, it can help me do math, right? Like, when did people stop doing math in their heads? And they're waiting for something to come out that's private and has competitive features. And so that's what we've been trying to bring to market, with the Mark I, which was our first kind of development device, really for guys like Jay, people who are willing to hack around and break out the command line terminal and reflash it from time to time when things go south. And then the Mark II, which is envisioned as being a real consumer device, right?
That my mom can take and put on the kitchen counter and have a great experience with, have it auto-update and be able to use it on a day-to-day basis, and not worry about who's got that microphone turned on and is listening to her 24 hours a day, so. Yeah. I'm sorry, go ahead. Honestly, it also gives you options, kind of talking about the content curation. One of the reasons we provide full RSS for our podcast is so you don't have to use some type of feed aggregation service, where those aggregation services determine whether or not you can listen to that podcast. Because that's been the new thing with some of the podcasts that get popular, and Spotify has been guilty of this: they buy exclusivity so they can pull in those audiences. Being able to take these feeds, curate them, and pull them directly into a device, I kind of like that. It's another compelling use case. I didn't think about it as much, but that's one of the reasons I don't use those services. I usually download the actual files, whether they're OGG or MP3, to listen to my podcasts. That's cool. I like that. And speaking about my experience with Mycroft: when I did the April Fool's video, I also did another video where I'm actually having a conversation with Mycroft back and forth. And I was able to do that, and I don't think I've ever talked about how I did it, but I was able to SSH into Mycroft. So already that's a win. Like, if I can SSH into a device and poke around, that's great. And it wasn't like I was in some kind of jail or anything. Not that that would necessarily be a bad thing; as long as it's open source and I can inspect the source, that's fine. But I was able to just run all the normal Linux commands. And then, since it's open source, like, I didn't know how to talk to Mycroft to get it to say what I wanted it to say rather than what it's programmed to say.
But I just used SSH, and then I found, I think it's the speak module in the Python implementation, if I remember correctly. And I was able to type out what I wanted it to say. And then I would have my finger on the Enter button, and I'd be talking to it, and I would time pressing the Enter button exactly to when I wanted it to respond to something I was saying. I was able to do that because nothing was hidden from me. And I think that was my favorite part of that experience. Yeah, and for that application, there's a couple of cool things that happen. Number one, you can always tell Mycroft, "say the moon is green," right? And he will repeat back whatever you say. Now, you are kind of at the mercy of the transcription engine there, in that he won't swear, because the transcription engine by default won't return swear words. But you can absolutely swear from the CLI. And having spent a lot of time on the CLI, I do swear when I use the CLI. And the other thing is caching, and this is kind of a cool feature that we've been working on over time. As we get those audio phrases, we don't know who they came from, right? We just have the original text and the transcription. But we can cache those by taking a fingerprint and shoving them in a folder, so that if somebody else asks for the weather in the same location, and the weather is the same, right, it's 78 and sunny, that gets cached to a folder and we can return it instantly without having to synthesize it again. And over time, our goal is to have as many of those phrases cached as we possibly can, locally, ideally, so that you don't even have to go out. When it says, hey, it's 78 and sunny, it's actually grabbing a local OGG, or probably an MP3, probably compressed, and playing that back through the speaker, right? But to do the types of things we're doing, we definitely need to get a lot bigger, right? And that's one of the things that we're really working on.
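The caching idea described here, fingerprint the synthesized phrase and stash the audio so "it's 78 and sunny" only ever gets synthesized once, can be sketched in a few lines. This is a minimal illustration, not Mycroft's actual code; the cache directory, hash choice, and file format are all assumptions:

```python
import hashlib
from pathlib import Path
from typing import Optional

# Assumed demo location; Mycroft's real cache directory will differ.
CACHE_DIR = Path("/tmp/mycroft_tts_cache_demo")

def fingerprint(text: str) -> str:
    """Stable fingerprint for a phrase: identical text maps to one file."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def get_cached_audio(text: str) -> Optional[Path]:
    """Return the cached audio file for this phrase, or None on a cache miss."""
    path = CACHE_DIR / (fingerprint(text) + ".mp3")
    return path if path.exists() else None

def cache_audio(text: str, audio: bytes) -> Path:
    """Store freshly synthesized audio so the next request plays locally."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    path = CACHE_DIR / (fingerprint(text) + ".mp3")
    path.write_bytes(audio)
    return path
```

On a cache hit the device just plays the local file instead of calling the TTS engine at all, which is the instant-response behavior being described.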
We need, you know, we have about 5% of our customers opt in to share their data, and that's actually more data than we can process today. But we need to build that community and make it bigger, both from the standpoint of people who are donating their data and people who are volunteering their time to go and tag that data. That's something else that we've had running in the past and are looking to relaunch: the ability to play an audio file back and get somebody to correct the transcription, for people who donate their data, and then feed that back into the machine learning engine. And then also, finally, for us to be commercially relevant, right? Which for us is a big deal. You know, voice is the fastest growing segment of the technology sector. The adoption of smart speakers has actually eclipsed the adoption of smartphones in terms of its velocity. If you remember, back in 2006 we all had flip phones, and then fast forward like three years and everyone had a smartphone, right? And it's the same thing with voice. A few years ago, nobody had a smart speaker in their kitchen. Today, for people who are part of the general ecosystem and haven't thought through the privacy implications, it's becoming nearly universal adoption. But there's still this one big segment, and it turns out it's about 20% of the public, who want a smart speaker but don't want big tech spying on them, right? And so that's really who we're building this technology for. And one of the things we're doing as part of that is raising money, right? We looked at it and said, at every step of our process, how do we stay community focused, right? So when we launched the company, we did a Kickstarter for the Mark I, which we delivered about two years later.
And, you know, we built it on Raspberry Pi, we built it on Python, and we did a ton of focus around building community. That's what Ryan Sipes did when he was here, before he moved on to Mozilla to build even bigger communities. When we went out to raise money, the first time we used Regulation CF crowdfunding, which allowed us to take up to a million dollars, and we ended up hitting the statutory maximum from community members, so that we could be a community-funded project and wouldn't be beholden to, like, three venture capital firms in Silicon Valley the way a lot of the other companies are. When we decided to launch a consumer device, we did an even bigger Kickstarter. And actually, in the intervening time, the SEC has changed the rules and you can now raise significant money using Reg CF. It's no longer capped at a million; you can now raise five million. So we're actually out raising money today, raising the next five million, so that we can ship the Mark II, which is the consumer device that we've been shipping dev kits for for a while, and then also become relevant, right? Get the assembly line started, get mass production started, get out there into big box retailers, right? Making sure that you can get a Mycroft at the big box store, whether it's JB Hi-Fi in Australia or Carrefour in Europe or Walmart here in the United States. That you can go and grab a Mycroft off the shelf, take it home, plug it in, and everybody can have that private experience, not just those of us who are willing to get into the CLI. So that's really exciting. And I'll speculate that we will not be buying this with our Amazon Prime accounts. No, we don't have plans to be on Amazon. We do have a product on Amazon right now, though. Mycroft was approached by a patent troll about a year and a half ago.
And, you know, instead of paying the $30,000 demand, we decided to litigate. So we're about a half a million dollars into litigating against this patent troll. A guy by the name of Todd Toomey, and he looks exactly like you'd expect a patent troll to look. He's evil. Yeah, like in his formal picture he's got the top three buttons of his shirt unbuttoned with the hair sticking out, right? Like, yeah. I have a picture in my head of Shrek. I don't know why. Shrek is friendlier than patent trolls. Yeah, he is pretty friendly. Yeah. So at every step, when the broader Linux community has given this guy a hard time on the internet... And I'm hesitant to even mention the guy's name online, because I know he's gonna go whine to a federal judge that I mentioned his name online, right? Like, Toomey the troll. So anyway, people would give him a hard time online and he would go whine to a federal judge about it. So we looked at it and we said, you know, we wanna talk about patent trolling in general, because it's just really terrible for communities like ours, that we're spending hundreds of thousands of dollars fighting patent trolls instead of building technology. And so I wrote a children's book. It's called Mycroft and the Patent Trolls. It's available on Amazon. It's also available directly on our website, and if you buy it on our website, we get 100% of the proceeds. So please go there if you're gonna buy one. It's actually a great little book for kids, and it explains what a patent troll is and how to battle patent trolls in a way that either a five-year-old or a member of the US Congress can understand.
And so anyway, if you have a kid and are looking for an exciting story, or you just wanna give it to a patent troll, please. We did it partially as a fundraiser and partially to raise awareness. But yeah, Mycroft and the Patent Trolls is the only product that I'm aware of that we have on Amazon at the moment. And then of course, we're out raising money at StartEngine. For folks who are interested in supporting and backing small companies that are open and private, this is probably something that you might wanna look at. And one of the other kind of cool things about that, and one of the things that Big Tech has taken away from us, right, is that back in the 90s, when people started a company, they went public fairly quickly, right? Starbucks after just a couple of years. Amazon actually is a great example. Amazon went public in 1997. And if you'd invested in Amazon in 1997, you'd have made a 120,000% return on your investment, right? Because at the time, they weren't really worth a whole lot, right? Right. Fast forward past the dot-com boom and Sarbanes-Oxley and all of the legislation, and all of a sudden companies like Coinbase, when they go public, are already worth like $50 billion. So these Silicon Valley insiders got all of the huge gains, right? SpaceX still isn't public, right? The Silicon Valley insiders got all of these huge gains, and the general public ends up being the guys who buy in at the end. So one of the other things that we're excited about, and one of the reasons we're raising funds through the community, is giving people the opportunity to be early to the party. As opposed to, you know, five years from now, us going public and having had all those run-ups for insiders, and at the end of it the public only gets a modest return.
So, yeah, lots of community-type stuff going on, from fighting patent trolls to raising money to shipping the Mark II, which should come just as soon as we start the mass production line. The design for that is done, and we've actually shipped, I wanna say, 300 dev kits out to folks with laser-cut plastics, and folks are having a pretty good experience with that. Something I wanna cover, because we've talked about the technology and the business model a little bit, and we probably should have covered this at the beginning: how do you set this up? So, you get the device. Do I need one for each room? Do I need to buy a few of these? What's the layout? Jay can probably talk to some of the integrations, because I know he can turn his lights on and off with this, so I'll let Jay talk to the integrations, but describe the layout for us a little bit, of how we would set this up if we wanted to get started. So, I'd recommend starting with one, the same as any piece of tech, right? Make sure you're happy with it. We don't do a lot of the sound synchronization stuff that you see from Sonos (and then, I guess, Sonos sued Google for stealing their tech so that Google could do it), where it synchronizes the audio across multiple speakers. It sits in the corner or on a tabletop. The new one has a screen, so the screen faces the room. And we spent a ton of time on the setup process with the Mark II, working to make it really consumer-friendly. So in that case, you turn it on, it sets up a little Wi-Fi hotspot, you connect to that hotspot with your phone, you put in the credentials for your Wi-Fi network, it tears down its local hotspot, connects to your network, and generates a pairing code. You then go to our website and pair it, and then of course you can manage the settings from our website. I will say that the web portion of it, which is called Selene, right, which is our backend, is all open source.
So if you wanted to run that backend locally on a server, you absolutely can pull it down, and we would have no visibility into that device at all. However, Selene is like four or five virtual machines all working together, networked, doing all kinds of crazy stuff. So for an average user, it's probably better to use our backend. And that's the piece where we're looking to make sure we communicate TANSTAAFL, right? There's no such thing as a free lunch. And so our goal is to get some percentage of our customers to actually pay for using that backend, and that becomes how the company supports itself long-term, instead of advertising and spyware and all the other stuff that the Silicon Valley companies do. So. And I think paying for a service is fine. To me, it's always so convoluted with the big companies, what they're monetizing and what they're charging me for, because it's a hybrid model with some of them, like Amazon. I pay for Prime, I get things, I also get spied on, and they monetize my existence while also charging me for Prime. But I like the free shipping and I like watching The Expanse. I don't mind saying: I pay for this service and this is what I get. An example, and we've talked about it before, I believe on this show, and I've talked about it on my YouTube channel, is Bitwarden. It's free, it's open source, but they have a license fee. Could you hack the source code and take it out? Sure. But why? It's very reasonable. I don't mind paying for the development of that product. We use it commercially here at my business. And to me that's a great trade. I know what I'm getting, I know what I'm paying for, the license fees are reasonable, and I'm like, perfect. I'll pay you for the service and I'll enjoy it, and I know it's open source so I can always look inside to see what's going on.
And I know they're not trying to find another way to monetize, like selling some data they know about me. To me it's just a better, more upfront exchange. We can call it an old-school, traditional business model, maybe, but I think that's something our users will be comfortable with. I think the important takeaway, in my opinion, is that it's a choice that people make, or I should say it should be a choice. The idea works like this: if you don't mind sharing your personal information, and you know exactly what's being recorded and what's going on there, you make that decision for yourself. Either: no, I'm not comfortable with that, so I'm just not gonna use that product. Or: I'm okay with that in exchange for the value the product gives me. But then what we end up finding is that there's a microphone in a device we didn't know about. One time, I don't remember if it was Google Chrome or Chromium itself, the browser was activating the microphone without letting anyone know. And then Facebook gets in trouble for tracking even people who don't have a Facebook account. So at this point it's really hard to say it's a choice. Ethically it should be, but I feel more comfortable with a device that I can access myself. And I think with the Mark II that makes sense, because if you are not a tech person at all, if that's just not your thing, then a set-it-and-forget-it device is great. That's what the Amazon Echo tries to be, and is. But if I get an Amazon Echo, it might check the boxes for 75% of what I want it to do, and people just use it for those things; if it doesn't do something they'd prefer it to do, well, it just doesn't do that. But for the people who are into this kind of thing, and they wanna look at the source code and either write their own module or see if someone else has written the module, they want that control.
So the set-it-and-forget-it device can become something more for someone who wants more. And at the same time, they're in control of where the data goes. And speaking about integrations, or the skills in Mycroft, I think we'd be doing it a disservice by not mentioning that, because when you get Mycroft, it has some capabilities out of the box, which is awesome. It does a lot of things. You can ask it to tell you a joke, for example, or the weather, and there are all kinds of things you can have it do. But you can also download skills, and an example of that is, I think it's Volumio, I can never say it right. Yeah, it's the software that you can run on a Raspberry Pi that makes your Raspberry Pi a jukebox. You just plug speakers into it, you get a web console, you point it to your MP3s or whatever, and you can just click on it, tell it to play a song, and it's awesome. But then I found out that there's a skill for Mycroft to hook into that. So I could say, "Hey Mycroft, play Lacuna Coil," and guess what, it does that. It hooks right into Volumio and starts playing it. And then there's the Home Assistant integration, which I plan on getting into, because I already have the automation set up. I don't need to do that again; I just need to hook Mycroft into it so I can tell Mycroft what I want it to do, turn on the kitchen lights, and have it do that for me. You can even tell Mycroft, and you have to be careful here because my unit has 10/100 Ethernet, but you could say, run a speed test. It'll run a speed test, as long as you're not paying for more than 100 megabits. And as most people in the United States, we're lucky to even get 100 megabits, let's be honest. But you could say, hey, run a speed test, and as long as you're not paying for a gigabit connection, you know what your speed is. You don't even have to open a browser.
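Under the hood, a skill like the Volumio or Home Assistant one is mostly intent routing: mapping an utterance such as "play Lacuna Coil" to a handler function. Real Mycroft skills subclass `MycroftSkill` and register intents with its decorators; the toy dispatcher below is purely an illustration of that routing idea, and every name in it is made up:

```python
import re

# Toy intent registry: (pattern, handler) pairs, loosely like a skill's intents.
INTENTS = []

def intent(pattern):
    """Decorator that registers a handler for utterances matching the pattern."""
    def decorator(func):
        INTENTS.append((re.compile(pattern, re.IGNORECASE), func))
        return func
    return decorator

@intent(r"play (?P<artist>.+)")
def play_music(match):
    # A real skill would hand this off to a player such as Volumio here.
    return "Playing " + match.group("artist")

@intent(r"turn on the (?P<room>.+) lights")
def lights_on(match):
    # A real skill would call out to Home Assistant here.
    return "Turning on the " + match.group("room") + " lights"

def handle(utterance):
    """Dispatch an utterance to the first matching intent handler."""
    for pattern, func in INTENTS:
        match = pattern.fullmatch(utterance.strip())
        if match:
            return func(match)
    return None
```

Writing a new skill then amounts to adding another decorated handler, which is why the ecosystem can grow without Mycroft themselves writing every integration.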
There are all these cool things that you can do that I think add value for everyone, whether it's a beginner, or someone who doesn't care about computers, or someone who loves technology and wants to become a programmer or already is one. You can dive as deep as you want to, and you can integrate it with whatever is available. Or, even if you're not a developer, you can say, hey, I got this idea, is anyone keen on helping develop this thing? And we have that capability. Yeah, I mean, that's one of the things, and one of the reasons we based it on Raspberry Pi, right? The Mark II actually has a little door in the side that you can pop off. It exposes the USB ports and the Ethernet port of the Pi. And it's based on a Pi 4, which has gigabit LAN, right? So you should be able to run that gigabit speed test off of the Mark II if you plug it in. But the USB ports are also important, and that's especially important in the context of Home Assistant. So, you know, Home Assistant runs on Raspberry Pi, right? And I think they do sell hardware on their website, but they're really not a hardware company. They're a software company, right? And so one of the things that we're really excited about with the Mark II is the ability to add a Z-Wave USB stick, add the Home Assistant skill, and then use the Mark II as your Home Assistant hub, right? Because one of the things that we found with smart speakers: everybody, like SmartThings and all these other companies, Revolv, which got bought and shut down by Google, wanted to be the IoT hub of the house. And what we found is that it turns out the smart speaker is the thing that makes the most sense to have as your IoT hub.
But of course, the offerings from the other players don't necessarily let you hack and do all the things that you wanna do as somebody who has an IoT, you know, a Z-Wave setup at your home. And so being able to run Home Assistant on it is really crucial. And for us, the whole idea is to facilitate innovation, right? We want people to create these awesome skills that do really great stuff, be able to share them out with the community, and be able to get traction within that community. And then, ideally, at the end of the day, we would provide a platform for other companies like Home Assistant that want to expand their footprint, so that they can have a platform where they can make money without being beholden to these big tech giants that change the rules and demand 30% of every payment, and so on and so forth. And we've got a couple of those. The guys at Chatterbox have used Mycroft to build an educational, it's not really a toy, it's a little educational robot that kids can use to learn to program. They've got a drag-and-drop programming interface for that. The guys at Cubo in Barcelona have integrated Mycroft into their robot. Some guys on Kickstarter, Lumecube, which is almost like a Rubik's Cube, but all sides are LEDs, and it's got orientation sensors and accelerometers and stuff in it and does all these really cool things with this LED cube, they integrated Mycroft into it. So yeah, it's really exciting to see our vision coming to reality, with people taking this, hacking it, and doing cool things with it. And we just provide that foundational component: the basic voice experience, the skills abstraction, and then, as we start mass production, a reliable piece of hardware that people can use to do whatever they want, right? That's awesome.
Like, how long, and I think this is already possible, but correct me if I'm wrong, but how long until I could just log into my Debian desktop and say, hey Mycroft, open my home folder and bring up my notes file, right? Because all these tech companies, I mean, Microsoft I think is getting away from Cortana, but they integrate that into their operating systems. We could already have that; we could download it and run it. So is that something that is either, A, possible, or B, maybe will become possible? I think people are already doing it at some level. The guys at KDE have done a huge amount of work around Mycroft, right? Aiix over there built a Plasma TV interface that uses Mycroft as the primary navigation. And then, of course, because of the way that Mycroft works, everything is sent to a message bus, right? So the JSON structure that comes back from the audio transcription, the text itself, is dropped onto the message bus, and then the skill picks it up. And what that means is that interacting with Mycroft over the CLI is probably even easier than interacting with it over voice, right? You just type in whatever you wanted to say and it runs away and does what it's supposed to do. And so I believe there already are several desktop integrations for Mycroft. I won't speak to how well they work, but on a desktop, if it's set up properly, it should actually work better than it does on our device, because of course you've got more memory, more storage, a lot more processing time, and probably a faster network stack. And so, yeah, I know I've seen a number of desktop integrations and I'm looking forward to seeing a lot more. Unfortunately, we don't officially support any yet, but we are working to get from here to there. That is amazing.
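That message-bus design is also what made the "type what you want it to say" trick from earlier possible: a bus message is just JSON with a type and a data payload, and a `speak` message makes the device talk. A sketch of building such a message; the payload shape follows Mycroft's documented bus messages, but the host and port in the commented-out send are assumptions about a typical install:

```python
import json

def speak_message(utterance):
    """Serialize a message-bus 'speak' message that makes the device talk."""
    return json.dumps({
        "type": "speak",
        "data": {"utterance": utterance},
        "context": {},
    })

msg = speak_message("The moon is green")
# Against a live device you would push this over the websocket, e.g.:
#   import websocket                      # pip install websocket-client
#   ws = websocket.create_connection("ws://mycroft.local:8181/core")
#   ws.send(msg)
```

Any script, desktop integration, or CLI session that can write JSON to that websocket can drive the assistant, which is why the CLI route is arguably easier than voice.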
Sounds like we're switching to the KDE desktop, Jay, so we can do this. You know, I will say about the KDE Plasma desktop: I love GNOME, but it happens all the time that I look at the release notes for a new Plasma version and I'm like, why didn't anyone else think of that? Like the time they came out with a feature where you can pause or skip music that's playing without unlocking your computer, because the controls are right there. And I'm like, you know, just playing through my playlist, and maybe Coldplay is playing and then a Korn song comes up. I'm like, I need to press that pause button before the swear words start, because my young child is in the room with me. Oh crap, I forgot my password. Anyway, they come up with some really awesome things, and I guess it's just par for the course for them to build some kind of integration with Mycroft. That's pretty cool. Yeah, it's been exciting, and their team's done a lot of work on it, so. I was gonna say, one of the last questions that I really have that I think is important: I know, looking at the stats of my channels, me and Jay get about 50% of our audience from the United States. What are the languages currently supported in Mycroft, and what are the future plans around that? Okay, so right now we've got English for certain. I know that there's been a ton of work in the German community around Mycroft, and then, interestingly, Catalan, right? Because one of those communities, the Cubo guys, are building that robot in Barcelona. And we traveled, two years ago, before COVID, and did 22 cities over the course of a year.
And in every city we went and met with people who were interested in open source, people who were interested in startups, and talked a little bit about what we were doing at Mycroft, both from a fundraising perspective and from a technology perspective, and got tons and tons of great input from people all over the world. The Mark I shipped to 56 countries, right? And I think we have a similar number for the Mark II in terms of who have purchased devices. We've built an abstraction called Lingua Franca for people who speak two or more languages. And I'm an American, so, like, the idea that you would hold four languages in one head is just crazy to me. I know, but when you get outside of America... Most of the world can handle that, just not me. If you speak more than one language, you can go into Lingua Franca and help to translate all of the prompts and all of the intents from any language that Lingua Franca supports into any other, right? So it's not just from English to Spanish and from English to Russian. If the only two languages you speak are Russian and Spanish, you can translate from Russian to Spanish, right? So in terms of official support, today it's English, right? But there is a ton of work going on to expand that support into other languages. And actually, once we get that tool set really dialed in, and it's one of the things we'll be doing with this upcoming round of fundraising, we plan to do a big push on internationalization. Because ultimately it's a community project, right? And if the Catalan community wants a smart speaker that speaks Catalan instead of Spanish, it's up to that community to make that happen. We provide the supporting infrastructure, we provide a turnkey device, but it's a community project. We're not Amazon and we're not Facebook and we're not Google.
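The way Lingua Franca decouples prompts from any single source language can be pictured as a string table keyed by prompt ID and language tag, where adding Catalan just means filling in one more entry per prompt, whether the contributor works from English, Spanish, or anything else. A toy sketch; the prompt IDs, language tags, and translations here are illustrative, not Mycroft's real data:

```python
# Each prompt carries its translations side by side; no language is
# privileged as "the" source, so any pair can be translated directly.
PROMPTS = {
    "weather.sunny": {
        "en-us": "It's {temp} degrees and sunny.",
        "es-es": "Hace {temp} grados y está soleado.",
        "ca-es": "Fa {temp} graus i fa sol.",
    },
}

def render(prompt_id, lang, **values):
    """Render a prompt in the requested language, falling back to English."""
    table = PROMPTS[prompt_id]
    template = table.get(lang, table["en-us"])
    return template.format(**values)
```

A community that wants, say, German support would contribute a `de-de` entry for every prompt ID, and every skill using those prompts picks it up for free.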
We don't have hundreds of billions of dollars to spend. Importantly, unlike those companies, we're also not under indictment by all 50 states, the federal government, and the EU. So that's good. But yeah, we need the community to help us get from here to there. And the quid pro quo is: the community helps to make it go, right? Yes. And we make everything open so that they can use it how they choose to use it. And that's really the trade-off between the community contribution and what we as a company and what our investors are doing in terms of supporting it financially. That's awesome. And it being Star Trek Day, I was hoping you'd say it spoke Klingon, but I imagine, with the right community that likes devices like this, it would only be a matter of time before it learns Klingon. Yeah. You know what, as the founder, I'm constantly looking. People ask, does Mycroft do something, right? And I'll run a web search and find out that somebody's already doing it, right? So it would not surprise me at all if somebody had done some kind of skill that allowed Mycroft to speak Klingon. Although, from what I've read, Klingon is not the most useful language for anything other than wanting to fight with somebody. Yeah, no, which, I mean, why not just fight with your Mycroft? They can just yell at it. I'm hoping that there's not a time where Mycroft responds to everything with, I'm afraid I can't do that, Dave. Yeah, oh yeah. Another sci-fi reference I had to throw in there. We're old enough to know those ones. Yeah, like the HAL 9000 version of Mycroft. You know, there are just so many places that it can be used. I mean, let's take that as an example. Here's my thing. So you think my good friend in Seattle, who started by shipping books and is now shipping everything, right?
And has a rocket company. Do you really think that his voice assistant is not gonna live on his rockets, if he ever gets one of those into orbit? And I'm gonna put a big "if" next to that statement. And likewise, the other billionaire putting things in space, who I'd like to point out does have things in orbit, needs a voice assistant for his rocket and, importantly, his cars, right? We would love to talk to him, because of course there are lots of machine learning capabilities on those vehicles, a big Linux stack running in the center console. We would love to be the alternative voice assistant for our friends at the electric car company as well. Yeah. And I'm positive he does not want the other guy's device in there. Exactly, in his rocket or in his car. Yeah, either one, he doesn't want that anywhere. And the cool thing about where we are today versus where we were 15 years ago: automobiles had voice technology in them 15, 20 years ago, like almost every car out there, right? And it just sucked so much, right? And that's why people never used it, or almost never used it. And these things have come along so far that having a voice assistant that's customized for your car is a reality now. It's something that people can and should do. So I am looking forward to somebody on this podcast going back, loading Mycroft on their Raspberry Pi, installing it in their car, and teaching it to speak Klingon, so that they can speak Klingon to their car and swear at other drivers in Klingon, and have it perform whatever functions, right? This seems like the perfect nerd project. I'm all in on this. So I have to just put this out there, because if you or anyone else makes it compatible with cars and you don't do the voice of KITT from Knight Rider, you are doing it wrong. You absolutely have to synthesize that voice.
Yeah, I mean, the cool thing about the new neural networks is that once you get the voice model trained, it's just one additional layer of training to make it sound like a specific person, right? And so then it comes down to intellectual property and licensing and all that other fun stuff. But, you know, that's the type of thing where, on the monetization side of the project, cars make sense, right? So if people are paying us a monthly fee for the privilege of using Mycroft, what do we give them, right? And so maybe the answer is celebrity voices, right? And the celebrity gets a small cut and a small cut goes to us to support the company. You know, we'd love to do music, right? For whatever reason, Spotify is refusing to work with us, talk to us, communicate with us in any way, shape, or form. But there are a bunch of other music streaming services out there that we would love to attach to the back end as part of what you get for a monthly fee, right? And so, yeah, celebrity voices like KITT would be great. Celebrity voices like, well, heck, Jarvis from Iron Man. I'd love to have that. Oh, yeah, sure. That was actually the name of my Mycroft assistant for the longest time. And I only changed it back to Mycroft before I did that first video because, you know, I'm doing a video and I don't want any trouble over that name. But up until then, I was literally saying, hey Jarvis, for everything, until I switched the name back. And you can change the name of Mycroft to something else, I think we mentioned this earlier, if you wanna call it something else. Yep. And then, you know, eventually I'd love to get into paid skills, right? For the developers out there who are doing really cool things. One of the things we've been batting around the idea of is that we look at the number of uses an individual skill gets, the same way one of the Silicon Valley companies looks at the number of ad views, right?
You know, not tying those back to any individual, within our privacy policy, in some way that makes sense. And then compensate the people who build the most popular skills with a percentage of the monthly revenue, right? And these are the types of things that, in my view, really need to exist to make open source, consumer-facing software relevant in a world where everybody else is willing to give everything away as a supported service, right? And so, you know, coming up with a facility that allows people to pay for what they're getting, to make sure that the developers who are developing things can support themselves and innovate, I think is really, really critical. And it's something that we're excited to be experimenting with here with Mycroft. And yeah, maybe that Klingon-speaking car skill costs 50 cents a month and the developer gets to continue expanding the Klingon language. Maybe they connect it to an airsoft gun on the roof, right, using a paid auto-gun skill. There's just a lot of really, really cool stuff that can happen once things become open. Yeah, absolutely. Well, this has been great. This was quite a conversation. People can obviously get started at mycroft.ai, and I'll leave links in the show notes where they can buy one, donate, buy the book. I actually dropped that link in the live stream, but we'll make sure we leave that book link and we'll leave a link to your site. What you did about the patent trolls is really cool because, man, patent trolls, they are a burden, and they do tend to go after the smaller companies to set precedents to go after bigger ones. So they're a drain on society and on innovation. Yeah, they are. I subscribed to the Mycroft newsletter some time ago.
So I saw the news about the individual, what they were trying to do, hit the newsletter, and then what you were doing in response to that. And I just smiled. I'm like, yes, this is great. I never wanted them to be in this position, but if they were, this is exactly what I'd want them to do, and they're doing it and they've done it. So I think that sets an example for a lot of companies, because I think litigation disrupts innovation, and that's our biggest problem. Yes, constantly. Yeah, that's a challenge. And yeah, folks who want to support it, I'd love to sell them a copy of the book. And then, you know, people who want to get into early stage technology companies, who have a little bit of money to spend, not their entire retirement, just a small percentage of it, and want to come along for the ride as an investor, we'd love to have them. Head to mycroft.ai and hit the invest now button. It's really rare that companies in our space effectively go quasi-public this early. So maybe it's an opportunity to come along for the ride with us. Yeah. Well, this is awesome. And thank you for joining. Thank you for sharing all this with us. This was a fun learning experience. I kind of knew the surface from watching Jay play with it, but boy, it's much bigger than I thought. I think our audience learned a lot about it too. Definitely cool. Yeah. Thank you so much for having me. All right. Well, you have sat through another episode of The HomeLab Show, this time with Hey Mycroft. So thank you everyone for joining, and we're signing off until next time. Next week, everything's still on schedule. Now that Jay's, you know, dedicating his time to content creation, we're going to have a pretty steady schedule of releases every week here. So thank you. All right. Thanks to everyone and talk to you next time. Thanks.