Good morning and welcome to this week's edition of NCompass Live. I am your host, Christa Porter, here at the Nebraska Library Commission. NCompass Live is the Commission's weekly webinar series, where we cover a variety of topics that may be of interest to libraries. We broadcast the show live every Wednesday morning at 10 a.m. Central Time, but if you're unable to join us on Wednesdays, that's fine. We record the show, as we are doing today, and it will be available for you to watch at your convenience later. At the end of today's show, I'll show you where you can access all of our archives on our website. Both the live show and the recordings are free and open to anyone to watch, so please share with your friends, family, neighbors, colleagues, anyone you think might be interested in any of the topics we have on the show. For those of you not from Nebraska, the Nebraska Library Commission is the state agency for libraries, similar to your state library, so we provide services and programming and resources to all types of libraries in the state. That means we'll have shows on NCompass Live for all types of libraries: public, academic, K-12, corrections, museums, archives, really anything and everything. Our only criteria is that it has something to do with libraries. We bring in guest speakers on NCompass Live from Nebraska and across the country, but we also have Library Commission staff who do presentations for us, and that's what we have today. It is the last Wednesday of the month, and that means it's Pretty Sweet Tech day. Yay. It also means it's almost October. Where did September go? Oh my gosh. I like the cooler weather; I prefer the middle-of-the-road weather, although we're not getting that just yet. Amanda Sweet is our technology innovation librarian here at the Library Commission, and she comes on the last Wednesday of every month to tell us about something tech related.
And today we're going to learn about programming a robot using voice commands. Sweet. Maybe. Good idea, bad idea, risky, depending on your point of view. A bunch of people are already doing it; it's happening anyway. All right. Very cool. So I'm just going to let you take it away, Amanda. Tell us all about it. Cool. And if you notice me tapping away at my phone, I'm actually installing GoToWebinar on my phone so I can use it as, like, a robot cam. Okay, sneaky. I know. So this is the Finch, and I'll hold it up to my camera here so you can get a little close-up. It's also the first thing on the slide here. I'm going to pull out this micro:bit to show you how it works. The micro:bit was originally developed by the BBC, and it's this little chip that just pulls in and out here. I'm going to pull it from a different robot, because this one is our test robot that we're actually going to use, so I can turn it back on later. It actually pulls out of his tail. Oh, and the micro:bit can be used separately as a different kit, too; there's a whole collection of lesson plans that go with it, but this one is built in. Yep. So let me just pop that little micro:bit back in there, and it slides right back into this Finch's tail. This was made by BirdBrain Technologies. I'll talk a little bit about what the Finch is all about and how it works with Google's Teachable Machine. Teachable Machine is basically a way to build no-code machine learning models. You can teach your computer to recognize your voice through audio classification, you can teach your computer to recognize pictures of your dog or your cat using image classification, and you can also teach your computer to recognize different poses. So if I really wanted to, I could actually set up a system so that every time I held up a peace sign to the webcam on my computer, the robot would move forward.
And any time I held up, like, a rock-out symbol, I could make it turn left; I could make it react in different ways to different hand gestures. I know. And in practice, this is actually incredibly helpful, because there are a significant number of people who aren't able to communicate in typical ways, just by talking. They might have paralysis; there are any number of reasons. But when you're able to control your computer and interact with your computer and other people using these hand controls, or, if you're not able to type and you can only speak, then AI is a wonderful thing to make that possible. So I'll do kind of a quick demo of what it actually looks like to train the machine learning model that we're going to use to program this little dude. The project itself is navigating this little robot through a maze. That's, like, the best introductory activity, because it uses really simple commands. Basically, we're just going to teach it that "go" will make it go forward. You'll make it turn left, turn right; you can make it back up; you can make it do a 360 turn. And that is why, when you're programming this and designing a maze in your own library, or wherever you're at, I recommend not building out the entire maze ahead of time, but building your program first and then designing the maze around your program and around what you want the Finch to do. Otherwise, you might wind up having to go back and do some things again, just because you can program your robot to go different distances.
So if you wanted to program your robot to go forward 10 centimeters every time you said the word "go," you would want to design a maze that works in 10-centimeter blocks, instead of building out some random-sized maze and then trying to get code that works. For any kind of project, don't force the technology into what you want to do; figure out what you want to do first and then find what will work for that. Yeah, don't do it backwards. Design your environment so that it works. In the real world, sometimes you have to build your tech so that it works with the building or the environment that exists, because you can't change it. But life's easier this time, because you can actually change the world around you. So that's nice. I already talked a little bit about what this little Finch dude is all about, but if you are looking at this for your own library or your own use, I put in some links to some helpful quick-start guides and a learning portal with other activities, because this is not the only thing that you can do with this little dude. This is just for your own future reference, if you want to find out if he's compatible with what you've got or if he's the right fit for what you're looking for. I keep saying "he"; I really don't know if it's a dude. But you can also compare it against the languages that you're comfortable with. In this tutorial I'll be using Snap! and Python. Python, a lot of you probably already know, is a text-based language; it's anywhere and everywhere. They use it for machine learning, they use it for all sorts of programming, it's in a bunch of stuff; Google it, it's out there. And Snap! is a block-based programming language that's similar to Scratch. So basically, if you wanted to make your robot move, you just drag over a little starting block that says, when I click on this flag icon,
the robot will move. Instead of having to write a block of text that says something like "activate motor connected to GPIO pin and move forward or turn," using only numbers, it uses natural language. So, what that looks like: I'm going to skip ahead. This is the Python sample code, so this is all completely text. I made this code freely available to you, so when you access this slideshow and click on this download code button, it'll go over to a Google Drive. I loaded the Python code right in there, so you can download it, test it out, experiment with it, do what you want to do. This is just what it looks like, and I'll talk you through it when we get to that point. But this is the exact same thing in Snap!. This little top part is actually running JavaScript code. This JavaScript code is something that the developers of the Finch wrote for us, because the library that is required to run machine learning models doesn't naturally exist in the Snap! language, so they kind of had to build it out themselves, and then they made it available to us to make our lives easier. This was the baseline template that BirdBrain made available. And if you see this URL up here, teachablemachine.withgoogle.com, this is connecting it over to the Teachable Machine model that I built, so that this robot would understand my voice. So this is connecting the code over to Teachable Machine. And I like the Finch, because they made it super incredibly easy to do that; otherwise, the code that you would normally have to use to do that is a lot longer than this. And this is like an activation key, so that when you press the space key, it'll start running another little piece of code that they have running in the background that tells it to start the prediction. The prediction is basically when you speak into your microphone.
It's trying to guess the probability that you are saying a certain word. So when you say "go," different numbers will start jumping up, and it's waiting for its probability number to go over 0.9. It says, I heard and recognized these sounds: what is the probability that this word was the word "go"? What is the probability that this word was the word "stop"? What is the probability that this word was the word "left"? You can set it up to understand a series of different words. If it doesn't understand any one of those words, you can program it to display on the screen, "I don't understand that word; I was never taught that word," or something like that. And if it does recognize the word, and it has an over-0.9 probability that it is that word, then it'll take the action you tell it to do. That's what you get down here: an if/else statement. Even if you don't understand how if/else statements work, if you just copy the format of this code, it'll still work for you. And if you want to learn more about how the foundations of this code work, like if you're completely new to this and you've never seen any of this programming language before, instead of jumping right into this kind of code, you can start going into here, the New to Finch section. You can start with these Python curricula to understand the foundational skills that you need to learn and understand the sample code that I just showed you, like controlling the motor. I mean, okay, I shouldn't say that; I'm lying. I've seen this before, because I've attended so many of your sessions where we talked about this kind of programming, but I have not actually done it myself, because I haven't gotten my hands on any of the robots yet.
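That if/else dispatch can be sketched in plain Python. This is a minimal sketch, not the tutorial's exact code: the class names ("background," "go," "left," "right") are placeholders for whatever you trained in your own model, and the predictions are assumed to arrive as a dictionary of class-to-probability.

```python
# Sketch of the if/else dispatch the sample code uses, assuming the
# model hands us a dict of class -> probability for each utterance.
THRESHOLD = 0.9  # the "over 0.9 probability" rule from the tutorial

def pick_command(predictions):
    """Return the word whose probability clears the threshold,
    or None if nothing was recognized confidently."""
    best_word = max(predictions, key=predictions.get)
    if predictions[best_word] >= THRESHOLD and best_word != "background":
        return best_word
    return None  # "I was never taught that word"

def act(command):
    # Stand-ins for the real robot calls; in the actual code these
    # branches would move the Finch instead of returning strings.
    if command == "go":
        return "move forward"
    elif command == "left":
        return "turn left"
    elif command == "right":
        return "turn right"
    else:
        return "I don't understand that word"
```

Even without knowing if/else syntax, you can copy this shape and just swap in your own trigger words and robot actions.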
And honestly, that's probably a lot of people. There are some people on this call who might know Python, might know Snap!, might be familiar with it, but there are enough who don't that it's worth putting this in here. And it always helps to have a little refresher; you might learn something new you didn't realize it could do. I always go through training for things that I've done before, just in case, especially if I haven't done it in a while. You never know. So now you have a sampling of what we're actually going for. By the time I get done walking you through this tutorial, the code is going to look like this. But your question is probably going to be: what did this start out as, and what are the steps you need to take to actually get your code to do this and make the robot behave in the way that you want it to behave? Make it behave. Yes, ideally. And I have to remind myself that people who don't know code don't speak in those terms; I've taught myself to speak in terms of the behavior of a robot, and people look at me like, why are you talking like that? That's not natural. You've got to learn a new language for all sorts of things you do, so, sure. So, to get to this, or its equivalent in Snap!, you would go through these steps. First, I prepared this little Finch so that it would actually communicate with the computer that I'm looking at right now. To prepare the Finch, I charged him so that he was at full battery, and then I downloaded a little file that allows this Finch robot to talk to my computer via Bluetooth. When I finished that, he was all set and ready to go. The next thing was, I wanted to get an understanding of the space that I had available. You might have a giant floor of space; you might have a meeting room or a conference room or something that you can just take over to build out a maze.
And others might only have, like, a three-foot by two-foot space. So just build out an understanding of the physical environment that you're going to be working with, so that you can make your robot do what you want it to do. The next thing you want to do is go into Google's Teachable Machine and start teaching this robot what you want it to know. In the case of our maze, we know that we want it to be able to move forward, turn left, and turn right, so we'll start there. Once this machine learning model is together in Google's Teachable Machine, we'll connect it over to our code, just like I showed you in that Snap! example where we snuck in that URL to tell our code where our classification model is stored. So we're going to pop into Chrome, because, for whatever reason, with Teachable Machine and this Finch robot, everything just works better in Chrome, so I go there right off the bat. I'm just going to search for Teachable Machine, and we'll go to teachablemachine.withgoogle.com. This is what the landing page will look like, whether you're starting with an image, sound, or pose. I'm going to click on this Get Started button. Then I want an audio project, because we're teaching it to recognize and understand audio, so we'll click on that. And the first thing you do is going to seem ridiculous, but you are going to record 20 seconds of dead, empty air. The purpose of this is that the machine learning model needs to know its baseline. It needs to know what's happening when there's no sound happening at all in your room. You might be doing this in a dead-quiet room, but you might also be doing this on a conference room floor where there's ambient noise and muted voices in the background. So this is your baseline. I'll close this, and I'm going to hit this mic button and record for 20 seconds.
So now you'll see this got broken out into frames; Teachable Machine automatically splits everything into one-second frames. When you hit Extract Sample, it means that this sounds the way you intended, and you're going to extract the sample into our training set. On the right-hand side, this will be your training set, which will be used to tell your computer what you want it to do. Your computer is learning by example, and now it knows that this is silence. Now you have different classes that you want to add. You're going to have one class for each different action you want your robot to do. So you're going to click on this little pen icon, and you're going to type "go." Just like this is separated out into one-second segments, every trigger word that you are going to use must be able to be said in a one-second block; otherwise, when you record it, it's going to get broken out into two segments. I tried using the word "forward," and that's actually a hair over one second, so it broke it out into two sections. So we'll use "go," since it's shorter. So I'm going to hit the mic, and then I'm going to record. It's defaulted to record two seconds; I don't know why they do that, because they use a one-second block. So I'm going to click on this little settings icon, this little gear, change the duration to one second, and then hit Save Settings. Now it'll record a one-second block. Then I just hit record: "go." You can go back up here and play it back, and then extract your sample when it sounds the way that you want. You have to do it a minimum of eight times, so that the model has a good set to work from and can understand by example. And I usually say "go" in slightly different ways, because I never know how I'm going to say it when I'm actually by the robot. So I say it in frustration, too: go, go, go, go, go. So I put in about 12 of them, just for a sample.
And then we'll click to train it. Right, how to train your robot. And we'll click on Add a Class, because now we need to teach it to turn left. So I'll type in "left," hit the mic: left, left, left, left, left, left, left. And we'll do "right" and record: right, right. Once you have all your samples set up, I'm going to close this out and we'll hit Train Model. This usually takes a second to get everything together and start loading, and the more classes you have, the longer it'll take. Now, on the right-hand side, these little jumping bars down here are the predictions. You can see little percentages on here when I'm not saying anything, because it's still trying to recognize the different sounds coming out of my mouth. So dead silence predicted 100% background noise. "Go": 99% probability it was the word "go." Left. Right, right, right. Now, because it had such difficulty figuring out the word "right," that's an indicator that you might want to go back and add more audio samples to your "right" training set. But because I know that the computer understands some ways that I say "right" better than others, I'm not actually going to go back and retrain it right now; I know how to interact with the computer so it understands me. So that's just our test to make sure that the computer is understanding you. If the computer understands you, you can click on Export Model up here, and this is what lets us connect over to our code. This will look sort of like a trick question, because it'll look like all you have to do is copy and paste this link, just like you would with a Google Doc or any other shareable file. But to make this functional, you have to click Upload My Model first; otherwise, it's not going to do anything. So we're going to upload it: uploading, uploading, uploading.
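That "go back and add more samples" judgment can be made a little more systematic. Here is a hypothetical helper, not part of Teachable Machine or the Finch code: it assumes you jot down the live probability the model showed for each test utterance, and it flags the classes whose average falls below the same 0.9 bar the code uses.

```python
# Hypothetical helper: given test utterances as (spoken_word, probabilities)
# pairs, flag the classes the model struggles with, so you know which
# training set to pad out with more recordings.
def weak_classes(test_results, threshold=0.9):
    """Return the set of words whose average predicted probability
    (when that word was actually spoken) falls below the threshold."""
    totals, counts = {}, {}
    for spoken, probs in test_results:
        totals[spoken] = totals.get(spoken, 0.0) + probs.get(spoken, 0.0)
        counts[spoken] = counts.get(spoken, 0) + 1
    return {w for w in totals if totals[w] / counts[w] < threshold}
```

So if "go" scored 0.99 but "right" averaged 0.6 across a few tries, only "right" gets flagged for more samples.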
I just want to remind everybody, as I mentioned at the beginning, if you have any questions or comments, or anything you're confused about or want more info on, go ahead and type it into the questions section of your GoToWebinar interface, and Amanda can answer any of those for you if anything is confusing here. So now this loaded in, so I hit the copy button. I'm also going to download this model to my computer, just in case I need to make any changes or adjustments to it. So I will download my model. And before I exit out of this: downloading my model only downloaded the code and the framework of the machine learning model. What it didn't do is download any of the audio samples. So when I close out of this, if I don't do this next step, these audio samples are gone. So I archive them, just in case, by clicking on these little three dots and going to Download Samples. I download the background, then I download the "go," download "left," and download "right." The purpose of this is that if the model doesn't work for any reason, and I've already closed this out, then it's easier to go back and rebuild my project the way I had it before, so I can make small adjustments instead of doing the entire thing all over again. And now I'm going to drag over our coding environment. There's a connection failure, but that's just because I pulled his chip earlier, so I'm going to close this out. Here I've loaded in the Python code, but I'm going to reset it and show you what it looks like from scratch. You go to brython.birdbraintechnologies.com, and then you'll turn on your robot. To turn on the little Finch dude, there's a little black button on his underbelly. Now this is blinking different letters. When this little micro:bit is blinking different letters, it means that Bluetooth is connected and it's able to communicate with your computer.
If you don't see letters, you probably don't have a connection. So you click on Find Robots, and then we'll click on this little identified robot and go to Pair. You'll hear a little tone coming out of the robo-dude. And fine, it preloaded it. But if this didn't work, we'll get rid of that, and then we'll just go into Import and grab the little voice command maze. In your case, you'd actually have to click the link to download it and then grab it from there, but you can import it, and it'll pop in with the sample code that I gave you. The only thing you would do differently, and I'll make this a little bigger just in case, is that you go into this URL and put in the URL of the model you just trained. I checked: the URL of the model we just trained was this PM1. So now we're just going to run it and test to see if it actually talks and works the way we want it to. And I'm going to open up my little robo-cam here. It would, if I remembered my dang password. Oh, I know. So instead, I'm going to grab my webcam off the top of my computer. Could you let me know if it's lined up in a way that you can actually see it? Maybe tilt it a little bit. Yeah, that's pretty well centered there. Okay, so I'm going to hit this play button. It gave an error here, because the original model that I put in also had a class called "stop," and in this new one I didn't put in a "stop." It'll give you an error if it's trying to find a classification in the code that isn't in the model you connected it to, so you might run into that. I'm going to hit the stop button, clear out the "stop" in the code, just to make life easier, and hit the play button again. Now, when it loads correctly, on the right-hand side you'll see "waiting for first prediction," and then you'll see: go. Left. Right, right, right. Go. And that's cute as heck.
And the reason this works is because, if you look down here, we set kind of a variable that names the Finch "bird" for the movement commands. Did I tell you to go? I guess you maybe said that; I probably said a word that sounded like it. I'm going to hit stop, just so the robot's not running around my desk while I explain this. So we connected this over to our Teachable Machine model, and down here is where you're connecting it and telling it to make movements based on the prediction that it sees. In Python, it uses something closer to natural language, so you just straight up type in the word "go," and it'll do the thing that you want. And if you go through those tutorials that I gave you, that introductory material, this is how you set it to make that left turn. They made it super easy to just turn left at a 90-degree angle at 50% power. If I were to change this over to, say, 100% power, I could make it go a lot faster. And if I didn't want it to make a perfect 90-degree turn, I could set it to 45, and it would make that 45-degree turn. And this is actually setting the motors: "F" is forward, it's going at 50% power, and it's turning both wheels continuously in, basically, a 10-degree motion until... sorry, this is 10 centimeters. I'm mixing up my code; I haven't had my coffee yet. So: forward, 10 centimeters, 50% power. And this is basically that same 0.9 prediction again: if you hear a word, you think it's "go," and there's a 0.9 probability that the word is "go," make it take that action. That's why, when you talk to your Alexa and you say a word that sounds similar, it means there was a high probability it was that word, even when it wasn't; that's all Alexa knows, that's what she does.
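Put together, the movement piece looks roughly like this. The setMove and setTurn calls mirror the BirdBrain-style commands described above (direction letter, centimeters or degrees, percent power), but treat this as a sketch rather than the exact sample code: the wrapper function and constant names are my own, and the function takes the finch object as a parameter so the maze geometry lives in one place.

```python
# Sketch of the movement commands, assuming a BirdBrain-style API:
#   setMove(direction, distance_cm, percent_power)  -- 'F' is forward
#   setTurn(direction, angle_degrees, percent_power) -- 'L'/'R'
STEP_CM = 10   # forward distance per "go"; match your maze blocks
TURN_DEG = 90  # a clean quarter turn; set to 45 for a softer turn
POWER = 50     # percent power; 100 makes it go a lot faster

def do_command(finch, command):
    """Translate a recognized word into one robot movement."""
    if command == "go":
        finch.setMove('F', STEP_CM, POWER)   # forward STEP_CM centimeters
    elif command == "left":
        finch.setTurn('L', TURN_DEG, POWER)  # pivot left TURN_DEG degrees
    elif command == "right":
        finch.setTurn('R', TURN_DEG, POWER)  # pivot right TURN_DEG degrees
```

Keeping the step distance and turn angle as named constants at the top is what makes the maze-design-second advice easy to follow: change STEP_CM once and your whole maze grid changes with it.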
And so, in the Snap! environment: I actually had some issues getting the Snap! environment together; I had to open up a trouble ticket with Finch, just because there was something where I had to find a weird, roundabout way to actually get it to work. That's why I demonstrated with Python first. But in the Snap! environment, I'm going to take you over to the main page that this tutorial was adapted from, so if you want to experiment with this in different ways, you can also go through this. If you remember, I said that the BirdBrain Technologies people put together code and libraries to make it easier to work with all this stuff. The code samples that I gave you to download are adapted versions of what's in here. I clicked on the audio recognition tutorial in here, and it gives you a rundown of how to put together that Teachable Machine model. It won't give you the directions to download and archive your own audio samples, and it doesn't give you the direction to say the words in different ways so that the computer understands frustration, but it'll give you the idea; it'll still get you where you want to go. So, scrolling down here, we'll connect our little dude. This is loading the same code that I had before, but I'm going to stop this; what I wanted it to do was actually show its test code. You know, it looks like they changed their instructions after I opened the trouble ticket. Oh, which I like; I appreciate that. Hopefully. Yeah, which is helpful, and I appreciate it. It just means that this is not in the same order, and it doesn't have the link to the same... is it linking to my test code that I put together? Well, I'll have to look at that later. Okay, so I'm going to go back into the main page, and I'm going to try to find the thing that I was going to demonstrate, unless it's not in there anymore.
So I'm going to click back into this audio recognition tutorial, and what I'm looking for... I think they took out the Snap! code. The Snap! code is what wasn't working. Interesting. Okay, one other place I can look. Snap!... let me see if this is it. Okay, there we go. They just have a different tutorial for it now; they split out the Python and the Snap!. So this is the Snap! audio recognition; click it open. This is the sample code that they give you. When I was working with it before, if you tried to copy and paste your model URL directly into this Snap! code, it didn't connect. I had to update this URL link, then export this file and re-import it, and then it would work. But let me just show you, based on their sample code, just for the sake of time, what it looks like when it's working. I'm going to hit this S key on my keyboard. And it gave a little error message, because we need to go up into the settings and click on JavaScript Extensions. Then I'll hit the S to run this little JavaScript code so that it connects over to the machine learning model, and then I'll hit the space bar key. Now we're getting another error message, and it's because of "undefined": it's not recognizing the listen library, which means this didn't run. So we're going to hit the S again. In this case, it was recommended that you wait a minute for it to load, register, and update. Once it's updated, then you can switch over and hit that space key so that this will run. The indicator that will let us know that what we just did worked is that, over on the right-hand side here, these numbers will start changing. They'll start changing rapidly, because this is your prediction table; the computer is now connected and listening to the audio coming through your mic. It's the equivalent of saying the wake word to Alexa. So now you see these numbers jumping up and down here, so you can start saying: go. Go.
So you probably noticed the robot didn't move here. The reason is that a really common thing you'll find in the Snap! environment is that it defaults over to something called the sprite. That means it's controlling the little digital object that's on your screen, and not the thing that's actually connected. So if it didn't connect and it's not talking, hit the X key to stop and interrupt the program, then click over to the stage. The stage is what's actually going to be talking to the robot that's connected externally, instead of the little random sprite arrow thing. Don't ask me why it's set up that way, but I'm assuming it's because most of their projects and lesson plans use the sprite; who knows. So now we've switched over to the stage, and that's going to be pulling from a different code. This is where I'm going to go in and import that sample code that you can find in the slideshow or on Finch's website. It'll be a different code: the one on Finch's website is connected to a model that makes the robot move forward indefinitely when you say the word "go," until you say the word "stop." I ran into too many walls when I did that, so I changed the code. I hit a lot of walls doing that. So I'm going to import, and I'll grab it here. I'm just going to go to the properties and make sure this is the right one. You're looking at the type of file; it should be an XML document. I was making sure that it wasn't a Python document, because otherwise it won't work. Now this is connecting over to a model that I put together previously; it's the same model that you'll connect to when you download the sample code from the slideshow. So I'm going to go with it. I'll hit the S key, run that JavaScript in the background, and then I'm going to hit the space key.
And what we're waiting for is these numbers to start jumping up and down, so we know that it's connected to the mic and it's starting to try to make predictions based on what you're saying. We're just going to give it a second; it can take up to a minute for that to start happening. While we're waiting for this to load and connect, I'll pick this apart to show you how this code is different from Python. In Python, instead of saying item 2, item 3, item 4, item 5, it actually says the word: "go," "left," "right." But here, this is connecting over to a prediction table. So when it says item 2, it's actually looking over at this prediction table over here. Item 2 is "go," item 3 is "stop," and item 4 hasn't loaded into the table yet, which is what I put in the trouble ticket about yesterday, because whenever I tried to run this, the prediction table wouldn't update, and I wound up having to use a weird, roundabout system. I'm hoping they actually get that fixed before anyone tries to start using it, because otherwise you'd have to go through the same weird, roundabout thing to get it to work. So I'm going to hit the X key to interrupt that, and I'm going to go back into my settings and make sure the JavaScript extension is enabled. We'll hit the S key again to see if we can get this to load. You'll know that it's loaded because there will be additional options in your prediction table: instead of just "go" and "stop," there will also be a "left" and a "right." And this prediction table down here will also change: instead of a length of three, it'll be a length of five, because you added two more options. So that's what will change to let you know that pressing the S key actually worked. And the rigmarole I had to do to actually get this to work was: I had to clear my cache, because the cache is where all that little background data is stored.
And when I cleared that cache, it was able to update and reconnect to the correct machine learning model. Then I had to stop and restart the BlueBird Connector to reset the connection to the robot, because it had timed out by the time I got all that other junk done. Then I had to export and re-import the model to basically reset the system, and then start it again, which I'm not going to do right now. Hopefully they'll have fixed that. Yeah, that's what I was hoping had happened when I saw they had changed some things on the previous tutorial. My bet is that they're working on it, but it just hasn't gotten there yet. So I'm going to hit the X key, make sure this is not still going, and pop us back over to here. So this is the basic outline of what you would go through to do the same thing. Before we wrap up, I'll give you some tips for setting up the maze itself. If you think back to the Python code that I put in, you can see that I programmed the robot to go forward 10 centimeters at a time. So if you have a space that is 10 feet by 10 feet, you can evenly separate that space into block sections and set your code to go 20 centimeters, so that every time someone says go, the robot moves forward 20 centimeters, and you've already blocked off maze segments that match that 20 centimeters. You can also use angles that match up with the programming you've already tested and know works. And you can also go rogue on this and start making it interact with weird different phrases: instead of saying go, you can say boo, and it'll start running away. You can also set it to scream, so every time you say the word scream, it plays an audio file of an overly dramatic scream.
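The maze tip above (divide the space into cells whose side equals the per-"go" step distance) can be checked with a quick bit of arithmetic. This is just an illustrative sketch of the conversion, not part of the sample code.

```python
# Quick maze math: how many step-sized cells fit along one side of
# the space, so each "go" command moves the robot exactly one cell.

FT_TO_CM = 30.48  # 1 foot = 30.48 cm

def cells_per_side(space_ft, step_cm):
    """Whole step-sized cells that fit along a side of the space."""
    return int(space_ft * FT_TO_CM // step_cm)

# A 10 ft x 10 ft space (304.8 cm per side) with 20 cm steps gives
# a 15 x 15 grid, with about 4.8 cm left over along each side.
print(cells_per_side(10, 20))   # 15
```

Laying tape at those 20 cm intervals before the program means any drift is obvious the moment the robot leaves its cell.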
And you can start integrating audio, so you can embed little Halloween decorations, like witches, on the side of the maze, and every time the robot recognizes a witch, you can set it to scream. Cool. So you can do all sorts of things. That's awesome. Yeah, you can do a bunch of different stuff. And whenever I say adjust the code and the model as needed: as you go through this, you'll find that if you work with students, or with basically anyone who has a lisp, pronounces words a little bit differently, or has an accent, you might actually need to go back into that machine learning model and have that student train the model with their voice too, so the machine will be able to recognize the different ways people say words. You'll want to test that out before you start doing the maze: have each student connect and test how their voice interacts with it before doing any larger-scale projects. It saves everyone a boatload of embarrassment, because instead of someone being singled out as the one voice that won't work, you're teaching the computer to understand them, because everyone's different. Absolutely. It's very similar to when you got your Alexa or Google Assistant, or whatever you had, and trained it to recognize your voice. Yeah, it's something a lot of people have done before, so hopefully they'll understand. And now you know why. So I'll leave it there; you can go through and try out the links. I put in the link to the original tutorial that just has the code that makes the robot go and stop, and you can also try out the other tutorials over here if you want to learn more or make the robot do different things or interact in different ways.
You can add in a distance sensor so that it'll automatically stop when it gets, like, two inches from a wall. So you can do a bunch of stuff. And you mentioned these slides will be available afterwards? Yes, I'll link to the Google Slides for everyone, and it'll be on the archive page as well. But if you have any questions, you can let me know now, or whenever you think of them. Yeah, type them into the question section. I haven't seen anything come in yet, but that's okay; if you have anything you want Amanda to answer right now, she can definitely do that, so type into your question section. While we're waiting to see if anything comes up: thank you, Amanda, this was great. I think it should be a lot of fun to do. Like I said, I've attended many of your sessions; we've done coding and similar things, and getting your hands on the actual robots is the key. Now, I was actually going to ask: is the Finch something we loan out from here at the Commission? It is. So if you are a Nebraska library, whether you're a school, public, academic, special, or whatever kind of library, you can actually check this little dude out for free. All you have to do is pay the shipping to get it back, or if you're in Lincoln or Omaha, you can also drop it off at the Commission yourself. Yeah, so Amanda has a whole bunch of tech kits that can be loaned out to Nebraska libraries: sets of all sorts of different things, VR, robots, drones, all sorts of fun things you might want to try out. So if you're a Nebraska library, you can get the actual robot in your hands to practice with before you decide to invest in it for your own library. It's good to test it out ahead of time. And if you're from outside of Nebraska, I have gotten some questions from people who want to replicate that model.
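The distance-sensor auto-stop mentioned above boils down to a simple threshold check. Here's an illustrative sketch; the sensor reading would come from the robot's own API in a real program, and the two-inch threshold is just the example from the discussion.

```python
# Sketch of an auto-stop check for a wall-distance sensor reading (cm).
# In a real loop, distance_cm would come from the robot's sensor API.

IN_TO_CM = 2.54  # 1 inch = 2.54 cm

def should_stop(distance_cm, threshold_in=2.0):
    """Return True when the robot is within threshold_in of a wall."""
    return distance_cm <= threshold_in * IN_TO_CM

print(should_stop(4.0))    # True  (4 cm is closer than 2 in = 5.08 cm)
print(should_stop(10.0))   # False (plenty of room, keep going)
```

Checking this before every forward step, rather than only when a voice command arrives, is what keeps the robot from coasting into the maze wall between commands.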
So you can shoot me an email if you want to set up a system like that yourself, a program for actually lending the robots out. Yeah, or check with your state library first; we do it here, and like I said, we're the state library for all libraries, but if you're part of a library system or something, you could do this as a program to loan kits out to the libraries in your area, however you're set up in your state. Questions from regions of libraries, multiple-branch libraries, or state libraries are all welcome; it's however you're set up. Yeah, it's an awesome program. We've done it here for lots of different things over the years: we had gaming equipment when that was first big, and makerspace equipment that we loaned out to libraries for a period of time to test out, and now we've got the ongoing tech kits. So it's a great way to avoid investing the money before you know what's going to work for your community; your people may be really into the robots, or into one particular one and not another. So before you choose what you might purchase, try it out first: try before you buy. Or just check it out for a one-off programming event, absolutely, yeah. We have multiple copies of these devices, so it's not just a single one. There are some code clubs that just check out a different kit every other month to play with that particular thing for a little bit. So yes, libraries do have, as you're saying, code clubs, for kids, for girls, boys, whichever, yeah. All right, well, it doesn't look like anybody has any desperate questions they want to ask right now, and that's fine. Reach out to Amanda with her email and she can answer any questions you do have. We just have some thanks: "thank you, this was really interesting." I'm going to pull presenter control back to my screen. There we go.
There we are. All right, and to show you: as I said, we are recording today's show, and it will be available in our archives, which are here on our main Encompass Live page, right underneath our upcoming shows. We have our archived Encompass Live shows, most recent at the top, so here's the one from last week; today's will be here at the top. By the end of the day tomorrow it should be up and posted: the link to the recording on our YouTube channel and a link to Amanda's slides, which I already have right here, ready to go. So you'll have access to both of those. Everyone who registered and attended today's show will get an email from me letting you know when the archive is ready. We'll also post it out on our various social media. We have a mailing list here at the Library Commission, and we have a Facebook page for Encompass Live, where we post log-in reminders for the day's show, meet-the-speaker posts, and announcements when a previous session's recording is available. So it'll be out there. We use the abbreviated hashtag NCompLive for Encompass Live and PrettySweetTech for Pretty Sweet Tech sessions, so you can search for each of those here on our Facebook page, or on Twitter, where we use the Library Commission's Twitter account to push them out. Our archives here, I just want to show you, are searchable if you're looking for any topic we might have done a show on. This is our full show archive. You can search just the most recent 12 months if you only want something recently done, but the default searches the full archives, going back to when Encompass Live premiered, which was January 2009. That's a lot. You'll find lots of good shows and information here, but do pay attention to the original broadcast date of any show; it's listed here and on the main page. Some of the shows will still be good; the information will stand the test of time.
Not a problem, but some things will become old and outdated: information will have changed, and resources, services, and products might have changed drastically or not exist at all anymore. So just pay attention to the date of anything you are watching in our archives. All right, so that wraps it up for the show. Thank you so much, Amanda. We'll see you back here in a month; October 26 is the next Pretty Sweet Tech. Do you have any thoughts on what we're doing then? It's going to be Halloween time. I was thinking about asking some of the libraries to come in and talk about their activities; I had some libraries send in some pictures and some little videos of stuff they did, so kind of like activities in the wild. Yeah, we'll see. So keep an eye on this, and we'll see what we come up with for our topics. We've got all of our October dates scheduled here, and I'm getting November dates confirmed as well, so you'll see some new dates coming up. We are off next week; we will not have an Encompass Live show. Every year, for the Nebraska Library Association conference, whenever that is, we take the week off, because everyone in our state is involved with the conference. So if you are in a Nebraska library, you may be heading off to Kearney for the conference. We'll be back the week after that, on October 12, to learn about navigating the new NebraskAccess. NebraskAccess is the set of databases that we provide to libraries through the Nebraska Library Commission, and Debra, Alana, and Susan are going to come on to tell us about updates and changes that have been made to the program. So please do sign up for that show, and for any of our other upcoming shows; keep an eye on our schedule, and hopefully we'll see you at a future episode of Encompass Live. Bye bye.