Hey, everyone. Today I'm going to be running through a workshop on tuning your instrument with Google Teachable Machine. By the end of this, we're going to be using Google Teachable Machine to create a machine learning model that'll tell you what note your instrument is playing. So if you're playing an A, but you should be playing a D, you will know that. And we'll also create a web app to deploy your machine learning model so anyone can use it. Let's see if I can share my screen. OK. So yeah, a little bit about me. I'm currently a senior at the North Carolina School of Science and Math, and I'm also a writer at AITA, which is who I'm creating this workshop for. I was born in Singapore, but I've lived in North Carolina for 12 years now. In regards to computer science and programming, I'm mostly into web dev, so creating websites for organizations and just for fun, and also machine learning. So I tried to make this workshop sit at the intersection of those two: we'll be creating a machine learning model, and then we're going to deploy that machine learning model as a website. Outside of school, some things I like to do for fun: I really enjoy swimming, I like biking as well, and watching airplanes is cool. That is the wrong slide. OK, awesome. So here's the outline for today's workshop. We're first going to talk a little bit about audio recognition and what it is. Then we're going to actually build our tuning model. Then we're going to talk a little bit about machine learning, and then we're going to actually create our app. So this right here is a demonstration of what the finished app will look like. On the right here, you'll see our website, which is also accessible via this URL; I just copied and pasted that link into the browser. We'll share these slides with you, so you will be able to play around with this link as well. But on my phone right here, you might not be able to see this. 
Let me turn off my brightness. Right here, we have a D note on my phone. So theoretically, my machine learning model should be able to detect that this is a D. So let me start this, play this, and you'll see that it says it's playing a D. Just to prove that it works and I'm not messing with you, let's try B. There you go, that's B. I can also try A. So let me find an A note; after this advertisement, we can try A. Right, so this is a machine learning model that is able to detect the notes that are playing, and also background noise, which is what it's detecting right now, which is correct. So I'll turn back to these slides. Yeah, so this is our app. It'll tell you whichever note is playing, so you'll know whether or not the note you're playing is actually correct. That's one feature. Another thing you'll notice is that when I demonstrated that model, I did not have to enter a single line of code. For the apps that you're creating, you don't want them to be inaccessible to your target audience. In our case, our target audience would be individuals who are more into playing instruments than messing with specific lines of code trying to get a web app working. So with the click of a button, they're able to detect notes, and it requires no code. It's also really easy to expand and add features to our app. With this information, with this A note that we're detecting, we could maybe say, you're doing a great job, this is correct. Or we might be able to say, you need to play a little bit lower. We have all the data needed to actually do that. So you can go ahead and add a lot of features to this app; you could maybe even make it into a social app. You can really do essentially anything with it. And yeah, just as a reminder, the link to this app is right here in the bottom corner if you want to go ahead and play around with it. Cool. So let's go ahead and talk a little bit about what audio recognition is as a whole. 
Audio recognition is a very big umbrella term, sort of like machine learning itself. In short, audio recognition is whenever a program receives audio as input and interprets that audio to produce some meaningful output. Typically these audio recognition programs are built on machine learning models. For example, with what I'm saying right now in this workshop, an audio recognition model might be able to turn it into a transcript, so whatever I'm saying gets written down. That can be done using machine learning. Another application of audio recognition is powering personal assistants. If you have an iPhone, that's what Siri would be using. Or if you use Alexa, whenever you say Alexa, it'll recognize that you're saying its name and it's going to come alive. Speaker identification is another big one. Speaker identification, in short, is along the lines of facial recognition, but using your voice: identifying people based on the way they talk. And an interesting one that I found online when I was doing research on this topic was animal noise classification. Maybe you want to monitor a spot in the middle of the forest, but you don't want to spend the money and resources to actually install a camera there. What you can do is install only a microphone and collect audio data, which takes up considerably less space than visual data, and then create a machine learning model to classify those animal noises, so you're able to know which animal it is. And you might be wondering, out of everything we just covered, why do we even care about audio recognition, right? After all, everything on this slide could be done by another human. Speech-to-text: a friend can write down what I'm saying. They can answer my questions like Siri or Alexa would. So you might be wondering why we do this, right? 
The short answer is that while someone might be able to do it once or twice, it's gonna be very costly and slow in the long run. As with all computer tasks, audio recognition is really good at saving time by doing things over and over again. As it says on the slide right here, computers are good at making the same decisions repeatedly, while humans are good at making difficult decisions. And with audio recognition, computers are even getting into some of those more difficult decisions, such as actually recognizing what I'm saying, which saves us a considerable amount of time. So maybe instead of having a dedicated note-taker in a meeting, you just click a button and all that speech gets transcribed. In general, it just makes life a lot easier. So in order to do audio recognition, we're gonna need to do what's called machine learning, right? You can think of audio recognition as a subset of machine learning, and as such, an audio recognition model like the one we're creating today is gonna follow every step any machine learning model would go through. The first thing we're gonna do when we're building our model is actually collect data, and you'll see the form that data takes and how we feed it in to train our model, which is gonna be our next step. Training the model is essentially creating the mind, I guess you can put it that way. You can essentially think of our model as a brain: we're gonna feed this data into the brain and we want some meaningful output. The way we check whether we get that meaningful output is by testing our model. And once we're happy with our model's results through testing, we're gonna go ahead and deploy that model, which is what we did with the website that I showed you earlier. 
So the first step in machine learning is going to be data collection. With any machine learning model, you're gonna need data, right? That's why machine learning is often said to be at the intersection of computer science and data science: you need data and you need programming ability to do machine learning. For data collection in our case, we're gonna be using YouTube videos as tuning examples, and the notes that we're gonna be classifying are A, B, C, D, E, and F. I know there are other notes out there, but for simplicity in our example today, we're gonna keep it to these six. Just as an example of what this looks like, if I click on tuning note A, this is what it's gonna sound like. I don't know if you can hear it, but that's an example of what the tuning note A should sound like. We're gonna be feeding the audio from each of these clips into our machine learning model so it can learn what each note sounds like. So after we have all this data, we're gonna actually feed these audio recordings into our machine learning model, and we're gonna be doing that using something called Google Teachable Machine, which has a super friendly web interface where you don't need to write a single line of code to train your model, which is really, really neat. You'll see right here, this is the Google Teachable Machine interface, and we'll be going through this again when we actually build it ourselves. You'll see that we have to capture a bit of background noise, so our model knows what to filter out when we're classifying these clips. We'll have a couple of clips for A, and the way we're gonna record those is I'm just gonna hold up my phone right here, for lack of a better method, click the play button, and record using my computer's microphone. 
Then we also have B notes, and you can imagine that we also have C, D, E, and F. After that, we're gonna go ahead and click this train button and actually train our model. All the training is essentially done behind the scenes, so you won't actually need to worry about it. But if you are interested in what actually happens behind the scenes, here's a bit of the in-the-weeds detail. Don't worry if you don't fully understand what's happening on this slide; it's meant to be a gentle introduction to the more difficult parts of machine learning. Essentially, Google Teachable Machine, which is what we're gonna be using, is built on top of something called the Speech Commands recognizer, which is a pre-trained audio recognition model that already recognizes simple words. It already recognizes the digits zero through nine, as well as words like down, left, right, stop, yes, and no. However, we're not actually gonna need these simple words; our target for this model is gonna be recognizing notes, right? That's why we're gonna need to add our own classes to this audio recognition system, which are gonna be A, B, C, D, E, F, and background noise. We're gonna do this using Google Teachable Machine, which is no-code, and you're gonna get back a model that you can embed into another website or app, which is pretty cool. So once you've trained your model, the logical next step would be to make sure that your model works as intended, right? This is usually pretty easy with something like a text classification model, where there's an objective metric on whether or not the thing you're classifying is correct. However, Google Teachable Machine doesn't provide you with an accuracy metric. 
And you can imagine it'd be a bit difficult to say whether something is correct, if you think about it. Let's say I play an A note for 20 seconds. Maybe for 10 seconds of it I get A back, and for the next 10 seconds I get D back. Then I keep adding on: maybe for the next 10 seconds I'm playing a C note, but it's detected as D. So notice that as this adds up, it gets a bit more difficult to calculate what you'd actually define as accuracy. What we're gonna do to solve this is figure out by testing whether what we're doing is correct. When we play A, we have to make sure that our model is outputting A. So it will be a bit subjective, and as this line says, you have to make sure that you're happy with the results you're getting in order to move on. After that, we're going to actually deploy our model and make it available for others, either the public or other programs, to use. This can take many forms, right? What we're gonna do in this tutorial is make a web app, or a website, for users to use. Alternatively, we could turn this into a mobile app so people can interact with it on their phones, or we could turn it into an API endpoint so other programmers can make an HTTP request and interact with our code. Since we're gonna be deploying this as a web app, we're gonna be deploying our code on GitHub Pages, and that's the URL that I pointed out earlier. Cool. I'm glad you got through that; a ton of rambling right there. Our next section is actually going to be building our... Hello, everyone. I'm back again with the workshop recording. Right here, we're gonna be actually building the model that our tuning web app is going to be built on. 
So right here I'm at Google.com, and I'm gonna run through with y'all how to actually build this step by step. We're gonna go ahead and search for Google Teachable Machine right here. It should be the first thing that pops up, so I'm gonna go ahead and click on it. I'm gonna zoom in a little bit so it's easier to see, and I'm gonna go ahead and click Get Started. Google Teachable Machine, if you did not catch it from the first part, is going to be an almost no-code solution that allows us to build machine learning models. That might sound a bit too basic, but don't worry: this is actually gonna follow the same machine learning principles that every machine learning model goes through. We're gonna deal with data collection, training, validating and testing, and actually deploying our model. So we're gonna go through all four steps, just without writing as much code, which is actually really nice. So right here, once you go ahead and click Get Started in Google Teachable Machine, you should have a couple of options for what you want to do. We're gonna be using an Audio Project, but I'll give a short run-through of all of these. An Image Project is going to train a model based on images, for example from your webcam. If you look at this example right here, it might be whether it's just the dog or you and the dog, right? The machine learning model would be able to predict whether it's just you in the picture or you and your dog. The Audio Project, which is what we're gonna be using, is where we're gonna train our machine learning model to detect which note we're playing, which is actually pretty neat. After that there's also a Pose Project, so you can actually see which pose a person is in. 
So you can check their posture, you can check whether they're doing a squat correctly, which is also really cool if you wanna integrate working out and fitness with computer science and machine learning. Awesome. So from here, what we're gonna do is go ahead and click on Audio Project, and we'll be taken to this screen right here. This is gonna look a bit complicated at first, but I promise it's not. At first, all we're gonna worry about is this column right here. We're gonna have to record a couple of samples for background noise, which is essentially data collection. Then we're gonna go to Class 2, which we can rename A. Go ahead and add a class, call it B. Add a class, click on its name, call it C. Add a class, call it D. Add a class, call it E. Add a class, call it F. So we're gonna have seven classes in total, and what these seven classes represent are the different possible outputs that our machine learning model would be able to give us. Just as an example of what that means: for background noise, we're gonna have to record a couple of samples of what the background noise currently sounds like in the room, and we're gonna do the same for A through F. So with that said, let's go ahead and record a couple of samples for background noise. I'm gonna go ahead and click the mic icon, and I'm gonna be quiet, and it's going to record the background noise. Cool, so we have 20 seconds of audio recorded for background noise. You'll see that this says 20 seconds minimum, which means we have the exact minimum, which is absolutely okay. What we're gonna do now is click Extract Sample. Now we have 20 samples of background noise, which is gonna be good. 
This is so our machine learning model is actually gonna be able to filter out which noise is the background noise and which note we're actually detecting; that's why we're doing this in the first place. All right, now you can record 20 more seconds if you'd like, but for time's sake, I'm not going to. I'm gonna go ahead and move on to the A note. So let's go down first, and I'm gonna click on this A note, click the mic, and we're gonna get ready to record the A note and what that will sound like. One way to do this is, on my phone, I just searched up A tuning note on YouTube, and I'm gonna go ahead and click one of the results that comes up, this one. Notice that if I turn the volume up, you can hear an A note. So now what I'm gonna do is actually hold this up to my computer and record four seconds of it, and we'll see how that goes. Cool, so now we have four seconds of the A note. We're gonna go ahead and click Extract Sample. And I'm gonna do that again, so let me reset this. Now that we have four more seconds of it, we're gonna click Extract again. Now we have eight audio samples, and the minimum is eight, so we're good for the A note. All we're doing right now is telling the machine learning model what each of these notes should sound like so that it's able to actually detect them later on. Right, so just like a human, a machine learning model isn't magic: you have to give it practice problems, or training data, in order for it to understand how to classify future examples. So now we're gonna go ahead and do the same for B. I'm gonna go ahead and click on the mic, and I'm gonna find tuning note B on YouTube. 
So I have tuning note B right here on my phone; you can see that. I'm gonna go ahead and click on it, record, and once those four seconds are up, I'm gonna click Extract Sample, like we did with A. We're gonna record again, and extract the sample again. We're gonna do the same for tuning note C: extract sample, record again, and extract the sample again. Now we're gonna do D. Let's go down, hit mic, play D, extract sample, do it again, and then extract sample again. Now we're gonna do E, almost done, promise. Hit mic, extract sample, one more time, and extract sample. The last note we're gonna be doing is F, so let's click mic, extract sample, record again, and extract sample. Awesome, so now we have all of the samples that are needed, and essentially this is how we perform data collection, if you recall from the slides we did before. Right, so now all of our data has been collected. All we need to do now is go ahead and click Train Model. Google Teachable Machine is going to take a second, prepare all this data, and feed it through what's almost a brain, right? And just like how you learn by doing practice problems, Google Teachable Machine is gonna train on this dataset and figure out how to classify this data. On the right you'll see that our model is actually live, and this is where we're now in the testing phase. So if I just stay quiet real quick, we should see that this detects background noise. And you see, that's pretty cool, right? It did. Now if I pull up a note, such as the F I currently have pulled up, and I just play F, you'll notice it says F, right? Pretty neat. Now let's go ahead and do, how about B? B was a little weaker at first, but that's okay, because it got it in the end. Now let's try C. We'll see that C actually gets detected pretty well too. We're gonna try D now. And D is really good, it got it right away. Let's try A. 
It got A in a second as well. Now let's try E; I believe we haven't done E yet. There we go, E. And we tried F before. Awesome, so now we have validation that our model actually works, right, which is super amazing, which means that now we can go ahead and click Export Model. Quick recap of what we just did in the past 10 minutes before we export: we went ahead and collected data from YouTube; I just searched up tuning note A on YouTube, tuning note B, tuning note C, and I'll include the actual links to these videos in the workshop description as well. Then we went ahead and trained our model. If you wanna play around with it further, you can head into this Advanced tab and explore what each of these settings does. But it's not really necessary; as you saw, with the default settings we got really good results from our model. Right, so now what we're gonna do is click Export and then Upload My Model, so we'll have a link that we can use. It's gonna take a second to upload; give it one moment. And now our model is uploaded. So what I can do is go ahead and click this copy link, paste it into the browser, and you'll notice that our model is right here being previewed for us. Just to prove that it is our model, I can go ahead and play E again, and you'll see that it detects E. Awesome. So the next step in this workshop is actually going to be a pretty big part of machine learning, which is taking our model and asking, hey, how can we make this usable for the average user? We're gonna do that by deploying our model as a website using GitHub Pages, and we're gonna be making that website using HTML, CSS, and JavaScript. So if you wanna stick around for that, I'll see you guys in the next section of this workshop. Thanks. 
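When Teachable Machine uploads your model, the shareable link you copy is just a base URL; the export snippet loads two files from it, model.json and metadata.json. Here's a minimal sketch of that URL logic (the model ID below is a made-up placeholder, not a real uploaded model):

```javascript
// Teachable Machine hosts an uploaded audio model at a base URL.
// The exported snippet fetches two files from it: model.json (the weights
// and architecture) and metadata.json (the class labels).
// buildModelUrls derives both file URLs from the shareable link.
function buildModelUrls(baseUrl) {
  // Make sure the base ends with exactly one slash before appending names.
  const normalized = baseUrl.endsWith('/') ? baseUrl : baseUrl + '/';
  return {
    model: normalized + 'model.json',
    metadata: normalized + 'metadata.json',
  };
}

// Example with a placeholder model ID (yours will differ):
const urls = buildModelUrls('https://teachablemachine.withgoogle.com/models/abc123');
console.log(urls.model);    // → https://teachablemachine.withgoogle.com/models/abc123/model.json
console.log(urls.metadata); // → https://teachablemachine.withgoogle.com/models/abc123/metadata.json
```

The exported code that we'll paste into the website later does this same concatenation before loading the model, which is why you only ever need to swap in your own link.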
Hello, everyone. Thank you for staying with us for this workshop on tuning your instrument with Google Teachable Machine. In this section of the workshop, we're going to be taking the model that we already have, which we have right here, if I can just share my screen. One second. So if we look at the model that we already have right now, we're gonna be taking this and making it into a website, so anyone, regardless of coding experience, is able to access this model. You might be thinking that's pretty difficult, but don't fret, it's not as difficult as it sounds. So I can just go through the slides real quick. Cool. So now we have our trained model, which is right here, and the model URL at the bottom of the screen is my model's URL, if you wanna look at that. Now what we're gonna be doing is deploying the model as a website, and we're gonna be using three languages to do that: HTML, CSS, and JavaScript. HTML you can essentially think of as the skeleton of the website, which is where all the content goes, right? This HTML is easily viewable with inspect element, and all the text on a website is always gonna be in the HTML. Then if we move on to CSS, the CSS is essentially the makeup of the website. So if you think of HTML as the skeleton, the CSS is what makes the website look pretty. This might include changing the background color, the text color, font weight like bold, highlight color, rounded corners, hover colors, et cetera. Anything you can think of to make a website look nice, you can probably do using CSS. JavaScript is going to be more of the brain of the website. JavaScript is what makes websites interactive, right? So when you click a button, what is gonna happen? That's gonna be dictated by JavaScript. 
In our example specifically, JavaScript is going to be accessing our microphone and detecting audio. For our build outline: what we're first gonna do is create our website files, then we're gonna export our model to the site, then we're gonna add a bit more styling so it doesn't look as plain, and then we're actually gonna deploy our website. You might be worried this will be a bit hard, but it's okay: everything we'll be doing is gonna be online, so you won't have to do any local installations or anything, and we won't have to worry about any of that. The way we're gonna start off is, let's all head to a site called Replit. It's R-E-P-L-I-T dot com. This is where we're actually gonna be coding our website. First thing, let's go ahead and click on Create right here, and let's search for HTML. We're gonna click on the first one. For the name of our site, I'm gonna say tuning instruments ML, like that. Then I'm gonna go ahead and click Create Repl. It's gonna take a second. Then you'll see right here, I'll just walk you through the interface of Replit real fast. On the left right here, you won't really have to worry about any of these buttons at all. Right here, you're gonna have three files in your folder. Right now we have index.html, which is right here, and this is what I was talking about with the skeleton of the website. Right below that we have script.js; this is gonna be the brain of the website. And then right here, this is style.css, which we can think of as the makeup of the website. And right here, when I click Run, you'll see that we have the output of our code right here. Just to show you, I can actually open this in a new window, and this is a live website. Anyone who goes to this link will see this Hello World website. 
Well, actually, maybe you might not, because by the time you're watching this workshop, this link will already be populated with my code. The first thing we're gonna want to do is take this code that I'm about to paste onto the screen. Let's first copy this code, then, all the way from Hello World right here, let's go ahead and delete that, and then we're gonna paste this right there. I will make this code available to you, probably via a bit.ly link that we'll show on the screen right now, or in the video's description, so keep an eye out for that. So paste that into index.html, then head to script.js. What we're gonna do is copy and paste this code into script.js. Paste that there, and I'm just going to clean this up a tiny bit, like that. Now when I click Run and then Start, it should ask to use my microphone; I'll click Allow. And you'll see that we get everything that was on that preview page, but on our own website. Just to show you what that looks like, I can play a tuning note like I've done before. Oops, let me disconnect this. I don't know if you can see this, but this is note C, and you'll see that C was the one that was detected just now, right? C right here. Except this website, as a whole, to be honest, it's not very pretty, and we can actually change that. But if you are happy with this website as it is, you'll see right here, when I open a new tab, anyone can use this website now. And it was that easy; all the code that I copied and pasted is given by Google Teachable Machine, by the way. If I want to test again, you'll see that the note C is playing. I thought that was really funny. Anyways, let's go ahead and head back to our code right here, and let's add the code that we need to make this website look nice. Our end goal is gonna be to turn this website that we have on the screen right now into something like this. 
Oops, not like that. But something like this, where when I click Start, it's gonna show Loading for a second, and then, right, it'll show the note like that. So what we can do is, let's go ahead and head back to our code. I'm gonna go ahead and add a little bit of CSS, and I will make this available as a link as well, because the goal of this workshop isn't to teach you CSS so much as machine learning. But essentially, these right here are all gonna be classes that we can use in our HTML. So go ahead and replace all of that, and run. You'll see that immediately it looks a little bit better: we have changed the background color, and the width of this is not all the way to the edge. Next, we'll head into index.html, and what we're actually gonna do is go ahead and add a couple of elements. First, at the top of this div right here, let's go ahead and create an h1 that says tune your instrument with Google Teachable Machine. Then let's also make it super obvious for the user which note is currently playing. We're gonna add an h3 that says current note, like that, and then a span inside it; I'll hit tab, and we'll give it some default text, but we will change that as needed. Next, we're gonna create a button. We'll give it type equal to button, then an ID of start-button, and its text will say Start. Lastly, that means we can remove this; we can just put all of this at the bottom, and all we have left is the label container, which is what we need. Now when we run the site, we'll see that it looks like this. Well, we can also delete this, that right there, actually. Now when we run it, we'll see that it looks like this, which is a lot more similar to, where did we have it? Let me pull up the original site real quick. That's our old one. 
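The markup being dictated above, reconstructed as a rough sketch; the exact IDs and text are my best guess from the JavaScript that references them later (result, start-button, and the label container), so treat them as assumptions rather than the exact file:

```html
<!-- Sketch of the dictated structure; IDs inferred from the script, not exact -->
<div>
  <h1>Tune your instrument with Google Teachable Machine</h1>
  <h3>Current note: <span id="result">...</span></h3>
  <button type="button" id="start-button">Start</button>
  <div id="label-container"></div>
</div>
```

The span with id result is where the detected note will be written, and the label container is where the per-class scores get listed.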
So these basically look the exact same now. Right now nothing really happens when we click Start, so the next thing we're gonna need to do is add that functionality. So let's go ahead and head to, one second, script.js, and grab some elements real quick. Let's say const resultContainer equals document.getElementById with the ID result. Then we're gonna say const startButton is equal to document.getElementById with start-button. You'll notice what we're referencing here: the ID result is going to be this right here; that's how we're gonna be changing this value. And start-button is gonna be this button right here; that's how we're accessing it in the JavaScript. So this JavaScript is essentially talking to our HTML file, right? Next, let's go ahead and create an event listener, which essentially says what's going to happen when something happens, right? So let's say startButton.addEventListener, we're gonna say click, and then, with this slightly cryptic arrow-function syntax, we're gonna say console.log, button was clicked. Now when we run it, and let's open this in a new tab like this and refresh it, let's see what happens when we click this button. Button was clicked, right? Every time I click this button, you'll see this output on the console. Let me just refresh that. Ready? Click: it shows there. Click it again: it shows it was done two times, three times. Pretty neat, right? So now we have a way of interacting with this button in JavaScript. So now what we can say is, whenever we click it, we want startButton.innerHTML to say Loading, instead of just nothing happening. Then we're gonna call init. Init is going to be a function that starts right here, which essentially tells the model to start working. Next, we're going to add a couple of classes. We're gonna say const classes is equal to an object, and make sure you have a mapping for A. 
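The click-handler pattern being dictated here can be sketched outside the browser too. In the real page the target comes from document.getElementById('start-button'); as a stand-in, Node's built-in EventTarget supports the same addEventListener/dispatchEvent pattern, so this sketch (with assumed names) shows exactly how the listener fires on every click:

```javascript
// Stand-in for the real button element; in the browser this would be
// document.getElementById('start-button').
const startButton = new EventTarget();

let clicks = 0;
// Register a callback that runs every time a 'click' event fires.
startButton.addEventListener('click', () => {
  clicks += 1;
  console.log('button was clicked', clicks);
});

// Simulate the user clicking the button twice.
startButton.dispatchEvent(new Event('click'));
startButton.dispatchEvent(new Event('click'));
```

This is why the console line printed again on every click in the demo: the listener stays registered and re-runs each time the event fires.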
Then "b", then "c", then "d", and keep going until "Background Noise". These are just for our CSS classes, by the way; this is how we're going to make it look nice. Anyway, scroll down a bit, and this is what yours should look like. Now let's head to this init function. Right here we have this code; let me minimize this so it's a bit easier to read, and make it larger. So we have something like this. Next we're going to say startButton.style.display = "none". (Sorry, I had that backwards at first.) We don't actually need that other part. All this does is remove the button from the screen when we click it. Next, const scores = result.scores. Let's go ahead and grab the maximum score: const maxScore = Math.max(...scores). Then const maxIndex = scores.indexOf(maxScore). So we've grabbed the class with the highest probability and its index. Then resultContainer.classList.remove(...) to clear out the class list. Then resultContainer.innerHTML = classLabels[maxIndex] + " (" + maxScore.toFixed(2) + ")", so we only show two decimal places. (Oops, that should be a plus sign, not an equals sign.) Then resultContainer.classList.add(classes[classLabels[maxIndex]]): whichever note we detect with the highest probability gets its corresponding class right here, and that's how we color-code it.
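The score-handling step above can be sketched as plain JavaScript. This assumes result.scores is an array of probabilities in the same order as the class labels, as the Teachable Machine audio model returns them; the example labels and scores here are made up for illustration.

```javascript
// Labels in the same order the model was trained on (example values).
const classLabels = ["a", "b", "c", "d", "e", "f", "g", "Background Noise"];

// Stand-in for result.scores: one probability per class (example values).
const scores = [0.02, 0.05, 0.88, 0.01, 0.01, 0.01, 0.01, 0.01];

// Grab the highest probability and the index of the class it belongs to.
const maxScore = Math.max(...scores);
const maxIndex = scores.indexOf(maxScore);

// Build the text shown in the result container, e.g. "c (0.88)".
const resultText = classLabels[maxIndex] + " (" + maxScore.toFixed(2) + ")";
console.log(resultText); // prints "c (0.88)"
```

toFixed(2) is what limits the display to two decimal places; indexOf works here because the maximum value was taken from the same array.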
Then we'll write a loop: for (let i = 0; i < labelContainer.childNodes.length; i++). Inside it, first grab the child node; we have to declare it: const currentNode = labelContainer.childNodes[i]. We can remove this other part because it's wordy. Then currentNode.innerHTML = classPrediction. Then currentNode.style.background = "transparent", just to reset it. Then, if scores[i] === maxScore, set currentNode.style.background = "lightgreen". Cool, let's go ahead and test it. New tab. Oops, an advertisement is playing, another ad on YouTube. Here's an A, like this. And it's actually detecting C, which is a bit weird. Let's try a different one. It's detecting background noise correctly. Let's try C: C is detected correctly. Let's try B: it's a bit confused and can't figure out if it's C or B. In these situations, it might be good to change the input source. When I recorded the training samples, I was using my computer's microphone, and now I'm using my headphones' microphone, so it might be a bit off. So I'm going to change the source. Cool, it should be able to detect on the new input source now. Let's go ahead and click Start. Yeah, it's detecting B correctly now, so it really was just that the input source differed from training. I'll try A again just to prove it wasn't actually broken. And D. There you go.
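The per-label loop above can be sketched like this. Child nodes are modeled as plain objects here so the logic runs outside a browser; in the real app they are the elements inside the label container, and the labels and scores shown are example values.

```javascript
// Example labels and scores; in the app these come from the model's result.
const labels = ["a", "b", "c"];
const scores = [0.10, 0.75, 0.15];
const maxScore = Math.max(...scores);

// Stand-ins for labelContainer.childNodes: one node per class label.
const childNodes = labels.map(() => ({ innerHTML: "", style: { background: "" } }));

for (let i = 0; i < childNodes.length; i++) {
  const currentNode = childNodes[i];
  // Show "label: score" for every class, e.g. "b: 0.75".
  currentNode.innerHTML = labels[i] + ": " + scores[i].toFixed(2);
  // Reset first, then highlight only the most likely class.
  currentNode.style.background = "transparent";
  if (scores[i] === maxScore) {
    currentNode.style.background = "lightgreen";
  }
}

console.log(childNodes[1].innerHTML); // prints "b: 0.75"
```

Resetting every node to "transparent" before highlighting means only one label stays green as the prediction changes from frame to frame.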
You can feel free to play around with the rest of it, but I just want to reiterate why it wasn't working just now: I was using my headphones' microphone, which is different from the computer's microphone I trained the model on. It's important to note that however you train your model, you have to test it in the exact same way. If you train it one way and then test it another way, it's not going to work out, as I just showed. So yeah, that was the entire workshop on tuning your instrument with Google Teachable Machine. If you're interested in the actual code behind this website, feel free to check out my GitHub, which is shown right here. Thank you so much for joining us today, and we hope to see you in the next one. Oh, and I forgot to mention: this website is actually something you can share with anyone. The reason we created it on Replit is that once you write your code and click Run, the website can be shown to anyone; if I open this URL in a new tab, I can access it here, and anyone else can access it too. The model still works as normal if I test it in a second. So there's absolutely no barrier to sharing this model with your friends and family to show them what you've created and prove that it works. So this is the A note. And yeah, once again, thank you for watching this workshop, and we hope to see you in our future workshops.