Welcome — a few things first. Who here was at my talk earlier today? And who wasn't? OK, cool, I'll repeat a few things pretty fast. What we're going to cover today is really two examples, worked through with TensorFlow.js. In one we'll use that MobileNet model, which I talked about in the talk today, and in the other we'll do regression — linear and polynomial regression. And that's all we've got time for, because we've only got three hours; if you want to do the full version, there's an eight-hour-plus workshop, so we've only got time for two. I've made a Slack where I'm going to post bits of code and links and things like that to make it easier for you as you go through — the link should be TFJSWS slash slack. I just created these links, so someone try it out; it wasn't working for me a moment ago, but that was my mistake. Once you're in, there should be a room called jsconfasia-tfjs. This is actually the Slack for our meetup group in London, so after the workshop, unless you're from London, I recommend you leave it, because we post about our meetups on there all the time. One other thing: I arrived last night and didn't manage to get to sleep all night, so I'm not too sure what's going to happen today. We'll see. The repo link should be TFJSWS slash mobilenet.
So basically, don't clone that URL itself — visit it, it will take you to the GitHub repo, and clone that to begin with. I'll give you a few minutes for that. Did people manage to log on to the Slack? I haven't checked the link at all — has anyone successfully logged in? Yeah? OK, excellent. And on the question about the GitHub link: it should be TFJSWS — WS on the end, both of them are TFJSWS — that was my mistake. Just give me two seconds while I set up this second screen with Duet... oh my god, it actually worked first time ever. This is so exciting. Quick, take a snapshot of that screen, because it's going to go in a second. Let me just resituate myself so everyone can see — facing out is probably the best way. OK, here we go. All right, so hopefully everybody's got that.
If you weren't at my talk this morning, I'm going to very quickly go through some of the things I covered there; if you were, consider it revision. I co-run a meetup group called AI JavaScript London — that's where you can find out information about me, especially my Twitter. Follow me on Twitter if you want; we can take a picture later. The agenda today, everything we're going to cover: first a really brief overview of neural networks and TensorFlow.js — old news if you were at my talk this morning. Then MobileNet, the thing I showed this morning; we'll go over it quickly so you can start dipping your toe into this stuff. And then — I'm really sorry about this, there's just no way around it, this is the gentlest I can make it — it's going to be a steep ramp up from MobileNet to regression. But regression is the simplest use case there is. So I apologize. Actually, no, I don't apologize: this is just your life if you want to do machine learning. We're going to go through linear regression and polynomial regression. For a lot of this you'll be sitting at your own computer, working through it yourself, and I'm here if you've got any questions — I'll help out as much as I can. Now turn to the person next to you — say hello and introduce yourself to the person on your left and on your right. Good. Help each other out.
There are a lot of people in this room, and it's going to be very difficult for me to get round and answer questions from everybody. If you've solved something and the person next to you is struggling, help them out. If you see somebody asking questions on Slack, type an answer. Help each other out as much as possible — it's nice to do that. So, an introduction. There are very different types of machine learning; what we're doing today is supervised machine learning. Essentially, you get some training data — usually something a human being has labeled in some way, as in a human has looked at it and said, "I think this is this." You take that training data and an algorithm, and you train it, and what that generates is a model. There are many, many different types of classifiers; what we're using today is a neural network method. Say you trained a model for sentiment analysis on some sentences, so it knows whether a sentence is positive or negative. Then when you give it something it's never seen before — like "I love Asim" — it can say: that's positive. That's what this stuff is really good for: you train on one set of data, then you can give it something it's never seen before and it can come to a conclusion about what that is. That's kind of the opposite of what we normally do when programming, where you have to think of every single edge case and it will only ever give you an answer based on what you programmed. That's the underlying, fundamental power of machine learning: it can infer and give you answers for things it's never seen before. So, we're using TensorFlow.
It is currently, I still think, the most popular open source package for machine learning. This is an old slide, but at the time the next biggest package had around 30,000 GitHub stars, while TensorFlow had 100,000 — those numbers have probably been surpassed a lot since. As I said this morning, we're using TensorFlow.js, and TensorFlow.js comes with two different APIs inside. One is called the Core API — if you're using TensorFlow in Python or other versions, that's what you'd recognize. It's very low level: if you looked at Core API code, you'd have to really squint to see the neural network. It's like looking at the building blocks of a program and trying to figure out exactly what it does. So if you're doing neural networks all the time, there are abstractions you can use — in Python there's Keras, which gives you a high level of abstraction where all you're describing is the neural network. Normally that's a separate package, but in TensorFlow.js they've combined them: two APIs in one, the Core API and the Layers API. However, today we're not looking at the Layers API, because we don't have time — we're going with the Core API only, but it's good to know the Layers API exists. What I'm going to show you in the Core API will be this many lines of code; in the Layers API it would be two lines or something. But I think it's good to understand how things work underneath, because otherwise you're just using some lines of code without knowing why. Neural networks — pretty boring for the rest of you, because these are exactly the same slides as this morning. They're based on biology: a neuron has dendrites going in, a body, and axons going out.
If enough electricity flows in through the dendrites, the body goes "ah!" and pushes some electricity out through the axons. We do the exact same thing here, just in code. You have a node, and the electricity going in is your data — one sample of the data you're pushing through, whatever that is. Then, for each of those edges going in, you need to know how important that piece of input is to the end result, so each edge has a weight, initially set to a random number. You push the weighted inputs through an activation function, and it outputs one number; whatever the activation function outputs, you then feed into the next neuron, and the next, and the next. There are loads of different activation functions. This is one of them, very simple — a step function: anything below zero outputs zero, anything above zero outputs one. As I'm speaking to you, I realize that in what we're covering today we're not actually going to touch an activation function — normally you start dealing with them when you use the Layers API — but it's good to know. The step function is good because it's really easy to calculate, and with machine learning you want stuff that's easy to calculate so it doesn't require too much processing power; but it's really bad because a tiny change in the input can be a huge change in the output. So there are other ones people use. This one is hyperbolic tan, which gives you a bit more non-linearity. And this is a really common one called ReLU — if you go on to the two further examples you can do when you go home, you'll see it there.
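The three activation functions on the slides each fit in a line or two of plain JavaScript. This is just to make their shapes concrete — it's not code we'll use in the workshop:

```javascript
// Binary step: cheap to compute, but a tiny input change can flip the output.
function step(x) {
  return x < 0 ? 0 : 1;
}

// Hyperbolic tangent: smooth, squashes any input into the range (-1, 1).
function tanh(x) {
  return Math.tanh(x);
}

// ReLU (rectified linear unit): cheap like step, but keeps some non-linearity
// because positive inputs pass through unchanged.
function relu(x) {
  return Math.max(0, x);
}

console.log(step(-2), step(3)); // 0 1
console.log(relu(-2), relu(3)); // 0 3
```

Notice why the step function is "bad": `step(-0.001)` and `step(0.001)` differ by a whole unit, while `relu` and `tanh` change smoothly around zero.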
Those examples will be using ReLU, so it's a good one to know — a nice compromise, because it's easy to calculate and still gives you some non-linearity. Then you pump it all together: you create loads of those neurons and join them up. You're not going to do that today — it's too complicated; you're going to do one neuron, and you'll be like, "oh my god, one neuron is so hard." But if you wanted to, you could connect a whole bunch of them together like this, and with the Core API you would literally be creating a neuron, a neuron, a neuron, a neuron. With the Layers API you just say "a layer with three, a layer with two, a layer with three," and it figures the rest out for you. That's what you're basically doing: you feed in some input data — whatever you're trying to train on — it multiplies everything out and gives you a number. This slide was supposed to be my facial-features example from this morning; I don't know why the faces aren't on it. But anyway: you pump the input in, it gives you a number. Say we know this input should have produced an eight, and it gave us a three — we know it's wrong by five. Of course it's going to be wrong, because the weights were initialized randomly. Then you do this thing called back-propagation, or tuning — essentially optimization. You're trying to change those numbers so that the next time you pass this data through, it gives you an eight. That's what training a neural network means: we're just tuning those numbers, using a loop and TensorFlow, so that the next time the data goes through, it gives us an eight.
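That tuning loop — run the data through, measure how wrong the output is, nudge the weights, repeat — can be sketched for a single neuron in plain JavaScript. This is the idea only; in the workshop TensorFlow.js works out the nudges for us. The inputs, target, and learning rate here are made up for illustration:

```javascript
// One neuron: two inputs, two weights, a bias, no activation function
// (the workshop examples skip activations too).
let w1 = Math.random();
let w2 = Math.random();
let b = Math.random();

const x1 = 1, x2 = 2; // one sample of input data
const target = 8;     // the answer we know this sample should produce
const learningRate = 0.05;

function predict() {
  return w1 * x1 + w2 * x2 + b;
}

// "Back-propagation" by hand: the derivative of the squared error
// (prediction - target)^2 with respect to each weight tells us which
// way to nudge it. This is the step TensorFlow.js automates.
for (let i = 0; i < 100; i++) {
  const error = predict() - target;
  w1 -= learningRate * 2 * error * x1;
  w2 -= learningRate * 2 * error * x2;
  b  -= learningRate * 2 * error;
}

console.log(predict()); // ≈ 8
```

Start it with random weights and it's wrong; after the loop, passing the same data through gives (almost exactly) the eight we wanted.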
And we'll have tuned those numbers to some values. That's TensorFlow, that's TensorFlow.js, and that's a really high-level overview of a neural network. So, let's get to the very first example. Once you've trained up one of those networks, you can save it — I'll show you. If you go to tensorflow.org/js/models — ah, it's changed recently — this is where you can find a whole bunch of the pre-made models that are available, and the list is growing. The one we're going to use is MobileNet, but you can see there's a bunch; toxicity is a new one. What toxicity does is not just tell you whether a piece of text is bad or good — it tells you whether it's offensive, and in what way. So in the demo, if I give it "you are quite astonishingly stupid": it's not an identity attack, it is an insult, it's not obscene. You can have text flagged as an identity attack — an attack on what somebody is — so it's an interesting use case, this one. I think it's the latest one they've got. You can try finding more models yourself; these are the very nicely packaged ones, and they're a really great way to get started. You're JavaScript developers here, right? We like npm installing stuff and then just getting on with it, so this is a good way of playing around. There are two really fun ones here, actually — PoseNet is a very fun one.
I wonder if I can start the demo myself... actually, I'm going to be rude and get a volunteer. Go and stand up so you can move about. That's pretty cool, right? For ages I've wanted to make a flossing game — see if it can count your flosses. It can actually track multiple people, so I've often thought you could have two people in a flossing game. One of you make it! Another really fun one that I've only just started playing with is speech. It can actually recognize speech — they've got a model here which has been trained to recognize the numbers zero to nine, plus up, down, stop, yes, and no. It can recognize just those words, right in the browser, using the Web Audio API. That's pretty cool. Anyway, we're going to be using the MobileNet one today to play around with. So — where are my slides for MobileNet? Hopefully you've all got the repo cloned by now. We'll learn how to load and use a pre-trained model, and where to find pre-trained models. Underneath, a lot of this stuff is just tf.loadModel, which loads those models for you. And this is MobileNet — though we're not going to use it in quite this way.
We're going to use it in a slightly different way: instead of import statements, we're going to use script tags — the way all websites should be made. All right, give me two seconds while I get set up and get my head straight. I know this exercise is going to be pretty simple for a lot of you, but trust me, you're going to remember this moment later on and be like, "I wish we were doing MobileNet again," because it gets steep pretty fast. In the folder you should see main.completed.js and main.js. Maybe some of you have already cheated and looked at the completed code — this is basically what you'll end up with, or what it should end up looking like. Let me just refresh — the canvas goes a bit wrong while I'm screen sharing, but anyway. So, at the bottom of the screen it's detecting what's in the image. This is crazy. Barbershop? Monitor? Maybe a phone? Modem — I think that's because of the computers behind me — iPod. iPod, at the bottom. Close enough. So that's basically what it's doing: this is what I was demoing earlier today, and it's what we're all going to build. All we've done here is two things. Oops — this is an old version.
I haven't updated it, and I'm not going to — don't update yours either; keep it as it is. TensorFlow.js is 1.0 now, but for this demo it will be exactly the same. So: what I'm doing is loading TensorFlow.js with a script tag at the top, and loading MobileNet with a script tag just below it. If you're doing this "properly" — divisive, aren't I — you'd use import statements and a bundler or something like that, but I'm just using script tags, and my code goes in main.js; TensorFlow.js does its thing. Let's have a look at what the mobilenet package actually looks like. If you look really closely at its source code, all it's doing underneath is providing a nice wrapper around TensorFlow.js: it calls tf.loadGraphModel and passes in a JSON file whose URL depends on the version number — whatever-this-is.json, something along those lines. So when I load this mobilenet thing, it's just some nice bits of code so you don't have to do all of that yourself; underneath it's loading a JSON file with tf.loadGraphModel. The other really interesting part is classify, because that's the function you're going to call. You pass it something, usually an image — it accepts a canvas, a video, image data, or just raw numbers — and then it calls infer: once the model is loaded, you give it some new data and infer gives you the response back.
But infer just gives you really, really raw numbers back — not something you can immediately understand. So the package has some nice helper functions to turn those raw numbers into something meaningful, i.e. a string that describes what's in the image. It calls a "top-k classes" helper, which loads up the interesting file here: the ImageNet classes. Because at the end of the day, all the model returns is a number, and this file tells you what that number means. These are all the thousand things MobileNet can understand. That's it. Toilet tissue, toilet paper. A bolete — we all know what a bolete is. An agaric. I don't know what this one is — a gyromitra? Stinkhorn. Anyway — a scuba diver. Wow. Those are the only things MobileNet knows how to detect, which is why it's not so good — but it's small enough to actually be usable on a mobile. So that's basically what it's doing. Let me get Visual Studio Code up — oh, the icon's changed. Did that happen a while ago? A few days ago? Man, I've been on a plane forever; I don't know this stuff anymore. All right: you should see something like this in your main.js file, and there are hints there for step one. What do we need to do? We need to load MobileNet before starting the camera. So all you need to do is load the model — mobilenet is already in your namespace because you loaded it with a script tag; we didn't have to do anything else.
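To make "raw numbers in, labels out" concrete, here is the idea behind that top-k helper in plain JavaScript. The three-entry class list is made up, standing in for the real thousand-entry ImageNet file:

```javascript
// A stand-in for the ImageNet classes file: index -> human-readable label.
// (The real file has 1000 entries; these three are just for illustration.)
const CLASSES = { 0: "scuba diver", 1: "agaric", 2: "toilet tissue" };

// The model's raw output is one score per class index; higher score means
// the model thinks that class is more likely. The helper pairs each score
// with its label, sorts, and keeps the k best.
function getTopKClasses(scores, k) {
  return scores
    .map((score, index) => ({ className: CLASSES[index], score }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

const raw = [0.1, 0.7, 0.2]; // pretend model output
console.log(getTopKClasses(raw, 2));
// [ { className: 'agaric', score: 0.7 }, { className: 'toilet tissue', score: 0.2 } ]
```

That's all the "meaning" there is: the model hands back a vector of scores, and a lookup table turns the winning indices into strings.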
Maybe we can turn that screen so the people behind can see it too — try it. Cool, awesome. You good? Excellent. What load is doing underneath, as I showed you before, is finding the right JSON for whatever version number we're running and then calling loadGraphModel. That literally goes across the network and downloads quite a lot — it's actually still quite a large model; I can't remember the exact size, but it's multiple megabytes coming down onto your computer. So even MobileNet is quite large: you wouldn't just have it on your home page — don't do that, it's pretty big. Then — oh, I haven't explained what the rest of the code is doing, sorry. It's pretty simple stuff. It uses the getUserMedia API, which gets a stream from the camera. Then every second I call takeSnapshot, and all that does is grab one frame from the video stream, draw that frame into a canvas context, and then call classifyImage. So now we have the image, and — once we've loaded the model — every second we want to know what's inside it. It's pretty simple, really.
Just do: predictions = await model.classify(...). That's it. That's it! I know you're all going to say "simple as that" — yes, it is easy, but you're going to remember this moment, you're going to dream about this moment, you're going to wish this moment would come back to you. And apparently I can't spell "predictions." For the next step you can do what you want — classify just returns an array, so if you want, print it to the console and see what actually comes back. I'm just going to print it somewhere on the page. If you use one of the world's web frameworks, there's probably a better way of doing this, but if you're a vanilla person like me, this is how it works. I'll paste this snippet into the Slack channel just to save everybody a headache, so you can copy and paste it. And guess what — that's it. Done. You've loaded a model and you've used it. So you should all be seeing this working. Have people managed to get it working? Yeah? It is crazy. It's absolutely crazy that we can do this.
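Put together, the whole exercise is only a handful of lines. This sketch is from memory rather than the repo's exact main.js — the element IDs and the one-second interval are assumptions — but it shows every piece just discussed, and it needs a browser, since it uses the camera:

```javascript
// index.html pulls in the two script tags, which is all the "setup" there is:
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/mobilenet"></script>

const video = document.getElementById('video');   // assumed element IDs
const canvas = document.getElementById('canvas');
const output = document.getElementById('output');

async function main() {
  // 1. Load the model (this downloads several megabytes of weights).
  const model = await mobilenet.load();

  // 2. Start the camera and feed the stream into the <video> element.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;

  // 3. Every second, grab one frame, draw it to the canvas, and classify it.
  setInterval(async () => {
    canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
    const predictions = await model.classify(canvas); // returns an array
    output.textContent = predictions
      .map(p => `${p.className}: ${p.probability.toFixed(3)}`)
      .join('\n');
  }, 1000);
}

main();
```

Each entry in the returned array has a `className` (one of the thousand ImageNet labels) and a `probability`, which is why you can dump it straight onto the page.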
We live in a world full of surprises and technological miracles on a daily basis, but just take a second and appreciate what's happening right now: a browser is detecting what it's seeing. Epic — it's the only word you could use to describe it. It does come up with some weird ones, though. It just said I was a microphone — I think that was my beard. And a $5 bill is apparently a toilet seat. I didn't say it was any good, just that it does stuff. What's it saying over here? Bathing cap, hairspray, water bowl — I think the background is part of the problem; you want a good clean background. As I hinted earlier today, MobileNet has its uses — I use it a lot for teaching and training — but its practical usefulness is iffy, because it's really not that accurate. If you really are looking for one of the things in that ImageNet list — a scuba diver, say — then maybe it's useful for you. Oftentimes it's not, and you'd be better off using an API. The one thing it is really useful for is something called transfer learning, which I'll come back to. I'll leave you to it for a second — and if you want to take pictures of what you're doing and tweet them and tag me, that would be nice. You don't have to; it's not essential. So — what are we doing next?
Give me a second while I get set up on this side and figure out where my code is... TFJS Workshops... regression... yes, excellent. Right. So, to the question about backgrounds: if you're on a clean white background it'll be a little more accurate, but not massively — you'll still find it struggles. In terms of practical use cases you'd ship with MobileNet in the real world: no, it's not good enough for a lot of that stuff. It is good enough for certain things, but it really depends on your use case. I think the real power of something like MobileNet — and this is really about JavaScript and machine learning broadly — is this: you're not going to be training incredibly complex models in JavaScript in the browser. You're just not. JavaScript is single-threaded; it's not the best language for heavy compute, and a lot of these machine learning models need hundreds of servers running at them for weeks. But what is a very interesting use case is something called transfer learning. With transfer learning, you take an existing model — an existing brain — and this brain, MobileNet, has been trained to recognize one of a thousand things. But during that journey of training — remember the layers? — each layer and each node learned something. There are actually ways of interrogating a model to see what a given node really understands, what it's seeing as part of the image.
I'll have to dig the slides up later, but when people examine these networks, they can see that a node near the start is detecting just a curve like that, or another node has learned to detect a corner. In the next layer, maybe one node has figured out how to tell whether something's an ear. And at the very end, you decide: this is a face, or a cat, or a dog, or something like that. What you can do is lobotomize that last layer and retrain it to learn something else. Because if you've got a model that already knows how to detect edges, corners, maybe colors — it's learned something — you can retrain it to detect other things. That's what's called transfer learning, and training just that last layer doesn't require anywhere near as much computation as training a whole model from scratch. So it's possible to do that stuff in JavaScript, sometimes even in the browser. That's where I think MobileNet is useful: OK, it's not detecting very clever things, but the lower levels of the model are detecting something. It's like a baby — it's learned something, and you just need to train the last mile. Another way I describe it: imagine teaching someone JavaScript from scratch, versus taking a Python developer and teaching them JavaScript. The second one is not going to be as much effort — and that's transfer learning. That's where it starts being useful. If you want to do really powerful stuff straight away, you can use APIs; if you want to train stuff up yourself, transfer learning is where you go.
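As a sketch of what that looks like with TensorFlow.js: the `@tensorflow-models/knn-classifier` package pairs with MobileNet's `infer` method, which can return the activations of a late layer (an "embedding") instead of the thousand class scores. The element selectors and class names below are assumptions, and this needs a browser page that has loaded both packages:

```javascript
// Transfer learning sketch: reuse MobileNet's learned features ("edges,
// corners, maybe colors") and train only the last mile with a KNN classifier.
async function trainAndPredict() {
  const model = await mobilenet.load();
  const classifier = knnClassifier.create();

  // Teach it a class MobileNet never knew, from a few example images.
  // infer(img, true) returns the embedding rather than the class scores.
  for (const img of document.querySelectorAll('.my-cat-photos img')) {
    classifier.addExample(model.infer(img, true), 'my-cat');
  }
  for (const img of document.querySelectorAll('.other-photos img')) {
    classifier.addExample(model.infer(img, true), 'not-my-cat');
  }

  // Classify a new image by nearest neighbours in embedding space —
  // no retraining of MobileNet itself ever happens.
  const newImage = document.getElementById('test-photo'); // assumed element
  const result = await classifier.predictClass(model.infer(newImage, true));
  console.log(result.label, result.confidences);
}

trainAndPredict();
```

Because only the tiny KNN "last layer" learns, this runs comfortably in a browser, which is exactly the point being made above.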
Well, I can't say categorically you won't, because maybe there is a use case where it's accurate enough. But typically, no. Any other questions? Is it not downloading for you? Look in the network tab, because it's actually not one JSON file: that JSON file triggers the loading of some shard files. So if you look in the network tab you'll see it's many megabytes. It's not small. To get started with the regression, clone this repo. If your problem is very, very simple, maybe you can get away with training in the browser. I was talking to someone earlier who did his training in Node on the server side, because it needed more compute, but it's still doable in JavaScript. I think it's good to play around with this stuff in the browser, because it's easy to experiment with, and then if you need a bit more compute and want to stay in JavaScript, you can use the Node.js version of TensorFlow. And if you really want to go all out, you can rewrite it in something like Python, which is what's generally used for training. In the browser it's mostly just for inference. It's difficult to know what people are really using it for, though. Has everybody cloned it? Who hasn't cloned it? We're all good. Excellent. So, regression: there are lots of different types of regression. We're looking at linear regression, which you might remember. You've got a whole bunch of dots, points, and you want to find the best-fit line between them: what is the linear relationship between these two sets of data?
Besides linear regression there's polynomial regression, which is a similar thing except the relationship is nonlinear. There's also logistic regression, which is more to do with probabilities; we're not going to go into that. So for instance, here's a famous one: cricket chirps per minute versus ambient temperature. That actually has a linear relationship. Up until they die, I suppose, but until then, the higher the temperature, the more chirps per minute. So extrapolating this data can be a problem sometimes; you need to take it with a pinch of salt. You need to understand your data, and understand where it might have a linear relationship and where it might not. Remember this? Back at school. Machine learning is maths. There's just no way around it. I'm no good at maths, so it's a real struggle for me, and I'm very lucky that I've got friends who know this space and are willing to spend time helping me figure stuff out. But it's maths, and you do have to get to know it quite well. Anyway, this is the equation that describes a line: y, the position on the y-axis, is equal to m, some variable, times x, plus b. If x is 0, then y equals b, so b is 5 here; that's the constant. Remember this stuff? That's basically it, and that's what we want to do here: you've got a whole bunch of data points, and you want to know the linear relationship between them. And linear regression is a really good way to start understanding how to do neural networks, just because of this: this is a neuron, the same thing we had before. We have x.
Imagine the x and y coordinates of your point are your input value and your expected value: your feature and your label. So it's like having labeled data, training sets you can train on. And then there's your m and b. What we really want is something that, given a set of points (your data set), figures out the best-fit line; it's going to tune values for m and b. That's what you're going to be doing, and that's why linear regression is a good first step for understanding how to build neural networks: it teaches you just one neuron. It's hard enough to get this one neuron right, but once you have, you can apply it to all of them and join them together. Oh, I was actually supposed to show you a demo. This is basically what we're going to build today: once you add points, it figures out what that line is, and figuring out that line means figuring out a value for m and a value for b, automatically, from your data set. That's it. I think I skipped a slide; let me go through this again so I don't skip anything. So let's say we gave it a value of x = 1. What does it figure out? y = mx + b. If x is 1, m is 4, and b is 12, then mx is 1 times 4, which is 4, plus b, which is 12, so y equals 16. That's what this neuron would give us. However, we know, because it's our point that we created, that when x equals 1, y should equal 13. So we know the values here are wrong. The optimizer is going to run and retune those values to be better.
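The single-neuron idea above can be sketched in a few lines of plain JavaScript (the names `predict`, `m`, and `b` are illustrative, not from the workshop repo):

```javascript
// A single "neuron" for linear regression: y = m*x + b.
function predict(m, b, x) {
  return m * x + b;
}

// With the guesses from the slide, m = 4 and b = 12, an input of x = 1 gives 16...
const guess = predict(4, 12, 1); // 16
// ...but our labeled data point says x = 1 should give y = 13,
// so the optimizer's job is to nudge m and b to shrink this gap:
const error = 13 - guess; // -3
```

The whole training process is just repeating that comparison and adjusting m and b until the error across all points is as small as possible.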
And this thing here is called a loss function. This is the most important part: you need a function that tells you how wrong you are. That's what it's all about. Great. OK. So hopefully you have a folder that looks a little bit like this. No main.js? Create one. That's where you're going to type stuff in, because all these other files are just different bits of code you can refer to. We're going to go through them, one, two, three, et cetera, and at the end I'm going to show you linear regression, and then you're going to figure out polynomial regression all by yourself. All right? Oh, yeah: I checked the TensorFlow.js library into the source code. That's how I code; I check dependent libraries into GitHub. No one thought of that before; I thought of it. Obviously, don't do that in production; use the official version. I just checked it in to make things simple. We're also using a library called p5.js. I'll go through it very, very briefly so you can see how it works. It's a visualization library. Who here is a React developer? Vue? Angular? p5.js is not like that at all. Do not use p5.js in production; you're going to see in a second why you should never, ever use it in production. It's just an interesting way to draw stuff, and it's used a lot for teaching younger people. And I just didn't want to get stuck in an argument about web frameworks; I didn't want to have that discussion, so I use p5.js. And then we've got main.js, where we're going to stick all of our code as we code along.
Today, as I go through this, I'm going to experiment a little. I had some feedback from the last time I gave this workshop, and I'm going to try going through these things in a different order. Fingers crossed this will work, but it should hopefully be a lot more engaging for you all as you go through. So right now, in the index file, make sure you've got a main.js and that it exists. Let me actually rename mine and empty this one out. You all did the MobileNet one, so I presume you all know how to serve the page; I'm using the Live Server plugin, but whatever server you're using locally, go for it. Nothing should show up yet, because your main.js is blank. Am I missing explaining something? No, I think I'm good. All right, so we're going to go through p5.js super quickly. p5.js is amazing: it uses globals everywhere, so everything you've been taught about globals being bad is wrong. Globals are amazing. Basically, if two functions called setup and draw are available, that's what p5 will call; it expects functions with those specific names. setup is called before anything else and helps you set things up. And draw is called... well, let's figure out when draw is called. Let's just see. If you want, copy and paste from the setup/draw example, put it in your main.js, save, and open it up in the browser. Boom. You see that? This is probably why you shouldn't use p5.js in production: it just draws as fast as possible, constantly.
I think with Chrome, when you're not looking at the tab, it doesn't draw... oh no, it still draws. That's basically what happens. Drawing logic goes in draw; setup logic goes in setup. The main thing you want to do in setup is create a canvas area, which is where you'll draw stuff. So go to 2.canvas.color, grab the createCanvas line from the setup there, and copy it into your main. createCanvas: again, another wonderful global. And it's given windowWidth and windowHeight, more globals: the width and height of the window. Why isn't all coding like this? It's so much fun. So this literally creates an HTML5 canvas the full size of the screen, and once you've got a canvas, you can do fun stuff. Set the background, say. I don't know what color that is; maybe it's gray. It's red, green, and blue values, that's all it is. Let's just see it working first. There you go. What is that doing right now? Every time draw is called, it's painting the whole thing, every single pixel, that color, drawing it from scratch, every frame. Does background have to go in draw? I can't remember; try it out. You do need it in draw, because if you draw a line, it doesn't disappear; you've got to paint over everything and draw it all again each frame. It's fun, though. This is how coding should be. Or you can do red, something like that. Nice. So you can set a background. Someone who knows p5.js: noLoop? With a capital L? And it stops looping. And then loop() to start again? Let's just see. Yay! This is why it's so much fun.
It just works, right? OK, don't use noLoop; let it loop. And I think this is the point where I'm going to skip ahead. I'll go over 3, 4, and 5 very fast; you can have a look at them later. This is how you draw text. You define a fill color; it's all sequential. I know with JavaScript it's usually all about callbacks and stuff, but this stuff is just sequential: what is the color of the pen? What font do I want to use from this point onwards? What text size? Then draw the text "Hello World" at this position, x and y. Is y = 0 at the bottom or the top? I can't remember; let's figure it out. I'll copy and paste that into my draw function. There you go: Hello World. Right, so 3, 4, 5: that's how you draw text. You can also draw shapes. If you're used to drawing tools: you can set the stroke weight and the stroke color, and draw an ellipse at an x and y with a given size. Then you can say noStroke from that point on. That's what I'm going to do here as well: I'll drop it in and draw a circle in the center of the screen. Theoretically the center, because it's windowWidth divided by 2 and windowHeight divided by 2. Let's see. Boom: a very faint circle. Why is it that color? Because I haven't set the background. Let's set the background. I just want gray. G-R-E-Y, or G-R-A-Y? Are they both valid?
Are you serious? Are they both valid? Was that the same shade? Have I just learned something? I feel I should have known this by now. OK. Excellent. Let's set the stroke to red or something like that. Yep. So you can draw circles. Why is the center white? I think that's the fill; the fill must be white by default. So if I set fill to blue... there. Got the makings of my next website right here: the color selection is spot on. The next thing I was going to skip over was five: lines. Did I go over lines? I didn't. Lines are very similar: you call line with x1, y1, x2, y2, the start point and the end point. In fact, let's just draw a line in the middle; let's not go nuts. So it should go from about here down to there, I think. Note that x = 0, y = 0 is the top left, which is really annoying when you're drawing graphs and lines, because you want y = 0 to be at the bottom. But anyway, that draws a line. Boom. OK, and now I'm going to skip to seven. Cool things you can do. Again, loving the globals; you've got to embrace the globals. They're great, they're fantastic. If a function called mouseClicked exists, guess when it's called? Every time the mouse is clicked. Why isn't life this simple? mouseX and mouseY are helpfully set to the x and y position of the mouse at the moment it was clicked. Then you can draw a circle at that position with radius 10. Let's just copy and paste that.
In fact, I'm going to get rid of my draw function just to clear it out, and add a mouseClicked. If you want, you can copy and paste it from 7.mouseclicked. Right, then that's it. Now if I go here: yes, you should be able to do this all over the screen. Pretty cool, hey? Now you're starting to like p5.js, aren't you? Your mind's trying to think of ways you can release this in production. You can't. Don't. There you go. I'll give you all a second to try that while I frantically remember what the next thing is. What I want you to do now, and I'll give you a little bit of time to focus on this: create two variables. Let's use let: let xs and let ys, as arrays. Every time the mouse is clicked, you've got this mouseX and mouseY; just start collecting them. Collect all the x values in the xs array and all the y values in the ys array. Just do that. A gentle introduction. Hopefully it wasn't too difficult; this is all you should have needed to do: xs.push(mouseX), ys.push(mouseY). Let's disable the drawing for now. So that's it: that's creating the two arrays, one with all the x values and one with all the y values. Is it only printing one? No, there we go, it's printing two. One, two. Right, OK.
xs and ys. However, you can see it's actually storing the real x and y positions on the screen, and my screen might be different from your screen. So a lot of the time when we're doing this stuff with machine learning, we want to normalize our data, and you usually want to normalize it to between zero and one. That's a really good habit: keep everything normalized between zero and one. So if you know the width of your screen, and you clicked about there, that's probably about 75%, or 0.75. p5.js has a nice function that does this for you, called map. I'm just going to copy and paste the whole thing to the bottom. What map does is map a value from one range to another range. So this is saying: we've got the number 500, which comes from a range of zero to 1,000, and I want to map it to a range of zero to one. It's going to normalize it. If you look at what's printed out, it prints 0.5. What we want to do is use the window width, so that depending on where you clicked, the far edge is one and the near edge is zero. This is just a standard thing you do with your data sets in machine learning: normalize everything from zero to one. So, given that this is how map works, normalize the xs and the ys so they store values from zero to one, instead of 100, 150, something like that. Got me? The helpful variables you want are windowWidth and windowHeight; those are the two clues I'm giving you: map, windowWidth, windowHeight. If you want to see other examples of map working, look in 6.map.js; there are lots of examples there to play around with.
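To make the exercise concrete, here's a plain-JavaScript re-implementation of what p5.js's map() global does (the `windowWidth` constant and `normX` helper here are illustrative stand-ins for the p5.js global and the workshop's helper):

```javascript
// Rescale a value from one range to another, like p5.js's map().
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + ((value - inMin) / (inMax - inMin)) * (outMax - outMin);
}

// Normalizing a click at x = 500 on a 1000px-wide window:
map(500, 0, 1000, 0, 1); // 0.5

// A hypothetical normX helper, in the spirit of the workshop's functions file:
const windowWidth = 1000; // stand-in for the p5.js global
const normX = (x) => map(x, 0, windowWidth, 0, 1);
```

So a click at x = 750 comes out as 0.75, regardless of how big anyone's screen actually is.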
That's basically what we want to do here; I'll give you a few minutes. We haven't touched TensorFlow yet, have we? I know, I know. We're getting there. As I said previously, you'll want to remember these moments when you were just drawing circles on a page; you'll remember them fondly. I'll show you the answer. What I'm really doing is showing you enough of the code that later on, when you need to start changing some of this yourself, you know what you're looking at, both in terms of drawing stuff and in terms of getting data into the right shape. So to normalize the xs, we push the mapped mouseX: the input is mouseX, the maximum for the xs is the width of the window, and we map from 0..windowWidth to 0..1. The same goes for the ys with mouseY, but this time it's windowHeight. OK? That's the usefulness of the map function. So now, let me stop it printing the other stuff... OK, here we go. If I click at the top left, it should print close to 0, 0. If I click at the far right, x should be near 1. And if I click bottom right, it should be almost 1, 1: there, 0.99, 0.974. So we've just normalized everything from 0 to 1. All right? Now, if you look in the functions... collectpoints, which is what we should end up with. You can see I've actually got some functions there, and all they are is the map function; I've just created helpful versions of it that do essentially the same thing. So norm just saves you from having to write all of that out yourself.
I'll just paste them at the top, like that. All these do is call map, providing the max; normX does the same thing but passes windowWidth for you. So with these helper functions, you can replace what we wrote with just normX and normY. All those functions at the top are doing is what you just did, in fewer characters. Let's see if that works. Yes! I'm so good at this coding stuff; I should be paid for it. If you actually look at collectpoints, that's all I'm doing; we should basically be here. Oh, that's a clever way of doing it. There you go. So now, let's look at drawing the line. I explained line very briefly before: line takes x1, y1, x2, y2. And before, I showed you the equation of a line, remember? y = mx + b. For some reason, in the code, I then changed it to y = ax + c; I must have had a reason. Oh, and this shouldn't be const. So all I'm going to do, at the top of my file here, is add another couple of lets: let a, let c. Just two values. Then I'm going to create a function that gives me a value of y for a given value of x: getY, given some value of x. This is just mx + b, except I used ax + c; that's all it is. So, given this new information, and given that you know how to draw a line, can you draw this line? Maybe it looks like that, or like that; you don't know. Draw this line on the screen. Remember, x = 0 is on the left and x = windowWidth is on the other side. That gives you x1 and x2, and the function gives you the ys.
Then you should be able to join the two dots together. You got me? Try doing that: try using this function to draw a line on the screen. Any line. Remember, you have to use the draw function. You're very close to starting to write some TensorFlow code, I promise. Very close. I'll give you another minute or so. All right. I'm going to draw a line now on the screen; it's going to be legendary. So what do we want to do? Let's get an x1. Where's x1 going to be? The farthest left of the screen, so it's going to be zero. Let's get a y1: what is the y given that value of x? We've got a nice function above called getY, so let's just call getY(x1). Now we've got an x1 and a y1. Let's get an x2: the farthest right of the screen, windowWidth. And const y2 = getY(x2). Boom: now we have x1, y1, x2, y2. Then let's draw the line: give it a stroke, and a stroke weight to give it some weightiness, thickness, and then line(x1, y1, x2, y2). That should hopefully draw... stroke, stroke red... what have I done wrong? Ah! I've got two draw functions. The beauty of globals: one overrides the other. There you go. I've drawn the line. And if you want to change the line being drawn, you can just change the values here: change a to 0.5 and it changes the line that gets drawn. Or minus 2. And you can change c to 300. So now we've created something that draws a line based on that equation we had before. Let me get back to where we were: 1 and 150, something like that. And we've also got this thing at the bottom; I still want to draw the circles.
Give me a second. Ellipse, with a stroke weight of 3, in black. OK. So now I've got this line, and we've also got these data points. Remember, we need to figure out the loss: how wrong is this line? If all my circles were here, right along the line, would that be a big loss number or a small loss number? Small. Right. If all my points were over here, but it drew the line over there, would that be a large or a small loss? Large. Exactly. That's what we need to calculate: a way of knowing how wrong it is. A common one, which you'll see all the time in machine learning, is mean squared error. You could get away for ages just using mean squared error, because it's a good general-purpose loss function for machine learning. What it is: given some value of x, you have the actual value from your data, and the value the equation would give you. You figure out the distance between them. Then, mean squared: you square that distance, and you take the mean over all the points, so you end up with one number. (This is where the jet lag really kicks in; if anyone wants to correct me, correct me.) The reason you square it is that sometimes you might get a negative difference, and you don't care about the sign, the direction in which it's wrong. That's why you square things. And that's basically what we want to do now: calculate the loss. It's super important to understand your loss, because remember everything I showed you: TensorFlow needs to know how wrong something is in order to tune anything.
So that's what we're going to do now. I was going to write this out, but it would be dangerous, so I'm going to copy and paste, sorry, and go through it line by line. It's the same stuff you did before: mouse clicks, storing everything in xs and ys, and then you call loss. Remember, our a and c are fixed here right now. What happens when you call loss? For each of the xs, you take the x and y, pass the x through getY to get the predicted y value, and take the difference, which might be negative. You square it, sum it all together, and divide by the number of values. That gives you the loss, and you print it out. And in the rest of the code here, if you copy and paste from calculateloss, you'll see I'm actually drawing that loss value on the screen as text. That's the only real difference between this and your other code from before: it's just calculating a loss value. So if I load this up in the browser, this is what you see. There are no xs and ys yet because I haven't added anything. If I add a point close to the line, fingers crossed... yes, the loss is low. Very low. If I start adding points far away, the loss increases. And if I add more data points closer to the line, the loss reduces. That's what's going on. So what? Imagine this is our actual data, something like this. The line's wrong. But now you have a number that describes exactly how wrong it is.
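The loss calculation just described can be sketched in plain JavaScript (the function names `getY` and `loss` follow the workshop's naming, but this is an illustrative sketch, not the exact workshop code):

```javascript
// The line's prediction: y = a*x + c.
function getY(a, c, x) {
  return a * x + c;
}

// Mean squared error between the actual ys and the line's predictions.
function loss(a, c, xs, ys) {
  let sum = 0;
  for (let i = 0; i < xs.length; i++) {
    const diff = ys[i] - getY(a, c, xs[i]); // may be negative...
    sum += diff * diff;                     // ...so squaring removes the sign
  }
  return sum / xs.length; // the mean: one number describing how wrong we are
}

// Points exactly on the line y = 2x + 1 give zero loss:
loss(2, 1, [0, 1, 2], [1, 3, 5]); // 0
// Move one point off the line and the loss grows:
loss(2, 1, [0, 1, 2], [1, 3, 8]); // (8 - 5)^2 / 3 = 3
```

That single number is exactly what clicking points near or far from the drawn line makes go down or up.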
And TensorFlow then uses that loss to figure out how to adjust the a and the c, because all you need to do is find the right values of a and c so the line fits all those dots. That's all we're doing here: tuning the values of a and c. That's the magic of TensorFlow; it's not really magic. Right now you're thinking, is this it? This is it. This is all machine learning is. OK. We're going to hit TensorFlow. But before we hit TensorFlow, we have to hit maths: matrices. Dig deep. (I wanted to get something... not food. Coffee. I think I wanted a coffee.) OK, matrix maths. Simple two-dimensional matrices. Remember this stuff? When you're adding matrices together, as long as they're the same size and shape, you add the corresponding elements together and that gives you the output matrix. I'll skip broadcasting. Transposing takes a matrix and flips it on its side. Subtraction is the same as addition, element-wise: you take the corresponding element away. I didn't draw this one properly, but you can figure it out. Division is the same idea: that element divided by that element, element-wise. Multiplying or dividing by a constant applies to every element. And then you can also do what's called element-wise multiplication: 1 times 2 is 2, 2 times 3 is 6, 3 times 4 is 12, 4 times 5 is 20. I'm going to quickly go through these, and then you're going to go through some examples in TensorFlow.
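To pin down what these operations actually compute, here's a plain-JavaScript sketch of element-wise addition and multiplication, scaling by a constant, transposing, and the dot product (covered next in the talk). These helper names are illustrative; TensorFlow.js provides all of this for you:

```javascript
// Element-wise operations on 2D arrays (matrices of the same shape).
const add = (A, B) => A.map((row, i) => row.map((v, j) => v + B[i][j]));
const mul = (A, B) => A.map((row, i) => row.map((v, j) => v * B[i][j]));

// Multiplying by a constant applies to every element.
const scale = (A, k) => A.map((row) => row.map((v) => v * k));

// Transposing flips a matrix on its side: rows become columns.
const transpose = (A) => A[0].map((_, j) => A.map((row) => row[j]));

// Dot product (matrix multiply): "zip" each row of A with each column of B,
// multiplying pairs and summing them up.
const matmul = (A, B) =>
  A.map((row) =>
    B[0].map((_, j) => row.reduce((sum, v, k) => sum + v * B[k][j], 0)));

add([[1, 2], [3, 4]], [[5, 6], [7, 8]]); // [[6, 8], [10, 12]]
mul([[1, 2], [3, 4]], [[2, 3], [4, 5]]); // [[2, 6], [12, 20]]
matmul([[1, 2]], [[3], [4]]);            // [[1*3 + 2*4]] = [[11]]
```

The element-wise multiply matches the slide's example (1×2, 2×3 = 6, 3×4 = 12, 4×5 = 20), while matmul is the row-times-column zipping described next.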
I want you to have this in your head as you go through the examples in TensorFlow. We also have this thing called a dot product, which lets you do a kind of zipping up; maybe that's the best way of describing it. If you've got two matrices of complementary shapes, you zip through: every element of this row multiplied by the corresponding element of that column, all added up together, equals that element of the result. It goes that way: dot product. What I'm going to show you now is how to do all of those in TensorFlow, because you're going to need to know it. So open up 10a. In this file, you're going to see things called quiz, and under each quiz there's an answer. Don't scroll to the answer; give it a go first. Don't go mad, just try it out. I'm going to stop copying these into my main; in fact, I'll just get rid of all of this. So: this is how you create a one-dimensional array, a 1D tensor, in TensorFlow. Once you load TensorFlow, a tensor is just arrays. You can have a scalar, which is one value. A 1D tensor is what you would normally just call an array. A 2D, second-rank tensor is a two-dimensional array. Then you get 3-dimensional, 4-dimensional, 5-, 6-dimensional; you can go crazy deep in terms of dimensions, and it gets really hard to visualize. That's why rank and shape are useful. So if I print this... why is it not printing the rank and the shape? Oh, it is printing, just above. The rank is 1 because it's 1D, and the shape is [3] because it's just a 1D array with a length of 3.
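The rank and shape that TensorFlow just reported can be computed from a plain nested array too. This is an illustrative sketch of what rank and shape mean (assuming the nested arrays are rectangular), not TensorFlow.js code:

```javascript
// Shape: the length at each level of nesting, read from the outside in.
function shape(arr) {
  const s = [];
  while (Array.isArray(arr)) {
    s.push(arr.length);
    arr = arr[0];
  }
  return s;
}

// Rank: how deeply the arrays nest, i.e. how many entries the shape has.
const rank = (arr) => shape(arr).length;

shape([1, 2, 3]);                    // [3]    -> rank 1
shape([[1, 2, 3, 4], [1, 2, 3, 4]]); // [2, 4] -> rank 2
```

So a plain array is rank 1, an array of arrays is rank 2, and so on up through the dimensions that get hard to visualize.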
So it tells you the shape. And then you can do a.print(). You can't just console.log a tensor and expect something readable — you do a.print(), and that prints out a visual look at the tensor. If you're printing something crazy long, a.print() is nice because it only prints some of it and doesn't kill your console. If you want to be explicit, you can use tf.tensor1d instead, and it will give you an error if you give it anything other than 1D data. If you don't specify the rank, it will try to figure it out from the data you give it. You can also give it a different shape. So now I'm giving it a 2D, two-dimensional array — 1, 2, 3, 4 arranged two by two. This is rank 2 because it's two-dimensional, and the shape is 2 by 2, and when you print it out it prints out like that. This helps: we're only dealing with two dimensions right now, but when you're dealing with 3, 4, and 5 dimensions, you need to know this stuff. Another way you can do it is to give it a flat array — 1, 2, 3, 4, just as a 1D array — and then say: actually, the real shape is 2 by 2. That's very useful when you're loading up tons of data that's been stored as one flat CSV file. You just load it up and go: here's the data, and by the way, this is the actual n-dimensional shape — no need to reformat it into nested arrays yourself before you hand it over. Now we have a quiz. Yes, we have a quiz. No scrolling to the answers. Make a rank 1 tensor of 4, 5, 6.
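To make the "flat data plus a shape" idea concrete, here's a plain-JavaScript sketch of what that reshape means. In TensorFlow.js itself, the equivalent call is `tf.tensor([1, 2, 3, 4], [2, 2])`:

```javascript
// Reshape a flat array into a 2D [rows, cols] structure — conceptually
// what passing a shape argument to tf.tensor does for you.
function reshape2d(flat, [rows, cols]) {
  if (flat.length !== rows * cols) throw new Error("shape mismatch");
  const out = [];
  for (let i = 0; i < rows; i++) {
    out.push(flat.slice(i * cols, (i + 1) * cols));
  }
  return out;
}

console.log(reshape2d([1, 2, 3, 4], [2, 2])); // [[1, 2], [3, 4]]
```

Same data either way — the shape is just metadata telling TensorFlow how to interpret the flat buffer, which is why loading a flat CSV and handing over a shape is so convenient.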
Remember, a rank 1 tensor is just a 1D array: 4, 5, 6. Then a rank 2 tensor — look how that one's laid out: a column of 4, 5, 6. Then a rank 3 tensor, three-dimensional: 4, 5, 6 going that way. Look at those brackets. That's insane. Right, try that out. I'll give you a minute, then we'll go through it together. All right, let's go. So the question was: make a rank 1 tensor of 4, 5, 6. There are a couple of different ways you can do that. You can just type it out: tf.tensor([4, 5, 6]). Let's print it out as well: 4, 5, 6. Simple as that. If you wanted to, you could use tf.tensor1d, which adds a bit of checking — it's not really type checking, but it's close. Or, if you really want to be completely specific, you can also give it a shape, and the shape is [3]. For the rank 2 tensor, you could literally just give it the nested arrays and let it figure the rest out. Fingers crossed... yes, that's what I wanted: a column of 4, 5, 6. You can see the shape there is [3, 1]: 3 rows of 1 value in each row. Or, if you wanted, you could give it the flat values 4, 5, 6 with the shape [3, 1]. Same difference. Then the rank 3 one — what even is this? What does it look like? How can you visualise this? It's just another dimension; I can't even visualise it. It's insane. But basically, this is it. One aside while you try it: my daughter was doing homework the other day. She's 13 and just started computer science at school, and she called me over, stuck.
I looked at it and she'd missed out some brackets. I said, yeah, you missed out some brackets. She put the brackets in, and it worked. My wife's a programmer as well, and I told her: our daughter just got her first brackets error. Imagine that — the first time. For the rest of her life she's going to get a million of those, and that was the first one. Anyway. So that's what a rank 3 tensor looks like. But still, how do you visualise it? What does it even mean? This is why the shape becomes even more useful now. Are those even columns? I don't know what to call them — layers? Three layers, three z-values. That's what the shape says: three of something, where each of those things has one of something, and each of those has one of something. You have to start thinking in this way. Remember MobileNet and its activations? Yeah, that was fun times. Okay, so you can do other things too. You can transpose a matrix. What does a three-dimensional matrix look like when it's transposed? I don't know — let's find out. It looks like... that! Whoa. If we did this with a 2D tensor, it makes a little more sense; you can visualise it better. So if this was 4, 5, 6 as a row, transposing flips it to a column, and it goes either way. You can also add things together. Two tensors: this one is a two-dimensional tensor, 2 by 2, and this is just another way of describing a two-dimensional tensor. We do a.add(b) and print it out, and that's it — that's doing the element-wise addition we showed before.
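Here's the 2D transpose as a plain-JavaScript sketch — an illustrative helper, since in TensorFlow.js you'd just call `a.transpose()`:

```javascript
// Transpose a 2D matrix: rows become columns and columns become rows.
// A shape of [1, 3] (one row of three) becomes [3, 1] (three rows of one).
function transpose(m) {
  return m[0].map((_, j) => m.map(row => row[j]));
}

console.log(transpose([[4, 5, 6]]));       // [[4], [5], [6]]
console.log(transpose([[1, 2], [3, 4]]));  // [[1, 3], [2, 4]]
```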
We can also do something that's more TensorFlow-y than proper matrix maths: broadcasting. If some of the data is "missing", TensorFlow will repeat it for you automatically. Normally, when you're doing addition and a is two-dimensional with four values, the thing you add to it also has to have four values. With broadcasting, we can just give it one value and it goes: fine, I'll copy it four times. And we're going to use that later. Did that broadcast? Yes, it broadcasted. You can also do subtraction, the same way as addition: a.sub(b). You can also do division; we're going to skip that because we don't need it. You can multiply by a constant: a.mul(2), print, and there we go — multiplied. And you can multiply by another matrix element-wise. I'm going to skip the rest, because that's all you need for this particular example, but it's doing the same thing as before: 1 times 2, then 2 times 3, then 3 times 4, then 4 times 5, so the last one should be 20. Maths done. Hopefully, if I skipped the right stuff, that's all the matrix maths you need to know for the rest of it. All right. Now let's take the calculate-loss code from before, because now we're actually going to start doing some real TensorFlow stuff.
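Broadcasting itself can be sketched in plain JavaScript. The values here are my own illustration, not the ones from the demo; in TensorFlow.js, `a.add(tf.scalar(2))` does this expansion for you automatically:

```javascript
// A sketch of what broadcasting does for scalar addition: the single value
// is conceptually copied out to match the matrix's shape before adding.
function addScalar(matrix, scalar) {
  return matrix.map(row => row.map(v => v + scalar));
}

console.log(addScalar([[1, 2], [3, 4]], 2)); // [[3, 4], [5, 6]]
```

The same repeat-to-fit rule applies to subtraction, multiplication, and division by a scalar — which is why `a.mul(2)` just works without you building a matrix of 2s.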
I want to show you how to do linear regression. I'm going to go through it, and then I'm going to ask you to do polynomial regression yourself. I don't trust my typing right now — I'll mess up some brackets at some point — so I'm just going to talk through the code. This is basically the same code as we had before with the loss. You can see you've still got getY, normX, denormY, and right at the bottom the same stuff: mouseClicked, draw, drawing the points on the screen, drawing the line, drawing the loss. It's the same code as the calculate-loss version, but with a TensorFlow section at the top. What's that doing? We create these things called variables. Variables are the things that can change — the things we want TensorFlow to change, the a and the c. We initialize them with a random number; these are the weights of our node. We then create what's called an optimizer. This is the thing that tunes the values of a and c. We give it a learning rate, which tells the optimizer how large a step it should take when adjusting those numbers towards the right value — how big a jump it's going to make. So if you set 0.5, it may change each weight by up to around 0.5 on each step as it goes downhill. That might make you train faster, but at the same time you might overshoot and swing past the target. So normally you want a fairly small learning rate; that's the ideal. I think I've got 0.5 there for a reason specific to this example. If you look in the TensorFlow documentation, there are lots of different optimizers you can use.
In the API docs, under train — they've moved things around, this is the proper TensorFlow docs — there are lots of different types of optimizers you can use. The one we're using is called stochastic gradient descent, SGD: a simple one to start with. There are others there, and if you go through the other examples you can try using some of those, but for now we're just going to use stochastic gradient descent. Then we've got a predict function, which is basically getY, but for TensorFlow. With getY you could just pass in normal numbers; now we're dealing with tensors. So we do that matrix multiplication: a.mul(x), then add our c. a is our variable, and x is actually going to be our full list of x values — if you've got 100 points, you pass all 100 in one go. It does a matrix multiplication, so you're doing all those calculations at once inside the predict function, and what it gives you back is an array of results: for a given tensor of x's, predict gives you a tensor of y's. Then we have the loss function again — the one we spoke about before. Given the predicted y's and the actual y's: mean squared error. You've got the array of y's you calculated and the actual y's that you clicked. You take one from the other, and you get an array of the differences between those y's. Some will be positive, some negative, so you square them to make them all positive, then you take the mean. Mean squared error. And once you're dealing with tensors, you can't just read the value out directly — you can print to look at it, but to get the actual value inside, you use dataSync.
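Here's the mean squared error written out in plain JavaScript, mirroring what the TensorFlow.js chain `predicted.sub(actual).square().mean()` computes on tensors:

```javascript
// Mean squared error: subtract, square (so negatives can't cancel
// positives), then average.
function meanSquaredError(predicted, actual) {
  const squared = predicted.map((p, i) => (p - actual[i]) ** 2);
  return squared.reduce((sum, v) => sum + v, 0) / squared.length;
}

console.log(meanSquaredError([1, 2, 3], [1, 2, 3])); // 0 — perfect fit
console.log(meanSquaredError([1, 2, 3], [1, 2, 5])); // (0 + 0 + 4) / 3 ≈ 1.333
```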
You use dataSync. And then there's the actual training part — this is where it gets TensorFlow-y. For a given number of iterations — the current iteration is the current epoch; it's just a normal loop — you create a one-dimensional tensor of all the x's, and a 1D tensor of all the y's. These are just the x's and y's you've collected from the mouse clicks. (I'll get to tf.tidy in a second.) ...Actually, hold on. I've skipped a whole section, which is why this is going so fast. I really apologize — let's rewind and start this bit from scratch. It did just go from 0 to 100; it is still going to go from 0 to 30, I'll tell you that right now. Okay. First, create a variable. That's all it is: something that has a value, in this case 4.12. We can then create a 1D array, five long, of all 2s: [2, 2, 2, 2, 2]. And then, just like you saw before, we can perform maths operations: the y's, all 2s, multiplied by x, 4.12, and then take the average of it. The average of that should be 8.24, right? Let's do res.print. 8.2-something — close enough, floating point. To actually get the value of a tensor out, we use res.dataSync(). That returns a floating point array, and we just take the first value of it. All right, let me ask you a question.
We want to train something here so that the result of this line — res — is zero. What does x have to be for that to give us zero? Zero, right. But I want to train this with TensorFlow: I want TensorFlow to turn that into zero for me. That's what we want it to do. So this is where we start using TensorFlow properly. We create an optimizer — spelt correctly this time. First I'm going to print out the value of x, twice, and in between I'm going to call optimizer.minimize. I'll explain in a second. Print out x: it should start at 4.12. We want the optimizer to minimize some value for us — minimize a loss. We always want the loss number to get smaller; that's what minimize means, and we're always minimizing a loss. We pass in a function, and that function returns a tensor: the y's, multiplied by x, squared, mean. That's essentially the same expression as before, but with the squaring as well. Now, this function isn't returning the result of an equation — it's returning a description of an equation. And TensorFlow knows that x is a variable. So it knows it's not going to try to change anything in the y's, because the y's are just a tensor. It knows you're giving it an equation where one part is a variable, so it knows it can only tune the variable. And it's going to try to tune it so that the result goes to zero.
Because if the result gets to zero, that's the lowest it can go — well, it could actually go negative, but that's why we square it, so it won't. So what does this show? If we run it: 4.12, then 4.09. TensorFlow changed the value of x all by itself. All by itself — because it knows it's trying to minimize the loss; it hates loss. Now, if you change this learning rate to something like 5: whoa, it went completely the other direction. That's what the learning rate is all about. And if you change it to something tiny, the step is tiny too. So the learning rate is important: if it's small, you're going to slowly, slowly, slowly get the right answer, but it's going to take you a while to get there. If it's too big, you're going to zoom past and probably overshoot out the other side. Choosing a good learning rate matters. (Yes, it would go negative otherwise — which, again, is why we square it.) Okay, but one step on its own isn't that useful, because we're only learning once. We need to keep doing this until it gets where we want it to be, and all we do is wrap it in a loop. That's it — a normal for loop; I'll just copy and paste it onto the screen here. So I'm just going to run it a thousand times: exactly the same optimizer.minimize call, run a thousand times. All it's going to do is keep tuning that number x over that period. So now you can see — let me refresh.
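To demystify what optimizer.minimize is doing on each of those iterations, here's the same "drive x towards zero" example done by hand in plain JavaScript. The gradient is worked out manually here; the real point of TensorFlow.js is that it differentiates the loss for you, so treat this as a sketch of the mechanism, not the tfjs API:

```javascript
// Hand-rolled gradient descent on loss = mean((y * x)^2) with y = 2.
// d(loss)/dx = mean(2 * y * y * x), and each step moves x downhill
// by learningRate times that gradient — exactly the SGD update rule.
let x = 4.12;
const ys = [2, 2, 2, 2, 2];
const learningRate = 0.01;

for (let i = 0; i < 1000; i++) {
  const grad = ys.reduce((sum, y) => sum + 2 * y * y * x, 0) / ys.length;
  x -= learningRate * grad; // the "tune the variable" step
}

console.log(x); // extremely close to 0
```

Try setting `learningRate` to 0.3 (still converges, faster) and then to 0.5 (the update multiplier hits zero in one step for this loss) — you can watch the overshooting behaviour the learning rate discussion describes.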
You see? It's tuning it. That was a thousand iterations — it's going to get to zero. Let's choose a bigger learning rate; maybe it'll go faster. See? It's pretty much gone to zero, to 18 decimal places. That was pretty fast. That's what TensorFlow is doing for us. That's it: we tell it some variables, we give it some data, we wrap it all in matrices and tensors, and it goes: okay, what's your loss function? All right — I'll change the variable so that this loss is the least it can be. One thing to note, though. One thing we're very, very used to in JavaScript is never having to worry about memory. We just know that when we create something, it takes up some memory, but JavaScript knows to release that memory back at some point later. However, TensorFlow uses your GPU — your graphics card — which is what lets it parallelize everything that's going on; it's quite interesting how that works. But JavaScript can't see into the graphics card: it doesn't know when something's no longer in use, so it doesn't know to free up that memory. So this code right now actually has a memory leak, and if I kept running it, we'd run out of memory on the computer. What you need to do is wrap all your TensorFlow code in something called tf.tidy, like this. Uh-oh — brackets errors of my own. There we go. What tf.tidy does is tell TensorFlow: whenever this function ends, delete all the tensors created inside it, automatically. And that's it. That's TensorFlow. So what's going on when you're using the optimizer?
If it detects a variable in an equation, it tries to optimize it, given some data and a loss function. Right, let's go back to where we were before — hopefully you've now got some head space for it. (On the audience question about whether this call needs to go inside the tidy: I think here it's okay because it's outside and we don't need to free it up. tf.tidy is essentially "at this point, whatever was created in here, delete it". Though you may be right that this creates a separate tensor — worth checking.) So now I'm going to go through this again, and then we'll move on to polynomials, which you'll do yourselves. Back here: a and c. They're variables now, because you want TensorFlow to tune them — TensorFlow tunes variables. We create an optimizer; you've seen this before. To train, we go through a loop for some number of iterations, inside tf.tidy — there it is again. We get the x's and y's that we've already stored and call optimizer.minimize, and it's going to try to minimize whatever this function returns, whatever that is. So what do we return? We've got the actual y's from the clicks. What are the predicted y's? That's just calling the TensorFlow getY: y = ax + c. That's all it's doing — give it an array of x's, it gives us an array of predicted y's. Now that we've got our predicted y's and our actual y's, we can get our loss — the mean squared error you saw before: take one from the other, square, mean. And you want this loss to be as small as possible. That's all it's doing: changing the values of a and c repeatedly, tuning them a little on each iteration.
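The whole training loop just described can be sketched without TensorFlow.js at all, by deriving the gradients of the mean squared error by hand. The data points and learning rate below are my own made-up example; tfjs does the same thing, but computes the gradients automatically and lets the optimizer choose the update:

```javascript
// Fit y = a*x + c to some points with hand-rolled gradient descent.
const xs = [0, 0.25, 0.5, 0.75, 1];
const ysActual = xs.map(x => 0.6 * x + 0.2); // points on a known line

let a = Math.random(); // random initial weights, as in the workshop code
let c = Math.random();
const lr = 0.5;

for (let epoch = 0; epoch < 2000; epoch++) {
  const n = xs.length;
  let gradA = 0, gradC = 0;
  for (let i = 0; i < n; i++) {
    const err = a * xs[i] + c - ysActual[i]; // predicted minus actual
    gradA += (2 / n) * err * xs[i];          // d(MSE)/da
    gradC += (2 / n) * err;                  // d(MSE)/dc
  }
  a -= lr * gradA; // the optimizer's "tune a" step
  c -= lr * gradC; // the optimizer's "tune c" step
}

console.log(a, c); // approaches 0.6 and 0.2 — the line the points came from
```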
And all I'm doing here is getting the values of a and c out of it, so they can be used by the drawing code — you don't have to look closely, but drawLine uses getY, and that uses a and c. It's updating the values of a and c just for the drawing function. The last thing you need to know is tf.nextFrame. This is running in the browser, and if you run heavy mathematical work in the browser, it will take up all the CPU and the page never gets a chance to draw anything. What nextFrame does is say: I've done one step — let the page draw something, then let me do the next iteration of the loop. That's all it's for. So if I now run this in the browser — kill that, refresh the page — there you go. That movement, that animation you're seeing right now, is TensorFlow learning. Each time it adjusts the values of a and c, the line moves, and those are the iterations still running. All right. Remember MobileNet? MobileNet was good, wasn't it? Right — I was zoomed into that, but hopefully that's enough information for the next challenge. There's a file called polynomial.solution — don't open it. There's another equation that describes a curved line ("a line with a curved line" — that's just me being tired; a curve). Do you remember this one? y equals ax squared plus bx plus c. Slightly more complicated, but that's what describes a curve. And looking at the screen, this is the polynomial version of the same app — you can see it's drawing a curve. Oh my god, it's drawing a curve. It's doing the same thing as before: you're giving it some data, but you've not just got a and c — you've got a, b and c. Three variables.
a, b, c — not just a and c, because you need the b as well. So what I want you to do is copy the code from 13.polynomial-regression-start — all of it — into your main.js. There's a whole bunch of commented-out code at the bottom there. Then flesh it out for the polynomial case. I've put clues in: wherever you see a TODO, you're probably going to have to do something. (If only life were like this — when you're given work, people putting TODOs everywhere for you. No, in life you've got to figure stuff out. But I've put TODOs everywhere for you here.) There are just a couple of places. Oh — and this one should be a TODO too, don't forget it. There's a spare TODO at the top, and line 30 is a TODO as well. Everything else here is enough to give you the curve; you don't have to do anything else, just flesh the rest out — you can see there's a little bit more going on. Give that a go — five, ten minutes. I have a vague memory that every time I run this workshop there's something I've forgotten in here, which I then pretend was a trick question to keep you all on your toes. We'll see; maybe it is a trick question. While you're doing that, I'm going to turn myself into a meme. (Question from the audience about the learning rate: yes, you can just play around with it — start with a low one, and if it's not converging, pick a bigger one. And no, you can't change it mid-run here, because you set up the optimizer before you run it through; you could probably do something clever with batching, though.) Right — everyone done? Okay, I'll go through the answer and then let you all go. We need another variable, b, set up the same way as a and c.
All right — my spelling is terrible, bear with me. So we need another way of calculating y: not ax plus c, but ax squared plus bx plus c. TensorFlow needs that extra variable, b, so it has another value to tune. This is probably the complex bit, the part you might have got stuck on: the TensorFlow version of getY for the polynomial. It's basically a multiplied by x squared, add b multiplied by x, add c — all in TensorFlow operations. Then the other thing we want to do is store the values so they render on the page. Does that work? Let's find out... it's working. Did anybody get that? Yeah? You all got it? Really? Yes! I taught people TensorFlow. There we go. If you want to go and learn a bit more — unfortunately I don't have it in a really good format, but the very next thing, if I were teaching a longer version of this workshop, would be the MNIST example. It might be tough for you to follow without me walking through it, but the next thing I would show you is MNIST: a data set you can use to — eventually I'll finish my sentence — here we go, hopefully this just works. It doesn't look like it's working... oh, it is working. This is the next step, where you pull a lot of this stuff together and learn how to — oh my god, I am ruined right now — recognize handwritten digits.
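For reference, the polynomial "get y" over a whole array of x values — what a TensorFlow.js expression along the lines of `a.mul(xs.square()).add(b.mul(xs)).add(c)` computes on tensors — looks like this in plain JavaScript:

```javascript
// Evaluate y = a*x^2 + b*x + c for every x in the array at once,
// mirroring the vectorised tensor version of the polynomial predict.
function predictPoly(xs, a, b, c) {
  return xs.map(x => a * x * x + b * x + c);
}

console.log(predictPoly([0, 1, 2], 1, 2, 3)); // [3, 6, 11]
```

The training side is unchanged from the linear case — same optimizer, same mean squared error loss — the optimizer simply has three variables to tune instead of two.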
The MNIST data set is a famous, very open data set of lots of handwritten digits — a classic example. What this demo is doing right now is training on tons and tons of images of handwritten digits, learning how to recognize them, and then you can use it. I can make this faster because of how I'm running it... there: create convolutional model, create dense model. I'm using the simpler dense model for training here. Okay. And then you can draw in a number. It's off the bottom of the screen — I know, I'll trick it, how about that. Okay: this is a two. You can't see it, it's at the bottom, but it's a two. And it's not going to be right all the time — an eight would be over here, for example. If I refresh and draw it from here: boom, two. It's actually telling you the probability that it thinks it's a two. So this is the next step, and this one actually does use more than one neuron. In the example on GitHub — where's GitHub gone... here it is — this one's done a bit differently: there's a completed branch and a master branch. Master is the starting point, completed is the end. You'll see it's built a couple of different ways, so you can have a look and see how it works — it's very well documented. The one you might want to start off with is the dense one. It's going to be quite difficult to follow from there alone, but if you want to flesh things out a little further, that's probably where I'd start. Or sign up for my book. Did you sign up for my book? We'll see — I'll check the numbers tonight.
It's time for the book, because I'm going to try to write all this stuff up — it's a little bit all over the place — into a book, which I'll then release, so you can learn it in a lot more depth. I spoke at another conference recently with 1,400 in the crowd. I was really excited, thought I'd reach 100 sign-ups — I got four. I was really upset. But hey, maybe I'll reach 100 now and launch a Kickstarter. I made the cover — well, I didn't make it, my designer friend made it. It's a good one, right? The little robot. It's cute. Sign up just for the cover, if nothing else. (Sorry? I should make the sign-up form move away from the mouse? It would redraw every second — you'd be trying to move the mouse to the input field and it would be so slow, moving away from you. I only want the people who really want it.) That's it. Thank you — no, wait, we're not done. You're going to have to fill in a survey form first. No one leaves. Okay, how do you quickly copy again... we are in — yes, Asia. Here we go, preview. Forget about the title, don't worry about the title. Where's the slack? There it is. Open the slack, click the link, and you'll see the survey form. Were the objectives of the workshop clearly explained? One is no, five is yes. Did the workshop meet your expectations? One is not well, five is very well. It's weird: when I walk through the form like this, I get far more people filling it out than when I just post a link. Was the workshop too easy, too complex, or just right? One is too complex, five is too easy.
I'm guessing where that one might land. Here's an interesting one: if you were asked to give a five-minute presentation about TensorFlow.js at work, how confident are you that you could? I only managed to go through two applications today — did I go through too many or too few? Can you tell me one thing you liked about the workshop? That's the only goodness I get out of this. And the most important thing: what should I change about the workshop? You saw some of the things I changed this time around to make it a little more engaging — what are some things I might change next? Anything else you want to tell me? Thank you, that's it!