Not every day. Not many. How many of you have used it quite a bit and feel comfortable with it? That's less than four. Alright, so TensorFlow is basically a tool for building computational graphs. Everyone talks about it as this deep learning tool, and it is that, but it can actually do a lot more than pure deep learning. You can do lots of different things with TensorFlow, and one of the things I'm going to try to do is show you some of that. The other thing that makes it so special is that it allows you to take this graph and distribute it over many CPUs or GPUs. So it lets you deal with extremely large data sets and train over 64 GPUs, 128 GPUs.

Now, pure TensorFlow has been a very low-level library. We'll talk later on about some of the announcements that happened today and some of the things that are changing. But pure TensorFlow is a very low-level library, and on top of it people have built a lot of other libraries that many of you have used: Keras, TF Learn, TF Slim. There's quite a number of them now. The other thing is that TensorFlow is by far becoming the standard deep learning library out there. When you look at the number of stars, the number of commits, everything on GitHub, when you look at news articles and tutorials, everything is moving towards TensorFlow. And I think especially after 1.0, which was just released overnight, it's going to accelerate even more.

So, pure TensorFlow versus the high-level abstraction APIs. You always have the most control when you're using pure TensorFlow. Things like Keras are fantastic, but they're limited in that you can only build what they've already abstracted for you, and it's much harder to drop down into pure TensorFlow from them. That's one of the things that's going to change; we'll talk about that later on. The other thing with the pure TensorFlow API is that it's a lot more work than Keras or TF Learn. But one of the big things about it is that it lets you understand more about what's going on in your network, so you get a much better sense of how the network actually works. And one thing I know for myself, and from speaking to a lot of other people: when you use Keras or something like that, it's great, in a few lines you can write a network, but then most people don't understand what that network is actually doing. They see that this is a certain type of layer, okay, but what does that mean? What does it mean mathematically? With TensorFlow, a lot of what you're writing are the actual mathematical operations. And then the other thing, probably one of the biggest advantages of working in pure TensorFlow, is that you've got access to TensorBoard, which lets you do everything from looking at your graph to looking at the training curves for different things throughout it.

So, the big concepts of TensorFlow. There are four main concepts you need to understand. The first is the graph. The second is the operations that you run on the graph. The third is sessions, which actually run your graph for you. And the fourth is TensorBoard. I'm going to try to go through these four things tonight and give you a sense of what they are and how they work.
So one of the biggest things is the graph: everything has to be built on a graph before you can execute it, and you'll see what this actually means when I get to the code in a minute. Unlike normal code, you can't just write a line and then print out the result of the mathematical operation. The other thing is that the graph can be written in quite a number of languages now. Just overnight they've added Java as an experimental language, and they've added Go. There are quite a number of languages you can write your graph in. But generally what happens is all of these languages get converted down and compiled to C++. This is one of the things that makes TensorFlow so fast: it can compile the graph in such a way that those equations and operations can be distributed very easily. There's a new thing called XLA, which we'll talk about later on; it was announced overnight. It's going to take this even further, to a level where TensorFlow will actually be able to edit and optimize your graph for you as it's compiled.

So, operations. Operations are what's performed on the graph. This covers everything from standard math operations to common deep learning formulas and tools. It gives you that high level of granularity in your model, to really get down and see exactly what it's doing, change something, and see what effect it has. This is really important, and it's one of the reasons TensorFlow is so favored by researchers: if you're trying to come up with something new, you can't just use pre-made layers that other people have already built for you.

Sessions. Sessions execute the graph. Nothing is run until you initialize and run a session. And finally TensorBoard, which gives you a visual representation of your model and stats about your training variables: loss, accuracy, etc. TensorBoard is soon going to have debugging inside it too. Okay, I'll come back to that.

Let's jump into some code. Who here knows Python? Oh, wow, that's fantastic, so many people. Alright, I'm guessing everyone knows Jupyter Notebooks; I don't need to explain what they are. Right? Yes? Okay. So I've got a notebook here, and I'm going to go through a very simple network that I've built and explain it. First I'm also going to explain some things about the graph and the operations. So the first thing we need to do is set up our graph. One of the best ways to do this is to reset the default graph; that clears anything that was on the graph beforehand and gives us a brand new default graph. We can then set up a session if we want. I'm going to start off with a very simple neuron-like example, just simple math equations. You can see here we've got two inputs, three and four. We've got node C doing a simple addition, node D doing a multiplication, and then another multiplication. So let's look at how we write that in TensorFlow. I'm going to start off with some constants; we'll just define them, and this is how we define things in TensorFlow.
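Roughly, the little graph he's describing looks like this in TensorFlow 1.x code. This is a sketch: the node names and the exact wiring of the final multiply are my assumptions from the diagram description.

```python
import tensorflow as tf  # TensorFlow 1.x API

tf.reset_default_graph()  # clear anything left on the default graph

# Two constant inputs from the diagram
a = tf.constant(3, name="input_a")
b = tf.constant(4, name="input_b")

c = tf.add(a, b, name="c")        # addition node
d = tf.multiply(a, b, name="d")   # first multiplication node
e = tf.multiply(c, d, name="e")   # second multiplication, the final node
```

Nothing is computed at this point; each line only adds a node to the default graph.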
Even though these look very similar to NumPy numbers, TensorFlow has its own versions of everything, and you're best off using TensorFlow's own ones. Otherwise, when you actually run a model, it has to copy the values from NumPy and convert them into TensorFlow types, which slows down running the model. So here I've done my assignments: I've got input A, input B, an addition happening here, a multiply, and another multiply. Now if I print out C, what do you think we'll see? If we were using normal Python numbers, ints or floats or anything, we'd see the result of the equation. But we don't here. It just returns a Tensor, because what we've done is assign these operations to our graph; we haven't actually run the graph at all. If we come down here, we can see what's on that graph, and these are exactly the things we've put on it. And if we want, we can start to look at some of those. So let's look at A multiplied by B. This is the node definition. We can see that it's a multiply op, we can see the two inputs coming into it, we can see that it's a float. We don't see the value, though, because no value has been assigned to it yet.

If we want to actually see something, we need to run it in a session. This is a print statement wrapping a session run command, and now we can see the number assigned to that node. And now we can do two things here. We can run E, which is our last node. It's very important to understand that when we run this node, TensorFlow knows that to run it, it needs to run all the other nodes that connect to it; it takes care of all that for us. So now we run that node. At the same time, I've also set up a summary writer to send the graph to TensorBoard. So if I come over here, there's that graph, and if we click on these things we can see what they are, we can see what each operation does.

Okay, moving along. Does everyone know what a tensor is? Okay, from the fact that people haven't put up their hands, I'm guessing not everyone. The best way to think about a tensor, or the way I like to think of it, is as a multi-dimensional array or a multi-dimensional matrix: an n-dimensional matrix. I've put a couple of pictures up so you can start to think about it. This is like a normal array that we'd think about in code. Then we've got a 2D tensor, a 3D tensor, a 4D tensor, a 5D tensor, a 6D tensor. Very quickly, representing these things in a way humans can picture just goes out the window, and you'll see that you often have very high-dimensional tensors being passed around.

So the little math we did before is not a big deal at all. But now let's start to work with a matrix. We're going to make a matrix. Again, I've reset the default graph and started a session. Okay, so in a Jupyter Notebook an exclamation mark means a command line, so I'm just removing the previous TensorBoard files there. And I've also got to stop TensorBoard here; when you're running TensorBoard, you obviously need to stop it first. So now I'm setting up two matrices, roughly like the sketch below.
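A sketch of the kind of matrix example he runs next. The actual values from the talk aren't shown, so the numbers and log path here are mine:

```python
import tensorflow as tf

tf.reset_default_graph()

# Two small 2x2 matrices (placeholder values)
m1 = tf.constant([[1., 2.], [3., 4.]])
m2 = tf.constant([[5., 6.], [7., 8.]])
product = tf.matmul(m1, m2)  # matrix multiplication node

print(m1.get_shape())  # in TensorFlow it's .get_shape(); in NumPy it's .shape

with tf.Session() as sess:
    print(sess.run(product))  # only now are the values computed

# Write the graph out so TensorBoard can display it
writer = tf.summary.FileWriter("./logs/matmul", graph=tf.get_default_graph())
writer.close()
```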
And we're just doing very simple ones, and I'm going to multiply them together. So, if I want the shape of one of them: how many people know NumPy? NumPy really well? Okay, quite a few of you. In NumPy we just say .shape; in TensorFlow we have to say .get_shape(). And we can see the dimensionality there. Okay, let's run it. I'm just printing these out so you can see what's inside them, and then there's the matrix product being calculated. And we'll go back to TensorBoard and have a look at it. The reason I want to show you this, even though it doesn't look that amazing, is that I want you to understand that even though you see just one little node there, one little thing going on, that's actually a matrix with lots of numbers. It could be 10,000 by 10,000 numbers. So when you see things represented in TensorBoard, you often need to highlight them, click on them, see what they're doing, and get a sense of what's actually going on.

Okay, let's go ahead and build a network. I'm going to build a very simple MNIST network. Does everyone know what MNIST is? MNIST is basically just a bunch of handwritten digits, the stock-standard boring thing that everyone uses. We're going to build a very simple MNIST network. Let me just bring in my train and test sets. So we've got 60,000 images: we're taking 50,000 for our training and 10,000 for our testing. Going back to the tensor idea, it's good to think about images and ask: what does the tensor for an image look like? The MNIST images are 28 by 28 pixels. I made a little picture here to show you: we've got 28 by 28, and then depth for the number of pictures we've got. Now, if this was, say, an Inception network or something else dealing with photos, each image would generally have three channels, RGB. So if this was actually in colour, we'd have 28 by 28 by 3, and then depth for the number of images. But what we're doing with MNIST tonight isn't anything convolutional, so we're just flattening each image tensor into a vector of 784 numbers. That's done by taking the top row of pixels, then the next row, and sticking them together until we get to the end. We can still use NumPy or something like that to represent them, so here you can see I'm printing out the numbers, and we can see what the labels are.

Okay. Now what I want to do is build a batching system, because we can't just put all 50,000 images into TensorFlow at once; it's not a good way of doing it. Actually, with 50,000 we probably could, but say we had 500 million images going in. We need a batching system. The batching system here is actually from TensorFlow's MNIST code; TensorFlow uses MNIST in its tutorials, so it's part of the library, and they've made a batching system for it. All I have to do is call it and it gives me a batch of X and a batch of Y. The X is the tensor of 784-element vectors, and the Y is just the labels: a one-hot encoded vector of length 10, one slot for each of the digits 0, 1, 2, 3, right up to 9.
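The batching helper he's referring to ships with TensorFlow's tutorial code. A minimal sketch of using it (the download path is my choice):

```python
from tensorflow.examples.tutorials.mnist import input_data

# Downloads MNIST on first use and wraps it with a batching helper
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

batch_x, batch_y = mnist.train.next_batch(100)
print(batch_x.shape)  # (100, 784) -- each image flattened to a 784-vector
print(batch_y.shape)  # (100, 10)  -- one-hot labels for digits 0..9
```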
Alright, so here's a rough diagram of what we're going to build. We're going to have 784 inputs going into a hidden layer of 384, going into another hidden layer of 100, going to an output of 10. Any questions about that? Do people understand what's going on there? I think it's pretty self-explanatory. So we'll have our 784 inputs. I'm not trying to train the network for any massive accuracy or anything like that; I'm just trying to keep it really quick.

[Audience question: how do you pick the number of hidden units?] Good question, very good question. The real answer is voodoo. You get a sense; the more networks you build, the more of a sense you get. Actually, I made a mistake, and that's why it's 384: for some reason I kept thinking of the number 768 in my brain, so I halved it. I was basically going for half of whatever it started at. Here's the thing: you don't want too many, because with too many your network will just overfit all the time. You don't want too few, because then the network won't be able to generalize and learn. You'll learn that over time, you'll get a sense of it, and there are some formulas out there, but I wouldn't say any of them are guaranteed to work every time. It also depends a lot on what you're doing. Here we're doing a very simple multi-layer perceptron; if we were doing convolutions, it would be very different.

Okay, yes, exactly. The last layer is where we compare against the labels. What we're doing is crunching all the numbers through here: all of these come into each node here and go out again, and eventually you've got 100 nodes going into 10. Sorry? In some ways, for something like MNIST, half the pixels aren't even relevant. There are certain pixels in MNIST that really tell the network whether it's a particular digit or not. A good example is the centre pixel: if you think of the three or four pixels around the centre, if they're black, we know there's a very high probability it's not going to be a zero. So certain pixels are more valuable than others. But this is one of the cool things about deep learning: we don't sit there trying to work out features by hand. We just throw the data in and let the network itself work out which features are going to be important to it. If we had a convolutional network, we could take the filters and look at which areas of the images respond the most; you'll see lots of diagrams of that sort of thing online. So we crunch through to the outputs: we output 10 separate numbers, and whichever is the highest, we say that's the digit we predict.

So let's look at how we do that in code. We've got a learning rate, training epochs, and batch size; these are just set up as plain values for the demo, nothing special. Then we have some things for saving the model, and here we've got the definition of our model. As I said before, we've got 784 inputs for the first layer, then 384 and 100. We're getting a new graph so that we can see the graph once we're done. The other thing I'm doing here is defining the inputs. Inputs are defined as placeholders. In TensorFlow, a placeholder is something you can substitute values in and out of very easily, so you'll always use placeholders for things like inputs.
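A minimal sketch of those placeholder definitions (the variable and op names are my own):

```python
# 'None' in the first dimension means the batch size can vary from run to run
x = tf.placeholder(tf.float32, shape=[None, 784], name="x_input")  # flattened images
y = tf.placeholder(tf.float32, shape=[None, 10], name="y_labels")  # one-hot labels
```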
You'll also use placeholders for things like the learning rate, if you're going to have a decaying learning rate that you want to change with each batch or each epoch or something like that; you'd make your learning rate a placeholder too.

[Audience question: what's an epoch?] A very good question. An epoch is one run through the entire training set. We're doing it in batches of 100, but we're going to get through all 50,000 images, so one epoch is one full pass. I'm doing 4 epochs here. I'm not going to get a very high accuracy score, because 4 isn't much; often you'll see people training for 1,000 epochs, for days on certain tasks. Yes, that's 200,000 images, right? 4 times 50,000. Good question, very good question. I remember when I was first learning this, I wondered about that for a long time too.

So I've set up the inputs; now we're at our network. This is the juicy part. I'm defining this as a method, and we're going to pass in our training data; that's the x going in here. Now, there are a few bits here, and I'll explain different bits at different times. Let's look first at the hidden layer operations. The hidden layer operations we're going to do are an addition, and a matrix multiplication before that addition. For that, we use weights and biases. So does everyone understand what weights are? Weights are what the network is actually going to change in order to learn. It can't change the input, right? If it changes the input, we're changing the pixels; it can't do that. Well, actually, we can if we want to. One of the things we played around with a lot was adversarial images, where instead of training the network, you train the images. You can make an image that looks perfectly right to a human, a Mercedes-Benz or something, put it into an Inception network, and it says, oh, it's a nice big Persian cat. What's going on there is you're making very small changes to the pixels. But we're not doing that here; we're making small changes to the weights and to the biases.

So up here I've set up the weights and the biases for this first layer; I just changed this before we started. What I've got here is what's called a truncated normal. It's basically a random number with a very small standard deviation, and if it picks a random number outside that deviation, it drops it and picks again. We do that because we want to keep our weights as close together as possible to get an efficient, well-generalizing network. If we had all our weights between, say, negative one and one, and then two or three weights at three or four hundred, that could cause a lot of problems inside the network. It might just train out, but it isn't guaranteed to. So this way we drop anything outside the standard deviation we set. So I've got my weights there, and I've got the bias. How many weights are there for each node? The same size as whatever we're feeding in, right? So 784 weights for each of these first-layer nodes. Okay, so we do the matrix multiplication, multiplying the x by the weights, we then add the bias, and then we put it through what's called an activation function.
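Roughly, that first hidden layer looks like this (a sketch with my own variable names; the activation is the ReLU he describes next):

```python
with tf.name_scope("hidden_layer_1"):
    # Truncated normal init: values beyond 2 standard deviations are re-drawn
    w1 = tf.Variable(tf.truncated_normal([784, 384], stddev=0.1), name="weights")
    b1 = tf.Variable(tf.zeros([384]), name="biases")
    # matmul, add the bias, then pass through the activation function
    h1 = tf.nn.relu(tf.add(tf.matmul(x, w1), b1))
```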
So an activation function basically makes sure that what we're doing here isn't just a linear function. The activation we're using is called ReLU, a rectified linear unit. I didn't have a picture, but ReLU is very simple to think about: anything under zero doesn't go through, it just passes through as zero; anything above zero passes through as whatever the value was. Other activations that used to be very popular, things like sigmoid, softmax, tanh, are still used for certain networks, but generally nowadays most people use a ReLU for most things, just because it seems to be one of the best ways of training a network.

Okay, so we've gone through the weights and bias, and the two operations for our first hidden layer. We've also got a name scope here; I'll come back to what that's for. When you see these scopes, I'm scoping things not in the traditional programming sense: this is for TensorBoard. We're saying that all the things inside this scope belong to hidden layer one, so that when TensorBoard draws the pictures later on, it draws them nice and neatly. Okay, so hidden layer two is basically the same as hidden layer one: we've got our weights, our bias, our matrix multiplication and addition, and a ReLU activation going out. And then we've got our last output layer, which again is very similar: we've got our weights, and we've got our logits layer. The logits layer is the output, and it's what we'll use to check our loss, our error function, to see: okay, how close are we, what are we getting right, what are we getting wrong. And that's what gets returned out of the function. So let's just run that.

Okay, so we assign that method, and now we're into our optimization functions. The loss we're going to be using here is cross-entropy loss. I'm not going to explain it; if you do a search for it, there's lots of information about how it works. There are lots of different types of losses we could use; depending on what type of network you're building and what you're trying to predict, you might use different ones. So anyway, we assign the loss here, and then we've got our optimization function. A lot of you have probably heard of gradient descent; how many of you have heard of it? It's a very popular one. The one I'm actually using is Adam, but I've put gradient descent there as well so you can use that too. How many people know Adam? Adam is not as well known, but it's probably a lot more effective nowadays. Again, these are pretty complicated equations, and TensorFlow handles them for us. Now, here's the key thing: our optimization function takes the learning rate we set earlier and uses it to minimize the loss from the loss function. What you don't see here, and what can be very confusing about TensorFlow at the start, is what it's actually doing to minimize that loss. What it's actually doing is tweaking all those weights and biases: it can work out what the weights and the biases are, and it can alter them and start tweaking. Okay, then I've got one more thing for TensorBoard, where we're going to measure the accuracy, so I've set that up here. But that's not actually part of the model itself, per se.
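A sketch of that loss, optimizer, and accuracy setup. The `network` function stands in for the model method he defined above, and the learning rate value is my assumption:

```python
logits = network(x)  # the model method from above, returning the logits layer

# Cross-entropy loss, averaged over the batch
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))

# Adam optimizer; tf.train.GradientDescentOptimizer would slot in the same way
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)

# Accuracy, for reporting only -- it isn't part of the trained model
with tf.name_scope("accuracy"):
    correct = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
```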
I could actually take that out; we don't need it. I've also initialized the variables, or rather, I've set up an initializer to initialize the variables; they only get initialized when it actually gets run on the graph. I've got a saver so that we can save the model, and then I've got the things related to TensorBoard. We make a file writer so that we can write things to disk and then look at them in TensorBoard. We're going to record some scalars: one for accuracy, one for loss. And then we've got a summary op. What a summary op does is merge all your summaries, so you can compute them all in one call; I'll explain that a little more in a bit.

So now our graph is built, and we're at the point where we actually train it. Let's look at what's going on here. We open a session and run the init so everything gets initialized. We then start our training cycle. Very simply, for each epoch we set, we're going to run all the batches, so the for loop here is just going through running batches. You can see each time it gets a batch of x and y, and the way it feeds those in is by putting them into what's called a feed dictionary. The feed dictionary is what gets fed into TensorFlow for all its operations. We're also running the summary op, which I mentioned before, so that as we go around each batch, it's not only training the network, it's also calculating our loss and our accuracy and storing them for TensorBoard, and we can look at that a little later on. Then we print out and see how it goes, and at the end we've got our saver, and we save our model. For saving the model, this is all you have to do. What I'm trying to do tonight is give you a set of code that you can take home and tweak for a bunch of other models yourself, and you can reuse things like the saver.

Okay, our first epoch is done. Let's see how we go; we're running all of four epochs. [Question about the with statement.] Okay, so basically it's saying: open up the session, and with this session open, run all these things; afterwards it closes automatically for you. If we wanted to, we could call session close ourselves instead. Does that explain it? You were curious about the other withs? Okay, those ones are different from this one. The ones up here are saying: within the scope called accuracy, for TensorBoard, perform these equations and store them under that scope. You can name it whatever you want.

Yes, that's one of the cool things about TensorFlow: you can put your own functions into different parts of it. So if you think you've got a better activation function, then, you know. It takes some work, but you can do it: you can define your own Python functions to do certain things. I'm not going to go into how, but you can; just search the API for using Python functions for different things. You can then even contribute it back to the actual project.
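Putting the earlier sketches together, here's roughly what that setup and training loop look like. Hyperparameters, paths, and names are my assumptions, and this builds on the `mnist`, `x`, `y`, `loss`, `accuracy`, and `train_op` sketches above:

```python
init = tf.global_variables_initializer()
saver = tf.train.Saver()

# Scalar summaries for TensorBoard, merged so one run() call computes them all
tf.summary.scalar("loss", loss)
tf.summary.scalar("accuracy", accuracy)
summary_op = tf.summary.merge_all()

training_epochs, batch_size = 4, 100

with tf.Session() as sess:
    sess.run(init)  # variables are only initialized when this runs on the graph
    writer = tf.summary.FileWriter("./logs/mnist", graph=sess.graph)
    step = 0
    for epoch in range(training_epochs):
        for _ in range(mnist.train.num_examples // batch_size):
            batch_x, batch_y = mnist.train.next_batch(batch_size)
            feed = {x: batch_x, y: batch_y}  # the feed dictionary
            # One call trains a step AND computes the merged summaries
            _, summary = sess.run([train_op, summary_op], feed_dict=feed)
            writer.add_summary(summary, step)
            step += 1
    saver.save(sess, "./models/mnist.ckpt")  # save the trained model
```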
So there is actually a tf.contrib where people have put in different ways to calculate certain things. This is the whole point of TensorFlow: because it's basically just one big mathematical graph, you can cut and change whatever you want. Now, doing that for the optimizer is not going to be easy, because you then have to think about how it handles the back-propagation; you'd have to go through and make sure your function handles all of that the way TensorFlow expects it to be handled. But it's something you can do.

Okay, we've finished our training. Let's see how we go against the test set. What we're doing now is running the model again: I'm loading the saved model that we just saved before, bringing it in, and using it to predict against the test set. It hasn't seen the test set yet; it's only seen the training set. Okay, we've got 97.5% accuracy, which is okay, but not that amazing for MNIST. That said, considering we only had four epochs on this small network, it's not bad.

Let's jump into some more interesting stuff. The thing is, as we've gone through, we've been adding a lot of things for TensorBoard, and this is where TensorBoard can really shine. So let's look at our graph; this is the graph of the model we had before. There are a few things here: the training part we can just detach and put to the side for the moment. And this is basically the model I showed you. We've got our inputs, so let's look at what we're doing with them. In our inputs we've got the x input, which is all our pixels, and we've got the labels. If we look at our hidden layer, we can see, okay, we've got our weights and our biases, and the math that's being done there. We can even look at how we're calculating the cross-entropy; there's a bit more to that than some of the other ones. Now, this is a really simple model, so anything we've got in here, if we want to inspect it, we can just click on it and see what's there. And it might seem like, with a very simple model, it's not a big deal to see it like this. But when you've got a much more complicated model, it is a very big deal, because we can do things like trace inputs. With trace inputs, whatever I've got clicked, I see what feeds into it, but not what comes out of it. So if I'm worried that something should be going in here, but it doesn't seem to be working, I can very quickly isolate it and see: okay, it's not actually feeding in what I thought it was. Say we didn't have hidden layer 1 connected to hidden layer 2 or something like that; we'd be able to see that very quickly here.

Okay, we can also look over here at our training. Which doesn't look that good; one second. It's saving to... you're basically logging out to a directory and saving to that directory, and I'd actually set it up to save to a sub-directory. So now we can see our accuracy going up as it trained, and our loss going down. And we can get a good sense now; what we've got here in this network is kind of standard.
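Going back to that evaluation step for a moment: a minimal sketch of restoring the checkpoint and scoring the held-out test set (paths match the sketch above, not the talk's actual files):

```python
with tf.Session() as sess:
    saver.restore(sess, "./models/mnist.ckpt")  # load the weights we saved
    test_acc = sess.run(accuracy, feed_dict={x: mnist.test.images,
                                             y: mnist.test.labels})
    print("Test accuracy: %.4f" % test_acc)  # ~0.975 in the talk's run
```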
But often you'll see there's a big spike, or a point where the learning stopped. Maybe your learning rate was too high and the learning plateaued; you'd be able to see that very clearly here. The other thing we can do, because we set up those histograms, is go and see what's happening with our weights and our biases over time. We can look at them in a 3D way and see what's actually happening here as we train our model. And we can get a sense here: okay, these are our biases, and the biases are actually pretty good. You can see there's nothing radically too low, nothing radically too high, they're all in a very nice band. And that also helps explain why the model trained well and trained pretty quickly. Our weights are also sitting in nice groups. There's nothing out of the ordinary in this model, but if there was, you'd be able to see it very quickly. So TensorBoard is one of the key things you need to learn to set up, so you can constantly come back, benchmark your stuff, and look at it from different angles.

Yeah, we could put something like that in if we wanted to, yes: if we reached a certain level, it would just stop. We'd code that into our training section. Another thing that gets used a lot is a decaying learning rate; that's one of those things too.

Let's see if we can predict an image. There's one image we picked that it got wrong: it predicted a six and it was actually supposed to be a five. You can see it's predicting most of the images right. Okay, so let's go through TensorBoard and some of that code I didn't explain before. To make TensorBoard work, you need to set up your graph properly, otherwise it will just look like a whole mess all over the place. The key things: you need to reset your default graph, and you'll notice in each example I've gone through, I've reset the graph to get rid of whatever was there and start again. You need a file writer; what the file writer does is log everything to disk so that we can look it up later on. If you want to display something like a scalar, you need a summary scalar, or a summary histogram; you merge them, and then during training you write them to your file writer, which saves them. And this is the command line: we're running TensorBoard on that log directory, again using Jupyter's exclamation mark for shell commands.

Now, let's say we want to quickly try something else. We do a two-epoch run, but with a really, really low learning rate. What would you expect to happen, is it going to learn well or not? We'd expect it's not going to learn well. Let's see what we get. Not very good accuracy. Now let me run a different one with a much higher learning rate. Again we're just going to do two epochs, and this time, rather than just showing whether it works (it's going to work sometimes and not others), rather than just showing the stats for one model, we're going to look at multiple models and compare them. We've got 74% on that second model. We're running TensorBoard again, and you can see now this is model 8 and this is model 9. They're very, very different, and I'm pretty sure model 8 was the one that did no learning at all.
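The histogram and run-comparison setup he's describing boils down to something like this (run names and paths are mine, and `w1`/`b1` come from the hidden-layer sketch above):

```python
# Histogram summaries let TensorBoard show weight/bias distributions over time
tf.summary.histogram("hidden1/weights", w1)
tf.summary.histogram("hidden1/biases", b1)

# Give each experiment its own sub-directory under one parent log folder
run_name = "model_9"  # bump this for each new run
writer = tf.summary.FileWriter("./logs/" + run_name,
                               graph=tf.get_default_graph())

# Pointing TensorBoard at the parent folder then shows every run side by side:
#   tensorboard --logdir=./logs
```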
But here you'll be able to start spotting things. And you should be able to see... I think one run might actually be overwriting part of another at the moment, or more likely it's my code, because I had to change these names as it went through different runs. What I changed is that instead of outputting everything to the same simple log folder, we make another folder per model, and then when we open the top folder instead of the lower folder, we can see all the models that are in there.

I've got another model, but you can maybe go through this one yourself: it's a model for sentiment analysis, using almost exactly the same network but doing sentiment analysis. Anyway, I'll put it up on GitHub and you can go through it, because we're running short on time. I'll take questions at the end, if everyone is cool with that.