Okay, I'm going to go ahead and get started. So this is the talk on Python and TensorFlow, so if you're in the wrong room, get the heck out. Okay, well you could stay anyway, even if you're in the wrong space. So originally my talk was scheduled to start at 11:15 and go until 12, and then after that there's lunch and everything. So I know that everybody's going to be chomping at the bit to go to lunch. But my talk was changed from 11:30 to 12, so I lost about 15 minutes. And I was very likely to go over as it was, so you're going to have to bear with me a little bit and forgive me if I do go a little bit over. So my name is Ian Lewis. I'm a developer advocate on the Google Cloud Platform team from Tokyo, Japan. I've been living in Tokyo for about 10 years now. And I'm on Twitter, so if you have any questions later, you can catch me at IanMLewis on Twitter and give me feedback on the talk, or ask me questions about TensorFlow, or whatever questions you happen to have that you think I might be able to answer. So here's my little plug for PyCon JP. PyCon JP is the Japanese PyCon, and we've been doing it since 2011. As an interesting backstory, PyCon JP actually started at PyCon in Singapore back in 2010, because me and the other founders of PyCon JP came to PyCon APAC, as it was known back then. We all met afterwards and were like, we need something like this in Japan. And that's actually how PyCon JP got started. So we started out as a mini PyCon, and then six months later actually did a real PyCon in 2011. And we've done a lot of work to encourage PyCon activity in the region, helping folks get started in places like Taiwan and Korea. So if you have any interest in that, definitely check out PyCon JP and sign up and come hang out with us. I'm also very interested in Python and Go and Kubernetes and things like that, containerization, that sort of thing.
So if you're interested in that, definitely check out my Twitter. I'm going to talk a little bit fast because I don't have a lot of time, so you're going to have to bear with me a little bit on that. But first I want to jump into the actual topic of my talk, which is deep learning. So I'm going to do a little bit of background on deep learning and talk about what deep learning is. How many of you are machine learning scientists, data scientists, that sort of thing? OK, a fair number of you. So I'm not. So that's going to mean that you might be a little bit bored with this talk. But we'll see how it goes. Hopefully you'll get something out of it as I talk about TensorFlow. So as a little bit of background into what deep learning is and what it means: deep learning is specifically talking about a type of machine learning, specifically neural networks. And the deep part of that comes in with deep neural networks. So essentially a neural network is a type of network where you have some inputs that come into the network. And those are connected to nodes, which basically include an activation function, something that you actually apply to the input. And then that input is transformed and comes out of the neural network as an output. So in this case, we might have something like a cat picture. The input would be the pixels that we have from the cat picture, and that goes through the neural network and outputs a classification of whether it's a cat, whether it's a dog, something like that. And each of these nodes is interconnected by passing around what's called a tensor, and those values are then converted into an output tensor. So here's kind of a background into what these neural networks are good at. There are two main classes of problems that neural networks are good at solving. One of those is classification. So basically saying, OK, here's my input. Which bucket does it fit in? Is this a cat picture? Is it a dog picture?
Is it a picture of a human? That sort of thing. So this is good for adding things like labels and putting things in buckets, et cetera. Another type of problem is regression. Regression is basically creating a type of mathematical function that describes the data, so you get more of a range type of output. I'm going to talk a little bit more about classification than regression. But you can imagine that you have some sort of classification problem like this one here. This is a little bit small; see if I can blow this up. But say you have some data like you have on this graph over here. So you have some of this blue data and this orange data. You can think of this as maybe something like the height versus the weight of a person, and the blue and orange dots are the groupings of, say, adults and children. And you want to create a network or a program that is able to differentiate between children and adults based on their height and weight. So this is a really easy problem to solve. I mean, you can just draw a line between these two groups and you're done. But this is kind of an example of a classification problem that you might want to solve with a neural network. But say you have something that's a little bit more complex. So you have something that looks like this, where it's not exactly clear based on the input data what type of function or grouping or method you should use in order to classify the data. In this case, if you had a very simple type of neural network, it would not actually converge. It would not be able to figure out how to solve this problem of classifying these two types of data. So in order to do that, you need to be able to develop a more complicated type of network. So something that's a little bit deeper.
So with these types of networks, you can add these intermediate hidden layers, which allow you to do more complex recognition and classification with a neural network. So this one may or may not converge depending on how I've set this up. But essentially, in order to be able to solve these types of problems, you need to have a much more complex neural network. So what exactly is a neural network at its core? A neural network at its core is essentially a big function that takes in a tensor and then outputs another tensor that is your output. And in the intermediate steps, what it does is apply operations to the tensor in order to produce the output. Generally, these are things like matrix multiplication. How many people know what matrix multiplication is, how many people are familiar with that? It's kind of high-schoolish math, maybe. So you would do those types of operations on these tensors, multiplying weights and adding biases and things like that, in a kind of pipeline scenario where you do this over and over again in order to produce the output. And the intermediate weights that you're actually using to do these multiplications are what makes up your neural network. So I mentioned tensors. That gives the name to TensorFlow itself. TensorFlow is the idea of how the tensors flow through the neural network. But what is a tensor? So you're all pretty familiar with matrix multiplication, or with matrices. Something like, say, a vector would be a simple array; that's how you would implement it in a programming language. So you have a simple array of a bunch of values. And then a matrix would be a two-dimensional version of that, and you might also have a three-dimensional array in your programming language. So a tensor is essentially an arbitrary generalization of a matrix. It's an arbitrary number of dimensions.
So you essentially have this arbitrarily large number of dimensions implemented as an array in your application. This is essentially what a tensor is. And then you can do the exact same types of matrix multiplication on tensors, as long as the numbers of dimensions match up. So as a way of actually implementing these neural networks: typically these are what are called fully connected networks, in the sense that each of the input values is connected to each output value in the output tensor. And what that means is that each of these connections has a weight associated with it. So what you're essentially doing is taking the input into the neural network, multiplying by the weights, and then adding some biases at the end here. This is a very simple example, but you essentially add these biases, and that produces an output vector. Say here, we have three output values. These might be three different categories, say dog, cat, and person, for an input image. And those output values indicate how well the input image matches a particular category. Now, that particular value is not going to be a very human-friendly number. It will just be a number that indicates how much evidence you have that this picture is a human or a dog or a cat. So just looking at the number, it won't really tell you very much. What you usually end up doing with these types of neural networks is you apply a softmax function at the end. And what softmax does is basically normalize all of the data so that you get output values that are between 0 and 1. So it basically gives you a percentage that this image is a cat or a dog or a person.
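The fully connected layer just described, multiply the input by the weights, add the biases, then apply softmax, can be sketched in a few lines of plain NumPy. The sizes here (4 inputs, 3 output classes) are made up for illustration, not taken from the talk's demo:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability, exponentiate, and normalize
    # so the outputs are positive and sum to 1 (a probability distribution).
    e = np.exp(z - np.max(z, axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes: 4 input features, 3 output classes (dog/cat/person).
rng = np.random.default_rng(0)
x = rng.random((1, 4))     # one input "tensor" (a 1x4 matrix)
W = rng.random((4, 3))     # one weight per input/output connection
b = rng.random(3)          # one bias per output value

evidence = x @ W + b       # matrix multiply, then add the biases
probs = softmax(evidence)  # normalize the evidence into percentages

print(probs)               # three probabilities that sum to 1
```

The raw `evidence` values are the unfriendly numbers the talk mentions; only after softmax do they become percentages you can read off.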
So we get something like: I'm 85% sure that a picture of me would be a person. That would be pretty low. But you essentially get the idea. So one of the things that's really cool about neural networks is that you don't actually need to know very much about the data to begin with. You can basically start training a model on the data and then use a method called back propagation to continue to train the model and update these weights and biases in order to make the model perform better. So this is done through, like I mentioned, a method called back propagation, where you provide the neural network with what's called a loss function or a cost function that takes the difference between an expected value and the actual value that came out of the neural network. So say a picture of me should be a person: the expected value is that it's 100% that this is a picture of a person, whereas the neural network might produce something that says it's 85% sure that it's a person. So you apply this cost function, you get the difference out, which is 15%. And you use that 15% to then update the weights and biases for that neural network to make it closer to the expected output. And you do this over and over and over again, and eventually you get values for the weights and biases that give you good output for a large number of inputs. And you do that by giving it some training data. So here we have training data that says: this is an image, and these are the actual correct labels, the correct values that you should get out of the neural network. OK, so that's a little bit of background into neural networks. I've already used half my time just to get through that. But why are we even talking about this? The reason we're talking about this is because of a number of breakthroughs in machine learning. Yes? [Audience question: how would you know what value to expect, and what do you feed back based on?]
So the question is how you actually train the model and how you know what the expected values are. You need to have some training data that already exists, which matches those expected values together with the input data. So, the reason why we're talking about machine learning, and why machine learning has become kind of a buzzword in recent times, is because of a number of breakthroughs that allow us to use machine learning to solve problems in an actually useful way. Up until fairly recently, we could use machine learning for certain types of problems to help humans along. But you couldn't really put it into products and make them really nice to use and user-friendly and things like that. So this is a picture of the Inception model that we use at Google for classifying images and applying labels to those images. I'm going to give a little bit more background on this, but this is a large, deep neural network. So the idea is the same as the neural networks I mentioned earlier, where you have all of these matrix multiplications, but by having a very deep neural network like this, you add the deep aspect to these neural networks. And you can imagine that each one of these nodes is a kind of matrix multiplication or similar operation applied to the data. And then the images that are coming into this might be a one-megabyte image or something like that. So you can imagine that each one of those inputs is an image that's translated into a tensor. And a megabyte-size image might have thousands and thousands of pixels, which makes up thousands and thousands of dimensions in your tensor.
And then you're going to take this tensor and do thousands and thousands of multiplications on all the values in the tensor, over and over and over again, just to do one pass through this. So you can imagine that this is a huge combinatorial problem in terms of how much actual computation needs to be done in order to train the model. You do this one time for one image, and you might have millions and millions of images that you need to train on. And then you need to iterate that thousands and thousands, or tens of thousands, or even millions of times. So you can imagine that this is a huge, huge problem. And what we find as we build these deep neural networks is that the more complex and deep you can make them, the much better value you get out of the output of these neural networks. So for the same amount of data, if you have a large, deep neural network, you can get a lot more value out of it. But the problem with deep neural networks is that they require a lot more computation. So what people usually do is they build these big machines with lots of GPUs in them, and then they use those to train their models. And it takes hours, days, or even weeks to train a single model, just to do a single test to make sure that their particular neural network works or is performing. And generally, researchers will have to do this over and over and over again in order to actually produce a usable model. So people have started using things like supercomputers for these types of trainings in order to make them go faster. But this is something that's not really available to most people. You have to lease a supercomputer ahead of time. And so just as a poll, how many people in this room have access to a supercomputer? Somebody might actually have one.
Yes, that's great. You can do this stuff, maybe. That's actually the first time that anybody's ever raised their hand to that question. But most people don't really have access to these supercomputers, to be able to develop these types of machine learning models and actually take advantage of deep neural networks. So at Google, we do have a lot of computers. We don't have supercomputers laying around, but we do have a lot of computers. And so we've approached the problem a little bit differently, and we have been able to make a lot of great breakthroughs in how we do machine learning at Google. You can see evidence of this in some of the products that we've created. If you're familiar with Google Photos, you can add a bunch of photos to your collection, and then you can search the photos for keywords. So things like statue, wedding, any type of keyword. And you will get back photos that match that particular tag or keyword. And you don't have to tag these ahead of time. You don't have to teach it what the images are. It already kind of knows, based on our own pre-trained models. So this is something that's very, very powerful for building products and building actual real-world applications. Another one of the things that we've been doing is identifying text in pictures. We have a lot of Street View data, as you might imagine, or Street View images. And we want to be able to get the names of the shops and things like that that are out there in the real world. So we need to be able to look at all of these images and get the text out of them in order to index it and figure out where the shops are. So we've been working on problems like that. Also, you might have heard of the AlphaGo project, which is a machine learning neural network that plays Go and is actually pretty good, as I've heard. I don't know.
So we've been using a lot more machine learning at Google, and this is a relatively recent phenomenon. As you can see here in this graph, up until 2014 there was kind of moderate growth in how many projects at Google were using machine learning. But after 2014, you have this big hockey-stick kind of graph that shows you how fast machine learning has been taking off in recent years. And this is part of a project at Google called Google Brain, which is to build these kinds of neural networks. The reason we're able to do that is by distributing the problems of neural networks over multiple machines, and then being able to train and do prediction using many, many machines at the same time. And so this has allowed us to get things like 40x speedups using ImageNet on our Inception model, the one I showed you that big graph of earlier. ImageNet is a very famous data set for machine learning training. And then we also use it for RankBrain, which is the machine learning model that we use to rank search results. So we use on the order of 50 to 500 nodes of machines to do training on these types of models. So this is how we get to TensorFlow. TensorFlow is a library that we developed at Google in order to help with building these types of distributed machine learning models. TensorFlow is an open source Python library, a general machine learning library that you can use to build neural networks. We open sourced it last year in November, and it's used by many of our production machine learning projects. So TensorFlow gets its name from the idea that I mentioned earlier of having tensors that you run through a pipeline. You have this data flow of tensors running through the neural network, and that's how we got the name TensorFlow. So it gets the idea from this flow graph that you create for these tensors.
So it has a lot of really cool features, like flexible, intuitive construction of the graphs, and support for things like threads and queues and asynchronous computation. And you can train on CPUs or GPUs or whatever particular devices TensorFlow supports. It will basically take the operations in the graph and be able to break those up and distribute them across a bunch of different GPUs and CPUs. Some core TensorFlow data structures are the graph itself; then you have the operations, which are the nodes in the graph; and the tensors, which are the values that are being passed around between the operations. Then you have these other types of pieces, like constants. Constants are things that don't change as you're doing training. You can change them in between training runs, as you're updating your models, but they don't actually change as you're training through a single run. Then there are placeholders and variables. A placeholder is kind of like an input into your neural network, and a variable is something that can be updated as you're training the neural network. So generally what you have is these placeholders, which are input values into your neural network, and then you have variables, which are things like the weights and the biases that are being updated as you're going through the training. And then a session is an object that encapsulates the environment that you're running in. So this is the kind of thing that will map operations to a device and things like that. And then this is just a slide that gives a non-exhaustive list of the operations that TensorFlow supports. So we have a number of operations built in. So I wanted to run through a little bit of how you might actually use TensorFlow. Let's see.
So I have this Jupyter notebook that I would like to show here. Here's kind of an example; let me restart the kernel and clear the output here. So this is an example using TensorFlow in a Jupyter notebook. What I'm going to be doing is running through the really basic MNIST example. MNIST is a classic dataset used for machine learning, which has a bunch of images of handwritten numbers. And so what you're going to be doing is taking those numbers and doing OCR, or character recognition, to decide what number each image represents. So if it's a one, you want to be able to output an actual text one as the output. So here I'm just going to be loading the test data. Here I have a test and a training dataset. The training images are basically 55,000 images, and each one of those is represented, or mapped, into a tensor that is 784 dimensions big. So each of those images has 784 pixels in it. I think that's like 27 by 27 or something like that. And so you have this shape of your input data, which is 55,000 examples big, and each one of those is 784 dimensions large. And so here's just a sample image. I'm just pulling out the image. Oh, it's 28 by 28, sorry. So I'm just pulling out a single image. This is the sixth image in the training dataset. So here's what the actual output looks like. This is just a NumPy array which represents the training tensor that I'm going to be inputting. And then if you actually map, let's see, this is taking a while to run, actually. But this is actually just going to show you the image itself, but it's taking its sweet time. There it is. So this is actually an image of an eight. And you can see that if you look at the actual input image, each of these values in the input image represents a particular pixel in the image. And it's basically from zero to one as to how dark the pixel is. So if it's a zero, it's a white pixel.
If it's near one, or one, it's a dark pixel. And then as we go down here, we can see this is the training data. So this is the shape of the output of the training data: it's basically a 10-dimension, or size-10, array. And that is going to be the output, where the training data is either going to be a zero or a one in each of these values. So that's what's called a one-hot vector, which basically means that there's only one item in the vector that is a one, and all the rest are zeros. So this gives you an indication of what the training data represents. This is the actual training label for the input image, which is an eight. And you can see that in the ninth position, which is index eight because it's zero-indexed, there's a one, which says that, hey, this image is an eight, and we're going to use that to actually train our neural network. This is that image I showed before. And so as you're training the neural network, it's going to look at each of these pixels and assign a weight to each particular pixel, because I'm only going to be doing a fairly shallow neural network. So this is just one layer. It's actually going to assign a weight to individual pixels as to whether that pixel indicates that the image represents a particular number. So here you have the blue, which indicates a positive weight, and the red, which indicates a negative weight. And so if you see pixels in these blue areas, then that generally will indicate that it's a zero. And the same thing for a one or a two. And you can see that this actually kind of maps, in this particular example, a little bit to the way an actual number looks. So here you can see that the eight kind of looks like an eight. And once you're done with that, you can set up the neural network and how you're going to actually train it. So this is actually defining the neural network itself.
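As an aside, the one-hot encoding described above is easy to sketch in plain NumPy; this small helper is hypothetical, just to illustrate the encoding:

```python
import numpy as np

def one_hot(label, num_classes=10):
    # A one-hot vector: a 1 in the position of the label, 0 everywhere else.
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

print(one_hot(8))  # the label for an image of an eight
# [0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]
```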
So here what I'm doing is creating a placeholder, this X. That's the input to our neural network. So X is the input, and you define the shape as being 784 dimensions big, and None, which basically means that it doesn't have to be 55,000 in size; it could be any size. And then you assign these other variables: the weights and the biases, which will be updated as we're training. And then here I'm going to define the actual operations I'm going to be doing on the values. So here I'm going to be doing a matrix multiplication. This is one of the predefined types of operations that I can use. I'm going to be multiplying X by W, and that's going to multiply by all the weights. So that's basically doing this operation right here, this matrix multiplication, and then at the end I'm going to be adding this B, which adds the biases. And then after that I'm going to be applying softmax, which is the final softmax to get the outputs. And so this gives me an actual neural network that I can then train. So now I'm going to define the training step. The training step defines the kind of back propagation that I'm going to be doing on the neural network. So here I'm defining a placeholder, which is Y. This is for the loss function. In this particular example I'm going to be using cross entropy, which is a type of loss function that you can use. There are several others that you could try, but this is a very simple example. And then I'm going to be using the gradient descent optimizer, which is an actual way of deciding how I update the weights and biases. So essentially, when you get the difference, you can then use gradient descent to figure out how you should update the weights and biases, how much you should actually update them by.
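The nudge-the-weights idea behind gradient descent can be sketched on a made-up one-parameter loss function. This illustrates the general technique, not the actual cross entropy from the demo:

```python
# Toy loss: L(w) = (w - 3)^2, minimized at w = 3. Its gradient is 2*(w - 3).
def loss(w):
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0              # initial weight, far from the minimum
learning_rate = 0.1  # how big each nudge is
for _ in range(100):
    w -= learning_rate * grad(w)  # nudge w against the gradient

print(w, loss(w))    # w converges toward 3, and the loss toward 0
```

The optimizer in the notebook does this same kind of nudging, just over hundreds of thousands of weights and biases at once, with the gradient computed by back propagation.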
And so essentially what you're using is this kind of optimizer in order to take, say, a difference in the output, and then map that to the changes in the weights and the biases that you need to make. And so this is going to tell me how to minimize my cross entropy function. If you have a visualization of how that looks, you have something like this, where you have this initial value, and you use the gradient descent optimizer to figure out which direction you should move, how you should nudge these values in order to get a better output. And you do this over and over and over again in order to get a better value. This does have a little bit of a problem with finding local minima and things like that, but it's a pretty good basic way of nudging your values around. So next I'm going to use the optimization, or the back propagation, as well as my neural network, in order to actually train it. Oh my God. Yes, I missed executing this step here. So this is going to initialize my session, which is actually going to initialize the training session for TensorFlow. And then it's going to loop and do mini-batch training over a batch of data a thousand times. So here what I'm doing is taking the training set and then picking a next batch of a hundred values. And what's really interesting about this is that I don't have to loop over the entire training set of 55,000 images and train on those every single time. I can take a random set of a hundred of those values and train on only that in each mini-batch. Which is really interesting, because if you're a person that enjoys statistics: you're essentially taking a population, which is your training set, and then taking a random sample of that training set and training on that, which will give you a representative sample of your training set.
And essentially what that does is give you almost the same results, statistically speaking, that you would get if you trained on the entire population. This is the same type of idea you would use if you were polling everybody to see whether they liked one presidential candidate versus the other. You don't have to ask everybody in the United States, or everybody in a particular country; you can ask just a random sample of them and still get something close to the actual results. So this saves a lot, a lot of time. This is, what is it, like a fifth of a percent of the actual data that you have to run through every time. So that saves a lot of time. So next I can actually test this to see how good my neural network is. Here what I'm doing is using equal, which is another type of operation that you can use in TensorFlow, and I'm applying argmax to the values. So this y value is the value that I get out of my neural network. And this y prime is the actual value from my training set, which is the correct value. So what I'm going to do is apply argmax to both of those. From the neural network I'm going to get values between zero and one, but what argmax does is just find the position of the maximum value in each vector, and then I can actually compare them. And then what we're going to do is actually check the accuracy. So I'm going to find the average of all of the correct predictions. If I do a prediction on all of them, then I can take the average of whether each one was equal or not, a zero or a one. And that gives me the actual accuracy, and I can run that in a session as well. And I get back that I have a neural network that is 91% accurate. And this is actually really, really bad.
One in 10 images is incorrect. But this is a very, very simple example. You can start doing much more complicated examples using MNIST, and I think the state of the art for MNIST is something like 99.997% accurate or something like that. So you can get very, very accurate if you just do a little bit more with the neural network. And one of the cool things is that the TensorFlow website has a lot of these kinds of tutorials on how to do this. So this particular example is the MNIST for beginners one. This is how to use the MNIST training set with TensorFlow. But if you want to do something more complicated to get better results, you can try the next tutorial, which is Deep MNIST for Experts. This adds a little bit more complexity to the original neural network in order to get something like five or six percent more accuracy. And then there are quite a few other ones, doing things like convolutional neural networks, recurrent neural networks, and so on. So there are quite a few examples which are very easy to go through, which makes TensorFlow a very attractive library for doing machine learning. And just for funsies, I went through and did the exact same training for MNIST with the Theano library. Theano is very, very similar to TensorFlow in how you would use it. And if you go through here, you can actually see I'm using the exact same input data from the TensorFlow examples. And you do something very, very similar. In Theano you have something called a shared object, which you use for the weights and the biases, rather than something called a variable. But then you can define the neural network using the same kind of softmax of the matrix multiplication and adding the biases. And then define the same sort of cross entropy for the loss function, as well as the training steps.
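Stepping back, the mini-batch sampling and the argmax-based accuracy check used in both walkthroughs can be sketched in plain NumPy, with random stand-in data instead of the real MNIST set and a real model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the training set: 55,000 examples with 10-class one-hot labels.
num_examples, num_classes = 55000, 10
labels = np.eye(num_classes)[rng.integers(0, num_classes, num_examples)]

# Mini-batch: a random sample of 100 examples, not the whole population.
batch_idx = rng.choice(num_examples, size=100, replace=False)
batch_labels = labels[batch_idx]

# Accuracy check: compare the argmax of the predictions against the argmax
# of the one-hot labels, then average the 0/1 correctness values.
predictions = rng.random((100, num_classes))  # stand-in network outputs
correct = np.argmax(predictions, axis=1) == np.argmax(batch_labels, axis=1)
accuracy = correct.mean()
print(accuracy)  # roughly 1 in 10 for random predictions
```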
So one thing that's a little bit different is that you have to do the back propagation a little bit yourself. Theano does give you a built-in gradient function that you can use by giving it the cost function, the cross entropy, and the weights and biases, but then you have to actually apply the update yourself. Once you do that, you can build a training function with Theano, do a thousand rounds of mini-batch training, and then run the test. In this particular one I got 89% accuracy, but you get essentially the same accuracy either way, because it's the same operations that you're doing. The main difference between something like Theano and TensorFlow is in how the core part of the library is built. TensorFlow lets you break up the operations much more easily and map them to specific devices, whereas Theano's core library makes it difficult, or pretty much impossible, to map training onto multiple GPUs or multiple devices. So I'm going to skip ahead a little bit. One of the things that makes TensorFlow different is its distributed training. It lets you map work to individual GPUs and CPUs, and it supports a lot of different types of distributed training, like data parallelism and model parallelism. There's a trade-off between data parallelism and model parallelism in how you parallelize the work. Model parallelism is essentially breaking up different parts of the model and training them on the same data across different devices or different machines. Data parallelism is essentially running the same model on multiple machines but splitting up the data between them. And each of those has somewhat different strengths and weaknesses.
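The "apply the update yourself" step amounts to subtracting the learning rate times each gradient from the corresponding parameter. Here is a minimal NumPy sketch of one such update for the softmax model, with the gradients written out by hand (in Theano the grad function would derive them for you; the batch, labels, and learning rate here are made-up illustration values):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
x = rng.normal(size=(32, 784))          # one mini-batch of 32 examples
labels = rng.integers(0, 10, size=32)
y_prime = np.eye(10)[labels]            # one-hot correct answers
W = np.zeros((784, 10))
b = np.zeros(10)
lr = 0.01                               # learning rate

def loss(W, b):
    y = softmax(x @ W + b)
    return -np.mean(np.sum(y_prime * np.log(y), axis=1))

before = loss(W, b)

# Gradients of the cross-entropy with respect to W and b
# (this is what Theano's built-in gradient function computes for you).
y = softmax(x @ W + b)
grad_W = x.T @ (y - y_prime) / len(x)
grad_b = (y - y_prime).mean(axis=0)

# The manual update step: new_param = param - learning_rate * gradient.
W = W - lr * grad_W
b = b - lr * grad_b

print(loss(W, b) < before)  # True: the loss goes down after one step
```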
At Google we tend to focus on data parallelism, but model parallelism works for a lot of different things, and TensorFlow supports both. I won't really get into the details of data parallelism and what the synchronous and asynchronous models are; if you're interested in that, I can talk with you a little bit later. But one of the problems you have when you try to distribute these machine learning trainings is that you have to transfer quite a lot of data between the individual machines, depending on how you're distributing the training. So you basically need a fast network. The operations take something like nanoseconds on individual GPUs, but transferring data over the network takes on the order of milliseconds, so there are orders of magnitude of difference, and the problem is that you bottleneck on the networking between the machines. At Google we've worked on making the connections between the machines as fast as possible, and we're planning to build a cloud version of this called Cloud ML, which supports running TensorFlow graphs in Google data centers. This lets you take advantage of the hardware in Google's data centers to run distributed training, which can reduce something that would take, say, eight hours down to 32 minutes on 20 nodes. That's about 15 times faster. As well as utilizing things like GPUs, we're also developing our own hardware for machine learning and matrix multiplication. We're calling these Tensor Processing Units, and they're a type of ASIC that we've developed ourselves at Google in order to get better performance per watt.
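As a toy illustration of the data parallelism idea, and not TensorFlow's actual distribution mechanism, the synchronous version amounts to splitting each batch across workers, letting each compute gradients on its own slice, and averaging the results. A NumPy sketch with a simple linear model and made-up data:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(8, 4))    # one batch of 8 examples, 4 features
t = rng.normal(size=8)         # regression targets
w = np.zeros(4)

def gradient(xs, ts, w):
    # Gradient of the mean squared error for a simple linear model.
    return 2 * xs.T @ (xs @ w - ts) / len(xs)

# Single-machine gradient on the full batch.
full = gradient(x, t, w)

# Data parallelism: each of two "workers" gets half the batch,
# computes its own gradient, and the results are averaged.
halves = [gradient(x[:4], t[:4], w), gradient(x[4:], t[4:], w)]
averaged = np.mean(halves, axis=0)

print(np.allclose(full, averaged))  # True: the same update, computed in parallel
```

In a real distributed setup that averaging step is exactly where the network bottleneck described above shows up: the gradients have to cross the wire every step.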
So GPUs are very power hungry, and we've developed something that's a little bit less power hungry but specifically geared toward machine learning. We're also planning to use these as part of our cloud machine learning offering. So that's all I had for this presentation. I know that you all are hungry, and I'm very sorry about running over, but if you're interested in TensorFlow, please check out the website and take a look at the examples. There are quite a lot of examples and tutorials, and I think that's one of the defining things about TensorFlow: the tutorials are very well written and very approachable. Also check out bit.ly/tensorflow-workshop. This is a really good workshop on building a TensorFlow model. It goes through the basic MNIST example and the more advanced MNIST example, as well as how to distribute it using Kubernetes and use TensorFlow Serving to build a production version of a machine learning service. So definitely check that out as well if you're interested. Thanks a lot, and thanks for coming. I know that you're all hungry, so those of you who are too hungry to stay around for questions can go, but if you do have any questions, I can try to take those right now. All right. Thank you again.