All right. Should I start? OK. So good afternoon, everyone. My name is Virat. You can call me Ta. And today I'm going to talk about machine learning in JavaScript with TensorFlow.js, which is another exciting thing that was announced during the TensorFlow Summit a few weeks ago. First of all, let me see hands. How many of you are actually JavaScript developers? OK, more than half. I'm happy, because otherwise I would just skip this talk. All right. I'm a Google Developers Expert in machine learning based in Bangkok. I was previously a data scientist at Facebook headquarters. I recently moved back to Asia and joined the Google GDE program. All right. And I'm going through this very quickly, but if you want to catch up on any of the details, this talk is also available on YouTube. OK. So a lot of you here have probably already used TensorFlow. I saw some hands earlier in the talk. And a lot of us think we can only do machine learning in Python. But that is changing. And all of you have already seen this from Martin's talk. This is actually a very basic neural network written purely in JavaScript. It was pretty much a prototype. They built it mainly for educational purposes, because it's really, really helpful for understanding how machines actually learn from data. So keep this in mind. I'm going to go through the benefits of in-browser machine learning. First of all, no drivers, no installs. If you've ever tried to set up machine learning before, you probably had to spend half an hour at the very least trying to install every single thing, all the libraries. And if you're lucky, everything's compatible. If not, you have to reinstall everything all over again and clean up your environment all over again. So that was not very fun. With in-browser machine learning, you just open your browser and everything just works. It sounds like magic. So that's the first benefit. The second one is interactivity.
Just like what you saw in the TensorFlow Playground, the huge benefit is that you can play around with it. You can change the number of neurons, change the activation functions, change the data set. You can try changing everything and see how things respond. So interactivity is another thing you probably want in your AI or machine learning product. The third part is that once you get all this up and running in web browsers, you can open it on your mobile phone as well. And smartphones these days come with tons of sensor data that you can use to enhance your machine learning application. You've got GPS. You've got a microphone, an accelerometer, a gyroscope, a camera. So you can feed all this data into your machine learning model and build cool applications out of it. And the last part, which is probably the most important thing, is that data actually stays on the client, because you can build the model, train the model, and make predictions, everything in your web browser. Think of the big companies like Google, Facebook, or Baidu. The way they get data is that they track every single click, every single thing we do. They pull tons of data out of your browser, send it back to their servers, and log what you actually did on the web. With this in-browser machine learning capability, a lot of things can stay on your machine. And that has a huge benefit for privacy. All right. But the Playground was built as a one-off tool. And the team saw the value: if we give everybody the capability to build machine learning using JavaScript running in the web browser, that would be really cool. And that's the beginning of deeplearn.js. So deeplearn.js is actually a pretty new project, too.
It started just last August. And the cool part is that it's GPU-accelerated via WebGL. If you don't know, your web browser has WebGL, which talks directly to your graphics card. So if you have a fancy graphics card on your computer, your web browser can actually leverage it. And the cool part is it allows inference and training entirely in your web browser. And just from August 2017 up to now, a lot of people have built a lot of cool applications out of deeplearn.js. One of them is style transfer. You can do everything in your web browser. I'm not going to do the demo here because we're short on time. But consider the traditional way of doing machine learning: you would probably have a web server up and running to serve your TensorFlow model. With this, you can download the model and do the style transfer right in your web browser. Instead of sending, say, four or five megabytes of your photo to the server, waiting for the server to run, and getting four or five megabytes of photo back, and then saying, OK, I actually don't like this photo, I want to do it again, which means a lot of data going back and forth, you can simply download the model once and run everything locally. And let's say I want to do style transfer using my own photo, my selfie. I probably don't want any third party storing my selfie on their server. I'm not sure what they're going to do with my photos. With this running in my web browser, my photo is not stored anywhere else in the world. And this is another cool application built using deeplearn.js. This is a generative model. Basically they tried to learn from the tons of fonts we have on the internet: how bold fonts actually work, how italic fonts actually work.
Let me show you this image so you understand a bit more. If you bold a font in a very naive way, this is how you do it: you basically just expand the glyph outline by two pixels. But if you want to do it beautifully, this is how a typographer would do it. They increase the boldness on only one side and keep the other side the same, just to keep the whole thing pretty. So basically they tried to learn that. And this is basically a vector representing the bolding action. Based on that, you can build a generative model: you start with a particular font, say you want to make it a little bolder, make it more slanted, and you get a new font that is quite beautiful. All right. OK, and how many of you have seen this? The Teachable Machine. Not a lot, actually. OK, if you haven't seen it, I highly recommend it. It's a good way to demonstrate how machine learning works from beginning to end. Basically you put in the data, train it, and do the prediction, all in your web browser. It was built using deeplearn.js, but a new version using TensorFlow.js already came out. And this was me playing Doom. How many of you here know Doom? You've got to be somewhat old, right? OK, so given that you can do prediction, training, and everything in your web browser, you can build this kind of cool application where I control the game using my webcam. So I do this shooting action and all of a sudden I start shooting that barrel of gas. All right. And that was deeplearn.js. Everybody loved it. A lot of creative applications came out of it, and during the TensorFlow Summit, they decided to merge and convert it into TensorFlow.js. With TensorFlow.js, like with deeplearn.js, you can author models, creating a new model directly in your web browser.
You can also import pre-trained models. You've already heard about pre-trained models several times today: you have MobileNet, you have NASNet, all kinds of models that people have already built for you and that you just want to use. If you want to do that, you can simply import them into your web browser and use them right away. And the last part is that you can retrain some of the imported models, and Martin already talked about that. If you only have a little data to train with, you can take a pre-trained model, retrain it a bit, and make it serve whatever application you want. All right, and this is how TensorFlow.js, not deeplearn.js anymore, looks. Basically it runs in your web browser, and as I told you earlier, it runs on WebGL, and it has two levels of API. The first one is called the Ops API. In the actual docs it's called the Core API. It's roughly at the level of a raw TensorFlow model: it gives you all the low-level operations on tensors. I will show it to you shortly. It also runs in eager mode. You've already heard about eager execution mode; it basically runs that way, so there's no need to build a graph and all that. The other layer is the Layers API, which is the high-level API where you can simply ask for a dense network and that kind of thing. That's roughly equivalent to a Keras model, and you can actually import Keras models into it. I will show you shortly. OK, I'll move to my first demo. This is basically just to show you the, oops, this is not the first one. This is also not the first one. Oh, OK, this is the first one. All right. So this is just to show you a bunch of things you can do using TensorFlow.js. This is a quick tool I built to demonstrate that in your web browser you might have dynamic data. Your users come in, one clicks here, another clicks there. Everybody takes different actions, so you have a dynamic input stream coming in. And the code looks something like this.
If you already use TensorFlow, you should be able to understand this code almost immediately. The cool part I want to point out is this chaining style. I really like it. If you're a JavaScript developer, you'll love this: you can call one function, then .add() another operation, and keep chaining. This makes my life much easier. OK, I'll show you the first model. I'm going to train using the Core API. If you remember, the Core API is the one that uses very low-level operations. OK, so this is it. I already have the model. And the code looks something like this. It starts off by declaring variables. If you use TensorFlow in Python, this should be familiar to you. Then I define a function for the model. I have a cubic polynomial here. Then I define the optimizer. I'm going to use SGD in this case, with a learning rate of 0.8. And then I call optimizer.minimize(). And that's it. Then I just run everything, just like you would in Python TensorFlow. But the cool part, again, is interactivity. Let's say I want to change the parameters of my training. Let's say I don't want to train for 50 iterations; let's train for just four. Let's see if it works. It's going to work, but it's not that great, right? Let's train for slightly more, 16. OK, it's already really good. And if I go all the way up to 100, it's almost overfitting this data already. All right. This is just to show you that with dynamic data coming in and dynamic model parameters, you can adjust all of this, do the training, and do the prediction in your web browser. The second model I'm going to show is trained using the Layers API, which is higher level. If you haven't heard of it, there's a theorem called the universal approximation theorem. It says that a simple neural network with a single hidden layer is sufficient to approximate any continuous function, given enough hidden nodes. All right, and I will show that to you. This is my model.
It's training, training, and training. I'm using eight hidden nodes now, down here. OK, it's going to take a while, so while it's training I'll just show you the code. It's really simple. You just call tf.sequential(). And because I have only a single hidden layer, I only have this one tf.layers.dense() call. Then I put in the number of hidden nodes I want, I use ReLU as the activation function, and this is my input shape. Then I combine everything into the last layer, and that's it. You build a model in basically these five lines of code. All right, with this you get this kind of model, and let's see if it predicts better if I use more hidden nodes. Hopefully it's better. OK, it's slightly better, but not that great. OK, we don't have much time, so I'm going to move on to the next demo. All right. What I showed you is how to create a model in your web browser, do the training in the web browser, and make predictions in the web browser. The next thing I'm going to show you is how to import existing models into your web browser. So in Python, you can save a model, and two kinds of saved models are supported. In TensorFlow, you call SavedModelBuilder and then builder.save(), and that's how you get a TensorFlow SavedModel. If you're a Keras user, you can save the model and get files that include all the weights and your model topology. Now that you have the model from Python, you want to convert it into something readable by TensorFlow.js. They provide a pip package for this: you run pip install tensorflowjs, and then you can convert a TensorFlow model, or a Keras model, into a TensorFlow.js-compatible model. One thing I want to note, OK, it's actually on the next slide. Once you have a model that's compatible with JavaScript, you can simply load it in your JavaScript code. For a TensorFlow model, you call loadFrozenModel(), right here.
For Keras, you call loadModel(). The difference is that a TensorFlow SavedModel, because it's already very low level, doesn't allow you to retrain the model. You can only make predictions with a TensorFlow model. But with a Keras model, because most of them use the higher-level API, you can actually retrain the model you imported. All right. And what the model conversion does is graph optimization for you. Let's say your graph has tons of nodes that aren't necessary for your computation; it will get rid of them for you. It also does some weight optimization for browser caching. Currently it supports 90-plus TensorFlow ops, and they plan to support control flow ops very soon. It also supports 32-plus commonly used TensorFlow Keras layers for inference, training, and evaluation. So right now, if you take a random Keras model and try to convert it into a TensorFlow.js model, it might not work if it uses layers outside the common ones included here. All right, now I'm going to show you the next demo, which should be pretty straightforward. Basically, I just want to show off here: this is the one line of code that gets you up and running in your web browser. You just call tf.loadModel() and there you go, you've got MobileNet up and running. So here I've got random photos, and it predicts that this one is a seashore with 21% probability. I'll pick a new random photo. OK, I don't even know what that is. Let's try something else. Internet, yes. OK, I don't know what that is either. OK, forget the photos. You can do webcam as well. I think this should work. There's a class called microphone, so I'll try this and see if it works. Yes, so it's a microphone with somewhat high probability. So basically just two lines of code to load up the model, and all of a sudden you can start doing all this. OK, I'm flying through all this. Let me close the webcam or I'll crash my browser.
Okay, okay, and with this, when Google launched it during the TensorFlow Summit, they showed off this Emoji Scavenger Hunt. Now that you can import image recognition capability into your web browser, they built this cute game. Let's see if I can make it work. Basically it asks you to show it something it's looking for. Oh no, it takes a while. Okay, be patient. Okay, there you go. Oh shoot, how can I find this? Am I seeing a ski mask? Nope. Am I seeing a band-aid? Nope. I think I saw a lens cap. Do I see an elbow? Okay, it's trying to make predictions based on what it's seeing right now. Do I spy a plunger? Not a plunger. Where's my plunger? Not a plunger. Might I see a spatula? Okay. I think I saw a buckle. Nope. Oh no, your time is up. Let's see if it asks for something I can find here. Okay, no, I can't find a TV. I think I saw a spatula. All right, in the interest of time, I'm just going to stop this, but it's really fun, and it actually works on your mobile phone. So you can play it on your way back home. All right, and what else do I have? Oh, oops, okay. So now that you have this object detection capability, you can build it into your mobile web or web application very easily. This is one of the examples I came up with. This is a restaurant review app called Wongnai, from Thailand. They basically ask you to classify photos as food menu, storefront, inside, or others, right? So let's say you're a user of this restaurant review app. You're looking for food photos; sometimes you get selfies. You're looking for storefronts; sometimes you get something else. So a capability that lets you classify images right away should be really good for your app. And I've been asking my friends: I'm going to give a talk in Singapore, what joke should I make in this talk?
And they were like, okay, right now people are kind of obsessed with this high-SES and low-SES thing. So if your startup has a sense of humor and you want to tag photos by whether they're for high-SES or low-SES people, you should be able to do that in your web browser right now. Okay, and the last one. Martin sort of talked about this already: transfer learning. Basically you don't have to retrain the whole model from scratch. What I'm going to show here is that I'll load up MobileNet, but instead of using the output from MobileNet directly, I'll pull out a particular layer inside that model and use it as the input to my new model. And it works something like this. Let me turn on the webcam. Sorry, you have to see my face quite a lot during this talk. Okay, I'm going to start adding some training examples. It will take a while. All right, so this is class A. It's just going to be my face. I'm moving around a bit because when I actually do the prediction, my face might not be in exactly the same place. Okay, class B, let me do my watch. And hopefully it can tell. I actually tried this earlier, before the talk, and it didn't work. I think the lighting conditions in this room somehow ruined my model. But I'm just going to try it again. Let's see, the third class will be this card. All right, now that I've collected the data, and again, everything is happening in the web browser, I'm going to start the training. Basically I just add one dense layer right here. I take the input from one of the layers in MobileNet and retrain based on that. So this is my model, that's it, very short. And I'm just going to train it a bit. Hopefully it works. Okay, the loss already came down a lot. I'm guessing it's going to work. Okay, okay, let's go back up. Okay, now I'll do the prediction. Okay, this should be class A, my face. And if I show my watch, it should change to class B. Yes. And if I hold up my orange card here, it should change to class C.
Yes, it works. All right. And again, Google made it really fun. They launched a demo, which is this Pac-Man game. But I don't think I'm going to embarrass myself playing Pac-Man with my face in front of all of you. If you're interested, you can go to this demo. Basically, it asks you to train how you would signal up and down, left and right, and you might look a little funny in front of everyone, so I'm not going to show it to you. But yes, that's something you can do with this. And currently, Google has built tons of experiments based on TensorFlow.js. A lot of them actually come from the Magenta group; they do a bunch of art and music applications using AI. I think it's a lot of fun, and you can try playing around with them and dig into the GitHub code to see how they actually use TensorFlow.js in the models. And lastly, one thing you might want to ask: why would I use TensorFlow.js? My TensorFlow C++ code is already really fast. So here's the benchmark against standard TensorFlow C++. If you run it on a GTX 1080 with CUDA, a pretty powerful graphics card, you can run MobileNet to classify an image in 2.8 milliseconds. And on a normal laptop, it probably takes 56-something milliseconds. But TensorFlow.js can still be really fast. If you have a really good graphics card in your computer, imagine recognizing an object within 10 milliseconds. That's really, really fast. And if I'm doing this on my okay laptop, it still runs under 100 milliseconds. So I would say that's still pretty fast and allows a certain level of interactivity in your web browser. And soon, you should be able to do this server-side. A lot of you who are already JavaScript developers probably want to do this in Node.js, and I think they've already launched the beta.
And you should be able to use JavaScript to call into TensorFlow on TPU, GPU, and CPU. It's just another language binding for you. It should come out fully soon. All right, just a quick recap. With TensorFlow.js, you have the Ops API, which is consistent with the eager execution API in Python and lets you build models, train models, and execute models in your web browser. It has the Layers API, which lets you do high-level things easily with Keras-style model building. And you can import TensorFlow SavedModels or Keras models directly into your web browser. And there are tons of demos and examples. All the stuff I showed here, you should be able to find the code somewhere in the tutorials and documentation. If you're interested, go to js.tensorflow.org, where you can find everything about TensorFlow.js, and that's the GitHub where you can find all the code. All right. And if you play around with it too much, this is what you'll get from your web browser. I got it quite frequently while I was working on the demos, so be careful. All right. Thank you. That's it for me. Thank you. Thank you for being around. Yup. Wonderful.