 Okay. I guess we can get started. So thanks again. Thank you so much for joining us. My name is Sandeep Gupta. I'm from the TensorFlow team in Google, and this talk is about machine learning and JavaScript. I also have my colleague Ping here, and he will co-present with me. So again, thanks and welcome. So we're seeing machine learning is beginning to have a really big impact in almost all fields of life around us. Every day we see major news headlines, we see new breakthroughs, whether it's in transportation, healthcare, environmental sciences, all kinds of applications, even arts and creativity, where problems that were really, really difficult to solve just a few years back are now being solved in very impactful ways and very significant ways by these kinds of new computational approaches and new ways of using computer systems. Maybe just a quick show of hands, how many of you are actively practicing machine learning or have some familiarity with machine learning? Okay. So a relatively small number. Because many of you are somewhat new to the field, let me just take a couple of minutes and introduce some terminology here, and this will give you a flavor of why these kinds of methods are becoming so popular. So in most of classical programming, when we are trying to solve a problem or write a program for a computer to solve a problem, the way we have usually done this is that we first come up with these rules and we try to write explicit code to codify those rules. So for example, if you're trying to write a program that takes images and tries to detect whether it's an activity of, let's say walking or running or bicycling, you might come up with some features or rules to describe that. One way to do that might be that let's measure the speed of the person. If the speed is less than some number, then we call this walking. If the speed is more than that number, maybe it is running. 
So you come up with these kinds of rules and you implement a computer program to solve the problem that you want to solve. The issue with these approaches is that they very soon run into limits. You encounter situations where your rules no longer work. And even if your rules work well for the problem you're trying to solve, they're generally not generalizable or extensible to a slightly different problem that you want to solve another time. So that's where classical programming runs into its limits. Machine learning turns this whole concept on its head; it turns it upside down. The way you approach this is: what if I had some examples where I already know the answers, and what if I could feed a lot of these examples and answers (we call this training data, with answers on that training data) into a computer program or a model, and this model has the property that it can learn from the examples it has been fed, so that it can come up with what the rules are. These rules may be in a form that humans can understand, or in some abstract form that the computer program chooses to describe the problem. So this becomes a very generalizable way of solving a problem. In practice, the way you do that is that you collect a lot of training data, and you have that data labeled (these are called human-generated labels). You feed it into a machine learning program or a model, and out come the rules, or a trained model. This is the training phase of the machine learning process. Once you have trained a model, which is some representation of the problem, you can feed new data into it, and that model is ready to give you new answers or new predictions on that data. This is called the inference phase. So, at a high level, this is what a machine learning way of solving a problem looks like. Just to look at this a little bit visually, let's say you're trying to classify images.
This is what's happening there. A model is a collection of layers, and these layers are just computational blocks. Each layer, and each element in that layer, is doing a very simple math operation: it takes in some numbers, multiplies them with some other numbers, and produces a new output. You feed this forward, and you wire the model so that when an image is fed in, it produces an output that is close to the output we expect. So if you feed in an image of a cat, we want all of this to flow through, and the output of our model should be a number that we designate as indicating cat. If it is not cat, then we calculate an error, a difference, and we propagate that error back through our model, and we tweak or adjust all the parameters of the model until we get the right answer. We keep doing this for lots and lots of examples, and then we have a trained model for the particular task we are trying to solve. So this is how the machine learning approach to solving a problem works. Now, the reason machine learning is really taking off and has become such an important part of problem solving today comes down to three main reasons. First, as we just saw, it relies on the availability of a lot of data, and not just quantity of data, but good quality data: data that is labeled and curated and represents the full variety of situations you will encounter in real life. The good news is that there are lots and lots of very large publicly available datasets, which make it easy for any developer to get started and train powerful machine learning models. The second aspect is that these models can be computationally quite expensive. The computations they run are simple, but they run millions and millions of them, so you need very significant computation power to run these models in a practically useful timeframe.
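The tweak-and-adjust loop described a moment ago can be illustrated with a toy sketch in plain JavaScript. This is purely illustrative, not TensorFlow.js code: a single weight `w` in a model `y = w * x` is repeatedly nudged against the prediction error until it fits the examples, the same idea real models apply across millions of parameters.

```javascript
// Toy version of the training loop: one weight, one input, many nudges.
function trainToyModel(examples, learningRate = 0.01, steps = 200) {
  let w = 0; // start with an arbitrary guess
  for (let i = 0; i < steps; i++) {
    for (const {x, y} of examples) {
      const prediction = w * x;
      const error = prediction - y;  // how far off the model is
      w -= learningRate * error * x; // nudge w against the error
    }
  }
  return w;
}

// Examples generated from y = 3x; training should recover w close to 3.
const w = trainToyModel([{x: 1, y: 3}, {x: 2, y: 6}, {x: 3, y: 9}]);
console.log(w.toFixed(2)); // ≈ 3.00
```

The same shape of loop (predict, measure error, adjust) is what a library like TensorFlow manages for you at scale.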
And now there is custom hardware, there are GPU advances and new types of accelerators coming up, which have made it very practical to run machine learning models in a very reasonable amount of time. And then lastly, research in the field of ML has been growing and advancing at a very fast pace. There are new publications coming out all the time, and new ways of solving problems. So all of these things together now give everyone the ability to do this for whatever problem you are interested in. And this is where frameworks like TensorFlow come in. TensorFlow is an open source library for doing machine learning, and what that means is that a lot of the mechanics and logistics of training a model, of doing this kind of backpropagation and adjusting all the weights, of running your experiments, of creating a model which can then be deployed and used in production, all of these things are managed for you by a library like TensorFlow, so you don't have to reinvent all of this. It also has a bunch of pre-trained models available that you can use off the shelf. But TensorFlow was written with a Python front end, and most of the machine learning tools out there today require one to learn Python. In fact, Python is termed the language for data science, which is unfortunate, because JavaScript is the most widely used programming language, and there really haven't been many accessible machine learning frameworks that JavaScript developers could use natively in JavaScript, without the burden of learning a new stack or a completely new programming paradigm. So motivated by that, we released this library called TensorFlow.js, which is basically a version of TensorFlow that is JavaScript native. It lets you run machine learning models, and even train machine learning models, in the browser. So you can run it in web browsers through JavaScript.
You can run it server side with Node.js, and as we'll show you, you can run it on a variety of other platforms where JavaScript can be used. This library is GPU accelerated, and we'll talk a little more about performance later, but we use WebGL acceleration in the browser, so it is very performant for the common types of machine learning models. And it is completely open source, so it's open for the community to use, build on, extend, and contribute back into the library. When we released this, our hope was that it would give web developers, JavaScript developers, and backend developers an easier way to get started with ML. And we've been very happy to see the adoption and uptake. This is an interesting example. This person, Pierre Reimerts, gives a lot of very influential talks in the JavaScript community, and he gave a talk recently at Nordic.js where he highlighted exactly the motivational points we had: the JavaScript community has been missing out on all this excitement around ML, and now, with TensorFlow.js, there's an easier way to get started and to use machine learning. In fact, he says down there that you can now bring the power of machine learning to your web application or JavaScript application with 10 lines of code. And we love this testimonial, except that it's five lines of code, not even 10. I'll show you that. So here, for example, is the rough template of bringing machine learning into your application. The first two lines of code are basically importing our library, and here we are showing the Node example on top. You import the tfjs-node package, and the second line imports one of our many pre-trained models.
So this is the COCO-SSD model, an object detection model trained so that when you give it images, it will recognize a bunch of common objects present in the image and give you bounding boxes for where those objects are. So you load the library and the model. If you're running this in the browser, the alternative is to just script-source it from our hosted libraries. Then you create an instance of the model, which is right there on that first line: I'm creating an instance of my COCO-SSD model and loading it. Then I load an image and decode a PNG image to convert it into a form that my machine learning model can ingest, and that's it. Then I call my model's detect function. So model.detect, you pass it the image object, and you get back your predictions. And what you get back, you see in that image on the right: it has identified a cup and a phone and a mouse, it puts bounding boxes on those objects, and you get a JSON object which tells you the names of those objects, the coordinates of where they are, and a probability of how confident it is in each prediction. So, a super powerful model, five lines of code, and your web application can now be using an object detector. You can do similar things with text and speech and lots of other types of models. So why is this a good idea? On the client side, in the browser, there are many, many advantages to running machine learning. The browser gives you a lot of interactivity. There is easy access to sensors like the webcam and microphone, so you can immediately take advantage of all this sensor data and feed it into your machine learning model. In that object detection case, for example, the images could be coming from a webcam stream. There is nothing to install: you just share a URL link with your users, and they have a web page which has the model in it, and they're using that model directly from that URL.
It has huge privacy implications, because you are running these models locally, client side; no data is going to the server. So for healthcare or any other privacy-sensitive type of application, this has enormous implications. It also reduces server-side costs, because you don't have to stand up complicated architecture. (Sorry, I think I lost my projector. Okay, the connector was a little loose. Let's hope this works.) And then lastly, as I mentioned, because we use WebGL acceleration, you get really good performance. On the server side, you can run more powerful models that may not be practical to run client side in the browser, and you can take full advantage of whatever hardware you have, whether that's multi-core machines or GPUs or even other custom hardware. There is a very large NPM package ecosystem, so if you are using machine learning in Node, you can benefit from that whole ecosystem of Node packages and bring machine learning directly into your Node stack. You don't have to have separate Python data science teams and a separate backend Node team who aren't really talking to each other; you can bring machine learning directly into Node and have a single stack. And because we bind to the TensorFlow C library when we run server side, we get a lot of performance benefits, and directly the ability to use any conventional machine learning model that has been trained on the Python side. So there are three main ways of using this library. One, and that object detection example was an example of this, is that you can take an existing machine learning model, whether it's a TensorFlow.js model or one of your Python models, and bring it in and run it with TensorFlow.js. The second way is to take a pre-trained model and then customize it on your own data to solve a particular problem, which is often what you need to do.
So we have easy ways of retraining a model, of modifying it on a small amount of additional data. And by the way, if any of you are familiar with the Google Cloud AutoML service, which is a really nice cloud-based way of bringing your own data to the cloud and getting a custom model trained for you with no ML expertise needed, that's also compatible with TensorFlow.js. You can train an AutoML model on the cloud and get back a trained TensorFlow.js model that you can run. And lastly, for those of you who want to experiment and write new models from scratch, there is a full low-level programming API in JavaScript, so you're writing JavaScript code and you can write new models. So the library can be used in any of these ways. Because JavaScript is such a versatile language and runs on many, many platforms, you can use TensorFlow.js in all these places. You can run it in the browser. You can run it on native mobile hybrid platforms such as React Native; we just recently announced integration with React Native, and you get first-class support with WebGL acceleration through React. You can run it with Node. And then on desktop, there are examples of people building Electron applications and using TensorFlow.js through Electron. So there are many ways of using this, and we are continuously working on adding support for more platforms. As I said, we prepackage a bunch of ML models for common ML tasks, and here are some examples. There are models for doing image classification and object detection. There are models for recognizing human pose; we had some of those demoed at our booth, and you're welcome to stop by after this and see more of them. There's a very nice model for audio commands, so if you speak words, it can recognize the spoken words and you can use that to drive actions. And then we are increasingly doing more and more around text.
So text has a variety of use cases, like sentiment and toxicity. And all of these models can either be used with a script source from our hosted scripts, or you can NPM install them. Here are some more examples: using these models as building blocks, you can build applications that solve these types of problems, like accessibility, sentiment analysis, conversational agents, and a variety of different things. All of those examples you're seeing on the right are instances of models running client side. So let me take a couple of minutes and show you a very quick demo, just to show how easy it is to retrain a model like this. This is an application called Teachable Machine. Has anybody seen Teachable Machine? This is something I would encourage you to try out on your own time as well, but let's just see how this works. So this is the Teachable Machine website, and it will show you how you can take an existing machine learning model and retrain it in a matter of seconds. I'm going to skip this tutorial. What you're seeing here is a simple image classifier, and this web session has already loaded a powerful image classification model called MobileNet, which is running in my browser session. We are going to modify this model to do a very simple rock, paper, scissors classification. We're going to output the word rock for my first class, paper for the second, and scissors for the third, okay? And now we're going to record the training images. The first class will be rock, then paper, then scissors, and we'll just record some images from my webcam. So let's record rock: I'll hold down this button and record some training images. Now let's do paper. And now scissors. I'm just recording training images right now, so ignore its predictions.
So now we have trained, and it's ready. I have a new version of the model running in my browser, so let's try it out. Rock, paper, rock, paper, scissors, paper, rock, scissors. So there you see how easy it is to train it with, you know, 50 images. And the nice thing about this is that you can very similarly train a speech model or a pose model. And with the new version of Teachable Machine, once you have trained this model, it gives you the ability to save it: you can put it on a shared drive somewhere, or you can download a TFJS-compatible model that you can run offline. So it's a very, very approachable way of getting started with machine learning. At this point, I want to turn it over to my colleague Ping, who will tell you more about the API. Thank you, Sandeep. Hi everyone. I first want to show you the system diagram of the library. As Sandeep mentioned, you can use our pre-trained models directly through the models API. The layers API is a high-level API for model building; it gives you a more abstract API so you can build things easily. And the core API is a fine-grained API that lets you control how you construct the neural network internally, and gives you fine control over training and execution. As Sandeep mentioned, we support both client-side and server-side execution. On the client side, we use WebGL, which gives you automatic GPU acceleration. And recently we created an alpha release for WebAssembly, which gives you better acceleration on the CPU side as well. On the Node side, we use a Node.js C binding directly into the TensorFlow C library, so you can utilize the speed of TensorFlow inside your Node runtime. And we also have GPU support: on any device that has CUDA support, like NVIDIA GPU cards, you can use the TensorFlow GPU library.
We also have something called headless-gl. What that gives you is that if you have an adapter that doesn't support CUDA, you can still get GPU acceleration on the Node side. It also works for things like IoT devices, which usually don't have a good GPU driver. What it actually gives you is the WebGL API on the server side, and you could use it in other ways if you want to. All right, so we give you a lot of models, but the fact is that you may have your own models. Maybe you have an ML department that builds your own model, or you have seen some nice model outside and you want to bring it into your JavaScript application. You can do that. We give you a converter that can convert TensorFlow models into a JavaScript-friendly format, and we do a lot of optimization for you so it can run faster in your browser or on native mobile devices. We also give you an API to download your model from any static file-serving service, like S3 or Google Cloud Storage, and you can inject that directly into your application and run predictions as I showed earlier. So that's the browser side. For the server side, we just announced a new API: basically, if you want to run a Python model inside Node, now you don't need to convert. You can run it directly using our C library binding, which means you have better ops support. Our converter supports about 200 ops, the core ops of TensorFlow, but TensorFlow actually has about 1,000 ops. This gives you all 1,000 ops, so you can run really powerful machine learning models inside Node right now. It supports both TensorFlow 1.0 and 2.0 models, and it gives you better performance. Why is that? Because Python TensorFlow actually runs inside Python, and that Python layer causes a lot of delay, whereas V8 is much faster than Python. That's why we are slightly faster than Python when you run a TensorFlow model directly inside Node.js.
So here are some performance numbers. For MobileNet, the image recognition model Sandeep was showing, TensorFlow.js gives you about 20 milliseconds inference time, meaning every recognition of an image takes about 20 milliseconds, which gives you 50 frames per second. If you're building a real-time application, that is plenty to play with. And TF Lite, if you don't know it, is a native implementation from Google, another open source project that runs on mobile devices; its performance is around 19 milliseconds on iPhone. We do have some room for improvement on Android phones, but we are really working on that. On the server side, you can see the Node performance is very close to the C++ performance. So if you have a really good GPU, you get about eight milliseconds per inference. You guys ready for some code? All right, so you can load your own model, and you can load our pre-trained models. But what if you want to build your own model? I want to show you how to do that. First, we'll tackle the high-level API. You're probably familiar with this: we're just loading our NPM packages. This is loading our Node package, which has the C binding into the TensorFlow C library; we also package the C library inside this NPM package. And you can load the GPU version if you have a GPU-enabled card. Let's go back to the image recognition model shown earlier. As Sandeep mentioned, a typical neural network is constructed layer by layer. Each layer extracts certain features out of the previous layer and passes the result on to the next layer. In this example, we'll show you how to build an image recognition model using the layers API. So it looks kind of crazy, right? A lot of code. But to be honest, it's not that bad; if you consider using jQuery or anything like that, it's not more complicated than that.
So here, the first thing is that we use a sequential model. What is a sequential model? Basically, something like 90% of the models out there are sequential, which means you stack the layers one by one. Next, we add a couple of layers. These are very typical neural network layers: the conv2d layer gives you a feature extraction kind of capability, and the max pooling gives you a kind of zoom-in capability, so you can look at the details of the image. At the end, we flatten the image into a one-dimensional vector, so that we can output the classification for the classes we want. What this last layer outputs is a probability for each class. So that's about it; that's the model we built. Then we use the compile method to set up how we will train this model. In particular, we set the loss function to categorical cross entropy. Long name, but honestly, you don't need to worry about it, you can just copy that. The optimizer is gradient descent; also easy to copy, three letters. Then you take that model, and you're ready to train. Training is very simple, just one method. You give it x, your input, and y, the output of your dataset, and epochs, which is how long you're going to run this training session. After training is done, we have other methods to show you the details of the training, what the accuracy is, and all the other good stuff. I didn't show it here, but after training is done, you can save the model to a file. That's for Node, but you can also save it to local storage in the browser, or ship it to some server through HTTP requests; we provide a lot of those different varieties. At the end, the model is ready, and you can plug it into your application and start making predictions. So that's the high-level API. What is the low-level API? The low-level API actually drills down inside the layers. Each layer actually consists of many, many smaller operations.
For example, there is matrix multiply, there are cost functions, there are things like cosine or sine functions, or activation functions you would use. I want to show you how you can do that with TensorFlow.js. Let's say we have a dataset we want to fit. You can see those dots: they are roughly the output of a polynomial function, but not perfectly, so you cannot really know exactly what the function is. We want to estimate the parameters of that function, so a, b, and c will be the goal for this model to estimate. We do the same thing: we load our library, and we create three variables with an initial value of 0.1. What is a variable? Variables are tensors that the model is going to update; the training session will tune them to make sure the output matches what we gave it. Here we use TensorFlow's low-level API, things like add, multiply, square, and with those low-level APIs you can construct a tiny neural network graph. This looks a little bit crazy, right? So long, just for a tiny little function. But we have a better way to do it: we can use chained methods to express the same thing more concisely. Here we define the loss function. Remember, last time we just said it is categorical cross entropy; here we define our own, because we want more control. So this is mean squared error: it calculates the difference between your model's output and your dataset's output. Then we use the same SGD function, the gradient descent function, as our optimizer. At the end, we manually run the minimization of the loss function for the number of epochs. So at the end of the day, it's the same as what you just saw happening internally in the high-level API. So that's about it. TensorFlow.js is also part of the TensorFlow ecosystem: we not only give you JavaScript, we also integrate into the TensorFlow world.
We have integration with TensorBoard, TensorFlow's visualization tool, which gives you a visualization of how the training happens: a lot of charts showing how the accuracy increases while the training happens. The last line there is how you plug into the TensorBoard visualization, so you can basically watch it live when you have TensorBoard open. Oh, yeah, here we go: we do have a graph. That's what you see while the training happens in the previous example. All right, now I'll hand it back to Sandeep to talk about our users and community. Thanks. I know we're running almost out of time, so I'll just take about three more minutes, show you a couple more examples, and point to some resources to get started. TensorFlow.js is a growing community. As I mentioned, it's an open source project. We are very happy to see the download statistics and to see more and more people join as contributors. There are more than 200 people actively contributing code to TensorFlow.js, and we invite you all to become part of this. Many developers are also building really powerful extensions and libraries on top of TensorFlow.js; there are some examples there which let you do specialized custom work on top of the library. I want to show you three examples. The first one is from NearForm, which is here at this conference, one of the sponsors. They have built this really nice application called Clinic.js, which is a profiling tool. It plugs into your Node workloads and helps you profile and look for performance issues, memory utilization, CPU consumption, things like that. And in this tool, they run a TensorFlow.js model for denoising and understanding what the profile data means. You can check out a demo at the NearForm booth here at the conference. Node-RED is an open source library from IBM. It's a flow-based way of wiring together IoT solutions.
So you have a drag-and-drop model and you can create IoT workflows. And Node-RED offers integration with TensorFlow.js, so all the TensorFlow.js capabilities become easy drag-and-drop modules that you can use to bring ML into your IoT stack. Very similarly, Losant is another company that builds enterprise-grade IoT services and solutions. Losant has been looking at ways of incorporating TensorFlow.js for client-side, edge prediction with machine learning, and they wrote a really nice blog post showing how you could use this to build a predictive maintenance application with sensor data. So a lot of these types of examples are beginning to show the power of an easy ML workflow in Node and IoT. In closing, I wanted to show this. If you have machine learning needs, if as JavaScript developers or Node developers you have certain needs or requirements from the library, we would love to get your input. There's a UXR study that we are doing, so please feel free to join us and give us your feedback. We would love to hear from you. And here are some links that are very useful for getting started. Of course, tensorflow.org/js is our main website. It has all the things: examples, documentation, tutorials, and the link to the GitHub repositories, where we have a lot of these examples built. We have a mailing list, listed in red there. If you join the mailing list, that's an excellent way to interact with other developers and directly with us on the TensorFlow.js team. One thing I wanted to show you: if you go to the Google code labs, which we also have running at the Google booth outside, and search for TensorFlow.js, all of our examples are available as interactive notebooks. So you can click through them and basically be up and running, and explore all these different features we talked about here today.
Lastly, there is a new textbook that has come out, which is a really nice way to learn the basics of machine learning from a JavaScript programmer's point of view. All the examples in this book are written in JavaScript, so that's something worth checking out. And we also have an overarching, comprehensive TensorFlow course on Coursera, which has a TFJS module that was just released earlier this week. So there are plenty of resources to get started, and we're looking forward to your involvement in the community. Again, thank you so much for your attention. Thank you.