going to be presented by Shivay Lamba. He's a software developer specializing in DevOps, machine learning, and full-stack development. He's an open-source enthusiast who has been part of various programs like Google Summer of Code, where he has been a mentor, and he's an active community participant as well. So let's get him on board and start the event. Welcome, Shivay.

Yeah, thank you so much, Weber, for introducing me. It's really great to be presenting today at PyCon India. Without wasting any further time, I'll quickly share my screen. I think my screen is shared now. So again, welcome everyone to PyCon India 2021. I'm really excited to be presenting a talk on using the TensorFlow.js converter to convert Python-based machine learning models into JavaScript-based models, so that you can use them directly from JavaScript. We'll talk about why it's necessary to do so and what the benefits of converting Python-based models into JavaScript models are, and we'll also see a small demo of the process. A very quick introduction, since Weber already introduced me: I'm Shivay Lamba. I'm currently a Google Summer of Code mentor with organizations like TensorFlow and MediaPipe, where I've been mentoring specifically for TensorFlow.js and for JavaScript models in MediaPipe. I've also been an active contributor to TensorFlow.js, and I'm part of its SIG working group and of the community outreach for the Asia-Pacific region. So let's get started. First, what exactly is TensorFlow.js? TensorFlow.js was originally launched as deeplearn.js.
At its core, TensorFlow.js is an open-source machine learning library released by Google in 2018, made primarily so you can run machine learning models on the web while writing those models in JavaScript. Traditionally, most machine learning was done in Python or MATLAB, and one of the biggest issues was on the web-developer side. JavaScript is the language of the web, and if a web developer wanted to add a machine learning model to their web application, they had to either learn Python or collaborate with someone who knew it, create the model in Python, deploy it to a server using a framework like Django or Flask, and then call it through an API. So even if their project was a full-stack JavaScript application, with the backend running in Node and the frontend in JavaScript, they had to run a separate server and call it as an external API. The Google team wanted to give developers the ability to use machine learning models directly in JavaScript, and that led to the creation of TensorFlow.js. It has two main APIs, the Core API and the Layers API, which we'll come to shortly. It can be used to build new models entirely in JavaScript, to retrain existing models, or to load existing models for inference. It uses the CPU and GPU for both training and inference. You can also create server-ready machine learning models, with active development through the Node bindings and the huge npm ecosystem that comes with running it in Node.js.
Now, if we talk about the TF.js ecosystem: essentially it's TensorFlow, an open-source machine learning library, running in JavaScript, with support for several different backends, such as WebGL, WebAssembly, and also WebGPU. Of course, the biggest advantage of running machine learning in JavaScript is that JavaScript runs on all kinds of platforms: browsers; servers, with Node.js support; desktop, using Electron.js; mobile devices, using React Native; and even IoT, where you can run machine learning models on, say, an Arduino with a Node.js-based backend. That is what makes all of this so versatile. As we just discussed, there is support for a wide variety of browsers, and for mobile we also have PWAs, progressive web apps, which let you ship your web application with Android and iOS cross-platform support. Essentially, TensorFlow.js lets you do a few things. You can run pre-existing models: pre-trained models that come packaged for JavaScript or that have been converted from Python using the tool we're covering today, the TensorFlow.js converter. You can also retrain existing models with transfer learning. For those who may not know, transfer learning is the process of taking an existing machine learning model and training it further on your own data, so that it learns the characteristics of that particular data and updates its weights accordingly.
Or you can write machine learning models from scratch directly in JavaScript. What we're going to look at today is models converted from Python into a JavaScript-friendly format, and how to use them in a live, working application. The next thing is, of course, transfer learning: taking a pre-existing model and running it over new data, your own data, so that it learns the characteristics of that data and adjusts itself to give relevant predictions for it. That's one of the things we'll see in an example today. Now, when we discuss the TensorFlow.js APIs, there are two types, and they let you create your own machine learning models directly in JavaScript. One is the Layers API, which is much like Keras, and the other is the low-level Ops API, which lets you do a lot more: for example, you can customize the mathematics running underneath, such as the linear algebra, which gives you much more control when building these models. If you look at the whole ecosystem, at the top sit the pre-trained models; below that, with less abstraction, is the Layers API, similar to how Keras is built on top of TensorFlow, where TensorFlow at its core lets you customize the mathematics however you want.
But if you want more granular control, more freedom to define the model and the mathematics going on behind the machine learning yourself, you can use the Core, or Ops, API. It can run on the client side, in the browser, with support for the CPU, WebGL, and, more recently, WebAssembly. It can also run on the back end, on the server side with Node.js, where you can use either the TensorFlow CPU or the TensorFlow GPU bindings. That means you can run machine learning models in Node.js as well, which lets you run much heavier, more powerful models. That's really essential when you're trying to run larger models, because on the client side there is an obvious limitation on how powerful a model you can run. And on the back end you also get the whole ecosystem of npm modules, which is genuinely useful when running these models. So, to recap: the Layers API corresponds to Keras-based models, and the Core API is where the TensorFlow SavedModel comes in.
If you're not aware: whenever you create a TensorFlow-based model and save it, you can end up with, say, a .h5 file, which contains all the weights associated with the model. That can be converted into a TensorFlow.js-equivalent model using the TFJS converter, which we're going to look at right now. Of course, if anyone has questions, do ask them as we discuss the converter. Quickly, on the Core API: it allows low-level operations, such as changing whatever linear algebra transformations you want, and it targets the GPU via WebGL or the CPU via Wasm. The Layers API is built on top of the Core API and gives you high-level abstractions without having to worry too much about the linear algebra or the mathematics going on underneath. And now, finally, the TensorFlow.js converter. The converter lets you convert pre-trained TensorFlow and Keras models into a web-friendly format. Whether you have a TensorFlow-based model or a Keras-based model built in Python, both can be converted into a web-friendly format; we'll see in a demo shortly how to do that. The result is generally a model.json file, and we'll talk about the structure of that file very soon.
Essentially, whatever you have, whether it's a TensorFlow SavedModel or even a TensorFlow Hub model (TensorFlow Hub, if you're not aware, is an online repository where machine learning models are hosted), can be converted directly into a JavaScript-friendly model.json file that TensorFlow.js can run inference with. Now, the inputs: the converter takes four parameters. The first is the input format: whether it's a TensorFlow SavedModel, a Keras-based model, or a TF Hub module. The second is the output format: either a TFJS layers model or a graph model. If you convert a SavedModel, you get a graph model; if you convert a Keras-based model, you get a layers model. Then you give the path to the model you want to convert, for example your .h5 file if you have a Keras model. And finally you give the path for the saved output, which will be a folder with two kinds of files: the model.json file and the weight shards. We'll discuss what those are shortly.
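To make those four parameters concrete, here is a minimal sketch of assembling the converter invocation from Python. The paths (`tfjs_model.h5`, `tfjs_model`) are the ones used in the demo later; actually executing the command assumes the `tensorflowjs` pip package is installed, so this sketch only builds and prints it.

```python
# Sketch: assembling the tensorflowjs_converter command line.
# Running the real command requires `pip install tensorflowjs`;
# here we only construct and print it.
import shlex

input_format = "keras"        # keras | tf_saved_model | tf_hub
input_path = "tfjs_model.h5"  # the Keras .h5 file saved from Python
output_dir = "tfjs_model"     # will receive model.json + weight shards

cmd = [
    "tensorflowjs_converter",
    f"--input_format={input_format}",
    input_path,
    output_dir,
]
print(shlex.join(cmd))
```

Running the printed command (for example via `subprocess.run(cmd)`) is what produces the output folder with model.json and the shards.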
Apart from this, the supported input formats, as I've discussed, are the TensorFlow SavedModel, the HDF5 format (that is, the Keras model), and the TensorFlow Hub module. And how do you use the TFJS converter? It's really simple, a two-step process. First, you convert your Keras HDF5 model, your TensorFlow SavedModel, or your TensorFlow Hub module into a web-friendly format using the converter command shown on the slide. Then you load it directly into the browser and start running it. And that brings us to demo time. So without wasting any further time, let me share my screen, where I've created a small demo for everyone to check. I hope my screen is visible. Here I have a tfjs.ipynb notebook. As you can see, I've created a simple machine learning model, a Keras-based model, for the Pima Indians diabetes dataset. If you're not familiar with it, it's a very popular dataset; if I quickly show you the CSV, it consists of parameters like age, number of pregnancies, BMI, the diabetes pedigree function, and so on. The idea is to use this dataset to predict the probability that a person will develop diabetes. So I've created this very simple notebook where I'm using keras.models and keras.layers, specifically the Dense layer.
First, I import the dataset I just showed you. Then I create a very simple neural network with three hidden layers: the ReLU activation for the first two layers, and sigmoid as the activation for the third. Once the model is defined, I call model.compile to compile it; the loss function I've used is binary cross-entropy and the optimizer is Adam. After compiling, I call model.fit to fit the model, with the number of epochs, the number of passes over the training data, set to 150. Then I evaluate the model's accuracy with this print statement, splitting the dataset into training and testing sets, testing the model after it has trained on the data, and printing its predictions. Finally, I save it as a Keras HDF5 file, tfjs_model.h5. If I run this, you'll see that the training has already started; it runs the epochs up to 150, with the network training itself over the input parameters given in the Pima dataset. And as you can see, it ends up with an accuracy of about 75%.
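The model just described can be sketched like this. The layer widths (12 and 8) are an assumption, the common tutorial sizes for this dataset, which has 8 input features; the talk does not state them. The training data below is random, only to exercise the pipeline, so no accuracy claim is made.

```python
# Sketch of the Pima-diabetes Keras model from the demo: three Dense
# layers (ReLU, ReLU, sigmoid), binary cross-entropy loss, Adam.
# Layer sizes 12/8/1 are assumed, not taken from the talk.
import numpy as np
from tensorflow import keras
from tensorflow.keras.layers import Dense

model = keras.Sequential([
    keras.Input(shape=(8,)),         # 8 feature columns in the Pima CSV
    Dense(12, activation="relu"),
    Dense(8, activation="relu"),
    Dense(1, activation="sigmoid"),  # probability of diabetes
])
model.compile(loss="binary_crossentropy", optimizer="adam",
              metrics=["accuracy"])

# The talk trains for 150 epochs on the real CSV; one epoch on random
# data stands in for that here.
X = np.random.rand(64, 8).astype("float32")
y = np.random.randint(0, 2, size=(64, 1))
model.fit(X, y, epochs=1, verbose=0)
model.save("tfjs_model.h5")          # the .h5 file fed to the converter
```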
So I've been able to save my file as tfjs_model.h5, and you can download it for reference. The next step is to take this file and convert it into its TensorFlow.js equivalent. For that, you first need to install TensorFlow.js for Python, so we run pip install tensorflowjs; for me, the requirements are already satisfied. Then we use the tensorflowjs_converter. If you look carefully at the syntax, it's exactly what I showed in the slides: the input format we give is keras; then the input path, which is tfjs_model.h5; and then the path of the output directory it will convert into, which is tfjs_model. Once you run this and the folder is generated, there are two essential kinds of files. One is model.json. In it you can see the format, for example layers-model; that it was generated by Keras, because we used a Keras-based model; and that it was converted by the TensorFlow.js converter. You'll also find a lot more detail about the model, since we used a Sequential model with a binary cross-entropy loss.
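As a rough illustration of what lands in model.json, here is a skeleton parsed in Python. The field names match the converter's layers-model output; the values are abbreviated placeholders, not an actual converted model.

```python
# Illustrative skeleton of a converter-produced model.json.
# Real files carry the full layer configs and weight specs.
import json

model_json = json.loads("""
{
  "format": "layers-model",
  "generatedBy": "keras",
  "convertedBy": "TensorFlow.js Converter",
  "modelTopology": {
    "model_config": {"class_name": "Sequential", "config": {"layers": []}}
  },
  "weightsManifest": [
    {"paths": ["group1-shard1of1.bin"], "weights": []}
  ]
}
""")

# Topology and bookkeeping live here; the actual weight values live in
# the binary shard files listed under weightsManifest -> paths.
print(model_json["format"])
print(model_json["weightsManifest"][0]["paths"])
```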
So all of the information that was defined in the Python code can now be seen inside the model.json file. And you might be asking: where exactly are the weights? The weights are inside the file called group1-shard1of1.bin; that shard holds all the weights needed for the pre-trained model we're going to use. Now, I've also complemented this with a very simple web application. I've created a simple index.html file with a form-based system asking for the various parameters that this model, trained on the Pima dataset, expects.

Sorry to barge in. Can you increase the font size of the code so it's visible to everyone?

Is it visible now?

Yeah.

All right. So what I've done is I've included an app.js, which we'll cover very soon. Here I've just created a simple form asking for the different inputs the machine learning model expects: pregnancies, glucose, blood pressure, insulin, and so on. You input all of these, and when you click the button to submit the data, the function I've written in app.js runs. If you look here, we're using an async function. That's necessary because whenever you load a machine learning model in JavaScript, it takes some amount of time for the model to load.
If you ran the program without async/await, which waits for the model to load before executing, you'd run into an error; that's why we declare const model and use await. Here we've used tf.loadLayersModel, because this is a layers model; if you had converted a TensorFlow SavedModel instead, you would use tf.loadGraphModel. Since we used a Keras-based model, we use loadLayersModel and point it at the model.json file. Then the code calls model.predict, passing the inputs in as a tf.tensor. I take the form data, put all the inputs into an array, and pass that whole array to the prediction function, which makes the prediction. I've used a very simple rule: if the prediction is less than 0.5, output 0, meaning the person is not likely to get diabetes; if it's greater than 0.5, output 1, because this is a binary classification problem, zero or one. And then it prints the result. So let me quickly show how it runs. Let me open up localhost:5500; this is my webpage. Say the number of pregnancies the woman has had is 2, the glucose level is 110, the diastolic blood pressure is 80, the skin thickness, measured in millimeters, is 2, the insulin is 120, and the BMI is 21.
Say the diabetes pedigree function value is 2, and the age of the person is 25. Once I submit, all of this data goes into the function defined in my JavaScript; it waits for the model to load and then renders the prediction. If I go back to the page, you'll see the probability it returns is 0.17, and since I've applied the binary classification threshold, the overall prediction is 0: the person is not really likely to get diabetes. So what we've seen is this: we created a machine learning model in Python using Keras, the standard way anyone would, and saved the model to a .h5 file. Traditionally, you might have had to use Django or Flask to host and serve it. Instead, we used the TFJS converter, again in Python, converted it into a model.json, and used it from a JavaScript function. So you can write a complete machine learning model in Python, convert it into a JavaScript-supported format with the TensorFlow.js converter, and use it directly. For example, if you're a JavaScript developer and you have a friend who is a Python developer, they can write the machine learning model for you, and you can use it with the tooling TensorFlow.js supports. And that's pretty much it for the presentation. You can connect with me on Twitter at Hardvelop or on GitHub at Shivalama.
You'll also find the codebase for this entire project on my GitHub; I'll be uploading it during PyCon. So do connect with me there as well, and now you can ask your questions. Thank you so much for attending this presentation.

Thanks, Shivay. Thanks a lot for the presentation and your talk. Being an ML enthusiast myself, I found this talk really amazing. We don't have many questions from the audience, but I'm really curious to know: how do we start with TensorFlow.js? Are there resources available online that we can look into to kick-start a career in it? Being an enthusiast, I started with Python like everyone else, with analytics, ML, and deep learning. So, a quick question: what are the best online resources for getting into TensorFlow.js?

Yeah, so the best resources I'll mention are two. One is the TensorFlow.js website, because TensorFlow.js was primarily created for creative web developers who didn't know much about Python or machine learning. It has support for pre-trained models you can use with just five or six lines of JavaScript; you don't need to understand what's going on behind the scenes inside the model at all. You just import that particular model and know that model.train means you're training and model.predict means you're getting predictions. That's all you need to know. You can go to tensorflow.org/js for the initial documentation and start using those pre-trained models.
Another really great resource I'll recommend is a book published by O'Reilly, by Gant Laborde; he's an ML GDE and a Web GDE. His book is an introduction to TensorFlow.js that starts from the absolute basics: what tensors are, these high-dimensional arrays that store numeric values, and goes from tensors all the way to using TensorFlow.js. So I definitely recommend that book as well. But of course, TensorFlow.js is really easy to get started with; the only thing you need to know is some basic JavaScript. And the best part is that even if you're a Python developer, you can still write your entire model in Python and then convert it to JavaScript. That's it, if you don't want to use Flask or Django.

I think one of the questions is: does the TensorFlow.js converter create a JSON file?

Yes, exactly. When you run the command on your H5 file, or on a SavedModel or a TensorFlow Hub format, it creates a folder with the name you gave as the output directory, and it contains two kinds of files. One is the model.json file, which holds the information about the model: whether it's a layers format, and where exactly the weights are defined. And then there are the shards; those are essentially your weights. Whenever you train a machine learning model, particularly a neural network or deep learning model, it generates weights.
Those weights are stored in the shards, and the model.json file holds all the information needed to define the parameters of the model you used. As you saw in my demo, I used the binary cross-entropy loss and the Adam optimizer; all of that is saved into the model.json file, and you can use it directly with JavaScript.

Okay, so let me actually share the O'Reilly book; I'll quickly share the Amazon link. It's called Learning TensorFlow.js. I'll share it in the Q&A section and in the chat on the stage. That's the book; you can buy it on Amazon, it's not very expensive, and it's definitely something to look at if you're interested in machine learning as a JavaScript developer, even though this is, of course, a Python conference.

Let's see if we have any other questions. I think we do, in the questions section. This one is from Rahul, who asks: a pre-trained model takes a lot of time to train further, even on a GPU; how can we reduce that time?

So, Rahul, we have different types of backends supported in TensorFlow.js. One option is your standard GPU backend, or you can use other backends like WebAssembly or WebGL, which let you run close-to-native code.
With WebAssembly, for instance, C++ code compiled to the WebAssembly format runs close to natively, directly in your browser, so those backends are being used to speed up the process. Even with a pre-trained model, you can always try these different backends, and if that's not enough and you have access to the original model, you can always try to optimize the model itself further. And since JavaScript also supports creating machine learning models directly with the Ops API, you can write custom machine learning models in JavaScript as well.

And I guess we can take one final question, from Abhishek: isn't it easier to use Streamlit?

So Streamlit is mainly used for data science purposes. It lets you create very small applications, but it is certainly limited in terms of what you can build, whereas with TensorFlow.js you have a wide variety of things you can actually do.