use machine learning on the client side, while Python has traditionally done all the heavy lifting. We'll cover which models are available, the architecture behind client-side ML, the tech stack, and of course a couple of demos that we'll walk through. Okay — why? The "why" of client-side ML is always a big question mark: if Python was doing so well, why suddenly do it in JavaScript? Whenever I do things on the server side, it costs a lot and brings a lot of challenges in terms of deployment, dependencies, and knowledge transfer. On the client side, my infrastructure cost is reduced, my infrastructure maintenance is reduced, the latency problem is sorted out, and there are other big advantages for real-time interfaces. That is why we switched to the client side. So what makes client-side ML possible? JavaScript, as you all know, is a very big language by itself and is among the most used languages in the world today, so it has a lot of libraries. I've listed the top five libraries used in machine learning. TensorFlow.js is at the top and is among the most starred ML projects on GitHub. ConvNetJS brings the deep learning part into the client side and is also very popular. Then there's Brain.js: initially it was Brain, then Brain was deprecated and Brain.js came back, and they are doing a lot of things. Brain.js is very interesting because with very few lines of code we can do wonders in the browser — many real-world browser games are built using it.
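To give a feel for how little code a browser neural net really needs, here is a minimal toy sketch in plain JavaScript — a single logistic neuron learning the OR function by gradient descent. This is an illustrative stand-in, not the Brain.js API itself (with Brain.js the equivalent is roughly `new brain.NeuralNetwork()`, `net.train(...)`, `net.run(...)`):

```javascript
// A single sigmoid neuron trained with gradient descent on the OR function.
const sigmoid = x => 1 / (1 + Math.exp(-x));

// Training data: two binary inputs, OR target.
const data = [
  { input: [0, 0], target: 0 },
  { input: [0, 1], target: 1 },
  { input: [1, 0], target: 1 },
  { input: [1, 1], target: 1 },
];

const w = [0, 0]; // weights
let b = 0;        // bias
const lr = 0.5;   // learning rate

const predict = x => sigmoid(w[0] * x[0] + w[1] * x[1] + b);

for (let epoch = 0; epoch < 5000; epoch++) {
  for (const { input, target } of data) {
    const err = predict(input) - target; // cross-entropy gradient w.r.t. pre-activation
    w[0] -= lr * err * input[0];
    w[1] -= lr * err * input[1];
    b    -= lr * err;
  }
}

console.log(Math.round(predict([0, 0]))); // 0
console.log(Math.round(predict([1, 0]))); // 1
```

OR is linearly separable, so a single neuron is enough; real libraries add hidden layers, batching, and GPU acceleration on top of exactly this loop.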
Like that, there's also neurojs, which I've listed as one of the popular ones, used for reinforcement learning (Q-learning). These days all our learning platforms are moving online — with the current situation and the pandemic, things are getting more and more online — and this kind of library helps with learning algorithms, serving questions online based on how people think and work. It is basically a full-stack neural network implementation and it has a lot of support behind it. That said, let me move on to the top JS library on the list. For today's presentation I'll take TensorFlow.js, which sits at the top of the list in the ML industry. So where does it all start for TensorFlow.js? As you all know, machine learning, prediction, and analytics play a major role today; NLP and artificial intelligence have come up, and all of these play a very big role in the industry. The advantage of having JavaScript in machine learning is that it brings everything into the browser, and once it's in the browser, it becomes device independent — it can be used on any device. One question might pop up in your mind when I say "bringing it into the browser": what about the infrastructure? Something that runs on a server in Python — how can it be supported in a browser on my low-end hardware? All of that is managed by the library. TensorFlow can run on a single CPU, on a GPU, on a cluster of GPUs, and even on multi-node TPUs, which means it is scalable. Because people will always ask: if things are moving to the client side, is it still scalable?
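As a sketch of how little setup the browser actually needs — assuming the public jsDelivr CDN build of TensorFlow.js — a page can load the library with one script tag and check which backend (WebGL on the GPU, or plain CPU) it picked:

```html
<!-- Minimal page: load TensorFlow.js from a CDN and report the backend it chose. -->
<!DOCTYPE html>
<html>
  <head>
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
  </head>
  <body>
    <script>
      tf.ready().then(() => {
        // 'webgl' when a GPU path is available, 'cpu' otherwise.
        console.log('TensorFlow.js backend:', tf.getBackend());
      });
    </script>
  </body>
</html>
```

The same code runs unchanged on a phone browser, a laptop, or inside Electron — the backend selection is what makes it work on low-end hardware.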
The scalability of these libraries is huge — they can run in any environment. Now let's get into the starting points: where does it all start with TensorFlow? Whenever we do machine learning, there are three approaches we can take. One, we can run an existing model. Two, we can retrain an existing model that was already built in Python: the model can be taken, retrained, and customized to our data sets and our needs, and we only need a small data set for that, which we technically call transfer learning. That is the second approach TensorFlow leads us to. The third one, of course, is building our own models. How do I build my own model on the front end? That's something that always pops up in our minds. For that we have a Keras-style API — Keras being one of the big names in the ML industry — which gives us a layers-based API with which I can build my model and train it. It can be trained in the browser, or on the server using Node.js, which again falls into the JavaScript ecosystem. This is why I like calling JavaScript a ubiquitous language. By virtue of all the capabilities JavaScript has, TensorFlow.js, Brain.js, and the other libraries can handle many different devices and platforms. That said, when we build applications, we build a lot of regular web applications and regular mobile applications, and all of this can be done. So what other kinds of applications do we normally build on what we call the front end? On the front end, we have browser-based applications.
We have mobile applications using React Native — and we have done a lot of work with React Native, which many of you were a part of. On the server side, as I said, it's Node.js that comes into the picture. And desktop? Yes, ML on the desktop is possible too, using Electron, which is open source, given to us by GitHub. A lot of applications have been built in Electron — I'm pretty sure you know one very famous one that all of us use day in and day out: Visual Studio Code is built on Electron. The advantage of building a desktop application is that you have an executable version of it — you just click and go. That's the biggest advantage of a desktop application. All of this comes with the JS frameworks already built, so building our model with the Keras-style API becomes very easy for us. Right. So what kinds of use cases can I build with this? I've divided this into three parts. On the extreme left are the browser-based or mobile-based apps, and on the extreme right are the on-prem or cloud server-based applications. To start with: augmented reality, speech recognition, IoT — all of this can be built using TF.js and will run in your browser, whether that's on a mobile device or a desktop web browser. That is the left side. On the right side we have the on-server spectrum, which is the traditional ML pipelines that solve a lot of enterprise-level problems, again built in Node.js.
And in the middle we have things like abuse detection and sentiment analysis — these kinds of use cases can sit either on the browser or on the server. By the way, we are doing some POCs on all of this. There are already a couple of people who are part of it, and I would encourage and invite more people to join so that we can build something big. So these are the variety of applications that can be built, all in the browser, in JavaScript, using TensorFlow. Now, which TensorFlow models are available for us? These are ready-to-use models given to us by TensorFlow. We have pose estimation: it detects key points on the body (17 of them in PoseNet) and from those it can tell you the position in which you stand or the position in which you sleep. Based on that we have built a POC, which I can share with you shortly, that detects how you stand, how you work, and how you pose. Image classification is one more pre-trained model; all of these predefined models can be found on GitHub, under the TensorFlow models. Same with the object detection model. Previously we needed Python for object detection — that is what is currently used in driverless cars, with CNNs and all of that — but that can now slowly be converted to, or replaced by, client-side object detection. Same with body segmentation.
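A hedged sketch of using one of these ready-made models — PoseNet for pose estimation — loaded straight from a CDN. The CDN URLs are the public jsDelivr ones; the image id and file name are illustrative, and the model weights are downloaded over the network on load:

```html
<!-- Pose estimation in the browser with the pre-trained PoseNet model. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/posenet"></script>
<img id="person" src="person.jpg" />
<script>
  posenet.load()
    .then(net => net.estimateSinglePose(document.getElementById('person')))
    .then(pose => {
      // pose.keypoints is an array of body key points with positions and confidence scores.
      pose.keypoints.forEach(k =>
        console.log(k.part, k.position, k.score.toFixed(2)));
    });
</script>
```

No server round-trip is involved after the initial model download — the inference itself runs entirely in the visitor's browser.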
Body segmentation is an interesting one — a pre-built model given by TensorFlow that divides the body into parts and can mark up your body in terms of colors and textures. The next one is text toxicity. For example, today we have very big social networks, and we never know what kind of toxic words are being used or how people are talking about us — these ML models, running in your browser or on your server, can help us identify and detect many of these things. Right? Same with the Universal Sentence Encoder — even NLP is now part of this model set. Then there's hand pose detection and face mask detection. On face mask detection, there is already a POC done by one of our interns — I'll ask him to share it with all of you. He initially did it with Python, and now he's converting it into plain JavaScript using TensorFlow. I'm not sure if he's on the call today — Aman, are you around? Yes? Okay. He's already built it, so please share the URL with the team, and he has now converted it into a TensorFlow model. What it does is identify whether people are wearing a mask or not. So these are the sets of predefined models, and this predefined model set could solve quite a lot of use cases for us — at NQ, we have already solved a couple of them.
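The text toxicity model follows the same load-from-CDN pattern; a sketch assuming the public CDN build (the 0.9 threshold and the sample sentence below are arbitrary examples):

```html
<!-- Text toxicity detection with the pre-trained toxicity model. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/toxicity"></script>
<script>
  // Labels whose prediction confidence is below 0.9 come back as null (undecided).
  toxicity.load(0.9)
    .then(model => model.classify(['you are awesome']))
    .then(predictions => {
      // One entry per label: identity_attack, insult, threat, toxicity, ...
      predictions.forEach(p =>
        console.log(p.label, p.results[0].match));
    });
</script>
```

Because classification happens client-side, the text being checked never has to leave the user's device — useful for the privacy side of abuse detection.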
And we have a couple more in the pipeline, for which we will need a lot of support from you guys to be a part of it and make it happen. Okay, now coming to the architecture. If this were a regular web application, all of us are experts in that — we know what should go in the front, what should sit in the back, and how it should be optimized and built. The architecture here is a little trickier. So what does it contain? It contains something called a TensorFlow servable, and it is fundamentally divided into three stages: first, preprocessing the data, which is very important; then building the model; and then training and evaluating the model. These are the fundamental architecture components involved in any client-side ML work. When I say preprocessing the data: the data that comes to us is almost always unstructured, so preprocessing makes it structured and gives us the limiting values. That is why preprocessing is so important. Then, with the data structured, I start building my model, and the third step is training it, which shows how the model performs. All the other components involved with this are how the architecture evolves as a whole. Okay.
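As a concrete example of the preprocessing step, here is a small min-max normalization helper in plain JavaScript — one common way to give raw numeric data fixed "limiting values" (a 0-to-1 range) before it goes into a model. The function name is my own, purely for illustration:

```javascript
// Min-max normalization: rescale every value into the [0, 1] range
// so features with large raw magnitudes don't dominate training.
function minMaxNormalize(values) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  if (min === max) return values.map(() => 0); // constant feature: carries no information
  return values.map(v => (v - min) / (max - min));
}

console.log(minMaxNormalize([10, 20, 30])); // [0, 0.5, 1]
```

In a TensorFlow.js pipeline the same idea is usually expressed on tensors (`x.sub(min).div(max.sub(min))`), but the logic is identical.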
Having covered the fundamental structure of the architecture, let me break it down a little further. We have the TensorFlow servables, which are basically the units TensorFlow works with — they define what functionality to use. Servable versions cover the different versions of models and data sets that come up. So the primary three stages are now subdivided: I have to build the model, make sure the model fits the data set, and then a loader loads it, the source manages it, and then comes the core. What's the core architecture behind the library? I have a client and a model saved on the client, so I have to manage that saved model. The model is always structured and controlled, meaning I can also deal with different kinds of plugins in the future when I have to include them in the core architecture. Then the lifecycle runs, and it goes into the executable — the final version of the trained model, which can be applied and used in browsers. Moving forward: what's the tech stack? By the way, JavaScript turns 25 this December, guys — I'm not sure if people are aware, so I thought I'd mention it. The tech stack is: I can sit with React, Angular, or Vue, and go with Brain.js, TensorFlow.js, or ConvNetJS. All of these help me build the entire thing. Right, so now let's go to a demo.