Let's get started. The topic today, as you can see, is using TF.js with WebAssembly. A little about me in terms of TensorFlow.js: I'm a TensorFlow.js working group member — part of the DevRel side, you could say — a GSoC student under TensorFlow working on the TensorFlow.js project, and an organizer of TFUG Chandigarh. I won't go on more about myself, so let's get started. So, machine learning in JavaScript — why? For those who don't know what TensorFlow is, a quick briefing: TensorFlow is a free and open-source software library for dataflow and differentiable programming — numerical and scientific computation, you could say, made easy. That's the basic concept of TensorFlow. Cool. So: use ML anywhere JavaScript can run. That's the idea behind why TF.js came into play. In the first part of this session I'll give you a descriptive view of how TensorFlow.js works, what its architecture is, and how WebAssembly helps a machine learning model run faster. After that, I'll give you a small introduction to WebAssembly. Yeah. So JavaScript can run almost everywhere: on the client side, that is the browser; on the server side, that is Node.js; on desktop, mobile, and IoT — Raspberry Pi, for example. Cool.
So which platforms are supported when we run TF.js and TF.js models? On the browser side: Chrome, as usual — it's a Google product — plus Apple Safari and Firefox. On the server side: Node.js, as usual; I'll explain why Node.js is preferred on the server. On mobile: React Native, WeChat mini-programs, and PWAs (progressive web apps), for those working more on that side. On desktop there's the Electron framework, and the team is also working on the Ionic framework — maybe good news for those of you working with Ionic. And as usual for IoT, the TF.js community is working hard on Raspberry Pi to improve performance on the IoT end. Cool. Now, what is TF.js and why is it important for all the web developers here? The simple concept behind it: run, retrain, write. If an existing model is already out there — those who know TensorFlow Hub, all the models are listed there — you can run that pre-trained model directly from TensorFlow Hub. If you want to retrain an existing model, you can do that using transfer learning. And the great news: if you're not a Python developer and you're purely a JavaScript developer, you can create machine learning models in JavaScript itself. You don't have to know Python, Python libraries, or anything in Python terms. That's a great thing for most JavaScript developers. Cool. So, for anything you may dream of — things many people don't even think of, like augmented reality, can be built with TensorFlow.js.
The team is also working on the NLP side — sentiment analysis, conversational AI, web page optimization. And recently there's a collaboration with Microsoft on a platform where, even if you're not a data scientist, you can build models by dragging and dropping; it's coming soon, in a month I guess. There are many, many applications out there in TensorFlow.js — try to be a part of the community; you can do a lot of things around it. Cool. Now, pre-trained models — why am I touching on this part? Because pre-trained models are mostly for those who aren't machine learning folks, who aren't particularly good with machine learning. That's the basic concept behind them. If you want to work on a project and you think a certain model can be used there, you don't need to know the backend details — what the layers of the machine learning model actually do. You just take the API and implement it on the spot in your project. Some of them are image classification, object detection, body segmentation, pose estimation, text toxicity, sentence encoding, speech commands, and a KNN classifier. Recently the team has also shipped face mesh and handpose models — the face mesh model is one of the fastest models released so far, and it's used by many people. If you want to check out more models, just go to tensorflow.org/js/models and you can see all the pre-trained models supported by TensorFlow.js across different platforms. Cool. A standard example: object detection using COCO-SSD.
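As a quick sketch of what "implement it on the spot" looks like, here is a minimal browser-side use of the COCO-SSD model. This is a hedged sketch, not runnable standalone: it assumes the page has already loaded `@tensorflow/tfjs` and `@tensorflow-models/coco-ssd` (which exposes a global `cocoSsd` when loaded via script tags), and that an `<img id="photo">` element exists — the element id is a placeholder of mine.

```javascript
// Browser sketch: detect objects in an image with the pre-trained COCO-SSD model.
// Assumes @tensorflow/tfjs and @tensorflow-models/coco-ssd are loaded on the page,
// and that <img id="photo"> exists (hypothetical id).
async function detectObjects() {
  const model = await cocoSsd.load();           // downloads pre-trained weights
  const img = document.getElementById('photo');
  const predictions = await model.detect(img);  // [{bbox, class, score}, …]
  console.log(predictions);
}
detectObjects();
```

No training, no install — the model weights are fetched on demand, which is exactly the "use the API on the spot" idea above.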
At the end I can show you a small demo of how performance is improved by TensorFlow.js and how the architecture works under it. Cool, yep. So again, take the face mesh model — the whole tracking done on your face; let me give you a basic outline of it. Your face is detected using a model just 3 MB in size. The training has already been done in the cloud, and you don't need to install anything — that's the power of TensorFlow.js. You just import it directly in your page via CDN, and if you're not a fan of CDNs, you can install tf.js on your system using the npm or yarn packages. Yep, cool. Body segmentation is another part of it — and those who know why body segmentation is such a big application in the machine learning industry will see why it matters: think of the many cameras mounted at traffic lights in congested areas, which can use segmentation to detect cars, or humans for that matter. Cool. So with a little bit of creativity you can build the kind of superpowers you see in Marvel movies — it's now true, you could say, given the computational power behind it. I'm giving you some good examples of how TensorFlow.js works with WebGL shaders, and with the Wasm side as well, so you get some motivation to work with Wasm on projects like these. Cool. The first one is the laser demo; the second is the teleportation demo, done by our one and only senior developer advocate at TensorFlow.js, Jason Mayes, using WebRTC — for those who don't know what WebRTC is.
If you've used Google Meet — Google Meet is built on an RTC concept, where the networking layer comes into play: how my video and audio link up with another person's. That's the concept of teleportation built on top of WebRTC. A-Frame, for those who know it, is a 3D framework built on three.js, and it's combined with TensorFlow.js to support this. Cool. The next one is another delightful creation, again by Jason Mayes, where your body is scanned, and from that the model estimates your size — clothing size estimation, specifically. Cool. Now, another project that's very fascinating for me, because I'm much into augmented reality: it uses WebXR and WebGL plus TensorFlow.js, and I think the team is working more on the Wasm side of it — how Wasm can improve the speed and performance of the whole project. Basically, what it's doing here is taking a snapshot and recreating that same thing with augmented or extended reality, bringing it into your enhanced real-life view. Cool. Now, transfer learning. Transfer learning is a very fascinating topic, and many people know it, partly because Jason advocates for it in a big way. You can go to the website called Teachable Machine by Google, where there are pre-trained models waiting: you just add some data — your own examples — to retrain the model, and you can do it easily. If you want, I can give you a small demo of it: with Teachable Machine, you just go to the website, add your examples, and see your model working right there. Cool.
Now, in terms of code, what I'm showing here are the two installation options. I prefer the CDN — even though a CDN can occasionally act up, with TensorFlow.js it works fine for me; there's no issue right now. But if you're not a fan of CDNs, go to the yarn or npm packages of TensorFlow.js and install it. Otherwise you don't need to install anything — everything is in the cloud. Just use the CDN link I'm showing, a script tag with its src set to the CDN URL; the second one is for AutoML, if you're working with AutoML models. Then, to do some classification for example, you just add an image and iterate over the results. I'm not going to deep dive into the JavaScript here or how the async function works. So yeah, cool. And again — with TensorFlow.js you can write your own code, as I shared earlier. If you're a JavaScript developer, you don't need to go and learn Python; that's the power of it. You can use TensorFlow.js as a tool to create your own models — you don't have to be a Python developer to create a machine learning model. That's the power of TensorFlow.js. Cool. Now, a quick API briefing on what you can do: there are basically two levels of API. The high-level Layers API, which is Keras-like, and the low-level Ops API — the mathematical core — which is where I started contributing, on the kernels side. I'll also give you a descriptive view of how Wasm works at that mathematical level. That's what we'll get into in this session — how it all hangs together in TensorFlow.js. Yep, let's go to the next one.
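To make the two install options concrete, here's a small hedged sketch. The package name `@tensorflow/tfjs` and the jsDelivr CDN pattern are the standard published ones; the tensor math at the end is just my own smoke test, not from the slides.

```javascript
// Option 1 (CDN, in HTML):
//   <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
// Option 2 (package manager):
//   npm install @tensorflow/tfjs   (or: yarn add @tensorflow/tfjs)
// With the npm install, a minimal smoke test looks like this:
const tf = require('@tensorflow/tfjs');

const a = tf.tensor1d([1, 2, 3]);
const b = tf.tensor1d([10, 20, 30]);
a.add(b).print(); // element-wise add: [11, 22, 33]
```

With the CDN option, the same code works in the browser with the global `tf` instead of the `require`.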
Let's go to the most important part: how the architecture works, so you get some insight into it. So again, as I said, after the model there are two layers: the first is the high-level Layers API, and the second is the Ops API — you can call it the Core API, or basically the backend — where all the mathematical work happens; that's the main low-level API. Let's structure it. Cool. On one side we have the client, the browser end — what we see right now — running on the CPU, WebGL, and Wasm backends, with Node GPU support also on the runway and now working in production. So the visible items are CPU, WebGL, and Wasm; and on the server side, as I said earlier, Node.js runs on the TensorFlow CPU and TensorFlow GPU bindings. As for the TPU endpoint, it's still in development; the team is working on it so that maybe in three to four months it will be visible in Google Colab and you can use it. Cool. Now, how does the whole model pipeline work if you're a Python developer? There's the TFJS converter for bringing a model from Python over to JavaScript. Suppose you're a Python developer and you don't know JavaScript — what can you do? For this, TF.js brings you the TFJS converter: you write your whole code in Python, add a final conversion step, and it directly converts the model so it can run in JavaScript. That's the power of it — you don't need anything else if you're not a JavaScript developer. So it's a great tool for Python developers too. Cool. Next, let me talk about model inference performance and how it has improved — this is slightly old data.
The inference time that was touching about 23.1 and 20 has now been reduced to about half of that — around 10 and 10.1, in and around. It's old data, so I'm just sharing that part; performance improves day by day thanks to the TensorFlow.js team, largely through its work on WebAssembly — WebAssembly improves this, and I'll show you how. There's similar data from Hugging Face, where inference time keeps coming down day by day as well. Cool. Now let me share some of the superpowers of TensorFlow.js, and after that we'll get to the main topic, WebAssembly, and how you can work with it. Cool. First, privacy. With differential-privacy thinking now in play, the question is how we can improve privacy and reduce the sharing of your data. If you're thinking your data is shared with Google — no, it's not. The pre-trained model runs right there on your device, and none of your data is saved. That's the power of it. Second, lower latency — which is what makes augmented reality experiences possible. Cool. And last but not least on this part: lower cost. You aren't charged anything for using TF.js; interactivity happens on the spot. And reach and scalability: something like 84% of devices — I think it's closer to 90% now — support WebGL, so it works nearly everywhere. You can run machine learning models on your phone, in your Chrome browser. That's the power of it. Cool. And a word about server-side and Node.js support: again, the power here is that you can run the same model without any conversion.
You don't need to juggle JavaScript and Python just to move a model from one language to another. Code in one language: if you already use JavaScript, that's great; if you're not using JavaScript, it's also okay — you can just convert your model. The team is also working on C bindings for pre- and post-processing, on support for Java developers, and on containerization and TFX-style production — that's being worked on by the TensorFlow team right now. Cool. Main topic: what is WebAssembly? Let's crack this. WebAssembly means web plus assembly. The "web" is basically our HTML/CSS world, and "assembly", as in assembly-level language, means machine-level language. Combining the web with an assembly-level language is called WebAssembly — that's the simple version, when I break it down. Now let's go to the formal definition. What is Wasm, or WebAssembly? WebAssembly is a binary instruction format for a stack-based virtual machine. That definition is a bouncer for many people, so let me say plainly what it does: we take, say, C++ code and make it run as part of a web page. Many people wonder how that's possible, so I'll give you a small diagram of how the whole process works. Yep. So, TensorFlow.js and how TF.js supports WebAssembly: as I already told you, TensorFlow.js is a JavaScript framework, and it's supported by Node.js, which is also a JavaScript environment. If your whole stack is in one environment, your performance already reaches the next level. Why do I say that? Suppose I take Django as an example backend — a Django server — and integrate it with TensorFlow.js. Yeah, it can be done.
It can improve things in some way or another, and from an application point of view it's fine. But then the environment issue comes into play: your environments are different, so if there's an update in TensorFlow.js or in Django, there will be clashes between them. So from that angle Node.js support matters — in terms of performance it's pretty good — and near-native CPU execution with minimal code changes is what Wasm code brings to the table, if you're a Wasm developer, so to speak. Cool. Now, some of the key concepts behind how WebAssembly came into play. I won't just read out definitions; I'm breaking everything down bit by bit. Module, memory, table, and instance — remember these; I'll explain them properly. Cool. What are we doing? We write a normal hello-world program, say in C or C++. Then there's a tool called Emscripten, from the Emscripten project. What it does is produce a Wasm module along with an HTML document and what's specifically called JavaScript glue code. Cool. And how does it all work end to end? That's the basic concept of Wasm. In a nutshell, the flow works like this: Emscripten first feeds the C/C++ into Clang — the mature, open-source C/C++ compiler toolchain you may know from Linux; I'll show that part in the demonstration too, so don't worry. Emscripten then transforms Clang's compiled result into a Wasm binary — a .wasm file. What does that mean? The .wasm output is entirely machine-level; you can't read it. So how do you work with it if you can't understand machine-level code? Alongside the .wasm binary, the toolchain emits an a.out.js file.
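The glue code's role can be seen with the plain WebAssembly JavaScript API. Here is a minimal, runnable sketch: a tiny hand-assembled module that exports an `add` function, instantiated and called entirely from JavaScript. In a real project the bytes would come from a compiled file like Emscripten's a.out.wasm, and this loading boilerplate is essentially what a.out.js does for you.

```javascript
// Hand-assembled Wasm module: (func (export "add") (param i32 i32) (result i32))
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic "\0asm" + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0; local.get 1; i32.add; end
]);

// JavaScript is the host: it compiles, instantiates, and calls the exports.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // 5
```

Notice the first eight bytes: the magic number `\0asm` plus the version — the "binary instruction format" from the definition above, made literal.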
I'll show what that file actually is in the demonstration, so don't worry about it. And by itself, WebAssembly cannot currently access the DOM directly. So how can it? Again, through JavaScript. If you go back to the slide, you can see what we're doing: there's the Wasm module, and next to it a JavaScript-with-HTML component. JavaScript grew up around the DOM — the document object model is where JavaScript exploded from — and since Wasm can't connect to the DOM directly, JavaScript becomes the bridge. That's why JavaScript is involved here. Cool. Want to access any web API from WebAssembly? You just need to call out to JavaScript. If you're good with JavaScript, you can use WebAssembly in a pretty effective way, and I'll show you how, using mostly TypeScript and C/C++ code. Yeah, cool. Now, how do you use the Wasm backend in TensorFlow.js even if you're not good with Wasm yourself? You don't need to worry about it. All you have to do is require TensorFlow.js into a variable, then add the Wasm backend — require('@tensorflow/tfjs-backend-wasm') — give it a few moments, because it takes some loading time, and call tf.setBackend('wasm'); if you're good with arrow functions, you can easily chain .then() to run your main function once the backend is ready. Cool. Now, the library expects the Wasm binary to be located relative to the main JavaScript file — the layout of the Wasm binary is tied to where your main TensorFlow.js bundle lives. If it isn't there, you need to manually indicate the location of the Wasm binary: you set the Wasm path yourself, because otherwise the library can't get access to it.
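A minimal sketch of this backend wiring, put together in one place. The package names and the `setWasmPaths`/`setBackend` calls are the real TF.js ones; the `'./wasm-files/'` directory and the `main` function are placeholders of mine — point the path wherever you actually keep the .wasm binaries.

```javascript
// Sketch (assumes @tensorflow/tfjs and @tensorflow/tfjs-backend-wasm are installed).
const tf = require('@tensorflow/tfjs');
const {setWasmPaths} = require('@tensorflow/tfjs-backend-wasm'); // also registers the backend

// Tell the library where the .wasm binaries live, if they are not
// sitting next to the main JavaScript bundle ('./wasm-files/' is hypothetical).
setWasmPaths('./wasm-files/');

// Activate the Wasm backend, then run the app once it is ready.
tf.setBackend('wasm').then(() => main());

function main() {
  console.log('Backend in use:', tf.getBackend()); // 'wasm' once ready
}
```

In a browser you'd do the same thing inside a script tag, with the CDN builds loaded first.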
The idea is that you store all your Wasm files in one directory and point the library at it — that's the main concept behind it. How do you do that? Just import setWasmPaths, again from the tfjs-backend-wasm package, call it with your custom path — you can create a variable holding your custom path, set it, and call all of that from your script tag. For those of you used to web development, this is easy: you don't need to call the CDN file again; you just use a script to wire that Wasm directory into setWasmPaths. It's basically a custom configuration, you could say. Cool. Now: when should I use Wasm? Cool — that's the important part, because you don't need to use Wasm everywhere, right? Wasm runs on your own system's CPU. If the device has first-class WebGL support, you don't need Wasm there; but if you're working on an ordinary laptop or desktop without good WebGL, then Wasm is the one to use. Let me show you with some stats. Yeah. There's an architecture diagram here — let me see if I've got a pointer so I can explain it better. Cool. Yeah. You can see where Wasm sits, here next to the ONNX part, and how the whole structure works: if we go to the workloads, where the models live, they flow in from frameworks like ONNX, Caffe, TensorFlow, or TFLite, and get converted through OpenCV.js — for those who know OpenCV.js, that's the framework connecting them.
They connect with both WebGL and WebAssembly: WebGL accelerates things on the GPU side, while WebAssembly runs on your local CPU — and that's how the whole architecture fits together. Let's see some stats. Yeah. We collaborated with the Intel team and got inference latency stats on 10th and 11th generation chips. And here's what we found for Wasm with SIMD — for those who don't know, SIMD is single instruction, multiple data; basically it's about doing more work per instruction, and on top of that there's the threading side. I won't go into depth; I'm just sharing some stats about how powerful Wasm is with SIMD. And for those who don't know SIMD: if you're a CS student and you've worked with microprocessors, you know exactly what I'm talking about. Cool. Yep. You can see that on every single benchmark Wasm improves things, and Wasm with SIMD even more. Why am I talking about threading here? The basic idea of threading is connecting one point to another — a homely example: suppose your grandma is knitting with wool; she's just threading one connection to the next. The same thing happens here: there are multiple Wasm workers, and each one works in parallel with the others, which is what improves the performance. I won't go very deep into it — if you want, you can check the benchmark tool as well; I'll share that link too. Cool. And yeah, that's the end of my slides, but it's not over yet, folks — let me show you the demonstration. Just give me a sec.
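Before the demo, the SIMD idea from a moment ago can be sketched in plain JavaScript. This is conceptual only: real Wasm SIMD uses 128-bit `v128` operations such as `f32x4.add` in hardware; here we just mimic the four-lanes-per-step access pattern to show what "single instruction, multiple data" means.

```javascript
// Scalar loop: one element per step.
function addScalar(a, b) {
  const out = new Float32Array(a.length);
  for (let i = 0; i < a.length; i++) out[i] = a[i] + b[i];
  return out;
}

// SIMD-style loop: four "lanes" per step (length assumed divisible by 4),
// the way a single f32x4.add instruction would handle them in hardware.
function addLanes4(a, b) {
  const out = new Float32Array(a.length);
  for (let i = 0; i < a.length; i += 4) {
    out[i]     = a[i]     + b[i];
    out[i + 1] = a[i + 1] + b[i + 1];
    out[i + 2] = a[i + 2] + b[i + 2];
    out[i + 3] = a[i + 3] + b[i + 3];
  }
  return out;
}

const a = Float32Array.from([1, 2, 3, 4, 5, 6, 7, 8]);
const b = Float32Array.from([10, 20, 30, 40, 50, 60, 70, 80]);
console.log(addLanes4(a, b)); // same values as addScalar(a, b)
```

In JavaScript both loops go through the interpreter/JIT, so there's no real speedup here — the point is only the lane-wise shape of the computation that Wasm SIMD executes as one instruction.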
Yep, cool — let me share my whole screen; that will be better. Cool, I think my screen is now visible to you all. Again, I've already installed Emscripten; you can just go to the official website, and at the end I can share the Emscripten links so you can check it out. Let me see what's happening here. Cool. So, there's a file I created — `cat hello_world.c` — and you can see it here. Let me open VS Code; bear with me, folks, my system needs a little time to open it. While that loads, let me check that Emscripten is working: `emcc --version` — yeah, it's working; it's the normal POSIX-style install. Let me increase the font size — cool. VS Code is here now; thanks for your patience, folks, the system is pretty slow today, so sorry for that. Yeah, cool. So let's create a new file and run this — let's write `tfjs.numbai.c` as the name. Okay, sorry for the wait. Since my system is being slow, let me give you the explanation while it opens.
If you `cat a.out.js`, you'll see what happened: just by compiling, Emscripten creates two files — `a.out.js` and `a.out.wasm`. Let's do `cat a.out.js`. Yeah — it directly converted my C/C++ code into this JavaScript glue; you didn't need any other tool to convert the whole thing. And it's pretty big, I know — let me scroll to the top so you can see. You didn't have to write this JavaScript file yourself, and you don't need to go into the depths of every line, so be chill with it. Why am I showing you this? Because the Emscripten tool handles it here — and when we're talking about the machine learning library, all you need to do is call the backend, as we saw, and you can do your work easily. Let me clear this. Cool. And yep — let me find the exact file name. Okay. And if you look at the actual Wasm one, you won't be able to understand anything — let's see — because it's entirely in machine-level language. If you look, you can't understand anything; just see this. That's why JavaScript comes into play: it connects you with the DOM part, and through it you can actually see things happen. Let me show you: if we run `node a.out.js`, what does it print? Let's see. Hello world — exactly what we printed. Look at the C file with `cat`: basically it just prints hello world. I'm running it this way because Node.js is supported as usual. And if you want to generate HTML files, here's what you do: run emcc.
What's the file name? `emcc hello_world.c -o TFUG.html` — let me use TFUG as the name, and let me hide this window so you can see better. Cool. So I'm just giving it a name, TFUG.html — pretty good. Let's see. Cool. Now let's check it was created: `cat TFUG` — sorry, `cat TFUG.html`. There it is, generated entirely by Emscripten, and you didn't need to install anything else. And that's the point I wanted to show: if you look at the Wasm itself, you can't understand any of it. I was trying to open it in my VS Code, but I think VS Code is crashing — just give me a sec. But yeah — let me see — I think that covers most of my agenda. I hope you got something out of it.