Hello everyone, welcome to Cloud Native Wasm Day North America. I hope everyone is having a great time at KubeCon and at all the co-located events. I'm Shivay Lamba. I'm currently a contributor and MeshMate at Layer5, which is a service mesh open source community with some of its projects under the CNCF landscape. Today I'll be leading a tutorial on how we can use WebAssembly at the intersection of machine learning and cloud native. And with me I have Mritunjay, who will also be explaining some of the concepts. So over to you, Mritunjay.

Thank you so much, Shivay, and thank you to everyone who is here. I hope you are all doing well and taking good care of your health in this pandemic. I'm Mritunjay Sharma. I'm currently a final-year computer science engineering student in India, as well as an SDE intern at HackerRank, having contributed to many cloud native projects like Kubernetes and Buildpacks. I regularly try to help build and empower communities, and with this session we also hope to learn something amazing from all of you while sharing the knowledge that we have.

Moving on to today's discussion. The first thing we want to acknowledge is that the lingua franca of machine learning is, of course, Python. We do not want to do a "versus" comparison between Python and Rust, because Python has its own importance and nothing is going to take its place in the machine learning ecosystem. What we are presenting today is a relative study, a comparison between Python and Rust. If we look at a few points where Python lags behind Rust, one of the fundamental reasons is that Rust gives better performance. Why does it give better performance? Because it is compiled directly to machine code; there is no virtual machine or interpreter sitting between the code and the computer. That is one great advantage Rust has. Another important advantage of Rust over Python is its thread and memory management. Rust does not have a garbage collector like Python, but being a compiled language, its compiler enforces memory safety, and that saves us from a whole class of memory hazards and irregularities. Another thing is that Rust can be compiled to WebAssembly bytecode, which is going to be one of the fundamental ideas of our discussion today.

This has also been observed in various studies. As a matter of fact, a study by IBM highlights how Rust and WebAssembly together resulted in a 15x performance gain over plain Node.js and a 25x performance gain over Python. There is also a library in Rust called Linfa, which is similar to scikit-learn but written in Rust. If you look at this graph, Linfa on a Rust web server handles about 25 times more requests per second than scikit-learn, and about 7 times more than the Linfa Python wrapper, on a Python RPC server. So this is another result that shows how machine learning, which right now is a hot buzzword in the industry that everybody is learning and trying to leverage to build cool products, can benefit here.
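To give a flavor of what Linfa code looks like, here is a minimal, hedged sketch in Rust, assuming the linfa, linfa-linear, and ndarray crates; the toy data and linear-regression model are ours for illustration and are not the benchmark code behind the graph.

    // Cargo.toml (assumed): linfa, linfa-linear, ndarray
    use linfa::prelude::*;
    use linfa_linear::LinearRegression;
    use ndarray::array;

    fn main() {
        // Toy data: four samples of a single feature and their targets (illustrative only).
        let records = array![[1.0], [2.0], [3.0], [4.0]];
        let targets = array![2.1, 3.9, 6.2, 7.8];
        let dataset = Dataset::new(records, targets);

        // scikit-learn-style fit/predict, but compiled ahead of time to native code.
        let model = LinearRegression::default()
            .fit(&dataset)
            .expect("fit should succeed on this toy data");
        let predictions = model.predict(&dataset);
        println!("predictions: {:?}", predictions);
    }

The point of the sketch is the familiar fit/predict shape; the performance numbers on the slide come from serving such a model behind a web server, not from this snippet.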
But if we look at highly standardized algorithms and how to optimize them, we can definitely see how Rust and WebAssembly can play a major role in them, and that is what this talk is about. Now, you may be wondering how WebAssembly comes into the picture. One of the most important things about WebAssembly is that it is a compilation target that effectively produces executables running at near-native speed. That gives us extremely small and efficient containers that are portable virtually anywhere, with the added benefit of being highly secure. So WebAssembly provides highly secure, high-performance, machine-independent bytecode. Another important aspect of WebAssembly is that it is compatible with multiple languages like C++ and Rust, serving as a compilation target for them. And since, as we just discussed, it is a binary instruction format, it enables fast native decoding, making it faster than competing runtimes.

Of course, we are going to talk about machine learning and how WebAssembly simplifies machine learning model conversion and deployment. But first we need to see how these security and performance features of WebAssembly connect with both the web and cloud native, and how cloud native is evolving with WebAssembly. So without wasting any more time, let's move on to that slide.

Connecting Wasm to both the web and cloud native: WebAssembly is expanding its space, not only in machine learning and the web but also in cloud native. It has already been approved by the W3C, so it is a language of the web now, and it is supported by compatible browsers like Firefox and Chrome. But it is also being efficiently leveraged by cloud native technologies, and that is why we have CNCF Sandbox projects like WasmEdge and wasmCloud, which are utilizing the strengths of WebAssembly and sit at the intersection of cloud native and WebAssembly.

But before we come to cloud native, because that is going to be another discussion, we also need to look at how WebAssembly connects to the web. We can see this from a single point: if we combine WebAssembly's performance, security, and portability with JavaScript's ease of use, we get not only faster applications and faster build times, but also a more secure way to run things. How can this be done? The host application can be a Node.js or web application written in JavaScript that makes WebAssembly function calls, while the WebAssembly bytecode program is written in Rust and runs inside the browser or some edge runtime. So with these two points we have covered the intersection of WebAssembly and Rust with the web, and how it makes the web application arena much more secure and faster.

Now, bringing Wasm to cloud native: WasmEdge is a CNCF project that has enabled serverless functions to be embedded into many software platforms. And if we look at how cloud native is evolving today, it's not just WasmEdge; there are many other projects like wasmCloud, and in fact many applications built on top of Kubernetes, that are going to leverage WebAssembly.
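To picture the JavaScript-host-plus-Rust-bytecode split just described, here is a minimal sketch of a Rust function exported from a Wasm module for a JavaScript host to call; the function name, logic, and build target are ours for illustration, not from the slides.

    // lib.rs -- a function compiled to Wasm that a JavaScript host can call.
    // Build, for example: cargo build --target wasm32-wasi --release
    // A Node.js, browser, or edge-runtime host then instantiates the resulting
    // .wasm module and invokes the exported function directly.
    #[no_mangle]
    pub extern "C" fn fib(n: u32) -> u64 {
        // The computationally heavy part stays in Rust/Wasm;
        // the JavaScript host keeps the business logic.
        let (mut a, mut b) = (0u64, 1u64);
        for _ in 0..n {
            let next = a + b;
            a = b;
            b = next;
        }
        a
    }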
And if you look at this table, this is a comparison from Maurits Kaptein of Tilburg University, who published an article comparing WebAssembly against Docker for machine learning. He ran inference in both Docker and WebAssembly, and as we can see in the results, WebAssembly is about five times faster and ten times smaller than Docker. So this arena is huge; it is expanding, growing, and ever evolving, and there is a lot of future that we see here. And now I have my friend and co-speaker, Shivay, who will show us a small tutorial demonstrating the intersection of Wasm with cloud native and machine learning. So over to Shivay.

Thank you so much, Mritunjay, for that amazing overview of the usage of WebAssembly in the cloud native and web ecosystems. As WebAssembly is increasingly being used in the cloud, it has now more or less become a universal runtime for cloud native applications because of its high performance and low resource consumption. What that means is that many developers building cloud native use cases also want to use JavaScript to write their business applications, so we need to support JavaScript in WebAssembly. We can then support calling Rust functions from JavaScript in a WebAssembly runtime to take advantage of WebAssembly's computational efficiency, and WasmEdge, which is one of the fastest WebAssembly runtimes, allows you to do exactly that.

As I mentioned, to embed JavaScript within WebAssembly we can build a WebAssembly-based JavaScript interpreter program for WasmEdge. It is built with the help of QuickJS, a JavaScript engine that is really small and embeddable, and much more compact than the standard V8 engine we typically use. It also comes with WasmEdge-based extensions, like network sockets and TensorFlow interfaces, incorporated into the interpreter as JavaScript APIs. You need Rust installed to build this interpreter.

One of the biggest benefits of this approach is that we can now run JavaScript programs inside the WebAssembly sandbox, and the WasmEdge runtime gives us lightweight, high-performance, cloud native JavaScript applications. With QuickJS integrated into WasmEdge, the application itself can be written in Rust, compiled to WebAssembly, and deployed as a single Wasm bytecode file. JavaScript functions can be directly embedded into the Rust application, and the JavaScript source code can be included either at compile time or at runtime through a file. The Rust part of the program handles the computationally intensive tasks in the application, while the JavaScript can be used for, let's say, handling the business logic.

We are going to look at one of those examples. I have created a simple function over here, which is the main.rs. This is the Rust program that embeds a JavaScript program at compile time.
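Since the slide shows the full file, here is only a rough sketch of the shape such a main.rs can take. It assumes the wasmedge-quickjs crate; the Context and eval_global_str names follow its embed_js example and may differ between crate versions, so treat them as assumptions and check the repository.

    // main.rs -- rough sketch of embedding JavaScript in a Rust program at compile time.
    // Assumes the wasmedge-quickjs crate; API names may differ between versions.
    use wasmedge_quickjs::Context;

    fn main() {
        let mut ctx = Context::new();

        // The JavaScript source is baked into the Wasm binary at compile time.
        // In the demo this is the TensorFlow Lite classifier from main.js,
        // pulled in with something like: let code = include_str!("main.js");
        let code = r#"
            print('hello from embedded JavaScript');
        "#;

        // Some crate versions take a String instead: eval_global_str(code.to_string())
        ctx.eval_global_str(code);
    }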
As you can see over here, I have defined my main function, where we have taken the example of a TensorFlow Lite demo. This TensorFlow Lite demo code will be used to detect a particular food item in an image, and it will print its label along with the percentage, that is, the confidence, using a label dataset and a TFLite model. Over here you can see that we have included the JS code and also passed the arguments into QuickJS, which are set in the QuickJS runtime, since it is such a small engine.

We have also provided another example, which is this main.js file. We can build the Rust and the JavaScript application into a single WebAssembly bytecode program, which I'll show you in a moment. Essentially, the Rust code can pass data into the JavaScript code by passing arguments or even by modifying the included JavaScript source code directly, and the JavaScript code can return values by writing them into a temp file. So inside our main.rs, that is, our Rust file, we have included the code that pulls in the TensorFlow Lite based demo we are using, which is main.js.

The main.js code itself uses the WasmEdge TensorFlow extension JavaScript API; as I've already mentioned, QuickJS here comes with TensorFlow-based extensions. In this example we use the WasmEdge TensorFlow extension to read and classify an image with a model based on ImageNet. As you can see from the code, first we import TensorFlow Lite, then we import an image (which we can change), convert the image to RGB, and resize it. Then we start a new session in which we load our TensorFlow Lite file, which is the food classifier based on ImageNet that we have created. We then take the predictions we get from the MobileNet output, and we have also created a label file through which we match the label of the prediction. Finally, we print the label and the confidence that we received. So this main.js file holds the entire logic of the machine learning part, which is running the TensorFlow code and doing the detection.

We can swap in different images here, which I'll quickly show you. For this demo I'm going to use the image of a pizza that you can see over here; again, who doesn't like food? So we're going to use an image of a pizza and run this. Now we can go ahead and build the Rust plus JavaScript application into a single WebAssembly bytecode, and what we're going to do here is run a command using cargo. So let's write that quickly. We're going to use cargo.
Then we're going to use build, because we're using this to build the Rust plus JavaScript application into that single WebAssembly bytecode. So I'm simply going to add my cargo build, then I'm going to give my target as wasm32-wasi, and then I'm going to give it the --release flag. In addition, I'm going to give it another flag, the --features flag, and in the features I'm going to pass tensorflow. The reason we add this features flag is that it asks the Rust compiler to use the WasmEdge extension API, and that is really important here because we are using the TensorFlow-based extension that WasmEdge provides for JavaScript through QuickJS. Another thing you'll notice is that we're targeting wasm32-wasi, and that is also very important when we are building the JavaScript interpreter for WasmEdge. So adding that features flag and that wasm32-wasi target is essential.

Once we have done this, we can execute the command and simply run it, and as you can see it generates the build very quickly. Now, to run what it created, we're going to use the wasmedge command and point it at the specific path where our executable was generated; I'll provide the link for that. It sits under target, then wasm32-wasi, since that is the target we used, then release, and then the QuickJS build output. One thing to keep in mind is that I'm missing something over here, and the important part is the TensorFlow extension. Since we are using the TensorFlow extension, we also need that on the WasmEdge side: rather than just plain wasmedge, we need to use wasmedge-tensorflow. So we're going to use that as our command here, and let's try to run this now. Let's see, I think, okay.

So once the target has been created, the next step is to run this executable with the help of WasmEdge. For that, I'll quickly write out the wasmedge command. Again, since we are using a TensorFlow-based extension, we need to invoke it through wasmedge-tensorflow. Apart from that, we're going to use the directory flag and add the path to our target file: target, then wasm32-wasi, then the release folder, then the QuickJS output, which will be the wasi .wasm file.
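Reconstructed from what is typed in the demo, the two invocations look roughly like this; the exact output filename under release/ depends on the crate name in Cargo.toml, so the last path is a placeholder, and wasmedge-tensorflow must already be installed on the machine.

    # Build the Rust + embedded JavaScript program into a single Wasm bytecode file.
    cargo build --target wasm32-wasi --release --features=tensorflow

    # Run it with the TensorFlow-enabled WasmEdge CLI, preopening the current directory
    # so the model, labels, and image files are visible inside the sandbox.
    wasmedge-tensorflow --dir .:. target/wasm32-wasi/release/<crate_name>.wasm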
So this should now be good enough to run and give us the result; it should print what it finds inside the image it has been given and render the result for us. In the meantime, while it does that, let's recap what we have seen so far: we are embedding the entire JavaScript code inside Rust, and that lets us run the business logic through JavaScript, the business logic here being the machine learning program through which we classify a particular food item. Things like preparation of the data and rendering of the data can all be handled with the help of Wasm, while the business logic can be written in JavaScript, and all of that runs on the WasmEdge runtime. And as you can see, it gives us the label pepperoni, which is pizza, with a confidence of 38%.

Now let's try to change the image. Apart from pizza, we have a lovely burger, so let's change the pizza to a burger. Once we make this change, we'll have to create another Wasm build, so we'll simply use the build command again. Once we are done with that, we can again run our wasmedge-tensorflow command to render the next label and confidence, and we should likely get a burger, with a confidence value as well.

I hope with this demo you're able to see how we can build a WasmEdge-based runtime application that embeds Rust, and inside the Rust we embed JavaScript. That allows us to run these machine learning based applications on edge devices, with Wasm as the central runtime. WasmEdge is officially adopted by the CNCF, and it's a hugely popular runtime for building cloud native and also smart-contract-based applications. And as you can see, the label is hamburger and the confidence is 60%.

So I hope that gives you a quick overview of how we can create these applications. Some things to keep in mind: QuickJS is a really small embeddable engine, and it can be used directly with WasmEdge because it comes with a lot of different extensions. You can use it with different kinds of TensorFlow models: you can embed your own custom TensorFlow Lite model, or you can run standard TensorFlow models as well, and that way you can build very quick-to-use applications. And of course, WasmEdge is one of the fastest WebAssembly-based runtimes, and you can make this even quicker by using, let's say, wasmedge-tensorflow-lite, which can bring at least a 10x improvement in the overall detection and the predictions being made. So thank you so much for watching this demo.

Thank you so much, Shivay, for that awesome tutorial. I hope the audience loved it and got the inclination to learn more about how WebAssembly ultimately interconnects cloud native, machine learning, and the web. So thank you so much to everyone who joined us.
Over to you, Shivay. Yeah, thank you so much, everyone, as Mritunjay said. I hope that with this talk you're able to understand how this wide range of technologies can today be brought together into a single functioning application with the help of WebAssembly. WebAssembly is definitely the future for cloud native applications. You can reach out to us on the following handles, and now we'll be waiting for your questions and answers. Thank you so much. Thank you so much.