Thank you so much for coming to this talk. I'm super excited to be presenting on supercharging your Kubernetes AI deployments with the help of WebAssembly. My co-presenter unfortunately couldn't make it because of visa issues, so I have recorded a few of his slides and the demo that he'll be running. I hope you're able to learn something today from this talk. A quick introduction about myself: I'm Shwai, a developer advocate at Meilisearch, which is a Rust-based search engine, and also a contributor at Layer5, which is a service mesh community. And Arishat, my co-presenter who couldn't make it, is an incoming student at the University of Toronto. The first thing I'd like to cover is: what exactly is WebAssembly? One of the most common misconceptions among people who have heard the term, or encountered it in the news or in open source, is that they relate it to the web, since the word contains "web." So people usually assume it's limited to web browsers, web applications, or JavaScript. They also like to compare it with assembly language. But it is neither web nor assembly. It did start as a browser technology back in 2017, but it now has far more usage outside the browser, especially for cloud-native use cases, and we'll see today how you can use it with Kubernetes to deploy your applications. The main idea is that WebAssembly is a binary instruction format, which means it can run at near-native speed. If you were to look inside a .wasm file, its structure looks very similar to an assembly language program, but it works quite differently from real assembly, and it's primarily used as a compilation target.
So you can think of WebAssembly not as a language you write directly, but as a compilation target: you take a program or a function written in any popular programming language, like Rust or C++, and compile it down to WebAssembly bytecode, and that bytecode is what interacts with the host environment. It's a polyglot ecosystem. Whether you use functional programming languages, scripting languages like Python or JavaScript, or object-oriented languages like C++ or Java, they can all be compiled into WebAssembly. So whether you're coming from a web background or from a DevOps or Golang background, you'll find a way to convert functions written in those languages into WebAssembly bytecode. And that brings us to some of WebAssembly's key features. The first is that it's super efficient and fast. The biggest reason is that WebAssembly bytecodes are very small, which lets you run them very efficiently; this is one of the biggest advantages we'll see when comparing against standard Docker images, or containers in general. The second is that it's open and debuggable. The WebAssembly ecosystem is completely open source: much of it is stewarded by the Bytecode Alliance, which acts as a governing body for the ecosystem, and the protocols surrounding WebAssembly are regularly discussed at Bytecode Alliance meetings. And the biggest one, which we'll cover today, is that it also works on non-web platforms, specifically in the cloud-native ecosystem. And one of the latest surveys was conducted by the CNCF right before KubeCon North America.
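To make "compilation target" concrete, here is a minimal sketch in Rust. The function and file names are invented for illustration; the build commands in the comments assume a standard Rust toolchain with the wasm32-wasi target installed.

```rust
// A tiny Rust program that compiles unchanged to a native binary
// or to a WebAssembly module -- illustrating "compilation target".

/// A plain compute function: nothing in it is platform-specific,
/// so the same source can target x86-64, ARM, or wasm32-wasi.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn main() {
    // Build natively:        cargo build --release
    // Build for WebAssembly: rustup target add wasm32-wasi
    //                        cargo build --release --target wasm32-wasi
    // Run the module:        wasmtime target/wasm32-wasi/release/app.wasm
    println!("fib(10) = {}", fibonacci(10));
}
```

The same `.wasm` artifact produced here runs on any architecture that has a WASM runtime, which is the portability point made above.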
They announced the results at KubeCon North America: the usage of WebAssembly today in server-side and cloud applications is actually two to three times higher than in web applications. In standard web applications, WebAssembly bytecode runs inside your JavaScript application, and some of the most common places you'll see it are Figma and Adobe Lightroom — applications running in the browser that use WebAssembly for computationally intensive tasks that regular JavaScript can't handle well. Figma, for example, uses Emscripten to compile its C++ code into WebAssembly bytecode and runs it alongside JavaScript. WebAssembly is also safe: it comes with a security model, a sandbox model that keeps it very safe. One of the trade-offs we'll see, though, is that WebAssembly bytecode cannot do anything on its own. If you want it to interact with, say, your file system, you need additional interfaces to manage that, and we'll see how that's done in the later part of this presentation. So what we've seen so far is that WebAssembly is a really fast and efficient way to run applications. Another great benefit is that you have a single bytecode, and that bytecode runs across multiple platforms — it is not platform dependent. Usually when you write a program you have to install dependencies that may be tied to one specific environment, but with WebAssembly you can run the same module across different system architectures without worrying about whether it's an x64 architecture or some other architecture. So again: safe and portable.
And then primarily, when we talk about the ability of WebAssembly bytecode — or the whole ecosystem — to interact with your file system, that's where binary compatibility comes in, and that's where I'd like to introduce WASI, the WebAssembly System Interface. This is the technology that lets your WebAssembly modules interact with system resources and network resources. At this point I'll forward to the presentation recorded by my co-presenter and play the video.

The WebAssembly System Interface, or WASI for short, plays a great role in using WebAssembly outside the web. There's a very popular tweet you might have seen a lot, from the founder of Docker: "If WASM and WASI existed in 2008, we wouldn't have needed to create Docker." That's how important it is. I want to start talking about WASI by talking about system interfaces. Take the statement "C gives you direct access to system resources." That is most certainly false, because direct access is far too dangerous for stability and security. What actually happens — and this is a pretty popular diagram — is that applications go to the kernel for all kinds of system calls: opening a file, deleting a file, and so on. The application asks the kernel, "can I do this system call?", and the kernel then facilitates the system call for the application, typically following a standard like POSIX. All applications and programming languages have different ways of doing system calls — you do them differently in C than in Python or Java — but underneath, it's always the kernel carrying them out.
And there's also the related problem of allowing users and applications to do what they have the right to do without interfering with other applications. What WASI addresses is that C or Rust might have very different ways — very different standards — of doing system calls: it gives you programming-language-independent methods. Using WASI, if you want to make a system call from C or Rust, you go through WASI instead of C's or Rust's native way of doing system calls. And this becomes pretty helpful, because when you compile down to WebAssembly, you can run that WebAssembly wherever you want — even on the server side. This is important for running WASM outside the web, because you're no longer tied to any platform-specific way of making system calls; you just use WASI whenever you want to make a system call. So WASI gives you independent methods for system calls, and that's what makes it practical to run WASM outside the web. The demo we'll be seeing uses WASI as well, because we want to run WASM on the server side.

And finally, to end with, here are some benchmarks I ran: a machine learning model running with TensorFlow in Python in a Linux container, and the same model compiled to WASM — in the Node.js WASM runtime and in Wasmtime, a very popular WASM runtime. I have a bunch of different benchmarks here, and the code for them is on my GitHub. What I particularly want you to see is the comparison between TensorFlow Python in the Linux container and the WASM runs, all executing the exact same machine learning model. I especially wanted to look at WASM with AOT compilation, which gave some pretty interesting results.
It's quite a bit faster than what the Linux container takes to do this. These are just some benchmarks — feel free to explore them further or look through the code that produced them — but it's a pretty interesting result showing what WASM, and particularly AOT compilation, can do. AOT compilation plays a great role here.

I hope, by the way, everyone is able to hear properly. Okay, perfect. So what we saw from that demonstration is that for running a simple inference with one of the most popular machine learning models, MobileNet, WASM was around 2 to 2.5 times faster than the standard container image we ran with Python inside a Linux container. That showcases how well WebAssembly suits highly computational tasks. Of course, this was just one example, but now we'll move on to the demos. Within the demos, we'll focus on showing you how you can create WebAssembly-based APIs and microservices and deploy them inside your Kubernetes clusters, running on Kubernetes nodes. In the specific example we'll run — and we'll show you this in the demonstration as well — we'll have a few node pools: some nodes running your standard Linux containers, and some nodes running your WebAssembly workloads, and we'll show how you can run them side by side and have them communicate with each other. First, let me set the context for the demo. What we'll be using is either Spin or Slight. Spin and Slight are frameworks that make it very easy to build and host WASM apps on the server side. And what we also use is the containerd WASM shim.
So the containerd WASM shim is what bridges WASM workloads into Kubernetes and containerd. It uses runwasi as a library and makes it very easy to run WASM apps — think of it as a shim layer you add to containerd. If you look at how the overall picture fits together, it can be managed through the CRI as well. In our setup, we have a worker node that is just running WASM applications, and we reference the shim through a runtime class in our Spin or Slight application — and you can run one of each side by side.

So the intent here is not to replace Docker containers with WebAssembly containers, but to run them side by side: your Docker containers keep the benefit of Docker's extensive ecosystem and built-in toolchains, while the highly computational tasks are handled by the WebAssembly workloads. The demo we're going to show runs on Microsoft Azure. I'll quickly walk through the demo, and of course I'll be open to questions later on. Here is a single Kubernetes cluster — right now, just look at the system nodes I have, which are pretty standard. I already have this Kubernetes cluster created on Azure, but you can use almost anything you want. What we're doing is making a sample WASM image, and we'll use something called Slight. Slight allows you to easily run and host your WASM apps, and it actually uses runwasi as a library.
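As a rough sketch of how the shim plugs into containerd (the exact runtime names vary between shim releases, so treat these keys as illustrative rather than authoritative), registering the Slight and Spin shims on a node looks something like this in the containerd config:

```toml
# Illustrative containerd config snippet: register the WASM shims as
# extra runtimes alongside runc. The matching shim binaries (e.g.
# containerd-shim-slight-v1) must be on the node's PATH.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.slight]
  runtime_type = "io.containerd.slight.v1"

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.spin]
  runtime_type = "io.containerd.spin.v1"
```

A Kubernetes RuntimeClass handler then refers to these runtime names, which is how a pod ends up on the WASM shim instead of runc.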
runwasi allows you to very easily use the capabilities of WASI, so that's what we'll be relying on. To do this, I have a Rust application — a very simple one that just prints out "hello" when you make an HTTP call to it. That's what we'll be looking at: something simple, but it shows the HTTP capabilities — making a request and handling it with WASM — which is what we're actually interested in. So we'll build against the wasm32-wasi target. This is a single file; if we go to the handle-hello function, it returns our "hello" response. Compiling against the wasm32-wasi target is what converts this into a WASM module, and Slight makes that very easy for you.

I've already created the node pool. If you look at the node list, alongside the standard system node pools there's one node that is actually running WASM/WASI workloads, and that's where the Slight application will run. You can do this demo the same way with Spin instead, if you want. The deployment needs runtime classes, so let's go ahead and apply those. Next up, I want to deploy the application, and the deployment needs to reference the runtime class. Wasmtime is the name of the WASM runtime here — you can of course use others, like WasmEdge, but Wasmtime works very well, so that's what we'll use. One thing I hadn't shown in the slides is the load balancer — you'll of course need a load balancer here, so I deploy that as well. So, just to keep within time: I hope that with this demo you could see that we took four different node pools, the first three running your standard Docker containers.
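To make the setup concrete, here is a hedged sketch of the Kubernetes objects the demo applies: a RuntimeClass that routes pods to the WASM shim, a Deployment that opts into it, and the LoadBalancer Service. All names, labels, ports, and the image reference are placeholders, not the demo's actual values:

```yaml
# Sketch only -- handler and node label must match your shim install.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-slight
handler: slight                # the runtime registered in containerd
scheduling:
  nodeSelector:                # pin pods to the WASM node pool
    kubernetes.azure.com/wasmtime-slight-v1: "true"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-wasm
spec:
  replicas: 1
  selector:
    matchLabels: { app: hello-wasm }
  template:
    metadata:
      labels: { app: hello-wasm }
    spec:
      runtimeClassName: wasmtime-slight   # run via the WASM shim
      containers:
        - name: hello-wasm
          image: myregistry.example/hello-wasm:v1   # placeholder image
---
apiVersion: v1
kind: Service
metadata:
  name: hello-wasm-lb
spec:
  type: LoadBalancer           # the load balancer mentioned in the demo
  selector:
    app: hello-wasm
  ports:
    - port: 80
      targetPort: 3000
```

The key line is `runtimeClassName`: it's what lets WASM pods and ordinary runc pods coexist in the same cluster, each landing on the right node pool.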
And the last one is the node pool we deployed for WASM. You can see the YAML structure, and I'll also share the link if you want to see the full deployment YAML that was created specifically for our WASI node. The other demo I want to quickly showcase uses KWasm. KWasm is a Kubernetes operator that allows you to run your WebAssembly workloads directly on Kubernetes. One great thing is that it works out of the box with many setups — whether it's minikube or MicroK8s, it has really great support for multiple Kubernetes distributions, and also for multiple WebAssembly runtimes. In case you want to get started, you can install KWasm using Helm: add the chart, install the KWasm operator, and then run a kubectl command to deploy the WebAssembly example.

Here, I have already done that. This is my terminal, and this is what I'm going to do: I'll clear it out, apply the example, and then test out this particular job. What we're doing here is running a sample WebAssembly job, and you can see the result. And if I list my pods — let me go ahead and bring them up — you can see there's one from the WASM test, which ran successfully, and my KWasm operator is also running perfectly fine. And just to summarize what we have covered: there's a two-way relationship. Kubernetes needs WASM because WASM gives you tiny artifacts, much faster startup times, and platform independence.
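The Helm-based install just described can be sketched like this. The commands follow the KWasm project's documented flow, but the repo URL, chart name, and node annotation belong to that project and may change between releases:

```shell
# Add the KWasm chart repository and install the operator
helm repo add kwasm http://kwasm.sh/kwasm-operator/
helm repo update
helm install kwasm-operator kwasm/kwasm-operator \
  --namespace kwasm --create-namespace

# Annotate the nodes the operator should provision with a WASM runtime
kubectl annotate node --all kwasm.sh/kwasm-node=true
```

After the operator provisions the nodes, a WebAssembly job can be applied with a plain `kubectl apply -f`, as shown in the demo.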
And why does WASM need Kubernetes? Because, as we know, Kubernetes is the gold standard today for scaling up applications. So the main idea of today's presentation was to showcase that balance: how you can run your WebAssembly workloads inside Kubernetes. Especially if you have highly computational tasks, such as machine learning — or even outside of machine learning — you can run them very efficiently on WebAssembly because of the small size and faster processing, and then manage those workloads on top of Kubernetes. And of course, please take a look at the slides and the code snippets we covered in today's demo — they might be useful. And with that, thank you so much. You can connect with us on Twitter if you have any questions, and of course I'll be here to answer any of your questions now. Thank you.