All right, hello, everyone. Welcome to my talk at Kubernetes on Edge Day. The title of my talk is Managing WebAssembly Applications with Kubernetes. Today we'll be covering why WebAssembly matters for edge computing, why it's really well suited to the edge, and how you can manage it with the help of Kubernetes, since Kubernetes today is very well integrated on edge devices as well.

A very quick introduction to myself. I'm Shivay Lamba. I'm a developer advocate at Meilisearch, an open-source Rust-based search engine. I'm also a contributor at Layer5, the service mesh community that runs a number of open-source projects currently incubated in the CNCF landscape. And I'm an evangelist for the WasmEdge project, which I'll be talking a lot more about today. If you want to connect with me, you can reach me on Twitter. But yeah, let's get started.

First of all, a very quick rundown of what we are going to be covering. In the outline, as you can see, we'll cover what WebAssembly is, talk about the WasmEdge project, and discuss why the Kubernetes community actually needs WasmEdge. Then we'll cover one of the limitations that comes with WebAssembly and how we can overcome it using Kubernetes. We'll also look at some examples of how you can actually run WebAssembly applications on Kubernetes, and finally what's next for running WebAssembly applications with Kubernetes.

So first of all, a very quick recap of what exactly WebAssembly is. You can think of WebAssembly as a binary instruction format. It was originally devised as a format to run alongside JavaScript in the web browser.
Because we know that JavaScript has a lot of performance limitations if we want to run, let's say, really high-end applications such as video editing in web browsers. Using JavaScript for that can be quite limiting. That is why WebAssembly was born. The main idea was to be able to use languages such as C and C++ and create compact executables that could run in the browser through this binary instruction format, WebAssembly. That allowed very computation-intensive applications to run directly in the browser: you could take C or C++ programs, compile them, and run them right there.

But today, WebAssembly has grown and matured and is no longer just part of the browser. In fact, the in-browser use of WebAssembly is now just a small part of the picture. It has moved outside the browser to cover server, serverless, and, of course, edge computing as well. That's why today we're going to be talking a lot more about how WebAssembly is revolutionizing the edge: we are seeing a lot of edge applications in machine learning, and even in Web3 and blockchain, running dapps or server-side applications on WebAssembly. So WebAssembly has grown a lot in popularity and has moved outside the web to support many server-side applications. And we now also have support for many other languages, like Python and Rust, which you can compile and execute using WebAssembly.

With that, I'd like to officially introduce the WasmEdge project. WasmEdge is an open-source project that was recently incubated into the Cloud Native Computing Foundation.
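To make that compile-to-Wasm point concrete, here is a hedged, minimal sketch: an ordinary Rust program that can be compiled to a WASI target and then executed by a Wasm runtime such as WasmEdge. The build and run commands in the comments are the standard rustup/cargo ones, shown as an illustration rather than the talk's exact demo.

```rust
// A minimal program of the kind you might compile to WebAssembly (WASI target).
// Typical build, assuming rustup is installed:
//   rustup target add wasm32-wasi
//   cargo build --target wasm32-wasi --release
// and then run the resulting .wasm with a runtime, for example:
//   wasmedge target/wasm32-wasi/release/hello.wasm

fn fib(n: u32) -> u64 {
    // Iterative Fibonacci: the kind of CPU-bound work that Wasm
    // handles well both inside and outside the browser.
    let (mut a, mut b) = (0u64, 1u64);
    for _ in 0..n {
        let next = a + b;
        a = b;
        b = next;
    }
    a
}

fn main() {
    println!("fib(30) = {}", fib(30));
}
```

The same source compiles natively or to Wasm unchanged; only the `--target` flag differs.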
Essentially, it's a lightweight runtime for WebAssembly, mainly used for cloud-native edge applications. Let's talk about some of the features of the WasmEdge project. It is one of the fastest WebAssembly virtual machines because it uses ahead-of-time compilation, compared to other WebAssembly runtimes out there. It supports out-of-the-box interfaces, especially for machine learning: there is a very popular TensorFlow interface that comes directly with WasmEdge, so you can definitely try it out if you are running machine learning applications on edge devices. It also comes with support for several scripting languages, like JavaScript and Python. And it is OCI compliant, which means you can use it with your existing container tooling and it follows the relevant container standards.

Before we move on to the next part, I'd definitely recommend checking out the WasmEdge project. It's a relatively new project right now, so if you are curious and interested in WebAssembly, it's a really good place to be. It's still a very small community; we have weekly and monthly working group meetings for the WasmEdge project, and there is a lot of scope to contribute. You can also join the Slack channel.

But yeah, moving on to why the Kubernetes community actually needs WasmEdge: why, at a Kubernetes on Edge event, does Kubernetes need WasmEdge at all? One of the biggest reasons is that edge computing has grown quite considerably, but standard containers, which are usually Linux-based containers, occupy a lot of space.
With edge computing, you are restricted in terms of how much space and how much compute you have. And a lot of the time you need your runtime to be very quick: containers have to spin up very fast, especially when we are dealing with edge devices. That's where WebAssembly comes into the picture, with immediate benefits compared to the standard Docker or Linux-based runtimes, because WebAssembly applications occupy only about 1% of the space and are much quicker to set up and run. That is why Wasm, or WebAssembly, is a really great choice for the Kubernetes ecosystem, especially if you are setting up containers for edge applications.

But of course, there are some pain points when it comes to WebAssembly. One of the biggest is that it requires its own set of toolchains and SDKs, which makes it a bit more difficult to set up, especially if you already have a well-established infrastructure for your Linux-based containers. The good part is that you can actually run your standard Docker or Linux-based containers side by side with WebAssembly containers, and that's the infrastructure we're going to look at in a moment. Developers can use familiar container tools like Kubernetes and Docker to deploy WebAssembly applications, because deployment of these WebAssembly runtime applications is supported on any of these platforms.

So of course, the question is: how do we achieve this? I'd like to spend some time on this particular architecture. What you can see here is the entire container ecosystem. We have our high-level runtimes, and we have our low-level runtimes.
And of course, we have the entire Kubernetes stack that you can see. The goal is to be able to load both your Linux container images and your WebAssembly images together inside a single application. What you're seeing here are some of the highlighted paths. For example, if you want Kubernetes to help manage these WebAssembly applications, you could use CRI-O, containerd, or even Docker to run them. And at the lower level, we have OCI-compliant runtimes like crun. The reason crun was chosen is that it's a very lightweight C-based runtime, and an integration was built on top of crun that adds WebAssembly support, so crun can run Wasm workloads directly. That is why you get a lot of different options across this Kubernetes stack. You can use any kind of Kubernetes platform: if you have a larger setup, you could use standard Kubernetes, or for a lighter setup you could use k3d or K3s, the smaller Kubernetes distributions.

Essentially, what this whole ecosystem demonstrates is this: in a standard Kubernetes deployment, the container runtimes typically run just Linux containers. But now, to overcome the limitation that WebAssembly requires its own set of toolchains, you can run WebAssembly alongside your existing Linux containers. That overcomes the issues with deploying WebAssembly in the standard way. That is why this ecosystem is really amazing: it makes it very easy to set up and run WebAssembly applications as part of the stack.
Now I'd like to cover a bit more of how you can actually go ahead and get started. There are, of course, some prerequisites. For example, you'll have to install Rust, and you'll have to install Kubernetes. Apart from that, you will also need some other tools, including CRI-O, a high-level container runtime that pulls images from registries like Docker Hub. You need WasmEdge, which you can install very easily from github.com/WasmEdge/WasmEdge; it's simplest to set up if you have a Linux system. And of course, you need Kubernetes itself.

Once you have installed all of these, we'll go through the steps by which you can very easily set up and run a WASI application inside your Kubernetes pods. First, you can follow this set of commands; at the end I'll share some links, and I've already included them on this slide, so you can follow the steps in the documentation to set up Kubernetes and WasmEdge locally on your system. The first step is to actually start Kubernetes. This particular example showcases running Kubernetes in your local development environment, but if you have a hosted instance of Kubernetes, you can use that too. So you run these commands, and once they finish, your local Kubernetes cluster is set up and running. The next thing is to start the actual workload: you follow these steps, where you select your Kubernetes provider, and then run the commands that allow WebAssembly applications to start running in Kubernetes as pods.
Once we've accomplished that, we have a very simple open-source GitHub project where we have created a Docker image. You can pull it as a container image from Docker Hub and run it inside your Kubernetes cluster. The end result you see is that we have successfully run the WebAssembly application, a simple WASI demo, inside our Kubernetes cluster. Normally there are various other runtimes you could use to run these WebAssembly applications, but here it takes at most half an hour or an hour to set up and start running your WebAssembly application inside your Kubernetes cluster.

Similarly, you can also use this for machine learning inference. What you're essentially doing with the WasmEdge machine learning interface is offloading all of your machine learning tasks into your WebAssembly application. You'll have, basically, a Rust function that handles the inference work, and you can set up that entire infrastructure directly inside your Kubernetes cluster. Likewise, if you're running sidecars with Envoy proxy, you can also very easily set that up inside your Kubernetes cluster.

The implementation we covered uses the WasmEdge project, but outside of WasmEdge there are other ways to run WebAssembly applications using Kubernetes. One of those projects is Krustlet, a Rust-based kubelet implementation that allows you to run WebAssembly workloads in a Kubernetes cluster. You can check this out as well; it has very good support and really great developer documentation.
Krustlet, too, is a good way of running your WebAssembly workloads inside a Kubernetes cluster. Then, of course, here are some resources you can have a look at. There are dedicated working groups today for WebAssembly that you can join, and the WasmEdge project has its own working group. You can also look at some of these other links, specifically for Krustlet. If you want a little more context on the world of WebAssembly outside the browser, there is also WASI, the WebAssembly System Interface. And if you are interested in how WasmEdge actually works under the hood, there are a lot of talks already out there. One of the co-founders of Second State, Michael Yuan, gave a talk yesterday at Cloud Native Wasm Day. So if you are generally interested in the cloud-native ecosystem and how WebAssembly interacts with it, yesterday's co-located Cloud Native Wasm Day event is worth a look; a lot of the activity around the WasmEdge project is happening at that event as well.

Today we are also starting to see a lot of companies using Kubernetes platforms like k3d and K3s to run WasmEdge inside their workloads, so it's definitely a very new and growing field in terms of how WebAssembly is being used in edge computing. And with that, that concludes my talk. What I'd like to leave you with is that WebAssembly is the future for serverless and for the edge, and Kubernetes is a great way to manage your WebAssembly-based applications. If you have any other questions, feel free to ask. Otherwise, thank you so much for attending today's talk.

So I've got a question, too.
I understand that there are ways to run WebAssembly on very small systems, maybe systems that don't even run Linux. Could you tell us about that and what's available?

So the question is mainly about... could you repeat the question once?

Oh, I'd like to hear more about running WebAssembly on very low-resource systems, systems that maybe are so small that they can't even run Linux. Something like an Arduino, for example, that has minimal or no OS.

Yeah, so that's a really good question, first of all. We have seen that if you have an Arduino or, let's say, an Nvidia Jetson Nano, those are perfectly capable of running machine learning inference, and machine learning on the edge is becoming very popular today. If you're talking about running these highly computational workloads, for example machine learning, the way you'll run them is that you'll have a deployed instance of WasmEdge providing the environment. The way this works is: let's say you have a Python-based or JavaScript-based implementation of your machine learning logic. You'll offload your main inference, or any other computationally heavy part of the application, to a Rust-based function, because Rust has a much smaller footprint and is well suited to heavy computation. So Rust implements all the computationally intensive functions, and this runs inside the Wasm container, which is much smaller than, say, a Linux-based container. The other benefit is that WebAssembly also provides a much more secure, sandboxed environment.
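As a hedged sketch of that offload pattern, here is what such a Rust function might look like. To stay self-contained, `classify` below is a hypothetical hand-rolled linear classifier, not WasmEdge's actual TensorFlow or WASI-NN interface; the function name, its weights, and the data are all illustrative.

```rust
// Illustrative stand-in for the kind of compute-heavy function you would
// offload to Rust and compile to Wasm. A real WasmEdge deployment would
// call the runtime's ML interface instead of this hand-rolled classifier.

/// Score each class as the dot product of `input` with that class's
/// weight vector, and return the index of the best-scoring class.
fn classify(input: &[f32], weights: &[Vec<f32>]) -> usize {
    weights
        .iter()
        .map(|w| w.iter().zip(input).map(|(wi, xi)| wi * xi).sum::<f32>())
        .enumerate()
        .max_by(|(_, a), (_, b)| a.partial_cmp(b).unwrap())
        .map(|(i, _)| i)
        .unwrap_or(0)
}

fn main() {
    // Two classes; the first weight vector matches the input best.
    let input = vec![0.9, 0.1];
    let weights = vec![vec![1.0, 0.0], vec![0.0, 1.0]];
    println!("predicted class = {}", classify(&input, &weights));
}
```

The host (Python or JavaScript) would hand the input to this Wasm module and read back the prediction, keeping the heavy loop inside the small, sandboxed Rust binary.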
So essentially, this is how you'll architect your program to run on a very resource-constrained, hardware-limited system like an Arduino: you deploy your WasmEdge instance on it and then offload all of your computationally heavy tasks to, let's say, Rust, and run them there. I hope that answers the question.

That's good. Thank you. Anybody else got questions? We have a few minutes before the next speaker is due to come on. Maybe I'll come up with another one then.

Do you have any recommended platforms? Because I think a lot of us are used to Docker at this point, but WebAssembly is kind of new. What would you recommend the steps be if I just want to play around with it on my laptop, or would you recommend using a Pi instead? How would you go about that?

Yeah, that's a really good question. For me, how I started off was actually with TensorFlow.js, where you have the ability to use Wasm, WebAssembly, as a backend. So the way I got started with Wasm was simply, first of all, understanding how Wasm actually works on the web, because that's one of the easiest ways to get started: there's a lot of wide developer support specifically for using Wasm in the web browser, and for how you convert, let's say, a C++ function into a Wasm executable. So generally, that's the process I'd recommend to everyone: create a simple C++ program, compile it into a Wasm executable, and then actually execute it. That is generally the starting point for converting any of your existing C++ or Rust code into a Wasm executable and using it inside your program. That's definitely one of the good ways. And once you have gotten the hang of what a Wasm executable looks like, you can start implementing other things as well.
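The getting-started path just described (export a small function, compile it to a Wasm module, call it from a host) looks much the same in Rust as in C++. A hedged sketch, where `add` is just an illustrative function and the build commands in the comments are the standard rustup/cargo ones:

```rust
// Sketch of exporting a function for a Wasm host to call.
// Typical build for a browser-loadable module:
//   rustup target add wasm32-unknown-unknown
//   cargo build --target wasm32-unknown-unknown --release
// The host (browser JS or a standalone runtime) then looks up `add`
// in the module's exports and calls it.

#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    // Exported with the C ABI so the Wasm host can find it by name.
    a + b
}

fn main() {
    // Native smoke test; in the .wasm build the host calls `add` directly.
    println!("add(2, 3) = {}", add(2, 3));
}
```

Once this round trip works, swapping in a real function body is the same workflow.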
Hello, thank you very much for your talk. Are there any kinds of constraints in terms of the complexity of application that you might want to run in WebAssembly?

Yeah, that's a very good question. Today, in any kind of microservices-based architecture, there can be a lot of different services running inside an application. The way we see WebAssembly right now, specifically on the edge, is that it's very well suited for simpler applications that don't have a large number of interdependent services. The scope of WebAssembly in Kubernetes is definitely improving, but so far what we have seen is simple applications without a lot of different services. How we can architect a larger number of services to run together inside a WebAssembly application is something we're still exploring.

Is there some distribution of Kubernetes that's really easy to get started with? Or do you have to install every component of Kubernetes on your own, like on a Raspberry Pi or something?

Yeah, so basically, I'll recommend two things. Standard Kubernetes itself is relatively simple. But if you want to run a lightweight instance of Kubernetes, look at k3d and K3s, because those are lightweight Kubernetes distributions that you can set up. I'd definitely recommend using those.

You talked about packaging these in OCI containers. Is the process for building these as capable as it is with Docker? Can you just use the standard mechanisms for signing these things and moving them around in conventional container image registries?

Yeah, definitely, that's possible.

We've still got seven minutes before the next one, if anybody else has questions.
OK, thank you. Thank you.