Hi, welcome to KubeCon 2021 in Los Angeles. This session is about running workloads at the edge in a new way, using WebAssembly with Kubernetes. I'm Steve Wong with VMware, representing the Kubernetes IoT Edge Working Group. I'm joined by Dejan Bosanac of Red Hat and Kilton Hopkins of Edgeworx, who couldn't be here physically, but they graciously prepared recordings, which we'll be patching in, and they're going to be online for the Q&A section. The agenda: we'll do a brief intro to what WebAssembly is, and why and where it might be useful. Then Dejan and Kilton, who went hands-on and tried out WebAssembly with Kubernetes, will give a report on what the current experience and future potential are like. We'll wrap up with links on how you can join the working group of people trying to apply Kubernetes at the edge. What is WebAssembly? It's an open standard for portable programs that can be written in almost any language. They are extremely portable: WebAssembly can run unchanged on ARM, x86, and other CPUs, and it's not locked into an OS either. WebAssembly was originally designed to run inside a web browser in a safe way, but people discovered that it was also useful as a portable and efficient way to run sandboxed code in other places, far beyond the browser context. The runtimes that support WebAssembly range down to tiny OS-less microcontrollers — the kinds of things that might cost $8 and run for two years on a coin cell battery. All these aspects make it attractive for running on devices with constrained resources. So how does WebAssembly compare to Docker? Well, Solomon Hykes, who's often credited as the inventor of Docker, has been quoted as saying that if WebAssembly had existed in 2008, Docker wouldn't have been needed.
Now, Docker is much more mature, and perhaps at this point locked into some decisions and use cases, but people have found issues with attempting to run Docker containers at the edge on low-resource devices. WebAssembly has a few attributes that position it behind Docker — it's less mature, it doesn't have the supporting landscape — but it's also got the attributes you can read on the slide that potentially position it ahead of Docker for these edge use cases. I'll tell you that I got here to KubeCon on Saturday for the Cloud Native Rejekts conference, and by my read, the hallway track here has pretty much a third of the cool kids talking about WebAssembly. This just has a feel about it that something big could happen here. It feels to me a lot like what Docker was like circa 2013. Now, one of Docker's advantages is its maturity and supporting landscape, but what if the existing tools, things like Kubernetes, could be adapted to do the same kinds of things they do with Docker containers, but for WebAssembly? Could that be made to work? Well, you're about to see some demos where a couple of people went out there and checked it out, and the quick answer is yes — but is it perfect? No. So it's hard to predict what will happen exactly. Maybe this all gets optimized, and Kubernetes turns out to be the perfect orchestrator for WebAssembly, or we could have a replay of the orchestrator wars that came about in the early days of containers. I'm not going to pretend that I've got the answers to that. Another observation: almost 20 years ago, we had technology like MapReduce and Hadoop bringing compute to the data. In that case, it was to avoid expensive transport of the data, and edge has something similar going on. People would like to avoid big network transfer costs, but a lot of this is about latency and resilience, and WebAssembly looks like a good way to bring compute close to the sources dealing with sensors, data generation, and event generation.
So why am I wearing this crazy suit? I'm wondering that myself — it's pretty uncomfortable; it's a plastic Halloween costume — but I think the potential here is really big. The original Kubernetes was designed for giant data centers, where you would pool resources and use that pooling to gain efficiency and operate at large scale. In a way, edge is similar but different. It isn't really pooling resources — it's the opposite of that — but you are still managing containers at large scale. If somebody can bring about this whole big picture of putting code in WebAssembly and managing and governing it at scale, this has the potential to not just be like the existing cloud. This is as disruptive as a new type of cloud, and I think there's a chance that could happen. So, the demos you're about to see — I'm just going to advance this; if you can read fast, fine, but I put it in the deck so you can look at it later — are going to cover a lot of things. So right now, I'll bring on Dejan of Red Hat. Okay, it's demo time, and today we actually have two demos. The first demo will show how we can use container technologies at the edge today and what some of the specifics of that use case are. We'll talk about different CPU architectures and how we can allow our workloads to access peripherals, because an edge use case most likely wants to access some sensors or peripherals — that's the main use case. So let's start with the actual example that we're going to use. This is a simple Python script that uses a DHT sensor attached to the GPIO interface. The script reads the temperature and humidity values from the sensor, formats a JSON payload, and sends that payload to our cloud — Drogue Cloud in this particular instance, as we'll see soon. So what do we need to do now in order to build this and run it on a Raspberry Pi?
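The script itself only flashes by in the recording, so here is a minimal sketch of its shape. The `Adafruit_DHT` library, GPIO pin 4, and the Drogue Cloud endpoint URL are illustrative assumptions, not Dejan's exact code; only the payload-formatting helper is hardware-independent.

```python
# Sketch of a DHT-to-cloud reader (illustrative; not the exact demo script).
import json
import time
import urllib.request


def build_payload(temperature_c, humidity_pct):
    """Format sensor readings as the JSON payload sent to the cloud."""
    return json.dumps({
        "temp": round(temperature_c, 1),
        "hum": round(humidity_pct, 1),
        "ts": int(time.time()),
    })


def read_sensor():
    """Read the DHT sensor; needs real hardware, so import lazily."""
    import Adafruit_DHT  # assumed dependency, present only on the Pi
    # read_retry returns (humidity, temperature); pin 4 is a placeholder.
    return Adafruit_DHT.read_retry(Adafruit_DHT.DHT22, 4)


def main():  # not invoked here: requires the sensor and network access
    humidity, temperature = read_sensor()
    req = urllib.request.Request(
        "https://http.sandbox.drogue.cloud/v1/telemetry",  # placeholder endpoint
        data=build_payload(temperature, humidity).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```

The split between `build_payload` and `read_sensor` is just to keep the formatting logic testable off-device.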
As you can see here, I'm currently running a macOS system, which is an x86 platform in this instance, and I want to build an image that will run on a Raspberry Pi. There are multiple ways to do that, like getting a proper virtual machine to do the build, but fortunately the Docker Buildx tool can help us here, and as you can see in this example, we can provide a platform as a parameter to the build and build an image of the appropriate architecture that can run on an ARM platform. So now that we have this, we actually want to run the script and allow it to access GPIO, right? For that, we need to do two things: we need to run our container in privileged mode, and we want to enable the specific device that will be accessible from our container — in this case, /dev/gpiochip0. So as you can see now, in a Raspberry Pi console, we have this container running, and if you take a look at the logs, you can see that we are successfully reading the sensor from the container and sending the data to the cloud. We won't spend too much time on the cloud today because it's not the main topic, but there's a Kubernetes-based IoT cloud platform called Drogue IoT that can accept different kinds of payloads, and in this example we use it, so you can see our data being sent from our device. Moreover, there's an example Quarkus application that we can use to build backend applications that connect to the cloud. In this case, the Quarkus application uses the MQTT integration provided by Drogue Cloud. You can find more information about all this in the links that will be provided in the slides. So finally, we now have our container running, accessing the peripherals, and accessing the cloud. How are we going to schedule it? There are multiple ways, and we covered some of them in the slides for this session.
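The exact commands aren't legible in the recording; here is a plausible sketch of the two steps Dejan describes. The image name, target platform variant, and device path are assumptions for illustration.

```shell
# Cross-build an ARM image from the x86 macOS host (image name is a placeholder).
docker buildx build --platform linux/arm/v7 \
  -t ghcr.io/example/dht-reader:latest --push .

# On the Raspberry Pi: run privileged and expose the GPIO character device.
docker run -d --privileged --device /dev/gpiochip0 \
  ghcr.io/example/dht-reader:latest
```

These are environment-bound invocations (they need a Docker daemon, a registry login, and the Pi itself), so treat them as a shape to adapt rather than something to paste verbatim.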
The simplest way could be to provide a simple systemd setting that will run this every time our device reboots: a simple systemd unit entry that runs our container every time the Docker service starts, i.e., every time the Raspberry Pi reboots. This is about the most minimal possible scheduling, so to say, but something like ioFog, a lightweight Kubernetes like K3s, or some other system like Ansible is usually a better solution — and that's a topic for some other day. That concludes our containers-at-the-edge demo for today. And for a change of pace, we'll do a second demo now, in which we'll look at the state of WebAssembly today: how it compares to containers, how we can run it in a Kubernetes environment, and if and how we can use it at the edge today. In order to run WebAssembly payloads in Kubernetes, we will use the Krustlet project, which is currently a CNCF sandbox project. Krustlet basically allows us to add a node to the Kubernetes cluster that will be able to run WebAssembly payloads in a WASI environment, as we explained earlier in the slides. So to start the demo, I will use Kubernetes on the desktop — on the macOS desktop. As you can see here, I have one additional node, which is currently not active. In order to start this node, I need to start Krustlet by executing the krustlet-wasi command and providing some basic bootstrap configuration. Once krustlet-wasi is running, we will see that our node is now ready, and as you can see, it reports a special container runtime, which means this node will be able to run a Wasm payload. So with this, we have our basic infrastructure: we have Kubernetes running, with a control plane and a worker node that can run Wasm payloads. And now let's take a look at the concrete payload that we want to run. I prepared a small CloudEvents WASI example.
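For reference, starting a Krustlet WASI node looks roughly like the following. The node name, IP, and bootstrap path are illustrative assumptions; the bootstrap script ships with the Krustlet project.

```shell
# One-time: generate bootstrap credentials for the new node.
./bootstrap.sh

# Start the WASI provider; it registers itself as a node with the cluster.
krustlet-wasi --node-name krustlet --node-ip 192.168.1.10 \
  --bootstrap-file ~/.krustlet/config/bootstrap.conf

# From another shell: the node should appear Ready, advertising wasm32-wasi.
kubectl get nodes -o wide
```

On first start, Krustlet's certificate signing request also has to be approved on the cluster side before the node goes Ready; see the Krustlet docs for the exact flow on your Kubernetes distribution.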
This is a project that will emit cloud events to the cloud and receive and parse the responses as cloud events as well. One of the reasons we are not able to replicate the original project exactly is that, as we said, WASI at the moment is not able to access sockets or peripherals like GPIO. But I expect this to change in the future, and we will definitely follow up with a similar example that does this. For now, we will use the experimental wasi-experimental-http library and connect it to the CloudEvents SDK, also written in Rust, to provide a way to serialize and deserialize cloud events from HTTP requests and responses. As you can see here, we have our event builder that will create the event we want to send, and we can easily convert that into an HTTP request that can be used by the experimental WASI library. We'll then post that to the cloud and try to deserialize and parse the cloud event from the response. So what do we need to do now to actually build this payload and run it as a Kubernetes pod? That's the real question, right? In order to do that, we have to compile it with a special Rust target. Luckily, Rust comes with good support for multiple architectures, and what we need to do here is only install the wasm32-wasi target and use that target to build our binary. So let's try this. I have the target already installed on my machine, as you can see here, so that's all good. Now we can go and build our program — our workload. This will take a little while, but as a result, Cargo will produce a Wasm binary that can be run in a WASI environment. One example of such an environment is called wasmtime, and that's actually what Krustlet uses as well to execute these binaries. If you take a look at the payload now, an interesting thing to see is its size: it's only three megabytes, which will be important for our later discussion.
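The toolchain steps Dejan walks through can be sketched like this; the binary name is a placeholder, and the commands assume a working rustup/cargo install.

```shell
# One-time: add the WebAssembly/WASI compilation target to the Rust toolchain.
rustup target add wasm32-wasi

# Build the workload as a Wasm binary (release mode keeps the size down).
cargo build --target wasm32-wasi --release

# Optionally try it locally in a standalone WASI runtime before Kubernetes.
wasmtime target/wasm32-wasi/release/cloudevents-demo.wasm
```

Running under wasmtime first is a cheap way to confirm the binary works in a WASI environment before packaging it for Krustlet.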
But our job is not done yet, because what we need to do now is convert this Wasm binary into a container — an OCI image that can actually be pulled by Krustlet. Luckily, there's a wasm-to-oci tool that can help us convert it into an appropriate image, as you can see here, and push that to a container registry. It's important to say that not all container registries today support this, but the GitHub one that I was using does. Another important thing you can see here is that the size of this image is basically the size of the Wasm binary, so it's three megabytes. And if you compare that to any kind of container — even the simple one we ran previously — it's quite a big difference. There are tools that can help minimize the size of containers, but even with all that, I expect the size difference to be at least an order of magnitude. And that could be one of the big advantages of using Wasm in edge environments: the image takes very little space, and it's easier to ship over the network. So now we have to schedule this container as a pod, and I have an example of that here. You can see we will pull our image, but the important part is the tolerations section, which basically says that this payload is for the wasm32-wasi architecture and needs to be scheduled on a node that supports that architecture. What this means is that when we apply this pod to our Kubernetes cluster, it will actually be scheduled on our Krustlet node, as you can see here — that's the exact node we started earlier. Now we can check whether the pod is actually running properly, and as we can see here, it is.
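The manifest itself isn't readable in the recording; a sketch in the shape Krustlet's documentation uses, with the image reference as a placeholder, looks like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cloudevents-wasi-demo
spec:
  containers:
    - name: demo
      image: ghcr.io/example/cloudevents-demo:v1.0.0  # placeholder wasm-to-oci image
  tolerations:
    # Krustlet nodes taint themselves with arch=wasm32-wasi; these tolerations
    # let the pod be scheduled onto (and keep running on) such a node.
    - key: "kubernetes.io/arch"
      operator: "Equal"
      value: "wasm32-wasi"
      effect: "NoSchedule"
    - key: "kubernetes.io/arch"
      operator: "Equal"
      value: "wasm32-wasi"
      effect: "NoExecute"
```

Without the tolerations, the scheduler would keep the pod off the Krustlet node; regular container nodes, conversely, can't run the Wasm image, which is why the taint/toleration pair does the routing.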
So every couple of seconds we create our cloud event, serialize it into a proper HTTP request per the CloudEvents specification, post that to the appropriate service, and then parse the response back. And that works properly. And that concludes our Wasm demo. Just to repeat, we saw how to — in the interest of time, so we have time for Q&A, I'm going to jump ahead to Kilton's recording; Dejan was showing a slide with resources, and I'll close the presentation with that. So next up is Kilton Hopkins, CTO of Edgeworx, who is also lead of the ioFog project under the Eclipse Foundation. "...And it's an industrial computer that is sitting inside your smart factory." Yeah, I think this started playing mid-stream — let me see if I can... It's an industrial computer that is sitting inside your smart factory. In that situation, on top of the host Debian OS, choose a WebAssembly runtime and you're good. But that's not the only thing you need. Something has to coordinate — play a role similar to what the kubelet plays. So what is the agent that's going to start up, shut down, and monitor the status of the WebAssembly modules? That agent is going to do your workload management and your workload administration at the edge. So there are a lot of opportunities there, and I'm going to get into a couple more slides talking about some specific open source projects that are going in that direction and are going to add that capability. But you also need an edge connectivity handler. And here's a situation where a pure Kubernetes environment might not be suitable. This has to do with the edge — not particularly with WebAssembly — but it's something to note. In a pure Kubernetes environment, you have what I would call the standard-issue kubelet, and the communication back to the remainder of the system — the control plane and so on — is going to be rather fast and frequent. In an edge environment, that's probably not suitable.
What you would rather have is a lighter amount of traffic, with a bit more tolerance for breaks in connectivity or limited bandwidth, et cetera — in some cases, pure disconnection: I'm offline for two days, but I'm still listed as an active node, and no pod is getting evicted from that node, because it's understood that it will be checking in maybe every couple of days. That's a really good example of an edge connectivity problem. So choose an edge-native technology if you have an edge environment that is not suitable, in terms of bandwidth and always-on connectivity, for deploying your Kubernetes environment directly out to that edge location. Of course, if you run all of Kubernetes at the edge — including something like MicroK8s or K3s — then it's different, because now you're talking about LAN traffic, but that still may not be suitable for devices on low-power wireless, et cetera; just something to think about. And the last requirement is that you're going to need a repository for the binary code — the WASM files — similar to Docker Hub for Docker containers. Where are you storing them? How are you serving them? How are you verifying their integrity? When you get out into production and at scale, these are all pieces you need. So now that we have the requirements, let's talk about the types of edge deployments where you might want to use WebAssembly. For the purposes of this presentation, there are two types of edge deployment to worry about. In the first type, no edge device interfacing is required: it's all about latency benefits, bandwidth-constraint benefits, security, and privacy. In other words, I'm going to take a workload, package it as a WebAssembly module, and push it to an edge node close to where people will consume it — say, on the LAN, as a web interface for people to do real-time voting interaction for some entertainment purpose.
Great — it's just a box on the edge. If it's on the network, it's fine. It doesn't do any specific interfacing with an edge device, just the network layer. But that's not that many edge use cases; that's just some. The vast majority of edge use cases are the second type here, which I've listed as processing data from edge devices. Here, you're actually taking in some kind of sensor data. You're talking to a camera, doing some actuation, maybe using the GPIO pins of a board to drive LED indicators, or taking data from microphones. There are all kinds of possibilities. This is where you actually need an interface layer, so WASI — or some similar concept — is going to be required, and now the host environment becomes application-specific. What I mean by that is you can't just pick any old edge node; you have to pick the edge node that has the camera you want. And that's something to keep in mind about edge deployments, with WebAssembly or otherwise: you have to have the right stuff in place, in the right places on the globe, to get the data you want, and then you have to find a way to interface with it. This is reminiscent of Android's hardware abstraction interface — HIDL, the hardware abstraction layer interface definition language: how am I defining the way for calls like "open camera" and "get frame" to interface with the actual hardware? This is work still to be done, because it's not ready; it's not out there, universal, ready for you to use. If your edge deployment is this type of edge deployment, think about how you're going to access the special edge resources whose data you want to process with your WebAssembly, and how you're going to accomplish that. So what's the current state, and the future? What does WASI look like today? Super exciting, that's for sure.
There's a link at the end of this presentation to a blog post that basically defines WASI — "here's WASI, come help." It's a great blog post to get you up to speed in just one read. Currently, WASI is about handling the core system calls, things like files and networking — read and write a file, get access to the network, and so on. These things are going to take some time, and the principles of portability and security are fantastic for building out WASI. But it means that it's going to be done right, and doing it right means you're not going to see a quick proliferation of WASI features like access to cameras and such. You're not going to have that immediately, because it's going to be done right. So do you need access to that stuff today? If you do, how can you find a way to interface? Maybe it's best to wrap it in a container, and then your WebAssembly running in the container can have that stuff available. There are ways to think about it. So let's talk for a moment about edge-specific challenges. WebAssembly modules are the new microservices. What does that mean? Well, "microservice" doesn't mean "container," but the two technologies arrived together: the microservice architecture emerged around the time of container technology, as modular code that can be mixed and matched. That architecture leads to a multi-microservice world, meaning your application is likely built from multiple microservices. So if you have multiple WebAssembly modules in your new architecture, how do you exchange data between them? That is a common problem in edge computing, because things are not always interconnected — especially between edge nodes that might be in different buildings, different cities, different vehicles. Something to think about. And how do you go from edge to cloud, and so on?
With the data, that is — not in terms of the administrative connectivity ("what's my workload?"), but in terms of "where do I send these bytes I gathered from this camera?" And then another edge challenge is dynamic access. I start a WebAssembly module on an edge node that needs access to the microphone installed on the board. Well, I didn't know I needed access to that until I discovered I have a microphone — and that's when I deployed the workload: the agent on that edge node told me, "hey, these are the things we've got." Oh, great, a microphone — we want that; we want to process that data in real time. Now, unless you've planned for the entire host OS to have all this stuff exposed through a WASI or WASI-like layer, chances are pretty good that things will pop up and you'll need to allocate permissions — allocate permissions upon discovery. This is a big challenge of edge computing in general, and you couple that with the challenge of the not-yet-built-out interface layer for WebAssembly and you've got a lot to think about. And that means an opportunity to contribute to all of this build-out, so jump in and help. So last, but certainly not least: what is it going to be like to integrate WebAssembly with existing edge technologies? The easiest path is to go where edge technologies already exist. Why would you deploy brand-new WebAssembly edge infrastructure when building the edge computing world is hard enough? Let's put it on top of — or integrate it with — what's there. So it's really a choice of WebAssembly runtime, and that will drive what features you have available, because not all the runtimes are the same, and they don't all have the standard set of features yet. Eventually that will all be table stakes — every WebAssembly runtime will have the basics — but for now you pick a runtime based on what it's giving you and what's been built out.
It seems like we're going to be operating containers and WebAssembly modules side by side for quite some time — maybe even one wrapped in the other — something to get used to. That's probably going to allow us to fix a lot of things before we move completely to a WebAssembly world. And the use cases of WebAssembly at the edge are going to drive the specific advancements. In other words: I want to access a camera on a battery-powered smart camera device, I would like to use WebAssembly, so I will do the work to build out what's needed for the value of my use case. Will I also build out access to the temperature sensor in the camera? No — I don't need it, so I'm skipping it. So the use case determines where advancements are made, and everything else will probably just have to wait until it can get caught up. That's about it. Hopefully that was helpful; let's pass it back to Steve Wong to wrap up this session. Thanks, Kilton. So here are the resources they used. There's a GitHub repository with the code that Dejan was showing. When they took this thing around the block for the test drive, they found these various open source projects highly recommended. The blog post that Kilton talked about is the one-stop resource — if you're going to read one thing to pick up on everything and get up to speed, that's it. There are a few other recommended things to read as well. So, this session was put on by the Kubernetes IoT Edge Working Group. If you're interested in applying Kubernetes in ways that get jobs done at the edge, that's a good place to come and discuss it. Some people are looking for a solution that would run whole Kubernetes clusters at the edge, while others are just using things like WebAssembly to feed data and event streams into a Kubernetes cluster up at a higher level. You can download the deck and get the links to join the group itself, join the meetings that are held over Zoom, and converse on the Slack channel.
This Slack channel is the one we're going to be using for the Q&A at the end of this session, so if you're watching this on the recording, or we get shut down because we hit the time limit, go to that Slack channel for this group — Kilton and Dejan are on there right now, and they'll hang around a little while after this closes, so you can ask questions there. We do have a YouTube channel with the recordings of the meetings. These are the speaker contacts. We're on GitHub, and I think we all share those same GitHub IDs on Twitter — and actually, the Slack channel is probably the best way to get hold of us after this event. So that's it. You can download the deck right here — that's the Sched site for the conference, and I have uploaded the deck. I may make one or two edits, so later this afternoon I might push an updated version; if you download it a couple of hours from now, this one is pretty close to what I just presented. So that's it. If anybody's got any questions — and Dejan and Kilton, if the questions are about their demos, are online now, I believe. — So I've got a bit of confusion about the whole context of running WebAssembly in an edge and Kubernetes situation. It sounds like Kubernetes itself becomes a bottleneck, which needs to be resolved by things like K3s first, and then you can run some more efficient container in WebAssembly? — No, I don't think that's what the situation is. Kubernetes, of course, was designed to orchestrate Docker containers, and a lot of the assumptions that went into the standard kubelet that gets loaded on a worker node in every Kubernetes cluster presume that it's managing Docker containers running on a Docker runtime. What changes with WebAssembly is that you have a different runtime — some kind of runtime that runs the WebAssembly — and some of the presumptions the kubelet made are a little different. So there is an open-source project called Krustlet that sort of convinces the...
This is an oversimplification, but let's just say it convinces the Kubernetes control plane that it's talking to a kubelet — close enough that you can use it to declare what workloads you want running under what kind of context, have those things scheduled to run on nodes running Krustlet instead of a kubelet, and then Krustlet knows how to deal with the WebAssembly runtime and gets that workload running. So it should work with any standard Kubernetes. I don't think you would necessarily need one groomed for edge, but the Kubernetes distros that are made for edge should certainly work just fine. — Is there a production use case of this thing you just mentioned, the Krustlet thing? — Well, I don't know. My personal opinion is that this stuff is so new, and a lot of those edge use cases are pretty mission-critical — I wouldn't do it yet, but a lot depends on your tolerance for being a pioneer and exploring things that might still be a little buggy and unproven. Okay, we've reached our time limit. I'll hang around in the hallway a little bit, and, like I say, if the questions are about those demos, Kilton and Dejan are online — just log in, go to that Slack channel, and talk to them through there. Thank you.