I could project, but why? So just to kick off with a little bit of a story: I live in the Northeast and lost power in a really bad nor'easter one time. Those of you who have smart homes are probably going to feel a little bit of cringe. I was in a pretty big home automation phase at the time, and I had installed the hub, the light bulbs, everything else. And they all follow a pretty common pattern: they connect up to some REST API in AWS — bad news if us-east-1 goes down, or, as it was last night, Cloudflare goes down. The mobile app calls some endpoint, a message comes down from that service to the light bulbs or anything else, and turns them on and off. I'm sure everyone here sees the problem in all of that. So the nor'easter comes through, nails us pretty bad, and our power came back within a day or two. But internet and cell towers were out for three days. And being stuck in the dark gets a little old about 48 hours in, when you can't turn those lights on.

So I think about these problems a lot. I think a lot about local, edge, and embedded software and how to fix some of the problems we're in. I'm Jared Dillon. I'm the CTO at a company called Mycelial. We are building out stateful edge networks, doing that with WebAssembly, and building out development kits and clients. You'll probably hear a lot about server-side WebAssembly throughout this entire conference; we look more at the clients, more at the edge. We just raised $3.8 million through Crane, who also believes in that vision of better-connected clients. And if anyone's interested in those problems instead of the server side, we're looking for people.

So I'm going to talk about a mixture of things in this talk, and half of that will be a little bit about embedded systems, which imply a lot of clients out at the edge.
And so you end up with a lot of different problems that are unique to these devices out in the field — not just in the fact that some of them may be running real-time operating systems, but also in their modes of operation. Today's orientation is toward a client-server model where all the gravity sits in your services and the scaling of your services, with very thin clients. That doesn't work very well in an offline, peer-to-peer, or local-first environment, for a lot of reasons: you're relying on that central scaling, you're relying on data centers far away. And we have more compute out there than ever before, closer to us than ever before — if you think about what's in your pocket, or just what's in this building, there's a massive amount of compute all around us.

So today we're going to do a quick survey of the embedded edge. We're going to look at WebAssembly at the edge, at WebAssembly in embedded and real-time OS environments a little bit, and at embeddable clients. Then we'll take a look at an embeddable WebAssembly runtime we're working on open sourcing and upstreaming. And I do have a demo — I have a video of it. At the moment, I've managed to fry the demo device; I have a backup. So if anyone's interested in this, there's contact information at the end, and I'll be able to show it once I reflash it tonight. As I was preparing for this today, I found out that I had bricked it.

So, all right, what is my cloud made up of? Right now, in that large-cloud, little-edge universe, we have these virtual machines ultimately connected with Raft — especially if you're using something like Kubernetes or Mesos — serving millions of these IoT clients, these embedded clients, these edge clients. And we're taking all that data, putting it into these massive data lakes, and proceeding to do data science on it.
Now, there have been advancements here. We're not running these giant Hadoop jobs anymore — well, some people are — and we do have things like simdjson now, so we're able to at least query and get an understanding of this data. But we're trying to make decisions about that data, and we're trying to scale this out by adding more availability zones and regions, of which these large clouds only have so many. And it's the making decisions about the data that I really want to talk about, because these edge and embedded devices are really all about making decisions out in the real world. If you're building a driverless car, making a decision up in AWS is probably not what you really intend to do.

As I said, the availability of compute is massively growing out in the real world and quickly outstripping non-IoT devices. ARM is everywhere, RISC-V is everywhere, and it's going to continue being everywhere — we're even seeing ARM in the cloud. Low-power, available, ubiquitous compute everywhere, which in a lot of ways is probably underutilized right now if it's just collecting sensor data and pushing it up to cloud services.

So, as the movie might say: what do planes, trains, and automobiles all have in common? Well, if we look at a plane, it's sipping power — it's using as little power as possible outside of engine thrust, being as efficient as it can be. But it has many sensors, all operating at low power, and off all of those sensors it's generating a massive amount of even redundant data. You have angle-of-attack sensors, you have avionics data, you have fly-by-wire. These are all very complicated real-time OS systems, and that's before you consider anything in the cabin — that's just what you interface with on that plane. And this is a real-time system. I will take an aside and talk about real-time systems, because I think in a lot of embedded, real-time becomes a very important concept.
And the difference between hard real-time and soft real-time — before I go into that, how many people have a strong familiarity with hard and soft real-time? OK, a couple. So we'll dive into that a little bit.

In a plane, you have, at best, satellite networks. There's radio, there are other forms of communication, but you're not pushing avionics data all the way up beyond the resolution of what flight radar is getting.

Now, you look at a train. A train is basically a massive power plant on wheels. We talk to and work with some people in this arena, and effectively they tell us that they can run an entire data center on that engine — on their next-generation engines — because it's just a power plant on wheels, if the hardware can survive that 150-to-160-degree environment. Also many sensors: a lot of these are trying to do predictive maintenance. We're looking at freight engines now that have 30,000 sensors on them, trying to make decisions out in the middle of nowhere. Again, massive amounts of data. You're collecting data about the track, about the environment, about the locomotive itself, trying to identify: the next time it comes into the yard, does it need to be pulled? And again, this is making hard real-time decisions — if you miss a signaling decision, you're creating a catastrophic error. And in a lot of these environments, you're looking at the middle of Nebraska, you're looking at places with no power and no network, where you just have to make decisions locally. How do I, as a developer, create software that can work without that network and make decisions locally, but also get that information back? We'll talk a little bit about that shortly.

And then we look at cars, especially self-driving. In reality, we're talking about a limited-power environment — I mean, my alternator can probably charge this laptop.
I mean, you start to get too many devices and you start to notice a big draw there. But you look at modern cars: again, many sensors, lots of real-time data — especially lane assist, especially those sorts of technologies — and you may or may not have network, right? So you have to be able to operate in an environment where you don't know if you're going to have network or not. What does that mean for the development environment around you, and how do I robustly develop software when I don't know if I'm going to have a network at a given moment?

So what do they have in common? They really have all of their constraints in common. Despite some diversity in their power and sensor needs, the big thing in common is that these are embedded devices that are basically spitting out massive feeds of data at all times, and they need to make decisions about that data in real time, no matter what the network availability is. This gets us into real-time systems that effectively have unlimited compute relative to the device — you throw a couple of even modern Raspberry Pi 4s in a car, and that's a lot of compute power available to those systems; you throw FPGAs with an RTOS in there and it's even better. You have dodgy networks. And you can't shove all this data into the data lake — the amount of data these throw off would make that impossible. So we usually start to do things like summarize that data: we start to take windows and time spans and drop data. And we're talking about these systems being networks of their own — interconnected devices local to that environment that all have to coordinate, much like you would in any other data center, except it's embedded devices and you don't have any elasticity.

So I just want to talk a little bit, so everyone's on the same page, about what real-time is, because that term gets thrown around a lot.
And I think it has a different connotation when we're talking about these sorts of environments and devices. We have hard real-time and soft real-time, and we're really talking about degrees of latency tolerance here — or system degradation in the event of exceeding that tolerance — so quality-of-service failures can have varying levels of catastrophe. In hard real-time systems, we consider it a total system failure if QoS guarantees are not met; in soft real-time systems, we consider it degradation. These systems at the edge are a mix of hard and soft real-time in how they interact with the real world. A hard real-time failure is very catastrophic for a plane if avionics fail. But if there are issues with your streaming in-flight video, your experience is degraded — you just might have to talk to the person next to you or pull out your Nintendo Switch instead. You're fine by comparison. And that's the distinction we look to in these systems, in how we think about decision-making and the data we're generating out of them.

So, why does this matter? This is kind of a weird slide to throw up here — a bunch of streaming network services coming from the cloud. You have Mighty up here; I don't know if anyone's seen that, but it's streaming Chrome right to the thin client on your computer. You have streaming gaming; you have streaming for code with Gitpod and VS Code remote plugins — or Codespaces, that's what it is. So more and more of the world is becoming very latency-sensitive. If we have dev environments being streamed, then you expect that when you type a character it's going to respond. If I'm playing a game — again, latency-sensitive — I'm going to move my mouse, and I expect the mouse in game to move.
And so we're seeing this cataclysmic shift toward using this massive amount of computing power we have in the cloud and not using any of the computing power we have out at the edge. Now, there are many definitions of edge — some people would consider your CDN's last point of presence an edge, and sure, that's technically true, but for the purposes of this talk we're talking about those far-out client and embedded edge devices. But the assumption remains the same: the network was always going to be good, everyone's going to have a great time, fiber's everywhere. And the consequence of a network outage is just some lost productivity, or you're kicked out of a game. Or again, if you witnessed Cloudflare's outage last night — half the internet is just gone, so you go to bed instead.

So why does this matter? This is going to start leading us into the WebAssembly part of this talk. There was a very interesting talk from Peter Levine — or I guess a talk and blog post — called The End of Cloud Computing, and I encourage everyone to go read it, because it very much bears on the sorts of devices we're talking about. I'm going to read just two excerpts out of it, and we'll talk about them as we go.

First: the sensor data explosion will kill the cloud. Sensors will produce massive amounts of data — and this was written, I think, in 2018; we're four years removed from that post, and I think we're seeing it more than ever — but existing infrastructure will not be able to handle the volumes or the rates. Data will be stuck at the edge. That's a very important point: there's too much data at the edge, too much data coming off these embedded devices, to get it all back. And computing is going to move along with that data to the edge. I think we've already seen a little of that with "move compute to your data instead of data to your compute," right?
We haven't quite seen it like this at the edge, but we're going to, and we are — that's what we're working on. So: we're absolutely going to return to a peer-to-peer computing model, where the edge devices connect together, creating a network of endpoint devices, not unlike the distributed computing model. It goes on further to say: real-time processing will need to occur at the edge, where real-world information is being collected. So again, we have these massive data streams and we want to make decisions on them — we have to do the compute there. The notion of real-time becomes a very important ingredient given the massive amounts of real-world information. And we're not just talking about text; we're talking about collecting video and streams of information. These things need to work together, and data will absolutely drive this change. We're not talking about just scalar values — we're talking about structured data, audio, text, all sorts of rich formats optimized for their environments. And some of those — especially when we're talking about CRDTs or collaborative models — are partial information updates: just what you need to reconstruct a whole.

So why does this all matter? Well, at the edge, we really want to sense, infer, and take an action — he refers to this cycle, and it's really the core function of a lot of these edge and embedded devices: sensors, and things that act on them. We want to gather some information, make an inference on it, make a decision, and do something. But we also want to get that data back, and he closes the loop with this idea of learning. We want to take some of that information and, say, improve our models — make a better decision next time, or a better decision for other clients in light of new information. We then want to process and analyze.
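The sense → infer → act loop, with "learn" reduced to shipping a compact summary upstream, can be sketched in a few lines. The shape here is my own illustration — the function names and the toy "too hot?" model are invented, standing in for, say, a real TFLite inference:

```go
package main

import "fmt"

// Cycle is one device's sense → infer → act loop.
type Cycle struct {
	Sense func() float64       // read a sensor
	Infer func(v float64) bool // make a local decision (e.g. a model)
	Act   func(decision bool)  // drive an actuator
}

// Run executes n iterations and returns only a compact summary for
// upstream learning — counts instead of the full sample stream.
func (c Cycle) Run(n int) (fired, total int) {
	for i := 0; i < n; i++ {
		v := c.Sense()
		d := c.Infer(v)
		c.Act(d)
		total++
		if d {
			fired++
		}
	}
	return fired, total
}

func main() {
	temp := 20.0
	c := Cycle{
		Sense: func() float64 { temp += 3; return temp },
		Infer: func(v float64) bool { return v > 26 }, // "too hot?"
		Act:   func(d bool) { /* toggle a fan pin here */ },
	}
	fired, total := c.Run(4)
	fmt.Println(fired, total) // 2 4
}
```

The point of returning `(fired, total)` rather than every reading is the same bandwidth argument as before: the cloud gets what it needs for a partial training update, not the raw feed.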
We want to actually push those updates back out, right? It's one thing to learn from it — and I think this is where we see a lot of current effort and room for improvement, and where WebAssembly can really help; like I said, we'll talk about that in a minute — but we want to actually go and update these devices, in a lightweight way that takes into account everything we just learned from those decisions. Make the whole of these fleets better.

So in the cloud, we're all used to a set of building blocks for dev and packaging: you have containers, you have Kubernetes, you're interacting with processes, you have some sort of observability model, and you have plenty of ways of doing data and comms between streaming systems and long-term data storage solutions. At the edge we need some new building blocks, and one of those building blocks that I look at a lot is using WebAssembly out at the edge and in embedded.

How familiar is this room with WebAssembly, just before I give a primer? Okay — lightly. Great. So let's just talk about what WebAssembly is. There are a lot of blog posts; I wouldn't say it's FUD, but everyone gets very hyped up when they start talking about it. Really, all it is is a binary instruction format for a stack-based virtual machine. It's a specification, and it's a compilation target. What's interesting is that we have a specification for a virtual machine designed so that as many things as possible can compile to it. So you have this interesting fixed point between the things that can run WebAssembly and the things that can compile to WebAssembly. And for those who are still interested after hearing that — well, you get a lot out of running WebAssembly.
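To make "stack-based virtual machine" concrete, here's a toy interpreter for two Wasm-style opcodes. The opcode bytes match the real spec (`0x41` is `i32.const`, `0x6a` is `i32.add`), but real Wasm encodes immediates as LEB128 and carries types, sections, and validation — so treat this purely as an illustration of the execution model, not a decoder:

```go
package main

import "fmt"

// Toy opcodes borrowed from the Wasm binary format.
const (
	opConst byte = 0x41 // i32.const in real Wasm
	opAdd   byte = 0x6a // i32.add in real Wasm
)

// Exec runs a flat bytecode stream on an operand stack — the essence
// of a stack machine: operands are pushed, operations pop and push.
// Real Wasm encodes const operands as LEB128; here we cheat with one
// raw byte per constant.
func Exec(code []byte) int32 {
	var stack []int32
	for i := 0; i < len(code); i++ {
		switch code[i] {
		case opConst:
			i++
			stack = append(stack, int32(code[i]))
		case opAdd:
			n := len(stack)
			stack[n-2] += stack[n-1]
			stack = stack[:n-1]
		}
	}
	return stack[len(stack)-1]
}

func main() {
	// Equivalent to the text form: (i32.const 2) (i32.const 3) i32.add
	fmt.Println(Exec([]byte{opConst, 2, opConst, 3, opAdd})) // prints 5
}
```

This is also why the interpreter story (which comes up later in the talk) is attractive for embedded: a loop like this ports anywhere a C compiler reaches, with no native code generation per architecture.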
Now there are some restrictions, and I'm sure you'll hear about a lot of those throughout this conference if you're going to other WebAssembly talks, but you get a lot of benefits too. It's very size- and load-time efficient: the binary format is very compact, and the text format looks like a Lisp and is structured like a Lisp. You can get this executing at native or near-native speed — not every runtime will do this, and not every language will compile that way. You're still going to pay the cost of your language: if you're compiling Go down, you're still going to have the Go runtime. But if you're compiling C, you can get about 94, 95% of the speed of native C, and you can do that very efficiently. One big thing about WebAssembly — if you've heard of WASI, the WebAssembly System Interface, or of the component model, and we'll talk about those a little — is that it enables an abstraction over a lot of common hardware capabilities. And this one deserves a star by it: it's memory-safe and runs in a sandboxed execution environment. Now, Spectre does exist; attacks do exist — there are side-channel attacks — but it's about as sandboxed as you can possibly get, and that's going to be very runtime-dependent. There are ways around that; a bit beyond the scope of this talk, but I'm happy to chat about it. And I think one of the most critical things about WebAssembly — what gets people super excited about it — is that it is very embeddable, and you can do that today.

So, say we want to perform a really fast, reliable prediction every time a state change happens. Okay: this drone has new information; let's detect whether we're about to hit the ground, or we're falling, or whatever. And then we update the state of the drone with the decision made.
And because we're using WebAssembly and this is running on the same hardware, the same physical device — this team is potentially not expert in, say, TensorFlow models, but this team is. So they wrote it as a TensorFlow model, compiled it down to TensorFlow Lite, deployed it out, and now it's running. There's something interesting there: we've now made these teams somewhat fungible, instead of having a single embedded team building and trying to maintain this entire device. It looks a little more like the infrastructure you might see in, say, a containerized cloud-native environment. And now we perform an action — we send a signal to physical systems, we run a bundle for that instruction; maybe we've written that in Rust, or maybe in C or AssemblyScript or something else. But the team that's the expert on that particular actuator can now go and own that part of the system.

Now we want to learn from that. We want to go even further, because now we've composed our system out of these reusable building blocks that can run at the edge, and we have hardware abstractions — whoops, hang on, I realized my notes are off; there we go, there's my learn slide. We want to synchronize the minimal amount we need back to the cloud in order to do a partial training update. So now we've updated the model, we can push it back out to all these drones, and we've actually improved the totality of the system by sending the minimum back instead of trying to send out that massive feed of data. And there's no actual reason — I think I have another slide on this — there's no reason, because of these abstractions, that we can't simulate all of this in the browser.
One thing that I think is very compelling for WebAssembly outside the browser is that you can put it back in the browser. So instead of having device labs, you have a high-fidelity environment that basically mirrors the state of a real-world device. You clone that with its real data, you can see how it acts, and you have everything you need to make a high-confidence decision when you go to update one of these.

So, a quick survey of various Wasm runtimes for embedded that are worth taking a look at — and we'll talk a little bit about compilation style here as well. Wasmtime is sort of the canonical Bytecode Alliance runtime. It's a full just-in-time-compiled runtime for WebAssembly, and probably the most used runtime out there, if you look at the users. But one thing about these runtimes is that they're intended to be portable and bindable, so you can actually swap them out depending on what you need. The demo — once I get it back up and running — and the video are actually using the WebAssembly Micro Runtime (WAMR). I chose it because it's a little bit different: it's actually using a different page size. The idea is that you have a runtime that can fit on the tiniest of devices, down to little microcontrollers, down to these sensors. That's the case for Wasm3 as well, which is an incredibly fast interpreter. You have wazero, which is by Tetrate — it's entirely in Go and intended to be compilable with TinyGo as well, so in theory you could compile it to WebAssembly, or down to an Arduino-class device, and have a JITing or interpreted runtime there. And there are plenty of others — in fact, I think Michael from WasmEdge is in the back there; he'll be talking about WasmEdge later, which is another great runtime, especially for machine learning.
So, a quick note on compilation style — this really matters for these embedded environments. You're often flashing a ROM, right? So jumping into generated instructions — doing a full JITing compiler — can be really difficult in these environments. And not only do you have to rely on generating and jumping to native instructions in the case of a JIT, you also have to build that out for each platform. So interpreters can actually be very useful here, even though they're technically the slowest mode of execution — you just need to get that interpreter ported to every single environment you could possibly run on. There are pros and cons, but the biggest thing here is that there are a lot of Wasm runtimes out there, and there isn't really one for every single job. So it's worth taking a survey; it's worth evaluating the runtimes out there depending on what you're doing.

What we think about more, with what we're doing, is not really the runtime itself — we try to be very runtime-portable. We're really thinking about what the abstraction looks like over hardware, over sensors, over these clients. There are going to be some talks on these, I believe, but we're looking at things like WebAssembly interface types and the component model. The entire idea there is shipping WebAssembly components that can interact with each other natively — a Python module interacting with a Rust module, and being able to publish that; or publishing interface types for a temperature sensor, portably across languages. And then you have the WebAssembly System Interface, which looks to abstract syscalls. But in a lot of embedded, you don't have syscalls.
So we're really looking at a WebAssembly device interface — and we're looking for contributors on this — that asks: what does this look like with I2C? What does this look like with GPIO? With more physical IO devices? And getting into that runtime, we're really looking at having an environment for applications to run in that takes the underlying system hardware, abstracts it up, has the WebAssembly runtime, has everything needed to push updates and get data back — and the user does not need to think about these low-level details in order to run their application. And that application is very portable: you have one build optimized for a Raspberry Pi, one optimized for even an Arduino, and one that just runs natively and seamlessly in the browser.

The important thing here — I'm trying to be mindful of time — is that the user focuses on their application: what am I actually building? And on synchronizing state, not on these big event systems. You have MQTT, you have a lot of ways to get events back and forth, but in bandwidth-constrained environments, trying to send every single event becomes incredibly expensive, if you can do it at all. We're back to facing that data streaming problem. And it doesn't help that a lot of these messaging systems are not designed to run as a distributed database — if you lose quorum on your Kafka cluster, you're down. That can't be the case for these devices with their intermixed, intermingled, spotty connectivity. And then we just eliminate all the user-level code for working with low-level networking, storage, display, IO — all the device details these clients need. The other important thing, I think, beyond language portability, is being able to reproduce these environments back in the browser.
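To show why synchronizing state beats shipping events, here's a minimal last-writer-wins replication sketch — far simpler than a full CRDT, and not how any particular product implements it, but it captures the idea: two replicas converge by exchanging only the cells that changed, and there's no event log to lose when connectivity drops:

```go
package main

import "fmt"

// Cell is one LED's replicated state: a value plus a logical clock.
type Cell struct {
	On    bool
	Clock uint64 // Lamport-style counter; higher wins
}

// Board maps LED index -> replicated cell state.
type Board map[int]Cell

// Merge folds remote cells into the local board, keeping the newer
// write. Merging is idempotent and order-insensitive, so replicas can
// exchange state whenever the network happens to be up.
func (b Board) Merge(remote Board) {
	for k, rc := range remote {
		if lc, ok := b[k]; !ok || rc.Clock > lc.Clock {
			b[k] = rc
		}
	}
}

func main() {
	device := Board{0: {On: true, Clock: 2}}
	browser := Board{0: {On: false, Clock: 1}, 5: {On: true, Clock: 1}}
	browser.Merge(device) // browser mirrors the device's newer write
	fmt.Println(browser[0].On, browser[5].On) // true true
}
```

Contrast this with an event pipeline: if a button-press event is dropped, the replicas diverge until someone replays it; with state merge, the next exchange repairs everything.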
Like I said, one thing we focus on very heavily is: if we can pull WebAssembly out of the browser, we like to be able to put it back in. We think that's very important for actually seeing what's going on, for debugging, for understanding and making confident updates to these systems.

So we're working on a couple of things in this realm — we'd love contributors, especially on the device interface and some of these abstractions. And we're not the only people working on this. There are a couple of great posts out there: Disney+ actually uses this model to deliver to all their clients, all their TV set-top boxes, because they're trying to ship to hundreds of millions of different clients on different devices, and they want to write the same code. They're going to be on different set-top boxes with different capabilities, and so this was one of their major projects: all their user-space code runs in WebAssembly, and they wrap around that to create this environment for those edge and embedded applications. Amazon Prime Video is doing this with a few parts of their application, but not as extensively as Disney+ is now.

So I'm going to show a quick video — like I said, I'll get my device reflashed tonight, and the last slide will have contact info. Let's see which one. So I have a micro:bit — had a micro:bit — running... is this the right one? It pops up in the right spot. What are you doing, VLC? Nope, not that one. That one. The other one's the code. So what we have here is the micro:bit running the WebAssembly Micro Runtime. We've abstracted over the Bluetooth and the buttons for it — we were starting to create a Connect Four game — but it is running.
So it's the WebAssembly Micro Runtime, then you have a TinyGo application on top, and the device interface there abstracts over the buttons, the LED display, and the Bluetooth for networking — those are the only ones we implemented here. The micro:bit also has a gyroscope, and I think the v2 even has a speaker and a microphone. The idea being that all of those get exposed up through something called WIT, and wit-bindgen, so that you can then compile and create those applications with TinyGo, AssemblyScript, Grain — any language that targets WebAssembly. And importantly, we also have a network abstraction here, so I'm not sending a button-press event; I'm updating the state of those LEDs local to the device, and the runtime that's actually running that WebAssembly takes care of all the communication needed — and this is running up in Chrome. So I press those buttons and I'm mirroring the state in about 15, 20 lines of code. That would otherwise have taken: I go bootstrap the network, I go bootstrap the buttons and operate the buttons, I then have to send the button-press event, and then I have to synchronize between the two sides. In this case, all I'm dealing with here is a board state — I'm just updating values on it, and the runtime takes care of the rest. So I think that's where having these development kits — a runtime for WebAssembly for the client — is going to become more and more interesting and important as they mature.

So really, what we're talking about here — let me pop this back on; of course it does not want to play nice now; there we go — what I'm excited about is enabling modular, composable systems for real-time systems at the edge and in embedded environments. So thanks, everyone. Like I said, I'm the CTO and co-founder of Mycelial; details are up there.
I will be getting the actual physical demo going again tonight on fresh hardware, depending on whether I need to pop by an electronics store or not. And that's it — I think we have a few minutes for questions. Michael Tannenbaum is my co-founder; I can get you in touch with him, too. Yeah, and I'm happy to answer any questions you have on that as well — that's one of the use cases we're targeting with that state replication.

Could you talk a bit about porting to different devices — say a new board gets made, how does that work as far as getting the runtime stood up so that you could use the WebAssembly?

Yeah, that's a great question, and we face some of that. So the question is: say a new board comes out, a new architecture comes out — how do you go about porting between systems, different architectures, and different boards? I think we're still trying to figure out the best way to go about some of that. In a lot of cases we get the benefit of sitting on top of the work done in the Zig and Rust embedded communities, where there's often an architecture ready to go. When it comes to new peripheral devices, it ends up being a lot of spec sheets. We're looking at making that composable as well, because the thinnest possible layer for adding a new sensor, for example, is kind of the goal. In a lot of cases — when you look at hardware shortages and everything else — you just really need a temperature sensor that can get you the temperature out to, like, millidegrees Celsius. So at this point it's reading spec sheets, because there's really just not an IDL for a lot of these consumer-grade devices, and there's been a little bit of porting for that. But part of that WebAssembly device interface — and once we get it more public, we'd love ideas on this — is how to automate some of that process.
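The device-interface idea — thin, swappable abstractions over sensors and pins — is easy to sketch on the host side. Everything here is invented for illustration (the interface names, the millidegree resolution, the fake backends); the real WebAssembly device interface is still being designed and looks nothing this settled:

```go
package main

import "fmt"

// Hypothetical traits a device interface might expose to guest code.
type TemperatureSensor interface {
	// ReadMilliC returns millidegrees Celsius, a typical resolution
	// for cheap I2C temperature sensors.
	ReadMilliC() (int32, error)
}

type GPIO interface {
	Set(pin int, high bool) error
}

// ThermostatStep is application logic written only against the
// abstractions: the same code could run on a Pi, an MCU, or a browser
// simulation, as long as the runtime wires in real implementations.
func ThermostatStep(t TemperatureSensor, g GPIO, limitMilliC int32, fanPin int) (bool, error) {
	mc, err := t.ReadMilliC()
	if err != nil {
		return false, err
	}
	on := mc > limitMilliC
	return on, g.Set(fanPin, on)
}

// --- fake backings, standing in for real I2C/GPIO or a simulator ---

type fakeSensor struct{ mc int32 }

func (f fakeSensor) ReadMilliC() (int32, error) { return f.mc, nil }

type fakeGPIO struct{ pins map[int]bool }

func (f fakeGPIO) Set(pin int, high bool) error { f.pins[pin] = high; return nil }

func main() {
	g := fakeGPIO{pins: map[int]bool{}}
	on, _ := ThermostatStep(fakeSensor{mc: 30500}, g, 28000, 4)
	fmt.Println(on, g.pins[4]) // true true
}
```

Swapping `fakeSensor` for a driver built from the part's spec sheet is exactly the "thinnest possible layer" being described: the application code never changes.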
When it comes to the runtimes themselves, this is where an interpreter becomes easier, right? Because all you need to do is take that interpreter and get it onto that platform, and that's where, say, zig cc or LLVM comes in really nicely, to port to those targets. For the AOT and JIT environments, for example, wazero, if I remember correctly, does JIT for AMD64 right now but doesn't do it for ARM, right? Because at some point they're writing out assembly instructions at the end of the day before the jump, and so there's a lot of porting work there. I think right now, from a speed, ease-of-use, and deployability perspective, we're really preferring interpreters, so that porting becomes a little bit easier. And I think right now you can support most of the reduced-instruction-set boards out there today. But, for example, the RP2040 took a little bit of time to support when it came out, with Wasm3, and I don't know, but I think they still may be one of the only ones you can target there. So time and energy, I guess. The nice thing about the interpreter model, though, is that once you have it running on that environment, it can run any WebAssembly instruction, right? You don't have to generate machine code for the WebAssembly itself. Does that answer your question? Any other questions? Yes?

Hi. Is there some way to evaluate the cost of an execution, for example, the number of instructions for a specific function, or the maximum memory required for a specific function, in WebAssembly? Which is, I think, the main difference with eBPF.

Yeah, could you repeat the first part of the question, please?

So, is there some way to evaluate the cost in time or memory usage of a specific piece of code?

So that's a great question.
Is there a way to evaluate the specific cost or the overhead of executing these instructions? Obviously it's going to be a bit runtime-dependent. One of the areas where WebAssembly falls down a little today, and needs some work, is debugging. There are not as many tools as I would like, but there is wasm2c, I believe; Michael, do you know which one I'm talking about? Yeah, I'll usually try to go to that to get some idea. But right now, no, I'm not aware of a really good way to profile and debug. So there's a bit of trial and error, especially on the WebAssembly Micro Runtime right now, because the page size is only four kilobytes there instead of the 64-kilobyte page that you have in the other runtimes, and so it runs into some more interesting explosions that are a little harder to debug at the moment. Hopefully soon; there are some people who are really interested in that problem and working on it.

If you have timing goals, whether they're hard real-time or just general responsiveness, with both WebAssembly and with your framework, what kinds of approaches do you recommend?

Yeah, that's a great question. So if you have hard or soft real-time requirements, and you have timing goals that you need to meet or QoS that you need to maintain, how do you go about that in WebAssembly and in our framework? This gets into a scheduling question we could talk about later, too. But in a non-RTOS environment, where you have full threads and everything, I think one of the better models I've seen, and I think there's a new paper on it I could find and send you, is the BEAM, the Erlang VM's work-stealing scheduler, and the Go work-stealing scheduler; they're really great for meeting those latency requirements.
Obviously you're making a trade-off there; in those sorts of environments, you're giving up absolute worst-case latency in exchange for better average latency across the entirety of the system. And there's a project I found that's actually rewriting the WebAssembly bundle itself to insert yielding hooks at I/O points, to let the runtime decide when to switch. That's how I think we're thinking about it right now. I can find that project and send it to you, but it was a really interesting approach: it was basically trying to retrofit a work-stealing scheduler onto any running WebAssembly bundle, whether or not the language supported it. The intent there was to support runtimes that want to create more BEAM-style environments. I can't remember if Lunatic is using it or not, but I can find that for you. If there are no other questions, thank you, everyone, very much.