Hey, good morning. Let's give it a little bit more time for people to join. Kevin, are you there?

Yes, I'm here. Can you hear me?

Yeah, I can hear you. It looks like we don't have anybody joining yet. Maybe we can wait another couple minutes.

Yeah, I've got a few minutes, but I also dumped a bunch of links into the Slack channel, including a bunch of YouTube links for some talks people can watch.

Yeah. Let's see if somebody else joins. Otherwise, I think people can just go by the links on the channel, because I don't know if it really makes sense to go over the same material if nobody's joining.

Yeah, no worries. Maybe we could do a little bit of an interactive session instead, and I can just ask questions. So I'm curious, how do you get started? You have WebAssembly, and then you have a runtime, and you want to be able to run the bytecode with the runtime, right? How would you get started with waSCC?

Sure. There are a couple of tutorials on the wascc.dev website that walk you through the process of getting started. But the basic idea is that WebAssembly on its own is a pretty basic format: you can only pass and receive numeric arguments and return values. So to do anything more interesting than that, you've got to put a layer on top of the core WebAssembly spec. For us, that layer is a WebAssembly RPC standard called waPC. I'll put the waPC link in the Slack channel as well. waPC puts a wrapper around basic WebAssembly function calls that lets you send and receive arbitrary binary payloads. It doesn't care what the contents of those binaries are, so they can mean whatever you want them to mean, as long as the WebAssembly module and the runtime agree on that meaning.
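The host-to-guest contract described above, an operation name plus an opaque binary payload in, an opaque binary payload out, can be sketched in plain Rust. This is only an illustration of the idea, not the actual waPC API; the `Guest` and `Handler` names are invented for the sketch.

```rust
use std::collections::HashMap;

// A waPC-style guest exposes handlers keyed by operation name. Each handler
// takes opaque bytes and returns opaque bytes; neither side interprets the
// payload beyond what the module and host have agreed it means.
type Handler = fn(&[u8]) -> Result<Vec<u8>, String>;

struct Guest {
    handlers: HashMap<String, Handler>,
}

impl Guest {
    fn new() -> Self {
        Guest { handlers: HashMap::new() }
    }

    fn register(&mut self, op: &str, h: Handler) {
        self.handlers.insert(op.to_string(), h);
    }

    // The host invokes the guest by operation name only.
    fn call(&self, op: &str, payload: &[u8]) -> Result<Vec<u8>, String> {
        match self.handlers.get(op) {
            Some(h) => h(payload),
            None => Err(format!("no handler for operation {}", op)),
        }
    }
}

// A trivial handler: the payload bytes happen to be text, but the
// dispatch machinery never needed to know that.
fn echo_upper(payload: &[u8]) -> Result<Vec<u8>, String> {
    Ok(payload.to_ascii_uppercase())
}

fn main() {
    let mut guest = Guest::new();
    guest.register("Echo", echo_upper);
    let reply = guest.call("Echo", b"hello").unwrap();
    assert_eq!(reply, b"HELLO");
}
```

The point of the shape is that the boundary stays dumb: all meaning lives in the agreement between the two sides, which is exactly where waSCC layers in.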
Above the waPC layer is where waSCC sits, and waSCC adds the cloud native runtime aspect and the actor model aspect to WebAssembly. The first thing the tutorial on the website has you do is make an actor that responds to HTTP requests with the typical JSON hello world. What's unique about waSCC and how that works is that, unlike the experience of creating a microservice in Go or C# or any of the other languages I might use for my cloud native services, in waSCC the creation of an HTTP server endpoint is done by what's called a capability provider. It isn't part of the actor code that you write. All of your business logic is decoupled from all of the things that you would think of as cloud native capabilities. You don't start your own HTTP endpoints. You don't have to choose a library dependency when you want to talk to a database, or when you want to talk to a blob store. All of those are abstractions, and the providers that satisfy them for you are bound at runtime. You compile your hello world actor to return a JSON payload, but how it returns that payload is not your concern. So at runtime you can switch from maybe a lightweight web server when you're in development and testing to a super high throughput multi-threaded beast of a web server in production. It's all up to you, but our opinion is that you shouldn't have to recompile and redeploy your code in order to scale it.

So there's a runtime, and you can change the components depending on how you want to run whatever you're trying to run. And you can also add on modules; for example, if you want to connect to a database, there will be an additional module that can be added to the runtime. Is that accurate?

That's exactly it.

Do you have a higher level overview of how this fits in compared to other, more typical container runtimes?
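The decoupling described above can be sketched as follows. This is not the real waSCC actor SDK; the `Request`, `Response`, and `HttpServerProvider` names are invented to show the shape of the separation: the actor handles a request value and returns a response value, and the provider owns the server.

```rust
// Hypothetical request/response shapes, for illustration only.
struct Request {
    path: String,
}

struct Response {
    status: u16,
    body: String,
}

// Pure business logic: the "hello world" actor. It never opens a socket,
// never picks an HTTP library, never configures TLS.
fn handle_request(req: &Request) -> Response {
    Response {
        status: 200,
        body: format!("{{\"hello\":\"world\",\"path\":\"{}\"}}", req.path),
    }
}

// The capability provider owns the server. Swapping a lightweight dev
// server for a high-throughput production one means swapping the type
// that implements this trait, not recompiling the actor.
trait HttpServerProvider {
    fn dispatch(&self, req: Request) -> Response;
}

struct InProcessServer;

impl HttpServerProvider for InProcessServer {
    fn dispatch(&self, req: Request) -> Response {
        handle_request(&req)
    }
}

fn main() {
    let server = InProcessServer;
    let resp = server.dispatch(Request { path: "/".to_string() });
    assert_eq!(resp.status, 200);
    assert!(resp.body.contains("hello"));
}
```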
I saw it mentioned containerless setups, because I know there's the WASI stuff where you can basically run Wasm almost like a normal container, and I was curious; it sounds like this is almost an alternative to that. Do you have any diagrams or something that shows how a request flows?

Yeah, I've got some diagrams. There are a couple you can see in the slides and in the videos that are up on that YouTube playlist. I actually had a bunch of slides that I was going to walk through today, but they're on my company machine and there isn't a decent way to get them out from behind that firewall. But in terms of where this sits: we started off building our cloud applications on bare metal in data centers. Then we moved from bare metal to virtual machines, but the virtual machines we built were still kind of bespoke and customized to a particular application. I remember some really ugly deployment days when we had individuals or even entire teams in charge of essentially stamping out these virtual machines whenever we needed to do a deployment. That's how we ended up with containers, and now we deploy our Docker images to Kubernetes or Nomad or any other container runtime. What I think is the next evolution of that is not deploying containers but deploying WebAssembly modules. A WebAssembly module is smaller, faster, and more secure than a regular container. The only difference is that WebAssembly modules can't do on their own what your software running in a container can do. So how do you give a WebAssembly module the capability to do things in the cloud? That's where the waSCC runtime comes in. It's basically a matchmaker between a WebAssembly module that has a declarative set of uses for capabilities and these capability providers.
It binds them at runtime and then allows them to communicate. One of the design goals behind waSCC is that you can run it anywhere. I can put this runtime on a Raspberry Pi, I can put it in a cloud, I can put it on my laptop, and the WebAssembly modules, which are my actors and my business logic, don't have to be recompiled at all, because the WebAssembly format is portable. Does that answer your question?

I think for me it's still a little abstract in terms of how the deployment looks, how the request architecture goes through, say, a Kubernetes environment. Maybe I'm just thinking about it from too conventional an approach. But I was comparing it with WASI, because the WASI approach also uses WebAssembly, but you could use it to build things that are kind of like containers today. It's just a different architecture.

Yeah. So the difference between waSCC and WASI is that with a WASI module, you're basically poking holes in the WebAssembly module through imports that are satisfied by a runtime that knows about the WASI spec. There are a couple of those runtimes: Wasmtime is one, Wasm3 will do it, Wasmer will do it. Most of them allow you to choose to expose the WASI functions to a module, and what that really does is give it access to the file system and some other basic capabilities. That's still a little bit more secure than running that stuff on a base operating system or in a raw container. But our opinion at the waSCC level is that actors, being pure business logic, shouldn't have access to the file system. They shouldn't have access to an environment. They shouldn't have access to anything that they don't have permission to access. So we have a security system in place that secures capabilities at a high level. The capabilities in WASI are very much at the kernel level.
Can you write to this file descriptor, yes or no? The capabilities at the waSCC level are more at the cloud service level: can you communicate with a blob store, can you communicate with a key value store, yes or no? I'll show you some of that if I can share my screen.

Okay, yeah, that makes a little more sense. That's pretty interesting.

For some reason, my terminal window does not show up as an option for something I can share in this meeting. Not really sure why.

Are you on a Mac or something?

No, I don't own a Mac. This is Linux.

Linux Zoom would be different, yes.

So I can share some code, and then maybe that'll be a little bit less abstract. Is everybody looking at some Rust code? Okay, so you can see that then. What we're looking at is the code for an actor written in Rust, using the waSCC runtime. The key piece here is that actors are reactive. Think of them like lambdas: they get an event or a message, and in response they execute some logic and then return. So this is the list of messages that this actor handles. In this case, it handles something called an ADS-B message, and there's a capability provider for that which I have running on a Raspberry Pi. I can't show it to you because the camera's not mobile, but it's over there-ish. The capability provider running on that Pi is pulling radio signals off of an antenna that contain a list of planes flying overhead within about a 100 kilometer radius. Every time one of those messages arrives, that provider decodes it (if I had access to the terminal, I could show you what the raw messages look like) and then sends it to this actor. In response, this actor loads up the state of an aircraft, applies the new state, and then puts it back in the key value store.
If you look at this load state function here, this call right here does a key value get. Now, obviously the WebAssembly module itself doesn't have access to a key value store, but it can tell the host: please fulfill this key value get request on my behalf. And if the actor has the secure privilege to do so, the host will then go and talk to the store. What's important about this function call is that it's an abstraction. It's a key value get, but you don't see what you typically would see in a microservice in Go or another language, even in Rust: creating a client connection to Redis and initializing it with some connection string and some security information, the username and the password and the host. None of that is there. All you're doing is declaring your intent to fetch data at a specific key from a key value store. What that allows you to do, first and foremost, is test this thing in isolation, because you can talk to any key value store, including an in-memory one, without ever recompiling your WebAssembly module. And in production, it allows you to do things like switch your capability provider from Redis to Cassandra to Consul or memcached without ever redeploying your module. The module can stay running live in production and have its capability provider swapped out without ever dropping a message.

So in this case, it's a runtime that has all the drivers that satisfy all these interfaces?

Yeah, capability providers are plugins, so you can write one for anything that is a capability. In my case, I've got a capability provider that decodes the ADS-B messages that come in off a specific radio frequency. But we've got stock capability providers for S3, for graph databases (I've got one for a telnet server), an HTTP server, an HTTP client, all of the things you can think of that your application or your business logic might need in the cloud.
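The load-state pattern described above can be sketched with a trait standing in for the key value capability. The `KeyValue` trait and `apply_event` function are illustrative names, not the real waSCC interface; the point is that the business logic compiles once against the abstraction and any store, including an in-memory one for testing, can satisfy it.

```rust
use std::collections::HashMap;

// The actor declares intent ("get this key", "set this key"); which store
// actually serves the call is bound at runtime.
trait KeyValue {
    fn get(&self, key: &str) -> Option<String>;
    fn set(&mut self, key: &str, value: String);
}

// In-memory store, handy for development and isolated testing.
struct MemoryKv(HashMap<String, String>);

impl KeyValue for MemoryKv {
    fn get(&self, key: &str) -> Option<String> {
        self.0.get(key).cloned()
    }
    fn set(&mut self, key: &str, value: String) {
        self.0.insert(key.to_string(), value);
    }
}

// Business logic in the shape the transcript describes: load the aircraft's
// state, apply the new event, store the state back. No connection strings,
// no client setup.
fn apply_event(kv: &mut dyn KeyValue, aircraft: &str, altitude: u32) -> String {
    let prev = kv.get(aircraft).unwrap_or_else(|| "unknown".to_string());
    let next = format!("alt={} (was {})", altitude, prev);
    kv.set(aircraft, next.clone());
    next
}

fn main() {
    let mut kv = MemoryKv(HashMap::new());
    let s1 = apply_event(&mut kv, "LXJ381", 34000);
    assert_eq!(s1, "alt=34000 (was unknown)");
}
```

Swapping `MemoryKv` for a Redis- or Cassandra-backed implementation changes nothing in `apply_event`, which is the property the talk is demonstrating.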
But you're no longer carrying around all of that boilerplate as baggage on every single thing that you write. The goal is that you get to write the pure intent of what you want your logic to do, and you don't care, particularly, how that logic is fulfilled. But you can control how it gets fulfilled at runtime by how you configure all of the providers.

And what's the interface between the business logic here and the actual capability implementation?

At a low level, what happens when I call this key value default dot get is that it sends a binary payload up to the host that contains essentially an RPC invocation. The host figures out what the target of that invocation is; in my case, it's a key value store. The host took care of binding my actor to, in this case, the Redis key value store, so the host forwards that payload to the Redis key value store capability provider. The provider handles the request and gets the response, and then the host delivers the response back to the WebAssembly module. That same interaction also works in reverse. I have a message broker provider for NATS, and in that case you can create a subscription or multiple subscriptions, and every time that provider gets a NATS message, it can deliver it to the appropriate actor. So the host runtime is basically a dispatcher between these tiny actor modules, which contain as close to raw business logic as possible, and the capability providers that are satisfying the non-functional requirements.

Are these capabilities exposed as functions to Wasm, or is there some sort of remote RPC that happens here?

Yeah, this is essentially remote RPC. At a low level, there's a standard called waPC that allows us to send and receive binary payloads between the WebAssembly module and the host, and then waSCC adds meaning to those payloads on top: things like the key value stores and blob stores and all of that.
Those are basically just a set of data types defined in something that can be serialized in and out of message pack, and the host is basically just sending these message pack invocation and invocation response payloads around. In the demo that I might be able to show, if I can figure out how to run a terminal session, the way the host runtime works is that it doesn't care where any of these things are. I can run an actor on this laptop. I can run the capability provider over there-ish on my Raspberry Pi. I can run other providers in Amazon, others in Google, others in Azure. Through the message bus that this runtime uses, it treats everything like a single flat topology. It doesn't matter whether you're scheduling it in Kubernetes or not; it just treats it like one flat topology. So these actors can communicate with providers without any need for service discovery, in a location-independent manner.

This is very interesting, thank you for the explanation. So I have a question: how do you handle errors? If one of your actors is not responding, I mean, you have that message bus, but underneath you need to have a network in between, right? So if you have some errors, do you have error handling?

There are a number of different levels of error handling. If you look at this function here, in Rust there's a result type. If the result contains Ok and then a payload, we know everything succeeded. If it contains the Err variant and then a payload, we know there was an error. As part of that protocol between actors and the runtime host, the host knows how to store an arbitrary error payload on behalf of the actor. So if the actor fails at any point in processing, we can set that value in the host, and the host knows when the actor failed. The reverse is also true: the actor will know when the host fails.
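The invocation payloads described above can be sketched as a small round-trippable structure. The real system serializes these with message pack; this sketch substitutes a toy length-prefixed encoding purely to keep the example dependency-free, and the field names are invented for illustration.

```rust
use std::convert::TryInto;

// An RPC invocation as the host shuttles it around: a target (which
// capability), an operation, and an opaque payload.
struct Invocation {
    target: String,    // e.g. a key value capability
    operation: String, // e.g. "Get"
    payload: Vec<u8>,
}

// Toy wire format: each field is a 4-byte big-endian length followed by
// the field's bytes. (The real system uses message pack instead.)
fn encode(inv: &Invocation) -> Vec<u8> {
    let mut out = Vec::new();
    for field in [inv.target.as_bytes(), inv.operation.as_bytes(), inv.payload.as_slice()] {
        out.extend_from_slice(&(field.len() as u32).to_be_bytes());
        out.extend_from_slice(field);
    }
    out
}

fn decode(bytes: &[u8]) -> Invocation {
    let mut pos = 0;
    let mut next = || {
        let len = u32::from_be_bytes(bytes[pos..pos + 4].try_into().unwrap()) as usize;
        pos += 4;
        let field = bytes[pos..pos + len].to_vec();
        pos += len;
        field
    };
    let target = String::from_utf8(next()).unwrap();
    let operation = String::from_utf8(next()).unwrap();
    let payload = next();
    Invocation { target, operation, payload }
}

fn main() {
    let inv = Invocation {
        target: "keyvalue".to_string(),
        operation: "Get".to_string(),
        payload: b"aircraft:LXJ381".to_vec(),
    };
    let round_trip = decode(&encode(&inv));
    assert_eq!(round_trip.target, "keyvalue");
    assert_eq!(round_trip.operation, "Get");
    assert_eq!(round_trip.payload, b"aircraft:LXJ381");
}
```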
These little question marks here in Rust mean: attempt to get the result value of that execution, but if it fails, abort and return an error in response.

And do you have a retry mechanism too?

There is a retry mechanism, but it won't retry when the errors are explicit. If the host calls the WebAssembly module locally and the module returns an error, it will assume that is a legitimate error and pass it on to the consumer. There's an automatic health check system in the host runtime that invokes the health function on all of these actors. If the health function doesn't come back, or it returns an error, including one you can provide explicitly on your own, then the host runtime knows your actor is unhealthy. There's an issue open to finish the implementation of this, but basically what will happen is the host runtime will bounce your actor: it will dispose of it and then reload it.

Got it.

And that will happen regardless of whether your host runtime is running in or out of Kubernetes. Microsoft has a project called Krustlet that is essentially an alternate version of kubelet that runs on Kubernetes nodes and allows you to deploy WebAssembly modules straight to a node without the use of a container, and you can choose either a WASI module or a waSCC actor to deploy straight to that node. There's other error checking, too. Like I said before, when you're running this in what we call a lattice, which is just a cluster of these host runtimes, we will detect things like RPC timeouts and do RPC retries and so on. So if we can't communicate with the other host, that is treated as a different type of failure than if we did communicate with the host, the host did invoke the WebAssembly call, and the host got an explicit error in response.

Did you cover which protocol is the highest level protocol? Is this gRPC? I came in a little late.
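The health check and bounce behavior described above can be sketched as a supervision loop. The names here (`Actor::health`, `check_and_bounce`) are invented for the sketch; the real host re-instantiates the WebAssembly module from its original bytes, which a plain struct reset stands in for below.

```rust
// A minimal actor interface for the sketch: a health probe that either
// succeeds or reports an error.
trait Actor {
    fn health(&self) -> Result<(), String>;
}

// An actor that eventually wedges, to exercise the bounce path.
struct FlakyActor {
    calls_until_failure: u32,
}

impl Actor for FlakyActor {
    fn health(&self) -> Result<(), String> {
        if self.calls_until_failure == 0 {
            Err("wedged".to_string())
        } else {
            Ok(())
        }
    }
}

// One pass of the host's health check: if the actor reports unhealthy,
// dispose of it and reload a fresh instance.
fn check_and_bounce(actor: &mut FlakyActor, restarts: &mut u32) {
    if actor.health().is_err() {
        *restarts += 1;
        // "Reload": stand-in for re-instantiating the module from bytes.
        *actor = FlakyActor { calls_until_failure: 3 };
    } else {
        actor.calls_until_failure -= 1;
    }
}

fn main() {
    let mut actor = FlakyActor { calls_until_failure: 2 };
    let mut restarts = 0;
    for _ in 0..6 {
        check_and_bounce(&mut actor, &mut restarts);
    }
    // The actor wedged once in six passes and was bounced exactly once.
    assert_eq!(restarts, 1);
}
```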
How are you communicating at the highest level of the network? I mean, it's TCP, but what's on top of that?

Okay, I'm barely able to hear you, but I think I got the question. The protocol is not gRPC or anything like that. Below waSCC is an RPC protocol designed specifically for WebAssembly called waPC. Without getting too far into the weeds, one of the problems I think exists with some WebAssembly runtimes right now is that they're either very, very JavaScript specific, so they assume the existence of a browser as your host, or they assume there is JavaScript glue code like wasm-bindgen, or things like Emscripten, or earlier things like asm.js. Long story short, with those, the host could allocate long-lived pointers into the WebAssembly module, or the reverse, where the WebAssembly module could tell the host to allocate long-lived pointers. That made RPC-style calls between the host and the guest stateful, which meant that if a WebAssembly module died after it allocated but before it freed, you could have memory leaks, inconsistent state, and all sorts of terrible problems that we know we don't want when building stateless services in the cloud, even for things like lambdas. So waPC is specifically designed to be allocation agnostic: it doesn't rely on the existence or non-existence of a garbage collector inside the WebAssembly module, and it doesn't allocate anything itself. It is unaware of how either side of the conversation allocates memory. What that means is that between any two function calls, which in the host runtime are managed in a durable queue, the WebAssembly module can be destroyed or its memory can be wiped, and it will have no impact on either the module or the host.
That allows me to take an actor that's running on a host on one node, start up a second copy of that actor on another node, continue to distribute function calls between the two, take one down and bring another one up, all without ever having to worry about whether I left some sort of dangling pointer somewhere. gRPC is an excellent point-to-point protocol, but trying to adapt it to stateless, distributed, load-balanced function calls across WebAssembly modules just wasn't quite what we were looking for.

Okay, thanks.

I'm going to try one more time and see if I can share my terminal.

You mentioned some of these other projects which emulate a kubelet. Is there something similar for waSCC that can act like a kubelet in a Kubernetes environment?

Yes, there's a very specific one called Krustlet, and you can find it on GitHub in Microsoft's DeisLabs organization. I'll post the link in the Slack channel as well. That's precisely what it does: if you run Krustlet instead of kubelet, you can choose to deploy either WASI modules directly to a Kubernetes node with a Kubernetes deployment manifest, or a waSCC actor directly to a Kubernetes node. And like I said earlier, the way waSCC self-forms these networking clusters means that some of your hosts could be running in Krustlet, some of them can be running in a virtual machine, and some of them can be running on Raspberry Pis or even smaller constrained devices. Because waSCC uses NATS as its message bus, all of those things are able to discover each other as though it was a single flat topology, with no need for service discovery. And one of the things NATS lets us do, with leaf nodes, is control when the traffic is localized versus when it leaves a cluster, leaves a Kubernetes cluster, or hops across clouds, and so on.
I am still completely unable to share that terminal... ah, here we go. Do you see the terminal window that I've got? Okay, so right now you should just see an empty prompt. I have this demo, because one of the things I wanted to do with waSCC is to make it generally much easier and simpler to build distributed applications, but also to enable the possibility of thinking about distributed applications in a new way. When we build microservices and deploy them to the cloud, we just sort of assume that we're going to build this walled, self-contained structure, whether it's a Kubernetes cluster or not, that houses all of our compute, and that's what we think of as a distributed application. waSCC has a broader view of what a distributed application is: we can run all of these hosts anywhere we want, and they will stitch themselves together into a larger distributed application. This demo, called WASM Air, essentially builds a clone of the FlightAware application. I don't know if anyone's familiar with it, but with FlightAware you can buy these little $10 devices, put an antenna on the outside of your house, and pull radio data you can see locally, but it also sends the data up to a cloud aggregation service so you can see all of these flights in one aggregate pool. With this application here, we did the same thing, but it took about four hours rather than however long it would have taken to build using traditional microservices. There are a couple of components here that are actors. I showed the code for this thing called the ADS-B processor; ADS-B is just the acronym for the type of radio signal that we have. And I mentioned earlier that we've got a level of security embedded inside the WebAssembly modules. One of the things that has burned me in a DevOps role when deploying Docker containers is how to secure the applications that are running in them.
In general, they can do whatever they feel like doing, and we can apply networking and security policies, but the policies live in environments. They don't follow the Docker image. So if we make a mistake, forget to redeploy something, forget to resynchronize something, it's super easy to accidentally let one of those Docker images fall through our policy enforcement. With waSCC, these actors have their security credentials directly embedded in the file. So I can take a look at the security information on one of these actors. I just have to remember my own syntax here... there we go. Can you see this pretty well, or do I need to zoom in more?

It's good.

Good. So, embedded directly in this module, which is a little over a meg, by the way. Instead of a Docker image with a runtime embedded in it and all of my dependencies embedded in it (that quote about how you own all of your dependencies is quite true, because even if you're building a static binary, those things still end up in your application), the waSCC module doesn't carry the dependencies. Inside this module is a cryptographically signed JSON Web Token. In that token, I have a globally unique identifier for the module itself, so anywhere this module goes after it's been signed, it always has this identity. I have the identity of the thing that signed it, essentially the token's issuer, and I can have a chain of issuers as long as I want, so I can verify the provenance of this module. My security system can choose whether or not to trust things that were issued by a given account. I have things like the expiration date and when the token becomes valid. But the most important thing here is this list of capabilities. This module is allowed to use a key value store. It's allowed to bind to a message broker. It can do standard out logging.
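The authorization decision described above can be sketched with a decoded-claims model. In the real system the claims live in a cryptographically signed JWT embedded in the .wasm file; this sketch skips signature verification and models only the decision: is the issuer trusted, and does the signed capability list grant what's being requested? The struct and capability strings are illustrative.

```rust
// Decoded claims, standing in for the payload of the embedded signed JWT.
#[allow(dead_code)]
struct Claims {
    module_id: String,         // globally unique identity of the module
    issuer: String,            // identity of the account that signed it
    capabilities: Vec<String>, // capabilities the module may use
}

// The host honors a capability request only if the signer is trusted AND
// the signed claims actually grant that capability.
fn authorized(claims: &Claims, trusted_issuers: &[&str], capability: &str) -> bool {
    trusted_issuers.contains(&claims.issuer.as_str())
        && claims.capabilities.iter().any(|c| c == capability)
}

fn main() {
    let claims = Claims {
        module_id: "module-identity".to_string(), // hypothetical value
        issuer: "ACCT_PROD".to_string(),          // hypothetical value
        capabilities: vec!["keyvalue".to_string(), "logging".to_string()],
    };
    // Granted capability, trusted issuer: allowed.
    assert!(authorized(&claims, &["ACCT_PROD"], "keyvalue"));
    // Capability not in the signed list: denied.
    assert!(!authorized(&claims, &["ACCT_PROD"], "blobstore"));
    // Trusted-for-dev-only issuer: denied in this environment.
    assert!(!authorized(&claims, &["ACCT_DEV"], "keyvalue"));
}
```

Because the claims travel inside the signed module, the policy follows the artifact rather than living only in the environment, which is the contrast with Docker images the talk draws.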
And it has access to a custom capability that is my radio receiver provider. So that's that module. I also have a RESTful service, and this RESTful service exposes the aggregate flight data that my processor actor has generated with that small amount of code. If I look at this one, the code for it is also remarkably small. The only thing we have here is a declaration that we handle HTTP requests, plus the mandatory health request. In response to handling an HTTP request, I can query the list of stations, which is the list of people running capability providers attached to these radio receivers, and then the aggregate list of all the aircraft in the system. Again, the actor is responsible for producing that list, so in this case all I have to do is query the cache. This line here gets the list of all aircraft in the system, and this one gets the details for a specific aircraft. And again, there's no code to create a Redis connection, no code to create a Cassandra connection, just a declaration of what I want to do with the abstract capabilities that my actor is allowed to access. I'm going to see if I can start some of these other services in another terminal window so I can run this demo. Hopefully this will all work without a massive explosion. I'm going to start a waSCC host for the processor, which is the thing that talks to the provider that reads the radio signals. Once it's bound to an actor, it'll deliver the converted, or decoded, radio signal messages. All right, I've started one of those; just pretend you can see a bunch of really interesting console output. I can start the processor in another terminal window, and again, just pretend you can see that. That's actually not working; give me one second here. Let me see if my radio is actually working. So I'm going to start the processor host. Sorry about this.
I didn't expect this: my Raspberry Pi over there went to sleep and I'm not able to establish a network connection to it, so just bear with me. I still seem to have a networking problem here... oh, there we go. I don't know if you can see this, but I've got a couple of flights that are not advertising their call sign; that's probably from the Air Force base that's near my house. And then this other one here, LXJ381, and the bottom two look like regular commercial flights. So what's happening here is I have a capability provider running on a device (or running with telnet access to a device) that has a radio receiver on it. The provider is decoding the signal. The actor is doing business logic with those events, so it's computing state from this event stream, which on this one radio is about 400 or 500 radio packets per second, and aggregating all of that into this flight state. And now I have just a regular console application that is reading the aggregate state generated by the two actors, and also subscribing to the stream of events that one of the actors is emitting. You saw earlier that the amount of code to get all this to work was measured in the tens of lines of code. But that's not really the most important thing here. The most important thing is that while this demo currently works with an RTL-SDR hardware device, and the aggregate data is stored in Redis, I could change the hardware device I'm getting this radio data from, from the little antenna outside my window to some giant thing mounted in my backyard, without having to redeploy my actors. And I could switch from storing the data in Redis to storing it in Cassandra or any other key value store, again without redeploying the actors. And these actors are unforgeable: they can't pretend to have an identity that they weren't given.
You can't tamper with them, because the hash of the bytes of the WebAssembly module will fail that check. And further, since I know the identity of the entity that signed these modules, I know whether or not I trust them to run in this particular environment. So I could have identities that I trust to sign my production workloads, and not trust the entities that sign my dev and QA workloads.

Got it. So I have a question. This is the data being displayed from signals coming to one Raspberry Pi, is that right?

Yeah, right now.

And the idea is that you have some sort of distributed database that has all these flights in it, and you have these stations all over the world, and they communicate? Is there a distributed database, like DNS or something similar to what DNS does? Is that the idea, or am I getting that wrong?

Yeah. So basically what it amounts to is this: if you can see at the top here, there's a list of stations. Each one of those stations is some piece of hardware that has a capability provider reading data from it. And because I'm using NATS to connect them, those stations could be anywhere in the world. One that's currently offline right now belongs to a friend of mine who has his station set up in the DC area. So I can have all of these stations running, and the host for them does not need to be running in the cloud. It can run close to the source of data; edge computing is getting more and more important as the amount of raw data we get from edge devices like radio receivers becomes unmanageable. If I were to send all of these events from all of these flights all the way up to the cloud in order to do pre-processing on them, I would quickly overwhelm my cloud, and I'd have to scale it up and pay for all of that scale.
But if I can do some of that pre-processing on the device, that massively impacts the type of architecture I need to support this. The distributed database can be anything I want it to be. Right now it's Redis, and as long as the running provider has a connection to the Redis database, it doesn't matter where that database is. For this demo it's running on the tower under my desk, but it could be an anycast-type DNS address, so that it finds the most geographically appropriate node based on the IP address of the source, or it could just be a single address to any of the Cassandra nodes in my network. Again, what I really want to achieve here is that the choice of the distributed database should be an implementation detail. It's just something I choose when I'm deciding how big I want my deployment to be and how much of it I want there to be. But my business logic, the core brains of what it is I'm doing, shouldn't change. The problem is that when we build things like this today, that stuff has to change. If I were to change these services from aggregating over telnet to aggregating over some other protocol, and from storing data in Redis to Cassandra, I'd have to recompile everything and probably re-engineer half of it. But by doing it this way, with this very simple actor model where all of my business logic is in these portable distributed actors that can run anywhere in my mesh, I don't have to recompile or redeploy anything to make scale decisions and move compute to where it's most appropriate.

Okay, but at some high level, you choose at one point in time what that distributed database is going to be. You can swap it out for different things, but for the entire system it's the same distributed database?

Yep.
So the core components of the system here are an actor that processes flight tracking events coming in from a flight tracking capability provider, and an actor that provides a RESTful service on top of the aggregate data. Those two things are really the key components that make up this system. There's the UI here, but that's essentially a passive consumer. As long as the design of my system is to process these radio events and convert them into aggregate state, I can satisfy those requirements however I want without having to change my actors. More importantly, I can move my capability providers from one host to another, I can move my actors from one host to another, and I can choose where I run my compute based on what my needs are, not based on what technology I chose for my client library at the time I was developing this stuff. Got it. Is it good practice to have the capability providers close to the actors, or does it really depend on the application architecture? So that's the cool part: it doesn't really matter. I can have the capability provider in process inside the same host as the actor, and in some cases that might make a lot of sense. If the capability provider is sending tens of thousands of messages per second to an actor for processing, it might make sense to put that actor and that provider in the same host, and then have the downstream processing actor in another host somewhere. But the beauty of it is that because these things are portable and they'll run wherever I want them to, while I'm prototyping and testing all of this at home, just to see if everything works together, I can run it all in a single process and verify that everything is the way I want it. And then when I want to scale out to production, I don't have to re-engineer my hello world for production anymore.
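The idea that actors and providers are registered independently and linked at runtime, rather than compiled together, can be illustrated with a toy host. This is a deliberately simplified Python sketch, not the real WASC host; `Host`, `link`, and `dispatch` are hypothetical names:

```python
class Host:
    """Toy host: actors and capability providers are registered
    separately and bound ("linked") at runtime, so either side can
    be replaced or moved without recompiling the other."""
    def __init__(self):
        self.actors = {}      # actor_id -> handler(msg) -> reply
        self.providers = set()
        self.links = []       # (provider_id, actor_id) bindings

    def add_actor(self, actor_id, handler):
        self.actors[actor_id] = handler

    def add_provider(self, provider_id):
        self.providers.add(provider_id)

    def link(self, provider_id, actor_id):
        # The runtime binding step: done after deployment, not at build time.
        self.links.append((provider_id, actor_id))

    def dispatch(self, provider_id, msg):
        # A provider's message goes to every actor linked to it.
        return [self.actors[a](msg)
                for (p, a) in self.links if p == provider_id]

host = Host()
host.add_provider("httpserver")               # e.g. an HTTP capability provider
host.add_actor("hello", lambda m: m.upper())  # business logic, provider-agnostic
host.link("httpserver", "hello")
print(host.dispatch("httpserver", "hello world"))
```

Replacing `"httpserver"` with a different provider implementation only changes the registration and link calls; the actor's handler is untouched, mirroring how a dev web server can be swapped for a production one.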
I can take what was running in my single process, split all of the constituent parts out, and then run them at whatever scale I want. If I want to run 500 instances of the radio processing actor, I can do that. I can spread those 500 across 500 hosts, wherever I want to run them, and the capability provider will distribute its messages evenly across all 500 of those actors. And if I want to run five of those capability providers attached to five different radios and still have 500 processing actors, I still have that messaging pattern and I haven't had to re-engineer anything. Got it, makes sense. Any other questions? We're running pretty close to time. Yeah, we've got one minute left, so any other questions? I think this is great, thanks for the overview. It's very interesting how things are moving forward. I can see this being used in edge computing a lot, moving some of those capabilities, and maybe some actors, to the edge or somewhere else. So I think the important takeaway here is that the way people are building stuff today, if I want something to run at the edge, I have to choose a very specific set of technologies for my compute to run at the edge. If I want my stuff to run in the cloud, I use a totally different set of technologies. If I want compute running inside a browser, I have another entirely different set of technologies to deal with. And if I want all of those things to talk to each other, I have yet another set of technologies to deal with. But if I write all of my stuff as these portable actors in WebAssembly, I can deploy and run them at the edge, in the cloud, on tiny devices, anywhere I want; I can even run them in the browser. And I don't have to re-engineer my application to change how I split up my compute.
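The even spread of a provider's messages across many actor instances is essentially round-robin distribution, which can be sketched in a few lines. This is a conceptual Python illustration of the pattern, not the actual WASC scheduler:

```python
from itertools import cycle

def make_distributor(instances):
    """Round-robin a provider's messages across actor instances,
    so N replicas each receive an even share of the stream."""
    ring = cycle(instances)
    def dispatch(msg):
        handler = next(ring)
        return handler(msg)
    return dispatch

# Three toy actor replicas; each reply records which replica handled it.
replicas = [lambda msg, i=i: (i, msg) for i in range(3)]
dispatch = make_distributor(replicas)

results = [dispatch(f"radio-event-{n}") for n in range(6)]
# Replicas 0, 1, 2 each handle two of the six events, in turn.
```

Scaling from 3 replicas to 500 changes only the `instances` list; the dispatch pattern, and therefore the actor code, stays the same.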
And more importantly, I don't have to have five different teams with five different skill sets to manage all of these components. I essentially have one stack of stuff to worry about. Yeah. Are there any people using it right now, or are they just trying it out? Yeah, there's a number of people using it. IBM's Hyperledger is using the underlying RPC mechanism, the WAPC stuff. There's a company whose name I'm not allowed to mention, but they build 5G appliances that let customers download content as well as reshare their 5G bandwidth with other neighbors within a certain distance of the appliance, and all of the code running on those appliances is WASC actors running inside WASC hosts. There are a couple of other companies as well that are using and exploring it, and Capital One is also working on finding internal projects to use WASC as a basis for. Awesome. Well, thank you very much. Any last questions? All right. So this meeting is recorded, so if anybody wants to watch it, they can come back to it. All right, thanks very much for having me. Appreciate it. Thanks, this was very good. Thank you all.