All right, thank you for attending "From Napkin to the Cloud: A WebAssembly Journey." First, I want to take a quick detour and explain a little bit of who I am and why I'm doing this. I created the open-source project WasmCloud, and I'm also the author of Programming WebAssembly with Rust. What I want to talk about today is the journey from design to production, and how that journey is different when using WebAssembly as the underlying technology. To do that, I'll start with a quick two-minute tour of what WebAssembly is and what WasmCloud is. Then I'll talk about the application that we're going to build, and that I have built for this demo, and its conceptual design. Then we'll talk about the actual building: I'll show some code and compare and contrast what I designed with what we ended up building. The point there is to see whether there is a noticeable gap between the two. Then I'll take you through a demo of the application that we built, and finally there is a Q&A period at the end of the session. So, a quick tour of WebAssembly in the cloud. There are a few things we typically want in our cloud-native workloads, whether we've been using Docker or just running processes directly. A number of needs are fairly common, such as security, predictability, speed, and size, and by size I mean lack of size: we want small workloads. And conveniently, what we get from WebAssembly is a number of those things. WebAssembly is secure. It's predictable. It's fast. WebAssembly modules are small, and I'll show during the demo just how small that can be. It's portable. WebAssembly can free us from dependency hell. And it has what I like to refer to as beneficial limitations, meaning there are things you can't do in WebAssembly that you can do freely in other environments, and I think that's actually a good thing.
So if we get all these things from WebAssembly, and there is this great overlap between what WebAssembly gives us and what we want in our cloud-native workloads, then maybe we should explore using WebAssembly for cloud-native workloads. WebAssembly is a stack-based virtual machine; without going into too much detail, that just refers to how the virtual machine processes its instructions. The important point, though, is that it is a virtual machine. It's a binary file format that has a text representation as well. WebAssembly is CPU agnostic. It's operating system agnostic. The modules are fast, small, and efficient. And this is probably the key point here: they are entirely dependent on the host runtime. In the case of a web browser, the host runtime is some kind of JavaScript engine, so in Chrome it might be V8, and so on. In the browser, your host runtime is reached through JavaScript. When you are outside the browser, there are a number of different WebAssembly host runtimes available to us. We have Wasmtime from the Bytecode Alliance. We have wasm3, which is a popular C-based library that also has Rust bindings. And then we have higher-level libraries that build on top of engines like that, specifically WasmCloud. And this may be a controversial bullet point, but I firmly believe that WebAssembly is the future of distributed computing. So, WasmCloud. WasmCloud is an open-source project that I started probably a little over a year and a half ago, based on these assumptions: if we want all of these things from our cloud-native workloads, and WebAssembly can give them to us, then maybe there's an open-source project that can create the tooling and host runtimes to bridge that gap and give us what we need. WasmCloud is an actor runtime; for those that are familiar with it, that just means it treats WebAssembly workloads as though they are actors, as part of the actor pattern.
WasmCloud makes it so that your portable business logic runs anywhere: at the edge, in the cloud, on IoT devices, and also in the browser. WasmCloud is secure by default, and in the demo I'll show a little bit of how that security works. It eliminates boilerplate. We want to strip off all of the code that we write day in and day out in service of boilerplate that really has nothing to do with our business logic and our true functional requirements. WasmCloud supports a rapid feedback loop; we have a REPL in one of our command-line tools. WasmCloud also provides a self-healing, self-forming mesh called the lattice that connects your host runtimes in clouds, IoT devices, edges, and more. It provides a single flat-topology network that you can use to manage and observe your actor workloads. So let's take a look at the combination of WebAssembly and WasmCloud. We saw earlier that WebAssembly provides security. Most of WebAssembly's security at the core comes from the fact that it contains no bytecode operations for interacting with the operating system; it can't do things that the host runtime doesn't allow it to do. It's portable: all the instructions in WebAssembly are CPU and OS agnostic. WebAssembly is fast, and it has a small size. WasmCloud builds on top of this by giving us access to loosely coupled capabilities, and those capabilities might be web servers, database clients, message brokers, blob stores, those types of things. Seamless distributed compute that runs anywhere is a feature that we add through WasmCloud. You can horizontally scale your actors and your capability providers; again, I mentioned the actor model earlier. And we also do contract-driven design: when you build a WasmCloud actor, you're writing against an abstract contract.
So you might be writing against the key-value contract, or the web server abstraction, or some database client abstraction, and not the actual database itself, which means all those things can change at runtime without you having to rebuild or redeploy your code. So let's take a look at the sample application that we built. The idea here was to track and document the journey that starts with having an idea and finishes with that idea running in production, and to take note of how that experience might change using small, composable WebAssembly modules rather than our traditional throw-a-microservice-into-a-Docker-image approach. The application that we're building is called WasmCloud Chat, and it's a multi-channel, real-time messaging application. Instead of a simple hello-world app, this chat application has a full backend. It records all of the messages that go through it, and it has multiple inbound and outbound channels, including a message broker; for our demo we're going to use telnet. So this is essentially the napkin sketch, or the blueprint, for what we wanted to build for the sample application. You can see that we have some users coming into the backend through the telnet channel and some coming in through the broker channel. And in the true backend, we have functionality lumped together in categories: Presence, which is who's online and how long they've been online; Messages, which is essentially the message processing and storage; Rooms, which are the chat rooms (think of those as channels in Slack); and authentication and authorization. And again, this is a completely abstract concept here; it represents the idea that we had before we started to implement it. And I sort of knew what to expect before I went down this road of trying to build out this application, which was that in the past, my plans immediately go bad as soon as I try to turn them into an implementation.
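To make that contract idea concrete, here is a minimal, self-contained Rust sketch. The trait and names here are made up for illustration; they are not the real wasmCloud key-value interface, but they show how actor logic can depend only on an abstraction while the concrete store is bound later.

```rust
use std::collections::HashMap;

// Illustrative abstract key-value contract (not the real wasmCloud API).
trait KeyValue {
    fn set(&mut self, key: &str, value: &str);
    fn get(&self, key: &str) -> Option<String>;
}

// One possible provider: an in-memory store standing in for whatever
// backend (Redis, Cassandra, ...) would be bound at runtime by the host.
struct MemoryKv {
    data: HashMap<String, String>,
}

impl KeyValue for MemoryKv {
    fn set(&mut self, key: &str, value: &str) {
        self.data.insert(key.to_string(), value.to_string());
    }
    fn get(&self, key: &str) -> Option<String> {
        self.data.get(key).cloned()
    }
}

// Actor logic (here, a hypothetical presence update) sees only the
// contract; the provider behind `kv` can change without rebuilding this.
fn set_online(kv: &mut dyn KeyValue, user: &str) {
    kv.set(&format!("online:{}", user), "true");
}

fn main() {
    let mut kv = MemoryKv { data: HashMap::new() };
    set_online(&mut kv, "kevin");
    assert_eq!(kv.get("online:kevin").as_deref(), Some("true"));
}
```

The design point is that `set_online` compiles against the trait alone, so swapping the provider is an operational decision rather than a code change.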
And whether that's because of tooling or technology or something else, there is always a very large gap between my original idea and what I'm forced to build in the end. So let's take a look at the journey of how we generally go about converting our ideas into something that runs in production. We start off with this amazing idea, and we're inspired, and that inspiration gives us just enough energy to get over and around that next curve. So we have this idea that we then convert into a design. And the design is that abstract design that has very little to do with the actual implementation; it's just a rough sketch of what it is we're trying to build. We then do some early experimentation on this idea, and that early experimentation invariably results in us having to throw away everything that we've done, including some of our original ideas, and redo what we were building. And that's maybe because we learned some valuable lessons, or because we couldn't get some technical pieces to interact, but this iteration of experimentation and redesign is pretty common, including another round of refactoring and redesign. And during that process we might actually get something out into production and deployed, but in many cases it's nothing like what we wanted to build back when we had our amazing idea. And this, of course, leads to despair. So what I'm hoping is that through the combination of WebAssembly and other cloud tooling, we might be able to change this journey for the better. We start off again with our amazing idea, and then we get to an initial design, our napkin drawing, which I drew on a slide earlier. And then again we go to early experimentation, and I think this is where the paths diverge. The result of the early experimentation is going to be different this time, because we're able to quickly and easily deploy and test our experiments in different environments, including production and production-simulated environments.
And then in the iterative process after these successfully deployed experiments, instead of complete refactors and redesigns, we're just able to tweak things: change the knobs on our scaling and deployments and the shape of our final production environment to deal with scale and expected volume and load. And we haven't had to throw away any of our initial conceptions this way. And this, hopefully, leads to joy. So, back on the original journey path: with microservices, we build our services with hard service boundaries. If you've built any microservices, this pattern should look pretty familiar. Each one of these represents one of the potential units of deployment that we might want to build. And if we were building this as regular microservices, a small fraction of each one of these services on the screen is our business logic, but we now have all of these embedded dependencies. Each one of these has an RPC client in it. Some of them have message broker clients. There's a telnet server in one of them. There are database clients in others. There are caches. All of these are non-functional requirements. And as alluded to earlier, the problem is that these are now dependencies that we own, and they are tightly coupled dependencies. Even if we've managed to compile these, we're still in a form of dependency hell, because if we want to change how we get our telnet services from one approach to another, we have to refactor and rebuild and redeploy. If we want to change one of our database clients from Redis to Cassandra, we have to refactor and rebuild and redeploy. And that entire cycle happens a lot more often than we expect. So these hard service boundaries are very much like containment walls, and not in a good way. Hopefully we can do better. And with WebAssembly, using WasmCloud actors and a bunch of beneficial open-source runtimes, we can get flexible service boundaries.
So one deployment option here would be to have a single runtime process that has all of our actors in it. We can choose, on our laptop during early development, to have a monolith that's the most convenient and easiest thing to work with at that time. And then as we experiment and start needing different shapes to handle different loads, we can split that. We could split it into two runtime hosts, and now we have our actors spread in different ways across the different hosts, and we can pick and choose which ones go where. We're not locked into any binding between an actor and a host at compile time. And finally, we could also choose the super-microservice, where one of our hosts is hosting exactly one actor. We could choose that for a number of reasons. But the important point is that this choice is now, at least in the world of WebAssembly, a runtime, operational choice, and not an architectural design choice that is baked into the overall implementation of the system. So let's take a look at how we built it. I'm going to go through some Rust fairly quickly, but the main thing that I want you to keep in mind as I go through this code is not what the code looks like, but what's missing from the code. In this first one, the first thing an actor does is register its message handlers. Actors are reactive: they receive messages and they emit a reply to those messages. So in this case, one of these actors is simply registering a handler to process messages. And again, let's look at the code with an eye toward what's missing. In this snippet we can see that it's publishing a message to the backend. This code is actually from the Message Broker channel, which acts as a proxy between the outside message broker and the backend inside our chat system, which is far more complicated than a hello-world sample. But what's missing is actual communication with a tightly coupled broker.
That publish function just takes the topic, a reply topic, and the payload to publish. The choice of which broker does the publishing, how that broker does the publishing, and what configuration we use to enable that broker to publish: that's all done at runtime. We could completely swap out RabbitMQ for Kafka for NATS without having to recompile this actor, or even redeploy. We could swap those at runtime. There's a similar scenario with data services. In this one, we are adding a room to the chat backend, and to do that, we need to communicate with a database; in this case, a key-value store. But again, what's missing is that the key-value store isn't actually defined here. We're referring to the key-value abstraction, but we're not talking explicitly to Redis or Cassandra or Memcached or etcd. We're simply saying: set this key to this value, and add this value to a set with this key. And again, the binding of the provider is done at runtime. So now I'm going to take you through a quick demo of the chat application that we built, and I'll show all of this code in action. More importantly, I'll show the incredibly small size of our unit of distribution and our unit of compute, how flexible it is, and the options that we have without needing to recompile anything. Before I get into the demo, I just want to have this slide up here real quick as a brief reminder of the architecture that we're building. Everything with a gear (and in one case a padlock) is an actor written for WasmCloud for this system, and I'll go through that in a second. So let's get started with the demo. I've got a couple of terminal windows open here, because in general my demos are pretty devoid of graphics. I'm going to be going through some console stuff, and we'll be using the telnet channel on the chat example. The first thing I want to go through is looking at some of the actors.
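Before moving into the demo, the reactive pattern described above can be sketched in plain, self-contained Rust: an actor registers a named handler, and the handler publishes through an abstract broker contract with a topic, an optional reply topic, and a payload. All type and topic names here are hypothetical illustrations, not the real wasmCloud APIs.

```rust
use std::collections::HashMap;

// Illustrative messaging contract: which broker (NATS, Kafka, RabbitMQ...)
// fulfills it, and how it's configured, is a runtime choice by the host.
trait MessageBroker {
    fn publish(&mut self, topic: &str, reply_to: Option<&str>, payload: &[u8]);
}

// Stand-in provider that records messages instead of talking to a real broker.
struct RecordingBroker {
    published: Vec<(String, Vec<u8>)>,
}

impl MessageBroker for RecordingBroker {
    fn publish(&mut self, topic: &str, _reply_to: Option<&str>, payload: &[u8]) {
        self.published.push((topic.to_string(), payload.to_vec()));
    }
}

// Reactive actor: registers named handlers and dispatches inbound messages.
type Handler = fn(&mut dyn MessageBroker, &str);

struct Actor {
    handlers: HashMap<String, Handler>,
}

impl Actor {
    fn new() -> Self {
        Actor { handlers: HashMap::new() }
    }
    fn register(&mut self, op: &str, handler: Handler) {
        self.handlers.insert(op.to_string(), handler);
    }
    fn handle(&self, broker: &mut dyn MessageBroker, op: &str, payload: &str) {
        if let Some(h) = self.handlers.get(op) {
            h(broker, payload);
        }
    }
}

// Channel-actor logic: proxy an inbound chat message to the backend.
// The topic string is a made-up example.
fn forward_to_backend(broker: &mut dyn MessageBroker, msg: &str) {
    broker.publish("wasmcloud.chat.backend", None, msg.as_bytes());
}

fn main() {
    let mut actor = Actor::new();
    actor.register("HandleMessage", forward_to_backend);

    let mut broker = RecordingBroker { published: Vec::new() };
    actor.handle(&mut broker, "HandleMessage", "hello from KubeCon");

    assert_eq!(broker.published.len(), 1);
    assert_eq!(broker.published[0].0, "wasmcloud.chat.backend");
}
```

Note that `forward_to_backend` never names a broker vendor; swapping `RecordingBroker` for a real provider changes nothing in the actor's code.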
You saw some of the source code for the actors in the slide deck earlier, and I just want to show you what that looks like in the shell here. First, I want to look at how small these things are. If I take a look at the signed version of our actor, which is just a .wasm file, you'll see that it's actually only 702K. When you think about the unit of deployment for most microservices, even if we're using a language that's known for fairly small self-contained binaries, nothing produces binaries as small as 700K. And we're able to embed them with the cryptographic signature that tells the runtime what these modules are allowed to do. So if I inspect the claims on this module, you'll see that what I've got here is the name of the actor, which is called Telnet Channel. The actor has an account, a module, an expiration, and so on. And if you're paying close attention and you've seen a JSON Web Token before, you'll recognize that these are similar fields: we have an issuer, a subject, and then some other fields for how that token can be used. But more importantly, this telnet channel is capable of using the key-value store, the messaging broker, logging, extras (for random number generation), and the telnet capability. And it's worth pointing out that nowhere in that list of capabilities is there a specific mention of vendor projects. We don't mention Redis; we say it can use the key-value store, and so on. So that's the telnet channel. In the actor list we also have authorization and a broker channel, so anything that is using, let's say, NATS can use NATS as a gateway into the chat system. We have the messages actor, a presence actor for determining who's online, rooms, and then again the telnet channel. What I'd like to do next is start up the WasmCloud host, start all of the important actors, start all of the capability providers, and then configure those.
And we do that through a manifest file. At first, looking at this manifest file may seem a little overwhelming. I'm only using YAML here as an example; you can also use JSON, there's no requirement there. The first thing we have, on lines 5 through 7, is a list of actors that we want to start up in the WasmCloud host. Then on lines 9, 11, 13, 15, and 17, you'll see we're actually using this thing called an image reference for each capability. What that means is we're able to load a capability provider from an OCI registry: we can store them the same way you store Docker images, and retrieve them the same way you retrieve Docker images. We have a number of our first-party capability providers sitting in our WasmCloud OCI registry in Azure. Then, starting on line 18, we have links. Link definitions essentially consist of the actor, the provider that we're linking to, the name of the link, and then a set of parameters; think environment variables. So if you look here, we've got the broker channel actor linked to the front-end instance of the WasmCloud messaging provider. By using two different link names, we can actually have two different instances of the same capability provider, but with different configuration parameters. In this case, we have one NATS broker configured with a front-end connection string and one configured with a back-end connection string. As we go through here, you'll see we configure the telnet actor by giving it logging, extras, access to the telnet provider, and access to a key-value store, and you'll see that the key-value store is configured to communicate with Redis. So if I start WasmCloud with this manifest file, you'll see a whole bunch of log spam that should look fairly familiar to anybody who has spent time looking at microservices' standard-out logs in Kubernetes or any other cloud environment.
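To give a rough sense of the kind of manifest being described, here is a hypothetical fragment. The field names, image references, keys, and values shown are assumptions for illustration only; the real wasmCloud manifest schema may differ, so consult the wasmCloud documentation for the actual format.

```yaml
# Hypothetical manifest fragment (all names and values are illustrative).
actors:
  - ./telnet_channel_s.wasm        # locally signed actor modules
  - ./broker_channel_s.wasm
  - ./rooms_s.wasm
capabilities:
  - image_ref: wasmcloud.azurecr.io/nats:0.10.0   # provider pulled from an OCI registry
    link_name: frontend
  - image_ref: wasmcloud.azurecr.io/redis:0.11.0
    link_name: default
links:
  - actor: "MB...TELNET"           # actor public key (placeholder)
    contract_id: "wasmcloud:keyvalue"
    provider_id: "VA...REDIS"      # provider public key (placeholder)
    link_name: default
    values:                        # per-link parameters, like environment variables
      URL: "redis://127.0.0.1:6379"
```

The shape to notice: actors and providers are declared independently, and links bind them together with a contract, a link name, and configuration values, which is what allows two instances of the same provider under different link names.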
Most of this is informational and not really worth going into in too much detail, but you can see that we have a discrete log of all of the actors and all of the providers that are running. If an actor stops and starts, we'll see that here. If a capability provider stops and starts, we'll see that there as well. So the first thing is that we now know we have a telnet provider running, and because we configured it, we know it's running on port 8500. And we get a fancy ASCII-art welcome message. So I'll log in here. Those are the commands that we support. As I mentioned earlier, the backend system deals with things like storing and retrieving messages as a stream, creating and deleting rooms, joining rooms, and all that sort of thing, and the channels are responsible for the actual communication. Right now I'm using the telnet channel, and I typed a message there. And since there was no one listening: if no one was listening to a real-time chat message in the forest, did it actually happen? If we go into our Redis CLI, we'll see that it indeed did still happen. I'll take a look at the stream for the general room. Don't worry if the Redis syntax isn't familiar; it's not all that important right now. But you can see what happened here: I have a stream of messages that has been going through the different rooms in my chat backend. In this most recent one, each message has a unique ID. The text was "hello from KubeCon." We can see the origin user is a WasmCloud Chat URL. There's a timestamp on it. And the origin channel is the telnet channel. Now, if we had other channels running, the origin channel might be the NATS broker, or an IRC channel, or even a Slack channel, depending on the integrations that we support. It's designed to support any number of integrations written using WasmCloud actors.
But in any case, the moral of this story isn't that I've been able to build a chat application. The moral is that we've been able to build the chat application by composing tiny units of compute and distribution that consume literally no more than one megabyte each, that are cryptographically signed, that have verifiable, secure provenance, that can be deployed anywhere, and that can connect to capabilities that are also deployed anywhere. I can choose to run this entire thing as a monolith, which is what I'm doing in this one terminal window, or I could choose to run it as any number of processes. I could run 30 copies of one actor and 12 copies of the capability provider that it speaks to, and the WasmCloud runtime takes care of distributing the function calls between those and makes it all seamless for you. So the end result of the experiment of building this chat application was that the way we thought about the problem we were trying to solve lined up almost directly with how we solved it, by building actors and connecting them to capabilities, versus other avenues that might have been higher friction had we gone the traditional microservices route. So if I could ask one thing, it would be to take a look at the tutorials, kick the tires, and see if your next microservice project might be something you'd want to make your first WebAssembly-in-the-cloud project. Thanks.