I think now is as good a time as any; 1:30 is the start time. So we'll go ahead and get started. This is Deploying Your Backend Like a CDN with WebAssembly. My name is Brooks Townsend. I'm a lead software engineer at Cosmonic, and I've been a maintainer of the CNCF wasmCloud project since 2019, effectively since the project started in open source. So I love contributing to open source. Obviously, I'm a WebAssembly enjoyer, which is why I'm here. Avid Rustacean and demo enthusiast, as you'll get to see as a part of this talk today. I always like providing an agenda for talks. We're going to talk a little bit about what a CDN is and why people use CDNs, and go a little into my passion for network optimization here. We'll look at some diagrams, we'll do some demos, we'll look at some more demos, we'll look at some more diagrams. That's going to take up the meatiest portion of the talk. And then we'll wrap it up with what this is all for and where we're going with all of this next. This mascot, I feel like I should point out, is Cosmonic's tardigrade, Terry the Tardigrade. Very cute. Take pictures, come get stickers, all that stuff. So what is a CDN? Obviously short for Content Delivery Network. I found this great quote from Cloudflare. At its core, a CDN is just a distributed network of edge nodes with the goal of getting static web content delivered to users as quickly, cheaply, reliably, and securely as possible. I really like this definition. It's very simple, it describes the technology well, and it's mostly focused, of course, on the front-end side of applications. And if you haven't used a CDN before, haven't heard of the concept, I think this is a great illustration: when I host an application, I may run a web app, a full-stack thing, in the US West region. But people are going to be accessing that application from all over the globe. You never know where your users are going to be. 
And actually transmitting that data from your application all the way to, say, somebody accessing it from Australia is going to take up the vast majority of the time to complete the thing that your application is doing. You could be doing something totally simple, but if that request has to traverse all the way across the globe, there's only so fast a network request can go. So a CDN takes your static web assets: whenever somebody requests something from your website, like your index page, it will pull that through an edge node and then cache it at a node as close to the user as possible, so that whenever somebody else makes a request in that region, it's making a request to that edge node instead of the central server. All with the goal of optimizing how long people have to wait for that request to come back. And this is what I want to talk about: why do people use CDNs? I've already started talking about it a little bit, but really, people want their applications to load quickly. They want users to load up their application, feel happy about how snappy it is, and go on their merry way. There's that oh-so-famous Google study that if a mobile website takes longer than two or three seconds to load, over 50% of people will just abandon the page, never to come back. So it's obviously important to get your application to load as quickly as possible. People also use CDNs because they want to spend less money. Cloud egress bandwidth costs are very, very expensive. And if you're transmitting the same five-megabyte image to a million people, you're paying money when you really could just be caching it closer to them and paying less money. People like money; people don't like spending money. So people like using CDNs. There are other benefits, of course, to using them, and many different projects and companies that host CDNs, including security, DDoS protection, things like that. But by and large, CDNs are a win-win. 
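The pull-through caching described above can be sketched as a simple cache-aside lookup at the edge. This is an illustrative sketch, not any real CDN's code: `EdgeNode` and `origin_fetch` are hypothetical names, and `origin_fetch` stands in for the expensive cross-globe round trip to the central server.

```rust
use std::collections::HashMap;

/// An illustrative edge node: serve from the local cache when we can,
/// otherwise pull the asset through from the origin and cache it.
struct EdgeNode {
    cache: HashMap<String, Vec<u8>>,
    origin_hits: usize,
}

impl EdgeNode {
    fn new() -> Self {
        EdgeNode { cache: HashMap::new(), origin_hits: 0 }
    }

    /// Stand-in for the slow cross-globe request to the origin server.
    fn origin_fetch(&mut self, path: &str) -> Vec<u8> {
        self.origin_hits += 1;
        format!("contents of {path}").into_bytes()
    }

    /// Cache-aside: only the first request in a region pays the origin cost.
    fn serve(&mut self, path: &str) -> Vec<u8> {
        if let Some(body) = self.cache.get(path) {
            return body.clone();
        }
        let body = self.origin_fetch(path);
        self.cache.insert(path.to_string(), body.clone());
        body
    }
}

fn main() {
    let mut edge = EdgeNode::new();
    edge.serve("/index.html"); // first request in the region pulls through the origin
    edge.serve("/index.html"); // second request is served entirely from the edge
    println!("origin hits: {}", edge.origin_hits); // prints: origin hits: 1
}
```

Every user after the first in a region gets the cached copy, which is the whole win: the origin pays egress once, and the request never crosses the globe again.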
Spend less money, users hit your app faster, users happier, you happier, right? So it's worth pointing out, and this is in the Cloudflare article as well, that this only applies to static web front-end assets. If you're hosting a server or a backend, that is just going to live on an actual hosting platform. You're not distributing your entire backend with those static assets. So if your application is very heavy on the backend, if you're hitting the database every single request, using a CDN will help for the initial load, but you're still gonna be sending that request back to the central server every time. And so I feel like this begs the question: why aren't we just distributing backends with the static assets? Why don't we just throw our backends all over the place, all over the globe, so users can load the entire application quicker, right? Surely these CDNs have figured out a way to take a cloud native application and throw it all over the place. So can we do it? Nope. Thanks everyone, I really enjoyed coming for this talk. I think I've got about 30 minutes left for Q&A, if you wanna know. The real answer to why we haven't done this for backends is because it's really hard, right? It's more than just "throw the application somewhere" when it comes to backends. You have to deal with where your data is specifically gonna be located. What kind of platform are you developing and deploying on? There's a lot more to think about with where you can deploy your application when it comes to backends. And the no, as we start to rotate away from just the hard no, is really more of a "no, not with the way that you can build apps now." You can deploy your application to a bunch of different geolocated servers if you specifically design it that way. You can have that in mind from the beginning. You can think, oh, this is where I'm gonna put my data. 
I'm gonna be able to deploy to all these different servers, but you need to know what kind of platform you're gonna deploy to. You need to know lots of specific things about the deployment target, and that's gonna differ per application. So CDNs don't really have a way to just do that for the majority of apps. And if you're doing this with your application, it probably means you run or control your own infrastructure, you stand that up yourself, and it almost certainly means you're not using Kubernetes. Distributing Kubernetes across multiple regions is hard. Multiple clouds is harder. And extending it to edge nodes, things that can dynamically join and leave the cluster, is almost certainly impossible. I'm happy to be proven wrong on that, but even if you're doing a hub-and-spoke model and all kinds of things, say you get it all working, it's gonna cost you so much money to run Kubernetes clusters in all these different places. And this brings me to the goal: why I have been working on wasmCloud, why I've been working at Cosmonic and applied for this talk. So many applications that we write are really just bounded by the network cost. Compute is kind of cheap. We have really powerful computers now, and it's pretty rare that consumer applications, or even industry applications, are bounded by the amount of CPU and RAM that we can throw at them. The vast majority of the time it takes to handle an individual request is getting there and getting back. And I feel like we should be able to do this. We should be able to optimize all of our applications by pushing them as close to the user as possible, minimizing the data size and the distance across the whole network, right? And that's what I wanna focus on today. I have two demos where I'm working on this. One is more of a hobby: a singular application, not very distributed, that I wanna spread around. And the other one is an industry application, a little bit more of a real-world example, that's distributed. 
It has hard requirements, like a real-world application. And I wanna deploy it the way CDNs deploy static assets, which is all over the place, to as many places as possible. And I'm gonna do it using wasmCloud. wasmCloud is an application runtime in the CNCF. It's a sandbox project and we're working to apply for incubation, but you can see it on the landscape that they published yesterday in the CNCF blog post. We've been around since 2019 working on the project, and what wasmCloud does is it's a WebAssembly orchestrator with declarative deployments. It's a single binary, so it's completely cloud, edge, and platform agnostic. You don't need to run it in a container, but you can. You don't need to run it in Kubernetes, but you can. And really what this creates is just a nice vendor-neutral binary to run your WebAssembly apps. You can securely access vendorless, application-level capabilities like HTTP, key-value, messaging, and files with a blob store, things like that. And in my opinion, what I love most about wasmCloud is the seamless compute mesh. We use NATS, the CNCF project, under the hood for our networking stack. What that means is, once you set up your NATS network, if you run multiple wasmCloud runtimes, multiple WebAssembly applications, they can all talk to each other seamlessly, whether they're local or distributed across multiple different machines. And you don't have to build any of that into the app. It's awesome. Automatic load balancing, automatic failover, all those kinds of things. And wasmCloud is not only part of the CNCF, it uses as many cloud native standards that aren't container-specific as possible. We publish all of our WebAssembly modules to OCI registries. We use the Open Application Model spec to define our declarative applications. We publish CloudEvents as a standard event format for ingestion, NATS for our networking layer, and everything is instrumented with OpenTelemetry. 
It's very much a cloud native technology, just bringing WebAssembly to that party. So let's talk about the hobby application, right? Hobby applications: this is your fruit jokes app, this is your ChatGPT wrapper for your new startup, it's your image rotator, the small applications that you can run in one place. And the goal of distributing this around like a CDN is just to push that logic as close to the user as possible so that the requests for your hobby app get handled quicker, right? This is gonna have a static front end, so you can front this with a CDN and all the static assets will get there quicker. But we're also minimizing the data distance by distributing it closer, or to multiple regions, so people can get to it quicker. Now, the actual architecture of this application: this is a wasmCloud app, but specifically it's a WebAssembly application where we have one WebAssembly module called fruit jokes, of course, that just has some business logic around generating a fruit joke, or fetching it from a database, and then returning it as an HTTP response. So we have two capabilities here, and this is kind of what's different about wasmCloud, the secure access of capabilities: it abstracts these non-functional requirements out of your app. You don't build in the HTTP library; you dynamically link to it, or point to it, at runtime. Same with the NATS database, the key-value store that I'll use: you can swap that at any time. And what makes this different: this is what this application would look like when it's deployed on my local machine. I throw these things into a wasmCloud runtime, it's able to orchestrate it, and then I can get my fruit jokes. When I run it on another machine, the different wasmCloud runtimes communicate with each other using NATS, but it's just the same copy of an application running in different places. That's how we're distributing this. 
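The capability idea above, where the component depends on a contract rather than a concrete HTTP or key-value library, can be sketched with a plain Rust trait. To be clear, this is a conceptual sketch: in real wasmCloud the boundary is a WIT interface that the host satisfies at runtime, and the `KeyValue` trait, `MemoryKv` type, and joke text here are all illustrative stand-ins.

```rust
use std::collections::HashMap;

/// Illustrative stand-in for a key-value capability contract. In wasmCloud
/// this boundary is a WIT interface; the host links the component to
/// whichever provider (NATS KV, Redis, ...) satisfies it at runtime.
trait KeyValue {
    fn get(&self, key: &str) -> Option<String>;
    fn set(&mut self, key: &str, value: String);
}

/// Business logic depends only on the contract, never on a concrete store.
fn joke_of_the_day(store: &dyn KeyValue) -> String {
    store.get("joke").unwrap_or_else(|| {
        "What do you call a fruit that's rough around the edges? A bad apple.".to_string()
    })
}

/// One possible provider: an in-memory map. A real deployment would swap in
/// a NATS KV or Redis-backed implementation without touching the component.
#[derive(Default)]
struct MemoryKv(HashMap<String, String>);

impl KeyValue for MemoryKv {
    fn get(&self, key: &str) -> Option<String> {
        self.0.get(key).cloned()
    }
    fn set(&mut self, key: &str, value: String) {
        self.0.insert(key.to_string(), value);
    }
}

fn main() {
    let mut kv = MemoryKv::default();
    kv.set("joke", "What did the lemon say to the lime? Sour you doing?".to_string());
    println!("{}", joke_of_the_day(&kv));
}
```

Swapping the backing store means swapping which implementation the host wires in; the `joke_of_the_day` logic, like the Wasm component, never changes.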
And we're gonna do it, I'm sorry for the light mode, with wadm. Wadm is a project inside of the wasmCloud organization; it stands for wasmCloud application deployment manager. You don't really need to know about the internals of this; it's just a reconciliation loop. You define a manifest, you say "I want these things to run," very much like a Kubernetes Deployment, same kind of deal. And this is how it looks when you define it. Just like a Kubernetes Deployment, I say I'm gonna run this actor from an OCI registry, and I wanna run three copies of it. And using the daemon scaler, which is like a Kubernetes DaemonSet, I'm gonna run it on every single wasmCloud runtime that I can find in my network. And let's take a look at this application, right? You can actually hit this yourself if you'd like; this is fruitjokes.cosmonic.app. Very fun. I'll leave it up after the demo so you can generate as many jokes as you'd like, to your heart's content. So: what do you call a fruit that's rough around the edges? A bad apple. Okay, okay, one more, one more. What did the lemon say to the lime? Sour you doing? You can tell I figured out how to do a CSS animation, and it made me very excited. But this application itself is very simple. This is just a different visualization of the architecture diagram I showed before. It's your one WebAssembly component, and it's connected to an HTTP server and a NATS key-value store. If you actually look at the infrastructure that this is running on, we can see that there is a wasmCloud host that's running in AWS; it's on an ARM machine and running Linux. There's a wasmCloud host that's running on GCP, which is actually on x86. There's a wasmCloud host that's running on my MacBook, which is an M1 Mac, obviously on macOS. And one running in Azure. And all of these are running in different regions, from west to central to east. 
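For reference, a wadm manifest of the shape described above looks roughly like this. This is a hedged sketch, not the exact manifest from the demo: the image reference is made up, and field names like `daemonscaler` and `type: actor` have varied across wadm versions, so check the wadm docs for the version you run.

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: fruit-jokes
spec:
  components:
    - name: fruit-jokes
      type: actor
      properties:
        # illustrative OCI reference, not the real one from the demo
        image: ghcr.io/example/fruit_jokes:0.1.0
      traits:
        # daemonscaler: like a Kubernetes DaemonSet, run an instance on
        # every wasmCloud host wadm can find in the network
        - type: daemonscaler
          properties:
            replicas: 1
```

Deploying this manifest hands the desired state to wadm's reconciliation loop, which keeps that many instances running as hosts come and go.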
And the great thing is, for my little fun hobby app, because I'm compiling it to WebAssembly, I didn't have to worry about whether I was gonna run it on my MacBook or on x86 or ARM. You can just distribute it to all of these different places, and because wasmCloud has a runtime that can run there, you can run your WebAssembly app. So this is the first step, the first thing that we're gonna look at, for deploying backends like a CDN. WebAssembly makes this awesome, because just like a static asset that doesn't really need to run on anything, WebAssembly is platform agnostic. And you may be thinking to yourself, wow, with such a complex application, I can't believe you were able to do that. And the real question is: is this really the best use of wasmCloud? Is this really the best use of our effort to write this distributed framework, so that you can run this simple application? Well, not really. There are products out there, like edge functions or FaaS-style products, that really do this type of application better. Whether it's WebAssembly or their own specific thing, things like CDNs offer places where you can add logic like this, for generating a little dumb fruit joke, and you can deploy it all over the place just like your CDN. And that's honestly a better use case for this type of application. And I really like to point this out because I write a lot of examples for wasmCloud. I write a lot of WebAssembly examples. And sometimes they can kind of feel like this: they're little hobby apps or little demo apps. And I really wanna focus on taking advantage of the full distributed nature of wasmCloud and what its strengths are. Even if you were to build this type of app with edge functions, you might end up looking for something more, based on the compute, the expense, all that stuff. And so the main demo for this talk, the thing that I'm most excited about, is the industry application. 
This is an inventory management application that has some pretty hard requirements. We're gonna have a central server for administration, like a corporate office. We're gonna be running this part of the application, and it could be in a cloud. We're persisting data with a cloud data store, and we're gonna be keeping track of all the inventories of our different branches that are all over the US. Now these deployments are edge deployments, depending on your definition of edge, but they certainly don't have to be in the cloud. These deployments are gonna be on-premises in the office location, all local compute. And it's a hard requirement that this app has to work even if the central network is offline. So if you are working locally in a branch and you receive an order for 100 pieces of, or 100, what's a unit of paper that's not a piece, whatever, I'm not gonna spend time on that, 100 pieces of paper, you need to be able to take that order even if corporate is having some problems with their wifi, right? You can't just collapse. New branches can be added at any time. Business is booming, we'll open up more branches, and we don't need to be adding more cluster nodes or the complexity there. And branches can be closed at any time. Limitless paper in a paperless world; we may not need one of those branches anymore. So I'm calling this, of course, very fun, the Dunder Mifflin One app. And for the architecture, let me tell you why wasmCloud is perfect for this. We have hot-swappable capabilities. You can choose different implementations based on the actual requirements of where we're running. So we could store things in a warehouse database, and then the corporate app can store things in a corporate cloud database. We can hot-swap those, and we don't have to worry about committing ourselves to a specific vendor. What wasmCloud is doing here, as you'll see, is creating this compute mesh with automatic failover. And so, combined with NATS, we can have a deployment topology that can work in the branch. 
And even if the network goes offline, everything keeps processing locally and keeps working; then as soon as the network comes back online, everything reconnects, and your WebAssembly app is good to go. Taking a look at the application diagram, it's slightly more complicated, but it's using the same kind of component building blocks that the other app is using. We have an HTTP server, we have the notion of a messaging, or pub/sub messaging, capability, and then a key-value capability that has two actually different implementations. So for corporate, I'm gonna be storing everything in NATS JetStream, or NATS KV. That's just gonna be replicated across the different NATS servers, which is great. And then the individual branches are gonna be listening for messages saying, what's your inventory looking like? It's another WebAssembly component that's managing the branch, and then storing things locally in Redis. This is something that we can spin up on the warehouse computer, and we're good to go. Now, I already kind of talked through the different pieces of that application, but it's really key to note that the original architecture diagram extends very well with wasmCloud, because when we're adding a new branch to this, it's not dealing with cluster IPs or doing anything complicated with something like etcd. We're simply adding another instance of the wasmCloud runtime, essentially spinning up another copy of this part of the application. Again, we're gonna be doing that using wadm. I can lay it out in a declarative manifest to say: here's what I wanna run on the corporate computers, here's what I wanna run on all of the different branch computers, and then that will just be maintained for me declaratively. So let's take a look at this application. Please work. So, like I said, I figured out how to do a CSS spin, and I love it. I slowed this one down so it can be going all the time. 
This application, this is kind of the corporate dashboard where we can look at all of our different branches. We can request a rundown, which says, hey, give me all the rest of your inventory by the end of the day, and we can query it, and we can see that there are three different branches that we have connected: Scranton; Seattle, of course that one's running in US West on Azure; and Stamford, which is actually running here on my local MacBook. We've got some fancy UI things. Here we can filter by the different branches. But pride in doing a simple, stupid front end aside, let's talk a little bit about this application. When we look at actually managing the inventory on a branch, we could do a couple of different things. I'll show you the raw way first. We might have someone in the warehouse who is using Redis directly to update the inventory of this app, right? So I can set something like the printers value. We got a new shipment of printers; we're upgrading to having 100. This is on my machine running in Azure. I can request a rundown and hit query, and that one in Seattle has now updated its inventory to 100, right? So this dashboard is essentially able to make requests of all of the branches and get their inventory. The real thing that I love about this is that if we take a look at the infrastructure again, we can see that it's the same application, including fruit jokes, actually, running on all of these different cloud and edge nodes. It's completely cloud agnostic across Azure, GCP, AWS, and then my local MacBook. I'm spreading it around different clouds to illustrate different branches, because I don't have access to paper company warehouses to run this deployment on, but you get the picture. Now, the really cool thing I love about this application is the ability to go offline. The Stamford branch is the one that's running here on my local machine. I can process inventory orders by sending a NATS message. 
I can say, hey, I wanna get another shipment of ink, we're getting five more, and I want you to process this shipment. So we process that. If we request a rundown and query the inventory, you can see our ink level goes up by five. Great, we processed that message. Now, this is all being stored in corporate's database. The thing that I'm querying when I hit query inventory is actually running in the cloud, not here on my MacBook. So what I can do is remove the WiFi connection on my computer, just disconnect. This is not working anymore, because, you know, no internet. I can come back and continue to publish messages here, getting more shipments of things. I can take an order, so someone comes in and says, hey, I want to make sure to get five more instances of ink. And all of this is just continuing to process locally. So this distributed app that we put on all the different branches, even though I lost connection for a little bit, is still working just fine, all processing locally. I can turn WiFi back on; hopefully I can get back here. And then if we look at the Stamford branch here, if we query the inventory, the corporate database hasn't caught up yet, because everything is still processing locally. But requesting a rundown, which is saying, hey, give me all the new things, you can see that the new ink inventory came in and synced up to the corporate database. And this works the same across all of the different branches: if Azure has a temporary network meltdown, or what have you, it all works the same there. And we've satisfied, well, we have one more thing to satisfy about this application, which is that we can add and remove branches at any time. This is the really cool thing about the declarative deployments with wadm. I can actually launch a new wasmCloud host. And I'll just do it, well, I was having fun with this while testing, I can just do this on my local machine. 
And as long as I have this label saying this is a branch, saying, hey, I'm running this in a warehouse, we can do that one too: warehouse equals true. We can launch another wasmCloud host that's connected to this NATS network. And you can see, as soon as this pops up here, that wadm is going to take care of deploying an entire new copy of all of that infrastructure on that host. So as soon as this comes in, it's gonna work on it a little bit. Now the branch manager is deployed there, then the NATS messaging provider and the Redis key-value store are deployed there. And this is now a completely new, fully set-up branch. So we've just added a completely new cluster node, and that just works. We can also remove this cluster node, and that's it. It doesn't take any more than that to add or remove nodes, to distribute this application around more, because of the way that wasmCloud combines with NATS to do this deployment. This really enables a flexible deployment topology that you really could not do with containers and Kubernetes. So what is all this good for? Just as a general category, outside of inventory management, outside of fruit jokes, what does this deployment model enable us to do? This is really good for distributed data locations, pushing the application as close to the data as possible. Everything is working locally in that individual branch; it doesn't go to a central database. So it's great. That's where the data should live; it works better. It's great for heterogeneous environments, things that change all the time. Network-constrained applications, so any of those applications that don't really do that much compute, they're constrained by the network. Any time you're trying to go multi-region, multi-cloud, deployed to a variety of edges, this works really, really well for that. And obviously for failover: the ability for me to just turn off the WiFi on my laptop, process things, come back online, and then it just works. It's awesome. 
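The offline-then-sync behavior from the demo can be sketched in a few lines: all writes land locally first, and the "rundown" is the only step that touches the central store. This is a conceptual sketch with hypothetical names (`Branch`, `Corporate`, `rundown`); in-memory maps stand in for the branch's Redis and corporate's replicated NATS KV.

```rust
use std::collections::HashMap;

/// Illustrative branch inventory: every write lands in the branch's local
/// store first, so taking orders never depends on the corporate network.
#[derive(Default)]
struct Branch {
    local: HashMap<String, i64>, // stands in for the on-prem Redis
}

/// Stands in for corporate's replicated NATS KV bucket.
#[derive(Default)]
struct Corporate {
    inventory: HashMap<(String, String), i64>, // (branch, item) -> count
}

impl Branch {
    /// Process a shipment (positive delta) or an order (negative delta)
    /// purely locally. This works with the WiFi off.
    fn adjust(&mut self, item: &str, delta: i64) {
        *self.local.entry(item.to_string()).or_insert(0) += delta;
    }

    /// A "rundown": push the branch's current counts up to corporate.
    /// This is the only step that needs the network, and it can run
    /// whenever connectivity comes back.
    fn rundown(&self, branch_name: &str, corp: &mut Corporate) {
        for (item, count) in &self.local {
            corp.inventory
                .insert((branch_name.to_string(), item.clone()), *count);
        }
    }
}

fn main() {
    let mut stamford = Branch::default();
    let mut corp = Corporate::default();
    stamford.adjust("ink", 5);  // shipment processed while offline
    stamford.adjust("ink", -2); // order taken while offline
    stamford.rundown("Stamford", &mut corp); // network is back: sync up
    println!("{:?}", corp.inventory);
}
```

The key design choice mirrors the talk: the branch is the source of truth for its own counts, and corporate holds an eventually consistent view that catches up on each rundown.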
Things that might be better elsewhere: single-component applications, or web front ends. This is kind of the lesson of fruit jokes, right? It's a very simple application and really doesn't take advantage of the distributed nature, the capabilities, of wasmCloud. Of course it works; it's just probably gonna be easier to start with create-react-app and go on your way. When it comes to WebAssembly, you have an abstraction layer over the CPU and the operating system. It lets you deploy on any platform. It's awesome. But if you have an application that's specific to a CPU architecture, or specific to running on Linux, you're gonna pay a slight penalty for that abstraction, and you might not have access to all the same APIs. So it just makes more sense to run a native application if you're binding yourself to that. And heavily optimized code. I think that can give the impression that I mean WebAssembly apps aren't optimized, but I wanna be clear: this means that if you've optimized your code to the point where you're watching specific syscalls or dealing with system-level APIs, the platform abstraction that you get with WebAssembly isn't really a benefit anymore. You are actually running a CPU-intensive application, and the network may not be the constraint for that application. And if I have to boil this all down to one slide, what does this all mean? This is a seamless, painless distributed application. I wrote this example in the cram before the conference, and it didn't take me any code or logic to deal with distributing it out to different machines. WebAssembly lets you run your applications anywhere. wasmCloud orchestrates those applications and lets them talk to each other the same way no matter how they're distributed. So Wasm runs your applications anywhere; wasmCloud lets them all talk to each other. 
And that really unlocks the next epoch of computing: really taking advantage of that beautiful method of deploying to any platform, any cloud, any edge. If we look at what's next, maybe how I would improve this talk, improve what we talked about today: moving fruit jokes to the front end, since that doesn't have to be a backend app anymore. Obviously, I wanna improve the demo, have a little management UI, so we can look at another spinning logo, I know that you all like that. It'd be awesome to look at deploying WebAssembly on device, really do true geo-affinity for all of these requests. I would love to host a server in this room that you all hit, instead of going out over the network. And work on maybe a first-class data caching solution, again, to just continue to network-optimize these applications. I think that is all that I have for today. I wanna leave some time for Q&A with the cute logo, but I'll leave these up on screen. Everything that I've talked about today is enabled by wasmCloud, the open source project, the CNCF sandbox project. We would love to have you in our open source Slack; it's a really lively bunch of folks in there. And you can check out our GitHub organization to see all the projects that I talked about today. We host community meetings every Wednesday at 1 PM Eastern time. So come in, get in the Slack, and if you can't find the link to that for some reason, please let me know. We'd love to have you; everybody is welcome. Thanks everyone. I think I actually have like four or five minutes for questions, so anybody is welcome. Yeah, that's essentially starting up just like, so I did show starting up a wasmCloud host, and then the whole application kind of drops on it and gets scheduled, but it's essentially just automating that part of it. I think that's exactly what we wanna do. It obviously just comes down to having control of the infrastructure. 
You don't wanna have an instance running constantly, waiting for you to deploy that application onto it. So I really see the future of that being a kind of multi-tenant, shared cluster where you can run your application on demand for requests and then close it afterwards. Yes, yeah, so this answer actually changed in the last, I don't know, two to four weeks, but the boundary between WebAssembly and wasmCloud is the component model. All of the interactions that we're doing here, like when you invoke the WebAssembly guest, or whenever you do the host calls, invoking the WebAssembly module feels very classic component model: you have an export that you can call. When the WebAssembly module itself makes a call, say to a key-value store, wasmCloud takes that invocation from WASI cloud KV and will actually package it up into an invocation and send it over NATS. And the great thing about that is that on the local loopback, it's actually a very negligible penalty to send a message over a TCP connection, but it allows that message to be routed to any available key-value store. So if I'm not running it on the same machine, it still works the exact same. Did that answer your question? Awesome. Probably enough time for another one. I started at 1:31, so I'm going a minute over. All right, well, I'll definitely be hanging out around after the talk, so I appreciate you all for coming today.
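The location-transparent routing described in that last answer, where a capability call is packaged into an invocation and sent over a subject on the bus, can be sketched like this. Everything here is a toy stand-in: the `Invocation` shape, the subject name, and the in-memory `Bus` replacing NATS are all hypothetical, but the idea matches the answer: the caller publishes to a subject and doesn't care whether the provider handling it is local or on another machine.

```rust
use std::collections::HashMap;

/// Illustrative invocation envelope: when a component calls a capability,
/// the host packages the call up and publishes it on a subject.
struct Invocation {
    operation: String,
    payload: String,
}

/// A toy subject-based bus standing in for NATS: whichever provider is
/// subscribed to a subject handles the invocation, local or remote.
#[derive(Default)]
struct Bus {
    handlers: HashMap<String, Box<dyn Fn(&Invocation) -> String>>,
}

impl Bus {
    fn subscribe(&mut self, subject: &str, handler: Box<dyn Fn(&Invocation) -> String>) {
        self.handlers.insert(subject.to_string(), handler);
    }

    /// Request/reply: route the invocation to whoever serves this subject.
    fn request(&self, subject: &str, inv: Invocation) -> Option<String> {
        self.handlers.get(subject).map(|h| h(&inv))
    }
}

fn main() {
    let mut bus = Bus::default();
    // A key-value provider subscribes; it could just as well live on
    // another machine reached over the same subject.
    bus.subscribe(
        "kv.get",
        Box::new(|inv: &Invocation| format!("{}: value-for-{}", inv.operation, inv.payload)),
    );
    let reply = bus.request(
        "kv.get",
        Invocation { operation: "Get".to_string(), payload: "joke".to_string() },
    );
    println!("{reply:?}");
}
```

On a local loopback the hop is cheap; the payoff is that nothing about the caller changes when the provider moves to a different host.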