Hi everyone. We're really excited to be at CloudNativeCon North America, Wasm Day 2021. It's a real honor to be selected to talk today, and we have some really exciting stuff packed into the next 20 minutes or so. We want to talk about the use of WebAssembly server side and how it's driving a revolution in platform design. Coupled with products like wasmCloud and NATS, WebAssembly is creating a new paradigm for cloud native, which, as new paradigms should, eliminates entire classes of problems that we struggle with today when building distributed applications. But before we dive into that, let's introduce ourselves. I'm Stuart Harris, founder and chief scientist at Red Badger. And I'm Ayush, a senior software engineer at Red Badger, and I'll be going through the demo later on. We're a London-based consultancy that's obsessed with helping organizations streamline their ability to deliver digital products and services. We're very honored to be here, and a big thank you to the CNCF for organizing this event today. As a consultancy, we've experienced the pain, over and over again, of building microservice applications with today's architectures. It's not straightforward by any stretch of the imagination, and there's a lot to think about. One of our clients, one of Europe's largest banks, engaged us recently to describe what a future-state platform architecture could look like. They want to prepare for a world where they can easily deploy workloads securely and reliably across on-premise and any cloud, without having to constantly adjust network topology to suit. It's really refreshing that large enterprises are thinking ahead like this, but that thinking is driven by the pain of working with today's complexity. We've built a proof of concept to show what such a platform might look like. It's a really cool demo, and Ayush is going to show you in a minute, but first some background.
It seems to us as though the last decade has been characterized by a rush to the cloud. Now that we're there (well, most of us), we're rightly worried about having all our eggs in one basket, and we've realized that it's still far too complicated. It's easier than filling out a form and sending it to the IT department, but the burden of managing infrastructure is all too real. I think the next decade will be characterized by a rush out of the cloud: not out completely, but out enough to be independent of any one provider, and out enough to make good use of the edge, on-premise, the Internet of Things, and so on. If you're in a regulated industry like finance or healthcare, the regulator will insist that you have a get-out plan. These plans aren't real, though; you wouldn't be able to execute them, at least not quickly or cheaply. They're really just there to satisfy the regulator. The cost of moving everything to another cloud provider would be prohibitive. Containers and Kubernetes have been a huge net positive for our industry. It's the first time there has been a standard, consistent, and uniform way to deploy our applications regardless of which cloud provider we want to use. But we're still locked into cloud-provider-specific services. AWS is famous for the number of services it keeps introducing, and every provider-specific service that we use locks us in even further. Kubernetes was the first step on this ladder, but what's the next step? It definitely needs to be multi-cloud, or I should say multi-location, and to do that, it needs to sit above the HTTP/TCP/IP networking stack that dominates microservices today. But multi-cloud is not really a thing, not yet, not transparently. The best we can do today is to join discrete Kubernetes clusters together in a service mesh using point-to-point connections between cloud providers. And we can't yet have workloads that are truly location-independent.
The next step on the ladder out of the cloud, or let's say above the cloud, is WebAssembly running server side on one of the many runtimes that we have now, probably with a platform like wasmCloud running on top of NATS, on top of Kubernetes for now, on top of cloud. Today, I want to talk about how these technologies are changing the game completely. So let's start with a demo. Ayush is going to blow our socks off. Cool. Thanks, Stu. So yeah, as Stu mentioned, I'm going to be demonstrating service fault tolerance in a multi-cloud architecture. In other words, we're going to be running the same application code in two different clouds, or rather, two different Kubernetes clusters within them. And we'll see, once one of the actors becomes unavailable or unhealthy for whatever reason, how the service fails over to a healthy actor running in the other cluster. But before we jump into the demo, I wanted to talk a little bit about the architecture of a platform when you're working with wasmCloud. So here we have two cloud providers, GCP and AWS. Within each, you have this dotted container, which is essentially our pod: it's running on Kubernetes, and it has a pod and a service. So we have a wasmCloud host runtime running as a pod, on which you can schedule your Wasm workloads and scale them, and obviously, by leveraging the Kubernetes capabilities, you're able to scale your pods as well. That's what a platform looks like when you're working with the wasmCloud host runtime. But for this proof of concept, the architecture we went with was having one wasmCloud host runtime pod for each type of Wasm workload that we wanted to run. So as you can see, our wasmCloud pod B is running our business logic, and our wasmCloud pods A and C, respectively, are running our capability providers.
It just becomes easier for us to identify, and also to simulate an outage, when we have this kind of architecture, so that's why we went with something like this. Now, about our business logic: we have a to-do application. It's just another to-do application that supports all the CRUD operations and complies with the Todo-Backend spec. We have a to-do actor running in both of the clouds, and we want to be able to make requests to those to-do actors running in the different clouds, so we have an external IP for each. That's why we have two HTTP capability providers as well, one in each cluster, so we can make requests to our to-do actors. And then we have just one database. This was to represent our client's mainframe system, where they have only one database and everything is built on top of that. And something to note: we have our business logic, but the capability providers are essentially the I/O layers of the onion that help your services interact with the outside world. But how does all this networking work inside the cluster? I'm just going to zoom out so it's visible. We have our pods running in both of the clusters, but they communicate with each other over NATS. With NATS, you have topics, and services subscribe to those topics, so that's how messages are passed through. And in a scenario where a service wants to talk across to the other cluster, we are using NGS, which sits right between both of our clusters. NGS is the NATS global service: essentially another NATS cluster acting as a gateway between your clusters, so a message is propagated up and then down to the other cluster. So now it's demo time. We have our terminal here, which is a lot of terminal windows.
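A quick aside on those NATS topics: NATS routes messages by subject, a dot-separated string of tokens, where a subscription pattern can use `*` to match exactly one token and `>` to match all remaining tokens. Here's a minimal Python sketch of those matching rules (the example subjects below are invented for illustration, not taken from the demo):

```python
def subject_matches(pattern: str, subject: str) -> bool:
    """Minimal sketch of NATS subject matching: '*' matches exactly one
    token, '>' matches one or more trailing tokens."""
    p_tokens = pattern.split(".")
    s_tokens = subject.split(".")
    for i, p in enumerate(p_tokens):
        if p == ">":
            # '>' must be the last token and match at least one remaining token
            return i == len(p_tokens) - 1 and len(s_tokens) > i
        if i >= len(s_tokens):
            return False
        if p != "*" and p != s_tokens[i]:
            return False
    return len(p_tokens) == len(s_tokens)

# Hypothetical subjects, loosely in the style of lattice RPC traffic
print(subject_matches("rpc.default.*", "rpc.default.todo_actor"))  # True
print(subject_matches("rpc.>", "rpc.default.todo_actor"))          # True
print(subject_matches("rpc.default", "rpc.other"))                 # False
```

The NATS server performs this matching for every published message, which is why a service only ever sees messages on the subjects it subscribed to.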
The reason we have this kind of setup is that, if you go back to the logical representation I showed you earlier, on the left-hand side we show all the GCP stuff, and on the right-hand side all the AWS stuff. We just wanted to replicate that in the terminal, so it's easier to visualize in this demo. So, we're going to use the two top windows here as our clients. I'm just going to be using curl, so it's curl to GCP and curl to AWS. And in these two middle windows, I'm going to be using Stern to pull the logs from the to-do actors. As you can see, the context is GCP, running the to-do actor, and we're excluding some of the noise we get in our logs from the health checks and so on, which just makes it easier to identify the logs we care about. So I'm just going to run that, and now we are pulling all the logs. And similarly, I'm going to do the same for AWS. So in the middle windows, we are listening to the pods: on the left, the actor running in GCP; on the right, the actor running in AWS. And at the bottom, we have our ops windows. This is where I'll be doing some kubectl fun and pretending to be a chaos monkey: I'm going to be taking down actors and bringing them back up. So this is the setup, and hopefully it's easy enough to follow. If I curl here, we can see we're getting some logs from the actor running in GCP, and if I make a curl request to AWS, we see some logs from the actor running in AWS. But we don't have any to-dos at the moment, so we might as well go ahead and create one. We can create a to-do that goes to our GCP service, saying "hello to GCP", just to make it easy to identify. We make the request, and our actor running in GCP handles it.
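The actor's business logic itself is nothing exotic: plain CRUD against the single shared store. As a purely hypothetical sketch (this is not the real actor code, which would go through a key-value capability provider rather than hold state in memory), the shape of it is roughly:

```python
import itertools

class TodoStore:
    """Hypothetical in-memory stand-in for the demo's single Redis database."""

    def __init__(self):
        self._todos = {}
        self._ids = itertools.count(1)

    def create(self, title: str) -> dict:
        todo = {"id": next(self._ids), "title": title, "completed": False}
        self._todos[todo["id"]] = todo
        return todo

    def get_all(self) -> list:
        return list(self._todos.values())

    def update(self, todo_id: int, **fields) -> dict:
        self._todos[todo_id].update(fields)
        return self._todos[todo_id]

    def delete(self, todo_id: int) -> None:
        del self._todos[todo_id]

# Because both clusters talk to the same store, a to-do created through
# one cluster's actor is immediately visible through the other.
store = TodoStore()
store.create("hello to GCP")
print([t["title"] for t in store.get_all()])  # ['hello to GCP']
```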
Similarly, let's curl to show how our actor running in AWS is still able to fetch the data from the Redis running in GCP. It has consistency already; it can see the data. And that was quick. I'm going to go ahead and create a to-do from AWS as well, so we have two easily identifiable to-dos. I'm just going to call this one "hello to AWS". And before I do that, I should clear these consoles. So I'm going to go ahead and create that. As expected, our request to AWS is handled by the actor running in AWS. Now I'm just going to make a GET request again, and we should be able to see the two to-dos that we just created. There they are: our "hello to GCP" and "hello to AWS". And similarly here, we can see both of the to-dos when we curl from GCP. What I'm going to do next, as I said, is be a chaos monkey: I'm going to go ahead and delete the actor running in GCP. This is the business logic that handles our incoming requests, whether we want to create a to-do or get the to-dos. So what have we actually done? Initially we saw the happy path: the solid green lines where we made the request to GCP, and it did some work and responded back. And then we did the same with AWS, where the solid green lines show the path where the request is made to the to-do actor, goes across to the Redis database in GCP, does the work, and then responds back to the client. Now, by deleting the actor, we've done something that's represented in this diagram: we've deleted the actor running in GCP. Let's see what happens when the actor has been removed. That actor should be deleted now, so we can get the pods from GCP and see that the actor has been removed and we only have the HTTP capability, the Redis capability, and Redis itself.
And if we do the same thing for AWS and get the pods that are available right now, we should have both the actor and the HTTP capability available in AWS. So we have one healthy actor, but it's in a different cluster. Let's see how that works for us. If we make a request to GCP, in theory our healthy actor should pick up that request and handle it. And you can see that in the logs now. Just one more time: we make a request to GCP and it's worked, and the work is done by the healthy actor running in AWS, in a different cluster. And when we make a request to AWS, it behaves just as it should, because it has a local actor to do the work. Cool. So now I'll just bring that actor back up so we can see how the cluster recovers. We can see the logs coming through: our pods are coming up, and our wasmCloud host is coming back up as well. And we can quickly go back to the diagram. What we just witnessed was that I took down the actor running in GCP, which made our service fall onto the unexpected path, or the unhappy path, represented by the dotted line. The request went across to the healthy to-do actor running in a completely different cluster, which did the work by querying the database back in the same cluster the request originally came from, and resolved the request successfully. And what we've done just now is bring the service back up, which means we're back to where we started, with the full service: both to-do actors healthy, and things back to how they were at the start. So if I make a request now to GCP, we can see our actor is back up and can handle the request again. And we can do a similar thing... sorry, actually, that's AWS. Before I move on, I just want to show you that the actor is back up. So we have an actor pod running.
We didn't have that before, and now it's back up, and that's why we got a successful response. Similarly, we can see on AWS that we still have an actor available. So what we'll do is delete that actor as well, so we can see how both of the clusters handle the failover. Go ahead and delete that. What we're essentially doing now is something like this: on AWS, the only thing we'll be left with is the HTTP capability provider, and that's it. We'll see how that works out for us in a minute. So I'm just going to wait for the pod to terminate; we'll do a watch command here so we can see the pod terminating. What this is essentially doing is getting rid of our wasmCloud host runtime with all the actors scheduled on it, which should leave us with just one service, one pod, which is the HTTP capability, and that's all. So now, if we make a request to AWS, it still works, and the healthy service running in GCP is handling our work. Even though we have nothing here, see, nothing other than the HTTP capability. There are no actors that can do the business logic; there isn't even a database in the AWS cluster. It can still handle the incoming request successfully. So I'm just going to go ahead and bring that service back up again, and we'll see it recover just as we saw previously. You can already see in the logs that the pod is coming back up, and what we'll end up with is back to where we started, with both of our actors in place. And there, we can see the actor module has started. So I'm going to go ahead and clear, and for one last time, we can see the recovery of our service back to where we started. We can see the logs coming from the clusters they respectively belong to.
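The failover behaviour we just watched comes largely for free from NATS semantics: subscribers that share a queue group on a subject split the traffic between them, and messages are only ever delivered to members that are still connected. Here's a toy Python model of that idea (the real load balancing happens inside the NATS server; the names here are invented):

```python
class QueueGroup:
    """Toy model of a NATS queue group: each message goes to exactly one
    currently-registered subscriber; removing one reroutes the traffic."""

    def __init__(self):
        self.subscribers = {}  # name -> handler

    def subscribe(self, name, handler):
        self.subscribers[name] = handler

    def unsubscribe(self, name):
        self.subscribers.pop(name, None)

    def publish(self, msg):
        if not self.subscribers:
            raise RuntimeError("no healthy subscribers")
        # Pick any live member; a real server balances across them.
        name, handler = next(iter(self.subscribers.items()))
        return handler(msg)

lattice = QueueGroup()
lattice.subscribe("todo-actor-gcp", lambda m: f"handled in GCP: {m}")
lattice.subscribe("todo-actor-aws", lambda m: f"handled in AWS: {m}")

print(lattice.publish("GET /todos"))   # served by the GCP member
lattice.unsubscribe("todo-actor-gcp")  # chaos monkey strikes
print(lattice.publish("GET /todos"))   # fails over to the AWS member
```

Re-subscribing the GCP member models the recovery we saw: the moment the actor's host rejoins the lattice, it is eligible for traffic again.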
So what we saw, essentially, is our services running in two clusters: actors running in two different wasmCloud runtimes, in two different Kubernetes clusters, in two different major cloud providers. And we saw our service seamlessly failing over, following the unexpected path, and then recovering automatically as the actor comes back up. So yeah, that's all I had to show, and now I'm going to hand back to Stu. Thanks, Ayush. That was absolutely amazing. I think this is the first time I've seen anything like this, really, where you can have a true multi-location cluster spread across geographic locations around the world. And it's made possible because of WebAssembly for lightweight portable workloads, wasmCloud for the application runtime, and NATS for location independence. I'm absolutely convinced that in the next few years, or maybe even months, server-side WebAssembly runtimes will make significant inroads into platform design, subtly influencing our software architecture along the way. And I say this because there's another thing happening here. Microservices that today are delineated by network boundaries will give way to much smaller, lighter actors, compiled to WebAssembly, that talk over potentially global messaging backplanes. And importantly, they'll be freed from almost all the boilerplate code that sucks up so much of our energy today. This is already happening, and this diagram shows the evolution. On top of Kubernetes today, we can layer Istio or another service mesh to remove boilerplate for network-related concerns such as traffic routing, role-based access control, and so on. We can also layer Dapr on top of Kubernetes, which removes boilerplate for application-related concerns like talking to a key-value store or a database. These help us make our services smaller and more focused. Then we have the paradigm shift with WebAssembly and wasmCloud.
This is where our services become more like actors, sitting above the network layer and representing our pure core. In the future, this may go even further, hosting immutable functions, for instance, on platforms such as the upcoming Unison Cloud. Imagine a typical microservice today. Say it's built with Rust (good choice) and placed in a Docker container, say on Debian Buster Slim. That's an operating system, which we need because we want to talk to services over HTTP, so we need a networking stack, and we want to write logs to standard out. We also want to talk to a database, so we need to pull in all the code to do that. Now we're quite chunky, and we need direct access to a network, and routes configured to allow us to reach the database and for our consumers to reach us, maybe firewalls, or at least security groups and the like. Crucially, we need our team to be DevOps-capable so that we can manage all this stuff. And before we know it, we've spent a lot of time not working on our core business value. If we visualize the onion architecture, or ports and adapters, or hexagonal architecture, or clean architecture, or whatever we want to call it, we basically need to strip away all the outer layers and just focus on our core application and business logic. We want to push the side effects to the edge so that we can keep the core pure and therefore simple to test, and we want to iterate on this core quickly so that we can deliver real value to our customers. Dapr allows us to do this today for microservices on Kubernetes. But wasmCloud takes this even further. We now have actors that consist of our core logic, compiled to WebAssembly, that can run literally anywhere. We can use WebAssembly to host anything, including wasmCloud itself, but in our demo, we hosted wasmCloud in a container on Kubernetes. The application runtime is built with Elixir/OTP, leveraging years of battle testing, and it schedules capability providers and actors.
The actors talk to capability providers through a contract declared with Amazon's Smithy IDL, which is protocol- and language-independent. The capability providers are where the side effects are; we build them once and reuse them, or we use a first-party provider out of the box. They're a bit like the frameworks of old, but independently scalable and resilient, and also language-agnostic. This theme has come through from Istio and Dapr: it's the same sort of thing, pulling this stuff to the edge and doing it once instead of every single time. So wasmCloud has this concept of composable actors, compiled from any language into WebAssembly, that talk to capability providers, which can be either first- or third-party. Actors are scheduled to run on hosts that self-form into a self-healing lattice made from a NATS backplane. NATS is crucial here. It's amazing; it's the key to all of this. A simple yet incredibly powerful pub/sub-based messaging infrastructure, which is itself a platform for building platforms. That sounds familiar. Any NATS infrastructure can be used, and in the demo that Ayush did, we used NGS, which has endpoints all over the world. It's a global communication system from Synadia. If a wasmCloud host can see a NATS leaf node, then it can join a lattice, and hosts, or nodes, can join and leave this self-healing lattice. Hosts are just compute that can be donated from anywhere: any cloud, on-prem, any edge or IoT device, even a web browser tab, because, as I said earlier, wasmCloud itself has been compiled to WebAssembly and can run in a web page. For the first time, as far as I'm aware, we can have truly global clusters spread across geographical locations, all because of NATS and wasmCloud. NATS flattens the wasmCloud lattice and elevates it above the network, allowing it to be independent of network topology: no more firewalls or perimeter-based security models.
But NATS is even more secure than that, making extensive use of public-key cryptography to provide a secure, multi-tenant substrate, or backplane, on top of which we can easily build globally distributed systems. So what does a next-gen platform look like? In a large enterprise, you might expect a single global multi-tenant NATS backplane to be the ceiling for the infrastructure team and the floor for the platform team. wasmCloud becomes the ceiling for the platform team and the floor for the application teams, and it provides a great developer experience for building modern distributed applications with high velocity and a core focus on building customer value. That's us. We hope you've enjoyed the talk and the demo. Thanks, Ayush. Thanks for listening. We're here for any questions if you have any, and these are our details, so hit us up. Thanks.