All right, well, I think it's about time to begin. Thanks, everybody, for coming. My name is Joel Dice. This is Joe. We're here to talk to you about WASI Cloud Core, which is a set of specifications, a set of worlds, if you will, to use the WIT terminology, that allow applications to target a native cloud environment. And if you attended Brendan Burns' keynote this morning, you probably heard some of the rationale for that. But the basic idea here is that cloud applications don't necessarily want to deal with traditional operating system sorts of interfaces, such as low-level file system calls, environment variables, and so on. Instead, they want to deal with high-level concepts, such as key value stores, messaging queues, HTTP, and so on. So we're going to use the Spin open source project as a showcase for an implementation of WASI Cloud Core and talk a bit about the motivation behind it, as well as the potential for making this a portable platform that could run on a variety of infrastructure, cloud providers, service providers, and so on. So like I said, my name is Joel. I'm not really on social media, so you can just email me, or you could probably catch me on the Bytecode Alliance Zulip if you have a quick question. Always happy to chat. I am a principal engineer at Fermyon. Been there for about a year. Primarily, my focus has been on improving programming language support, making WebAssembly more truly polyglot, and also working on a lot of other things: trying to help get WASI Preview 2 shipped, and also helping with this Cloud Core standard. I've implemented parts of wasi-messaging and wasi-http, and the good news is it was pretty easy to implement. Some of my hobbies are bikepacking, which, if you don't know what that is, is like backpacking except there are more crashes. Actually, that's where this little scar came from.
And then wrestling with my two boys, who are very cute and also kind of rough when you get in the ring with them. And here's Joe. Hey, everyone. My name is Joe. I'm a software engineer at Microsoft, and Microsoft is pretty huge; I work under Azure Container Upstream. I work on various open source projects under the Bytecode Alliance, in CNCF containerd, and others. I'm a programming language enthusiast. I really love the functional programming paradigm that exists in languages like Haskell and Rust. Besides work, I love playing video games, skiing with friends, and lifting weights against gravity. All right, today's talk is split into three parts. First, I'm going to talk about what WASI Cloud Core is and why it exists. Then I'll hand off to Joel to talk about Spin. And at the end, if we still have time, we'll do a demo of Spin running WASI Cloud Core capabilities. To motivate things, I want to first talk about monolithic applications. It's a pretty popular programming model that Google and Meta are still using today. One characteristic of monolithic applications is that they are typically developed on a single runtime in a single family of languages. An example would be the Java Virtual Machine and applications written in Java. Applications need a lot of common capabilities: state management, networking, error handling, logging, telemetry, fault detection and recovery, and connecting to external services. One consequence of developing an application on a single runtime is that you have to use the libraries and SDKs from that single language ecosystem. The result is that those libraries become tightly integrated with the core business logic of the application. Moving on to the cloud-native ecosystem, there is a tendency for these application features to move to the platform level. Nowadays we have a strong sense of separation between application developers and platform engineers.
A lot of the application ops side, like packaging, deployment, health checks, and fault detection and recovery, is being automated by orchestrators like Kubernetes and by containers. Applications are required to run on different platforms, OSs, and CPU architectures. If you had asked me five or six years ago, Linux containers were mostly compiled to run on x86, but today when we package applications into containers, we want to build them in a cross-platform manner. There is also a stronger security requirement for cloud-native applications. So what can Wasm provide here? Well, if you went to Luke Wagner's talk yesterday on what a component is, he summarized three characteristics, or advantages, that Wasm provides to accelerate these cloud-native tendencies. One is language neutrality: you can write your services in whatever language can compile to Wasm. Second is sandboxing, so you can run your applications securely. And third is linkable modules, meaning you can link modules together and allow them to communicate with each other. You might ask, what about containers? Well, containers give you language neutrality: you can package your application, in whatever language, into a container and deploy it to a production environment. Containers also use Linux kernel features like cgroups and seccomp to give you sandboxing. Containers can also be composed together by orchestrators like Kubernetes. I will argue that containers and Kubernetes do a great job on application lifecycle management and on automating a lot of operations work: scaling, deployment, networking, fault detection and recovery, and much more. But inside a container, you still write your application using language-specific SDKs or libraries to do data plane operations, for example accessing a key value store like Redis, or listening to events from Apache Kafka. So I'd like to shift focus and go back to the year 2004, when there was a paper from Google.
It's called MapReduce. MapReduce is a classic and famous programming model that allows you to do large-scale data processing on a cluster of hundreds or thousands of machines. What I find fascinating about MapReduce is that it gives the user two simple APIs to implement: a map function and a reduce function. Once you implement those two simple functions, it spawns a huge distributed system to process a lot of data; that's how Google processed large data sets. In this particular example, I have a map function that takes a document name and the document contents, scans each word in that document, and emits a one for that word. The reduce function takes a word and all of those emitted ones, as a list or iterator of values, and sums them up. That gives you the number of occurrences of the word across all of the documents. So this is a distributed word count program. What if we could abstract common distributed application capabilities into interfaces that are as simple as MapReduce, and as powerful too? Well, first we need to define common distributed application capabilities. What do I mean by common? I will argue that common means we can satisfy 80% of application needs, and from the experience of building cloud-native applications, we need things like accessing key value stores for durability, exchanging events through messaging brokers, uploading files to blob storage, making inbound and outbound HTTP requests, listening for key changes in runtime configuration, and some locking mechanism for the critical paths of your transactions, and maybe more. So let's dive into one item in that list of common capabilities. I'd like to talk more about key value stores. We want to find MapReduce-style APIs for key value stores, but there are so many key value stores out there, and each one of them has its own flavor or unique set of features. For example, memcached is commonly used as a caching layer, and we have cloud providers like Azure Cosmos DB and Amazon DynamoDB.
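To make the word-count example above concrete, here is a minimal single-machine sketch in Python of the two functions a MapReduce user implements. The function names and the in-memory driver are illustrative only; the real framework distributes the grouping and reduction across thousands of machines.

```python
from collections import defaultdict

def map_fn(name, contents):
    # Emit (word, 1) for every word in the document.
    for word in contents.split():
        yield (word, 1)

def reduce_fn(word, counts):
    # Sum all the emitted ones for a given word.
    return sum(counts)

def run_word_count(documents):
    # Toy in-memory driver standing in for the distributed framework:
    # group the mapper's output by key, then reduce each group.
    groups = defaultdict(list)
    for name, contents in documents.items():
        for word, count in map_fn(name, contents):
            groups[word].append(count)
    return {word: reduce_fn(word, counts) for word, counts in groups.items()}
```

Running `run_word_count({"doc1": "to be or not to be"})` yields `{"to": 2, "be": 2, "or": 1, "not": 1}`: the user writes only the two simple functions, and the framework supplies the distribution.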
We have NoSQL databases like MongoDB and Cassandra. So in order to find an API set that works for lots of key value stores, we need to find the minimum set of APIs that every single key value store implements, while at the same time being useful for application developers. I believe a simple interface like readwrite, which gives you get, set, delete, and exists functions, is pretty useful. You can use those functions to access your key value store, retrieve a value given a key, or set a key value pair, and this interface can be implemented by all of the key value stores mentioned before. That gives us business logic decoupled from platform concerns, because now key value stores are provided by the platform; they're out of scope for application developers implementing core business logic. That makes your business logic more portable, simpler, with fewer dependencies, and more modular. Well, you may say this idea is not new. We have sidecar containers, and there is a popular runtime framework, Dapr, already doing this. With a sidecar container, the business logic makes a request to the sidecar container, and the sidecar container takes care of accessing the key value store. In our case, we can lean on Wasm's linkable modules to provide better performance. Instead of doing a full networking-stack call from the business logic to a sidecar container, what we can do is a cheap function call from your business logic, compiled to Wasm, which is composed, or linked, with a Wasm key value interface provided by a guest component or by the host platform. All right, so we think this idea is pretty useful, and we went through every other capability that I listed before, and we are standardizing all of these capabilities, like wasi-keyvalue, wasi-blobstore, wasi-messaging, and wasi-sql, under the WebAssembly organization at the W3C.
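As a rough illustration of how small that surface is, here is an in-memory Python stand-in for a key value bucket exposing the four operations just described. The class and method names are mine; a real guest would call the host-provided wasi-keyvalue bindings rather than a local dict.

```python
class Bucket:
    """In-memory stand-in for a key value bucket with the four
    operations from the talk: get, set, delete, exists."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        # Retrieve the value stored under key; raises KeyError if absent.
        return self._data[key]

    def set(self, key, value):
        # Store a key value pair.
        self._data[key] = value

    def delete(self, key):
        # Remove the key and its value.
        del self._data[key]

    def exists(self, key):
        # Check whether the key is present.
        return key in self._data
```

Because the interface is this minimal, it can plausibly be backed by memcached, Redis, DynamoDB, Cosmos DB, and so on; the application only ever sees the four calls.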
And then we think we can bundle all of them together to create a Cloud Core world that targets serverless functions, and so we came up with this proposal in WASI; you can check out the proposal via this QR code. Okay, what's the tradeoff here? What's the cost when you use WASI Cloud Core? I think there is a fundamental tradeoff between portability and feature richness. If I draw a diagram where the y axis is portability, the higher you go the more portable the application is, and the x axis is feature richness, there might be a curve like this. If we draw dots on that curve, there is a dot on the left side called wasi-http proxy. It's a very simple interface that allows you to handle incoming and outgoing HTTP requests. An application targeting this world is extremely portable: it can move from your local environment to a cloud environment to an edge environment, at the cost of its minimal API set, so you don't get many features. On the other side, if you use a specific provider like AWS or Azure, you get all the powerful features that cloud provides, but you can't port your application from one cloud to another. But I think there is a sweet spot in the middle: WASI Cloud Core, which gives you 80% of the features that you need in your application while at the same time being portable, so you can move from one cloud to another. And this idea, as I said, is not new. Dapr is standardizing APIs for common distributed applications in a special interest group called SIG API. And we are happy to announce that we are working with the Dapr team to standardize the APIs in the community so that we can unify them; at the same time, we can interoperate between WASI Cloud Core and Dapr. We are also working with external implementers to push the proposal forward. To meet the requirements of phase two, we need to work with implementers to prototype and refine the design.
And then eventually, when we reach the stable version of WASI Cloud Core, we need at least two production-ready external implementers. Earlier this year at Wasm I/O, my colleague Dan and Bailey from Cosmonic did a talk about CNCF wasmCloud implementing WASI Cloud Core capabilities. In the last few months, I've been working with the Fermyon team to bring WASI Cloud Core capabilities into Spin. So now I'm going to hand it off to Joel to talk more about Spin. Thank you, Joe. Yeah, so I'd like to talk a little bit about what Spin is, why it exists, and how it's relevant to WASI Cloud Core. The first thing I'll say is that, in a nutshell, it's a tool for building and running event-driven microservices. So think functions-as-a-service, serverless types of workloads, that don't just handle events but use a fairly rich set of functionality such as key value stores, SQL, messaging, pub/sub, that sort of thing. And so, as you might imagine, WASI Cloud Core is of great interest to us. It's built on Wasmtime, which has a really interesting set of features for optimizing for low startup latency and quick switching between multi-tenant workloads. It's entirely open source, and it has an open process: we use Spin improvement proposals, from the community and from Fermyon, to move the project forward. So it's not just your typical corporate throw-code-over-the-wall type of thing; it's a true community. And it's also both internally modular, in the sense that you can plug host components in as desired, and extensible via plugins, which can either be native, written in any language, or written in WebAssembly itself. So why would we have chosen WebAssembly to run a service like this? Well, there's a variety of reasons, and it turns out that a lot of the reasons why WebAssembly is such a good fit for a web browser also make it a good fit for a multi-tenant cloud scenario. The first, of course, is security.
Each request that comes into Spin is handled by its own instance, which is isolated from all other concurrent, prior, or subsequent requests. And not only that: with the component model support that was introduced experimentally in Spin 1.1, we now have the per-component sandboxing that Luke talked about yesterday. The idea there is that if you have sensitive information, you can ensure that only a subset of your code actually has access to that information, and third-party dependencies are verifiably not able to access it. Another big reason, again inspired by the browser use case, is portability. It's platform agnostic; we talk about that to death, but it's really important. It also means the development experience is a lot smoother, especially if you're not on a Linux platform on your laptop or what have you. There's no VM to manage; there's no overhead in running a WebAssembly workload, at least not at the scale of a virtual machine running a whole other kernel. And then performance is a big deal: being able to pre-initialize these modules and/or components results in sub-millisecond startup times. And I think there's a relationship between these two things, portability and performance. Hypothetically, you could write a state-snapshotting tool for a native process, say a Linux process running on x86. It would take a lot of work, but you could do it. And then it would only work on Linux on x86, and you'd have to do it again for Windows on x86, and again for Mac on ARM, and so on and so forth. And none of the products of these snapshots would be runnable on other platforms. So eventually you'd probably throw up your hands and say it's not worth it. Whereas with WebAssembly, I literally implemented one of these pre-initializers in an afternoon. It wasn't that difficult. So performance, big deal, and portability. And then finally, related to performance, cost efficiency.
One of our big theses at Fermyon is that we can deliver serverless computing features at lower cost, and that tends to be a result of this quick startup and teardown experience that you have with Wasmtime. On the next slide I'll go into what I mean by this. The analogy I sometimes use here is a utility company providing electricity to customers. If you had a utility company that only had a handful of customers, they would have a really hard time delivering power efficiently, because the load would be so variable. As soon as customer A turns on their air conditioning, suddenly you've doubled the load, and spinning up a steam turbine is not something you want to do frequently. So having a small number of customers would make it really hard both to handle the peak load and to minimize the gap between that and the average load. And that's what we see illustrated here. The dashed line near the middle is the average load, but you see there's a big gap between that and the peak load, when the peaks of these tenants' resource usage happen to coincide. Contrast that with what happens if you increase the number of tenants: just adding a couple more tenants here, you start to see that gap close. And that's really important for ensuring that you are providing the amount of capacity that's appropriate to the workload. That gap, assuming you have a lot of uncorrelated, diverse workloads, will continue to shrink, and thus you're utilizing your resources, your hardware, your electricity, et cetera, more efficiently than you did in the less multi-tenant scenario. And then, maybe this is an obvious question to people, but why WASI Cloud Core and Spin? Well, like I said, Spin has been providing these elements of functionality from day one, but at the time there wasn't, at least in the Wasm world, a standard for expressing these things.
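The load-smoothing argument above can be checked with a tiny simulation: as the number of independent tenants grows, the aggregate peak load gets closer to the aggregate average load. The workload model here, each tenant's load as an independent uniform random draw per time step, is made up purely for illustration.

```python
import random

def peak_to_average(num_tenants, steps=1000, seed=7):
    # Model each tenant's load at each time step as an independent
    # uniform random draw, then look at the aggregate across tenants.
    rng = random.Random(seed)
    totals = [sum(rng.random() for _ in range(num_tenants))
              for _ in range(steps)]
    return max(totals) / (sum(totals) / len(totals))

# With a handful of tenants, peaks occasionally coincide and the
# peak-to-average ratio is large; with many uncorrelated tenants,
# the ratio shrinks toward 1, so less capacity sits idle between peaks.
few = peak_to_average(3)
many = peak_to_average(100)
```

With more tenants the gap between provisioned (peak) capacity and average utilization narrows, which is the cost-efficiency claim in a nutshell.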
And so you tended to have to use Spin's own SDKs to target Spin, but now the vision is that you can use Spin or another tool to build apps that target Spin or any other platform that provides these same interfaces. And then just a few more things about Spin. Language support: this is something I'm passionate about. I'll be giving a talk about Python later today. We fully support Rust and Go. We have pretty mature, but still technically experimental, support for TypeScript, JavaScript, Python, and .NET. We're also on the bleeding edge of developments for the component model. The next-generation Python support is what I'll be talking about later. Guy's work on jco and ComponentizeJS is what we're hoping to base the next-generation Spin JavaScript SDK on. And then I've also talked to a few of you about the future of the JVM family of languages on WebAssembly. That one's still a little bit more up in the air; I gave a talk last year at Wasm Day, and we can talk more about that later if you want to, but there's a lot of dynamics going on there. And then, unofficially, people have written Spin apps in a variety of languages, both obscure and popular, including Swift, Zig, Haskell, Idris, what have you. Anything that can target WebAssembly could potentially run on Spin, depending on how much work you want to put into it. And hopefully some of those SDKs will graduate to official status. And then finally, Spin hosting options. With WebAssembly, the big tagline is that it scales to zero, and it can scale way up as well. So you can run it on Kubernetes; there's a variety of options there: there's a containerd shim, there's KWasm, and AKS has built-in support for it. You could bring your own orchestrator, whether it's distributed or local, like Nomad or systemd. You can even run it on a single-board computer sitting in your closet. So you choose your own adventure there. And then finally, if you don't want to mess with platform engineering at all, at any scale, you can run it on Fermyon Cloud.
And then, I mentioned that we really feel WASI Cloud Core is a great and natural fit for Spin. We've actually supported the component model, and thus WASI Preview 2, things like file system access and so on, since Spin 1.1. I say "supported" because it's unstable: you have to make sure the versions of all your tools line up. Once Preview 2 is released, we will make a Spin release where that support actually is stable. And we've done some experimental work that we're going to demo here today, implementing wasi-http, wasi-keyvalue, and wasi-messaging, and wrapping all that up into a nice little web crawler demo. All this stuff exists in a fork right now, because everything's changing so quickly, but as it stabilizes we will merge it into upstream Spin. This is my last slide. The last thing I want to mention is the state of asynchronous WASI. This was alluded to in a few different presentations, including Luke's yesterday: the state of asynchronous and concurrent I/O in WASI is not where we want it to be, but it's actually surprisingly straightforward for some languages. For any language that is based on essentially stackless coroutines, using this async/await syntax, with a little bit of glue code you can create a very idiomatic experience even with WASI Preview 1, and certainly with WASI Preview 2. It's going to get even better with Preview 3, but I don't want to scare people off into thinking they can't do concurrent I/O. We'll show you: it can be done, it can be idiomatic, and Preview 3 will make it composable. And then for languages that are more focused on a stackful coroutine model, such as goroutines or the new Java fibers, it's a little bit more awkward: you have to use the Asyncify transformation. Again, that should be eliminated, and you'll have fully idiomatic native support in Preview 3. And with that, I'll hand it back to Joe. All right, it's time to see a demo. If you want to see the source code, I have a QR code here.
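To illustrate the async/await point made above, here is a sketch in plain Python with asyncio of the shape concurrent fetches take in a stackless-coroutine language. The `fetch` body is a stub of my own; in an actual Spin component, the awaits would bottom out in host-provided I/O rather than asyncio.

```python
import asyncio

async def fetch(url):
    # Stub standing in for an outbound HTTP request; with a little
    # glue code, this same async/await shape works on top of WASI.
    await asyncio.sleep(0)
    return f"<html for {url}>"

async def crawl(urls):
    # Fetch every URL concurrently and keep the results in input order.
    return await asyncio.gather(*(fetch(u) for u in urls))

pages = asyncio.run(crawl(["a.example", "b.example"]))
```

The application code stays idiomatic; only the glue underneath changes as Preview 2 and eventually Preview 3 land.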
Let me switch over to here. So the demo we did with Spin and WASI Cloud Core is a web crawler. If you don't know what a web crawler is, it's a job that kicks in when you give it a URL: it fetches the HTML from that URL, finds all the other URLs in that HTML, and then recursively goes to crawl the other websites that can be reached from the single source you provided. All right, I'm going to make this bigger. Is this good enough, folks? All right, so the web crawler application targets a crawler world. In this crawler world, we are importing a messaging producer, which allows you to send messages to a broker. We are importing the outgoing handler, which allows you to do outbound HTTP requests. We are importing the wasi-keyvalue readwrite interface I showed you in the previous slides, which has get, set, delete, and exists functions. All of those imports, well, where are they implemented? They are implemented by the host, which in this case is Spin. We implemented them in a branch of Spin that has wasi-messaging, wasi-keyvalue, and wasi-http. This application is exporting a messaging guest, which handles events, kind of like a subscriber, and exporting an incoming HTTP handler, so whenever a request comes in, the handler module is going to be fired up. So I want to first run this. This breaks down into two applications, two services: one is called the publisher and the other is called the subscriber. The publisher takes a Redis address and works as an HTTP server, so it's implementing the inbound HTTP request handler. So now it's serving on localhost. The subscriber, on the other hand, implements the wasi-messaging handler, so it's also using a Redis address to listen to that Redis stream. So now both applications are running. I'm going to do a curl. This curl is going to hit the local endpoint at the crawl path, sending the original URL, fermyon.com.
And what happens is the publisher receives this HTTP request and sends a message into Redis pub/sub. So it's publishing a message. And the subscriber is subscribed to that stream, so it gets the message: it receives fermyon.com as the first URL, tries to fetch the HTML, and then finds all the links in that HTML. Then it recursively publishes events to Redis and receives those events. That's why you're seeing that I've now got 32 messages; I'm assuming those are 32 URLs at fermyon.com. And it's still going. I want to quickly mention that the entire application is written in Python. Although I know people love Rust, being able to write the application in Python and componentize it with componentize-py is just an awesome experience. I want to quickly show you the publisher code. All you need to implement is a handle function. You wrap that in a class that inherits from an incoming handler class, which is automatically generated; this is the binding generated by componentize-py. The handle function takes a request, as a resource, and returns a response. The subscriber, on the other hand, implements the messaging guest. It also uses componentize-py's generated bindings, inheriting from that class and implementing a messaging handler. And the messaging handler, as you may guess, is going to fetch the URL using outbound HTTP. At the same time, it's also using wasi-keyvalue to save the content we get from the URL into a Redis store. So you can see here we're opening a bucket, and then we set the URL as the key and the content of that URL as the value in the key value store. In this code, there is no specific provider; it's just the key value interface. But when we run it in Spin, it knows which provider to talk to through a configuration file. All right, that's the demo. Thank you so much. Any questions?
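As an aside, the "find all the links in that HTML" step the subscriber performs can be sketched with Python's standard-library HTML parser. This is a hypothetical helper for illustration, not the actual demo code.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    # Gathers the href attribute of every anchor tag it sees,
    # which is what a crawler feeds back into the message queue.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    collector = LinkCollector()
    collector.feed(html)
    return collector.links
```

Each extracted link would then be published back to Redis, driving the recursive crawl described above.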
Audience: The difference, as I see it, is that the providers can run anywhere. So does that require them to be on the same nodes as the business code? No, that shouldn't be required. I mean, speaking of Fermyon Cloud as one implementation, you could rebuild something like Fermyon Cloud, but that's the one I'm most familiar with: any given application is generally on at least two nodes, if not more. And we're actually working on geographic distribution and smart routing that way, more of an edge approach. But yes, part of the appeal of serverless types of computing is that the state lives in the key value store, et cetera, and so having the same app concurrently deployed to a variety of nodes is the intention. So, I showed the interface for wasi-keyvalue, and if the business logic targets WASI Cloud Core, it can use the key value readwrite interface like a library. Where does the library come from? It's generated by toolchains like wit-bindgen. And so when it tries to call a get or set function, it's actually making a function call in the module. Yes, and then, if it's a Redis-backed key value get, say, that would cause a network request to the Redis server using the Redis protocol. Yes. Correct. Right. Great question. Yeah, the vision is that eventually these popular HTTP client libraries and socket libraries would follow suit: socket libraries, which are usually built into the standard library of your programming language of choice, would be implemented in terms of wasi-sockets, and likewise with HTTP libraries, we'd ideally like to see them ported to wasi-http and using that instead of sockets. Yeah, correct.
Yeah, it comes down to what kind of host you're trying to target. If the host is more low level and wants to provide you sockets but not HTTP, then yes, you'd want to do that; but if you had a component that wanted to speak HTTP, that's where Luke's style of virtualization would come in, where you'd have an HTTP-to-sockets adapter, essentially, inserted between your application component and your host, if that makes sense. And I want to follow up very quickly. At Microsoft, we develop a runtime called slight, and slight is also implementing the host side of WASI Cloud Core, and we have AWS DynamoDB and AWS S3 targeting wasi-keyvalue. So that's more of a direct approach: the host uses the AWS SDKs to talk to AWS services and implements the host side of wasi-keyvalue, but those are transparent to the guest side. Any more questions? Thanks everybody. Thanks. Thank you.