Hello everyone, I'm Joe. I'm a software engineer at Microsoft, and I do most of my work related to WebAssembly and bringing WebAssembly to the cloud, through projects like runwasi, the containerd wasm shims, and SpiderLightning. Today I'll be presenting virtually with Dan from Fastly. Dan is a co-designer of WebAssembly and the original developer of WASI, and he's pretty much everywhere in the wasm and WASI world. Let's hear first from Dan.

So one of the questions we often get when we talk about WASI in the cloud is: is WASI just about POSIX? Before we can even answer that question, we need to take a few steps back and look at some of the underlying ingredients we need before we can even talk about APIs like POSIX.

We need to start from a place where we have a standards forum. We're talking about APIs that many different engines will implement, and that many different source languages, many different toolchains, and many different libraries are all going to share. So it's important for us to have a place where these people can come together and reach consensus on a set of APIs they'll share. This forum will need to define vocabulary, conventions, and tools for defining APIs. And once we have a common set of vocabulary, conventions, and tools, many different people are going to want to use them for their own APIs, which don't necessarily have to be standards. So it's pretty clear that what we end up having to set up here is the basis for an API ecosystem.

Right now this API ecosystem is just getting started, which makes this a really great time to ask a really fundamental question: what do we want from our ecosystem? What do we want this ecosystem that we're all going to share to look like? What properties do we want it to have? A natural way to answer those questions is to look at wasm itself and ask: what are the things we really like about wasm, and can we have those things in the APIs as well?
One of the big properties of wasm that makes it really interesting is portability. Wasm, of course, is portable across CPU architectures: it can run on things like x86 and ARM and RISC-V and many other CPU architectures. Wasm can also run on different kinds of operating systems.

When we look at the API space and the use cases stepping up, people want this kind of portability, but they say: once we have it, what we actually want is more of it. We want to be able to take portability beyond just different kinds of desktop computers with different CPUs and operating systems; we want portability across fundamentally different kinds of computers. When we look at the cloud, we want portability between different kinds of machines running inside the cloud, and between different kinds of cloud infrastructure. And even beyond that, we want portability between cloud and edge, between cloud and browser and client side, and across mobile devices and embedded devices. That kind of portability across different kinds of environments requires a greater degree of abstraction than just portability across an instruction set or basic operating system functionality.

Wasm is also cross-language. It can support many different source languages today, and it's always adding features; going forward, we're adding more features to support more languages in better ways. That's one of its core strengths. And what do we see in the use cases where WASI is stepping forward?
We see that people want to use many different source languages, and, really importantly, they want those languages to be able to really interoperate. We don't want fragmentation in the ecosystem, with people using some languages on this side and other languages on that side; we want a common ecosystem where all these languages can combine and share.

Wasm also starts out sandboxed. It comes from browsers, which have to have a sandbox against hostile code coming from the internet, so it has to be robust against those kinds of attacks. We want to preserve this kind of strong sandboxing as we extend into the API space. But it's important that we don't do sandboxing in a way that says: run the code in a world where it can pretend it's the only thing that exists. We want code to be composable. If we look at the core wasm system, with modules and imports and exports, it's the kind of system where imports can be connected to exports, and that suggests a very composable system. We can have many different modules connected together through just their imports and exports, because they don't share anything else; there's no implicit shared state.

The realization of all these properties in the API space is the wit IDL and the component model. So, a quick overview of the wit IDL, the vocabulary for defining APIs. Core wasm has only integer and floating-point types, which makes sense because it's mainly a CPU abstraction, and CPUs don't have signed and unsigned registers; they just have integer registers. But in interfaces it's important to talk about signed and unsigned values, because we need to be able to interpret the meaning of the values. So wit has signed and unsigned integer types, and also floating-point types and a boolean type. These interface types also give us dynamically-sized types, things like strings and lists, as first-class types we can just pass through interfaces. These are really valuable for being able to define APIs.

The IDL also gives us types for records and variants. It also has a result type for error handling, and this is really interesting: if we have bindings to JavaScript, we can have JavaScript throw an exception, because that's the natural way to handle an error in JavaScript. If we have bindings to Rust, we can have Rust return a Result, because that's the natural way to handle an error in Rust. If we have bindings to C, C can return a special error number, which is the natural way of handling errors in C. So every language can have its own kind of error handling, and having a result type in the IDL means we can make bindings that are specialized for each language's error handling.

The wit IDL also defines types for handles and resources, which give us a limited awareness of the outside world: we can share handles, which are essentially pointers to resources, which are essentially state, and we can share these around in a very limited way that's controlled by APIs and types. And in the future, the wit IDL will be extended with integrated async features, with streams and futures.

This gives us a very powerful way to define APIs. To get a sense of what all these types enable: they give us APIs where the source language is not exposed. If you have an API that takes a list of strings and returns a list of u8, nothing in the API says "and this was implemented by C++" or "and this was implemented by JavaScript" or "and this was implemented by Python." It's just a list of strings and a list of u8. These are source-language-independent types. This means the ecosystem doesn't have to worry about being fragmented along API boundaries, because at the boundary of any given API, it doesn't matter what source language is on the other side.
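To make this tour of the type system concrete, here is a small, hypothetical wit interface using these types. The package, interface, and function names are made up for illustration; only the type vocabulary (records, variants, results, resources, strings, lists) is from the talk.

```wit
package example:demo;

interface users {
  // Records group named fields; interface types carry signedness and meaning.
  record user {
    name: string,
    age: u32,        // unsigned integer type
    premium: bool,
  }

  // Variants are tagged unions, handy for describing errors.
  variant lookup-error {
    not-found,
    backend(string),
  }

  // Resources are opaque handles to host-managed state.
  resource session;

  // Strings, lists, and results are first-class interface types.
  // Nothing here reveals which source language implements it.
  find: func(names: list<string>) -> result<user, lookup-error>;
  serialize: func(u: user) -> list<u8>;
}
```

Bindings generators can then map `result<user, lookup-error>` to an exception in JavaScript, a `Result` in Rust, or an error code in C, as described above.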
You can talk to any other component, in any other source language. Using the wasm concepts of modules with imports and exports, and linking things together in that module-and-linking style, means there's no global namespace, and no registries or brokers or buses at runtime. This is one of the key properties that makes interfaces fully virtualizable. We don't need to worry about two things needing to share a registry, or a broker, or a bus, or anything like that; once we've linked things together, we can take that combination, link it to other things, and that combination can itself be virtualized. At any given time, after we've done some linking, we can take the result and fully virtualize it.

It also means that features like GC, and really any kind of memory management, whether it's GC or linear memory, can be compartmentalized, so individual components can have individual GCs. Of course, under the covers, engines can be doing lots of different things, but at the interface level the GCs are not entangled with each other, so we don't have GC cycles crossing interface boundaries. That will have some really interesting properties when we talk about partial failure in the future.

This is also really important when we look at wasm GC, because one of the properties of wasm GC is that GC objects don't live inside linear memory. So if we have APIs that are defined in terms of linear memory and pointers, GC languages can't talk directly to those APIs. Defining APIs in wit avoids an awkward fragmentation between GC languages on one side and linear-memory languages on the other.

Putting this all together, this is the foundation for a very solid wasm-native API ecosystem: an API ecosystem built for wasm, one that takes the properties that make wasm strong and realizes them in the APIs as well. We put all these pieces together: the standards body with the WASI subgroup, the common vocabulary, conventions, and tooling, and tooling that preserves the great properties of wasm, the portability, the isolation, the cross-language support, and the composability. With that, we have the foundations for a wasm-native API ecosystem, and the foundations for a platform where people can do lots of different things, from cloud things to neural-network things to cryptography, and also POSIX things.

One of the concerns that comes up as we talk about these broader APIs is: do all the engines have to implement all the WASI interfaces? And on the other side we have developers saying: WASI has all these APIs in it; do developers need to know which engines support which interfaces? Is it going to be a convoluted support matrix? How do I know what I can use? The component model has an answer for that as well.
It's called worlds. Worlds are a mechanism for defining subsets of APIs. We can say: we have wasi-filesystem, and wasi-sockets, and environment variables, and command-line options, and we can pull those all together and call that a command world. On the other hand, we have wasi-messaging, and wasi-sql, and wasi-http, and we can put those together and call that a cloud world. These different worlds are independent of each other; each can define its own APIs and the set of things you can work with in that world, and each maps directly to a particular set of use cases: command-line programs, or cloud programs that want to be portable across different clouds. With that context, when we talk about WASI in the cloud, it makes a lot of sense.

Thanks, Dan. I'd like to take it further and talk about the world of the cloud. First and foremost, what is a cloud? Well, a cloud is a condensed form of water vapor, also known as H2O, floating in the atmosphere, which you probably can't see because we're inside a building. No, sorry, that was the wrong definition. What I was really asking is: what is the cloud, as in cloud computing? Cloud computing is the on-demand delivery of IT infrastructure for hosting your applications. At least, that's the short definition. As the cloud evolves, more and more features and services are added to it; a cloud provider like AWS has over 200 unique services for your applications. They provide things like highly available, fault-tolerant blob storage. They provide fully managed, sharded, replicated databases for your applications. They manage your networking.
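As an illustration, the command world and cloud world Dan describes might be sketched in wit roughly like this. The package and interface names below approximate the real WASI proposals but are illustrative, not the exact standardized text:

```wit
// Illustrative only: names approximate, not copied from, the WASI proposals.
world command {
  import wasi:filesystem/types;
  import wasi:sockets/tcp;
  import wasi:cli/environment;   // environment variables, command-line options
  export wasi:cli/run;           // the program's entry point
}

world cloud {
  import wasi:messaging/producer;
  import wasi:sql/readwrite;
  export wasi:http/incoming-handler;
}
```

An engine or platform then advertises which worlds it supports, instead of developers reasoning about a per-interface support matrix.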
They manage your entire identity platform. So your application can offload 80 or 90 percent of its features to a specific cloud provider. That obviously comes with a cost: you have to pay them cash, but there's also a mental cost. Developers face ever-increasing expectations for building scalable applications, and they need to learn a bunch of SDKs, runtime environments, and sets of APIs. So not only do they have to understand what features they need to add to the application, they also have to ask: how much does it cost me when I change my mind? How much does it cost me when I'm required to migrate from one platform to another?

A lot of technologies, like Kubernetes, service meshes, Dapr, Knative, and many more, simplify our lives. Dapr proves that the concept of abstracting the layer of data-plane operations, like state management, pub/sub, and config, really works, and that it's really what developers want. And I think, as we're building an ecosystem of wasm-native APIs, it's really important to start standardizing and unifying a set of interfaces for building distributed applications. So we propose a new world called wasi-cloud-core.

We've written wasi-cloud-core in the wit IDL as a world. It's made up of smaller worlds, like wasi-keyvalue, wasi-blobstore, wasi-sql, wasi-messaging, runtime config, and a distributed lock service, which together give you 80 percent of the capabilities your application probably needs. Unlike the command world, which exposes lower-level APIs like sockets, wasi-cloud-core only exports a single HTTP handler, which lets you write bursty, serverless-style functions. In your application that handles the HTTP request, you can use the key-value API to access whatever key-value store you have, or blob storage, or do transactions and execute queries with SQL.

A repeating theme at this conference, when people talk about WASI, is portability; people often mention portability as one of the things, among others, that make WASI unique. I'd like to take this chance to categorize the different layers of portability. On the bottom layer we have CPU portability, which lets a binary port from one CPU architecture to another, like x86 to ARM, and the wasm module does exactly that. On the middle layer we have operating-system portability: POSIX allows any application that uses the POSIX system calls to run on POSIX-compatible operating systems. But why is WASI more than POSIX? I think in the cloud environment, for distributed applications, we need an even higher layer of portability. We need to isolate your core business logic by abstracting away all the capabilities, and that is business-logic portability.

How do we achieve this level of portability? Well, we look at distributed applications and find common patterns: what are some of the services and APIs applications need to use? We found that multiple services, like Redis, AWS DynamoDB, Azure Blob Storage, and Azure Cosmos DB, all share a common set of key-value operations, which is get, set, delete, and exists. So we put them together into an interface called keyvalue readwrite, and we get exactly that.
We get a get function that returns the value for a key, a set function to set a key-value pair, a delete function, and a check for whether a key exists. Any application targeting this interface doesn't have to understand the underlying infrastructure that provides the key-value capability, so that application is, ideally, extremely portable across different platforms that provide different key-value implementations.

However, this comes with a trade-off. If you want to use more advanced features, like transactional APIs or batch operations on the key-value store, you don't get them, because not many key-value stores implement those kinds of advanced features, and every key-value store has its own uniqueness. So I think there's a relationship between portability and feature richness, and if we draw a diagram, we get a curve: the more portability you have, the fewer features you get.

On the upper-left side of that curve we have wasi-http-proxy, a WASI world that gives you an HTTP handler for processing HTTP requests, and this world is extremely portable. It can be implemented by Envoy or by many other platforms, and you can port an application targeting wasi-http-proxy from on-prem to the cloud, to constrained devices, to the edge, to IoT devices. On the lower-right side, we have a provider-specific world that gives you the full set of features from one cloud provider; that's extremely powerful, but at the same time not very portable. I think there's a sweet spot in the middle of the curve, and that's where we want wasi-cloud-core to be: a core set of features that satisfies 80 percent of application needs without sacrificing too much portability. That's the goal we want to achieve.

To recap: wasi-cloud-core uses high-level APIs and the wit IDL to define interfaces.
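The keyvalue readwrite interface and the wasi-cloud-core world described above might be sketched in wit as follows. The function shapes (get, set, delete, exists) come from the talk; the error variant and the exact import names are illustrative and may not match the current proposals:

```wit
// Illustrative sketch, not the exact wasi-keyvalue / wasi-cloud-core text.
interface readwrite {
  variant error {
    key-not-found,
    other(string),
  }

  get:    func(key: string) -> result<list<u8>, error>;
  set:    func(key: string, value: list<u8>) -> result<_, error>;
  delete: func(key: string) -> result<_, error>;
  exists: func(key: string) -> result<bool, error>;
}

world wasi-cloud-core {
  import readwrite;                  // key-value capability
  import wasi:messaging/producer;    // plus blobstore, sql, config, locks, ...
  export wasi:http/incoming-handler; // the single HTTP entry point
}
```

An application targeting this world never learns whether `get` is served by Redis, DynamoDB, or Cosmos DB; that binding happens outside the application.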
It gives you business-logic-level portability. Because it uses toolchains like wit-bindgen to generate bindings, you can write less code by hand. Applications get rich semantics, so you don't have to define your I/O types as opaque types, and it's programming-language agnostic.

Sorry, let me just get set up here. All right. We've been experimenting with an implementation of wasi-cloud-core at DeisLabs at Microsoft over the last year, and we call it slight. It embeds Wasmtime as the wasm runtime, and it uses the wit IDL to define the interfaces of wasi-cloud-core. When you build your application and deploy it to a production environment, you use a configuration file to bind a specific capability to infrastructure. For example, you say: this key-value store capability used by my application is really talking to Redis, and that binding happens at deploy time.

Now I want to give you a demo. This is a very simple demo of a chat app that uses the messaging capability. It starts an HTTP server with three simple endpoints: login, send, and receive. Send will send a message to everyone in the group, and receive will grab a message for the user. In this chat app we have a configuration file called a slightfile, and currently the messaging capability is backed by the filesystem implementation on my local machine. Let me just start the server and start the client side. All right, so I log in two users, and you can say hello, and you can say hi back. So that's this very simple chat app. Now I can simply change the configuration file, without recompiling the application, to use the NATS service for the messaging capability. I just need to run this again. You can see hello from one side, and I can say hi back on the other side, and this is using NATS instead of my filesystem implementation.
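To give a feel for what the demo's configuration change looks like, a slightfile binding the messaging capability might look roughly like this. The field names and resource identifiers here are a hypothetical sketch and may not match the actual slightfile schema:

```toml
# Hypothetical slightfile sketch -- field and resource names may differ
# from the real slight configuration schema.
specversion = "0.1"

[[capability]]
name = "chat-messaging"
resource = "messaging.filesystem"   # local filesystem-backed messaging for dev

# Switching to NATS requires no recompilation -- change only the resource:
# resource = "messaging.nats"
```

The application binary stays the same; only this deploy-time binding decides which infrastructure serves the capability.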
All right. I didn't talk much about the component model and how all of this works underneath, so I highly recommend you take a look at the future-of-component-tooling talk by Peter and Guy. At KubeCon we have a tutorial workshop, a 45-minute hands-on with WebAssembly microservices and Kubernetes, where we'll deploy this application to AKS using runwasi. We have links to each of the WASI proposals in this deck, and we have the implementation of slight here. Thank you so much.