Hello, I'm Saul and I work at Shopify on the Scripts team, making commerce extensible with WebAssembly. In this talk, I'll walk you through our adoption of JavaScript as the primary language for the Scripts platform. Before getting into any other technical details, I'll briefly explain Shopify's use case and the role that WebAssembly plays in it. WebAssembly powers Shopify's Scripts platform, which enables synchronous third-party business customizations to Shopify. You can think of Scripts as a functions-as-a-service platform in which functions are executed in performance-sensitive contexts, like inside a checkout. From this definition, we can highlight three main concepts that come into play: synchronous execution, third-party code, and portability and developer experience. Synchronous, because script customizations normally execute in the context of an already existing request, which means they need to be extremely fast to avoid delaying that request. In fact, one of our SLOs is to execute even the most complex scripts in five milliseconds or less. Third-party, because by definition customizations are written by third-party developers, so their code is unsafe and untrusted and needs a sandbox. And finally, portability and developer experience: we want to ensure that developers writing scripts can use the tools they are most familiar with. All of these concepts make WebAssembly a perfect fit for Shopify's use case. Most of our research into supporting JavaScript was fueled by the desire to maximize developer experience. That's mostly because WebAssembly provides security and performance out of the box, while developer experience depends, at least today, on the tools the platform gives its developers.
Now, with these ideas in mind, we can classify the existing languages that target WebAssembly into one of three buckets: languages that offer native support for WebAssembly, languages that are designed exclusively for WebAssembly, and languages that require their runtime to be compiled to WebAssembly. Let's look at each of them. Languages that offer native support are usually what we consider systems programming languages; think of languages like Go, C, C++, Rust, Zig, et cetera. These languages have pros and cons in terms of developer experience and performance. On the pro side, they offer good performance, native support for WebAssembly, and maturity. The con is that they often have a high learning curve, which makes them less approachable for developers who just want to write general-purpose applications. Then we have languages designed exclusively for WebAssembly. They normally offer decent performance, but they have cons in terms of developer experience: they lack editor support, libraries, debugging, et cetera. The other con that affects developer experience is maturity; they probably don't offer all the language features developers are used to. Finally, the third bucket: languages that require a runtime dependency to execute. The pros here are that these languages are often mature and offer a great developer experience, and many are widely known by most developers out there. The disadvantage is that they are usually not as performant, especially when embedded in a context like WebAssembly. So choosing one of these languages generally involves accepting some sort of performance trade-off.
Now, let's talk about the road to JavaScript and how these buckets come into play. As a team, we started seeing that the languages in the third bucket, the ones that need a runtime, might be the best bet if we wanted to maximize developer experience while taking a calculated risk on the performance trade-offs we could afford. In theory, a bit less performance might mean more gains in other areas like developer experience. Then comes the question: why JavaScript? Well, JavaScript is an extremely popular language, and the majority of our target audience is familiar with JavaScript or TypeScript. That's also why we had initially chosen AssemblyScript. Up until now, our platform has supported AssemblyScript because we think the intentions of the language are good: bringing JavaScript and TypeScript developers closer to WebAssembly without forcing them to learn a systems programming language. Aside from that, JavaScript has good tooling and a good ecosystem. To make JavaScript possible on our platform, we had to rely on two essential tools. The first one is QuickJS, a small and embeddable JavaScript engine. It's performant in environments that don't support just-in-time optimization, like Wasm, and that was one of the main reasons we chose it. There are other, minor reasons too: it's written in C, and its code base is easily compiled to WebAssembly. In the future, though, it's likely we'll want to experiment with other engines like SpiderMonkey or V8. Another engine that could be compiled to WebAssembly and that targets embedded or constrained environments is Hermes, the engine responsible for making React Native possible. But that's something we'll look at in the future.
The second tool essential to making JavaScript work is Wizer, which is defined as the WebAssembly pre-initializer. Wizer is a tool that improves the startup latency of WebAssembly modules. The overall idea behind Wizer is to record the state of a module instance and rewrite that state into a new module. Take this example: we have a module, A.wasm. We instantiate that module in any runtime; that could be a browser, or a runtime like Wasmtime. After instantiating the module, we execute an init function that the module exports. That function normally does some heavy, expensive work that we don't want to run every time we execute the module. When that function executes, the state of the current module, A.wasm in this example, changes. That state gets recorded and rewritten into a new module, in this case B.wasm. There are two important things to consider here. First, that exported function is crucial to making Wizer work: it's a function that performs work you know can be done ahead of time. Second, what is the state of a module? Wizer looks at globals, memories, and instances as the state of the module. Now, let's look at a more concrete example. Assume we have a piece of Rust code that does some expensive operation; in this case, we're calculating Fibonacci of 12, and we already know beforehand that we want that value every time. Using Wizer, we can export a function called init, which gets renamed to wizer.initialize when compiled to WebAssembly, that calculates Fibonacci of 12. When we run Wizer on this module, Wizer executes that block of code, and the expensive value gets assigned the result of Fibonacci of 12.
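The pattern just described can be sketched in Rust roughly like this. This is a minimal sketch, not the exact code from the slide; it assumes Wizer's convention of running an export named `wizer.initialize` once at build time, and the `run` export name is mine:

```rust
use std::sync::OnceLock;

// Holds the precomputed result; after Wizer runs, this value lives in
// the snapshotted memory of the rewritten module.
static EXPENSIVE_VALUE: OnceLock<u64> = OnceLock::new();

// Iterative Fibonacci, standing in for any expensive computation we
// know ahead of time we'll need.
fn fib(n: u32) -> u64 {
    let (mut a, mut b) = (0u64, 1u64);
    for _ in 0..n {
        let next = a + b;
        a = b;
        b = next;
    }
    a
}

// Wizer executes this export once at build time and records the
// resulting module state into the new module.
#[export_name = "wizer.initialize"]
pub extern "C" fn init() {
    let _ = EXPENSIVE_VALUE.set(fib(12));
}

// At run time the value is already in memory, so this is a constant
// lookup instead of a recomputation.
#[no_mangle]
pub extern "C" fn run() -> u64 {
    *EXPENSIVE_VALUE.get().expect("initialized ahead of time")
}
```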
The state then changes, and that state gets rewritten into a new module. When you execute the new module, the expensive value doesn't need to be recalculated; it's already accessible in the module's memory, and returning it is a constant-time operation. Now, let's go into a bit more detail and inspect the two modules, A.wasm and B.wasm. I mentioned before that state is composed of globals, memories, and instances. We won't look into instances, because that's mostly related to module linking, which is something we're not dealing with at Shopify at the moment. The globals of A.wasm look like this: there are three of them. And here are the globals of B.wasm: they are the same, basically the stack pointer, the data end, and the heap base. So nothing changed there. Now let's look at the data segments. For those of you not familiar with them, segments in WebAssembly are elements used to initialize the memory, either manually through an instruction or at instantiation; which of the two depends on whether the segment is active or passive. In this case they are active, so they are used to initialize the memory at instantiation. Before applying Wizer to the WebAssembly module, we have one segment pointing at a specific location in memory. Once we apply Wizer, we can see that Wizer created a new segment. This segment records the work that was done when the initialization function was executed. This means that when the WebAssembly module is instantiated, this segment is copied into the module's memory and is already accessible to the program, without rerunning that expensive operation. Now, you might ask yourself: how does this apply to JavaScript? Well, JavaScript engines normally take some time to do two main things before executing your code.
Those two things are engine initialization and application initialization. Engine initialization is mostly starting up the engine and everything that will be used to execute your code. Application initialization deals with your code: parsing it and everything else that has to do with preparing it for execution. These are two things that we know, ahead of time, need to happen every time you execute a JavaScript program, so these are the two things we can use Wizer to optimize. In pseudocode, it looks like this: we have a global JS engine variable that is set by initializing a default engine, which in our case is QuickJS, and we run this wizer.initialize function to make sure the engine is initialized correctly. When we want to execute code the user has given us, we go to the run function, grab the engine that has already been initialized, and evaluate whatever code we want to evaluate. With this approach, we've seen a 50% improvement in startup time by using Wizer and QuickJS. This is pretty impressive, given how aggressive our SLOs are. By benchmarking a simple Fibonacci program, we saw roughly these numbers. As I said on the previous slide, this is very important for us: against a five-millisecond SLO, even an improvement of 250 or 300 microseconds is definitely considerable. Now let's see a demo of how this works together, with everything glued into place. Assume we have a very basic TypeScript project structure. In this project we have a couple of important files: a webpack configuration file and a package.json file. Let's start by inspecting the package.json file.
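The engine pre-initialization pattern described above looks roughly like this in Rust. This is a sketch under assumptions: `Engine` is a toy stand-in I've defined for the real QuickJS bindings, and the function names are illustrative, not the actual toolchain's API:

```rust
use std::sync::OnceLock;

// Toy stand-in for the real QuickJS bindings; in the actual toolchain,
// constructing the engine is the expensive part (allocating the
// runtime, contexts, intrinsics, and so on).
struct Engine;

impl Engine {
    fn default_engine() -> Self {
        Engine
    }

    // Placeholder evaluation; a real engine would evaluate JavaScript.
    fn eval(&self, code: &str) -> String {
        format!("evaluated: {code}")
    }
}

// Global engine instance captured by the Wizer snapshot.
static JS_ENGINE: OnceLock<Engine> = OnceLock::new();

// Runs once at build time; the fully initialized engine ends up baked
// into the rewritten module's memory.
#[export_name = "wizer.initialize"]
pub extern "C" fn init() {
    let _ = JS_ENGINE.set(Engine::default_engine());
}

// Runs at request time against the already-initialized engine, so no
// engine-startup cost is paid per execution.
pub fn run(code: &str) -> String {
    let engine = JS_ENGINE.get().expect("engine pre-initialized");
    engine.eval(code)
}
```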
Here we can see some development dependencies, which are pretty normal, and then a couple of scripts. The first one is compile, which only triggers TypeScript's compiler; then we have build, which invokes webpack. Webpack does all its magic to transform our project into a single JavaScript file. Let's take a look at the webpack configuration. There are a couple of important parts. We have a target of ES2019, because QuickJS supports ES2019 out of the box; then some other configuration; and around line 27, an instruction that says library. This means the package will be compiled to a library with Shopify as a top-level variable. Going back to our package.json file, we can see that after invoking webpack, we invoke something called Javy, passing in the final index file and a location where the final Wasm file should be output. Javy is the toolchain we built that wraps QuickJS and is in charge of taking the user's code and bundling it into a WebAssembly file. Let's see what the code looks like. It's very straightforward: we have a Request type that is any, a Response type that is any, and a main function that takes an input and just echoes that input back to the caller. Let's go through the process of building this into WebAssembly. As I mentioned before, this invokes webpack and then Javy; I have a global installation of Javy, so this will work. Webpack has succeeded, and we're just waiting on Javy to succeed. Once that's done, let's inspect the build folder. As we can see, we have index.wasm, which already has all the optimizations in place. Now let's verify that our code works as intended.
We're going to do so by invoking index.wasm with Wasmtime. As we can see, we get the expected result: we get the log, and then we get the input echoed back. So that's it for the demo. Now let's talk about trade-offs. This is important because not everything is roses; there are several things we're giving up in order to gain others, and the main one is performance. If your use case requires the absolute best performance and you don't care about developer experience, then JavaScript is probably not the fastest option available, and probably not the best option for you. In fact, in most of our benchmarks we saw a performance impact of 400 to 600 microseconds, meaning our JavaScript programs were around 400 to 600 microseconds slower than those written in AssemblyScript. This is still fine for us, because everything runs within our SLOs. To be a bit more concrete, there are several operations that we know developers will use in the programs they write for the Shopify platform and that we wanted to inspect, and AssemblyScript was faster at things like floating-point arithmetic and string operations. Now, not everything is bad. We also had gains, mainly around tooling and language features: we can now access all the most recent features of TypeScript and JavaScript, and we get the huge community and ecosystem of the JavaScript and TypeScript world. So this is important. And we also had some gains in terms of performance: it's worth noting that JavaScript's regex implementation was almost twice as fast as the one currently available in AssemblyScript. I'm not bringing up this slide for a pure language-versus-language performance comparison; rather, it relates back to the three buckets we saw earlier, in which we could classify languages.
And this just hints that languages designed exclusively for WebAssembly, given their maturity, might not have all the features you expect in a language, or might not be as performant as languages that have been around for quite some time now. To conclude the talk, I'd like to end with this general idea: after exploring JavaScript and WebAssembly, we've been seeing that, in use cases that can afford this approach, the mix of a high-level language plus WebAssembly results in apps that are more secure, more performant, and offer a better developer experience. I've put an asterisk here to remind myself that this statement is relative to our use case, but it could be evaluated for other use cases too. That's everything I have. Here's my handle, where you can reach me on Twitter or GitHub. My DMs are open if you have questions or just want to chat. Thank you very much.