 Hi everyone, my name is Thomas, and along with my colleague Vivek, we will be covering WebAssembly and how it's enabling a new paradigm of web development. So let's start off by covering what WebAssembly actually is and why you would use it. Well, WebAssembly is a low-level binary format for the web that is compiled from other languages to offer maximized performance and is meant to augment the places where JavaScript isn't sufficient. So why use WebAssembly? Well, there are three main advantages. First, it offers more reliable and maximized performance. Second, it enables great portability since you can compile from other languages, enabling you to share your code across deployments. Lastly, it offers greater flexibility for developers who can now write for the web in languages other than JavaScript. Perhaps best of all, it is now shipping fully in every major browser so you can reliably reach all of your users. In this talk, we want to go over four languages to show how they're coming to the web and how you can get started yourself. We're focusing on these four languages in this talk, but it's worth noting that there are a ton of other languages that are also adding support for WebAssembly. So let's dig in and start off with C++. One of the earliest areas where WebAssembly transformed the web was by enabling large applications such as AutoCAD, Figma, and most recently Photoshop on the web. These are performance-demanding applications with large code bases often ported from other platforms. AutoCAD brought their code base, which was started over 40 years ago before the first-ever browser, and now parts of that original code base are directly accessible with a simple link. Figma made a big bet on WebAssembly from the start and wrote their engine in C++ for maximized performance. Photoshop has brought their complex application to the web, enabling easy sharing across platforms with commenting and editing. 
They'll also soon be using WebML for optimized machine learning operations, which you can hear more about in the What's New with WebML talk here at I/O. In the last year, we've continued to see more of these incredibly advanced apps leveraging Wasm to come to the web. Snapchat wanted to expand their audience while using a single code base to hit all of their platforms. They decided that C++ would give them the performance and portability that they needed. By leveraging WebAssembly, they can deliver their entire application directly in the browser across all operating systems. Snap is also investing in bringing their amazing Camera Kit to the web through WebAssembly, which you can try yourself right at web.snapchat.com. Here, they're again reusing their C++ implementation that is shared across platforms. They're using Emscripten, including its embind binding system and OpenGL-to-WebGL conversion. They're also using TensorFlow.js for ML inference, web workers for offscreen rendering, and performance optimizations guided by Chrome's profiling tools. WordPress has done something rather incredible and actually managed to get the WordPress server environment built to run directly in the browser. This relies on compiling the PHP interpreter itself and SQLite to WebAssembly. With this, users can try out WordPress directly in the browser with zero setup. This is amazingly impactful for getting started, enabling interactive tutorials, and eventually supporting easy publishing of backends. By running on Wasm, they can now also deploy to non-browser environments, including Node.js or other Wasm server environments. You can read tons of interesting details in this deep-dive blog post. All of these examples are powered by the Emscripten toolchain. Emscripten will compile your C++ code, but also helps port code built against POSIX APIs and will even translate OpenGL calls to WebGL. To get started, go to emscripten.org. 
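To make the loading side of this flow concrete, here's a minimal sketch of instantiating a Wasm module and calling one of its exports from TypeScript. The byte array below is a tiny hand-assembled module exporting an `add` function; it just stands in for the `.wasm` binary a toolchain like Emscripten would emit from your C++ (real Emscripten output also ships JavaScript glue code, which this sketch omits).

```typescript
// A minimal hand-assembled WebAssembly module exporting add(a, b) -> a + b.
// A toolchain like Emscripten would produce a (much larger) binary like this
// from your C++ source; the bytes here are just a self-contained stand-in.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,             // "\0asm" magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,       // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                     // function section: one func, type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,       // export "add" as function 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                               // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                         // local.get 0, local.get 1, i32.add, end
]);

// Synchronous instantiation is fine for a module this small; real apps in the
// browser should prefer WebAssembly.instantiateStreaming(fetch("module.wasm")).
const module = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(module);
const add = instance.exports.add as (a: number, b: number) => number;

console.log(add(2, 3)); // 5
```

The same pattern underlies every example in this section: the compiled native code lives in the module, and JavaScript calls into its exports.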
We've also made really substantial improvements in enabling WebAssembly debugging. With this support, you can now see your C++ code in DevTools, set breakpoints, step through your code, and even see runtime values of variables. Visit this link to see how. Now, you might be thinking to yourself: I'm not writing C++ or building a large cross-platform application, I'm doing web development. Well, this is where WebAssembly libraries are going to change your life. These libraries include things like OpenCV for image analysis, TensorFlow.js for ML, Skia for graphics, SQLite for databases, FFmpeg for video manipulation, and so, so much more. You can dig into these examples at this URL. These examples all expose JavaScript APIs, and you might not even know that they're powered by WebAssembly under the hood, except for how amazingly performant they are. So, let's take the incredible web app for Telegram as an example. Telegram is a traditional web application with the majority of its functionality built in JavaScript, but there were a few areas where they needed some additional functionality that they found in WebAssembly. Specifically, they use the rlottie renderer for animated stickers, opus-recorder for voice recording and decoding, a web build of fastText for language detection, and webp-hero for WebP support in Safari. And indeed, when looking at npm, there were 1,500 packages using WebAssembly. If you're a library author in really any language and you're interested in bringing your library to the web, now is your moment. If you'd like to connect directly with us, provide feedback on your experience, or even get featured on web.dev, you can visit this link. So, now that we've covered C++, let me jump into Swift. It's been possible to compile Swift to WebAssembly for a while, but it's only recently that the toolchain and ecosystem have matured to a point where this is truly shippable in production. 
One such application that is shipping Swift on WebAssembly in production and coming to the web is GoodNotes. They had invested a decade of work in creating the most incredible iOS application, full of cool features, with a 4.8-star rating. They decided it was time to spread their application to non-iOS users, and rather than doing an incredibly expensive rewrite that would also have to be separately maintained, they decided to reuse their Swift investments through WebAssembly. This means their decade of work can be reused while also minimizing maintenance costs. When we talked to them, one of my favorite things that they said was, quote, every day our iOS developers contribute something new to our Swift code base, and our web app benefits from that. In terms of their tech stack, they're utilizing Swift on WebAssembly, React as their UI framework, and a PWA for installability. They have a great tech talk with lots more details at this link. They build their surrounding UI in React and have a central canvas connected to the Swift engine. This is the same pattern that we saw with our previous partners. As an example, when a user clicks something like Add Page, it calls from the click handler in React into the Swift code base to add the page and then make it ready for input. The toolchain they use is called SwiftWasm, and you can get started with it at swiftwasm.org. The toolchain includes JavaScriptKit, which enables your Swift code to interact with JavaScript through bindings, translating types and objects. It also offers Carton, which provides a Swift alternative to something like webpack. It lets you easily bundle and deploy your app while also shipping your code to other platforms. As with anything, there are some limitations that developers should be aware of. I want to be clear that this isn't a magic button that will make your Swift app run directly on the web, and while it's dramatically faster than a rewrite, it's not zero effort. 
Swift code and SwiftUI should work well, but things like storage, UIKit, networking, and files need to utilize web alternatives. Still, if you're a Swift developer who has been hoping to expand your addressable market, this is your moment. So, now I want to take a step back from any specific language and update you a bit on the progress of the WebAssembly standard itself. In the past, we've discussed how WebAssembly threads and SIMD can offer a 10x or more improvement for performance-sensitive workloads. For an overview of how powerful these features can be and how to get started, check out this video from Chrome Dev Summit. And we're continuing to expand with even more SIMD instructions to further maximize performance. The tail call proposal is a critical optimization for functional programming languages and enables better support for C++ programs using coroutines. The Memory64 proposal gives applications the ability to reference more than four gigabytes of memory, making it easier to port code that assumes a 64-bit architecture. Lastly, the JavaScript Promise Integration API lets synchronous code access asynchronous APIs without substantial code size or performance overhead. This is a big deal if you're trying to make code that assumes a synchronous environment work with the web. Now, to continue our language journey, I'm going to turn it over to my colleague Vivek. Thanks, Thomas. You may remember that at last year's Google I/O, we previewed our plan to bring new languages like Java, Kotlin, and Dart to the web. Over the past year, the WebAssembly community has been busy making this happen, so let's talk a bit about what we built and what this new technology makes possible for developers across the web and native mobile platforms. As Thomas mentioned, WebAssembly has taken off among developers using C and C++, as well as a growing community around Rust. 
In these languages, developers are responsible for, in a sense, cleaning up after themselves, freeing objects from memory after an application is done using them. This class of languages was the primary focus of the early WebAssembly standard, what we call the WebAssembly MVP, in part because these were the languages many large desktop applications were written in, and also because they had somewhat simpler requirements when developing the WebAssembly standard. Another class of languages manages memory on behalf of the developer. The language's own runtime automatically finds and frees memory that the app is no longer using. This class of languages is really interesting if you're building web or mobile apps, because JavaScript is the language in which the web's own APIs are specified and standardized, and Kotlin and Dart are increasingly popular among developers building cross-platform native mobile apps. So we wanted to figure out what it would take to extend the web platform to applications written in these languages in a performant way. So let's walk through how we do that. When a web app starts in the browser, it's given a context for its JavaScript code and some heap memory. JavaScript memory is garbage collected, so there's a garbage collector provided by the browser behind the scenes. Now, when an app instantiates a WebAssembly module, it asks for and is allocated a region of linear memory for its own use. If the developer is using a language like C or C++, then the WebAssembly module uses some of this memory for a dynamic heap, and the developer handles freeing objects on that heap after they're used. On the other hand, if the developer wants to use a managed memory language, then the WebAssembly module needs to include that language's garbage collector code to manage the heap and automatically free up unused memory. There were two main problems with this approach. The first is obviously bloat. 
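From the JavaScript side, that linear memory region is visible as a `WebAssembly.Memory` object. A small sketch of what "asks for and is allocated a region of linear memory" looks like in practice:

```typescript
// Linear memory is requested in 64 KiB pages. Pre-WasmGC, a C/C++ module's
// malloc/free (or a managed language's bundled garbage collector) had to carve
// all of its objects out of this one flat byte buffer.
const memory = new WebAssembly.Memory({ initial: 1, maximum: 16 });

console.log(memory.buffer.byteLength); // 65536 (1 page)

// Needing more memory later means growing the same linear region, up to the
// maximum the developer had to guess at module-creation time.
memory.grow(2); // add two more pages
console.log(memory.buffer.byteLength); // 196608 (3 pages)

// To JavaScript, the whole Wasm heap is just raw bytes. Which offsets hold
// live objects is bookkeeping the module's own code has to track.
const heap = new Uint8Array(memory.buffer);
heap[0] = 42;
```

Note that the `maximum` passed to the constructor is exactly the "clairvoyance" problem discussed next: it has to be picked before the app knows how much memory it will really need.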
The WebAssembly module has to ship and instantiate that garbage collector every time the app is loaded. This increases the module's size and delays application startup, despite the fact that every standards-compliant browser out there already contains a garbage collector for apps to use. Another form of bloat comes from the need for developers to have a kind of clairvoyance when deciding how much memory to request for their module. To avoid crashes, the typical thing to do is to set a maximum memory size that is just beyond the upper bound of your anticipated memory needs. This puts more pressure on implementations, which have to manage the app's JavaScript and WebAssembly memories separately, alongside the memory needed by other apps and tabs that the user might be using. The second problem is what I call the split-brain problem. In this architecture, those two memories and their garbage collectors know nothing about each other. This means developers need to be careful to architect their applications to avoid corruption when, for example, the JavaScript garbage collector comes along and frees up memory that's actually still needed by the Wasm side, or vice versa. All this adds up to more bookkeeping that developers have to do themselves, which kind of breaks the whole reason to use a managed memory language in the first place. But even if you put all of your objects on one side, on the WebAssembly side, you can't avoid dealing with the JavaScript heap. And that's because of Web APIs. Web APIs are specified to accept and return JavaScript objects, which naturally live on the JavaScript heap and are collected by the JavaScript garbage collector. In the original version of WebAssembly, this meant copying your data in both directions, between WebAssembly and JavaScript, anytime you wanted to call a Web API. 
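That copying-in-both-directions cost can be sketched with something as simple as a string. This is an illustrative sketch, not any particular module's real layout; the fixed offset below is made up, where a real module's allocator (Emscripten's malloc, for instance) would choose it:

```typescript
// Pre-WasmGC, a string crossing the JS <-> Wasm boundary had to be copied
// through linear memory on every call, in both directions.
const memory = new WebAssembly.Memory({ initial: 1 });
const heap = new Uint8Array(memory.buffer);

// JS -> Wasm: encode the string and copy the bytes into the module's heap.
// Offset 0 is a stand-in for wherever the module's allocator put the buffer.
const message = "hello";
const encoded = new TextEncoder().encode(message);
heap.set(encoded, 0);

// Wasm -> JS: copy the bytes back out and decode them into a JavaScript
// string before any Web API can actually consume the value.
const roundTripped = new TextDecoder().decode(heap.subarray(0, encoded.length));
console.log(roundTripped); // "hello"
```

One such round trip is cheap; the problem described next is doing it hundreds of times per frame for graphics-heavy APIs.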
APIs like the DOM, Canvas, WebGL, and WebGPU are especially impacted by this, since in some cases they need to be called hundreds of times per frame, or thousands of times per second, with very strict latency requirements to avoid user-visible jank. The net result is that even after we built a fast compilation target for code, many frameworks and applications ran the risk of producing jankier experiences on the web compared to what they would be able to produce on native mobile platforms. So how did we address this? Well, the WebAssembly community created a new extension that, in effect, shares a joint heap between JavaScript and WebAssembly GC modules. Now your managed memory code can just allocate objects on this joint heap, and when the browser's garbage collector comes around, JavaScript and WebAssembly GC objects are garbage collected together. This means no more bloat. Your WebAssembly module doesn't have to ship its own full garbage collector implementation with your app, and your WebAssembly app can more easily grow or shrink its memory consumption as needed, just like a JavaScript app can. Some browsers, including Chrome, will even return unused WebAssembly memory from this shared heap to the operating system whenever possible, helping ensure all apps running on the user's device remain efficient and responsive. The Web API story improves as well. WebAssembly GC modules create objects in the same heap where the JavaScript Web APIs will look for them, and return values are easily passed back as well, all without excessive copying. So that's WebAssembly GC: smaller binaries for modern managed memory languages, faster interop with JavaScript code and the JavaScript-based Web APIs, and a dynamically resizable memory footprint that grows and shrinks to provide your module with what it needs. With WebAssembly GC, the web can finally give a proper welcome to our developer friends building Flutter, Android, and Kotlin Multiplatform apps. 
Let's look at what WebAssembly GC means for you. Early data shows that WebAssembly GC now runs code compiled from these languages in the browser up to two times faster than compiling them to JavaScript. From a user's point of view, this level of performance is increasingly indistinguishable from what they would see on native mobile platforms. We're talking about apps running at 120 frames per second with single-millisecond frame update times. We can now imagine a world where cross-platform frameworks can build apps for native mobile platforms and the web with no perceivable difference in capabilities or performance, and the developer experience gets better too. Previously, developers would have to build separate native Android and iOS apps, as well as a web app, to reach the broadest set of users. Cross-platform frameworks like Kotlin Multiplatform Mobile let developers write their mobile app's business logic, the part that manages user data and implements your app's features, in a single code base that compiles to both Android and iOS, while implementing their user interface using platform-native frameworks and widgets. But extending this cross-platform capability to the web ran into some challenges. For a long time, you couldn't really compile mobile languages like Kotlin to the web. At best, you could transpile them to JavaScript and then run that JavaScript in a browser. This approach produced apps on the web that just weren't as fast and smooth for users as they would be on native mobile platforms. Now, thanks to WebAssembly's new support for managed memory languages, cross-platform apps can compile directly to the native runtime of all three platforms, giving developers access to the reach and instant startup of the web, and giving users a fast and smooth experience wherever they find your app. 
The Kotlin community also benefits from UI frameworks like Compose, which can help developers share much of their UI code as well across platforms, again with a performance level matching that of native platforms. Here's Sebastian from JetBrains to share more about Kotlin Multiplatform and Compose. Thank you, Vivek. At JetBrains, we think WebAssembly is a promising technology, and we want Kotlin developers to get all the benefits it has to offer. Just recently, we released an experimental version of the WebAssembly target for the Kotlin compiler. We see a lot of potential use cases in Kotlin/Wasm's future, from building high-performance web applications running in the browser to building speedy serverless functions. I want to show you a concrete example of one such thing that is part of the potential future of Kotlin/Wasm. So here you can see an application built using Jetpack Compose, the declarative, modern UI framework created by Google for Android. You may have actually seen this specific app before. Now at JetBrains, we're working on bringing Jetpack Compose to multiple other platforms besides Android, like desktop, iOS, and web. In practice, this means that you can take any knowledge about the APIs of Jetpack Compose that you might have from developing for Android and use it to target other platforms. The web target for Compose is built on top of Kotlin/Wasm. It's still experimental, but let me show you what we can already do with it. Here is the same application you just saw on Android, running in Google Chrome. It looks and behaves just like we saw on Android before. If we want to get an idea of the current performance of this app, we can open the FPS counter in Chrome's DevTools and watch some of these beautiful animations. We're also working on making it possible for you to debug Kotlin code right inside your browser and inspect variables and stack traces right in Chrome's DevTools, using the built-in support for source maps. 
Now let's move back to IntelliJ IDEA and take a quick look at the code of our app. As you can see, the UI and business logic of the app are all located in the common module. It's all code that is shared between the different targets of the project. But we can also specify platform-specific logic for the individual targets in the respective sub-projects. That way, we still get to use everything in terms of platform-specific APIs. Okay, so after making a small change to our code, we can actually have a look at the result of the changes. And look at that, they appear right in the browser. We hope this little demo made you want to learn more about this experimental technology we're building at JetBrains. If that's the case, follow the link and check out the project sources for this demo yourself. Thank you, and take care. Thanks, Sebastian. You can learn more about Kotlin running on the web with WebAssembly at kotlinlang.org. And we're not just making Android apps multi-platform with Kotlin. Multi-platform developers have been using Flutter for years to target Android, iOS, and the web. And on the web, Flutter developers have also had to transpile to JavaScript to run in the browser. But we're unlocking faster performance for Flutter on the web as well, compiling Dart code, for the first time this year, directly to fast and efficient WebAssembly that runs in the browser. You can read more about the exciting new performance boost offered by Flutter web at flutter.dev/wasm. On the open web, your app is just a click away from new users, who can discover it and share it just as easily as they share a web page, with no stores getting in the way and no revenue split affecting your profitability. The productivity of cross-platform development, the performance of native mobile apps, and the openness of the web: that's why we love WebAssembly. On behalf of Thomas, myself, the Flutter team, and our friends at JetBrains, thank you so much for joining us. 
We can't wait to see what you'll build next.