So our next speaker is Luke Imhoff. Luke is one of the creators of the Lumen project, which he will tell you all about. He's also the creator of the IntelliJ Elixir plugin, which he will talk about tomorrow in the free editors room. If you want to catch that, that's also going to be super cool for sure. So let's give it up for Luke, with Lumen.

As he said, I'm Luke Imhoff. I'm known pretty much everywhere online as KronicDeth, because I'm filled with autoimmune diseases. So let's start with an overview before diving into the details. Lumen is a new compiler and runtime for Erlang, Elixir, and anything else we can convert from Erlang abstract format. It targets things that are difficult or impossible to target with the BEAM, such as WebAssembly, x86, and single executable binaries. So not escripts, but single executables, like competing with Rust or Go, that sort of thing. Or embedded systems without an OS, such as microcontrollers.

Web apps represent a significant portion of the work we do, and with the client-side ecosystem in constant flux, long-term maintenance and extension is quite painful: you go away for a while from a project written with a JS framework and it just will not build, because too many things have moved on. Meanwhile, the server-side ecosystem is more stable. People have not really had upgrade problems with Phoenix, even though we've added new features. The one thing that's really missing is that we can't take those server-side languages and put them on the web, but WebAssembly changes that. We can use previously backend-only languages for client-side apps. And the reason why we're doing it now, even though WebAssembly hasn't really reached 1.0 yet, is because we don't want Erlang and Elixir to be passed over because they don't have a WebAssembly target. There are already targets for C, C++, and Rust, and people have done toy versions of Python, Ruby, and PHP, but we want to make sure that Elixir and Erlang have a true client-side story.
It's not just a huge 100-megabyte download to get a Python CLI REPL in the browser. We want this to be something you can use in production, so that once again people don't push aside functional languages. The WebAssembly spec even says they want to make sure this doesn't just work for imperative languages, yet all the implementations so far are C, C++, and Rust. So if we get in there now, we make sure WebAssembly stays compatible with functional languages.

Not everyone in the audience may have seen Lin Clark's excellent articles on WebAssembly on the Mozilla Hacks blog, so let me give you a quick introduction. WebAssembly has a format, a specification, and a test suite. Development happens as part of the W3C Community Group, while formal standardization occurs under the purview of the WebAssembly Working Group. These working groups are the reason browsers remain compatible: it is Google and Apple and Mozilla fighting to get a standard that everyone can agree to, so we don't go back to "it works in everything but IE", or "it works only in IE", or "it works but you have to ship way more code to make it work". The Lumen core team is part of that working group, so that we can advocate for functional features being there now, not as an afterthought.

The overall goal of the design is to make a safe, fast, sandboxable language for the web. Separating code and data means you can't address code, so you can't store code and you can't do a goto for exception handling. But it also means you can't get exploits that require ROP gadgets, where you jump to the very end of a function to set up the registers in a certain state, which is a big problem with x86. The caveat with the JS FFI right now is that we can only pass integers over the bridge. But JavaScript can see the entire memory of a WebAssembly module if it's shared, so you can say "I'm here and I'm this length" and read the bytes out.
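As a minimal sketch of that last point, here is how the host side of that pointer-plus-length read looks in plain JavaScript (runnable in Node; the `readUtf8` helper and the offsets are illustrative, not Lumen's actual layout):

```javascript
// Sketch: JS reading "I'm here and I'm this length" out of wasm linear memory.
const memory = new WebAssembly.Memory({ initial: 1 }); // one 64 KiB page

// Pretend the wasm side wrote a string at some pointer and handed us
// ptr/len over the FFI (integers are the only things that can cross).
const bytes = new TextEncoder().encode("hello from wasm");
const ptr = 256;
new Uint8Array(memory.buffer).set(bytes, ptr);

// Host-side helper: decode `len` bytes starting at `ptr`.
function readUtf8(memory, ptr, len) {
  const view = new Uint8Array(memory.buffer, ptr, len);
  return new TextDecoder("utf-8").decode(view);
}

console.log(readUtf8(memory, ptr, bytes.length)); // "hello from wasm"
```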
There is a proposal called Interface Types where this translation will happen automatically, but for now we depend on the Rust WebAssembly support to do the translation for us.

In browsers, WebAssembly loads faster than JS because, as a binary format, it can be parsed faster. It's also set up to allow streaming compilation, because there's a section that works like the equivalent of a header in C: the engine knows the types of all the functions up front, so each individual function can be compiled as it streams in. You don't have to have the entire thing downloaded before you start compiling, which is the case with even the most minified JavaScript. That said, a lot of people think this means it's a completely different stack in the browsers. It's not. Once it's parsed and sitting in some in-memory structure, both WebAssembly and JavaScript go through the JIT, so we still benefit from the JIT even though we get a binary format that's faster to parse and compile for that first-load effect.

Now that you know what WebAssembly is and its benefits, let's see how Lumen targets it. Code size is critical: it directly impacts the time to load the page as well as the compilation time on the client. Load time is a major portion of the time to first paint, and any delay there is noticeable to users.

Threading in WebAssembly is a very different animal than POSIX or Windows threads. Right now the browsers take the requirement to have threading support in WebAssembly to mean we can use web workers, and web workers in browsers, for sandboxing reasons, are implemented as processes, not actual threads. We and other people are pushing for real threads, but for now they are web workers, and it's not as nice. Going the other way, in browsers the main thread is very different than the main thread on a normal OS, because it is the one that interacts with the DOM, and if you do work on the main thread you freeze the UI.
To the point that Firefox or Chrome will pop up a "do you want to kill the scripts on this page?" dialog if you freeze it for too long. So there are a lot of gotchas. Async APIs require a callback, which interacts differently with GC than normal closures. We are able to have async callbacks for the things browsers support, because the Rust wasm support allows that sort of thing, but not everything supports it. For actual events that have to be concurrent, we need special support in the runtime we ship to the browser, to allow the Erlang code to be woken up from the scheduler and run immediately in a blocking manner, which wouldn't be the normal way to run a process. And once again, for FFI we need to translate between JavaScript values and Elixir values, the Erlang terms.

If you're wondering why we chose to build a new compiler and runtime rather than try to port the BEAM to WebAssembly: the toy examples for PHP, Python, and Ruby use a tool called Emscripten, which existed before WebAssembly, back when there was just asm.js, and which will take just about any C code and make it runnable in the browser. But the result is huge and usually pretty slow. There are some notable exceptions, where people have gotten Unreal Engine games to run under Emscripten, but it took a lot of work. It doesn't just happen that it runs in the browser and is as fast as native. Much of the BEAM runtime depends on APIs that are unavailable, or entirely unsupported, in WebAssembly because they would violate the security of the web.
For example, init assumes there's a file system to read. Or take the way the BEAM's memory allocator works: it does a thing that makes total sense with virtual memory on an OS, where you ask for, say, two gigs of memory, you only write to the parts you need, the OS only materializes the pages you touch, and it's perfectly okay that you asked for two gigs. But for safety on the web, when you ask for memory in WebAssembly you get it all back immediately and it is zeroed out. So you can't even try to fake those sorts of calls on the web.

In WebAssembly, like I said, the main thread is blocking. But also, if we ever have schedulers on web workers, those workers can't do DOM access. So we have to know which thread you're on to know if an API call is valid, or automatically transfer control to the main thread when a DOM call is made from a web worker. Additionally, JS values need to be tracked by the runtime, so we have to have almost a NIF-style term type to keep track of those JS values, so that they garbage collect correctly when they're on the other side of the JS shim that Rust generates for us.

Using the BEAM in the browser would also mean shipping all the .beam bytecode files to the browser. As an example, an individual library like Timex, which gives you human-readable times and time zones, takes 1.1 megabytes, and at 1.1 megabytes you should feel ashamed that your client-side web app is that big. That's one library. That's not shipping the server library; that's one extra support library on top. That's not viable if we want to be competitive. Remember, I said we want this to be a competitive thing, not a toy set of tools.
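That eager, zeroed allocation behavior is easy to observe from JavaScript; a small sketch (plain Node, not Lumen code):

```javascript
// WebAssembly memory is handed over eagerly, in 64 KiB pages, fully zeroed --
// unlike OS virtual memory, where pages materialize lazily as you touch them.
const memory = new WebAssembly.Memory({ initial: 1 }); // ask for 1 page
console.log(memory.buffer.byteLength); // 65536 -- all there immediately

const before = memory.grow(2);         // grow() returns the previous page count
console.log(before);                   // 1
console.log(memory.buffer.byteLength); // 196608 -- 3 pages now

// Every byte of the newly grown pages is already zeroed.
const view = new Uint8Array(memory.buffer);
console.log(view[65536], view[196607]); // 0 0
```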
One way we could shrink that, and what JS bundlers do now, is dead code elimination, what JS people call tree shaking. But .beam files aren't really set up for tree shaking, because the BEAM wants to be able to do hot code reloading, so there's no real way to tree shake or do dead code elimination. The other problem is that we'd be interpreting: we would be running bytecode in a VM, and that VM itself is bytecode running in the browser's WebAssembly VM. A VM on a VM on a VM is a lot of indirection that will just slow everything down. Additionally, because that BEAM bytecode is opaque to the browser, the browser isn't going to magically JIT your Erlang code into native code even if it's in a hot loop; there are just too many levels of indirection. So we wouldn't get the benefit of the JIT either.

So I've explained the issues with the BEAM itself, but I haven't really explained how the design of Lumen solves those issues. First, Lumen drops support for hot code loading, which gives us a benefit. The reason this is okay on the web is that we can't really replace a WebAssembly module anyway. We could have one WebAssembly module download a new WebAssembly module, but it's not the same as on the BEAM, where all your code just knows to call the new version. The new module would have a different ID; there would be no real replacement. It would be much more cooperative than on the BEAM, where it just works. But because of this, we can do full ahead-of-time compilation, so you only pay for what you're actually using. If you have an OTP application but you're only using one function, we only have to ship that function. This also matters a lot because we don't have to ship the entire standard library, only the bits you're actually calling.
And Erlang is huge. The erlang module itself is huge and a mess, with a lot of functions unrelated to each other, and additionally it ships a bunch of checksum functions that no one should use in a modern context, like MD5 and Adler-32, which are completely insecure. So there's stuff we just don't want to ship. And because we're ahead-of-time, we get everything that Rust and LLVM can do. We get dead code elimination not just of individual functions but of individual instructions, arguments, stores, and loads; LLVM can do instruction combining if a better fused instruction exists; and it can do loop optimization and vectorization, sort of like what the Pelemay team is doing in Japan.

Like I said, we're built on top of LLVM and Rust, and we use wasm-bindgen, which is the part that allows us to call DOM APIs in a more transparent manner. Being on top of LLVM is kind of the default for new languages; Erlang just existed before it, so it's not built on it. Because of this, things like constant folding and dead code elimination just happen at the LLVM level and we don't need to build them ourselves. That means we're not in the situation where, five years after the language is out, it finally becomes fast because it finally gets all the optimizations that every other language already has. We're not falling behind that way.

On the Lumen core team I'm primarily responsible for the runtime and the BIFs. The runtime is composed of five layers: memory management, terms, processes, schedulers, and BIFs. The first layer, memory management, is where the runtime work was split between Paul Schoenfelder and myself. Paul looked at how the allocator code is written in the BEAM and ported that, so chunk sizing, super chunks, super carriers, all that stuff is in there, and you don't have to worry that it will behave unexpectedly from how you're used to thinking about memory and GC on the BEAM.
All that memory management, and all the BIFs, are property-tested using Rust's proptest. So we know it's segfault-safe, because we have to use unsafe code to do memory management in Rust; it's not safe Rust. And I'm not just saying "oh, we property-tested it, therefore it must be safe." No: the property tests found some segfaults, and now that we run them we can trust that we've eliminated those segfaults.

From the perspective of everyone here just running Erlang or Elixir, the memory model for processes is the same: processes have heaps, there are reference-counted binaries, terms are 64 bits, and garbage collection is per process. Processes have similar features to those on the BEAM, and from Erlang and Elixir code it's going to behave identically, so you don't really have to think about it.

Lumen's scheduler works similarly to the BEAM scheduler in that there is one per thread. Each time the scheduler runs, it checks whether any timers have timed out, and exactly one process is run. Right now we haven't implemented dirty schedulers, because for the main web target there's not really a concept of what a dirty scheduler would mean. We don't think it would be safe, because you'd be trying to freeze a thread, and that would be bad. How the schedulers work of course differs on WebAssembly versus native. On the web we have to deal with the main thread being the one we get, with everything else a web worker, and the Rust wasm ecosystem deals with that for us. On native there are no special threads, and from our testing we know lots of schedulers work fine: because we have thousands of tests, and Rust spawns a new thread for every test to keep them isolated and parallel, we know we can spin up something like 10,000 schedulers in 13 minutes and nothing bad happens.

WebAssembly calls are blocking. There's no built-in support for declaring a function as async the way there is in JavaScript.
Even if we could declare an async wrapper, we want timers to work without polling, and without hoping event-listener callbacks fire at the right moment to wake schedulers up. So we want the scheduler to somehow be running in the background all the time. On WebAssembly, so that we don't block the main thread, we use requestAnimationFrame. People in JavaScript used to use setInterval and just keep calling their code over and over, but browsers started throttling that because it was abused. Now the way to do it, even if you're not doing animation, is to call requestAnimationFrame and keep rescheduling over and over, because then it runs on every paint. Browsers can unfortunately paint whenever they want; they say to just assume 60 frames per second, but they don't let you actually detect the frame rate, which is kind of annoying. So we just assume we have 16 milliseconds to work with, and that leads to about a 3 to 4% CPU overhead at idle, just to check whether any process should be woken up or any timers have timed out. We can optimize that later, though, based on knowledge about the timers.

The runtime needs two directions of interaction with the web: JS calling into Lumen, and Lumen calling out to JS. As described in the scheduler section, once the WebAssembly module is instantiated, the scheduler is started and runs in the background; it actually has no processes. It does not have an init process, and it does not have an application tree, for the demo I'm about to show you; this is just running one application by itself. We will eventually support all that init and application stuff, but we didn't need it for this, so we didn't go through the extra trouble.

These steps have to happen asynchronously. To call asynchronously we use the await keyword in JavaScript. Using the lumen_web library, we spawn a special process that is immediately executed.
This allows us to do the JavaScript-to-term conversion in that process's heap, without having to make a junk heap somewhere, as we would if we just did it as an apply. Then we put a special function call at the bottom of the stack that will complete a promise, with the apply on top, so that when the apply returns, the special call at the bottom can shove the result over the wall to JS land. To generate the promise, we give the Promise constructor an executor, and the executor doesn't need to be anything special: it is just a struct that holds the two callbacks we get from the promise, resolve and reject. Resolve is for when everything went well; reject happens automatically if your process dies with an abnormal exit (a normal exit is fine). Back in JavaScript, the call returns a promise, so the awaiting code just waits. In any given frame of the compiled code, we get the arguments from the stack, replace the current frame pointer and the next label, and continue.

Let me check the time... yeah, I've got two minutes. Okay, so we're going to jump straight to the demo. Oh wait, was I not on that screen? No, right, sorry, I forgot how this works. Okay, here it is working. We have full DOM interaction, so I can submit a form and it will generate table rows, and Lumen Web gives you all of this, so you can call these as normal Erlang functions. What this is doing is a spawn chain demo: we Enum.reduce over a list of processes, process N gets process N minus one's pid, and then I give it zero and it goes back the other way, adding. So this shows that we can spawn processes, we can pass integers, we can pass integers even when they get too big, we can print, and... oh, I'm out of time.
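To unpack that promise-executor trick, here is a plain-JavaScript sketch (the `PromiseCompleter` name is hypothetical; Lumen's actual type is a Rust struct, but the capture pattern is the same):

```javascript
// Hold a promise's resolve/reject callbacks so some other code -- here,
// standing in for the Lumen process that finishes the apply -- can complete
// the promise later. The Promise constructor runs its executor synchronously,
// so both callbacks are captured before the constructor returns.
class PromiseCompleter {
  constructor() {
    this.promise = new Promise((resolve, reject) => {
      this.resolve = resolve; // call on normal return
      this.reject = reject;   // called if the process exits abnormally
    });
  }
}

const completer = new PromiseCompleter();
completer.promise.then((value) => console.log("process returned", value));

// Later, when the spawned process's apply returns:
completer.resolve(42);
```

On the JS side, `await completer.promise` is all the caller needs; the runtime decides when (and with what) the promise settles.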
Okay, one more thing. That showed that the runtime works, but the Elixir code in it was translated into Rust by me by hand; we're still working on the compiler. There is, however, an interpreter that works. If I go here, you can see it says "Elixir in your browser". This Erlang file is just sitting on my disk, and if I reload the page, the interpreter reads that file using the normal JavaScript file APIs. It reads it in, builds the AST from plain Erlang source, and runs the code, printing into a div element, so it's doing DOM interaction too. That's the equivalent of what the compiled demo was doing; it was converted from Elixir code, and I did it in Erlang because that's faster to convert. One of the Elixir core team members has a tool that will decompile any .beam file back to Erlang, and we used that to convert the Elixir to Erlang for this. It's a Mix task called mix decompile; it's on his GitHub, I don't think it's a hex package yet.

Okay, questions, I guess?

[Audience question about whether there will be a REPL.] Oh, do we expect there to be a REPL? We can do a REPL easily with the interpreter, because we can just ship you the interpreter. With the compiled version it would be harder, because we'd have to ship a REPL loop that we normally wouldn't ship, so unfortunately you'd probably have to use the interpreter for the REPL. We might, though, because we want to support something almost like a boot script, for the same reason that sometimes you boot your app as a release with everything started, and sometimes you want a clean console with not everything booted.
We would probably be able to support that, because it doesn't change what you ship, just what you have running. We'd potentially need a term parser so you could change the arguments to the applications when you boot them with the boot script, but the boot script is usually in term_to_binary format already; it's not human-readable Erlang term format. So we will definitely support a term_to_binary boot script, but we don't know if we'll get to the point where a human-readable REPL works for compiled code without the interpreter.

[Audience question.] Yeah, I'm loading the file here, but it doesn't have to be loaded from a file. I could also have just pasted the text into the console and that would have worked; it just would have been more error-prone copy and paste.

Anything else? We do have stickers for Lumen, if anyone wants stickers. Yes, of course. Thank you.