Hello everyone, and thank you for coming to this last talk of the day. I know you're all probably itching to get home. My name is Dominik Picheta and I'm a long-time core contributor to Nim. I actually wrote the first book on Nim, called Nim in Action. These days I spend my days working at Facebook and my nights working on Nim whenever I can. But let's get right into the talk: async/await in Nim. Many of you here likely don't need to hear this, and maybe some of you have already heard of Nim, but for those not familiar with it, and for the benefit of those watching at home, I just want to say a few words of introduction. So what is Nim? Obviously Nim is a programming language, but it has a few characteristics that make it quite special. It is efficient and portable: it compiles to C, C++, Objective-C and even JavaScript, which allows it to match C's speed and gives it access to C's wide range of libraries. It is easy to pick up: Nim focuses on building a small language core, with more features implemented using its brilliant macro system, and this makes it easy for anyone to learn. It is a modern language, as it includes many features you'd expect of one: generics, iterators, closures, a good module system, and so on. And it is production-ready: we had a 1.0 release just last year, so Nim now guarantees backwards compatibility, making it ready for use in production. The great thing about the 1.0 release is that it shipped with two amazing features, procedural macros and async/await, and we will touch on both of these in this talk. So without further ado, let's move on to the main topic, but first let's go over some basics. What is the problem with IO?
IO operations, such as reading data from a hard drive or receiving information over the network, can be very slow, and performing synchronous IO results in your application doing no useful work while its IO operations are in progress; in other words, your application becomes blocked. Asynchronous IO, on the other hand, solves this by offering a mechanism which lets you repeatedly check whether an IO operation has completed, but there is no simple way to manage this by hand. So what can we do? How do you manage thousands of IO operations, with many different actions to be taken when each operation completes? The most basic solution is to use callbacks, but let's face it: as many of you likely know, callbacks become very difficult to manage. Here's an example of some callbacks in Nim. The main reason is that they fail to compose well; ideally, we want to write our IO code the same way we already write our non-IO code. Just to explain this code: it's basically three functions. The getData procedure at the bottom reads 100 bytes of data from a socket and takes a callback, onGotFirstData, which then reads another 100 bytes and takes another callback, and that callback finally prints out the result of the recv call. So callbacks suck; I hope we can all agree on that. I think one of the best solutions is what we call async/await, and this is another example showing just that. We again have a getData procedure taking a socket, an AsyncSocket. You see the async pragma there, which signifies that it's an asynchronous procedure. And it's immediately clear that the first 100 bytes we read from the socket get discarded, and only the second 100 bytes get printed. So the code is much easier to reason about, and while you do have these await calls, they offer a useful hint as to where IO is being performed. Now, you've probably seen this in other languages like C# and Rust.
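The slides themselves aren't reproduced in this transcript, but the contrast the speaker describes can be sketched in Python rather than Nim. The names (`read_100`, `get_data`) and the simulated reads are illustrative, not Nim's actual API; the point is only the shape of the two styles:

```python
import asyncio

# --- Callback style: control flow is inverted and nests one level per read. ---
def read_100(chunks, on_done):
    """Simulate an async read of 100 bytes that invokes a callback."""
    on_done(chunks.pop(0))

def get_data_callbacks(chunks, results):
    def on_first(first):            # first 100 bytes arrive: discarded
        def on_second(second):      # second 100 bytes: the payload we want
            results.append(second)
        read_100(chunks, on_second)
    read_100(chunks, on_first)

# --- async/await style: the same logic reads top to bottom. ---
async def read_100_async(chunks):
    await asyncio.sleep(0)          # yield to the event loop, as real IO would
    return chunks.pop(0)

async def get_data(chunks):
    _ = await read_100_async(chunks)      # discard the first 100 bytes
    return await read_100_async(chunks)   # keep the second 100

results = []
get_data_callbacks([b"a" * 100, b"b" * 100], results)
print(results[0][:3])                                    # b'bbb'
print(asyncio.run(get_data([b"a" * 100, b"b" * 100]))[:3])  # b'bbb'
```

Both versions produce the same result, but the await version needs no nesting, which is exactly the composability point being made.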
But there's something special about this: it's implemented entirely using macros in Nim. There is no support for this in the compiler at all. And that is what I will outline in more detail now. But first, let's go through how all of the components of Nim's async fit together. So, much easier to read and understand, hopefully. There really isn't much to it; there are four components: futures, async procedures, the selectors module and the async dispatcher. Let's look at these in a little more detail. The Future is just a simple object which acts as a container. If I run through the code at the top, you see it's a simple type definition, a generic one which takes a generic type T, and there are four fields: value, which stores the value held by the future; callback, which you can set to a procedure that gets called when the future is completed; finished, to track whether the future has completed or not; and an error field, for when, for example, some error occurs during the computation of your future. Let's move on to async procedures now. Here we have another example of one: a procedure called findPageSize. It takes two arguments, an AsyncHttpClient and a URL, and it returns a Future containing an integer. After that you see the async pragma again, to signify that it's an async procedure, and in the body we just use the HTTP client to send an HTTP GET to the URL and then return the length of the data we receive. So the question now becomes: since Nim's compiler has no idea how async procedures work, how do we express this without the async pragma? One possible way is to translate it to use callbacks, and this is what that would look like. The signature is very similar; you don't have the async pragma anymore, and you have the result being set to a newly allocated future.
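As a rough sketch of the Future container just described, here is a minimal Python analogue. The field names follow the talk (value, callback, finished, error); the method names `complete` and `fail` are assumptions for illustration, not Nim's exact API:

```python
class Future:
    """Container for a value that will exist at some later point."""
    def __init__(self):
        self.value = None       # the result, once available
        self.callback = None    # procedure invoked when the future completes
        self.finished = False   # has the future completed yet?
        self.error = None       # exception raised during the computation, if any

    def complete(self, value):
        """Store the result and fire the callback, if one was attached."""
        self.value = value
        self.finished = True
        if self.callback is not None:
            self.callback(self)

    def fail(self, error):
        """Record a failure and fire the callback, if one was attached."""
        self.error = error
        self.finished = True
        if self.callback is not None:
            self.callback(self)

# Usage: attach a callback, then complete the future later.
fut = Future()
fut.callback = lambda f: print("got", f.value)
fut.complete(42)   # prints: got 42
```

The key behaviour is that completing the future drives the callback, which is what lets the dispatcher chain asynchronous operations together.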
And then we call the getContent procedure again, which returns a future; we assign it to a new variable and assign a new procedure to its callback field, and in that callback we complete the resulting future with the length of our data future. There are a lot of futures going around here, but I hope you understand what I mean. So this is not ideal. The problem is that it doesn't scale: as soon as you introduce more control flow into your asynchronous procedure, you run into problems, and translating it becomes very difficult. Now, I should preface this with something that Andreas told me, which is that apparently it is possible to achieve this. But I looked into it and I couldn't find any programming language that does it. Even JavaScript, if you use Babel to translate JavaScript code with await into ECMAScript 5, still uses iterators, which takes us to our second translation attempt: using an iterator. You see here that we have, again, very similar code; it's just that we're using an iterator. We have the closure pragma, which in Nim basically turns the iterator into something that can be allocated on the heap. And this makes the translation much easier, because it allows us to simply change each await statement into a yield. The rest is fairly similar to the previous code, but hopefully that helps you see that it wouldn't be as difficult to translate a more complicated example. So the scalability problems are solved. Now we move on to some metaprogramming in Nim, and I'm going to show you how you would achieve this translation. Okay, so we have this asynchronous procedure again. It's slightly simpler than the previous one, just for the sake of this example; I think we need to simplify it a little. When you start developing macros in Nim, usually what you start with is something like this: an async macro.
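The iterator-based translation can be sketched in Python, where generators play the role of Nim's closure iterators: every `await` becomes a `yield` of a future, and a small driver resumes the generator when the yielded future completes. All names here (`get_content`, `find_page_size`, `run`) are illustrative stand-ins for the talk's Nim code:

```python
class Future:
    def __init__(self):
        self.value, self.finished, self.callback = None, False, None
    def complete(self, value):
        self.value, self.finished = value, True
        if self.callback:
            self.callback(self)

def get_content(url):
    fut = Future()
    fut.complete(f"<html>{url}</html>")   # pretend the IO finished instantly
    return fut

def find_page_size(url):
    # was: return len(await client.get_content(url))
    data_fut = get_content(url)
    data = yield data_fut                 # the "await" point becomes a yield
    return len(data)

def run(gen):
    """Tiny driver: step the generator, feeding in completed values."""
    value = None
    while True:
        try:
            fut = gen.send(value)
        except StopIteration as stop:
            return stop.value
        # A real dispatcher would set fut.callback and go back to the event
        # loop here; in this sketch every future is already complete.
        value = fut.value

print(run(find_page_size("nim-lang.org")))   # prints: 25
```

Because the generator can be suspended at each yield and resumed later, arbitrary control flow (loops, conditionals) around the await points survives the translation, which is exactly why this approach scales where the callback translation does not.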
It takes a body parameter of type untyped, which is like a magical type referring to one or more code statements, and we return untyped as well, because we are basically transforming one code statement, a procedure, into another. In its body, all we're doing is displaying the tree representation of our abstract syntax tree, and this is what that looks like. At the bottom there's a link if you want to try it out on the Nim playground; you can run it in your browser. Basically you get this nice tree structure containing each of the components of our procedure: the name, testAsync; the parameters, which in this case is just a return value; and the body, which contains the await and the return statement. Going back to our example: how do we develop our macro to translate our asynchronous procedure into an equivalent iterator? Well, we do something like this. I've obviously taken some liberties here, because I wouldn't be able to show something that works generically for all asynchronous procedures; it just wouldn't fit on the slide. What we do here is hard-code the location in the AST of each of the nodes that we're translating, and that's what you see there. We take the first child node of our body and assign it to the name variable, which is the name of our procedure. Then we take the return type, which is in the third child, where the first child of that gives us the actual return type. And the awaited function we just grab from the body, assuming that there is only one, so obviously this would break pretty quickly. Then we use this nice feature in Nim which allows us to basically quote what we want our macro to output; we use backticks to fill in the AST nodes that we want, and that becomes the result. Finally, we display the result of our macro, the AST nodes that we are returning, in the form of Nim code.
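The dump-the-tree, rewrite-nodes, print-the-result workflow has a loose Python analogue in the standard `ast` module. This sketch is not how Nim's macro works internally; it only conveys the same three steps (inspect the AST, transform a node, emit code). The names `await_` and `yield_from_` are invented placeholders, since `await` and `yield` are keywords:

```python
import ast

source = """
def test_async():
    data = await_(get_data())
    return data
"""

tree = ast.parse(source)
# Step 1: inspect the tree, analogous to Nim's treeRepr.
print(ast.dump(tree.body[0], indent=2))

# Step 2: rewrite every call to `await_` into a call to `yield_from_` --
# roughly the shape of what the async macro does to await statements.
class AwaitToYield(ast.NodeTransformer):
    def visit_Call(self, node):
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id == "await_":
            node.func.id = "yield_from_"
        return node

new_tree = AwaitToYield().visit(tree)

# Step 3: display the transformed tree as source code, analogous to
# printing the macro's result as Nim code.
print(ast.unparse(new_tree))
```

Hard-coding node positions, as the slide does, corresponds here to indexing directly into `tree.body[0].body`; a transformer class is just the more general version of the same idea.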
And again, you can use the link at the bottom there to play around with this, and if we run this code, we get that displayed in our console: the result of our macro. So that concludes the metaprogramming. Hopefully that gives you a bit of an idea of how it works, and maybe inspires you to take a look at it in more detail. Let's quickly run through some of the other components. We have the selectors module, which lives in the standard library and implements a readiness-based asynchronous IO API. It basically wraps epoll, kqueue and the like and exposes them through a nice API. It's dependency-free, it's high-performance, and it's extremely portable, because it supports basically everything. We also have the async dispatcher, which is built on top of selectors and implements a proactor API. So instead of asking the system, "I want to read from this socket; can I read from it?", you say, "I want to read 100 bytes of data from this socket", and it lets you know when that data is ready. This is actually how it works on Windows with IO completion ports: the async dispatcher implements IO completion ports on Windows and provides a layer on top of the selectors module to expose a proactor API. Okay, so really quickly, the current status of Nim's async: it's used in production. The Nim forum runs on it. We also have an HTTP server which gets quite good numbers on the TechEmpower benchmarks, up there with Rust; it's called httpbeast if you want to take a look. As for the future of async: borrowing some ideas from Rust, maybe achieving zero-cost abstractions by using polling futures; better integration with Nim's parallelism, since we currently have no way to use spawn together with await; and better stack traces as well. The best way to learn? Grab my book. And that's it. There are some more links, and I'm happy to answer any questions. Okay. Okay, good. Yeah. That is a good question.
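Readiness-based IO, as implemented by Nim's selectors module, can be demonstrated with Python's own stdlib `selectors` module, which wraps epoll/kqueue in the same spirit. This is a Python illustration of the concept, not Nim's API: you register a socket and ask the OS which registered sockets are readable right now, instead of blocking on any single one:

```python
import selectors
import socket

# DefaultSelector picks epoll, kqueue, etc. depending on the platform.
sel = selectors.DefaultSelector()

# A connected socket pair stands in for a real network connection.
a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)

# Readiness model: "tell me when `a` is readable".
sel.register(a, selectors.EVENT_READ)

b.send(b"ping")                      # make `a` readable

data = None
for key, _mask in sel.select(timeout=1):
    data = key.fileobj.recv(100)     # guaranteed not to block now

print(data)                          # prints: b'ping'

sel.close()
a.close()
b.close()
```

A proactor API, by contrast, would let you submit "read 100 bytes" up front and be notified when the data has already been read, which is the model the async dispatcher layers on top of this (and what IO completion ports provide natively on Windows).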
Well, I think that would have ended up being a lot more complex in a language like Nim, which is meant to be a systems programming language: if you have green threads, I would assume you also need a runtime to support them, whereas with this approach you can simply choose not to use it if you don't want to. That's the main reason. Yeah. So the situations where you have... so I'm not entirely sure what you mean. The way it currently works in Nim is that each future can basically emulate a callback: because you return a future from every asynchronous procedure, you can say, okay, assign a callback to this future, and it gets called whenever the future is ready. That's how you do it. Every time you read from the socket, you get a new future and you assign the callback to it, repeatedly. Does that make sense? Okay. Cool. Thank you so much for the brilliant talk. If you leave the room, look around to see if there's any...