Oh, we got the thumbs up, okay, people, everyone's here, so that's good. If more people are wandering in, that's also cool. This is Timothy Jones, he seems pretty cool, and his presentation should be interesting, otherwise he wouldn't be here, so hands together for Tim. Yeah, presumably you're here because you like JavaScript or, I guess like me, you're forced to write in JavaScript because you need to write something in the browser. I didn't actually have JavaScript in the title of the talk, which makes this a bit interesting, but yeah. So this talk is about implementing a fairly large asynchronous JavaScript application and how awful an experience that is, and how we can use these promise objects to represent asynchronous actions and tame this awfulness down into something that's less awful, I guess. I won't go so far as to say that it's good. And part of what this talk is looking at is not only a custom implementation of promises, but taking the standard, specified concept of a promise for JavaScript and extending it to achieve more interesting things that were required by the application. So the context of this talk is Grace. Usually there's more on the slide when I'm introducing the language, but fortunately we don't actually need to know too much about the language, so I don't need to spend too much time on it. It's a language for education, and if you're interested in that, then you can go to Michael's talk on Friday; he is talking about the actual language, so that'll be an interesting talk. But just so you can recognize it and tell the difference between it and JavaScript, here's a little bit of what it looks like. You'll notice there are some weird little oddities going on there, like the while loop has curly braces in some places instead of the parentheses that you'd normally expect, but we don't really need to go into it.
So the current implementation of the language is called Minigrace, and what that does is cross-compile the Grace code to either C or JavaScript. In particular, the JavaScript backend is slightly more experimental than the C backend, and it generates code that looks like the C code, so it doesn't really take advantage of many JavaScript features; it still flattens everything out like you would have to in C. And we have run a trial course using this language and this implementation, by compiling student code to JavaScript and running it. And it runs in the browser, because it's compiled to JavaScript, obviously. This is interesting because I built a little editor for this language, just a simple text box that highlighted the code, and then you could press run and it would potentially spit some things out. And the interesting thing about that is that the editor the students are using to write their code is running in the same environment as the code that they're writing, right? So suppose we implement a while loop in JavaScript and then expose it in Grace, so that when you run a while loop in Grace, this is the JavaScript code that's running in the backend. Well, we might just defer to the JavaScript while loop: while the condition block, every time we run it, gives you back true, run the do block. But this poses a bit of a problem, because now if someone using this editor writes code like this, or in fact more complicated code that still runs forever, or even just runs for a long time, then everything gets stuck, right? Because the environment that they're using to run the code is running on the same thread as the code that they've written in the environment.
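As a sketch of that deferring implementation (the names `whileDo`, `condBlock`, and `doBlock` are my own, not Minigrace's actual generated code):

```javascript
// Sketch of a while-loop primitive that simply defers to JavaScript's
// own while loop. `condBlock` and `doBlock` stand in for Grace blocks
// exposed as JavaScript functions; the names are assumptions.
function whileDo(condBlock, doBlock) {
  // Re-run the condition block each iteration; while it yields true,
  // run the body. Nothing else can run until the loop finishes, which
  // is exactly the problem being described.
  while (condBlock() === true) {
    doBlock();
  }
}

// Usage: counts to 3, synchronously blocking the thread while it runs.
var count = 0;
whileDo(function () { return count < 3; }, function () { count += 1; });
console.log(count); // 3
```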
Okay, so the whole browser locks up, and they can't press the stop button, because the stop button is waiting for the code to finish running before the interpreter can stop; in fact, there is no stop button, because it doesn't make sense to have a stop button. And unfortunately, we're in the browser and we're using JavaScript, so we can't use threads; we have an explicitly single-threaded environment. And Web Workers, the sort of faux threading that JavaScript has, aren't going to help us here, because both of these two things need direct access to the DOM. They both have interfaces that interact with the DOM, and Web Workers mean you can't do that. So Hopper is my solution to this problem. First of all, it's not a compiler, it's an interpreter, partly just because that made a lot more sense in this context for running student code, not having to go through the process of compiling things and then running things. And the whole interpreter is written using asynchronous JavaScript. That means that, essentially, while the interpreter is running, after a certain amount of time it can stop, yield control of the single thread, let other things run, like the editing environment and other events associated with the user interacting with the browser, and then later on start up again and keep running the code. And the main problem is that writing JavaScript in an asynchronous style is really terrible and has lots of problems. One of them is just a purely aesthetic thing, I guess: the pyramid of doom. I've heard this called a bunch of other things, but essentially you indent your code because you're entering into a new function that is what you're going to do when the previous action is done. And then that runs another asynchronous action, and you have to pass that a function to say what you're going to do after that asynchronous action finishes running.
You just indent and indent, and your code goes off the side of the page, and you can't read your code anymore because you've got to move around too much. It's also really hard to reason about what the asynchronous code is going to do. You don't have any guarantees about when the callbacks that you're passing to these asynchronous actions are going to be run; the API might make guarantees, and that's useful, but in general, we don't know when functions are going to be run. The asynchronous operations aren't composable. It's really difficult to do things like take a list of operations and say: give me a new operation that runs them all side by side, and then, when they've all finished, runs this other action over here. And then also, if you're doing the Node.js style that's sort of become the standard for asynchronous JavaScript, every time you pass in a callback, that callback has to do explicit error handling, because the language can't help you anymore and errors can still happen. So every callback has to take an error as its first argument, and every callback should be checking at the top whether or not that error happened. Every time you do any asynchronous action, you have to check those errors. So here's an example of some asynchronous JavaScript and why it's kind of awful. We've got a list of URLs, and we want to go off and fetch data from all those URLs, and report each of the URLs that we're getting from. And then once they all succeed, or assuming they all succeed, we're going to say which one was the last URL, right? And there are some really interesting problems here. For instance, we've got the console.log right at the bottom, which we expect to run before the function that we passed in, because the function is presumably going to run at some point in the future. But we don't know that; we don't know how get is implemented.
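A sketch of the kind of code being described, with a hypothetical `get(url, callback)` standing in for a real fetch (simulated here with setTimeout so the example is self-contained):

```javascript
// Hypothetical Node-style async fetch, simulated with setTimeout.
function get(url, callback) {
  setTimeout(function () {
    callback(null, "data from " + url);
  }, Math.floor(Math.random() * 10));
}

var urls = ["one", "two", "three"];
var remaining = urls.length;

for (var i = 0; i < urls.length; i++) {
  var url = urls[i]; // `var` is function-scoped, not loop-scoped
  get(url, function (error, data) {
    if (error) { throw error; } // explicit error handling, every time
    console.log("fetched " + url); // bug: always logs the LAST url,
    remaining -= 1;                // because every callback closes over
    if (remaining === 0) {         // the same `url` variable
      console.log("last was " + url);
    }
  });
}
```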
Get might not guarantee that it isn't going to run that function synchronously, and so we can get weird outputs. This code is also broken: even if the requests all run and succeed, it will still not necessarily print out the URL that was the last one to be fetched. That's mostly just down to awful JavaScript scoping; the variable isn't local to the loop, and so in fact this always prints out the last URL. That's not really an asynchronous problem, it's a problem with JavaScript in general, but it's sort of showing off how really awful this can be: it will always print out the last URL, regardless of when that URL was actually successfully fetched. So when I started implementing Hopper, I knew I wanted it to be asynchronous, and so I was writing it this way. Every time I ran any action, a large majority of the functions in the system were asynchronous, and it meant that as the interpreter got bigger and bigger, I had more and more callbacks everywhere, more and more explicit error handling, and more and more weird behavior that took me longer and longer to work out why things were going weird. It just became too much, and it was basically impossible to reason about what was going to happen anymore. So the JavaScript community has realized that this is a problem, and the solution they've come up with is this thing called promises. They're also called futures in other contexts, but the JavaScript community decided to call them promises in this case. Rather than passing around callbacks and manually handling errors, what we do instead is encode this concept of an asynchronous action, something that goes off, runs things, and brings back values, as an object. So when you run an asynchronous action, rather than passing in a callback, it gives you back an object that represents this future operation, this promise that a value will arrive.
And then if you want to perform an action after it, if you want to get the resulting future value out of it, you can pass that object a callback. And the act of passing in a callback gives you back another promise that represents the callback's asynchronous execution. So the things that we want to do with these promises are things like detecting whether the operation is finished, getting the value out of it if it has finished, and then, as I said, performing operations after it. And in fact, all three of those things are really just the callback operation. So if we have this asynchronous operation get, we don't pass it a callback. Now it returns a promise, and then we call this method then on it, passing it that callback, and you can see that that's sort of doing all three of those operations at once. It's saying we will find out when it has finished, because the callback we gave it will run; we will get the value that the promise finished with; and then we can run more actions. In this case, once it has in fact finished, we're posting the same data we got to a different URL. And the really cool thing is that these promises are composable. By returning the promise returned from post in the callback, we get a promise that is the composition of both get and post, right? So the whole expression is also a promise, which is the execution of get, the running of the callback, and then the execution of the post. And we can call then on this expression, and that will run after the get and the post have run. So promises for JavaScript have been specified, in a specification eventually updated to be called Promises/A+, and they've now been implemented in the modern versions of Chrome and Firefox, as I've just found out. So there is a native implementation of them in some browsers.
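The get-then-post composition just described can be sketched with standard promises (`get` and `post` here are hypothetical, simulated with immediately resolved promises):

```javascript
// Hypothetical promise-returning operations, simulated.
function get(url) {
  return Promise.resolve("data from " + url);
}
function post(url, data) {
  return Promise.resolve("posted '" + data + "' to " + url);
}

// Returning the promise from `post` inside the callback means the
// whole expression is a promise for get, then the callback, then post.
var combined = get("example.com/in").then(function (data) {
  return post("example.com/out", data);
});

// This callback runs only once both the get and the post have run.
combined.then(function (result) {
  console.log(result);
});
```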
And what the specification defines is actually just the behavior of that then method; a promise is just an object that has that method and behaves that way. So there are multiple implementations of these promise objects in JavaScript, and they provide other things: beyond then, they often provide constructors for creating new promises, and they provide helper methods for things like the little problem I had before, where I said I wanted to take all these operations and run something once they've all finished. So they'll often have a method like all, which takes a list of promises and gives you back a promise that represents the execution of all of those things. So this is the interface of then: you can pass it two functions, or either one of them if you want. You can pass neither, but that doesn't really do anything. And there are some guarantees about the way that this works. The first is that neither of these functions will be called more than once; in fact, there will only be one call, of one or the other of these functions. The other interesting thing is that neither of them will be called before the stack is empty. Remember the code before, where we had the log underneath the asynchronous action: we didn't actually have a guarantee of whether that would run before or after the callback we passed. Now we have a guarantee that the callbacks we pass to promises won't be run now; they cannot be run synchronously. The stack has to completely unwind, so essentially all the calls that we had at that point have to finish before the callback will be run, even if the asynchronous action has already finished. So here's a sort of visual diagram of how these promises work. We've got some running code that has a reference to a promise. I call then on it with some function f, and what I get back is this sort of container task, right?
So you've got a task on the outside that represents the execution of the promise and then of the function with the resulting value of the promise. And the important thing as well is that if the function returns another task, then the outer task encapsulates all of that behavior: the outer task won't be finished until both the original task and the resulting task are finished. So now we can implement our while-do loop. This is great. It's a little bit more complicated, but it has to be, because it's asynchronous; we can't defer to the JavaScript while loop anymore. So we use recursion instead, and we say: okay, apply the cond block, and when that's finished, I have a value that I can look at, the condition, which is a boolean. And I can say: if that was true, then run the do block, and when that's finished, loop back around to the top and check the cond block again, right? Whereas if the condition is not true, the whole thing just stops, because there's no recursive call. So, Hopper has its own implementation of promises, and they're called tasks because they're only promise-like: they don't correctly implement the spec. This is on purpose, because Hopper has some requirements that involve reaching into the implementation and fiddling around with some things. They can also have a this context bound into the callbacks that run on those tasks, which the spec explicitly says you're not allowed to do. If anyone has tangled with this in JavaScript before, you know that it never works and you have to be really careful; this is just making that a bit tidier and making the value of this what you expect it to be. And originally, another reason they were tasks was that I wasn't clearing the stack, and I quickly found out why you have to: basically JavaScript has a really small stack and it just blows up immediately.
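The asynchronous while-do just described might be sketched like this, using standard promises rather than Hopper's task objects (applying a block is modelled as calling a function that returns a promise):

```javascript
// Asynchronous while-do: recursion takes the place of the loop.
function whileDo(condBlock, doBlock) {
  function loop() {
    return condBlock().then(function (cond) {
      if (cond === true) {
        // Run the body, then loop back around to check the condition.
        return doBlock().then(loop);
      }
      // Condition false: no recursive call, so the chain just ends.
    });
  }
  return loop();
}

// Usage: count to 3 without ever blocking the thread.
var i = 0;
whileDo(
  function () { return Promise.resolve(i < 3); },
  function () { i += 1; return Promise.resolve(); }
).then(function () {
  console.log(i); // 3
});
```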
Now there's a lot of recursion, and there are also a lot of methods inside the task implementation that it has to pass through, so it fills the stack up really quickly. Clearing out the stack is essentially like a tail call optimization, where we take code that is recursive, but in a standard way, and transform it into sort of a loop. So clearing out the stack is essentially the loop of asynchronous actions. Originally this was a cause for concern, because there was no way to implement clearing out the stack efficiently, but then it turned out that it could be done in a cross-browser way: just different hacks for different browsers. So it clears out the stack, and it doesn't defer to the event loop. It literally just clears out the stack of currently running JavaScript code, and doesn't allow anything else to run before running some other function. Deferring to the whole event loop is quite an expensive operation, so what this means is that we can clear out the stack and then immediately run the thing that we wanted to run. In Hopper we can also manually construct tasks using this task constructor: you pass it a function, and you can choose whether to resolve or reject it asynchronously. So here what we're doing is taking this sort of Node-style get API, this standard callback asynchronous API, and turning it into a task. In fact, that's sort of standard, so there's a helper method called taskify, which says: give me a Node-style callback function and I'll turn it into a task function instead. And then the other interesting thing is that the tasks manually yield to the event loop every so often; that's, I guess, the best way of saying it. We do actually want to yield control to the event loop, just not all the time. This is essentially what causes the while-do loop to properly yield and allow other things to run, in essentially the most efficient way.
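A sketch of what a `taskify`-style helper does, using a standard Promise as a stand-in for Hopper's tasks (the `get` function here is hypothetical):

```javascript
// Wrap a Node-style function (last argument is an error-first callback)
// into a function that returns a promise instead.
function taskify(fn) {
  return function () {
    var args = Array.prototype.slice.call(arguments);
    return new Promise(function (resolve, reject) {
      fn.apply(null, args.concat(function (error, value) {
        // The explicit error check now happens exactly once, in here.
        if (error) {
          reject(error);
        } else {
          resolve(value);
        }
      }));
    });
  };
}

// Hypothetical Node-style API, simulated:
function get(url, callback) {
  setTimeout(function () { callback(null, "data:" + url); }, 0);
}

var getTask = taskify(get);
getTask("example.com").then(function (data) {
  console.log(data); // "data:example.com"
});
```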
Yielding all the time is too much, but it has to yield at some point. Essentially, at the moment, an arbitrary amount of time passes and then it yields. There could be more efficient ways of doing that, and in fact there probably are, but I haven't investigated them right now. So using the system, now that Hopper is asynchronous in a nice way, we get a really interesting effect, where we have synchronous-looking Grace code. This is some Grace code that is essentially doing what we had before, but it's written in a synchronous style, an imperative style, where we can say: okay, get the URL, and we expect everything to block until the get is finished, and then we can run the post. But because behind the scenes the interpreter is running this asynchronously, it doesn't block the JavaScript thread. We've somehow achieved this appearance of threading; we've essentially achieved lightweight threading on top of the single main thread of JavaScript. And in fact, we can implement a really basic lightweight threading mechanism. Here we've got a JavaScript function which asynchronously spawns off some action and then returns. So the action that you call is run, but it yields: if it's going to run for a long time, it just yields. We come back here, and we say, okay, spawn itself is done, but this other task, we don't care about it; it's running off somewhere else. And when we expose that to Grace, what we get is sort of standard lightweight threading, where we can spawn off an action, run some action inside of that other lightweight thread, and continue on and do these other things. And so you can get the interleaving that you expect from normal lightweight threading, where we've got spawned, original, spawned, original, spawned, original. Except it's actually 50 milliseconds of the spawned action, then 50 milliseconds of the original, and so on.
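A sketch of the spawn primitive being described (the name and shape are my own, not Hopper's API): kick off an action asynchronously and return immediately, ignoring the task it produces.

```javascript
// Start `action` after the current stack has cleared, and return
// immediately; the spawned task runs off on its own, and we simply
// don't keep its promise.
function spawn(action) {
  Promise.resolve().then(action);
}

spawn(function () {
  console.log("spawned");
});
console.log("original"); // always prints first: the callback cannot
                         // run until the current stack is empty
```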
So just for the sake of performance, it's easier to do it that way than constantly interleaving between them. One of the big problems with implementing async is this: you can always take a synchronous action and embed it inside an asynchronous action. The asynchronous action will just block while that synchronous thing is running. So if you've got a function that you expect to return a promise, the code that is generating the promise can still go off and run synchronous actions and then return a promise. The problem the other way around is harder. If we've got code which is generating a normal value, and the calculation of the value depends on an asynchronous operation, then we essentially can't implement that function. We want to say: go off and do something, and then return the result of the asynchronous thing, but we want that function to run synchronously. We've got problems, because the asynchronous thing will yield, and we have to essentially clear out the stack before we can return the value that needs to be returned from this function. And so we just get stuck and we can't do anything. This is really important for Hopper, because it's supposed to interact with JavaScript in a nice, convenient way. JavaScript objects should appear in Grace as Grace objects, and Grace objects should appear in JavaScript as JavaScript objects. And the problem that starts arising is that, because the whole interpreter is asynchronous, all the Grace methods return tasks. So if I give a Grace object to JavaScript and it expects a certain interface, in this case it expects this object to have a speaking-time method that returns a number, the problem is that at the moment, without any sort of annotation there, when I call this method I'm just going to get a task back. And that's not what the JavaScript is expecting. So I need to have a way of saying: okay, this method in particular doesn't do any asynchronous actions, just run it synchronously.
But the problem is that all the things that it's going to do are still assumed to be asynchronous. In this case, the call to the method inside it is going to be an asynchronous call, because all method calls are asynchronous calls in Hopper. So what we need to do is essentially take this synchronous mode and make it transitive, and say: each method call that I make while I'm in synchronous mode has to also happen synchronously. So we've got a series of asynchronous operations, and we need to take them and say: okay, run now. I'm ready for you to run right now, and I need the result right now. So as well as then on tasks, I implemented a method called now. Now looks like then: it takes the same arguments as then, and it runs them in the same way as then, except you have a guarantee that those functions will run now, rather than after clearing the stack, or the task you get back will break. So either the task is an asynchronous wrapper around what is essentially a synchronous operation, and that's fine: it will go off and actually force the task to finish running the behavior it was wrapping, and then immediately run the callbacks. Or the task represented an actually asynchronous operation, it had a get or a post or something inside of it, and we couldn't force it, and so calling now just throws an error: we can't force this action. So there is a certain class of things which appear asynchronous but are secretly synchronous, and if we know which those are, then we can force them to run immediately. But the problem is that, in general, we have to assume that everything is asynchronous. The interesting thing about now is that it sort of breaks this concept of a promise as a black box. You're not supposed to be able to see what the operation inside is; you can only asynchronously ask it to say: when you're done, run this other bit of code. What now does is reach into the black box and say: hey, hurry up, I need the result now.
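A toy illustration of the idea behind now (this is not Hopper's actual implementation): a task that wraps a secretly synchronous computation can be forced, while a genuinely asynchronous one refuses.

```javascript
// Toy task: `compute` is present only when the wrapped work is
// actually synchronous; a genuinely asynchronous task has none.
function ToyTask(compute) {
  this.compute = compute;
}

ToyTask.prototype.now = function (onSuccess, onFailure) {
  if (this.compute === undefined) {
    // A real asynchronous operation cannot be forced.
    throw new Error("cannot force an asynchronous task");
  }
  var value;
  try {
    // Force the wrapped behaviour to run immediately.
    value = this.compute();
  } catch (error) {
    onFailure(error);
    return;
  }
  // Run the callback right now, WITHOUT clearing the stack first
  // (unlike `then`).
  onSuccess(value);
};

var secretlySync = new ToyTask(function () { return 1 + 1; });
secretlySync.now(function (value) {
  console.log(value); // 2, printed synchronously
});
```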
And you're not supposed to be able to reach into that black box. Another extension that I needed to implement was being able to stop tasks, because this is an interpreter, and when code runs forever, we want to be able to stop things; that was part of the original problem. The way this is achieved at the moment is essentially that each task is waiting on another task to finish, when you've got this chain of thens that you've called. So given some task that represents an asynchronous action, there is essentially a linked list of tasks inside that task, each representing a subcomponent of that currently running asynchronous action. You have this outer task that represents the entire asynchronous operation, and then at any moment there is one chain of tasks that you can follow down to the task that is actually currently running. So if we store that explicitly as a linked list, then we can just traverse that linked list, stopping all the tasks on the way down, until we eventually get to the one that doesn't have a link anymore, because it's the one that's running; we stop it, and we just get this error that says, someone stopped me, which comes back up as well. And that again kind of breaks this black box idea: I can reach into a task and say, no, you actually don't run anymore. And it's a lot worse than now. It's not just run now; it's actually: I'm going to completely break you. All the thens that would have run in the future are no longer run, because the code would have succeeded, or may have succeeded, but now it won't, because I intentionally reached in and broke it, essentially. This is kind of weird behavior, and it might seem better to just try and kill the interpreter instead. The problem is that not everything runs inside the interpreter, so we need this sort of fine-grained control over the asynchronous actions. If we try to stop the interpreter, then we have to wait for code to come back to the interpreter and realize that it's been stopped before things stop.
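A sketch of the chain-stopping idea (a toy, not Hopper's implementation): each task records the inner task it's waiting on, and stop() walks that linked list down to the one that's actually running.

```javascript
// Toy stoppable task: `waitingOn` links to the inner task this one is
// currently waiting on, forming a chain down to the running task.
function StoppableTask() {
  this.waitingOn = null;
  this.stopped = false;
}

StoppableTask.prototype.stop = function () {
  var task = this;
  // Traverse the chain, marking every task stopped on the way down,
  // until we reach the one with no inner link: the task running now.
  while (task !== null) {
    task.stopped = true;
    var next = task.waitingOn;
    task.waitingOn = null; // break the link as we go
    task = next;
  }
};

// Usage: an outer task waiting on a middle task waiting on the
// currently running inner task.
var outer = new StoppableTask();
var middle = new StoppableTask();
var inner = new StoppableTask();
outer.waitingOn = middle;
middle.waitingOn = inner;
outer.stop();
console.log(inner.stopped); // true: the whole chain was stopped
```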
This is sort of a little bit more immediate. Okay, so these tasks provide a nice way of taming this sort of horrible callback style. And part of the appeal is that they're a very simple concept, and they're simple for a reason: it's such a complex problem that you want to approach it with as much simplicity as possible, forcing the problem down into as simple a problem as possible so that we can solve it in a simple way. And it's a really nice approach, but in some cases, in particular in this interpreter, there are requirements that mean I can't be as simple as promises require me to be. I do need to reach into the promise and fiddle around with the operation, because I need to interact with the operation in some way. And one of the big points I should reiterate is that Hopper uses tasks everywhere. Originally the parser was synchronous, and then I put in a really big file and it froze the browser, because it took so long to parse the huge file. So I realized I had to go in and change the parser to make it asynchronous as well. So I've got a reasonably large application, everything is asynchronous, and all that asynchronous behavior is represented by promises; every single asynchronous action generates a promise. So how does that affect the performance of the application? As I've said, yielding to the event loop is expensive, but in this case it's necessary in certain circumstances. And the main thing is essentially that the cost of these tasks gets expensive as the application gets bigger, because we're making so many of them. And we want to think about how we can cut down on those problems, how we can increase the performance of the application, by trying to avoid holding on to promises longer than we need to, and trying to avoid creating promises when we don't need to create them. So at the moment, I've found that garbage is the main problem.
There are a lot of promises that are sitting around not doing anything, really, but they're still being retained, because there are links to them somehow. And there are a lot of tasks being created, but that's not too big a problem. It's sort of necessary: if we're doing all these asynchronous actions, we have to be allocating quite a lot of promises. And modern garbage collectors are founded on this idea of: well, you're going to allocate a lot of objects, and that's okay, as long as we can collect a lot of them as quickly as possible. So there are long-lived objects that are going to be mostly ignored by the garbage collector, and then there are the short-lived objects that have just been created and aren't needed anymore, and we can get rid of those as quickly as possible. So we shouldn't notice the performance impact too much; the garbage collector is designed to handle this problem. So looking at the while-do loop: yay, now we don't freeze the browser when we run this code, and we can stop it. But what happens to our performance when we run it? Well, this is a graph of the amount of memory that is currently being used, all the objects that are currently sitting in memory in the JavaScript VM. And it goes up, and it doesn't stop going up. I don't know how well you can see that; that's about 120 megs on the right-hand side. And I cut the graph off there; it keeps going up further and further, and there's a little bit of sawtoothing, so you can sort of see it's collecting some things. But there's way too much stuff left around. Specifically, this is literally the graph from running this code: a while loop that doesn't do anything. So when we run a while loop that doesn't do anything, we are running an asynchronous thing, so we expect there to be some promises being allocated. But there's no reason for us to be holding on to those anymore. It doesn't make sense that it should be behaving this way.
So this is just the previous implementation of while-do. If we think about what's happening here: there's an outer task that represents the currently running execution of while-do, and each of the inner tasks ends up being composed with this outer one. What we end up with is that the outer task represents a running inner task, and that running inner task produces another task, which is itself wrapped in a task. So we've got this outer task that is running the execution of the condition, and once that finishes, it's going to run our callback, and that's going to produce another task; the outer task represents the execution of both of those two tasks. But the problem is that the produced task was itself an outer task around the do block, and what the do block does is loop back around to the top of the while-do, which produces another outer task. So we're getting this big nesting of tasks, where the inner tasks aren't really doing anything. They're just sitting there saying: okay, when you're done, the task that I'm wrapped around is going to say it's done, and so then I can say I'm done, so I can tell the task above me that I'm done. But that's all they're doing, and they're not really necessary anymore. And this is sort of a very cut-down implementation of the task that is generated by the call to then. You can essentially see that what we do is take the value here, the result of f, the callback, and check if that's a task; and if it is, we say: okay, when the newly generated task is done, resolve the outer task. And what that is doing is creating a sort of implicit link between these tasks. We don't have an explicit field joining each of these tasks together, but the fact that we've saved resolve saves the outer task: the inner task captures the outer task through that resolve function.
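A very cut-down sketch of the then machinery being described, showing how saving `resolve` creates the implicit link (this toy omits error handling and the stack-clearing the spec requires):

```javascript
function Task(executor) {
  var callbacks = [];
  var settled = false;
  var settledValue;

  function resolve(value) {
    settled = true;
    settledValue = value;
    callbacks.forEach(function (cb) { cb(value); });
    callbacks = [];
  }

  this.then = function (f) {
    return new Task(function (outerResolve) {
      function run(value) {
        var result = f(value);
        if (result instanceof Task) {
          // The callback produced another task: only resolve the outer
          // task once that one is done. `outerResolve` closes over the
          // outer task, and that closure is the implicit link that
          // keeps the whole chain reachable.
          result.then(outerResolve);
        } else {
          outerResolve(result);
        }
      }
      if (settled) { run(settledValue); } else { callbacks.push(run); }
    });
  };

  executor(resolve);
}

// Usage: chaining through a task-returning callback.
var t = new Task(function (resolve) { resolve(1); });
t.then(function (v) {
  return new Task(function (resolve) { resolve(v + 1); });
}).then(function (v) {
  console.log(v); // 2
});
```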
And it kind of makes sense when you think about it, because the inner task has to exist for the outer task to finish: we're waiting on the inner task to finish, and that's going to be the thing that resolves the outer task. And we need all those tasks in the chain to preserve their expected behavior. So this is essentially what we end up with. It's another linked list; the resolve wasn't an actual field, but it's the cause of that retention behavior. And it's made worse by the fact that we're also saving the waiting-on fields for stopping things, so now we have a doubly linked list between tasks. And if you know anything about garbage collection, a doubly linked list is one of the worst things you can end up with, where you have strong references between everything, because it guarantees that nothing will be collected unless nothing is referencing anything from the outside. So the real problem here is that maybe the stop behavior isn't necessary as a strong link, and maybe we could fix that with weak pointers, which, I've just found out, are now in the latest version of Chrome. But in general in the browser, we don't have weak pointers. And that doesn't really solve the resolve chain anyway, because we need that resolve chain to preserve the expected behavior of these thens. So the simple solution is just to change the implementation of while-do, so that we drop all the returns, the tasks don't get composed together, and we manually create this outer task. But if you implemented while-do recursively in Grace, you'd end up with the same structure that we had before. And so it's sort of the case that loops are more efficient than recursion again; we've lost that benefit of the tail call optimization that we had before. And this solves the problem. It's still sort of going up, but you can see it's not as high.
And the reason it's still sort of going up is that the GC is just kind of lazy. It's fine with allocating 40 megs, so it lets it happen; it's not collecting everything it can. What you can see down there is that later on it got a little bit too high, and the GC got its butt kicked and had to collect more things, so it drops down a little bit on its own. And then pretty soon after that I actually pressed the force-garbage-collection button, and that's where it drops right off, where I said to the garbage collector: no, you actually have to run right now. And it shows that it can collect pretty much everything that was in memory at that point. So yes, we're still allocating things, but they're getting collected. So in order to properly solve this problem, in order to get the recursive implementation working properly again, why don't we try to identify all these tasks in the middle and say: let's just not store them, let's skip over them entirely. Because the only thing they're going to do is pass a value back up the chain, let's ignore them entirely and skip back to the one that is actually relevant in this case. So remove that resolve link, and say we're just going to resolve the one at the front. And those two in the middle don't matter, because there are no callbacks waiting on them, so it doesn't matter if they never resolve. But the problem is that we don't really know what is happening with those tasks; they could be held by something. So if we implemented that, this behaviour would be really weird, because the execution of this promise here depends on store finishing. But because we skip over store, when we get in here it's running because store has finished; but then we also say, hey, store, when you're finished, run some other code, and that never runs, because we've skipped over store. We put store to sleep, and store never knows that it's finished.
But the cool thing is that, as I said before, there's no way to actually learn anything about a task except by calling then on it. There's no synchronous way of interacting with a task to ask: are you done, what's the value you finished with? So what we can say is: okay, this task has been put to sleep, and if you call then on it, that wakes it back up again. And then the only problem really is that in the original structure there was no way of finding out whether or not it had finished. So what if we flip the arrows around on the ones that are asleep? Instead of saying this task is going to resolve this thing here, let's let it remember what its resolver is. That still allows the tasks in the middle to be garbage-collected, because there's no longer a link to them. And essentially it means that if I wake up, say, one of those tasks in the middle, it can follow that chain of links to the one that is actually running and ask: have you finished? And if it has, good, we can run that then callback with the value it finished with. And if not, then we have to do a little bit of trickery to change this really big long link here to point to the one that was woken up, and fiddle around with the callbacks a little bit. But it is possible. So really the point is that promises are a nice way of expressing this behaviour, but they come in large numbers, and they're just objects: if you allocate a lot of objects, you use up a lot of memory. That's basically how it works, and you can't really avoid the allocation. So essentially the idea is that it's okay to allocate a lot of these things, as long as you don't retain them unnecessarily, and in the end it takes a fair bit of trickery to make that work. I had some other stuff about asynchronous behaviour, but I'm going to skip that.
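The reversed-link "sleeping task" idea can be sketched, much simplified, like this. The names are hypothetical and this is far simpler than what Hopper actually does (the real implementation also has to repoint the long link and move callbacks when a sleeper is woken before the front task finishes), but it shows the key inversion: a sleeper holds a pointer up the chain instead of being held by a resolver, so unreferenced middle tasks can be collected, and calling then on a sleeper walks the chain to the task that is actually running.

```javascript
// A task that is actually running: holds its own callbacks and result.
function liveTask() {
  const waiting = [];
  let done = false, result;
  return {
    asleep: false,
    then: function (cb) { done ? cb(result) : waiting.push(cb); },
    resolve: function (v) {
      done = true;
      result = v;
      waiting.forEach(function (w) { w(v); });
    }
  };
}

// A task that has been put to sleep: it carries no state of its own,
// only a reversed link to the task it would have been resolved by.
function sleepingTask(target) {
  return {
    asleep: true,
    target: target, // reversed link: the sleeper remembers its resolver
    then: function (cb) {
      // Waking up: follow the chain of links to the task that is
      // actually running and attach the callback there.
      let t = this.target;
      while (t.asleep) { t = t.target; }
      t.then(cb);
    }
  };
}
```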
So yeah, the main thing is that promises can still be just as complex as other asynchronous callback-style code. And I guess the main problem is that asynchronous code is hard to write, and there's no silver bullet for that. It's a bit like concurrency: if you've got a big concurrent application, you're going to have problems. And treating promises like a lightweight threading solution isn't really right, because they're not quite simple enough for that to work. And then, yeah, abstractions have costs, and using an abstraction across a large application, you're going to feel that cost: even if the cost is small, when you use it at a large scale you're going to get a large cost. Cool. If you want to use Hopper, it runs on Node, and you can install it with the npm package manager; otherwise you can go check out the source. Cool, thank you. Thank you, Tim, that was awesome. On behalf of everyone at LCA, I must say thank you for coming. We have a small gift for you. Thank you. And a high five, that seems cool. Great, cheers. You can't ask a promise whether or not it's done, but there are other promise implementations in other languages where you do have that ability, where you can say: are you done? Right, so I guess, yeah, that's what I was about to say. The question is, and you can tell me if I've got this right, how necessary is it for the specification to exist, and in particular how necessary are the specific constraints the specification puts on implementations, given that a lot of implementations do actually add a lot of extensions. The main thing is that a lot of the implementations do have a lot of extensions, but they're on top of an otherwise conformant implementation.
The point is that the specification just says that as long as the then method works this way, you're conformant, and the guarantees that then gives you are the most important part. It means that the simplest implementation works the same way as the most complicated one, and in particular that it's all black box: you don't have any more power than the callbacks would allow you to have, but you've got a lot more guarantees. And not having power, in this sense, is a good thing, because this stuff is so hard to reason about that you want to be able to do as little as possible, to avoid getting weird things. The specification's a bit weird because there's this definition of a thing called a "thenable". You'll notice that where I said the callback, when it's finished running, checks if the result is an instance of a task, that's actually what my code said; in a compliant implementation, you don't just check whether it's an instance of your own kind of promise, you also check whether it has a then method. That way it can actually interoperate with other implementations of promises. That's been quite controversial, because there are other things which aren't promises but which have a then method that doesn't behave anything like what you'd expect, but that's the idea behind it. And in particular, now that we're getting native promises, those native promises can interact with custom-built ones. You're trying to create your own lightweight cooperative threading, so I guess this is for a future when more browsers support generators; that would pretty much give you that facility to do cooperative threading. Yep, promises plus generators give you essentially that. Yeah, so the question is: generators, in the future, will actually give you a yield keyword and handle unwinding the stack for you and all that sort of stuff. Once that's built in natively, will this be relevant anymore? Is that kind of your question? Yeah, because I write a lot of Haskell code.
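Going back to the thenable check for a moment, it can be sketched like this. This is a simplified version of the Promises/A+ resolution procedure (the real one also handles rejection, then methods that throw, and resolvers being called more than once); anything with a callable then method is adopted, not just instances of your own promise class.

```javascript
// Simplified thenable assimilation: adopt any object or function with
// a callable `then`, otherwise resolve with the value directly.
function resolveWith(resolve, x) {
  const isObjectLike =
    x !== null && (typeof x === "object" || typeof x === "function");
  if (isObjectLike && typeof x.then === "function") {
    // A "thenable" from any implementation: defer to its then method,
    // recursively unwrapping whatever it produces.
    x.then(function (v) { resolveWith(resolve, v); });
  } else {
    resolve(x); // a plain value resolves directly
  }
}
```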
And I kind of just wanted to dump this onto an asynchronous monad and write it imperatively, and that's sort of what the Grace equivalent there was: I had synchronous code that's secretly running asynchronously in the background. The problem at the moment is that JavaScript is just not, I kind of want to say powerful, but really I mean expressive enough to make this easy. Other languages definitely make it easier, and JavaScript is moving in the right direction, to be more expressive and do more interesting things, like yielding with generators. Cool, awesome, thank you.
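The generator style being discussed can be sketched with a tiny runner, the same idea later standardised as async/await. The names here are hypothetical; each yielded thenable means "wait here", so asynchronous code reads like straight-line imperative code.

```javascript
// Drive a generator: every yielded thenable pauses the generator,
// which is resumed with the settled value when the thenable completes.
function run(genFn) {
  const gen = genFn();
  function step(value) {
    const next = gen.next(value);
    if (next.done) { return; }
    next.value.then(step); // resume when the yielded task settles
  }
  step(undefined);
}
```

With promises, `run` lets `yield somePromise` behave like blocking on the result without ever blocking the thread.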