Okay, I'm talking about Cheney on the MTA. It's a compilation strategy that's quite interesting and that's used in dynamic language implementation. As I said, it's a strategy for compilers that translate high-level languages to C. It was invented by Henry Baker quite a long time ago; Andrew Appel, another compiler hacker, had input on this strategy. It's used in a Scheme compiler and utilizes the capabilities of that language, but it can be applied to all languages that need these features. It provides support for the efficient implementation of continuations, a language feature of Scheme, and it supports tail call optimization, which means that a procedure call in a certain position can be optimized.

Okay, why compile to C? You can use it on many platforms. It's often simpler than compiling directly to native code or to bytecode, and it takes advantage of existing compilers and the optimizations they can provide. You can use it for cross compilation. External libraries written in C or C++ are easy to interface with, and it simplifies deployment and bootstrapping of a system: if you have something in C, you can always build it up from scratch.

So, what's a continuation? It represents the current state of the computation; it represents what happens next. You can view it as a snapshot of the stack: the local variables and return addresses that are going to be used after a computation takes place. And you can reify them, which is the term used for this, by re-entering them and instantiating the continuation and the computation it represents. Some languages allow a continuation to be reified multiple times. That means you can return more than once, which is an interesting concept and quite powerful in certain situations. Here I have an example that attempts to show what the continuation represents while code is executing.
So, there's Scheme code now, and this is the definition of a very well-known function; I won't explain it now. At every place in the computation there is an implicit continuation that represents what happens next. So, for the conditional here, there is the continuation that represents what happens when the function returns. And as the expressions nest and the code executes, the results are used by what surrounds them: for the zero test, for example, the continuation is the conditional itself. And here, where you have the variable access and the constant number: at the variable access, the continuation is the zero test, and so on. You have an expression that nests, and every nested expression returns its result to the outer computation, and it's the continuation that takes this result and passes it on to the next one. It's a bit difficult to explain, and I have my problems with it, but it tries to show that there's always a continuation, as an abstract concept, that represents what happens next: what happens with the result of something that executes.

How can you use these continuations? You effectively take a snapshot of the current state, and by re-entering this state you can implement every sort of control flow: exceptions, where you just jump down the stack; coroutines, where different continuations invoke each other; backtracking, which effectively retries an alternative branch from a set of possible options; or things like goto, which are just jumps. If you have explicit access to this feature, it can be used to implement all these control flow forms. Another thing is that you can use them to implement threads, because a thread is just a state, a stack and local data, and if you have some method of creating these, you can do real threads.
These are green threads, as they call them: user-level threads, which have the advantage of being very efficient, because you don't have system calls and context switches in between.

Continuations are difficult to implement. The requirement drags through a whole language implementation to provide support for this. Activation frames, the things that build up on the stack, have indefinite extent, which means that they don't follow a stack-like discipline of allocation and release. They can be allocated, they can be re-entered at some weird point in time, and they're not necessarily released in the opposite order from the one they were allocated in. The next thing is that continuations can be created at a very high frequency, so this has to work fast; if you have a threading implementation based on continuations, this must be quick. The next point is that activation frames must be heap-allocated to have this indefinite lifetime. An alternative is to allocate them on the stack and then move them into the heap once they're captured; if they're not captured, you just leave them on the stack. That's a possible implementation.

So, how do I implement these things? You can actually take a stack snapshot; that's theoretically possible, but it's very heavyweight, it needs a lot of space, and it's just crazy. The next option is makecontext: there are certain APIs, on Linux I think, and I'm not sure about other operating systems, where you can create such an execution context with a specific stack. But it's specific to particular operating systems, you don't have it everywhere, and it's a bit hairy to set up and work with. I think this approach has been used, but I'm not sure whether the actual implementation still uses it. You can use OS threads to have a separate state that you can manipulate, or lightweight threads like fibers; but again, this is specific to a particular operating system.
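The heap-allocated-frames point above can be sketched very roughly in Python (all names hypothetical): each activation gets an explicit heap object linked to its caller's frame, so the chain survives after the calls have returned and doesn't obey stack discipline.

```python
# Sketch: activation frames as heap objects instead of stack slots.
# Each frame records this activation's local data plus a link to the
# frame to "return" into, so frames can outlive the calls that made them.

class Frame:
    def __init__(self, local, parent):
        self.local = local      # this activation's local data
        self.parent = parent    # the caller's frame

def countdown(n, frame):
    here = Frame(local=n, parent=frame)   # heap-allocate the frame
    if n == 0:
        return here             # hand back the whole frame chain
    return countdown(n - 1, here)

chain = countdown(3, None)
# The chain is still intact long after every call has returned:
locals_seen = []
while chain is not None:
    locals_seen.append(chain.local)
    chain = chain.parent
print(locals_seen)  # [0, 1, 2, 3]
```

A captured continuation is essentially a reference into such a chain; because the frames are ordinary heap objects, the garbage collector reclaims them whenever they become unreachable, rather than on procedure return.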
Another alternative is to use exceptions, together with certain bookkeeping code that makes it possible to re-instantiate the call chain, and to use exceptions to jump down again. It's a very hairy technique, but it works and is used in some implementations. The problem is that most of these techniques are platform dependent, highly complex, or just too heavyweight, causing too much slowdown in an implementation.

An alternative approach is to not use a stack at all, or not use it in a stack-like manner. You just make sure that these activation frames, which you handle manually, are created and reclaimed as efficiently as possible. This simplifies things naturally, and you can concentrate on making these two operations as fast as possible. The next step is to translate to continuation-passing style, something that I will explain now.

Continuation-passing style is a source-code or program transformation where every procedure gets an additional argument that represents the continuation. The implicit continuation is made explicit and passed along in every procedure call. This results in all calls being in tail position: the last thing a procedure does. The continuation itself is just a closure, that is, a function and the local data put together into a data structure, and it can then be invoked, which is effectively returning.

Here is an example of the transformation. We have this familiar function here again, and if we perform the transformation, you see that every user-level procedure gets an additional argument, which is the continuation itself, and every procedure call gets passed a continuation argument. This continuation is just a function which takes the result and continues with what follows the call. The call to zero? is followed by the test of whether it's true or not; the test is followed by either returning one or computing the next number.
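The transformed code from the slide isn't reproduced in the transcript; judging from the zero test, the recursion, and the multiplication mentioned here, the familiar function is presumably factorial. Here is what its CPS version looks like, sketched in Python for illustration (note that CPython performs no tail call elimination, so this version still consumes stack; it only shows the shape of the transformation):

```python
def fact_cps(n, k):
    # k is the explicit continuation argument added by the transformation
    if n == 0:
        k(1)  # "returning" 1 means invoking the continuation with it
    else:
        # The new continuation takes the recursive result r, multiplies
        # it by n, and hands the product to the outer continuation k.
        fact_cps(n - 1, lambda r: k(n * r))

out = []
fact_cps(5, out.append)  # every call, including this one, is in tail position
print(out[0])            # 120
```

Nothing is ever returned in the ordinary sense; every result is passed forward into a continuation.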
Every procedure gets this additional parameter, which is the continuation itself. Here we have the recursion, and it performs a tail call: a call in tail position, where it jumps back and calls itself, again with a continuation argument. Note the continuation built here: the conditional itself is not a tail call, but the star, the multiplication, the call that passes the product on to the continuation, is actually a tail call; it's the last thing that happens in this procedure. That is CPS, or continuation-passing style.

The CPS transformation transforms the code in such a way that every procedure call done inside such a procedure is the last thing it does: it's in tail position, so it doesn't have to build up stack space. Tail call optimization means that such a call can be performed in constant space. You can then write recursion which is actually iteration: in languages like Scheme, you don't have iteration constructs; you just use recursion, tail recursion in this case, to perform loops. This slide attempts to show which part of the code is in tail position and which is not. This is in tail position, and again the one is the last thing; this is in tail position too, but it's an actual procedure call, so it's called a tail call.

How do you implement this? GCC actually does a bit of tail call optimization, but it has restrictions: if you take the address of a local variable and pass it around, it won't do the optimization; it's very specific to GCC, and it's hairy to figure out when the cases apply. Another possibility is trampolines, where every procedure returns a pointer to the next procedure, and you have a driver loop that keeps this running. This is pretty inefficient; you're effectively interpreting.
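A minimal trampoline can be sketched in Python like this (hypothetical names): instead of performing a tail call, each step returns a thunk describing the next call, and a driver loop bounces from thunk to thunk, so the whole computation runs in constant stack depth.

```python
class Done:
    """Sentinel wrapping the final value, so the driver knows to stop."""
    def __init__(self, value):
        self.value = value

def fact_step(n, k):
    # Instead of making the tail call, return a thunk describing it.
    if n == 0:
        return lambda: k(1)
    return lambda: fact_step(n - 1, lambda r: lambda: k(n * r))

def trampoline(thunk):
    # The driver loop: keep invoking thunks until a Done appears.
    while not isinstance(thunk, Done):
        thunk = thunk()
    return thunk.value

print(trampoline(lambda: fact_step(5, Done)))  # 120
# Runs in constant stack, far beyond the interpreter's recursion limit:
big = trampoline(lambda: fact_step(10000, Done))
```

Every bounce costs an extra closure allocation and an indirect call, which is exactly the inefficiency mentioned above: the driver loop is effectively interpreting the tail calls.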
The next thing is to just put everything together in one big function and have a switch or a computed goto that implements these tail calls. The problem is that you then need mechanisms to reduce the size of these large C functions, and you need static analysis or something. Oh my God, is it already over? I can't do much then, no? No, that's fine. Shall I continue a bit, or shall I just stop? A few minutes. CPS can be used to do this. I miscalculated there, so a little better. It's fine. Sorry. Thank you very much.