Okay, so they show up in the coroutine as the return value of yield. The coroutine calls yield, which returns back to the resumer, whoever started the coroutine, and when control comes back to the coroutine, the return value of yield is indeed the value that was passed to resume. Okay, so these are return values of yields. Good. Now how about the return value from resume? What will show up there? Okay, exactly. Now yield, what are the arguments to yield? It should all be clear now. Whatever we pass into yield shows up as the return value of resume, and vice versa, right? So we have the following picture. For resume, the first argument is the coroutine, but the value we pass shows up as the return value of yield. And whatever we pass to yield shows up as the return value of resume. So the two constructs are passing values to each other. Now, which of these constructs actually behaves like a procedure, like a function call, in the sense that when you call it, it returns to where it was called from? All of them, some of them, none of them? So what is a function call? It has the property that you call it, some execution happens, but you can trust that eventually the execution comes back, at least in a program that terminates without an exception. So let's go through them one by one. How about coroutine? Does it behave like a procedure? You call it, does it return? Yes, this is a plain call. It doesn't actually do any transfers of control between coroutines. So coroutine is indeed a procedure call. How about resume? Does that behave like a procedure call? Yes or no? Indeed, from the caller, resume looks like a procedure call. On the other side it looks funny, because you may return not using a return statement but using a yield. But from the point of view of resume, it's beautiful; you can think of it as a procedure call. How about yield? Does that behave like a procedure call? Yes or no? Now it's not so clear.
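The value flow between resume and yield just described can be sketched with Python generators, which are a close approximation of these asymmetric coroutines (here gen.send plays the role of resume; the function name and values are made up for illustration):

```python
def worker():
    # The argument of yield travels out to the resumer as resume's
    # return value; the value the resumer passes back in becomes
    # yield's return value.
    got = yield "first"          # suspend; "first" goes to the resumer
    got2 = yield "saw " + got    # got is what the resumer sent in
    yield "saw " + got2

w = worker()
r1 = next(w)        # start the coroutine: runs to the first yield
r2 = w.send("A")    # resume with "A": yield returns "A" inside worker
r3 = w.send("B")    # resume with "B"
print(r1, r2, r3)   # first saw A saw B
```

So each value crosses the boundary exactly once: yield's argument becomes resume's result, and resume's argument becomes yield's result.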
So who thinks it's both? In what sense is it like a procedure call? Okay. So in what sense is yield like a procedure call? The caller is whoever invokes the yield, right? The way we want to think about it is: to whoever invokes yield, does yield appear like a procedure call? Does it behave like that? So in what ways does it behave like a procedure call? Yield goes back to whoever originally resumed that coroutine. Are we guaranteed to come back to the yield? We are not. If we do return back to the yield, it mostly looks like a procedure call: you yielded, you wait there, and when control returns you continue. That looks like a procedure call, right? Except: are you guaranteed to return? Clearly, if the program crashes you are not, but there are also situations in a normal program execution where you do not come back to that yield. Can you offer one example of when that happens? Okay. If you don't call resume, then you don't come back to the yield. When could that happen? Can you think of a scenario when resume would not be called on a coroutine that has yielded? Okay, that's a slightly different topic; we'll get to it on the next slide. But when would resume not be called to get back to the yield? Exactly. So when the yield is sitting inside a loop, acting as an iterator, the loop could decide: I don't need any more elements, I'm done iterating. So you would not resume anymore, and the coroutine would just sit at that yield, essentially unused, until it is garbage collected. So the question is, what happens with the return value of yield? Each time we resume, we go to a yield or, as a special case, to the beginning of the coroutine, and the coroutine doesn't have to use the return value. It can ignore it. Sometimes the coroutine is just happy that control was transferred to it and no value is actually communicated.
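The abandoned-iterator situation just described looks like this in Python generator terms (a sketch; the names are invented):

```python
def naturals():
    n = 0
    while True:
        yield n        # may never be resumed again if the consumer stops
        n += 1

# The consumer decides it has seen enough and simply stops resuming;
# the generator is left suspended at its yield until garbage collected.
first_three = []
for k in naturals():
    if k >= 3:
        break
    first_three.append(k)
print(first_three)    # [0, 1, 2]
```

After the break, nobody holds the loop's iterator anymore, so that suspended yield is never resumed; nothing goes wrong, the coroutine is simply reclaimed.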
But each time we resume, there is a return value, which could be null, and the coroutine can decide to use it or not. But it never happens, which is what I think you're asking, that we pass a return value to yield without actually transferring control. So you're asking about this, I think, and what this says is that when a coroutine yields, you supply an argument to yield, and that becomes the value that is passed to the resumer and becomes the return value of resume. So that's an excellent comment. The comment is that the Prolog interpreter looks very much like a coroutine, in that you go through all the permutations of stamps and you check which of them works. In fact, in PA3 you are going to implement the Prolog interpreter built on coroutines, so you will see how it looks inside. So we know that yield is not quite like a procedure call, for one reason: we don't have to resume back to it, and not only in the case of an error. In a normal execution we may just say, we don't need any more values from the iterator, so we don't resume. There is another reason why yield is not like a function call. This one may be harder. Well, the answer given is that it doesn't handle stack frames the same way. That's true. But as you will see, neither does resume handle stack frames quite the same way, and we still think of resume as a procedure call: you call it, you know it comes back. But there is something else. Let's try. It behaves more like a return statement, exactly, but in what way? So we are getting there. You're saying that instead of starting at the beginning, it returns to wherever the caller, or resumer in this case, left off. We are getting closer, but that's still not quite what I was looking for. So what arguments does a function call have? What arguments does yield have? Think of the call node in the AST in your interpreter. What children does it have? So what's child number one?
The function name, or indeed an expression that could evaluate to a lambda, right? What arguments would yield have, once we give you an AST with a yield? How many arguments would yield have? How many children in the AST? Just the arguments, right? Well, okay. So the part of the answer about what arguments yield has is correct: it only has arguments that supply the value, whereas the call also has the function. It's not quite true that yield is a statement, because remember, yield returns a value; you can say seven plus yield, and the second argument to the plus will be whatever is passed to you at the resume. But here is the difference: the call has an argument that is the function, and yield does not accept an argument that tells it where to go. So when you make a function call, you can say call this, call that. When you invoke a yield, do you have a choice of where you are yielding to? You don't. In that sense it is like a return: it just has to return to wherever the resumer stopped. So yield cannot control where it's going. In that sense it is like a return statement, whereas with a function call, you can say where it's going. So that's another reason why yield is not quite a function call. All right. So here, to make sure you understand, we'll play a little contest, a corner-case contest, and there are a few corner cases we did not quite cover. One of them is, well, what if yield's argument is such-and-such, and what if you do something else? So I want you to talk to your neighbors and find the craziest corner case that we did not specify. Let's resume in a few minutes; you'll yield to me then. To encourage you to think, I'll make one random resume, and then you can propose the craziest corner case. So spend some more time in case you haven't yet discussed it. I'll generate a subjective random number. Okay. So I'll first pick somebody. I did try to count using today's date, which is 1/31, and I ended up somewhere over there.
I'm not guaranteeing I'm correct, but I ended up at you. So can you suggest a corner case? Yes, you in the hoodie. Yes. So can somebody suggest a corner case? Okay. Yield a lambda that has a yield inside. So, well, what if the value yielded is just a lambda? Okay. Well, who cares? A lambda is just a value, right? Like any other value. Since lambdas are values, so-called first-class values, you can pass them around just like integers. Now, it really depends on what the other part of the program, the resumer, does with the value. It could say, well, I know by how the program is written that this lambda could be used as the body of a coroutine, and maybe it will use it to start a coroutine. So this is fine. But maybe I misunderstood. So what is the case again? Okay. So this is the same as just calling. So the answer is: what if you yield a lambda that has a yield inside? I can reduce this case to a much simpler case, which is: you call a function that has a yield inside. The lambda part is superfluous there, right? The really tricky thing you're asking is, what if I call a lambda that has a yield? Okay, so is this necessarily bad or not? Can we reduce it to a yet simpler case? That lambda could be called as part of a coroutine, in which case the yield just yields to the resumer, right? But what if that lambda is called directly from main? In fact, what if the yield is in main? So our program is as simple as can be: it has one line, and we say this. A one-line program, right? So is this bad? Should it crash? What should it do? This is clearly a case that we have not discussed, so we need to better define the semantics. Well, I don't know what we have decided, but we could feed it to you as a test case in PA2. What do you think it should do? Should it throw an error? It should return, okay. So one proposal is it should just return. Should it return a value?
I guess main doesn't quite return a value, right? So it could just return. So essentially, under your semantics, we are considering main to be the main coroutine, which seems fine to me, unless you see a problem with that. If the yield one is followed by print two, then the print is out of luck. Essentially, you have a return before the print, and so the print is never reached. That seems to be consistent with how returns and yields behave, right? All right. Or we could make the decision that yield cannot appear at the main level, and then this should throw some sort of an error. But it seems easier to design it in such a way that yields can appear in main and just return. How about some other corner cases? Okay, so, well, I'll try to reproduce it and then we'll reduce it to the simple case. This is good. So a coroutine creates another coroutine, right? That's step number one. Am I correct? So let's reproduce the steps. This is our test case. First, a coroutine creates a coroutine: coroutine a creates a coroutine b. Now it yields b. Okay, now what happens in step three? The main, whoever resumed into a, now has a handle on coroutine b, right? And maybe this is what happens: main now resumes into b, which was created in a. Okay, this is interesting. So we have a main. Main creates a coroutine, resumes into it, that coroutine creates another one, doesn't run it, and returns it using yield. Now main has a handle on this new coroutine b. So we have main here; it creates a using the call to coroutine. It does a resume. Coroutine a creates a coroutine b, does a yield, and the yield returns b to the resume. So this handle to b appears here, and now we'll resume into that thing, and we wait until it yields. I guess we'll have to make do with chalk. How about that, okay?
So main creates a: we make this call to coroutine and we get the pointer to this lambda; this is now a. Now we resume into it, which means the control goes here; this is just a pointer. Now we create another coroutine, b. Now we do a yield. We pass b to yield, it returns here, and now we resume into what? We resume into b, the value that we obtained here. Are you following? We create one coroutine from main; another we create from a. The handle to this second coroutine is returned from a to main as the argument to yield. After all, the handle is nothing but a value. And now main can do whatever it wants with it, including resuming into it. So will this resume to b work? If yes, why; if not, why not? Who thinks it should crash for some reason, or not work? Who thinks it should work? So you have the right idea: this b was created here, but it's not attached to a in any particular way. Essentially, we just created it, so to speak. The handle to it is a value that we can pass around, store in a hash table, whatever. We return it to main, main resumes into it, we go here, and then the control continues. Can main resume to a? Absolutely. It still has a handle on a, so it can resume into it, except when a has terminated, and at that point it should not resume into it. So the rule for resume is: as long as you have a handle to a coroutine and it is active, you can resume it, as if you called it. Okay, so that's a good question. Now we are talking about the states of coroutines, right? You can only resume a coroutine that has not terminated yet, that is, one that can still execute some statements after the resume. If it has terminated already, then, well, you can resume into it, but at runtime we'll catch the error and discover: oh, you resumed into a terminated coroutine, and that's an error.
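The corner case just walked through, a coroutine yielding a handle to a coroutine it created, can be reproduced with Python generators, with generator objects standing in for coroutine handles (a sketch; the names are made up):

```python
def b():
    yield "hello from b"

def a():
    handle = b()      # create coroutine b but do not run it
    yield handle      # hand the unstarted coroutine back to main

coro_a = a()
coro_b = next(coro_a)     # resume a; a yields the handle to b
result = next(coro_b)     # main resumes b directly, and it just works
print(result)             # hello from b
```

Exactly as in the lecture: the handle is nothing but a value, so b's creator being a rather than main makes no difference to who may resume it.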
I will probably not go that far, okay? So this is essentially not a corner case. It shows that a handle to a coroutine is a value which you can pass anywhere, and whoever has it can resume into it. The only question is what state that coroutine is in. If it has terminated already, reached the end, then you can't do it. Okay: what if a coroutine gets a handle to itself? I'll try to be brave and actually draw again. A handle to itself: how could that happen? First of all, exactly: when I create a coroutine and resume into it, meaning I start it, I can pass it a handle to itself. So clearly it is easy to obtain a handle to yourself; that's not a problem, and it should still be legal, right? You can keep any value. What can it do with it? All right, so now we are resuming into a coroutine that is in what state? It's running, okay? So it looks like we are now reaching some understanding: we need to know about the states of a coroutine. So in what states can these coroutines be? Before we go there, since not everybody is following, I want to actually run this simple example, and I want to make sure that everybody can catch up. Let's look at the example this way. It's the same, but I want to first make sure that everybody understands how the values flow. So this is main, and main creates a coroutine g. Where is this coroutine? It's right here, and cg is a handle to it. Could we create multiple coroutines from the same lambda at the same time? Do you see a reason why not? Definitely we can. So now we resume; where does the control go? It goes right here. We create this handle here, just like we created the handle before. Then we just continue. This coroutine doesn't actually run yet. We resume to where? To ck, right? Presumably this value here. So now we go and, oops. Did I make a mistake in how the control goes?
This is the flow of control, what I have shown. Is everything okay? I guess I tried to save space and it was too aggressive. So let's assume there is, yeah, I can erase this, all right? Is it good, or is there a mistake? A and b are just some arguments; perhaps they should be x and y. I just wanted to focus on the transfers and on understanding the states. So if you consider main to be a coroutine, at this point here, after the resume back to main, in what states are our coroutines? Are they terminated? When the execution reaches this point, what are the states of our coroutines? I see, okay, so let's simplify it. I'll add one more print here so that it's clear that the end is here. So main is still running, okay? Main is running. How about the other two coroutines? Are they terminated? Who thinks they are terminated? Who thinks they are not terminated? So why are they not terminated? Because you can still resume into these two coroutines; and in fact the return statement at the end, whether explicit or implicit, is actually a yield. So you can think of return as an implicit yield, essentially. So both of these are still alive. But are they really running? Well, we know they are not terminated. Are they running or are they not running? And now we can come back to the example that you posed. When I resume to myself: clearly I can get the handle, and I can resume to myself, right? Now what should happen? Exactly. Resuming into a coroutine that is running should be an error, because it is running. So we need to distinguish between coroutines that are running and coroutines that are not terminated but not running: suspended, perhaps. So at this point here, these two are suspended, and this one is running. And if we did not have a yield in the middle one, it would actually be terminated, right?
Because the implicit return at the end would return, and that would terminate the coroutine, so another resume would be a problem. Why do they not terminate? Okay, I'll explain it by putting another statement here. Imagine I put here a print seven. You see, now the yield is not really the last statement in the procedure. You can think of the coroutine as having an implicit return at the very end, sort of invisible. So if I have a yield and then nothing, it really is a yield and then a return. If I yield from here and somebody resumes to me, I will execute a no-op and return. So there is a little something invisible between the last yield and the return. So it's an excellent question. The question is: do you need to resume one more time to really terminate the coroutine? And the answer is: if you really wanted to terminate the coroutine, you would have to. But usually in programming you don't really need to terminate the coroutine; after all, usually we don't really exhaust the iterator. But if you wanted to, yes, you would have to. So the question is, if I take out the middle yield: not everything terminates, just this middle coroutine would terminate. Okay, it's an excellent question. If I remove this yield, then after control comes back, the coroutine implicitly yields back here and it's terminated. The question is what happens with this other one. Well, it is suspended, because you could resume into it. But if nobody has a handle on it, nobody can ever resume it. It's as good as dead, right? So garbage collection would collect it, because practically it doesn't exist. Sort of an implicit free would happen, via garbage collection, because nobody has a handle on it. We'll talk about garbage collection later. That's an excellent question.
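The invisible return after the last yield can be observed in Python generators as well (a sketch): resuming past the last yield runs the implicit tail and only then terminates the coroutine.

```python
def g():
    yield 1
    # implicit return here: nothing visible between the last yield
    # and the end, but it takes one more resume to reach it

it = g()
first = next(it)       # runs up to the yield; g is now suspended
try:
    next(it)           # one more resume runs the invisible tail...
    terminated = False
except StopIteration:  # ...and g terminates
    terminated = True
```

So after the first resume the coroutine is suspended, not terminated, exactly as in the discussion above.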
So technically it's not terminated, but since nobody has the value of the handle, you practically don't know it exists, and the garbage collector could indeed remove it. So I'd say it's in none of the three states; it's nonexistent. Okay, so let's now look at the flow of values, which is what you're asking. Rather than the timeline, let's look at value flow. I know people are antsy; we are starting to get into harder stuff soon. So two goes here. Let's assume this is x and y here. So it will flow here. Now it goes here. We do the sum, so the result will be three, of course. Where does this value go? Yes, it is indeed the return value of resume. We are doing nothing with it, so it's lost; we ignore the return value. But you could store it in z, and then you could yield z here, and then the value would actually propagate up to here. So let's go and do real stuff. The states of coroutines: we now know there are three states. Running, suspended, terminated. We looked at that. Okay, I want to now make a little distinction between the coroutines that we are looking at and general coroutines. In our coroutines, we have a yield and we have a resume. And we have discussed at length how resume is like a procedure call, but yield is not. From the side of the resume, it looks like a procedure call; from yield's side, it does not. So this part here, you should understand by now. In the general coroutines that we are not covering, there is only one statement, and it has an argument, so you can specify where you are yielding to. It is sort of like resume. Or in other words, you could say there is no distinction between yield and resume: you give it a handle, and it resumes to that coroutine, which continues at the point where it left off. So you could have a bunch of coroutines in the general, symmetric case, and this one could yield here, this one could yield there; this one could decide whether to yield here or here.
And indeed, they could transfer among each other any way they want. In our case it's a bit different, right? Please. Well, in our case you cannot even go to arbitrary places, right? In our case it's a little bit more constrained. You can emulate general coroutines on top of asymmetric ones, but the emulation is not that simple; if you're interested, I can point you to a paper that does it. What I think we want to remember is, in the case of asymmetric coroutines, those that have resume and yield, what does the execution look like? You execute, you resume, you execute here, you can resume again, you execute, and now you do a yield; you return here, you execute, you yield, and then you continue again. Now, of course, here you could resume into another coroutine, which can yield back to you. So the structure that you create is essentially a tree, right? This one is running here. Here, is it running or suspended? Let's see. Okay, so let's talk about that; it's good to understand. Clearly we are running here. Is this one R or S? Running or suspended? Remember the rule we came up with: you can resume into suspended coroutines. You cannot resume into running ones. So imagine I'm in main and I resume here; I'm in a different coroutine now, but if this one wanted to resume back into main, is that legal or illegal? It's illegal, right? When you are waiting in resume, as the one on the left is, sitting in resume waiting for it to come back, you are considered running. And as a result, this resume back into main is illegal, much as you might want it. Right, it would also need to have a handle to main. So pretend it's not main but some other coroutine: let's pretend that somebody resumed to us, and this is not main but some other coroutine. You cannot get a handle to main, but you can get a handle to any other coroutine. So technically speaking, let's assume this is a different coroutine, okay?
So this one is actually running. And if you look at it this way, we are really building a stack of coroutines. You resume, it runs; if it yields, you sort of pop back. But if this one continues further, you grow a stack of coroutines, right? This is coroutine C1, this is C2, and this is C3. We have a stack of calling contexts: C1 is waiting on C2 to yield back so that its resume can continue, and C2 is waiting on C3. With general coroutines, it need not be like that, right? Because C3 could indeed resume into C1; in general coroutines, C3 could decide that C1 will continue. Here, you need to yield back; you return. You wanted to ask a question? So the question is how coroutines behave from the scoping point of view. When you call a new function, the function can create its local variable x; where is that visible? The answer is that from the scoping point of view, coroutines are just like functions. They are exactly like functions: there is the global state, and they can access it. So luckily there are no special exceptions there. But you had a question, okay? So the question is, why is this resume illegal? In other words, you're asking: we want to go from this coroutine back here; why is that illegal? Because we have made the decision that we have these asymmetric coroutines. I'm resuming to you; if you want me to continue, you yield. So we are already providing a way for this coroutine to suspend and return to you, and that's the yield. We are not also allowing you to use resume to continue. Or to put it differently: where is this coroutine waiting? It's waiting in resume. How do we continue from resume? The coroutine on the other side yields to us. We do not allow continuing from a resume by somebody resuming into us. So when I call resume, I always start on the other side, at the point where a yield is waiting, and go to the end or to the next yield. Can you speak louder? Are you asking how I get back from here to here? It would be a yield.
So here we have a yield, which returns back here. So when you are waiting in resume, you are running; when you are waiting in yield, you are suspended. For example, here, this is a suspended state. The question is, does each stack need to be independent? Which stack? Hold that thought. We are now going to move to how to actually implement this, and it's all about these stacks. So start paying attention, because this is where the trickiest things start. But come back to this and look at it again. And remember: resume looks like a procedure call that ends with a yield. So resume is like a call, yield is like a return. And so we can go left to right by making resumes, resume, resume, and return with yields, and we form a stack of contexts so that when we yield, we know where to return. So it's not a tree, it's a stack. This thing here is important: it tells us where to return on yield. So if C3 yields, it will go back to C2, and C2 will continue from its point of resume, which is its current point of execution, where it was waiting, as if it had made a procedure call. Okay. So now we are ready to provide a top-level outline of the interpreter. Each coroutine will have its own interpreter. Essentially, for this coroutine here, when we start it, meaning when we resume into it for the first time, we create a new instance of the interpreter for it. The same for this one and the same for that one. These interpreter instances will share their environments, so lexical scoping works just like with functions: they can share the same variables, they can pass closures around, everything as before. But each instance has its own call stack. So if inside a procedure, sorry, a coroutine, you do recursion, plain recursive calls, those need to be stored on a call stack somewhere. So I guess this is what we were asking.
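The rule that a coroutine sitting in resume counts as running, and therefore cannot itself be resumed, shows up in Python generators too. A small sketch (the names are invented; gen.send plays the role of resume):

```python
def inner(outer_handle):
    # outer is currently waiting in its resume of us, so it counts
    # as running; resuming it again is caught as a runtime error
    try:
        outer_handle.send(None)
        msg = "resumed outer"
    except ValueError as e:
        msg = str(e)
    yield msg

def outer():
    me = yield               # the first resume hands us our own handle
    child = inner(me)
    yield next(child)        # resume the child, which tries to resume us

o = outer()
next(o)                      # start outer; it suspends at the first yield
result = o.send(o)           # give outer its own handle
print(result)                # generator already executing
```

The only legal way for inner to get control back to outer is to yield, which is exactly the asymmetric discipline described above.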
Does each coroutine have its own stack? Yes, because recursion, in fact just plain procedure calls, even recursive ones, can happen inside each coroutine. So on a resume, what do we do? We call the interpreter of that coroutine, whose handle we have, and that coroutine will restart at its yield. Then it returns back to the resumer at the next yield. So at the resume, we come here, it restarts at this yield, and it runs to the next yield, at which point it comes back to the resumer. We go from yield to yield, or from the start to a yield, or from a yield to the very end of the coroutine. So this is simple and elegant, and it almost works. Think about why you cannot take your PA1 interpreter, which is relatively simple, and just add coroutines to it. Ask me for clarification about what we mean by this high-level outline. The key idea is that each time I create a coroutine and get the handle, I create a new instance of the interpreter. At a resume, I call it; when that interpreter runs into a yield, it returns. When I run into another resume, I have the handle, the handle points to the interpreter internally, I resume to it again, and the thing continues. So think about why that will not work. I have an example to really illustrate it, but I would like you to think without seeing the example. Where do things break down? Okay? Exactly, right. What you said is that when you return, meaning when you return from the yield, you come back to the first interpreter, and yes, you can return. But what is the state that you need in order to continue? Exactly. So let's illustrate this. But first, let's look at your PA1 interpreter. It essentially looks like this, right? It receives an AST, an abstract syntax tree. It's a recursive interpreter. And I just have the integer instruction and a call. What happens here is: in the case of the leaf, we just return the value. In the case of a call, we look up the function associated with the name.
I'm assuming that calls always look like this: ID, expression, expression. So this is an ID rather than a general expression, and I'm assuming there are always two argument expressions. We evaluate one expression, we evaluate the other. That's it, right? This should be familiar. The call itself does whatever it needs to do, but it calls eval recursively. Is that how you have it? Okay. So no magic here. Now let's see how we would extend it. What would we do at a resume? Well, we presumably evaluate the argument, which is the coroutine handle. This will give us the lambda, the code that we want to jump to, and we run the second instance of the interpreter. So far so good. Now at a yield, I evaluate the argument, which should be easy, and I just return it to the resumer. Do you see a problem already? I think I see where you are going, so let me paraphrase. You're saying eval is a recursive function, so when I execute that return at the yield, the call stack could be full of recursive invocations of eval. So when I return here at the yield, I'm not going to abandon the entire interpreter and return back to the resumer's interpreter. I'm just making one jump here. So as implemented, this return is not yielding to the resumer; it is just returning to the enclosing expression in the same coroutine. If I had something like f of yield 7, this yield would simply return, and the next thing that happens is the function call, inside the coroutine, without any transfer. So this is clearly wrong, because it does not yield to the resumer; it only returns to whoever called it in the AST. Okay. So we could engineer it such that it does abandon the entire call stack and returns to the resumer, but then we are losing our state, right? The entire stack is full of information that we need to continue. So what is on the stack? Take a little piece of paper and assume that the execution is here, and let's assume that this is the execution stack, the call stack in other words, for the f coroutine.
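The PA1-style interpreter just sketched, together with the naive yield that merely returns, might look roughly like this (the tuple-shaped AST nodes and the two-argument call convention are illustrative assumptions, not the assignment's actual API):

```python
def eval_node(node, funcs):
    kind = node[0]
    if kind == "int":                     # leaf: just return the value
        return node[1]
    if kind == "call":                    # ("call", name, e1, e2)
        _, name, e1, e2 = node
        f = funcs[name]                   # look up the function by name
        v1 = eval_node(e1, funcs)         # recursion rides on the host stack
        v2 = eval_node(e2, funcs)
        return f(v1, v2)
    if kind == "yield":                   # the naive, broken version:
        return eval_node(node[1], funcs)  # pops only ONE eval frame;
                                          # it never reaches the resumer

funcs = {"add": lambda a, b: a + b}
# f(yield 7) becomes: the yield "returns" 7 locally and the enclosing
# call proceeds inside the same coroutine, with no transfer at all.
ast = ("call", "add", ("yield", ("int", 7)), ("int", 1))
result = eval_node(ast, funcs)   # 8, computed without ever suspending
```

This is exactly the bug described above: the return at the yield only climbs one level of the recursive eval, not all the way out to the resumer's interpreter.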
This is for g and this is for k. And let's see what's on the stack: the calls, the entire frames, including the variables that are created. This is important for us to see how we need to change the architecture of the interpreter in PA2 in order to support coroutines. So I want you to really work it out, so that you understand what state is kept on the stack. The two-minute investment will save you maybe hours during the project. So who is done with the exercise? Let's work through it. How many stack frames do we have in cf? One or two? Who thinks it's one? Who thinks it's two? Both could be right; it really depends on how you look at it. There are more people for two. What's in the first stack frame, the bottom one here? Both variables exist in it. So there is f, which points to a lambda. What else? The other globals. Okay, fine. So f plus the globals. What is here? c, g, x and y. And these are bound to one and two, and this is pointing to... let's show it this way: it points to this coroutine. Okay, how many frames are in cg? One? Who thinks it's one? Two? Three? Looks like it's two, so let's start with two. What is at this level? It would be x. This is the version of x in g. Yes, no? Is it this x? No. Okay, good. So it is that x. Then we would presumably have the frame for h, right? So this is the frame for the call to g, and there is the x for... sorry, and the y. Oh! Unbelievable, of course, we are losing everything. Thank you. Yeah, nice, it's all empty. Okay, I'll finish it at home. So this is the frame for h; it contains x and y. This one is g, this one is h; x and y are here. And notice what I've done here: this one contains k, and it has, again, x and y. Okay? So what about this little piece here? What do we do with that? This is a part of the execution that you should already know really well.
Coroutines aside, your interpreter should already be able to handle something like this, meaning a call to m which returns and provides a value; then you need to do another call. Remember, resume is like a call: you call this, it returns, gives you a value; then you call this, it returns, gives you a value; then you do the plus. Where will the value of m(a, b) be stored? On the cg stack somewhere. Okay, so is there going to be an eval for the plus? Or are we completely confusing things here? People are puzzled, so now I'll tell you what we need to consider. What are we drawing here? These are the frames of the executing program, right? We create a new frame at each function call, and the closures can carry it with them as the parent pointer. But when your interpreter runs, it's a recursive interpreter, right? It calls itself, calls itself, calls itself. And it is that stack, the one the interpreter uses for its recursion, that maintains the state we have a hard time preserving. The frames are easy because they live in the heap; you just ask for another frame. Those are easy to preserve. It is the stacks that are the problem. How does the stack of the interpreter look? When you come to g, what does it do? It has eval of some node; then it recursively calls eval of another node, and then eval of another, and maybe eval of another node. So when we are here, what does the stack of the interpreter that runs cg look like? It walks over the AST of the program. Which program? The function g. And it runs it, right? And it recursively calls itself as it evaluates sub-expressions. So what is on that stack? Okay, one of them is the plus, right? Somewhere. When we are evaluating this, somewhere on that call stack of the Python interpreter is an eval whose argument is the plus node of the AST. Its left child was the call to m, which we have already evaluated. The right child is then the resume, okay?
So how does the stack look? There is a plus, there is a call to m with a and b, and here is a resume, okay? What is here? This here is a yield, right? The parent of the plus is a yield. What is the parent of the yield? It's inside a call to h, right? And what is the parent of h? It's a call to g. Is that how your call stack looks? You are in the process of handling a call to g; you then handle a call to h; you recursively call the interpreter; then you want to evaluate the yield; that then recursively calls the evaluation of the plus. So this is the call stack that we have a hard time preserving. If you wanted to yield at this point, you would lose the fact that you are somewhere nested in g and h. And that we need to maintain. So that call stack remembers the history of the calls. It remembers something else also. What does it remember? What if there was another call here? What if I did plus x here? That call stack also remembers the values of the arguments that we want to add, right? In here, in this eval of the plus, we are keeping somewhere the value of the left operand, which is the value of the call m(a, b). So even the temporary values, such as the return value of m(a, b), are on that stack. So here is how we do it. This is a problem with the PowerPoint. So why does it fail? Because all the content is on the stack. The content has actually two parts: the call stack, so that a procedure, when it returns, knows where to go back; and also these intermediate values. If you do a(something) plus something, you evaluate this, and you need to keep the value somewhere. So what changes do we make? The Python call stack keeps all the information that we need, but we cannot keep it there, because we would have no way to return and continue. But keep in mind that we can keep an external stack: just build a data structure that is our stack and put all the information there.
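The external-stack idea can be sketched like this; the names `Frame`, `CoroutineState`, `main_co`, and `cg` are invented for illustration and are not the assignment's actual structure. The point is that once each coroutine's stack is an ordinary heap object, a transfer of control is just a matter of switching which stack is current:

```python
class Frame:
    """One call's bookkeeping: where to continue, plus its variables."""
    def __init__(self, return_addr, local_vars):
        self.return_addr = return_addr   # where to resume in the code
        self.locals = local_vars         # this call's variables

class CoroutineState:
    """Each coroutine owns its own explicit stack -- a plain list."""
    def __init__(self):
        self.stack = []                  # OUR stack: a heap object, not Python's

main_co = CoroutineState()
cg = CoroutineState()                    # stack for the g coroutine
main_co.stack.append(Frame(0, {'x': 1}))
cg.stack.append(Frame(0, {'y': 2}))

# A transfer of control is now just switching which stack is current;
# nothing on the Python call stack needs unwinding.
current = cg
print(current.stack[-1].locals)          # prints {'y': 2}
```

Suspending cg later means nothing more than setting `current` back to `main_co`; cg's whole history of calls survives intact in its list.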
The way we do it in the assignment is that the information about calls, so that we know how to return, we put on an external stack. But the temporary values we put into so-called temporary registers. These are essentially variables invented by the interpreter. So what is an AST? You know what an AST is: it's a tree that you evaluate bottom-up. We'll have to translate our programs into so-called bytecode. Bytecode is nothing else but the following version of the AST. Here is our AST, a program represented as a tree. We rewrite it as instructions of the form: one operand, an operator, another operand, and we store the value into some temporary invented variable. And then we can use that variable on the right-hand side of later instructions. So bytecode is a translation of the AST into a sequence of such instructions. You see, once you have such a sequence of instructions, you don't need the recursive evaluation of the AST, because you are just walking through it straight down, and the temporary values are stored there. The temporary value would be, say, the value of the left subtree of a plus. Once I turn the program into what looks essentially like assembly, with new temporary variables with these funny names, and I interpret it going through it step by step, where do I keep these temporary values? Where is the value of x plus z stored? It's in this register, but where is the register? Is it on the stack? We'll just put it into the environment, just like any other variable. So now the environment stores our temporary values rather than the stack. The point is that we don't need the Python stack at all. Your interpreter now will not be recursive. It will always stay in a loop without going down into a recursive call, so you can always return from it, because you will always be at the topmost level, except maybe for calling some auxiliary functions. For calls and returns, you will build a stack.
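Here is a minimal sketch of such a flat bytecode loop; the instruction format `(dest, op, arg, arg)` and the `%t` register names are invented for illustration, not the assignment's exact format. The interpreter never recurses, and the temporaries live in the same environment dictionary as ordinary variables:

```python
def run(code, env):
    """Flat bytecode loop: no recursion; temporaries live in env."""
    pc = 0
    while pc < len(code):
        dest, op, a, b = code[pc]        # (dest_register, op, operand, operand)
        if op == '+':
            env[dest] = env[a] + env[b]
        elif op == '*':
            env[dest] = env[a] * env[b]
        pc += 1

# x + z * y, already compiled bottom-up into two instructions; the
# registers %t1 and %t2 sit in the environment right next to x, y, z.
env = {'x': 2, 'y': 3, 'z': 4}
run([('%t1', '*', 'z', 'y'),
     ('%t2', '+', 'x', '%t1')], env)
print(env['%t2'])                        # prints 14
```

Because the loop is always at the topmost level, suspending a coroutine is just remembering the pc and the external call stack, then returning from `run`.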
And for temporary values, you will just put them in the environment, because we'll give you a compiler that translates from the tree into this form. You put them into the environment of the function in which you are evaluating. You will still have frames, but frames, remember, are not on the Python stack; you always create them as separate objects, so they are not the problem. The problem is the call stack, where eval calls itself and where the temporary values actually live on the Python call stack. So here is an example of how we actually translate from AST to bytecode. You just walk the AST bottom-up, and rather than evaluating it as you do in PA1, you spit out code. You emit code which, when evaluated, gives you the value of the AST. So you produce not values but code; in a sense you are doing delayed evaluation. So here is a little example. You give this routine a tree. It walks through it bottom-up, as your interpreter does. What does it do? Well, when it's called recursively, it returns two things: the code, which could be just a string of instructions, and the name of the register in which the return value will reside. So when you run into a node in the tree and the node is a plus, here is one subtree, here is another subtree. You call the routine recursively on this subtree, then recursively on that subtree. What do you get? You get two pieces of code, one and another, each a sequence of bytecode instructions, and you get the names of the registers that store the return values, which, when you execute the code, hold the values of those subtrees. Now we get a fresh register with a new name that does not conflict with anything, and we create a new instruction. How?
We construct the instruction with string substitution: the result register, the register that comes from the left subtree, the register from the right subtree. We create this new instruction, then we concatenate the code that comes from the left subtree, the code that comes from the right subtree, and the new instruction that we just created, and we also return the register that we just allocated. At the end of the traversal, you get a long chain of instructions that computes the value of the tree. But it's not an evaluator, it's a compiler, because it generates a sequence of these bytecode instructions with just two operands on the right-hand side, no recursion, so your interpreter stays in its main loop and you can always return from it. So that's it. In tomorrow's recitation, you'll walk again briefly over the bytecode, what it looks like and how to generate it, and the GSI will walk with you over the structure of the interpreter: how the stack grows, and how each coroutine has its own stack, so that you can get a head start on the project.
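The bottom-up compiler just described can be sketched as follows; `fresh_reg`, `compile_expr`, and the `%t` naming are invented for illustration (the compiler handed out with the assignment will differ in details). Each recursive call returns the pair (code, register):

```python
counter = 0
def fresh_reg():
    """A new register name that conflicts with nothing."""
    global counter
    counter += 1
    return f"%t{counter}"

def compile_expr(node):
    """Return (code, register): a list of instruction strings and the
    name of the register that will hold this subtree's value."""
    if isinstance(node, str):            # leaf: a variable needs no code
        return [], node
    op, left, right = node               # interior node, e.g. ('+', l, r)
    lcode, lreg = compile_expr(left)     # recurse on the left subtree
    rcode, rreg = compile_expr(right)    # recurse on the right subtree
    reg = fresh_reg()                    # fresh, non-conflicting name
    instr = f"{reg} = {lreg} {op} {rreg}"   # built by string substitution
    return lcode + rcode + [instr], reg     # left code, right code, new instr

code, reg = compile_expr(('+', 'x', ('*', 'z', 'y')))
print(code)   # ['%t1 = z * y', '%t2 = x + %t1']
print(reg)    # %t2
```

Note that no evaluation happens anywhere: the routine only emits instructions which, when later run by the flat bytecode loop, produce the value of the tree.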