Five days? Six days? No. Five, right? Yeah. We miss each other on Monday, I'm sure. Okay, so let's get started. We've got a lot to cover before the next project comes out. So, somebody refresh my memory. What is a context-free grammar? What parts does a context-free grammar have? Starting non-terminal, yes. So, we usually use S. So, we have S. What else do we have? Terminals? What are terminals? Yeah, symbols that can't be broken down anymore. And rules for how to break down non-terminals. So, what's the next thing? Non-terminals. There you go. And the starting non-terminal. Remember, a starting non-terminal, rather than just non-terminals, right? So, we need our starting non-terminal, we need a set of terminals, a set of non-terminals, and then what? Is that it? What else do we need? We need some kind of rules, right? Because without those, we just have a set of symbols, right? So, we need some rules. Like S goes to little a, capital A, little b. And A can go to a, A, b, or epsilon. That's it, right? This is a context-free grammar, right? So, what would be the terminals in this grammar? Small a, small b, epsilon. What about the non-terminals? Capital S and capital A, right? And the starting non-terminal? S. Awesome. Okay, so let's say we want to try to derive some string that is represented by this grammar. So, how do we do that? Start with an S. I like that. Starting with the starting non-terminal. And then what? Apply one of the rules that S produces. So, how many choices do we have? One. So, the choice is very easy, right? Small a, capital A, small b. Am I done? No, there's still a non-terminal. Which rule do you want to apply? So, we replace A with a, A, b. So now we have a, a, A, b, b. Are we done? If we do it one more time, let's choose this rule so we can't do this anymore. So, we're going to replace A with what? Epsilon. So, we'll have a, a, epsilon, b, b. And what's epsilon concatenated with anything? Right? So, a concatenated with epsilon is just a, and epsilon concatenated with b is just b.
So, the result of this will be a, a, b, b. Do you agree that this is a string that this grammar can produce? So, if I ask you something like, is this string in this grammar, can this grammar produce this string, how do you prove that to me? Deriving the string using the rules. Yeah, deriving the string just like this. So, you show a series of steps starting with S, and you derive by following the rules, and as long as you end with the final string aabb, that means yes, aabb is produced by this grammar, represented by this grammar. What about a string that starts with a b? Is that string in this grammar? What? Why would it start with a b? Right, because of the start. So, S will always produce strings that start with an a, and that string starts with a b, so it can't possibly be produced by this grammar. Does that make sense? So, S is where we start, so every string that this grammar derives is going to start from S. So, what do we know about all strings that S derives? They have to start with a lowercase a. So, can that string come from S? Not this S, correct. We're only talking about this grammar right here, right now. We'll change it up in a second. No, it can't. It can't possibly. All strings that come from S will start with a lowercase a. Okay, cool. So, we're just talking about derivations, and we're talking about different types of derivations. So, what was the one type that we talked about on Friday right before the end? Leftmost derivation. So, what does that mean? So, is this a leftmost derivation? Yes, no? Yes. Yes, why? The leftmost non-terminal, though. Yes, I mix them up myself. The leftmost non-terminal is always the one that you choose to replace. Exactly. So, in this one, we don't have any other choice. If we have, let's say... Yes, back to our expression grammar.
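The derivation we just walked through can be sketched in a few lines of code. This is a minimal illustration, not part of the lecture materials: the `derive` helper name is made up, and each step just replaces the leftmost non-terminal in the working string with a chosen right-hand side (epsilon is the empty string).

```python
# Sketch: leftmost derivation of "aabb" for the grammar
#   S -> aAb
#   A -> aAb | epsilon
# Non-terminals are uppercase; epsilon is "".

def derive(steps):
    """Apply each (non-terminal, replacement) step to the leftmost occurrence."""
    sentential = "S"
    trace = [sentential]
    for lhs, rhs in steps:
        i = sentential.index(lhs)                      # leftmost non-terminal
        sentential = sentential[:i] + rhs + sentential[i + 1:]
        trace.append(sentential)
    return trace

# S => aAb => aaAbb => aabb
print(derive([("S", "aAb"), ("A", "aAb"), ("A", "")]))
# ['S', 'aAb', 'aaAbb', 'aabb']
```

The trace is exactly the proof from the lecture: a series of steps starting with S that ends at the string, which shows the string is in the grammar.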
So, expressions. I'm going to say S goes to expression, where an expression is expression plus expression, or expression is expression times expression, or expression is a num. Cool. Okay, so, what are the terminals in this grammar? Num? That's it? Plus and multiply too, right? So, those are all symbols that can't be broken down here. And the non-terminals? E and S, right? Exactly. And how do we tell? We look at the left-hand sides of the rules. We see there's a plus symbol here, and it never appears on the left-hand side of a rule, so it must be a terminal. Same for the multiplication symbol. And is num ever on the left-hand side? No, so it must be a terminal too. Cool. Okay. So, we know that for a leftmost derivation, we always derive the leftmost non-terminal. So, now let's do a rightmost derivation. So, we start with S. We only have one choice, so we have to go S produces E, and then we have a choice. So, let's do the multiplication one: E times E. Now, we're doing a rightmost derivation, so which E do we derive next? The one on the right? Yes, I know, it seems very simple: this one, the one on the right. So, we'll choose num for this one. So, we have E times num. Then we'll replace this E with E plus E. So, we have E plus E times num. We'll do this one more time and replace this E with a num. So, we have E plus num times num. And finally, we'll replace this E with a num. So, finally we have num plus num times num. Rightmost versus leftmost derivations. Cool. Okay. Always expand the rightmost non-terminal. That's the example we just went over. Great. Okay. And leftmost derivation, we're going to get to that in a second. Okay. Yes. Question: how do you figure out which one is the default? Or do we always state which one is the default? So, it depends on what you want to do, is the answer to that. If you want to show, like we talked about, that this string exists in this grammar, that this grammar can produce this string, then it doesn't matter; any derivation works.
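The rightmost derivation on the board can also be tracked mechanically. This is a small sketch with assumed names (`rightmost_derivation` is not from the lecture); at every step it rewrites the rightmost remaining non-terminal, using `n` to stand in for the num token.

```python
# Sketch: rightmost derivation for the grammar
#   S -> E
#   E -> E+E | E*E | num       ("n" stands for num)

def rightmost_derivation(steps):
    """Apply each replacement to the rightmost non-terminal (E, or S to start)."""
    form = "S"
    trace = [form]
    for rhs in steps:
        for sym in ("E", "S"):          # prefer E; S only appears at the start
            i = form.rfind(sym)         # rightmost occurrence
            if i != -1:
                form = form[:i] + rhs + form[i + 1:]
                break
        trace.append(form)
    return trace

print(rightmost_derivation(["E", "E*E", "n", "E+E", "n", "n"]))
# ['S', 'E', 'E*E', 'E*n', 'E+E*n', 'E+n*n', 'n+n*n']
```

Note that the trace matches the lecture step for step: multiplication is chosen first, and the rightmost E is always the one expanded.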
As long as you can derive it, leftmost, rightmost, doesn't matter. But we'll talk about in a second why leftmost and rightmost derivations are actually useful, so why I'm talking about them rather than just not talking about them. So, that brings us to parse trees. So, we kind of talked about parse trees a little bit, but what did we talk about them as? Anybody recall? When did we talk about parse trees? Have we talked about parse trees? We definitely did. Don't shake your heads. I have a video to prove it. Okay. So, a parse tree is another way to think about a derivation. So, we'll see that a derivation is actually equivalent to a parse tree. It may sound familiar because this is the point of our parser. So, our syntax analysis, our parser, is going to take in tokens and build a parse tree. And that's going to be used by the next part of our compiler. So, yes. Okay. I don't actually expect you to remember every word on every arrow and box in the diagram I talked about the first day, so don't worry about it. But I just want you to refresh yourself on where we are in this whole process. Okay. So, what is a parse tree? So, the idea is, for this derivation, at every step here, we are replacing, let's say, expression with expression times expression. And here we're replacing this expression with a num. And then we replace this expression with expression plus expression. Here we replace this with a two and this with a one. So, the idea is, can we represent this as a tree? So, what are trees? This we definitely talked about on Friday. We had a good discussion of branches and leaves and stuff. So, we have a root of the tree, right? So, we have the top node. Every node in the tree has zero or more children. And what's a node with no children called? A leaf. A node with no children is called a leaf. So, what do we do? We want to turn this into a parse tree. What's the root going to be? S. Makes sense.
Now, what rule do we apply here? What should be the child of S based on this derivation? E. And then what do we use to produce this E here? E star E. Yeah, exactly. So, is this star going to have any children? No. Why not? That's a terminal, right? So, here you can think of every parent-child relationship as a production rule, right? And so, then what do we have here, which E do we replace? The rightmost one, with num. So we're going to replace this E with num. And then we replace this E with E plus E. And then we're going to skip forward a little bit: we replace all of these with nums here. So, does everybody see how we get from this derivation to this parse tree? So, what does this parse tree kind of look like? So, let's say this num is 1, this num is 2, this num is 3. What would this parse tree represent? 1 plus 2 times 3, right? Because we have the plus here. So, we have this E expression added to this expression. So, this would be 1 plus 2 times 3, right? So, this matches that derivation over there. So, let's think. Is this the only parse tree we could draw that outputs this 1 plus 2 times 3? What would be another one? Yeah. So, switch them: have the plus occur first. Okay, so let's try to do this. So, we have E. Then we have the plus, right? So, we have E plus E. Then here we have num, which is 1. And here, what's this going to produce? E star E, yeah. And then this will be a num, which is 2, and this will be a num, which is 3. So, now what does this represent, calculation-wise? 1 plus 2 times 3. So, is this a problem? It's different. Yes. Is that a good thing or a bad thing? It depends. Do we want it to be different or the same? Okay. So, let's just see this. We can do this exact same thing, right? We can build parse trees from derivations. And we should be able to go backwards, right?
So, we should be able to say, given this parse tree, what would be the derivation of this parse tree? So, we start with S, then E. And then what? E plus E. And then we have this E as a num. This is not going to come out the same, is it? Num plus E times E, and so on and so forth, right? So, is this the same derivation or a different derivation? It's different, right? The second step here is E replaced with E times E, and here the second step is E replaced with E plus E. Right? So, they differ specifically at these steps here. But we still get the same string at the end. Okay. So, parse trees. Okay. And so, I'm trying to get to where we're going in two seconds. So, we can see that this parse tree kind of broke down. Question: it doesn't have brackets? Brackets would usually be terminals. So, like he says... Oh, I should clarify: this is the string that these trees generate, right? So, we generate this string by concatenating all the leaves of the tree, right? But shouldn't the operator precedence of plus and multiplication then come into play? So, that is the big question, right? Because this tree on the left, if we were to interpret it, means one plus two, and then the result of that times three. Whereas this tree on the right means one plus the result of two times three. But the output string they both have is this one plus two times three. Right? So, there are a couple of problems we're going to get to in half a second, and then I'm going to bring it all back home with rightmost and leftmost derivations. Our goal in our parsing step: when we're here, we want to take in a sequence of tokens, and we want to build a parse tree that corresponds to that sequence of tokens. And this parse tree is just like these parse trees, right? And so, that's our goal right now. So, we used regular expressions to define tokens, to turn a sequence of bytes into tokens.
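The two trees for "1 + 2 * 3" make the precedence problem concrete. As a small sketch (the nested-tuple representation here is my own, not the lecture's), each tree can be written as `(operator, left, right)` and interpreted; the same leaf string produces different values depending on which tree the parser builds.

```python
# Sketch: the two parse trees for "1 + 2 * 3" as nested (op, left, right)
# tuples. Leaves are the num values.

def evaluate(node):
    """Interpret a tree bottom-up: leaves are ints, internal nodes apply op."""
    if isinstance(node, int):
        return node
    op, left, right = node
    l, r = evaluate(left), evaluate(right)
    return l + r if op == "+" else l * r

tree_left = ("*", ("+", 1, 2), 3)    # multiplication at the root: (1 + 2) * 3
tree_right = ("+", 1, ("*", 2, 3))   # plus at the root: 1 + (2 * 3)

print(evaluate(tree_left))   # 9
print(evaluate(tree_right))  # 7
```

Both trees concatenate their leaves to the same token string, but one evaluates to 9 and the other to 7, which is exactly why an ambiguous grammar is a problem for a compiler.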
Now, we want to turn a sequence of tokens into a parse tree. And if that parse tree does exist in this grammar, then we know it's a correct parse tree described by this context-free grammar, and so we know this sequence of tokens is correct in our language. And then we'll do further processing of the parse tree to actually interpret it. So, we can build a parse tree, but we need to be able to answer: given a sequence of tokens and a context-free grammar, how do I actually build this tree? And this is parsing. So, this is what we're getting to now: how do I turn this sequence of tokens into a tree? So, here we got it kind of top down, right? We saw we had this derivation, and then we said, okay, we do S, we do this, and then we get 1 plus 2 times 3. But if we have 1 plus 2 times 3, how do we turn that string into one of these parse trees? So, that's the central question that we're going to get to. But we have two problems, which a lot of people have noticed and we've been talking about. I've kind of been trying to dodge the questions about ambiguous grammars. So, what does an ambiguous grammar mean? It's not well defined. Yes, so for instance, there's 1 plus 2 times 3; what does that actually mean? Which one should happen first, the multiplication or the addition? The second problem is efficient parsing. So, why would we want our parsing to be efficient? So it doesn't take forever, right? Has anybody dealt with a very large C or C++ program where compiling takes like 5 to 10 minutes? Or anybody taking operating systems yet, or about to? You'll compile kernels and that kind of thing. It takes a lot of time. Imagine if it took 10 times as long or something like that. You'd have to wait all that time just for compilation to happen. Okay. So, this is the key question that's going to drive us for ambiguous grammars. How do we parse, given 1 plus 2 times 3?
We already know, we've shown, there are two different parse trees. So, which parse tree should we actually create? We've got operator precedence. So, why do we need that? So that it's not ambiguous, right? So that we all know how to create that tree. Cool. Here we can see there are two derivations that we've actually already done of 1 plus 2 times 3. We saw that in the second step here they were different: the first time we chose multiplication, the second time we chose plus. So we have two different derivations that lead us to the same string, and we have two different parse trees, right? So, we've already gone over this. We did, and showed, that there are two different parse trees. So, this is where it all comes home: leftmost, rightmost, and parse trees. So, a grammar is ambiguous if there exist two different parse trees for the same input. Or, equivalently, there are two different leftmost derivations, or two different rightmost derivations. So, they're actually all the same thing. From any leftmost derivation, you can create a parse tree; from any rightmost derivation, you can create a parse tree. So, if I asked you, if your grammar is ambiguous, how would you show that? For what? It's going to have an infinite number of parse trees, right, depending on how you apply the rules. What was that? For the same string? Yes, that output the same string, right? So, we want to make sure that for the same string, there's only ever one parse tree in the grammar. If it's ever the case that there are two, we have an ambiguous grammar. So, you're like, well, is ambiguity really that bad? Well, you're learning right now, from me speaking English; hopefully it's going into your brains. And it's being processed: you're interpreting the sounds and syllables that I'm saying, putting them into words and sentences.
And you're trying to interpret them: okay, is he really trying to teach us, or should I maybe just be thinking about what's going to be on the midterm or something. So, you're processing and understanding what I'm saying, and we're able to convey these complex ideas, even though, you know, we say ambiguity is bad in grammars for computers. But is English ambiguous? So, let's say I said this sentence: I saw a man on a hill with a telescope. What are all the different ways that this could be interpreted? Somebody give me one. Yeah. So, I'm using a telescope and I see a man on a hill. What else? Yeah. So, is he holding the telescope while you saw him? So, "I saw a man": you're actually sawing a man on a hill, and the guy is holding a telescope. I wasn't going to go right to the really dark one. There are several interpretations; you don't want to think about what that means. What else? Yeah. There's a hill that has a telescope, and a man was there. Yeah. So, I'm looking at the man, and he's on the hill that has the telescope. The hill has a telescope. Oh, yeah. Maybe it's like one of those crazy things where the hole for the telescope is part of the hill. Or he's actually standing on a telescope. I don't think I've heard that one before. That's pretty good. Okay. What else? Yeah. Maybe you broke the telescope in half and used the shards to, like, saw the man. I think I lost count right there, at four or five. Anything else? Right? So, it's a very simple sentence. We all know what each of these words means, right? The problem is, there are multiple ways to interpret this one simple sentence.
And so, imagine if you're a compiler, and somebody writes some code, and the compiler's like, well, this person could mean to store this data in the database, or they could mean to just throw it, I don't know, in the air, or write it out to a file, or whatever; I'll just pick whatever I think is most likely. Okay. It would be terrible to program a language like that. It's hard enough to program without the computer having to try to infer what you mean based on context. Right? So, we don't want that in programming. What we want is to tell a dumb little machine exactly what to do, in exact steps. We don't want it doing any thinking about what it should be doing or what we meant. All right. So, hopefully you agree. Anybody not agree? There are just some cases where it's kind of annoying, right? You, like, misplace one letter in a variable name, and it's very clear you meant one variable and not another one. So, shouldn't it be smart enough to figure that out? I don't know. Maybe someone in this room can develop a super cool, crazy programming language that is intelligent enough to work with you and figure out what you mean and make suggestions, like, oh, you should fix this variable name, did you want me to do this? Okay. But that's not our goal here. Okay. So, we don't want ambiguity in our programming languages, so we don't want ambiguity in our context-free grammars. Okay. So, that's the other issue. So, we talked about two issues: ambiguous grammars and efficient parsing. So, now I'm going to show the different ways that we can approach parsing. So, there are a number of approaches for how you can turn strings, or sequences of tokens, into a parse tree. And we're not going to go into all of them. We're only going to really study one.
So, there's bottom-up parsing, where, essentially, you start with one plus two times three, and you start from the leaves and build the tree up from there. You say, okay, this is a num, so that's a num; there are multiple possibilities there, and you kind of build the tree from the bottom up. And there's top-down parsing, where you say, I know this string must start with S, right? We talked about this. I know it can't start as a terminal; I know it must start with S. So, I'm going to start from there and work my way down and try to figure out: starting from S, is there any tree that I can make that matches this string? Okay. So, in this class, we're going to focus exclusively on top-down parsing. So, let's think about this. So, let's say I have a new string: one plus two times three plus four. Right? So, this is my string. So, we want to try to do this top-down. Let's try to think of an algorithm together. So, where should I start from? S. S, starting at the top. So, we're starting at the top of the tree with S. So, this is kind of a trick that usually works. Has anybody done programming interviews for internships and that kind of thing? Some of you. So, they'll ask you these kinds of weird, big questions, like, oh, how would you do such and such a thing given such and such an input? One of the nice tricks is to always first give the stupid brute-force answer that's guaranteed to work but is really slow. That'll give you time to think more clearly about the problem, and they want to hear the way you're thinking about the problem. So, if we were to do this here, what's our goal? Our input is, let's say, the string one plus two times three plus four. Although really it would be num plus num times num plus num, right? The sequence of tokens.
So, what's our goal? We have the string. A parse tree. We need the parse tree, exactly. So, we need a parse tree that generates this same string. Exactly. Cool. So, what's the brute-force way to do this? So, I start with S; that would be the start of my algorithm. And then what? Apply every rule. Apply every rule, yeah, in every possible way. Will I find the parse tree? Eventually, if I don't get stuck in a weird loop, right? Because I can get unlucky and keep choosing the wrong rules and keep generating the wrong trees. But, as long as I systematically go through all derivations, at some point I will find the parse tree that corresponds to this string. Is that efficient? No. Is it correct? Yeah, it will find it. Right? And so, that's kind of what they say to do, and what I've done in interviews: start with the super simple solution, which is to brute-force all possible parse trees. Maybe they say, great, okay, what's the problem with that? So, you talk about efficiency; you talk about how that's going to take a long time depending on how many rules we have, and all that kind of thing. And then, once they see that you know that, maybe they'll try to lead you toward where they want you to go, or you can think about other approaches. But, you know, it's always good to ask yourself: will the stupid simple approach work here? And it does. I mean, you can do this. You can just go through all these rules, apply all these rules. We know there's only one rule for S, so S has to be replaced with E, no decisions there. And then you just try all possible combinations to create all possible trees. But, we want to be efficient. So, to be efficient, okay, now we're going to use a different grammar. So, let's look at this grammar. S goes to A or B or C. A goes to little a. B goes to little b big B, or little b. C goes to little c big C, or epsilon, that's all. So, let's say we want to parse, and we're in S.
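The brute-force idea can be sketched directly: systematically apply every rule everywhere, and check whether the target string ever shows up. This is my own illustration (the `brute_force_derivable` name and the breadth-first search are assumptions, not from the lecture), using the grammar just written on the board.

```python
from collections import deque

# Sketch: brute-force parsing by enumerating all sentential forms.
# Grammar: S -> A|B|C,  A -> a,  B -> bB|b,  C -> cC|epsilon
RULES = {"S": ["A", "B", "C"], "A": ["a"], "B": ["bB", "b"], "C": ["cC", ""]}

def brute_force_derivable(target, max_len=10):
    """Breadth-first search over sentential forms, applying every rule
    at every position, until the target string appears (or we give up)."""
    seen, queue = {"S"}, deque(["S"])
    while queue:
        form = queue.popleft()
        if form == target:
            return True
        for i, sym in enumerate(form):
            for rhs in RULES.get(sym, []):          # terminals have no rules
                nxt = form[:i] + rhs + form[i + 1:]
                if len(nxt) <= max_len and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

print(brute_force_derivable("bbb"))  # True:  S => B => bB => bbB => bbb
print(brute_force_derivable("abc"))  # False: no rule mixes a's, b's, and c's
```

It works, which is the interview point: correct, but hopelessly inefficient once the grammar gets big, since the number of sentential forms explodes. That inefficiency is what motivates deciding each rule from a single lookahead token instead.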
How many different choices do we have? Three, right? We have three different choices: S goes to A, S goes to B, or S goes to C. And the way we're going to represent this and talk about parsers is by using code. So, we're going to write a parse underscore S function, which is going to read a token. So, how does it read a token? getToken. So, we're talking about getToken from the lexer; the lexer is going to give us a token. Then it can test the value of this token. So, let's think about it this way. What do we know from looking at this grammar? What are some things that we know from looking at, let's say, A from this rule? I can't hear you, sorry, there's noise. You can yell, it's fine. Yeah, it's a terminal. So, A is going to produce a terminal, essentially, right? And do we know what that terminal is? A single little a, right? So, then what do we know maybe by looking at... okay, I'm getting ahead of myself. So we know A is just an a. Let's file that away in our brains. What do we know about big B? It's going to be what? One or more b's, right? We can see that every B is going to be composed of all b's. And what about C? All c's, or what? Or nothing, exactly. Okay. And let's go back to efficiency for a minute. So, one thing we want to do is to be efficient. So, we need to call getToken, right? And this is going to read one of the tokens from the input stream. It's going to give us a token, right? And so, our goal here, to be efficient, is we want to ask: can we decide which rule to choose, when we're parsing S, or parsing A, or parsing B, or parsing C, just by looking at one token? So, let's think about this for a second. S goes to A or B or C, right?
And so, if I call getToken when I'm trying to parse S, and I see it's an a, which one of these rules must have happened? S goes to A. So, why? How do we know that? Yeah, so another way to think about it is we have the choices, right? From S, we have choices: A, B, or C. So, what do I already know about this big A? I'm just talking about first characters. If I start with A and apply every possible rule, what's the first character that's going to be there? A little a, right? And we know that because there's only one rule for A. But let's think just about B for a second. Think about all possible strings that B can produce, by every single combination of its rules. What's going to be the first character? Are you sure? Could it ever be more than one option, as a first token, a first character? So what do we know just by looking at this again? One or more b's, exactly. So we know it's one or more b's, so the first character is always going to be a little b. Then what about C? It will either be what? Nothing, or? Or c's. So, if I'm calling getToken in parse S and I'm trying to decide between A or B or C, can I decide just by looking at that one token? In this case, yes. Now, we're only concerned about this grammar right here; we'll talk about other cases, and we'll see when exactly this can or cannot apply. But for right now: if we see an a, we know it has to be S goes to A. If we see a b, we know it has to be S goes to B. And what about C? If we get a little c, what do we know? It has to be C, right? But then how do we deal with this epsilon? So, we actually didn't talk about it, but what happens if we call getToken and the input string is, let's say, no bytes at all, zero bytes? It has to be C, exactly. But what does getToken return? Yeah, so it should return some end of file, right?
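This "what can the first character be" reasoning is exactly a FIRST-set computation. As a hedged sketch (the `first` helper is mine, and this simplified version assumes no left recursion and doesn't chain through nullable leading symbols, which is enough for this grammar), it looks like:

```python
# Sketch: FIRST sets for the grammar on the board.
#   S -> A|B|C,  A -> a,  B -> bB|b,  C -> cC|epsilon
# "" in a FIRST set marks that the non-terminal can derive epsilon.
RULES = {"S": ["A", "B", "C"], "A": ["a"], "B": ["bB", "b"], "C": ["cC", ""]}

def first(sym, rules):
    """Tokens that can begin a string derived from sym."""
    if sym not in rules:              # a terminal begins with itself
        return {sym}
    result = set()
    for rhs in rules[sym]:
        if rhs == "":
            result.add("")            # this rule derives epsilon
        else:
            result |= first(rhs[0], rules)
    return result

for nt in "SABC":
    print(nt, sorted(first(nt, RULES)))
# S ['', 'a', 'b', 'c']   A ['a']   B ['b']   C ['', 'c']
```

Because FIRST(A), FIRST(B), and FIRST(C) don't overlap, one token of lookahead is enough to pick the right rule for S, which is the whole efficiency win over brute force.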
Something different that tells the parser: hey, this is not a token, we've reached the end of the input string, right? We've reached the end of file. There's nothing more we could possibly parse. So, we're going to represent that with a dollar sign. This is actually the same as in regular expressions. Not in our regular expressions, not to confuse everyone, but in, say, JavaScript and other languages, a dollar sign means end of string when we talk about regular expressions. So that's why we're using the same symbol here: getToken is going to return a dollar sign. So, if we're in S and we call getToken and we see a dollar sign, which one of these rules do we know must apply? C, because C can go to nothing, to epsilon. All right, so I want to show you kind of what this looks like. So, the idea is, we're also going to talk about how we represent this parse tree, right? Because I said what we're trying to do is build parse trees. And the way we do that is with function calls. So, we already kind of talked about the logic here. So, we say T is some token; we call getToken. And then what do we say? If T is an a, then what? It's got to be big A, exactly. So then we would want to actually call parse A, right? So, we can say there's some parse A function; we can even write it and look at it in a minute. Actually, I'm going to move this down a little bit. parse underscore A, right? We know we need to call that. And so, if it's not an a, b, c, or the end of file, then what should it be? What was that? Error, yeah. It should be a syntax error, right? Because it's not possible: from looking at this grammar, we know that a string from S must start with either an a, a b, a c, or it has to be the end of file.
If it's anything else, if it starts with a d, we just know it's not possible to generate that string using this grammar. Yeah. Question: the thing that I'm confused about is, could you scroll down a little bit? Yes. The comparison. Why does it compare T to little a, even though the production would be capital A? From getToken, we only get out tokens, which in our grammar are terminals. Okay. And so that's the only thing we can compare to. So, the input string would be something like a, or it would be maybe b b, or it could be something like c. Yeah, exactly. So each of these cases would return something different when you call getToken, depending on what the input actually is. And so the question now is, let's put dot, dot, dot here and think about logic similar to how we developed parse S. So, now we're talking about parse A. So, for A, how many choices do we have? One choice. But how do we know if the input string was generated by this rule? So, now we're focusing just on A. What do we know about the strings that A can generate? Just a single a, right? What if we try to call parse A when it's not just a single a? What if it's like a d or a c? It should be an error, right? Because what we want to produce when we're parsing is, A, a syntax tree; but, B, if we detect that the syntax is not valid, we should throw an error and say, hey, there's a syntax error: I was expecting an a, but instead I got something else, right? So, I actually want to read the token, because I want to check: is it an a? I want to check if it's an a. And what if it's not an a? Then what should I do? Error. Yeah, so we'll assume there's a syntax error function, which will throw an error during parsing, saying this is not valid. So now, if we think about it here: if parse A returns, then what does that mean?
So parse A has validated that, yes, this part of the input string definitely came from parse A. So that's great. And now I already know S goes to A, so I'm done, and that means if A said everything's good, then everything's good, right? But if A found a problem, then there would be a syntax error. So I have a problem. So let's step through this. Say the input string is just a single lowercase A. So I call getToken, what's that going to return? A, so T is going to be A. I say if T is equal to A, is that true? Yes. So I'm going here, I call parse A, I then call getToken. What is getToken going to return? End of file. So is end of file equal to A? No, so it's going to tell me there's a syntax error. Why did it do that? I didn't put my token back. Why don't I need to put my token back so I can get it? Yes, but why don't we get it? Where does that come from? Yes, what if we parse? What was the rule starting from parse S? What did we choose? Yes, we know that this is the rule S goes to A. But does S goes to A? Does that produce any terminals? Where did that A actually come from? Capital A, right? Lowercase A came from capital A. And so because there's no terminal here in this production rule, you want to call it in and remember it from the lecture? So put the token back? Unget token. So you can figure this like peeking ahead, right? So the goal is by peeking ahead just one token, can we decide which one of these rules we want to follow? And then from there, if we don't actually produce that terminal, we put it back. So I think we're going to end here. We'll go into more detail about how we're going to deal with this B and this C. And then we're going to talk about algorithmically how we calculate everything.