I'm John Sterling, and I'm going to tell you about the meaning explanations for type theory. This presentation of type theory is going to be pretty different from most, so whoever was here for both talks will get two different ways to think about type theory. The first object that you will consider in the creation of a logical theory or a type theory is a judgment. And a judgment is simply a thing which you come to know at a certain point in time. And when you have come to know a judgment, it's because you have constructed in your mind, at that time, the so-called evidence for the judgment. You propound a logical theory by simply stating what the forms of judgment are that you're going to have. And with each form of judgment, you include a definition, a meaning explanation. A meaning explanation for a judgment always sounds like this: to know this judgment is to know blah, blah, blah. You can think of it as specifying the shape of the evidence for the judgment at a point in time. So without further ado, I'll just propound the judgments of type theory, and in the course of doing so, I hope I will have demonstrated the methodology of using meaning explanations to justify a logical theory. The first form of judgment that is considered in type theory is the big-step evaluation judgment. We write that like this, and we read it: M evaluates to M prime. What I'll do is write the form of judgment over here, and I'll write its meaning explanation over here. So the meaning explanation for this judgment looks like this: to know M evaluates to M prime is to know that M prime is the value of M. What it means in particular to be the value of something is an uninterpreted notion that we will learn more about as time progresses. And I mean that literally, in the sense that as time progresses and we introduce new constants into the theory, the meaning of "value of something or other" will be augmented over time.
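The open-endedness of the evaluation judgment can be sketched in a few lines of Python. This is purely my own illustration, not part of the lecture; the names VALUES, EVAL_RULES, and register_value are invented. The point is that the table of values is extended as new constants are introduced over time.

```python
# A toy sketch of the big-step judgment "M evaluates to M'".  The set of
# values and the table of evaluation rules are deliberately open-ended:
# registering a new constant later augments them, mirroring how the
# meaning of "value" unfolds over time.

VALUES = set()       # canonical forms known so far
EVAL_RULES = {}      # head constant -> rule computing the value

def register_value(constant):
    """Augment the open-ended notion of value with a canonical constant."""
    VALUES.add(constant)

def evaluate(m):
    """Return the value m' such that m evaluates to m'."""
    if m in VALUES:
        return m                 # canonical forms evaluate to themselves
    head, *args = m              # non-canonical terms are tagged tuples
    return EVAL_RULES[head](*args)

register_value('top')
assert evaluate('top') == 'top'
```

Defining a new constant later in the hour corresponds to another call to register_value, or to installing a new entry in EVAL_RULES.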
So you can think of these uninterpreted, open-ended ideas as being like a choice sequence which unfolds over time. At a particular time during this hour, I will define certain constants and then augment the meaning of value. Going further with the next judgments, I'm going to write them a bit shorter: I won't write "to know blah, blah, blah is to know", because I'll consider that to be implicit. The first type-theoretic judgment that we will consider is written A type. You can also write this A prop, or A set. I'm going to use the word type, but in different presentations you will see different things; propositions and types are considered the same under this interpretation. So let us give the meaning explanation for A type. To know A type is to know that A evaluates to A prime such that you know a couple of things. First, what counts as a canonical verification of A prime. Canonical verification is another one of these uninterpreted notions: as we introduce constants and types into the theory, the meaning of "canonical verification of something" will be augmented. We will learn more about it. We don't ever expect to learn everything about it, but at any point in time we will know what there is to know about it for the things that we've introduced so far. The second thing that you have to know in order to consider something a type is when two canonical verifications are equal. (I'm going to abbreviate canonical verifications as CVs; I'm going to write that all the time.) So that is what it means to know that A is a type. To recap: A evaluates to some canonical form A prime, such that you know what counts as a canonical verification of A prime, and when two canonical verifications of A prime are equal. You can think of a type as being like a set that has an equivalence relation; that's another way you can think of it. But the evaluation part is crucial, because we might have some sort of a function that computes a type.
So we might have something such that P of 0 equals top and P of 1 equals bottom. And we would like to say that P of 0 is a type. Well, we have to compute P of 0. So we say, well, P of 0 evaluates to top. And then all we have to do is justify that we know what counts as a canonical verification of top, and when two such canonical verifications are equal. I'll demonstrate that for some basic types after I have propounded the meaning explanations for type theory. Let's see here; I'll write the next judgments over here and erase what I need to. The next judgment, what's called a categorical judgment for type theory, is written M in A. And this will be our first experience of something called a presupposition, which I'll write like this: presupposing A type. (PS is how I write presupposing, just for brevity.) So we read this: M is a member of A, presupposing A type. What is this presupposing thing? Presupposing means that we may give the meaning explanation for this form of judgment, what it means to know this judgment, by structural induction on the evidence for the presupposition. So what it means is that we can think: well, what would I have to know in order to know A type? In that case, I must know that A evaluates to some A prime. We've got to know that, because that's part of the meaning explanation for A type. And we also know items 1 and 2: what counts as a canonical verification, and when two canonical verifications are equal. So what that means is that in giving a meaning explanation for M in A, I may appeal to any of the data that's included in the evidence for A type, by structural induction on the evidence. So the meaning explanation is: to know M in A is to know that M evaluates to some M prime such that M prime is a CV, a canonical verification that is, of A prime.
And this is a well-formed thing to say, because the notion of canonical verification of A prime is already known to us at this point in time, via the bit of information that we've extracted from the evidence for the presupposition. So that's the meaning explanation for M as a member of A. You can think about it informally as saying that first you compute, you find out what the value of M is, and then you check whether it's one of the canonical verifications of the type which is the value of A. The next categorical judgment that we will consider (I'm going to have to erase so that you can see the writing) is the equality of members. We write it like this: M equals N in A. And we have multiple presuppositions. First we presuppose that A is a type. Then we presuppose that M is a member of A. And then we presuppose that N is a member of A. So these are the things that we need to know in order to even propose this judgment, and likewise the things that we may take advantage of in the course of explaining it. To give the meaning of M equals N in A, let's extract some stuff from the presuppositions. Based on the meaning explanations for membership, we know that M evaluates to some canonical value M prime and that N evaluates to some canonical value N prime. We also know that A evaluates to some canonical value A prime. And then the meaning explanation is written simply: M prime and N prime are equal canonical verifications of A prime. So these are the first three categorical judgments. I want to emphasize here that at every point in time, every time I mention this idea of a canonical verification of A, everything is well-formed: we know what it means to be a canonical verification of A because we can extract that information from the evidence of the presuppositions to the judgments. The next categorical judgment that we want to consider is the equality of types.
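The evidence these meaning explanations describe can be sketched as a small Python record; this is my own encoding, purely illustrative, with invented names. Knowing A type amounts to having the value A prime plus the two items, and membership then follows the recipe: evaluate, then check against the CVs.

```python
# Sketch (my own encoding) of the evidence for "A type": the value A' of A,
# (1) a predicate for the canonical verifications (CVs) of A', and (2) a
# relation for when two CVs are equal -- the "set with an equivalence
# relation" reading.  Membership M in A then follows the meaning
# explanation: evaluate M and check the result against the CVs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TypeEvidence:
    canonical: str                                 # A', the value of A
    is_cv: Callable[[object], bool]                # (1) CVs of A'
    cv_equal: Callable[[object, object], bool]     # (2) equality of CVs

def evaluate(m):
    return m       # every term in this tiny sketch is already canonical

def member(m, a: TypeEvidence):
    """M in A: M evaluates to some M' that is a CV of the value of A."""
    return a.is_cv(evaluate(m))

# hypothetical example: a two-element type with CVs 'yes' and 'no'
bit = TypeEvidence('bit', lambda m: m in ('yes', 'no'),
                   lambda m, n: m == n and m in ('yes', 'no'))
assert member('yes', bit) and not bit.cv_equal('yes', 'no')
```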
So A and B are equal as types. In order to actually explain that judgment, I have to introduce some further forms of judgment, forms of judgment you will have whether or not you're doing type theory. So I could have defined these first, if I wanted to confuse everyone, without having to explain the judgments I've done so far. These judgments are what are called the higher-order judgments, and I will do my best to explain them. The first higher-order judgment that we will consider is the hypothetical judgment. I write this: J2 assuming J1, where J2 and J1 are both just other judgments; these letters stand for judgments, I mean. And the meaning of J2 assuming J1 is: you know J2 assuming you know J1. It almost seems like I haven't said anything, but remember that we started from "to know J2 assuming J1", and we've distributed the knowledge aspect into the middle of it in order to explain the form of judgment. So we have actually given a real definition. One way you can read this meaning explanation is to say that the evidence for J2 may be formed by structural induction on the evidence for J1; that is, you can pattern match on the evidence, if you consider evidence in a sufficiently abstract sense. This is hypothetical judgment. Hypothetical judgment is not this other thing here: that is what's called a sequent judgment, and depending on your theory, it may or may not be explained using hypothetical judgment. The next form of judgment that we will consider is called general judgment. I write it like this, using Martin-Löf's notation from what he called the logic of judgments, sometime in the mid-to-late 80s. I pronounce this: for arbitrary x, J of x. What it means is that you have to think of J as a judgment that has a free variable, and then to know this judgment, you have to know, for any term that you substitute in for x, the evidence for J at that term.
So it just means that J is evident no matter what you put in for x. Well, I shouldn't say it's true; it's evident no matter what you put in for x. Remember, we say that the proposition, or the type, is true; the judgment is evident. So these are the two higher-order forms of judgment that we need in order to explain the last categorical judgment of type theory, which is the equality of two types. Without further ado, I will give its meaning explanation. A equals B type. The presuppositions are that A is a type and that B is a type. All right, so we've got those presuppositions, and now the meaning is the following: you have to know the following four things. One, for any x, x is a member of B assuming that x is a member of A. Two, for any x, x is a member of A assuming that x is a member of B. So these just say that the two types contain the same elements. Three, for any x and y, x equals y in B assuming that x equals y in A. And four, the one we're obligated to demonstrate in the other direction, for any x and y, x equals y in A assuming that x equals y in B. These last two premises suffice to say that the types have the same equivalence relation as well. So it's not enough for two types to contain the same elements; the same elements must be equal in either type. And you see how we use the general judgment here to express generality over some variable, and the hypothetical judgment to express the semantic consequence between the two judgments. So at this point, we would seem to want to start actually defining some types. We haven't actually defined any types yet; we've just laid the groundwork to do it. It's kind of a peculiarity of a type theory lecture that you only ever really get around to explaining pi. But I'll do better than that: first I'll give you unit, since that's the simplest, and then I'll give you pi.
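Gathering up the higher-order judgments and the meaning of type equality in compact notation; the symbols, including the parenthesized hypothesis, are my own shorthand, not notation from the lecture.

```latex
% J_2\,(J_1): hypothetical judgment; (\forall x)\,J(x): general judgment.
\begin{gather*}
J_2\;(J_1) \;:\equiv\; \text{know } J_2 \text{ assuming you know } J_1
\qquad
(\forall x)\,J(x) \;:\equiv\; \text{for any term } t,\ \text{know } J(t)\\
A = B\ \mathsf{type} \;:\equiv\;
  (\forall x)\; x \in B\;(x \in A), \quad
  (\forall x)\; x \in A\;(x \in B),\\
  (\forall x,y)\; x = y \in B\;(x = y \in A), \quad
  (\forall x,y)\; x = y \in A\;(x = y \in B)
\end{gather*}
```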
And this part is important, because you'll see on the one hand how to introduce a constant into the type theory, and thus augment our knowledge at a particular point in time, and on the other hand how one even defines a type. So we'll just do it. First we're going to introduce some constants into the theory. The first constant that we'll introduce is the unit type constant, and I'll write it as top, like Vlad did. I want to say that it's already a canonical form, which means that it evaluates to itself; I'll just say that's part of the definition. I'm augmenting the meaning of value, that uninterpreted idea that we had at the beginning, at this point in time, which we might denote by some little token u. If people are familiar with Kripke or Beth semantics, that is kind of what's going on here, but we needn't think about that if we don't want to. Another constant that I'll introduce is the trivial element, bullet. We intend this to be the canonical verification of top, and we say that it's also a value. So then the thing that we want to make evident is that top is a type. To define a type, you cause the judgment, top type, or whatever type you're defining, to become evident. How do we do that? Well, we have certain obligations. The first obligation is to say what the value of top is; we already know that, we've just said so. Then the first real obligation is: what counts as a canonical verification of top? And we say that bullet is a canonical verification of top. And lastly, we have to say when two canonical verifications of top are equal. So we say that bullet and bullet are equal as CVs, canonical verifications that is, of top. And this is the full definition of the type. These are evident judgments; they're evident because we have simply defined these constants in this way. And at this point in time, we now know what it means to be the value of top.
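The whole definition of the unit type fits in a few lines of code. This is my own sketch, not from the lecture; it just instantiates the evaluate-then-check recipe with the two constants just introduced.

```python
# Sketch of the definition of the unit type top ('bullet' stands for the
# trivial element written on the board).  The entire definition: top is
# its own value, bullet is a CV of top, and bullet equals bullet as CVs
# of top.  Membership in top then falls out of the meaning explanation
# for M in A: evaluate M and check the result against the CVs of top.

def evaluate(m):
    # both constants are canonical, so they evaluate to themselves;
    # non-canonical terms would be handled by further rules
    return m

def is_cv_of_top(m):
    return m == 'bullet'

def cv_equal_in_top(m, n):
    return m == 'bullet' and n == 'bullet'

def member_of_top(m):
    """M in top: M evaluates to some M' that is a CV of top."""
    return is_cv_of_top(evaluate(m))

assert member_of_top('bullet')
assert cv_equal_in_top('bullet', 'bullet')
```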
To be the value of top is to be top itself; likewise, to be the value of bullet, the symbol, is to be bullet itself. Now, you'll often see definitions of types given as lists of rules like this for the unit type, where people propound the members of that type. These rules here are going to become evident: we are going to say that they are admissible under the meaning explanation and the definitions of the types. But this is the definition of the type; there's nothing else to it. And you'll find that when we have a type that's sophisticated enough to warrant an interesting elimination rule (the elimination rule for this one, as Vlad said, is trivial), we still do not include the elimination rule in the definition of the type. It simply becomes evident based on the definition of the type. So that was kind of easy, because it's the most trivial possible type. Now we will try to do pi, and you'll see where it goes wrong, because we simply do not have enough kit yet to do it right. Here's what we want to make evident. For anyone who's not familiar, the pi type is called the dependent function type: it's like a function type, except that the type of the output depends on what input you gave it. So we want to say that this is a type under certain circumstances, and we're going to run into trouble immediately, because we have to say under what circumstances we intend this to be a type, and then attempt to make that judgment evident. First, this point is trivial: if pi x in A, B of x is to be a type, then we know that A has got to be a type. That makes sense. But then what do we say about B of x? Well, B of x sort of has to be a type, but we've got to do something with the x, because B of x is not really a type; it's not a value at all. But we will have to say something. So as a first cut, we'll simply abstract over that x with a general judgment.
And we'll say that B of x must be a type assuming that x is a member of A, and then we close the parentheses. So this is our would-be goal for defining pi. But it is not correct, and the reason is that we have not given a sufficient condition to believe that B of x is functional. What does it mean to be functional? Functional here is a synonym for extensional: it means that it takes equal inputs to equal outputs. I can demonstrate how you might run into trouble. Let's say that we define a type; what's a good symbol? The vegetable type. And we say that's a type, with two members: say x1 and x2 are CVs of vegetable. But then suppose I also said that x1 equals x2 as canonical verifications; I can assert that, since it's part of the definition of a type. If I were to do this, then I could define a B such that B of x1 reduced to top, or true, and B of x2 reduced to bottom, or false. And if I could inject this B as the family in the dependent product over A, then I would have a non-functional pi type. Why does that even matter? Well, if you do it this way, you have to be very careful, because if you then define some relation with this, say that the relation is total, and use the axiom of choice, then via Diaconescu's theorem you can prove the law of the excluded middle, which we specifically want to avoid. So it's very important that every construction that we do be invariant under equivalence, and this will not suffice. We will have to work harder. We will require a third premise, and then we'll do away with all of it and do something better. So: for any x and y, B of x equals B of y as a type, assuming that x and y are equal in A. And this kind of does the right thing.
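The naive, hand-patched formation condition for pi now has three premises; written out in my own compact shorthand, the third one is the functionality premise just added.

```latex
% Would-be formation condition for \Pi, with the third "functionality"
% premise supplied by hand:
\begin{gather*}
\Pi(x{:}A)\,B(x)\ \mathsf{type} \;:\equiv\;
  A\ \mathsf{type}, \quad
  (\forall x)\; B(x)\ \mathsf{type}\;(x \in A),\\
  (\forall x,y)\; B(x) = B(y)\ \mathsf{type}\;(x = y \in A)
\end{gather*}
```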
This forces B to be functional in A, because it takes equal inputs to equal outputs. The problem is that this is, to be quite honest, a pain. And since we're going to have binding forms all over the place in type theory, we don't want to have to do this again and again and again, because we're going to screw it up eventually. Additionally, we want certain structural properties to hold, so we will need the notion of a context and a sequent judgment. The point of this is simply to codify the kinds of constraints that we're placing on binding forms, such that they be functional in their inputs. So here we will introduce the notion of a context. We're going to have to give meaning explanations for several judgments simultaneously, by simultaneous induction. The first form of judgment that we will add is that gamma is a context; we write it like that. And there are several other ones that we will have to define: x is fresh in gamma; gamma entails A as a type; gamma entails M in A; gamma entails M equals N in A; and lastly, gamma entails A equals B as a type. So we've got to re-explain all the categorical judgments with respect to a sequent. Note that I'm using this symbol here for the sequent. You can also use this one; that's a turnstile. I don't use a turnstile here because it is typically taken to denote a syntactic, proof-theoretic consequence, whereas in type theory, or at least in the kind of type theory I'm showing you, it's a semantic consequence. So one could write it like this, but no one does that, so I won't either. And Bob Constable back in the day wrote it like this, just because he didn't want to figure out how to type the turnstile on his typewriter. But I'll just write it like this. So we've got to explain these judgments, and the meaning of each one will sort of depend on the meanings of the other ones. But it's not circular; it's inductive-inductive.
So, to know gamma context is to know one of two things; you have two choices. Door number one: you know that gamma evaluates to the empty context (I'm just using this little symbol to indicate the empty context, and we can say it's a value). In that case, you know that gamma is a context; there's nothing further you need to know. Door number two: gamma evaluates to delta comma x in A, that is, the extension of some other context delta, such that delta is a context, x is fresh in delta, and delta entails that A is a type. So we've already depended on things that we have yet to explain, but we'll explain them in due course. The meaning of freshness is that x does not appear in gamma. I could give a formal meaning explanation if I wanted to, but I'm just going to keep it simple. This has a presupposition, that gamma is a context, and this is why it's OK for us to give this meaning explanation; the longhand version would write the presupposition gamma context explicitly. So you can define the freshness condition by structural induction on the evidence for gamma context, which means that you may have a case for when gamma is the empty context, and a case for when gamma is the context extension. So it's defined inductively. Then the meaning of the sequent judgment, gamma entails A type, presupposes just that gamma is a context, nothing else. And here we'll do the full inductive argument. By structural induction on the evidence for gamma context, we may supply two meaning explanations: one in case gamma is the empty context, and two in case gamma is the context extension, along with the evidence for those premises. For case one, to explain that the empty context entails that A is a type, we may appeal to the original categorical judgment that we gave without respect to a context.
For case two, where we have delta, x in B (I'm renaming the type in the context to B so we don't get them confused) entails A type: to know this is to know many complicated things, which I will write out, and I'm going to write them over here because they're kind of big. It is to know that, for any z, delta entails that z substituted in for x in A is a type, under the assumption that z is in B; and that for any z and z prime, delta entails that z for x in A is equal to z prime for x in A, assuming that z equals z prime in B. So these are the premises that we gave by hand before for the definition of the pi type, and now we've baked them into the meaning of the sequent judgment. Then we have to do pretty much the same thing for the rest of the sequent versions of the categorical judgments: M in A, M equals N in A, A equals B type. I'm not going to do that to you, because they're almost identical to this, with just small changes. The thing to keep in mind is that in each case, we have created the conditions necessary to cause the functionality, the extensionality, of any family of types, or any family of values, to be evident. Now that we've built that kit, or at least most of it (and I promised you the rest just is evident), we may try again to define the pi type, which I tend to call the Cartesian product of a family of sets. So let's try again. First, let's introduce the terms into the computation system. We'll say that the pi form is a canonical form, and we'll introduce a lambda abstraction, which is also a canonical form. And we'll introduce the application operator, which can be used to apply a lambda abstraction to some object. In the course of introducing each constant, I'm giving it a computation rule. So I'll say that M of N evaluates to M prime if M evaluates to lambda x, E of x, and E with N put in for x evaluates to M prime.
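The application rule just stated can be sketched as a toy evaluator; this is my own term encoding, not from the lecture. Note the laziness: the argument N is substituted unevaluated.

```python
# The application rule: M(N) evaluates to M' if M evaluates to lam x. E
# and [N/x]E evaluates to M'.  Terms are tagged tuples of my own design:
# ('var', x), ('lam', x, body), ('app', m, n), ('bullet',).

def subst(term, x, n):
    """Capture-naive substitution [n/x]term, enough for this sketch."""
    tag = term[0]
    if tag == 'var':
        return n if term[1] == x else term
    if tag == 'lam':
        _, y, body = term
        return term if y == x else ('lam', y, subst(body, x, n))
    if tag == 'app':
        return ('app', subst(term[1], x, n), subst(term[2], x, n))
    return term                          # constants like ('bullet',)

def evaluate(term):
    tag = term[0]
    if tag == 'app':                     # ('app', M, N)
        _, m, n = term
        _, x, body = evaluate(m)         # M evaluates to ('lam', x, E)
        return evaluate(subst(body, x, n))   # [N/x]E evaluates to M'
    return term                          # lam, bullet, ... are canonical

identity = ('lam', 'x', ('var', 'x'))
assert evaluate(('app', identity, ('bullet',))) == ('bullet',)
```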
This is the computation rule for function application. It's a lazy function application: type theory is lazy, and that's why we do it this way. We don't evaluate N before putting it into the function. It works out better when it's lazy, but Groundstrom showed that you can do a strict version too if you really need to. OK, so these are the additions to the computation system, and now we will try to make the typing judgment for the pi type evident. Here's how we do it. Pi x colon A, B of x is a type; this is our intention. It is a type assuming that A is a type, and, using the sequent judgment which we created, which automatically propagates all the functionality constraints that we had asked for, assuming that x colon A entails that B of x is a type. So this causes B to be functional over A. But this is not a definition yet. The definition consists in us causing, at this point in time (what time is it? It's 4:32; at 4:32 on this day), this judgment to become evident to us. And it becomes evident to us in the following way. By the way, this is a hypothetical judgment, so we may proceed in making the conclusion evident by structural induction on the evidence for the premises. The first thing we need to do is to say that pi x in A, B of x evaluates to something, and we already know it evaluates to itself, so I'll just leave that implicit. Then we have to say what a canonical verification of it is; that's the first real obligation. A canonical verification of it looks like this: lambda x, E of x is a CV of pi x in A, B of x under the circumstances that x in A entails E of x in B of x. And again, I'm using this functional sequent here in order to cause the E to be functional.
So a lambda abstraction is a canonical verification of the Cartesian product of a family of sets under the circumstances that, when you place it in a context with a single variable, it is in the output type; moreover, the interior of the lambda abstraction is functional over its input variable. The second thing we have to say is when two canonical verifications, two canonical functions, are equal. We will say that lambda x, E of x is equal to lambda z, F of z (you can choose any variables you want) under the circumstances that x in A entails that E of x equals F of x in B of x. So two functions are equal when they coincide on their inputs, that is, when they're equal pointwise. This is also called the extensional equality of functions, and it's the correct one to choose for the Cartesian product of a family of sets. So this is the full definition of the Cartesian product type, and we've caused this judgment to become evident at this point in time. Now, one of the things that I had hoped to be able to demonstrate to you today is: what do we do with the elimination rules? You're accustomed, I'm sure, to seeing a rule that looks like this: gamma entails M of N in B of N, if gamma entails M in pi x in A, B of x, and gamma also entails that N is a member of A. So where does this come in? We didn't even give an introduction rule with these contexts, and the reason I didn't is that such an introduction rule becomes evident from the meaning explanation by simply weakening the context. If you have the categorical judgment, lambda x, E of x is a member of pi x in A, B of x, then based on the meaning explanation for the sequent, gamma entails M in A, you may easily, trivially, construct the evidence of the sequent judgment from the evidence of the categorical judgment, and then you may weaken it by arbitrary numbers of premises.
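The full definition of pi can be recapped in my compact shorthand, with a double angle for the sequent judgment carrying the functionality constraints; the notation is mine, not the board's.

```latex
% The full definition of \Pi (the Cartesian product of a family of sets):
\begin{gather*}
\Pi(x{:}A)\,B(x)\ \mathsf{type}
  \;\;\big(A\ \mathsf{type};\;\; x{:}A \gg B(x)\ \mathsf{type}\big)\\[0.5ex]
\lambda x.\,E(x)\ \text{is a CV of}\ \Pi(x{:}A)\,B(x)
  \;\;\big(x{:}A \gg E(x) \in B(x)\big)\\[0.5ex]
\lambda x.\,E(x) = \lambda z.\,F(z)\ \text{as CVs of}\ \Pi(x{:}A)\,B(x)
  \;\;\big(x{:}A \gg E(x) = F(x) \in B(x)\big)
\end{gather*}
```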
So for the introduction rules, how you translate them into this form seems trivial. But for the elimination rules, maybe not so, and so I'm going to prove it to you. Basically, we want to make this elimination judgment evident. I'm going to need some space. These meaning explanations that I'm giving you are called verificationist meaning explanations for a reason: in order to define a type, you only explain what the canonical verifications are, and then all the other rules follow directly from that; they become evident based on the evidence of the premises. Another way to put this is that this rule will become admissible, and we're going to demonstrate that now. So let me repeat the conclusion that we intend to make evident. We're going to do some kind of natural deduction proof tree here; we're just going to see where it takes us. We already have certain premises that we can take advantage of: I'm going to call the evidence of the first premise D, and the evidence of the second premise E. Then we may define the evidence of the conclusion by structural induction on these bits of evidence. So let's just see where we get. First, we know that it suffices to consider the case where gamma is the empty context, because that case is stronger than the one for an arbitrary gamma: we can always weaken it to any gamma. So we may do that. And then, based on the meaning explanation for the sequent judgment, if we're dealing with the empty context, we may simply drop the sequent part everywhere and consider just the categorical forms of judgment. So this is already getting simpler. All right. Now, from this judgment, we can also take advantage of its presuppositions.
From the presuppositions to this judgment, we must know that B of N is a type, and if it's a type, then it must have a value; say it evaluates to B prime. Good. Now we will simply follow the meaning explanation for this judgment: to know M of N in B with N for x is to know that M of N evaluates to some M prime such that M prime is a CV of B prime. Now we may go by inversion on the computation rule for application. What are the premises there? We know that M must evaluate to lambda x, E of x, and that E with N put in for x evaluates to M prime. So we've reduced that goal to these premises. What do we have to do next? We need to justify them, so let's go back to our premises. From the evidence D, we can extract the following facts. We know that M must evaluate to some lambda abstraction, and it's going to be the same one as we have over there, because evaluation is confluent. We also know that lambda x, E of x is a CV of that pi type. And so then what we can do is go by inversion on the definition of the canonical verifications of the pi type. I'm not going to show all the premises, because we'll only need one of them (I hope I'm not too far off to the side here): x in A entails that E of x is in B of x. Then we may go by inversion on the meaning explanation for the sequent judgment, pulling out one premise: for any x, E of x is in B of x, assuming that x is in A. So now we're getting closer. I think what we've got to do now is use this evidence E, that N is a member of A. Based on the meaning explanation for the general judgment, we may swap in N for x, and we already know that N is a member of A, so we can discharge that hypothesis. And then we have this evident judgment: E of N is a member of B of N. And let's go back here.
How would we know that M prime is a canonical verification of B prime? Well, what that would have to mean is that... actually, we already have that. So here we go. In order to know this membership judgment, we must also know that E of N evaluates to some M prime, and we also know from the presupposition that B of N evaluates to some B prime. (Any time these are duplicated across different places, because evaluation is confluent, we can just assume that they mean the same thing.) And the last bit that we get from inversion here is that M prime is a CV of B prime. I'm going to call the evidence for this bit F. And then the conclusion is simply evident by means of F, which we derived over here by inversion on the definitions of the types and the meanings of the judgments. And now we have proved the admissibility of the elimination rule for the Cartesian product of a family of sets. That's the main idea that I wanted to get across: the elimination rules are not part of the definition of a type. You get them separately, as sort of meta-theorems, based on the meanings of the judgments. And even the sequent judgment in type theory is defined after the categorical judgments. Because I've got a few minutes left, I'd like to briefly compare the type theory that I just propounded with the intensional type theory that came into vogue post-1986. So what's different? Well, I will give the most charitable presentation of intensional type theory. We will not be able to define a type solely by virtue of its introduction forms; we will have to give both the introduction and the elimination forms. And there's an issue that you'll have to deal with if you ever try to make a proof assistant for intensional type theory, which is a little concerning, but I'll briefly show how you can resolve it.
So if we're defining a type in intensional type theory by means of its intros and elims, I'll give an example. We'll say that if M is of type A and N is of type B, then the pair of M and N is of type A times B; that's an introduction rule. And then we have to give elimination rules: one for the first projection of the pair, and another for the second projection. Now if we do this, and we try to actually go and explain the judgment M colon A, what does that mean? Well, we'll be in a position where, for any type, we'll have to know what all the other types of the theory are. In the simple case, let's say that we're trying to say that M is in top. With the meaning explanations, we were able to say: well, it means that M evaluates to bullet. Here, however, we're in a position where, OK, M might be bullet, but M might also be first of R where R is of type top times B. And likewise, it might be second of R where R is of type A times top, and so on. So in order to say what it means to be a member of any type, we have to know in advance what all the elimination forms of every other type are. The system is completely anti-modular and very difficult to work with. One way to work around this is to fragment the typing judgment into the verifications and the uses. To know M checks at type A is to know (we'll give the simple example of top) that M is bullet. On the other hand, we'll have another judgment, for synthesis, which says that we can guess the type of R. And here we may do so by just appealing to whatever's in the context: if R is a variable, we look up the variable in the context and figure out what type it was hypothesized to have; or if R is the first projection of a variable, then we look up the variable in the context, hope that its type is A times B, and then we say that the projection has type A, and so forth. The only remaining problem to deal with is that I sort of just snuck in a sequent here.
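The fragmented judgment can be sketched as a toy bidirectional checker; the encoding is my own, covering just the unit and binary product fragment discussed above, and all names are invented.

```python
# Sketch of splitting the typing judgment into checking ("M checks at A",
# the verifications) and synthesis ("the type of R can be guessed from
# the context", the uses).  Types: 'top' or ('times', A, B).

def check(ctx, term, ty):
    """Verification direction: does term check at type ty?"""
    if term == 'bullet':
        return ty == 'top'
    if isinstance(term, tuple) and term[0] == 'pair':
        _, m, n = term               # ('pair', M, N) against ('times', A, B)
        return (isinstance(ty, tuple) and ty[0] == 'times'
                and check(ctx, m, ty[1]) and check(ctx, n, ty[2]))
    return synth(ctx, term) == ty    # otherwise fall back to synthesis

def synth(ctx, term):
    """Use direction: infer the type of term from the context."""
    if term[0] == 'var':
        return ctx[term[1]]          # look up the hypothesized type
    if term[0] == 'fst':             # first projection of R : A times B
        return synth(ctx, term[1])[1]
    if term[0] == 'snd':             # second projection of R : A times B
        return synth(ctx, term[1])[2]

ctx = {'r': ('times', 'top', 'top')}
assert check(ctx, ('fst', ('var', 'r')), 'top')
assert check(ctx, ('pair', 'bullet', 'bullet'), ('times', 'top', 'top'))
```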
And what actually ends up happening, interestingly, in intensional type theory is that, whereas we're accustomed to explaining the categorical judgments first and then causing a categorical judgment to become evident in order to define a type, in intensional type theory we actually have to define all the sequent judgments first. And then the categorical judgment becomes evident by virtue of the evidence of the sequent judgment at the empty context. So things are turned backwards. It's required in order to deal with both introductions and eliminations via this fragmented judgment system of verifications and uses. But that is how you can fix the anti-modularity of intensional type theory, and that's how logical frameworks today are implemented. That's probably all I've got time to talk about. So I'd love to answer any questions. Thank you, everyone.