s_1 up through s_{n-1}, such that the braid relations hold, but also s_i squared is the identity. So here, s_i is just the permutation that switches i and i+1. And here, the homomorphism, as I said, sets q equal to 1. And then this relation here is just the braid relation. And what's the composite homomorphism? It says: take a braid, and if I follow the strands from left to right, I get a permutation on the ends of the braid. For 1, 2, 3 here: 1 goes to 3, 2 goes to 1, 3 goes to 2. Now, you should think of this Hecke algebra as being a kind of fancy, deformed version of the group ring of the symmetric group; we've inserted this q. And in particular, a lot of things about the representation theory of the Hecke algebra are closely related to the representation theory of the symmetric group S_n. OK, but now I'm going to keep this presentation; let me not erase it yet. Instead, let me say that there's actually a better set of generators for the Hecke algebra: I take b_i = q - t_i, so in other words t_i = q - b_i. And notice that this looks an awful lot like the map from the braid group to the Temperley-Lieb algebra that I just managed to erase. This is clearly a perfectly good set of generators, so I could change variables in this presentation over here to express all the relations in terms of the b's. And when I did that, I'd find that H_n has a presentation that looks like this: generators b_1 up through b_{n-1}. I still have far commutativity, b_i b_j = b_j b_i when i and j are far apart. Now I get a relation that looks just like the one in the Temperley-Lieb algebra, b_i^2 = [2] b_i, where [2] = q + q^{-1} is the quantum 2. And then the Reidemeister 3 move turns into this relation: b_i b_{i+1} b_i - b_i is equal to the same thing with the roles of i and i+1 switched, b_{i+1} b_i b_{i+1} - b_{i+1}, OK?
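None of this is in the lecture, but the relations in the b-generators are easy to sanity-check in a small matrix model. Here is a sketch in sympy using a 2-dimensional representation of H_3 (a rescaled reduced Burau representation, which is my choice of model, not something from the lecture), in which each t_i has eigenvalues q and -q^{-1}:

```python
import sympy as sp

q = sp.symbols('q', positive=True)
I2 = sp.eye(2)

# A 2-dimensional representation of the Hecke algebra H_3 (a rescaled
# reduced Burau representation -- my choice, not from the lecture).
# Each t_i has eigenvalues q and -1/q, so (t_i - q)(t_i + 1/q) = 0.
t1 = sp.Matrix([[-1/q, q], [0, q]])
t2 = sp.Matrix([[q, 0], [1/q, -1/q]])

# The change of generators from the lecture: b_i = q - t_i.
b1 = q*I2 - t1
b2 = q*I2 - t2
two = q + 1/q   # the "quantum 2", written [2]

# b_i^2 = [2] b_i
assert sp.expand(b1*b1 - two*b1) == sp.zeros(2, 2)
assert sp.expand(b2*b2 - two*b2) == sp.zeros(2, 2)

# The Reidemeister-3 relation in the b-generators:
#   b_1 b_2 b_1 - b_1 = b_2 b_1 b_2 - b_2
lhs = b1*b2*b1 - b1
rhs = b2*b1*b2 - b2
assert sp.expand(lhs - rhs) == sp.zeros(2, 2)
print("all Hecke relations hold in this representation")
```

In this particular representation both sides of the last relation are in fact zero, which is consistent with the remark below that both terms vanish in the Temperley-Lieb algebra.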
And you see, in the Temperley-Lieb algebra this relation holds because actually both of these terms are 0 there. And these are the same as the relations we had here. So that means I have a homomorphism, let's call it p_2: H_n -> TL_n, that sends b_i to b_i, OK? All right, and so now what we're going to do is say that the HOMFLY polynomial is defined in much the same way that the Jones polynomial was. So let's put this down as the theorem of Ocneanu, OK? There exist maps tr_n from H_n to, let me just say, Z[a^{±1}, q^{±1}], satisfying the following properties, which I'm going to write over here. One is that it's a trace: tr_n(xy) = tr_n(yx). Two is that if I look at the inclusion, let's call it ι: H_n -> H_{n+1}, then tr_{n+1}(ι(x)) = ∂_0 tr_n(x). And three is that tr_{n+1}(ι(x) b_n) = ∂_1 tr_n(x), OK? Here ∂_n, the squiggly symbol on the board, is (a q^{-n} - a^{-1} q^n)/(a - a^{-1}), OK? And the construction of these traces basically just relies on taking some standard tricks in the representation theory of S_n and applying them to the Hecke algebra, OK? And then, let's call this a definition: P̃(σ̄) is tr_n(ψ(σ)), OK? So that's the best definition, really, of what this HOMFLY polynomial is: it is this trace. But what I'd like to talk about a little bit is a pictorial interpretation. The same way we drew the generators of TL_n as tangles, and the generators of the braid group as tangles, we should similarly draw these b_i's in H_n. And the way that we'll draw them is we'll put a little graph with a thick edge here and two trivalent vertices. That's the picture that we draw.
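The claim that both terms vanish in the Temperley-Lieb algebra can be checked mechanically. Here is a small sketch (my addition, not from the lecture) that reduces words in the TL_3 generators using the diagrammatic relations b_i^2 = [2] b_i and b_i b_j b_i = b_i for |i - j| = 1; the encoding of elements as tuples of generator indices is mine:

```python
import sympy as sp

q = sp.symbols('q')
delta = q + 1/q   # value of a closed circle; here it equals the quantum 2

def reduce_word(word):
    """Reduce a word in the TL_3 generators to (coefficient, reduced word),
    using b_i b_i -> [2] b_i and b_i b_j b_i -> b_i for |i - j| = 1."""
    coeff = sp.Integer(1)
    w = list(word)
    changed = True
    while changed:
        changed = False
        for k in range(len(w) - 1):
            if w[k] == w[k+1]:                 # b_i b_i -> [2] b_i
                coeff *= delta
                del w[k+1]
                changed = True
                break
        else:
            for k in range(len(w) - 2):
                if w[k] == w[k+2] != w[k+1]:   # b_i b_j b_i -> b_i
                    del w[k+1:k+3]
                    changed = True
                    break
    return coeff, tuple(w)

# b_1 b_2 b_1 - b_1 = 0 and b_2 b_1 b_2 - b_2 = 0 in TL_3:
assert reduce_word((1, 2, 1)) == (1, (1,))
assert reduce_word((2, 1, 2)) == (1, (2,))
# b_i^2 = [2] b_i:
assert reduce_word((1, 1)) == (delta, (1,))
```

So both sides of the cubic Hecke relation reduce to 0 in TL_3, which is why the map p_2 is well defined on these relations.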
And so for example, b_1 b_1 would be represented by a picture that looks like this: the strands come apart and come back together again like that. These are sometimes called MOY diagrams. I'll also write this as the bracket of ψ(σ). And we'll talk more about these MOY diagrams in the next lecture. OK, so now here we have the Hecke algebra. One thing that's true, as I said, is that the Hecke algebra is kind of a deformation of the group ring of the symmetric group. So the dimension of H_n over that ring R is the same thing as the dimension of Z[S_n] over Z, which is the size of the symmetric group, n factorial. And so there's a sort of nice basis for H_n, where the hardest thing is to spell Kazhdan and Lusztig properly. I don't want to define this basis in general, but I'll just discuss it by example in the case n = 3. So this basis will be b_s, where s runs over the set S_n. And so for example, b of the identity element is just 1 in the algebra. b_1 is the thing I called b_1. I have b_{12}, for the permutation that I get by multiplying s_1 and s_2; this is b_1 b_2. And b_{21} = b_2 b_1. So far, everything is very stupid looking. But the interesting thing, and I should keep this here for one more second, is the element associated to the longest word, 121, in S_3. So b_{121}, let me leave that board up but just go over here: b_{121} = b_1 b_2 b_1 - b_1, which is the same thing as b_2 b_1 b_2 - b_2. And notice from looking at this, I can see that if I take b_i times this b_{121}, it's the same thing as b_{121} times b_i, which is always just [2] times b_{121}. And in particular, with respect to this basis, all the structure constants of the multiplication are positive: they're sums of positive powers of q. So this is what was called the Kazhdan-Lusztig conjecture. Let's go over here. It was originally conjectured by Kazhdan and Lusztig, and then proved by Beilinson and Bernstein, and by Brylinski and Kashiwara.
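To see the positivity concretely, one can compute in H_3 by rewriting words in the b-generators. This sketch is my addition (the encoding of the presentation as rewriting rules is mine, not from the lecture); it checks that b_1 b_{121} = [2] b_{121} and that b_{121}^2 = ([2]^3 - [2]) b_{121}, whose coefficient [2]^3 - [2] = [2][3] is indeed a positive combination of powers of q:

```python
import sympy as sp

q = sp.symbols('q')
two = q + 1/q   # the quantum 2, [2]

# Elements of H_3 in the b-generators, stored as {word: coefficient},
# where a word like (1, 2, 1) means b_1 b_2 b_1.  Rewriting rules:
#   b_i b_i      -> [2] b_i
#   b_2 b_1 b_2  -> b_1 b_2 b_1 - b_1 + b_2   (the cubic relation)

def add_term(d, word, coeff):
    d[word] = sp.expand(d.get(word, 0) + coeff)
    if d[word] == 0:
        del d[word]

def reduce_elt(elt):
    done, todo = {}, dict(elt)
    while todo:
        word, c = todo.popitem()
        for k in range(len(word) - 1):
            if word[k] == word[k+1]:
                add_term(todo, word[:k] + word[k+1:], c * two)
                break
        else:
            for k in range(len(word) - 2):
                if word[k:k+3] == (2, 1, 2):
                    add_term(todo, word[:k] + (1, 2, 1) + word[k+3:], c)
                    add_term(todo, word[:k] + (1,) + word[k+3:], -c)
                    add_term(todo, word[:k] + (2,) + word[k+3:], c)
                    break
            else:
                add_term(done, word, c)
    return done

def mul(x, y):
    out = {}
    for w1, c1 in x.items():
        for w2, c2 in y.items():
            add_term(out, w1 + w2, c1 * c2)
    return reduce_elt(out)

# Kazhdan-Lusztig basis element for the longest word: b_121 = b_1 b_2 b_1 - b_1
b121 = {(1, 2, 1): sp.Integer(1), (1,): sp.Integer(-1)}
b1 = {(1,): sp.Integer(1)}

# b_1 * b_121 = [2] b_121
assert mul(b1, b121) == {w: sp.expand(two * c) for w, c in b121.items()}

# b_121 * b_121 = ([2]^3 - [2]) b_121 = [2][3] b_121: positive coefficients
coef = sp.expand(two**3 - two)
assert mul(b121, b121) == {w: sp.expand(coef * c) for w, c in b121.items()}
```

The point of the second assertion is exactly the positivity phenomenon: the structure constant [2][3] is a sum of positive powers of q.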
So let's just write it this way: b_s b_t. Well, this is a basis, so the product is necessarily equal to a sum over u of c_{st}^u b_u, and the claim is that all of these c_{st}^u have positive coefficients. And actually, this phenomenon is one of the first appearances of the notion of categorification. So there are lots of different ways to think about this theorem, but probably the easiest one was discovered by Soergel. [Question: is this b_{121}?] Yes, that's right; about my naming: if I write, for example, 121, that's supposed to mean the permutation s_1 s_2 s_1. OK, yes. OK, so Soergel proved this theorem by thinking about, let's say, R_n-R_n bimodules. So those are modules where I can multiply on the left by my ring R_n and on the right by my ring R_n, and the two actions aren't necessarily the same; here R_n is just a polynomial ring in n variables. If you think about it, that's exactly the same thing as, let's say, R̃_n-modules, where R̃_n is just the ring with variables x_1 up through x_n and y_1 up through y_n. So these x's give the left action of R_n, and these y's give the right action of R_n. For example, the identity bimodule here is 1, which is R_n, which is the same thing as taking R̃_n and dividing by the relations x_i = y_i. OK, so Soergel defined bimodules B_i; here, I'll think of B_i as R̃_n modulo the relations x_j = y_j for j not equal to i or i+1, together with x_i + x_{i+1} = y_i + y_{i+1} and x_i x_{i+1} = y_i y_{i+1}. And I can multiply these just by taking tensor products over R_n. So I could start with these B_i's and just start taking all the possible tensor products that I could see, and then look for indecomposable summands, OK? And what Soergel proved... yes, these are supposed to be i and i+1, thank you very much, definitely i and i+1. So one thing that he proved is that these B_i's satisfy, in the appropriate sense, the relations in the Hecke algebra.
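As a quick check of the definition (my addition, using sympy's Gröbner bases, not something from the lecture): in B_1 for n = 2, the relations identify the elementary symmetric functions in the x's with those in the y's, so every symmetric polynomial in the x's should equal the same polynomial in the y's. For example, the power sums:

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

# Soergel's bimodule B_1 for n = 2: the quotient of Z[x1,x2,y1,y2] by
# x1 + x2 = y1 + y2 and x1*x2 = y1*y2 (elementary symmetric functions agree).
ideal = [x1 + x2 - y1 - y2, x1*x2 - y1*y2]
G = sp.groebner(ideal, x1, x2, y1, y2, order='lex')

def is_zero_in_B1(p):
    # p is zero in the quotient iff its remainder modulo the ideal is zero
    return G.reduce(p)[1] == 0

# Power sums in the x's equal power sums in the y's:
assert is_zero_in_B1(x1**2 + x2**2 - y1**2 - y2**2)
assert is_zero_in_B1(x1**3 + x2**3 - y1**3 - y2**3)
```

This is the algebraic shadow of the fact that B_i is the bimodule R ⊗ R taken over the s_i-invariant polynomials: both actions agree on symmetric functions in x_i, x_{i+1}.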
And let's say, maybe let me just write B_1 ⊗ B_2 ⊗ B_1. This bimodule is B_{121} ⊕ B_1, where B_{121} is R̃_n modulo the following relations. And notice that really what I did before was take the first elementary symmetric function in the x's and set it equal to the first elementary symmetric function in the y's, and likewise the second elementary symmetric function. So here, I take all three elementary symmetric functions in x_1, x_2, x_3: e_1 of the x's is e_1 of the y's, e_2 of the x's is e_2 of the y's, and e_3 of the x's is e_3 of the y's. And this relation here exactly categorifies this definition over here. Yeah, that's right: all the brackets are always going to be quantum integers. And so, right, OK, good. [Question.] So these are graded: the grading of x_i is 2, and I should really have said that this is q^{-1} times this. In other words, the element 1 in this ring really sits in grading -1. And then multiplication by q is shifting the grading up by 1, or down by 1. Soergel showed that the set of indecomposable summands of the things I get by taking iterated tensor products of these B_i's is actually a finite set, consisting of B_s for s in S_n. And if I take B_s ⊗ B_t, this is a direct sum of c_{st}^u copies of B_u, where these numbers are the same ones as in the Hecke algebra, OK? And this is the reason that those coefficients are positive, right, because I can't have a negative direct sum. OK, what's this all got to do with categorifying the HOMFLY polynomial? So remember, we have σ_i going to q - b_i in the Hecke algebra, and similarly σ_i^{-1} going to q^{-1} - b_i. OK, so Rouquier showed that there is a well-defined map from, say, Br_n to complexes in SBim_n, the category whose objects are direct sums of the B_s's with grading shifts, and whose morphisms are morphisms of bimodules, OK?
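As a one-line sanity check (not from the lecture): q - b_i and q^{-1} - b_i really are inverse to each other, using only the relation b_i^2 = [2] b_i. Since only a single generator is involved, an ordinary commutative symbol suffices:

```python
import sympy as sp

q, b = sp.symbols('q b')

# Expand (q - b)(1/q - b) and then impose the Hecke relation b^2 = [2] b,
# where [2] = q + 1/q.  The result should be exactly 1.
prod = sp.expand((q - b) * (1/q - b))
prod = prod.subs(b**2, (q + 1/q) * b)
assert sp.simplify(prod) == 1
```

So the assignment σ_i to q - b_i, σ_i^{-1} to q^{-1} - b_i at least respects invertibility, which is the decategorified shadow of Rouquier's complexes being invertible up to homotopy.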
And this sends σ_i to a complex that looks like this, q · 1 -> B_i, and σ_i^{-1} to a complex that looks like B_i -> q^{-1} · 1; there's some map s here and some map s' here. OK, and so I have to tell you what these maps are, OK? And to do that, let's just work in the case n = 2, right? So for example, B_1, that's R̃_2 modulo these relations, x_1 + x_2 = y_1 + y_2 and x_1 x_2 = y_1 y_2, OK? But I could just use this to eliminate y_2, let's say; the linear relation means I can just get rid of y_2. This is the same thing as the ring Z[x_1, x_2, y_1] modulo the relation (x_1 - y_1)(x_2 - y_1) = 0. And on the other hand, this bimodule 1, well, I could again eliminate y_2 if I felt like it; this is Z[x_1, x_2, y_1] modulo the relation x_1 = y_1, OK? And so here we have, what do we have? I have s mapping from 1 to qB_1. What does that do? Well, it just takes 1 in here and multiplies by x_2 - y_1. And s', going from B_1 to 1, takes 1 in here and just sends it to 1. This is a perfectly good map; it's like mapping Z/6 to Z/2 by sending 1 to 1. OK. So I'm now operating in negative time, sorry about that. So let me just finish quickly by saying that Rouquier showed this is really a well-defined map in the homotopy category of complexes, OK? So what's the analog of taking the closure? It turns out that the analog of taking the closure is Hochschild homology. So there's a functor, let's call it HH, from R_n-bimodules to graded R_n-modules. And Khovanov showed that HHH(σ̄), which is what I get by taking the functor HH, which is Hochschild homology, applying it to this complex of bimodules C(σ), and then taking the homology of that complex, categorifies P. So HH here plays the role of the Ocneanu trace. OK. All right, and I'm sorry, this was a little bit too quick. Yes, so C is, yes, did I not call this map C?
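The elimination of y_2 above can be verified with a Gröbner basis computation (my addition, not from the lecture): with a lex order that puts y_2 first, y_2 reduces to x_1 + x_2 - y_1, and the leftover relation is exactly (x_1 - y_1)(x_2 - y_1) = 0.

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')

# B_1 for n = 2, as in the lecture: relations x1+x2 = y1+y2, x1*x2 = y1*y2.
# Eliminating y2 should leave Z[x1, x2, y1] / ((x1 - y1)(x2 - y1)).
G = sp.groebner([x1 + x2 - y1 - y2, x1*x2 - y1*y2],
                y2, x1, x2, y1, order='lex')

# y2 is eliminated: it reduces to x1 + x2 - y1.
assert G.reduce(y2)[1] == x1 + x2 - y1
# ...and the remaining relation holds: (x1 - y1)(x2 - y1) = 0 in B_1.
assert G.reduce((x1 - y1) * (x2 - y1))[1] == 0
```

In the same presentation, the map s is multiplication by x_2 - y_1, and composing s with s' gives multiplication by x_2 - x_1 on the identity bimodule, since y_1 = x_1 there.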
Yes, there is a well-defined map C going here. OK. All right, so I'll stop now, and we'll come back and talk some more about colored homology in the afternoon.