All right, thank you so much for inviting me. I'm really sorry for not coming in person to Paris; it would be great to see all of you, but hopefully one day I will be there, and I'll try my best doing this online. Again, if you have any questions, I guess Andrei will relay them to me; I'm not sure I will monitor the Q&A or who can raise hands, so please ask questions out loud. Okay, so there will be five lectures, and the plan is to start slowly. Today will be mostly the definition of Khovanov-Rozansky homology: what is this thing, some examples, and mostly it will look like a fairly formal commutative algebra exercise. First of all, I have to say, and I'll say it again in a couple of minutes, that I will talk about link invariants. I guess none of you are really topologists, and neither am I, so I will try to explain that this is not really a topological problem we are talking about: you can phrase everything purely in terms of commutative algebra. And I'll give a lot of examples: how to define this homology, how to compute it, slightly more complicated examples, and what we know and don't know about it. Then in the next lecture, I'll talk about braid varieties: for a large class of braids and a large class of links, this gives a very explicit geometric model for computing, or at least defining, this homology. Roughly speaking, you have an algebraic variety, and the homology of this algebraic variety is Khovanov-Rozansky homology, with some subtleties which I will explain next time. Then in lecture three, I will talk about slightly more subtle structures and homological operations on this homology. Unlike the homology of a topological space, this homology has no multiplication.
But there are lots of interesting homological operations and analogs of tautological classes, which are familiar to many people in the audience. And there is this nice deformation, the y-ification of this homology. I will explain how to work with it, how to compute many more examples with these techniques and operations, and prove some interesting results. In the fourth lecture, I'll talk about algebraic links. That's an even smaller class of links, but there is a very interesting connection to plane curve singularities and affine Springer theory; some structures and some varieties that appear there are actually closely related to the course on affine Springer fibers, because that's very, very related. And finally, in the last lecture, I'll talk about the Hilbert scheme of points on the plane, which I guess is familiar to most people here in the audience, and how that is connected to link homology. So there will be lots of different models for how to use commutative algebra or algebraic geometry to understand what's going on with this homology. That's the idea. And before all this, I just want to spend a couple of minutes talking about what we mean by a link invariant, how to build a link invariant, and what a link is, if you haven't seen one. First of all, I'd like to start with the braid group. The braid group is the group with generators σ_1 through σ_{n-1} and the relations written here: σ_i σ_{i+1} σ_i = σ_{i+1} σ_i σ_{i+1}, and σ_i σ_j = σ_j σ_i if i and j are far enough apart. I think many people have seen this group. One should think of σ_i as a simple crossing: I have n strands, and I cross the i-th and (i+1)-st strands. I choose this to be the positive crossing; this is σ_i, and σ_i^{-1} is the opposite crossing.
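For reference, the braid group presentation just described can be written out as follows; this is the standard Artin presentation, matching the relations stated verbally in the lecture:

```latex
% Artin presentation of the braid group Br_n
\[
  \mathrm{Br}_n \;=\; \left\langle \sigma_1, \dots, \sigma_{n-1} \;\middle|\;
  \begin{array}{ll}
    \sigma_i \sigma_{i+1} \sigma_i = \sigma_{i+1} \sigma_i \sigma_{i+1}, & 1 \le i \le n-2,\\[2pt]
    \sigma_i \sigma_j = \sigma_j \sigma_i, & |i-j| \ge 2
  \end{array}
  \right\rangle
\]
```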
And by vertically stacking such pictures, we can get an arbitrary braid. That should be familiar to most people. Maybe less familiar to algebraic geometers are the two theorems of Alexander and Markov. Alexander's theorem says that any link can be obtained as the closure of some braid. So here α is an arbitrary three-strand braid, and I can close it up like this: I join the ends of the braid on top and on the bottom, and connect them like this. This is a closed diagram with no ends, and this is a link, possibly with several components. Depending on the braid α, you could have three components, two components, or one component in this picture, depending on the permutation corresponding to α. For example, if α is the identity braid, nothing happens, and I just have three circles: a link with three components. The slightly more subtle theorem of Markov says when two braids give the same link. Two braids close to the same link if and only if they are related by the following moves. First, I can relate αβ and βα; this is equivalent to conjugation of a braid. I can take this part of the braid and slide it down; when I slide it to the right, α is upside down, and when I slide it again, α is right side up, so these two links are clearly the same. Second, if α is a braid on n strands, I can add one more strand over here, marked like this (this is a new strand), and then add a crossing, marked with the red circle; that crossing can be positive or negative. And again, here I change the number of strands: I go from two strands to three strands. But it's clear that I can undo this kink, and the resulting link is the same. So these two operations, conjugation and what is called positive and negative stabilization, don't change the link.
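The two Markov moves can be summarized symbolically; here the hat denotes the closure of a braid, and this is just the standard statement of Markov's theorem described above:

```latex
% Markov's theorem: two braids close to the same link iff related by these moves
\[
  \widehat{\alpha\beta} \;=\; \widehat{\beta\alpha}
  \qquad \text{(conjugation)}
\]
\[
  \widehat{\alpha} \;=\; \widehat{\alpha\,\sigma_n^{\pm 1}}
  \qquad \text{(positive/negative stabilization, }
  \alpha \in \mathrm{Br}_n \hookrightarrow \mathrm{Br}_{n+1}\text{)}
\]
```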
But they change the braid, obviously; in particular, the second operation changes the number of strands, so you can have two different braids with different numbers of strands which close up to the same link. So, given all this, let me give a rough plan for everything that will happen. We are interested in topological link invariants. We have a link, which is some curve in three dimensions, but we don't want to regard it as a curve in three dimensions; we want to get more algebraic structures out of it. So first of all, we present this link as the closure of a braid. Then, to define a successful link invariant, what do we need? We need to assign something to a crossing: for each crossing, positive or negative, we assign something, and this something will depend on the context, of course. Then this something should satisfy the braid relations which I wrote in the beginning: σ_i σ_{i+1} σ_i = σ_{i+1} σ_i σ_{i+1} and so on. If it satisfies the braid relations, then automatically we get a braid invariant: for any braid, however we write it as a product of generators, we get something invariant. But this is not enough to build a link invariant. To build a link invariant, we need to describe some operation of closing a braid, and you should think of it as some kind of trace, in a more or less abstract sense. So you have a braid, and you have to explain what it means to close it. And then for this operation of closure, you need to check additional invariance under conjugation and stabilization, under the Markov moves: whatever you assign to αβ and to βα may be different, but once you close the braids, the results are the same. And the same thing here: these are different braids, α and α with this extra yellow strand.
So they have different braid invariants, and those even live in different categories if you want; but once you close the braids, this operation should give the same result. And then link invariance is just a formal consequence of the general theorems of Alexander and Markov. So what I need to explain to you is: how to assign something to a crossing, verify the braid relations, and then describe the operation of closing a braid, which is, I would say, a separate part of the construction: not only checking braid relations, but what it means to close a braid. That's pretty much it for the general outline. Sometimes it's also useful to restrict yourself to some subclass of braids, for example just positive braids, where you don't have negative powers. And maybe you also want to relax your invariants: you can say we are interested in braid invariants which are only invariant under conjugation but are not preserved by the Markov moves, for example by stabilization; there are lots of these invariants. We can also ask about invariants, and we'll see them today and next time, which are invariant under conjugation and only positive stabilization, but not invariant under negative stabilization. This won't give you a topological invariant, but for many purposes it's still very good and very interesting to study. Maybe let me pause here and ask if there are any questions about this very general plan of how to build invariants. Any questions? Okay, if there are no questions, let's go to the actual algebra. So we start with the ring R, which is a polynomial ring in n variables x_1 through x_n. There is an action of the symmetric group which permutes the variables in the obvious way, and we will consider bimodules over this ring. To every braid, we will associate a bimodule, or a complex of bimodules, over this ring. Okay. The most elementary bimodule that we will consider is called B_i: this is R tensor R over R^{s_i}.
Here s_i is the transposition of i and i+1, and R^{s_i}, let me write it down: these are really the s_i-invariant functions in R. In particular, x_1 through x_n are the generators of the left copy of R, and the primed variables x_1' through x_n' are the generators of the other copy. So what are the s_i-invariant functions? We require that x_i + x_{i+1} is equal to x_i' + x_{i+1}': this is a function invariant under the transposition of i and i+1, so the action of this element on the left equals the action of the corresponding symmetric function on the right. Likewise, the action of x_i x_{i+1} on the left equals the action of x_i' x_{i+1}' on the right. And the action of x_j equals the action of x_j' on the left and on the right, provided that j is not equal to i or i+1; all these x_j are clearly invariant under this transposition. Sometimes it's useful to draw this. I won't draw a lot of stuff, because this is really a more algebraic course, but people like to draw the following picture: x's on the bottom, x_1 through x_n; x primes on the top, x_1' up to x_n'. The strands x_i and x_{i+1} merge together and then split into x_i' and x_{i+1}'. But when they merge and split, you don't know whether they stay the same or are transposed, so the only things you know are preserved are the symmetric functions: symmetric functions in x_i and x_{i+1} are the same as symmetric functions in x_i' and x_{i+1}'. So this is a bimodule: you can think of the left action of R as sitting on the bottom of this picture, and the right action of R on the top. Any questions about this bimodule? Okay.
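To fix notation, the bimodule just described can be written in either of the two equivalent forms mentioned, as a tensor product or by generators and relations:

```latex
% The elementary Soergel bimodule B_i over R = C[x_1,...,x_n]
\[
  B_i \;=\; R \otimes_{R^{s_i}} R
  \;\cong\;
  \frac{\mathbb{C}[x_1,\dots,x_n,\;x'_1,\dots,x'_n]}
       {\left(
         \begin{array}{l}
           x_i + x_{i+1} - x'_i - x'_{i+1},\\
           x_i\,x_{i+1} - x'_i\,x'_{i+1},\\
           x_j - x'_j \quad (j \neq i,\, i+1)
         \end{array}
       \right)}
\]
```

The unprimed variables carry the left R-action and the primed ones the right R-action, matching the bottom/top of the picture in the lecture.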
Eugene, I just wanted to say, I think it's a good point to say, that the picture you drew with the x's and x primes is just a picture for intuition, and the actual definition of B_i is the tensor product on the left. Yeah, exactly. So again, the most explicit way to think about this is either this tensor product, or polynomials: maybe I should write it down, but it's fine — I have polynomials in the x's and x primes, quotiented by these relations over here. An exercise, if you haven't seen these things before: you can tensor bimodules over R. B_i is a bimodule, with a left action and a right action; I can tensor it with itself over the middle R and get again a bimodule, with R acting on the left and on the right. And this tensor product of bimodules actually splits as B_i ⊕ B_i. This is one of the exercises, and you can check it in the exercise session. Another exercise, to get familiar with these bimodules: there are very interesting maps of bimodules from B_i to R (R is a bimodule over itself) and from R to B_i. These maps are explicitly constructed in the exercises, so you can check them; this is the most explicit thing you can compute here. Given these maps, you can take the cones and form two-term complexes of R-R-bimodules, which I will call T_i and T_i^{-1}. So T_i is the cone of B_i → R, and T_i^{-1} is the cone of R → B_i. And again, so far everything is really formal; these are just complexes of bimodules. Now the main theorem here, since we're discussing braids and braid closures, is a theorem of Rouquier, proved about 20 years ago, I guess, that T_i and T_i^{-1} satisfy the braid relations up to homotopy. So you have these complexes of R-bimodules.
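In symbols, the exercises and cones just mentioned read as follows (grading shifts suppressed, as in the lecture):

```latex
% Splitting of the self-tensor product, and the Rouquier complexes
\[
  B_i \otimes_R B_i \;\cong\; B_i \oplus B_i
  \qquad \text{(up to grading shifts)}
\]
\[
  T_i \;=\; \operatorname{Cone}\bigl(B_i \longrightarrow R\bigr),
  \qquad
  T_i^{-1} \;=\; \operatorname{Cone}\bigl(R \longrightarrow B_i\bigr)
\]
```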
You assign them to the generators of the braid group and to the inverses of the generators, and then you can tensor them over R. All these tensor products are over R — maybe I'll stop writing "over R" for a while, but all of these are tensor products of complexes of bimodules. So for example, here is a two-term complex, and I tensor it with another two-term complex, so as written it will be a four-term complex. And it is a very interesting exercise to check that T_i ⊗ T_i^{-1} is homotopy equivalent to R. Maybe I'll do it over here and just give you an indication of what's going on. I have B → R, and I tensor with the two-term complex R → B. I get: B ⊗ R ≅ B on the left, in the middle R ⊕ (B ⊗ B), and then here another B — this is just the tensor product of these two complexes. Then I use this exercise over here, that B ⊗ B is B ⊕ B, and simplify: basically this B ⊕ B cancels against this B and this B. And again, if you want to get a flavor of what's going on with these bimodules and these complexes, please do this exercise, because it is really helpful. Checking the braid relations is slightly more complicated, but not too much. As a consequence, for any braid on n strands, we get a complex of R-bimodules, well defined up to homotopy equivalence: we just tensor these two-term complexes for the generators, and we get a giant complex for the whole braid. The number of terms in this complex is 2 to the number of crossings of your braid, and because of these relations, it is well defined up to homotopy equivalence. This is called the Rouquier complex of a braid. And there are a couple of remarks, which are not so important, but I want to make them anyway. May I ask a question? Yes, please. Okay. So how do you define this T_β again?
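The cancellation argument sketched above can be recorded schematically (writing B = B_i and suppressing grading shifts):

```latex
% T_i tensor T_i^{-1} is homotopy equivalent to R, by Gaussian elimination
\[
  T_i \otimes_R T_i^{-1}
  \;=\;
  \bigl[\,B \to R\,\bigr] \otimes_R \bigl[\,R \to B\,\bigr]
  \;\simeq\;
  \Bigl[\, B \longrightarrow R \oplus (B \otimes_R B) \longrightarrow B \,\Bigr]
\]
\[
  B \otimes_R B \;\cong\; B \oplus B
  \quad\Longrightarrow\quad
  \Bigl[\, B \to R \oplus B \oplus B \to B \,\Bigr] \;\simeq\; R,
\]
```

since the two extra copies of B cancel against the outer ones up to homotopy.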
So I have a braid β. I write it as a product of generators — it doesn't matter how — and then I just replace it by T_{i_1}^{±1} ⊗ ⋯ ⊗ T_{i_r}^{±1}. This is a giant complex, a complex with 2^r terms, and this tensor product is again the tensor product over R. And because the braid relations are satisfied up to homotopy equivalence, this giant tensor product of complexes is still well defined: it doesn't matter how we write β as a product of generators. The result is a well-defined complex of R-bimodules, up to homotopy equivalence. Does this braid relation also imply that you have some kind of braid group action on a category of R-modules? Yes, you can say that you have a braid group action, but this is a monoidal category, so you can just tensor things. Whenever you have a complex of bimodules, you can tensor it on the right with an arbitrary R-module, and with some care you get a braid group action on R-modules, on the left or on the right. That's right. But this is the key thing: you can define these T_i and they satisfy the braid relations, so you do have a braid group action in this sense. Any other questions? All right. So, a couple of remarks, just to say some words — and again, we don't need these details to keep going, and I'll give an example of this construction in a minute, so just stay put. The bimodules B_i and their tensor products generate a very specific category, which is known as the category of Soergel bimodules. Formally speaking, how do you define Soergel bimodules? You take all possible B_i's, all possible tensor products of B_i's, and all possible direct summands in those tensor products — so it's a Karoubian completion. And it turns out that this category is much smaller and much more interesting than the category of all R-R-bimodules.
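Written out, the assignment just described is simply:

```latex
% Rouquier complex of a braid word
\[
  \beta \;=\; \sigma_{i_1}^{\varepsilon_1}\cdots\sigma_{i_r}^{\varepsilon_r}
  \quad\rightsquigarrow\quad
  T_\beta \;=\; T_{i_1}^{\varepsilon_1} \otimes_R \cdots \otimes_R T_{i_r}^{\varepsilon_r},
  \qquad \varepsilon_k \in \{\pm 1\},
\]
```

a complex with 2^r terms, well defined up to homotopy equivalence by Rouquier's theorem.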
It categorifies the Hecke algebra, it has lots of other beautiful properties, and lots of people study it these days. So you can certainly say that T_β is a complex of R-bimodules, but it is also a complex of Soergel bimodules. If you like geometric representation theory, the best way to think about it is really as a complex of Soergel bimodules. But we won't really need this, I think, for most of this course, so I won't discuss it, because I don't want to introduce too much notation. Another thing lots of people like to ask: this can be defined for any type, in fact for any Coxeter group. The same definition just works for any Coxeter group acting on some representation by reflections: you define this B_i, with s_i a generator of the Coxeter group, and you proceed the same way. There is a beautiful theory of Soergel bimodules for arbitrary Coxeter groups, developed by Soergel, Elias, Williamson, and many other people, and you get an action of the corresponding braid group by these Rouquier complexes. So this generalizes: the braid relations of the type corresponding to your Coxeter group are satisfied, and this is all very nice and well behaved in all types. Okay. The next piece of information is how to close the braid. So we have defined an interesting braid invariant. It's a complicated thing: it's not a number, it's not a vector space, it's a complex of bimodules — and we need to explain how to close the braid. To close the braid, we use the notion of Hochschild homology. And, I mean, many people here are much smarter than me, so of course you know by heart what Hochschild homology is. Roughly speaking, you resolve each term by free R-R-bimodules, identify x_i with x_i' in the resolution, and then take homology. But I won't spend too much time on this definition.
In fact, I'll mostly use a special case of it, which I'll state right now. One special case of Hochschild homology, which is really easy to explain to everyone, is just this: HH^0 — strictly speaking, this is Hochschild cohomology, which is dual to Hochschild homology in this case — is just the Hom of bimodules from R, the diagonal bimodule, to X. So if some of you don't know what Hochschild homology or cohomology is, you can always think about just this HH^0: HH^0(X) = Hom(R, X), which is well defined for any bimodule X; the Hom is taken in the category of bimodules. We will see examples of these Homs very soon. So, starting from a braid, what do we actually do? We start from a braid, we assign the two-term complex to every crossing, and we assign the tensor product of these two-term complexes to the braid; this is T_β, a complex of R-bimodules. Then you either apply Hochschild homology, if you like it, or, if you don't know what Hochschild homology is, just apply Hom(R, −) to every term of this complex. You have to do it to every term — there are lots of terms in this complex — termwise. Then you will get a complex of R-modules, because if X is an R-R-bimodule, Hom(R, X) is an R-module. And the resulting complex of R-modules is essentially your invariant; more precisely, you take the homology of the resulting complex of R-modules. This is what is known as HHH(β): you first take the Hochschild homology HH of T_β, and then you take the homology of that. So this is a pretty involved construction, and it has two very different steps: first constructing this complex, then applying Hochschild homology, and then, if you want, taking homology of that. And then, mysteriously — question? Is the complex itself invariant or not? In what sense?
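The whole construction can be summarized in one line; as in the lecture, one may replace HH by its degree-zero part:

```latex
% The triply graded homology: braid -> Rouquier complex -> HH -> homology
\[
  \beta \;\rightsquigarrow\; T_\beta
  \;\rightsquigarrow\; \mathrm{HH}(T_\beta)
  \;\rightsquigarrow\; \mathrm{HHH}(\beta) \;:=\; H^*\bigl(\mathrm{HH}(T_\beta)\bigr),
\]
\[
  \text{where in the simplest case}\quad
  \mathrm{HH}^0(X) \;=\; \operatorname{Hom}_{R\text{-}R\,\mathrm{bimod}}(R,\, X).
\]
```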
I mean, as a complex of R-modules, no, because for example the number of variables is the number of strands in your braid, and so it can't be invariant. But if you just regard it as a complex of vector spaces, then yes, sure, because up to homotopy it's the same as its homology, so it doesn't matter. As a complex of R-modules, unfortunately, it's not invariant. Some remnants of this structure survive, so you have slightly more than just a complex of vector spaces, and we'll talk about this probably on Wednesday. But for now, we just forget all that structure, take homology, and regard this homology as a vector space. Is it clear how this complex changes under these conjugation and stabilization operations? Yes, yes. All of this is known. In particular, the theorem of Khovanov and Rozansky says that if you take this homology of HH, it is a link invariant: it really is invariant under conjugation and stabilization, up to maybe some grading shift, which I will suppress. This really is a link invariant. If you want to understand it as a complex of R-modules, it's still invariant under conjugation, that's true, because basically Hochschild homology is some kind of categorical trace, so it doesn't matter whether you take homology of X ⊗ Y or Y ⊗ X. And you can see concretely what happens with stabilization; maybe I won't do it right now, but there are explicit formulas which say what happens to HHH when you have some complex of bimodules and you add an extra strand and add a crossing. So that is understood. That's right. And I work with homotopy categories: I never pass to the derived category of R-bimodules, I always work with the homotopy category of bimodules. There are some subtle technical reasons why I want to do this, but I guess one reason is that the category of Soergel bimodules is not abelian.
It's just additive, and you have to be very careful when talking about its derived category. But again, practically, what happens is that you just have a complex of bimodules, it lives in the homotopy category, and then you apply this HHH, which is an additive functor, to every term, and that's it. Okay. So far, this is just a vector space, and this vector space is triply graded. What are the gradings? The first grading is an internal grading on all the bimodules: every bimodule B_i is naturally graded — if you scroll up, all these equations were homogeneous: x_i + x_{i+1} = x_i' + x_{i+1}' and so on. So all these bimodules are naturally graded, and we set the degree of x_i equal to 2. This is the q-grading. Then, because we are talking about complexes, we have a homological grading, which will be denoted by t. And then, because we take Hochschild homology of every term, there is what is called the a-grading. Just to repeat: if we don't like higher Hochschild homologies, because they are harder to think about, we can just think about Hom(R, X), and this corresponds to picking one specific a-degree in this triply graded homology, namely a-degree zero. We will often just restrict to this part, because it's so much easier to explain what's going on; you don't need the whole Hochschild homology business. And for all the phenomena that I will explain, this piece is actually enough. You can talk about the other HH^i, and I'll mention them from time to time, but for the most interesting phenomena and the most interesting computations, this is already a very interesting playground.
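To summarize the three gradings, with the conventions just stated:

```latex
% The three gradings on HHH
\[
  \deg_q(x_i) = 2 \quad (q\text{-grading, internal to the bimodules}),
\]
\[
  t:\ \text{homological grading of the complex } T_\beta,
  \qquad
  a:\ \text{Hochschild grading},
\]
\[
  \text{with the } a = 0 \text{ part computed by } \operatorname{Hom}(R, -) \text{ alone.}
\]
```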
And just as a caution: this a-degree-zero part is actually invariant under conjugation and invariant under positive stabilization, but it is not invariant under negative stabilization, because negative stabilization shifts everything up in a-degree, and you lose this degree-zero piece. But again, for many purposes, this is enough. So this is the recipe again: we start from a braid, build this tensor product of complexes, compute Hochschild homology — or apply Hom(R, −) to everything — and then compute homology. It's been more than half an hour and I haven't given you any examples, so let me give you an example and work it out in detail. The braid I'm talking about is this two-strand braid: n = 2, and we have two crossings, and we assume both are positive. If you think about the link, it closes up to the Hopf link: two circles which are linked like this. To a single crossing, we assign this complex T, the two-term complex from B to R. To define something for this braid, we tensor this complex with itself: we have a complex B → R, another complex B → R, we tensor them over R, and we get a four-term complex with B ⊗ B ≅ B² on the left, two copies of B in the middle, and R on the right (I'm ignoring all the grading shifts, because that would be too much). Then I use the rule that B² is B ⊕ B — this is happening here — so we have B ⊕ B here and B ⊕ B here, with slightly different gradings; I'm cheating a bit here. And if you do the cancellations, you will see that you are left with one copy of B here, one copy of B in the middle, and R on the right, and this is really the minimal complex for T². So this is again a very explicit complex of R-R-bimodules that we assign to this braid.
And then we want to close the braid. As we discussed, to close the braid we just apply Hom(R, −) to every term. Hom(R, R) in bimodules is R, and in fact Hom(R, B) is also R: it is generated by that map from R to B that I discussed a little while back. So we get a complex R → R → R: Hom(R, B) is R, Hom(R, B) is R, and Hom(R, R) is R. Then we need to compute the differentials. I was kind of sloppy with the differentials here, but in fact you can compute them: this differential will be zero, and this differential will be x_1 − x_2 (recall that R is just polynomials in x_1, x_2). So finally, we get some answers. After this long abstract discussion, this is a very concrete complex, and I'm sure that anyone here can compute its homology. The result: R in degree two, and then R/(x_1 − x_2) in degree zero, here. This homology is interesting: in particular, it's infinite dimensional, because you have this copy of R, and it has an interesting module structure over R, which in this case is actually a link invariant. So this is an example of HHH at a = 0 of the link we're talking about. And if you have any two-strand braid, you can do more or less the same computation. For a positive two-strand braid, a positive power of T, you just keep doing this thing repeatedly: tensor, simplify, use the rule that B² is B ⊕ B — and it's actually not that bad at all. In the exercises, there are some very explicit examples of how to compute things and how to do computations on two strands, because there is only one B, and it's very easy to use this rule and compute everything there. You will get some explicit modules over C[x_1, x_2]. And for negative powers of T, it's the same story.
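Putting the Hopf link computation together in one display (grading shifts suppressed, as in the lecture; the placement of the zero differential follows the order stated there):

```latex
% HHH at a = 0 for the Hopf link, the closure of sigma_1^2 on two strands
\[
  T^2 \;\simeq\; \bigl[\, B \to B \to R \,\bigr]
  \quad\xrightarrow{\ \operatorname{Hom}(R,\,-)\ }\quad
  \bigl[\, R \xrightarrow{\ 0\ } R \xrightarrow{\ x_1 - x_2\ } R \,\bigr]
\]
\[
  H^* \;\cong\; R \,\oplus\, R/(x_1 - x_2),
  \qquad R = \mathbb{C}[x_1, x_2],
\]
```

with R in homological degree two and R/(x_1 − x_2) in degree zero, as stated above.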
Sorry, I still have a question. You view this as a vector-space-valued homology, right? So when you take the Hochschild homology, it's just a vector space? I mean, you can say it's an R-module. The Hochschild homology of B — say, HH^0(B) — will be R, for example, and I can regard this as an R-module. So this is a complex of R-modules, and in this case it actually makes perfect sense to think about it as a complex of R-modules; we can just take its homology, which is still an R-module, which is what's written here. I think it should be said that when you take R and think of it as a vector space, it's important to think of it as a graded vector space. That's right. So in this case, we still have two gradings: again, the q-degree of each x_i is equal to 2, and these two summands live in different t-degrees, because they correspond to the homological degree. And this corresponds to t equal to two, or minus two? I mean, it really depends on shifts and conventions that I don't want to talk about, but you will have this R/(x_1 − x_2) here and this R here, and they really are in different t-degrees, and each of them is a graded module, because R itself is graded. Okay. But all of that happens in the bimodule category, right? So R is also a bimodule there? No, these are R-modules. When we close the braid, when I apply this functor, we don't have the bimodule structure anymore; this is just an R-module. So maybe I'll write it down: these are R-bimodules, and these guys are R-modules.
And again, in this example, you can actually find lots of interesting spaces where this is the homology of some space, or the homology of some sheaf on a space, and we'll talk about this. So this example is kind of key to understanding what's going on, I would say. But this really is a complex of R-modules: we kill the bimodule structure when we take the trace, when we take Hochschild homology, but we still have a well-defined R-module structure, and we have these two different summands. Okay. Any other questions? Okay. Anyway, this is roughly how this thing works. Now, the problem, for a very long time: all of this was known from the work of Khovanov and Rozansky in the 2000s, and then for about 10 years there was a huge roadblock, because nobody knew how to compute it beyond two strands, more or less. The key problem, of course, is that this complex grows exponentially in the number of crossings. Even if you try to compute it for three strands, you have an exponentially large complex of bimodules over a polynomial ring in three variables; then you need to take Hochschild homology, and then you need to take homology of that. It's really, really messy and a really complicated structure — it is complicated commutative algebra, after all, with all these x's — and people really didn't know how to deal with it. But a computer could do it in a finite amount of time, which is great? Yeah, that's right. I mean, you can program it, but the programs break down pretty fast, actually. It's exponential in the number of crossings: if you have, I don't know, T(3,3), you have six crossings, so you already have 2^6 terms in the complex, and then you need to run all this machinery. That's already a lot, actually: 64 bimodules over a polynomial ring in three variables.
So we can do this, but the computer stops pretty fast, actually. And so people needed some new computational techniques. And in the remaining time, I want to outline a very vague idea of how this might work. So the key theorem, obtained in several papers by Ben Elias, Matt Hogancamp, and Anton Mellit, is that this triply graded homology for all positive torus links is actually computable. So, first of all, it is supported in even homological degrees. This was so in our example, right? We had some terms in degree zero and some terms in degree two. And it turns out that for any positive torus link, the homology is always supported in even homological degrees. Moreover, they gave an explicit recursion computing the Poincare polynomials of this homology. The state of the art is the recent paper of Hogancamp and Mellit from a couple of years ago, where they give a very explicit recursion for all these things. And probably I have to say what a torus link is, because maybe not so many people have seen it before. So torus links, we'll see them a lot: T(n,k). We have an n-strand braid which looks like this, then you raise it to the k-th power, and then you close it up. Torus links are kind of the easiest examples of links. And what these people tell us is that you can actually compute this thing. One interesting example, which I will explain more on Wednesday, I guess, is the T(n,n+1) torus link: you take this braid on n strands and raise it to the power n+1. This gives the so-called q,t-Catalan numbers, which are related to lots of interesting things in the algebraic geometry of the Hilbert scheme of points on the plane and the algebraic combinatorics of Macdonald polynomials. And this confirmed lots of conjectures of myself, Andrei Negut, Alexei Oblomkov, Jacob Rasmussen, Vivek Shende, and many others. So there were lots of conjectures, and again, it was very hard to compute this.
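As an aside, the q,t-Catalan numbers mentioned here also have a purely combinatorial description (due to Garsia, Haiman, and Haglund) as a sum over Dyck paths weighted by the (area, dinv) statistics; by the theorem being discussed, the same polynomial is, up to normalization, the Poincare polynomial of the homology of T(n,n+1). A minimal sketch, with function names of my own choosing:

```python
from collections import Counter

def area_vectors(n):
    """Area vectors (a_1,...,a_n) of Dyck paths: a_1 = 0 and a_{i+1} <= a_i + 1."""
    def rec(prefix):
        if len(prefix) == n:
            yield tuple(prefix)
        else:
            for a in range(prefix[-1] + 2):
                yield from rec(prefix + [a])
    yield from rec([0])

def qt_catalan(n):
    """C_n(q,t) as a dict {(i, j): coeff}, meaning coeff * q^i * t^j,
    computed with Haglund's (area, dinv) pair of statistics on Dyck paths."""
    poly = Counter()
    for a in area_vectors(n):
        area = sum(a)
        # dinv counts pairs of rows whose area entries differ by 0 or 1
        dinv = sum(1 for i in range(n) for j in range(i + 1, n)
                   if a[i] - a[j] in (0, 1))
        poly[(area, dinv)] += 1
    return dict(poly)
```

For n = 3 this returns the five terms of C_3(q,t) = q^3 + q^2 t + q t^2 + q t + t^3, and specializing q = t = 1 recovers the Catalan number 5; the q,t-symmetry of the result is visible but (famously) not obvious from this formula.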
But these people made a real breakthrough in computing it and finding recursive ways to do this. And this is actually also an example of a torus link, because this is just the (2,2) torus link: I take this thing on two strands and raise it to the power two. So how did they actually do it? I just want to give you a very rough idea. The idea is that you have to enlarge your class of objects, and you really need to consider some complexes of Soergel bimodules, or complexes of R-bimodules; actually, that's enough. So a theorem of Hogancamp is that there exists a complex of Soergel bimodules K_n with the following properties. First of all, if you add a crossing to this thing on the top or on the bottom, you get the same thing K_n. So it eats crossings on the top, and it eats crossings on the bottom. If you take this guy and you make this kind of partial closure, so you just take one strand and close it up, you get the previous K_n with an extra factor of (t^n + a). Here I'm using small t and small a; I'll explain what they are in a second. And you should think of the second rule really as an abbreviation for the following thing: you have K_{n+1}, you add an arbitrary braid, or an arbitrary complex, on the bottom, call it X, then you close off this extra strand without touching X, and you close up all the other strands over here. This gives you just K_n and just X, with this extra factor (t^n + a). So in some sense this is a Markov move for K_n: what happens if you add a strand, you don't even add a crossing, but you just close it up. This is Markov. And the most interesting property is that if you have K_n and you add an extra strand and wrap it around K_n, then in fact you can resolve it as a two-term complex where you have K_{n+1} and K_n over here, with some shifts.
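My paraphrase of the listed properties of K_n, in symbols (a sketch: all shifts are convention-dependent, and cl_1 denotes closing off one strand):

```latex
% Properties of Hogancamp's complexes K_n (schematic; shifts suppressed).
\[
  \sigma_i^{\pm 1}\, K_n \;\simeq\; K_n \;\simeq\; K_n\, \sigma_i^{\pm 1}
  \qquad \text{($K_n$ absorbs crossings, top and bottom),}
\]
\[
  \operatorname{cl}_1\!\bigl(K_{n+1}\, X\bigr) \;\simeq\; (t^{n} + a)\; K_n\, X
  \qquad \text{(partial closure: a Markov-type move),}
\]
\[
  \bigl(\text{$K_n$ with one extra strand wrapped around it}\bigr)
  \;\simeq\;
  \Bigl[\, K_{n+1}\langle\text{even shift}\rangle
  \longrightarrow K_{n}\langle\text{even shift}\rangle \,\Bigr],
  \qquad K_1 = R .
\]
% The direction and precise shifts of the chain map in the two-term
% complex depend on conventions; its cone is the wrapped picture.
```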
And there is some interesting differential which can in principle be written down. What is important is that there is some chain map here such that its cone, the cone of this map, is homotopy equivalent to the complex on the left. That's as much as we want to say. And if you have K_1, with just one strand, then this is just nothing: you can erase K_1. And here we're using a, q, t, which are the grading shifts related to capital A, capital Q and capital T, the standard gradings that I defined before; this is just some change of variables, so it's not so important. What is important is that all homological degree shifts are even. So we're saying that we can resolve this thing by this thing and this thing with even homological grading shifts. And now the game is that you can apply these rules and say: well, you have a picture like this, you resolve it by something with this thing and something with this thing; maybe you apply it again and resolve these by some smaller things. And at all times, all the smaller things appear with even grading shifts. If this recursion stops, this means that we have resolved our complex by a bunch of easy pieces where we know the homology. All these pieces appear with even grading shifts, and there are some crazy differentials between them which we don't really understand at all. But because the homology is even, the differentials must be zero, and so the associated spectral sequence collapses immediately: there are no differentials. And so the homology of this guy is really equal to the homology of this guy plus the homology of this guy, with explicit even-degree grading shifts. Said differently, if you know that the homology of this guy is even, and you know that the homology of this guy is even, then the homology of this thing is given by a long exact sequence between the homologies of this and this.
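The parity argument can be stated as a small lemma (my formulation of what the lecture sketches):

```latex
% Why even homology forces the recursion to collapse.
% If C is filtered with associated graded A + B, the spectral sequence
%   E_1 = H(A) \oplus H(B)  ==>  H(C)
% has differentials d_r that change homological degree by an odd amount.
\[
  H(A),\ H(B) \text{ concentrated in even degrees}
  \;\Longrightarrow\; d_r = 0 \text{ for all } r
  \;\Longrightarrow\; H(C) \,\cong\, H(A) \oplus H(B).
\]
% Equivalently: in the long exact sequence relating H(A), H(B), H(C),
% the connecting map goes from even to odd degrees, hence vanishes.
```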
And we're saying that the connecting differential between them is zero, because again, everything is even. The most naive version of this is that whenever you have a topological space paved by even-dimensional cells, you know that the homology is just counted by the cells. And this is kind of the same thing. So this is a combinatorial technique, and it works. Maybe for experts, I'd like to say that this K_n is the so-called compact categorified Jones-Wenzl projector. So if people have seen Hecke algebras, there is an element of the Hecke algebra which looks like this, which eats crossings, which behaves like this, and which is a kind of eigenvector for this operation of wrapping a strand around. But if you haven't seen this, well, there are some combinatorial rules. And I want to explain what this thing is; actually, maybe I want to compute first, because I do have time for this, and hopefully it won't crash. So suppose that I want to compute this guy again, in a different fashion, using this projector, using this K_n. I can say that this is actually the same thing as K_1 over here, because we had the rule that one strand is the same as K_1. And then we see that this thing with K_1 is actually this picture: I have an extra strand wrapping around this strand with K_1. And so I can replace it by this complex. I think this will be 2 minus 1. Here we'll get K_2, and here I'll have something like q times K_1. And again, I don't know what these things are. So this recursive method just says that there are some complexes in some weird category which behave like this. And then, when we close up, we know that the closure of this K_2 is the same as (t + a) times the closure of K_1, and this is the same as (t + a) times the invariant of just the unknot, which we can compute. The invariant of the second guy is just the invariant of this thing, which we can also compute.
So this will be essentially just polynomials in x, or, since we also have the odd variables, there will be some contribution in a as well. And we have this thing. And what I'm saying is that the homology of this thing is supported in even degrees, because we just computed it, and the homology of this thing is also in even degrees. And in this case, by the argument that I tried to explain, it doesn't matter what the differential between these things is: because all the grading shifts are even, the homology of the total thing is just the homology of this plus the homology of this. In the exercise sheet there are some explicit problems and explicit answers for how to compute this. So this is the idea. And in general, what they showed is that this method works. You can always find some recursion like this, keep growing these pieces from K_1 to K_2 to K_3 and so on, and compute this homology for all positive torus links, at least as a triply graded vector space. So this method won't give you anything as an R-module, but as a triply graded vector space it works perfectly. And just another example, in a slightly different direction, which might be helpful, is this picture of K_2, which is this complex R to B to B to R. So K_2 can be written explicitly: there are some explicit maps of bimodules; again, this is a complex of R-bimodules. And you can write this complex either as a complex where here you have a negative crossing and here you have a positive crossing, and there is some chain map between them and you take the cone of that map; or you have R, and you have this B, B, R. And as we discussed, B, B, R corresponds to this braid with two crossings over here. So you can think of it as a cone of a map from the identity to this B, B, R. And this can be used to show that this actually eats crossings, and that this complex has all the nice properties that we want. And it's a pretty explicit complex.
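In symbols, the description of K_2 just given (again a sketch: internal grading shifts and the exact maps are suppressed):

```latex
% K_2 as an explicit complex of R-bimodules, R = C[x_1, x_2],
% B = R \otimes_{R^{S_2}} R the Soergel bimodule of the simple reflection.
\[
  K_2 \;\simeq\; \bigl[\, R \longrightarrow B \longrightarrow B \longrightarrow R \,\bigr].
\]
% Since [B -> B -> R] is, up to shifts, the Rouquier complex of the
% two-crossing braid sigma_1^2, this can also be viewed as
\[
  K_2 \;\simeq\; \operatorname{Cone}\bigl(\,\mathbb{1} \longrightarrow F(\sigma_1^2)\,\bigr)\langle\text{shift}\rangle ,
\]
% a cone on a map from the identity bimodule to the full twist complex.
```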
And I haven't actually seen this in other geometric settings, so it would be very nice to see it very explicitly. But this is it. And as I said, in the exercises and Q&A sessions, we can explain how to use this theorem to compute some examples. And maybe one last thing which I want to say, before we go into all these general structures and homological operations, is some general facts. So first of all, if beta closes up to a knot, then this HHH of beta is a finite rank free module over the polynomials in just one variable, which is x_1 + ... + x_n. So the action of all the x_i on HHH of beta is the same, and you can think of it as a finite rank free module over a polynomial ring in one variable, or you can think of it as just a finite dimensional vector space once you quotient by this action of C[x_1 + ... + x_n]. So, more or less, we can take this reduced homology; this is just a finite dimensional vector space. And so we can say that this HHH of beta is just reduced HHH tensored with polynomials in x_1 + ... + x_n. And if beta doesn't, if beta closes up to a link with several components, HHH of beta is not free over R, but it's still free over this polynomial ring in one variable. And I will explain it properly, I guess, next time. But the point is that we had this example where HHH of T(2,2) is R plus R/(x_1 - x_2). So this is definitely not free over R, but it is free over the polynomials in x_1 + x_2. So sometimes it's useful to consider the smaller subring and restrict to that; it's always a free module, so you get some useful structure from there. And in general, if you want a finite dimensional vector space, you can quotient like this, but it doesn't need to be a finite dimensional vector space to begin with. And we will see a lot of interesting examples next time. Okay, so sorry for all the technical issues, and thanks a lot. That's it for today. I'm going to ask if anyone has questions. In the last line of item one, it should be C[x_1 + ... + x_n]? Yes. Yes. Anyone has questions?
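A worked check of these general facts on the T(2,2) example above, writing e = x_1 + x_2 and d = x_1 - x_2 (so R = C[e,d], working over a field of characteristic zero):

```latex
\[
  \operatorname{HHH}\bigl(T(2,2)\bigr) \;\cong\; R \,\oplus\, R/(d),
  \qquad R = \mathbb{C}[e, d].
\]
% R is free over C[e] with basis {1, d, d^2, ...} (infinite rank),
% and R/(d) \cong C[e] is free of rank 1. So the total homology is
% free over C[e] = C[x_1 + x_2], though visibly not free over R,
% matching the statement for links with several components.
```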
We seem to have one online: what is K_n, and could we construct a link invariant from them? You can construct a link invariant from K_n, and that is related to so-called colored homology. But this theorem says that there exists K_n; this is one example. In principle, there is some construction of K_n from this theorem: one way to construct it is to say that for this K_{n+1} you have an exact triangle of this guy, K_{n+1}, and this guy with K_n, and then you can reverse the arrow and say that this is actually a cone of the connecting map between this guy and this guy. And then you have to prove all these properties for it, and that's how it goes. But a priori K_n is just some complex of R-bimodules. And yeah, in principle you can use it to build colored homology, if you know what colored homology is. But maybe it's not so important. So, if I understand correctly, can you scroll down just a little bit to the next relation? Yes. So this relation with the strand that goes around K_n is what allows you to compute HHH for products of these kinds of loops, right? That's right. So basically, whenever you have a torus knot, you have a lot of loops, and somehow you remove one loop at a time. And what you get is not a smaller torus knot, but it's kind of a piece of an inductive structure, and you get fewer and fewer loops and less and less wrapping around. And if you have a cleverly organized induction, you can make sure that this recursion converges to something reasonable. That's right. You remove this loop, and this allows you to simplify things: you have definitely fewer crossings on the right than on the left. So in principle, this method should work not only for torus links, but for positive pure braids, right? Or you don't know? No, I wouldn't be able to say it works for positive pure braids, and I don't think it works. But it's a very good question when this method actually works.
So, is there a larger class of braids where we can compute things? Whenever we can apply this method, we get something where all the homology is supported in even homological degrees. And there are examples of positive pure braids where the homology is not even, so that's the first kind of test. But it's a very interesting question, which many people are interested in: what is the nicest class of braids where the homology is supported in even degrees? Torus links are one example, but is it true for algebraic braids? Is it true for something else? That's an open question. I have a question: is there an axiomatic approach to this Khovanov-Rozansky homology? I wouldn't say so. I mean, again, this recursion might work or might not work. It works in general if you know all these differentials over here, and the differentials where this is a part of a bigger picture; in this sense it always works. But the question is a bit different. Yeah, about the Khovanov-Rozansky homology: is there an axiomatic approach to it somehow? You define it as the Hochschild homology of a certain complex, but can you define it axiomatically? Oh, no. I think the answer is just no. Again, you have a lot of properties, but I don't think there is a full set of properties which characterizes it. And does one expect this to distinguish all knots, or what should be the expectation? I mean, if two knots have the same Khovanov-Rozansky homology, what can one say? What's the expectation for the kernel of this invariant? I don't know. I think they detect more than just... So I didn't say this, but first of all, you can extract HOMFLY polynomials from this homology by taking Euler characteristics. And there are examples of knots where the HOMFLY polynomials are the same, but this homology is not the same. So in some sense it's a better invariant, but whether it is interesting for distinguishing knots, I don't know.
And again, the main reason is that it's really hard to compute. There are some examples where this is computed, but not so many. But it's part of a topological quantum field theory: for example, if you have a surface connecting two knots, a cobordism, you expect a map on this homology. And that's very interesting; you can get a lot of topological information from there. So, is there an example of two non-isotopic knots which have the same homology? I don't know, I'm not sure. I mean, for Khovanov homology, which is related but slightly different, there are examples. For this one, I don't know if anyone computed it; I'm sure there is such an example, but I don't know if it's known. Excellent. Any more questions? Any thoughts? Okay, let's thank the speaker again.