Let's start lecture two with a small review of lecture one. So yesterday, we were studying the scattering of gluons. And we said that when you scatter n gluons, you have to specify data: a momentum, a polarization vector, and a color index for each particle. And there was a clever way of separating the color structure so that you only had to worry about the kinematics. And that's called the color decomposition. And we wrote it in a form that looked like this, where w is some permutation of n elements, modulo cyclic transformations. These objects over here were called partial amplitudes. And we said that they satisfy some amazing properties. Well, at least they look amazing from their point of view, because they don't know anything about color. But once you know about the Lagrangian, you know that they have to satisfy some properties. Yesterday, we mentioned some of them. The first one is cyclicity. Imagine we have the partial amplitude with the canonical ordering. This has to be equal to this one. So it has to be cyclic in the labels. The second one — and today I'm not going to try to spell it, I did it for you — the KK (Kleiss-Kuijf) relations. So cyclicity tells us that from the n! independent orderings, we go down to (n-1)!. But KK were smarter. And they said that you can actually bring any two labels, say 1 and n, to become adjacent by taking appropriate linear combinations. And I hope somebody tried to do it. So we had a formula like this. What's important is that they told us that any object of this form can be written as a linear combination of amplitudes where 1 and n are adjacent to each other. And that brings the (n-1)! down to (n-2)!. And the third property was the BCJ identities, which tell us that you can do the same kind of procedure with a third particle. Here they chose 1 and 2 to be adjacent. After you use KK, you can bring 1 and 2 to be adjacent.
And now you can use a further relation to bring 3 next to 2, with some function that depends on the permutation and only depends on Mandelstam variables. That was the magic: that this only depended on Mandelstam variables. I told you that these functions were complicated, but that these identities could all be derived from something called the fundamental BCJ identity. And this is an identity of this form. So I want to get it right, because we're going to use it later on. So let me get it from here. So, just so that everybody's on the same page, let me remind you that throughout this talk, s_ab is the same as (k_a + k_b)^2. And k_a squared is 0, because these are massless particles. And therefore s_ab is always 2 k_a . k_b. So we have this identity. Exactly. Very good. So Jiu-Jitsu, a very good student. Actually, the physical origin of BCJ is not very clear. This is something that they discovered in 2008. Later on it was realized that it can be derived from string theory by taking vertex operators and moving them around the boundary of a disk. Once you go around, you pick up a monodromy factor. When you expand that in alpha', you would think that only the leading order in alpha' has a field theory interpretation. But it turns out that the leading order is a real object, and the imaginary part can also be shown not to receive any higher-order corrections from the effective action. So there is yet another identity. And that's the BCJ identity. OK. The last thing we said yesterday was that MHV amplitudes — the formula discovered by Parke and Taylor — are tailor-made to satisfy these identities. So the formula has all gluons with plus helicity, except for two gluons, say i and j, that have minus helicity. And this is a formula that by now you're very familiar with.
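As a quick numerical sanity check of s_ab = (k_a + k_b)^2 = 2 k_a . k_b for massless momenta, here is a minimal sketch (the parametrization and variable names are my own, not from the lecture):

```python
import numpy as np

# A generic massless (null) momentum: k^mu = E*(1, sin t cos p, sin t sin p, cos t)
def null_momentum(E, t, p):
    return E * np.array([1.0, np.sin(t) * np.cos(p),
                         np.sin(t) * np.sin(p), np.cos(t)])

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # mostly-minus Minkowski metric
dot = lambda a, b: a @ eta @ b

k1 = null_momentum(2.0, 0.3, 1.1)
k2 = null_momentum(1.5, 2.0, -0.4)

assert abs(dot(k1, k1)) < 1e-12          # k1^2 = 0
assert abs(dot(k2, k2)) < 1e-12          # k2^2 = 0

s12 = dot(k1 + k2, k1 + k2)              # (k1 + k2)^2
assert abs(s12 - 2 * dot(k1, k2)) < 1e-12
```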
So this i and j is a little bit boring. The engine behind the triviality of all these identities, or how simple they look, is actually this factor, which I would like to define as the Parke-Taylor factor of 1, 2, up to n, just to simplify the notation. So I'm going to call this PT(1, 2, ..., n). So a linear combination of these identities gives rise to the starting point that we had yesterday, which was called U(1) decoupling. And let me write it today in a slightly different form. It's exactly the same identity, but I'm going to start from the n. Yesterday I started with one position for the particle that I'm going to move around. So today I'm going to start from the n-th point. So I'm going to put particle n between 1 and 2. And you're going to see why. And no, I'm not going to forget it. Yes. So that's the U(1) decoupling identity, the one that tells you that if you are scattering n-1 gluons and you want to know the probability of producing a photon, that has to be 0. And we said that for MHV this was easy to prove. So we use this formula. Remember we can forget about this factor that's boring. So yesterday what we did was to say, well, let's factor out what everybody has, or what almost everybody has. And that was the Parke-Taylor factor of particles 1, 2, up to n-1. So remember the definition. So I'm going to write it for you again here: we have <1 2>, <2 3>, up to <n-2, n-1>, and then <n-1, 1>. And then we're going to put in here whatever I need to multiply this with to produce the corresponding one. So I have this formula, and I want to get this one. What do I have to multiply by to get it? Well, in this formula 1 and 2 are not adjacent, but this formula has them adjacent. So that's bad. So I have to remove that factor. This formula has a <1 n> factor in the denominator; this one doesn't. So I have to put it in the denominator. And this one likewise.
The next one, I do the same thing. And now you see why I reordered them. So yesterday I wasn't very smart, but today I am. So we go all the way to <n-1, 1> over <n-1, n><n, 1>. And this thing has to be 0. I encourage you to prove this by a telescopic argument. But I'm going to tell you today about my favorite way of proving it. And this was actually done by Hodges in 2008. And what I'm going to tell you is not the paper that he wrote down; it's basically a footnote in his paper. So the paper is about something else, but the idea comes from there. So here is the identity we're going to use. Let me remind you before we do it that any spinor — a two-component object — can be written as an overall scale factor times (1, z_a): 1 in the first component and z_a in the second component. And this identity now becomes an identity among complex numbers. See that the overall scale for particle 2 cancels out. The overall scale for particle 3 cancels out. And the only thing that appears here is the overall scale for n. But n is a common factor, so the overall scale factors out. And the identity becomes an identity among differences of complex numbers. Now I can tell you the trick. The trick is to realize that (z_a - z_{a+1})/((z_a - z_n)(z_{a+1} - z_n)) is nothing but a contour integral in the complex plane z. Imagine you have z_a here and z_{a+1} here. Choose a path that goes from z_a to z_{a+1} and integrate the function 1/(z - z_n)^2. Now, it's a double pole. So the path doesn't matter. You can deform the path in any way you want, even pass it through the pole, because the residue is going to be zero. So no problem. Now let's write down the identity. So the identity tells us that we have to start here. The first factor is the integral from z_1 to z_2. The second factor is the integral from z_2 to z_3, and so on. And guess what happens?
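The double-pole trick can be checked numerically. Here is a small sketch (generic random points, my own variable names) verifying both the single-step identity and the fact that the full cyclic sum telescopes to zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 7
z = rng.normal(size=n) + 1j * rng.normal(size=n)   # generic complex points
zn = z[-1]                                          # the special point z_n

def term(a):
    """(z_a - z_{a+1}) / ((z_a - z_n)(z_{a+1} - z_n)), cyclic among 1..n-1."""
    b = (a + 1) % (n - 1)
    return (z[a] - z[b]) / ((z[a] - zn) * (z[b] - zn))

# each term is a difference of primitives of 1/(z - z_n)^2 at the endpoints:
a, b = 2, 3
assert abs(term(a) - (1 / (z[b] - zn) - 1 / (z[a] - zn))) < 1e-10

# going all the way around the cycle 1 -> 2 -> ... -> n-1 -> 1 closes the
# contour, and the sum telescopes to zero:
assert abs(sum(term(a) for a in range(n - 1))) < 1e-10
```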
We're going to close the contour. And when we close the contour, we deform it and we get zero. Isn't that nice? But this identity, or this way of proving it, also shows us something interesting. If we start with any point here, say z_a, and we go to any point z_{a+m}, then the sum over b from a to a+m-1 of these factors is equal to the integral from here to here. But then I don't have to pass through all the points in between. I can deform the contour and just go from z_a to z_{a+m} — how appropriate, it ended up in the right place to be the upper limit of the integration — of 1/(z - z_n)^2. And this is nothing but (z_a - z_{a+m})/((z_a - z_n)(z_{a+m} - z_n)). So we got ourselves nice identities. So why are they important for us? Well, because they will allow us to rewrite this BCJ identity in a nicer way. And that's what I'm going to do now. So let's study the fundamental BCJ identity for Parke-Taylor factors. Once again, I'm forgetting about the overall <i j>^4; I don't care about that, because it's not going to do anything interesting. So again, I look at this sum and say, well, this sum looks pretty much like one of these U(1) decoupling identities, but it's incomplete. You see that it doesn't go all the way around. It starts at b and it goes all the way to n-1. But can we use anything on the blackboard to simplify this problem? Well, exactly this identity here. Because the sum — so the fundamental BCJ identity becomes a sum over b of s_nb times this. And now this sum over here becomes the sum from a equals b to n-1 of, let me write it, (a, a+1). So I'm getting a little bit bored of writing z and z all the time, so let me introduce a convenient notation: z_a - z_b is going to be just (a b). That will also save you some writing. Here, note that this a+1 is actually a number modulo n-1. So don't worry when we get to a equal to n-1 that we're going to be in trouble.
Because when you put it in here, you get 1; you don't get n. So that's what the identity tells us — or rather, that's what we want to prove. But now, using our identity here, we can re-sum each term and get something that looks like this. That looks pretty simple. It's just a sum over b of these factors. Now this factor is trivial — it's a common factor — and we want to prove that this is 0, so we can cancel it out. Oops, I don't want to fall. All right, so we can cancel this out. Now we are ready to prove the identity. Let's go back to our spinors. We can multiply again by the scaling factors and bring this notation back to the angle bracket notation. And the identity we have to prove is that when we sum b from 1 to n-1, multiplying the Mandelstam variables s_nb by this spinor ratio <b 1>/<b n>, this has to be equal to 0. So let's see how many people are familiar with this formula: s_ab = <a b>[b a]. Well, I wrote it down yesterday, but did you see it in Simone's lecture? Well, if you haven't, know that this is the only Lorentz-invariant possibility. There could be an overall factor, maybe a factor of 2 or something, but it turns out to just be 1. So you can try to work it out and show that indeed the factor is 1. So we're going to use that here, and you'll see why. Because when we write s_nb as that product, this factor cancels. So we get the sum of [n b]<b 1> from b equal to 1 to n-1, and this has to be 0. Is this 0 now? Well, let me rewrite this in spinor notation a little bit. So I'm going to write it like this. So this means that we have lambda-tilde of particle n, with index alpha-dot, contracted with the spinor index of particle b — sorry, these two are contracted — and this one is now contracted with the spinor of particle 1. But what is this object in the middle? This is the momentum of particle b.
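The identity s_ab = <a b>[b a] is easy to check with explicit two-component spinors. A minimal sketch (toy spinors and my own sign conventions for the brackets; other conventions differ by signs):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
# toy spinors lambda_a = (1, z_a), lambdatilde_a = (1, w_a); real momenta would
# need lambdatilde = conj(lambda), but complex momenta are fine for the algebra
lam  = np.stack([np.ones(n), rng.normal(size=n)], axis=1)
lamt = np.stack([np.ones(n), rng.normal(size=n)], axis=1)

ang = lambda a, b: lam[a, 0] * lam[b, 1] - lam[a, 1] * lam[b, 0]      # <a b>
sqr = lambda a, b: lamt[a, 1] * lamt[b, 0] - lamt[a, 0] * lamt[b, 1]  # [a b]

def k(a):
    # massless momentum as a 2x2 bispinor: k^{alpha alphadot} = lambda lambdatilde
    return np.outer(lam[a], lamt[a])

# the determinant of a 2x2 bispinor is the invariant square, so
# s_ab = (k_a + k_b)^2 = det(k_a + k_b) = <a b>[b a]
a, b = 1, 3
s_ab = np.linalg.det(k(a) + k(b))
assert abs(s_ab - ang(a, b) * sqr(b, a)) < 1e-10
```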
Now I can put the sum inside and use the observation that somebody made yesterday — the one I told you we keep always in mind — which is that we're always dealing with momenta that satisfy momentum conservation, and therefore the sum of all momenta is equal to 0. And therefore the sum of all momenta from 1 to n-1 happens to be minus the momentum k_n. But if we have the momentum k_n contracted with the spinor lambda of particle n, we get immediately 0, OK? So that's the proof of the identity. Now I want to rewrite the identity a little bit. Actually, I'm going to take this form, and I'm going to do the following. Once again, these brackets indicate differences of complex numbers. So I'm going to write the numerator, which is z_b - z_1, in a slightly different form: (z_b - z_n) + (z_n - z_1). Sorry — not even George caught this, because he's reading. So I'm rewriting the identity slightly. The idea is that the first piece now cancels with a denominator and gives me the sum of s_nb over b from 1 to n-1, and the second piece, z_n - z_1, comes out of the sum, and we get the sum of s_nb/(z_b - z_n). Now what is the first one? If you sum over all the particles from 1 to n-1 — remember the definition of s_nb, which was 2 k_n . k_b — we can put the sum inside. If you put the sum inside, you get, again, k_n squared, and this is 0. The rest is an overall factor, and we discover this new form for the BCJ identity: the sum over b of s_nb/(z_b - z_n) has to vanish. Note that we discovered this by moving particle n. But if we move any other particle, it should also be true. So it must be true that if we sum over b from 1 to n with b different from a, s_ab/(z_a - z_b), this must be 0 for all a. Now we want to make a bold claim. And the claim is the following. Take any rational function of some variables x_1, x_2, up to x_n. Now these could be independent variables, or they could be sets of variables — so x could represent many variables — but you have n of them. And suppose it satisfies 1 and 2, cyclicity and KK.
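One can already see numerically that the n equations of the form sum over b of s_ab/(sigma_a - sigma_b) = 0 are not all independent: three particular linear combinations vanish for any choice of the sigmas, purely by momentum conservation and masslessness. A sketch with explicit 2-to-2 kinematics (my own parametrization, all momenta incoming):

```python
import numpy as np

theta = 1.1
eta = np.diag([1.0, -1.0, -1.0, -1.0])
# massless 2 -> 2 kinematics with all momenta incoming, so k1+k2+k3+k4 = 0
k = np.array([
    [1, 0, 0, 1],
    [1, 0, 0, -1],
    [-1, -np.sin(theta), 0, -np.cos(theta)],
    [-1, np.sin(theta), 0, np.cos(theta)],
])
s = 2 * k @ eta @ k.T                    # matrix of Mandelstam invariants s_ab

sig = np.array([0.3, -1.2, 2.5, 0.9])    # arbitrary points, NOT a solution

def E(a):
    return sum(s[a, b] / (sig[a] - sig[b]) for b in range(4) if b != a)

Evec = np.array([E(a) for a in range(4)])
# the three relations hold for ANY sigma, so only n-3 equations are independent
for weight in (sig**0, sig, sig**2):
    assert abs(weight @ Evec) < 1e-10
```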
So we have a function of 1, 2, up to n of these variables. And the function satisfies this: it is cyclic in the permutation of the labels and satisfies this identity. Then it must be the algebraic transform of Parke and Taylor. We all know, of course, what algebraic transforms are, right? No? Good, because I made that up. So let me give you the definition. What I mean by that is that you take Parke-Taylor in terms of the complex variables z. Now let me change the name, because it's going to become useful in the future — because we're not going to be talking about spinors anymore. So I'm going to change z for sigma, just to remind myself that I'm not talking about spinors; I'm talking about something more general than spinors. So for me, Parke-Taylor, from now on — if I say Parke-Taylor of 1, 2, up to n — I take this to mean the product of differences of sigmas, where the sigmas are complex numbers. So I'm introducing as many different notations as possible so that I confuse you maximally. Hopefully I won't succeed with that. So let's keep going. So we have this Parke-Taylor. So what do I mean by the algebraic transform? The claim is that our function of x_1, x_2, up to x_n is an integral over all these sigma variables of Parke-Taylor, times a product of constraints that relate the sigmas to the x variables using polynomials, times possibly an integrand that depends on the x's and the sigmas. And this integrand is permutation invariant in the labels. And these constraints are also permutation invariant — when you take them all at the same time, it's a list of constraints that is permutation invariant. These are delta functions. So many of you should be wondering what the heck I am doing, because I have integrals over complex variables and I have delta functions, and they don't go together very nicely, as you know. But by this, what I really mean is poles. So we have denominators, and we're doing a contour integral in a multi-dimensional space.
But that's too long. So I prefer just to write delta functions and remember that I'm solving all these equations simultaneously. And I'm summing over all possible solutions — I don't care if they are real or complex. So this is the claim: if the function satisfies that, then it must be of this form. So the contour is actually a sum of contours. It's a sum over n-dimensional tori. And I cannot say that they enclose the singularity, because in more than one complex dimension you cannot enclose a singularity with a product of S^1's. But each one of these guys defines a hypersurface. And then the T^n, in a sense, encloses the union of these hypersurfaces. All of them. You might be worried, because you would say, well, if you pick all of them, you can use a residue theorem. But remember that there is a factor here. So this is not 0. You cannot use what is called the global residue theorem, because there is an extra piece that also has poles. So let me give you a less complicated way of thinking about this. You solve all these equations, find all the solutions, and sum over all the solutions: 1 over the determinant of the Jacobian matrix made out of these constraints, times Parke-Taylor evaluated on the solutions — the solutions are values for the sigmas that depend on the x's — times your integrand. Now the claim I want to make is that this object, even though these polynomials could be very complicated — if the polynomials have coefficients that are rational functions of the x's, this answer, after you sum over everybody, has to be a rational function of the original variables as well. What's the proof? Well, you can use a little bit of Galois theory to prove it. But it's actually true, OK? That's very good. So now, why am I saying this? I told you that we were able to prove these identities for Parke-Taylor amplitudes. These are the MHV amplitudes. But how about the other ones?
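The rationality claim can be illustrated with a one-variable toy model of the transform: take a hypothetical constraint polynomial P(z) = z^2 - x and integrand F (both made up for illustration, not from the lecture) and sum F/P' over the roots of P. Each root is irrational in x, but the sum collapses to a rational function:

```python
import sympy as sp

z, x = sp.symbols('z x')
P = z**2 - x          # hypothetical polynomial constraint relating z to x
F = 1 / (z + 1)       # hypothetical integrand

# "delta function" prescription: sum F / (dP/dz) over the solutions of P = 0
roots = sp.solve(sp.Eq(P, 0), z)          # z = +sqrt(x) and z = -sqrt(x)
transform = sum((F / sp.diff(P, z)).subs(z, r) for r in roots)

# the individual terms involve sqrt(x), but the sum is rational: 1/(x - 1)
assert abs(complex((transform - 1 / (x - 1)).subs(x, 2.37))) < 1e-10
```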
How about N^2MHV — these complicated objects where the more negative-helicity particles you put in, the more complicated they become? But they all satisfy these identities. So my claim is that they can all be written in this form — or at least that's the claim. But how about number three, the BCJ identity? So the BCJ identity boils down to these constraints. How about putting this together with BCJ in the following form? How many equations do we have here? We have n equations, because we have n variables. And these, after you put them in Mathematica and you type Together, become polynomial constraints on the sigmas with coefficients in the Mandelstam variables. How nice is that? That looks exactly like what I need for the definition of this integral transform, or this algebraic transform. So what if we replace these polynomial constraints by exactly those equations? And using George's technique, I'm going to cross this out and put my equations here. So it seems pretty good. But the moment you try that, you find that there is a technical disaster. You find a small problem. So let's compute this Jacobian. So what is the Jacobian? If these are my equations — so let me now call E_a the sum over b of s_ab/(sigma_a - sigma_b) — my Jacobian is the following matrix. So I have to make a matrix of this form. And the matrix, let me call it Phi, has the following structure. Everywhere outside the diagonal, the matrix has an entry like this. I'm being very lazy here, so I should have written something a little nicer: say here we have s_12/(sigma_1 - sigma_2)^2, s_13/(sigma_1 - sigma_3)^2, and so on. And on the diagonal, we just get minus the sum of all the other elements in the row. And the same thing happens in all the rows. Now when you take the determinant of this matrix, what do you get? Zero. Damn it. So close. I mean, this whole thing would have worked so beautifully.
Now you say, well, maybe it's not such a big problem — only one null vector, not such a big deal. But then you discover that you have three null vectors. The null vectors are (1, 1, ..., 1), (sigma_1, sigma_2, ..., sigma_n), and (sigma_1^2, sigma_2^2, ..., sigma_n^2). That's a bit disappointing. So there are three linear combinations, so we only have n-3 independent equations. Yes. Well, because having one problem is a little better than having three. Usually, yes. Well, the reason having one sounded at first better than having three is that people were used to this from graph theory: you can define matrices associated with graphs that have precisely corank 1. And there is a notion — when you have a matrix where the sum of all rows and the sum of all columns add up to 0, it's a theorem that you can remove one row and one column and compute the determinant of the matrix left over, and that determinant doesn't depend on the row or the column that you removed. That's something very common in graph theory. So if you knew that — of course, I didn't know that — but if you knew that, you would say, oh, having one null vector is not such a bad thing. In fact, this matrix satisfies the hypothesis: the sum of all rows is 0, and the sum of all columns is 0. So indeed, if you remove one row and one column, and you compute the determinant, the answer is independent of the choice of which row and column you chose. Of course, the answer is still 0. As it turns out, if you treat this as a gauge redundancy and you use the Faddeev-Popov procedure, you can show that there is a way to define an object, which is the following. You compute the determinant of the matrix where you remove three rows and three columns, say the rows p, q, and r. You remove them, and you remove the columns p, q, and r. And then, from this object here, you take the determinant of the 3-by-3 matrix made out of the ones that you chose.
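Both facts — that solving one gauge-fixed equation solves them all, and that Phi has exactly the three null vectors (1, sigma, sigma^2) on a solution — can be verified explicitly at four points. A sketch with my own 2-to-2 kinematics (all momenta incoming):

```python
import numpy as np

theta = 0.7
eta = np.diag([1.0, -1.0, -1.0, -1.0])
# massless 2 -> 2 kinematics, all incoming, so k1+k2+k3+k4 = 0
k = np.array([
    [1, 0, 0, 1],
    [1, 0, 0, -1],
    [-1, -np.sin(theta), 0, -np.cos(theta)],
    [-1, np.sin(theta), 0, np.cos(theta)],
])
s = 2 * k @ eta @ k.T

# gauge-fix sigma_1, sigma_2, sigma_3 and solve E_4 = 0 for sigma_4.
# Clearing denominators, the quadratic coefficient is sum_b s_4b = 0 by
# momentum conservation, so the equation is actually linear in sigma_4.
sig = np.array([0.0, 1.0, 3.0, 0.0])
coeffs = np.zeros(3)
for b in range(3):
    others = [c for c in range(3) if c != b]
    coeffs += s[3, b] * np.poly([sig[c] for c in others])
assert abs(coeffs[0]) < 1e-12            # quadratic term cancels
sig[3] = -coeffs[2] / coeffs[1]

def E(a):
    return sum(s[a, b] / (sig[a] - sig[b]) for b in range(4) if b != a)

assert max(abs(E(a)) for a in range(4)) < 1e-10   # all equations solved

# Jacobian matrix Phi: s_ab/(sigma_a - sigma_b)^2 off the diagonal,
# minus the row sum on the diagonal
Phi = np.zeros((4, 4))
for a in range(4):
    for b in range(4):
        if a != b:
            Phi[a, b] = s[a, b] / (sig[a] - sig[b])**2
    Phi[a, a] = -Phi[a].sum()

# on a solution, Phi annihilates (1,...,1), (sigma_a), and (sigma_a^2)
for v in (sig**0, sig, sig**2):
    assert np.linalg.norm(Phi @ v) < 1e-8
```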
So you take the determinant of the matrix with rows (1, sigma_p, sigma_p^2), (1, sigma_q, sigma_q^2), (1, sigma_r, sigma_r^2), and you call this (p q r). Of course, you all know what this determinant is. It's just the product of the differences of the sigmas of the three labels. This is called the Vandermonde determinant of these three particles, or these three labels. And then if you take this and divide by this, the result is independent of the choice of p, q, and r. Now you would say, well, but we started with n sigmas — n complex variables — and we only have n-3 equations. Sure enough, with this trick, you can define a Jacobian. But what do you do with the other three sigmas? And then you stop and think for a while, and think maybe for a few days, and then keep thinking. And then you realize: well, we saw in Ashok's lectures that if the sigmas were punctures on a Riemann sphere — and a Riemann sphere is a complex manifold — we should remember to mod out by the SL(2,C) acting on the sphere. And SL(2,C) allows us to fix the value of three punctures. As Ashok said yesterday, the moduli space of a three-punctured sphere is trivial. There is only one such thing, so it's rigid. So that would mean that if we have n variables, we can fix three of them if we only had SL(2,C) invariance. So BCJ — the BCJ identities — are telling us that these transforms, or these integrals, are supposed to be integrals over the n-punctured sphere, not just any random n-dimensional integrals. And that's the resolution of the puzzle. So once again, the resolution of the puzzle is that any amplitude should have a form, or should be written as an integral, over the n sigmas. But of course, we know we have this redundancy. So I'm going to write it like this: I'm going to divide by the volume of the redundancy, even though it doesn't quite make sense written this way.
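The 3-by-3 determinant in question is the classic Vandermonde determinant, which sympy confirms directly:

```python
import sympy as sp

p, q, r = sp.symbols('sigma_p sigma_q sigma_r')
V = sp.Matrix([[1, p, p**2],
               [1, q, q**2],
               [1, r, r**2]])

# the Vandermonde determinant is the product of differences of the three labels
assert sp.simplify(V.det() - (q - p) * (r - p) * (r - q)) == 0
```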
But times the Parke-Taylor, times some definition of what I mean by the constraints on the sigmas coming from these equations, times some integrand that depends on what? Well, it depends on the data: the momenta k of the particles, the polarization vectors of the particles, and the sigmas. So these were the x's where I started. So what do I mean by this? Well, we have the definition. We know that if we remove three labels and multiply by this, we get something that is meaningful. Likewise, we know how to gauge-fix this SL(2,C). You're very familiar with that from string theory, right? When you're doing tree-level computations, you gauge-fix by removing three of the sigma variables, fixing them to whatever values you like, and multiplying by the Vandermonde of those three variables. So what I mean by this is the integral over all sigma_a with a different from i, j, k — those are the three values I'm fixing — times the Vandermonde of i, j, k. Now here, I take this to mean precisely this; in other words, I multiply by the Vandermonde of p, q, r and only impose the equations coming from all variables except p, q, r, times some integrand, divided by my Parke-Taylor factor, which I'm going to write explicitly here. So that's clean. Now, one exercise for you is to show that the twistor string formula we wrote down yesterday can indeed be written in this form. Instead of doing that for you, I've decided to study these equations a little bit. So these equations — if that formula is true, and it actually works for any amplitudes — these equations that are linking the moduli space of the n-punctured Riemann sphere to the space of kinematic invariants, they deserve a name. Probably "scattering equations" is as good a name as any, so we can start calling them that. Now, I derived these equations from the BCJ identities and so on. But these equations were actually first found much earlier — so, remember, yesterday Jiu-Jitsu thought that everything was done after 2000.
This one is even more surprising. It's from 1972 — and published. Yeah, well, Peter Goddard sent me a copy. Of course — don't force me to say anything. Well, they were trying at the time to create some new dual resonance models. But it didn't quite work — I mean, I think they wanted to remove the tachyon or something. So they tried different things, and they ended up with something that produced those equations. OK, so as I said, if we have the space of kinematic invariants, the equations are connecting it — well, it's not an equality, so I don't quite know how to draw it — they are connecting it with the moduli space of the n-punctured sphere. This was supposed to be a calligraphic M, but something happened. OK, so let me give you an example. We have 10 more minutes. You'll see that these equations are really special, and they know a lot of physics. So in a sense, Riemann knows a lot of physics. The example is: consider four particles. The space of kinematic invariants for four particles is the space of Mandelstam variables subject to one constraint: this is what you usually call s, this is what you usually call t, this is what you usually call u, and s plus t plus u is equal to 0. Now let me draw this in the space of s and t. So the space of kinematic invariants is basically R^2. And R^2, just the plane, as far as I know, doesn't know any physics. There is nothing that the plane knows. In particular, it doesn't know that this line is special. This line is the line where s is equal to 0, and amplitudes are supposed to have a singularity there. But the plane doesn't know that. For the plane, this is just a regular line. This line here is the line t equals 0, where we are also supposed to have a singularity, because amplitudes are supposed to factorize. And last but not least, this is the line u equal to 0. The physical regions for scattering are, of course, these ones. And we know that crossing allows us, by complexifying this space,
to move from one place to the other and so on. But for the time being, all I want to say is that the plane doesn't know anything about why these lines are special. But let's look at the scattering equations. So we have four particles, but three of the scattering equations are linearly dependent. So we should be able to do everything with only one of them. So I'm going to choose scattering equation 1. And this is the equation that says that s_12/(sigma_1 - sigma_2) + s_13/(sigma_1 - sigma_3) + s_14/(sigma_1 - sigma_4) has to be 0. And now I want to make a good choice of SL(2,C). So I'm going to set sigma_1 to be — let me check what I want to do — 0, of course; sigma_2, 1; sigma_3, infinity. And sigma_4 is going to be moving around. So as we know, the moduli space is only one-dimensional, so I only have sigma_4 as a variable. Now what happens to the equation? Well, after gauge-fixing in this form, this term drops out, this is not here, and this denominator is 1. So the equation becomes a very difficult equation. Now you put it in Mathematica, Shift-Enter, and you get that sigma_4 is equal to minus s over t in our variables there. Now what happens? Well, Ashok also explained to us that the moduli space has singularities — the moduli space of the n-punctured sphere has singularities. Now the moduli space of the 4-punctured sphere, how many singularities does it have? Well, it turns out to have only three, which arise when the puncture that is free to move approaches one of the three that were fixed. So I'd like to draw the space. I mean, people say that this moduli space is actually a three-punctured sphere. But let me draw it like this. I'm going to draw M_{0,4}. This is my picture of what it looks like. Of course, this is a three-punctured sphere. At a generic point, we have a sphere just like the one I drew there, and the punctures are each happily living on their own.
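The gauge-fixed four-point equation can be solved symbolically. In the labeling below the solution is sigma_4 = -s14/s12, which is the -s/t written on the board in the lecture's naming of the invariants (a sketch; the symbol names are mine):

```python
import sympy as sp

s12, s13, s14, sig4 = sp.symbols('s12 s13 s14 sigma4')

# E_1 with the gauge choice sigma_1 = 0, sigma_2 = 1, sigma_3 -> infinity
# (the sigma_3 term has already dropped out in the limit)
E1 = s12 / (0 - 1) + s14 / (0 - sig4)
sol = sp.solve(sp.Eq(E1, 0), sig4)[0]
assert sol == -s14 / s12

# the three boundaries of the moduli space sit on the three poles:
# sigma_4 -> 0 when s14 -> 0; sigma_4 -> oo when s12 -> 0; and
# sigma_4 -> 1 when s14 = -s12, i.e. when s13 = 0 (using s12 + s13 + s14 = 0)
assert sp.simplify(sol.subs(s14, -s12)) == 1
```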
But then, when we approach one of the singularities, sigma_4 approaches one of the punctures. So if sigma_4 goes to 0, that happens when s is equal to 0. If sigma_4 approaches infinity, that happens when t is equal to 0. And last but not least, if sigma_4 approaches 1 — note the minus sign here — that's when u is equal to 0. So somehow, the three singularities have been mapped to the three lines here. So the moduli space knows about all the physical singularities that the amplitude is supposed to have. Moreover, when we approach a singularity, the sphere can be thought of as splitting into two spheres, where we have, say, sigma_1 and sigma_4 on one, and sigma_2 and sigma_3 on the other. I hope you're a little bit familiar with this: what I'm doing here is using a conformal transformation to blow up the sphere that seems to be shrinking to zero. There is a new puncture that is being generated. And of course, you're familiar with this, because Jiu-Jitsu kept drawing these pictures over and over again. And here we have another one, where we have sigma_2 and sigma_4. And here we have sigma_3 and sigma_4. Thank you. Yes. Very good. Now we have five minutes to determine how many solutions these equations have. We already know that for four particles we found one solution. You can sit down and write down the equations for five particles, and you will find two solutions. But two data points are not enough. Any guesses on how many solutions this thing could have? So here is how we're going to study it. Let N_n be the number of solutions for n particles. How are we going to determine this number? We're going to use a trick that is going to be useful later today, I hope, and it's the following. Thinking about this as some intersection in some algebraic variety — some complicated story like that — the number of solutions should not depend on, or should not jump, even though it's an integer, as you deform the parameters smoothly.
And if it jumps, it will jump to infinity. So if we don't find infinity, we're all fine. So what are we going to do? We're going to take particle n to be soft. What that means is that you take the momentum of particle n and write it as some fixed vector q times a parameter tau that you're going to take to 0. So let me write the first n-1 equations and see what they look like. Well, they look like this — I'm on purpose putting the n-th term at the end. So it's going to be plus s_an/(sigma_a - sigma_n), equal to 0. But this term contains k_n, and therefore it is of order tau. And I'm assuming that tau is very, very, very small. So in fact, in the limit, I get the equations that we have for a system with n-1 particles. So starting with any solution for a system of n-1 particles, I should get something that is very close to a solution of this problem. So I have N_{n-1} solutions, which I'm going to label by an index i. So i runs from 1 up to N_{n-1}, a number that I still don't know. For each of these N_{n-1} solutions, I can study the last equation, which is the E_n equation. And this equation is the sum over b from 1 to n-1 of s_nb/(sigma_n - sigma_b), equal to 0. Now I can factor out the tau: s_nb is 2 tau q . k_b. And this has to be equal to 0. But tau is a common factor. So no matter how small tau is, I can always remove it from here. And this is going to be a constraint on sigma_n. Remember that I'm assuming that I have already found all the other sigmas. So this is a constraint for each one of these solutions. Now once again, you put this in Mathematica, you type Together, and you find that Mathematica will return something that has a denominator of the form: the product of (sigma_n - sigma_b) for b from 1 to n-1, evaluated on solution i, and some polynomial in the numerator. Our task is to find the degree of that polynomial. Well, clearly, we started with n-1 factors.
This is something that should go to zero as 1 over sigma_n when sigma_n goes to infinity. The denominator has degree n-1. So this polynomial should start at sigma_n to the power n-2, with some coefficient, plus lower orders. Is that clear? Now let's compute this coefficient. This coefficient is nothing but the sum of all these factors. But what is the sum of all these factors? I can put the sum inside, or equivalently take q outside. By momentum conservation, this thing is q dot k_n. But that is the same as tau q squared, and q is assumed to be null. So this is 0. So the leading term is 0. Now what I wrote as the lower piece is actually relevant — actually the most important piece: the polynomial really starts at degree n-3, plus lower orders. So the conclusion is that for each i — for each of the N_{n-1} solutions — we have n-3 values of sigma_n. And therefore, we have counted how many solutions we have for n particles if we only knew how many solutions there are for n-1 particles. This implies that N_n = (n-3) N_{n-1}, and since we know N_4 = 1, now you know how many solutions there are: N_n = (n-3)!. And I think I'll stop here.
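The soft-limit recursion closes the counting: N_4 = 1 and N_n = (n - 3) N_{n-1} give N_n = (n - 3)!. A two-line sketch:

```python
from math import factorial

def num_solutions(n):
    """Number of solutions of the scattering equations for n particles,
    from the soft-limit recursion N_n = (n - 3) * N_{n-1} with N_4 = 1."""
    N = 1
    for m in range(5, n + 1):
        N *= m - 3
    return N

assert num_solutions(4) == 1          # the single 4-point solution
assert num_solutions(5) == 2          # the two 5-point solutions
assert all(num_solutions(n) == factorial(n - 3) for n in range(4, 12))
```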