Okay, welcome back. Recall that we want to compute the statistics of critical points in this constrained problem, where you have a set of linear equations together with a quadratic constraint imposing that the solution must live on the sphere of radius √n. We're going to do it using the Kac–Rice formalism, which in our setting translates into the following problem. We are going to integrate over all the degrees of freedom of our equations, which are λ, the Lagrange multiplier, and the components of the vector x. We are going to impose that |x|² must be equal to n, and we are going to impose the constraint that the equations must be simultaneously satisfied; these equations are written here. We write them by imposing that the gradient of the Lagrangian is equal to zero, so we have a delta that imposes AᵀAx − Aᵀb − λx = 0: this delta imposes that all our equations for the critical point are simultaneously verified. Then we need to include the determinant of the Jacobian matrix, with an absolute value. The Jacobian is an (n+1)×(n+1) matrix, because we have n variables, the components of the vector x, plus one extra variable, the multiplier λ. Of course, we want to average this object over the disorder, encoded in the random matrix A and the random vector b. As for the Jacobian, you can convince yourself that by differentiating these equations with respect to x we get a block matrix: this block here is just this term here — differentiating the gradient equation with respect to x, we obtain AᵀA − λI.
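As a sketch of the setup on the board, assuming the Lagrangian has the form L(x, λ) = ½‖Ax − b‖² − (λ/2)(‖x‖² − n) (normalizations as I recall them; the precise conventions are in the handout):

```latex
% Stationarity conditions of L(x,\lambda)=\tfrac12\|Ax-b\|^2-\tfrac{\lambda}{2}(\|x\|^2-n):
\nabla_x L = A^\top A\, x - A^\top b - \lambda x = 0,
\qquad \|x\|^2 = n .

% Kac--Rice counting formula for the expected number of critical points:
\mathbb{E}[\mathcal{N}] \;=\; \mathbb{E}_{A,b}\!\int d\lambda \int dx \;
\delta\!\left(\|x\|^2 - n\right)\,
\delta\!\left(A^\top A\, x - A^\top b - \lambda x\right)\,
\bigl|\det \mathcal{J}(x,\lambda)\bigr| ,
```

where 𝒥 is the (n+1)×(n+1) Jacobian of the full system of equations with respect to (x, λ), whose block structure is worked out next.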
Okay, then we need to differentiate the equations with respect to the Lagrange multiplier λ, which gives a −x here. Next we include the equation corresponding to the constraint that x must live on the sphere; upon differentiation this gives the following term, 2xᵀ, and since that constraint does not depend on the multiplier, we get a zero here. So this is the setting. We need to average over A and b, but notice that this term here and this term here do not depend on the vector b, so the first thing we can do is average over b, which is a vector of i.i.d. normal variables, because b only appears in here. So let me try to do it. The type of integral that we need to compute is as follows: ∫ db against an m-dimensional Gaussian weight, times the delta term containing b, which we can rewrite as δ(u − Aᵀb), where the vector u is AᵀAx − λx. So it is a Gaussian integral with a delta constraint. I put the result in the handout — that's equation 4 — and you can compute it easily by introducing an integral representation for this delta, ∫ dk/(2π)ⁿ exp(i kᵀ u − i kᵀ Aᵀ b), and then exchanging the order of integration: you get a Gaussian integral in b, and then another integral that you can compute in k. I won't follow all the steps, but the result is given in the handout, and I suggest you try to compute it explicitly. The final result is this: we get an inverse square root of a determinant of W, where W = AᵀA, which comes from one of the Gaussian integrations, and then an exponential of −(1/2σ²) uᵀW⁻¹u. With this, the average over the randomness in b is performed.
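A hedged reconstruction of that b-average (my normalizations — check against equation 4 of the handout):

```latex
\int \frac{db}{(2\pi\sigma^2)^{m/2}} \, e^{-\|b\|^2/2\sigma^2}\,
\delta\!\left(u - A^\top b\right)
= \int \frac{dk}{(2\pi)^n}\, e^{i k^\top u}
\underbrace{\int \frac{db}{(2\pi\sigma^2)^{m/2}}
e^{-\|b\|^2/2\sigma^2 - i k^\top A^\top b}}_{=\; e^{-\sigma^2 k^\top W k/2}}
= \frac{(\det W)^{-1/2}}{(2\pi\sigma^2)^{n/2}}\,
e^{-\frac{1}{2\sigma^2}\, u^\top W^{-1} u},
```

with W = AᵀA and u = (W − λI)x, assuming m ≥ n so that W is invertible.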
Now, the next step is to integrate over x. Let me just rewrite here: we have ∫ dλ ∫ dx; the integration over b has been done, and here we have the result of that integral, which is (2πσ²)^(−n/2) times a determinant factor, times the exponential of −(1/2σ²) uᵀW⁻¹u, with u the object above. Writing it out, we get xᵀ(W − λI)ᵀ W⁻¹ (W − λI) x — but this object is symmetric, so there is no extra transpose: xᵀ(W − λI)W⁻¹(W − λI)x. And then we have the determinant in here, with an absolute value, which is a mess, and we want to integrate over x. The integration over x follows from an observation. Let's call all the terms in the integrand I(x, λ) — and I forgot to add that the definition here includes the average over A. The observation is that I(x, λ) has a particularly nice rotational-invariance feature. Property one: I(x, λ) is unchanged if, instead of computing it on x, I compute it on Ox, where O is an n×n orthogonal matrix. In particular, this implies that I is only a function of the modulus squared of x. This is not entirely trivial, so I will not prove it, even though I give some hints and some steps of the derivation in the handout. Essentially, what you have to do is replace every instance of x with Ox, and then use the fact that W and the measure over A are rotationally invariant, so you can reabsorb this extra O into the integration over A.
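A small numerical illustration (not the full proof) of the mechanism just described: the Gaussian weight on A, proportional to exp(−tr(AᵀA)/2), is unchanged under A → AO for orthogonal O, which is exactly what lets us reabsorb O into the measure over A. A minimal sketch:

```python
# Illustration: the i.i.d. Gaussian measure on A is rotationally invariant,
# since p(A) ∝ exp(-tr(AᵀA)/2) and tr((AO)ᵀ(AO)) = tr(Oᵀ W O) = tr(W).
import numpy as np

rng = np.random.default_rng(0)
m, n = 7, 5
A = rng.standard_normal((m, n))
O, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal matrix

W = A.T @ A
# The Gaussian weight is invariant under A -> AO:
assert np.isclose(np.trace((A @ O).T @ (A @ O)), np.trace(W))
# Consequently W -> OᵀWO has the same spectrum (same Wishart law):
assert np.allclose(np.sort(np.linalg.eigvalsh(O.T @ W @ O)),
                   np.sort(np.linalg.eigvalsh(W)))
```

This is the deterministic identity behind the change of variables A → AO in the average; the rest of the invariance argument is in the handout.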
If you do that, this property comes out naturally. Since I depends only on the modulus squared of x, in particular we can replace x with √n times the unit basis vector e₁, because the dependence on x is only through the modulus, which is fixed — there is no actual dependence on the angle that x forms with the axes. If we do so, we just need to evaluate the determinant of this object when x has this particular form. So we need to compute the absolute value of the determinant of this matrix, where we have −√n times the basis vector e₁. This is a block matrix, and I give in the handout the formula for the determinant of a block matrix: this is an n×n block, this is a column vector here, this is a row vector there, and this is a scalar. The determinant of a block matrix is particularly simple when the matrix has this form, and if you do it you get a 2n — sorry, this is 2√n times √n — that comes from the off-diagonal blocks, and then the absolute value of a determinant of size n−1. What is W̃? W̃ comes from a parametrization of W: I write my matrix W as a single number ω, this block W̃ of size n−1, and then two vectors that I call v. With this parametrization, by applying the block-determinant formula twice, you get this result. So, what are the dimensions of all this? We worked out this object here, and we are now using the fact that this integral, including the average over A — where A, of course, only appears in the combination AᵀA — is actually an average over the Wishart matrix W.
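A quick numerical check of this step (my reconstruction of the board computation): with x = √n e₁, the (n+1)×(n+1) Jacobian has block form [[W − λI, −√n e₁], [2√n e₁ᵀ, 0]], and applying the block-determinant formula twice gives det 𝒥 = 2n·det(W̃ − λI), with W̃ the lower-right (n−1)×(n−1) minor of W:

```python
# Check: det of the bordered Jacobian equals 2n * det(W_tilde - lam*I_{n-1}).
import numpy as np

rng = np.random.default_rng(1)
n, lam = 6, 0.7
A = rng.standard_normal((9, n))
W = A.T @ A                        # Wishart matrix W = AᵀA
e1 = np.zeros(n); e1[0] = 1.0

J = np.zeros((n + 1, n + 1))
J[:n, :n] = W - lam * np.eye(n)    # d(gradient eq.)/dx
J[:n, n] = -np.sqrt(n) * e1        # d(gradient eq.)/dλ = -x
J[n, :n] = 2 * np.sqrt(n) * e1     # d(sphere constraint)/dx = 2xᵀ

W_tilde = W[1:, 1:]                # (n-1)x(n-1) minor of W
lhs = np.linalg.det(J)
rhs = 2 * n * np.linalg.det(W_tilde - lam * np.eye(n - 1))
assert np.isclose(lhs, rhs)
```

The sign works out because the cofactor identity det(M)·(M⁻¹)₁₁ = det(minor of M) trades the full n×n determinant for the (n−1)×(n−1) one.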
Okay, so now — all I'm saying is that it is convenient to parameterize my matrix W as a block matrix itself. You parameterize it in this way: you have a single real element ω, then a matrix W̃, and then two vectors — really just one vector v and its transpose, because the matrix is symmetric. If you do that, the determinant boils down to computing the determinant only of the bottom-right corner, and everything is written in terms of this W̃. Now we need to work on the exponential here. Let me write it: we have the exponential of −(1/2σ²) times — we chose the direction, so we have a √n on each side, giving a factor n — e₁ᵀ(W − λI)W⁻¹(W − λI)e₁. Now, multiplying W⁻¹ times W gives the identity, so if I compute this object explicitly I get W·W⁻¹·W, which is W, minus 2λ times the identity from the cross terms, plus λ²W⁻¹. In total, I get W − 2λI + λ²W⁻¹. And I'm sandwiching this matrix between the two basis vectors e₁, where only the first element is one and all the others are zero — that's the direction I chose. So what is the result of this sandwiching? I'm selecting a particular element of this matrix: the element (1,1). So what I get here is the exponential of −(n/2σ²) times the (1,1) element of this matrix. Can you read this? Probably not — this is the limit, one of the many limits that we're going to take. Okay. So we need to compute the element (1,1) of this matrix for W.
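A check of the algebra just done on the board: (W − λI)W⁻¹(W − λI) = W − 2λI + λ²W⁻¹, so sandwiching with e₁ picks out ω − 2λ + λ²(W⁻¹)₁₁:

```python
# Check the expansion (W - lam*I) W^{-1} (W - lam*I) = W - 2*lam*I + lam^2 * W^{-1}.
import numpy as np

rng = np.random.default_rng(2)
n, lam = 5, 1.3
A = rng.standard_normal((8, n))
W = A.T @ A
Winv = np.linalg.inv(W)

lhs = (W - lam * np.eye(n)) @ Winv @ (W - lam * np.eye(n))
rhs = W - 2 * lam * np.eye(n) + lam**2 * Winv
assert np.allclose(lhs, rhs)
# The (1,1) element that enters the exponent: omega - 2*lam + lam^2 * (W^{-1})_{11}
assert np.isclose(lhs[0, 0], W[0, 0] - 2 * lam + lam**2 * Winv[0, 0])
```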
There is no problem with the (1,1) element of W: due to the parametrization, it is just ω. But now we also need the (1,1) element of the matrix W⁻¹. This can be computed with some work, because the matrix is a block matrix itself, so we need to invert a block matrix — but it is just algebra, not a very difficult operation. If we do that, we find that (W⁻¹)₁₁ = 1/(ω − vᵀW̃⁻¹v), which is obtained using the formula for the inverse of a block matrix. That was the second bit. So we have a result for the absolute value of the Jacobian determinant, and we have a result for the exponential term of my integral. Then I use another property, which is also mentioned on the first page of the handout, where I give a reference where it is proved: if W, my Wishart matrix, is parametrized in this way, as a block matrix, then W̃ and the particular combination ω − vᵀW̃⁻¹v are positive definite. This is a property about minors of a Wishart matrix and particular combinations of them. I give a reference — it is just a theorem, so I don't know how much more convincing I can be than that. It is specific to the Wishart ensemble, in the sense that it exploits the positive definiteness of the Wishart ensemble and this particular block decomposition. Yes. Okay. If this is the case, using this property, we can infer that the measure, when we integrate over W with W Wishart, becomes a product of thetas times dω dv dW̃. So eventually we want to integrate over these terms of the decomposition instead of integrating over the full matrix W.
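The block-inverse formula just used can be verified numerically; it is the Schur-complement identity, and for a Wishart W the complement is indeed positive:

```python
# Schur-complement check: with W = [[omega, vT], [v, W_tilde]],
# (W^{-1})_{11} = 1 / (omega - vT W_tilde^{-1} v), and that quantity is > 0
# for a Wishart (positive definite) W.
import numpy as np

rng = np.random.default_rng(3)
n = 6
A = rng.standard_normal((10, n))
W = A.T @ A
omega, v, W_tilde = W[0, 0], W[1:, 0], W[1:, 1:]

schur = omega - v @ np.linalg.inv(W_tilde) @ v
assert np.isclose(np.linalg.inv(W)[0, 0], 1.0 / schur)
assert schur > 0   # the positivity condition that enters the measure below
```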
I will also need a further property, but I will give it later. Okay, I don't want to erase any more — can I erase a bit of everything? The theta is a Heaviside theta. So I'm imposing that W̃ is Wishart itself — that's a second consequence of the theorem — that ω is a number and must be positive, and that this combination ω − vᵀW̃⁻¹v must be positive. This is not implicit in the change of variables; I need to include it explicitly in the measure when I do the integral. Otherwise, I would be tempted to integrate over all ω. [Question] Yes, exactly — the theta of W̃ is a formal notation to mean that you integrate over the set of positive definite matrices. Okay, so I hope I can erase a bit of everything — if I cannot, shout now, because otherwise it's going to be too late. Now that we have performed the average over b, we still have the average over A. The average number of critical points of this problem is a certain constant, which we can characterize explicitly, times a remaining integral over λ, a remaining integral over ω between zero and infinity, an integral over the vector v, and an integral over W̃, which is the parametrization of my Wishart matrix W — with a theta that imposes these positivity conditions on the minors of my matrix W. Then we just write the various terms, and I'll explain where they come from. We have a term that comes from the determinant of the matrix W — all this stuff comes from the previous term that depended on W, using this block decomposition — and then we have a term that comes from the exponential, the (1,1) entry of the term in the exponent, times the average of the absolute value of the determinant of W̃ − λ I_{n−1}. This term comes from here.
The exponential term comes from the (1,1) element of the special matrix that was in the exponent. I'm using the probability density function of the Wishart ensemble, which is also included in the handout. We had to average over A, which in practice is the matrix of coefficients, and A only appears in these integrals in the combination AᵀA, which is Wishart. So all I have to do is average over the distribution of a Wishart matrix, which is known in closed form — that's the weight function I need to average my object against. And now all I have to do is replace W with this expression in terms of the parametrization. For example, the trace of W that appears in the exponent is ω plus the trace of W̃, which is exactly what I write here, and I use the same block-matrix formula for the determinant of W. There is a question, maybe? Yeah, this one — the notation is quintessentially bad, my fault. This is a constant that we know, because I'm collecting all the constants: for example this constant, which is a different constant, but then I also have, for example, the 2n here, and other constants that come from the integration over x. So there is a number of constants that I'm lumping together in this symbol, but it is fully explicit — we can write down explicitly what this number is. You're right, I'm going to call this C_n — I guess I'm running out of letters. This is another constant, and it also depends on m, so I definitely shouldn't use C here; let me call it Ĉ, for lack of a better option. They are not exactly the same thing, but they are related; they are included in, I think, question one of the handout. So, apart from the technical steps, I hope that at least the picture is relatively clear.
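The two substitutions just described (trace and determinant of W in the block parametrization) can be sketched and checked numerically:

```python
# With W = [[omega, vT], [v, W_tilde]]:
#   tr W  = omega + tr W_tilde
#   det W = (omega - vT W_tilde^{-1} v) * det W_tilde   (block-determinant formula)
import numpy as np

rng = np.random.default_rng(4)
n = 6
A = rng.standard_normal((9, n))
W = A.T @ A
omega, v, W_tilde = W[0, 0], W[1:, 0], W[1:, 1:]

assert np.isclose(np.trace(W), omega + np.trace(W_tilde))
q = omega - v @ np.linalg.inv(W_tilde) @ v   # the Schur complement
assert np.isclose(np.linalg.det(W), q * np.linalg.det(W_tilde))
```

These are exactly the identities that let us rewrite the Wishart weight, which involves tr W and det W, in terms of ω, v, and W̃.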
All I'm using is a block parametrization of the Wishart matrix, and the fact that the measure gives some constraints on the ranges of these quantities. So far the expression looks complicated, but in the end it is reduced to an integral over a smaller Wishart matrix and integrals that we are able to perform. [Question] So if you had a similar expectation to compute, but with a non-Wishart ensemble — as long as it's positive definite, do things generalize? Well, you would still have this condition and this condition in there. Everything hinges on the fact that you have an explicit expression for the weight function of your ensemble; then everything will go through, and whether you can actually perform the integrals is to be seen — but in the Wishart case we can do that. Okay. So, I would like to erase here — same warning applies. Now we need to choose which integral we want to do first. It turns out that we can compute the integral over v, which lives in R^{n−1}, and the integral over ω relatively easily. We have a theta, so we want positivity of this term; then we have this term to the power (m − n − 2)/2; then we have exponentials, which we can lump together: we get a (1 + 1/σ²) — this exponential comes from here, and it comes from here as well, so we can lump the two together. And then we have another exponential, of −nλ²/(2σ²) times 1/(ω − vᵀW̃⁻¹v). And this is just a number, of course — a number that comes from here. And all this integral, in the end, will depend on W̃. So clearly, there is an obvious change of variables here.
The obvious change of variables is to call this object q, because it appears here as well. With this change of variables, my integral becomes something like: an integral ∫ dv, then an integral in q, with q positive; then q to the power (m − n − 2)/2; then an exponential of −(n/2)(1 + 1/σ²)ω, where ω is q + vᵀW̃⁻¹v; and then −nλ²/(2σ²q). This change of variables has Jacobian one: we are trading ω for q. Questions? [Question] It was about the property you used here: is the only condition that the Wishart law induces on v this positivity constraint? The law of v is given by this term, and we know that the minor, which is W̃, is Wishart itself. [Question] So which term in the measure is the one induced by the Wishart measure on v? This one — it comes from the fact that the Wishart measure has this determinant of W: you have a block matrix, so you need to expand the determinant of a block matrix, and you get the determinant of one of the blocks times this extra term in front. Right. Okay. So now, do you have a guess of what is the next integral we want to do? You've got a 50% chance — there are only two. The v integral is Gaussian, right, because v only appears here, so it seems like a very good choice; the two integrals are then essentially decoupled — morally, they are decoupled. Okay, so we get an integral ∫ dv of an exponential of −n/2 times
(1 + 1/σ²) vᵀW̃⁻¹v — that's the first integral — and then we have the q integral: q^{(m−n−2)/2} times the exponential of −(n/2)(1 + 1/σ²)q − nλ²/(2σ²q). If you look at this integral in q, which has an exponential of q and of 1/q, with some experience you would recognize that this probably leads to a Bessel K function. So this integral will lead to a Bessel K function, and this integral leads to something simple that depends on W̃. So, essentially, we cracked two integrals very easily — this is a Gaussian integral, great, in dimension n−1. How much time do we have? Five minutes? Less than that — I can't manage it in less than that. Good. So I will not complete these two steps, because this is just a known integral, and this is just a Gaussian integral; in the end, what will appear here is a determinant of W̃ out of this integration. So there is essentially only one integral left, and it is complicated: the integral over W̃. This is a nasty integral, because of this term — very nasty because of the absolute value: we need to integrate over matrices with an absolute value of a determinant. That's the main technical hurdle we need to overcome; these two integrals are trivial. What is left, then? Well, let me erase here. The integral that is left is — I'm just recalling here the definition of I(W̃). [Question] Maybe I'm blind, but how do I see it? You express everything in terms of q, and on the next line, in the v Gaussian integral, you have an exponential of vᵀ(…)v — where is it on the previous line? Because you had an exponential of ω, and ω is q plus this term.
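The q-integral just mentioned is a standard Bessel-K identity: ∫₀^∞ q^{ν−1} e^{−aq − c/q} dq = 2(c/a)^{ν/2} K_ν(2√(ac)). A self-contained numerical check, with illustrative values of ν, a, c (in our problem ν − 1 would be (m − n − 2)/2, and a, c the coefficients read off above):

```python
# Verify the Bessel-K identity numerically, using only the standard library:
# both sides are computed by plain midpoint quadrature, with K_nu evaluated
# through its integral representation K_nu(z) = ∫_0^∞ e^{-z cosh t} cosh(nu t) dt.
import math

def quad(f, lo, hi, steps=100000):
    # plain midpoint rule; ample for these smooth, rapidly decaying integrands
    h = (hi - lo) / steps
    return h * sum(f(lo + (i + 0.5) * h) for i in range(steps))

def bessel_k(nu, z):
    return quad(lambda t: math.exp(-z * math.cosh(t)) * math.cosh(nu * t), 0.0, 12.0)

nu, a, c = 1.5, 2.0, 0.8        # illustrative parameters
lhs = quad(lambda q: q**(nu - 1) * math.exp(-a * q - c / q), 1e-9, 40.0)
rhs = 2 * (c / a)**(nu / 2) * bessel_k(nu, 2 * math.sqrt(a * c))
assert abs(lhs - rhs) < 1e-6
```

The truncation points (q ≤ 40, t ≤ 12) are safe because both integrands are exponentially small beyond them.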
Okay — I'm just saying that q is ω minus this term, which implies that ω is q plus it, and that's what you put in. So, this is the very complicated integral, which Jan and Rochelle actually managed to crack, using a trick that Jan had developed earlier to deal with this kind of object. At this point, without this trick, we would be lost, because we wouldn't know how to handle this particular integral with the absolute value of the determinant. Let's call this object φ(λ). I don't think I have a chance to crack it today, but it's only one page, a page and a half, so I can probably do it next time in the first ten minutes. This is very important, because it is a very clever trick that you will put in your bag of tricks; you might need it at some point — you never know when, but you'll bump into something like this in your career, and it is very important to have seen it at least once. And from that, essentially, we have all the ingredients, and it is just a matter of bookkeeping and putting it all together; the problem can be cracked until the end, for finite n and m. And then we switch gears once again. Okay, thanks very much — I'm happy to take any questions. [Question] I have one question: would it be relevant, and how much harder would it be, to try to compute the expectation of the logarithm of the number of critical points? The expected log? Okay — I guess without using replicas? With or without. Without, maybe doable, but there is a caveat, because I'm doing a replica calculation for this problem — not exactly this one, but related.
Replicas done as a physicist would do them have a problem here, so we really need the help of rigorous people to put this on a rigorous footing. One of the reasons why I'm interested in this problem is that the replica calculation gives a result that may be inconsistent with what we know from other sources, and there is a previous case where a replica calculation in a similar problem gave a wrong result in a range of parameters, and we don't understand where the problem comes from. [Question] You mean replica symmetric? Yes — the standard replica-symmetric ansatz; this is a replica-symmetric problem. [Question] Is it easy to see that? I will do it, and I will point out what the problem is — or rather, it is the absence of evidence: we get a result that appears to be totally fine, but the same thing, in a very similar previous problem, was actually disproved by a rigorous treatment. So there is a regime where the replica calculation is fine, and a regime where it gives a result which, in a similar problem, was proven to be wrong. The naive guess would be that it's replica symmetry breaking, but that's not the case — that's not the source of this issue; it must be more subtle. That's why I wanted to discuss it: there's a lot that we don't understand here, so I thought it was a rich and interesting problem. Okay, nothing else? Then we'll start again at three, and we'll start discussing free probability. We have a bit of time — not much time. Yes.