theory, from random matrices to statistical mechanics. But today we are going to change the mechanism of how we do lectures. What I'm going to do is challenge you with exercises, and then we are going to work in groups, and I will go around, and between all of us we have to solve the challenges I pose. All right? Is that OK? And we'll see how it works. Good. And then we're going to do a collection of exercises between all of us for the rest of the lectures. Yeah, so for the first three days I needed to give you all the mathematical and physical tools you need, and now it's going to be much more interactive. Good? OK, so let me finish with the replica method. All right, so where were we? Yeah, OK. So recall, we have the following Hamiltonian. I was trying to illustrate the replica method with a concrete Hamiltonian: the Hamiltonian of an Ising model on graphs. The Hamiltonian was the following: H(σ) = -J Σ_{i<j} c_ij σ_i σ_j - h Σ_{i=1}^N σ_i. So the Hamiltonian depends on the connectivity matrix c; remember that c is the adjacency, or connectivity, matrix. And what I argued — I hope you understood that — is that the object I want to average over the statistics of the graph is the logarithm of the partition function. Right? Because the logarithm of the partition function is the cumulant generating function of the observables of interest. And the replica trick tells you that you can achieve this average, which is very difficult to do directly, as the limit as n goes to 0 of 1/n times the logarithm of the nth power of the partition function, averaged over the disorder. OK? And let me put here a C, for the average over the connectivity matrix. So far so good? Yeah. Then I told you that this formula is the replica trick, and that the replica method is a number of steps. This formula is what I would personally call the replica trick; the replica method is a number of steps. Step number one is to take the partition function to the nth power, with n an integer, and then do the average of Z^n over the quenched disorder. And step number two is to introduce an ansatz — some hypothesis — in order to take the limit n → 0. Right? And maybe there are some subtleties in that second step. OK? So far so good? So yesterday, for this Hamiltonian, where c is the connectivity matrix of a random Erdős–Rényi graph, or Poissonian graph, we found out the following. We found that the average of the nth power of the partition function is equal to a sort of path integral, ∫ Dp Dp̂ exp(-N S_n[p, p̂]), where S_n[p, p̂] = -i Σ_σ p̂(σ) p(σ) - (d/2) Σ_{σ,τ} p(σ) p(τ) [e^{βJ σ·τ} - 1] - log Σ_σ exp(βh Σ_{α=1}^n σ^α - i p̂(σ)). Or something like this, yeah? And for the rest of the lectures, before the exam, the gaps I left we are going to fill in together, OK? So just let me finish this part. Is that OK? Good?
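For reference, here are the blackboard formulas from this recap collected in one place, in LaTeX form. The arrows denote vectors in replica space, σ⃗ = (σ¹, …, σⁿ); N is the number of spins and n the number of replicas, with the sign conventions fixed later in the lecture:

```latex
H(\sigma) = -J \sum_{i<j} c_{ij}\,\sigma_i \sigma_j - h \sum_{i=1}^{N} \sigma_i ,
\qquad
\overline{\log Z} = \lim_{n\to 0} \frac{1}{n}\,\log \overline{Z^n} ,
\qquad
\overline{Z^n} = \int \mathcal{D}p\,\mathcal{D}\hat{p}\; e^{-N S_n[p,\hat{p}]} ,

S_n[p,\hat{p}] = -\,i \sum_{\vec{\sigma}} \hat{p}(\vec{\sigma})\, p(\vec{\sigma})
 - \frac{d}{2} \sum_{\vec{\sigma},\vec{\tau}} p(\vec{\sigma})\, p(\vec{\tau})
   \left( e^{\beta J\, \vec{\sigma}\cdot\vec{\tau}} - 1 \right)
 - \log \sum_{\vec{\sigma}} \exp\!\Big( \beta h \sum_{\alpha=1}^{n} \sigma^{\alpha}
   - i\, \hat{p}(\vec{\sigma}) \Big) .
```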
Well, let me emphasize this thing again. Sometimes in statistical mechanics, when you want to calculate the partition function of something, apparently you start doing very crazy derivations — complicated manipulations for no apparent reason. And it seems like that is what we were doing here. You agree with me? Very good. OK. The reason we do this type of derivation is the following. We try to rearrange things in such a way that the partition function can always be written as an integral over a few degrees of freedom of the exponential of something that grows with the system size when you take the thermodynamic limit. Why is this important? Because if you are interested in the typical thermodynamic properties of the system, and you manage to write the partition function in this way, then the thermodynamic limit is very easy to do: the only thing I have to do is apply the saddle point method. Compare this with the original definition of the partition function. What is that definition? This: the partition function is the sum over all possible configurations of exp(-β H(σ)). Imagine you were not doing these tricks and you wanted to study the behavior of this expression as the number of variables grows. How many terms does this sum have? 2^N. As N goes to infinity, imagine how you would capture the most important contributions to this sum. It is very difficult to do. Even numerically, if you have on the order of N = 50 or 100 variables, this sum is, computationally speaking, very difficult even to evaluate. So the point of the tricks — do you understand this? — was to achieve that form. If you achieve it, the thermodynamic limit is very easy to do. Are you with me? Now, we discussed the saddle point method for the simplest cases. But the spirit of the saddle point method normally survives when you change the object over which you are doing the integral. This is very important. Here, the integral is being done over rather weird objects — functions; this is a path integral. Although in reality this is not a path integral, let us call it that. So, we saw that the saddle point method — or the steepest descent method, or Laplace's method, depending on the function — is a way to evaluate the asymptotic behavior of an integral that has a parameter that grows in a certain limit. Remember that we started by introducing this: an integral ∫_a^b dx exp(-N f(x)). Here it is simply an integration over one variable, and we managed to prove that this behaves asymptotically as exp(-N f(x*)), the exponential evaluated at the minimum x* of the function. Then we mentioned that this trick can be generalized when, instead of one variable, you have many variables. Here, in principle — if this were a real path integral, which it is not — you would have an infinite number of variables, but you can do the same trick. So this S_n would be a functional, a function of functions. You can do a Taylor expansion of the functional around the saddle point, and then you can integrate the fluctuations around the saddle point. So the same asymptotic analysis that you did to prove the one-variable result, you can carry out here. OK? So therefore, what? This one here? Yeah, yeah.
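As a minimal numerical sketch of that one-variable statement (the function f and the interval are invented for illustration; this assumes NumPy and SciPy are available):

```python
import numpy as np
from scipy.integrate import quad

# Laplace / saddle-point idea: for large N, the integral
#   I(N) = integral_a^b exp(-N f(x)) dx
# is dominated by the minimum x* of f, and (1/N) log I(N) -> -f(x*).
def f(x):
    return (x - 0.3)**2 + 0.1 * x**4      # any smooth f with a minimum inside (a, b)

a, b = -2.0, 2.0
xs = np.linspace(a, b, 200001)
x_star = xs[np.argmin(f(xs))]             # crude numerical minimizer

print("saddle-point prediction:", -f(x_star))
for N in (10, 100, 1000):
    # integrate exp(-N (f - f*)) to avoid underflow, then undo the shift in the log
    val, _ = quad(lambda x: np.exp(-N * (f(x) - f(x_star))), a, b)
    print(N, np.log(val) / N - f(x_star))  # approaches -f(x*) as N grows
```

The 1/N correction that remains is exactly the Gaussian-fluctuation term you get by Taylor expanding f around x*.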
About the notation: if you don't understand the notation, please ask me. Remember that a vector with the line below means a vector in replica space. So this σ means (σ¹, …, σⁿ), and τ, with the line below, is another vector in replica space, (τ¹, …, τⁿ). And a notation like the sum over σ means the multiple sum over σ¹ taking the values ±1, up to σⁿ taking the values ±1 — and the same for τ. OK; if you want, we can do the exercise to complete that part. At some point, when you do the average over the adjacency matrix, you get an expression of the following sort: the double sum over i and j from 1 to N of exp(βJ Σ_{α=1}^n σ_i^α σ_j^α). Do you remember this expression appearing? What is this α? Sorry — yes, α is a dummy index that runs from the first copy to the nth copy. Is that OK? And these indices i and j are the indices of the nodes of the graph. Yeah, it happens because of the following, right? OK, shall we do the rest of the derivation here? Hello? You want to do it? Yeah? OK. The reason this double object appears is the following; let us do the derivation directly. First let me put it in a nicer form. This is the double sum over i and j from 1 to N of exp(βJ σ_i · σ_j), where I write the sum over replicas as a scalar product between the replica vectors σ_i and σ_j. Yeah? Is that OK? Very good. Now, this is equal to the following: the sum over a generic vector σ in replica space, of the double sum over i and j from 1 to N, of exp(βJ σ · σ_j) times a Kronecker delta — or a Dirac delta if you want; better a Kronecker delta here — forcing the generic replica vector σ to equal the replica vector σ_i of node i. What I have done is the following: if I have a function that depends on σ_i, then f(σ_i) = Σ_σ δ_{σ,σ_i} f(σ). Are you following this? You see, I'm interested in taking the node index i out of the argument of this function, and the way to take it out is to insert a representation of the identity using a Kronecker delta (or a Dirac delta). And of course you can do this not just for a single variable but for a whole vector of these variables. In such a way that, when I do the sum over σ, the delta substitutes the value σ_i back for σ. Is that OK? So now you understand I can do the same thing for the other factor, right? Then this is equal to the sum over σ and over τ — another vector in replica space — of the double sum over i and j from 1 to N of exp(βJ σ·τ) δ_{σ,σ_i} δ_{τ,σ_j}. Is that OK? Now, where are the i index and the j index? Before, they were coupled inside the argument of the exponential; now they sit, beautifully decoupled, in the Kronecker deltas. In such a way that I can now rearrange the sums over i and j.
And this is equal to the following: the sum over σ and over τ of exp(βJ σ·τ), times the sum over i from 1 to N of δ_{σ,σ_i}, times the sum over j from 1 to N of δ_{τ,σ_j}. Yeah? Is that OK? Better? Now, these two sums are the same kind of object: up to normalization, each is a probability distribution. Given the configuration of the spins in replica space on the different nodes, it is the probability of finding a particular replica configuration σ in this statistics. So if I now define p(σ) = (1/N) Σ_{i=1}^N δ_{σ,σ_i}, the whole thing is equal to N² times the sum over σ and τ in replica space of exp(βJ σ·τ) p(σ) p(τ). That was your question? Good. More questions? Yeah — since these are Kronecker deltas (or Dirac deltas), you can think of this as a way to construct a histogram, a table of frequencies. For instance, you have a graph with N nodes, and you have n copies of it: in the first copy you have a spin configuration σ¹ on each node, and in the nth replica copy you have the configuration σⁿ. Now suppose you run this system — simply n copies of the original system — like a movie, with the spin values fluctuating in each copy, and then you take a screenshot and ask: going from node to node, how many nodes carry a given value in replica space? So this counts how many nodes have a replica configuration equal to a particular generic configuration σ. That gives you the probability of finding a given replica configuration when you look across your collection of copies. That's why it's a probability distribution. More questions? Ah, no, the N² went somewhere because I forgot to put it — thank you very much. In the final expression, instead of an N², you have an N, because the edge probability of the graph brings in a d/(2N). Good. More questions? OK, so let me continue with the replica method, OK? Just 10 more minutes, and then we start doing the mappings and exercises in groups. So we arrived at the fact that the nth power of the partition function, averaged over the quenched disorder, can be written as a path integral ∫ Dp Dp̂ exp(-N S_n[p, p̂]). And if you are interested in the asymptotic behavior for N very large, this behaves as the exponential of -N times this function evaluated at the values of p and p̂ that are the extremals of the functional. I call those values p₀ and p̂₀, OK? So these p₀(σ) and p̂₀(σ) are such that the variation of S_n with respect to p(σ) is equal to 0, and the variation with respect to p̂(σ) is equal to 0, OK? This condition is the analogue of the one we saw in the saddle point method for an ordinary function — the derivative at x₀ has to vanish, or the gradient at a particular point has to vanish. It's the same condition, but now for a functional, OK? These are called the saddle point equations. And I'll leave it as an exercise. Go ahead.
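Before moving on, here is a quick numerical sanity check of the p(σ) rearrangement from a moment ago — a minimal sketch with invented sizes, assuming NumPy:

```python
import itertools
import numpy as np

# Check the identity derived above:
#   sum_{i,j} exp(beta*J * sigma_i . sigma_j)
#     = N^2 * sum_{sigma,tau} p(sigma) p(tau) exp(beta*J * sigma . tau),
# with the empirical distribution p(sigma) = (1/N) sum_i delta_{sigma, sigma_i}.
rng = np.random.default_rng(0)
N, n = 6, 3                                  # 6 nodes, 3 replicas (made-up sizes)
beta_J = 0.7
spins = rng.choice([-1, 1], size=(N, n))     # sigma_i^alpha

# left-hand side: double sum over nodes
lhs = sum(np.exp(beta_J * spins[i] @ spins[j])
          for i in range(N) for j in range(N))

# empirical distribution over the 2^n replica vectors
configs = list(itertools.product([-1, 1], repeat=n))
p = {s: np.mean([tuple(row) == s for row in spins]) for s in configs}

# right-hand side: double sum over replica vectors
rhs = N**2 * sum(p[s] * p[t] * np.exp(beta_J * np.dot(s, t))
                 for s in configs for t in configs)
print(lhs, rhs)                              # the two numbers coincide
```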
Why is it a variation? Because your degrees of freedom are the values of a function as you vary its independent variable, yeah? So in the path integral — again, this is not really a path integral, because the function takes values on a finite set: the vector in replica space has a finite number of values; I call it a path integral just for simplicity — what it really means is that I'm integrating over all possible values this function takes as I vary the argument, and the argument is the vector in replica space, yeah? If you want me to be more picky, maybe what I should have done is define that this Dp really means the product over all possible values of the replica vector of dp(σ). So the vector σ in replica space is a way of labeling the different values of the function p, and the same for p̂, right? OK? So the variables over which I have to extremize this functional S are precisely all the possible values that p and p̂ can take, right? That's why, instead of writing a partial derivative, I write a δ, similar to what you do in the calculus of variations. Is that better? More questions? Very good. So if you calculate the saddle point equations, I told you that you should get the following. From the variation with respect to p, you should get that -i p̂₀(σ) = d Σ_τ p₀(τ) [e^{βJ σ·τ} - 1], with p₀ here. And p₀(σ) is equal to what? That's right: p₀(σ) = exp(βh Σ_{α=1}^n σ^α - i p̂₀(σ)) divided by the sum over τ of the same expression you have in the numerator. Yeah, let's do something. Do you want to start doing the exercises, maybe? Do you want to do this? Hello? So, have you tried to do the derivation of the saddle point equations? Who has tried? Did you manage? OK, so let's do the following: let's start, and then we do the mappings and exercises later. Let us go backwards in difficulty relative to the order I presented the exercises. Exercise one, which we are going to do now, all of us, discussing in groups, is to derive this. So let me remind you what you have: S_n[p, p̂] = -i Σ_σ p̂(σ) p(σ) - (d/2) Σ_{σ,τ} [e^{βJ σ·τ} - 1] p(σ) p(τ) - log Σ_σ exp(βh Σ_{α=1}^n σ^α - i p̂(σ)). And again: this might be a scary object, but in its deep soul it's a function inside the argument of an exponential, and you have to apply the saddle point method. The object might be a bit more complicated, but the spirit of the method is the same; it remains. So from here, I have to find the functions p and p̂ for which the variation of this is 0. That is, I take the variation of S with respect to the value of the function at a given value of the independent variable, p(τ), and set it to 0; and the same with respect to p̂. And this should give you these two equations. All right? Go ahead. Is there a — sorry? I have to — ah, yeah, no, this one. Sorry, it's plus. Sorry, sorry. Yes. Thank you. More questions? So shall we start with this exercise?
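Stated compactly, the exercise on the board is:

```latex
\text{Find } p_0,\ \hat{p}_0 \ \text{ such that }\quad
\frac{\delta S_n[p,\hat{p}]}{\delta p(\vec{\tau})}\bigg|_{p_0,\hat{p}_0} = 0
\quad\text{and}\quad
\frac{\delta S_n[p,\hat{p}]}{\delta \hat{p}(\vec{\tau})}\bigg|_{p_0,\hat{p}_0} = 0
\quad\text{for every } \vec{\tau}\in\{-1,+1\}^{n} .
```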
So we are going to do something that's called active learning. Active learning goes as follows, OK? You turn around, you look at each other, and you form groups of students. You are going to discuss among yourselves how to do this derivation, and I'm going to go around to see how you discuss and how you solve the problem. Is that OK? So let us take like 10 minutes to do it, yeah? 10, 15 minutes, and then we do the other exercises. So you see, the first part of this exercise, this way of proceeding, is, again: you turn around, you look at each other, and you form clusters of students. And remember that Mateo said you are separated into masters and PhDs or something like that, diplomas — so maybe you should mix them up, right? So — you have two replicas, OK? n is equal to 2, right? And this business of writing down very simple examples, believe me, is very, very useful for learning things properly, right? So the vector σ in replica space is now (σ¹, σ²), yeah? How many values can this vector take? 4, right? So σ can take the values — let's write them with just the signs: (+,+), (+,-), (-,+), (-,-). Is that OK? Everybody with me? Very good. So now look at the first term I have here; I'll write it out explicitly. For this one, -i Σ_σ p̂(σ) p(σ), this will be: -i p̂(+,+) p(+,+) - i p̂(+,-) p(+,-) - i p̂(-,+) p(-,+) - i p̂(-,-) p(-,-). All right? Yeah? So what I'm doing in this calculus of variations — again, not really a calculus of variations, because the function takes only a finite number of values; I'm just abusing language — is to look at how this functional S depends on the possible values these two functions take, right? Then I pick one of them, for instance p(+,+), and I look at how S varies when I change p(+,+). And I do the same thing for all the possible values of the function, yeah? And how is this encapsulated in the notation? I take this function S_n[p, p̂] and I look at how it varies when I vary the value of the function p at a given value of the replica vector, which I call here, for instance, τ, or if you want, σ′. This σ′, in this case, takes four values: (+,+), (+,-), (-,+), (-,-). And then I do the derivation, OK? Once this is clear for a particular case, you can easily generalize to any case. So let us do it, right? This is equal to what? You see, p appears here and here; in the last part it doesn't appear, so let us forget about that part. So this is equal to: -i times the variation with respect to p(σ′) of Σ_σ p̂(σ) p(σ), plus — sorry, minus — the variation with respect to p(σ′) of this part here, which is (d/2) Σ_{σ,τ} p(σ) p(τ) [e^{βJ σ·τ} - 1], right? Is that OK? Now, in this sum over σ, σ plays the role of a dummy variable: it runs over all the values σ can take — in this particular case, four values.
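Here is that "write the simple example out explicitly" advice in code, for n = 2 — a small sketch using SymPy, with invented symbol names:

```python
import sympy as sp

# n = 2 replicas: the replica vector sigma = (s1, s2) takes 4 values.
configs = [(+1, +1), (+1, -1), (-1, +1), (-1, -1)]

def name(s):
    return "".join("p" if x > 0 else "m" for x in s)

# one independent symbol per value of the functions p and p_hat
p  = {s: sp.Symbol("p_"  + name(s)) for s in configs}
ph = {s: sp.Symbol("ph_" + name(s)) for s in configs}

# first term of S_n:  -i * sum_sigma  p_hat(sigma) p(sigma)
term1 = -sp.I * sum(ph[s] * p[s] for s in configs)

# the 'variation' is an ordinary partial derivative with respect to one value:
print(sp.diff(term1, p[(+1, +1)]))   # -> -I*ph_pp, i.e. -i p_hat(+,+)
```

Differentiating the remaining terms of S_n with respect to each of the four values in the same way reproduces, term by term, the variation being computed on the board.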
For a generic n, it would be 2^n values. So at some point in this sum, σ is going to be equal to σ′, right? And you can write this out explicitly, as we did here. So when σ equals σ′, you can take the derivative of the value the function takes there, and this will give you what? This will give you p̂(σ′). In the same manner, you can do the same thing here, but you have to be a bit careful: here you have a double sum. At some point one of the elements of the first sum, σ, will be σ′, and at another point τ will be equal to σ′. Is that OK? So then you do the derivation, and you obtain that this is equal to -i p̂(σ′) minus d/2 times — but you have two p's here, and at some point you have to take the derivative of one, and then of the other. So you get the sum over, for instance, τ of p(τ) times exp(βJ σ′·τ) minus 1 — and notice that the scalar product is symmetric; that's why you get two times the same thing, which cancels the 1/2. So: -i p̂(σ′) - d Σ_τ p(τ) [e^{βJ σ′·τ} - 1]. And the pair of functions that obey this equation set to zero are the extremizers, and we call them p₀ and p̂₀. So this would be p₀ and this p̂₀. And the same goes when you take the variation with respect to the other function. Better now? Yeah? What's up? Yeah, you equate this thing to 0: this is the variation, and setting it equal to 0 gives you the other side of the equation. That was the question. Wait a second, please. Can you speak up? Sorry? Hey, please, one second — I cannot hear her. Sorry. How do I get this? You see, σ′ is a particular value that the vector in replica space can take. In this sum, σ runs over all possible values of the replica vector, so at some point in this sum σ is going to be equal to σ′. And when σ equals σ′, the derivative gives you, in this case, p̂(σ′). It's like in the expression I wrote down explicitly: if I take the derivative with respect to p(+,+) of the term with σ = (+,+), what do I get? p̂(+,+). OK? More questions? You have to speak up. Here? No — because you are taking a single derivative with respect to p, so you cancel one of the two p's, and the other one always remains, right? Now, there is another, much more compact way to do this, which is the following; let's do it now with this trick. The derivation we just did, you do by thinking about what the variation means. The compact way is to realize that if I have p(σ) and p(σ′) and I take the variation of p(σ) with respect to p(σ′), I get a Kronecker delta: δp(σ)/δp(σ′) = δ_{σ,σ′}. Why? Because the values of these functions are like independent variables, and you are taking derivatives with respect to them. This is the generalization of having a set of independent variables x₁, …, x_N and taking the partial derivative ∂x_i/∂x_j = δ_{ij}; it's the same thing here. So what you can do is use this trick to do the derivation, right? Questions?
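For reference, with the Kronecker-delta trick the whole computation fits in a few lines (same sign conventions as above):

```latex
\frac{\delta S_n}{\delta p(\vec{\sigma}')} =
-\,i\,\hat{p}(\vec{\sigma}')
- d \sum_{\vec{\tau}} p(\vec{\tau})\left( e^{\beta J\,\vec{\sigma}'\cdot\vec{\tau}} - 1 \right) = 0
\;\Longrightarrow\;
-\,i\,\hat{p}_0(\vec{\sigma}') = d \sum_{\vec{\tau}} p_0(\vec{\tau})
\left( e^{\beta J\,\vec{\sigma}'\cdot\vec{\tau}} - 1 \right),

\frac{\delta S_n}{\delta \hat{p}(\vec{\sigma}')} =
-\,i\,p(\vec{\sigma}')
+ i\,\frac{\exp\!\big(\beta h \sum_{\alpha}\sigma'^{\alpha} - i\,\hat{p}(\vec{\sigma}')\big)}
        {\sum_{\vec{\tau}} \exp\!\big(\beta h \sum_{\alpha}\tau^{\alpha} - i\,\hat{p}(\vec{\tau})\big)} = 0
\;\Longrightarrow\;
p_0(\vec{\sigma}') = \frac{\exp\!\big(\beta h \sum_{\alpha}\sigma'^{\alpha} - i\,\hat{p}_0(\vec{\sigma}')\big)}
        {\sum_{\vec{\tau}} \exp\!\big(\beta h \sum_{\alpha}\tau^{\alpha} - i\,\hat{p}_0(\vec{\tau})\big)} .
```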
OK, so let's do one thing, because I still need to finish the explanation of the replica trick — the second step. But I'm going to do that when we apply it, when we start doing the mappings to random matrix theory, to problems in random matrices. So let us now interleave one mapping exercise, OK? And then we'll come back to the exercise I left. I'm going to pose the first challenge, which is the following one. Suppose I have an ensemble of random matrices, OK? This is an ensemble of matrices; let us say that A belongs to this ensemble. And let us assume for simplicity, to start, that this is an ensemble of real symmetric matrices of size N × N, so A = Aᵀ, yeah? Is that OK? Now, given a matrix A, let me denote by λ(A) the spectrum of A — so the eigenvalues. With this notation, λ(A) = (λ₁(A), …, λ_N(A)). What's up? A belongs to the ensemble: the ensemble is a bag where you have a bunch of matrices, and this means a matrix from that collection, from that ensemble. Good. Now, let me define what is called the empirical spectral density, which is denoted ρ_A(λ): ρ_A(λ) = (1/N) Σ_{i=1}^N δ(λ - λ_i(A)). So, if you have the eigenvalues, this is like taking the collection of eigenvalues and constructing a histogram, something like this, yeah? So far, so good? Yeah? Now, what is challenge number one? Challenge one, which corresponds to the first mapping, is to realize that this empirical spectral density, given a matrix A, can be written in terms of a partition function of a system of N interacting particles, OK? What's up? Yes, this N is the size of the matrix. We have an N × N matrix, so it has N eigenvalues, and here I'm summing over all the eigenvalues. The Dirac delta — no, it's just that this captures the density: how the eigenvalues are spread over the real line, yeah? Think of the matrix as very, very large; imagine the size of the matrix going to infinity. You have a bunch of eigenvalues on the real line, and this gives you a profile of the eigenvalues on the real line, yeah? For finite, small N, this is just a collection of Dirac delta peaks; but for N large, it gives you a profile of the density of the eigenvalues on the real line, yeah? Let me put it differently, OK? If I were to take this and integrate it over a particular interval [a, b] of the real line, this would capture the number of eigenvalues in the interval [a, b], right? So it captures the idea of a density of eigenvalues on the real line, yeah? Yes — it's the fraction, very good, because this is normalized. It's the fraction of eigenvalues, thank you. Or, if you put an N in front, then it's the number, yeah? Very good.
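Here is a minimal numerical illustration of the empirical spectral density — a sketch for one Gaussian symmetric ensemble (the GOE, which comes up again just below), assuming NumPy and Matplotlib:

```python
import numpy as np
import matplotlib.pyplot as plt

# Empirical spectral density  rho_A(lambda) = (1/N) sum_i delta(lambda - lambda_i(A)),
# visualized as a histogram of the eigenvalues of one large sample matrix.
rng = np.random.default_rng(1)
N = 2000
X = rng.normal(size=(N, N))
A = (X + X.T) / np.sqrt(2 * N)            # real symmetric, GOE-like scaling

eigs = np.linalg.eigvalsh(A)              # the spectrum lambda(A)
plt.hist(eigs, bins=80, density=True)     # for large N this approaches a smooth profile
plt.xlabel("lambda")
plt.ylabel("rho_A(lambda)")
plt.show()                                # here: Wigner's semicircle on [-2, 2]
```

For small N you would literally see a collection of spikes; the smooth profile only emerges as N grows, which is exactly the point being made above.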
OK, now, this is a very good question — that's a very good point: if I can diagonalize the matrix and I have the collection of eigenvalues, why would I want to do any of this? You are absolutely right, OK? But sometimes you don't have direct access to the eigenvalues, yeah? If you have the eigenvalues, you just use this definition. But sometimes it is not possible to calculate the eigenvalues, while you do have an expression for the matrix. So what I'm going to show you is that, starting from this definition, I can rewrite it as an expression that depends on the matrix, and when I do the average over the ensemble of matrices, I get an expression for the spectral density even though I cannot diagonalize the matrix directly. OK? So actually, we never evaluate this delta? No, no — we start with this definition. Because if I already have the collection of eigenvalues, then what I'm going to teach you has no value, right? The point is, starting from this definition, to rewrite it in such a way that I can compute this thing without having access to the knowledge of the eigenvalues, yeah? Good? Excellent, very good question. Now, let me go back to the challenge, number one — or the mapping number one. What's up? Yeah — the idea is the following. The ensemble of matrices is just a collection of a certain number of matrices, right? What I'm going to do later — not now, because we don't need it for the mapping — is to assign a probability distribution on this ensemble of matrices, which tells you, if I pick one of the matrices at random, the probability of picking a given matrix, all right? But for now this is not needed. No, that's a very good point. So what you have is a probability distribution for the matrices. And in some cases, depending on the ensemble of matrices, you can derive exactly the joint distribution of eigenvalues that is inherited from the distribution you assign to the matrices. But in many other types of matrix ensembles, you cannot derive this expression explicitly. For instance, if this were the GOE, the Gaussian orthogonal ensemble — symmetric square matrices whose entries are Gaussian variables — one can derive exactly the joint distribution of the eigenvalues. But if this were the ensemble of adjacency matrices of Poissonian graphs, nobody can derive the joint distribution of eigenvalues, right? More questions? OK, very good. So what is the challenge? The challenge is to show that this can be related to — let's put it like this — a "spin-glass" problem, where the entries of A play the role of interactions between "spins", between quotation marks, right? And more precisely, the spectral density is related to the expectation value of some local observable, right? What's up? We are starting with the simplest mapping, OK? The simplest mapping is for symmetric matrices; we are going to do it also for non-Hermitian, non-symmetric matrices. So this is the challenge, OK? This was — ah, this disappeared, OK. I'm going to give you some hints, right? There is a paper by Edwards and Jones, 1976, that tells you how to do this mapping. Part of this group work is learning how to search the literature, which is very important for research. And the tricks you have to use to go from here to there are the following — it's very simple; we are going to use two tricks, OK?
One trick is that the Dirac delta can be obtained from the following identity: 1/(x - iη) = P(1/x) + iπ δ(x) in the limit η → 0⁺, where P denotes the Cauchy principal value. And the second trick you need is that, given a symmetric matrix S, 1 over the square root of the determinant of S is equal, up to constants, to the Gaussian integral over a bunch of variables of exp(-½ Σ_{i,j=1}^N x_i S_ij x_j) — sorry, the matrix here is S, OK? With η → 0⁺, yes. Just using these two tricks that I explained, one should be able to express this spectral density in terms of some sort of partition function, OK? Where the matrix entries play the role of interactions between random variables. No, no, no — I only wrote down this identity; you have to work out the rest, yeah? The only thing I wrote down here is this identity, and a reminder that this has to be a symmetric matrix. When you do the derivation, it might happen — maybe yes, maybe no — that the matrix S is this A here, or maybe something else. You have to work it out, yeah? Clear? What do we have to do? Sorry — ah, sorry. It's a spin-glass type of problem, OK? Where the entries of A play the role of interactions between spins. And when I put quotation marks, it's because this is not really a spin glass, and this is not really a partition function — maybe the exponent is complex, et cetera, et cetera. But functionally speaking, or mathematically speaking, it has the same mathematical structure, and that's the important bit. Clear? So, what time is it? OK, so let us give it 15 minutes to try to work it out, working in groups. Is that OK? Go ahead. No, I mean, you work in groups — again, you work in groups, and I go around and help you. That's your mission.
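To see where these two tricks are heading, here is a small numerical sketch of the resolvent route to the spectral density. This anticipates the Edwards–Jones result rather than deriving it, and the test ensemble and sizes are invented for illustration:

```python
import numpy as np

# Combining the two tricks leads to the resolvent formula
#   rho(lambda) = (1/(N*pi)) * lim_{eta->0+} Im Tr [ (lambda - i*eta) I - A ]^{-1},
# which needs only the matrix A, not its eigenvalues.
rng = np.random.default_rng(2)
N, eta = 1000, 1e-2
X = rng.normal(size=(N, N))
A = (X + X.T) / np.sqrt(2 * N)            # real symmetric test matrix

def rho(lam):
    G = np.linalg.inv((lam - 1j * eta) * np.eye(N) - A)    # resolvent
    return np.trace(G).imag / (N * np.pi)

# cross-check against a direct count of eigenvalues near each point
eigs = np.linalg.eigvalsh(A)
for lam in (-1.0, 0.0, 1.0):
    direct = np.mean(np.abs(eigs - lam) < 0.05) / 0.10     # local density estimate
    print(lam, rho(lam), direct)
```

The point of the mapping is that this resolvent can in turn be written as an average over a Gaussian "partition function" with the entries of A as couplings, which is exactly the challenge posed above.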
We tried yesterday — we spent too much time on it — to prove the fact that 1 over the determinant of A is equal, in the complex case, to basically this Gaussian integral, and so on. Yeah. For Hermitian matrices it's obvious: you diagonalize, you have a unitary transformation, and everything is OK. But for the non-Hermitian case, like we found... For the non-Hermitian case, everything you have to do is a linear transformation. You have to realize — and this is in the book of Zinn-Justin — that the pair of variables you introduce, the c_i and the c̄_i, are independent variables. They are conjugate one of the other, but they are not complex conjugates. Yes — it's just a way to represent two real variables. That's right, that's right. So you can take a transformation of one set of these variables to a new set of variables — say, for instance, the c_i, if you want to transform the c_i. Actually, we did it, but when we did the transformation we ended up with something like the exponential of minus the primed, transformed variables, and it is not a quadratic form. Yeah, but that's OK, no? Think about it: take the formula I gave you in the simplest case, where the matrix S — which can be any matrix — is simply a diagonal matrix. Then you have the exponential of minus the sum over i of the diagonal elements times c̄_i c_i. You are telling me that this integral is not well defined? That is not true, right? Because each term in this sum has a real part, and the imaginary parts only make the integrand oscillate; the real part on the diagonal is going to make the integral converge, right? As long as you don't care about this... Well, it's not that you don't care. It's that the existence of the integral is guaranteed by the diagonal part. And then how do you determine the value of the integral? Sure, what you can do is.