So I'm going to talk about random matrix theory, or rather a particular part of random matrix theory. Random matrix theory, for those of you who know it, is a very vast area of research with a lot of applications and many, many results. So to try to cover these topics in two weeks would be very foolish on my part. What I'm going to do instead is be selfish and focus on the work I've done, which can essentially be summarized in the title of this lecture: the statistical mechanics of random matrices. By the way, I'm going to use the blackboard all the time. Is that OK with you? So the title of this course is going to be the statistical mechanics of random matrices, even of a particular type of random matrix at some point, but we'll see that later on. And what is the main goal of these lectures? I'm going to try to give you some very, very cool tools to solve these problems. So I'm not going to focus on universality results, et cetera. The main goal of this series of lectures is the following. Let me put it here. And actually, this happens not only in random matrix theory; it happens also in other areas of science, for instance economics, ecology, et cetera. So I start from a problem of interest concerning random matrices. Maybe you come from different areas and have various different interests. And what I'm going to do is map this problem exactly onto a problem in the statistical mechanics of disordered systems, also called, for the people in the know, spin glasses. Can you read my writing? I hope so. So I'm going to do this mapping, and this mapping is exact from the beginning. I have a very interesting problem in random matrix theory; I map it exactly onto a problem in stat mech, with a partition function, a free energy, et cetera. And then, once I realize that my problem can be understood in the setting of physics, I apply all the machinery I know from physics to solve it. And what is this machinery? Number one, of course, the concepts and ideas of the foundations of statistical mechanics: partition functions, free energies, et cetera. This part I will not cover, because I assume that everybody knows the basics of statistical mechanics. Am I correct in thinking that? Very good. Then I will be using some tools from disordered systems. And again, disordered systems and spin glasses form a very vast area with many deep results; the physics is very interesting, and one part of it gave rise to the Nobel Prize in physics awarded to Giorgio Parisi in 2021, if I recall correctly. I'm not going to go into the details of what spin glasses are or the physics behind them. What I'm interested in, in these very short lectures, is using the tools developed to understand spin glasses to solve problems of interest in random matrices. So which are those tools? The first is what is called the replica method. I guess some of you have heard about it.
Who knows the idea of the replica method? Raise your hands. And the rest of you don't know it. That's very good; I have something new to teach. The other method is called the cavity method. And maybe for other, very advanced results on random matrices you would use other techniques, but I'm going to focus on those. Good so far? Good. Then I need some basic results from mathematics, some tools from mathematics. Let me see that I'm not missing anything. One tool is very important, one that every physics student should know, but sometimes you don't, which is OK: the so-called saddle point method. This is a very cool method that is a cornerstone of many, many results in condensed matter physics and quantum field theory. This method is also called the Laplace method, or the steepest descent method, or the stationary phase approximation, depending on the context. All these names are used synonymously, even though strictly that is not accurate: for one specific type of problem you should say Laplace or steepest descent, while another type is properly called the saddle point method or the stationary phase approximation. Have you heard about this method? I hope so. Very good. Then we'll need some results on Gaussian integrals, multivariate Gaussian integrals, which again are very important in QFT; you know what I'm speaking about. Here, I'm interested in expressing determinants in terms of multivariate Gaussian integrals, all right? And finally, I need some very simple results on expressing Dirac deltas and theta functions in smart ways, OK? That's it. Questions so far? Nothing? OK, so how did I think I could structure these lectures? Let me see. It's going to be the following; this is going to be the content. Well, I have to say that I teach a lot, but I never manage to follow my own schedule. It's impossible, I don't know why. I'll try, OK? So, first week, day one, which is today: we introduce the mathematical tools. Days two and three: tools from spin glasses, that is, the replica and cavity methods. Days four and five: the mappings. We'll decide which problems in random matrices we're interested in, and then we'll map them onto problems in statistical mechanics, OK? In particular, I think we'll do it generally: general mappings, or as general as possible. Then in the second week, in days one and two, we'll focus on the spectral density of random graphs, directed and undirected. Days three and four, we will develop what is called large deviation theory for this type of problem. And of course, day five is the exam. Questions? Now, the most important question is about the exam, I guess, right? What I'm going to put in the exam, I need to think about, OK? But as Mateo mentioned, it's going to be very, very, very tough. All right? Very good, so shall we start? Yeah, excellent.
So let's start with day one, which is today: mathematical tools. OK, while I'm erasing, can somebody tell me something about what is known as the saddle point method? What is the idea behind it? Somebody? It's a way of calculating integrals without actually having to do the integral, OK? That's very cool, because everybody knows how to take derivatives, but nobody knows how to do integrals, right? Integrals are much, much more difficult. So the idea of the saddle point method, as, what's your name? Christopher? Christopher, is to do integrals, or rather, to evaluate integrals in a particular limit. So let me see, some mathematical tools. We start with the saddle point method, also known, as I told you, as the Laplace method, the steepest descent method, or the stationary phase approximation, all right? The idea is the following. Suppose I have an integral of the following sort. I'm going to denote this integral I_N, and it is equal to the integral from a to b of dx of the exponential of minus N f of x, OK? And what I want to study is the asymptotic behavior of this integral when N goes to infinity. I'm not trying to compute the integral exactly; if I could, that would be much, much better. But often you are only interested in the asymptotic behavior, as happens in statistical mechanics, where you are interested in the thermodynamic limit, right? So here I want to study the asymptotic behavior of this integral when N goes to infinity. For those of you who know the saddle point method, you know the result: when N goes to infinity, or for N very, very large, the asymptotic behavior of this integral, and the notation is like this, I'll explain precisely what it means, is the exponential of minus N f of x0, where x0 is such that the derivative of f at x equal to x0 is 0, and the second derivative of f at x equal to x0 is positive. And what does this symbol mean? You know, we physicists confuse symbols all the time, and I'm going to do that a lot, but at least this symbol means the following: if I take the limit, as N goes to infinity, of minus 1 over N times the logarithm of I_N, this is equal to f of x0. This limit is what the symbol means. Good, so how do I prove that? Do you know Taylor expansions? If you know this method, and you look properly at the Taylor expansion here, what appears is the starting point of perturbation theory, Feynman diagrams, et cetera; it is simply a Taylor expansion. So let us do it, for those people who do not know these methods. This is an informal proof, not a mathematical proof. I take the function f, and I Taylor expand around x0, where the first derivative is 0. And by the way, here I forgot something: you have to assume that x0, where you have the minimum, is inside the integration interval. If it is outside the integration interval, you have to generalize this result, and the relevant result is called Watson's lemma. We're going to assume that x0 is between a and b. So you have that f of x is equal to f of x0, plus one half the second derivative at x0 times x minus x0 squared, plus the sum over n greater than or equal to 3 of 1 over n factorial times the nth derivative of f at x0 times x minus x0 to the power n.
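Written out, the statement and the expansion from the blackboard are:

```latex
\begin{gather*}
I_N=\int_a^b \mathrm{d}x\; e^{-N f(x)} \;\asymp\; e^{-N f(x_0)},
\qquad f'(x_0)=0,\quad f''(x_0)>0,\quad x_0\in(a,b),\\
\text{where } \asymp \text{ means }\;
\lim_{N\to\infty} -\frac{1}{N}\log I_N = f(x_0),\\
f(x)=f(x_0)+\tfrac{1}{2} f''(x_0)\,(x-x_0)^2
+\sum_{n\ge 3}\frac{f^{(n)}(x_0)}{n!}\,(x-x_0)^n.
\end{gather*}
```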
All right, so I set this term apart because it's important. Then what I do is simply put this expansion into the integral. So I have that I_N is equal to the integral from a to b of dx of the exponential of: minus N f of x0, minus N over 2 times the second derivative of f at x0 times x minus x0 squared, minus the sum over n greater than or equal to 3 of N over n factorial times the nth derivative of f at x0 times x minus x0 to the power n. Now, this first term is a constant; it can come out. Can you follow my writing? Everything's OK? Yeah. So this guy I can take out, and then I have this equal to the exponential of minus N f of x0, times the integral from a to b of dx of the rest: the exponential of minus N over 2 times the second derivative of f at x0 times x minus x0 squared, and the rest I'm going to call R of x. And I can proceed in several ways. One way is a change of variables: I define y equal to the square root of N times x minus x0. So then the same integral is equal to the exponential of minus N f of x0, times the integral from square root of N times a minus x0 to square root of N times b minus x0, of dy over the square root of N, of the exponential of minus one half the second derivative of f at x0 times y squared, and then, let me put it here, the exponential of R tilde of y, where R tilde of y is R of x when I change from x to y. Just work it out. And the next step is the following. This is equal to the exponential of minus N f of x0, times the same integral, from square root of N times a minus x0 to square root of N times b minus x0, of dy over the square root of N, of the exponential of minus one half the second derivative of f at x0 times y squared, times, and here I do a particular expansion, the sum of the series over k from 0 to infinity of 1 over k factorial times R tilde of y to the power k. Good. Questions so far? So then, what happens if you look at the leading term in N, the one that matters in the limit N goes to infinity? It comes from here. When N goes to infinity, the integral over y runs from minus infinity to infinity, because x0 is between a and b: the lower limit, square root of N times a minus x0, is negative, and when N goes to infinity it goes to minus infinity, while the upper limit goes to plus infinity. And you can look at the corrections coming from this series, but they are not exponentially big, OK. In field theory, and you can look at these results in the book of Zinn-Justin, for instance, when you do this kind of derivation, this would be a Gaussian measure; you have a Gaussian probability distribution, the Gaussian measure, and these are the terms that come from perturbation theory. So that means that the leading contribution, the one we are interested in, is the following one: the exponential of minus N f of x0, times the integral from minus infinity to infinity of dy over the square root of N of the exponential of minus the second derivative over 2 times y squared, times 1, keeping only the first term of the series. And this is a Gaussian integral. So therefore you get that this is equal to the exponential of minus N f of x0, times the square root of 2 pi over N times the second derivative of f at x0. Questions?
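Since the computation stays at the blackboard, here is a minimal numerical sketch of the result just derived, with a test function of my own choosing (f(x) = cosh x − x, so x0 = arcsinh 1 and f''(x0) = √2); none of the specifics are from the lecture itself:

```python
import numpy as np
from scipy.integrate import quad

# Minimal sketch (assumed test function): check
#   I_N = int_a^b dx exp(-N f(x)) ~ exp(-N f(x0)) sqrt(2 pi / (N f''(x0)))
# for f(x) = cosh(x) - x, whose minimum lies inside (a, b).
f = lambda x: np.cosh(x) - x
x0 = np.arcsinh(1.0)              # solves f'(x) = sinh(x) - 1 = 0
f2 = np.cosh(x0)                  # f''(x0) = sqrt(2) > 0
a, b = -2.0, 4.0

for N in (10, 100, 1000):
    # Integrate exp(-N (f - f(x0))) so nothing underflows; then
    # -(1/N) log I_N = f(x0) - (1/N) log J_N.
    J, _ = quad(lambda x: np.exp(-N * (f(x) - f(x0))), a, b, points=[x0])
    print(N,
          f(x0) - np.log(J) / N,                # -> f(x0), the rate
          J / np.sqrt(2 * np.pi / (N * f2)))    # -> 1, the Gaussian prefactor
```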
Yeah, I assume that it's between a and b; I said it at the beginning, I just erased it. So x0 has to be between a and b, in such a way that when you do this integral, the integral over y goes from minus infinity to infinity. If this were not the case, if x0 is not between a and b, then you can generalize this result, and the relevant theorem is Watson's lemma. But for our cases, the cases of interest, x0 is always within a and b. Or, as a physicist, you assume that it's between a and b and you do the derivation. More questions? Which one? This sum here? So this sum here: since I'm interested in the leading term in N, the term that contributes at leading order is the one with k equal to 0. The rest of the terms are the corrections to the leading term, which is perturbation theory. If we had time, I would tell you how to do perturbation theory for random matrices, but we will not have time. More questions? Why? I'm doing the integral from minus infinity to infinity. It's not correct? It is correct, isn't it? Ah, this is a 1, sorry. If you don't understand my writing, this happens, don't be shy. That's supposed to be a 1. Sorry. More questions? Very good. So now, sometimes it happens that instead of the exponential of a real function, and I didn't say it, but so far the argument is real and f is a map from the reals to the reals, you have a function in the complex plane. Once more: this function here, for the Laplace method, is a function from R to R. So suppose now that I have a function f in the complex plane, and I have the same type of integral: the integral I_N, which is the integral from a to b of dz of the exponential of minus N f of z. So I have, in the complex plane, a point a and a point b, and this is a line integral in the complex plane. So I have to choose a path, although if the function is analytic, the result of the integral is independent of the path. Choose a path gamma that goes from a to b. And again, you're interested in the asymptotic behavior of this integral when N goes to infinity. In this case, you can do the same trick, or something very similar. What you do is look for a point in the complex plane, let's call it z0, such that the derivative of the function evaluated at z0 is 0. Then, if f is analytic, the exponential of minus N f is an analytic function, and that means that the result of the integral does not depend on the path. So I can modify the path as I please. What I do is modify the path to go through z0. Go ahead. Sorry? Because I made a mistake; it should be below. Thank you. Yes? Thank you. So what you do in the case of the complex plane: you assume that there is a point z0 such that the derivative of f at z0 is 0. You take the path, you deform it to go through this point, and then you can do the Taylor expansion, et cetera, et cetera. And when you do the Taylor expansion, and I'm going to leave this as an exercise, you'll have an expression with one half the second derivative of the function evaluated at z0 times z minus z0 squared, plus higher order terms.
So then what you do is move the path in such a way that the imaginary part of this expression is constant along the path in a small region, and then you apply the saddle point method to the real part. At the end of the day you obtain exactly the same result, but you have to be aware that you have to modify the path. So at the end of the day you find that the asymptotic behavior of this integral is, again, the exponential of minus N f of z0. This is a result from complex analysis: if you have an analytic function, the line integral in the complex plane does not depend on the path. So I'm assuming here that f of z is an analytic function; therefore the exponential of minus N f of z is an analytic function. So here we're assuming that this function is analytic. I only care, again, about the asymptotics, and since I only care about the asymptotics, I only need to deform the path a little bit, in such a way that the asymptotics only cares about what happens around this point. More questions? Go ahead. Very good. If you have many such points? This happens also in this case; that's a very good point. If you have many points, you apply this to each of the points. And actually, in stat mech, sometimes it happens that you have an equilibrium state and metastable states; metastable states appear when you take into account these different minima. So let me erase here. This is a very interesting question, thank you for that. Let's go back to the real case. Suppose I have a function f of x with more than one local minimum; let us call them x_{0,0}, x_{0,1}, up to x_{0,n}. Say you are integrating over the whole real line, for simplicity. So I have, again, the integral I_N equal to the integral from minus infinity to infinity of dx of the exponential of minus N f of x. What happens in this case, when N goes to infinity, again as asymptotic behavior, is that this goes like the sum over i from 0 to the small n of the square root of 2 pi over N times the second derivative at x_{0,i}, times the exponential of minus N f of x_{0,i}. Now, sometimes you need to keep the contribution from all the minima, but in other situations the only one you're interested in is the deepest minimum. Suppose, for instance, and I'm going to exaggerate this picture, that the deepest minimum is this one here, x_{0,n}. Then what is going to happen when N is very large? I can always write this as follows; let me forget about the prefactors. I can write this as the exponential of minus N f of x_{0,n}, times 1 plus the exponential of minus N times f of x_{0,0} minus f of x_{0,n}, et cetera, et cetera. When N goes to infinity, these extra terms are very small. So the most important contribution is the one that comes precisely from the deepest minimum. If for some reason, and this happens, actually Mateo has worked on this in applications to inference problems, you have to take into account contributions from other minima, in physics these are called metastable states. Good. More questions? Go ahead. So I repeat: how do I know there is a leading term, how do I know the others are not the leading behavior? Because they are subleading; they don't have the right exponential form in N. You see, normally this would be related to a partition function in stat mech.
I mean, I'm always interested in the logarithm of the partition function, which is the free energy. So whatever is not exponential in N in the partition function is subleading in the limit when N goes to infinity, when you go to the thermodynamic limit. There might be some case where subleading terms sum up to an exponential contribution, but I don't think so; I'm not sure. But you know this term is subleading in the sense that, if you take the logarithm of this expression and divide by N, this term goes to 0 when N goes to infinity. That was your question? Very good. More questions? Go ahead. This one here? Again, if you don't understand my writing, please yell at me; I can rewrite it. So the argument is the following. Suppose you have various minima. If you generalize our derivation, you get this sum of exponentials in minus N. And if you look at the behavior when N goes to infinity, the deepest minimum is the most important one in most cases of interest. How can I see this? Well, let us forget about the prefactors; they are not important when N goes to infinity. Let me focus on the deepest minimum, which is this one here, which I denoted x_{0,n}, and write that expression as follows. This goes like the exponential of minus N f of x_{0,n}, which I take out, and I subtract its argument from the arguments of the other exponentials. So then I have 1, which is the term I took out, plus the exponential of minus N times f of x_{0,0} minus f of x_{0,n}, that would be the first minimum, plus the exponential of minus N times f of x_{0,1} minus f of x_{0,n}, et cetera, et cetera. And then what happens? This is the deepest minimum, so the value at any other minimum minus the value at this one is positive, and you have a beautiful minus sign in front. So when N goes to infinity, the important term is the one I took out. More questions? OK. If you do this derivation, and it's very important that you do the derivations, what happens when, instead of a minimum, you have a maximum? What happens? The integral diverges there, OK? Somehow, what you are doing with this integral is the following: you are splitting the integral around the parts where you have minima, and you are forgetting the parts where you have maxima. It would be the same calculation, but the second derivative changes sign, and if you go back to the derivation in the real case, you then have the integral of a Gaussian weight with the sign the other way around, so the integral diverges. It doesn't make any sense. That's why, when I did the real case, and I leave it as an exercise for all of you to redo the derivation, it's very important that you do, the second derivative of the function at x0, for a real function, has to be positive. If it's negative, the integral is not well defined. More questions? No questions? Good? OK. And it's important that you do the derivation in the complex case too, because there, to get the details right, you have to choose the path such that the real part has the appropriate sign and the imaginary part is constant along the path, because what you want is to apply the Laplace method. The imaginary part is constant along the path, so it doesn't change; you take it out of the integral, and to the real part you apply the Laplace method. More questions? No? OK.
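To see the "deepest minimum wins" argument in numbers, here is a small sketch with a tilted double well of my own choosing; the function and constants are assumptions, not from the lecture:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Minimal sketch (assumed example): with several minima, the deepest one
# controls -(1/N) log I_N as N -> infinity.
f  = lambda x: (x**2 - 1.0)**2 + 0.5 * x       # tilted double well
df = lambda x: 4.0 * x * (x**2 - 1.0) + 0.5    # f'

x_deep    = brentq(df, -2.0, -0.5)   # deeper minimum, near x = -1
x_shallow = brentq(df,  0.5,  2.0)   # shallower minimum, near x = +1

for N in (10, 100, 1000):
    J, _ = quad(lambda x: np.exp(-N * (f(x) - f(x_deep))), -5.0, 5.0,
                points=[x_deep, x_shallow])
    # The rate converges to f at the deepest minimum, not the other one:
    print(N, f(x_deep) - np.log(J) / N, f(x_deep), f(x_shallow))
```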
So of course, this Laplace method, or steepest descent method, is a very powerful technique; it's used a lot in stat mech. As you see from the structure, it can be easily generalized. Suppose now that I have something like the following, but a multivariate integral: f is a map from R^n to R, and I'm doing the integral over R^n. So I have the integral of d^n x of the exponential of minus N f of x, where my notation d^n x means the product over i from 1 to n of dx_i. And again, I'm interested in the asymptotic behavior when N goes to infinity. If you do the derivation, and for those of you who didn't know this method the derivation is straightforward, the only thing you have to be careful about is that now you have a function of several variables. So the result is that this goes like the exponential of minus N f of x0, where x0, a vector in R^n, is such that the gradient of f evaluated at x equal to x0 is 0. And, coming from the Taylor expansion, in order for the integral to be well defined, the Hessian must be positive definite: the matrix of partial derivatives with respect to x_i and x_j of f, evaluated at x equal to x0, has to be positive definite. This is the Hessian matrix. Good, go ahead. Ah, this time I'm mixing notation; lovely, thank you. So let us call the dimension small n. Thank you for pointing that out. Where does this condition come from? Do you know? That's right. The matrix must be positive definite in such a way that, when you do the Taylor expansion, the quadratic term has a positive definite matrix, so the integral is well defined. If not, the integral is not well defined; it blows up. Good. In this area, when you do this type of derivation, this set of conditions for x0 to be an extremum, or, as it is sometimes called, a critical point, these are called the saddle point equations. Now, be careful, because now it's a function from a higher dimensional space to R. So when you have a critical point, an extremum, you have to classify what type of extremum it is: you can have minima, maxima, or saddle points. To apply this method, the extremum you take into account must be a minimum, because if there is a saddle point, then in the direction where you have a maximum the integral is going to blow up. So x0, let me put it here more explicitly, x0 has to be a minimum. Very good, more questions? Sometimes, when we do this type of derivation, what we say is the following: let us assume that it's a minimum, and then you carry on. Because sometimes, depending on what you are doing, checking these things is very difficult. And normally what happens is that, since you started from a well-defined problem, this has to be a minimum; otherwise the problem would not have been well defined from the beginning. So essentially nobody checks those conditions, unless you need them for something else, like, for instance, perturbation theory. Go ahead. Yeah, this one here. Yes, the Hessian matrix. So this is a matrix, because it depends on these two indices. When you write that a matrix is greater than zero, what you are saying is that the matrix is positive definite. The easiest way to state this is that all the eigenvalues are positive.
Or another condition is that all the leading principal minors are positive. But it's easier to think about this condition in terms of the eigenvalues of the matrix, yeah? You diagonalize, and if the eigenvalues are all positive, the matrix is positive definite, yeah? More questions? So, OK, if there are no more questions about saddle point methods, saddle point equations, et cetera, which I will use a lot, let us now go to another part: multidimensional Gaussian integrals. Again, if you take the book of Zinn-Justin, for instance, you'll find a lot of information there. It's this very thick book, like 2,000 pages; the black book of condensed matter physics and field theory. I forget the exact title. So what I'm interested in here is the following. Suppose I have a matrix A, an N times N matrix, and this N has nothing to do with the other N. For some reason, in some problem I'm interested in, the determinant of A appears, and what I want to do is write the determinant of the matrix in terms of integrals. You'll see why this is useful: I want to find a way to express this in terms of Gaussian integrals. So let's start with the simplest case: suppose that the matrix A is real, symmetric, and positive definite, yeah? Then one can show that 1 divided by the square root of the determinant of A is equal to the integral of d^N x divided by 2 pi to the power N over 2 of the exponential of minus one half x transpose A x. Sure? I guess you have seen this, at least the one-dimensional case. And how do you prove this? There are many ways to prove it; there are very good tricks and very weird tricks, like, for instance, splitting the matrix into a product of square roots of matrices and doing a rather weird transformation. Or there is a simpler one. How do you prove this? Sorry? This is a multivariate Gaussian distribution. By the way, good point: this is what is called the multivariate Gaussian distribution. It's missing the vector of mean values, but this is the multivariate Gaussian distribution. You can diagonalize it. Very good. The way you diagonalize it is the following. We know that, since A is a real symmetric matrix, it can be diagonalized by an orthogonal transformation. Suppose that O is the orthogonal matrix that diagonalizes A, so that O transpose A O is equal to Lambda, where Lambda is the diagonal matrix with the eigenvalues of A. Of course, since the matrix A is symmetric, the eigenvalues are real, and since I take it positive definite, the eigenvalues are positive; there are no zero or negative eigenvalues. Right. And then what do I do next? I do a change of variables. I say that x is equal to O x prime, so I go from x to x prime, where this O is precisely the orthogonal transformation that diagonalizes the matrix. And once I've done that, I do the change of variables. Let's do it step by step, OK? Because then I'll set an exercise which is a bit of a variation on this. So I have this expression here, x transpose A x, and I do the change of variables.
So the change of variables is that the vector x is O x prime. That means that this is x prime transpose O transpose A O x prime. But O transpose A O is the diagonal matrix, all right? So this is equal to x prime transpose Lambda x prime, which, let me put it explicitly, is equal to the sum over i from 1 to N of lambda_i x_i prime squared. I have not done anything yet, but I need to go through this at least for those people who have not seen it. And then, since I'm doing an integral, I have to change variables in the integration measure. That means that d^N x is equal to the absolute value of the determinant of the Jacobian of the transformation, normally denoted as partial x over partial x prime, times d^N x prime. But the transformation is an orthogonal transformation, and the determinant of an orthogonal matrix is plus or minus 1, because an orthogonal transformation can be a rotation, or a rotation combined with a reflection, and a reflection has determinant minus 1. In the change of variables you take the absolute value of the determinant, and since the determinant of O is plus or minus 1, its absolute value is 1. So the measure doesn't change. That means that this integral here, the integral of d^N x over 2 pi to the power N over 2 of the exponential of minus one half x transpose A x, is equal to the integral of d^N x prime over 2 pi to the power N over 2 of the exponential of minus one half the sum over i from 1 to N of lambda_i x_i prime squared. And now the integral has decoupled into one-dimensional Gaussian integrals, because this is equal to the product over i from 1 to N of the integral, and I'm not writing the limits, it's from minus infinity to infinity, sorry about that, of dx_i prime over the square root of 2 pi of the exponential of minus lambda_i x_i prime squared over 2. And this is equal to what? The result of the one-dimensional Gaussian integral that everybody knows, unless you make a mistake as I did before: 1 over the square root of lambda_i. So the whole thing is 1 over the square root of the product over i from 1 to N of the lambda_i. But the product of the lambda_i is the determinant of Lambda, and the determinant of Lambda is equal to the determinant of A. One more step; I'm being very explicit with the steps, just for those of you who have not seen this. So this is equal to 1 over the square root of the determinant of Lambda, because Lambda is a diagonal matrix, and its determinant is simply the product of the elements on the diagonal. And the determinant of Lambda is equal to the determinant of A, because the determinant of a matrix is one of its invariants: it doesn't change under a similarity transformation, and this is a similarity transformation. So therefore this is equal to 1 over the square root of the determinant of A. Good. So I've proven what I wanted to prove: that 1 over the square root of the determinant of a matrix A can be written in this way, as a Gaussian integral of the exponential of minus one half x transpose A x. Good. For the people who have seen this before, I apologize.
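A quick numerical sanity check of the identity just proved, on a 2-by-2 example matrix of my own choosing; for N = 2 the prefactor (2 pi)^{N/2} is just 2 pi:

```python
import numpy as np
from scipy.integrate import dblquad

# Minimal sketch (assumed matrix): for real symmetric positive definite A,
#   int d^N x / (2 pi)^{N/2} exp(-x^T A x / 2) = 1 / sqrt(det A),
# checked here by brute-force quadrature with N = 2.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
assert np.all(np.linalg.eigvalsh(A) > 0)   # positive definite

def integrand(y, x):                       # dblquad expects f(y, x)
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v)

val, _ = dblquad(integrand, -10, 10, lambda x: -10, lambda x: 10)
print(val / (2 * np.pi), 1.0 / np.sqrt(np.linalg.det(A)))   # should match
```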
For the people who have not seen this: you will see why we need it. Very good. Questions? Go ahead. This is a determinant? No, no: without these straight lines it would be the Jacobian of the transformation, which is a matrix. Yeah, this notation is a weird, compact notation that we sometimes introduce. What it means is the following: this is actually a matrix, the partial derivative of x with respect to x prime, and if I take the entry ij of this matrix, it means the partial derivative of x_i with respect to x_j prime. Since the transformation is the linear transformation I mentioned before, this is precisely the matrix of the linear transformation: the matrix is O, and the determinant of O is plus or minus 1. More questions, about notation or anything else? OK, very good. So, an exercise now, the following one. Suppose I have the following object: the N-dimensional integral of d^N x over 2 pi to the power N over 2 of the exponential of minus one half x transpose A x plus b transpose x, where b is simply a vector in R^N. We will use this at some point, so it's useful to write it down. In condensed matter physics or field theory, this b is what is called a generating field. In probability theory, where this Gaussian weight is a measure, a Gaussian measure, this object is the moment generating function, as simple as that, and the logarithm of it is the cumulant generating function. So from the physics point of view this is simply a generating field; in probability, it is the variable that generates the moments of a distribution, in this case a Gaussian distribution. Right. Also, for some weird reason, because these things sometimes happen, this result has another name: it is called the Hubbard-Stratonovich transformation. Very useful in condensed matter physics, very, very useful, for other reasons. Let me tell you in a moment why it is a cool trick. And what is the result? The result is simply the following: this is equal to the exponential of one half b transpose A inverse b, divided by the square root of the determinant of A. This is a 1, and this is a transpose, I apologize. So this A inverse is the inverse of the matrix A that appears here, and the exponent is one half the scalar product of b with A inverse b. Yeah? How do you prove this? I'll leave it as an exercise. Do you know why this is a cool trick? You read it in reverse. That's why the Hubbard-Stratonovich transformation is a very cool trick. That's right: sometimes you have something quadratic and you don't know what to do with it, and if you use the transformation the other way around, what is quadratic becomes linear. Very good.
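The exercise can at least be checked numerically before you prove it; a sketch with an assumed 2-by-2 matrix A and vector b of my own choosing:

```python
import numpy as np
from scipy.integrate import dblquad

# Minimal sketch (assumed A and b): check the generating-field formula
#   int d^N x / (2 pi)^{N/2} exp(-x^T A x / 2 + b^T x)
#       = exp(b^T A^{-1} b / 2) / sqrt(det A),   here with N = 2.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([0.3, -0.7])

def integrand(y, x):
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v + b @ v)

lhs, _ = dblquad(integrand, -12, 12, lambda x: -12, lambda x: 12)
rhs = np.exp(0.5 * b @ np.linalg.solve(A, b)) / np.sqrt(np.linalg.det(A))
print(lhs / (2 * np.pi), rhs)   # should match
```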
Let me see. OK, the final case, which we will need for the spectral density of non-Hermitian matrices, is the following one. Now suppose, and I'm using the same letters, I apologize, that A is an N times N matrix, now actually a complex matrix, and the only thing you ask of it is that the determinant of A is different from 0. That's the only condition: it can have real eigenvalues, complex eigenvalues, mixed; just that the determinant is nonzero. Then you can prove that 1 divided by the determinant of A can be written as follows: as the integral of the product over i from 1 to capital N of dz_i dz_i bar over 2 pi i, of the exponential of minus the sum over i and j from 1 to N of z_i bar A_ij z_j, where z_i and z_i bar are complex numbers. And if I do a slight modification of this, let me do it here: if I introduce a pair of complex vectors, b_i and b_i bar, a bunch of them, I can write that the exponential of the sum over i and j of b_i bar times A inverse ij times b_j, divided by the determinant of A, is equal to the integral of the product over i from 1 to N of dz_i dz_i bar over 2 pi i, of the exponential of minus the sum over i and j from 1 to N of z_i bar A_ij z_j, I can put this thing below, plus the sum over i from 1 to N of b_i bar z_i plus b_i z_i bar. And again, this is the equivalent of the Hubbard-Stratonovich transformation read the other way around. Or, these b's, you have a pair of them, are again generating fields. Of course, this has no meaning as a measure, because it might be complex, but sometimes you still call it a measure. So these would be, again, generating fields. Good. I'm going to leave it as an exercise for you to prove this, so please do it, because I might ask it in the exam. Yes, it's without the square root: in some cases, instead of the square root of the determinant, you have the determinant itself, and then you use this trick to represent it in terms of an integral. There is even an integral representation using Grassmann variables, but I'm not going to use that; that's too much. More questions? No, it's over the whole complex plane. You have to understand, and I'll let you work it out, what this thing means: it's better to write z_i equal to x_i plus i y_i, real and imaginary parts. What it means is that you are integrating the real part from minus infinity to infinity, and the imaginary part from minus infinity to infinity, et cetera. That's what it means. And it's 2 pi i, yes: you arrange it in such a way that when you go to the real and the imaginary parts, the i disappears, one factor of the square root of 2 pi goes with the real part and the other with the imaginary part, so you get the factors of the corresponding Gaussian integrals. Good. Questions? In this case, yes, in that exercise A is still a symmetric matrix; there it has to be. When you do the derivations, you have to pay attention to the conditions you need to arrive at this expression, and you will realize that A must be symmetric in that case. More questions?
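As a minimal check of the 1/det(A) formula, here is the N = 1 case, writing z = x + iy so that dz dz̄/(2 pi i) becomes dx dy/pi; I take a positive real a, the simplest case in which the integral converges, which is more restrictive than the det A ≠ 0 condition stated above:

```python
import numpy as np
from scipy.integrate import dblquad

# Minimal sketch, N = 1 only (assumed case): with z = x + i y the measure
# dz dzbar / (2 pi i) becomes dx dy / pi, and for real a > 0
#   int dx dy / pi * exp(-a (x^2 + y^2)) = 1 / a = 1 / det(A).
a = 1.7
val, _ = dblquad(lambda y, x: np.exp(-a * (x * x + y * y)),
                 -10, 10, lambda x: -10, lambda x: 10)
print(val / np.pi, 1.0 / a)   # should match
```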
OK, what time is it? Are you tired? It's the first day; you're not going to tell me that you are tired. Yeah, a couple more things, and I think we finish for today. And maybe we do some exercises, I don't know, we'll see. So, useful tricks now regarding Dirac deltas and Heaviside step functions, which we are going to use a lot. These will be useful expressions in our case. And whenever I refer to the Heaviside function, it's just the step function, the theta function, OK? The only results we are going to need, I think, are the following ones. The first one is this. Suppose you have the following expression: the limit as eta goes to 0 plus of 1 divided by x minus i eta. I claim that this is equal, in the sense of distributions, to the Cauchy principal part of 1 over x, plus i pi times the Dirac delta. So what does this equality with a D mean? It means equality in the sense of distributions. That means you take well-behaved functions; I'm not going to do the details of the theory. Suppose that phi is a well-behaved function, whatever that means, and you calculate the limit as eta goes to 0 plus of the integral of dx of phi of x divided by x minus i eta. This symbol, equal in the sense of distributions, means that you have equality when the operation is applied inside integrals. So if this function is well behaved, you can show that this is equal to the Cauchy principal part of the integral of phi of x over x, plus i pi phi of 0. Now, what is the Cauchy principal part? P means Cauchy principal part, and I'm going to explain what it means in notation. The Cauchy principal part of this integral, which again runs from minus infinity to infinity, for instance, means the following: you regularize by taking out the part where the integrand diverges, which is at x equal to 0. So you take the limit as epsilon goes to 0 of the integral from minus infinity to minus epsilon of phi of x over x dx, plus the integral from epsilon to infinity of phi of x over x dx. That is what this symbol means. This theorem goes under the name of the Sokhotski-Plemelj theorem, and actually it's not very difficult to prove; I'll let you prove it. Good? What? What this notation means? This is what is called the Cauchy principal part. Sometimes the integrand of an integral diverges, but the divergence can be removed. Suppose you have an integral, let me do it here, for simplicity from minus infinity to infinity, of a function f of x, and suppose the integrand diverges somewhere; even so, the integral can still be finite. Say the limit of f of x as x goes to x0 is infinity, for instance. Then what you do with this integral, sometimes the best thing to do, is the following: you define a new integral where you integrate around the singularity. For instance, you do the integral from minus infinity to x0 minus epsilon 1 of f of x dx, plus the integral from x0 plus epsilon 2 to infinity of f of x dx. So you remove the singularity from the integral, and the way you remove it might be different from the left and from the right. Then you take the limit as epsilon 1 and epsilon 2 go to 0 of this object, and you see what happens. In the case where epsilon 1 is equal to epsilon 2, which means you approach the singularity at the same rate from both sides, this is called the Cauchy principal part or Cauchy principal value. There are different ways to remove the singularity, and when these two epsilons are the same, the result of the integral is called the Cauchy principal value or Cauchy principal part. More questions?
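Here is a small numerical sketch of the Sokhotski-Plemelj formula, with a test function phi of my own choosing; scipy's quad computes the Cauchy principal value directly through its weight='cauchy' option:

```python
import numpy as np
from scipy.integrate import quad

# Minimal sketch (assumed test function): Sokhotski-Plemelj,
#   lim_{eta->0+} int dx phi(x)/(x - i eta) = P int dx phi(x)/x + i pi phi(0),
# using 1/(x - i eta) = (x + i eta)/(x^2 + eta^2) to split the left side.
phi = lambda x: np.exp(-(x - 1.0) ** 2)
eta = 1e-4

re, _ = quad(lambda x: phi(x) * x / (x * x + eta * eta),
             -30, 30, points=[0.0], limit=200)
im, _ = quad(lambda x: phi(x) * eta / (x * x + eta * eta),
             -30, 30, points=[0.0], limit=200)

# weight='cauchy' means the integrand is phi(x)/(x - wvar), principal value:
pv, _ = quad(phi, -30, 30, weight='cauchy', wvar=0.0)
print(re, pv)                  # real part -> the principal value
print(im, np.pi * phi(0.0))    # imaginary part -> pi * phi(0)
```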
In this case, since I said that phi of x is a very well-behaved function, essentially C infinity without singularities, the only singularity can come from the denominator, and since I'm taking the Cauchy principal part, that singularity is at x equal to 0. Otherwise you go to your supervisor and you tell him: what you asked me to do diverges, what can I do? But again, if you start with a problem which is well defined, then unless you screw up somewhere in the derivation, the singularity has to mean something: remove it or understand it. More questions? For the problems we are going to do, Chris, this will not happen. But maybe in the future it might happen that you find an infinity somewhere. More questions? Yeah, yeah. This appears, for instance, well, it depends on the context, but I know what you mean. What happens in that case is the following. Suppose that you have an integral that goes from minus infinity to infinity, and you have in the denominator this x minus i epsilon, or i eta, that we had before. Then you can do this integral by going to the complex plane. You know these tricks, right? You complexify the integral, and there is a theorem, or a lemma, called Jordan's lemma, that tells you that, depending on where the singularity, the pole, is, and on how you close the contour, the contribution that comes from the semicircle, when the semicircle goes to infinity, is 0. You know what I mean? And actually, this theorem you can prove that way: you complexify, use the residue theorem, et cetera, et cetera. But one thing I need to point out, because sometimes when students do this, they choose the way they close the contour so that the integral gives them something different from 0, and it doesn't work that way. So suppose you have the following integral, from minus infinity to infinity, of phi of x over x minus x0 minus i eta; I'm missing the dx all the time. You are doing an integral along the real line of this function phi, which is very well behaved, but in the complex plane you have a single pole, here at x0 plus i eta. Actually, let us do it more fancy: you have a pole here, above x0, and you want to do this integral. So what you do is you say: ah, OK, first I'm going to consider a finite integral, from minus R to R. I take the integral from minus R to R of phi of x over x minus x0 minus i eta, and I want to take the limit R going to infinity, because the integral I'm interested in is that one. But first I do it finite. And then you complexify the integral. You say: ah, this is related to an integral in the complex plane, of dz of phi of z over z minus x0 minus i eta, and I choose a closed path, right? And what is the closed path I choose? No, it's not what you think, all right? You do not choose the path that encloses the pole; you choose the path that makes the integral over the semicircle go to 0, OK? And that depends on the integrand. You never say: ah, the pole is above, so I'm going to close above, because I want to apply the residue theorem. It doesn't work that way: in the limit when R goes to infinity, you want the semicircle contribution to go to 0, because the integral you're interested in is the one along the real line. So you close the path in such a way that, on the semicircle, you can apply Jordan's lemma, and that contribution goes to 0 when R goes to infinity. More questions? It depends on the function.
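A sketch of this contour argument in numbers, with an integrand of my own choosing: 1/(x^2 + 4) decays like 1/x^2, so the semicircle contribution vanishes, and closing above plus summing residues must reproduce the real-line integral:

```python
import numpy as np
from scipy.integrate import quad

# Minimal sketch (assumed integrand): evaluate
#   I = int dx  1 / ((x^2 + 4)(x - x0 - i eta))
# by closing the contour in the upper half plane, which encloses the
# poles at z = x0 + i eta and z = 2i; the residue theorem then gives I.
x0, eta = 0.7, 0.3
g = lambda x: 1.0 / ((x * x + 4.0) * (x - x0 - 1j * eta))

re, _ = quad(lambda x: g(x).real, -np.inf, np.inf)
im, _ = quad(lambda x: g(x).imag, -np.inf, np.inf)

p = x0 + 1j * eta
residues = 1.0 / (p * p + 4.0) + 1.0 / (4j * (2j - p))
print(re + 1j * im, 2j * np.pi * residues)   # should match
```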
OK, five more minutes. One more useful expression, this time for the Heaviside function. It's the following. You have the complex plane, and I think about the logarithm of z. You know, this function has a branch cut at 0, and you can put the cut along the negative real axis, going from 0 to minus infinity, in such a way that if I approach the cut from one side or the other, there is a jump of 2 pi i. So if I take a point x plus i eta and compare the logarithm there with the one just below, at x minus i eta, and I let eta go to 0, I notice that there is a jump of 2 pi i. But only if x is in the negative part; if it's in the positive part, this is not true, because there, there is no jump. So therefore I can say that the theta function, the step function, of minus x is equal to the limit as eta goes to 0 plus of the logarithm of x plus i eta minus the logarithm of x minus i eta. I think I put it correctly; did I get the signs right? Work it out. And this is the Heaviside theta function: it's 0 when the argument is negative and 1 when the argument is positive. And when the argument is exactly 0, flip a coin; it doesn't matter for most cases. You see this equality here? If x is negative, you are taking this x on the negative axis and comparing points just above and just below the branch cut, and between the two values of the logarithm there is a jump of 2 pi i. Ah, sorry, I forgot something; I see why you're confused. This is the limit as eta goes to 0 plus of 1 divided by 2 pi i times the difference. Now, there is a jump of 2 pi i between the values of the logarithm above and below; if I divide by 2 pi i, this goes to 1 when x is negative. On the other side there is no jump, so this is 0. Good. Questions? Yes: the logarithm in the complex plane has a special singularity at 0. It is called a branch point, a branch singularity, and it comes with a cut that goes from the singularity at 0 out to infinity. That means that on this line the logarithm is not well defined, and if you compare the logarithm above and below, there is a difference. Yeah, and what happens is the following, and this is another thing: you can define what is called the principal branch of the logarithm, which is not the same as the Cauchy principal value from before, and you can take multiple sheets of the complex plane. When you go around, if you stay on the same sheet, there is a jump of 2 pi i; but you can glue this sheet to another copy of the complex plane, and it turns out that this function, which is not well defined on one copy of the complex plane, is well defined on infinitely many copies, the Riemann sheets, if I recall the name correctly. But if you stay on the same sheet, there is a jump. The same happens with other singularities, for instance when you have a square root. More questions?
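A two-line numerical check of this branch-cut identity, my own sketch, with the 1/(2 pi i) included:

```python
import numpy as np

# Minimal sketch: the branch-cut jump of the logarithm gives the step,
#   theta(-x) = lim_{eta->0+} (log(x + i eta) - log(x - i eta)) / (2 pi i).
eta = 1e-9
x = np.array([-2.0, -0.5, 0.5, 2.0])
jump = (np.log(x + 1j * eta) - np.log(x - 1j * eta)) / (2j * np.pi)
print(jump.real)   # ~ [1, 1, 0, 0], i.e. theta(-x)
```

OK, so that's it. I'll see you tomorrow. Thank you. See you at 7 PM for the get-together, here.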