Typically one ends up with some expression which is difficult to control. In some problems for gradient fields, similar computations for the second moment were recently done; I will mention them, maybe, when discussing the results. But in general this is a much more challenging problem than just calculating the expected value.

Okay, so let us now really start doing the computation. I just do what I promised to do: I average the delta-functional factor and the Jacobian factor separately. For the main part of this lecture, and maybe also a big part of the next lecture, I will concentrate on showing, not in all detail, but at least in its main steps, this computation for the total number of equilibria. At the end I will indicate how to deal with the number of stable equilibria.

So let us deal with how to average the delta function. The delta function is, in some sense, a nice object: we need to average $\delta(-\mu x + f(x))$, and I always prefer to do this using the Fourier representation of the delta function,
$$\delta\big(-\mu x + f(x)\big)=\int \frac{d^n k}{(2\pi)^n}\; e^{-i\mu\,\mathbf{k}\cdot\mathbf{x}}\, e^{\,i\,\mathbf{k}\cdot f(x)},$$
where $\mathbf{k}$ is the vector with components $k_1,\dots,k_n$. So the object I need to average is $e^{\,i\,\mathbf{k}\cdot f}$, but this is extremely simple since we know that $f$ is Gaussian. The average is just the standard Gaussian property
$$\big\langle e^{\,i\,\mathbf{k}\cdot f}\big\rangle=\exp\Big(-\tfrac12\,\big\langle(\mathbf{k}\cdot f)^2\big\rangle\Big),$$
just the variance. So we need to know the covariance properties of the various components of $f$.

One can recover them from the representation of $f$ that we have. We are interested in the covariance $\langle f_i(x)\,f_j(y)\rangle$. For this particular calculation we need only the covariance structure at coinciding points $x=y$, but in order to proceed further and calculate the covariance structure associated with the Jacobian, we need the knowledge of the full covariance of $f$. How does one do it? One just takes the definition of $f$, substitutes it, uses the independence of $V$ and $A$, and uses that our fields are smooth enough that we can interchange taking expectations and differentiating. So we do it in the standard, simple way: we just differentiate the covariances,
$$\langle f_i(x)\,f_j(y)\rangle=\frac{\partial^2}{\partial x_i\,\partial y_j}\big\langle V(x)\,V(y)\big\rangle+\frac{1}{n}\sum_{l,k}\frac{\partial^2}{\partial x_l\,\partial y_k}\big\langle A_{il}(x)\,A_{jk}(y)\big\rangle.$$

Here I forgot one technical but useful thing. The decomposition of the field as I showed it is quite general, but in order to ensure, with these definitions, a nice and natural large-$n$ limit, it is advisable (although not necessary, but advisable and convenient) to put a factor $1/\sqrt{n}$ in front of the divergence-free term. This is just to make the typical covariance of that term comparable with the typical covariance of the gradient term in the same normalization.

The first term above is expressed through derivatives of the covariance kernel $\Gamma_V$, and the second term, whose Kronecker-delta prefactor comes from the covariance of the antisymmetric field, is proportional to $\Gamma_A$. So it is a simple exercise in differentiation, and it really should be done as an exercise: after a little bit of algebra you find the full covariance of $f$ in this way. In particular, you find that at coinciding points, $x=y$, the covariance structure of the field is really simple; a quick symbolic check of the differentiation is sketched below.
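To make the differentiation exercise concrete, here is a minimal symbolic sketch (my own check, not from the lecture; the abstract kernel symbol `Gamma_V` and the choice $n=3$ are illustrative): differentiating the gradient-part covariance and then setting $x=y$ reproduces the $\delta_{ij}$ structure quoted next.

```python
# Sympy sketch: for the gradient part of f,
#   <f_i(x) f_j(y)> = d^2/(dx_i dy_j) Gamma_V(|x - y|^2),
# evaluated at coinciding points x = y.
import sympy as sp

n = 3  # illustrative dimension
x = sp.symbols(f'x0:{n}')
y = sp.symbols(f'y0:{n}')
G = sp.Function('Gamma_V')  # abstract covariance kernel

r2 = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
cov_V = G(r2)  # <V(x) V(y)> = Gamma_V(|x - y|^2), up to the factor v^2

for i in range(n):
    for j in range(n):
        expr = sp.diff(cov_V, x[i], y[j])
        at_coinciding = sp.simplify(expr.subs({yk: xk for xk, yk in zip(x, y)}))
        print(i, j, at_coinciding)
# Prints -2 * Gamma_V'(0) when i == j and 0 otherwise (sympy displays the
# derivative at zero via Subs), i.e. the -2 v^2 Gamma_V'(0) delta_ij term below.
```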
The result is
$$\big\langle f_i(x)\,f_j(x)\big\rangle=\Big[-2\,v^2\,\Gamma_V'(0)\;-\;2\,a^2\,\frac{n-1}{n}\,\Gamma_A'(0)\Big]\,\delta_{ij},$$
so $\delta_{ij}$ is in fact a common factor. We see that this is a very simple covariance structure. We then immediately substitute it into the average above, the resulting integral over $\mathbf{k}$ is Gaussian, and we perform this averaging very cheaply. Interestingly, the result of this calculation is extremely simple. Because of stationarity, the only dependence on $x$ was in the delta-functional factor; the expected value of the Jacobian factor is independent of $x$. So one can also perform the integral over $x$: averaging the delta function gives the Gaussian density of $f$ evaluated at $\mu x$, and after the change of variables $u=\mu x$ the only important factor which comes from this delta-functional calculation and the subsequent integration is $1/\mu^n$.

So we see that the problem of finding the mean number of equilibria of our dynamical system is a purely random matrix problem. It is just the problem of finding the expected value of the modulus (the modulus is very important!) of the determinant of a random matrix; if you like, of the modulus of the characteristic polynomial of this random matrix. It is a typical random matrix problem. So the next step is to understand the properties of this random matrix, the Jacobian. In the remaining time of this lecture I briefly discuss what this matrix is; the assumptions of homogeneity and rotational invariance (isotropy) will result in a very nice structure of the corresponding Jacobian.

Proceeding by exactly the same steps as before, that is, calculating the covariances explicitly for different $x$ and $y$ and then differentiating once more over the components, we recover the covariance structure of the Jacobian entries $\partial f_i/\partial x_j$. This is a somewhat long calculation, so I do not even give it as an exercise; there is nothing special in it, just differentiate accurately, but it is long. When you do this, you get the following result:
$$\Big\langle\frac{\partial f_i}{\partial x_n}\,\frac{\partial f_j}{\partial x_m}\Big\rangle\;\propto\;(1+\varepsilon_n)\,\delta_{ij}\,\delta_{nm}\;+\;(\tau-\varepsilon_n)\,\big(\delta_{im}\,\delta_{jn}+\delta_{in}\,\delta_{jm}\big).$$
I think it is correct. This is the exact result of the differentiation for any $n$, where $\varepsilon_n=(1-\tau)/n$ and $\tau$ is exactly the parameter $\tau$ given before.

Now, we are eventually interested in understanding this dynamical system when the number of degrees of freedom, the number of equations, is big, so $n$ should be considered a big parameter. Then it is clear that $\varepsilon_n$ tends to zero, and we will neglect it; we will not consider it any longer. When this is done, calling our Jacobian $J_{ij}=\partial f_i/\partial x_j$, we see that this random matrix is a mean-zero Gaussian matrix with the following structure: up to an overall scale constant, it has the same law as the combination
$$J_{ij}\;\stackrel{d}{=}\;X_{ij}+\sqrt{\tau}\,\psi\,\delta_{ij},$$
where $\psi$ is a mean-zero, variance-one real Gaussian random variable independent of $X$, and the matrix entries $X_{ij}$ have the following simple covariance structure:
$$\big\langle X_{ij}\,X_{nm}\big\rangle=\delta_{in}\,\delta_{jm}+\tau\,\delta_{im}\,\delta_{jn}.$$
A numerical sketch of this ensemble, and of the determinant average we are after, is given below.
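Here is a minimal numerical sketch (my own construction, not from the lecture): it realizes the covariance of $X$ by interpolating between a symmetric and an antisymmetric Gaussian matrix, checks the covariance, and Monte-Carlos a determinant average of the type we need. The normalization $X/\sqrt{n}$, the values of $\tau$ and $\mu$, and the omission of the diagonal $\psi$-shift are assumptions made for brevity.

```python
# Sample X with <X_ij X_nm> = d_in d_jm + tau * d_im d_jn by mixing a symmetric
# and an antisymmetric Gaussian matrix, then estimate E|det(X/sqrt(n) - mu I)|.
import numpy as np

rng = np.random.default_rng(0)

def elliptic_real(n, tau, rng):
    g = rng.standard_normal((n, n))
    s = (g + g.T) / np.sqrt(2)  # symmetric part:     <S_ij S_nm> = d_in d_jm + d_im d_jn
    a = (g - g.T) / np.sqrt(2)  # antisymmetric part: <A_ij A_nm> = d_in d_jm - d_im d_jn
    # s and a are jointly Gaussian with zero cross-covariance, hence independent
    return np.sqrt((1 + tau) / 2) * s + np.sqrt((1 - tau) / 2) * a

n, tau, mu, trials = 50, 0.5, 1.5, 2000
xs = np.array([elliptic_real(n, tau, rng) for _ in range(trials)])
print("Var(X_01)    (should be ~ 1)  :", xs[:, 0, 1].var())
print("E[X_01 X_10] (should be ~ tau):", (xs[:, 0, 1] * xs[:, 1, 0]).mean())

# E|det(X/sqrt(n) - mu I)|, using slogdet to avoid overflow at large n
log_abs_dets = [np.linalg.slogdet(x / np.sqrt(n) - mu * np.eye(n))[1] for x in xs]
print("E|det(X/sqrt(n) - mu I)| ~", np.exp(log_abs_dets).mean())
```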
You may ask what this is. We have real Gaussian matrices whose entries have this covariance structure, plus a simple extra term which is diagonal and random. So let us write down the joint probability density of the entries of the matrix $X$; then you immediately recognize the relation to something well known to us. The joint probability density, which can of course be easily read off from this covariance structure of our matrix $X$, is, up to a normalization factor which is easy to calculate,
$$P(X)\;\propto\;\exp\Big[-\frac{1}{2(1-\tau^2)}\,\mathrm{Tr}\big(X X^{T}-\tau X^2\big)\Big].$$

So what can we infer from it? Here $\tau$ is a parameter between zero and one which controls the relation between the purely gradient part and the divergence-free part. If $\tau=0$, then we do not have the $\mathrm{Tr}\,X^2$ term, the corresponding factor is one, and I hope you all recognize the object that we investigated for two lectures: this is exactly the real Ginibre ensemble. So if there is no potential component, the one which ensures gradient descent, so if the field is purely divergence-free, then our $X$ is just the real Ginibre ensemble. About that limit we already know a lot, and probably we can use this knowledge to calculate our main object of interest, the expected value of the modulus of the determinant. Although we did not calculate that object directly in our lectures, I can give some hints, or even discuss how to do it.

However, this is only a limiting case; there is a much richer structure behind it. What about the second limit, $\tau=1$, which is the pure gradient case? At first glance, something not very pleasant, because we clearly have a divergence in the exponent when $\tau$ tends to one. But if one takes into account also the normalization constant, which also depends on $\tau$, then it is easy to check that this limit is very simple: the divergence is nothing else than imposing a delta-functional constraint that the matrix $X$ becomes symmetric, equal to its transpose. Indeed, splitting $X$ into symmetric and antisymmetric parts, the antisymmetric part has variance proportional to $1-\tau$, which vanishes in this limit. So then we are back to real Gaussian symmetric matrices, and one can check that this is exactly the distribution known as the Gaussian Orthogonal Ensemble. So our Jacobian, at least the main interesting part of our Jacobian, not the full thing, is for purely gradient dynamics very simply related to matrices from the Gaussian Orthogonal Ensemble. And now we have the whole life in between.

How much time do I have? Five minutes. So let us briefly discuss the properties of this ensemble; it will be clear in a moment why it is known by its name. For general $\tau$, this is a well-known ensemble studied in random matrix theory: the Gaussian real elliptic ensemble. "Gaussian" is obvious, "real" is also obvious; "elliptic" is less obvious, but we will understand it in a moment (an empirical illustration is sketched below). Basically, the new ingredient here is just the $\mathrm{Tr}\,X^2$ term. Since it is the trace of $X^2$, it obviously depends only on the eigenvalues of $X$, so it is a relatively benign and simple change of measure, and all the machinery that we developed in two lectures can, with due effort, be extended to this ensemble. And you won't be surprised that we find a very similar structure: in particular, this is again a Pfaffian ensemble.
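Here is the promised illustration of the name (my own sketch, using the same sampling construction as above): for large $n$ the eigenvalues of $X/\sqrt{n}$ fill an ellipse with semi-axes $1+\tau$ along the real axis and $1-\tau$ along the imaginary axis, interpolating between the circular law at $\tau=0$ and Wigner's semicircle on $[-2,2]$ at $\tau=1$.

```python
# Why "elliptic": the spectrum of X/sqrt(n) fills an ellipse with semi-axes
# 1 + tau (real direction) and 1 - tau (imaginary direction).
import numpy as np

rng = np.random.default_rng(1)
n, tau = 400, 0.5
g = rng.standard_normal((n, n))
s, a = (g + g.T) / np.sqrt(2), (g - g.T) / np.sqrt(2)
x = np.sqrt((1 + tau) / 2) * s + np.sqrt((1 - tau) / 2) * a
ev = np.linalg.eigvals(x / np.sqrt(n))

print("max |Re(ev)|:", np.abs(ev.real).max(), " vs 1 + tau =", 1 + tau)
print("max |Im(ev)|:", np.abs(ev.imag).max(), " vs 1 - tau =", 1 - tau)
# tau = 0 recovers the circular law of the real Ginibre ensemble; as tau -> 1
# the ellipse collapses onto the segment [-2, 2], the GOE semicircle.
```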
All correlation functions of its eigenvalues will be given by Pfaffians, but I am mostly interested in the mean density, so I will give you the expression for the mean density of the complex eigenvalues; then, of course, you can always check the limit $\tau=0$ and see that these formulas revert to those which we discussed in the last lecture. The density of complex eigenvalues $z=x+iy$ is given by an expression which, on the surface, is not very different from the Ginibre case. Of course $\tau$ appears naturally in these expressions: for instance, in the error-function prefactor it is not just $\sqrt{2}$ but $\sqrt{2/(1-\tau^2)}$, but otherwise the structure is the same, and then there is the kernel $K_n(z,\bar z)$, exactly the same structure as before. Unfortunately, the kernel itself is more complicated; in fact, the only difference with the Ginibre ensemble is that this kernel is more complicated. I will give it explicitly, and this will be the last bit for this lecture:
$$K_n(z,\bar z)\;=\;\sum_{j=0}^{n-2}\frac{\psi_{j+1}(z)\,\psi_{j}(\bar z)-\psi_{j}(z)\,\psi_{j+1}(\bar z)}{j!}.$$
Again, I consider only matrices of even size; I hope I wrote the antisymmetrization correctly, but I should check. New objects come into the game here: these $\psi_k$. They are in fact proportional to Hermite polynomials of a complex argument, $\psi_k(z)\propto h_k(z)$, where $h_k$ is precisely the Hermite polynomial, which I give in its integral representation because this is the most handy form for the calculation of various asymptotics:
$$h_k(z)\;=\;\int_{-\infty}^{\infty}\frac{dt}{\sqrt{2\pi}}\;e^{-t^2/2}\,\big(z\pm i\,t\sqrt{\tau}\,\big)^{k}$$
(both signs give the same result). Now, the last bit: if $\tau$ is equal to zero, the $t$-dependent term is not operative, and $h_k$ reduces simply to $z^k$. It is then easy to check that, because of the antisymmetry of the kernel, a factor $z-\bar z=2iy$ comes out, which combines with the prefactor, and the remaining sum $\sum_{j=0}^{n-2}|z|^{2j}/j!$ produces the incomplete gamma functions that appeared before, so we get back the formulas that we had last time. But otherwise this is very explicit, and a similar, a little bit more complicated, formula exists for the density of real eigenvalues. A numerical check of the integral representation for $h_k$ is sketched below. I think I will stop at this point, and we will apply this to the analysis of the modulus of the determinant in the next lecture. Thank you.
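A minimal numerical sanity check of the integral representation (my own sketch, assuming the normalization $h_k(z)=\mathbb{E}\big[(z+i\sqrt{\tau}\,t)^k\big]$ with $t$ a standard normal variable): the integral equals $\tau^{k/2}\,\mathrm{He}_k(z/\sqrt{\tau})$, with $\mathrm{He}_k$ the probabilist's Hermite polynomial, and degenerates to $z^k$ as $\tau\to 0$.

```python
# Check: h_k(z) = E[(z + i sqrt(tau) t)^k], t ~ N(0,1),
# equals tau^(k/2) * He_k(z / sqrt(tau)).
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # evaluates probabilist's He_k

rng = np.random.default_rng(2)
tau, k, z = 0.7, 5, 1.3 + 0.4j  # illustrative values

t = rng.standard_normal(2_000_000)
monte_carlo = np.mean((z + 1j * np.sqrt(tau) * t) ** k)

coeffs = np.zeros(k + 1)
coeffs[k] = 1.0  # select He_k
closed_form = tau ** (k / 2) * hermeval(z / np.sqrt(tau), coeffs)

print("integral representation (MC):", monte_carlo)
print("tau^(k/2) He_k(z/sqrt(tau)) :", closed_form)  # agree to a few decimals
print("tau -> 0 limit would give z**k =", z ** k)
```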