So let's start the third lecture of Yan Fyodorov. OK, another talk after lunch at 1 o'clock. Right. So after these two lectures of preparation, discussing properties of the Ginibre ensemble, I can really start discussing the problem which was the motivation for all that: the problem of saying something about a system of coupled ODEs, which I will write in the following form. So x is an n-component real vector, and we have a system of n coupled autonomous ODEs, dx_i/dt = -mu x_i + f_i(x). Following May's idea, I separate out the simple linear part proportional to the coefficient mu, so that if we forget about the interaction part, which will eventually be nonlinear, this is precisely the generalization of the May model which we propose. In the absence of the interaction there is simple exponential relaxation to the equilibrium at zero. But now let us try to build a class of models following simple guiding principles: they should be rich enough, and at the same time they should allow analytical treatment — they should be tractable. This is the general idea behind random matrix theory, so let us try to extend it to the nonlinear setting. The first and probably simplest interesting type of dynamics, though not the most general and not the most interesting, is purely gradient dynamics. We assume that the vector field f is the gradient of some potential function V(x), and, being interested in randomly coupled ODEs, a good model is to take V to be a random Gaussian field with mean zero and prescribed covariance. In principle one can consider different types of covariances, but the most studied and the easiest to treat by analytical means — probably not the most natural, but the most tractable — is a Gaussian stationary field, and not only stationary but also isotropic, meaning invariant with respect to rotations.
So basically its covariance depends only on the Euclidean distance in this space. In this situation our dynamics can be written as gradient descent in a landscape — for gradient dynamics it is always useful to think of descent in some landscape, it helps. The landscape is L(x) = mu x^2/2 + V(x), and we can visualize it. The first term, with mu positive — we always consider positive mu — controls the relaxation towards the origin in the absence of interaction. So from the point of view of gradient descent, the combined potential consists of two parts: a random part and a fixed parabolic part. How should one think about this? We have a parabola whose curvature is controlled by mu, and on top of it a stationary, eventually isotropic, process. For large x the parabolic confinement will always dominate, so far enough out everything is simply driven towards the origin. But close to the bottom the random potential will generate ripples, and so may generate local minima — and in high dimension there will be many of them. By decreasing mu, making the confinement more shallow, we get larger and larger regions affected by the addition of the stationary random potential. This is an interesting model by itself; models of this type appeared in physics long ago in connection with certain versions of spin glasses, attracted some attention, and one can study their various properties. However, from the point of view of general systems of coupled differential equations, and especially having in mind the applications to neural networks and to ecology, it is clear that gradient descent is only a very special type of dynamics. So we need to look at a more general model.
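As a toy illustration of this picture — one dimension only, with a sum of random cosine waves as a smooth stand-in for the stationary Gaussian process; the wave construction and all parameter values below are my own choices, not from the lecture — one can count the local minima of L(x) = mu x^2/2 + V(x) on a grid and watch them proliferate as the confinement becomes shallower:

```python
import numpy as np

rng = np.random.default_rng(4)
M = 30
k = rng.normal(size=M) * 2.0              # random wave numbers
ph = rng.uniform(0, 2*np.pi, M)           # random phases

def V(x):
    # smooth random "ripple" potential (illustrative stand-in for the Gaussian field)
    return np.sqrt(2/M) * np.cos(np.outer(x, k) + ph).sum(axis=1)

def L(x, mu):
    # combined landscape: parabolic confinement plus random ripples
    return mu * x**2 / 2 + V(x)

def count_local_minima(mu, grid):
    y = L(grid, mu)
    # interior grid points strictly below both neighbours
    return int(np.sum((y[1:-1] < y[:-2]) & (y[1:-1] < y[2:])))

grid = np.linspace(-20, 20, 200001)
for mu in (5.0, 0.5, 0.05):
    print(mu, count_local_minima(mu, grid))
# smaller mu (shallower confinement) leaves a larger region where the
# ripples dominate, hence more local minima
```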
And then the question arises how to build such a model. A possible way, which we followed — when I say we, I refer to the construction proposed in our work published last year in collaboration with Boris Khoruzhenko — is the following. We would like a nice and quite general model for the vector field. So what can one say about vector fields in Euclidean space? A natural generalization of gradient dynamics comes from the language of differential forms, so let me use it. With every smooth enough vector field f, with components f_1 to f_n, one can naturally associate — in Euclidean space, or generically on any Riemannian manifold — a one-form: omega = sum over i of f_i dx_i. On the other hand, functions like our potential V(x) are just scalar functions, that is, zero-forms. The relation to gradient dynamics is that if we restrict ourselves to gradient fields, we are saying that this one-form is the exterior derivative of a zero-form: it is the special type of form sum over i of (dV/dx_i) dx_i, which can be thought of as the exterior differentiation operator d acting on a zero-form alpha. So the gradient case is just a special case. Now, it is well known — usually people consider this on compact Riemannian manifolds without boundary, but with due effort one can also discuss it in Euclidean space — that there exists the famous Hodge, or sometimes called Hodge-Kodaira, decomposition of forms. It is a beautiful and general statement, but I am interested particularly in one-forms.
Then — although what I write is generally valid for k-forms — the decomposition says that every form can be written as an exterior derivative plus a co-derivative plus a harmonic form. If omega is a k-form (in our case k = 1), then omega = d alpha + delta beta + gamma, where alpha is a (k-1)-form, delta is the co-derivative or Hodge codifferential operator acting on some other form beta which is a (k+1)-form, and gamma is a form known to be harmonic — I will explain what harmonic means. In our simple setting, with k = 1, we need beta to be a two-form. A general two-form can be written explicitly as a sum — usually one writes it over l smaller than k — beta = sum over l < k of a_lk dx_l wedge dx_k. I will prefer to sum over all l and k and take the coefficients antisymmetric, a_lk = -a_kl; that will be more natural for me. Here the a_lk are some smooth functions of the coordinates x. If the Riemannian metric is given, one can write the action of this Hodge codifferential operator explicitly, and the situation is especially simple in flat Euclidean space: delta beta = sum over i, j of (da_ji/dx_j) dx_i. So this provides a very general decomposition of any one-form, and therefore, going back to coordinates, a representation of our vector fields. The only bit one should still discuss is the harmonic forms. Harmonic means that they are annihilated by the Laplacian, which is known to be given by the combination of the differential and codifferential operators. In our setting this is nothing unusual: it just means satisfying the Laplace equation.
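Since the board formulas do not survive in a transcript, here is my rendering of the decomposition just described, with the notation of the lecture:

```latex
% Hodge(-Kodaira) decomposition of the one-form \omega = \sum_i f_i\, dx_i:
\omega \;=\; \underbrace{d\alpha}_{\text{gradient part}}
\;+\; \underbrace{\delta\beta}_{\text{solenoidal part}}
\;+\; \underbrace{\gamma}_{\text{harmonic: } \Delta\gamma = 0},
\qquad \Delta = d\delta + \delta d .

% For k = 1: \alpha is a function, \beta a two-form with antisymmetric
% coefficients, and in flat Euclidean space the codifferential acts as
\beta = \sum_{l,k} a_{lk}\, dx_l \wedge dx_k,
\quad a_{lk} = -a_{kl},
\qquad
\delta\beta = \sum_{i,j} \frac{\partial a_{ji}}{\partial x_j}\, dx_i .
```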
But if we impose the condition that we are interested in fields which do not grow — which remain bounded or decay at infinity — then the harmonic part drops out: harmonic functions, if they are bounded, are constant, so they are not interesting, and one need not consider them in our setting. Summarizing, we get explicitly the following model for the components of our vector field: f_i(x) = -dV/dx_i + sum over j from 1 to n of dA_ij(x)/dx_j, with A_ij = -A_ji. The first term is the gradient, coming from the d-alpha part of the decomposition; the second term is precisely what comes from the delta-beta part, and is exactly what we need to go beyond gradient dynamics. So this is the model we would like to consider as our right-hand side: it includes a gradient part, responsible for gradient descent, and a non-gradient part. Just to give some intuition for the physicists in the audience: this is the high-dimensional analogue of what is frequently mentioned in electrodynamics courses in three dimensions, the so-called Helmholtz theorem, that any vector field can be represented as a gradient part plus the curl of some potential. So let us work with this model. Our assumptions will be that V and all the A's are Gaussian random fields, independent of each other and, for different indices, component-wise independent, just respecting the antisymmetry. And they are not only Gaussian but nice Gaussians: they are stationary, with a covariance which makes them smooth in every realization, so one can differentiate them, and they are also isotropic. With mean zero, it all amounts to specifying their covariance.
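A minimal numerical sketch of this model, using random cosine waves as smooth stand-ins for the Gaussian fields V and A_ij (the wave construction, dimensions and step sizes are my own illustrative choices): it builds f_i = -dV/dx_i + sum_j dA_ij/dx_j with antisymmetric A and checks that the non-gradient part is indeed divergence free, which follows analytically from the antisymmetry.

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 3, 20  # dimension, number of random waves per field

kV, pV = rng.normal(size=(M, n)), rng.uniform(0, 2*np.pi, M)
kA = rng.normal(size=(n, n, M, n))
pA = rng.uniform(0, 2*np.pi, (n, n, M))

def V(x):
    return np.cos(kV @ x + pV).sum() / np.sqrt(M)

def A(i, j, x):
    # antisymmetric by construction: A_ij = raw_ij - raw_ji
    raw = lambda a, b: np.cos(kA[a, b] @ x + pA[a, b]).sum() / np.sqrt(M)
    return raw(i, j) - raw(j, i)

def grad(g, x, h=1e-5):
    # central-difference gradient of a scalar function
    e = np.eye(n)
    return np.array([(g(x + h*e[i]) - g(x - h*e[i])) / (2*h) for i in range(n)])

def sol_part(x):
    # solenoidal part: s_i = sum_j dA_ij/dx_j
    return np.array([sum(grad(lambda y: A(i, j, y), x)[j] for j in range(n))
                     for i in range(n)])

def f(x):
    # the model field: f_i = -dV/dx_i + sum_j dA_ij/dx_j
    return -grad(V, x) + sol_part(x)

def divergence(field, x, h=1e-4):
    e = np.eye(n)
    return sum((field(x + h*e[i])[i] - field(x - h*e[i])[i]) / (2*h)
               for i in range(n))

x0 = rng.normal(size=n)
print(f(x0))                              # one sample of the vector field
print(abs(divergence(sol_part, x0)))      # ~0: the A-part is divergence free
```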
So this will be our model; in particular, the covariance. We should specify the covariance structure <V(x) V(y)> for two points x and y in our space, and we take it to be v^2 Gamma_V(|x - y|^2) for some parameter v^2; this is our assumption of homogeneity and isotropy. Similarly for the A's: <A_ij(x) A_lm(y)> = a^2 (delta_il delta_jm - delta_im delta_jl) Gamma_A(|x - y|^2). The antisymmetry and the independence of all components are taken care of by these Kronecker deltas; the minus sign is of course just a consequence of the assumed antisymmetry of A_ij. We assume that the covariance functions Gamma_V and Gamma_A are the same for all values of the dimension n. It is then known that they can be written as Laplace transforms of some positive spectral densities; we have finite mass, and we adopt the convenient normalization that the second derivative of Gamma_{V,A} at 0 equals 1. Then v^2 and a^2 set the relative magnitudes of the potential and divergence-free parts. In the physics literature one frequently calls the potential part longitudinal, and the part which generalizes the curl and is responsible for rotations transversal — this comes from the Fourier representation. Otherwise, in the terminology I will mostly use, they are called the gradient and solenoidal parts, or the curl-free and divergence-free parts. OK, so we have now specified our model, and we can start asking the questions we would like to answer — there are many more questions than those we will actually be able to answer. The first question of this sort is counting equilibria of this system. What are equilibria? Equilibria are the positions, the vectors x, which make the right-hand side equal to zero; this is the definition of an equilibrium. And we would like to solve two problems.
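A quick sanity check of the stationarity-and-isotropy assumption, using a random-wave (random Fourier features) construction of a scalar field — entirely my own illustrative stand-in, not from the lecture. With standard-normal wave vectors the covariance of this field works out to exp(-|x - y|^2 / 2), so it depends on the two points only through their Euclidean distance, as required:

```python
import numpy as np

rng = np.random.default_rng(1)
n, M, R = 2, 64, 20000  # dimension, waves per realization, realizations

def sample_field_at(points):
    """One realization of the random-wave field, evaluated at the given points."""
    k = rng.normal(size=(M, n))            # isotropic random wave vectors
    phi = rng.uniform(0, 2*np.pi, M)
    return np.sqrt(2.0/M) * np.cos(points @ k.T + phi).sum(axis=1)

# Two pairs of points at the same Euclidean distance d but in different directions.
d = 0.7
pts = np.array([[0, 0], [d, 0],
                [0, 0], [d/np.sqrt(2), d/np.sqrt(2)]])
vals = np.array([sample_field_at(pts) for _ in range(R)])

cov1 = np.mean(vals[:, 0] * vals[:, 1])   # covariance of the axis-aligned pair
cov2 = np.mean(vals[:, 2] * vals[:, 3])   # covariance of the diagonal pair
print(cov1, cov2, np.exp(-d**2/2))        # all three should be close
```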
We would like to provide some information on how many equilibria exist in this system, depending on the parameters mu, v^2 and a^2. Maybe it is good to mention already that v and a will enter the theory only in a combination: I will introduce tau = v^2 / (a^2 + v^2), which lies between 0 and 1. Purely gradient descent corresponds to a = 0 and therefore to tau = 1; tau = 0 is the limiting case of a purely divergence-free field; and anything in between is possible. We will see that this is really the second important parameter. The first parameter, just to remind you, will be basically May's parameter m, which is more or less mu divided by the square root of n and the total variance of the random fields — the same parameter which appeared in May's linear model. So, back to the problem: we would like to count equilibria. Let us start smoothly, for those who never thought about how to count zeros of a function. Suppose we have a smooth function f(x) of a single variable x with only isolated zeros — no degenerate double zeros. What formula gives the number of these simple zeros in an interval [a, b] of the real axis? The formula is very easy to write — formally. The number of zeros in [a, b] is the integral from a to b of delta(f(x)) |f'(x)| dx, using the Dirac delta function; I assume f is very smooth, at least twice differentiable. This is a very simple and nice formula: basically, it is nothing else than the definition of the Dirac delta function.
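The one-variable counting formula can be checked numerically — a sketch with my own choices of test function, smoothing width and grid: replace the Dirac delta by a narrow Gaussian and integrate delta(f(x)) |f'(x)| over the interval; each simple zero contributes one.

```python
import numpy as np

# Test function with two simple zeros (pi and 2*pi) inside [0.5, 9].
f  = np.sin
fp = np.cos   # f'

# Narrow Gaussian as a smoothed Dirac delta; integrate delta_eps(f(x))|f'(x)|
# on a fine grid: each simple zero contributes approximately 1.
eps = 0.01
x = np.linspace(0.5, 9.0, 400001)
delta_eps = np.exp(-f(x)**2 / (2*eps**2)) / (eps * np.sqrt(2*np.pi))
count = np.sum(delta_eps * np.abs(fp(x))) * (x[1] - x[0])
print(count)  # close to 2.0
```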
But it is really nice, because for any smooth enough function, if you understand correctly the meaning of the delta function, it works: it is of course a formal formula, since the delta function is a distribution, but nevertheless you can work with it very efficiently. In particular, you can calculate the expected value of this count, and under the expectation everything is really well defined — this is what is known as the Kac-Rice formula. This is its simplest instance, discovered more or less during the Second World War independently by Mark Kac, the famous probabilist, and by Stephen Rice — well known too, though maybe less so; I think he worked in engineering, but with probabilistic methods. They discovered it for different purposes: Kac wanted to count real zeros of random polynomials, and Rice was interested in characterizing how frequently a given random function — say, a noise — crosses a given level. They more or less used this formula without explicitly writing the delta function, instead using some Fourier representation for it. OK, this is just how to count zeros of a function of one variable. What should one do with a system of equations, f_1(x) = 0, f_2(x) = 0, and so on up to f_n(x) = 0? To count the number of solutions of the system — again assuming every solution is isolated, i.e. the zeros are simple and there is no other zero in a vicinity of each one — there is a similar formula. The number of zeros in some domain D is the integral over D (here it was the integral from a to b) of the following object: the product of delta functions, product over i from 1 to n of delta(f_i(x)), and now the analogue of the |f'| factor is of course the modulus of the Jacobian determinant, |det(df_i/dx_j)|, integrated dx_1 ... dx_n. Formally, this gives you the number of zeros in any domain D.
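The multivariate version can be tested the same way — a sketch with a hand-picked two-dimensional system (the test system, smoothing width and grid are my own choices): the circle x^2 + y^2 = 1 intersected with the line y = x has exactly two simple solutions, and the smoothed integral of delta(f_1) delta(f_2) |det J| recovers that count.

```python
import numpy as np

# System f1 = x^2 + y^2 - 1, f2 = y - x: two simple zeros at (+-1/sqrt(2), +-1/sqrt(2)).
eps = 0.02
g = np.linspace(-2.0, 2.0, 1601)
X, Y = np.meshgrid(g, g, indexing="ij")
f1, f2 = X**2 + Y**2 - 1, Y - X

# Jacobian J = [[2x, 2y], [-1, 1]], so det J = 2x + 2y.
detJ = np.abs(2*X + 2*Y)

# Smoothed Dirac deltas and a plain Riemann sum over the grid.
d = lambda u: np.exp(-u**2 / (2*eps**2)) / (eps * np.sqrt(2*np.pi))
h = g[1] - g[0]
count = np.sum(d(f1) * d(f2) * detJ) * h * h
print(count)  # close to 2.0
```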
And again, when one takes the expectation here over some distribution of the f's, this is known as the multidimensional, multivariate Kac-Rice formula — a very natural analogue. With all this in hand, the total number of equilibria for our system is, formally in the same way, the integral over R^n of delta(-mu x + f(x)) — I use a single delta to denote the product of deltas — times the modulus of the determinant of the matrix (-mu times the identity plus df_i/dx_j), integrated with respect to the Lebesgue measure. This is the starting representation. Now, if I am interested in counting only stable equilibria, can I do it? Yes, of course. The same reasoning shows that if I want to count not all zeros but only those which are locally attracting for the dynamics, I simply introduce here the condition that this matrix is negative definite. And that's it: formally, the starting point is practically the same. Of course this conditioning may make the eventual calculation more complicated — and it does — but the starting formula is the same. So now we are dealing with this counting problem for the random fields specified in our model with all these covariances. The number of equilibria is obviously a random quantity; it changes from realization to realization. Ideally we would like to know its whole distribution, or, failing that, at least the first two moments. But this is a very challenging problem, and it is one of the problems which remain really open. What one can do is much more modest: one can find the expected value, the mean, of this random quantity. Why finding the mean is a much simpler technical task — not trivial, but much, much simpler — than finding, say, the second moment, I will try to explain immediately.
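To make the distinction between all equilibria and stable equilibria concrete, here is a brute-force sketch for a small instance of the system dx/dt = -mu x + f(x) — the random cosine field, the parameters and the multi-start Newton search are my own illustrative choices, not the method of the lecture: find zeros of the right-hand side, then classify each by the sign of the real parts of the Jacobian eigenvalues.

```python
import numpy as np

rng = np.random.default_rng(2)
n, M, mu, amp = 2, 8, 0.5, 2.0

# Hypothetical smooth random interaction: random cosine waves per component.
k = rng.normal(size=(n, M, n))
ph = rng.uniform(0, 2*np.pi, (n, M))

def f(x):
    return amp * np.array([np.cos(k[i] @ x + ph[i]).sum()
                           for i in range(n)]) / np.sqrt(M)

def F(x):            # right-hand side: -mu*x + f(x)
    return -mu*x + f(x)

def J(x):            # Jacobian of F
    Jf = np.array([[-amp*np.sum(np.sin(k[i] @ x + ph[i]) * k[i, :, j])/np.sqrt(M)
                    for j in range(n)] for i in range(n)])
    return -mu*np.eye(n) + Jf

# Newton's method from a grid of starting points; deduplicate the roots found.
eqs = []
for x0 in np.mgrid[-4:4:9j, -4:4:9j].reshape(2, -1).T:
    x = x0.copy()
    try:
        for _ in range(50):
            step = np.linalg.solve(J(x), F(x))
            x = x - step
            if not np.all(np.isfinite(x)):
                break
            if np.linalg.norm(step) < 1e-12:
                break
    except np.linalg.LinAlgError:
        continue
    if np.all(np.isfinite(x)) and np.linalg.norm(F(x)) < 1e-9 \
            and not any(np.linalg.norm(x - e) < 1e-6 for e in eqs):
        eqs.append(x)

# Locally attracting equilibria: all Jacobian eigenvalues in the left half-plane.
stable = [e for e in eqs if np.all(np.linalg.eigvals(J(e)).real < 0)]
print(len(eqs), len(stable))   # total equilibria vs stable ones
```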
So what is the fact about Gaussian stationary fields which makes the computation of the expected number of equilibria, or of stable equilibria, feasible, while the higher moments remain challenging? It is the following observation. For stationary isotropic fields, whose covariance depends only on the Euclidean distance, there is a very simple and well-known fact — which I suggest checking as an exercise today, for those who don't know it: derivatives of odd and even order are uncorrelated at the same value of x, locally uncorrelated at the same point, and, being Gaussian variables, uncorrelated means independent. So if I take the expectation — assuming, of course, that I can exchange integration and expectation — I can average the product of delta functions, which contains the field itself, separately from the Jacobian factor, since the two are independent when taken at the same value of x. And averaging the Jacobian factor, as I will show, is an interesting problem — basically an exercise in random matrix theory. If instead I wanted to calculate, say, the variance of this random number, I would need to consider the product of two such integrals, and then the integrand involves two points which are not the same; I would need the covariances of the field itself and of the corresponding entries of the Jacobian at different points. One can still write this down, since these are Gaussian fields, but it really becomes technical.
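The suggested exercise can be checked empirically in one dimension — again with a random-wave stand-in of my own choosing for the stationary Gaussian field: across many realizations, the field and its first derivative evaluated at the same point show essentially zero correlation.

```python
import numpy as np

rng = np.random.default_rng(3)
M, R = 50, 20000   # waves per realization, number of realizations

# f(x) = sqrt(2/M) * sum_m cos(k_m x + phi_m); evaluate f and f' at x = 0.
vals, ders = np.empty(R), np.empty(R)
for r in range(R):
    k = rng.normal(size=M)
    ph = rng.uniform(0, 2*np.pi, M)
    vals[r] = np.sqrt(2/M) * np.cos(ph).sum()           # f(0)
    ders[r] = -np.sqrt(2/M) * (k * np.sin(ph)).sum()    # f'(0)

corr = np.corrcoef(vals, ders)[0, 1]
print(corr)   # close to 0: field and derivative are uncorrelated at a point
```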