Very good. So, for this final presentation before the break, Jitendra and Svetlana are going to talk about the index distribution of random matrices. This is a project I supervised, so please, Jitendra, Svetlana, you can start.

Hi everyone. So, as Isaac has already mentioned, we'll study diluted random matrices, specifically the index distribution, using functional rate theory. This project was done with Svetlana, who is from QMUL, and myself, under the supervision of Professor Isaac.

This is the organization of the talk. In the beginning I'll give a brief introduction to the problem and motivate the replica method, which we explain in the next part. Then we give a brief sketch of the derivation, not its details. We arrive at a final equation which has to be solved, but this can't be done analytically, so we use a numerical method, the population dynamics algorithm. Using that, one can compute some interesting quantities, which I'm going to describe; then we'll talk about the results, and finally the conclusions.

So, I'm just going to make a statement here: generally, we physicists think that we study truth, but the reality is that we study models. Our model is a random graph on N vertices where each edge is present with probability 5/N, independently. So, on average, each vertex of the graph will have five neighbors. We are studying the adjacency matrix of such a graph; it's a Hermitian matrix, so it has real eigenvalues.

What does one want to study about the eigenvalues? The first quantity is the empirical spectral density: we put a delta function at each eigenvalue, and then we normalize this measure with a coefficient 1/N, to make it a probability measure, and we study that. One can also study number statistics, the fraction of eigenvalues between x and y. Here we will focus on the index distribution: the fraction of eigenvalues that are less than x.

The index distribution of one random matrix is a function, so the random matrix ensemble induces a distribution on the space of functions, and we want to study the asymptotics of the probability that the index distribution equals a particular function μ as N tends to infinity. We will also calculate the mean of the index distribution, and the covariance of the index distribution at points x1 and x2.

One can reduce the study of these asymptotics to the study of the cumulant generating functional: the probability is asymptotically the exponential of N times the integral of μ(x) k(x) minus the cumulant generating functional, evaluated at the extremizer. So we will focus on the cumulant generating functional: if we understand what it is, we can get the probability, and we can get the quantities connected to the original index distribution. OK, so Jitendra will introduce the replica method used to study the cumulant generating functional.

So, now that we have this cumulant generating functional, this is the expression for it, and we would like to compute this integral, which doesn't look scary right now. The method we use is the replica method, which seems to be a very powerful method. Let's talk about how to frame the problem.
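To fix the notation, here is a minimal LaTeX sketch of the objects just described; the symbol names (rho_N, I_N, k, Psi) and the exact Legendre structure are my reading of the verbal description, not necessarily what is on the slides:

```latex
% Empirical spectral density of the eigenvalues \lambda_1,\dots,\lambda_N:
\rho_N(x) = \frac{1}{N}\sum_{i=1}^{N}\delta(x-\lambda_i)

% Index distribution: fraction of eigenvalues below x,
\mathcal{I}_N(x) = \frac{1}{N}\sum_{i=1}^{N}\theta(x-\lambda_i)

% Functional large-deviation form for a candidate profile \mu(x):
P\left[\mathcal{I}_N=\mu\right] \asymp
  \exp\Big(N\,\underset{k}{\mathrm{ext}}\Big[\int dx\,\mu(x)\,k(x)-\Psi[k]\Big]\Big)

% with the cumulant generating functional
\Psi[k] = \lim_{N\to\infty}\frac{1}{N}
  \ln\overline{\exp\Big(N\int dx\,k(x)\,\mathcal{I}_N(x)\Big)}
```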
First of all, to proceed further, we need to rewrite the index, the fraction of eigenvalues less than x, in terms of theta functions. For this we use a very special property of the logarithm: the log function has a branch cut along the negative real axis, so as you cross the negative x-axis from positive to negative imaginary part, the value of the log jumps by 2πi, and that jump can be used to represent the theta function. Using this, the fraction of eigenvalues can be written in terms of log(Z/Z*), where Z is essentially the determinant of the random matrix minus (x − iη) times the identity matrix, in its Gaussian partition-function representation.

Using this representation of the index, and discretizing the integral into L boxes, we can rewrite the expression in terms of Z. One thing to notice here: the combination Δx μ(x_l)/(πi) is denoted by n_l⁺, and minus Δx μ(x_l)/(πi) by n_l⁻. Now comes the very important trick, the replica trick: even though, as you can see, n_l⁺ and n_l⁻ are imaginary, we assume them to be positive integers, and recover the actual values at the end by analytic continuation. Once we do that, we can use the partition-function representation of Z to simplify the expression: assuming the n_l⁺ and n_l⁻ are positive integers means creating n_l⁺ copies of Z and, similarly, n_l⁻ copies of Z*.

Once we have this, we have an expression which looks very complicated, but it is a microscopic description, with summations over sites inside it. So it seems natural to pass from this microscopic description to a macroscopic, field-theoretic one, using fields P and P̂ (we are not showing the expressions for P and P̂). In terms of these, the exponential of the cumulant generating functional can be rewritten as a functional integral over P and its conjugate field of the exponential of minus N times an action.

To solve this kind of integral there is a very common trick in statistical physics, the saddle point approximation: an integral whose integrand has the form exp(N s(x)) is asymptotically equivalent to exp(N s(x*)), where x* is the point at which s′(x*) = 0; that is the x which approximates the integral best. Using this approximation for the functional integral, what we get is a pair of self-consistent equations for P and P̂. Now we have to solve these self-consistent equations; we haven't shown the expressions, but they look terrible. To actually proceed further we use replica symmetry, which here enters as an application of de Finetti's theorem (the identities used above are collected in the sketch just below). Svetlana will describe what de Finetti's theorem is and what exactly we do here.

Suppose that we have n variables. We call them exchangeable if the joint distribution of x1, ..., xn is equal to the joint distribution of any permutation of them.
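Before moving on to de Finetti's theorem, here is a schematic LaTeX reconstruction of the two identities used above, the branch-cut representation of the theta function and the saddle-point estimate; the regularization and sign conventions on the slides may differ:

```latex
% Branch cut of the logarithm: as \eta \to 0^+ one has
% \ln(-u+i\eta)-\ln(-u-i\eta) \to 2\pi i\,\theta(u), hence
\theta(u) = \lim_{\eta\to 0^+}\frac{1}{2\pi i}
  \big[\ln(-u+i\eta)-\ln(-u-i\eta)\big]

% Applied to the index, with the Gaussian partition-function
% representation Z(x) \propto \det\big(M-(x-i\eta)\mathbb{1}\big)^{-1/2}:
\mathcal{I}_N(x) = \lim_{\eta\to 0^+}\frac{1}{\pi i N}
  \ln\frac{Z^{*}(x)}{Z(x)}
  \qquad \text{(up to an overall sign convention)}

% Saddle-point (Laplace) approximation used for the functional integral:
\int dx\, e^{N s(x)} \simeq e^{N s(x^{*})},
  \qquad s'(x^{*})=0, \quad N\to\infty
```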
So, if variables are exchangeable, it doesn't mean that they are independent. De Finetti's theorem tells us the following: suppose x1, ..., xn are exchangeable; then all such families of variables can be built in the following way. First, we choose δ according to some distribution μ; then, for each δ, we have a distribution p_δ. So firstly we choose δ according to the distribution μ, and afterwards we draw the variables x1, ..., xn independently according to the distribution p_δ.

Finally, we rewrite our field P in this way: at the end of the day, the variables in P linked to the n⁺ and n⁻ replicas are exchangeable, so there exists a density w such that the following expression holds (a schematic version of this representation appears at the end of this part). We had a self-consistent equation for P; we put this representation into it, and what we get is an equation for w. So if we know w, we know P, and if we know P, we know the cumulant generating functional from here.

This is how the expression for w looks. Here we did the following: before, we had variables δ_l linked to each box; originally we had one big interval which we divided into small boxes, so there was a variable attached to each box. Afterwards we take the continuum limit and consider w as a functional of the whole function δ(x). This is how w[δ(x)] looks, and if we know the functional w[δ(x)], we know the cumulant generating functional; here N is the normalizer.

So if we know the cumulant generating functional, we know the mean value, which is its first derivative, and we can calculate it through w. And we know the covariance, which is its second derivative, and it can also be calculated through w. We have a self-consistent equation for w; Jitendra will explain further how we calculate w.

So now, as we can see here, to compute these quantities we need this w, which is essentially a marginal distribution, and getting this w is a non-trivial job, because how do you solve this kind of equation? We use something called the population dynamics algorithm. The basic idea is this: think of a transcendental equation. Solving it is like solving for a fixed point, right? Here the fixed point, instead of being a point, is a function, and to find it we use population dynamics.

One important thing to notice: when we talk about the mean, we assume that after taking the functional derivative the source μ is set to zero. The self-consistent equations we are interested in solving for the computation of the mean and the covariance are the ones at μ = 0. If μ is non-zero, it's a whole different game; it needs a modification of this algorithm. So we'll only talk about μ = 0; since we haven't computed the non-zero case, we won't go there.

So, to find this w, how do we go about it? We consider a population of deltas, say N_s members of a population. Because the expression says you have to draw a random connectivity from a Poisson distribution, we choose a random variable k from a Poisson distribution, and using this k we pick k members of the population.
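Here is a schematic LaTeX version of the structure Svetlana just described. Note that the source called μ in the spoken discussion plays the role of the variable written k in the earlier sketch, and the replica-symmetric line is only meant to indicate the shape of the ansatz, not the exact formula on the slides:

```latex
% de Finetti: exchangeable x_1,\dots,x_n are i.i.d. conditionally on a
% hidden parameter \delta drawn from some mixing measure:
P(x_1,\dots,x_n) = \int d\mu(\delta)\,\prod_{i=1}^{n} p_{\delta}(x_i)

% Replica-symmetric ansatz (schematic): the replica variables in P are
% exchangeable, so a density w over functions \delta(x) exists with
P(\{x_a\}) = \int \mathcal{D}\delta\; w[\delta]\,\prod_{a} p_{\delta}(x_a)

% Mean and covariance of the index from the cumulant generating
% functional, evaluated at zero source:
\overline{\mathcal{I}(x)} =
  \frac{\delta\Psi[k]}{\delta k(x)}\bigg|_{k=0},
\qquad
\mathrm{Cov}\big(\mathcal{I}(x_1),\mathcal{I}(x_2)\big) =
  \frac{\delta^{2}\Psi[k]}{\delta k(x_1)\,\delta k(x_2)}\bigg|_{k=0}
```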
And these k members of the population are used to update one member of the population, using the expression coming from this delta function. So this is sort of like a Markov chain, and the Markov chain will eventually reach a steady state. You then have an updated population, and this updated population is actually a sample from the distribution we needed, which is what we wanted. It is actually a very deep, very interesting algorithm; I don't completely understand it, but this is the algorithm. (A minimal sketch of such an update loop is given at the end of this part.)

Using this algorithm we tried to compute the results, but for lack of time we couldn't evaluate the exact expressions; here we only show results from numerical diagonalization of the random matrices. We specifically talk about the mean and the covariance.

Since we are dealing with the fraction of eigenvalues less than x, the fraction can't be greater than one, so the mean has to be a sigmoid-like function, an increasing function, because it keeps increasing. Now, when c = 2, the average connectivity is very small, and the matrix is sparse. In a sparse matrix there will be many vertices with no edges attached, and those vertices correspond to zero eigenvalues, so you see a big delta function at zero. This is how the spectral density looks, with its delta function, and it is understandable why these jumps appear in the mean, here and here: they come from the delta functions. Similarly, for c = 3, the spectral density has a delta function which is very strong at zero, while the weights of the delta functions at minus one and one are smaller; there are jumps there too, but at this resolution they are not really visible. Similarly for c = 5: now the matrix is less sparse, the strengths of the delta functions are reduced, so there is actually a jump here, but we can't notice it. So this is how the mean looks.

If you look at the covariance: we want to study the connected correlator of the index distribution at points x1 and x2, and we computed this connected correlator numerically. We found some interesting results. One is that x1 and x2 can be exchanged (these are just numbers, so of course they can be exchanged), but the sign-flipped pair (−x1, −x2) also gives the same result, which is quite interesting. We don't completely understand it right now; it's preliminary, a work in progress. Similarly for c = 3. One thing I should mention: for c = 2 the correlation is highest when x1 and x2 are close to zero, and similarly for c = 3, but for c = 5 the correlation becomes higher at other values of x1. As I said, it's a work in progress; we need to understand why this happens. So, yeah.
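To make the procedure concrete, here is a minimal Python sketch of a population dynamics loop together with a direct-diagonalization check of the kind shown in the plots. Two loud assumptions: the talk never displays the explicit update, so the update used here is the standard cavity recursion for the resolvent of a sparse symmetric matrix, which is what the w equation is expected to reduce to at zero source (μ = 0); and all function names and parameter values are illustrative, not taken from the project code:

```python
import numpy as np

rng = np.random.default_rng(0)

def population_dynamics(c, z, pop_size=2000, sweeps=50):
    """Equilibrate a population of cavity variables Delta at fixed z = x - i*eta.

    ASSUMED update (standard sparse-matrix cavity recursion, not shown on
    the slides): Delta <- 1 / (z - sum of k randomly picked members), with
    k drawn from a Poisson distribution of mean c.
    """
    pop = np.full(pop_size, 1.0 / z, dtype=complex)
    for _ in range(sweeps * pop_size):
        k = rng.poisson(c)                                 # random connectivity
        picked = pop[rng.integers(pop_size, size=k)]       # pick k members
        pop[rng.integers(pop_size)] = 1.0 / (z - picked.sum())  # update one member
    return pop

def spectral_density(c, x, eta=1e-3, n_samples=2000):
    """rho(x) = (1/pi) Im <G>, with G = 1/(z - sum_{l<=k} Delta_l), z = x - i*eta."""
    z = x - 1j * eta
    pop = population_dynamics(c, z)
    G = np.empty(n_samples, dtype=complex)
    for j in range(n_samples):
        picked = pop[rng.integers(len(pop), size=rng.poisson(c))]
        G[j] = 1.0 / (z - picked.sum())
    return G.imag.mean() / np.pi

def index_by_diagonalization(c, N=500, samples=50):
    """Mean and covariance of the empirical index from direct diagonalization
    of Erdos-Renyi adjacency matrices, as in the plots shown in the talk."""
    grid = np.linspace(-4.0, 4.0, 81)
    curves = []
    for _ in range(samples):
        A = (rng.random((N, N)) < c / N).astype(float)     # edge indicators
        A = np.triu(A, 1)
        A = A + A.T                                        # symmetric adjacency matrix
        eigenvalues = np.linalg.eigvalsh(A)
        curves.append([(eigenvalues < x).mean() for x in grid])
    curves = np.asarray(curves)
    return grid, curves.mean(axis=0), np.cov(curves.T)

if __name__ == "__main__":
    print("rho(0.5) at c = 5:", spectral_density(5, 0.5))
    grid, mean_index, cov = index_by_diagonalization(5)
    print("mean index at x = 0:", mean_index[len(grid) // 2])
    print("Cov(I(0), I(1)):", cov[len(grid) // 2, 50])
```

The mean index would then follow by integrating the density over x and can be compared against the diagonalization curve; the covariance comes directly from the sampled index curves, as in the last function.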
So, to summarize what we have done: we are developing this functional rate theory with Professor Isaac, and we computed the mean of the index distribution through exact diagonalization. We would like to verify it theoretically using the population dynamics algorithm, but that will have to wait for now. Similarly, the correlations also need to be verified and better understood. Thank you.

Thank you. Thank you for the interesting presentation, and congratulations on the project you have developed; it was very challenging, theoretically speaking. OK, so we have time for questions before the first break. If somebody wants to ask, please go ahead and use the microphone, or use the raise-hand icon. Juan has his hand raised; go ahead, Juan.

Thank you for the talk. I just have one question regarding the population dynamics algorithm. You said something about a member from an ensemble. What do you mean by "member" here? Is it a graph from the ensemble of graphs that you generate? It's just a question, because I don't know this method; can you explain a bit?

So, suppose we want to solve something like this equation, but there are many δ_i's; let's talk about only one δ_i, some particular value. Then we consider just one population. This is like sampling: suppose you have a distribution and you have to sample from it, and suppose this is the sample. We would like to make this sample look like it comes from that distribution, and to do this we use the population dynamics algorithm. The way to do it is to update the members of the population using this particular structure in the self-consistent equation. Once we have updated for long enough, it reaches a steady state (it's a Markov chain of sorts, right), so it reaches a steady state, and you have a population which is a sample from this distribution. It's just a way to solve this equation; you can think of it as independent of the random matrix.

OK, thank you, thank you. Any other questions? OK, I also have a question. Ah, go ahead, Fernando, sorry.

Yeah, so first, congratulations, very, very nice, very interesting. Can you go back to the population dynamics, where you have this function μ(x)? Just for me to understand a bit better: if I understand well, you plotted, for instance, the index as a function of x. In this case, how did you choose μ(x)? You mentioned something about μ(x).

Yes. So, for computing the mean and the variance, we take the μ → 0 limit after taking the functional derivative.

OK, sure. And what type of information do you get when you consider μ arbitrary? You mentioned functional theory in the beginning, so I would just like to understand: for instance, for the mean value of the index, what do you get in this functional approach? Is it an average over x as well?

Actually, we can compute the asymptotic behavior of the probability given a candidate function, an arbitrary increasing function.
Based on the μ, we can compute what the asymptotic behavior will be when the candidate functions are away from the mean index: some large deviation kind of forms. And the beauty of this method is that we may be able to relate it to the correlations of eigenvalues; we might be able to study eigenvalue correlations with this method. It's still a work in progress, but it can be done.

Yeah, but in principle (if I'm not saying anything stupid) you can also compute, for instance, the deviation from the Wigner case. In the Wigner case you can compute the function analytically, right, the fraction of eigenvalues as a function of x, so you know what I mean: in principle you can also compute deviations from Wigner. Is that right?

Yeah. Go ahead, Jitendra, because we need to wrap up, and then maybe I'll add some points.

Yes. When you say "the function", are you talking about the spacing distribution?

No, I'm talking about just the average index in the Wigner case, because you obtain it by integrating the Wigner law, right, so you know the function. I was just wondering if you can compute fluctuations around the Wigner case with this functional approach.

Yes, we can. Yes.

OK, yeah. And I will add just a bit more. If you go back to the main object, the definition, right? Not this one; yeah, the previous one. The difference here, Fernando, as opposed to our previous work, is that with this method that we developed we are able to consider the index number as a random function. So then you can ask yourself: what is the probability that this object, as a function, equals a given function?

No, no, sorry, sorry to interrupt. Go ahead.

And that also means that you may now have access to level correlations; in principle, this is what we believe. And the cool thing about this method, which you know very well, is that when you do the replica limit to apply the method, you have an infinite number of replicas, so you have to take a functional replica limit, but it seems to work. Anyway, we have to wrap up, sorry. If somebody is interested in this work, we can discuss it afterwards. OK, so Jitendra and Svetlana, thank you very much for this presentation; you did fantastic work. From my part, shall we thank them for the presentation? Thank you. Thanks, guys. Now, let me stop recording.