This is on. Are we recording? Yes, I think so. Okay. So let's start again; I will just make a little recap, with some complements, of what we've done before. We had finished with a proof about the computation of the limit of the expectation of the normalized trace of a product of independent Wigner matrices. I will introduce some graph notation, which I recall here. I recall what a test graph is. It is to be thought of as a test function, but it's a graph: T = (V, E, L). It is a finite, let's say connected for the moment, graph, and L is a labeling function on the edges. Example: we were considering simple cycles, a simple cycle with edges labeled L1, L2, up to Ln. Then, if I denote by X_N a family of matrices X1, X2, ..., we write τ_N(T)(X_N) for this functional applied to the test graph, evaluated in these matrices; we call it the trace of the test graph. It is the expectation of 1/N times the sum, over the labeling maps φ from the vertex set of T to the set of indices {1, ..., N}, of a product. So we choose an integer for each vertex; given this collection, each edge carries a label, so we can associate to it the matrix X_{L(e)}, and since we have specified indices at its endpoints, we can consider a particular entry of that matrix. We take the product, over all the edges e = (v, w) of T, of an entry of the matrix X_{L(e)} — L is my labeling function, which tells me this is L1, L2, ..., Ln; if there were some adjoints I would have extra labels, but let's forget about that. The entry that corresponds to this choice, you can check, is the (φ(w), φ(v)) entry. So this is just a complicated notation that specializes to what we considered before with this graph. Let me stay with this example: for the simple cycle, τ_N(T)(X_N) is just what we were considering before, the expectation of the normalized trace of the product X_{L1} ... X_{Ln}.
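The definition above can be sketched in code. This is a minimal illustration, not from the lecture: I assume real matrices, drop the expectation (one deterministic sample), and use the entry convention (φ(w), φ(v)) for an edge (v, w); the cycle is oriented so the graph sum reproduces the trace of the ordered product. All names are my own.

```python
import itertools
import numpy as np

def trace_of_test_graph(edges, labels, matrices, n):
    """(1/n) * sum over ALL maps phi: V -> {0..n-1} of the product over
    edges (v, w) of the (phi(w), phi(v)) entry of the matrix X_{L(e)}.
    No expectation here: evaluate on one fixed sample of matrices."""
    vertices = sorted({u for e in edges for u in e})
    total = 0.0
    for phi in itertools.product(range(n), repeat=len(vertices)):
        assign = dict(zip(vertices, phi))
        prod = 1.0
        for (v, w), lab in zip(edges, labels):
            prod *= matrices[lab][assign[w], assign[v]]
        total += prod
    return total / n

# Simple 3-cycle with edges labeled l1, l2, l3, oriented so that the
# graph sum equals (1/n) Tr(X_{l1} X_{l2} X_{l3}).
n = 4
rng = np.random.default_rng(0)
X = {l: rng.standard_normal((n, n)) for l in ["l1", "l2", "l3"]}
edges = [(2, 1), (3, 2), (1, 3)]
labels = ["l1", "l2", "l3"]

graph_value = trace_of_test_graph(edges, labels, X, n)
trace_value = np.trace(X["l1"] @ X["l2"] @ X["l3"]) / n
assert abs(graph_value - trace_value) < 1e-8
```

The brute-force sum over all n^|V| maps is only feasible for tiny graphs, but it makes the definition concrete.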
This is just a consequence of the definition, if you specify it — yes, absolutely — for matrices of size N by N. Any matrices; we will specialize to independent Wigner matrices later, but this is the general definition. Then I convinced you that we have a formula. We stated it in a specific case, but actually it is true in general: when you compute such a trace, you have a sum over indices, and you can organize the summation according to which indices take the same value at the different vertices. Having the same value can be encoded in a quotient graph in the following way: we have a sum over the partitions π of the vertex set, and then a sum where we restrict to the indices that are equal according to this rule. It can be written with a new functional, which I call τ⁰_N, applied to the quotient graph T^π. I remind you that T^π is obtained by identifying the vertices in a same block of the partition. And τ⁰_N is similar to τ_N: it has the same definition, except that the map φ is now required to be injective. So here we apply it to the graph T^π: we start from the vertex set of this quotient graph, and φ is injective. You can check that assuming φ injective and doing this sum over partitions exactly organizes the indices according to where they are equal. This was stated in the case of a simple cycle, but if you start with an arbitrary graph it is always true. So now we start again, considering X_N independent Wigner matrices, assuming the variance of the entries is one outside the diagonal. We proved that this injective trace — the term inside the sum over partitions; I just give it a name today — converges either to zero or to one. It converges to one if the quotient graph T^π is a double tree, and to zero otherwise. What is a double tree? It is a quotient graph which becomes a tree if I forget the multiplicity of the edges, and the multiplicity of each edge is two.
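The decomposition τ_N(T) = Σ_π τ⁰_N(T^π) can be checked by brute force on a tiny example. This is my own sketch, not from the lecture: one deterministic sample (no expectation), a 2-cycle test graph, and illustrative function names; injective maps are enumerated with `itertools.permutations`.

```python
import itertools
import numpy as np

def set_partitions(items):
    """Yield every partition of `items` as a list of blocks."""
    if len(items) <= 1:
        yield [list(items)]
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def graph_sum(edges, labels, matrices, n, injective):
    """tau_N (all maps phi) or tau0_N (injective phi) on one sample."""
    vertices = sorted({u for e in edges for u in e})
    maps = (itertools.permutations(range(n), len(vertices)) if injective
            else itertools.product(range(n), repeat=len(vertices)))
    total = 0.0
    for phi in maps:
        assign = dict(zip(vertices, phi))
        p = 1.0
        for (v, w), lab in zip(edges, labels):
            p *= matrices[lab][assign[w], assign[v]]
        total += p
    return total / n

def quotient(edges, pi):
    """Identify the vertices inside each block of the partition pi."""
    rep = {v: min(block) for block in pi for v in block}
    return [(rep[v], rep[w]) for (v, w) in edges]

# 2-cycle with edges labeled a and b: tau_N(T) = (1/n) Tr(X_a X_b)
n = 3
rng = np.random.default_rng(1)
X = {"a": rng.standard_normal((n, n)), "b": rng.standard_normal((n, n))}
edges, labels = [(2, 1), (1, 2)], ["a", "b"]

lhs = graph_sum(edges, labels, X, n, injective=False)
rhs = sum(graph_sum(quotient(edges, pi), labels, X, n, injective=True)
          for pi in set_partitions([1, 2]))
assert abs(lhs - rhs) < 1e-10
```

Grouping the unrestricted sum by the coincidence pattern of the indices is exactly the sum over partitions of injective sums on the quotients.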
So, if that is clear, let me give some comments. I presented the notion of non-commutative distribution. [Question: if you forget the multiplicity and the multiplicity is two, what do you mean?] Here, if I forget the multiplicity, I get a tree — yes, if I forget the multiplicity, I get a tree — and the multiplicity is two for each group of edges. Is it clear? Okay, that's the double tree. This is what we call a double tree. It is not a tree itself, because I have loops here, but they are just trivial loops, because I have double edges. Okay. So, I just say that the non-commutative distribution generalizes the classical notion of distribution: in classical theory we have moments, in non-commutative probability we have non-commutative moments, and in traffic theory we have these graph moments, which play the role of the moments. The map sending each test graph T to τ_N(T)(X_N) is, by definition, the traffic distribution of X_N. We can say a word to motivate that it is quite a natural generalization. And this τ⁰ is actually a transform of τ, which I like to see as a wavelet transform — it is not exactly the Fourier transform; it sits between the frequency representation and the spatial representation, in some heuristic — and it is a very sparse representation: it is either zero or something very simple. We will play a lot with this injective trace, because it is a nice object: we have a nice description of it, and if we understand the injective trace, we will be able to understand the quantity of interest. But first, let's see an application of that formula. Consequence for these matrix models: the limit of the expectation of the normalized trace — I denoted it before φ_N, the expectation of the normalized trace of a product of matrices — converges to some quantity, for each n and each L1, ..., Ln.
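The zero-or-one dichotomy is visible numerically. A minimal sketch of my own (not from the lecture), assuming GOE-type real Wigner matrices normalized by 1/√n: for the word abab the only same-label pairing of edges is crossing, so no quotient is a double tree and the limit is 0; for aabb the non-crossing same-label pairing gives exactly one double tree, so the limit is 1.

```python
import numpy as np

def wigner(n, rng):
    """Real symmetric matrix, entry variance 1, normalized by 1/sqrt(n)."""
    a = rng.standard_normal((n, n))
    return (a + a.T) / np.sqrt(2 * n)

n, samples = 300, 30
rng = np.random.default_rng(4)
abab = aabb = 0.0
for _ in range(samples):
    a, b = wigner(n, rng), wigner(n, rng)
    abab += np.trace(a @ b @ a @ b) / n   # no double-tree quotient: limit 0
    aabb += np.trace(a @ a @ b @ b) / n   # one double-tree quotient: limit 1
abab /= samples
aabb /= samples
assert abs(abab) < 0.1
assert abs(aabb - 1.0) < 0.1
```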
These are quantities that I denote φ(X_{L1} ... X_{Ln}), and these guys are non-commutative random variables if you want, or just a notation — but you can say they live in an algebra, and we have a linear form φ which is a non-commutative expectation. Now let's play with this limiting distribution. First, I want to make a parallel with Gaussian random variables. We have a formula which can be seen as a consequence of this. [Question: why is that a consequence of what you said before, that this limit exists?] Yes: this, we agree, is the expectation of 1/N times the trace, right? Yes. Okay. Which is nothing else than this. And we decomposed it as a finite sum and proved that each summand converges, right? Okay, let's go on. So, a fact which is classical in free probability, and can be seen in different ways, is the following formula. For every monomial m in these variables — so in the limits of these big matrices, if you want — consider the limit of the expectation of the normalized trace of X_ℓ times the monomial. We have this formula: φ(x_ℓ m) is the sum, over all decompositions of my monomial m as a left monomial times x_ℓ times a right monomial, of φ of the left monomial times φ of the right monomial. So this is a formula that, we will see, can be seen as a consequence of the double-tree description; we will play a little bit with this combinatorial representation, and later we will draw the parallel with Gaussian variables. First, let's see why it is a consequence. Does everyone understand what I mean by a decomposition? Let me give you an example. Take two variables; let's call them x and y instead of X_{L1}, X_{L2}.
So, if I have the monomial m = x y x² y and I want to list its decompositions as a left monomial and a right monomial sandwiching an x, which decompositions do I have? I can write m as 1 times x times the monomial y x² y. I can also write it as (x y) times x times (x y), or, taking the last x, as (x y x) times x times y. In each case I have a left monomial and a right monomial sandwiching my variable. So for this term I will have three summands. [Question: by l, do you always mean l times r, or m also?] I have a left monomial and a right monomial on each line, and the equality that m is the left times x times the right — is that what you mean? Okay. You look at all the ways you can decompose m as such a product of three terms; here I gave the three decompositions we can consider. When you have a square, it is x times x, and the two instances of x are counted separately: in the second decomposition I take the x in first position, but I must also consider it in second position. Okay. So why is this formula a consequence of that? When you consider φ of x times the monomial, you know, thanks to the graph notation, that you can write it as τ of a cycle whose first edge is labeled x, followed by edges carrying variables whose product is m. You are still looking at this, so maybe you need a moment — it is not quite a proof; we are just playing a little with what we have. [Counting] one, two, three, four, five, six. Okay. Yes — and this φ is just a symbol for the limit of the matrix quantities. When I write it, think of it as a formal definition; but if you want, you can construct a non-commutative probability space where it lives, and it is a nice space. No, the limit is on the left. Okay, so let's see why this formula is true.
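The enumeration of decompositions just described is easy to make precise. A small sketch of my own, representing monomials as lists of letters (names are illustrative):

```python
def sandwich_decompositions(m, x):
    """All ways to write the word m as l + [x] + r: one per occurrence of x."""
    return [(m[:i], m[i + 1:]) for i, letter in enumerate(m) if letter == x]

m = ["x", "y", "x", "x", "y"]                  # the monomial x y x^2 y
decs = sandwich_decompositions(m, "x")
assert decs == [([], ["y", "x", "x", "y"]),    # 1 . x . (y x^2 y)
                (["x", "y"], ["x", "y"]),      # (xy) . x . (xy)
                (["x", "y", "x"], ["y"])]      # (xyx) . x . (y)
```

Note that the square x² contributes two decompositions, one for each occurrence, exactly as in the example.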
What we know is that, when we compute this, we can restrict the summation to the partitions π such that — if I denote by T, as usual, this test graph — the summand is the indicator that the quotient graph T^π is a double tree. Right, this is the formula, and this is the indicator function representing it. Okay. Now, if T^π is a double tree, this edge must be paired with another edge — and this is my sum here, because it must be paired with an edge carrying the same label. Okay. Doing so, I do not yet fix which partition I am considering; first I do the summation over the decompositions of my monomial, m = l · x · r, and this means I choose which x-labeled edge the first edge is paired with. So now I have the sum over the quotients π compatible with this pairing — the orange connection is just a shortcut for what I said — of the indicator that T^π is a double tree. I just inserted the conditions that appear. Now, if you know that you have a double tree and this pair of x-edges, you know that you have a quotient of the cycle that splits: here you have a cycle which corresponds to the monomial l, and here you have a cycle which corresponds to the monomial r, and you must identify vertices of these graphs in such a way that you get a double tree. In doing so, you cannot identify a vertex of the l-cycle with a vertex of the r-cycle — that would break the double-tree structure. You should be convinced of this; writing a full proof is a different matter. So, to make a double tree by identifying vertices, there is only one way to proceed: independently of what happens on the other side, make a double tree here, and independently of that, make a double tree there. This results in a factorization formula, which means that this is the sum over m = l · x · r of a choice of π here — let's say we have a π₁ and a π₂.
π₁ is a partition of the vertex set of the subgraph given by the cycle of l, and we sum over the partitions π₂ of the vertex set of the subgraph given by the cycle of r; and these are two independent sums. Now, the indicator that T^π is a double tree is exactly the indicator that the l-subgraph quotient is a double tree times the indicator that the r-subgraph quotient is a double tree. That is how this combinatorial representation lets us prove such a formula. So, what we have is: the sum over the decompositions, and then the sum over π₁ and π₂ of the indicator that T_l^{π₁} and T_r^{π₂} are double trees. And now, if you regroup these two sums, you recognize the formula we proved before, and you get φ(l) times φ(r). I am going a bit fast, but I am just trying to convince you that we can use this kind of reasoning to get such formulas, right? Okay. [Question.] Yes — I am just saying that, because I have a double tree, the first edge — what I call the first edge is this one — corresponds to this one. So if m is a complicated monomial, you have X_{L1}, X_{L2}, and so on, and the first edge, because we have a double tree, is paired with a unique edge; let me call it ẽ. So let's say this is the edge e and this is the edge ẽ; both of them have the same label — that is what I should say. So I am just fixing this pairing, and we know that all choices are possible, so I will have a sum: this is just a sum over the possible twin edges ẽ inside the m part, and choosing such an edge is exactly choosing one of these decompositions. Okay. Of course there is more bookkeeping than this, but I am just trying to give you the idea of the computation we do. Yes?
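The factorization formula just proved can be tested numerically in the one-matrix case. This is my own sketch, not part of the lecture: I assume GOE-type real Wigner matrices normalized by 1/√n, so the empirical moments approximate φ(x^p), and the formula reads φ(x · x^p) = Σ_{l+r=p-1} φ(x^l) φ(x^r).

```python
import numpy as np

def wigner(n, rng):
    """Real symmetric matrix, entry variance 1, normalized by 1/sqrt(n)."""
    a = rng.standard_normal((n, n))
    return (a + a.T) / np.sqrt(2 * n)

n, samples, top = 500, 20, 7
rng = np.random.default_rng(2)
moments = np.zeros(top)                 # moments[p] approximates phi(x^p)
for _ in range(samples):
    x = wigner(n, rng)
    xp = np.eye(n)
    for p in range(top):
        moments[p] += np.trace(xp) / n
        xp = xp @ x
moments /= samples

# phi(x * x^p) = sum over the p decompositions x^p = x^l . x . x^r
# (with l + r = p - 1) of phi(x^l) * phi(x^r)
for p in [1, 3, 5]:
    rhs = sum(moments[l] * moments[p - 1 - l] for l in range(p))
    assert abs(moments[p + 1] - rhs) < 0.15
```

The even moments approach the Catalan numbers 1, 2, 5, ..., which is consistent with the double-tree count mentioned below.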
We avoid counting the number of double trees, but if you count them, you find the Catalan numbers when you have a single matrix; here we have several matrices. [Question.] Yes — maybe I did not emphasize that; I should have mentioned it. Let me call twin edges of a double tree the two edges that form an edge of multiplicity two. I forgot to mention that if the twin edges carry labels L1 and L2 with L1 different from L2, the weight appearing in the formula is zero: we consider the expectation of one entry of the associated matrix times the same entry of another matrix, and this is nonzero only when the matrices are the same; otherwise they are independent and centered. So: it is a double tree such that twin edges have the same label. I apologize for forgetting this; it clarifies the fact that this variable must be the same as that one. Okay, so let me just draw the parallel with Gaussian variables, and then we will stop here. I will use different letters to avoid confusion: consider Y a family of independent classical Gaussian random variables. You know that you can use integration by parts, which tells you that if you take the expectation of one of the Y_ℓ times a function of this family, and the function is differentiable with bounded derivatives, then this is the expectation of the derivative of F with respect to the ℓ-th variable, evaluated at the family. This is just integration by parts for Gaussian random variables. Now, if you specialize F to a monomial with respect to one variable, this is exactly doing a summation: think of what happens when you differentiate a monomial — here it is a commutative monomial, but we can always order the letters and consider it like this.
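The Gaussian integration-by-parts identity just stated can be checked by Monte Carlo. A minimal sketch of my own for a single standard Gaussian variable, where the monomial is y^p and the sum over decompositions has p identical terms, each contributing E[y^{p-1}]:

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.standard_normal(4_000_000)

# E[y * y^p] = p * E[y^(p-1)]: one term E[l * r] = E[y^(p-1)] for each of
# the p ways to pick the differentiated letter in the monomial y^p
for p in [1, 2, 3, 4]:
    lhs = np.mean(y * y ** p)
    rhs = p * np.mean(y ** (p - 1))
    assert abs(lhs - rhs) < 0.1
```

Note the contrast with the free formula above: here the expectation is taken of the product l · r, not the product of the expectations.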
So the expectation of Y_ℓ times a monomial in the family is the sum, over the decompositions of my monomial as a left monomial times Y_ℓ times a right monomial, of — looking at the expectation of the derivative — the expectation of the product l times r. Here we can see the difference: we have something similar, with the same decomposition (this operation is called the non-commutative derivative), but in the Gaussian case the expectation is taken of the product of the two pieces, whereas in our case we have the product of the expectations. It is just a slight formal difference, but having such a formula in both cases draws a parallel between these two kinds of variables. The Gaussian variable is a central object in classical probability — for instance in the central limit theorem — and this guy, the semicircular variable, is its analogue in non-commutative probability; there is a central limit theorem for free variables: you take a sum of free, centered variables, normalize it as in the central limit theorem, and the limit is semicircular. So this is the end of this morning's session; we will resume this afternoon. If you have any questions, we have time for that. Thank you. [Question] Sorry, I had just one question, maybe a follow-up on the last line: on the left, this τ⁰_N — is the arrow meaning that it is a limit as N goes to infinity? [Answer] Absolutely. [Question] The follow-up is: I am familiar with another example where some object, in the limit N to infinity, goes to a trivial limit, either one or zero — in the context of extreme value statistics, for example when you study the distribution of the maximum of i.i.d. random variables, you have a limit that is either zero or one unless you scale some parameter inside the function appropriately. So I was wondering if you could get a crossover function between zero and one if
you add a parameter to the game which quantifies how much your graph fails to be a double tree — how far it is from one. Because if you take a double tree and, for example, add just one edge, it is technically no longer a double tree, so you would immediately drop to zero; but clearly this object is closer to a double tree than any random graph you can imagine. So I am thinking there might be a non-trivial crossover function between zero and one if you add a parameter — say, in another matrix model — that quantifies in some sense how far you are from a double tree, the amount of violations you have. I apologize for the chaotic nature of the question. [Answer] I have no idea how to answer this question, sorry. I mean, if you take an arbitrary matrix model, you want to identify what happens when the quotient is not a double tree and have an interpretation of it. I could present different matrix models for which there are different limits, where you see nice shapes: tree-like graphs with arbitrary multiplicities, or cacti. So I can give you some models and some variety of these graphs, but I am not sure that will already answer your question; in that direction, we can discuss. [Moderator] Alright, thank you. We start again at 2pm. Are there any questions in the chat? Let's see. "This is a Schwinger–Dyson equation" — excellent. "The family of Wigner matrices" — thank you, the family of Wigner matrices. "Why is X_ℓ not mentioned?" — I don't know; can you rephrase this? I don't know. Yeah, that's exactly a Schwinger–Dyson equation in m. Okay, it's a bit hard — wait, I should stop. Should I stop?