in progress. Okay. So let me just recall a little bit the organization of the talk so far. After introducing the basic motivation of the talk, we were focusing on the non-commutative aspect of this Wigner matrix ensemble, proving what is called the convergence in non-commutative distribution. But to show this, I decided to use a method which is specific to what I developed, this traffic formalism with graphs. It is not mandatory for proving anything that I mentioned so far, like this equation where I compute phi of x times a monomial and I have this decomposition. This formula has been known for some fifteen years in physics; it is called the Schwinger-Dyson equation, or loop equation. This is just one way to use the formalism. That's it. So I will state a second assumption which is more specific to the traffic setting. And then I will develop something very specific to these bad ensembles of matrices, for which traffics give a solution where classical non-commutative probability theory is not fitted to work. So here is what I invite you to consider. First, I will use this notation for two matrices A and B: with a little circle like this, I denote the entrywise product of the matrices, also known as the Hadamard or Schur product. What I propose to do, this is our job today, is to compute the limit of the expectation of the normalized trace of the Hadamard product of two polynomials in Wigner matrices. I have P(X_n) entrywise product Q(X_n), where the X_n are independent Wigner matrices, and we computed the traffic distribution of these guys. P and Q are non-commutative polynomials.
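As a minimal illustration of the entrywise (Hadamard or Schur) product just defined, here is a small sketch; the matrices are arbitrary example values:

```python
import numpy as np

# Entrywise (Hadamard / Schur) product of two small matrices; in NumPy,
# `*` on arrays is already entrywise, unlike the matrix product `@`.
A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
H = A * B
print(H)  # [[ 5 12] [21 32]]
```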
And what I propose to show is that for this Wigner ensemble we have something quite surprising: this Hadamard product is irrelevant, in the following sense. This quantity, in the large n limit and up to a small error, equals the expectation of the normalized trace of the first polynomial times the expectation of the normalized trace of the second polynomial. Of course this is in the large n limit, which means that there is a small error. So somehow this Hadamard product splits the expectation of the normalized trace. And note that a normalization in n shows up on the right-hand side: we have two factors of one over n where on the left we have just one, so something is happening. Okay. So when we have this situation, I will say today, for short, that the Hadamard product is irrelevant, and actually this property will be important in traffic probability theory. Let's see why this is true; let's prove it using the double tree result. So let me call Phi_n of P Hadamard product Q the shortcut for this expression; Phi_n is the expectation of the normalized trace, I'm just not repeating the denominator. What we must do first, if we want to use the traffic setting, is to interpret this quantity in terms of the trace of a test graph. This is the strategy in this theory. So how do we do that? We have this tau_n, which is just like the trace, but for these combinatorial objects, which are graphs. And what is the test graph that I will obtain? First, I will consider P and Q, but as monomials; I will assume they are monomials, because it is for monomials that I made my computation. So these are two arbitrary monomials in my Wigner matrices, and I will represent this quantity for these monomials thanks to the following graph. I start drawing, as usual, my cycle for Q. Let's say it is a monomial of length three: I have X_{m1}, X_{m2}, X_{m3}.
Now I will do the same for P, but the first vertex, the one attached to the end of my first edge, I will identify with the corresponding vertex of the cycle of P. So let's say we have a length-six monomial here: I will have X_{l1}, X_{l2}, and so on. Everyone can check, by definition of this tau_n functional, that this expression coincides with what we would obtain if we expanded this in terms of the entries of the matrices. Okay, this is an example. So we have the limit of the traffic distribution, which means that we know that this guy converges and we have a way to compute its limit. This way is to expand the trace in terms of the injective trace. Just think that we want to compute a special quantity and we have a transform in which the computation is much better: every time, we go to this domain, we do the computation, and we come back. Let's do that. We know that this guy is a sum over the partitions of the vertex set of my graph, with vertices here and vertices here, of the injective trace of the quotient graph obtained by making the identifications, and that asymptotically this injective trace is the indicator that T^pi is a double tree, a double tree where twin edges are associated to a same matrix, I will not repeat this, plus a small error. Here T is this graph and T^pi denotes any graph you obtain by identifying the vertices. So the question is: how can we obtain a double tree from this graph? Here I drew a monomial with an odd length, so I would get zero; let me just rectify this. So just think about it: if I give you some craft material where you can just fold things with your hands, what would be the manner to get a double tree? It's almost as in the proof we have seen earlier: the unique way to get a double tree is to first form a double tree here and then form another double tree here. And these operations of finding a partition pi_1 here and a partition pi_2 here are independent.
If you choose a pi_1 here, there is no restriction on your pi_2. Which means that to get a double tree, you need first, as before, to choose a pi_1 in what we called V_P, which is this vertex set, and then a pi_2 in V_Q, which is this vertex set. And the pi you are considering here will be obtained by combining the blocks, of course, because these vertices may be identified on the right and on the left, but it doesn't change the fact that these partitions are independent. Now, having that T^pi is a double tree is equivalent, and it must be proved properly, not just observed, to saying that T_P, the graph like this, when you quotient it by pi_1, is a double tree, and that the same holds for the right-hand-side graph. Okay, so it's as before, but we have more time, so let me do the step, plus a small error, where we explicitly write it as a product: the sum over pi_1 of the indicator of a double tree, times the sum over pi_2 of the same thing, but for the other graph. And note that we have this product; yes, this product. No, no, no, it's the entrywise product of the matrices. So this is a matrix and this is another matrix. For a monomial, you write it explicitly in terms of the entries: you use the formula for the entries of a product for this term, and the formula for the entries of a product for this term; you then use the formula for the entrywise product of the two, and so on for the trace. You just do step by step all these elementary operations. Again, what are the vertices? No, no. You have vertices for this monomial as we did before. Think about the same graph on the right-hand side, but note that when you're taking the trace of the entrywise product, it is just one over n times a sum, and here you have the same index. And because you have the same index, the rule for making this translation is that this is the same vertex here. Yes, okay, if that was the question, it is exact. So if you want, I can write just a little bit more.
If I consider the trace of X squared entrywise product with X to the third, let's do that. I use this formula: it's one over n, the sum over i, where i is my central vertex, of the diagonal entry of X squared times the diagonal entry of X cubed, and now I expand the definition of what the diagonal entry of X squared is: it's a sum over one index, and for X cubed it's a sum over two indices. Let's write it explicitly: we have the one over n and the sum over i; we have also a sum over an index that I call j, and two indices that I call k and l, all from one to n. So we get one over n, sum over i, j, k, l, of X_{ij} X_{ji} times X_{ik} X_{kl} X_{li}. Right? This is an elementary operation. And this formula, it's not each of these factors separately, it's this big formula with its sums. You recognize the definition of the trace of a test graph, just with a slight change of language: in that definition we are considering the maps phi from the vertex set to {1, ..., n}, but this labeling of the vertices is nothing else than this labeling of indices, with the convention that is here; the picture doesn't quite match, but you see the idea. And I add a product over the edges, which is just this product. So there is a little cost in understanding this translation, but. Okay, can you, for example, for this X squared times X to the third, say the graph is the triangle? Yeah, let's finish this. Absolutely. Very good. Well, we are considering Hermitian matrices. If they are real symmetric, it's not important. If they are complex, it starts being relevant, but let's not, I'm not focusing on this. Okay, so did we finish the proof? Okay, just one last step. We have seen during the computation that there is some multiplicativity, and the last step is to recognize, from what we did earlier, that the sum over the partitions of the indicator of being a double tree is a trace; it was our first lemma. So we have somehow to reinterpret the combinatorial formula, to go back to a trace formula.
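The expanded index formula above can be checked directly; a minimal sketch, with a small random symmetric matrix as an illustrative input:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
x = rng.standard_normal((n, n))
x = (x + x.T) / 2  # symmetric, like a (non-normalized) Wigner matrix

x2, x3 = x @ x, x @ x @ x
lhs = np.trace(x2 * x3) / n  # normalized trace of the Hadamard product

# Fully expanded sum over the indices i, j, k, l, as in the lecture:
# (1/n) * sum_{i,j,k,l} X_ij X_ji * X_ik X_kl X_li
rhs = 0.0
for i in range(n):
    for j in range(n):
        for k in range(n):
            for l in range(n):
                rhs += x[i, j] * x[j, i] * x[i, k] * x[k, l] * x[l, i]
rhs /= n
print(abs(lhs - rhs))  # agrees up to floating-point error
```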
And this, from what we started from, so this is the consequence of our computation, is the limit of what we announced: the limit of the expectation of the normalized trace of the first monomial P, times the limit of the expectation of one over n times the trace of Q. Moreover, you have a factorization property which says that you can also consider the expectation of the product, if you prefer, without making a big error. This is specific to this ensemble. Okay, so this was just to play a little bit with this formalism, to go slightly outside the field of polynomials and see that for other operations we indeed get something that works for computation. Here it is a not relevant situation, but being not relevant is relevant from the point of view of traffic probability: if you know that, you actually have information about your matrix models. If you have these properties, you belong to the world of Voiculescu's free probability, and you will never really need traffic probability, because if you have traffic independence, this property ensures that you have free independence; so you don't care about traffics. But if tomorrow you meet an example, a matrix model, for which you compute this and you find that it is not true, then the matrix you're considering won't be free with an arbitrary independent matrix in general; but it is likely to be free over the diagonal, or traffic independent. So this gives you a criterion to see whether you need me or not, more or less. As for the Hadamard product result itself, I don't think it was known, but people really didn't care; I don't know if some isolated people already solved this problem, I'm not sure, but it was not really natural to consider the Hadamard product before having this traffic probability, so I never heard about such a result. Okay. So now we have finished this part with these basic aspects, which were not so basic, but basic in the sense that we were talking about the classical ensembles of free probability.
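The irrelevance of the Hadamard product proved above lends itself to a quick numerical sanity check; here is a minimal Monte Carlo sketch, where the choice P = Q = x squared, the matrix size, and the sample count are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, samples = 300, 20

def wigner(n):
    # A real symmetric Wigner matrix with entries of variance ~ 1/n.
    g = rng.standard_normal((n, n)) / np.sqrt(n)
    return (g + g.T) / np.sqrt(2)

lhs, tr_p, tr_q = 0.0, 0.0, 0.0
for _ in range(samples):
    x = wigner(n)
    p, q = x @ x, x @ x          # P = Q = x^2, a simple test case
    lhs += np.trace(p * q) / n   # `*` is the entrywise (Hadamard) product
    tr_p += np.trace(p) / n
    tr_q += np.trace(q) / n
lhs, tr_p, tr_q = lhs / samples, tr_p / samples, tr_q / samples
print(lhs, tr_p * tr_q)  # both close to 1: the Hadamard product "splits"
```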
The things are asymptotically free, by linearity. And the formalism I introduced, this graph stuff, has the disadvantage that it talks about monomials. We'll see, maybe in two days, that when we take nice linear combinations something can happen, but in general it is a mess. But here, for this problem, you prove it for monomials, and by linearity the lemma I stated is true. Okay. So now let's go to the second part. Should I call this a section? Yeah. And what about these bad examples of matrices, the bad matrices, since we have chosen this terminology? We start with permutation invariant matrices, and in the second session we will see variance profile matrices. What I want to show you is not a full theory; I just want to show you that the formalism we introduced with graphs makes things very easy for these two questions of permutation invariance and variance profiles. So I propose to consider A_1, A_2, two independent n-by-n matrices, and I assume that they are permutation invariant random matrices. What I propose to do is to compute the expectation of the normalized trace of a product of some A_1's and some A_2's. And it will be the opportunity to see how the formalism of traffics shows up. So allow me not to state all the assumptions right now; we will discover during the computation what are the good assumptions that we want to consider. Sorry, I have probably a naive question: are there matrices that are not rotationally invariant but permutation invariant, and vice versa? One ensemble is included in the other: if you're unitarily invariant, you're permutation invariant. So we are developing a big theory with this permutation invariant stuff, inside of which we should recover the classical theory of free probability. And this is what we do: the traffic distribution of a unitarily invariant model will be very specific.
We have seen the double trees, but for a unitarily invariant matrix we will not obtain just any kind of traffic distribution: we will get cacti. A double tree is just a cactus where the cycles are all of length two, which comes from the fact that for semicircular variables only the cumulants of order two are nonzero. I'm going too far for people who don't know this. But now that we are considering these kinds of models, we don't assume that we have such a specific form, and we may have limits which are very wild, for which we will not get asymptotic freeness, because we don't have these pictures. So, to give another view, let's focus on some techniques that we can understand together. Let us compute the expectation of the normalized trace of a product, and let's do as before: A_{l1} up to A_{lL}, for an integer L and for l_1, ..., l_L each equal to one or two; I'm just reasoning with two independent matrices to simplify the presentation. Earlier we were considering normalized matrices, written like X over square root of n; here we don't have matrices written like that, so it will be different. But it is always true that this is tau_n of a simple cycle; this definition, which we gave at the beginning of today's session, works for every matrix. Here we have the first matrix; I just write the labels, because we don't know the entries. So we know that we can expand this trace in terms of indices and organize the indices which are equal or not, and we have written this as a sum over the partitions of these vertices, and here we have the injective trace of the quotient graph T^pi applied to these matrices, if T is this one. So I will recall the definition of this injective trace applied to T^pi, because we must focus on it to see what happens. I have not written the definition of tau^0 because it was just a repetition of the definition of tau, but it's stuff like this: there is an expectation, which I will put inside by linearity.
One over n, the sum over the injective maps from the vertex set of my quotient graph, and because this is the quotient graph, I encode the identifications by pi. Phi, I assume, is injective: my partition really tells me that two different vertices have different indices in my entrywise formula. And then I have a product of matrix entries: for each edge e, I have a matrix which is either A_1 or A_2, let's call it A(e), just the corresponding matrix, and then we have the matrix entry, with the output written first and the input last, because it's a nice convention. Here is the formula. And I forgot the expectation: I had written it at the beginning of the formula earlier, but by linearity I can put it right there. Okay, so the formula may appear cryptic at the beginning, but we are maybe starting to understand what it means. Okay, the permutation invariance: we assume that each matrix is permutation invariant and that the matrices are independent, so that if I consider the couple of the matrices, this is a family which is permutation invariant. Now, the permutation invariance implies that this expectation is actually independent of phi. It's the same trick as before: remember, we were taking the expectation of a product of entries of GUE matrices or Wigner matrices, and I was saying that if you change the names of my indices, you don't change the value, because you have i.i.d. random variables. For permutation invariant matrices, it is the same story; the assumption we used for Wigner matrices was really permutation invariance. So let us give this guy a name, delta^0, because it's almost tau^0. So delta_n^0 of a graph, implicitly applied to my two matrices, is just a renormalization of this guy. So that is it: we have the one over n, and we have the number of injective maps.
We know that this number is n factorial over (n minus the cardinality of the vertex set) factorial, which is about n to the power the number of vertices; so tau^0 is, up to the one over n, this falling factorial times the quantity delta_n^0. And what is delta_n^0? It's just the expectation of a product of entries of the matrices, a joint moment of the random variables that appear in the entries. Okay. But here two matrices are involved, A_1 and A_2, and they are independent; I'm considering the expectation of a product of a function of A_1 and a function of A_2, so it splits. Delta_n^0 of T^pi: we can write it as the expectation, over the edges labelled one, of my quantity, let me put phi of e as a shortcut for this, times the expectation of the product over the edges labelled two of my matrices. So here it's A_1 and here it's A_2. This is my independence. And in this formula a phi appears. But I said that it is independent of phi, so this is not really a dependence: this is true for any injective map, for all phi injective we have this equality. Okay. Tau_n^0, which is the injective trace, is proportional to this delta, so each of these guys is actually a delta which is proportional to a tau_n^0, and we are going to go back to some tau_n^0. Let me do that. In the previous example we had Wigner matrices, and today, at this moment, I have chosen to consider just two matrices to simplify. No, no, this is because it's a product of L matrices with possible repetitions: l_1, l_2, up to l_L, each either one or two, just to say whether each factor is the first or the second matrix, A_1 or A_2. If you want, just put colors: you have a color for A_1 and a color for A_2, and in your cycle you say that the white one is the first guy and the orange one is the second one. You just have two colors because you have two matrices. Cool. Let's clarify it now. Please. A comment? No, we need that; otherwise this is not true.
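The count of injective maps used at the start of this computation can be checked in one line; a small sketch with illustrative values of n and of the vertex count:

```python
from math import factorial

n, v = 1000, 4  # n possible indices, v vertices of the quotient graph
# Number of injective maps from a v-element vertex set into {1, ..., n}:
injective_maps = factorial(n) // factorial(n - v)  # n! / (n - v)!
print(injective_maps / n**v)  # close to 1: n!/(n-v)! ~ n^v for large n
```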
You should think about this: if you try to do it without the injective trace, just splitting things like this, it won't work. So we should focus on that step, but okay, we have fifteen minutes; I would like to explain carefully the rest of the proof. So I want to interpret this product. This is similar to this delta_n^0, but not on all the edges, just the edges labelled one. So I have this graph, and I look at a quotient of it; it can be quite arbitrary. Let's say I will draw something which is not really a quotient of a cycle, but for today I don't care; you can start with something different. Imagine this is a T^pi; actually, let me do something which is a quotient of a cycle, it's not difficult: if you take a quotient of a cycle with a good number of edges, you may obtain this guy. So I will denote T_1 and T_2 the subgraphs of T, of T^pi I should say, I'm talking about this quotient, consisting of the edges labeled one or two respectively, so that I have a subgraph which is white and a subgraph which is orange: T_1^pi is a disconnected graph like this and T_2^pi is this disconnected graph. So this guy is just, if I follow this definition, a delta_n^0 of my graph T_1^pi in my first matrix; I will not write it fully explicitly, because the notation is really heavy. And this guy is a delta_n^0 of T_2^pi. Okay, it's a lot of notation, but what is important is that we split things. And this delta_n^0 can be written in terms of a tau_n^0, because from the beginning we know that tau^0 is just a different normalization of delta_n^0. So denote this guy: I can write it, by putting this guy on the other side, with a slightly different normalization. Let me write things out. Earlier we were considering connected graphs; now that we have disconnected graphs, I will write tau_n^0 of T_1^pi, and I will just incorporate this disconnectedness in the definition by dividing by n to the k_1.
And here we have the expectation of the sum over phi injective of the product, as usual, the same formula as before; I just want to redefine my tau_n^0 when the graph has k connected components like this. Same for T_2. So k_1, and in general k_i, is the number of connected components of T_i^pi. Okay. And then I have the same formula; I have the same phenomenon, that one delta is a rescaling of one tau. And so this expression, this tau_n^0 of T^pi, is one over n, times n to the number of vertices of T^pi, up to a small error, times the delta. Now I reverse, writing my delta^0 in terms of tau: this guy is a tau_n^0 of T_1^pi, and I will do the same for T_2^pi. But now I have some exponents here: I have n to the k_1 plus k_2, minus the cardinalities of the vertex sets. I'm just using the same formula as here, but no, it's not minus one, it's minus the number of connected components: the minus one, plus v with the other sign, becomes the k_i minus v. I do that for T_1, I do that for T_2. Okay. So this is just the same trick as before, but reversed. The assumption will be that this guy converges, and this is actually the convergence in traffic distribution: there are several equivalent ways to characterize the convergence in traffic distribution, and this is one way, thanks to this injective trace. If you look at the monograph about this subject, there is another assumption, of factorization over connected components, but for the talks of this week we don't need that; so if you discover it in the paper, don't be surprised. It's an ingredient we want in the spirit of the paper, but we don't need it for the results we state. So we assume that it converges; in particular, it is bounded. And we are in the same situation as for the Wigner matrices, where we have an injective trace we want to compute, which is equal to n to some power times something bounded. Let us look at this power; call it eta of pi.
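Collecting the normalizations above, the exponent can be summarized as follows; this is my reconstruction from the definitions used in this session, with k_i the number of connected components and |V_i^pi| the number of vertices of T_i^pi:

```latex
\tau_n^0\big[T^\pi\big] \;\approx\; n^{\,|V^\pi|-1}\,\delta_n^0\big[T^\pi\big],
\qquad
\delta_n^0\big[T_i^\pi\big] \;\approx\; n^{\,k_i-|V_i^\pi|}\,\tau_n^0\big[T_i^\pi\big],
```

so that, multiplying the three factors together,

```latex
\eta(\pi) \;=\; \big(|V^\pi|-1\big) \;+\; \big(k_1-|V_1^\pi|\big) \;+\; \big(k_2-|V_2^\pi|\big).
```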
And what I'm going to do is to prove that, for a graph that I will introduce, this power is minus one, minus e, plus v, where G^pi is a graph which is connected; the quantity is ruled by its Euler characteristic. The difficulty of this technique is to see which graph we should introduce whose Euler characteristic rules this quantity. So after studying this problem, we arrived at this conclusion: we define the graph that we call the graph of colored components. The vertex set is the set of connected components of T_1 and T_2, together with the vertices of T^pi. This is a somewhat arbitrary choice, but it works at the end, because we just sat in front of the problem and analyzed it. What is the set of edges? Each vertex is attached to the components it belongs to. Let's illustrate this, to be clear about what we are talking about. Here we have this graph, with four connected components; I will represent them by some squares like this, with colors to make it explicit. Now I repeat my vertices: this vertex belongs to this connected component and to this one, so I draw a line here and a line here, and so on; this guy belongs to this one and to this one. So let's check: what is the cardinality of the vertex set? It's the number of connected components of my graphs plus the number of vertices. What is my number of edges? Each vertex belongs both to a first-color and to a second-color connected component, so it's two times the number of vertices. And, if there is no mistake, the miracle happens: this is indeed the Euler characteristic of this graph. The minus one is for the number of connected components of my graph: it is connected, by definition; check this. And here you have a cardinality of V minus another cardinality of V, which explains the change of sign. So it's a bit magic like this, but this is the analysis by traffics: to find a good way to interpret this power in the expansion.
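The bookkeeping for this graph of colored components can be sketched in a few lines; a minimal sketch, where the example quotient graph and the function names are my own illustrative choices:

```python
# Sketch of the "graph of colored components" (GCC) counts: T^pi is given
# as a list of edges (u, v, color); the GCC has one node per connected
# component of each colored subgraph, plus one node per vertex of T^pi,
# and an edge from each vertex to every colored component containing it.

def color_components(edges):
    """Map each touched vertex to a component representative (union-find)."""
    parent = {}
    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        parent[find(u)] = find(v)
    return {v: find(v) for v in list(parent)}

def gcc_counts(edges):
    """Return (#vertices, #edges) of the GCC of an edge-colored graph."""
    colors = {c for _, _, c in edges}
    verts = {u for u, _, _ in edges} | {v for _, v, _ in edges}
    comp = {c: color_components([(u, v) for u, v, cc in edges if cc == c])
            for c in colors}
    n_components = sum(len(set(m.values())) for m in comp.values())
    n_gcc_edges = sum(1 for c in colors for v in verts if v in comp[c])
    return n_components + len(verts), n_gcc_edges

# A 2-colored quotient of a 4-cycle with alternating colors:
edges = [(0, 1, 1), (1, 2, 2), (2, 3, 1), (3, 0, 2)]
v, e = gcc_counts(edges)
print(v, e)  # here V - E = 0: this GCC contains a cycle, so it is not
             # a tree, and such a quotient does not contribute in the limit
```

For a single vertex carrying one loop of each color, the same counts give V - E = 1, so that GCC is a tree and the quotient survives.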
And one convenient way is to introduce a graph in which we can see this quantity. For Wigner matrices, we have seen that we have an Euler characteristic and something else, about the multiplicity of the edges. So this can also happen: for some matrix models it is more complicated and we need a much finer analysis; if we want to assume something more natural than this, we have to work more. But still, there is a step where you introduce a good graph; this needs some intuition, of course, but that's it. So the conclusion is that tau_n^0 of T^pi converges to zero if, let's call this guy the GCC, the graph of colored components, if the GCC of T^pi is not a tree; if this guy is not a tree, it won't contribute. But if you consider something else which is a tree, it will work, and then it converges to the limit of the injective trace of the first graph times the limit of the injective trace of the second graph. And that's it: we have proved what we decided to call, after this proof, the asymptotic traffic independence of the two matrices. This is a weird formula; it is very combinatorial, because we have decided to write it in terms of the injective trace. There are other characterizations of traffic independence, one which is similar to freeness, one which is similar to the cumulant characterization of freeness. But I told you, the injective trace is another transform; it is like the wavelet transform between the two pictures: it has a sparse representation, because most of the time you get zero. And otherwise we had cumulants, which give a product; I wouldn't expect something much simpler than a product, of course, in such a formula. Okay, so this is the end of this session, if I'm correct. And we will see in the next lecture how to interpret this combinatorial formula, which may not be very convenient; it is nice to get it, but how do we work with it? By proving freeness over the diagonal. We will define this, and it will be much more algebraic than this combinatorial aspect.
This is just a detour, a bit obscure, but at the end we want something more analytic, or more algebraic. Okay, thank you. Questions? If you have a matrix with i.i.d. entries, in particular it will be permutation invariant; for a real matrix it works. I decided to do the Wigner matrices, where you have a symmetry, but if you try to do the same proof without the symmetry, the double trees still show up; there are some details that change, but they are exactly the details that I did not mention, so the main story will be the same: the double trees rule the limit. So the symmetry, we don't really care about: you get semicircular variables or circular variables, which actually belong to the same family, and nothing different happens. I don't think so; I don't think you want this number to match, you want this to be zero. So yes, there is this minus one; and yes, this is the number of vertices of the GCC. But you know, it involves different topological properties of my graphs, the number of vertices, but also this number of connected components, which can be translated into very elementary functionals of another graph. If you think about computation with a computer: this is not really the subject here, it is about the theoretical aspects. First, if you want computation with a computer, it will be the next lecture, when we will sum this up in an analytic form. Secondly, if we introduce this setting, it's not to focus on Wigner matrices and double trees; it's precisely to be able to catch other ensembles. Catching other ensembles means that we don't expect, in the theory, a limit which is a double tree or a cactus or something specific; we just assume the limit exists. Then, if you specialize, for an example like the Bernoulli matrices, you will see that you get not the double trees but trees with arbitrary multiplicities, fat trees, as I call them. And then you play with this formula to understand the qualitative properties.
But it's not finished. You have to understand, depending on the traffic distribution, which is basically a combinatorial description, how to use a combinatorial formula and get some analytical result or something. So it is a good start: it is good to have this conceptual notion of independence which goes beyond the unitarily invariant model, with this permutation invariant model. But when you accomplish this, it's not done: if you want to draw a density, you are very far from doing it at this step. This goes back to 2011; being able to do a computation with a computer came five years later. But you will have a brief description on Thursday about this. Okay. Thank you.