Thank you very much, and thank you for the invitation. So I am talking about that subject, and first of all, in the last talk we have already seen a relationship between a distributional symmetry and the classical notion of independence: as Christophe Sabot recalled, exchangeability, thanks to de Finetti's theorem, is related to the notion of independence, here to mixtures of independent processes. The story is a bit different in our context, but the mechanism is the same. We are talking about random matrices, and thanks to symmetries of the model we will see a notion of independence appear. But in this context it is something different from classical probability: the non-commutative notions of independence belong to the framework of non-commutative probability, which I am going to introduce, and probably most of my talk will be devoted to recalling what non-commutative probability is.

So first, an introduction about random matrices, where I give the motivations. I consider a random matrix X_n, and as usual we are interested in the spectrum of this matrix, which we encode in the empirical eigenvalue distribution. I denote this measure L_{X_n}; it is the probability measure putting weight 1/n on each eigenvalue. The purpose, to some extent, is to compute the limiting distribution, the limit of this probability measure.

The main example is Wigner's theorem. Take a Wigner matrix X_n: a real symmetric matrix whose entries are normalized by the square root of n, such that the entries X_{ij} are i.i.d. up to the symmetry, centered, with variance 1 and moments of every order, say with distribution independent of n. Then the empirical eigenvalue distribution of X_n converges as n goes to infinity, almost surely and in expectation, to the semicircle distribution.

So this is the very basis of random matrix theory, and the goal in this talk is not to compute the limiting distribution of one matrix given by one model, but to understand the empirical eigenvalue distribution of a matrix built out of several independent matrices X_1^n, ..., X_L^n. So here is one example: H_n = P(X_1^n, ..., X_L^n), where the X_l^n are independent matrices, say independent Wigner matrices for instance, and P is a fixed non-commutative polynomial. If we have some symmetry assumption about these matrices, we will be able to develop a theory which tells us that the limiting distribution of H_n exists, and we can compute it with some rules.

Okay, and the idea of free probability. When we see one matrix, what is interesting for us is that measure. And we see the random matrix not as a collection of many random variables, but as a single variable distributed according to that measure: we think about it as a non-commutative random variable, an abstract random variable. A simple example is when you consider just a sum of matrices, say H_n is the sum of two independent matrices. In the context of classical probability, if you have the sum of two independent random variables, the distribution of the sum is the convolution of the distribution of X_1 and the distribution of X_2. In the limit, when n is large, we will have a notion of convergence in non-commutative distribution, which allows us to introduce an abstract object H that belongs to an algebra and can be written as the sum of two non-commutative random variables.
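As a minimal numerical sketch of Wigner's theorem (the Gaussian entries and the size n = 2000 are arbitrary illustrative choices, not from the talk), one can sample a single real symmetric matrix and compare its spectrum with the semicircle density:

```python
import numpy as np
import matplotlib.pyplot as plt

n = 2000
rng = np.random.default_rng(0)

# Real symmetric Wigner matrix: i.i.d. entries up to symmetry,
# centered, variance 1, then normalized by sqrt(n).
G = rng.standard_normal((n, n))
X = (G + G.T) / np.sqrt(2 * n)   # off-diagonal entries have variance 1/n

eigs = np.linalg.eigvalsh(X)

# Semicircle density on [-2, 2]
t = np.linspace(-2, 2, 400)
rho = np.sqrt(4 - t**2) / (2 * np.pi)

plt.hist(eigs, bins=60, density=True, alpha=0.5, label="spectrum of X_n")
plt.plot(t, rho, label="semicircle density")
plt.legend()
plt.show()
```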
And under some assumptions, specifically a distributional symmetry, we will get asymptotically a notion of independence which is not the classical notion of independence; it is called free independence. In that context, the distribution of the sum will be called the free convolution of the distribution of X_1 and the distribution of X_2. Free convolution is something that I am going to define: it is just the distribution of the sum of two free independent variables. In the setting introduced by Voiculescu, the associated notion of distributional symmetry is unitary invariance, as for the GUE. So it is a quite strong notion of invariance. And if time allows, what I will develop is the theory we obtain when we reduce this symmetry to invariance by conjugation by permutation matrices. Invariance by permutation matrices is a much weaker notion, and it is very useful in the context of random graphs. So in another story, not today's talk, I could develop the application of this theory to the computation of the spectrum of random graphs; today I will just present the very general theory.

So let me now introduce the notion of non-commutative probability, with the intended application to random matrices in mind. What is a non-commutative probability space? (I will just say "non-commutative space" as a shortcut.) A non-commutative probability space is something much simpler than a probability space in classical probability: it is just a pair (A, φ). Here A is a unital *-algebra, that is, a possibly non-commutative algebra with a unit in the usual sense, equipped with an anti-linear involution * which satisfies (ab)* = b* a*, like the conjugate transpose of matrices. And φ plays the role of the expectation: it is a linear form on A. So if it plays the role of the expectation, we assume that φ of the unit of A is 1, like the expectation of the constant function 1. And as we will need a flavor of analysis, we assume that the expectation of a positive element is non-negative; in this non-commutative context, a positive element is an element of the form a* a. This positivity is what will later allow measures on the real line to realize the non-commutative distributions of variables. [Question:] Yes? [Answer:] It is complex-valued in general: φ goes from A to C, it is a linear form.

Okay, so let me give examples of this kind of space. The first example relates the framework to classical spaces: if (Ω, F, P) is a classical probability space, you can create a non-commutative space of random variables. You take (A, φ), where A is an algebra, say of bounded random variables defined on Ω, together with the expectation. It is just this data that we keep in this algebraic context. What we want to do is compute the eigenvalues of random matrices, so here is the next example; I put the index n to denote the size of the matrices. You take an algebra A_n of random matrices together with the normalized trace, φ_n = (1/n) Tr; you can also possibly take φ_n = E[(1/n) Tr], the expectation of the normalized trace. And we will see in a few minutes that the notion of distribution in this context is encoded in the spectrum of the matrices, at least for Hermitian matrices. And a more abstract example, which is actually one of the most general ones, is (B(H), φ).
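To make the second example concrete, here is a toy version of the state φ = E[(1/n) Tr], estimated by Monte Carlo on GUE-like samples (the matrix model and sample counts are arbitrary illustrative choices); it checks the two axioms φ(1) = 1 and φ(a* a) ≥ 0:

```python
import numpy as np

n, samples = 50, 200
rng = np.random.default_rng(1)

def phi(f):
    """Estimate phi(f(A)) = E[(1/n) Tr f(A)] over GUE-like samples A."""
    vals = []
    for _ in range(samples):
        G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        A = (G + G.conj().T) / np.sqrt(4 * n)
        vals.append(np.trace(f(A)).real / n)
    return np.mean(vals)

print(phi(lambda A: np.eye(n)))            # phi(1) = 1: the state is unital
print(phi(lambda A: A.conj().T @ A) >= 0)  # phi(a* a) >= 0: the state is positive
```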
You take B(H), the set of bounded operators on some Hilbert space H, together with φ, where φ(a) is the scalar product of a unit vector ψ against a applied to ψ, that is, φ(a) = ⟨ψ, aψ⟩. So it is a much more abstract setting, but up to technicalities, most non-commutative probability spaces can be realized in this form.

As in classical probability, what is important is not really the way we define the variables, but the notion of distribution. So what is the distribution of non-commutative random variables? The non-commutative distribution of a family a of elements of A (I denote families by a bold symbol; it is a family of elements indexed by some set J, so it belongs to A^J) is encoded in a linear map, which I denote φ_a, like the expectation but with the index a. This linear map takes a non-commutative polynomial P in the variables a_j and a_j* and associates to it φ(P(a)). The idea is that, as we are in an algebra, the numbers we can extract from a are obtained as follows: we first perform the most general operations we can do in this algebra, the non-commutative polynomials, and then we take the trace, the expectation.

So let us go back to the first two examples, compared to the classical setting. The notion is not exactly the notion of distribution in the classical sense, but it is very much related. If a is a family of classical random variables, then φ_a, defined by that formula in this context, just encodes the joint moments of a. Under some assumptions, say when the random variables are bounded, this actually characterizes the distribution, although of course in a much more algebraic way, as we are in a more general context.

For a random matrix, we can relate this notion of non-commutative distribution to the notion of empirical eigenvalue distribution. If A_n is, say, a Hermitian matrix, then φ_{A_n}(x^k) is by definition (let us take the first definition) (1/n) Tr(A_n^k), and this is nothing else than the empirical eigenvalue distribution of A_n applied to the function x ↦ x^k. So for a single matrix, the non-commutative distribution and the eigenvalue distribution are exactly the same thing. Yet the situation is more complicated if we have several matrices. If you have several matrices and take the trace of a polynomial in these matrices, this does not depend only on the eigenvalues of the matrices: in general it also depends on the eigenvectors, when the matrices cannot be diagonalized in the same basis. And it is exactly the role of the eigenvectors which will give us the several notions of independence.

Let me just mention this: if you have a family A_n of random matrices, and you consider H_n = P(A_n), a function of these matrices given by a non-commutative polynomial, then H_n is the guy you want to study and A_n is the guy you know, the one whose distribution you know. The empirical eigenvalue distribution of H_n is encoded in the non-commutative distribution of A_n: if I want the k-th moment of the empirical eigenvalue distribution of H_n (say it is real symmetric), it is by definition (1/n) Tr(P(A_n)^k),
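Here is a quick numerical illustration of this last point (an illustrative sketch, with arbitrary choices of matrices): a mixed moment of a pair of matrices changes when only the eigenvectors change, the two spectra being kept fixed.

```python
import numpy as np

n = 200
rng = np.random.default_rng(2)

# Two diagonal matrices; their eigenvalues are fixed once and for all.
A = np.diag(rng.choice([-1.0, 1.0], size=n))
B = np.diag(rng.uniform(-1, 1, size=n))

# A rotated copy of B: same eigenvalues, different eigenvectors.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
B_rot = Q @ B @ Q.T

tr = lambda M: np.trace(M).real / n
# The mixed moment (1/n) Tr(ABAB) depends on the eigenvectors:
print(tr(A @ B @ A @ B), tr(A @ B_rot @ A @ B_rot))
```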
And this is the non-commutative distribution of the family of matrices A_n applied to the polynomial P^k. So the strategy, if you want to study H_n, is to develop a theory to understand the limiting non-commutative distribution of several matrices.

Okay. So now I can introduce the notions of independence. Speicher proved that there are exactly three notions of natural independence. I am not defining what a natural notion of independence is, but heuristically it is a rule which, knowing the distributions of a_1, a_2, ..., a_L, gives you a joint distribution for the whole family; and it is natural if it behaves in the same intuitive way as the classical notion of independence.

So I am going to define these three notions of independence. When we define independence for classical random variables, we first define the independence of sigma-algebras; here we define what independent algebras are. So I take A_1, ..., A_L sub-algebras of a non-commutative probability space (A, φ), and I am going to state a rule which determines the distribution on the algebra spanned by all these elements, just knowing the distribution on A_1, the distribution on A_2, and so on.

So first, tensor independence. Tensor independence is the classical construction of independence in classical probability; it is the same rule: the expectation of a product is the product of the expectations. For all n and for all a_i in A_{l_i}, I want to compute the expectation of the product a_1 ⋯ a_n, where each element belongs to one of the algebras, and the formula is just the product of the expectations: I organize my terms according to the algebra they belong to, and take φ of the product of those elements lying in the algebra A_l, for each l. And here these are ordered products, because my algebras are non-commutative, so φ(a_1 a_2 a_3) is not the same thing as when I permute the variables: inside each group I keep the variables in their original order, first variable first, then the second, and so on. This is actually the classical notion of independence: if the algebras commute, you do not care about this ordering, and this is the formula you know.

The second notion is also quite simple, and it is called Boolean independence because of the combinatorial structure behind it, the Boolean poset. We say that variables are Boolean independent if, in the same setting, the expectation of a product is a product of expectations, but now taken naively on the non-commutative objects: φ(a_1 a_2 ⋯ a_n) = φ(a_1) φ(a_2) ⋯ φ(a_n). This is the naive definition, yet it is a natural notion of independence. It is an interesting one, but it is kind of strange, in the sense that we lose properties we expect from classical probability. For instance, suppose a is Boolean independent from the unit, a property which is automatic for us classically, and compute the moment of order k of a. Because 1 is a unit, a^k is just a·1·a·1·⋯·a; that is the definition of a unit, nothing strange. But now apply the rule: you can cut at each φ(1) = 1, and you conclude that φ(a^k) = φ(a)^k. The moment of order k is the mean to the power k, which means that a has the distribution of a constant, deterministic random variable. So this is an odd property.
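For reference, here is one way to transcribe the two rules just stated (the notation, in particular the index map i ↦ l_i and the alternation assumption in the Boolean rule, is a transcription choice, not taken verbatim from the board):

```latex
% Tensor independence, for a_i in the algebra indexed by l_i: group the
% factors by algebra, keeping the order of appearance inside each group.
\varphi(a_1 a_2 \cdots a_n)
  \;=\; \prod_{l} \varphi\Big( \prod_{i \,:\, l_i = l}^{\longrightarrow} a_i \Big)

% Boolean independence, for alternating words (l_i \neq l_{i+1}):
% factorize naively.
\varphi(a_1 a_2 \cdots a_n) \;=\; \varphi(a_1)\,\varphi(a_2)\cdots\varphi(a_n)
```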
We could have the feeling that Boolean independence is not a good notion; yet it is, it is just a strange notion of independence.

The third notion of independence is, arguably, the most important non-commutative notion of independence: free independence. It is also the most complicated to write; it is not a rule which gives you a way to compute every moment directly. So I consider variables a_1, a_2, ..., a_n as before. Maybe I forgot to mention something important here: when I write this, I assume that l_i is different from l_{i+1}, which means that each variable is not in the same algebra as its neighbors. (So l... oh, it is l_i. Thank you.) When I have a word in several variables, I can always regroup the consecutive ones belonging to the same algebra and write it like this, so I assume l_{i+1} ≠ l_i; such a product is called alternating. I also assume that the elements are centered, in the sense that their expectation is zero. Then the definition is: the product a_1 a_2 ⋯ a_n is also centered. An alternating product, a product where neighboring variables belong to different algebras, of centered elements is centered. And this is the definition. It is not especially easy to use. Just to mention: Voiculescu introduced that definition, and he did not discover the relation with random matrices at the beginning; he was studying the von Neumann algebras of the free group factors. And actually it mimics the definition of freeness of subgroups. I am not going to give that definition, but if you know what freeness in groups is, this is exactly what Voiculescu encoded in this definition.

Okay, let me convince you that this well defines a notion of independence, in the sense that if you know φ on each algebra, A_1, A_2 and so on, then you can compute φ on the algebra generated by all these elements. So I consider a_1, ..., a_n as before, an alternating product, but I am not assuming that the variables are centered. If I am able to give a formula for φ(a_1 a_2 ⋯ a_n), I will be able to compute φ on the algebra generated by everything. And to compute it, I introduce the term (a_1 − φ(a_1)1)(a_2 − φ(a_2)1) ⋯ (a_n − φ(a_n)1). If I introduce this term, I must correct the formula: the quantity I want to compute equals this term plus other terms, obtained by expanding these expressions in all possible ways. And here I can use the definition of free independence: this term has expectation zero, and the other terms involve φ of products of fewer than n elements. So you must iterate this rule several times and expand a lot of terms in order to write the moment explicitly. So it is more difficult. Yet a lot of work has been done on this notion, and we have more explicit ways to do this kind of computation in some situations.

For instance, let a be a non-commutative random variable distributed according to the semicircle distribution, which means that φ(a^k) is the k-th moment of the semicircle distribution. Let b be free from a and let h be the sum of a and b. What is the distribution of h? How can we compute it? Actually, we can use the Stieltjes transform, denoted S_x(z) = φ((z − x)^{−1}); this is the usual formula.
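Here is the smallest instance of this centering trick, worked out explicitly (the final abab formula is the standard consequence of iterating it):

```latex
% Take a_1, a_2 free, belonging to different algebras. Expanding the
% product of the centered parts and using \varphi(1) = 1 gives
\varphi\big((a_1 - \varphi(a_1)1)(a_2 - \varphi(a_2)1)\big)
  = \varphi(a_1 a_2) - \varphi(a_1)\varphi(a_2).
% Freeness says the left-hand side vanishes (an alternating product of
% centered elements is centered), hence
\varphi(a_1 a_2) = \varphi(a_1)\,\varphi(a_2).
% Iterating the same trick on longer alternating words gives, for a free from b,
\varphi(abab) = \varphi(a^2)\varphi(b)^2 + \varphi(a)^2\varphi(b^2)
              - \varphi(a)^2\varphi(b)^2 .
```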
So (z − x)^{−1} is not exactly a polynomial, but trust me, you can give a sense to that formula in a non-commutative probability space, say as the limit of the series Σ_k φ(x^k)/z^{k+1}. Then the Stieltjes transform of a + b satisfies S_{a+b}(z) = S_b(z − S_{a+b}(z)); everything is evaluated at the same z there. If you think about the classical case, for the distribution of a sum of independent variables you have a formula which does not single out one or the other variable. Here we have a formula which requires one variable to be very specific, with a semicircle distribution, and it is not an explicit formula for the distribution of the sum h, which is what we are interested in, but a fixed-point equation for its Stieltjes transform. Still, it is a characterization, and if you have a computer with some numerical method, you can draw a histogram of the distribution of a + b thanks to that formula. So this is the kind of thing we can do, and if you have never met free probability, maybe you have met this kind of formula in a random matrix problem; free probability gives you a framework which motivates this kind of equation.

Okay, so let us now talk about the relation between random matrices and free probability. The first relation was given by Voiculescu in the '90s, whereas he had introduced this framework in the '80s, ten years before. It is called the asymptotic freeness theorem, and here is the setting. I consider several families of random matrices; each of these symbols is a bold symbol, so it denotes a family of matrices. Here I have one family of matrices, here I have another family, and I do not care how this one is indexed compared to that one. So let A_1^n, ..., A_L^n be independent families: the first family is a collection of matrices, maybe they are dependent among themselves, but the family is independent from the other ones, and so on. And we assume two things.

The first assumption is that each family converges in the non-commutative sense; what I want to obtain at the end is that the L-tuple of families converges as well. So each family converges, almost surely and in expectation, in non-commutative distribution, which means, I recall, that for any polynomial P, (1/n) Tr P(A_l^n) converges to some value that I denote φ_l(P). Okay?

The second assumption is a distributional symmetry. Assume that for each family (let me denote the matrices (A_j^n) for j in an index set J_l) the collection of random matrices has the same law as the one obtained by conjugating each matrix by the same unitary matrix, and this for every unitary matrix U. Okay? I will denote this conjugated family U A_n U* in the following. For instance, a family of independent GUE matrices, or even dependent GUE matrices, satisfies this kind of property.

Then the families are asymptotically free, in the sense that if I now take (1/n) times the trace of a polynomial in all the families, the first one, the second one and so on up to the L-th one, this expression converges to some φ(P), where φ is determined by φ_1, ..., φ_L and the requirement that it is the free product of them: the matrices are asymptotically free.
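As a minimal numerical sketch of that fixed-point equation (the choice of b, a symmetric Bernoulli variable, and the damped iteration are illustrative assumptions, not taken from the talk), one can solve S_{a+b}(z) = S_b(z − S_{a+b}(z)) just above the real axis and recover the density by Stieltjes inversion:

```python
import numpy as np
import matplotlib.pyplot as plt

# b has the symmetric Bernoulli law, weights 1/2 at -1 and +1, so that
# S_b(w) = (1/2)/(w - 1) + (1/2)/(w + 1).
S_b = lambda w: 0.5 / (w - 1.0) + 0.5 / (w + 1.0)

x = np.linspace(-3.5, 3.5, 700)
z = x + 1e-3j            # approach the real axis from above
S = S_b(z)               # initial guess
for _ in range(3000):    # damped fixed-point iteration (heuristic but robust)
    S = 0.5 * S + 0.5 * S_b(z - S)

density = -S.imag / np.pi    # Stieltjes inversion
plt.plot(x, density)
plt.title("semicircle law free-convolved with (delta(-1) + delta(+1))/2")
plt.show()
```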
So this is stated for unitarily invariant matrices, and it remains true if one of the families consists of independent Wigner matrices. A consequence is the following: if you have a matrix H_n which is the sum of two independent matrices, one of which is a GUE matrix, then the empirical eigenvalue distribution of H_n converges as n goes to infinity toward the free convolution of the limiting distributions. So if you know that the other random matrix, or deterministic matrix, has an empirical distribution which converges, with Stieltjes transform S_B, then you can use that formula and that technique to understand the spectrum of H_n.

[Question:] Yes? Can you repeat the assumption of the statement, for all l in 1, ..., L? Where is the claim, the equality in law? [Answer:] Here you have a tuple of random matrices. You consider another tuple, on the right, where you have conjugated each matrix by U, an arbitrary unitary matrix. The claim is that this does not change the law of the family, like for a GUE matrix. There are not a lot of matrices with that property, but the GUE matrix is the prototype of one. Right? Okay.

So we can say a lot of things about these notions. For instance, each of these notions of independence is actually related to a construction on graphs: if you have random graphs or deterministic graphs, there are three notions of product such that the spectrum of the product is given by some construction on the graphs. And here, at this level, we just have the relation with free independence, essentially. Yet we can see that if the matrices satisfy another symmetry, say you assume that the matrices are diagonal and that the invariance is by conjugation by permutation matrices, which means that the matrices are invariant when you permute the entries along the diagonal, then you can compute (it is an exercise) that the relation you get at the end is tensor independence: for diagonal matrices, the action of the permutation group gives you tensor independence, the classical notion. And before the work I am about to describe, there was no fitting distributional symmetry related to Boolean independence.

What I am telling now, so this is the next part, was first motivated by a model of random graphs. The result of that study is a generalization of the theorem there, where we do not assume invariance by conjugation by arbitrary unitary matrices, but just invariance by conjugation by permutation matrices. So if we are able to write such a theorem completely, we get something which applies to many more matrices, because this invariance is much weaker than unitary invariance: the permutation group is just a small subgroup of the unitary group. Right? But if the distributional symmetry is weaker, we must adapt the other part, the notion of convergence: convergence in non-commutative distribution is not enough, we will need a stronger notion of convergence. And this is what I call convergence in traffic distribution. "Traffic" is just a word, related to what appears there. I am going to give the definition: it involves the same kind of convergence, but for more functions, which are of a combinatorial nature. Okay?
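A quick Monte Carlo illustration of the theorem (an illustrative sketch; the GUE matrix plus an independent diagonal projection is an arbitrary choice): for a semicircular and centered with φ(a²) = 1, the freeness rule predicts φ(abab) → φ(a²)φ(b)² = 1/4 when b is a diagonal projection of normalized trace 1/2.

```python
import numpy as np

n, samples = 300, 50
rng = np.random.default_rng(3)

vals = []
for _ in range(samples):
    G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = (G + G.conj().T) / np.sqrt(4 * n)             # GUE, semicircular limit
    B = np.diag(rng.integers(0, 2, n).astype(float))  # independent projection
    vals.append(np.trace(A @ B @ A @ B).real / n)

# Asymptotic freeness predicts phi(abab) = phi(a^2) * phi(b)^2 = 0.25
print(np.mean(vals))
```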
So if you have more information about the matrices, you are able to allow less distributional symmetry than unitary invariance. And the conclusion will hold not just for polynomials but also for these combinatorial functions, so we will get more information at the end. The conclusion is not that the matrices are asymptotically free independent: what appears is another notion of independence, a fourth one if you want, which actually encodes the three notions of independence. So here, the matrices are said to be asymptotically traffic independent. It is just a word, and it is not really a non-commutative notion of independence in the previous sense, because it is defined for objects carrying a richer notion of distribution. What is important is not the detail of this definition, which is a bit complicated, but that traffic independence encodes the three notions of independence and much more: we can introduce variables that are neither tensor, Boolean nor free independent but that are still traffic independent, specifically matrices of random graphs.

What does "encode" mean? It means that I define an object, a traffic, for which the notion of distribution is richer than for non-commutative random variables; then I can define three classes of traffics such that, for traffics of each class, traffic independence means tensor independence, free independence or Boolean independence respectively. So I can realize each notion thanks to a specific and explicit class of traffics. And then, as feedback from this analysis, we can also introduce random matrices which are, for instance, asymptotically Boolean independent: even if there was no theorem about asymptotic Boolean independence in this context, the tools of traffics allow us to do that. So in the last ten minutes, what I am going to do is introduce this notion of traffic distribution for matrices.
So the traffic distribution of A_n, as I announced, is the data of the map which, to a given function g, associates (1/n) Tr g(A_n), where g is called a graph polynomial. This is exactly what I am going to define: what a graph polynomial is, and what g(A_n) means. I first define graph monomials. The basic idea is that when you have linear operators, you compose them in a linear way; for matrices, you take the product of matrices. In this picture I put an input to mark the source space and an output for the target space. With traffics, we will be able to compose operators along any scheme given by a graph: it makes sense to have several edges operating at the same time, we can have loops, we can have edges in the reverse direction, and so on.

So a graph monomial is the data of a picture like this: a finite connected graph whose edges are directed, with two distinguished vertices, in and out, possibly equal, together with a labeling of the edges by indices of matrices. Usually the matrices A_1, ..., A_L are given in advance, and you say: here I put A_1, here I put A_1 again, here A_3, and so on; we keep in mind that each edge is connected to a matrix. So the labeling is just a map γ from the set of edges to an index set J, given that we have in mind a family of matrices indexed by that set J; this labeling tells me which matrix is associated to each edge.

So now, how do we define a graph monomial applied to a family of matrices? Let me recall something very simple, the definition of the product of matrices, because we just mimic that definition. If you have the product of k matrices of size n, the entry (i, j) is

(A_1 A_2 ⋯ A_k)_{i,j} = Σ_{i_2, ..., i_k = 1}^{n} (A_1)_{i, i_2} (A_2)_{i_2, i_3} ⋯ (A_k)_{i_k, j};

just the definition of the product of matrices, right? And here we have to see two processes. First there is a summation: for each vertex which is not the input or the output, we have chosen an index between 1 and n. It is the same here: for each vertex we choose an index, so that the entry (i, j) of g(A_n) will start with this choice. I can encode it as a sum over the maps φ from the set of vertices to the set {1, ..., n}, and for the input and the output the index is actually prescribed: φ(in) = j and φ(out) = i. Right?
So we have done that process, and then we have a product over the edges: for each edge we have a matrix, and at the source and the target of this edge we have two integers. So we take the product over the edges: for an edge e from v to w, the factor is the entry of the matrix associated to e, namely (A_{γ(e)})_{φ(w), φ(v)}, read from the source to the target, that is, from the right to the left.

So here we have another kind of algebra. This is not an algebra in the usual sense; this is a notion related to operads. Maybe in the last two minutes I can give you examples of such functions, to compare them with polynomials. By definition, if I take a single edge from the input to the output and put the matrix A directly on it, this is just the matrix A. If I have two edges in a row, carrying A and B, from the input to the output, this is the usual product of the matrices. But if I have two parallel edges from the input to the output, what is the entry (i, j) of this guy? First we have the summation: we choose an index for each vertex which is neither the input nor the output; but all the vertices here are either the input or the output, so there is no such choice. And then we have the product A_{i,j} B_{i,j}, which means that this matrix is the entrywise product of A and B. So the entrywise product is not a polynomial, but it is a graph polynomial, and we need the trace of such entrywise products if we want to state the theorem I mentioned before.

Let me give a last example, which is important in graph theory. As I said, it is possible for a graph monomial to have the input and the output equal to the same vertex. So let us take this simple case: the input and the output are the same vertex, and there is one single edge to an internal vertex. What is the entry (i, j) of that guy? I have an internal vertex, so in the summation I will have a sum over k from 1 to n; but moreover, if I want the input and the output to have respectively the images j and i, since the input and the output are equal, I get 0 whenever j is not equal to i. So actually this is a diagonal matrix, and the number associated to the diagonal entry i is Σ_k A_{k,i}: we have k there and i there, so the matrix is diagonal with diagonal entries given by sums of entries of A with one index fixed. I call this matrix the degree matrix, because if A is the adjacency matrix of a graph, this gives the degree of each vertex. So we have the distribution of the degrees in this framework. And actually, let me just mention that this convergence in traffic distribution, if you consider it for adjacency matrices of graphs with bounded degree, is exactly Benjamini-Schramm convergence of the graphs. So you can also see traffic convergence as a generalization of Benjamini-Schramm convergence, but for an arbitrary matrix, not just the adjacency matrix of a graph. Thank you very much.

[Question:] Yeah, for graphs... [Answer:] Sorry, I do not understand the question. [Question:] Can the three notions of independence be realized by the three products of graphs? How can you realize them in terms of the free product?
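Since the definition is spread over several examples, here is a small brute-force evaluator of graph monomials (the names and conventions in this sketch are editorial choices), checked against the three examples just given: a path of two edges gives the matrix product, two parallel edges give the entrywise product, and a single edge with input equal to output gives the degree matrix.

```python
import numpy as np
from itertools import product

def graph_monomial(vertices, edges, inp, out, mats, n):
    """Entry (i, j): sum over maps phi with phi(inp) = j, phi(out) = i
    of the product over edges (v, w, label) of mats[label][phi(w), phi(v)]."""
    internal = [v for v in vertices if v not in (inp, out)]
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if inp == out and i != j:
                continue        # input and output carry the same index
            for idx in product(range(n), repeat=len(internal)):
                phi = dict(zip(internal, idx))
                phi[inp], phi[out] = j, i
                term = 1.0
                for (v, w, label) in edges:
                    term *= mats[label][phi[w], phi[v]]
                M[i, j] += term
    return M

n = 4
rng = np.random.default_rng(4)
mats = {"A": rng.standard_normal((n, n)), "B": rng.standard_normal((n, n))}

# A path of two edges encodes the usual matrix product.
P = graph_monomial(["in", "m", "out"],
                   [("in", "m", "B"), ("m", "out", "A")], "in", "out", mats, n)
assert np.allclose(P, mats["A"] @ mats["B"])

# Two parallel edges encode the entrywise (Hadamard) product.
H = graph_monomial(["in", "out"],
                   [("in", "out", "A"), ("in", "out", "B")], "in", "out", mats, n)
assert np.allclose(H, mats["A"] * mats["B"])

# One edge with input = output encodes the degree matrix.
D = graph_monomial(["v", "k"], [("v", "k", "A")], "v", "v", mats, n)
assert np.allclose(D, np.diag(mats["A"].sum(axis=0)))
print("all graph-monomial checks passed")
```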
[Answer:] So you have the three notions of independence, realized in the classical way when applied to deterministic graphs: tensor independence corresponds to the tensor product of graphs, free independence to the free product of graphs, and Boolean independence to another product. And for the notion of traffic independence there is also a notion of product, and it is actually the relevant one in the context of random graphs; it generalizes the notion of the free product of graphs. In the free product of graphs (I do not know if you know the definition) you take several copies of your graph; but if you think about random graphs, what you want is that the different copies are independent, and if you adapt this construction with independently sampled copies of the graphs, the notion of independence associated to it is traffic independence, not free independence. Other questions?

[Question:] Sorry, I am still confused: you said that there are just three notions of independence, but here you want to introduce traffic independence? [Answer:] Speicher's theorem is about notions of independence between non-commutative random variables; here we have more structure, and this is the trick. It was not obvious that something else was possible, precisely because of Speicher's theorem, but there is no such notion which depends only on the non-commutative distribution: you need the richer distribution which involves these graph monomials. It is because you have a more complex object that you are able to define something else; if you just have the piece of information encoded in the non-commutative distribution, you will not be able to do that. Let me state it differently: the traffic free product of two variables depends on more than their non-commutative distributions, and it is because of this additional information that such a relation can exist.

[Question:] So a traffic is more information? [Answer:] Yes, by the way it is defined: it is not a relation just about traces of polynomials, it can be written, in a complicated way, as a relation between traces of graph operations. And even if you are interested just in moments, to compute the moments of the sum of two traffics which are traffic independent, in general you need more than moments: you need to see the eigenvectors somehow. For instance, if you take the adjacency matrix of an Erdős-Rényi graph with parameter 1/p, you can try to write formulas to characterize the spectrum; it is difficult, but you see that you need more than the Stieltjes transform, you need more information. This is just a hint that such a relation exists.

[Question:] ...exactly this notion of convergence in traffic distribution, extended to graph polynomials? [Answer:] Yes, and from graph polynomials of this product form one can compute things. A consequence is the following: if you have two large random graphs which are permutation invariant, take the sum of the adjacency matrices; it gives you a new adjacency matrix, which is the adjacency matrix of some graph. If the matrices converge in traffic distribution, or if the graphs converge in the Benjamini-Schramm sense, then this sum converges in the Benjamini-Schramm sense, and you can describe the limiting graph in terms of the original ones: you have an explicit construction. Matrices that are asymptotically traffic independent, and that converge in traffic distribution, include unitarily invariant matrices, permutation invariant matrices, adjacency matrices of graphs that converge in the Benjamini-Schramm sense, and also circulant matrices, which can be diagonalized in the Fourier basis: if you have a diagonal matrix and you conjugate it by the Fourier
matrix we have an expression for the traffic distribution also bound matrices bound matrices have the same limiting distribution as Vignard matrices in the traffic sense not in every regime but in the more general