equals j. If I want the cubic one, you'll see that it is τ⊗τ of something like δ_{ij} 1⊗x_k plus δ_{ik} x_j⊗1, decorated with delta functions. In any case, this involves monomials of lower degree, and so you keep going, and you'll be able to get everything. And with D, it's not difficult to check either. So uniqueness is just induction and recursion.

What's left to do is to check that when the x_i are free semicircular variables, we actually do have this equation. And this is the trick that I already explained to you: there is the representation on a Fock space and this right creation operator r. Remember, if I write x_i as l(h_i) + l(h_i)*, with the h_i orthonormal, then I can look at the operator r_i, which is right tensor multiplication by h_i. It satisfies the identity [x_j, r_i] = δ_{ij} P_Ω, where P_Ω, remember, is the projection onto the vacuum vector Ω, the vector of degree 0. Moreover, you have the relations r_i Ω = x_i Ω and r_i* Ω = 0. And then the point is the formula written at the bottom: I'm claiming that [z, r_i] = ∂_i(z) # P_Ω, where the hash is the way a tensor acts on an element: (a⊗b) # c means acb, multiplication on the left and on the right.

So where does this formula come from? Well, this ∂_i is a kind of universal derivation: anything that satisfies the Leibniz rule factors through ∂_i. If I have any derivation δ from my algebra A into some bimodule K, then δ of a polynomial P(x_1, ..., x_n) can be written by differentiating each of the variables: I take ∂_j(P), hash it with δ(x_j), and sum over j. The reason is very simple. How would you compute δ of a monomial q, say q = x_{i_1} ... x_{i_k}? You would apply the Leibniz rule repeatedly: you would write a sum over decompositions of your monomial q as a · x_j · b, and each term would be a · δ(x_j) · b. This is what the Leibniz rule dictates. But this is exactly ∂_j(q) hashed with δ(x_j), because a · δ(x_j) · b is (a⊗b) # δ(x_j), and these a⊗b are exactly what you get by applying the free difference quotient to q. So it's a very simple algebraic fact.

Now, taking the commutator with r_i is a derivation, so I can write down that same formula for it: [z, r_i] is the sum over j of ∂_j(z) hashed with [x_j, r_i]. And of course [x_j, r_i] is zero unless i equals j, so this is just ∂_i(z) # P_Ω, exactly as I advertised.
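In the notation just set up (Ω the vacuum vector, P_Ω the rank-one projection onto it, # the left-right action), the identities can be collected in one display; this is my rendering of the board formulas, not a verbatim slide:

```latex
% Fock-space identities behind the argument:
% x_i = l(h_i) + l(h_i)^*, with r_i right tensoring by h_i.
\begin{align*}
  [x_j,\, r_i] &= \delta_{ij}\, P_\Omega, \qquad
  r_i\,\Omega = x_i\,\Omega = h_i, \qquad
  r_i^{*}\,\Omega = 0, \\
  [z,\, r_i] &= \sum_j \partial_j(z)\,\#\,[x_j,\, r_i]
              \;=\; \partial_i(z)\,\#\,P_\Omega,
  \qquad (a \otimes b)\,\#\,c := a\,c\,b,
\end{align*}
```

where the second line is exactly the universality of the free difference quotient, applied to the derivation z ↦ [z, r_i].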
All right. I didn't do this on the slides, so let me do the computation on the board. Why do we have this equation? Well, what is τ(x_i q)? Remember, we can write it as the trace against the rank-one projection P_Ω, and since τ is a trace I can put the q in front: it's Tr(P_Ω q x_i P_Ω). Now remember the properties above. They tell me that x_i P_Ω is the same thing as r_i P_Ω, and that P_Ω r_i is zero, because the range of r_i is perpendicular to Ω: if I compose the projection onto Ω with r_i, I get zero.

So let's use this. I can rewrite the quantity as Tr(P_Ω q r_i P_Ω); at this point I've just used the identity that multiplying P_Ω on the left by x_i is the same as multiplying it by r_i. And now I'm going to subtract zero: minus Tr(P_Ω r_i q P_Ω), which vanishes because P_Ω r_i = 0. That makes it Tr(P_Ω [q, r_i] P_Ω), which by the formula above is the same thing as Tr(P_Ω (∂_i(q) # P_Ω) P_Ω).

So now all we have to do is convince ourselves what Tr(P_Ω ((a⊗b) # P_Ω) P_Ω) is. Just use the definitions: it's Tr(P_Ω a P_Ω b P_Ω), and since P_Ω is rank one, that's τ(a) τ(b). And that's it: what we get is τ⊗τ(∂_i q), which is exactly what we wanted to prove. I've done this in the case where D is the scalars; if D isn't the scalars, there's a very similar argument, only you insist that r_i commutes with D, so that the derivation kills D. So, to sum up, what we've checked is that this equation characterizes being free and semicircular over D.
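In display form, the computation just performed reads (again my transcription of the board):

```latex
\begin{align*}
\tau(x_i\, q) \;=\; \tau(q\, x_i)
  &= \operatorname{Tr}\!\big(P_\Omega\, q\, x_i\, P_\Omega\big)
   = \operatorname{Tr}\!\big(P_\Omega\, q\, r_i\, P_\Omega\big)
     && x_i P_\Omega = r_i P_\Omega \\
  &= \operatorname{Tr}\!\big(P_\Omega\, q\, r_i\, P_\Omega\big)
     - \operatorname{Tr}\!\big(P_\Omega\, r_i\, q\, P_\Omega\big)
     && P_\Omega r_i = 0 \\
  &= \operatorname{Tr}\!\big(P_\Omega\, [q,\, r_i]\, P_\Omega\big)
   = \operatorname{Tr}\!\big(P_\Omega\, (\partial_i q \,\#\, P_\Omega)\, P_\Omega\big) \\
  &= (\tau \otimes \tau)(\partial_i q),
\end{align*}
```

the last step because Tr(P_Ω a P_Ω b P_Ω) = τ(a) τ(b) for the rank-one projection P_Ω.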
Now let's check that this equation is almost satisfied by our random matrices. I will denote by ω_{ij}^{(k)} the function which picks out the ij-th entry of the k-th matrix. The setting, to remind you, is that we have matrices A_1^{(N)}, ..., A_d^{(N)}: self-adjoint N×N matrices with complex Gaussian entries. If you like, they are coordinate functions on the space of d-tuples of self-adjoint N×N matrices, and ω_{ij}^{(k)} is simply the ij-th entry of the k-th matrix, viewed as a function on that space.

Because my law on the matrices is Gaussian, I have every right to write down the Stein identity for those entries. What I'm writing here is the thing I wrote on the board before: if I integrate this linear function against any function f with respect to the Gaussian measure, then what I get is the expected value of a derivative of f. There are a few funny things. First, there's a factor of 1/N. This comes from the fact that we've normalized our entries to have variance 1/N; the identity I wrote before was for Gaussians of variance 1, and if you pass from variance 1 to variance 1/N there's a supplementary factor of 1/N. Remember, we're using integration by parts, so at some point we had to differentiate e^{-x²}; if it's e^{-x²/σ} instead, the σ comes out, and here that's 1/N. The other funny thing is the switch of indices, ji versus ij. The reason for it is that we're looking at complex Gaussians: it's really the derivative with respect to z̄ that appears when you integrate z against the function. And the reason I can just switch indices, as opposed to writing complex conjugates, is that the matrix is self-adjoint: ω_{ji}^{(k)} is the complex conjugate of ω_{ij}^{(k)}, so if you like I could write this as the bar of ω_{ij}^{(k)} times f.

Good. So now let's apply this identity. ν_N will be our Gaussian measure on these d-tuples of N×N matrices. Let's look at the expected value of the kind of term we want to compute, which is x_i multiplying some monomial: I'm taking the k-th matrix A_k^{(N)} and multiplying it into a polynomial P of A_1 through A_d. If you think about it, that means we're summing over i and j: we look at the ji-th entry of A_k and multiply it by the ij-th entry of this polynomial. The E_ii and E_jj appearing here are just the diagonal matrices with a 1 in the i-th or j-th diagonal position.

Now we use our Gaussian integration trick to convert this into a partial derivative: this is 1/N² times the partial derivative of that expression. And what is the partial derivative? It's a directional derivative of my function, so I'm applying some kind of derivation to this polynomial. But I told you that the difference quotients are universal; any derivation factors through them. In particular, differentiating the polynomial with respect to the ij-th entry of the k-th matrix is a derivation, so it factors through the k-th difference quotient. If you do the algebra, you'll see the formula: it's just the k-th difference quotient of P, hashed with E_ij, the matrix with a 1 in the ij-th entry and 0 elsewhere.

If you rewrite what that is and think for a second, what you come up with is this formula, and it's almost what we want: here you have the normalized trace tensor the normalized trace, and we've picked up a factor of 1/N from the Gaussian integration. The only problem is that we have the expectation of a product of traces, as opposed to the product of expectations. If we had the product of expectations, we would be done, because that's exactly what the formula we want to prove gives us. And here you use Gaussian concentration: since the fluctuations of the trace are very small, it is nearly equal to its average, so when you take the expectation of the product of the trace with itself, it's the same as the product of expectations up to a small error that goes away as N goes to infinity. So in the end you recover exactly this equation.

So what I've done is a very short, essentially one-page proof of convergence to the semicircle law. I didn't put the algebra D into this consideration, but that's not such a big complication: if you throw in the algebra D, you just go through the motions and check that everything works out. Basically, because the entries of that algebra are constants, they are not affected by these partial derivatives at all, so they contribute nothing in this formula; you might as well just take the ∂_k's to kill that algebra. And that's what's going on. So that's more or less the proof of the theorem as I stated it. All right, any questions about this? Okay, great.
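As a numerical sanity check of the single-matrix case (my illustration, not part of the talk): the limiting equation τ(x q) = τ⊗τ(∂q), applied to q = x^m, is precisely the Catalan recursion for the semicircle moments, and it can be tested directly on sampled GUE matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials, max_m = 500, 40, 8

def gue(n):
    # Self-adjoint Gaussian matrix with entries of variance 1/n,
    # so that E[(1/n) Tr X^2] = 1, matching the lecture's normalization.
    Z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (Z + Z.conj().T) / (2 * np.sqrt(n))

# Estimate the moments tau(x^k) = E[(1/N) Tr X^k].
moments = np.zeros(max_m + 2)
for _ in range(trials):
    X = gue(N)
    P = np.eye(N, dtype=complex)
    for k in range(max_m + 2):
        moments[k] += np.trace(P).real / N / trials
        P = P @ X

# Schwinger-Dyson check: tau(x^{m+1}) should match
# (tau ⊗ tau)(∂ x^m) = sum_{k=0}^{m-1} tau(x^k) tau(x^{m-1-k}).
for m in range(1, max_m + 1):
    lhs = moments[m + 1]
    rhs = sum(moments[k] * moments[m - 1 - k] for k in range(m))
    print(f"m={m}:  lhs = {lhs:+.4f}   rhs = {rhs:+.4f}")
```

The even moments approach the Catalan numbers 1, 2, 5, 14, ..., the odd ones vanish, and the two sides agree up to errors that shrink as N grows, which is exactly the concentration step above.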
So now I want to talk a bit about more complicated random matrix models. As I told you, in the Gaussian case we can interpret these d-tuples of N×N matrices as simply functions on this space, and to make them Gaussian it's enough to say what the Gaussian measure on the space is. There's a very easy formula for this Gaussian measure using the fact that we're dealing with matrices: the measure, call it ν_N, is, up to normalization, e^{-N Tr(Σ_j A_j²)} dA_1 ⋯ dA_d, where dA_1 ⋯ dA_d just means Lebesgue measure. In case you're wondering which Lebesgue measure: we put a real Hilbert space structure on the space of self-adjoint matrices using the trace form, and take the Lebesgue measure associated to it. Okay? So this is the Gaussian measure.

Now there's a lot of interest in understanding what happens when instead of the Gaussian measure you put something else. You take some V, a non-commutative polynomial in d variables (an example of such a polynomial is the sum of squares), and you put a measure of the following form: you take −N Tr V(A_1, ..., A_d), you exponentiate that, and then you renormalize. Of course, with an arbitrary polynomial you may not have much control at infinity, and the measure as written may have infinite total mass. So you may have to put in a cut-off, for instance insisting that all of your matrices have operator norm no bigger than some number R. But it turns out that this R is irrelevant after a certain point. So call the measure ν_{N,R}, and say I select matrices according to this law.

One of the first rigorous theorems in the subject in the multi-matrix regime is due to Alice Guionnet and Édouard Maurel-Segala. Imagine that your potential V is a small perturbation of the quadratic potential: the quadratic case plus ε times some W, where W is an arbitrary self-adjoint polynomial. The claim is that there is a certain sweet value of the cut-off past which the cut-off no longer matters: the law converges to a limit law, and this limit law doesn't depend on the cut-off anymore. So the statement is that you have convergence to a certain limit law, and in fact this limit law depends analytically on the value of ε. The other comment I want to make is that if your potential W splits up as a sum of two pieces, one in the first r variables and one in the last d − r, then under the limit law, if you split your family into the first r and the last d − r matrices accordingly, these two families end up being freely independent. So, somehow, the reason we were seeing free independence of the semicircular variables is that the Gaussian potential is a sum of functions of the individual matrices.
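In symbols, the model under discussion is the following (my transcription, with Z_{N,R} a normalizing constant; conventions for the factor ½ in the quadratic term vary between sources):

```latex
d\nu_{N,R}(A_1,\dots,A_d)
  \;=\; \frac{1}{Z_{N,R}}\,
        e^{-N\operatorname{Tr} V(A_1,\dots,A_d)}\,
        \mathbf{1}_{\{\|A_j\| \le R \text{ for all } j\}}\;
        dA_1 \cdots dA_d,
\qquad
V \;=\; \tfrac{1}{2}\sum_{j=1}^{d} x_j^{2} \;+\; \varepsilon\, W,
```

with W self-adjoint, and the theorem asserting convergence of the non-commutative law as N → ∞ for small ε, the limit being independent of R once R is large enough.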
So how do you prove this? You have to do a somewhat different integration by parts. Classically, if you are interested in one of these Gibbs measures, of the form e^{−V(x)} dx, the Gaussian identity gets replaced by the following: if you integrate f(x) not against x but against V′(x), then what you get is the integral of the derivative of f. So you can run the very same proof that we did in the Gaussian case. Only the target, instead of the equation τ(x_i q) = τ⊗τ(∂_i q), is going to be a little different: this time around, you have to replace x_i by what's called the i-th cyclic derivative of V.

So let me tell you what this i-th cyclic derivative is. If you want to be formal, D_i is m_σ composed with ∂_i, where m_σ: A⊗A → A is the map sending a⊗b to ba. Concretely, D_i of a monomial P is the sum, over all decompositions of P as a · x_i · b, of the products b · a. If you're wondering why anybody would ever want to deal with such a thing, the reason is this: if you look at the derivative of the trace of any polynomial under a perturbation like that, the derivative at ε = 0 of τ(P) under x_i ↦ x_i + ε q is τ(q · D_i P). And if you think about it, this explains the reversal of b and a: when you differentiate P, at some point you split at one of the x_i's and replace it by q, and then the remainder of P has to come around to the other side of q. That's why you get this cyclic behavior.

Anyway, if you do the same computation that I did with matrices and just pay attention, you will find this formula in the limit. Again, you have to prove some kind of concentration, but that's fine, because for small values of ε the measure you put on the matrices is locally, strictly log-concave, and therefore you can apply things like the log-Sobolev inequality to deduce concentration.

Okay. So now, if your V is the quadratic potential plus a perturbation, you can rewrite this equation in the following form. There's a typo on the slide here; sorry, I was typing this up yesterday evening and I was a bit tired, so let's just do it. The equation is τ(D_i(V) q) = τ⊗τ(∂_i q), with V = ½ Σ_j x_j² + εW. The cyclic derivative of the quadratic term is just x_i, so this reads τ(x_i q) + ε τ((D_i W) q) = τ⊗τ(∂_i q), and at the end of the day you get the equation τ(x_i q) = τ⊗τ(∂_i q) − ε τ((D_i W) q).

So now, if you think of your τ as a power series in ε, you see that when you increase the degree of q by one, you can express the result in terms of either something of smaller degree in q or something of higher degree in ε. So in any case you have a recursive formula that closes up, and then you have to do some work to prove that it actually converges and gives you an analytic function of ε, and so forth.
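Both derivations in play are easy to implement on monomials, which makes the recursion above concrete. Here is a small sketch (my illustration, not from the talk), representing a monomial in x_1, ..., x_d as a tuple of variable indices:

```python
# A monomial in noncommuting variables is a tuple of indices:
# (1, 2, 1) stands for x_1 x_2 x_1.

def diff_quotient(word, i):
    """Free difference quotient ∂_i: sum over decompositions
    word = a·x_i·b of the elementary tensors a ⊗ b."""
    return [(word[:p], word[p + 1:])
            for p, j in enumerate(word) if j == i]

def cyclic_derivative(word, i):
    """Cyclic derivative D_i = m_σ ∘ ∂_i: each a ⊗ b becomes the word b·a."""
    return [b + a for a, b in diff_quotient(word, i)]

q = (1, 2, 1)                      # q = x_1 x_2 x_1
print(diff_quotient(q, 1))         # [((), (2, 1)), ((1, 2), ())]
                                   #   i.e. 1 ⊗ x_2 x_1 + x_1 x_2 ⊗ 1
print(cyclic_derivative(q, 1))     # [(2, 1), (1, 2)]
                                   #   i.e. x_2 x_1 + x_1 x_2
```

For instance, cyclic_derivative applied to the word (i, i) returns two copies of (i,), which is the statement D_i(x_i²) = 2x_i used above (hence the ½ in front of the quadratic term).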
Let me just tell you, if I have two minutes (yeah, let me not do that slide), a nice diagrammatic interpretation of this equation in terms of counting planar objects. You might remember that for a Gaussian matrix I've drawn these things flat, but you can draw them on a circle: if you're interested in the m-th moment of a semicircular variable, you put m points, 1, 2, 3, up to m, on the circle, and you're simply summing the number one over certain diagrams D. You're just counting how many diagrams there are, and these are the non-crossing pairings.

Now you can modify things a little bit by putting other diagrams inside and permitting some connections to them. Imagine that each of these extra diagrams comes with a weight, call it β, and that we have a formal way of evaluating such things. Then you will see that if I start with an equation like this and try to follow what one string does, well, the string goes into a kind of soup of other strings, with maybe some W's floating in it, and it depends on your luck. The string might come back to the circle, in which case your soup just got split into two, because the soup on one side is disjoint from the soup on the other; that gives a term which is something like τ⊗τ of a derivative of x^m, because you've eliminated those two points, so you've taken a derivative. Or you might be unlucky, and the string actually touches one of the W's, in which case you can reel it in, and you get the other term, the one with D_i W.

I'm not doing justice to this, but I just want to put into your minds that there is a combinatorial structure here, the counting of these so-called planar diagrams, or planar maps, which has the same kind of recursive behavior as this equation, and that's why there is a very nice connection between the two. Okay, thank you very much. Do you have any questions? We have not, so we'll resume again in 10 minutes, and let's thank Dima again.