All right, well, thank you so much. So just to continue what we were doing last time, I wrote a short summary of where we ended up at the very end. If you combine what Ioana Dumitriu did in her talk with what I was doing last time, you get this statement about d-tuples of n by n Wigner matrices. So these are exactly like in the morning talks today. And what you know is that their joint law, the joint law of this d-tuple, converges to the joint law of a semicircular d-tuple. So here s1, ..., sd are semicircular variables which are free from each other. And what that simply means is that if you take the normalized trace of some test function evaluated in your matrices, that will converge to the trace of that same test function evaluated in the semicircular d-tuple. And here, as I said, what we did was to take polynomial functions, but you can go beyond them; you can take resolvents and things like this if you want to. So just a concrete example: if you're interested in the eigenvalue distribution of some function of these matrices, say the square of the first one plus three times the second one, then the limit of that is going to be the spectral measure of the operator s1 squared plus 3 s2, with s1 and s2 free semicircular. So the question for today is how to compute this. And notice that these two terms are free, because one of them lies in the algebra generated by the first variable and the other in the algebra generated by the second. There will be more random matrices later today in Roland Speicher's talk. And I will finally get to the real connection with random matrices in my next talk, which I guess will be tomorrow, probably early in the morning.

Oh, before I do this, let me point out one little thing. Somebody asked me this question yesterday: we represented the semicircular variable, what I called s1 there, as L plus L star, where L was a shift operator acting on a certain direct sum of spaces, which in the one-variable case is just ell-2 of the natural numbers, direct sum the span of a special vector omega. How do you actually see this within the usual presentation of the semicircle law, acting by multiplication on its own L2 space? The answer is, of course, orthogonal polynomials. If you look at L2 of the interval from minus 2 to 2 with the semicircular measure, it contains a sequence of orthogonal polynomials; these are the Chebyshev polynomials of the second kind. And they satisfy a recursion: if you multiply the polynomial of degree n by x, you get the polynomial of degree n plus 1 plus the polynomial of degree n minus 1. That holds for n at least 1, and p0 is just the constant polynomial. And if you think about it for a second, this tells you precisely that multiplication by x is the shift forward plus the shift backward in the basis given by the orthogonal polynomials. So it's just a different way of presenting the same Hilbert space, nothing more. I hope that clears up the puzzle.

Anyway, let's come back to this. The general question is: how do we compute expected values of complicated functions of several freely independent variables? And the first topic that I want to discuss today is the additive free convolution. So the setup on the screen up there is that you have two probability measures, mu 1 and mu 2.
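Before going on to the free convolution, here is a quick numerical illustration of the convergence statement at the top of this passage. This is an editorial sketch, not part of the lecture; it assumes NumPy, uses real symmetric Wigner matrices with a normalization chosen for illustration, and simply watches the empirical moments of X1 squared plus 3 X2 stabilize as n grows.

```python
# Sketch: the spectrum of f(X1, X2) = X1^2 + 3*X2 for independent Wigner
# matrices settles down as n grows; its limit is the spectral measure of
# s1^2 + 3*s2 for free semicircular s1, s2.
import numpy as np

def wigner(n, rng):
    # real symmetric Wigner matrix, scaled so the spectrum stays of order 1
    a = rng.standard_normal((n, n))
    return (a + a.T) / np.sqrt(2 * n)

rng = np.random.default_rng(0)
for n in (200, 800, 2000):
    x1, x2 = wigner(n, rng), wigner(n, rng)
    eig = np.linalg.eigvalsh(x1 @ x1 + 3 * x2)
    # normalized traces (1/n) Tr (X1^2 + 3 X2)^k for k = 1, 2, 3
    print(n, [round(float(np.mean(eig ** k)), 3) for k in (1, 2, 3)])
```

With this normalization the first printed moment should approach 1, the normalized trace of s1 squared, and the higher moments should likewise stabilize at the corresponding traces in the semicircular pair.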
So you define the additive free convolution of mu 1 and mu 2 in the following way. You say it's the law of a sum of two variables, x and y, where x is chosen to have law mu 1, y is chosen to have law mu 2, and they're jointly chosen in such a way that they're freely independent. Of course, if you replace free independence here by classical independence, you get the definition of the usual convolution, because the usual convolution is what gives you the law of a sum of two independent variables.

Now, why is this well defined? Let's think for a second why it works. Say we want to compute the expected value, with respect to mu 1 box-plus mu 2, of some function f. By definition, this means we're taking the trace of f of x plus y, and that element lives inside the algebra generated by x and y, the von Neumann algebra, if you like. So we're interested in computing the trace of some element of such an algebra. And we saw last time that freeness, free independence, determines tau on this algebra in terms of its restrictions to the individual algebras. And those restrictions are given to us: we know how to take expected values of any function of x by itself and of any function of y by itself. So that's why this is well defined. Once you've chosen a free realization, the specifics of that realization are irrelevant.

All right, so let's do two very simple computations. Suppose I take mu 1 equal to mu 2 equal to the semicircle law. What is mu 1 box-plus mu 2? Anybody know? It's again a semicircle law. Why? Well, we know that in order to model mu 1 and mu 2 in a free situation, that is, to take two variables which have laws mu 1 and mu 2 and which are free, it suffices to take the operators s of h1 and s of h2 that we had, associated with two orthonormal vectors. So here h1 and h2 have norm 1 and are perpendicular. If we do that, then s of h1 has law mu 1, s of h2 has law mu 2, and they are freely independent; that's what we stated last time (we didn't prove it, but it's in the notes). So if I add them, I get the law mu 1 box-plus mu 2. But it's not hard to check that s of h1 plus s of h2 is the same thing as s of h1 plus h2. And what is h1 plus h2? It's just some other vector, only of the wrong length: the length of h1 plus h2 is square root of 2, by Pythagoras. So what we see is that we again get a semicircular law, only dilated by square root of 2. So it will be a semicircle law whose density is proportional to the square root of 8 minus t squared, supported on the interval from minus 2 root 2 to 2 root 2, with the appropriate normalization, which is 1 over 4 pi. So that's the easiest computation we could make.

A more involved one, carried out long before free probability was invented by Voiculescu in the mid-80s, is a computation due to Kesten, who looked at the random walk on a free group. So here the setting is that you have the Cayley graph of a free group on n generators; I'm looking at the case n equals 2, so the free group on two generators and its Cayley graph. And if you don't know what a Cayley graph is, just think of this graph as a tree: every vertex has four neighbors, so it's an infinite 4-regular tree.
And the random walk that you're dealing with is that you start at this point of the tree, and then at each vertex you have four directions to go. You randomly choose one of them, you make a jump, you make another random choice among the four directions, you make a jump, and so forth. That's the random walk. And I explained last time that this is the random walk operator here: g j is a generator of your free group, and I'm symmetrizing to throw in the inverses. This average is the operator whose p-th moment is the probability of coming back to the origin in p steps. And the law of this operator was actually computed by Kesten.

What's interesting for us is that we can write it as a free convolution. Namely, I can write the operator L as 1 over n times the sum of the L j, where L j is lambda of g j plus lambda of g j inverse, over 2. And the point is that these L j are free, because they live inside the different copies of Z inside the free group: the generator g 1, the generator g 2, and so on each generate a copy of Z, and these subgroups are free inside the free group. So the L j are freely independent. And so, written like this, the law of L is, apart from this 1 over n, that is, up to a dilation by 1 over n, the free convolution of the laws of L 1, L 2, et cetera. And each one of those laws, as those of you who did the problem session remember, is the arcsine law. So this is just the n-fold free convolution of the arcsine law with itself. And Kesten, as I said, computed that a long time ago.

Incidentally, in both of these cases, what happens is that the norm of x 1 plus ... plus x n scales like square root of n, provided all of the x j are centered. You see it here: when you add these semicircular variables, you get something that scales with the square root of n. And there's a very similar-looking behavior in the free group example, which was actually very interesting at the time in conjunction with questions of amenability and things like that.

(Answering a question.) Not immediately. For any one of these measures there is of course a preferred family of orthogonal polynomials, but not exactly in this way. If you know the orthogonal polynomials, you can create a nice basis for the free product space, the space on which the free product acts.
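Returning for a moment to the Kesten example: the following is an editorial sketch, not from the lecture, that computes the return probabilities just described. The distance from the identity of the walk on the free group F2 is a birth-death chain: from the identity every step moves outward, while from distance d at least 1 one of the four generators cancels the last letter (probability 1/4) and the other three lengthen the word (probability 3/4). The probability of being back at the identity after p steps is exactly the p-th moment of the measure discussed above.

```python
# Sketch: return probabilities of the simple random walk on F_2, i.e. the
# moments of the measure the text identifies as a dilated 2-fold free
# convolution of arcsine laws (Kesten's measure).
import numpy as np

steps = 12
p = np.zeros(steps + 2)           # p[d] = probability of being at distance d
p[0] = 1.0
for k in range(1, steps + 1):
    q = np.zeros(steps + 2)
    q[1] += p[0]                  # from the identity, every generator moves outward
    for d in range(1, steps + 1):
        q[d - 1] += p[d] / 4      # one generator cancels the last letter
        q[d + 1] += p[d] * 3 / 4  # the other three lengthen the word
    p = q
    if k % 2 == 0:
        print(k, p[0])            # probability of being back at the identity at time k
```

The odd-step return probabilities vanish, and the even ones (1/4, 7/64, ...) are the even moments of Kesten's measure.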
All right, so how do you actually do the computation for this free convolution? There is a whole theory here, Voiculescu's R-transform, and it's a very pretty manipulation involving analytic functions associated with a measure. I'm going to leave this slide on for a while; it just defines the various objects for us, and I'll copy them over here at various moments. So G mu, as before, is the Cauchy transform: the integral of d mu of t divided by z minus t. To picture it: here is the complex plane, here is the real line inside it, and our measure mu is supported somewhere on that real line. The integral is well defined for any z outside the support of the measure. So if the measure is compactly supported, G mu is defined and analytic outside the little one-dimensional piece of R that supports your measure, and in particular it is a perfectly good analytic function at infinity. If your measure is not compactly supported, things are a little more complicated, because the support can go out to infinity: then G mu is not analytic at infinity anymore, because that one-dimensional line hits infinity. But it is certainly defined, say, in the upper half plane. In particular, if you invert things so that infinity becomes zero, that is, if you substitute 1 over z for z, the resulting function is defined in a certain wedge-shaped region. So it is perfectly analytic as long as you approach infinity through such a wedge, but not along the real line. To avoid these kinds of discussions, I'll just assume that everything is compactly supported; that's purely for simplicity of presentation, it's not really necessary.

All right, so we have this G mu. Next, K mu is the functional inverse of G mu, so this is G mu inverse, and what I mean is that you invert it as an analytic function. It turns out that G mu is nicely behaved in the upper half plane; it has an inverse there, and so you can invert it and define an analytic function called K mu. Because they are inverses of each other, they satisfy the relations written there. And then R mu is essentially K mu up to a little fudge factor: R mu of z is K mu of z minus 1 over z. So if you put these things together, you get that G mu and R mu are almost inverses of each other; they are inverses apart from the addition of this 1 over z term.

The main theorem here is that the R-transform linearizes free convolution. To any measure mu we have now associated this function R mu, and this assignment is additive: if you take the free convolution of two measures, then their R-transforms add. This is similar to the situation in the classical case, where instead of R mu you would consider the Fourier transform of the measure, or rather the logarithm of the Fourier transform. The Fourier transform of a convolution is multiplicative, so its logarithm is additive, and that's one way to compute classical convolutions. So let me put the main equation here: G mu of (1 over zeta plus R mu of zeta) equals zeta. And what we're trying to get to is the additivity of the R-transform.

OK, so now I can switch slides. There are by now several proofs of the additivity of the R-transform. There is the original proof of Voiculescu; there is a modification of it due to Uffe Haagerup, which is the proof I'm going to give; there is a somewhat different-looking proof by Terry Tao; and finally there is a combinatorial proof due to Roland Speicher. I'm selecting this proof because it will make it fairly easy for us to talk about the combinatorial aspect, which I want to get to in this talk. Probably the slickest one is Terry Tao's, which is also related to subordination, but we will get to subordination separately.
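As a worked instance of the theorem just stated (an editorial computation, consistent with the semicircle box-plus semicircle result obtained earlier with the operators s of h1 and s of h2), all three transforms can be written in closed form for the standard semicircle law:

```latex
% Standard semicircle law \mu_{\mathrm{sc}} on [-2,2], density \sqrt{4-t^2}/(2\pi):
G_{\mathrm{sc}}(z)=\int_{-2}^{2}\frac{1}{z-t}\,\frac{\sqrt{4-t^2}}{2\pi}\,dt
                  =\frac{z-\sqrt{z^{2}-4}}{2},
\qquad
K_{\mathrm{sc}}(z)=G_{\mathrm{sc}}^{-1}(z)=\frac{1}{z}+z,
\qquad
R_{\mathrm{sc}}(z)=K_{\mathrm{sc}}(z)-\frac{1}{z}=z.
% Additivity of the R-transform then gives
R_{\mu_{\mathrm{sc}}\boxplus\mu_{\mathrm{sc}}}(z)=2z,
\qquad
K(z)=\frac{1}{z}+2z,
\qquad
G_{\mu_{\mathrm{sc}}\boxplus\mu_{\mathrm{sc}}}(z)=\frac{z-\sqrt{z^{2}-8}}{4},
% the Cauchy transform of the semicircle law of variance 2 on [-2\sqrt2,\,2\sqrt2].
```

So the R-transform machinery reproduces, in one line, the rescaled semicircle found earlier by adding s of h1 and s of h2.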
Anyway, what I want to claim is the following fact. Remember, again, we have this Fock space F of h: multiples of a vacuum vector plus the direct sum of the tensor powers of h. We have the operator l of h, which tensors by h on the left, and its adjoint, which removes one tensor factor. Now I'm going to do a funny thing: I'm going to tell you how to make a variable with a given R-transform. Suppose somebody gave you this function R of z; it's an analytic function, defined for z sufficiently small. What I will do is look at a variable x, and the variable I will write down will be essentially l plus R of l star.

Now, I have every right to apply R to any operator whose norm is less than the radius on which R is defined, because R is an analytic function. This l star has norm 1, and that might be too big; for that reason I can rescale, and I put in an alpha and an alpha inverse, so x is alpha inverse l plus R of alpha l star. But doing this changes absolutely nothing, because if I'm interested in something like the trace applied to x to the power p, then all that matters is that l star l equals 1, and if I replace l by alpha inverse l and l star by alpha l star, that relation still holds. So the rescaling makes no difference. And what I claim is that the Cauchy transform of the law of x is g. Which g? The g that goes with the given R that somebody handed me. Now there is a little bit of caution here: this variable x is not self-adjoint. So this Cauchy transform is defined in principle, but it may not be the Cauchy transform of a measure; we'll have to worry about that separately. It's a kind of formal manipulation, something that will have the correct analytic theory.

Yes? Right. So R, the way it's defined, well, let's say it's defined by this equation here. If the measure mu is compactly supported, R will be an analytic function in a neighborhood of 0; it's in fact a convergent power series with some radius of convergence at 0. Exactly. So R is defined on some disk of some radius around 0, it's an analytic function there, and that's why I have every right to apply it to any operator whose norm is smaller than the radius of convergence. And I can certainly arrange that by choosing my alpha small enough, because l star has norm 1. No, we got rid of that term; it sits in K mu, but we removed it when we passed to R mu.

All right, great. So now let's do the computation, which is actually quite beautiful. The trick is to use a special vector, this vector omega z, which is simply the sum over n of z to the n times e n, where, as I put there, e n is the n-th tensor power of h. So in the e n coordinates, our l simply shifts things by 1. Now, if I act on this omega z by l, what am I going to get? All of the indices get shifted up by 1, so I'm missing the zeroth term, and what we get is 1 over z times omega z minus e0. On the other hand, l star omega z is just equal to z times omega z, because everything gets shifted down by 1. So this vector is an eigenvector for l star and almost an eigenvector for l.

Now remember what x is: alpha inverse l plus R of alpha l star, where we write l for l of h, as over there. If I apply x to my omega z, what do I get? The l term gives 1 over alpha z times omega z minus 1 over alpha z times e0, and the other term gives plus R of alpha z times omega z. So now we can solve for 1 over alpha z times e0: we can rewrite it as 1 over alpha z plus R of alpha z minus x, acting on omega z. And now we can get rid of alpha: we just change variables and replace alpha z by z, that is, substitute z over alpha for z.
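For reference, here is the computation up to this point written out in one chain (an editorial reconstruction of the spoken derivation, with omega z the sum of z to the n times e n, and x equal to alpha inverse L plus R of alpha L star):

```latex
L\,\omega_z=\sum_{n\ge 0}z^{n}e_{n+1}=\tfrac{1}{z}\bigl(\omega_z-e_0\bigr),
\qquad
L^{*}\,\omega_z=z\,\omega_z,
% hence
x\,\omega_z=\Bigl(\tfrac{1}{\alpha z}+R(\alpha z)\Bigr)\omega_z-\tfrac{1}{\alpha z}\,e_0,
% and, solving for e_0 and substituting z/\alpha for z (so that \alpha z becomes z),
e_0=z\Bigl(\tfrac{1}{z}+R(z)-x\Bigr)\,\omega_{z/\alpha}.
```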
And so now if we do this, we get (and here this should be omega at z over alpha) that this expression, 1 over z plus R of z, is just your K mu of z. And for appropriate z this thing is going to be invertible. So first invert, and then take the dot product with e0 on both sides. Inverting, what we get is that K mu of z minus x, inverse, applied to e0, equals z times omega at z over alpha. Now I dot with e0. When I do that, what I find is the following; let me write it this way: K mu of z minus x, inverse, and I apply my state to that, and what I get is just z. And this quantity, by definition, is G mu of K mu of z, where G mu is the function defined by applying the state to the resolvent of x; remember, I'm trying to figure out what measure I get when I'm handed the R-transform R. So what we've just proved is that G mu of K mu of z equals z.

What happened to what? Here? Yes, so if you remember what omega z is, it's just the sum of z to the n times e n, so the inner product of omega z with e0 is just 1. So this whole thing ends up being z. And yes, absolutely true, I've never used the fact that the state is a trace. All I'm saying is that I'm defining an analytic function: G mu of z is defined as z minus x, inverse, applied to e0, paired with e0. I'm just defining an analytic function in this way. And what I've checked is that if you give me an R, some analytic function defined on some disk around 0, I can write down this variable x, and for this variable x the function defined that way satisfies this equation with respect to the R-transform. So, given R, I gave you an algorithm for producing an explicit variable, non-self-adjoint, in a non-tracial probability space, for which the resolvent has the correct behavior. Incidentally, because we're just dealing with the one variable x, the state is a trace anyway: the algebra is abelian, since we're just looking at the non-self-adjoint algebra generated by x. So that's not a problem.

Exactly. So if you give me any analytic function that you like, I will be able to write down some variable for which the resolvent, applied to this vector in this way, is going to be more or less the functional inverse of your R; well, the functional inverse of your R corrected by this 1 over z.

What? For the mu, like the actual probability measure? Right. So mu would actually appear if this were the Cauchy transform of a probability measure. A priori, this is just some analytic function, God knows what it is; it may not be the transform of anything. But of course, if you knew from the beginning that your R came from a probability measure mu, then this would be its Cauchy transform.

OK, great. So now comes the clincher. Again, what we've done is an explicit construction of a variable x whose associated Cauchy transform has a given R-transform. So let's say you're given mu 1. You go ahead and compute G mu 1, then you compute R mu 1, and then you set x equal to, what did we call it, alpha inverse l, let me call it l1, plus R mu 1 of alpha l1 star, where l1 is l of some vector h. Now let's say you're given mu 2. You go ahead and compute G mu 2, and you compute R mu 2.
And now I will write down y, which is alpha inverse l2 plus R mu 2 of alpha l2 star, where l2 is going to be l of g, and I will pick g perpendicular to h. So g and h are two unit vectors, perpendicular to each other. Now, if I look at x plus y, then x and y are going to be free, because the algebra generated by l1 and its adjoint and the algebra generated by l2 and its adjoint are freely independent. So if I take z minus the sum x plus y, invert it, and evaluate my state at e0, what I get is G of mu 1 box-plus mu 2, at z. This is just by how freeness works; it is just because x and y are realized as two free variables.
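To close the loop, here is a small editorial sketch, not from the lecture, of exactly this construction in the simplest case mu 1 = mu 2 = standard semicircle, where R(z) = z, so that x = l1 + l1 star and y = l2 + l2 star. It builds a truncation of the full Fock space over a two-dimensional Hilbert space, forms x + y, and checks that the state applied to the resolvent reproduces the Cauchy transform of the variance-2 semicircle, i.e. of mu 1 box-plus mu 2. For a general R one would instead apply the (finitely truncated) power series of R to the nilpotent annihilation operators; the depth and test point below are arbitrary choices.

```python
# Sketch: free convolution via the Fock-space model.  Basis vectors are words
# over {1, 2} of length <= depth; l_j prepends the letter j (creation), and its
# transpose removes it (annihilation on the truncated space).
import numpy as np
from itertools import product

depth = 10
words = [()] + [w for k in range(1, depth + 1) for w in product((1, 2), repeat=k)]
index = {w: i for i, w in enumerate(words)}
N = len(words)

def creation(letter):
    m = np.zeros((N, N))
    for w, i in index.items():
        if len(w) < depth:
            m[index[(letter,) + w], i] = 1.0   # l_j : e_w -> e_{(j, w)}
    return m

l1, l2 = creation(1), creation(2)
x = l1 + l1.T                     # R_sc(z) = z, so x = l1 + l1* models the semicircle
y = l2 + l2.T                     # same law, and x, y are free

z = 1.5 + 0.5j                    # a test point in the upper half-plane
e0 = np.zeros(N, dtype=complex); e0[0] = 1.0
v = np.linalg.solve(z * np.eye(N) - (x + y), e0)
G_fock = v[0]                               # <(z - (x+y))^{-1} e0, e0>
G_sc2 = (z - np.sqrt(z * z - 8)) / 4        # Cauchy transform of the variance-2 semicircle
print(G_fock, G_sc2)                        # should agree closely; closer for larger depth
```

The agreement improves as the truncation depth grows, since the components of the resolvent vector decay geometrically in the word length.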