I've done it this way. The point I'm trying to argue is that my x and y have a very special property. The law of x + y — which, just copying the definitions verbatim, is the law of α⁻¹l1 + α⁻¹l2 + R_μ1(l1*) + R_μ2(l2*) — is, I claim, the same as the law of the corresponding expression built out of the single operator l1 + l2, suitably normalized: α⁻¹ times (l1 + l2), with the R-transforms now applied to α(l1 + l2)* (this is on the next page; I divided by 2 there — the normalizations are chosen to compensate for the factor computed below). So why is that? If you think about how you would compute any power series in x + y, apply it to E0 and pair with E0, the essential facts you use are that l1* cancels l1 — that is, l1*l1 = 1 — and that l1* kills E0. Once you know these two relations, that is basically sufficient to compute any such vacuum expectation. And what I'm claiming is that l1 + l2 satisfies essentially the same relations. More precisely, multiply (l1 + l2)* by (l1 + l2): what you get is l1*l1 + l1*l2 + l2*l1 + l2*l2. The cross terms are 0, because one factor creates a vector and the other immediately tries to annihilate it against a perpendicular vector. The two remaining terms each give 1, so the product is 2. That's my point: the normalization comes precisely from this factor of 2, and it does not change the law.
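The two relations above can be checked concretely. Here is a small numerical sketch (my own illustration, not from the talk): creation operators on a truncated full Fock space over C². One verifies that l_i* l_j = δ_ij on the truncation, that the cross terms vanish, and that the normalized sum s = (l1 + l2)/√2 gives a semicircular element s + s* whose even vacuum moments are the Catalan numbers 1, 2, 5.

```python
import itertools
import numpy as np

# Truncated full Fock space over C^2: basis = words in {1,2} of length <= d.
# The empty word is the vacuum E0; l1, l2 are the two creation operators.
d = 6
words = [w for n in range(d + 1) for w in itertools.product((1, 2), repeat=n)]
index = {w: i for i, w in enumerate(words)}
dim = len(words)

def creation(letter):
    L = np.zeros((dim, dim))
    for w, i in index.items():
        if len(w) < d:                      # truncate at depth d
            L[index[(letter,) + w], i] = 1.0
    return L

l1, l2 = creation(1), creation(2)

# Projection onto words of depth < d, where the relations hold exactly:
P = np.zeros((dim, dim))
for w, i in index.items():
    if len(w) < d:
        P[i, i] = 1.0

rel11 = l1.T @ l1 @ P                       # should be the identity (= P)
rel12 = l1.T @ l2 @ P                       # cross term: should be 0

# Normalized sum: again an isometry, since (l1+l2)*(l1+l2) = 2.
s = (l1 + l2) / np.sqrt(2)
e0 = np.zeros(dim)
e0[index[()]] = 1.0

# Vacuum moments of the semicircular element s + s*: Catalan numbers.
x = s + s.T
moments = [e0 @ np.linalg.matrix_power(x, 2 * k) @ e0 for k in (1, 2, 3)]
```

With the truncation depth d = 6, all moments up to order 6 are computed exactly, since a returning path of length 2k reaches depth at most k.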
OK, and so why does this have the same law as that? Well, this is more or less a universal statement over all possible functions R, so you can just check it on monomials: check it when R(z) is z^n, say. And you will see that, precisely because of the relation above, you get the same expression on the nose when you evaluate such an operator on E0 and pair with E0: evaluating one expression on E0 and dotting with E0 gives the same thing as evaluating the other. So for any function of this operator and any function of that operator, you can check very easily, by these kinds of combinatorial identities, that the answer is always the same. What that tells you is that x + y — the variable whose law is the free convolution of the two measures — has as its R-transform simply the sum of the two R-transforms. All right. I don't want to spend too much time on the combinatorics, but I do want to mention a very pretty formula that connects things together. In the compactly supported situation, look at g_μ(z) = ∫ dμ(t)/(z − t). The correct thing to do is to factor out a z, so the integrand becomes (1/z) · 1/(1 − t/z), and this you can expand as a power series in t/z, for z sufficiently large. So g_μ(z) is nothing but (1/z) ∫ Σ_k (t/z)^k dμ(t), and if m_k denotes the k-th moment of your probability measure, what you get is g_μ(z) = Σ_k m_k z^(−(k+1)). So, written as a power series in 1/z, the Cauchy transform is, up to a shift, the moment generating function of your measure. Now, the R-transform can also be expanded — I told you that in this case it is an analytic function — say R_μ(z) = Σ_m α_m z^(m−1).
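This expansion is easy to sanity-check numerically. A sketch using the semicircle law as the test measure (my choice; its odd moments vanish, and m2 = 1, m4 = 2): the directly computed Cauchy transform at a point z outside the support agrees with the truncated moment series Σ m_k z^(−(k+1)).

```python
import numpy as np

# Semicircle law on [-2, 2] as a concrete compactly supported test measure.
t = np.linspace(-2, 2, 200001)
dt = t[1] - t[0]
rho = np.sqrt(np.maximum(4 - t**2, 0.0)) / (2 * np.pi)
integrate = lambda f: float(np.sum(f) * dt)

z = 5.0                                   # |z| well outside the support
G_direct = integrate(rho / (z - t))       # integral of d mu(t) / (z - t)

# Moments m_k and the expansion  g_mu(z) = sum_k m_k z^(-(k+1)):
K = 40
m = [integrate(t**k * rho) for k in range(K)]
G_series = sum(m[k] * z ** (-(k + 1)) for k in range(K))
```

The truncation error of the series is of order (2/5)^K, so the two numbers agree to many digits.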
And it turns out there is a very pretty combinatorial formula that links the numbers m_n and the numbers α_n. The only things that need explaining are what NC(n) is and what a block is. NC(n) is the set of all non-crossing partitions of {1, ..., n}: you take the numbers between 1 and n, in order, and look at all partitions — all ways of writing the set as a disjoint union of subsets — but the partitions are required to be non-crossing. For example, with n = 6, the partition with classes {1, 3, 4}, {2}, and {5, 6} is non-crossing, because I can draw its classes without any crossings. Something crossing would be, for instance, the partition with classes {1, 3, 4} and {2, 5, 6}: that crosses, as you obviously see, so it is not permitted. Now, you sum over all non-crossing partitions, and for each partition you take the product, over its classes — the blocks of the partition — of the numbers α_m associated to the block sizes; the α_m are called the free cumulants. In our example there are three blocks, {1, 3, 4}, {2}, and {5, 6}, so the term in the sum corresponding to this particular partition is α_3 · α_1 · α_2, from the sizes of the three blocks. So the formula says that you can recover the moments by summing these products of α's. In fact, you can also go the opposite way and recover the α's from the moments, and it is actually a very useful combinatorial tool.
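The moment–cumulant formula is easy to implement by brute force. A sketch (my own code): enumerate all set partitions, keep the non-crossing ones, and sum products of cumulants over blocks. With κ2 = 1 and all other free cumulants 0, the even moments come out as Catalan numbers, as they must for the semicircle law; and |NC(n)| itself is the n-th Catalan number.

```python
from math import comb

def partitions(n):
    """All set partitions of {0,...,n-1}, blocks as tuples."""
    if n == 0:
        return [[]]
    result = []
    for p in partitions(n - 1):
        result.append(p + [(n - 1,)])               # n-1 in its own block
        for i in range(len(p)):                     # n-1 joins an old block
            result.append(p[:i] + [p[i] + (n - 1,)] + p[i + 1:])
    return result

def is_noncrossing(p):
    """No a < b < c < d with a, c in one block and b, d in another."""
    for A in p:
        for B in p:
            if A != B:
                for a in A:
                    for c in A:
                        if any(a < b < c < d for b in B for d in B):
                            return False
    return True

def moment(n, kappa):
    """m_n = sum over pi in NC(n) of prod over blocks V of kappa(|V|)."""
    total = 0
    for p in filter(is_noncrossing, partitions(n)):
        prod = 1
        for block in p:
            prod *= kappa(len(block))
        total += prod
    return total

# Semicircle: kappa_2 = 1, all other free cumulants 0  =>  m_{2k} = Catalan(k).
sc = lambda b: 1 if b == 2 else 0
moments = [moment(2 * k, sc) for k in (1, 2, 3, 4)]
catalan = [comb(2 * k, k) // (k + 1) for k in (1, 2, 3, 4)]

# |NC(n)| is also the Catalan number C_n:
nc_counts = [sum(1 for p in partitions(n) if is_noncrossing(p)) for n in (1, 2, 3, 4)]
```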
Incidentally, the big interesting thing here is that the passage from classical to free probability is just the replacement of the set of all partitions, crossing or not, by the set of non-crossing partitions: a very similar formula holds for the coefficients of the logarithm of the Fourier transform if you drop the non-crossing condition. The α's are called free cumulants. They are numbers indexed by the positive integers, and they are the coefficients of the power series expansion of the R-transform. So in particular they add: they are additive for free convolution, because the whole power series is additive for free convolution. All right, the free central limit theorem. The free central limit theorem is an analog of the classical one. It says that if you form a central limit sum x1 + ... + xn, rescaled appropriately, and put some conditions — the x_j's should be freely independent and identically distributed (the statement really should say IID), centered, with a condition on the second moment and some growth condition on the moments which I did not state here — then the law of the central limit sum converges to the semicircle law. One way to prove it is very, very simple; I don't think I put the proof in the notes, so let's look at it through the cumulants. First you check what dilation does to the R-transform: if D_t μ is the dilation of μ by t — push the measure forward under x ↦ tx, stretching it by t — then R_{D_t μ}(z) = t R_μ(tz), I believe. And so when you compute the R-transform of the central limit sum, all you are doing is adding up a bunch of R-transforms.
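The dilation rule just stated, combined with the additivity of free cumulants, makes the bookkeeping in this proof a one-liner. A sketch with hypothetical cumulant values (κ3, κ4, κ5 invented for illustration): the m-th free cumulant of (x1 + ... + xn)/√n is n · κ_m · n^(−m/2), so only κ2 survives as n grows — and a law whose only non-zero free cumulant is κ2 is the semicircle law.

```python
# Free cumulants transform simply: under the dilation x -> t x the m-th
# cumulant picks up a factor t^m, and under free convolution cumulants add.
# For (x_1 + ... + x_n)/sqrt(n) with the x_i free copies of x, the m-th
# free cumulant is therefore  n * kappa_m * n^(-m/2).
# Hypothetical centered input law: kappa_1 = 0, kappa_2 = a^2 = 1, and some
# arbitrary higher cumulants (invented for illustration):
kappa = {1: 0.0, 2: 1.0, 3: 0.7, 4: -1.3, 5: 2.9}

def clt_cumulants(n):
    # m-th free cumulant of the normalized sum of n free copies
    return {m: n * k * n ** (-m / 2) for m, k in kappa.items()}

c = clt_cumulants(10**8)
# Only kappa_2 survives: the limit has kappa_2 = a^2 and all other free
# cumulants 0, i.e. it is the semicircle law of variance a^2.
```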
And then you rescale by 1/√n. If you pay careful attention to these R-transforms and expand them as analytic functions, you will see that R_{x_i}(z) has to start with 0, because the first moment is 0; so the important coefficient is the number a² in the linear term a²z, plus higher order terms. When you add them up, summing from 1 to n, you get an n in front of the linear term and an n in front of the higher terms. But then the rescaling by 1/√n conspires to divide the linear term by n, and the higher terms by more than n — by a higher power of n. So all the higher terms wash out, and you are left with just a²z. Your R-transform is linear, and that is exactly what happens for the semicircle law. All right. Now, properties of free convolution. Some things are known. One is that free convolution is trying its best, if you like, to avoid atoms. If you take the free convolution of two measures μ and ν, the only way to have an atom at some point t is if you already had two huge atoms: there must be an a with μ({a}) > 0 and a b with ν({b}) > 0 such that a + b = t, and the total mass of these atoms exceeds 1, i.e. μ({a}) + ν({b}) > 1. This is related to the fact that if you take two projections on a finite dimensional Hilbert space, and normalize the rank so that the identity has rank 1 (and the smallest non-zero projection has rank 1/n), then two projections are forced to intersect as soon as their ranks together exceed 1: p will intersect q, p ∧ q will be non-zero, and in fact rank(p ∧ q) ≥ rank(p) + rank(q) − 1.
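The projection inequality is elementary linear algebra and easy to see numerically. A sketch (dimensions chosen arbitrarily by me), using dim(U ∩ V) = dim U + dim V − dim(U + V): in the unnormalized form the bound reads dim(ran p ∩ ran q) ≥ rank p + rank q − n.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50

def random_projection(rank):
    # Orthogonal projection onto a random subspace of the given rank.
    Q, _ = np.linalg.qr(rng.standard_normal((n, rank)))
    return Q @ Q.T

def dim_intersection(P, Q):
    # dim(ran P ∩ ran Q) = rank P + rank Q − dim(ran P + ran Q)
    rp = np.linalg.matrix_rank(P)
    rq = np.linalg.matrix_rank(Q)
    span = np.linalg.matrix_rank(np.hstack([P, Q]))
    return rp + rq - span

# Normalized ranks 35/50 = 0.7 and 30/50 = 0.6 sum to more than 1, so the
# ranges must intersect, in dimension at least 35 + 30 − 50 = 15.
P = random_projection(35)
Q = random_projection(30)
d = dim_intersection(P, Q)
```

For generic (random) subspaces the bound is attained with equality.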
That's just how the geometry of Hilbert space is; you just can't avoid it. (Again, this is the normalized rank, with the identity having rank 1.) So sometimes things have to intersect, and when you put things freely, this is more or less exactly what happens: things intersect only when they have to. And that is why the only way to get a spectral projection — an atom — for the sum is to already have substantial spectral projections, substantial eigenvalues, for both operators. Another very interesting fact concerns infinite divisibility; there is a whole big story here, which I will only touch on lightly. If you have a measure μ, you can of course define its integer convolution powers: μ^⊞n is simply the free convolution of μ with itself n times — or, if you like, the measure whose R-transform is n times the R-transform of μ, because the R-transform linearizes free convolution. This definition can perfectly well be used to define any convolution power: try to define μ^⊞t as the measure whose R-transform is t times the R-transform of μ. The issue, of course, is that when you pass back from R-transforms to Cauchy transforms, there is no a priori guarantee that you actually get the Cauchy transform of a measure; you can get something that isn't. And this does happen. But, amazingly enough, if t ≥ 1, whether integer or not, this always works: μ^⊞t always exists as a measure. Below 1, there are instances where you do not get a measure. Classically, the situation is quite different: a measure may have no convolution powers at all except the integer ones — think of a sum of two delta masses — whereas here, above 1, you can always do this. And then there are regularization effects. For instance, if both measures have a density, then the free convolution also has a density.
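As an aside on how free convolution smooths things out, here is a random-matrix sketch (my own illustration, with sizes chosen arbitrarily): two purely atomic ±1 spectra, put in approximately free position by conjugating one with a Haar-random unitary. Their free convolution is the arcsine law on [−2, 2], which has a density — the atoms disappear, consistent with the atom criterion above (the two masses 1/2 + 1/2 do not exceed 1). The arcsine moments are m2 = 2 and m4 = 6.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

def haar_unitary(n):
    # QR of a complex Ginibre matrix, phases corrected, is Haar distributed.
    Z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    return Q * (np.diag(R) / np.abs(np.diag(R)))

# Two Bernoulli (+-1, mass 1/2 each) spectra in approximately free position:
A = np.diag(np.where(np.arange(n) < n // 2, 1.0, -1.0))
U = haar_unitary(n)
X = A + U @ A @ U.conj().T        # approximates mu ⊞ mu, mu = (δ_-1 + δ_1)/2

eigs = np.linalg.eigvalsh(X)
m2 = np.mean(eigs**2)             # arcsine law on [-2, 2]: m2 = 2, m4 = 6
m4 = np.mean(eigs**4)
```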
In fact, if both densities are, say, in L^p, then the new density is also in L^p; so there are regularization properties, and so forth. The tool for proving these kinds of things is the so-called subordination theorem, which we will prove in the last lecture if we have time. What it states is that when you do a free convolution — say the free convolution of μ_x1 and μ_x2, i.e. the law of x1 + x2 for free x1, x2 — the Cauchy transform of that measure at z is the Cauchy transform of either one of the measures (you can take μ_x1 or μ_x2, as you please), evaluated at a different point: G_{μ_x1 ⊞ μ_x2}(z) = G_{μ_x1}(ω(z)). The function ω that tells you at which point to evaluate is called a subordination function — subordination in the sense of complex variables. And this is exactly the tool you need, for instance, to prove that last regularity statement; it is done in the notes. It gives you a fair amount of analytic information about what happens here. [Question from the audience.] Here? Well, it's equivalent: I am taking x1, x2 free, so this is μ_x1 ⊞ μ_x2, but I am writing it as the law of the concrete variable x1 + x2. Sorry, I was oscillating back and forth between the two notations. OK. And we will see in a little bit how to prove this subordination result. There are more things that can be done; maybe I should mention two. One is that there is also a corresponding theory of ⊠, multiplicative free convolution. A separate theory is needed in the non-commutative case because we don't have exponentials. The exponential function works very well in the classical case: e^(x+y) = e^x · e^y, so log(xy) = log x + log y.
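Returning to subordination for a moment: for free convolution with a semicircle of variance t, the subordination function is known explicitly, ω(z) = z − tG(z), which turns the subordination identity into a self-consistency equation G(z) = G_μ(z − tG(z)) solvable by fixed-point iteration. A minimal sketch (my own choice of test point z and of μ = semicircle of variance 1, so the answer must be the semicircle of variance 2):

```python
import numpy as np

def G_sc(w, var=1.0):
    # Cauchy transform of the semicircle law of given variance,
    # picking the square-root branch with s ~ w at infinity.
    s = np.sqrt(w * w - 4 * var)
    if (s / w).real < 0:
        s = -s
    return (w - s) / (2 * var)

# Subordination for mu ⊞ semicircle(t):  G(z) = G_mu(omega(z)),
# omega(z) = z - t*G(z).  Solve by fixed-point iteration.
z = 1.0 + 2.0j
t = 1.0
G = -0.2j                       # start anywhere in the lower half-plane
for _ in range(200):
    G = G_sc(z - t * G)
omega = z - t * G

G_exact = G_sc(z, var=2.0)      # semicircle(1) ⊞ semicircle(1) = semicircle(2)
```

At this z the iteration is a contraction, so it converges to machine precision.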
So if you have classical variables, say positive ones, and you want to multiply them, that's not a big deal: you just take logs, and the logs are again independent, so you add them. Now, in the free case these formulas are out the window, because x and y don't commute; so you need a separate theory. Nonetheless, a separate theory exists, and it has some flavor of saying that the exponential function, if it were to exist, wouldn't be so bad. I will not describe it in detail, but I'll just define it. If μ and ν are probability measures supported on R₊, then you define their multiplicative free convolution μ ⊠ ν as the law of x^(1/2) y x^(1/2), where x has law μ and y has law ν. This is actually the same as the law of y^(1/2) x y^(1/2), because of traciality: if you take a moment of such an expression, you can move one of the x^(1/2)'s around to the other side under the trace, and then one of the y^(1/2)'s, and convert one expression into the other. So even though it is defined in an asymmetric way, it is a symmetric operation. And x^(1/2) y x^(1/2) is a positive operator, because I have multiplied a positive operator on both sides by the same positive operator; so it gives you again a probability measure on R₊. It is also defined for unitaries: if μ and ν are probability measures on the unit circle, then I can define μ ⊠ ν to be the law of uv, where u is a unitary distributed according to μ and v a unitary distributed according to ν, freely. There is again an analytic machinery to compute this. But now you could say: what other operations can you do? Well, in principle you can apply any non-commutative polynomial, and as a matter of fact, certain things are known in this generality.
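Back to the multiplicative convolution for a moment: the traciality argument above — that x^(1/2) y x^(1/2) and y^(1/2) x y^(1/2) have the same moments, hence the same law — can be checked directly with positive matrices, a finite-dimensional stand-in for the tracial setting (my own illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

def random_psd(n):
    A = rng.standard_normal((n, n))
    return A @ A.T                     # positive semidefinite

def sqrtm_psd(a):
    # Positive square root via the spectral theorem.
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.sqrt(np.maximum(w, 0))) @ v.T

x, y = random_psd(n), random_psd(n)
xs, ys = sqrtm_psd(x), sqrtm_psd(y)

p = xs @ y @ xs                        # x^(1/2) y x^(1/2)
q = ys @ x @ ys                        # y^(1/2) x y^(1/2)

# Both are PSD and have the same trace moments (both equal tr((xy)^k)),
# so they have the same spectral distribution.
mp = [np.trace(np.linalg.matrix_power(p, k)) for k in (1, 2, 3)]
mq = [np.trace(np.linalg.matrix_power(q, k)) for k in (1, 2, 3)]
```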
What happens if you take a completely general non-commutative polynomial and apply it to free variables x1, ..., xn? For example, one thing that is true is that if each of these variables has an algebraic Cauchy transform — so the Cauchy transform solves an algebraic equation — then the polynomial in these variables also has an algebraic Cauchy transform. And if each of the laws μ_xi is non-atomic and p is not a constant polynomial, then the law of the polynomial is non-atomic. So in particular, if you take n independent random matrices, say GUE matrices, and form a self-adjoint polynomial of them, in the limit this always has a diffuse spectral measure. I would love, by the way, to have a random matrix proof of this. And then there is some story about connected components. Since we're running out of time, let me just say that if each law μ_xi has connected support, then the law of the polynomial has connected support. This is a very nice statement. It follows from the fact that if you take abelian C*-algebras A_i with no non-trivial projections, then their free product has no non-trivial projections — a projection being simply a self-adjoint idempotent element. This is a result in topological K-theory for C*-algebras, and I don't know of another way of proving it besides appealing to those kinds of high-powered results. Somehow it says that the free product keeps topological spaces connected, whatever that means. All right, one last thing I wanted to say. Everything I've done so far, except for one slide, was about one variable — how to compute convolutions of measures. Now, what happens if you have several variables? Well, there the difficulty is that the object μ disappears: we can no longer attach a single measure, or in fact any number of measures, that would encode the joint law of n variables.
For a single self-adjoint variable you can do this, but for several non-commuting self-adjoint variables you can't, because you don't have a spectral theorem of any kind. Nonetheless, the Cauchy transform g_μ still has a very good existence, and this will probably come up a little in Roland's talk. The very good trick here is to look at what are called matricial resolvents. You start with an n-tuple x1, ..., xn and form the diagonal matrix X with entries x1, ..., xn. Now, if you just look at the resolvent of X as a single operator, nothing beautiful happens. What you do instead is look at resolvents of the form (b − X)⁻¹, where b is a scalar n-by-n matrix. If you apply the trace entry-wise, the result of taking this resolvent is again an n-by-n matrix — a scalar n-by-n matrix. So you are getting a kind of Cauchy transform, but a Cauchy transform that is matrix-valued: what was z before, a complex variable, now becomes a matrix. There is a fair amount that can be done using this, but it is very much a developing theory, and more and more parallels are emerging between it and the kind of complex analysis that happens for a single variable z. So unfortunately that theory is not yet mature enough to prove statements of the kind above, but there is room to hope. It is one of the ways to repackage the n-variable situation to make it as close as possible to the one-variable one. OK, so I think I should stop here. Thanks. [Chair: We're just slightly over time, so we might defer questions. There is a quarter-hour break now. Thanks.]
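To make the matricial-resolvent idea concrete, here is a small random-matrix sketch (my own illustration, with GOE blocks and a 2×2 scalar matrix B — none of these choices are from the talk): for the block-diagonal X = diag(x1, x2), applying the normalized trace entry-wise to the resolvent (B ⊗ 1 − X)⁻¹ yields a 2×2 matrix-valued Cauchy transform. For B = z·1 this matrix is diagonal, with entries approximating the scalar Cauchy transforms of x1 and x2 (here, the semicircle transform).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400

def goe(n):
    # Wigner matrix normalized so the spectrum approximates the
    # semicircle law on [-2, 2].
    a = rng.standard_normal((n, n))
    return (a + a.T) / np.sqrt(2 * n)

x1, x2 = goe(n), goe(n)
X = np.block([[x1, np.zeros((n, n))],
              [np.zeros((n, n)), x2]])      # diag(x1, x2) as one operator

def matrix_cauchy(B):
    # (id ⊗ tr_n) applied to the resolvent (B ⊗ I_n − X)^{-1}: a 2x2 matrix.
    R = np.linalg.inv(np.kron(B, np.eye(n)) - X)
    return np.array([[np.trace(R[i*n:(i+1)*n, j*n:(j+1)*n]) / n
                      for j in range(2)] for i in range(2)])

z = 2.0 + 1.0j
G = matrix_cauchy(z * np.eye(2))
# Since X is block diagonal and B is scalar, G is diagonal, and its entries
# approximate the semicircle Cauchy transform g(z) = (z - sqrt(z^2 - 4))/2.
```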