So I can think of this product as the new $A_1$: I group neighboring letters as much as I can, to ensure that consecutive letters live in different algebras. So the first thing I can arrange is that each letter $A_k$ lies in $\mathcal A_{i(k)}$, with $i(1) \neq i(2)$, $i(2) \neq i(3)$, and so forth. And the next trick I'm going to do is to recenter all of them. Let me write $\bar A_i$ for $\tau(A_i)$. So I will look at the product $(A_1 - \bar A_1)(A_2 - \bar A_2)(A_3 - \bar A_3)\cdots(A_n - \bar A_n)$. Now, what is $\tau$ of this equal to? Zero. Why? Because of the definition: the definition of free independence is precisely that the trace of an alternating product of centered elements like this is 0. On the other hand, if I simply expand everything out, I get one term $\tau(A_1 A_2 \cdots A_n)$, and then terms where at least one of the $\bar A$'s appears: terms like $\bar A_1\, \tau(A_2 \cdots A_n)$, then $\bar A_2\, \tau(A_1 A_3 \cdots A_n)$, and many, many more terms with two $\bar A$'s, three $\bar A$'s, all the way up to $n$ $\bar A$'s. But all of those terms have lower degree in the number of variables, so I can recursively apply my recentering procedure and define the trace on everything. Therefore I can solve for $\tau(A_1 \cdots A_n)$ in terms of lower-degree terms, and this expression is well defined. Is that clear? All right.
Now the consequence of this is that if I have two families of variables, $x$'s and $y$'s, and if I know the law of the $x$'s by themselves and the law of the $y$'s by themselves, and if I know that these two families are free, then their joint law is completely determined. In other words, I can compute the expected value of any polynomial in the $x$'s and $y$'s in terms of expected values of polynomials in the $x$'s alone and expected values of polynomials in the $y$'s alone. This is more or less the kind of computation you have to do; a worked low-degree instance follows below.
And the last fact that is relevant is that you can always find a model where variables are free. Classically, if somebody hands you two random variables, you can find a realization of them jointly so that they're independent. The same thing is true in the free case: if somebody hands you a family of variables, or if you like an algebra and another algebra, there is a way to put them together in a larger algebra so that inside that larger algebra they're freely independent. I don't want to go through the construction, but in the notes one of the exercises is to do exactly that. So if you want to understand exactly how this free product works, you can do that exercise. If you don't want to do that exercise, you can still understand how this works: just understand how, given two groups, you realize them jointly in the left regular representation of their free product. If you understand that, you more or less understand how to do this in any case.
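To make the recentering computation above concrete, here is the lowest-degree instance of the recursion, for two free variables $x$ and $y$, writing $\mathring{x} = x - \tau(x)$:
$$
0 = \tau(\mathring{x}\,\mathring{y}) = \tau(xy) - \tau(x)\tau(y)
\quad\Longrightarrow\quad
\tau(xy) = \tau(x)\tau(y),
$$
and one degree up, expanding $0 = \tau(\mathring{x}\,\mathring{y}\,\mathring{x}\,\mathring{y})$ and substituting the lower-degree moments already computed (such as $\tau(xyx) = \tau(x^2)\tau(y)$, obtained the same way),
$$
\tau(xyxy) = \tau(x^2)\tau(y)^2 + \tau(x)^2\tau(y^2) - \tau(x)^2\tau(y)^2 .
$$
Every mixed moment of free variables unwinds to such a polynomial in the pure moments, which is exactly the sense in which the joint law is determined.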
OK, great. Now the next thing I want to do is a construction of what will end up being exactly what was on this board, namely the limit of this random matrix model. We will not be able to prove that just yet, but I want to give you a concrete set of operators that realize that joint law of random matrix models, and I want to convince you that knowing these as concrete operators on a Hilbert space is a good thing, because sometimes computations go a little more simply if you do that.
So here's the setup. We start with a real Hilbert space $H$; think of $H$ as $\mathbb R^n$. Then there's a complex version of $H$, which is $\mathbb C^n$. And then you create a bigger Hilbert space, the full Fock space; if you like, it's the Hilbert space of all non-commutative polynomials. It's simply all tensor powers:
$$
\mathcal F(H) = \mathbb C\,\Omega \;\oplus\; H_{\mathbb C} \;\oplus\; H_{\mathbb C}^{\otimes 2} \;\oplus\; \cdots
$$
The first summand, the multiples of the vector $\Omega$, you can think of as the polynomials of degree 0; then $H_{\mathbb C}$, which you can think of as the polynomials of degree 1; then $H_{\mathbb C} \otimes H_{\mathbb C}$, and so forth. A typical vector in this Hilbert space looks like $\xi_1 \otimes \xi_2 \otimes \cdots \otimes \xi_n$, so you can think of it as a kind of product of $n$ degree-1 things; it's like a monomial of degree $n$. If I were to symmetrize things, if I insisted on the symmetric tensor product, this would really just be the algebra of all polynomials.
And now I have a very interesting set of operators; these are actually a bit like Toeplitz operators. Namely, they are the operators of left tensor multiplication by a vector. Let me write it down, because when the slide goes away I want to still have the definitions on the board. Acting on a tensor,
$$
L(h)\,(\xi_1 \otimes \cdots \otimes \xi_n) = h \otimes \xi_1 \otimes \cdots \otimes \xi_n .
$$
Then there's the adjoint $L(h)^*$, which tries to reverse that: it eats up the first vector,
$$
L(h)^*\,(\xi_1 \otimes \cdots \otimes \xi_n) = \langle h, \xi_1\rangle\; \xi_2 \otimes \cdots \otimes \xi_n .
$$
And I have the operator $s(h) = L(h) + L(h)^*$, the real part, for $h$ a real vector. Finally, $\tau$ is just given by the vacuum vector: $\tau(x) = \langle \Omega, x\,\Omega\rangle$, where $\Omega$ is a formal unit vector spanning the degree-0 part; think of it as the constant polynomial. So everything in this space is either a multiple of $\Omega$ or a combination of products of vectors from $H_{\mathbb C}$.
What does $L(h)$ do to $\Omega$? You see, $\Omega$ is something in degree 0, and when I act on it I get something in the first tensor power: $L(h)\,\Omega = h \in H_{\mathbb C}^{\otimes 1}$. What $L$ always does is increase the degree by 1: if you start with something of degree 0 you end up in degree 1, and if you start with something of degree 3 you end up in degree 4. So in general your most general vector here is an $L^2$ linear combination of $\Omega$ and tensor products like this; I've just written what the operator does on a basis, as opposed to writing it in general.
All right, so the first thing that you discover is that this $\tau$ I have just advertised is actually not a trace. That's very easy to see. If you think about it for a minute or so, you convince yourself that $L(h)^* L(h)$ is just 1, the identity; let's take $h$ of length 1. The reason is: what does $L(h)$ do? It prepends everything with an $h$. What does $L(h)^*$ do? It removes that $h$. So altogether they do absolutely nothing. On the other hand, if you look at $L(h)\,L(h)^*$, it's almost the same thing: $L(h)^*$ removes an $h$, and $L(h)$ puts it back. The only defect is that if you started from something of degree 0, then $L(h)^*$ irrevocably kills that vector, and there's no resuscitation by $L(h)$. So what I'm saying is that $L(h)\,L(h)^* = 1 - p$, where $p$ is the projection onto the vectors of degree 0 (strictly speaking, it is the projection onto tensors whose first factor is $h$, but the point is that it kills $\Omega$). And now if you compute $\tau$ of $L(h)^*L(h)$, you act on the vector $\Omega$: the identity just gives you $\Omega$, you dot with $\Omega$, and you get 1. But if you do it the other way, $\tau(L(h)\,L(h)^*)$, you're just going to get 0, obviously, because you're applying the identity minus the projection onto that very vector. So $\tau(L^*L) = 1 \neq 0 = \tau(LL^*)$: this is not a trace.
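These relations are easy to check numerically. Below is a minimal sketch (not from the lecture) of a truncated full Fock space: words of length at most $N$ over $d$ letters play the role of the tensors $e_{i_1}\otimes\cdots\otimes e_{i_k}$, and the dimension $d$ and cutoff degree $N$ are arbitrary choices for the experiment.

```python
import numpy as np
from itertools import product

# A finite-dimensional stand-in for the full Fock space F(H): words of length
# at most N over d letters play the role of elementary tensors.
d, N = 2, 6
words = [w for k in range(N + 1) for w in product(range(d), repeat=k)]
idx = {w: i for i, w in enumerate(words)}
dim = len(words)

def L(j):
    """Left creation L(e_j): prepend the letter j (truncated at degree N)."""
    M = np.zeros((dim, dim))
    for w, i in idx.items():
        if len(w) < N:
            M[idx[(j,) + w], i] = 1.0
    return M

tau = lambda A: A[0, 0]   # tau(A) = <Omega, A Omega>; Omega = the empty word

L0 = L(0)
low = [idx[w] for w in words if len(w) < N]   # degrees untouched by the cutoff
print(np.allclose((L0.T @ L0)[np.ix_(low, low)], np.eye(len(low))))  # L*L = 1
print(tau(L0.T @ L0), tau(L0 @ L0.T))   # 1.0 0.0 -- so tau is not a trace
```

The same matrices are reused in the checks further below.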
However, if you look not at $L(h)$ and $L(h)^*$ but only at the algebra generated by the operators $s(h)$ for $h$ real, then you do have a trace. The reason for it, the reason for reality, is that you can check that $\tau$ is a trace if and only if $\tau(xy)$ is real for all self-adjoint $x$ and self-adjoint $y$. Because if you take the complex conjugate of $\tau(xy)$, you get $\tau(y^* x^*)$, and if they're self-adjoint you just get $\tau(yx)$; so the reversal of $x$ and $y$ is equivalent to being real. And now if you take a product of such operators with real vectors and act on $\Omega$, you're going to tensor on a lot of real vectors, and maybe annihilate some real vectors, and at the end of the day you clearly get a real number. So this is why we want a real Hilbert space; otherwise it's not a trace, OK?
The other thing that you can check, and this is done in the notes, is that if you start with perpendicular subspaces, you get things which are freely independent. Let me just do two vectors for simplicity. Suppose that $h_1$ is perpendicular to $h_2$, and look at $\mathcal A_1$, the algebra generated by $s(h_1)$, and $\mathcal A_2$, the algebra generated by $s(h_2)$. Then $\mathcal A_1$ and $\mathcal A_2$ are freely independent, OK?
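Although the proof is left to the notes, the statement can be tested in the truncated model above: alternating products of centered elements from the two algebras should have vanishing vacuum expectation. A sketch, reusing the matrices from the previous snippet:

```python
# s(h) = L(h) + L(h)* for the orthonormal (hence perpendicular) vectors e_0, e_1.
S0, S1 = L(0) + L(0).T, L(1) + L(1).T
I = np.eye(dim)
a = S0 @ S0 - tau(S0 @ S0) * I   # centered element of the algebra of s(e_0)
b = S1 @ S1 - tau(S1 @ S1) * I   # centered element of the algebra of s(e_1)

# Free independence: alternating products of centered elements should have
# vanishing vacuum expectation (exact here, since the cutoff N is large enough).
print(tau(a @ b), tau(a @ b @ a), tau(b @ a @ b @ a))   # 0.0 0.0 0.0
```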
And the last thing that I want to prove is that $s(h)$ has the semicircular law. There are several proofs of it; I have maybe 15 minutes. The first way you can do this is an actual combinatorial proof. Let $h_1, h_2, \dots$ be an orthonormal set. What you can prove is that
$$
\tau\bigl(s(h_{i_1})\, s(h_{i_2}) \cdots s(h_{i_d})\bigr)
= \#\{\text{non-crossing pairings of } \{1,\dots,d\} \text{ that respect color, i.e.\ pair equal indices}\}.
$$
That way, you prove that this colored version of what Johanna Dumitru was talking about in the first lecture this morning is exactly the expected value of this product. And this proof is not so hard. The idea is to track the degree in $\mathcal F(H)$. You start at $\Omega$: if you call the product $Z$, what you're trying to compute is $\langle \Omega, Z\,\Omega\rangle$, so you start at degree 0. Now $Z$ is a product of a bunch of $s(h_i)$'s acting on $\Omega$. What does the rightmost factor $s(h_{i_d})$ do? Remember, it's the left multiplication plus its adjoint. It either tries to annihilate $\Omega$, which gives you 0, or it creates $h_{i_d}$: you jump up to $h_{i_d}$. Then comes the next one, $s(h_{i_{d-1}})$. What can it do? Either the creation part $L$ acts, and we tensor: we end up with $h_{i_{d-1}} \otimes h_{i_d}$; or the annihilation part acts, and we get the inner product $\langle h_{i_{d-1}}, h_{i_d}\rangle\,\Omega$.
So at each step, whenever you arrive at one of these, you have the choice of going up (if $L$ acted) or going down (its adjoint), and you can identify the surviving terms with paths that stay above the line; these are called Dyck paths. And the way you associate these with bracketings: at the end, what you care about is arriving back at degree 0, because you want to dot again with $\Omega$. If you do this, you can see which downhill step cancelled which uphill step; that matching gives your parentheses. For example, in this path, this downhill part cancelled that uphill part, so that's a pair of parentheses; then this up-step and that down-step cancelled; and if we go down like that, this part cancels that part. So your resulting parenthetical expression is going to be that, OK? And the fact that you pick up inner products between the various $h$'s translates, because they're orthonormal, into the condition that the colors are preserved. (Question from the audience.) Right, I should have written indices $i_1, \dots, i_d$ here; the inner product is 1 or 0 depending on whether $i_{d-1}$ and $i_d$ are equal or not, exactly, OK?
All right, so you can do that. And at the end of the day, if all the $h$'s are the same, so we're looking at powers of a single element, you can compute explicitly: let $m_n = \tau(s(h)^n)$. Then there is a recursive relation between the moments, and it's very easy to see from the parenthesization picture. The first opening parenthesis has to close somewhere, matching one of the closing parentheses. Once you condition on which one it matches, inside the pair you have all possible parenthetical expressions on the enclosed letters, and after the pair you have all possible parenthetical expressions on the rest. So if the first parenthesis encloses $a$ letters, you have $m_a$ possibilities inside times $m_{n-a-2}$ possibilities after, which immediately gives the recursion
$$
m_n = \sum_{a} m_a\, m_{n-a-2}, \qquad m_0 = 1,
$$
with the odd moments 0. From that you can rather easily deduce what these moments exactly are, and in principle check that the semicircle law has exactly these moments; I've done this in the notes. You can check, for example, that the Catalan numbers satisfy the same recursive relation, and therefore these are the Catalan numbers; a numerical check follows below.
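Here is that check in the truncated model, a sketch reusing the earlier matrices; with cutoff $N = 6$, the vacuum moments up to order 12 come out exactly.

```python
# The moment recursion m_n = sum_a m_a * m_{n-a-2} (m_0 = 1, odd moments 0),
# checked against the actual vacuum moments of s(e_0) from the model above.
m = [1.0] + [0.0] * 12
for n in range(2, 13):
    m[n] = sum(m[a] * m[n - a - 2] for a in range(n - 1))

moments = [tau(np.linalg.matrix_power(S0, n)) for n in range(13)]
print(np.allclose(m, moments))              # True
print([int(m[2 * k]) for k in range(7)])    # [1, 1, 2, 5, 14, 42, 132] Catalan
```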
But let me actually use the fact that we are on that Hilbert space and give you a more direct proof. So here it is. I'm going to use $p$, the projection onto $\Omega$, that is, onto the degree-0 part. And I will use $\mathrm{Tr}$ for the usual trace of operators on this Hilbert space: you take some orthonormal basis $(f_n)$, look at the diagonal coefficients in that basis, and add them up, $\mathrm{Tr}(T) = \sum_n \langle f_n, T f_n\rangle$. And now there are a few magic identities.
The first one is that $\tau(y) = \mathrm{Tr}(p\,y\,p)$ for any operator $y$. Let's see why. To take the trace of $p\,y\,p$, I have to sum $\langle f_n, p\,y\,p\,f_n\rangle$ over an orthonormal basis, and of course the very useful basis to use is the vector $\Omega$ together with a basis of its orthogonal complement. Now $p$ kills everybody in the orthogonal complement, so the summation immediately becomes just one term: $p$ applied to $\Omega$ gives us $\Omega$, $y$ gets applied to $\Omega$, and then we project back onto $\Omega$. So we get exactly the expression for $\tau$: $\tau(y) = \langle \Omega, y\,\Omega\rangle$. By the same token, if I want the product of two traces, I just take $\mathrm{Tr}(p\,y\,p\,z\,p) = \tau(y)\,\tau(z)$. Think about it: $\Omega$ comes in, $z$ acts on it, that gets projected back, and at this point we've computed $\tau(z)$; then we again have an $\Omega$, act on it by $y$, project back onto $\Omega$, and we've produced $\tau(y)$.
OK, one more trick. Let me look at $r(h)$, the operator of tensoring with $h$ on the right. Here the bracket stands for the commutator; I use the notation $[a,b] = ab - ba$. I claim that the operators of right tensor multiplication and left tensor multiplication commute. Look at our government: the right hand doesn't know what the left one is doing. That's exactly the situation here. If you start with some tensor and do something to it on the right, making it even longer, there's no way the left side is affected by that, so it doesn't matter in which order that's done: $[L(h'), r(h)] = 0$. The situation with annihilation on the left and creation on the right is almost as dysfunctional: if you create things on the right, then annihilating things on the left will never see that creation, except when? Except when you start with a super short vector. Because if you start with the vector $\Omega$, then first creating $h$ allows some annihilation, but if you don't create the $h$ and right away try to annihilate, you just get 0. That's why the commutator ends up being exactly the projection onto $\Omega$: $[L(h)^*, r(h)] = p$ for a unit vector $h$. So these operators have a rank-one commutator; in particular, for $x = s(h)$ we get $[x, r(h)] = p$.
Now I'll write $r$ for $r(h)$ and $x$ for $s(h)$ for short, and I will compute the commutator between $(z-x)^{-1}$, the resolvent of my $x$, and $r$. I claim that
$$
\bigl[(z-x)^{-1},\, r\bigr] \;=\; (z-x)^{-1}\, p\, (z-x)^{-1}.
$$
The reason for it is that the commutator $\delta = [\,\cdot\,, r]$ is a derivation. Expand the resolvent as
$$
\frac{1}{z-x} = \frac{1}{z}\cdot\frac{1}{1 - x/z} = \frac{1}{z}\sum_{n \ge 0} \Bigl(\frac{x}{z}\Bigr)^{n},
$$
and take the commutator with $r$ term by term. By the Leibniz rule of the derivation, $[x^n, r]$ gets split up into so many portions of $x$ before and so many portions of $x$ after:
$$
[x^n, r] = \sum_{k=0}^{n-1} x^k\, [x, r]\, x^{n-1-k} = \sum_{k=0}^{n-1} x^k\, p\, x^{n-1-k},
$$
and if you convert the resulting expression into a double summation, you see that you get exactly the product of the two resolvents, as claimed. This is valid for $z$ very big, but by analytic continuation it's valid for all $z$ outside the spectrum of $x$, OK?
All right. Now I will use this to get at the function $g(z) = \tau\bigl((z-x)^{-1}\bigr)$. Let me compute the vacuum expectation $\langle \Omega, [(z-x)^{-1}, r]\,\Omega\rangle$ in two ways. Writing the commutator out: in the term where $r$ sits on the left, I can move it to the other side of the inner product as its adjoint, and $r^*$ kills $\Omega$ because $r^*$ decreases degree; so that term is simply 0. And in the other term, notice that $r\,\Omega$ is the same thing as $x\,\Omega$, which is just $h$. So the whole thing is nothing but $\langle \Omega, (z-x)^{-1} x\,\Omega\rangle$. On the other hand, by the claim, the same quantity is $\langle \Omega, (z-x)^{-1} p\,(z-x)^{-1}\Omega\rangle = \mathrm{Tr}\bigl(p\,(z-x)^{-1}\,p\,(z-x)^{-1}\,p\bigr)$, and by magic identity number two this is just $g(z)^2$. So
$$
g(z)^2 = \tau\bigl(x\,(z-x)^{-1}\bigr),
$$
and here I can do a little trick: I can write
$$
\frac{x}{z-x} = \frac{x-z}{z-x} + \frac{z}{z-x},
$$
and notice that the first fraction is just $-1$. Using this, I get $g(z)^2 = -1 + z\,g(z)$. So all of a sudden I've obtained an equation for my function $g(z)$, and all it took was this little commutator fact.
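Both commutator claims can be sanity-checked in the truncated model, a sketch reusing the earlier matrices (the tolerance on the second check is loose because the cutoff at degree $N$ introduces a small boundary error in the resolvent):

```python
# Right creation r(e_j): append the letter j (truncated at degree N).
def R(j):
    M = np.zeros((dim, dim))
    for w, i in idx.items():
        if len(w) < N:
            M[idx[w + (j,)], i] = 1.0
    return M

p = np.zeros((dim, dim)); p[0, 0] = 1.0   # projection onto Omega
r, x = R(0), S0

# [x, r] = p, exact away from the truncation edge (degrees <= N - 2):
low2 = [idx[w] for w in words if len(w) <= N - 2]
comm = x @ r - r @ x
print(np.allclose(comm[np.ix_(low2, low2)], p[np.ix_(low2, low2)]))  # True

# [(z-x)^{-1}, r] = (z-x)^{-1} p (z-x)^{-1}, tested against Omega:
z = 10.0
res = np.linalg.inv(z * np.eye(dim) - x)
lhs = (res @ r - r @ res) @ p
rhs = res @ p @ res @ p
print(np.max(np.abs(lhs - rhs)) < 1e-4)   # True
```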
And the final thing I want to do is to just give you this lemma, which is proved in the notes: if you know the Cauchy transform of some measure, you can recover the measure as a limit of the imaginary part of the Cauchy transform. Solving the quadratic $g^2 - z\,g + 1 = 0$ with the branch satisfying $g(z) \sim 1/z$ at infinity gives $g(z) = \bigl(z - \sqrt{z^2 - 4}\bigr)/2$, and when you apply the lemma to that particular $g$, you see rather quickly that you get the semicircle law. So the summary is that we've proved that this combinatorial formula gives you the semicircle law; and we didn't prove, it's in the notes, that different $h$'s give you free variables. All right, lunch. Questions?
(Question.) Right, so you will see later. There will be some themes: there will be some kind of a funny derivation going on, and we will use this magic property of the resolvent, which is on the previous page, this marvelous property that when you compute the derivative of some resolvent, you get this factorization. That turns out to be quite useful, because other situations, not the semicircular law but certain unitarily invariant random matrix models, have very much a similar story, with a different derivation, or rather something slightly different, but similar to this proof. Yes? OK, maybe we should stop here, because we are running out of time. There will be the problem session for Dima from 1 to 2 in this room, and there will be a further course at 2.
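As a closing check (not part of the lecture), the inversion lemma can be run numerically on the $g$ just derived: the recovered density matches the semicircle density $\sqrt{4-u^2}/(2\pi)$ on $[-2,2]$. The square root is written as $\sqrt{z-2}\,\sqrt{z+2}$ to keep the branch analytic off the cut $[-2,2]$:

```python
import numpy as np

# g solves g^2 - z*g + 1 = 0 with g(z) ~ 1/z at infinity.
def g(z):
    z = np.asarray(z, dtype=complex)
    return (z - np.sqrt(z - 2) * np.sqrt(z + 2)) / 2

u, eps = np.linspace(-3, 3, 601), 1e-6
recovered = -g(u + 1j * eps).imag / np.pi          # Stieltjes inversion
semicircle = np.sqrt(np.maximum(4 - u * u, 0)) / (2 * np.pi)
print(np.allclose(recovered, semicircle, atol=1e-3))   # True
```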