So this is related to Dima Shlyakhtenko's talk: I want to talk about random matrices and their limits, and in this context the limits are usually given by operators. But one of the main points of my talk is that I want to go over from polynomials to rational functions, non-commutative rational functions; I will tell you what these are. And I will also look a bit at unbounded operators.

OK, so let me start with random matrices and operators. I am talking essentially about the macroscopic limit of eigenvalues of random matrices, and the typical phenomenon in random matrix theory is that, by concentration, we have deterministic behavior in the limit. This deterministic behavior is quite often also interesting, so it is worth looking at what it is. For operator algebra people it turns out that this limit is actually related to very interesting operators, or, more fancily, to C*-algebras and von Neumann algebras; but essentially it is operators on Hilbert spaces. This is one of the fundamental observations of free probability theory, which makes this connection between random matrices and operator algebras, and it goes back to Dan Voiculescu, who created free probability theory. One of the main results there is that the limit of random matrices can very often be described by nice and interesting operators on Hilbert spaces. And Voiculescu was, and still is, interested in the von Neumann algebras generated by those operators.

OK, but let me be more concrete: what do I mean by saying that random matrices converge to operators? So let's say I have my random matrices: N by N matrices, and I have m of them. I claim that almost surely they converge in the limit to some operators, which live on a Hilbert space; bounded operators on a Hilbert space. In order to give a meaning to this convergence I also need a state: for my matrices I have the trace, which contains the information about the eigenvalues, and in the limit I also assume that I have a canonical trace. That is the probabilistic flavor of the whole thing: the convergence is convergence of moments. So we talk about convergence in distribution of the matrices to the operators: if I take any polynomial in m non-commuting variables, evaluate it at my random matrices and calculate the trace, then this converges in the limit to the same polynomial applied to my operators, evaluated in the trace tau. That is the definition of convergence in distribution; you will see more of this in Dima's talks about asymptotic freeness, which is about exactly this.

OK, so that is the basic kind of convergence in free probability theory. And I should really stress that even though we are talking about operators here, in operator algebras one usually has a very different kind of convergence; this convergence in moments is a change of perspective, and operator algebra people are usually not looking at it. The idea of free probability was really that we should take this more probabilistic point of view and look at moments. But then, in recent years, it has turned out that in many interesting situations we actually have a stronger form of convergence, which we call, let's say, strong convergence.
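In formulas, the two notions just mentioned read as follows (my transcription; the notation X_i^{(N)} for the matrices and (A, tau) for the limit is a choice made here, not taken from the slides). Convergence in distribution: for every polynomial p in m non-commuting variables,

\[
\lim_{N\to\infty} \operatorname{tr}\bigl( p(X_1^{(N)},\dots,X_m^{(N)}) \bigr) \;=\; \tau\bigl( p(x_1,\dots,x_m) \bigr) \quad\text{almost surely,}
\]

where tr is the normalized trace on N by N matrices. Strong convergence asks in addition that

\[
\lim_{N\to\infty} \bigl\| p(X_1^{(N)},\dots,X_m^{(N)}) \bigr\| \;=\; \bigl\| p(x_1,\dots,x_m) \bigr\| \quad\text{almost surely,}
\]

with the norm on the right being the operator norm on the limit Hilbert space.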
This goes essentially back to the paper of Haagerup and Thorbjørnsen, and then there are many follow-up papers; the name "strong convergence", I think, goes back to Camille Male. OK, but anyhow, what is it? It means I have convergence in distribution, but more: namely, the operator norms also converge. I take any polynomial in my random matrices and look at its operator norm, which for a self-adjoint matrix is nothing else than the largest eigenvalue in absolute value, and I require that this converges to the operator norm of the same polynomial applied to the limit operators. My operators live on a Hilbert space, so there I have the canonical operator norm, and hence this norm is well defined in the limit. And it turns out that very often we have a situation like this.

Maybe I should show you the basic example where you see this phenomenon, and the best example we have is the limit of independent GUE or Wigner matrices: they converge to free semicircular elements. This is what Dima will probably show you in his next lecture, and I think you have seen what free semicirculars are.

But before I show this more concretely, let me point out that there is of course a difference between the one-matrix case, which is more or less the classical situation, and the case of several matrices, which is really the non-commutative case we are finally interested in. In the one-matrix case, this notion of convergence, the convergence of all moments, means nothing else than that the classical distributions converge. This is really just convergence of probability measures, entirely in the classical realm; that I phrase it by saying X_N converges to an operator x is just a matter of language. And it means, of course, that I can look at distributions, at histograms, and usually see what happens.

In the multi-matrix case, if you say that a pair of matrices converges to a pair of operators, by definition this means the convergence of all moments. But those are non-commuting moments, because my variables do not commute; I cannot encode this collection of moments in a probability measure on R^2. So this is really something non-commutative, and one of the problems is to say what a non-commutative distribution of a tuple really is. The simplest answer is: it is the collection of moments, so we apply our state to every non-commutative polynomial. But giving more analytic structure to this is not so clear. So in the multi-matrix case it is hard to see anything directly; but you can go back to the more classical regime by applying sufficiently many functions to your operators. The convergence of (X_N, Y_N) to (x, y) means that any function, let's say any polynomial, of the pair (X_N, Y_N) converges to the same polynomial of (x, y); and a self-adjoint polynomial of the pair is again a single operator, a classical object, so again you can look at pictures.

OK, so let me make this clear. Here is the classical Wigner semicircle theorem, which everybody should know, I hope. The GUE, the classical Gaussian Unitary Ensemble, is an N by N self-adjoint random matrix where the entries are chosen independently, up to the symmetry condition, all with the same distribution; in the GUE this distribution is the normal distribution, while for Wigner matrices we can take different ones, but for this it makes no difference.
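As a quick numerical illustration of Wigner's theorem (a minimal sketch of my own, not from the talk; the matrix size and bin count are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

N = 3000
# GUE: Hermitian N x N matrix with independent Gaussian entries,
# normalized so that the spectrum concentrates on [-2, 2].
A = (np.random.randn(N, N) + 1j * np.random.randn(N, N)) / np.sqrt(2)
X = (A + A.conj().T) / np.sqrt(2 * N)

eigs = np.linalg.eigvalsh(X)          # eigenvalues of a Hermitian matrix
t = np.linspace(-2, 2, 400)
plt.hist(eigs, bins=60, density=True, alpha=0.5, label="eigenvalues of X_N")
plt.plot(t, np.sqrt(4 - t**2) / (2 * np.pi), label="semicircle density")
plt.legend()
plt.show()
```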
OK, so here is the picture. This is the classical result, what was done by Wigner in classical probability theory, where you describe the limit by the semicircular curve. But my point, which I think was already in Dima's talk, is that this limit distribution is the distribution of a very important operator, maybe the most important single operator: it is the real part of the one-sided shift. So you take the one-sided shift l, one of the most famous operators there are: in your Hilbert space you take a basis indexed by the natural numbers 0, 1, 2, 3, and so on, and the one-sided shift just shifts the basis by one, so e_0 goes to e_1, e_1 goes to e_2, e_2 goes to e_3, and so on, and you extend this linearly. The adjoint of this operator shifts backwards: e_2 goes to e_1, e_1 goes to e_0, and e_0 has nothing to go to, so it goes to 0. That is the adjoint l*. And then you take essentially the real part, x = l + l*, which is a self-adjoint operator. If we want to talk about moments, we need a state, and there is one state which is special, namely the vector state given by the special vector e_0. With respect to this vector state we can calculate the moments of x, and it turns out that the moments of x are exactly the moments of the semicircle. So in this sense the GUE converges to this operator x.

And the one-sided shift is an operator about which everybody in operator algebras has thought a lot. But calculating its distribution with respect to this state was not something anybody was interested in; it was Voiculescu who really brought this change of perspective: first of all, this operator has very simple and nice moments, and moreover it is actually the limit of the GUE. Good, so in this language Wigner's theorem says that X_N converges to this operator x; of course, in the classical theory Wigner didn't talk about x, he just said that X_N converges to the semicircle.
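For completeness, here is the moment computation behind this statement, a standard fact which the talk does not spell out: the odd moments vanish, and the even moments of l + l* in the vector state are the Catalan numbers, which are exactly the even moments of the semicircle,

\[
\langle e_0, (l+l^*)^{2k}\, e_0 \rangle \;=\; C_k \;=\; \frac{1}{k+1}\binom{2k}{k} \;=\; \frac{1}{2\pi}\int_{-2}^{2} t^{2k}\sqrt{4-t^2}\,dt .
\]

Expanding (l + l*)^{2k}, a word in l and l* contributes 1 exactly when, read as up and down steps applied to e_0, it forms a Dyck path, and C_k counts Dyck paths of length 2k.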
OK, but this is the usual convergence in distribution. Also in the classical theory there is a strengthening of this. What does convergence in distribution tell us? It tells us that the average behavior of the eigenvalues goes to the semicircle. But this does not really address single eigenvalues: each eigenvalue makes a contribution of mass 1/N, so a single eigenvalue disappears in the limit. Wigner's semicircle law would thus be compatible with there being very large eigenvalues somewhere. The largest eigenvalue of the GUE should be, in this usual normalization, at 2, the boundary of the spectrum; but this does not follow from Wigner's semicircle law. There could be an eigenvalue at 10, say, which just disappears in the limit.

OK, but one can strengthen this, and this was done in random matrix theory: the largest eigenvalue of X_N really converges to 2, and 2 is exactly the operator norm of x. So this is the strong convergence which I talked about in the beginning; we see it already in the classical context.

Good, but now the multi-matrix case, which is what we are really interested in. Let us consider not one GUE but two, and the simplest situation is to choose them independently. Then it turns out that in the limit they converge to essentially two copies of the one-sided shift from before, in different directions; this is given by creation and annihilation operators on the full Fock space, and I think Dima showed you things like this. Those are now non-commuting operators, and they generate some of the most important C*-algebras and von Neumann algebras around: if you do not take the sums, the creation operators are related to the Cuntz algebra, and the sums generate the free group factors, which are really the von Neumann algebras Voiculescu was interested in. And again there is a canonical state, the vacuum state. So Voiculescu's theorem, which Dima will prove in one of his classes, is that a pair of independent GUEs converges in distribution to this pair (x, y) of sums of creation and annihilation operators on the full Fock space. That was the basic observation of Voiculescu; another way of formulating it is that the matrices become asymptotically free, so we have freeness in the limit.

OK, so this is the basic result of Voiculescu. But as I told you, it is not clear what this convergence means directly; you cannot see it. You can see something, however, by looking at polynomials or functions. So let us take a polynomial in x and y, for example xy + yx + x^2. Convergence in distribution means in particular that this polynomial in X_N and Y_N converges to the same polynomial in x and y, and this is true for any polynomial. So I take this polynomial, plug in my Gaussian random matrices and calculate the eigenvalue distribution; here are the histograms. What I have done for the histograms is to take one realization of this situation, with random matrices of size maybe 5,000 by 5,000; so this is the histogram of the 5,000 eigenvalues of this matrix. And I compare it with the distribution of the same polynomial in the limit operators, which is this curve. In free probability we have developed methods for calculating this distribution; it is of course a non-trivial thing to really get it, but that is what we are able to do in free probability theory. (In response to a question:) No, no; without the x squared, if we had just the commutator or anti-commutator, then there are formulas, as for the free additive and multiplicative convolutions. But if you go over to more complicated polynomials, there is, I think, no hope of really getting explicit formulas; it is more an algorithm which allows us to calculate those things. Good.
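Here is a minimal simulation sketch of this histogram experiment (my own illustration, with N = 1000 instead of 5,000 to keep it fast):

```python
import numpy as np
import matplotlib.pyplot as plt

def gue(N):
    """Sample an N x N GUE matrix, normalized so the spectrum sits near [-2, 2]."""
    A = (np.random.randn(N, N) + 1j * np.random.randn(N, N)) / np.sqrt(2)
    return (A + A.conj().T) / np.sqrt(2 * N)

N = 1000
X, Y = gue(N), gue(N)           # one realization of two independent GUEs

# p(X, Y) = XY + YX + X^2 is self-adjoint, so its eigenvalues are real.
P = X @ Y + Y @ X + X @ X
plt.hist(np.linalg.eigvalsh(P), bins=60, density=True)
plt.title("eigenvalues of XY + YX + X^2, independent GUEs")
plt.show()
```

The histogram approximates the distribution of xy + yx + x^2 in the limit operators, the curve computed by the free probability methods mentioned above.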
OK, so we have this convergence in distribution. But then, as in the classical case, this picture doesn't tell us anything about the largest eigenvalue. Again, there could be an eigenvalue far outside, at 100, say, which survives sitting at 100 and does not converge close to the edge of the distribution, where it should go. But then we have this important paper of Haagerup and Thorbjørnsen, who showed that in this situation we have the same as in the classical case: the largest eigenvalue of the polynomial in my random matrices converges to the operator norm of the polynomial in x and y. So it really converges to the boundary of the limit distribution, and we have strong convergence here. And there have been more papers which do this for other situations.

OK, but what I really want to do is go further than polynomials. In the one-matrix case, convergence of moments is convergence of polynomials, but usually you want convergence for, let's say, continuous functions, and in many situations you can go from one to the other; for one random matrix, the things I said before for polynomials are also true for continuous functions. For non-commuting variables this is now a problem: everybody knows what a polynomial in non-commuting variables is, but if you want to go further, it is not so clear anymore. So what can we do? Can we go to continuous functions? I don't know; I don't even know what a continuous function in non-commuting variables is. As Tim already mentioned, there is a theory of analytic functions in non-commuting variables which is being developed at the moment, called free analysis, and there is a lot of progress on this; so in a sense there is by now a nice theory of analytic functions, but it is not yet at its end, and at the moment I don't really know what to do with it on the random matrix level. What I do know a little bit about is something in between, going a little bit further than polynomials: the next thing after polynomials is rational functions. Classically, a rational function is one polynomial divided by another. This can also be done non-commutatively; but non-commutative rational functions are much harder, and also much more interesting, than in the classical case.

So I want to tell you a little bit about non-commutative rational functions; independently of what I told you before, I think this is a very nice subject which deserves to be better known. It is a quite old subject: non-commutative rational functions were constructed by Amitsur in 1966, and then Cohn did a lot on this; he wrote several books about it, so he made quite a living out of it. This is really a part of non-commutative algebra, of ring theory. So what is a non-commutative rational function? In principle it is clear what it should be: a rational function in my non-commuting variables y_1 up to y_m, which are just formal variables that I assume do not commute, should be anything I can get from the algebraic operations, where I also allow taking inverses. So I do everything, including inverses; for example, something like this expression here.
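The expression on the slide is not reproduced in the transcript; as a hypothetical stand-in, a typical expression with nested inversions would be

\[
r(y_1, y_2) \;=\; \Bigl( y_1^{-1} + \bigl( y_2^{-1} - y_1 \bigr)^{-1} \Bigr)^{-1},
\]

where one inverts a sum of inverses; this particular expression will reappear below.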
OK, but then, of course, one has to be a little bit careful. Because, first of all, if I write down an expression like this, I could do algebraic manipulations and write it in a different form, and I should be able to decide whether two expressions are the same or not. Related to this is the question of what may be inverted: one thing which you surely cannot invert is zero, so I should not invert zero. But zero can appear in a very complicated form, where it is not clear that it is zero. For example, it is not obvious to decide that this expression here, the left-hand side, is zero. If you arrive somewhere at the left-hand side, it looks like a perfectly good expression, so you might be tempted to invert it; but because it is zero, you cannot. That much is clear; the problem is that it is hard to see. With some playing around you can prove it by algebraic manipulations, but it is not obvious.

And one of the main problems is that for rational functions there is no canonical form. In the classical case, every fraction is just one polynomial divided by another polynomial; in the non-commutative case this is not true. You see here, for example, that what we really have are nested inversions. If you add two fractions, classically you just get another fraction; that is basic, though already a nontrivial statement. But if you add inverses in the non-commutative world and invert again, you usually cannot reduce this to a single division. So there is no canonical form for writing such a thing, and that makes the whole subject a bit complicated. It is not really an easy task to construct these rational functions, but in the end all of this can be done, and was done by those people. What you get is a skew field, or division algebra; a skew field is a field which is non-commutative, and it is the same thing as a division algebra (operator algebra people, I think, say division algebra more often). This means that everything which is not 0 has an inverse. So such an object exists; it is usually called the free field, and it is denoted by this slightly strange notation, which I think goes back to Cohn.
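A classical concrete instance of such a hidden zero (my example, not necessarily the one on the slide) comes from Hua's identity: the nested stand-in expression from above collapses to a polynomial,

\[
\Bigl( y_1^{-1} + \bigl( y_2^{-1} - y_1 \bigr)^{-1} \Bigr)^{-1} \;=\; y_1 - y_1 y_2 y_1 ,
\]

so that the difference y_1 - y_1 y_2 y_1 - ( y_1^{-1} + ( y_2^{-1} - y_1 )^{-1} )^{-1} is a complicated-looking representation of zero, which one is certainly not allowed to invert; verifying the identity by direct manipulation is exactly the kind of non-obvious playing around meant here.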
And there is one thing which makes working with the free field easier, which is somehow the replacement for not having a canonical form: it turns out that you do have a canonical representation if you go over to matrices. If you work in a ring, say the ring of non-commutative polynomials, and you try to stay inside this ring and do everything there, things are very hard; but if you include in your arguments also matrices over the ring, then suddenly you have many more possibilities. This seems to be a general message, one that operator algebra people are well aware of: for a non-commutative algebra you should also consider the matrices over the algebra. If the algebra is commutative, they contain no new information; but in the non-commutative case there is a lot there which is much easier to see on the matrix level. For the non-commutative rational functions it turns out that every rational function can be written in the form u A^{-1} v, where u is a row and v a column, which essentially means that you pick one entry of the inverse of the matrix A, and where A has polynomial entries which can be chosen of degree at most one. So you need only one inversion, but on the level of matrices.

And here is the example, the expression which I had before: you can also write it in this form. This is more or less the Schur complement formula for inverting a block matrix; up to some factor, you are picking out the (1,1) entry of the inverse of this matrix. And you see that this matrix is very simple: it only contains polynomial entries, which you can take linear, affine; only degree one shows up here. So the original expression looks hard to deal with, but with such a representation you have many more possibilities to work.

This representation also exists for polynomials. For polynomials it is maybe not so clear what it is good for, but a polynomial can also be written in this form, with linear entries; this is what we call linearization. This is more or less what Haagerup and Thorbjørnsen introduced in our context, and it was really the basis of many advances we have made, for example for calculating distributions of polynomials in free variables. So Haagerup and Thorbjørnsen essentially introduced the fact that we can write a polynomial in non-commuting variables in this form, and so we had a lot of tools for working with polynomials. But then it slowly turned out that this linearization is a very old thing; of course, as I just told you, it is there in the theory of Cohn and so on, but it appears in many contexts and was rediscovered many, many times, under different names or in parallel. Maybe the oldest version I know of is what is called Higman's trick, which goes back to 1940. In our context it appeared with this paper of Haagerup and Thorbjørnsen, and Greg Anderson gave another version of it. But slowly we became aware that one can go back much further in time, and also that, instead of looking only at polynomials, we can look at rational functions; before that, we did not even know about rational functions.
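To make the mechanism explicit, here is my own worked instance of the block-inversion formula behind this (the standard Schur complement, not the slide's example): for a 2 x 2 block matrix with d invertible,

\[
\left[ \begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} \right]_{11} \;=\; \bigl( a - b\, d^{-1} c \bigr)^{-1} ,
\]

so with the degree-one matrix A = \begin{pmatrix} 0 & y_1 \\ y_2 & -1 \end{pmatrix} one gets [A^{-1}]_{11} = ( 0 - y_1 (-1)^{-1} y_2 )^{-1} = ( y_1 y_2 )^{-1}: a single inversion of an affine pencil already encodes the inverse of a product.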
OK, so now I should come back to my random matrices and operators. I have shown you these non-commutative rational functions as an algebraic object, but of course I want to think of them as functions; I want to apply them to operators. And there are a couple of issues one has to be a little bit careful about. First of all, one has to notice, and this is maybe a bit strange when you see it for the first time, that not every relation which holds in the free field holds in every algebra. In the free field you require that everything which is not zero is invertible, and this gives you something. Here is an example: in the free field you have this relation; it says that the inverse of a product is the product of the inverses in the other order, so it is a valid relation in the free field. But it is not true that it is satisfied in every algebra. Even for operators there is a very basic example, namely an isometry V which is not a unitary: if you multiply it with its adjoint in one direction you get one, but in the other direction you do not. If you plug such a V in here, the relation is not valid. And the problem is not that you are not allowed to plug it in: the inverse you have to take here is the inverse of one, so that is okay. But what you get when you plug it in is VV*, and this is different from one. So this relation, which is true in the free field, is not true in this specific algebra.

But Cohn, and this already goes back to Cohn, showed that this is more or less the only problem you can have. The problem here is that a left inverse is not a right inverse; in the free field, a left inverse is also a right inverse. If you have an algebra where this is not the case, then you cannot plug things in; and also over matrices, left inverses should be right inverses. But if you have this property, then everything is okay. The algebras with this property are called stably finite algebras, and in operator algebras they are exactly the ones which appear as the finite von Neumann algebras; in particular, if you are in a tracial von Neumann algebra context, then you are in this situation. In our situation we usually have a trace, so we are really okay: we are not dealing with type III von Neumann algebras, where we would have problems, but in type II_1 everything is fine.

Good, but then of course there are more issues. In the free field you can take the inverse of everything which is not zero; for operators this is not true anymore. So if I apply my rational function to some operators, first of all it can happen that my operators satisfy relations which are not there in the free field: I could have a rational expression which is not zero in the free field, but which gives zero when I plug in my operators. Something like this can happen. And it can of course also happen that I plug in and get something which is not zero as an operator but is still not invertible; not every operator different from zero is invertible. So I have to be a little bit careful when I plug in my operators. Possible solutions: I can plug operators only into rational expressions where all the inverses make sense for these operators, so that I stay away from inverting operators which are not invertible. But actually, if I allow unbounded operators, then I can invert more operators; so I might also consider allowing unbounded operators, and then I can plug more things into my expressions.
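Coming back to the isometry example, here is a concrete reconstruction (my guess at the relation meant, since the slide is not in the transcript): in the free field one has the valid identity y (xy)^{-1} x = 1, because there (xy)^{-1} = y^{-1} x^{-1}. Now take for V the one-sided shift l from before, which is an isometry but not a unitary, and evaluate at x = l*, y = l:

\[
xy = l^* l = 1 , \qquad y\,(xy)^{-1}\, x \;=\; l \cdot 1 \cdot l^* \;=\; l\, l^* \;\neq\; 1 ,
\]

so the only inverse to be taken is the inverse of one, which is fine, and still the relation fails: l l* kills e_0 and fixes all other basis vectors, so it is 1 minus the projection onto the span of e_0.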
Good. So let me first look at the first situation, and here you can extend the strong convergence of random matrices from polynomials to rational functions. This is a result of Sheng Yin; he is a PhD student of mine, and he is also here. So the situation is: we consider random matrices which converge strongly to operators; by definition this means that for every polynomial in the matrices the distribution, the trace, converges to the trace of the polynomial in the operators, and also the operator norms converge. That was our definition of strong convergence, and we know we have it, for example, for independent GUEs. And the statement is that this kind of convergence then remains true for rational functions. Namely, if instead of a polynomial I now consider a rational expression r, then, since I do not want to deal with inverting things which are not invertible, I only take an r such that plugging in my limit operators makes sense as a bounded operator; whenever I have to take an inverse, it should exist as the inverse of my operator. If I consider such an r, then, at least for sufficiently large N, I can also plug my random matrices into this rational function, and I get what I expect: the distribution and the operator norm of these rational functions of the random matrices converge to the corresponding limit distribution and limit operator norm.

The proof is essentially by recursion on the complexity of the expression with respect to nested inverses; essentially one only has to control what happens when taking an inverse. Because we can control the norms, since we have strong convergence, we can control taking inverses in a uniform way for the approximating matrices and the limit operators. So what we have here: if we start from strong convergence for polynomials, we can extend it to strong convergence for rational functions; if I can control the operator norm for polynomials, I can also control things for the rational functions. Of course, one could ask what happens if I can only control the distribution but not the operator norm on the level of polynomials. Then it is not so clear: I cannot really go over to the rational functions, because there might be some eigenvalues which prevent me from inverting things, and it is hard to deal with them. So going from convergence in distribution for polynomials to convergence in distribution for rational functions, without control on the operator norm, is not clear; we cannot do this. This is really a theorem about strong convergence.

And here is an example. Here is the rational function which I had before, and now I plug in again my GUEs and my limit operators. So I take independent GUEs, which I know converge strongly to x and y, the free semicircular elements, concretely realized here via two copies of the one-sided shift. And I have chosen the rational function such that I can plug in those limit operators: the spectrum of x and of y lies between minus two and plus two, and I have chosen this 4 here so that things stay away from zero; so the inverses are no problem for those operators. Then the theorem tells me that for sufficiently large N I can plug in the GUEs, draw the histograms, and this converges in distribution to the same expression in x and y. And furthermore, again, there are no separated eigenvalues: the largest eigenvalue converges to the edge of the limit distribution, the operator norm.
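Since the exact rational function from the slide is not reproduced in the transcript, here is a minimal numerical sketch with a stand-in expression having the same safety feature: the shifts by 4 keep every inverse well defined, because the spectra concentrate on [-2, 2].

```python
import numpy as np
import matplotlib.pyplot as plt
from numpy.linalg import inv, eigvalsh

def gue(N):
    """N x N GUE matrix, normalized so the spectrum sits near [-2, 2]."""
    A = (np.random.randn(N, N) + 1j * np.random.randn(N, N)) / np.sqrt(2)
    return (A + A.conj().T) / np.sqrt(2 * N)

def r(x, y):
    # Stand-in rational function with nested inverses:
    # r(x, y) = ((4 + x)^{-1} + (4 + y)^{-1})^{-1}.
    # Both resolvents are inverses of positive definite matrices, and so is
    # their sum, so every inverse here is safe for spectra near [-2, 2].
    I = np.eye(x.shape[0])
    return inv(inv(4 * I + x) + inv(4 * I + y))

N = 1000
X, Y = gue(N), gue(N)                      # independent GUEs
plt.hist(eigvalsh(r(X, Y)), bins=60, density=True)
plt.title("eigenvalues of ((4+X)^{-1} + (4+Y)^{-1})^{-1}")
plt.show()
```

By the theorem, for large N this histogram approximates the distribution of r(x, y), and the largest eigenvalue approximates its operator norm.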