It really converges to what it should converge to, so we also have convergence in the operator norm. Okay, so everything works very nicely on this level. Good. But then maybe I want to say a few words about unbounded operators. These are more open questions, things I'm trying to understand at the moment, so the situation is not so clear. The idea is: if I allow unbounded operators, I can invert more operators. So the hope is that by going to unbounded operators, I can maybe apply many more rational functions to my free semicirculars. Now, unbounded operators in general are not so nice. They are not defined everywhere, they only have a dense domain, and even adding two unbounded operators can be a problem, because the intersection of the domains might be very small. So in general, unbounded operators don't form an algebra, and it's not clear how to deal with them if you want to plug them into some expressions. But in our situation we are much better off. Since we have a trace, we are in a type II₁ von Neumann algebra context, and then it is one of the old results of Murray and von Neumann that the unbounded operators which are affiliated to my von Neumann algebra — essentially the unbounded functions of the operators in my von Neumann algebra — form a *-algebra. So they always have big enough domains that I can add them and multiply them. These things are really very nice, and furthermore, the problem of invertibility is really the problem of zero divisors; it's really the problem of the kernel.
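The Murray–von Neumann statement can be written out as follows (a sketch in my own notation; the talk only states it verbally):

```latex
% (M, \tau): a von Neumann algebra with a faithful normal trace (type II_1 setting);
% Aff(M): the closed, densely defined operators affiliated with M.
\[
  a,\, b \in \operatorname{Aff}(M)
  \;\Longrightarrow\;
  \overline{a + b},\;\; \overline{a\,b} \;\in\; \operatorname{Aff}(M),
\]
% so, with the closures of sum and product as the algebraic operations,
% Aff(M) is a *-algebra containing M. Moreover, in this tracial setting,
% invertibility in Aff(M) reduces to the kernel:
\[
  a \in \operatorname{Aff}(M) \text{ is invertible in } \operatorname{Aff}(M)
  \;\Longleftrightarrow\;
  \ker a = \{0\}.
\]
```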
In the finite-dimensional case, bijective is the same as injective plus surjective, and this usually breaks down in the infinite-dimensional situation; but in this setting it again holds that for invertibility it is essentially enough to check injectivity. So the question is whether I have a zero divisor, which essentially means that my operator has a kernel: if I can find something nonzero in my von Neumann algebra which multiplies with my operator to zero, then I cannot invert it. If I have a zero divisor, then I cannot invert; if I don't have a zero divisor, then it's invertible. And so the hope I have — which I don't know how to prove — is that if I allow unbounded operators, then maybe all my rational functions in my nice operators make sense, so they are well-defined; and of course the main problem is whether zero divisors appear or not. If I have zero divisors, then of course I cannot invert; if I never have zero divisors, I can invert everything. So somehow I'm hoping that any rational expression makes sense when applied to nice operators — maybe to free semicirculars, the nicest things we have, or maybe to more general limit operators of nice random matrix models. Up to now I only talked about the limit of independent GUE, but of course we want to understand more general limits of random matrices. "Nice" here should mean operators which have maximal free entropy dimension — that's at least what we expect. Maybe Dima will talk more about free entropy and those things; the idea is that maximal free entropy dimension should describe a limit situation where we can expect some regularity and nice behaviour.
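To illustrate what "plugging unbounded operators into rational expressions" requires (my own example, not one from the talk): a non-commutative rational expression is built from nested inversions, and evaluating it in the affiliated algebra needs each inversion step to succeed.

```latex
% Example rational expression in two variables:
\[
  r(X_1, X_2) \;=\; \bigl( X_1 + X_2\, X_1^{-1}\, X_2 \bigr)^{-1}.
\]
% Evaluating r at a tuple (s_1, s_2) in Aff(M) makes sense precisely when
% each element to be inverted along the way is not a zero divisor:
\[
  \ker(s_1) = \{0\}
  \qquad\text{and}\qquad
  \ker\bigl( s_1 + s_2\, s_1^{-1}\, s_2 \bigr) = \{0\}.
\]
```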
Okay, and we know a few things. There is a paper by Dima and Ian Charlesworth, and also one by Tobias Mai, Moritz Weber and myself, showing that at least for polynomials we don't have zero divisors. In particular, for free semicirculars this follows from another paper, by Shlyakhtenko and Skoufranis, and it also holds in the more general situation of maximal free entropy dimension. So for polynomials we can show this: if I have a polynomial which is not zero, I can invert it as an unbounded operator, because it doesn't have a zero divisor. But then of course I would like to go on — add another polynomial to this inverse and then maybe invert again — and then I would have to know whether this new element also has no zero divisors, and that is something which at the moment is not so clear. So what do we expect of nice operators? We should have nice random matrices which converge in the limit to nice operators, and I don't really know what "nice operators" means, but maybe we can take as a working definition that they have maximal free entropy dimension. This doesn't tell you much at the moment, but Dima will maybe talk more about these free entropies. What I would expect is that these limit operators should be without algebraic relations, in a very general sense. First of all, they should have no polynomial relations, so they should be algebraically free — but that's quite easy. Then they should also have no local polynomial relations, which is to say no zero divisors: if I restrict the operators to a subspace of the Hilbert space, then also there they shouldn't have relations.
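In formula form, the polynomial result being described reads as follows (a sketch, as I read the cited papers):

```latex
% s_1, ..., s_n: free semicirculars (or, more generally, a tuple with
% maximal free entropy dimension) generating (M, \tau).
\[
  0 \neq p \in \mathbb{C}\langle X_1, \dots, X_n \rangle
  \;\Longrightarrow\;
  \ker p(s_1, \dots, s_n) = \{0\},
\]
% so p(s_1, ..., s_n) has no zero divisors and is therefore invertible
% as an unbounded operator in Aff(M).
```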
For polynomials we know things like this, but then I would also expect that we have no rational relations, and also no local rational relations. This seems to be a stronger requirement, and first of all one should be clear that it really is stronger: the fact that I have no polynomial relations does not imply that I have no rational relations. If I take the polynomials in non-commuting variables, I can embed them in many skew fields, in many division algebras. The free field is the universal field of fractions: there are many fields of fractions for my polynomials, which means I can have additional rational relations which don't contradict the absence of polynomial relations. The free field is the universal one, which doesn't introduce any new rational relation beyond those forced by the arithmetic manipulations; every other field of fractions is a localization of it, which means it is given by imposing additional relations. And why do we expect no rational relations for nice operators? I don't know whether this is a good argument, but it's what I'm trying to think of. If I want to decide whether a rational expression is zero or not, one way of deciding it is to just plug in matrices of all sizes. That's another possibility for dealing with rational functions: a rational expression is zero exactly when it is zero whenever I plug in matrices — but matrices of all sizes, so I have to allow arbitrarily big matrices. So I can use matrices for testing whether I have zero or not. And our nice operators are the ones with maximal free entropy dimension.
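The matrix-evaluation test alluded to here is, as I understand it, Amitsur's theorem on rational identities (my formulation, not the speaker's):

```latex
% For a non-commutative rational expression r in n variables:
\[
  r = 0 \text{ in the free field}
  \;\Longleftrightarrow\;
  r(A_1, \dots, A_n) = 0
  \text{ for all } N \text{ and all } (A_1, \dots, A_n) \in M_N(\mathbb{C})^n
  \text{ in the domain of } r.
\]
% Crucially, all matrix sizes N must be allowed: any fixed size N only
% detects the rational identities of M_N(C), which are strictly more
% relations than those of the free field.
```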
This means they can be approximated by matrix tuples — by many matrix tuples. If I have a nice random matrix ensemble, I have many sequences of tuples which converge to my limiting object, so in some sense what I get in the limit should behave like a generic matrix tuple of arbitrary size. It should be able to check what matrices of all sizes can check, namely whether a rational expression is zero or not. Okay, this is not a very precise argument, but that's the idea of why I expect no rational relations in the limit. So the question I have in this context is: how much regularity of the distribution of my limit operators is necessary so that the division closure of my operators is isomorphic to the free field? The division closure means I take the polynomials in my operators and then add inverses whenever possible — it is the smallest algebra which contains my polynomials and which is closed under taking inverses whenever they exist, which is whenever there are no zero divisors. Okay, and what do we know about this? There is one situation where we really know that this division closure is the free field, and this goes back to the work of Linnell. He looked at the zero divisor problem in the context of groups, and in particular he settled it for the free group; but he showed it by embedding things into the unbounded operators. What he really showed there is that free Haar unitaries generate the free field. And these free Haar unitary elements are again limits of the other interesting basic random matrices, namely of independent Haar unitary matrices. So the limit of independent Haar unitary matrices really generates the free field.
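The division closure can be pinned down like this (a sketch in my notation):

```latex
% For operators s_1, ..., s_n generating a tracial von Neumann algebra
% (M, \tau), the division closure is the smallest subalgebra
% D \subseteq Aff(M) such that
\[
  \mathbb{C}\langle s_1, \dots, s_n \rangle \subseteq D
  \qquad\text{and}\qquad
  a \in D,\; a \text{ invertible in } \operatorname{Aff}(M)
  \;\Longrightarrow\; a^{-1} \in D.
\]
% Linnell's result, in this language: for free Haar unitaries
% u_1, ..., u_n (the generators of the free group factor L(F_n)),
% this division closure is isomorphic to the free field, i.e. the
% universal skew field of fractions of C<x_1, ..., x_n>.
```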
Then for the situation of the GUE limit, for example: say I have elements which are free and the distribution of each of them has no atoms — in particular free semicirculars. Then it essentially follows from the paper of Dima and Paul Skoufranis on the Atiyah property that this division closure is a skew field. So it has no zero divisors; but whether it is the free field is not so clear at the moment. This is not explicitly stated in their paper, but it follows more or less by similar ideas; Dima, James Pascoe and Sheng Yin are working on this and putting it into a nice theory. So there we know we have a skew field — we can embed into a skew field, which tells us that we don't have zero divisors — but it could still be that we have a rational identity, that something is zero which shouldn't be zero in the free field. At the moment we cannot exclude this. I'm working on this, and I find it quite unlikely, so I would expect it to be the free field. And of course there are examples where we have non-trivial rational relations but no zero divisors. For example, if you take two free semicirculars x1 and x2, then you can build three new elements which are algebraically free — they generate the polynomials in three free variables — but which satisfy a rational relation that does not hold in the free field. So this is a situation where the division closure is a skew field but clearly not the free field; such situations do exist. Okay, so these are situations — the first two — where we have some kind of result, and where we have freeness between our operators. But what I think the general nice case should be is something like maximal free entropy dimension.
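A standard example of this phenomenon — plausibly the one meant here, though the transcript doesn't record the blackboard, so take the specific elements as my reconstruction — built from two free semicirculars $x_1, x_2$:

```latex
\[
  a_1 = x_1, \qquad a_2 = x_1 x_2, \qquad a_3 = x_1 x_2^2 .
\]
% a_1, a_2, a_3 satisfy no polynomial relation: they generate a copy of
% the free algebra on three generators. But they satisfy the rational
% relation
\[
  a_2\, a_1^{-1}\, a_2
  \;=\; (x_1 x_2)\, x_1^{-1}\, (x_1 x_2)
  \;=\; x_1 x_2^2
  \;=\; a_3,
\]
% whereas in the free field on three free generators y_1, y_2, y_3 one
% has y_2 y_1^{-1} y_2 \neq y_3. So the division closure of a_1, a_2, a_3
% is a skew field, but not the free field.
```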
And the question is whether there the division closure is isomorphic to the free field, or at least whether we have no zero divisors. Those are things which at the moment we don't know. Sheng Yin and Tobias Mai are working on this, and one has to see. I still have hope that maybe we can show that it's isomorphic not just to a skew field but to the free field, but at the moment this is not yet clear. Okay, so this was the end — more questions than answers for the unbounded operators — but I hope I have given you some idea of what we are doing in free probability and how we relate random matrices and operators in the limit. There are a lot of interesting new structures around, like non-commutative rational functions, which I think are worth investigating. And maybe let me end by making some advertising for the new book by Jamie Mingo and myself on free probability and random matrices, which has a lot on this relation between random matrices, free probability and operator algebras, and which just came out. It should now be around, at least electronically; you can also find a copy on my homepage. Good, okay, thank you very much.

Yeah, if you take combinations you can go to other places, and if you go to one-over then you get the minimum. If there are gaps in the spectrum then I'm not sure whether you can also get to those, but you should be able to — this is true for all polynomials, so you have a lot of freedom for playing around with them. And the rational case, that is maybe why — very briefly. The book finished too early, but actually it started 10 years ago, so there was time to finish it. So you mean those guys? What about them? Here I start from free semicirculars, so that I already know by this result that everything is embedded into a skew field, and hence that I don't have zero divisors for them. Okay, and the paper of Linnell is quite complicated.
He embeds this into a division algebra, which would give essentially this result; but then — because he's in the group case, this is really about the free group, so there is more structure around — he uses some other properties which in the end really show this. There is a Hughes-free property in this context which he uses, and I don't know how to extend that to our situation. Yeah, this would be a probability measure — a non-commutative probability measure — but I don't know what that is for non-commuting variables. For the non-commutative distribution we have analytic descriptions — these Cauchy transforms which Dima mentioned — so there are analytic functions which describe them. But the probability measure which we have in the classical context, which is maybe the nicest thing to think about in probability — that we don't have, no indicator functions and so on; I don't know.