expression of these: Omega and Omega inverse are somehow part of this, even though it's not necessarily jumping out at you. And you have corresponding terms, the derivatives of the polynomials: you see here you have p_i x(z)^{p_i - 1}, which is, in fact, the derivative of the polynomial x^{p_i}. And the same thing happens here. Then you have this interesting-looking function, you take some derivatives, and you integrate. Interestingly, you integrate over semicircles: if you look at what these objects are, they are semicircles. Is that clear? You have the imaginary part of the variable being positive and you're on a circle, so it's a semicircle. It's not trivial to see how computing covariances by the method of moments, because that's what we'll be using, gets you to this expression. You essentially have to employ generating functions and do a little bit of complex integration; that part is not particularly hard. But you either have to have an idea of what you're looking for, or you have to have a spark of genius to actually come up with this expression. For either one of those, I refer you to one of the organizers of the summer school, Alexei Borodin, because he came up with it; I'm just explaining it.

All right. So this is what the covariance structure is going to be. Now let me say this first, then define the quantities, and then come back to it: this is essentially the same as saying that the height function process, defined by the height function I mentioned, converges to the pullback of the Gaussian free field under the map Omega I mentioned before. Of course, these two statements might sound like a bunch of nonsense, which is why I'm going to define the quantities involved. But I wanted to point out how a simple expression (okay, simple is in the eye of the beholder), and in particular the logarithmic term you see here, leads to the Gaussian free field.

And what the heck is this Gaussian free field? The Gaussian free field is a random distribution. We generally define it on a domain U in the complex plane, as a distribution on smooth functions, in the following manner: you give me a smooth function, and I give you a Gaussian variable that depends on this smooth function, in such a way that I can compute the covariance of any two of these variables. For smooth functions g1 and g2, this covariance is the integral over U x U of g1(z) g2(w) times the Green's kernel on U for the Laplacian with Dirichlet boundary conditions, so zero boundary. And in particular, for the upper half plane, this Green's kernel looks like this; it should ring some bells when you see something like that. So that's how you define the Gaussian free field: a random distribution which, when you input a smooth function, gives you a centered Gaussian variable, with the covariance of any two of these variables computable in terms of the Green's kernel on the domain.
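To pin down what was just described, here is the covariance formula in symbols, with Xi denoting the field (my notation) and with the upper-half-plane Green's kernel written out; the minus one over two pi normalization is the one quoted a bit further on, and the exact form of the half-plane kernel is the standard one, so read this as a presumed reconstruction of what is on the board:

$$\operatorname{Cov}\big(\Xi(g_1),\,\Xi(g_2)\big)=\int_{U\times U} g_1(z)\,g_2(w)\,G_U(z,w)\,dz\,dw,\qquad G_{\mathbb H}(z,w)=-\frac{1}{2\pi}\,\log\left|\frac{z-w}{z-\bar w}\right|,$$

where G_U is the Green's kernel for the Laplacian on U with Dirichlet boundary conditions and the second formula is the kernel for U equal to the upper half plane.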
It's an interesting fact, and you might go "aha" when you see the problems in the problem session later today, that another way to think of this GFF is the following. Given the Dirichlet inner product on smooth functions on U, defined via the gradient, the variance of the centered Gaussian variable that you get from the Gaussian free field when you plug in a function f is 2 pi times the Dirichlet inner product of f with itself. You can show this essentially by integration by parts: you get that the integral can be written as minus the integral of f times the Laplacian of g, or minus g times the Laplacian of f, et cetera. The important thing is that two or three lines of computation give you this. Then you can use the polarization identity, which, let me remind you, takes you from the values of an inner product on the diagonal to the inner product itself. (Yes, you're completely right: this should be the Gaussian free field applied to f, not to g. Thank you.) So the polarization identity then gives you that the covariance between two different Gaussians is also given by the inner product.

Now I'll define this notion of pullback. Take gamma to be a conformal bijection from a domain D to the upper half plane. The composition of the Gaussian free field with this mapping gamma is a Gaussian free field on D, with covariance given by the pulled-back kernel. And integrals of this field with respect to measures d mu can be obtained from integrals of the Gaussian free field on the half plane with respect to the measures gamma(mu); we call gamma(mu) the pushforward of the measure mu. In our case, we'll be interested in gamma being the mapping that we talked about, the one that essentially takes semicircles in the half plane to lines in the half plane. (To answer the question: it is a Gaussian free field on D, not a "generalized" one, and note that it's the pullback that gives you the Gaussian free field on D, not the pushforward. The kernel is the Green's kernel, minus one over two pi times the logarithm, as before.)

Now, going back to the remark I made about the height function, the Gaussian free field, and the pullback, what I mean by that is essentially this. For any polynomial f, and actually you don't have to stop at polynomials, you can take functions in a certain Sobolev space: the integral of f(x) H(x,y) dx, with H(x,y) the height function I defined previously, is going to be the integral, over z in the upper half plane on the semicircle, of f composed with Omega of z, dz. That's what the height function process is defined as, in the limit as L goes to infinity. But this is, if you want to think of it that way, a bunch of abstract nonsense. So I'm going to go back to the theorem on the board, which gives the covariance of this process, the linear statistics of the k polynomials on the k overlapping minors, and explain how you can get the covariance of this thing. It looks unusual, and it only looks like this after you've done a lot of work to bring it to this form.
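Before the combinatorics, here is a compact restatement of the two facts just used, in the same notation Xi for the field; the constant 2 pi is the one quoted above, and the polarization step is the standard identity:

$$\operatorname{Var}\big(\Xi(f)\big)=2\pi\,\langle f,f\rangle_\nabla,\qquad \operatorname{Cov}\big(\Xi(f),\Xi(g)\big)=\tfrac14\Big(\operatorname{Var}\big(\Xi(f+g)\big)-\operatorname{Var}\big(\Xi(f-g)\big)\Big)=2\pi\,\langle f,g\rangle_\nabla,$$

and, for a conformal bijection gamma from D to the upper half plane, the pullback field Xi composed with gamma on D has covariance kernel G_H(gamma(z), gamma(w)), with integrals over D against a measure mu matching integrals over the half plane against the pushforward gamma(mu).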
But I can give you an idea, based on the computations about the covariance for a single matrix that you hopefully did yesterday in the problem session, of why this might be true. So here's the combinatorics part of the program. Suppose I have two of these overlapping minors, W1 and W2, and write W1-bar = (1/sqrt(l)) W1; the same will be true for W2. I will look at the covariance of these two objects: the trace of W1-bar to the p1 (let me use the same notation as before), minus its expectation. That's object one, and object two is similar but involves W2-bar and p2. Call these X_{p1}(W1-bar) and X_{p2}(W2-bar). And remember that when I compute the covariance of these two, it is going to be, as before, a sum of covariances of words, Cov(w_i, w_j); remember that we wrote this before. The only difference here is that the words will correspond to different matrices: w_i is going to be in matrix one and w_j is going to be in matrix two. So what can you say right away? If w_i and w_j do not overlap, precisely on entries that belong to the overlap of the two matrices, then that term will be zero.

Let me write that down. We shall have that this covariance is (let me write big, because I know you cannot see in the back) 1 over l to the (p1 + p2)/2, times a sum over i and j, where i is a p1-tuple of indices in W1 and j is a p2-tuple of indices in W2, of the covariance Cov(w_i, w_j). So now the indices really belong to different matrices, but the variables in the words can be the same, by virtue of the fact that W1 and W2 overlap. And in fact, if the two words w_i and w_j have no overlapping variables, then w_i and w_j are independent and the covariance of such a term is zero.

So all the calculations that we did yesterday, and that you did in your problem session, where you were taking graphs corresponding to these two words and trying to overlap them, and only managing to do it under certain restrictions, like the fact that p1 + p2 has to be even, and the like: all of that will still be true, and the individual counts for w_i and w_j will still be true. The only things that are going to be slightly different are the weights. When you divide like this, the powers of the matrix sizes that appear will look like n_i to the p_i and n_j to the p_j, except it's actually a little bit more subtle: when you pick variables in the overlap, you have to count not by n_i and not by n_j but by n_{ij}. So in the end you get an expression similar to the one you probably saw yesterday in the problem session, but a little bit more complicated: it will have B_i to various powers, B_j to various powers, C_{ij} to various powers. And you can manipulate that afterwards to get to this expression here. I recommend reading Alexei Borodin's paper on what he calls the CLT for Wigner minors; essentially what it does is define and establish this Gaussian free field connection through the height function.
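This is not the moment computation itself, but here is a quick Monte Carlo sanity check of the setup just described: two overlapping principal minors of one big symmetric Gaussian matrix, traces normalized by sqrt(l), and an empirical covariance of the centered traces. A minimal sketch; all sizes, the Gaussian entries, and the powers p1 = p2 = 2 are illustrative assumptions:

```python
import numpy as np

# Quick Monte Carlo sanity check: covariance of centered traces of
# powers of two overlapping Wigner minors.  The two minors are
# principal submatrices of one symmetric Gaussian matrix, each
# normalized by sqrt(l); all sizes and powers are illustrative.

rng = np.random.default_rng(0)

n1, n2, overlap = 120, 100, 60   # minor sizes and size of the common block
p1, p2 = 2, 2                    # powers in the two traces
n = n1 + n2 - overlap            # size of the ambient matrix
l = n                            # scale parameter used in the normalization
trials = 500

def sample_pair():
    a = rng.standard_normal((n, n))
    w = (a + a.T) / np.sqrt(2)                         # real symmetric Wigner matrix
    w1 = w[:n1, :n1] / np.sqrt(l)                      # first minor
    w2 = w[n1 - overlap:, n1 - overlap:] / np.sqrt(l)  # second minor, overlapping the first
    return (np.trace(np.linalg.matrix_power(w1, p1)),
            np.trace(np.linalg.matrix_power(w2, p2)))

t = np.array([sample_pair() for _ in range(trials)])
cov = np.mean((t[:, 0] - t[:, 0].mean()) * (t[:, 1] - t[:, 1].mean()))
print(f"empirical covariance of the two centered traces: {cov:.4f}")
```

Words whose variables avoid the overlap block contribute nothing, which is why shrinking `overlap` toward zero drives the printed covariance to zero.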
Since I don't have much time left and I've covered this already, I want to talk a little bit about extensions. So far, in what I said and what I showed you (granted, I haven't shown you too much), when you do this moment calculation you have to implicitly assume that you have bounded moments of all orders, et cetera. That condition can be relaxed via a truncation argument like the one we did in lecture two, where we showed that the semicircle law is true not just when you assume moments of all orders but when you assume very little (moments of order two, in that case): for a set of variables on which you don't want to make assumptions, you construct a set of coupled variables for which you can make assumptions, prove the result for those, and then show that the differences are negligible. The same truncation idea gives you here that you can extend to any distributions that have 4 + epsilon moments, for any epsilon > 0; anything slightly more than fourth-order moments will do. That's one extension.

Second, instead of working with polynomials, you can actually work with functions in a Sobolev space. So you need a certain degree of smoothness, but not too much, and certainly not polynomial smoothness.

And you can take a step toward true two-dimensionality (this is for people who already know what the Gaussian free field is and who are interested in this topic) by not doing this just for functions of one variable, but for cross-products of functions with probability measures with compact support. The way you define that is very simple: you define things by multiplication, so it's f(x) rho(y). So it's not general, sadly. But it is more two-dimensional, because you'll have two coordinates over which you can define this height function process, which converges to the Gaussian free field. So the remaining question, and our open question, is: can you extend that to any kind of function of two variables, and how would you do it? This extension is done, for the Wishart case, in a paper that I have with Elliot Paquette, but it can be done in a very similar way for the Wigner case. And we'd be very interested to know if you can actually go all the way and define a truly two-dimensional object.

The Gaussian free field has been found to appear in other settings, not just in this setting of Wigner matrices. It also appears for Wishart matrices, which are the companion to the Wigner matrix, if you want: there are these two (well, I guess three, really, but two of them are closer to each other than to the third) types of matrices. The Wishart matrices come from sample covariance, and the analysis can be done for them. It can also be done for Jacobi matrices, which are yet a third companion to the Wigner and Wishart cases; that's just an epsilon farther away. And it can be done in settings as unusual as regular graphs. For a regular graph, you can define an adjacency matrix; it's going to be symmetric, and it's going to have many of the properties of Wigner matrices, but definitely not all. But you can still talk about things like fluctuations around the limiting empirical spectral distribution, and therefore you can still talk about centered linear statistics.
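To make the regular-graph setting concrete, here is a small numerical sketch of a centered linear statistic there; the degree d, the graph size, the normalization by sqrt(d - 1) (under which the bulk spectrum follows the Kesten-McKay law on [-2, 2]), and the cubic test function are all illustrative assumptions:

```python
import numpy as np
import networkx as nx

# Centered linear statistics for the adjacency matrix of a random
# d-regular graph.  Dividing by sqrt(d - 1) puts the bulk spectrum
# on [-2, 2]; d, n, the number of samples, and the test function
# x**3 are illustrative choices.

d, n, samples = 4, 500, 100

stats = []
for seed in range(samples):
    g = nx.random_regular_graph(d, n, seed=seed)
    a = nx.to_numpy_array(g) / np.sqrt(d - 1)   # normalized adjacency matrix
    eigs = np.linalg.eigvalsh(a)
    stats.append(np.sum(eigs ** 3))             # linear statistic: sum of f(lambda_i)
stats = np.array(stats)
centered = stats - stats.mean()                 # centered linear statistic
print("empirical variance of the centered statistic:", centered.var())
```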
Even in that setting, you will find a Gaussian free field connection. So this Gaussian free field has been identified in a good number of settings. What about beta ensembles? In the case of Jacobi matrices, Borodin and Gorin have looked at the general-beta case, beta > 0 (this is just for the experts in the audience), and have been able to show the connection with the Gaussian free field there for any beta. But it looks like in beta-Laguerre, which would be the extension of Wishart, the process that you get is not the same: it's not the same as the Wishart process, even for the classical values beta = 1, 2, and 4. So there's something going on there that we don't really understand, and it would be nice if somebody could find an explanation for this phenomenon.

I think this is where I'll stop; I'm afraid I might have gone over a little bit. I just want to say that I've greatly enjoyed lecturing to you, and I hope you stuck with it through all four lectures. I'm going to be here next week, and I'm happy to talk with you about more of these beautiful things. So thank you; that's it for our course. Although I should say: do you have questions? Because I see that there's no one here to chair.

Yes: so this is all true for non-Gaussian entries. All you have to assume is the matching of the moments: the entries are centered, the variances match, and the fourth moments match. Beyond that, the distribution can be anything. And you need these conditions, because otherwise you're not going to get this Gaussian free field; you get other terms, and it's provable that you're not going to get it. So it's more general than the Gaussian case, but you have these moment-matching constraints. Any other questions? Okay, then I guess we'll reconvene here in a few minutes with a research talk.
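For anyone who wants to experiment with the beta-Laguerre discrepancy just mentioned, here is a sketch of an empirical comparison across beta = 1, 2, 4, using the standard bidiagonal matrix model for beta-Laguerre ensembles (the Dumitriu-Edelman construction, as I recall it); n, a, the number of samples, and the quadratic test statistic are illustrative assumptions:

```python
import numpy as np

# Empirical look at linear-statistic fluctuations for beta-Laguerre
# ensembles, sampled via the bidiagonal model L = B B^T with
# independent chi-distributed entries.  Comparing beta = 1, 2, 4
# is exactly the comparison discussed above; all parameters are
# illustrative.

rng = np.random.default_rng(2)

def beta_laguerre_eigs(n, a, beta):
    # Diagonal of B: chi with 2a - beta*i degrees of freedom, i = 0..n-1.
    diag = np.sqrt(rng.chisquare(2 * a - beta * np.arange(n)))
    # Subdiagonal of B: chi with beta*(n-1-i) degrees of freedom.
    sub = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1)))
    B = np.diag(diag) + np.diag(sub, -1)
    return np.linalg.eigvalsh(B @ B.T)

n, a = 200, 450.0   # need 2a > beta*(n-1) so every chi parameter is positive
for beta in (1.0, 2.0, 4.0):
    stats = np.array([np.sum(beta_laguerre_eigs(n, a, beta) ** 2)
                      for _ in range(100)])
    print(f"beta = {beta}: fluctuation variance of sum(eig^2) = {stats.var():.3e}")
```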