And then you do the exact same thing for the other walk. There will be no overlap on the trees; the only possible overlap is on the cycle, because remember, you already use each edge in a tree twice, you have nothing left over, and an overlap there would force that edge to be used four times. So that means that once you have decided on a partition of the number (k - r)/2 for w_i and on one for w_j, you have to choose the trees from among all trees with the prescribed sizes. They have to be rooted, because you care where, or rather how, you place them on the cycle. And the number of unlabeled rooted trees of size k_i is the Catalan number C_{k_i}. So once you have decided on the partitions, the total number of combinations is given by the product C_{k_1} ... C_{k_r}, once for w_i and once for w_j, so it is actually a square. Moreover, you will be able to pick the labels on these graphs in n^k ways, because the graph has k vertices. And given a cycle-with-dangling-trees graph, properly partitioned, there are a total of k^2 ways of choosing starting vertices: remember that we are looking at the pair w_i, w_j; encoded in w_i is the vertex where you start doing your walk, which has to be one of the k available vertices, and it is the same thing for w_j, so that gives you k^2 ways of choosing starting vertices. You also have two choices for how you walk on these things: you could start walking this way or that way, and they give you different terms, different pairs w_i, w_j. But you also have to take into account the rotational invariance of the cycle: you get the same picture just by rotating the cycle, so there is an overcount. The first two factors are multiplicative, and this last one is a dividing factor. So overall, the conclusion is that, summing over all possible cycle lengths r, you get 2k^2/r times the sum over all partitions of (k - r)/2 of the product C_{k_1} C_{k_2} ... C_{k_r}, and you square that sum because you need one partition for w_i and one for w_j, and since they are chosen independently, things multiply. That is the number of pairs, together with the number of ways of labeling the vertices in the graph, that give you non-trivial, non-negligible terms. All right, so that is the count for when the graph has a cycle. The reason the n^k disappears is that we divide by n^k when we take the product for W-bar: remember, W-bar is the scaled matrix, so each walk of length k carries a factor of n^{-k/2}, the pair carries n^{-k}, and that cancels the n^k choices of labels. How can r be odd? Why wouldn't it be odd? Oh, good point, okay. I just wrote the sum that way; if a term is not possible, then it disappears. Right: if (k - r)/2 is not an integer, then you cannot partition it into integers, so that term is zero. So yes, it is a good point: you actually only want r's that are even here. Maybe I should have started the sum from four on, but there is a certain parallel that you will see with the odd case, so I would rather keep it like that; just take the convention that when you cannot do something, that is a zero term.
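In one formula, the count just described amounts to the following (this is my own compact transcription of the factors on the board, with C_m denoting the m-th Catalan number): a cycle of length r contributes

\[
\frac{2k^{2}}{r}\Bigg(\sum_{\substack{k_1+\cdots+k_r=(k-r)/2 \\ k_i\ge 0}} C_{k_1}C_{k_2}\cdots C_{k_r}\Bigg)^{2},
\]

where the k^2 counts the starting vertices of the two walks, the 2 counts the orientation, dividing by r removes the rotational overcount, and the inner sum is squared because the dangling trees for w_i and for w_j are chosen independently. When (k - r)/2 is not a non-negative integer, the inner sum is empty and the term is zero, which is exactly the convention just mentioned.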
Oh, sorry, I actually wanted to make one more point here. Since each edge is walked on twice, when you take the expectation, that variable appears squared, and the expected value of the square of one of our variables is always one, because that is how the Wigner ensemble is normalized. So everything here has weight one. Okay, so that is the count for the terms that come from graphs with a cycle. But what if the graph is a tree? Recall that was the other possibility: either the graph has a cycle, in which case every edge appears exactly twice, or the graph is a tree, in which case almost everything appears twice, with the exception of one edge, which appears four times. Okay, so if it is a tree, that means that the graphs for w_i and w_j are both trees, and what you have done is just overlap them on one edge; that is all. If they are trees, then (oops, sorry, wrong button) there are C_{k/2} possible trees that you can choose among for each walk, and then you have to pick the edge to overlap on: there are k/2 total edges in each tree and you decide which edge overlaps, so that gives you (k/2)^2. But you also have two choices of orientation for how you do your walk, in that the edge you decide to overlap could appear in one direction for the first walk and the other direction for the second, or in the same direction for both; so that gives you another factor of two. So that is a much easier count. And each choice comes with this weight, and notice that this is the first time we have seen a power of the variable other than two appearing. Before, the powers were all two, and that gave weight one. But here we have the fourth power, and the fourth power appears because that edge is actually used four times. Every other edge is walked on twice, so in expectation it contributes E[w_{12}^2] = 1, but this edge contributes E[w_{12}^4]. Then we subtract one, because on the other side of the covariance that edge only contributes the square of E[w_{12}^2], which is one. So overall the weight for that case is E[w_{12}^4] - 1. So, to assemble things for the case when k is even, this is the variance right here. One piece comes from terms that correspond to a single tree with one edge walked on four times and everything else walked on twice, and the other pieces come from terms that correspond to one cycle with dangling trees in w_i, the same cycle with some other dangling trees in w_j, and overlap on the cycle. This is the variance for k even.
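Assembling the two kinds of contributions, my reading of the counts above (a sketch of how the factors combine, not a formula quoted from the slides; writing the statistic as Tr \bar W^k is my notation) is that the limiting variance for k even is

\[
2\Big(\frac{k}{2}\Big)^{2} C_{k/2}^{2}\,\big(\mathbb{E}[w_{12}^{4}]-1\big)\;+\;\sum_{\substack{r\ \text{even}\\ 4\le r\le k}} \frac{2k^{2}}{r}\Bigg(\sum_{k_1+\cdots+k_r=(k-r)/2} C_{k_1}\cdots C_{k_r}\Bigg)^{2},
\]

with the first term coming from the overlapped-tree picture and the sum from the cycles with dangling trees. As a quick sanity check, for k = 2 only the first term survives and gives 2(E[w_{12}^4] - 1), which matches a direct computation of the variance of Tr \bar W^2 = n^{-1}\sum_{i,j} w_{ij}^2.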
Actually, let me see how much time I have. Okay. So let's do the same calculation for k odd. I am going to go a little bit faster, because now you have seen how things work and I am guessing you understand what is going on. These are again, at first glance, all the possibilities, and just as we did before, let's figure out which of them are actually not possibilities. First of all, the last one is again not going to be a possibility; in fact, this never happens, because if the graph is a tree, then w_i and w_j are both trees, and you would be doing a closed walk on a tree that has an edge walked on only once or three times. It is the same impossibility as before; it does not matter how many edges you have in total or what the parity is. You just cannot do a closed walk on a tree that visits all the vertices and has one edge walked on only once or three times. So that is not actually a possibility. And this one is also not a possibility, because remember, it comes from the case where both w_i and w_j represent closed walks on trees in which each edge is walked on exactly two times, but then each walk would have even length, which contradicts k being odd. So that is not a possibility either. So the only possibilities we have to examine are the case of a cycle and the case of a loop. The case of a cycle is identical to before: it is the exact same count and the exact same discussion, and of course now, instead of summing over even cycle lengths, you sum only over odd cycle lengths. But again, it is okay to express it the same way, because if you cannot do a partition of (k - r)/2, then that term just does not appear. So that term we know and are familiar with. Now let's look at the last remaining case, which we previously dismissed as impossible, and that was the case of a loop. So now k is odd, and what kind of graphs can we have? It is actually easy: we are going to have a tree that gets walked on in a closed manner here, plus a loop that appears once, and the same thing there, another tree that gets walked on in a closed manner, with that same loop taken only once. And this time the parity is okay, because it is a closed walk on a tree in which each edge is walked on exactly twice, plus a loop, and that gives you an odd number. So this is indeed a possibility now. So what are our choices? They can be described as follows. First we pick the tree. The tree has to have (k - 1)/2 edges, and it is a rooted unlabeled tree; the labels come afterwards, but it is important that it is rooted. And then we pick where the loop goes: we choose the loop to occur after one of the edge steps of the walk. So we do not just dangle it off of a vertex; it is important when the loop occurs in the walk, between which two steps the loop happens, and that gives you exactly k possibilities. And since the choices are independent, you get a total of k^2 times C_{(k-1)/2} squared, where C_{(k-1)/2} is the Catalan number corresponding to (k - 1)/2: one tree with a loop for w_i, another one for w_j. We have these two trees, with the position of the loop decided in each walk, and then you just identify the loops, okay? And the total number of choices at that point for the labels of the vertices is n^k, et cetera, et cetera. There is one important factor here. Did I write that somewhere? Yes, okay. It is the weight of the loop. This is the first time so far that we have cared what the variance of the diagonal entries is. Diagonal entries do not appear at all in our calculation of the ESD, because, remember, if you take a diagonal entry, then you give up a choice of a new vertex, so you are no longer at the optimal number of choices for vertices; but here it is important. So the weight of the loop matters: it is the expected value of W_{11} squared, and that is the weight these terms carry in the total count. And the rest is just as before; it comes from the choices that we have to make for the cycles. So this is the variance for the case when k is odd, fairly similar to the case when k is even.
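For the record, the analogous assembly for k odd, again as I read the counts above (same caveat as before), is

\[
k^{2}\, C_{(k-1)/2}^{2}\;\mathbb{E}\big[W_{11}^{2}\big]\;+\;\sum_{\substack{r\ \text{odd}\\ 3\le r\le k}} \frac{2k^{2}}{r}\Bigg(\sum_{k_1+\cdots+k_r=(k-r)/2} C_{k_1}\cdots C_{k_r}\Bigg)^{2},
\]

where the first term is the loop contribution, carrying the variance of a diagonal entry, and the sum runs over odd cycle lengths. For k = 1 only the loop term survives and gives E[W_{11}^2], which is just the variance of Tr \bar W = n^{-1/2}\sum_i w_{ii}.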
Okay, higher moments. Remember, computing the variance is not enough; in fact, computing the variance is just the beginning. What we want to show is that the higher moments behave exactly as you would expect the moments of a Gaussian variable with this particular variance to behave. So for the higher moments you have to examine these quantities, but the same ideas apply; you just have to look at joins of graphs and the like. There are two things that I would like to say. One of them is that for the L-th power you will examine a join of L graphs: you have the words w_1, w_2, ..., w_L, and the graph corresponding to w_i, call it G_i, will have to overlap some other graph. Not every pair of graphs has to overlap; I said that wrong. Every graph will have to overlap some other graph, because otherwise one of these words is independent of the others, and therefore the covariance is going to be zero. So you look only at collections of graphs in which each graph overlaps some other one. But unlike before, when that kind of requirement meant that the join of the graphs is connected, that is not going to be the case here; it will not necessarily mean that the join of the graphs is connected. In fact, because the requirement is only this overlap condition, and you want to maximize the number of vertices, the maximum occurs when you are able to pair the words up in such a way that each word overlaps exactly one other word; so only pairs of words overlap, and the pairs are independent of each other. In other words, you have to construct a matching. And from the matching you get the Wick formula, which tells you everything you need to know about how to compute the higher moments. Anyway, that is one thing to say. The other one is that you noticed that in the variances we have an expression involving the fourth moment of the off-diagonal entries. To begin with, I assumed here, just to make things easier, that all the variables have moments of all orders. But because only the fourth moment appears, and because you might want to extrapolate from work that we have done before, you might guess that a fourth moment for the off-diagonal entries is maybe all you need to have. That is almost true. You can extend this result by constructing the same kind of truncated variables as we did before, but you will have to assume 4 + epsilon moments: you have to assume that x^{4 + epsilon} is integrable with respect to the distribution of the off-diagonal entries. So it is not exactly just the fourth moment, but almost, almost that. And you can extend it further: we did it for polynomials, but you can extend it to functions in a suitable space with a certain degree of smoothness. Okay, so it is a pretty strong result, and I think that is all that I wanted to say today. You have this central-limit-like theorem, which nicely involves only 4 + epsilon moments and can be extended to functions with a certain degree of smoothness. And tomorrow we will see how, in a sense, this is a one-dimensional version of a result that we will talk about then, which relates to the Gaussian free field and which is kind of like a two-dimensional statement. One way to think about this central limit theorem is the following: you can say that the fluctuations give you a Gaussian process.
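To spell out what the matching structure gives you (my paraphrase of the claim, writing Y_k for the centered trace, Y_k = Tr \bar W^k - E Tr \bar W^k, which is my notation): the pairing argument says that the mixed moments factor over perfect matchings,

\[
\mathbb{E}\big[Y_{k_1}Y_{k_2}\cdots Y_{k_L}\big]\;\longrightarrow\;\sum_{\pi\ \text{a perfect matching of}\ \{1,\dots,L\}}\ \ \prod_{\{a,b\}\in\pi}\operatorname{Cov}\big(Y_{k_a},Y_{k_b}\big),
\]

and in particular the limit vanishes when L is odd. That is exactly the Wick formula, and it is what identifies the limits of the Y_k as a jointly Gaussian family with the covariances computed above.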
So it is a one-dimensional Gaussian process over functions with a certain degree of smoothness. And I think I will just finish here. Thank you. Questions? Yes, could you say that again, I'm sorry? Why do you divide by r? Okay, I can maybe draw a picture. Yes, the walks can start at different places, that is not the problem. Here, I will give you an example where I have a single dangling tree of size one: you have your cycle here, and one tree of length one dangling from it. I can label things one, two, three, four, and that gives me one picture. But if instead I took this same picture and labeled it one, two, three, four starting somewhere else, I would have the same thing. So I have to divide by the number of rotations. Before I put the labels on, this description is different from that one, but once I put the labels on they can give the same labeled graph, and that is the overcounting. This is the only type of overcounting. No, they are not silly questions. Yes, r equals one you could think of as being the loop, and r equals two as just the additional overlapped edge; but the problem with thinking of it that way is that you do not see the fourth power. So very good, very, very good. Any other question? Sorry. Maybe you can use your microphone so people can hear. Okay: if you consider a random Toeplitz matrix, with i.i.d. entries on the diagonals, and then you divide by root n to get a limiting spectral distribution, then in that case this is no longer true. If you take linear statistics, they have fluctuations of order root n. So it is not because you are dividing by root n; something more subtle is going on. Sorry. Thank you, Shoraf, yes. So, like I said, the reason why you do not get the root n here is that the eigenvalues are so correlated that they fluctuate very little. Any other questions? You will have some problems to work on; in fact, what you will be doing in a problem session is computing the covariance of this Gaussian process. All right, thank you.