Thank you, Satya; at one point that will be important. So, as I mentioned, I'd like to simplify my life by considering only even integers n, where our formulas are slightly neater, although, as I said, it's only for convenience.

So, when you have this joint probability density, the next goal of many random matrix practitioners is, of course, to extract so-called correlation functions, or, in other contexts, what are known as marginal densities. It means basically integrating this joint probability density over some subset of variables and finding the result as a function of the rest. These are very important objects, standard in random matrix theory, so let me introduce them. These are the functions R_K, depending on K arguments z_1, ..., z_K, and they are given by n! divided by (n-K)! times the integral, over n-K copies of the complex plane — that is, over as many complex variables — of this probability density, with one qualification which I will explain in a moment. For the integration I use the notation d²z, showing that we integrate in the complex plane; some people just use dz, but I usually reserve dz for contour integration. So d²z means you integrate independently over dx and dy, and the integral here runs over d²z_{K+1} up to the last bit, d²z_n.

Now, I said this is P, but it does not carry the indices (l, m). That means I have already summed in the appropriate way over all sectors. Remember, it may happen that the matrix has only real eigenvalues, or one pair of complex conjugate eigenvalues, and so on. So I should really take these conditional densities, sum them up, and form the genuine joint probability density of all eigenvalues; then, integrating it over n-K eigenvalues, I get a function of the remaining K, and this is what is known as — or at least frequently called — the K-point correlation function, also known as a marginal density. Yes, you also need to take the weights into account appropriately; I just swept that under the carpet because I don't want to go into that discussion, but it's known how to do it.

Okay, so the simplest of these objects is just R_1. And what is R_1? R_1, as a function of one variable, which I'll call z, is nothing else than what you get by integrating out all variables but one; obviously this is just the mean value of the spectral counting measure, describing the density of eigenvalues around the point z in the complex plane. I will systematically use a notation which I think physicists like very much and mathematicians not always: angular brackets to denote the ensemble average, averaging over the ensemble of these matrices. So, formally, R_1 is nothing else than the expectation — the bracket stands for this expectation — of the spectral counting measure. Let me explain the notation: by δ² I mean the two-dimensional delta function, the product of a delta function for the real parts and one for the imaginary parts. So the object inside the brackets is basically the counting measure for the eigenvalues, and R_1 is its mean value, the mean eigenvalue density.
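[In display form, the objects just introduced read as follows — an editor's reconstruction from the spoken description, where P denotes the sector-summed joint density of all n eigenvalues:

$$ R_K(z_1,\dots,z_K) \;=\; \frac{n!}{(n-K)!} \int_{\mathbb{C}^{\,n-K}} P(z_1,\dots,z_n)\; d^2 z_{K+1} \cdots d^2 z_n, $$

$$ R_1(z) \;=\; \Big\langle \sum_{i=1}^{n} \delta^{2}(z - z_i) \Big\rangle, \qquad \delta^{2}(z) := \delta(x)\,\delta(y), \quad z = x + i y. \;] $$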
And the higher correlation functions: for example, R_2 is extremely handy if you ask the following question. You take a part of the complex plane, some domain, and you ask what the variance of the number of eigenvalues inside this domain will be. It can be expressed very simply in terms of R_2. And so on — higher correlation properties can be characterized. If you know all these objects, you know a lot; I won't say that you know everything, but you know a lot, and these are considered to be the most important objects.

Do we have a clock here? — Twenty minutes. — Twenty minutes, thank you.

So it's clearly a very non-trivial job to sum up these probability densities with the appropriate weights, form the density, and then integrate it. Nevertheless, it was done — in fact, I believe it was done explicitly around the year 2007, simultaneously. Again, Peter may correct me, because I may mix things up, but at least in the timeline I have in mind, around 2007 there was a breakthrough in the understanding of how to calculate these objects. They were shown to follow from a very nice integrable, Pfaffian structure hidden in this expression. Again, I hope Misha Poplavskyi will give hints at, if not all of, the derivation, because the derivation is again not simple at all. There were three groups obtaining these results almost simultaneously — the order in which I name them is arbitrary; I don't really remember how it happened: Borodin and Sinclair, then Forrester and Nagao, and Sommers — I'm not sure whether with some collaborator or on his own — roughly around 2007. And the methods, by the way, were not identical. The Borodin–Sinclair and Forrester–Nagao methods have more in common, although they do not coincide; Sommers proposed quite an interesting method of his own for getting these results. In some sense, I think, a nice development, and again I hope Misha will give some account of it.

But what was the result? The result is the following Pfaffian structure. I will write it as a theorem, although I probably won't formulate it in full generality, but more or less. The theorem is that these R_K's are given by Pfaffians. I will introduce in a moment what a Pfaffian is, for those who have never encountered one, but for now I just claim: Pfaffians. A Pfaffian is always associated with skew-symmetric, sometimes called anti-symmetric, matrices. For these correlation functions, the matrices are made of 2 by 2 blocks Q_{kl}, carrying two indices k and l; the matrix is K by K in blocks, so it is a 2K by 2K matrix altogether. The blocks have the following form: the entries are K_{kl} and J_{kl} in the first row, and -J_{lk} and W_{kl} in the second. The notation may not be ideal in my setting, because this J is not the Ginibre matrix J — it's just a coincidence of letters, and I hope there will be no confusion. Notice that the pairing of J_{kl} with -J_{lk} is what makes the whole matrix anti-symmetric. So these are the 2 by 2 blocks, there are K² of them with this anti-symmetry property, and k, l run from 1 to K. All entries — there are three different types of entries in this matrix — can be expressed basically in terms of two anti-symmetric functions. One is known as the kernel function; I will call it calligraphic K — not very calligraphic in my writing — K(z_1, z_2), just a function of two variables. I won't write it explicitly at the moment, because I will write it eventually, but not now.
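[Two displays may help here; both are editorial additions reconstructed from the spoken description. First, the variance statement is the standard identity: for a domain D in the complex plane, with N_D the number of eigenvalues in D,

$$ \operatorname{Var}(N_D) \;=\; \int_{D} R_1(z)\, d^2 z \;+\; \int_{D}\!\!\int_{D} \big[\, R_2(z_1,z_2) - R_1(z_1)\,R_1(z_2) \,\big]\, d^2 z_1\, d^2 z_2 . $$

Second, the theorem just stated reads schematically, with the entries K_{kl}, J_{kl}, W_{kl} defined below:

$$ R_K(z_1,\dots,z_K) \;=\; \operatorname{Pf}\big[\, Q_{kl} \,\big]_{k,l=1}^{K}, \qquad Q_{kl} = \begin{pmatrix} \mathcal{K}_{kl} & J_{kl} \\ -J_{lk} & W_{kl} \end{pmatrix}. \;] $$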
But there is just this one kernel function, and then a second function. I don't know whether it has a standard name, but it is frequently denoted F(z_1, z_2), and this one I will write explicitly. It is exp(-(z_1² + z_2²)/2) times a bracket — there are several equivalent ways of writing this function, but I use one particular way which I find nice. The first term in the bracket is 2i times the two-dimensional delta function δ²(z_1 - z̄_2), times sgn(y_1) — sgn is just the sign function of real numbers — where, writing z_{1,2} = x_{1,2} + i y_{1,2} for the real and imaginary parts, y_1 is the imaginary part of z_1; times the nice function erfc(√2 |y_1|). One recognizes a construction similar to what we had before. That is the first term; the second term is plus δ(y_1) δ(y_2) sgn(x_2 - x_1), and then the bracket closes. So this is the explicit form of the function F.

Now, how does one build the entries of this matrix using K and F? In the following way. K_{kl} is equal to K_n(z_k, z_l), where the z's are just our set of arguments; so K_{kl} is simply the kernel evaluated at z_k and z_l. J_{kl} is equal to the integral of the kernel K_n(z_k, z) against F(z, z_l) over d²z: we just form this product and integrate, so basically it's a convolution of the two kernels. And — I won't write it, it's slightly longer — there is also a very explicit expression for W_{kl}: again, it's basically made of F and then some convolution using F and K. Using the properties of F and K, one can show that the matrices of K_{kl} and W_{kl} are anti-symmetric, due to the anti-symmetry hidden in F and K. And this basically completes the construction, apart from an explanation of what a Pfaffian is — do I still have five minutes to explain?

What is the form of K? No, I don't want to give it now. I will give it explicitly next time, and I will also use it explicitly; for this calculation, K is technically the hardest bit. One really has to go through quite a tedious calculation to get it, but there is one trick which allows one to get it relatively cheaply from this structure. I will explain it next time — not really derive it, but hint at how to get it. That's why I postpone it.

Nevertheless, a few words about Pfaffians. So, let A be skew-symmetric, that is, A_{ij} = -A_{ji}. Then the formal definition of the Pfaffian is the following. We know how to calculate a determinant: as a sum of products of entries from different columns and rows, weighted with the sign of the permutation. A similar — not dissimilar — structure holds for the Pfaffian. The sum here goes over all permutations σ of the set {1, 2, ..., 2n}, of the sign of the permutation times the product over j from 1 to n of A with indices σ(2j-1), σ(2j); one should also divide by 2^n n! to avoid overcounting equivalent pairings. So this is really the definition of the Pfaffian that you find, where the summation goes over permutations of the set {1, 2, ..., 2n}. Now, I don't know how you calculate determinants when confronted with one, but I usually use a method which I think goes back to Laplace: expanding into minors, taking one element and then crossing out a row and column — this is how we were taught. Fortunately, a similar method exists for the Pfaffian, so I will just give the recursive relation. There are several equivalent ways of writing it down; I take one particular one: the Pfaffian of A is a sum over j of (-1)^j A_{1j} times the Pfaffian of a reduced matrix, which I write as Ã_{1j}.
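[Gathering the formulas just dictated into display form — an editor's reconstruction from the spoken description, taking the argument of the two-dimensional delta function to be z_1 - z̄_2, as is standard for the real Ginibre ensemble; the explicit W_{kl} is left out, as in the lecture:

$$ F(z_1,z_2) = e^{-(z_1^2+z_2^2)/2}\Big[\, 2i\,\operatorname{sgn}(y_1)\,\delta^{2}(z_1-\bar z_2)\,\operatorname{erfc}\!\big(\sqrt{2}\,|y_1|\big) \;+\; \delta(y_1)\,\delta(y_2)\,\operatorname{sgn}(x_2-x_1) \Big], $$

$$ \mathcal{K}_{kl} = \mathcal{K}_n(z_k, z_l), \qquad J_{kl} = \int_{\mathbb{C}} \mathcal{K}_n(z_k, z)\, F(z, z_l)\, d^2 z, $$

$$ \operatorname{Pf} A = \frac{1}{2^n n!} \sum_{\sigma \in S_{2n}} \operatorname{sgn}(\sigma) \prod_{j=1}^{n} A_{\sigma(2j-1),\,\sigma(2j)}, \qquad \operatorname{Pf} A = \sum_{j=2}^{2n} (-1)^{j}\, A_{1j}\, \operatorname{Pf}\big(\tilde A_{1j}\big). \;] $$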
So what is written here: the entry ij of the matrix — sorry, not ij, 1j — times the Pfaffian of a reduced matrix. It's just an expansion along the first row, in the entries A_{1j}; the summation goes over j from 2, because, of course, for an anti-symmetric matrix the diagonal is 0. So it is A_{1j} times the Pfaffian of the matrix Ã_{1j} obtained from the original matrix A by deleting the first row and column and then the j-th row and column. In this way you can relatively easily calculate by hand, say, the Pfaffian of a 4 by 4 or 6 by 6 matrix; beyond that, of course, you should write some code.

So these are Pfaffians, and the last bit for today, I believe, is an important relation between Pfaffians and determinants: namely, that the square of the Pfaffian of an anti-symmetric matrix A is just the determinant of A. Not quite trivial — I think, again, Misha was going to show how to derive it. But this is all we need to know next time to calculate some useful quantities for Ginibre, which will be used in the analysis: first to discuss linear stability, and then eventually non-linear stability. I'll probably finish here; it's a natural point to finish. Thank you.

We have time for questions. — No, it is anti-symmetric, in fact; if you look more attentively at all these relations, you will see that it's anti-symmetric. This one is trivially anti-symmetric, but there is also anti-symmetry hidden there — if I did not make any mistake, and I believe it's correct. It should be; please check, and if not, tell me. It was expected to be anti-symmetric. Other questions? — Yes, I will. Next time I will develop, in simple terms, some understanding of the large deviations of the rightmost eigenvalue: its right tail, being very far away from the edge, and also its left tail, going into the bulk. This will characterize how far we can typically be from the edge and justify the anticipation that the typical rightmost eigenvalue will be close to the edge. But I will also use it in a much more constructive way, I think, when analyzing the non-linear system, because I will really heavily use these large deviations to count equilibria. So I will discuss it at length, in fact.

Okay, a quick announcement: we'll take a break until 3:15, when Sylvia Serfaty will give her first lecture. The problem session for this lecture and Serfaty's lecture will be from 4:30 to 6 o'clock; we group the two together because there's a dinner at 6 o'clock, and at 4:30 Peter Forrester will give a research-seminar lecture in this room. So let's thank Jan again, and we'll resume in about 15 minutes. — Sorry, just one question regarding the problem sessions: back in the tent? — Yes, the problem session will be in the tent, because we have the research lecture in here.
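[To make the Pfaffian recursion concrete, here is a minimal Python sketch — an editorial addition, not from the lecture — implementing the expansion along the first row and checking the Pf(A)² = det(A) identity on a random skew-symmetric matrix:

```python
import numpy as np

def pfaffian(A):
    """Pfaffian of a skew-symmetric matrix, via the recursive expansion
    along the first row: Pf(A) = sum_{j=2}^{2n} (-1)^j A_{1j} Pf(A~_{1j}).
    Double-factorial cost, so only sensible for the small matrices
    one would otherwise do by hand (4x4, 6x6, ...)."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2 == 1:
        return 0.0  # odd-dimensional skew-symmetric matrices have Pf = 0
    if n == 2:
        return A[0, 1]
    total = 0.0
    for j in range(1, n):  # 0-based column j corresponds to 1-based index j+1
        keep = [k for k in range(n) if k not in (0, j)]  # delete rows/cols 1 and j+1
        sign = (-1) ** (j + 1)  # equals (-1)^(j+1) for 1-based index j+1
        total += sign * A[0, j] * pfaffian(A[np.ix_(keep, keep)])
    return total

# Sanity check of Pf(A)^2 = det(A) on a random 6x6 skew-symmetric matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = B - B.T  # skew-symmetric by construction
print(pfaffian(A) ** 2, np.linalg.det(A))  # the two numbers should agree
```

For a 4 by 4 matrix the recursion reproduces the familiar hand formula Pf(A) = a₁₂a₃₄ − a₁₃a₂₄ + a₁₄a₂₃, which is a quick way to convince yourself the signs are right.]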