So I invited everybody to vote on what to talk about, and the votes came in: I am going to talk about the bulk limit this time. This lecture will be slightly different in nature from the previous two. We are not going to start from random matrices; rather, we go from the top down. I will talk about the operators first and tell you roughly how you get to them. There will be a little more storytelling than before, but I will still try to give you some math.

The first thing I have to talk about is the hyperbolic plane, just because I gave you this nice story about how you care about geometry in random matrices, and then people complained that the only geometry they got was the geometry of the line. So let's talk about the hyperbolic plane, at first seemingly unrelated to everything else.

The hyperbolic plane has a long story, starting with Euclid's parallel postulate, which everybody tried to prove and nobody could. In fact Aquinas, as you probably know, said that even God could not make the angles of a triangle sum to something other than 180 degrees. And it went all the way to the 1800s, when three people independently found a geometry where the postulate fails: Gauss, Lobachevsky, and Bolyai. That geometry is the hyperbolic plane.

There are several nice models; the ones I will talk about are the Poincaré half-plane model and the Poincaré disk model. For the disk model, think of a manifold where a segment of Euclidean length epsilon at distance r from the center has hyperbolic length about epsilon/(1 - r^2); the disk has radius 1, so things that are Euclidean-short near the boundary are hyperbolically long. The same is true in the half-plane model: at height y the length element is 1/y, so things get longer as you approach the boundary.

The standard picture you have probably seen is the Whitney squares: squares, then squares of half the size, then a quarter of the size, and so on. All of these squares are actually isometric; in fact this gives a transitive lattice, so for any two such points there is an automorphism of the hyperbolic plane taking one to the other and preserving the square structure. This tells you, for example, that if you take the point i in the hyperbolic plane, then a point at Euclidean distance epsilon from the boundary has hyperbolic distance about log epsilon from i; you can just read it off from the squares. In fact it is exactly log epsilon: the hyperbolic distance d_H here is log epsilon.
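Since this distance estimate gets used repeatedly later, here is a minimal numerical check in the half-plane model, using the standard distance formula d(z1, z2) = arccosh(1 + |z1 - z2|^2 / (2 Im z1 Im z2)); the code and names are my own illustration, not from the lecture.

```python
import numpy as np

def dist_hyp(z1, z2):
    """Hyperbolic distance in the Poincare half-plane model."""
    return np.arccosh(1 + abs(z1 - z2) ** 2 / (2 * z1.imag * z2.imag))

# A point at Euclidean height eps (distance eps from the boundary line)
# is at hyperbolic distance ~ log(1/eps) from the point i.
for eps in [1e-2, 1e-4, 1e-6]:
    print(eps, dist_hyp(1j, eps * 1j), np.log(1 / eps))
```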
The other thing you know is that rotations, translations, all the isometries of the hyperbolic plane correspond to Möbius transformations: linear fractional transformations that fix the corresponding object, the disk or the half-plane. These form groups, SU(1,1) for the disk and SL(2,R) for the half-plane, acting by linear fractional transformations. There is also a natural notion of boundary, which here is quite concrete: this is the hyperbolic plane and this is its boundary, and you know what it means to converge to a boundary point.

Question: sorry, you said the distance is log epsilon, but if epsilon is small, that is negative. Answer: yes, thank you; minus log epsilon.

So let's see what happens here. Here is an interesting thing. Suppose there is a sun in the hyperbolic world. The sun goes around the horizon at constant speed, say; that gives you days and nights, whatever you like. Then there is this strange phenomenon that time runs differently depending on where you are: standing here, you see the angle of the sun change at a different rate than standing there. This is again different from the Euclidean setting.

There is another thing you can do, and it is what we are going to do. You take a boundary point and rotate it about some center of rotation at some speed. If the center is the center of the disk, this is just the trivial rotation. If the center is somewhere else, you can figure out what the rotation is by conjugating: send the center to the center of the disk by a Möbius transformation, do the trivial rotation there, and send it back. Now, Möbius transformations leave Brownian motion invariant up to time change, and this gives a way of understanding how fast the rotation goes. Before changing the center, you cover one unit of harmonic measure per unit of time, for Brownian motion; harmonic measure is just the hitting measure on the boundary. That stays true when you rotate about a different center, but the harmonic measure is different. So the Euclidean speed of the boundary point is the inverse of the harmonic measure density: near the part of the boundary where the measure concentrates, the point moves slowly; far from it, much faster. That is just the geometry.

So that is hyperbolic geometry, and now I am going to talk about an object called the hyperbolic carousel, which we introduced with Benedek Valkó, I think in 2007. It is something very simple. You have a path in the hyperbolic plane, which we call B_t. This is a path; it does not have to be continuous.
I still call it a path. Then you have a point on the boundary, call it gamma_t. And you do the following operation; I will tell you later why. You rotate the point gamma_t with center B_t at speed lambda. Let me write this down. In the Poincaré disk model, writing everything in these coordinates, gamma satisfies an ODE:

gamma'(t) = lambda |gamma_t - B_t|^2 / (1 - |B_t|^2).

Question: is that Euclidean distance? Answer: yes, this is Euclidean distance, written in Euclidean coordinates; but the whole thing is defined intrinsically, so we do not really care about the coordinates.

Now I fix two boundary points, call them U_0 and U_1, and let t run over the interval [0, 1]: you run the carousel up to time one. Define N(lambda), a function of lambda, to be the number of times gamma_t passes U_1. You do this rotation around your path, you may go around several times, and you just count how many times you have passed the target point. So N is an integer-valued step function.

Question: is the count algebraic; can it pass in one direction or the other? Answer: no, gamma always goes in the same direction, so there is no issue like that; N is increasing. You first pick lambda and rotate at speed lambda; lambda does not depend on time. But lambda can also be negative, and you extend the definition to negative lambda the same way.

Question: is B given, or does the ODE define it? Answer: B is a path that is given to you; it is a parameter. The ODE defines gamma, which depends on lambda and on t. Gamma prime is the derivative of gamma: how much the angle changes is lambda times the speed factor, which is the inverse harmonic measure. So this is an ODE and you solve it; I just wrote in math what I told you in words. There is nothing more to it.

So you have a triple (B, U_0, U_1): a path and two points on the boundary. To this triple you associate N(lambda), which is the counting function of a set of points, the points where N jumps. Equivalently, you get a set of points Lambda. This is called a hyperbolic carousel: a path and two points on the boundary give you a bunch of points.
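To make the definition concrete, here is a rough numerical sketch of the carousel in disk coordinates; this is my own illustration, not code from the paper. Following the correction the lecturer makes later, gamma is stored as the boundary angle, so the moving point is e^{i gamma}, and B is any given function into the disk.

```python
import numpy as np

def carousel_N(lam, B, n_steps=20_000, gamma0=0.0, target=0.0):
    """Integrate gamma'(t) = lam * |e^{i gamma} - B(t)|^2 / (1 - |B(t)|^2)
    on [0,1] with Euler steps and count how many times the boundary point
    e^{i gamma} passes the target angle; this count is N(lambda)."""
    dt = 1.0 / n_steps
    gamma = gamma0
    for k in range(n_steps):
        b = B(k * dt)
        gamma += lam * abs(np.exp(1j * gamma) - b) ** 2 / (1 - abs(b) ** 2) * dt
    # gamma is nondecreasing for lam > 0, so crossings of target + 2*pi*Z
    # are counted by the floor difference below.
    return int(np.floor((gamma - target) / (2 * np.pi))
               - np.floor((gamma0 - target) / (2 * np.pi)))

# Path frozen at the disk center: gamma' = lam, so N(lam) jumps exactly at
# the multiples of 2*pi (the sine-infinity example coming up below).
print([carousel_N(lam, lambda t: 0j) for lam in (1.0, 7.0, 13.0)])  # [0, 1, 2]
```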
Question: is this counted in Euclidean distance or intrinsically? Answer: it is all written in Euclidean, that is, Poincaré coordinates, so that it is very explicit; but as the explanation shows, the object is intrinsic. You can define it anywhere; do it in the half-plane if you want. Actually, we are going to do it in the half-plane.

I am going to give you various examples of this, lots of examples, but first let me write the carousel in terms of automorphisms, because the really nice way to handle it is through automorphisms of the hyperbolic plane. So let's go to the half-plane model, the Poincaré half-plane, and write the path there as B_t = x_t + i y_t. You get from the disk to the half-plane by a Cayley transform, another linear fractional transformation that takes the disk to the half-plane.

Let's see how to write the rotation in terms of matrices. The boundary point gamma will be associated to a function f(t), which is a 2-vector (f_1(t), f_2(t)), because that is how you represent Möbius transformations: by matrices acting on 2-vectors. A point of the complex plane corresponds to the ratio, so this vector corresponds to f_1/f_2. Since gamma is on the boundary, and the boundary of the half-plane is exactly the real line, this ratio happens to be real, which makes things a little nicer.

Let me first give you one simple building block. Look at the ODE

f' = (1/2) [[0, 1], [-1, 0]] f.

I am not going to go into the details, but check it as an exercise: this ODE is the rotation at speed one about the point i in the upper half-plane. You can deduce it by writing the rotation in the Poincaré disk model and conjugating it to the half-plane.

Question: so in this first example the center is i? Answer: the center is i; we are getting there. This is just an exercise for now. Question: is it a Euclidean rotation or a hyperbolic one? Answer: not Euclidean; it is the hyperbolic rotation at speed one about i. In the Poincaré disk model it coincides with the Euclidean rotation, but in the half-plane it does not. In the half-plane, boundary points move at inverse Cauchy speed, basically: the harmonic measure from i is the Cauchy distribution, and the speed is the inverse of its density, so boundary points far from i move very fast, and so on.
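The exercise can also be checked numerically: the flow of f' = (1/2) J f is f(t) = exp(tJ/2) f(0), a rotation matrix, and conjugating to the disk by the Cayley transform shows that it rotates boundary points about i at angular speed one. A quick sketch, my own check, assuming nothing beyond the lecture's formulas:

```python
import numpy as np

def mobius(M, z):
    a, b, c, d = M.ravel()
    return (a * z + b) / (c * z + d)

t = 0.7
M = np.array([[np.cos(t / 2), np.sin(t / 2)],
              [-np.sin(t / 2), np.cos(t / 2)]])   # exp(t J / 2), J = [[0,1],[-1,0]]
print(mobius(M, 1j))                              # i is fixed: the rotation center

cayley = lambda z: (z - 1j) / (z + 1j)            # half-plane -> disk, sends i to 0
w0, w1 = cayley(0.3), cayley(mobius(M, 0.3))      # a boundary point, before/after
print(np.angle(w1 / w0), t)                       # rotated by angle t about the center
```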
Question: did you show that if the rotation starts on the real line, it stays on the real line? Answer: yes, because the boundary is preserved by these rotations. And if you start somewhere else, you travel on a circle, because hyperbolic circles are actually also Euclidean circles; only the Euclidean center is not i.

Now let's introduce a matrix-valued function, which is actually very simple:

X(t) = [[1, -x_t], [0, y_t]].

We just made an affine matrix out of x and y. It has the following property: think of X as a Möbius transformation and apply it to the point x + iy (suppressing the t), written as the vector (x + iy, 1). Then X . (x + iy, 1) is a constant times (i, 1); the dot is matrix multiplication, which also corresponds to the hyperbolic action. So X takes the point x + iy to the point i in H.

Why is this good? Because it lets us write the evolution of gamma, that is, of f:

f' = (lambda/2) X^{-1} [[0, 1], [-1, 0]] X f.

X takes our point to i, so conjugating by X makes the rotation happen exactly about that point and not about i. Question: should the conjugation not go the other way? Answer: let me double-check what I have in my notes; I wrote it like this, since X brings x + iy to i.

So this is another equation for the same thing, now in terms of matrices. And here is one observation. Just from the properties of this 2 by 2 matrix X, you can write

X^{-1} [[0, 1], [-1, 0]] X = [[0, 1], [-1, 0]] X^T X / det(X),

where det(X) is, by the way, y. It is just an identity; check it out. Call R = X^T X / det(X). So now I have the following equation; let me write it again:

f' = (lambda/2) [[0, 1], [-1, 0]] R f,

and this R is non-negative definite, in fact positive definite in our case.
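The identity behind R can be verified symbolically. Note the order of factors, which the lecture itself hesitates over: in the convention here, X^{-1} J X equals J times R, with R = X^T X / det(X). A sketch:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
X = sp.Matrix([[1, -x], [0, y]])
J = sp.Matrix([[0, 1], [-1, 0]])
R = (X.T * X) / X.det()                       # = X^T X / y, symmetric, positive definite
print(sp.simplify(X.inv() * J * X - J * R))   # zero matrix: X^{-1} J X = J R
```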
So we did this computation and we got to a place which is actually very nice, because the object you see here is called a canonical system; that is exactly what a canonical system is. Canonical systems come from scattering theory; there is a history of understanding various generalizations of scattering theory, and in some sense a canonical system is a continuous analogue of a tridiagonal matrix. That is one way to say it. It does not look like it, but in fact you can put tridiagonal matrices into this form, and this is somehow the nicest generalization there is. The theory of these objects was worked out by de Branges in a beautiful book from the 1960s called Hilbert Spaces of Entire Functions; he is the one who unified the theory of such objects. I am not going to tell you the details, but one thing is worth saying: the book is nearly perfect. There are works of de Branges that are not perfect, but this one is almost perfect. It is not easy to read: the language is very simple, but almost everything is done in exercises, and that is one problem. If you want to learn about canonical systems, there is a beautiful, very recent review by Romanov, which basically takes de Branges's book and explains it to people with finite patience.

And what did de Branges use canonical systems for? He tried to use them to prove the Riemann hypothesis. That was later, after the theory was completely developed. He has papers where he says: to a canonical system you can associate eigenvalues (I will tell you how in a second), and you can set up a canonical system so that its eigenvalues are exactly the non-trivial zeros of the Riemann zeta function, transformed to the real line. That did not work, at least so far. But what is the random analogue of the zeros of the zeta function? Random matrices: the bulk limit of the GUE. So even though nobody knows whether you can put the Riemann zeta zeros into this setup, you may ask whether you can put the GUE in. The answer is yes, and that is what I am going to talk about.

First of all, why does this give an operator? Let me see if I have this right; maybe the R/2 goes here, yes. So this is a canonical system, and it contains many things. I will show you how it contains unitary matrices in some sense; it also contains tridiagonal matrices, as I said; it contains Schrödinger operators with potential, which you can put in this form; and it contains Dirac operators, which is the case when R is invertible. R is always non-negative definite, so it may have zero eigenvalues, but if its eigenvalues are nonzero then it is invertible, and then I can rewrite the system. Take the inverse of the matrix [[0, 1], [-1, 0]], which is [[0, -1], [1, 0]], and the inverse of R/2, which gives 2 R^{-1}, and write

2 R^{-1} [[0, -1], [1, 0]] d/dt f = lambda f.

What is this? Call the left-hand side tau applied to f; this is the eigenvalue equation of an operator. It says: take some function f, apply some operator to it, and get back lambda times f.
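In display form, the chain of reformulations (rotation ODE, canonical system, Dirac-type operator) reads as follows; this is my transcription of the boards, with J = [[0,1],[-1,0]]:

```latex
f' = \tfrac{\lambda}{2}\, X^{-1} J X f
   = \tfrac{\lambda}{2}\, J R f,
\qquad R = \frac{X^{\mathsf T} X}{\det X},\quad
J = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}
\;\Longrightarrow\;
\tau f := 2\, R^{-1} \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} f'
       = \lambda f .
```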
So, given the hyperbolic carousel, we have produced an operator whose eigenvalues are those points, and this is actually the way it was done historically: we came up with the carousel ten years ago, and we found that it can be realized as an operator much, much later. And here is how you do it; it is pretty simple. What kind of operator is it? It takes vector-valued functions from the interval [0, 1] (you may want to leave the right end open; I will tell you why) to R^2, and there have to be boundary conditions corresponding to the starting and ending points there: f(0) is parallel to some vector u_0, and f(1) is parallel to u_1. These are f at time zero and f at time one; they are vectors. Then you differentiate f, apply the matrix to the vector-valued function, and check whether the result equals lambda f; the R in there is a function of the parameter t. So we have some kind of identification: from a path you get a point process, and you get an operator.

Question: do you assume that f is continuous? Answer: it should be differentiable. Question: and at one you take a limit, since f is not defined at one? Answer: at one there are some technical issues that I will probably not tell you about.

Question: you say time runs from zero to one, but where is the time in the picture of N(lambda)? Answer: in N(lambda) the time is gone. You get N(lambda), at a fixed lambda, by running the whole process through the entire time: when you run the process, you get the number of times you have passed, and that is your N(lambda) for that lambda. If you want it for another lambda, you run it again with that lambda; this is defined for every lambda on the whole real line. Here you fix the time horizon and vary lambda, whereas before we fixed lambda and varied time.

Let me clarify how you check whether lambda is an eigenvalue. You can start f with the left initial condition and solve the ODE (written this way, it is an ODE) and then watch what happens: f is going to travel along the boundary of the hyperbolic plane, and if it ends up at u_1, you are happy; that means lambda is an eigenvalue. That much you can see directly. But in fact there is an oscillation theory, Sturm-Liouville theory, which says you can say more, and the more is exactly what I put up there: the number of times you pass in one run tells you how many eigenvalues lie between zero and your lambda. It is kind of obvious if you think about it; it just follows from continuity.
It is a topological argument that I invite you to do. Question: these two lambdas are the same? Answer: the same lambda, yes. So the eigenvalues of tau are exactly the points of the set Lambda, the places where N(lambda) jumps. Is that clear? You can always solve the ODE for f; lambda is an eigenvalue exactly when the right boundary condition is satisfied.

Let's do some examples. What if you just set R to be the identity? This is the same as setting x + iy identically equal to i, so you are just rotating about the point i. Let's see what we get. Reading off the system: from the first row, f_1' = (lambda/2) f_2, and from the second row, f_2' = -(lambda/2) f_1. What is the solution? The derivative of sine is cosine and the derivative of cosine is minus sine, so let's choose the solution so that the boundary conditions work for us: f_1 = sin(lambda t / 2) and f_2 = cos(lambda t / 2). Set the left boundary condition u_0 = (0, 1), which corresponds to the boundary point at infinity, but that is fine; and set the right boundary condition also to (0, 1), so u_1 = infinity. What do you get? The eigenvalues are lambda_k = 2 pi k: that is exactly when plugging in t = 1 gives a vector parallel to (0, 1). So this corresponds to the point process 0, 2 pi, 4 pi, and so on, in both directions. If you apply a uniform random shift to this process, we call the resulting process sine-infinity. It is rigid; you will see why. So that is one example.

Question (a very simple one): in N(lambda), what does passing mean; the line from U_1 to some point B_t, or what? Answer: you look at gamma_t; it moves around the boundary, always in one direction, and U_1 is a point on the boundary; you just count how many times you hit it. That's it. You can write it in terms of a winding argument, solving the ODE. And actually, I made a mistake earlier: in the ODE, the moving boundary point should really be e^{i gamma_t}. So gamma_t is the angle, and e^{i gamma_t} is the Euclidean coordinate of the boundary point; that is the correct statement, which explains why you asked the question.
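The R = identity example can be verified by a shooting computation: solve f' = (lambda/2) J f from f(0) = (0, 1) and test whether f(1) is parallel to (0, 1), that is, whether f_1(1) = 0. A minimal sketch; here the solution is explicit (sine and cosine), but the same shooting works for a general R:

```python
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def f1_at_one(lam, n_steps=20_000):
    """Integrate f' = (lam/2) J f on [0,1] from f(0) = (0,1) and return f1(1);
    lam is an eigenvalue exactly when f(1) is parallel to (0,1), i.e. f1(1) = 0."""
    f = np.array([0.0, 1.0])
    dt = 1.0 / n_steps
    for _ in range(n_steps):
        k1 = (lam / 2) * J @ f
        k2 = (lam / 2) * J @ (f + 0.5 * dt * k1)   # midpoint rule step
        f = f + dt * k2
    return f[0]

# Explicit solution: f1(t) = sin(lam t / 2), so f1(1) = sin(lam / 2) = 0
# exactly at lam = 2*pi*k, the point process 0, 2pi, 4pi, ...
print([round(f1_at_one(2 * np.pi * k), 5) for k in range(4)])  # ~0 up to step error
```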
The next example is unitary matrices or, more generally, probability measures supported on n points of the unit circle. This story is the analogue of what we did at the beginning of these lectures, where we had n points on the real line corresponding to the spectral measure of a self-adjoint, or symmetric, matrix. Here take U in the unitary group U(n); then U has a spectral measure at some vector e, which is a probability measure on the circle supported on n points.

What does this have to do with what we are doing? The following. Let the path x_t + i y_t be piecewise constant. We already did constant at i, and constant at any other point is actually the same, because you can conjugate the whole picture to send that point to i. So let the path be constant on the intervals [k/n, (k+1)/n). Question: is there a y? Answer: yes, x_t + i y_t. So take the interval, divide it into n pieces; the path is a step process in the hyperbolic plane, not continuous, just piecewise constant.

Then there is a theorem, again from our new paper, and it is actually very simple. Consider a measure of the form mu = sum over i from 1 to n of q_i delta at e^{i lambda_i}, with the atoms on the unit circle and lambda_i real. If the increments of the path x + iy (I will say in a moment what that means precisely) are given by the Verblunsky coefficients of mu, then the eigenvalues of tau are exactly the set { n lambda_i + 2 pi Z : i = 1, ..., n }. What does this say? If I have a probability measure on the circle, then, just as in the previous case, there exists an operator whose eigenvalues are almost exactly these points. Not exactly: their lifting. You lift the points from the circle to the real line through the covering map, repeat them 2 pi-periodically (that is the + 2 pi Z), and stretch by n so that the average spacing is 2 pi. Question: n lambda_i + 2 pi Z? Answer: yes; n lambda_i is just a number, you take all of its 2 pi shifts, and then you range over all i, so over all the eigenvalues.

So I told you how these two objects are related, and now I have to tell you what the Verblunsky coefficients are, and what I mean by increments. The Verblunsky coefficients are the coefficients in the Szegő recursion. The Szegő recursion is the following: you want to figure out the orthogonal polynomials of this measure. Orthogonal polynomials are the ordinary thing: polynomials that are orthogonal, the i-th one of degree i - 1, basically uniquely defined up to normalization. They satisfy a certain recursion. The same is true on the real line, if you have seen that; there the recursion is given by the Jacobi matrix. We did not discuss that, but it is true.
Here the recursion is not given by one matrix; it is a two-term recursion, given by 2 by 2 matrices, and in each of those matrices there is only one number, called alpha. That alpha is the Verblunsky coefficient; it is a complex number. The Szegő recursion is really a beautiful story, but I do not have time to tell it completely now. You have alpha_0, ..., alpha_{n-2} in the interior of the disk, and then alpha_{n-1} on the boundary of the disk. That is what the data looks like: complex numbers of this kind. And as you can see, this data is (2n - 1)-dimensional, just like the measure data, because the q_i sum to one, and there is a one-to-one correspondence. If you want to learn about it, Barry Simon has 2,000 pages of books on this; I am not kidding, maybe just 1,500, but it is long, and it is beautiful. There is a huge theory of how this all works.

And then the increments: the increments here have to be understood in terms of matrix products. You multiply those matrices together and see how they act on the upper half-plane; that gives you the increments of the walk. That is the theorem: every unitary matrix has an eigenvalue distribution, a spectral measure, and you can associate to it an operator of this kind. So that is example two.

Question: are the boundary conditions as before, from (0, 1) to (0, 1)? Answer: the boundary condition is given by this data; you start at one point and end at the other, but you have to transform everything to the real line.

Example three is hyperbolic Brownian motion. Take B_t to be hyperbolic Brownian motion: write B = x + iy; then B satisfies the SDE dB = Im(B) dZ with B_0 = i, where Z is an ordinary complex Brownian motion, real and imaginary parts independent standard real Brownian motions. You solve this SDE. When the imaginary part is small, you move slower, because small Euclidean distances there are actually large distances in the hyperbolic plane; so this object is intrinsic to the hyperbolic plane, and it is called hyperbolic Brownian motion. So let B be hyperbolic Brownian motion, set some boundary conditions, and run it to a finite time, say time one. You can also put in a variance if you want, some standard deviation sigma.

Then you have the following theorem, due to Kritchevski, Valkó, and Virág; it connects a little to Simone's talk. Look at the random Schrödinger operator in one dimension: an N by N tridiagonal matrix with ones off the diagonal and a potential sigma V_1, ..., sigma V_N on the diagonal, where the V_i are i.i.d. with expectation zero and variance one, and sigma is a standard deviation parameter. Call it H_N; it is the standard one-dimensional random Schrödinger operator. So you take this guy. The spectrum of this is roughly the interval from minus two to two.
Here sigma is going to be small; in fact sigma goes to zero, sigma = sigma-tilde / sqrt(N). Now I am going to look at the spectrum of this operator at some energy level E strictly inside (-2, 2). What do I do? I take H_N, subtract E, and blow it up by a factor of N, because if I want to see a point process I have to: there are N eigenvalues living in this interval, so the spacing is about 1/N. And then there is another scaling factor rho, which is just a function of E; it is 1/(1 - E^2/4). Call the eigenvalues of this rescaled operator Lambda_N.

Then the theorem says that Lambda_N converges to the eigenvalues of the tau that corresponds to hyperbolic Brownian motion with limiting variance parameter sigma-infinity = sigma-tilde times rho. And it is almost true: there is a shift story here. You have to apply a shift which depends on N, call it alpha_N; it is just some deterministic sequence of numbers in [0, 2 pi). I can tell you exactly what the sequence is, but never mind. The eigenvalue process of this tau we call the Schrödinger process; it has one parameter, this variance squared.

Question: what happens if instead of a constant sigma you scale it, something inhomogeneous, sigma depending on the index j? Answer: yes, you can do that. If you make it decrease like one over the square root of j, then you get beta ensembles, and which beta ensemble you get depends on the constant in front of sigma and on where you look in the spectrum; it depends on both E and sigma. That is in this paper; the answer is there.

So you have this nice object. It is not a random matrix ensemble; or rather, I have not told you that, and it could be that the Schrödinger process is the limit of some random matrix ensemble. But it is a random point process. It is translation invariant under multiples of 2 pi; that is easy to check. It is not translation invariant under arbitrary shifts, say by one: it sort of remembers the original locations of the eigenvalues; the noise you had still remembers. I really do not have enough time for more on this.
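Here is a small sketch of this setup; the code is my own, the scaling factor rho is taken as quoted in the lecture, and the shift alpha_N is dropped, so only the local statistics near E are meaningful.

```python
import numpy as np

def schrodinger_points(n, E=0.5, sigma_tilde=1.0, seed=0):
    """Rescaled spectrum of the 1d random Schrodinger matrix H_n near energy E:
    ones off the diagonal, i.i.d. potential sigma * V_i on the diagonal with
    sigma = sigma_tilde / sqrt(n), E V_i = 0, Var V_i = 1.
    Returns n * rho * (eigenvalues - E)."""
    rng = np.random.default_rng(seed)
    sigma = sigma_tilde / np.sqrt(n)
    H = (np.diag(sigma * rng.standard_normal(n))
         + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
    rho = 1.0 / (1.0 - E ** 2 / 4)      # scaling factor as quoted in the lecture
    return n * rho * (np.linalg.eigvalsh(H) - E)

pts = schrodinger_points(2000)
print(pts[np.abs(pts) < 30])            # a few points of the local process near E
```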
Let's do example four. Now write dB for the Brownian motion as

dB = (2 / sqrt(beta)) (1 / sqrt(1 - t)) Im(B) dZ.

What is this doing? If you do not put the extra factor there, it is just ordinary hyperbolic Brownian motion; with it, the variance is scaled depending on beta and on time. In particular, the square of this factor is not integrable over [0, 1], which means that by time one this Brownian motion actually reaches infinity. So it is run in a funny time; we call this logarithmic time, and there is a reason why it is extremely natural.

So you are going to have a Brownian motion that goes off to infinity, and then the right boundary condition is a little bit irrelevant: in fact we take the boundary point where B ends up as the right boundary condition. In this example, the eigenvalues of tau are called the sine-beta process. Again, this is a definition if you like, for beta not one of the classical values; but when beta is one of the classical values, it is a theorem: the sine-2 process, the sine-kernel process, has the same distribution as the eigenvalues of this particular tau.

Example five is actually a continuation of example two; remember, unitary matrices. This is a result of Killip and Nenciu, who, following Edelman and Dumitriu, looked at the circular beta ensembles, C-beta-E. This corresponds to the measure where the joint density of the eigenvalue locations is proportional to the product over i < j of |lambda_i - lambda_j|^beta, with respect to length measure on the circle (the circle, sorry, not the disk), and you can put weights, which are Dirichlet(beta/2) as before. So you have a random measure on the circle: the locations are distributed like this, the weights are distributed like that, and everything else is independent. And Killip and Nenciu worked out what the Verblunsky coefficients are in this case. The last one is just uniform on the circle. For the others, let me see if I can get this right: alpha_k is rotationally invariant (remember, it is some random variable in the unit disk, invariant under rotations), and |alpha_k|^2 has a Beta distribution, Beta(1, (beta/2)(n - k - 1)); I may have the index slightly wrong, but I think this is correct. What are these? Just random variables on the disk; the Beta factor pushes the variable close to zero. The variance comes out to E |alpha_k|^2 = 2 / (beta (n - k - 1) + 2); did I do this correctly? So as k grows, the variance is growing.

So this is what the alpha_k are; now let's look at the path. Remember, the path x + iy that corresponds to the C-beta-E eigenvalues has increments given by these Verblunsky coefficients, which have this very nice rotationally invariant law. So what is the path? Here is how you can construct it: it is actually going to be a random walk, made into a piecewise constant function. And what is the random walk? You pick a radius according to this distribution (you have to convert it to hyperbolic length), look at the hyperbolic circle of that radius around you (a hyperbolic circle, not the Euclidean one), and jump to a uniform point on it. Then you do it again. And then you do it again.
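Here is a sketch of sampling the Verblunsky coefficients in the Killip-Nenciu form quoted above; the Beta index is as stated in the lecture, which the lecturer himself hedges on.

```python
import numpy as np

def verblunsky_circular_beta(n, beta, seed=0):
    """alpha_0, ..., alpha_{n-1} for the circular beta ensemble (Killip-Nenciu):
    alpha_k rotationally invariant with |alpha_k|^2 ~ Beta(1, (beta/2)(n-k-1))
    for k < n-1, and alpha_{n-1} uniform on the unit circle."""
    rng = np.random.default_rng(seed)
    alphas = np.empty(n, dtype=complex)
    for k in range(n - 1):
        r = np.sqrt(rng.beta(1.0, 0.5 * beta * (n - k - 1)))
        alphas[k] = r * np.exp(2j * np.pi * rng.random())
    alphas[n - 1] = np.exp(2j * np.pi * rng.random())   # last one on the circle
    return alphas

a = verblunsky_circular_beta(10, beta=2.0)
# single draws of |alpha_k|^2; their means 2/(beta*(n-k-1)+2) grow with k
print(np.abs(a) ** 2)
```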
But the variance of the radius is getting larger as you go ahead, so you have a hyperbolic random walk with changing variance. So why is this interesting? Because you can already see the convergence: the C-beta-E operator is just the operator that corresponds to this hyperbolic random walk. Take a limit of the hyperbolic random walk and what do you get? Hyperbolic Brownian motion with changing variance. The sqrt(1 - t) factor comes from this changing variance. And that actually proves the convergence; I mean, you have to do some tail estimates, but that is it, that is the proof.

Let me state a strong version of this theorem. Question: of the theorem that C-beta-E converges to the sine-beta process? Answer: yes. The fact that the eigenvalues converge to the process like this is a result of Killip and Stoiciu, also from around 2006. But in fact we now have convergence on the operator level, and here it is. Look at tau_n, the C-beta-E operator, the one we constructed up there, and look at its inverse. Tau_n is a differential operator, so its inverse is an integral operator with a kernel; in fact it is Hilbert-Schmidt, a very nice operator, and you can write it down explicitly. Look also at the inverse of the sine-beta operator, and take the Hilbert-Schmidt norm of the difference. The theorem is that, with high probability for large n,

|| tau_n^{-1} - sine-beta^{-1} ||_{HS} <= n^epsilon / n for every epsilon > 0.

In what sense does this hold? These are two random objects, so we have to put them on the same probability space: the theorem says there exists a coupling in which this bound holds. Let me tell you how strong the theorem is; here is just one consequence. Index the eigenvalues by k, with the first one to the right of zero labeled one, and so on. Let lambda_k(n) be the k-th eigenvalue of the lifted C-beta-E process and lambda_k the k-th eigenvalue of the sine-beta process. Then the supremum of |lambda_k(n) - lambda_k| over all |k| <= n^{1/4 - epsilon} goes to zero in probability, in this coupling. So not only do you have convergence of the spectrum; you have a much, much stronger rate-of-convergence bound: you can go all the way up to n^{1/4} eigenvalues, and even those are going to be close. Getting from the Hilbert-Schmidt bound to this statement is an exercise, using the law of large numbers. For beta equals two, the best result that I knew before is actually a recent one of Joseph Najnudel and some coauthors, who had the same kind of statement up to n^{1/6}. So even for beta equals two, as far as I know, this is very strong.

Question: what about the joint law of the alpha_k? Answer: they are independent. Question: I have another question; the alpha in the Schrödinger theorem is not the same alpha?
No, sorry, that was bad notation; that alpha_N is just a shift, there to take care of some periodicity issues.

Now I want to do one last piece of math, which is a computation using the Schrödinger process. When we first identified this process, we could prove various things about it using this representation: CLTs, laws of large numbers, gap probabilities (for example, what is the chance that you have a large gap), all kinds of things. But the one we are most interested in is whether this corresponds to a beta ensemble, and if it does, which beta. So let's try to identify it. We want to understand the probability that there exist two eigenvalues in [0, epsilon]. Remember, in a beta ensemble this should be epsilon to the power 2 + beta: epsilon squared just for the two eigenvalues to be there, as for Poisson, plus another beta in the exponent for the repulsion term. So we just want to identify this exponent, and I am going to give you an upper bound.

How do we compute this? Remember, we run the carousel, but we rotate extremely slowly, at speed lambda = epsilon, and see how many times the boundary point passes the target. For two eigenvalues, it has to pass the target at least twice, so our probability is at most the probability that the carousel makes a full circle: if it has to pass the same point twice, it has to go all the way around. But we are rotating at speed epsilon; how can it make a full circle by time one? Let's look at the geometry. To cover an angle 2 pi in unit time, the angular speed has to get up to about 2 pi, so the speed factor has to make up for epsilon being small. If you look at the formula up there, the factor is |gamma - B|^2 / (1 - |B|^2), and the numerator is bounded, so the only way the speed can reach order one is for 1 - |B|^2 to get down to about epsilon. In other words, B has to come within Euclidean distance about epsilon of the boundary. In hyperbolic distance, epsilon close to the boundary means about log(1/epsilon) away from the starting point. But B is a Brownian motion run to unit time, so these distance tails are like the tails of a normal; the hyperbolicity does not matter here. So the probability is at most e^{-c (log epsilon)^2}, where the constant c depends on the variance, and

e^{-c (log epsilon)^2} = epsilon^{c log(1/epsilon)}.

So compare this to that: for the beta ensembles, the repulsion exponent is 2 + beta.
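In symbols, the heuristic chain of estimates is (with c depending on the variance parameter):

```latex
\mathbb{P}\bigl(\text{two points in } [0,\epsilon]\bigr)
\;\le\; \mathbb{P}\bigl(\text{full circle at speed } \epsilon\bigr)
\;\lesssim\; \mathbb{P}\Bigl(\max_{t \le 1} d_{\mathbb H}(B_t, B_0) \gtrsim \log\tfrac{1}{\epsilon}\Bigr)
\;\le\; e^{-c(\log\epsilon)^2}
\;=\; \epsilon^{\,c\log(1/\epsilon)} ,
```

to be compared with epsilon^{2 + beta} for a beta ensemble.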
For this process, beta is infinite in some sense: the repulsion is much, much stronger than in any beta ensemble, so it is a much, much more rigid process. And you can see we got all this just by looking at this picture, plus heuristics.

I have six minutes, actually a bit more since we started late, so I can finish. I wanted to do one more computation, and this one comes with a story. Because I started these lectures with beta ensembles, I think it is appropriate to finish the series with a story about beta ensembles. The story is about Dyson, who in 1962, the golden year of beta ensembles, wrote three fantastic papers, every single one still important. One of them introduces the beta ensembles; another is Dyson's Brownian motion, by the way; and the third is about the invariant ensembles. So what happened in that paper? It was known (even Wigner could calculate this) that the chance of a large gap between two eigenvalues in, say, the GUE does not behave like Poisson: for Poisson that chance decays exponentially in the size of the gap, but here it decays exponentially in the square of the size of the gap. Even Wigner was aware of this. But Dyson was much, much braver; he is a physicist, of course, and he gave you a formula. In the scaling we have, it reads: the probability that sine-beta (or tau-beta) has no eigenvalue in [0, lambda] is, as lambda goes to infinity,

exp( -(beta/64) lambda^2 + (linear term) lambda + gamma_beta log lambda + constant + o(1) ),

so lambda squared is the main term, then there is a linear term, then a polynomial factor lambda^{gamma_beta}. And Dyson said that

gamma_beta = (1/4) (beta/2 + 2/beta + 6).

Now, this is a physics story in some sense. Many people say there is a proof, meaning they are convinced it is true and they have very good arguments; and in physics there is a hierarchy of arguments, some counting more than others because they are more rigorous or people give them more credit. In 1973, Mehta and des Cloizeaux actually computed these values for the special betas one, two, and four with a more precise method (I think still a physics method, but more precise) and figured out that the formula is wrong. At that point there was no general guess: the formula was known not to be that, and the values were known for beta equals one, two, and four. Using the methods that we have here, and SDEs, we actually proved with Valkó that the formula is true once you put a minus three in place of the plus six:

gamma_beta = (1/4) (beta/2 + 2/beta - 3).

So that is now a theorem; this is from around 2010, something like that. And the last thing I am going to tell you is how you prove this; not the gammas, but the leading term. The idea is the following. Remember, you have this boundary point, you have this hyperbolic Brownian motion, and this guy is moving around like this; I want to see whether it makes a full circle.
The problem with computing this directly is that there are too many things to take care of. You would like some quantity that evolves by itself, so you can follow it alone and not lots of things at the same time. And there is a quantity like that: the hyperbolic angle, the angle at B_t between the target boundary point and gamma_t. Call it alpha_t. You can write down this hyperbolic angle, and it satisfies an SDE. In fact, it is better to do this after a change of time: you see, the Brownian motion has a time-dependent variance parameter, and it is good to scale it out and put it somewhere else. So write the SDE for alpha in standard time, not logarithmic time (I hope I have my SDE; yes). It looks like this:

d alpha = lambda f(t) dt + 2 sin(alpha/2) dB,

where B is a standard Brownian motion and f (not the same f as before) is a deterministic function: f(t) = (beta/4) e^{-beta t / 4}, the density of an exponential random variable with parameter beta/4. So f decays exponentially, at a rate that depends on beta. The drift is lambda times f, and the noise term is 2 sin(alpha/2) dB.

And it actually turns out that alpha_t converges to a multiple of 2 pi, almost surely, as t goes to infinity; the time scale is now [0, infinity) because of this time change. This is actually easy, just a fact about SDEs. Let's see what happens: it is an SDE on the real line. There is a noise term with bounded variance (the sine), and there is a drift term whose total integral is lambda, since f integrates to one; that is not much of a drift. So alpha is essentially a martingale: not exactly a martingale, because there is a drift, but the drift has finite total integral. Another interesting thing is that alpha is non-negative: it cannot cross below zero, because when alpha gets to zero the diffusion coefficient 2 sin(alpha/2) vanishes, so you will never be able to cross zero. A non-negative essentially-martingale has a limit, almost surely. And it is also easy to check that the limit can only be an integer multiple of 2 pi, because otherwise there is still some variance and the process will keep buzzing; the only way the variance can disappear (which it must, if there is a limit) is at a multiple of 2 pi. And which multiple it is, that is the theorem: it is exactly N(lambda), the number of eigenvalues in the interval.

So how do you compute the gap probability? Here is zero, and here is 2 pi. The process starts at zero, has a positive drift, and buzzes around, and what we want is that it never reaches 2 pi; because once it reaches 2 pi, by the same argument it can never go back below 2 pi.
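Here is a minimal Euler-Maruyama sketch of this SDE; my own illustration, where alpha(T)/(2 pi) for large T approximates the eigenvalue count N(lambda).

```python
import numpy as np

def simulate_alpha(lam, beta, T=40.0, n_steps=200_000, seed=0):
    """Euler-Maruyama for  d alpha = lam * f(t) dt + 2 sin(alpha/2) dB
    with f(t) = (beta/4) * exp(-beta*t/4), so the drift integrates to lam.
    For large T, alpha(T) sits near a multiple of 2*pi; that multiple is N(lam)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    alpha = 0.0
    for k in range(n_steps):
        drift = lam * (beta / 4) * np.exp(-beta * k * dt / 4)
        alpha += drift * dt + 2 * np.sin(alpha / 2) * np.sqrt(dt) * rng.standard_normal()
    return alpha / (2 * np.pi)

print(simulate_alpha(lam=20.0, beta=2.0))   # close to an integer: the eigenvalue count
```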
So this alpha, for the same reason: there is a drift that can push it up through a multiple of 2 pi, but downward it cannot cross, because the variance vanishes when it gets to multiples of 2 pi. So what are we computing? We have a large lambda, because we want to understand a large gap: we want the number of eigenvalues in a huge interval to be zero. So this SDE, which has a gigantic total drift, has to stay confined in an interval. How does that happen? Well, it is easier for the process to fight off its drift where the variance is large: if the variance is zero, it follows the drift no matter what; if you add a noise term, it can deviate from the drift, and the more variance you add, the more it can deviate. So you can essentially forget the sin(alpha/2) and just bound it by its maximum; the process is probably going to stay in the middle, where the variance is largest, to be able to kill off the drift.

And how much does it cost to kill off the drift? Well, this is a change of measure for Brownian motion: Brownian motion with drift is absolutely continuous with respect to Brownian motion without drift, at least up to finite time, and what is the cost? The probability that it follows some drift that it should not is exponential of minus the L^2 norm squared of the drift (here lambda f) divided by twice the variance; the diffusion coefficient there is two, so you divide by eight. That is a standard fact about Brownian motion: that is how hard it is for the Brownian motion to compensate for a drift so that it does not follow it. And if you compute it: the integral of (lambda f)^2 is lambda^2 (beta/4)^2 times the integral of e^{-beta t/2}, which is lambda^2 beta/8, and dividing by eight gives exactly exp(-(beta/64) lambda^2). And you can get all the way to that gamma if you do this more precisely; quite a bit more precisely. Okay, well, thanks very much. There will be exercises too; this one would be to compute the minus three.

Question: can we do this without the symmetry of the matrix? This is very rigid, so your random matrix should have a lot of symmetry. Answer: here is how I think about it. You call universality the fact that if you have two matrices of size n with different distributions, then they are close in eigenvalue distribution. What this theory does instead is take one invariant ensemble and show that if you do it with n and n + 1, or n and 2n, then the eigenvalue distributions are similar; that is why you have a limit at all. Universality by itself would not tell you that you have a sine process; it would just tell you that your matrix looks like the same matrix with Gaussian entries. So this theory is complementary to universality; you can even prove universality in certain cases with it, but that is really not the point. The point is to identify the limit as a probabilistic object that you can say things about using your usual tools, not just analysis. I mean, you know, in the 70s probability almost died because people were trying to prove harder and harder versions of the central limit theorem, but fortunately statistical physics saved it. So let's hope the same does not happen with universality.
Question: using this sine-beta operator, you made a connection to the eigenvalues of the unitary matrix near a point of the circle, the ones in the bulk; is there anything special about the point? Answer: this is the bulk, and it is invariant under rotation, so there is nothing else to it. But you can ask the same thing about the GUE, and in fact in the carousel paper we proved that the GUE and the beta-Hermite ensembles converge to this limit in the bulk. The story is just nicer with the unitaries; it is simpler in that case.

Question: a simple question; does the hyperbolic disk in your talk have some relation to the hyperbolic space in the earlier talk? Answer: it is the same hyperbolic space, and with some of the same symmetries: the symmetries of the formulas that were used there are the symmetries of this hyperbolic structure. So it is possible that you could relate some random walk on the line, a reinforced random walk on the line or on a weighted line or something, to this. But there is no direct physical connection that I know of. How should I say it: a random walk is a version of a Gaussian free field on the line, and in that talk there are also Gaussian fields on the line, except the fields there were not exactly Gaussian but hyperbolic. So in that sense, in the case when this operator lives on a line (which I think is not very interesting for reinforced random walk), maybe there is some more direct connection.