I'm speaking from Colorado State University on the asymptotics of orthogonal polynomial ensembles via integrable methods. Can you hear me? Thank you, Marco. And I'd like to thank the organizers, Andre, Sasha, Tamara, Alice, and Sergei. Thank you very much for inviting me. Beautiful place, a lot of great friends, so thanks. I couldn't actually decide what to talk about, since I'm not working in this exact area right now. But then I decided I would talk about orthogonal polynomial ensembles and their asymptotics via integrable methods, mostly because of Sasha's lectures on determinantal point processes; this is the integrable connection to that. It's a review of somewhat old work of Jinho Baik, Thomas Kriecherbauer, myself, and Peter Miller, but there are, well, maybe interesting reasons to reconsider this work. OK. It is supposed to be introductory, so ask questions if you have any. And I'll try to keep a monotone so that you can, if you wish, sleep.

OK. So at the top of the screen is the orthogonal polynomial of degree j. The coefficients are disgusting, but the leading coefficient, kappa_{N,j}, is positive. They all depend upon a parameter N, so there's usually a superscript N floating around; sometimes it will be there, and sometimes I will have forgotten to LaTeX it. And the orthogonality, today, for the next 45 minutes, will either be on the real axis or on a discrete set. These points are called the nodes; there are N nodes, indexed from 0 through N - 1. If you're on a discrete set, then the orthogonality condition is this condition here: it is really an integral, but a discrete sum instead of an integral. And let's see, this letter should be m. My apologies. I thought the first two slides had no typos. Yeah, right. But I did say that sometimes that superscript wouldn't appear. So yeah, OK.

And the interest is in studying the behavior of these polynomials when the degree n and the parameter capital N go to infinity, and a goal is a uniform asymptotic description for all z in the plane. And these are the applications. I wrote down the applications thinking of the discrete case, so I'll spend most of the time talking about discrete orthogonal polynomials. Well, the first application is not discrete, it's random matrix theory: you can compute the asymptotics of any statistical quantity in sight for unitary invariant ensembles using orthogonal polynomials. And the rest of the examples are either discrete orthogonal polynomial applications or orthogonal polynomials on a curve in the plane, OK?

Now, we're thinking about these as generating ensembles of random variables. So you think that lambda, which is a vector up there, lambda_1 through lambda_n, either lives on the real axis or its individual entries live in this node set. In the case that you're living on the real axis, lambda has a probability density given by this formula there, and we've encountered that a lot today and yesterday, so I won't speak too much about it. In the discrete case, here's the probabilistic interpretation: the probability that there are particles at locations x_1 up through x_k is given by this formula here, and the x_1 through x_k have to be in this collection of node points, OK? And k should be less than capital N, so that it makes sense, OK?
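Since the slides aren't reproduced here, the setup just described should look roughly like this; a reconstruction, and the normalizations may differ from the slide:

```latex
% Reconstruction of the setup (normalizations may differ from the slide).
% Orthogonal polynomial of degree j, positive leading coefficient,
% depending on the parameter N:
p_{N,j}(z) = \kappa_{N,j}\, z^{j} + \cdots, \qquad \kappa_{N,j} > 0 .
% Discrete orthogonality over the N nodes x_{N,0}, \dots, x_{N,N-1}
% with weights w_{N,m}: an "integral" that is really a sum,
\sum_{m=0}^{N-1} p_{N,j}(x_{N,m})\, p_{N,k}(x_{N,m})\, w_{N,m} = \delta_{jk} .
```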
And the typical question is to ask what's going on when capital N is growing to infinity, k is growing to infinity, and the ratio k/N is going to a constant c, which is less than 1. If c were to equal 1, then nothing would be random: all the particles would be sitting at all of the node sites, and there'd be nothing random going on. OK. So this is just up there as a repeat of what I had on the previous slide: the probability that there are particles at the sites x_1 through x_k is given by this formula, and N is going to be growing to infinity.

And the weights: two or three slides ago, I wrote down a discrete orthogonal polynomial, and I had these weights, which should have been w_{N,m}, OK? The m goes with the node, so each node has a weight. If you have all of these weights at the nodes, then you can really think of them as a function w(x), where x lives on the nodes: given an x in the node set, you find the m corresponding to x, and that's how you define the weight function, OK?

If there are k particles at these locations, and I've got a total of N nodes, then there are N - k holes at the complementary nodes, and I've labeled those y_1 through y_{N-k}. So the x_1 through x_k together with all the rest of the locations has to be the entire collection of possible nodes. And the reason I'm saying that is because, oops, I'm moving too fast, you can also ask: what's the probability that there are N - k holes at these locations? Of course, it's got to be the same thing as that probability. But you can rewrite the formula in this way, where now, instead of using the locations of the x's, you write down a probability measure using the locations of the y's. That k bar means N - k. And you have a different weight function: this weight w bar is related to the old weight by this formula here, OK? So there's really particles, and there's holes; those are two collections of particles, and they're dual to each other for obvious reasons, OK?

So, a simple example of what kind of questions you might ask. The product of which two things is one? No, the two probabilities are equal, right? They're equal, yeah. So the first thing you might ask about is this random variable at the top of the screen; I should use the laser to point at it there, OK? The number of particles that are less than x, sorry, the fraction of particles less than x. That's a random variable, and you can compute its average. In the case of random matrix theory, that average is represented as an integral of a density function rho_1, which is written there in terms of orthogonal polynomials. So rho_1 is the function K_n(x,x), where K_n is a function of two variables called the reproducing kernel. It's written there as a sum over the first n orthogonal polynomials, times the weight function; it's a projection kernel onto the span of the first n polynomials. In the discrete case, what you have instead is a sum instead of an integral, but the function rho_1 that you're summing is built in the same way: it's K_{N,k}(x,x), where K_{N,k} is this finite sum over the orthogonal polynomials, and the sum stops at degree k - 1. And that is right: you're thinking, I have a collection of k particles living on N nodes, and they can dance around, and you want to compute their statistics, and that's the formula for it.
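For the record, the dual weight and the reproducing kernel should look something like the following; this is a reconstruction from memory of the Baik-Kriecherbauer-McLaughlin-Miller conventions, so take the exact normalizations with a grain of salt:

```latex
% Reconstruction; conventions may differ from the slide.
% The holes form an orthogonal polynomial ensemble with the dual weight
\overline{w}_{N,m} = \Big( w_{N,m} \prod_{l \neq m} (x_{N,m} - x_{N,l})^{2} \Big)^{-1} .
% Reproducing kernel and one-point function for k particles on N nodes:
K_{N,k}(x,y) = \sqrt{w(x)\, w(y)} \sum_{j=0}^{k-1} p_{N,j}(x)\, p_{N,j}(y),
\qquad \rho_{1}(x) = K_{N,k}(x,x) .
```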
So you need to use the orthogonal polynomials up to degree k - 1. Good. And as I pointed out before, or maybe I said it and didn't explain it, a lot of statistical properties can be expressed in terms of the orthogonal polynomials. This is a slight generalization of what I wrote before: the average of 1/n times the sum of a function evaluated at the eigenvalues of a random matrix can be expressed as the integral of f times rho_1, and if you're in the discrete case, it's this sum. That's pretty cool. Then the variance of the same random variable is expressed in this way, again all in terms of the orthogonal polynomials; in the discrete case, the integrals are just sums. And if you ask for the probability of having no eigenvalues in an interval (a, b), it's a Fredholm determinant of 1 minus calligraphic K_n, where calligraphic K_n is an integral operator whose kernel is K_n, restricted to the interval (a, b). The only place where (a, b) appears on the right-hand side is that you're dealing with an operator on the interval (a, b). Again, in the discrete case it's an operator on little l^2, and it's a sum instead of an integral; in fact, in the discrete case you can compute such a determinant exactly, as in the sketch below.

Yeah, it depends on the problem at hand. You look at the equilibrium measure, and if the equilibrium measure achieves an upper constraint, then it's actually easier to look at the dual polynomials, because there's no upper constraint achieved. Sometimes. Yeah, whichever equilibrium measure looks simpler, you use that one. But I'll show you an example, a grotesque example, and you'll see what I mean, OK? It's advantageous to find the one for which you need to do fewer machinations, OK?

So the first application, OK, I'll go through this quickly, because we've heard a lot about this. The first application of orthogonal polynomial ensembles is the unitary invariant ensembles of random matrices. That means that the matrix entries aren't independent, and I must use a measure more or less like this: you start with a measure on matrices, but I've written it down already on the eigenvalues. And the very first thing that you have to do, if you're going to compute anything, is understand the equilibrium measure. So here's the first, well-known fact: the mean, the average density of eigenvalues, converges to a function psi, which is actually the density of a probability measure. And the function psi solves a well-known variational problem, which I pose here over the space of probability measures: you try to find the maximum of this functional, OK? Now if V is convex, then you know that the measure that achieves this is supported on a single interval. And if V is not convex, then all hell can break loose. But if you know that V is real analytic with suitable growth at infinity, then the equilibrium measure is supported on finitely many intervals, and its density is analytic on the interior of each one. I think it would be interesting, although I don't think anyone knows how to do it quite yet, to compute the asymptotic behavior, or identify interesting phenomena, in a genuinely infinite-gap situation. But that requires building a special example where that happens, and that would be cool, I guess, is the way to say it.
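Here is the promised sketch of that exact discrete computation; the node set, weight, and sizes are illustrative choices of mine, not anything from the talk:

```python
import numpy as np
from math import comb

# In the discrete case the "Fredholm determinant" for a gap probability
# is a plain finite determinant.  Toy choices throughout.
N, k = 40, 20                        # N nodes, k particles
x = np.arange(N)                     # nodes 0, 1, ..., N-1
w = np.array([comb(N - 1, m) for m in x], dtype=float)
w /= w.sum()                         # a binomial (Krawtchouk-type) weight

# Orthonormalize polynomials of degree < k against the weight via QR.
# Columns of Q are sqrt(w_m) p_j(x_m), so Q Q^T is the symmetrized
# reproducing kernel sqrt(w_i w_j) K_{N,k}(x_i, x_j), a rank-k projection.
xs = 2.0 * x / (N - 1) - 1.0         # rescale nodes for conditioning
A = np.sqrt(w)[:, None] * np.polynomial.chebyshev.chebvander(xs, k - 1)
Q, _ = np.linalg.qr(A)
K = Q @ Q.T

# P(no particles in B) = det(I - K restricted to the nodes in B).
B = (x >= 15) & (x <= 24)
print(np.linalg.det(np.eye(B.sum()) - K[np.ix_(B, B)]))
```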
Now, a more refined, a more detailed asymptotic result. I'm referring to this one: this result here is asymptotic in n, and a more detailed result is this version of universality. So you pick any a in the support of your equilibrium measure; this curve there is supposed to be the equilibrium measure, and there's a point a. Pick a where psi is positive, and consider this probability: the probability that no eigenvalues are in the interval from a up to a + s/(n psi(a)). So when n goes to infinity, this interval shrinks to zero. The density is positive, so you know there are going to be a lot of eigenvalues near a; but if you let the interval go to zero, you might expect very few eigenvalues in that small interval, and so if you ask for something like the probability that there are no eigenvalues there, you might get a nontrivial answer. And there is the answer: as n goes to infinity, this probability converges to a Fredholm determinant of another integral operator, with the sine kernel, which we encountered in Alexei's talks; there's a numerical sketch of this determinant below. And the first version of universality is that this doesn't depend upon a. The second version of universality is that it doesn't depend upon V. And the more recent versions of universality are that it works for Wigner-type matrices and even more general things than that. So the first result of this kind was in '69 by Gaudin and Mehta. Then it was extended to the case of analytic external fields, and Lipschitz continuous ones, and then even more esoteric ones by Doron Lubinsky. And in the direction of Wigner matrices and their brethren, there's a fantastic group of people and results; that's only a smattering, but Laszlo showed us a table which summarized all those results, so I will not go through so much detail on that. Thank you.

And then you can also ask about the behavior of the largest eigenvalue. So up there is a computationally drawn image of the equilibrium measure in a two-cut case, and beta is the maximum of the support of the equilibrium measure. And it's easy to convince yourself that this is true: the probability that |lambda_n - beta| is bigger than epsilon goes to 0 when n goes to infinity. The more refined result is up there: you rescale by n^{2/3}, and this distribution function converges. First statement: it converges. Second statement: the limit is describable in terms of the Painlevé II transcendent. And that was established by Forrester and by Tracy and Widom; Forrester established, I guess, that the limits existed, and Harold and Craig figured out this powerful connection to integrable systems. Yeah, in this picture here, you mean I might have chosen a V which goes to the wrong infinity? There might be an outlier; that's what that's called. Yeah, that's true. Sorry, when I said this was clear, I was thinking of the best possible case, where basically the variational inequality is strict outside of the support set. Thanks, Elise. Sorry about that. OK. And this too has been generalized to crazy examples, like V Lipschitz continuous and convex; you need convexity for that. OK. So I'm just summarizing that.
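And here is a small sketch of the sine-kernel determinant, computed by the standard quadrature discretization that Bornemann popularized; the quadrature size is an arbitrary choice:

```python
import numpy as np

# det(I - K) for the sine kernel on [0, s]: the limiting probability
# of finding no (rescaled) eigenvalues in an interval of length s.
def sine_kernel_gap(s, n_quad=60):
    t, wts = np.polynomial.legendre.leggauss(n_quad)  # nodes on [-1, 1]
    x = 0.5 * s * (t + 1.0)                           # map to [0, s]
    wts = 0.5 * s * wts
    X, Y = np.meshgrid(x, x, indexing="ij")
    with np.errstate(invalid="ignore"):
        K = np.sin(np.pi * (X - Y)) / (np.pi * (X - Y))
    K[np.isnan(K)] = 1.0                              # diagonal limit
    d = np.sqrt(wts)
    return np.linalg.det(np.eye(n_quad) - d[:, None] * K * d[None, :])

for s in (0.5, 1.0, 2.0):
    print(s, sine_kernel_gap(s))
```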
The second application is random tilings, and I'll spend a little bit of time going through an example of random tilings; that's where you describe things in terms of discrete orthogonal polynomials. So here is a hexagon, with side lengths A, B, and C, and inscribed within it is a triangular lattice, as everyone can see. If you glue together pairs of adjacent triangles, in one of the three possible orientations, you get three different types of rhombi. These two have been named horizontal rhombi here, and that's called the vertical rhombus. So you can ask questions like: how many tilings are there? And there's a formula for that; that's the formula for the number of tilings with parameters A, B, and C, and it's evaluated in the sketch below. And maybe an interesting question is the study of the limiting statistics of the tilings when the size of the hexagon goes to infinity. So I'm choosing one particular asymptotic problem to study, where A, B, and C scale like a parameter N, and alpha, beta, and gamma are positive constants.

So, three examples. Amazingly, near this corner up here, all of the tiles seem to be green, with the same orientation. And, oh, that's true of this tiling and that tiling. It turns out that as the hexagon gets larger and larger, that occurs with overwhelming probability. And one typically enjoys this picture where, OK, that's supposed to be sampled at random, and every single time you sample one of these, with overwhelming probability you see a picture like this: there's this temperate region in the middle, and then there are frozen regions near the boundaries, and a boundary between them which apparently fluctuates from one realization to the next. I've drawn a vertical line, and I've labeled the place where that vertical line enters the frozen region; that point is called c, or x-top. The problem is to study the fluctuations of that point on that line as the hexagon size goes to infinity. I forget why that slide is there, so let me skip it.

So here is a picture, and it's an amazing fact that we learned from Kurt Johansson: if I draw a vertical line through the middles of the vertical rhombi along it, then I can put white dots at the bases of the vertical rhombi and black dots at all the other grid locations. And what I find is a collection of holes, which are the white dots, and particles, which are the black dots. If you pick two different realizations, two different random tilings, but choose the same line, you will always get, along this line, the same number of holes and the same number of particles. Their locations, of course, are not the same; they're really a random configuration, if you think of having the collection of all possible random tilings, equally weighted, OK? So by selecting tilings at random and fixing this line, you can really think: I'm looking at particles and holes, and one might like to study the statistics of the particles, or the statistics of the distribution of the holes, OK? And a formula is at the top of the screen that gives the probability of finding, I forget which thing the tilde means, particles, at locations x_1 through x_k; it's given by this formula here. So clearly it is an orthogonal polynomial ensemble that happens to be discrete, with a finite number of nodes, and the number of nodes is going to grow as the size of your hexagon goes to infinity. And this weight is actually classical, so these polynomials are referred to as the Hahn orthogonal polynomials.
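The counting formula on that slide is MacMahon's box formula, and as a quick sketch you can evaluate it exactly; the specific sizes below are my own choices:

```python
from fractions import Fraction

# MacMahon's formula: the number of lozenge tilings of an A x B x C
# hexagon (equivalently, plane partitions in an A x B x C box).
def num_tilings(A, B, C):
    total = Fraction(1)
    for i in range(1, A + 1):
        for j in range(1, B + 1):
            for k in range(1, C + 1):
                total *= Fraction(i + j + k - 1, i + j + k - 2)
    return int(total)

print(num_tilings(2, 2, 2))     # 20 tilings of the (2, 2, 2) hexagon
print(num_tilings(10, 10, 10))  # grows very fast with the size
```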
Yeah, so maybe it wasn't exactly because of this; Jinho and Peter and Thomas and I were interested in discrete orthogonal polynomials more or less because they were there, and we began working on their asymptotic behavior and then learned about that connection.

So this slide shows the Riemann-Hilbert problem and its solution in the continuous case, for orthogonal polynomials on the real line, which might come from random matrix theory. So capital Y is a matrix. The first column contains the polynomials of degree j and j - 1, with a crazy normalization factor in the second entry; in the first entry, the normalization just makes the leading coefficient 1, so it's z^j plus lower order. And then in the second column, you have basically the Cauchy transform of the first column times this weight function, e^{-NV}. Well, the Cauchy transform satisfies boundary value relationships, and so you can characterize this matrix through these three conditions. You have a matrix-valued function; the entries are analytic in the upper and lower half planes; it behaves in this way as z goes to infinity, the identity plus a correction term, times this diagonal matrix z^j, 0, 0, z^{-j}; and on the real axis, it has good boundary values. The boundary values from above and below are not the same, Y_+ being the boundary value from above and Y_- from below, and they're related by what's called a jump relationship; this matrix is called the jump matrix. So the three things there, 1, 2, and 3, represent a problem: find Y. That's the Riemann-Hilbert problem that characterizes orthogonal polynomials, and the solution is at the top; the problem is written out in the sketch below. And when you start computing the asymptotic behavior of the solution of the Riemann-Hilbert problem, you extract information about the orthogonal polynomials.

In the discrete case, there is this meromorphic Riemann-Hilbert problem at the bottom of the screen, and its solution is at the top of the screen. It's really the same thing, except the integral is replaced by a sum. And what you have is a function; OK, I wrote analytic in C take away R, but it's actually meromorphic in C take away R: it has poles at the nodes. You can characterize the poles by saying that the residue at each node is given by the right-hand side of this equation. If you read that, it implicitly says the first column has no residues; and I didn't write it carefully, but these are simple poles. So the first column, having no residues, is entire; and in the second column, the residues are multiples of the first column evaluated at the node. And if you look up at the top, you see, oh, the z should be, I forgot to change the denominator of my sum. Just so you can see it, the denominator up there should be x_{N,l} minus z. Doesn't that matter? Yeah, the right-hand side never uses the second column; it just says that the poles of the second column are obtained from multiples of the first column times the weight. I'm sorry about that: it should say x_{N,l} minus z in both places, so there's a pole at each x_{N,l}.
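Written out, the continuous problem should be roughly the following; this is the standard Fokas-Its-Kitaev characterization as I remember it, with pi_j the monic orthogonal polynomial, so details of the normalization may differ from the slide:

```latex
% Standard Fokas--Its--Kitaev characterization (reconstruction).
% Find Y : \mathbb{C} \setminus \mathbb{R} \to \mathbb{C}^{2 \times 2} with
% (1) Y analytic in the upper and lower half planes;
% (2) boundary values related by the jump relation
Y_{+}(x) = Y_{-}(x) \begin{pmatrix} 1 & w(x) \\ 0 & 1 \end{pmatrix},
\qquad x \in \mathbb{R}, \quad w(x) = e^{-NV(x)} ;
% (3) the normalization at infinity
Y(z) = \big( I + \mathcal{O}(z^{-1}) \big)
\begin{pmatrix} z^{j} & 0 \\ 0 & z^{-j} \end{pmatrix}, \qquad z \to \infty .
% The solution carries the monic polynomial and its Cauchy transform:
Y(z) = \begin{pmatrix}
\pi_{j}(z) &
\dfrac{1}{2\pi i} \displaystyle\int_{\mathbb{R}} \dfrac{\pi_{j}(s)\, w(s)}{s - z}\, ds \\[2mm]
-2\pi i\, \kappa_{j-1}^{2}\, \pi_{j-1}(z) &
-\kappa_{j-1}^{2} \displaystyle\int_{\mathbb{R}} \dfrac{\pi_{j-1}(s)\, w(s)}{s - z}\, ds
\end{pmatrix}.
```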
So on the left is the meromorphic Riemann-Hilbert problem, and on the right are the standing assumptions that we made in our work on discrete orthogonal polynomials. Here they are. There is a finite number of nodes: it's a discrete orthogonal polynomial system with nodes indexed from 0 to N - 1, where capital N is fixed but getting large, and the nodes are described by a density function rho^0(x). The nodes are actually defined through this formula right here: you integrate the density rho^0 from a up to x_j, and you choose x_j so that the integral equals (2j + 1)/(2N). And because rho^0 is assumed to be analytic and positive, that gives you a very regular distribution of nodes. The one point to make is that if you have an infinite number of nodes, then the work that we did doesn't apply directly; you have to do more gymnastics to make it work. So the work that we've done does not include cases with infinitely many nodes, such as the Meixner and Charlier examples. The weights are defined in this kind of sick way: there's an e^{-NV}, which is natural, and then there are these terms, this product here and this (-1)^N. Those we included because it's useful if the weight and the dual weight look the same, so we put this extra disgusting product factor there so that both would have the same form. The product is not essential; but anyway, it's there, sadly.

So, I promised myself I wouldn't go through a Riemann-Hilbert analysis, so I just have one little picture to show you the first step. If you look at the formula that appears there on the left, I'm taking the matrix Y, which has poles, and I'm not doing anything to the first column of Y: I'm multiplying by an upper triangular matrix, which doesn't touch the first column. What I'm doing in the second column of R is combining the second column of Y plus that sick factor times the first column of Y. The sick factor has poles, and they cancel the poles that appear in the second column of Y. You do it one way in the upper half plane and another way in the lower half plane. So this term, because I'm assuming that rho^0 is analytic, is a nice analytic function in the upper half plane, and I use the opposite sign, minus i pi times the integral, down below. And they happen to hit exactly the right value, 2 pi i times an integer, at the nodes, so they don't affect the poles. So whether I choose plus or minus, the poles have been eliminated, and R is a function with no poles left. It still has a jump on the real axis, because of that exponential factor out in front, but it has no poles. So you wind up with a Riemann-Hilbert problem on those three contours with no poles, and then the game is afoot: if you like that sort of thing, you might start doing the rest of the steps in the asymptotic analysis of Riemann-Hilbert problems, which I won't go through.

But here's a picture of an example of an equilibrium measure. The first thing you have to do in this sort of analysis is seek the equilibrium measure, which is a probability measure that minimizes this functional. I changed the sign relative to earlier: before, I had a negative sign here, this was in the numerator, and I was maximizing; now you try to minimize this functional. And the discreteness has an effect: you minimize subject to the constraint that all the measures you consider are bounded. They're positive, and they're probability measures, so I don't really need to talk about that; but in addition, all of the measures that you consider have to be bounded from above by the density of nodes, rescaled, because you only have k particles, not N particles.
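As a toy illustration of that constrained minimization, here's a crude numerical sketch: discretize the energy on a grid and minimize subject to positivity, the upper constraint, and total mass one. The external field, the constraint, the grid, and the use of SciPy's SLSQP solver are all my own arbitrary choices:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize a discretized weighted-logarithmic energy
#   E[m] = sum_{i,j} -log|x_i - x_j| m_i m_j + sum_i V(x_i) m_i
# subject to 0 <= m_i <= cap_i (the upper constraint coming from the
# node density) and sum_i m_i = 1.  Everything here is illustrative.
n = 100
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
V = 4.0 * x**2                      # external field (arbitrary)
cap = np.full(n, 1.2 * dx)          # upper constraint ~ node density / c

D = np.abs(x[:, None] - x[None, :])
np.fill_diagonal(D, dx)             # regularize the diagonal
L = -np.log(D)

res = minimize(lambda m: m @ L @ m + V @ m,
               np.full(n, 1.0 / n),
               jac=lambda m: 2.0 * L @ m + V,
               bounds=[(0.0, c) for c in cap],
               constraints=[{"type": "eq", "fun": lambda m: m.sum() - 1.0}],
               method="SLSQP")
mu = res.x / dx                     # density; flat tops mark the active constraint
print(res.success, float(mu.max()))
```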
Yeah, phi is V minus the term coming from the product, but think of it as just the external field. So then, in the asymptotics of discrete orthogonal polynomials, and in the asymptotics of things like the continuum limit of the Toda lattice, this upper constraint is both a blessing and a curse. It's a blessing because the measures are bounded, which helps with the analysis a lot: if everything is bounded from above, it's easier to prove existence and so on. But on the other hand, here's a picture. The solid curve is the density of the equilibrium measure, and there's a dotted curve, which is the upper constraint rho^0 divided by c. You don't see the dotted curve sometimes, because the upper constraint is achieved by the equilibrium measure. So this is a region where the equilibrium measure hits the upper constraint; here is a place where the equilibrium measure is neither 0 nor touching the upper constraint, and it typically vanishes at such endpoints like a square root. Then it comes back to the upper constraint, and then, OK, it goes all the way down to hit 0. We picked an example that is quite grotesque, but that's the typical behavior you would expect: places where the constraint is active, and places where the equilibrium measure is 0, where you don't expect to find any particles, separated by places where the equilibrium measure is freely determined by the energy minimization.

And there's a lot of interesting potential theory here. You could wonder: does it ever happen that your equilibrium measure jumps from the upper constraint to 0? And the answer is not unless something really sick happens: if you have a nice analytic external field and a nice analytic upper constraint, then you can't jump from one to the other; there has to be a band in between, unless you have a discontinuity. That's all I'm trying to say. OK, in order to have a finite number of, sorry, it's going to take me a second to say it right: I should consider both the measure and the dual measure. If I want to know that the measure has finitely many places where the constraints, upper and lower, are active, and finitely many places where they're not, and the same thing is true of the dual, then I need everything analytic. And there are no results other than that that I know of; probably convexity would give you everything in sight, but you need analyticity to rule out infinite-gap situations.

So, yeah, I couldn't do it off the top of my head right now, but under duality, upper constraints become lower constraints, and lower constraints become upper constraints; you just flip it. The reason is that the holes and the particles together make up the entire node set, so if you know your particles are doing one thing, then the holes have to be doing the opposite. If you think intuitively, what an active upper constraint means is that there is a zero of the orthogonal polynomial right next to each node there, and that means that for the dual there are no zeros near those nodes; similarly, for the dual there would be lots of zeros occupying those nodes. So it really is flipped. And if you find that you have a situation where one of the constraints is never active, for example, a case where the upper constraint is active, and then there's a free region, but you never hit the lower constraint, then it's actually convenient to use the dual. But we picked this example because it's maximally nasty. So now I'm just stating some results. So here's a collection of node points up there.
B_N is a set of nodes: x_{N,j}, x_{N,j+k_1}, x_{N,j+k_2}, and so on, where k_1, k_2, up through k_{m-1} are all fixed integers; these are nodes, and you fix them. And x_{N,j} is going to converge to some value x, and I want to pick that value x to be someplace where neither constraint is active, so someplace in this zone here, or someplace in this zone here. And because those are fixed integers, all of these nodes are in this band, this active band. Then you can ask the question: what is the probability that precisely m particles are living in this set B_N, asymptotically as N goes to infinity? And there's a formula for it: 1 over m factorial, that's the number of particles, right, could be capital M, I guess, no, sorry, little m; 1 over m factorial, times the mth derivative, evaluated at t equals 1, of this Fredholm determinant, where S is an operator built from this kernel; there's a small numerical sketch of this counting formula below. So S is still a discrete operator; I think Sasha showed it to us earlier today. As he was telling us, it gets squashed a little bit, depending upon where you are: this coefficient there is the contribution from the equilibrium measure, the local density of eigenvalues. So, capital M: the set B_N contains capital M nodes, and capital M is fixed. Yeah, it's built out of the discrete sine kernel, OK? So that's sort of local universality, right?

And then you can look at an edge. You have to know that the endpoint is next to a void, so that you know you're going to have a largest eigenvalue, a largest particle that's allowed to fly around. So if your endpoint a, the left-hand endpoint of your node set, is adjacent to a void, and you check that with the equilibrium measure, then you can ask for the distribution of the leftmost particle, and you get the Tracy-Widom distribution in that case; and similarly for the right-hand endpoint. In this picture, the left-hand endpoint is not adjacent to a void, so in this example you would not have Tracy-Widom for the particles there; but the right-hand endpoint is adjacent to a void, so the largest particle is going to fluctuate around here, and the fluctuations will be given by the Tracy-Widom law, OK? There are constants which are defined up there, but I won't annoy you with them. But there's also the Tracy-Widom distribution for the location of the leftmost and rightmost holes: if a is adjacent to a saturated region, as it is in this picture here, then the distribution of the leftmost hole is also given by the Tracy-Widom law, OK? That's a case where you go to the dual to prove it. And so if you go back and apply that to the random tiling problem up there: c star is the location where the random boundary hits the vertical line, N beta is supposed to be the intersection of the arctic circle with the vertical line, and under the correct rescaling the probability converges to the Tracy-Widom law.

So, interesting directions. What I described was really sort of an optimal situation regarding the nodes and the external field. If the nodes are not so regular, for example, in the case of q-orthogonal polynomials, the node set is drastically different from the types of nodes that we considered. And it would be great to work out the asymptotic behavior of orthogonal polynomials of q type, like the q-Racah polynomials or something like that. Or even if you just said, OK, I want to consider a case where there are infinitely many nodes.
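Here's the promised sketch of that counting formula, using the generating-function form of the same determinant, E[t^(number of particles in B)] = det(I - (1 - t) S_B); the density and window size are arbitrary choices of mine:

```python
import numpy as np

# Exact distribution of the number of particles in a finite window B
# under the discrete sine-kernel process with local density rho.
rho, M = 0.4, 8                     # density and window size (arbitrary)
idx = np.arange(M)
D = idx[:, None] - idx[None, :]
with np.errstate(invalid="ignore"):
    S = np.sin(np.pi * rho * D) / (np.pi * D)
np.fill_diagonal(S, rho)            # diagonal limit of the kernel

lam = np.linalg.eigvalsh(S)         # eigenvalues of S restricted to B
# det(I - (1 - t) S) = prod_i ((1 - lam_i) + lam_i t); expand in t,
# and the coefficient of t^m is P(exactly m particles in B).
poly = np.array([1.0])
for l in lam:
    poly = np.convolve(poly, [1.0 - l, l])
for m, p in enumerate(poly):
    print(f"P(exactly {m} particles in B) = {p:.6f}")
print("mean:", lam.sum(), "= rho * M:", rho * M)
```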
Pavel Bleher and Karl Liechty considered that situation of infinitely many nodes. In order to apply it to the six-vertex model, in a couple of the parameter regimes they needed discrete orthogonal polynomials where the weight was e^{-x^2} and x ranged over the integers, so a doubly infinite situation. If the node density vanishes at the endpoints, that is a case where you're going to have active upper constraints at the endpoints, and that's an interesting case to consider that hasn't been worked out; we assumed the node density was positive up to and including the endpoints.

And then one last thing. If you take the Gaussian beta ensemble in its tridiagonal form, I wrote L(0) there because we might think of it as initial data for the Toda lattice. The diagonal entries that I wrote there are independent normally distributed random variables, and the chi's that appear off the diagonal are chi random variables with parameter k. And it's known that everything is known about this example: the eigenvalue distribution is exactly the Gaussian beta ensemble, and you even know a lot about the eigenvector distribution. And here's an interesting observation. The quantity p_L(lambda) is defined by what I wrote there, v_{L+1} divided by v_L, where the v_L are the entries of the eigenvector with eigenvalue lambda. So that ratio is a discrete orthogonal polynomial with respect to a random measure that's generated by that Jacobi matrix. And, well, it would be very interesting to understand something about the asymptotics of those discrete and random orthogonal polynomials.
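To close, here's a sketch of that tridiagonal model, which I believe is the Dumitriu-Edelman beta-Hermite ensemble; the normalization and the choice to look at the top eigenvector are mine:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tridiagonal beta-Hermite model: independent normals on the diagonal,
# chi variables with decreasing parameter off the diagonal.
def beta_hermite(n, beta=2.0):
    diag = rng.normal(scale=np.sqrt(2.0), size=n)
    off = np.sqrt(rng.chisquare(beta * np.arange(n - 1, 0, -1)))
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

L0 = beta_hermite(500)              # think of it as Toda initial data L(0)
eigvals, eigvecs = np.linalg.eigh(L0)

# The "random discrete orthogonal polynomial" ratio p_L = v_{L+1} / v_L,
# here for the eigenvector of the largest eigenvalue (illustration only).
v = eigvecs[:, -1]
ratios = v[1:] / v[:-1]
print(eigvals[-1], ratios[:5])
```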