To start, the first speaker this afternoon is Yi Sun, and he will talk about algebraic structures for multi-level eigenvalue densities. All right, thanks. First I'd like to thank the organizers for the invitation to speak here. So as you have mentioned, the title of my talk is algebraic structures for multi-level eigenvalue densities, and I'll be talking about some algebra which underlies the multi-level structure of eigenvalue densities; that explains the title. So let's talk about specifically what type of densities I want to consider. The first random matrix ensemble I want to talk about is the generalized beta-Wishart ensemble. To construct it, I fix some n and take two sets of parameters, pi and pi hat. Then I create a large rectangular random matrix whose entries are Gaussian with covariance depending on these parameters: the ij entry has covariance depending on pi_j and pi-hat_i. The entries are Gaussian, either real if beta equals one or complex if beta equals two. I take X_m to be the first m rows of this matrix, and the eigenvalues I want to consider are the joint distribution of the squared singular values of X_m, which I'll call lambda_m. Now, if I consider just a single level, this corresponds to a spiked covariance model with spikes theta_1 through theta_k, if I specialize the parameters so that all the pi hats are zero and the pis are the inverses of the thetas followed by ones. This is a model that shows up in statistics. Alternatively, if I specialize all the pis to zero and all the pi hats to one, then I just get the standard Wishart ensemble, which means that the single-level density is the standard Wishart density. A second type of model I'll be considering is the beta-Jacobi model.
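As a concrete illustration of the Wishart construction, here is a minimal numpy sketch at beta = 2. The specific covariance convention 1/(pi_j + pi-hat_i) is my assumption for illustration only, since the talk just says the covariance depends on the parameters; the interlacing of the lambda_m across levels, however, is deterministic and easy to check.

```python
import numpy as np

def sample_generalized_wishart(pi, pi_hat, rng):
    """Complex (beta = 2) Gaussian matrix whose (i, j) entry has variance
    1 / (pi[j] + pi_hat[i]) -- an illustrative convention, since the talk
    only says the covariance depends on pi_j and pi_hat_i."""
    var = 1.0 / (np.asarray(pi)[None, :] + np.asarray(pi_hat)[:, None])
    g = rng.standard_normal(var.shape) + 1j * rng.standard_normal(var.shape)
    return np.sqrt(var / 2.0) * g

def squared_singular_values(x_m):
    """lambda_m: squared singular values of the first m rows, decreasing."""
    return np.sort(np.linalg.svd(x_m, compute_uv=False) ** 2)[::-1]

rng = np.random.default_rng(0)
pi = [1.0, 1.0, 1.0, 1.0]      # column parameters (illustrative values)
pi_hat = [0.5, 0.5, 0.5]       # row parameters (illustrative values)
X = sample_generalized_wishart(pi, pi_hat, rng)
levels = [squared_singular_values(X[:m]) for m in range(1, 4)]
```

Since X_{m+1}* X_{m+1} is a rank-one update of X_m* X_m, adjacent levels interlace for every sample, not just in distribution.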
To construct that, I take two large rectangular matrices whose entries are i.i.d. Gaussian, again real if beta equals one and complex if beta equals two. And again I take various slices of these matrices: I pick a number a greater than n and take the first a rows of X, and I pick a number m, which will vary from one to n, and take the first m rows of Y. Then I construct the matrix X* X (X* X + Y* Y)^{-1}. From the constraints on the sizes, you can see that this matrix almost surely has m nontrivial eigenvalues between zero and one, and what I want to study is the multilevel structure of the joint distribution of these eigenvalues as m varies between one and n. This type of construction appears in statistics in the study of MANOVA models, and its single-level density is again one of the three classical random matrix ensembles: I have the Vandermonde, and then a weight corresponding to the Jacobi orthogonal polynomials. Okay, so I said I want to consider the multilevel structure. What does that mean? It means I want to understand how the eigenvalues vary as I change the level, which corresponds to this parameter m that I specified. The basic fact is that the eigenvalues at adjacent levels interlace, meaning they satisfy a system of inequalities: the eigenvalues at level three lie between the eigenvalues at level four, and the eigenvalues at level two in turn lie between those at level three. That's what I mean by interlacing. What's more, if I consider the beta-Wishart ensemble with the standard parameters, pi hat equal to zero and pi equal to one, then in fact I can view this picture as a Markov process as I go up or down in level, and the transition densities of this process are known explicitly.
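The Jacobi construction can be sketched the same way, again at beta = 2; the sizes n = 4, a = 6 below are illustrative. The check uses the rank count: since Y_m* Y_m has rank m, exactly n - m eigenvalues of X*X (X*X + Y_m*Y_m)^{-1} equal one, and the remaining m lie strictly inside (0, 1) almost surely.

```python
import numpy as np

rng = np.random.default_rng(1)
n, a = 4, 6                     # a > n, as in the talk; sizes are illustrative
X = (rng.standard_normal((a, n)) + 1j * rng.standard_normal((a, n))) / np.sqrt(2)
Y = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)

def jacobi_eigs(m):
    """Eigenvalues of X*X (X*X + Y_m* Y_m)^{-1}, with Y_m the first m rows
    of Y.  The matrix equals I - B (A + B)^{-1} with B of rank m, so n - m
    eigenvalues are exactly 1; the other m lie strictly in (0, 1)."""
    A = X.conj().T @ X
    B = Y[:m].conj().T @ Y[:m]
    return np.sort(np.linalg.eigvals(A @ np.linalg.inv(A + B)).real)

nontrivial_level_2 = jacobi_eigs(2)
```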
So another way to say it is that the multi-level density has a beta Gibbs property. When beta equals two, this means that if I fix the distribution at the top level, then conditioned on that, the distribution of the lower levels is simply uniform, subject to the constraints imposed by interlacing. So it's uniform on a polytope, which is called the Gelfand-Tsetlin polytope. At beta equals one, there's a similar description, except now the measure is not uniform; it's some beta deformation of the uniform measure. Okay, so these are structural properties of a multi-level matrix ensemble. For the first part of the talk, I'm going to relate these properties, in the more general settings that I introduced earlier, to certain branching properties of special functions. These will be functions which are defined at general beta, and they provide an algebraic interpolation of this structure to all betas. They come from the study of Macdonald polynomials, and they're called multivariate Bessel and Heckman-Opdam functions. In the second half of the talk, I'll introduce a dynamical model whose fixed-time distributions will actually produce this multi-level structure in the beta equals two case. Okay, so let's do goal one. For this, I need to introduce these special functions, the multivariate Bessel and Heckman-Opdam functions, which come, roughly speaking, from representation theory. Okay, so what is a multivariate Bessel function? The most straightforward definition, I think, is that it's simply a certain integral, and here's an integral formula for the function. In all the formulas, a more convenient parameterization is theta equals beta over two; theta and beta are just related by this constant factor. The multivariate Bessel function is a function of two arguments, lambda and s, both vectors of length n.
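The beta = 2 Gibbs property can be made concrete with a naive rejection sampler: fix the top row, propose the lower rows uniformly, and accept only when the interlacing constraints hold; the accepted configurations are then uniform on the Gelfand-Tsetlin polytope. A minimal sketch (the helper name and the test top row are mine):

```python
import numpy as np

def sample_gt_given_top(top, num, rng):
    """Rejection sampler for Gelfand-Tsetlin patterns with fixed top row
    `top` (given in decreasing order): propose every lower row uniformly in
    the bounding box and accept only if all adjacent rows interlace.
    Accepted patterns are uniform on the Gelfand-Tsetlin polytope, which is
    exactly the beta = 2 Gibbs property described in the talk."""
    n, lo, hi = len(top), min(top), max(top)
    out = []
    while len(out) < num:
        rows, ok = [np.asarray(top, dtype=float)], True
        for k in range(n - 1, 0, -1):            # rows of length n-1, ..., 1
            row = np.sort(rng.uniform(lo, hi, size=k))[::-1]
            upper = rows[-1]
            if not all(upper[i] >= row[i] >= upper[i + 1] for i in range(k)):
                ok = False
                break
            rows.append(row)
        if ok:
            out.append(rows)
    return out

rng = np.random.default_rng(2)
patterns = sample_gt_given_top([3.0, 2.0, 0.0], 30, rng)
```

Rejection is wasteful for large n, but for a small pattern it makes the "uniform on the polytope" statement completely explicit.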
One way to define it is as a certain explicit integral. The domain of integration here is the Gelfand-Tsetlin polytope: I have a chain lambda^1 through lambda^n, where lambda^i is a vector of length i, and every pair of adjacent lambdas interlaces. So this is an explicit finite polytope. The integrand has an exponential weight here, and then a term which involves adjacent levels. Of course, this is not the original definition of these functions; they originally came from the study of certain integrable systems in representation theory. You can define them as eigenfunctions of the rational Calogero-Moser system, and they have been studied by many authors. One thing I want to point out is that Okounkov and Olshanski actually showed that they are a certain scaling limit of the Jack polynomials, if you know what those are. Now, the form that I've shown here, this integral, was first studied by Gerr and Kohler. In what follows, I'll need a certain reweighting of the Bessel function, which I'll call the conjugate Bessel function. Okay, so the first property I care about for these multivariate Bessel functions is that they have a branching structure. What that means is that if I consider this function as a function of the s variables, indexed by the lambda variables, then I can view it as a function of just the last s, namely s_n, and ask whether I can represent it as an integral over functions of the other s's. It turns out that I should integrate over a smaller multivariate Bessel function with this exponential weight and then this branching factor. The key point is that the branching factor depends only on this lambda and on the mu which interlaces with lambda. If you've seen branching of Schur functions or Jack polynomials, this is actually just a scaling limit of the Jack branching rule.
The second point is that these multivariate Bessel functions can be related to certain orbital integrals, and this is where the relation to random matrix theory comes in. At beta equals two, it's well known that the multivariate Bessel function and the HCIZ integral are the same thing, and the HCIZ integral has a determinantal form; this allows you to relate the beta equals two multivariate Bessel function to a reparameterization of a Schur function. Now, at beta equals one, it's not true that the multivariate Bessel function has an explicit form like this. However, it's possible to represent it as an orbital integral over the orthogonal group, and that takes a form essentially identical to the HCIZ integral. These follow from known results. Oh yeah, sorry — in these two integrals, lambda and s are a priori vectors of length n; when I take the trace, I take lambda and s to be the diagonal matrices whose entries are those vectors. I presume you can create something similar, but I haven't looked into it. Okay, so the first result on these eigenvalues is that the generalized beta-Wishart ensemble with certain parameters is exactly a measure which can be expressed in terms of the branching structure of the multivariate Bessel functions. I'll call it a multivariate Bessel measure, in analogy with a Schur measure. What that means is, first, that the single-level density is simply the product of two multivariate Bessel functions together with some weights. Second, the level-to-level branching of this measure is given simply by the branching of the multivariate Bessel function. So if I want the joint density of the multilevel ensemble, what I should do is open up the integral formula for the first multivariate Bessel function, and what I get is exactly this joint density.
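The beta = 2 identification can be sanity-checked numerically: for n = 2, the HCIZ integral over U(2) should match the determinantal Itzykson-Zuber form det(e^{a_i b_j}) / (Delta(a) Delta(b)). The normalization constant, a product of factorials equal to 1 for n = 2, is my assumption about the convention; a Monte Carlo sketch over Haar-random unitaries:

```python
import numpy as np

def haar_unitary(n, rng):
    """Haar-distributed U(n): QR of a complex Ginibre matrix, with the
    standard phase correction taken from the diagonal of R."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

rng = np.random.default_rng(3)
a, b = np.array([1.0, 0.0]), np.array([0.5, 0.0])   # illustrative spectra
A, B = np.diag(a), np.diag(b)

# Monte Carlo estimate of the HCIZ integral int exp(tr(A U B U*)) dU
mc = np.mean([np.exp(np.trace(A @ U @ B @ U.conj().T).real)
              for U in (haar_unitary(2, rng) for _ in range(20000))])

# Itzykson-Zuber determinantal formula; the constant prod_{k<n} k! is 1
# for n = 2 in this convention (my assumption about the normalization)
exact = np.linalg.det(np.exp(np.outer(a, b))) / ((a[0] - a[1]) * (b[0] - b[1]))
```

For this choice of a and b the exact value reduces to (e^{1/2} - 1)/(1/2), which the Monte Carlo average should reproduce to a couple of decimal places.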
For beta equals two, this is simply a result about Schur measures; it was conjectured by Borodin and Péché a while ago and proven by these authors. I don't want to dwell too long on the proof, but essentially what it does is compare to this standard setting and then use the fact that multivariate Bessel functions are related to these orbital integrals. In some sense, what this theorem is saying is that the branching, or restriction, structure of multilevel eigenvalues coincides with the branching structure of these multivariate Bessel functions, and you might expect that because of the relation to orbital integrals. Okay, now I want to talk about the Jacobi setting, and for that I need to introduce what are called the Heckman-Opdam hypergeometric functions, which you should think of as the trigonometric analog of the multivariate Bessel functions. Again, to be concise I've given a very explicit definition: you take the expression for the multivariate Bessel function and change every term to its trigonometric analog. So I replace the Vandermonde by the trigonometric Vandermonde, and here I do the same thing; again, there's an exponential weight in an integral over the Gelfand-Tsetlin polytope. Now, these functions were also studied in representation theory, and they come roughly from the trigonometric Calogero-Moser system. This specific integral formula was discovered by Borodin and Gorin in their study of limits of the Macdonald branching rule, and you can also prove it via quantum groups. Okay, so these Heckman-Opdam functions have a structure quite similar to the multivariate Bessel functions, and I want to emphasize three things. First, there is a similar branching structure: this formula expresses a larger Heckman-Opdam function as an integral over smaller ones, with a specific exponential factor and then a branching factor which depends only on the indices.
Now, something which is a little different is that there's a principal specialization. What that means is that if I take the argument s to be a specific arithmetic progression whose common difference depends on theta, then these functions collapse and take a very special form. And finally, formalizing what I said earlier, there is a limit transition from the Heckman-Opdam functions to the multivariate Bessel functions. Okay, so the result in this setting is that the branching structure of the beta-Jacobi ensemble corresponds to the branching structure of a principally specialized Heckman-Opdam measure. What that means is that if I pick these special parameters, then the single-level density of the multilevel beta-Jacobi ensemble is given by a product of Heckman-Opdam functions with a certain weight, and again the multilevel structure is given by the branching: I open up the integral in the first Heckman-Opdam function. Okay, so in this case, because the beta-Jacobi ensemble has no free parameters, I'm forced to choose; the correct choice turns out to be these principal specializations of the Heckman-Opdam functions. This was conjectured by Borodin and Gorin, and it allows us to link some of their work on these Heckman-Opdam measures to an interpretation in terms of random matrix ensembles. What they did was study the asymptotics of these measures using techniques from Macdonald processes, and they showed certain Gaussian free field fluctuations. So this result translates that work into random matrix results. Okay, and I want to talk very briefly about the proof of the statement. For multivariate Bessel functions, one might expect a relation to random matrices because of the relation to orbital integrals. For Heckman-Opdam functions, it's a bit less clear, and I actually find this proof a bit mysterious.
The way it goes is that first, remember, you're considering the eigenvalues of this matrix. So what I do is first condition on the eigenvalues of X, because the branching is independent of that. If I make this conditioning, it turns out the resulting eigenvalue distribution is algebraically related to a multi-level beta-Wishart distribution with certain parameters depending on the eigenvalues of X: I should pick pi equal to lambda, the eigenvalues of X, and pi hat equal to zero. Now, we showed earlier that the level-to-level transitions of that process are Markov with explicit transition densities involving multivariate Bessel functions. So the level-to-level transitions of the eigenvalues of this matrix, conditioned on the eigenvalues of X* X, are Markov with some transitions, and the resulting process is a mixture of Markov processes. If you compute, you can actually show that they remain Markov and have the required transitions. I don't really have a good conceptual explanation for why that's the case, but somehow you can translate the relation between multivariate Bessel functions and random matrices into this relation between principally specialized Heckman-Opdam functions and random matrices. And I want to say, it might seem natural to try to add parameters on the Heckman-Opdam side; the Heckman-Opdam functions do admit parameters as well, but I tried for a bit and I don't think that corresponds to any random matrix ensemble I can construct. Okay, so this part of the talk gave a relation between these two families of special functions and certain families of random matrices. In the rest of the talk, I want to discuss a dynamical way of realizing these models, and this will apply only in the case beta equals two, so only in the complex case.
Okay, so what I'll do is create a particle system with local interactions such that the fixed-time marginals correspond to the multi-level eigenvalue processes I've defined. To do that, I want to start with a dynamical model for these eigenvalues at a single level, which will be a generalization of Dyson Brownian motion. Just to fix notation: if I take a Brownian motion in the space of n-by-n Hermitian matrices, then Dyson Brownian motion is the resulting process on the eigenvalues of that system. It solves the following SDE, where I have a Brownian driving term and then a strong repulsion given by this term, and I want to say that you can view this as a Doob h-transform of a system of n independent Brownian motions. Okay, so in the Hermitian case, the analog of the Wishart and Jacobi ensembles is what's called the GUE corners ensemble. That corresponds to the joint law of the eigenvalues of the k-by-k principal submatrices. Previously we had the squared singular values of successive slices of a rectangular matrix; here the correct analog is eigenvalues of principal submatrices. Again, these have their domain in a certain Gelfand-Tsetlin cone, which enforces an interlacing condition, and secondly, they have a Gibbs measure: if I fix the top row, then the distribution of the rows below is uniform subject to the interlacing constraints. Okay, so in the mid-2000s, Jon Warren constructed a process whose single-level dynamics are Dyson Brownian motion and whose fixed-time marginals are the GUE corners process. Here's how you construct that process. First, you take a stochastic differential equation with reflection: I have a triangular array of particles, and the particle at level k and index i has a Brownian motion driving term and then two local time terms which enforce reflection off the particles at the level below.
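The GUE corners ensemble is easy to generate directly, and the interlacing of eigenvalues of principal submatrices is a deterministic consequence of the Cauchy interlacing theorem, not just a distributional statement. A minimal numpy check:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
# GUE-type matrix: Hermitization of a complex Ginibre matrix
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (G + G.conj().T) / 2.0

# GUE corners: eigenvalues of every k-by-k principal submatrix, decreasing
corners = [np.sort(np.linalg.eigvalsh(H[:k, :k]))[::-1] for k in range(1, n + 1)]
```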
So if I look at this particle on level five, it evolves freely as a Brownian motion except that it reflects off the two particles below it. Okay, so Warren proves first that this system of stochastic differential equations admits a solution: there's a unique weak solution when you have a Gibbs initial condition, and in particular you can start it from zero with an explicit entrance law. In this case, he does this by transforming an array of Brownian motions through a deterministic Skorokhod mapping. And as I said earlier, he shows that if I project to a single level, then the process actually evolves in a Markovian way and coincides with Dyson Brownian motion. This might be a bit surprising: you have a triangular array of particles that are all interacting, but somehow you can cover up the bottom layers and still get something Markovian. He also showed that this process preserves Gibbs measures, which in particular means that the fixed-time distribution of the eigenvalues is just the GUE corners process. Now, you might be tempted to think that this process just corresponds to taking a Brownian motion in the space of complex Hermitian matrices and at each time looking at the eigenvalues of its principal submatrices, because that would also have these two properties. But in fact it does not coincide: it was shown by Adler, Nordenstam, and van Moerbeke that if you look at that matrix process and project to two adjacent levels, it still evolves in a Markovian way, but if you project to three adjacent levels, it does not, and in particular, when you take all levels, it's not Markovian. So this process is a little different. Okay, so since Warren's work, there has been a lot of work in this area and a lot of generalizations.
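A crude way to see Warren's dynamics on a computer is an Euler scheme in which the local-time reflection terms are replaced by clipping each particle between its lower-level neighbours after every step. This is only a sketch of the reflected SDE, not Warren's actual construction, but by design it preserves the interlacing exactly:

```python
import numpy as np

def warren_step(levels, dt, rng):
    """One crude Euler step of Warren's reflected dynamics.  Reflection off
    the level below is implemented by clipping, which is only a first-order
    stand-in for the local-time terms in the actual SDE.  `levels[k]` holds
    the k + 1 particles of level k + 1 in increasing order; lower levels
    are updated first, then each particle is pushed back between its two
    lower-level neighbours."""
    new = []
    for k, row in enumerate(levels):
        moved = row + np.sqrt(dt) * rng.standard_normal(len(row))
        if k > 0:
            below = new[k - 1]
            lower = np.concatenate(([-np.inf], below))
            upper = np.concatenate((below, [np.inf]))
            moved = np.clip(moved, lower, upper)
        new.append(moved)
    return new

rng = np.random.default_rng(5)
levels = [np.zeros(k) for k in range(1, 5)]   # start from the zero entrance law
for _ in range(1000):
    levels = warren_step(levels, 1e-3, rng)
```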
So in the Brownian setting, there's work by several authors; it's been generalized to general beta by Gorin and Shkolnikov, and of course there's a lot of work on discrete models which exhibit this type of behavior. Now, there are essentially two types of proof for this type of statement. One, which is what Warren did, is to compute the semigroups explicitly and show an intertwining property. The second, more recent approach, due to Pal and Shkolnikov, is a more differential approach: essentially, work with the Markov generators and avoid the explicit formulas for the semigroups. For the rest of the talk, I'll discuss how to generalize these results to processes which come from the Laguerre and Jacobi random matrix ensembles. In this case, I'll take the Pal-Shkolnikov approach, but both steps become more complicated: it's more difficult to show existence, and also their approach requires some generalization. Okay, so now let me describe the processes. First, I need a replacement for Dyson Brownian motion in the Wishart setting, and here's how to construct it. Take a fixed n-by-p matrix of complex Brownian motions and look at the process of its squared singular values, in this order. It was shown by König and O'Connell that this process evolves in a Markovian way and solves the following stochastic differential equation: I have a squared Bessel term, two times the square root of x_l against the driving Brownian motion, and then again a strong repulsion term, now with a linear numerator. It shouldn't be so surprising that squared Bessel processes occur, because if I set p equal to one, then x_n literally is a squared Bessel process, so one might expect squared Bessel processes to appear here.
And similarly to Dyson Brownian motion, this is simply a system of n independent squared Bessel processes conditioned to never intersect, via a Doob h-transform. Okay, in the static setting, what I want to recover is essentially the beta-Wishart corners process at beta equals two. Again, I have the interlacing property, and the eigenvalues have a Gibbs measure; in the beta equals two setting, the conditional distribution of the lower levels is uniform once I fix the top level, and that will be an important property in what follows. Okay, so the generalization of Warren's process to this setting has the following form. I take an SDE with reflection, with the same squared Bessel driving terms as in the single-level Laguerre eigenvalue process, and I replace the strong repulsion term with two local time terms which enforce reflection off the particles at the level below. So here again the domain will be a Gelfand-Tsetlin cone. Pictorially, what this process is: I start, in blue, a squared Bessel process which evolves freely at level one. Then, in black, I take two squared Bessel processes with a different dimension — the dimension of the squared Bessel process depends on the level — which again evolve freely except that they reflect off the level-one particle. At level three, I take three squared Bessel processes, again with a shifted dimension, which interlace with and reflect off the level-two particles, and I keep going. That's essentially what this stochastic differential equation with reflection is. Okay, so the result is that if I start from any Gibbs initial condition, I get a unique strong solution to this stochastic differential equation, and its projection to any fixed level coincides in law with the Laguerre eigenvalue process.
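The Laguerre analogue can be sketched the same way: replace the Brownian increments with a naive discretization of the squared Bessel term 2 sqrt(x) dW + (dim) dt, and also reflect the lowest particle of each level at zero. This is again only a clipping stand-in for the local-time reflection, and the per-level dimensions below are illustrative, not the talk's exact parameters:

```python
import numpy as np

def laguerre_warren_step(levels, dims, dt, rng):
    """One crude Euler step of the Laguerre analogue of Warren's process.
    The squared Bessel driving term 2 sqrt(x) dW + dim dt is discretized
    naively, and the local-time reflections (off the level below, and off
    zero for the lowest particle of each level) are replaced by clipping."""
    new = []
    for k, row in enumerate(levels):
        noise = 2.0 * np.sqrt(np.maximum(row, 0.0) * dt) * rng.standard_normal(len(row))
        moved = row + dims[k] * dt + noise
        below = new[k - 1] if k > 0 else np.array([])
        lower = np.concatenate(([0.0], below))      # reflected at 0 as well
        upper = np.concatenate((below, [np.inf]))
        new.append(np.clip(moved, lower, upper))
    return new

rng = np.random.default_rng(6)
dims = [2.0, 3.0, 4.0]            # illustrative level-dependent dimensions
levels = [np.zeros(k) for k in range(1, 4)]
for _ in range(800):
    levels = laguerre_warren_step(levels, dims, 1e-3, rng)
```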
That's the first point. The second point is that if I start with a Gibbs distribution and run this evolution, what I get remains a Gibbs distribution. And finally, I can start it from zero with an explicit entrance law, which shows that the fixed-time distribution started from this entrance law is simply the multilevel Laguerre ensemble. Okay, I want to point out one interesting feature: if you project to the leftmost or rightmost particle on each level, those alone evolve in a Markovian way. On the left edge, what you see is simply a sequence of squared Bessel processes, each of which bumps into the squared Bessel process at the level below. So this is a particle system with local interactions — each particle only interacts with the particle below it — yet the leftmost particle reproduces the smallest eigenvalue of a p-by-p Wishart matrix. So you obtain a very strongly coupled quantity from a system with only local interactions. Okay, for the Jacobi setting, I want to talk about a different dynamical construction for the Jacobi ensemble. The right object to consider turns out to be the following. Take a Brownian motion on the space of n-by-n unitary matrices, cut out a rectangular corner, the top n-by-p corner, and consider its squared singular values, which lie between zero and one. It was shown by Doumerc that these evolve in the following way: the first three terms correspond to a univariate Jacobi process, a certain one-dimensional stochastic process related to the classical Jacobi polynomials, and then I have a strong repulsion term which is similar to what we saw before, but with a quadratic numerator.
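The univariate Jacobi diffusion mentioned here can be simulated directly. I use the common normalization dX = sqrt(2X(1-X)) dW + (a - (a+b)X) dt, whose invariant measure is Beta(a, b) on [0, 1]; the talk does not fix a convention, so this normalization is an assumption, and the clipping at the endpoints is a crude stand-in for the true boundary behaviour:

```python
import numpy as np

def jacobi_paths(a, b, x0, dt, steps, n_paths, rng):
    """Euler scheme for the univariate Jacobi diffusion
        dX = sqrt(2 X (1 - X)) dW + (a - (a + b) X) dt,
    a common normalization (an assumption here; conventions vary) whose
    invariant measure is Beta(a, b) on [0, 1].  Each step is clipped back
    into [0, 1] as a crude stand-in for the boundary behaviour."""
    x = np.full(n_paths, x0)
    for _ in range(steps):
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        x = x + (a - (a + b) * x) * dt + np.sqrt(2.0 * x * (1.0 - x)) * dw
        x = np.clip(x, 0.0, 1.0)
    return x

rng = np.random.default_rng(7)
# 400 paths, run long enough to relax toward the Beta(2, 2) equilibrium
finals = jacobi_paths(2.0, 2.0, 0.2, 1e-3, 5000, 400, rng)
```

With a = b = 2, the equilibrium mean is a/(a+b) = 1/2, which the ensemble average approaches even though the paths start at 0.2.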
Now, the upshot is that the invariant measure of this diffusion is proportional to the standard Jacobi ensemble from random matrix theory, so this is another construction of the Jacobi ensemble. Now, here's the analog of the Warren process in this setting. I again take a triangular array of particles; at each level I have the same Jacobi generator, and then two local time terms which replace the strong repulsion. If I run the same story, what I get is that if I start from a Gibbs initial condition, which means the same thing as before, then projecting to any level recovers the single-level Jacobi eigenvalue process with these strong repulsion terms; if I start with a Gibbs distribution, the evolution preserves that property; and finally, I can start from an invariant measure which is given by a Gibbs extension of the Jacobi random matrix density. So this shows that the fixed-time distribution of this process recovers the Jacobi corners process. Okay, so for the rest of the talk, I want to explain how to prove such statements. Essentially, there are two difficulties. First, in this setting you have to show that the stochastic differential equation with reflection has a solution, and second, you need to prove some properties of that solution. So let me first talk about the first one. Why is this even challenging? Well, if I look at the stochastic differential equation with reflection for the Laguerre Warren process, there are a few features. First, the diffusion coefficient is singular, meaning it's not Lipschitz. Second, the domain is the Gelfand-Tsetlin cone; that's a polyhedral cone, and in particular it doesn't have a smooth boundary, which is not good if you study stochastic differential equations with reflection. And the last point is that the reflection off the boundary is oblique. Let me draw a picture to illustrate what that means.
So let's imagine there are two particles, with coordinates X and Y. Particle Y moves freely, and particle X might run into particle Y and reflect off it. Let's draw this in the XY plane, where X is less than Y: particle X goes along, collides with Y, and reflects back. As you can see, that's not a standard reflection — it's not normal to the boundary of the domain — and so it's called oblique reflection. Okay, so each of these features makes solving this type of SDE a bit more difficult. However, in this setting we're able to get around these difficulties by using the recursive structure of the definition: we reduce to the 1D setting, where there are some pretty powerful results. What we do is go level by level and construct the particles at level k. If you look at the structure of this process, a particle at level k only depends on the particles below and around it, so this corresponds to the setting of a one-dimensional stochastic differential equation with reflection at time-dependent boundaries. In general, I have a process dX with some diffusion and drift terms and then some pushing terms which enforce that it always stays between a lower and an upper boundary. In this case, there are strong theorems which parallel the usual existence and uniqueness theorems for SDEs: as long as I have some Lipschitz or boundedness conditions — Lipschitz on the drift and the Yamada-Watanabe condition on the diffusion — I have strong existence and uniqueness. And this is exactly the condition that holds for this type of squared Bessel diffusion. So in fact we're able to construct the process in a level-by-level fashion, in a way which doesn't require the Gibbs initial condition at all.
To keep using the 1D criterion, what we need to show is that there are no triple collisions in this system: we can't have two particles at a lower level pinch a particle at a higher level between them. This is ensured by techniques of Andrey Sarantsev, who has studied triple collisions for reflecting Brownian motions in great detail. And so we're actually able to show that for any initial condition there is a strong solution to these SDEs with reflection, and in particular for Gibbs initial conditions.