All right, thank you, Ivan, and the rest of the organizers as well. I'm very happy to be able to be here and give a talk; it's a great conference. I'll talk about some recent joint work with Dong Wang, who is here, on a model that can be thought of as a one-dimensional model of fermions, but also as a finite-temperature version of the GUE, the Gaussian Unitary Ensemble. Because it's a finite-temperature version of the GUE, I will first introduce the GUE. Since we're on day five of a random matrices conference it's maybe not necessary, but okay: this is the Gaussian Unitary Ensemble of Hermitian matrices. It can be written as a measure on Hermitian matrices: there's a flat measure on the entries of the matrix and a Gaussian term involving the trace of the square of the matrix. That's probably familiar to most people, and of course "unitary" means that this ensemble is invariant under unitary conjugation. We know that we can separate eigenvalues from eigenvectors, integrate out the eigenvectors, and end up with a nice density on the eigenvalues, which is given here.
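To make this concrete, here is a minimal numerical sketch (my own illustration, not from the talk; the scaling by 1/sqrt(n) is one common convention) of sampling a GUE matrix and checking that its spectrum concentrates on [-2, 2], as the semicircle law predicts:

```python
import numpy as np

def sample_gue(n, rng):
    # Hermitian matrix built by symmetrizing a complex Gaussian matrix;
    # dividing by sqrt(n) puts the limiting spectrum on [-2, 2]
    a = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    h = (a + a.conj().T) / 2
    return h / np.sqrt(n)

rng = np.random.default_rng(0)
eigs = np.linalg.eigvalsh(sample_gue(400, rng))
# With this scaling the eigenvalues fill out (approximately) the interval [-2, 2]
print(eigs.min(), eigs.max())
```

This just illustrates the global picture; the talk's interest is in what happens locally.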
We've seen it in several talks this week already. The GUE and related models of complex random matrices have the nice structure of being a determinantal point process, which means all of the correlation functions can be written as the determinant of a matrix whose entries are given by a kernel function. In the case of the GUE, the kernel function is the Christoffel-Darboux kernel for the Hermite polynomials, the classical Hermite polynomials, and there's the definition of the Hermite polynomials: these are orthogonal polynomials with respect to the Gaussian weight. These are well-known things. The one-point function is the global density of eigenvalues; it has a limit as n goes to infinity, which is given by the semicircle law. Locally, if we take local scaling limits in the bulk, then this Christoffel-Darboux kernel converges to the sine kernel, or, with a slightly different scaling at the edge, it converges to the Airy kernel, which involves the Airy function. Also, if we are interested in the largest eigenvalue, then if we scale properly we get the Tracy-Widom GUE distribution, which is just a Fredholm determinant involving the integral operator given by this kernel.
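The one-point function mentioned here can be computed directly as the diagonal of the projection kernel, K_n(x, x) = sum of phi_k(x)^2 over the first n orthonormal Hermite functions. A quick sketch (my own, using the standard stable three-term recurrence for Hermite functions):

```python
import numpy as np

def hermite_functions(x, n):
    # Orthonormal Hermite functions phi_0, ..., phi_{n-1} via the recurrence
    # phi_{k+1} = sqrt(2/(k+1)) x phi_k - sqrt(k/(k+1)) phi_{k-1}
    phi = np.zeros((n, x.size))
    phi[0] = np.pi ** -0.25 * np.exp(-x ** 2 / 2)
    if n > 1:
        phi[1] = np.sqrt(2.0) * x * phi[0]
    for k in range(1, n - 1):
        phi[k + 1] = np.sqrt(2.0 / (k + 1)) * x * phi[k] - np.sqrt(k / (k + 1)) * phi[k - 1]
    return phi

n = 50
x = np.linspace(-15, 15, 4001)
density = (hermite_functions(x, n) ** 2).sum(axis=0)  # one-point function K_n(x, x)
print(density.sum() * (x[1] - x[0]))  # integrates to n, the number of eigenvalues
```

After rescaling x by sqrt(2n), this density approaches the semicircle.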
Okay, so why is this interesting? Well, for 50, maybe 60 years, I guess, we've seen random matrix type statistics showing up in many different models of highly correlated statistical systems. Here are some examples: fermionic particles at low temperature; things showing up in combinatorics and conformal field theory, namely in tiling problems; in electrical engineering, where the exit times for customers in queues are given by random matrix statistics in certain cases; the arrival times of buses in a certain city in Mexico; and even in analytic number theory, the famous Montgomery conjecture about the distribution of non-trivial zeros of the Riemann zeta function. So these statistics show up a lot in correlated systems. Now, in an uncorrelated system we expect to see Poisson type statistics, or independent variables, and there's been a lot of work in the past maybe 10 or 15 years trying to understand transitions between these two types of statistics: random matrix type statistics, in which particles tend to repel each other, and Poisson type statistics, where things are independent. This is just a little cartoon: on the left we see more regular particles, which is typical of random matrix statistics, and on the right we see things that are bunched together, with big gaps, which is typical of independent particles. So what about in the middle? What's going on there? Lots of people have tried to approach this question in different ways, so here are some examples of models, and even things observed in real life, which seem to demonstrate this interpolation between random matrix and Poisson statistics.
One example is random banded matrices. Certainly, if you have a diagonal matrix with independent entries and you look at its spectrum, the spectrum is independent; but if you have a full matrix, then you'll see random matrix statistics for the eigenvalues. So there should be some transition between the two. I had a few names written after this, but I took them off because I can't possibly include everybody; I'm sure many people in this room have worked on this type of problem. A related model is random Schrödinger operators; those should also exhibit a similar transition. Another example is thinned matrix models. If you take, say, the eigenvalues of a random matrix and you throw out each of them independently with probability p, then if p gets pretty close to one you're throwing out a lot of them, the correlation between the remaining ones will disappear, and you'll get some kind of uncorrelated system. So if you take this p to approach one, you should see a transition there. As far as I know, that started with Bohigas and Pato in the early 2000s, and then more recently (okay, I'm still going to miss names, but at least I'll include some people in the room) Bothner, Deift, Its, and Krasovsky; Bothner and Buckingham; and others have been working on this type of problem. Here's another example, which is very recent. It's not a model, it's just an observation of Trogdon and Jagannath: they looked at the arrival times of trains on the MTA in New York City, on the subway system, and they noticed that sometimes the trains arrive similarly to random matrix statistics, and sometimes it's more Poissonian, and they have some kind of guess as to what separates the two. So it's very interesting. Finally, the subject of this talk is models of free fermions at finite temperature.
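As a quick aside, the thinning operation just described is easy to simulate; a sketch of the definition only (my own illustration, not of the full spacing analysis):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.standard_normal((200, 200)) + 1j * rng.standard_normal((200, 200))
eigs = np.linalg.eigvalsh((a + a.conj().T) / 2)  # a GUE-type spectrum

p = 0.9  # thinning: discard each eigenvalue independently with probability p
kept = eigs[rng.random(eigs.size) > p]
print(kept.size)  # roughly (1 - p) * 200 survivors; as p -> 1 the survivors
                  # decorrelate and their local statistics approach Poisson
```

The interesting regime is exactly when p tends to one at a rate tied to the matrix size.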
I'll say more about these works now. In 1994 there was a paper of Moshe, Neuberger, and Shapiro in which they relate the free fermion model to a random matrix model, and for a certain version of that model Johansson studied all of the local statistics asymptotically, about ten years ago. Then very recently a group of physicists in France, at least two of whom are here, have been looking at this model; their names will come up again as DDMS, for Dean, Le Doussal, Majumdar, and Schehr. They've been doing really good work in this direction. So let me define this model of free fermions at finite temperature. It's defined, originally, from quantum mechanics: here is the Hamiltonian of a quantum particle in a quadratic potential, and we'll fix some of these parameters so that the Hamiltonian becomes very simple. In this case the eigenfunctions of the Hamiltonian are exactly these Hermite functions, which we saw back in the GUE. And we're going to consider not just one particle but n particles. So let's say I have n identical fermions, all in this harmonic potential. They have all of these eigenstates, and the eigenstates are indexed by k1, k2, through kn, which are integers. I think I had it on the last slide; oh, maybe I didn't, sorry. The eigenvalues corresponding to these eigenfunctions are just integers plus a half, so each of the k's tells me which energy level corresponds to each of the particles, and the sum is the total energy. For a fixed energy, given by the sum of the k's plus n over 2, the corresponding eigenfunction is given by the Slater determinant, which is here. In particular, this is the wave function, so if I want to know the probability density for these particles, it's given by the square of the wave function. So, in particular, suppose k1 is 1, k2 is 2, etc.
If these are just the first n integers, then the square of the wave function is exactly the density for the GUE, so the ground state for this free fermion model is exactly the GUE. But I want to consider that any eigenstate is possible. Typically the probability of a certain eigenstate is given by the exponential of minus the energy divided by the temperature; there's probably a constant missing there, but I set the constant equal to 1. So if I just write q = e^{-1/T}, where T is the temperature, then I can write the probability density for the free fermion model as a sum over all eigenstates of the Slater determinant squared, weighted by q to the total energy. So that's the model we're looking at, and I think that's all of the quantum mechanics that I know, so from now on I'll just stick with this density function, and that's what I'll work with. Okay, so that's the model of free fermions. Now, I mentioned that Moshe, Neuberger, and Shapiro related this model to a random matrix model, and to describe the matrix model, which we call the MNS matrix model, we can just write a formula for the probability measure on Hermitian matrices, given by this. There's a normalizing constant, and this part looks exactly like the GUE; then there's another term here which takes my Hermitian matrix H, hits H with a unitary matrix U, takes the exponential of a trace, and integrates over all the unitary matrices.
So this kind of smears H out a little bit, and we integrate over the Haar measure on unitary matrices. If we look at this formula, there's a parameter b here, and it's clear that if b is equal to zero then this whole term disappears (it's just a constant), and this is exactly the GUE. On the other hand, what happens when b approaches infinity, when b gets big? It's not that obvious, I don't think, from this formulation, but you can actually do this integral exactly: it's the Harish-Chandra-Itzykson-Zuber integral. So you can do the integration, again separate eigenvalues from eigenvectors, and write a very nice formula for the density of eigenvalues. Here it is: there's a product of independent Gaussians, and then for the interaction, instead of a Vandermonde-type interaction, you get a matrix whose entries are exponentials involving minus b and the differences of the x's. So now I can see what happens when b goes to infinity: the entries of this matrix become just a delta function on xi equal to xj, so it becomes a diagonal matrix; in other words, the particles become independent. So as b goes to infinity we get independent particles, and as b goes to zero we get the GUE. So this is a very natural model for studying the transition from random matrix statistics to Poisson statistics. Okay, here's one more model, which is also natural. Consider n Brownian bridges: that means I take Brownian motions which are conditioned to start and end at the same points. Let's call the starting and ending points x1 through xn, and I'll let them evolve over a time T. I'll also take the starting and ending points to be independent Gaussian random variables.
Okay, so I just take n independent Gaussians, and that's where I start my Brownian bridges. Furthermore, I condition the bridges not to intersect over the whole time T. Now I consider the distribution of the starting points after this conditioning that they don't intersect. If I took my sample of independent Gaussians and any two of these points were really close together, then it's kind of likely the bridges will intersect, so then we start over and try again. If T is small, then it's unlikely that the particles will ever intersect; they're not going to be right on top of each other. So for small T these starting points are basically independent of one another. But if T gets big, then if we sample the x's and they're too close together, they have a good chance to intersect over the time T, and so after I condition on non-intersection these starting points really start to repel each other: it becomes more likely that they're farther apart, and in that case we get random matrix statistics. So this total time T also somehow interpolates between Poisson and random matrix statistics in this model. This is the fact that was noticed by Johansson ten years ago or so: the distribution of these starting points for non-intersecting Brownian bridges is exactly the same as in the MNS model, with a simple relation between the b in the MNS model and the total time T of the non-intersecting Brownian bridges. And this is a fact that was noticed by Moshe, Neuberger, and Shapiro: the MNS model is equivalent to the free fermion at finite temperature model in the quadratic well. Remember, q was e to the minus one over temperature; if we take b to be a simple function of that q, then we get exactly the same density of particles. So we can really think about this as a random matrix model or as a model of free fermions.
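The conditioning-by-rejection picture just described can be simulated directly. A minimal sketch (my own discretization; n, T, and the number of time steps are arbitrary choices):

```python
import numpy as np

def sample_start_points(n, T, steps, rng):
    # Rejection sampling: draw i.i.d. Gaussian start/end points, build Brownian
    # bridges on [0, T] returning to the same points, and keep the sample only
    # if the paths never change their ordering (i.e., never intersect).
    while True:
        x = np.sort(rng.standard_normal(n))
        dw = rng.standard_normal((steps, n)) * np.sqrt(T / steps)
        w = np.cumsum(dw, axis=0)
        frac = (np.arange(1, steps + 1) / steps)[:, None]
        paths = x + w - frac * w[-1]            # bridges: start and end at x
        if np.all(np.diff(paths, axis=1) > 0):  # ordering preserved at all times
            return x

rng = np.random.default_rng(2)
pts = sample_start_points(3, 0.05, 50, rng)
print(pts)  # conditioned start points; for small T they are nearly independent Gaussians
```

For large T the rejection step kicks in strongly and the accepted points visibly repel.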
They're exactly the same, and I like to think about random matrices, so from now on I'll be talking about the MNS matrix model. What can you do in terms of analysis of this model? Well, unlike the GUE, the MNS model is not determinantal; it's a little more complicated. However, if we take a grand canonical version of the MNS model, then we do have something determinantal. I think Johansson calls this a deformed GUE in some of his papers. The idea is that instead of taking the matrix size, or the number of particles, to be fixed, we let the number of particles be random with a certain distribution, which is given here; it's like a Poisson distribution. Then we end up with something that is determinantal, and the correlation kernel, instead of being a finite sum of the Hermite functions, is given by an infinite weighted sum of the Hermite functions. There's a lambda here which controls the average number of particles in the ensemble. So what Johansson did about ten years ago was to look at this grand canonical version and take asymptotics as lambda goes to infinity in an appropriate way, so that the average number of particles goes to infinity, and he found the local statistics, which I'll describe in a moment. The results of Johansson are exactly the same as the results that Dong and I found, and that DDMS, that is, Dean, Le Doussal, Majumdar, and Schehr, found for the canonical ensemble, that is, when the number of particles is fixed. So let me talk a little bit about the canonical ensemble, and then I'll tell you the results for the local statistics. As we said, the MNS model with the number of particles fixed is not a determinantal point process; however, it's related to one, so the statistics can still be described using determinants, just not directly.
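A hedged sketch of this weighted-sum kernel (my own illustration; I assume Fermi-Dirac weights lambda*q^k / (1 + lambda*q^k), which is one common convention for the deformed GUE / grand canonical free fermion kernel). A sanity check: the diagonal of the kernel integrates to the mean particle number, which is the sum of the weights.

```python
import numpy as np

def deformed_kernel_diag(x, lam, q, kmax):
    # K(x, x) = sum_k w_k phi_k(x)^2, truncated at kmax, with assumed
    # Fermi-Dirac weights w_k = lam q^k / (1 + lam q^k)
    phi = np.zeros((kmax, x.size))
    phi[0] = np.pi ** -0.25 * np.exp(-x ** 2 / 2)
    phi[1] = np.sqrt(2.0) * x * phi[0]
    for k in range(1, kmax - 1):
        phi[k + 1] = np.sqrt(2.0 / (k + 1)) * x * phi[k] - np.sqrt(k / (k + 1)) * phi[k - 1]
    w = lam * q ** np.arange(kmax) / (1 + lam * q ** np.arange(kmax))
    return (w[:, None] * phi ** 2).sum(axis=0), w

x = np.linspace(-10, 10, 2001)
diag, w = deformed_kernel_diag(x, lam=5.0, q=0.8, kmax=200)
print(diag.sum() * (x[1] - x[0]), w.sum())  # both ~ mean particle number
```

Here lambda plays exactly the role described above: increasing it raises the weights and hence the average number of particles.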
So here's a formula for the gap probability for the canonical version of the MNS model, where the number of particles is exactly n; I wrote P sub n, where n stands for the total number of particles in the system. If I pick a measurable set A on the real line and I want to know the probability that there are no particles at all in that set, that's the gap probability, and I can write it like this. Here's a Fredholm determinant of one minus an operator K times the indicator function of the set A, which is what we would usually see in GUE-type models for the gap probability. But now the operator depends on a parameter z, and I have to integrate against a given function of z over a circle around zero in the complex plane. So there's a determinant, but I can't work directly with the determinant; I have to first integrate. And here's the formula for the operator K. You'll notice it's exactly the same as the kernel which shows up in the grand canonical ensemble, just with lambda replaced by z: somehow I'm taking that lambda and integrating over a contour in the complex plane. So that's a nice exact formula for the gap probability. What about correlation functions? It's very similar. Dong and I found this formula, and then we noticed that six months earlier DDMS had the same formula in their paper, which we had somehow missed, so that was the first result, I guess. But it's exactly the same: I take exactly the determinant of the kernel function that would give the correlation function for a determinantal point process, multiply by the same function of z, and then integrate over the same contour in the complex plane. So those are very nice formulas, which really reminded us of formulas we saw in a lot of work in the past ten years, but maybe first, for me, in the paper of
Borodin and Corwin on Macdonald processes; it's a big paper. I'll try to come back to that connection at the end of the talk, but before I do, let me talk a little bit about using these formulas for asymptotic analysis. So these are exact formulas, and we want to take limits as n goes to infinity. It doesn't look too bad; the main difficulty is that I'm going to want to scale the temperature to infinity. If the temperature is fixed, then basically the asymptotic statistics should be the same as for the GUE, so I'm going to scale the temperature close to infinity, which means I scale q close to one. When q gets close to one, this kernel has singularities on the negative real axis, and as q gets close to one these singularities accumulate, so we can't really avoid getting close to the singularities, which means we have to control this determinant carefully. It's not so bad, but you just have to be careful. So what are the results? As we said, if q is fixed, that is, if the temperature is fixed, then I can ask about the global density of particles, and it converges to a semicircle after rescaling. What if q is getting close to one? If I scale q so that it is at distance roughly one over n from one, so I can write q as e to the minus some constant c divided by n, then we have a different global density.
It's given by the polylogarithm function, some special function; you can look on Wikipedia to see what it is, but you can also just look at these pictures. For finite c this density is supported on the whole real line, so it's not supported on a compact set anymore, but I've plotted it for, I think, c equal to one tenth, one, a hundred, and a thousand, and you can see it's getting closer and closer to the semicircle. Okay, those are the global asymptotics. Now, what about the local asymptotics? The local asymptotics are given by determinantal processes in the limit. For the grand canonical ensemble this result was obtained by Johansson, for the canonical ensemble by DDMS last year, and then Dong and I very recently were able to do this asymptotic analysis. So again, if q is fixed, if the temperature is fixed, and I ask about the location of the largest particle, it's governed by the GUE Tracy-Widom distribution. That's not too surprising. And if q is getting close to one? Now, since I'm looking at the edge, I use a slightly different scaling: I want q to be at distance of order n to the minus one third from one, so I use the scaling e to the minus c times n to the minus one third. So I still have a scaling parameter c here, and I ask: what's the distribution of the largest particle? Well, it converges to a distribution called F_cross, and F_cross is again the Fredholm determinant of some other kernel, which is given here.
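As a numerical aside, one commonly written form of a crossover kernel of this type is an Airy-times-Airy integral against a Fermi factor; the following is my own hedged reconstruction (the exact pre-factor in the talk may differ), checking that as c grows the kernel approaches the Airy kernel:

```python
import numpy as np
from scipy.special import airy, expit
from scipy.integrate import quad

def crossover_kernel(x, y, c):
    # Assumed form: K_c(x, y) = int Ai(x+t) Ai(y+t) sigma_c(t) dt with the
    # Fermi factor sigma_c(t) = 1/(1 + exp(-c t)); as c -> infinity sigma_c
    # tends to the indicator of t > 0 and this becomes the usual Airy kernel.
    f = lambda t: airy(x + t)[0] * airy(y + t)[0] * expit(c * t)
    val, _ = quad(f, -5, 30, points=[0], limit=200)
    return val

# For large c this should be close to the Airy kernel value
# K_Ai(0, 0) = Ai'(0)^2, using the identity int_0^inf Ai(t)^2 dt = Ai'(0)^2.
print(crossover_kernel(0, 0, 50), airy(0)[1] ** 2)
```

The opposite limit, c to zero with rescaling, is the Gumbel regime discussed next.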
So it's an integral of Airy times Airy, with this explicit pre-factor. If this explicit pre-factor weren't here and we integrated from zero, that would be exactly the Airy kernel. This distribution has appeared in several places before, namely in the weak coupling regime for the KPZ equation, in this paper of Amir, Corwin, and Quastel; that's the first connection, I guess, with KPZ theory. As I mentioned, it also appeared earlier in Johansson's analysis of the grand canonical version of this model, and in the DDMS analysis of the canonical version. I guess this kernel first appeared in the paper of Johansson, so he looked closely at it, and he found that, certainly, if you take c to infinity this converges to the GUE Tracy-Widom distribution; that's very easy to see. And it's just a little bit more calculation to see that when c goes to zero and you rescale properly, this converges to the Gumbel distribution. The Gumbel distribution is the extreme statistics distribution associated with Gaussian random variables, so it's not surprising: I'm looking at the maximum particle, the biggest particle, and if the particles become independent then I expect to see the Gumbel distribution. Okay, so that was the asymptotics at the edge, the distribution of the largest particle. What if I go into the bulk and ask for the local statistics there? Once again, in the limit I get a determinantal point process, which is not too surprising, since the grand canonical version is determinantal and we expect the canonical version to agree with it in the large n limit. So here is the result. If q is fixed, once again, and I look at the correlation kernel in the bulk, then I get exactly the correlation kernel of the sine process; again, not surprising. And what if I scale q?
So now I'm back to scaling q as e to the minus c over n, since I'm in the bulk. If I take a scaling in the bulk with q depending on n in that way, and I take a limit of the correlation function, I get a determinant again, so I still get a determinantal point process, and the kernel is written here. The definition here of the interpolating kernel K depends on a parameter a, which is in the denominator of the integrand. Now, in my formula for the correlation function I'm plugging in a equal to e to the c times x squared, divided by e to the c minus 1. So this kernel depends on c, certainly, but it also depends on x, the point in the bulk at which I've zoomed in. This was a little bit surprising to us, because this is actually a random matrix model, an MNS random matrix model, and we expect to see universality in the local statistics. There is a universal form, of course, but the local kernel does depend on exactly where you've zoomed in. You can see it does interpolate between a sine kernel and a Poisson process. In Johansson's earlier work on the grand canonical ensemble, he only considered x equal to zero, zooming in close to the origin. So, as I just said, the limiting kernel depends on the location x. Then, well, we didn't find it ourselves: it was pointed out to us by Grégory Schehr that the DDMS group had actually found something similar, but they were looking at a much more general model, in general dimension and with a general potential, so we didn't quite catch it; we didn't translate their result to see that they had found something similar, this location dependence, or, in their case, it's actually a potential dependence. So I can give a quick sketch of the proof of our formula for the gap probability; that's maybe instructive as to how this stuff works.
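Before the sketch, a brief numerical aside: Fredholm determinants of the kind appearing in the gap-probability formula can be evaluated by Gauss-Legendre quadrature (Bornemann's method). This is a self-contained illustration applied to the sine kernel as a stand-in, not to the MNS kernel itself:

```python
import numpy as np

def fredholm_det(kernel, a, b, m=40):
    # Nystrom approximation of det(I - K) on [a, b]: the determinant of the
    # identity minus the kernel sampled at Gauss-Legendre nodes, symmetrized
    # with square roots of the quadrature weights.
    x, w = np.polynomial.legendre.leggauss(m)
    x = 0.5 * (b - a) * x + 0.5 * (b + a)   # map nodes from [-1, 1] to [a, b]
    w = 0.5 * (b - a) * w
    sw = np.sqrt(w)
    K = kernel(x[:, None], x[None, :])
    return np.linalg.det(np.eye(m) - sw[:, None] * K * sw[None, :])

# Sine kernel sin(pi(x-y))/(pi(x-y)) == np.sinc(x - y); the gap probability on
# an interval of length 0.1 (density 1) should be close to 1 - 0.1.
sine = lambda x, y: np.sinc(x - y)
print(fredholm_det(sine, 0.0, 0.1))
```

For the MNS formulas one would additionally carry out the contour integral in z around such a determinant.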
So if I'm interested in the gap probability, I want to know the probability that all n particles are in some set A. Okay, so I can just write that