Thanks a lot. Yes, it's a very long title, but there it is. It really is a great pleasure to be here in Trieste, or I should say to be back here, because there was a version of this conference six years ago, in 2013 I guess. I was a little baby at that point, just starting my PhD, and I sat in the audience; it was probably one of my first-year conferences. So it's nice to be back, and I'm very thankful to the organizers for the chance to come back and give a talk on a bit of my work. By the way, this is joint work with Andrei Martínez-Finkelshtein, from the University of Almería in Spain and also Baylor University. Part of it is already on the arXiv, and we made some improvements that we might post as an update, or as a new paper, who knows. I'll say more about that along the way.

OK, to get started, let me pose a question which is super, super basic; it already appeared this morning, for instance in Alice's lecture. Take your favorite random matrix M of size N, with its eigenvalues, and look at the eigenvalue counting measure: 1/N times the sum of delta masses at the eigenvalues. As N goes to infinity you have more and more eigenvalues, and integrating this measure over a fixed interval counts the proportion of eigenvalues in that interval. We are interested in the large-N limit of this random measure. There is also a more deterministic side of the question, if you want to put it that way. Instead of the random measure (and the right-hand side really is a random measure, because the eigenvalues are random), you can average: take the expectation of the characteristic polynomial of the matrix. This P_N is now a deterministic polynomial, and you can look at the counting measure of its zeros and ask about the limit, as N goes to infinity, of this deterministic measure. So on one side we have the counting measure of deterministic zeros, and on the other the counting measure of random eigenvalues. Of course, saying that the two limits are the same is a calculation one has to do model by model, but it turns out that in many models where the eigenvalues live on the real line it is true: the limiting measure of the eigenvalues is the same as the limiting measure of the zeros of the average characteristic polynomial. That will be the case in this talk, so most of the time I will focus on the counting measure of the zeros, and only at the very end will I comment on the eigenvalues.

As a warm-up, let's look at one model which by now is classical: the Hermitian random matrix model. Many of you know it, and many of you also saw Alice's lecture today, so I have to thank her for introducing many of these objects. Here we consider random Hermitian matrices with distribution of the form exp(−N Tr V(M)), where V is an appropriate function; let's stick to a polynomial.
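To make these two objects concrete, here is a minimal numerical sketch, not from the talk: the Gaussian choice V(x) = x², the size N and the number of trials are all illustrative assumptions. It compares the empirical eigenvalue measure of a GUE-type matrix with the zeros of a Monte Carlo approximation of the averaged characteristic polynomial P_N(z) = E[det(zI − M)].

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 20, 4000

def gue(n):
    # Hermitian matrix with density proportional to exp(-n Tr M^2),
    # normalized so the spectrum concentrates on [-sqrt(2), sqrt(2)]
    x = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (x + x.conj().T) / (2 * np.sqrt(2 * n))

coeffs = np.zeros(N + 1)
samples = []
for _ in range(trials):
    eigs = np.linalg.eigvalsh(gue(N))
    samples.append(eigs)
    coeffs += np.poly(eigs)            # coefficients of det(zI - M)
coeffs /= trials                       # Monte Carlo estimate of P_N

zeros = np.roots(coeffs).real          # zeros of the deterministic P_N
eigs = np.concatenate(samples)         # samples of the random eigenvalues
for name, pts in (("zeros of P_N ", zeros), ("eigenvalues  ", eigs)):
    frac = np.mean((pts > -0.5) & (pts < 0.5))
    print(name, "fraction in [-0.5, 0.5]:", round(float(frac), 3))
```

Both printed fractions approximate the mass that the semicircle law on [−√2, √2] assigns to [−0.5, 0.5], which is exactly the agreement of the two limits claimed above.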
And just to make the connection: if you take V quadratic, the Gaussian case, you are exactly back to the model Alice was talking about this morning. For a general polynomial V, it is not hard to compute that the average characteristic polynomial is an orthogonal polynomial for the weight e^{−NV}. Tracing back to what Alexander Bufetov was saying this morning: in the Gaussian case he computed these to be Hermite polynomials, and for a general potential you just get orthogonal polynomials for other weights. So this polynomial is known. The limiting measure for the zeros, and for the random eigenvalues, exists; they are the same, almost surely, if we are talking about the eigenvalues. And this measure minimizes the weighted logarithmic energy on the real line. To find μ*, you look at all probability measures μ on the real line; for each one you compute the double integral of log(1/|x−y|) against μ twice, plus the single integral of the potential V against μ. You give me a probability measure, I compute this quantity, it gives me a number, and I minimize this number over all probability measures on the real line. The minimizer is exactly μ*, the measure describing the limiting eigenvalue distribution in this case.

There is another characterization of this measure, perhaps not as well used, which we will need, or at least will generalize immediately. Suppose you already know the limiting measure, and compute its Cauchy transform, sometimes also called the Stieltjes transform: as a function of z, it is analytic outside the support of μ*. It turns out that this function satisfies an algebraic equation, and because V is a polynomial, the coefficients a1 and a0 in that equation are also polynomials. So for each z you solve for ψ, you get two solutions, and one of them is exactly this analytic function attached to the limiting eigenvalue distribution. And this is some sort of characterization. On the variational side there is a unique measure minimizing the functional, so μ* is unique as a minimizer; but also, if you impose some conditions on a1 and a0, there is, in a suitable sense, a unique measure on the real line satisfying the corresponding algebraic equation. So you can think of the algebraic equation as uniquely determining your limiting eigenvalue distribution. Keep these two main objects in mind: an energy minimization that determines the eigenvalue distribution, and an algebraic curve, the so-called spectral curve, that determines the same limiting distribution. If you didn't follow any of this, let's be very concrete: for a Gaussian potential we are essentially talking about matrices whose entries are independent Gaussians, appropriately normalized.
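Here is a hedged sketch of that minimization for the illustrative choice V(x) = x², whose minimizer should be the semicircle law on [−√2, √2]: discretize two candidate probability measures and evaluate the weighted logarithmic energy, checking that the semicircle beats a uniform competitor.

```python
import numpy as np

def log_energy(x, w, V):
    # E(mu) = int int log(1/|x-y|) dmu(x) dmu(y) + int V dmu,
    # for a discrete measure with atoms x and weights w (diagonal dropped)
    d = np.abs(x[:, None] - x[None, :])
    np.fill_diagonal(d, 1.0)
    ww = np.outer(w, w)
    np.fill_diagonal(ww, 0.0)
    return -np.sum(ww * np.log(d)) + np.sum(w * V(x))

x = np.linspace(-np.sqrt(2) + 1e-3, np.sqrt(2) - 1e-3, 2000)
V = lambda t: t**2

w_semi = np.sqrt(2 - x**2); w_semi /= w_semi.sum()   # semicircle profile
w_unif = np.ones_like(x);   w_unif /= w_unif.sum()   # uniform profile

print("semicircle energy:", log_energy(x, w_semi, V))
print("uniform energy:   ", log_energy(x, w_unif, V))   # strictly larger
```

The comparison is only qualitative, since the discretization error is of the same order for both candidates, but it shows in practice what "give me a probability measure, I compute a number, I minimize" means.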
Your average characteristic polynomial is a Hermite polynomial, as I mentioned. The limiting eigenvalue distribution is the semicircle distribution on [−√2, √2]. And the algebraic equation is very explicit: a0 is just the constant 2, a1 is linear in z, you can solve for ψ very easily, recover the Stieltjes transform, and from there recover the measure itself if you want to.

OK, so now we are going to look at that same picture, those same types of results, for another matrix model, which is a very natural and apparently simple generalization: we keep the same Tr V(M), but add a perturbation, a matrix perturbation, given by −AM, where A is a deterministic matrix, which we can fix to be diagonal, because one can essentially diagonalize and reduce to that case. This matrix A is called the external source, and V, as before, is just a fixed real polynomial. For today's talk I will restrict our results to an A of a particular form, with only two distinct eigenvalues, a and −a. The symmetry in imposing a and −a is not a big deal; you can always change the potential to reduce to that case if the eigenvalues are not symmetric. What we cannot reduce away is that we allow possibly different multiplicities: a comes with multiplicity n1 and −a with multiplicity n2, and they do not have to be the same. For some of our results we believe, although we did not carry it out, that they extend to general A with more distinct eigenvalues; I might comment on that at the end. For now, our results are restricted to this situation.
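In symbols, as a reconstruction of the description above with the normalization constant Z_N left implicit, the ensemble is

```latex
dP_N(M) \;=\; \frac{1}{Z_N}\, e^{-N \operatorname{Tr}\left(V(M) - A M\right)}\, dM,
\qquad
A \;=\; \operatorname{diag}\big(\underbrace{a,\dots,a}_{n_1},\ \underbrace{-a,\dots,-a}_{n_2}\big),
\qquad n_1 + n_2 = N .
```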
So the picture you see there is exactly what we already discussed for the standard Hermitian matrix model. We start with the counting measure of the eigenvalues, or the counting measure of the zeros of the average characteristic polynomial, and we want to understand the limit, if it exists. Can we write down some variational problem that characterizes this limit? Before, for the Hermitian matrix model, you minimize a weighted logarithmic energy on the real line and you get the measure you want. On the other hand, from a more complex-analytic perspective, you can look at the Cauchy transform of your limiting measure, if it exists, and try to see whether it has some good properties; say, whether it satisfies an algebraic equation, as we saw before in the Hermitian matrix model, where, with A identically zero, you get a quadratic equation in the Cauchy transform. From that, perhaps, you want to extract local properties of your measure, or in other words classify universality classes, whatever that means. When A equals zero, again, we just recover the Hermitian matrix model that I showed you in one of the first slides, and that whole description is well known.

I have to thank several authors, many of them in the audience, for lots of things I learned from them. For instance, when the polynomial V is symmetric and the multiplicities of the external source are equal, there are several partial results in the literature; again, a very symmetric situation. You could also set one of the eigenvalues, say −a, to zero, and keep the other with small multiplicity, say one; that is a kind of small-rank perturbation of the ensemble, and it has also been studied. And when the external source is as general as you want but the potential is Gaussian, there are also several people in the audience who contributed to that. So things are known in various different setups, but as far as I could trace back, not much is known in the situation I am describing, although there are lots of natural questions with natural answers already available; just no natural proofs, let's put it this way.

Anyway, here is the picture of what we are going to talk about today. First: can you go from the variational characterization to the spectral curve, the algebraic equation? The answer is yes; we were able to do that back in 2016, and I'm not going to talk much about it today. What we posted on the arXiv recently is, let's say, the other arrow: we can start from an algebraic equation with a certain structure, obtain the corresponding variational problem, and from there classify all universality classes, just from the algebraic equation, from the spectral curve. Why did we start from that algebraic equation? Because all those earlier cases told us: the structure of the spectral curve should be of this form. So we started from the structure we wanted, did lots of things, and then went back and said: wait, we have to prove something concrete for the matrix model, not only starting from the ideal situation of a given algebraic equation. And we were actually very lucky: in the past few weeks we managed to prove that there is a limiting measure under a rather mild condition, which Alice told me today is a hard condition anyway, but I will come back to that. Under this mild-looking condition that I'm going to show you today, we can prove that there is a limit of those counting measures, at least along subsequences, and any such subsequential limit has all the characterizations I just told you about and that I am going to describe today.

OK, so this is basically the picture of everything we will discuss today; let's go to the results. The first result I want to talk about is some sort of finite-N version of the algebraic equation, and it will give us a hint of its infinite-N version. I just have to introduce some quantities. Remember I told you that when A equals zero, in the standard Hermitian matrix model, the average characteristic polynomial is an orthogonal polynomial. Now we don't necessarily have an orthogonal polynomial, but a slight generalization: what is called a multiple orthogonal polynomial.
For an orthogonal polynomial, you ask the polynomial to be orthogonal, with respect to one fixed weight, to all monomials of degree lower than the degree of the polynomial itself. For a multiple orthogonal polynomial, instead of asking for orthogonality to all those monomials with respect to a single measure, we split the monomials into two groups, and we ask the polynomial to be orthogonal to some of the monomials with respect to one measure and to the remaining monomials with respect to another measure. If you count the conditions, say k1 conditions in the first line and k2 conditions in the second, then together with the requirement that the polynomial is monic of degree k1 + k2, they fully determine it: those conditions determine a unique polynomial. And it turns out that the average characteristic polynomial satisfies exactly those conditions. So this is a characterization of the average characteristic polynomial in terms of multiple orthogonality conditions, and I am going to need it.

Then what do you do? Those multiple orthogonality conditions depend on both multiplicities, n1 and n2. You can play around, reducing one multiplicity or the other by one, and construct a vector: the first entry is the average characteristic polynomial, and the other two are the polynomials you obtain by reducing the orthogonality conditions by one, one way or the other. You multiply by e^{−NV}, so this is a sort of wave function. And the first result is that this wave function satisfies a first-order ODE whose coefficient matrix R_N has polynomial entries. I don't even want to state this as a result, as a theorem, because for people familiar with integrable systems it is pretty much trivial; it's just a calculation that you do and verify.
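As a concrete sketch of the orthogonality conditions just described, one can build the monic multiple orthogonal polynomial directly from its defining linear system. The Gaussian-times-exponential weights below mimic the external-source setup, and the values of a, k1, k2 are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import quad

a, k1, k2 = 1.0, 3, 2
n = k1 + k2
w1 = lambda x: np.exp(-x**2 + a*x)
w2 = lambda x: np.exp(-x**2 - a*x)

def moment(w, j):
    return quad(lambda x: w(x) * x**j, -np.inf, np.inf)[0]

# Unknowns c_0..c_{n-1} in P(x) = x^n + c_{n-1} x^{n-1} + ... + c_0.
# Conditions: int P(x) x^j w1 dx = 0 for j < k1, and the same with w2 for j < k2.
rows, rhs = [], []
for w, k in ((w1, k1), (w2, k2)):
    for j in range(k):
        rows.append([moment(w, j + m) for m in range(n)])
        rhs.append(-moment(w, j + n))
c = np.linalg.solve(np.array(rows), np.array(rhs))
P = np.concatenate(([1.0], c[::-1]))   # highest degree first, numpy convention
print("zeros of the multiple orthogonal polynomial:",
      np.sort(np.roots(P).real))
```

The k1 + k2 conditions pin down exactly the k1 + k2 unknown coefficients of the monic polynomial, which is the uniqueness statement above.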
But what we can perhaps state as a theorem is what the characteristic equation of this matrix R_N looks like. So again, the setup: we have a potential and an external source, and we look at the limit where the multiplicity ratio n1/N converges to a certain parameter α; I am fixing the rate of convergence of one of the multiplicities. Then you take the polynomial matrix R_N from the ODE and compute its characteristic equation: you just take the determinant, the characteristic polynomial. It has a very special form. It is a cubic, which simply follows because R_N is a matrix of size three. There is a V' coefficient, and there are coefficients A1 and A0, which are polynomials just because R_N is polynomial; no big deal so far. What is not so simple, and took us a bit of calculation, is that the degree of A1 is at most the degree of V minus two. Even more special, we can compute exactly its first two coefficients; we do not know R_N exactly, although we have a representation for it, not very explicit, but we can still do things with it. And then there is A0, which connects back to the potential, the external source and the multiplicity parameter: it involves V_m, the leading coefficient of V, then V_{m−1}, the next coefficient, and then a² and α. How it looks exactly is not very important. Just keep in mind that if you give me the data V, A and α, there is a way to connect them with those coefficients; not to compute the whole of A1 and A0, but at least their very first few coefficients.

Second, and this is perhaps the most important point, which is why we call it a sort of finite-N version of the spectral curve: suppose you know that all the zeros of the average characteristic polynomial remain in a compact set as N goes to infinity; in other words, suppose we are in the right scaling. Then we can prove that if you evaluate this characteristic equation at the Cauchy transform of the zero counting measure of those polynomials, after a shift, the result goes to zero as N goes to infinity. In other words, the large-N limit of the characteristic equation of the ODE, if it exists, gives your spectral curve. And we can prove that, under this condition of the zeros remaining in a compact set, we can actually take the limit, at least along subsequences.

OK, let me say a few words about how we prove that. It follows pretty much standard arguments from integrable systems; if you have never seen those, I apologize, it is only going to be one slide, and if you have, you will probably recognize the steps. The point is that we looked at the ODE as a vector-valued ODE, for the vector Ψ with polynomial coefficients, but there is a full matrix version of the ODE, for a matrix T whose first row is the wave function we just constructed. It turns out that there is a Riemann-Hilbert problem associated with this T, and this Riemann-Hilbert problem has constant jumps. Once you have constant jumps, you can easily verify that there is an associated ODE, and playing a bit with the structure of the Riemann-Hilbert problem you essentially get the conditions on the coefficients that I marked with a star. There is quite a bit of calculation, say two or three pages, but one can do it; it is not super hard.
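As a sanity check of the kind of ODE in play, here is the simplest instance, the case A = 0 with Gaussian weight, where the wave functions are Hermite polynomials times e^{−x²/2} and the system is 2×2 rather than 3×3. The matrix R below is the classical one for this normalization, written as an assumption that the code then verifies symbolically: Ψ' = R(x)Ψ with polynomial entries.

```python
import sympy as sp

x = sp.symbols('x')
n = 5
phi  = sp.hermite(n, x) * sp.exp(-x**2 / 2)       # wave function
phi1 = sp.hermite(n - 1, x) * sp.exp(-x**2 / 2)   # one degree lower
Psi = sp.Matrix([phi, phi1])
R = sp.Matrix([[-x, 2*n],
               [-1,  x]])                          # polynomial entries
print(sp.simplify(sp.diff(Psi, x) - R * Psi))      # -> Matrix([[0], [0]])
```

The printout is the zero vector, confirming the first-order ODE with polynomial coefficient matrix in this toy case.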
Now, as I said, we also want to take the large-N limit, making sure we can control all the coefficients in that ODE. To do that, you first realize that R_N is very easy to compute if you know T: you basically invert T and compute R_N from it. And it turns out that the inverse of T can be constructed in terms of the biorthogonal functions of the system; associated with the multiple orthogonal polynomials there are biorthogonal functions, and we can compute the inverse using them. Once you do the math, you realize that R_N is a certain nonlinear expression in the recurrence coefficients of this biorthogonal system, several recurrence coefficients combined in a very complicated way, but it can be written down. So if you are able to control the recurrence coefficients of the biorthogonal system, you are done controlling R_N itself. It takes a bit of work, but you can do it. And that is where the boundedness of the zeros helps: if you know the zeros are bounded, you can play around with some arguments and show that at least the recurrence coefficients remain bounded, hence the entries of the matrix R_N remain bounded as well, because they are expressed in terms of the recurrence coefficients; and then you compute everything else, or start taking limits along subsequences.

So, as I just told you: assume the zeros of the average characteristic polynomial stay in a compact set. Then you can take the limit of the finite-N spectral curve, and you get, rigorously, that any subsequential limit of the zero counting measures has to satisfy an algebraic equation of the form you see on the slide: a cubic,

ψ^3 − V'(z) ψ^2 + A1(z) ψ + A0(z) = 0,

where the relevant solution ψ is the Cauchy transform of the limiting measure of the zeros, or of the random eigenvalues. So any subsequential limit necessarily satisfies the spectral curve, although we do not have uniqueness at this point, unfortunately. And, as I mentioned, the polynomials A0 and A1 have a certain structure in their coefficients.

So then you start: OK, I have an algebraic equation, and there is a probability measure whose Cauchy transform solves this algebraic equation. What can you do with that? That is, let's say, our next starting point. At this moment we can forget a bit about the rest and just look at algebraic equations of this form with that structure: you impose a certain structure on the coefficients, which comes from the matrix model, although at this point you do not need the matrix model anymore, and you impose that one of the solutions of this algebraic equation, of this spectral curve, is the Cauchy transform of a probability measure on the real line. Then you ask: can I say something about this measure? And the answer is yes, we can. Another thing I should say: since you can forget about the matrix model at this level, everything I am going to talk about today works for any real polynomial V; it does not have to have even degree. The spectral curve does not have to come from a well-defined matrix model; it could come from an ill-defined matrix model, whatever that means, as long as V is real.
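To illustrate how a measure is read off such a cubic, here is a hedged sketch with a manufactured spectral curve, not the actual A1 and A0 of the model: the cubic (ψ² − 2zψ + 2)(ψ − z) = ψ³ − 3zψ² + (2z² + 2)ψ − 2z = 0 is built so that one of its three solutions is the Cauchy transform of the semicircle law, and Stieltjes-Perron inversion recovers that density.

```python
import numpy as np

eps = 1e-6
xs = np.linspace(-2.0, 2.0, 801)
rho = []
for x in xs:
    z = x + 1j * eps
    roots = np.roots([1.0, -3*z, 2*z**2 + 2, -2*z])
    # a Cauchy transform of a positive measure has Im psi < 0 at x + i*eps,
    # so pick the root with the most negative imaginary part
    psi = roots[np.argmin(roots.imag)]
    rho.append(max(-psi.imag / np.pi, 0.0))

print("total mass ~", np.trapz(rho, xs))                        # ~ 1
print("rho(0) ~", rho[400], "vs semicircle", np.sqrt(2) / np.pi)
```

This branch selection is crude but works here; in general one tracks, by continuity from large z, the solution that behaves like 1/z at infinity.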
OK, and our next result essentially completes one of the pictures from the first slides: how do we go from a spectral curve to a variational characterization of the measure? The theorem is as follows. You start with the measure μ* that solves your spectral curve, and we construct a vector of three measures in which the sum of the first two gives back your measure. Very importantly, this vector of three measures is constructed in such a way that it is a saddle point of an energy. Remember, for the Hermitian matrix model we were minimizing a certain energy. Now the third measure, μ3, which is auxiliary, lives in the complex plane, so we are not exactly going to minimize: we look at perturbations of this vector in a very specific way, which I am not going to detail, nice enough perturbations along certain directions, and we see that, for a certain energy that I will write down in just a moment, this vector of measures attached to the spectral curve is critical, a saddle point. One thing that might be important for people who like Riemann-Hilbert problems: this third, auxiliary measure lives on an arc in the plane, and if it is nonzero, its support is connected.

Also, as a consequence of our results, we can classify all possible singular behaviors that could occur for the measure solving the spectral curve. The density could vanish like a square root, in pretty much the same way as in the classical Hermitian matrix model. It could vanish like the power 1/3 in the bulk, which was already described; that can happen even for Gaussian V in this model. But it could also happen that the limiting density vanishes like the power 5/3. We do not have concrete examples; our arguments just tell us this is a possibility, probably when V has odd degree rather than even, I don't know. That this type of vanishing shows up is not a big surprise; the surprise for us was that no higher exponents of this family appear. So by putting in an external source you might create some new universality classes, but not many: you do not get higher-order vanishing, for instance.

And I promised to show you the form of this energy, so let's dive in; it is a bit technical. We have three measures, and the sum of the first two recovers the measure that solves the spectral curve; have in mind the limiting eigenvalue distribution. The first two measures together have total mass one, and the difference between the first and the auxiliary measure has total mass α, where α is the multiplicity parameter, the limiting multiplicity of the external source. The energy involves lots of terms: the logarithmic energy of each measure μ1, μ2 and μ3; an interaction between μ1 and μ2 entering with a plus sign; an interaction between μ1 and μ3 entering with a minus sign; and certain external potentials acting on the three measures, which come from V and the external source itself. So, heuristically, μ1 and μ2 do not like to be close to each other, while μ1 and μ3 do like to live close to each other, even though μ3 is in the plane; I will show a picture in a moment.
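Schematically, reconstructed from the description above with the exact numerical prefactors omitted and with external fields Φ_j built from V and the external source, the functional has the shape

```latex
E(\mu_1,\mu_2,\mu_3)
\;=\; \sum_{j=1}^{3} \iint \log\frac{1}{|x-y|}\, d\mu_j(x)\, d\mu_j(y)
\;+\; \iint \log\frac{1}{|x-y|}\, d\mu_1(x)\, d\mu_2(y)
\;-\; \iint \log\frac{1}{|x-y|}\, d\mu_1(x)\, d\mu_3(y)
\;+\; \sum_{j=1}^{3} \int \Phi_j\, d\mu_j ,
```

where the precise coefficients and fields are as in our paper; the display is only meant to show which interactions enter with which sign.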
An ugly functional of three measures, fine; just keep in mind that we are looking at saddle points of an energy functional of three measures, and the vector that solves the spectral curve can be seen as such a saddle point.

OK, let me show you a picture of the supports in a specific case. Here I take the potential V to be a quartic plus a quadratic, and I let the external source parameter a run between zero and one. The picture you see here is a close to zero. At a = 0 the orange part, the third measure, escapes to infinity and we just have μ1. When a becomes very small but positive, the third measure is supported on an arc in the complex plane, the first measure is supported on a single interval, and the second measure is not present yet; it is zero. Then you start tuning a up, and at one specific moment you see the support of the third measure merge into the first measure and separate it into red and blue, meaning first and second measure. Now all three measures are present: the orange one is the auxiliary measure in the plane, and blue plus red describe the corresponding eigenvalues. Keep moving forward and the orange arc starts to shrink; it shrinks and shrinks up to a certain moment where it completely disappears and μ1 and μ2 separate, at a = 1/2 if I am not mistaken. At that point the support of the eigenvalue distribution is disconnected: the eigenvalues are supported on two intervals. Keep moving forward and the two intervals separate more and more, and at this stage the third measure does not play much of a role.

OK, let me make some comments. First, as I said, this result does not directly have to do with the matrix model. We start with a spectral curve motivated by the matrix model, but the input is simply an algebraic curve with coefficients of that form, together with a probability measure solving it, and we can still write down the variational problem. So this works for odd V as well; V does not have to be even, with a well-defined matrix model. For even V we can recover all previously known results, including A = 0 and the results for symmetric V. And the main technique, on which I am going to spend my last five minutes (I talk fast), is to do all of this not in the plane but on a Riemann surface: we construct those measures by playing around on the Riemann surface associated with the spectral curve. I must also say that we were successful in applying this to other situations, for instance to construct the solution of what is called the mother body problem for the normal matrix model, and to other questions on multiple orthogonal polynomials that do not necessarily come from matrix models.
I mention the second application especially because I am going to show you some more pictures. The techniques are similar, but the pictures take about a week to produce, so I am reusing the ones I produced for that example a while ago to explain the ideas. The picture you see here is again much like before: I am describing how the support of the solution to the variational problem changes with a certain parameter. In this case it is not exactly a matrix model, just zeros of certain multiple orthogonal polynomials. For a small parameter, the zeros split into red and blue: red on the real line, blue in the complex plane. I play the same game as before and start tuning the parameter up; the supports start to move, up to a point where they touch each other, and when they touch, things start to move to the other side. (Yes? Ah, here V is cubic. And again, this is not a matrix model, and I am not putting in an external source: I am splitting the orthogonality conditions for the weight built from V between one sector and another sector of the plane, a multiple orthogonality analogue. That is why I get this geometry, but it is not a matrix model. Still, the idea is very similar, which is why I wanted to show this example.)

So at the moment the blue and red measures merge, a third measure is created, and you might wonder: what exactly is happening? Why does a new measure get created exactly when they merge, or cross each other? And that is the point: you have to move to the Riemann surface, because if you look at this on the Riemann surface, not much is happening. If you look only at the plane, something weird seems to happen: you have this singular picture where one support touches the other and transfers into something else. So let me show you what looking at the Riemann surface means. The colors here do not necessarily correspond to the colors we had before, sorry for that. First look at this interval here: it corresponds to the interval on the real line that we started with. And this other curve here, in green, is the other cut off the real line that we started with. So we started with an interval and another contour, and those two contours are the ones that encode your measure. Say that for one specific parameter you are able to compute those measures, and then you look at them on the Riemann surface. What are the other curves? Those other curves are what we call trajectories of a quadratic differential. You construct this quadratic differential on the Riemann surface; in other words, you construct a harmonic function which is the real part of an object well defined on the whole surface. Then you look at all those trajectories, all those level lines, let's call them that, and some of them encode your measure: as I told you, this one here encodes a bit of your measure and this one here encodes another bit. What you do now is look at the surface, forget about everything else, and trace the analytic continuations of those level lines.
Then you tune up your parameter. Things move around, some trajectories move here and there, and up to a certain point nothing changes for the measures. Then something happens. So you see, your measure was here, and here. Those level lines live on different sheets, so they are not touching. What happens is that some of those level lines move to another sheet, and when they do, this interval sticks out and appears over here: that is when the other measure is created. Measure one is here, measure two is here, and the third measure is created on another sheet. So the supports are not actually intersecting; one measure moves to another sheet and becomes another measure. Nothing really singular is happening at that level. Then you keep moving forward, and of course you have to be able to control all those parameters. As you see, there are lots of curves here and there, but at the end of the day what you get is a green arc sticking out here, a red arc here and a blue one here, and those three give back your measures. Once you are able to see everything as trajectories of a quadratic differential, all you have to do to run the deformation, although it might take some work, is to compute certain integrals and look at their signs: which one vanishes first, which one vanishes second. So it is not that complicated, provided you can control those numbers.

So, the deformation argument. Everything I just showed you means that we look at everything on the Riemann surface and encode it in terms of a quadratic differential. We start with the algebraic equation, the spectral curve; it has an associated Riemann surface, the three sheets I showed you before, which you construct from the algebraic equation in your favorite way. Then you construct a certain canonical quadratic differential on this surface, in a very easy way: you take the difference of two solutions and square it. How it looks is not very important. What is important is that in doing so you encode the trajectories, those level lines, and therefore the measures you want, in terms of level lines of integrals of ψ_j − ψ_k, where ψ_j and ψ_k are two different solutions of the algebraic equation. The whole point is that on the plane these are not globally well-defined harmonic functions; the ψ functions have branch points, so there are singular lines. But on the surface they are globally well defined: a single globally defined object, harmonic on the whole surface, so it is much easier to control its level lines.
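For reference, and up to sign and normalization conventions that I am not certain of from the talk alone, the canonical quadratic differential and its trajectories are

```latex
\varpi \;=\; -\big(\psi_j(z) - \psi_k(z)\big)^2\, dz^2,
\qquad
\text{trajectories: arcs along which } \varpi > 0,
\ \text{equivalently } \operatorname{Re}\int^{z} \big(\psi_j - \psi_k\big)\, ds \ \text{is locally constant.}
```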
Now, because of compactness, this object is harmonic on the surface, and the level lines you just saw in the animation are critical trajectories. If you look at all of them simultaneously, they form a graph, what is called the critical graph of the quadratic differential, and it has a lot of nice structure. Because this graph is so structured, it is a lot easier to control which behaviors are possible when you start playing with the parameters of your model. And, as I mentioned, the measures we want are recovered from those level lines. In the external source model specifically there is not exactly a deformation going on, but the idea is the same: we look at the spectral curve at the level of its Riemann surface. It turns out that we can describe this surface a bit, but more importantly we can describe the order of the pole of this function at one of the points over infinity, and because we know the order of the pole at that one point, we can recover everything: the support of the third measure, and the classification of the possible critical behaviors I talked about. Of course it is a bit technical, it is seventy-something pages, but the ideas are simple. As I told you, we applied this to the random matrix models as well, and I have heard it said that if you do a nice calculation once you call it a trick, and if you do it twice you call it a theory. This is not exactly a theory, but it is a technique that has shown itself to be quite powerful in many situations for us. The details are of course hidden, but the main idea is there; I hope you get at least some idea from it, and you can draw pictures.
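Here is a toy picture one can actually draw. It is illustrative only: it uses the quadratic, A = 0 curve ψ² − 2zψ + 2 = 0 from the warm-up rather than the cubic, so ψ1 − ψ2 = 2√(z² − 2), and the zero level lines of Re ∫ (ψ1 − ψ2) through the branch points ±√2 contain the support [−√2, √2] of the semicircle law.

```python
import numpy as np
import matplotlib.pyplot as plt

def h(z, n=300):
    # Re int_{sqrt(2)}^{z} 2*sqrt(s^2 - 2) ds along a straight path,
    # tracking the square-root branch continuously along the path
    s = np.linspace(np.sqrt(2), z, n)
    w = np.sqrt(s**2 - 2 + 0j)
    for k in range(1, n):
        if abs(w[k] - w[k-1]) > abs(w[k] + w[k-1]):
            w[k:] = -w[k:]
    return np.real(np.trapz(2 * w, s))

xs = np.linspace(-2.5, 2.5, 101)
ys = np.linspace(-1.5, 1.5, 67)
H = np.array([[h(x + 1j*y) for x in xs] for y in ys])
plt.contour(xs, ys, H, levels=[0.0])   # critical trajectories through +-sqrt(2)
plt.title("zero level lines of Re int (psi1 - psi2)")
plt.show()
```

The plotted zero level set contains the segment [−√2, √2], the support of the measure, together with the other critical trajectories emanating from the branch points; deforming parameters moves this critical graph around, which is exactly the deformation argument described above.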
OK, I talked a lot, so let me just wrap up what we discussed. We started with the matrix model with external source: a polynomial potential, an external source, and the condition that the scaled multiplicity of one of the sources converges to some α in (0, 1). We obtained that a spectral curve with a very specific structure exists, provided the zeros of the average characteristic polynomial remain bounded as N goes to infinity. The point is, and again we are only assuming polynomial data: if for some reason you know that the limit is unique, then you can also make sure that the limit for the random eigenvalue counting measures is the same as the one for the average characteristic polynomials; that is why I mostly focused on the average characteristic polynomial. As for the assumption that the zeros remain bounded, we pretty much believe this should always be the case, because of the scaling N we put in front of the potential. For instance, if we could show that the maximum absolute value of the eigenvalues of the random matrices remains bounded, say with positive probability, uniformly as N goes to infinity, then we would be done; we do not need any refined constants, so I thought that should not be hard, although we have not spent much time on it. We started thinking about this a few days ago, but then I asked Alice today and she said: well, you are asking me a hard question, how can you expect an easy answer? So I do not know whether we can prove it yet; the result is conditional at this point, and I do not know if we will be able to get rid of the condition.

I must also say that, in principle, our calculations indicate that the existence of the spectral curve in the large-N limit extends exactly to the setting I mentioned, where A has more distinct eigenvalues. We have not done that calculation, but while completing it for two distinct eigenvalues it looked like the argument extends to several; whether we will actually do it, I do not know. So the first part of the talk should extend, but not the second part: when we start from the spectral curve and construct the variational problem, the corresponding spectral curve would have degree higher than three, and that can get quite complicated. In the case of two distinct eigenvalues, where the spectral curve has degree three, we are able to obtain the variational problem. And, as I mentioned along the way, it was also a surprise to us that we could classify all possible singular behaviors by describing those trajectories, ruling out lots of other singular behaviors that in principle could appear. And, as I mentioned as well, we applied these techniques to other models. I should stop right here. Thank you.