I'll be speaking about Laplace eigenfunctions, so this talk falls in the realm of spectral geometry, but I'm not going to be talking about eigenvalues. So this is not a talk about spectral rigidity; it's a talk about how eigenfunctions of the Laplacian behave. Let me start by introducing the setting. We are going to work on a compact Riemannian manifold, we are going to assume it has no boundary, and throughout the talk I'm going to write little n for the dimension of the manifold. Now, the manifold is compact, so the spectrum of the Laplacian acting on it is discrete; if Δ is the standard Laplacian, I put a minus sign in front to turn it into a positive semidefinite operator. We are going to denote by φ_j the eigenfunctions, all the eigenvalues are nonnegative, and I'm going to write λ_j² for the eigenvalues. And because, again, the manifold is compact, you can form an orthonormal basis of L² of the manifold — L² with respect to the Riemannian volume form — using these eigenfunctions, which throughout the talk I'm going to take to be normalized. The reason why I'm interested in studying eigenfunctions is that they encode the dynamical and geometric properties of the underlying manifold. From a quantum mechanics point of view, if you want to understand the probability that a quantum particle lies in a region A of space, what you do is take the square of the modulus of the eigenfunction and integrate it over A, and that gives you the probability of your particle being in that region of space. So eigenfunctions carry all this information about the underlying manifold, and they reflect a lot of what's happening with the dynamics of the geodesic flow. Just to illustrate that point, I have this picture here: you have a disc and a cardioid, and in red what you see is the trajectory of a geodesic in each of these two surfaces.
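Written out, the setting just described is the standard eigenvalue problem for the (positive) Laplacian on a closed manifold; the notation below is one common choice:

```latex
% (M, g) a compact Riemannian manifold without boundary, dim M = n.
% Eigenfunctions of the positive Laplacian, with frequencies lambda_j:
-\Delta_g \,\varphi_{j} \;=\; \lambda_j^{2}\,\varphi_{j},
\qquad 0 \le \lambda_1 \le \lambda_2 \le \cdots \nearrow \infty .

% L^2-normalization with respect to the Riemannian volume form dv_g:
\int_M \varphi_{j}\,\varphi_{k}\, dv_g \;=\; \delta_{jk}.

% Probability of finding the quantum particle in a region A \subset M:
\mathbb{P}(A) \;=\; \int_A |\varphi_{j}|^{2}\, dv_g .
```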
And well, there is a big difference here: this one looks quite chaotic. In these four plots you have the density plots of four different eigenfunctions. The eigenvalue grows in this direction, and what you're seeing here is the plot of the modulus squared of the eigenfunction. Black means that the modulus is large, while white means that you're getting a zero there. So, for example, in these pictures here it looks like the probability of my particle being near the center is zero or very small, while in these two my particle is going to be concentrated near the boundary. Here, where the geodesic flow is highly chaotic, it looks like the region is becoming evenly grayish, which would mean that the probability of finding the quantum particle in any region A of this cardioid is comparable to the area of the region. So this is one way in which eigenfunctions keep track of what's happening with the dynamics of the underlying manifold. This talk is going to focus on two aspects of eigenfunctions: the critical points and the zero sets. You can think of critical points — if you think of maxima and minima — as the places where this modulus squared is largest, so those are the most likely places for your quantum particle to be found, while zero sets are going to be the least likely places for the quantum particle to be. Before starting to talk about the kinds of questions we are going to ask, I wanted to give you some pictures of what zero sets look like, and the standard thing is to show you a video of the Chladni plates experiment. In this video, what you're going to see is a metal plate that's placed on top of a speaker.
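As a toy illustration of this "probability of being in A" computation — this is not from the talk, just a one-dimensional sketch using the Dirichlet eigenfunctions φ_k(x) = √(2/π) sin(kx) on the interval [0, π], so the frequency is λ = k:

```python
import numpy as np

def eigenfunction(k, x):
    # L^2-normalized Dirichlet eigenfunction of -d^2/dx^2 on [0, pi]:
    # phi_k(x) = sqrt(2/pi) * sin(kx), eigenvalue k^2, frequency lambda = k.
    return np.sqrt(2.0 / np.pi) * np.sin(k * x)

def probability_in_region(k, a, b, num=20000):
    # P(particle in [a, b]) = integral over [a, b] of |phi_k|^2 dx,
    # approximated by a midpoint Riemann sum.
    dx = (b - a) / num
    x = a + (np.arange(num) + 0.5) * dx
    return float(np.sum(eigenfunction(k, x) ** 2) * dx)

if __name__ == "__main__":
    # Normalization: the particle is somewhere in [0, pi] with probability 1.
    print(probability_in_region(5, 0.0, np.pi))
    # As the frequency k grows, P([0, pi/4]) tends to |A| / pi = 1/4.
    for k in (1, 10, 100):
        print(k, probability_in_region(k, 0.0, np.pi / 4))
```

The high-frequency behavior — the probability of an interval approaching its normalized length — is the one-dimensional analogue of the "evenly grayish" pictures on the cardioid.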
The speaker is connected to a frequency generator, and what you're going to see is that for different frequencies this plate starts vibrating; what they have done is put very fine grains of sand on top of the plate. The frequencies appear here in this corner. What's happening is that each of these frequencies is associated to a standing wave, so they actually correspond to eigenfunctions: the wave with which the plate is vibrating is an eigenfunction whose frequency λ is the number here in the corner. As the plate vibrates, the places where it doesn't vibrate at all attract the grains of sand, and those places are the zeros of these eigenfunctions. So what you're seeing are different zero set configurations of eigenfunctions for these different values of λ, and you can see that as the frequency gets larger and larger, the configurations become much more complicated.
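A numerical sketch in the same spirit (again not from the talk): for a square membrane with fixed edges, the pure modes sin(mπx)·sin(nπy) have nodal lines forming an m-by-n grid, hence exactly m·n nodal domains, so the zero set configurations do get more complicated as the frequency grows. Below is a minimal nodal-domain counter, assuming a sampling grid fine enough to resolve the domains:

```python
import numpy as np

def count_nodal_domains(f, n_grid=200):
    # Sample f on an interior grid (offset by half a cell so that, for the
    # modes below, no sample lands exactly on a nodal line) and count the
    # 4-connected components of {f > 0} and {f < 0} with a flood fill.
    xs = (np.arange(n_grid) + 0.5) / n_grid
    sgn = np.sign(f(xs[:, None], xs[None, :]))
    seen = np.zeros_like(sgn, dtype=bool)
    count = 0
    for i in range(n_grid):
        for j in range(n_grid):
            if seen[i, j] or sgn[i, j] == 0:
                continue
            count += 1
            stack = [(i, j)]          # iterative flood fill (no recursion)
            seen[i, j] = True
            while stack:
                a, b = stack.pop()
                for a2, b2 in ((a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)):
                    if (0 <= a2 < n_grid and 0 <= b2 < n_grid
                            and not seen[a2, b2] and sgn[a2, b2] == sgn[i, j]):
                        seen[a2, b2] = True
                        stack.append((a2, b2))
    return count

def mode(m, n):
    # Pure mode of the unit-square membrane with fixed boundary,
    # eigenvalue pi^2 (m^2 + n^2); its nodal lines form an m-by-n grid.
    return lambda x, y: np.sin(m * np.pi * x) * np.sin(n * np.pi * y)

if __name__ == "__main__":
    for m, n in [(1, 1), (3, 2), (5, 4)]:
        print(m, n, count_nodal_domains(mode(m, n)))  # expect m * n
```

The actual Chladni patterns come from combinations of modes with equal frequency, but the same counter applies to any sampled function.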
This experiment was first done, I think in 1680, by Hooke; he did it at that time with a plate and a violin bow, and it was replicated a hundred years later by Chladni, since when it has been known as the Chladni plates experiment. He was the first to record at least a hundred configurations of the zero sets. What we are going to do throughout this talk, when we are in two dimensions, is try to understand the structure of this zero set: we are going to talk about how the components are nested within each other; if you look at the complement of the zero set, its components are called nodal domains; and we are going to try to understand the connectivity of these components. What I want you to keep in mind is that we are going to be working in the limit where the frequency λ goes to infinity. Just as a reminder: λ_j² is the energy, and λ_j is what I refer to as the frequency.

Okay, so these are pictures of zero sets on different surfaces, so that you don't only have the picture of what happens with a square: this is a quarter stadium, this is a bitorus, this is a square bitorus, and here you have the round sphere. The lines up here are the zero sets of a high-frequency eigenfunction, and in the two bottom pictures what you see is the complement: in black you see where the eigenfunction takes positive values and in white where it takes negative values, so the zero set in these bottom pictures is just the lines dividing black from white. What I want to do first is tell you what we know about these zero sets, and I'm going to do this in the realm of surfaces, which is where we know the most. For zero sets you can prove that they are rectifiable, so you can actually measure their length, and there is a conjecture of Yau that says that the measure of the zero set should grow like a constant times λ when λ gets large. This conjecture was open for a long time, and very recently Logunov proved the lower bound: the measure of the zero set is at least a constant times λ. The conjecture says there should also be a matching constant in the upper bound, but we are nowhere close to getting that at the moment. What we also know about the zero set is how it spreads across the surface: there exists a constant c such that if you grab a ball of radius c/λ, no matter where you place it on your surface, it will always intersect the zero set. What that is saying is that if I start at a point x on my surface and I walk a distance of c/λ, then I'm always going to see a sign change in my function, because these functions oscillate at wavelengths comparable to 1/λ. We also know that if you take the complement of the zero set, the inner radius of each nodal domain is bounded between two constants over λ: there exists a constant c such that in every nodal domain you can fit a ball of radius c/λ. This gives you a notion of how thick the complement of the zero set is. And finally, we are also interested in understanding the number of components of the zero set. In all the pictures I've been showing you, it looks like the number of components grows to infinity as the frequency grows; we have no proof of that at the moment. In very specific cases we can show that the number of components goes to infinity, but it's actually quite hard to do. We believe that it should grow like λ² if you are on a surface, or like λⁿ if you are on a compact n-manifold, but the only thing we know, and this is a standard result, is Courant's nodal domain theorem, which tells you that you will have at most a constant times λ² nodal domains. Okay, so this is what we know on surfaces; on general manifolds we know even less. What I'm going to do in my next slide is tell you the questions
around which this talk is centered. So: we are going to discuss the number of critical points of the eigenfunctions — when you divide this by λⁿ, we think the quotient should remain, in most cases, bounded above and below by two constants, so the number of critical points should grow like λⁿ, n being the dimension of the manifold, but there are no results that prove anything like that at the moment. The measure of the zero set — as I was saying before, this quotient should remain bounded above and below by two constants. The number of components of the zero set. This is going to be the first part of the talk, and then it's going to get a little bit crazier: we are going to talk about the diffeomorphism types of the components of the zero set, and also about the nesting configurations of these components.

Okay, so the first thing you have to do if you're trying to attack these questions is realize that very little is known. So one of the things you can do is try to randomize the problem: instead of asking what happens with these quantities for an actual eigenfunction, you can ask what happens for a random eigenfunction. So suppose you're working on the n-sphere or on the n-torus, where multiplicities are high. Now fix an eigenvalue, and consider a linear combination of eigenfunctions whose frequencies are equal to the frequency you started with. This is just an eigenfunction with eigenvalue λ². But if you pick these coefficients at random — we take the coefficients to be independent standard Gaussians — then what this becomes is a random eigenfunction with eigenvalue λ². Here on the slide I have a normalizing constant, one over the square root of N_λ, with N_λ being the number of frequencies equal to λ, i.e., the multiplicity of the eigenvalue; this is just a normalizing constant, so don't pay attention to it. The point is that you now have these random eigenfunctions: on the round sphere they are called random spherical harmonics, and on the flat torus they are called arithmetic random waves. The idea is to try to answer these same questions, but now for these random eigenfunctions. On the n-sphere and the n-torus, eigenfunctions can be computed explicitly — we know what they look like, we know their eigenvalues — so you can actually say a lot, and there has been a lot of work done in this direction. For example, on the two-sphere there is a series of works by Nicolaescu, Cammarota, Marinucci, and Wigman proving that the number of critical points, suitably normalized, converges to a constant in probability. What this means is that the mean converges to a constant, and they can actually show that the variance goes to zero; in the last of these papers they get a nice rate of decay for the variance. So for the number of critical points, these quotients indeed converge to a constant: it's not only that you get them bounded above and below by two constants, you actually get convergence in this random setting. For the measure of the zero set on the two-sphere, the expected value of this quotient was computed, and then Neuheisel and Wigman controlled the variance: they show not only that the variance goes to zero, but with really good rates of decay, so you get convergence in probability of these quotients to a constant. So yes, the conjecture holds in this random setting. These first two questions are what we call local quantities, because if you want to understand the number of critical points or the size of the zero set, you can start with your manifold, chop it up into neighborhoods of size comparable to 1/λ, compute the number of critical points in each of these tiny balls, and then just add them up. However, you cannot do this with the number of components of the zero set, because if I have two tiny balls, I may have components that go from one ball to the other, and you don't want to overcount — and these components definitely will go from one ball to the other. So this is a much harder quantity to study. There is the work of Nazarov and Sodin — they first did it on the two-sphere and then were able to generalize it — where they show that, in mean, the number of components of the zero set, suitably normalized, converges to a constant. Getting the variance is much harder, exactly because this is not a local quantity, so they only get convergence in mean. The idea of what I want to do now is to start working on these questions on a general compact Riemannian manifold, where you don't have formulas for the eigenfunctions or the eigenvalues — so that's where we are headed. The problem is that if you want to work on a general manifold — if you fix the base manifold and look at the space of all Riemannian metrics you can put on it — generically all the eigenvalues are simple, so each eigenspace contains only one eigenfunction. You cannot take linear combinations of eigenfunctions within an eigenspace, so you really have to change this definition; it doesn't make sense anymore, you don't get anything random. One thing you can do is fix an ε and, instead of working with frequencies exactly equal to λ, work with frequencies in a window from λ to λ + ε. What you're doing is incorporating more eigenfunctions into your random linear combination by working with this window from λ to λ + ε. What you get when you take such random linear combinations is a function whose frequency is concentrated near
the λ that you picked, but it is no longer an eigenfunction, right? I'm mixing different eigenspaces, I'm mixing frequencies from λ to λ + ε. That's one of the things you have to keep in mind from now on. However, we do believe that they should behave like eigenfunctions — that these are an honest model for what eigenfunctions look like — and this is the content of the random wave conjecture of Berry. What it says is that if you're working on a manifold where the geodesic flow is chaotic enough, then the statistics of these waves — which are called monochromatic random waves, by the way — should be the same as those of actual eigenfunctions whose frequencies lie in these windows. What you have here, for example, is a histogram of the value distribution of an actual eigenfunction of frequency 500 on an arithmetic surface; those are the points that are plotted, and you can see that it actually adjusts to a Gaussian distribution. So there is some numerical evidence towards this conjecture, but nothing like it has been proved. Just keep in mind that we do believe these are a good model for how eigenfunctions behave.

Okay, suppose we want to answer these questions for these random waves. The question you need to ask yourself is: what do you need in order to study the number of critical points or the size of the zero set? These waves, since they have Gaussian coefficients, are Gaussian fields on the manifold, and there is a theorem going back to Kolmogorov that says that centered Gaussian fields are completely characterized by their two-point correlation function. If you know the two-point correlation function of your field, you completely understand how the field behaves; you can compute any quantity related to the field. The two-point correlation function is exactly this: you fix two points x and y on your manifold, and you compute the expected value of the product of the value of your wave at x times the value of your wave at y. You are trying to understand how these two values are correlated, depending on what x and y are. And because the Gaussian variables we are considering have mean zero, variance one, and are independent, it turns out that what you get as your correlation function is exactly the sum of the cross products of the eigenfunctions whose frequencies are in this window. If you're familiar with Weyl's law, this is an object that appears a lot when computing the number of eigenvalues in a window from λ to λ + ε, only that there you evaluate this object on the diagonal and integrate over the manifold with respect to x. Here, if you want to understand these waves, you cannot evaluate on the diagonal and you cannot integrate x out; you actually have to deal with these sums of cross products. What this is is the kernel of the projection operator from L² of the manifold onto the direct sum of the eigenspaces whose frequencies are in the window from λ to λ + ε. (At this point there was a question: is ε a fixed small number, and how many eigenvalues does one expect to find in that window?) Yes, ε is fixed, and the answer really depends on the geometry of the manifold you're working with. From now on I'm going to work under an assumption that says there is at least one point of the manifold for which the set of directions of geodesic loops that start and close at that point has measure zero. Under that assumption, for that fixed ε, you have roughly ελⁿ⁻¹ eigenvalues in the window, so there are a lot of them. What you need to do, as I was saying, if you want to understand this two-point correlation function, is understand these spectral projection operators. And if you want to understand what actually happens with the zero set and the number of critical points, you need to understand this two-point correlation function at 1/λ scales, because that's the scale at which the eigenfunctions oscillate, so the largest correlations are going to happen at those scales. So if this is your manifold and this is a point x that you fix on it, identifying a neighborhood of x with its tangent space via the exponential map, what we need to do is work with vectors of the form u/λ and v/λ and map them down via the exponential map. I need to be able to evaluate my two-point correlation function at these points that are 1/λ-close to the fixed point x: what we really need to control, as λ goes to infinity, is the two-point correlation function evaluated at the image of u/λ and the image of v/λ. Are there any questions so far?

Now, as I was saying, to make sure I have enough eigenfunctions and to actually be able to prove any result, I need to work under the following condition. Fix a point x on your manifold and look at the space of all initial velocities that generate closed geodesic loops — the loop doesn't have to close smoothly, just close up — so you're working in S_x*M, which you equip with its natural measure. The condition we are going to work under is that the measure of the set of initial velocities that generate geodesic loops is zero. And under that assumption, what we proved together with Boris Hanin is that we have a limiting function for this rescaled two-point correlation function of ψ_λ. So the assumption we work under is that, for the fixed point x, the measure of geodesic
loops that close at x is zero, and under that assumption — the λⁿ⁻¹ on the slide shouldn't be there, cross it out — the limit of this rescaled covariance function is a function that is itself the two-point correlation function of a field. What I'm going to do now is explain what this field ψ_∞ is. What you get is that this converges to ε times the two-point correlation function of a Gaussian random field on Rⁿ. This ψ_∞ field is what's called a superposition of random planar waves; what it satisfies is that it's an eigenfunction of the Euclidean Laplacian with eigenvalue one. Since it's a Gaussian field, it's completely characterized by its two-point correlation function — you can actually define it in terms of the two-point correlation function — and the two-point correlation function is exactly the thing we have here on the right-hand side: you integrate e^{i⟨u−v, w⟩} over the (n−1)-sphere in w. So what you have on the right-hand side is the Fourier transform of the spherical measure: evaluated at (u, v), it is simply the Fourier transform of the spherical measure evaluated at u − v. So what we are getting is that, no matter what the geometry or the topology of the starting manifold is, when you rescale the two-point correlation function like this you always get the same limit, and this limit depends only on the dimension of the manifold: you're integrating over the (n−1)-sphere, and that's the only thing you remember about the starting manifold. This result is true as long as you have this hypothesis; however, I strongly believe it should always be true. To prove this result we use microlocal analysis, so we have to work with the wave operator and the wave kernel, and we have to track the propagation of singularities along geodesics — that's why we need this condition on the set of geodesic loops. But I really think it's an artifact of our proof that we have to enforce this condition; I think the result should be true on any manifold. It is true on the sphere, and it's true on the torus, and the fact that it's true is what allowed all these people to get those results on the n-sphere and the n-torus for the number of critical points and the size of the zero set. This convergence holds in the C^∞ topology, so you can take as many derivatives in u and v on both sides as you want and you still get the limit, and it holds uniformly for u and v inside a ball of radius R in the tangent space. Another way of reading this result is the following: if you start with your wave, fix the point x, and rescale the wave at 1/λ scales about x, you can now think of the result as a function of u on Rⁿ, and as a random function on Rⁿ it converges in distribution to this field, the random planar waves on Rⁿ. That's what this result says: because you have convergence of the two-point correlation function, all the moments converge, so these random waves really behave like this limiting field on Rⁿ. And just so you have an idea of the heuristics behind this statement — this is not a proof, and actually it has nothing to do with the proof — what happens is that if you grab an actual eigenfunction of the Laplacian, fix your point x, and rescale the function about x, then when you hit the rescaled function with the Euclidean Laplacian plus some lower-order differential operators, which I do not want to define, what you recover is the rescaled eigenfunction itself. So what's happening is that, to leading order in λ, the rescaled eigenfunctions behave like eigenfunctions of the Euclidean Laplacian with eigenvalue one, which is exactly the property that the limiting field has. That's what's happening behind the scenes, and that statement holds on
any manifold — and that's why we are getting this universal limit that forgets the metric g and the topology of the manifold. Okay, are there any questions about this statement?

What I'll do now is show you how you can apply this result. For example, for the number of critical points and the measure of the zero set, we can prove that in mean these quotients converge to constants a_n and b_n that depend only on the dimension — so they are the same for every compact manifold of dimension n — as long as you work under the assumption that for almost every point of your manifold the measure of closed geodesic loops is zero. If you want to control the variance, you actually need to ask for something more, because to control the variance you need to control your two-point correlation function also at pairs of points that are very far apart. For example, picture the sphere: the values of your function at the north pole are strongly correlated with the values at the south pole, so it's not always true that only the 1/λ scales matter; you have to be careful about those things. So if we work under the assumption that, for almost every pair of points x and y on your manifold, the set of initial velocities of geodesic arcs joining these two points has measure zero, then we can control the variance, and not only do we get that it decays to zero, we can also control the rate. So we have convergence in probability of these quotients to the constants a_n and b_n.

Okay, so now, for the second half of the talk, I'm going to start talking about the components of the zero set: their diffeomorphism types and how they are nested. This is the zero set — actually not of ψ_λ, but of the limiting field ψ_∞ — in R³, plotted in a box; this picture is by Alex Barnett. When you're working in dimension three, your zero set has dimension two, so it's going to be a surface, or rather a collection of connected surfaces, and the zero set of the limiting field looks like this — you really cannot tell what's going on there. The first question was: can I count the number of components of the zero set? This question was answered by Nazarov and Sodin, in mean: if you are working under the assumptions that Boris and I found, you can show that the number of components grows like a constant times λⁿ. What you can ask after that is: what happens if, instead of just counting the number of components, I want to count the number of components with a given diffeomorphism type? Can I say that 90 percent of my components are always going to be spheres? That's the type of question we are asking. So let me tell you how we are going to go about thinking of this problem. Suppose you have a realization of ψ_λ, and these are the components of your zero set. The manifold has dimension three, so the components are a collection of surfaces, and you are going to organize these surfaces according to their genus. Here I have ten components in total: five of them are spheres, three of them are tori, nothing of genus two, and two components of genus three. What you are going to do is collect that information into a probability measure; what this probability measure tells you is the frequency with which each diffeomorphism type appears: five of those ten times I get a sphere, three out of ten times I get a torus. And the question is: as λ grows, is there going to be a universal distribution for the diffeomorphism types? Is there a law that tells me that, for λ large enough, 90 percent of the components will always be
spheres five percent are going to be tori and so on yes so i mean the random sort of this random random kind of linear combination zero is a regular value again oh yeah with probability one it's a regular value so these are going to be smooth manuals but the i mean the collection of difium morphism types of course depends on the choice of the ij aj yes definitely yes so what these are these are probability measures but they are random probability measures for so for each realization of cylinder you have a different distribution of the difium morphism types so let me actually define these measures so they are probability measures so they give you back a number between zero and one and the domain of these measures is the space of difium morphism types so if you grab a component of your zero set it's going to be a compact manifold it will have dimension n minus one it will have no boundary with probability one you can show that it's going to be smooth and with high probability you can show that it can be embedded in our n so this is the collection of components that we are looking at and we are going to question this by the space of difium morphism types so that's the domain of the measure to each difium morphism type i associate the frequency with which it appears among the components of a zero set so in this example i have one over ten ten being the total number of components and then i'm putting a delta mass every time a difium morphism type is hit that's how the measure looks like in this example so i have five times a delta mass of genus zero three times a delta mass of genus one and so on and that's in general how you construct the measure so if i call c capital c sub c lambda the collection of all the components in my zero set then the measure looks like one over the total number of components and then a sum of delta masses where i'm going to add a delta mass every time the difium morphism type of that delta mass is hit by a component in my zero set okay so the 
question is again in the limit as lambda grows do i have a limiting probability that will encode how these difium morphism types look like and the answer is yes this is a result by peter sarnak and jor bigman so what they show is the existence of this limiting guide mu infinity to which the mu epsilon does are converging and in the the way in which they converges the space of difium morphism types is discrete so to measure the difference between these probability measures you simply evaluate them on finite subsets of their domain and and you can compute the difference so what this statement says is that for any epsilon fixed okay any small epsilon that you pick the probability that the difference between the two measures be bigger than this epsilon is going to go to zero as lambda grows to infinity so in that sense these measures are the limiting guides and the assumption under which you need to be working is that the for almost every point on your manifold the measure of closed geodesic loops has to be zero simply because you need this convergence of the two-point correlation function to prove any result like this okay so now there is a limiting distribution i have to say two things though so from the proof of sarnak and bigman you cannot track how this measure looks like this is an existential proof they prove the existence of the limiting measure but you cannot keep track of how this measure is built so you cannot say that 90 percent of the components are going to be spheres you cannot hope for anything like that the second problem is the following we have no clue what the domains look like for high dimensions so if the manifold has dimension three the zero set is a collection of surfaces so you can definitely organize them by their genus so in that case you do know the domain of the measure but if your manifold has dimension four or higher then the zero set is a collection of components of dimension three or higher and we really don't know what the space of the 
morphism types of such manifolds looks like. So we really don't understand the domain of these measures; that is part of the problem one has. But despite that, what we did with Peter Sarnak was to show that the support of the measure is the entire space. So even though we have no clue what the measure looks like, if you give me a diffeomorphism type, then with strictly positive probability that diffeomorphism type will occur in the zero set of your random wave, for lambda large enough. That is what is happening: you observe all the possible diffeomorphism types once the frequency is large enough. Okay, so what I'm going to do next is discuss the similar problem of the nesting of components, but first, are there any questions about this statement? Question: this is probably super hard, but how computable would it be to find the limit in an example? In an example, no: at the moment we really don't know how to find any lower bounds on these probabilities. We have Alex Barnett running numerical experiments. What he observes is that there is a giant component and then tiny components around it that realize the different diffeomorphism types; but in his experiments he has only been able to see spheres, so it is very likely that the probabilities of seeing all the other diffeomorphism types are tiny. At this moment in time the computers cannot give us any information. And let me add: all these statements are, in the end, about the limiting object. If you want to understand this limiting distribution, what you need to understand is the Gaussian field F_infinity and the diffeomorphism types of its zero set components. This field satisfies an equation: it is an eigenfunction of the Laplacian with eigenvalue one, so it is quite rigid and has a lot of structure to it. Okay, so now for the nesting of the components. If these are the components of your zero set, the way in which we are
going to record the nesting is by using a finite rooted tree. The root of the tree corresponds to the big nodal domain, and each node of the tree is a nodal domain. So here the big domain splits into three pieces, the nodal domains here, here, and here, and then, for example, this one splits further into three pieces: this, this, and that. Each nodal domain is a node in my tree, and you put an edge joining two nodes whenever a component of the zero set separates the corresponding nodal domains. So you can record the nestings of the zero set components using finite rooted trees. The way in which we record the proportions of the different nestings within your zero set is the following. To each component of the zero set, say this yellow one, you associate the corresponding edge of the tree; once you remove that edge, the tree splits into two pieces, and you grab the smaller one. So we are defining a map that, to this yellow component of the zero set, associates this small subtree here, or, for example, to this blue component, associated to this edge, it associates a leaf of the tree. The way you build the probability measure associated to the different nestings is then simple: on the space of finite rooted trees, you put a delta mass every time one of these subtree configurations is hit among the components of your zero set. It is exactly the same construction as before, only that what you are recording now, instead of diffeomorphism types, are these nesting configurations. The question, again, is: in the limit as lambda grows to infinity, is there a universal distribution of the nesting configurations? The answer, again, is yes: in the same paper, Peter Sarnak and Igor Wigman prove the existence of a limiting measure nu_infinity to which the measures nu_lambda converge. So there is a universal distribution of
the nesting configurations of the zero set. Once lambda is large enough, there will always be a fixed proportion of components that are isolated, then a fixed proportion of components that are a bubble inside another bubble, and so on. And what we proved with Peter Sarnak is that the support of this measure is the entire space of trees: if you give me any nesting configuration, we can show that for lambda large enough that nesting configuration occurs with strictly positive probability. That is what we were able to show. What the proofs of these statements on the supports of the measures reduce to is working in R^n: you have to find solutions of the equation whose zero set contains a collection of components nested according to any tree you hand me, or a solution of the equation whose zero set has a component with the diffeomorphism type that you like. So the proofs of these support statements are really about working in R^n and finding solutions of this equation for which you can make sure that the zero set has at least one component with the diffeomorphism type you want, or at least a collection of components with the nesting configuration you want. Okay. To finish the talk, in my last slide I want to show you the only setting in which we actually understand what the limiting probabilities look like, thanks to numerical experiments. This is when you work in dimension two. In dimension two, the diffeomorphism types of the components of the zero set are boring: you can prove that all the components are embedded circles. So what you can do instead, which is much more interesting, is to look at the complement of the zero set. You have all these nodal domains, each of them drawn with a different color, and you can study the
connectivities of these components: you count the number of holes each component has. For example, this green component here has another one inside, so it has one hole; this violet mass here looks like it has at least two holes; and so on. You count the number of holes in each component, and you ask: is there a universal distribution for the number of holes? The answer is yes, and Alex Barnett computed the limiting distribution in this case, so this is the only setup in which we know what the limiting measure looks like. What he gets is that about 91 percent of the components have no holes, about five percent have one hole, one percent have two holes, and then the number of components with higher and higher connectivity decreases super rapidly. His table actually goes all the way up to connectivity 20, and the error, if I remember correctly, was in the fifth decimal place, so you don't even see it here. This is really the only case in which we understand what the limiting probabilities are; in the other settings, the only thing we know is that the support is the full space, and that is it, that is the limit of our knowledge. That's all I wanted to say; thank you very much. Question: this function F_infinity, does it depend on the manifold? No, it is the same one for all manifolds. It is a Gaussian random field on R^n defined by its two-point correlation function: Gaussian fields are completely determined by their two-point correlation function, and this field is the one whose two-point correlation function is the Fourier transform of the spherical measure. You can also, as physicists like to, think of it as a superposition of random plane waves; the two descriptions are equivalent. The only information you have is that it solves the equation, but it is just a function on R^n and it has nothing
to do with the manifold; it is always the same limiting object. It is not connected to the manifold: the manifold is forgotten in the limit. Question: when you build your measure mu counting the different diffeomorphism types, would it be meaningful to weight it by the size of each component? The size, yes, definitely; that is actually the right question to be looking at. Let me expand. What Alex Barnett observes in his numerical computations, in dimension three or higher (dimension two is completely different), is that he always gets a massive component of the zero set. It looks like there is a percolating component of the zero set, no matter how many experiments he runs, that is eating up the whole space, and then he has these small isolated components that realize all the diffeomorphism types we were discussing; but the main object is this huge component that takes up most of the volume, most of the Hausdorff measure, of the zero set. So yes, definitely, that would be the right question, and the answer would probably be that there is this one component that is actually driving the behavior of the zero set; but so far we cannot even show that there is a component that is large. Dimension two is different because it is connected to percolation: the probability of being able to cross a square, say from the bottom side to the top, is the same as the probability of crossing it the other way, so it is unlikely that you will have percolating components of the zero set. In dimension three it is different, because to block a crossing from one side to the other you would need to block it with something of dimension two, which is much harder; so yes, you do expect this percolating component, but we have no proof of that, we are far away from a proof of that. And by the way, I should say that similar things can be done when, instead of working with eigenfunctions, you work with a problem that's
slightly easier: you take sums of eigenfunctions with eigenvalues from zero all the way up to lambda, so you are mimicking polynomials on the manifold, and you look at their zero sets. The proofs there are much easier and a lot more can be done; you can bound the probabilities from below, and so on. Our problem is much more rigid because you have an elliptic equation that needs to be satisfied throughout. Question: both your experiments and your explanations suggest that the number of critical points in a small region is roughly proportional to the volume of the region times lambda to the power n; can you show some kind of equidistribution at small scales, say that the distribution of critical points converges to the Riemannian volume, or something like that? For the critical points it is hard; for the zeros, let me explain one thing, in words. For the zero set we can show the following: fix a constant c and look at balls of radius c over lambda, and look at the zero set inside such a ball, with the Riemannian measure induced on it; call it d sigma_lambda. What we can prove is that this d sigma_lambda converges in distribution to a d sigma_infinity. So at small scales you do have convergence of this measure. What is crucial to get this convergence in distribution is that we know that in R^n, the zero set of a solution of this equation, restricted to a bounded ball, has bounded measure; so the moments are bounded, and we get convergence of all the moments. For the number of critical points, if you try to do the same, we do not know that the moments are finite; we actually conjecture that, for high enough moments, they will be infinite. So we cannot get this convergence; but we do not really know what happens with the moments. That is a really nice question. Question, maybe related to that: for a lot of the local things there's
a common limit distribution that does not depend on the manifold; is that just because almost everything is happening in really, really small balls that are almost like Euclidean space, so the distribution is the Euclidean one? Yes: eigenfunctions oscillate like crazy as lambda goes to infinity, so what is really happening is that, at these small scales, the picture really looks Euclidean. That is what is happening. There was a question...
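As an aside for the reader of this transcript: the limiting Gaussian field discussed above is easy to simulate via the equivalent "superposition of random plane waves" description the speaker mentions. The sketch below (a minimal illustration, not code from the talk; all names are mine) samples a 2D random wave as a sum of unit-frequency plane waves with random directions and phases, and then checks numerically that the realization satisfies the Helmholtz equation Delta F + F = 0, which holds exactly because each summand cos(k . x + phi) with |k| = 1 is an eigenfunction of the Laplacian with eigenvalue one.

```python
import numpy as np

def random_plane_wave(rng, n_waves=64):
    """Sample one realization of the 2D random plane wave model:
    F(x) = sqrt(2/N) * sum_j cos(k_j . x + phi_j), with |k_j| = 1."""
    theta = rng.uniform(0.0, 2.0 * np.pi, n_waves)   # random directions on the unit circle
    phase = rng.uniform(0.0, 2.0 * np.pi, n_waves)   # independent uniform phases
    k = np.stack([np.cos(theta), np.sin(theta)])     # unit wavevectors, shape (2, N)

    def F(x, y):
        # x, y: arrays of equal shape; evaluates F pointwise
        arg = np.multiply.outer(x, k[0]) + np.multiply.outer(y, k[1]) + phase
        return np.sqrt(2.0 / n_waves) * np.cos(arg).sum(axis=-1)

    return F

rng = np.random.default_rng(0)
F = random_plane_wave(rng)

# Check Delta F + F = 0 on a grid, using a 5-point finite-difference
# Laplacian with spacing h; the residual should be of order h**2.
h = 1e-3
xs, ys = np.meshgrid(np.linspace(-2, 2, 9), np.linspace(-2, 2, 9))
lap = (F(xs + h, ys) + F(xs - h, ys) + F(xs, ys + h) + F(xs, ys - h)
       - 4.0 * F(xs, ys)) / h**2
residual = np.max(np.abs(lap + F(xs, ys)))
print(residual)
```

With enough plane waves this converges (in the number of waves) to the Gaussian field whose two-point correlation function is the Fourier transform of the spherical measure, as described in the talk; the zero sets and nodal domains of such samples are what experiments like Alex Barnett's study at much larger scale and in higher dimension.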