Okay, thank you. Thank you very much for the invitation. It's a great honor to be here speaking at home. This talk is joint work with Jeffrey Galkowski, and it's about eigenfunctions of the Laplacian, so I'm going to start by introducing the setting. We are going to be working on a compact Riemannian manifold M; the dimension of the manifold, if I ever refer to it, is little n. So this is a compact smooth manifold with no boundary, and on the L² space over the manifold you have the Laplacian acting. The manifold for us is going to be compact, and since it is compact, the Laplacian has discrete spectrum, which can only accumulate at infinity. The notation I'm going to use: the eigenfunctions of the Laplacian are going to be denoted by φ_λ, and the eigenvalues are going to be λ². So this is just a discrete sequence of eigenvalues accumulating at infinity; in reality they are λ_j's, but I am usually going to omit the subindex. Throughout the talk we are going to take these eigenfunctions to be normalized in L², so the L² norm of each eigenfunction is prescribed to be 1. If you're not used to working with eigenfunctions, one way of thinking of them is to use the quantum mechanics perspective: you fix a point in the manifold, you evaluate the eigenfunction at the point, and you measure the absolute value squared. What you get is the probability of finding a quantum particle of energy λ², the eigenvalue, at the point x. This is how you should think of the eigenfunctions; that's the information they are giving you.
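For the reader following along, the setup just described can be written out; this is my rendering in symbols, not a slide from the talk:

```latex
% Setting: (M,g) a compact smooth Riemannian manifold without boundary, dim M = n.
% Laplace eigenfunctions, with the sign convention making the Laplacian nonnegative:
-\Delta_g \,\varphi_{\lambda_j} = \lambda_j^2\, \varphi_{\lambda_j},
\qquad 0 \le \lambda_1 \le \lambda_2 \le \cdots \to \infty,
\qquad \|\varphi_{\lambda_j}\|_{L^2(M)} = 1.
% Quantum-mechanical reading: for a region A \subset M,
\int_A |\varphi_\lambda(x)|^2\, dv_g(x)
 \;=\; \text{probability that a particle of energy } \lambda^2 \text{ lies in } A.
```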
So this is the probability of finding your quantum particle of energy λ² at the point x. Okay, so this is the information that the eigenfunctions are giving us. The idea (this is just the same line, only that I wanted to keep it to use later) is that we want to understand how the behavior of these eigenfunctions is affected by the metric that you put on the manifold. So we want to know how they respond to the different dynamical systems that you can create on the base manifold M. From a numerical point of view, we have a lot of evidence that eigenfunctions respond to the dynamics of the manifold, and I'm just going to try to convince you with a picture here. So here you have a disc and a cardioid, and in red what you have is the billiard trajectory of a particle that you kick with some initial position and momentum. And what I'm going to show you next is the plot of four different eigenfunctions, with the eigenvalue growing in this direction.
Okay, and what you can see is that on the disc, where the billiard trajectory was quite nice, the eigenfunctions also look nice. They are responding to the geometry, to all the symmetries that you have in the disc. For example, you know that the trajectory was avoiding the center of the disc, and in these four pictures the way in which you should read the information is that you are seeing density plots of the absolute value squared: black means that the eigenfunction is highly concentrated there, white means that the eigenfunction is almost zero. So what these eigenfunctions are telling you is that, for the quantum particles corresponding to the energies given in each of these plots, you will never find your particle at the center of the disc: the probability of being close to the center of the disc is zero in all of them. If you look at what happens in the cardioid, the picture is quite different, because the billiard trajectory is chaotic, and the eigenfunctions are also capturing this information. The only symmetry that you are able to see in this plot of the eigenfunctions is the symmetry with respect to the horizontal axis, but other than that the plot looks quite chaotic. And if you were to plot more and more eigenfunctions, letting the eigenvalue grow to infinity in this direction, you could picture that what you would see is that the cardioid would start looking evenly gray, and what that means is that the probability of finding your quantum particle in a certain region of your cardioid is simply the area of that region.
That's what you would get. So the idea is to try to understand how these eigenfunctions are concentrated across the manifold; we are going to impose different geometric conditions on the manifold and see how the eigenfunctions respond to these conditions. Now, the way in which we are going to phrase the question is to take a submanifold H inside M, and we are going to average these eigenfunctions φ_λ along H: I just want to integrate these functions on H with respect to the volume form that my Riemannian metric induces on the submanifold H. And the idea is to try to understand how these averages behave; that's going to be the focus of this talk. Now, these eigenfunctions are oscillating like crazy. They are going up and down, and they are doing this at frequencies like 1/λ: the eigenvalues are λ², and the oscillations happen on intervals of length about 1/λ. So the larger your eigenvalue, the faster they go up and down, and what you should expect of these averages is that they should go to zero: you should have cancellations along H. But this is actually not always going to be the case. So I'm going to start the talk by giving you an example where these averages do not go to zero. We are going to take that as our enemy; we are going to try to build intuition about why that is happening and try to rule it out. But before I do that, what I wanted to do is mention the case in which H is just a point, which we are going to allow in this talk. When you do this, you are just getting information on the value of your function at the point x. And I'm saying this because, towards the end, the talk is going to turn into understanding the value distributions of these functions.
What's the supremum value that these functions can take, depending on the point x that we choose? Towards the end, the conditions are going to sound like: if x is not a point that is self-conjugate with maximal rank, then I'm going to be able to control the supremum. But before we get into that, I want to talk about the enemy, which I'm going to present on the two-torus. So take the square flat torus: think of it as the square [0, 2π] × [0, 2π], where we identify opposite sides. And the sequence of eigenfunctions that you can build on the torus to give you an average that doesn't go to zero looks like this: when you evaluate it at the point (x_1, x_2), you simply get e^{iλx_1}, λ being the frequency. This is an eigenfunction of the Laplacian with eigenvalue λ²: you just differentiate twice with respect to x_1 and you get the function back. Of course, I need this to be 2π-periodic, so λ needs to belong to the integers, so that this makes sense as a function on the torus, which is what makes the spectrum discrete. What happens with this function is that if I take my curve H to be the vertical segment {x_1 = 0}, which corresponds to this closed curve in the torus, then when you restrict the function to H it is simply the constant function equal to 1 there. So when you average, you just pick up 2π; you pick up a constant. Now, this is a very specific example, and what's happening here is that the functions are constant along vertical lines. Or, put in other words, all the action for your function is happening in the horizontal direction, conormal to H; this is how they oscillate, like this.
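This constant-versus-cancellation dichotomy on the torus is easy to check numerically. Here is a small sketch (my own code, not from the talk) that approximates the average along H = {x_1 = 0} for torus eigenfunctions e^{i(m x_1 + n x_2)}:

```python
import cmath

# Numerical sanity check of the torus example (my own sketch, not code from the talk).
# Eigenfunctions on the flat torus [0, 2*pi]^2: phi(x1, x2) = exp(i*(m*x1 + n*x2)),
# with eigenvalue m^2 + n^2.  Take H = {x1 = 0}; on H, phi(0, x2) = exp(i*n*x2),
# so the frequency m normal to H drops out of the restriction entirely.

def average_along_H(n, samples=1000):
    """Riemann sum for the average of phi over H = {x1 = 0}: integral of exp(i*n*x2)."""
    h = 2 * cmath.pi / samples
    return sum(cmath.exp(1j * n * k * h) for k in range(samples)) * h

# The "enemy" phi = exp(i*lambda*x1) has n = 0: its restriction to H is the
# constant 1, so the average equals 2*pi no matter how large the frequency m is.
assert abs(average_along_H(0) - 2 * cmath.pi) < 1e-9

# Any eigenfunction that oscillates *along* H (n != 0) cancels out instead.
for n in (1, 7, 40):
    assert abs(average_along_H(n)) < 1e-9

print("torus averages behave as claimed")
```

The point of the sketch: oscillation purely conormal to H survives averaging, while any tangential oscillation kills the average. That is exactly the mechanism the enemy exploits.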
And actually, whenever you have this behavior, whenever your functions propagate only in the direction conormal to H, we are going to have maximal averages in this sense. But that's getting ahead of what I'm planning to do in the next slide, which is just giving you the settings in which any information is known. So this is the main question for the talk: trying to understand the averages. And just remember that this is also a talk about value distribution, because you get to pick your submanifold to be just a point. Okay, so here are some known results. I'm going to start with what's known on a surface; that's this side of the slide. On a surface, in '82-'83, Good and Hejhal independently proved the same result, which is that these averages are bounded above by a constant; that's this big-O(1) notation. In order to do that, they needed the surface to be hyperbolic and the curve H to be a geodesic. But it turns out that this is not needed as an assumption: ten years later, Steve Zelditch proved that on any manifold you may choose, you can average over a submanifold H whose codimension is k (that's how I'm going to denote the codimension of your submanifold throughout the talk), and these averages grow at most like a constant times λ^{(k-1)/2}. The asymptotics always take place as the eigenvalues λ go to infinity: every time you see this notation, I'm just saying that as the eigenvalue grows, you are bounded above by a constant times whatever is in the parentheses. So Zelditch proved this result, which tells you that you always have this upper bound, O(λ^{(k-1)/2}), where, as I was saying, k is the codimension of H. And what the talk is mainly about is: when can you saturate this upper bound? When are you going to have maximal averages?
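Summarizing the bounds mentioned around here, in my notation for the slide's statements:

```latex
% Averages over a submanifold H \subset M of codimension k:
\int_H \varphi_\lambda \, d\sigma_H \;=\; O\!\big(\lambda^{\frac{k-1}{2}}\big)
\qquad \text{(Zelditch, any } M \text{ and } H\text{)};
% Good, Hejhal ('82-'83): O(1) for H a geodesic on a hyperbolic surface (k = 1);
% H a single point (k = n): the sup-norm bound of Levitan, Avakumovic, Hormander,
\|\varphi_\lambda\|_{L^\infty(M)} \;=\; O\!\big(\lambda^{\frac{n-1}{2}}\big).
```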
That is, when do you reach the λ^{(k-1)/2} growth? When H is a point, this question was studied way before Zelditch's work: it started with the work of Levitan, and then Avakumović and Hörmander generalized it. When H is a point the codimension is n, so you just get λ^{(n-1)/2}; that's an upper bound for the sup norms of the eigenfunctions. Okay. So, as I was saying, the talk is about when you can saturate this upper bound, or, you can think of it as, when you can improve it to a little-o bound. The first result in this direction is by Chen and Sogge in 2015. What they showed is that these averages are actually little-o(1), so they do go to zero, as you expected at the beginning, because the eigenfunctions are oscillating like crazy. But they were only able to prove it when the surface has strictly negative sectional curvature and the curve H is a geodesic. In this same setup, Xi and Zhang then improved the rate of decay: instead of just saying that the averages go to zero, they were able to prove that they are bounded above by a constant over √(log λ), so they can actually control the rate of decay to zero. The same was proved by Wyman in 2017, so last year, and what he was able to do was to relax the conditions on H: H no longer needed to be a geodesic for him. Now H is a curve that has to satisfy some curvature assumptions, namely its curvature has to avoid two critical values, but I don't want to get into that, because the talk takes place on this other side: I want to work on a general manifold and with a general submanifold. In this setting, what we know is the following: you can go from a big-O to a little-o as long as a certain condition is satisfied. What's going to happen here is that we are working within the unit cosphere bundle S*M, whose points I denote by (x, ξ).
So this is position and momentum. You have your curve H here, and what I want to do is understand the set of unit conormal directions. I'm going to be looking at all the covectors that are perpendicular to H; this is what I denote by N*H, but I'm going to normalize them to have unit length, so SN*H. Okay, that's the orange set that appears right here, pointing either way: this is just the set of all unit conormal vectors to H. When H is just a point, this is simply the fiber over x, S*_x M: all the unit-length covectors at x. That's what SN*H looks like in that case. The idea now is that what Wyman needed to do in order to control these averages was to look at the set of looping directions that are conormal to H. That means: I start at a point on H with some direction conormal to H, and after some time the geodesic flow brings me back to H, and it brings me back in such a way that I am again conormal to H; I'm perpendicular to H when it brings me back. That's the set that I'm denoting by the calligraphic L: all the unit conormal directions to H that loop back to H, conormally, after some positive time t. This is a subset of the cosphere bundle, so you can use the Sasaki metric to induce a volume form here: this set inherits a volume form σ.
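In symbols, my transcription of the objects just described:

```latex
% Unit conormal bundle of H, and Wyman's set of conormal looping directions;
% \varphi_t denotes the geodesic flow on S^*M:
SN^*H \;=\; \{(x,\xi) \in S^*M \;:\; x \in H,\ \ \xi|_{T_x H} = 0\},
\qquad
\mathcal{L}_H \;=\; \{\rho \in SN^*H \;:\; \exists\, t > 0,\ \varphi_t(\rho) \in SN^*H\}.
% \mathcal{L}_H carries the volume form \sigma that SN^*H inherits
% from the Sasaki metric on S^*M.
```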
That's the σ that I'm writing here. And when the measure, the volume, of the set of looping directions is zero, Wyman was able to prove that these averages are not going to be saturated: you never reach the λ^{(k-1)/2} growth, you just go slightly slower and you get a little-o. Are there any questions before I move on? This is the setup, and I need to know if you understand the problem. [Question from the audience.] Yes, so I have a Riemannian metric; on the cotangent bundle there is the Sasaki metric, and it induces a metric here and a metric on my submanifold, and that's what I use. [Question: what does this set look like?] I don't know what it looks like; it's just a subset of SN*H. It can be nasty. It just has a volume. [Question.] No, I can actually cross H in some weird direction and then come back conormally; all that matters is that I come back conormally at some point. I don't care if it's the first time that I touch H; at some point I have to come back, conormal when I leave H and when I come back after some time, yes. Okay, so these are the known results, and what this talk is actually about is a set of results and techniques that recover all these results and extend them. So hopefully by the end of the talk you'll have an idea of how to deal with eigenfunctions; that's a new way of looking at eigenfunctions. And if I'm lucky, I will get to talk about logarithmic improvements. Usually, the way people get logarithmic improvements of this type when working with eigenfunctions is to ask for no conjugate points or negative sectional curvature: they lift everything to the universal cover and work with the eigenfunctions there, because there you can propagate things for logarithmic times. That's how they get improvements for the eigenfunctions. Towards the end of the talk, I'm going to present results with logarithmic improvements.
We are going to actually recover these results in particular, but the techniques that we are going to use to get them are completely different: these are techniques about what happens when you restrict your eigenfunctions to tubes around geodesics. Okay, and you just keep track of what happens along these tubes: depending on what the dynamics of your geodesic flow looks like, what will happen with these tubes and what will happen with the mass of your eigenfunctions. Okay, so now I want to talk about Gaussian beams. In this slide I just want to give you the heuristics of what's happening behind being able to saturate these upper bounds, and in order to do that, what I'm going to ask you to believe, just for this slide, is that one can decompose eigenfunctions into linear combinations of Gaussian beams. So I'm going to define what I mean by that. Suppose we are on the sphere; but you can build Gaussian beams on any manifold, as all you need to have is a closed geodesic. So this is my closed geodesic γ here. A Gaussian beam is simply a sequence of Laplace eigenfunctions that concentrates heavily along a closed geodesic. So here, for this one, the eigenfunctions will be very small, they will look like zero, at the poles, and along this geodesic they will blow up. They are oscillating, right, so picture them blowing up along the equator like this. The way in which they blow up is prescribed there.
So this height here is like λ^{1/4}, and they blow up around a band centered at the equator whose width is like λ^{-1/2}. On the sphere this sequence of eigenfunctions is called the highest-weight spherical harmonics, but, as I was saying, you can build them any time that you have a closed geodesic: you can find a sequence of eigenfunctions that will localize along your closed geodesic. And here in this slide what I'm showing you is just the profile of one Gaussian beam. So here is the peak that I was showing you over there: the eigenfunction peaks at height about λ^{1/4}, that's this peak up here, and then at the base you have λ^{-1/2}, which is the width of this band. And, as I was saying, they oscillate along your geodesic, that's the oscillation that I was trying to draw in this direction, and they go up and down like an eigenfunction, so every 1/λ they go up and down. And if you believe, just for this slide, that you can decompose an eigenfunction into a linear combination of Gaussian beams, what you want to do is say: okay, I'm going to start with my curve H, and I'm going to try to grab a Gaussian beam and run it across H and see if I can do anything with these averages. So you want to maximize the averages. For example, positioning the beam the way I do there would correspond to taking an H like this one in the picture, and what happens when you do that concerns these oscillations.
They are happening within this λ^{-1/2} band, so they are going to cancel each other when you restrict the eigenfunction to H. So when you look at what happens with these averages, they go to zero faster than any polynomial in λ; that's what this notation λ^{-∞} means. And what we need to do is remember to normalize the beams to have L² mass equal to one. One Gaussian beam placed like this is of course not enough to saturate the constant upper bound that I'm trying to achieve, so you say: okay, for sure I cannot put it like this, because I'm not getting anything. But if you put it perpendicular to H, if now you take your curve H and place the beam perpendicular to it like this, then what happens is that you are going to pick up a contribution when you average along H. You're going to pick up the area under the graph, which is like λ^{1/4} (yes, it says one quarter) times λ^{-1/2}, which is this λ^{-1/4} that you can read here for the average. So this is still going to zero; it's not achieving the constant upper bound, but it's quite good, because we picked up something. And what I can do now is, instead of using just one Gaussian beam as my eigenfunction, use two, and now what I'm doing is doubling the average over the curve. All I need to do, if I'm going to do that, is keep track of the fact that the L² norm changed. Now, if they are spread apart from each other (I need them to be at least as far away from each other as this λ^{-1/2}, which is the width of the band where they are supported), then these two beams are going to be orthogonal to each other.
So when you compute the L² norm, it grows like √2; and if I put three beams instead, you get a √3 here and three times the average. And then you quickly realize: okay, if I want to hit this upper bound, what I need to do is put in as many Gaussian beams as I can fit along this curve while still keeping them orthogonal to each other. And that's what you do: you put in as many as you can, which is λ^{1/2} of them, because they are spread apart by λ^{-1/2} and I do not want them to collide. When you do that, the L² norm grows like the square root of the number of beams, which is λ^{1/4}, and the averages grow like λ^{1/2} times the area under each bump, which was λ^{-1/4}. So now these two quantities are both λ^{1/4}, and if you remember to normalize the eigenfunctions so that the L² mass is one, dividing by this λ^{1/4}, then the averages for the normalized functions are constant. So this is how I get to saturate my upper bound: I just need to put as many Gaussian beams as I can, oscillating in the direction perpendicular to H. And what this means, if you think of it in terms of your eigenfunctions, is that if you want to saturate the upper bound, your eigenfunctions need to oscillate conormally to H: anything happening for the eigenfunctions has to happen uniformly along H (I need them to fill H first) and all the action has to be in the normal direction to it. Which is exactly what was happening in the torus example: the sequence of eigenfunctions that we built there was only oscillating in the normal direction to H; that's where all the action was happening.
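The bookkeeping on this slide can be recorded as follows (my summary; H is a curve on a surface, so k = 1 and Zelditch's bound is O(1)):

```latex
% One transversal Gaussian beam (height \lambda^{1/4}, width \lambda^{-1/2}):
\Big|\int_H u_{\mathrm{beam}}\, d\sigma_H\Big|
 \;\sim\; \lambda^{\frac14}\cdot\lambda^{-\frac12} \;=\; \lambda^{-\frac14}.
% N \sim \lambda^{1/2} beams, spaced \lambda^{-1/2} apart so they stay orthogonal:
\Big|\int_H u\, d\sigma_H\Big| \;\sim\; N\,\lambda^{-\frac14} \;=\; \lambda^{\frac14},
\qquad
\|u\|_{L^2} \;\sim\; \sqrt{N} \;=\; \lambda^{\frac14},
% so after L^2-normalization the average is comparable to a constant,
% saturating the O(1) = O(\lambda^{(k-1)/2}) bound.
```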
So this is what you have to keep in mind if you want to saturate these averages: all you need is for your eigenfunctions to be uniformly distributed along H, with all the action for them happening in the normal direction to H. Okay, so now what I want to do is talk about how we are going to relate the dynamics of the geodesic flow to the behavior of the eigenfunctions, and a very convenient way of doing this is to use defect measures. So I'm going to explain what a defect measure is. A defect measure is a probability measure that you put on the unit cosphere bundle (it's a measure here), associated to a sequence of eigenfunctions, and it's defined in the following way. I'm going to say that μ is a defect measure for this sequence if, for any operator A (here the notation means any pseudodifferential operator on M, but if you're not aware of what that means, and I do not want to define it, just think of a differential operator on M), the following happens: when you apply your operator to your eigenfunction and take the inner product in L² with the eigenfunction itself, then this inner product, as the eigenvalue grows to infinity, converges to the integral over the unit cosphere bundle of the principal symbol of your operator with respect to the measure μ. And this needs to happen for any operator that you may pick. This is how we define a measure associated to the sequence of eigenfunctions; this measure is going to capture how the eigenfunctions are behaving in the high-energy limit. Now, for full disclosure, we don't know much about these measures.
We know very little, actually; this is a big branch of microlocal analysis, trying to understand what the set of possible defect measures is. But I'm going to tell you a couple of things that we do know. So, for example, if you start with a sequence of eigenfunctions, then we do not know whether there is a defect measure associated to it, but what we do know is that you can extract a subsequence of eigenfunctions that does have a defect measure associated to it. That's my first point here, and this is something that we are going to use, because I'm trying to understand when these averages are saturated, when the upper bound is achieved. So I'm going to say something like: if I have a sequence of eigenfunctions saturating my upper bound, then I can extract a subsequence that has a defect measure associated to it. The second property, which is the interesting one, the one that's going to keep track of what the geodesic flow is doing, is that these measures are invariant under the geodesic flow on S*M. So we have a measure that carries information both about what the geodesic flow is doing and about what the sequence of eigenfunctions is doing. Just so that you have an image of how these things look, in the torus example the defect measure is a delta mass in the (1,0) direction in the frequency variables (this vector here is the vector (1,0)) times the Lebesgue measure on the torus in the x variables.
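In formulas, the definition and the torus example just described read as follows (my rendering, including the normalization of the Lebesgue factor, which I am assuming makes μ a probability measure):

```latex
% \mu is a defect measure for the sequence (\varphi_{\lambda_j}) if, for every
% (order-zero) pseudodifferential operator A on M with principal symbol \sigma(A),
\big\langle A\,\varphi_{\lambda_j},\, \varphi_{\lambda_j} \big\rangle_{L^2(M)}
\;\xrightarrow{\;\lambda_j \to \infty\;}\;
\int_{S^*M} \sigma(A)\, d\mu;
% such a \mu is then invariant under the geodesic flow \varphi_t:
(\varphi_t)_* \mu = \mu .
% Torus example, \varphi_\lambda(x) = e^{i\lambda x_1}:
d\mu \;=\; \delta_{\xi = (1,0)} \otimes \frac{dx}{(2\pi)^2}.
```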
So what this measure is telling you is that my eigenfunctions are going to be uniformly distributed in position, but all the action happening in the momentum variables is happening in the (1,0) direction, which is what was happening here in this example. For the Gaussian beam, the measure looks like a delta mass along the geodesic: just grab the closed geodesic that you have, lift it to the cosphere bundle, and then the measure is the delta mass along that lifted geodesic. What that is saying is that your sequence of eigenfunctions is concentrated only along this closed geodesic. That's where all the action is; outside this geodesic you are basically zero, there is nothing of your eigenfunctions that survives in the high-energy limit. And what we are going to do now is grab these measures and restrict them to the set of unit conormal directions to H. So what I want to do is define a measure here on SN*H, and the way in which we are going to define it is: you're going to give yourself a set A in SN*H, and now you're going to flow it out using the geodesic flow. You're going to flow it for times, say, δ into the future and δ back into the past. And that's my next slide here.
You grab your set A in SN*H and you just flow it under the geodesic flow for times smaller than δ; δ is a positive number that you just fix beforehand. And what you do is measure the volume of this flowed-out set with respect to your defect measure μ, and then you divide by 2δ. This definition is actually independent of the δ that you pick; I could have written a limit as δ goes to zero in front of this definition and it would work, but I just want you to have a picture of how this measure looks. So, to get a measure on SN*H, you just grab your set, you flow it out, and you use your defect measure to understand the size of this set after you flowed. Yeah. So now I'm going to start presenting the results that we have. You are going to assume you have a sequence with a defect measure, and you're going to decompose the induced measure into two pieces: one that I understand, which is a density f times the volume measure on the set of unit conormal directions (f is just the density in front of the volume, the derivative if you want), and one that's mutually singular with respect to it. And the first result that we prove is that you can control these averages in a very specific way from above. There exists a constant, depending only on n, the dimension of the manifold, and k, the codimension of the submanifold, such that these averages are bounded by that constant times λ^{(k-1)/2} times a very specific integral (you're just integrating the square root of your density with respect to the volume form) plus some error term. So there are a couple of things that you should note. The first one is that constant times λ^{(k-1)/2} is exactly Zelditch's result.
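Written out, the induced measure and the bound just stated are (my transcription of the slide):

```latex
% Measure induced on SN^*H by the defect measure \mu (independent of \delta):
\mu_H(A) \;=\; \frac{1}{2\delta}\,
   \mu\Big( \bigcup_{|t| < \delta} \varphi_t(A) \Big),
\qquad A \subset SN^*H,
% decomposed against the induced volume measure \sigma:
d\mu_H \;=\; f\, d\sigma \;+\; d\mu_H^{\perp},
\qquad \mu_H^{\perp} \,\perp\, \sigma,
% and the first theorem: for a constant C_{n,k} depending only on n and k,
\Big|\int_H \varphi_\lambda\, d\sigma_H\Big|
\;\le\; C_{n,k}\, \lambda^{\frac{k-1}{2}}
  \Big( \int_{SN^*H} \sqrt{f}\; d\sigma \;+\; o(1) \Big).
```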
This is the upper bound that we are just recovering now, only with a very specific constant: I'm telling you that the constant depends only on n and k, together with this integral of your density. The second comment, so that you can understand what's happening in the two examples: when you're working with the torus, the function f is identically 1, so this is actually your upper bound; while in the case of the Gaussian beam, the function f is identically 0, so this term dies and you cannot have maximal averages. Which is what was happening with the Gaussian beam: as I was showing you, you have cancellations no matter which H you put, so the averages go to zero. But this tells you much more than what I just said. What this tells you is the following: if you want to saturate your upper bound, then this term cannot be zero, your function f cannot be identically zero, which means that the measure μ_H and the volume measure cannot be mutually singular. If you want to have maximal averages, it just cannot happen that the two measures are mutually singular. So this is just an observation, but on this observation will rely all the information that I'm going to give you in the rest of the talk: if my sequence has maximal averages, then the two measures cannot be mutually singular. And here I want to make a point of the fact that we do not understand these defect measures well enough to say anything more, so I need to get rid of my defect measure, and that's what I'm going to do on this next slide. So with Jeff Galkowski, we were like: okay, what can we say about these defect measures that will actually get rid of the defect measure? And we started thinking that perhaps we can understand the support of the measure.
So remember, at the very beginning, I was showing you the set of geodesic loops that start conormal to H and come back conormal to H, and the condition that Wyman needed on this set was that its volume be zero if you didn't want to have maximal averages. Well, we found out that in order to understand this problem you actually need to look into a subset of this set, which is the set of recurrent conormal directions. These are all the directions that start conormal to H and loop back to H conormally; but actually, starting from such a direction, I come back to H infinitely many times, conormally, and arbitrarily close to the direction that I started with. That's what I'm going to call the set of recurrent conormal directions. And what we were able to show is that this set is what carries full measure for μ_H, nothing else: if I want to understand my measure μ_H, all I need to do is understand it on this set. And this is great, because now I have the following result. Remember that we were saying that, in order to have maximal averages, μ_H and σ couldn't be mutually singular. So this implies the following result, in which the defect measure no longer appears in the statement: if I know that the volume of the set of recurrent directions R_H is zero, then I cannot achieve my maximal averages. And the reason for this is that you're finding a set, the set of recurrent directions, that has zero measure for σ but full measure for μ_H; you're finding a set that makes the two measures mutually singular as soon as you impose this condition. So then you cannot have maximal averages, and then you get this little-o. So this was the first result that we proved, which, as I was saying, is quite nice because the defect measure is gone; but it wasn't as satisfying, because we do not work in dynamical systems and, at the beginning, we did not know how to check this condition. So now you have this volume
measure on the set of unit conormal directions, you have this set of recurrent directions, and you need to understand when the volume of that set is zero. So on the next slide what I'm going to do is show you all the settings in which we were able to check this condition. In all of the following settings, the volume of the set of recurrent conormal directions is zero, which implies the little-o for the averages. The first example is when you are working on a manifold with constant negative curvature and H is any submanifold; there we actually went in and checked by hand that the volume of the set of recurrent directions is zero. The second setting is that of a surface whose geodesic flow is Anosov: I can grab any curve there, look at the set of recurrent conormal directions, and show that it has volume zero. I am sure this result should be true on any manifold with Anosov geodesic flow and for any submanifold, but we just have no clue how to approach that problem, so this is an invitation. The next case is when you have no conjugate points. Here we were using the implicit function theorem a bunch of times, so we had to impose a condition on the dimension of H: the dimension of your submanifold cannot exceed (n-1)/2, where n is the dimension of the manifold. Still with no conjugate points, but now looking at the other extreme case, where the dimension of your submanifold is n-1, you are in the geodesic sphere case, and we were also able to prove it there. And then there is the case where you work in the intersection of Anosov geodesic flow with non-positive curvature.
So in this intersection, when H is totally geodesic, we were also able to show it. But again, we do not work in dynamical systems, so I'm pretty sure that all these results can be improved a lot. In any case, this is as far as we went working with defect measures. Defect measures are these limiting objects that record what eigenfunctions are doing as the eigenvalue grows, but they don't capture anything about the rate of convergence of your eigenfunctions to the limit. So if you want improvements on these little-o upper bounds, you cannot use defect measures. What we did with Jeff Galkowski was to try to get the logarithmic upper bounds, because that actually recovers all the results I was showing you on the first slide: not just the little-o bounds, there were also results with a logarithmic improvement, and we wanted to get those too. So the rest of the talk is about this logarithmic improvement, which I think is actually the best part of the talk, because it really is about understanding what's happening with the eigenfunctions along geodesic tubes, and it doesn't use defect measures at all. The idea is that in all these setups we were able to show that instead of a little-o you can actually divide by the square root of log lambda. I wanted to include here the case of constant negative curvature; we did not check the details that you can get the logarithmic improvement there, but it has to be true. Again, the point is that the rest of the talk is on how to get these logarithmic improvements, and what I'm going to do is work in the setting where H is just a point. All the results work for submanifolds, but for the rest of the talk H is going to be a single point, and I'm going to be talking about the values of eigenfunctions at the point, so that you have an easier picture of what's going on with these eigenfunctions, because picturing the set of unit conormal directions is much easier.
It's just the fiber at x. Okay, so what's known about the values of eigenfunctions at a point? H is just a point, and as I was saying at the beginning, you can look at the set of looping directions that are conormal to H, but here that's just the set of looping directions at x, and the set of unit conormal directions is the entire sphere in the fiber. Whenever the measure of the set of looping directions is zero, then, as Sogge and Zelditch proved in 2002, the values at x cannot exceed little-o of lambda to the (n-1)/2. Remember that in general we had this upper bound with a big-O; they improved it to a little-o when the volume of the set of looping directions is zero. Actually, this is also true if, instead of looking at the set of looping directions, you look at the set of recurrent directions at x, and that was proved by Sogge, Toth, and Zelditch in 2011. So if you're only studying suprema of eigenfunctions at a point, all you need to do is understand the volume of the set of recurrent directions at that point. The logarithmic improvements were proved way before these two results: they were proved by Bérard. What he needed was no conjugate points, and then he gets a logarithmic improvement. The reason, as I was saying at the very beginning, is that he would lift to the universal cover, where you are able to propagate things for logarithmic times, and this is how he would get the logarithmic improvements: we have a very good description of what the wave kernel looks like on the universal cover when you have no conjugate points. What we were able to prove with Jeff Galkowski is the following result, which allows for conjugate points: you can choose your submanifold to be a point that is self-conjugate. By that I mean there is a geodesic that comes back to the point, and there is a Jacobi field that is zero at the beginning, non-trivial along the way, and zero again when it comes back to the point of origin.
That's what I mean by self-conjugate. The sphere is the worst setup for conjugate points, because you have the maximum number of linearly independent Jacobi fields that vanish at the point: in the sphere you have n-1 linearly independent Jacobi fields that are zero at x and, after some time, come back to x and vanish again. That's when you say a point is self-conjugate with maximal multiplicity, and this is exactly what x cannot be in order for me to be able to state this result: x cannot be a pole of the sphere. As long as the point you pick is not self-conjugate with maximal multiplicity, I get the logarithmic improvements. So now I'm allowing my manifold to have conjugate points; the point just cannot look like a pole of the sphere. Basically, what we are avoiding is having all the geodesics from x loop back at the same time. What I want to do next is give you an idea of how one goes about proving a result like this using Jacobi fields. The idea is as follows. We are looking at the set of unit conormal directions, the fiber at x, and what we are going to do is look at geodesics that are running away from x and build tubes around these geodesics. These tubes are going to be super thin: the radius is going to be like lambda to the minus epsilon, so the radius goes to zero as lambda grows.
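Written in symbols, the estimate proved with these tubes is roughly the following; this is my transcription of the slide, where R(lambda) = lambda^(-epsilon) is the tube radius, the sum runs over the tubes T emanating from x, Op(chi_T) is the quantization of a phase-space cutoff to the tube T, and I am writing the power of lambda as (n-1)/2, the "correct power" from before:

```latex
|\phi_\lambda(x)|
  \;\le\; C \,\lambda^{\frac{n-1}{2}} \, R(\lambda)^{\frac{n-1}{2}}
  \sum_{T} \big\| \mathrm{Op}(\chi_T)\, \phi_\lambda \big\|_{L^2},
\qquad R(\lambda) = \lambda^{-\epsilon}.
```

So controlling the microlocalized masses on the right controls the value at the point.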
So these are tubes that are shrinking depending on the eigenvalue of your eigenfunction, and you consider a bunch of tubes emanating from x. This is the first result that we prove: you can control the value of your eigenfunction at the point x by a constant times lambda to the (n-1)/2, which is again the correct power, and what we pick up here is the radius of the tubes to the power (n-1)/2, times a sum of the masses of your eigenfunction when you restrict it to the tubes. That's what this term is: I'm summing, over all the tubes, the L2 mass of the eigenfunction restricted to each tube. Now, these tubes live in phase space, so when I say that I restrict the mass of the eigenfunction to a tube, I'm operating in phase space. What I'm actually doing is building a cutoff function in phase space for my tube; then I quantize that cutoff, turning it into an operator, and I apply it to my eigenfunction. That's what it means to microlocalize the eigenfunction along a tube; that's how you do it. So if I can control the mass of the eigenfunction along the tubes, I can control the sup norm. Of course, this is not a lot of information unless you can control the mass of the eigenfunction along tubes, and this is the nice part: I can divide my tubes into two classes, good and bad tubes. By good I mean the following: I'm going to say that my tubes are good, and put them together in a union, if that union is non-self-looping for logarithmic times. I just grab the union of these unit-length tubes and let them run under the geodesic flow for logarithmic times, and if they don't come back to the original union, they are good.
Those are my good tubes, and the other ones, the ones that do loop back, I'm calling bad. What we were able to do was control this sum of the masses of the eigenfunction by the parenthesis here, which says: the number of bad tubes raised to the power one half, times an upper bound for all the terms. That part you get easily by splitting the sum into two pieces, the good part and the bad part; applying Cauchy-Schwarz, you pick up the number of bad tubes to the one half. The nice part is that I also get the number of good tubes to the one half, but with the logarithmic improvement. So you may ask how it is that you get the logarithmic improvement, and the argument goes like this. If I have just one tube and I compute the mass of my eigenfunction restricted to that tube, that mass is going to be smaller than the mass of the entire eigenfunction on my manifold. Now what I'm going to do is propagate it once. I get to do that using Egorov's theorem, which says that the mass is going to be preserved.
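In symbols, the mechanism is roughly the following; this is my own schematic rather than notation from the slides, with Phi_t denoting the geodesic flow acting on tubes, the approximate mass conservation being what Egorov's theorem gives for times up to logarithmic in lambda, and N the number of disjoint propagated copies of the tube:

```latex
\big\|\mathrm{Op}(\chi_{\Phi_t(T)})\,\phi_\lambda\big\|_{L^2}
  \;\approx\; \big\|\mathrm{Op}(\chi_{T})\,\phi_\lambda\big\|_{L^2},
  \qquad 0 \le t \lesssim \log\lambda,
\\[4pt]
N\,\big\|\mathrm{Op}(\chi_{T})\,\phi_\lambda\big\|_{L^2}^{2}
  \;\lesssim\; \sum_{j=0}^{N-1}
    \big\|\mathrm{Op}(\chi_{\Phi_j(T)})\,\phi_\lambda\big\|_{L^2}^{2}
  \;\le\; \|\phi_\lambda\|_{L^2}^{2} = 1,
  \qquad N \approx \log\lambda .
```

Solving for the tube mass gives a bound of order one over the square root of log lambda, which is where the improvement comes from.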
So now what I know is that the mass along those two tubes, which is the number on the left, is bounded above by the mass of the entire eigenfunction. And I can apply Egorov's theorem as many times as I want, three times if I want, as long as I don't run for times larger than logarithmic times. So I run it for logarithmic times, and then I get that log lambda times the mass of my initial tube is bounded above by the total mass of the eigenfunction. This is how you get the 1/log lambda improvement, which, when you take the square root, becomes the factor in the denominator. So now I have my point, I create these tubes, and what you need to do is put a dynamical condition on your point, about how the geodesic flow behaves with respect to that point; that should help you distinguish which tubes are good and which are bad for you. On my last slide I'm going to show you how we did that when we impose this condition of not being self-conjugate with maximal multiplicity. So this is my last slide. Here we are going to build good and bad tubes, and we are going to assume that x is not self-conjugate with maximal multiplicity. Okay, so the idea is like this; this is a proof by picture. I start with my point x, and suppose you have a geodesic that loops back to x. I needed to spread things out in the picture, otherwise it would look crazy. Along this geodesic there may be a self-conjugate point, so there may be a Jacobi field that's zero over here, runs along gamma, and then vanishes again at the end; I may have one of those. But since I'm not maximally self-conjugate, I know that I have one Jacobi field that starts out being zero at x, rotates along gamma with some weird shape, and at the end does not vanish.
This is what I know: I have one Jacobi field like that, which means that if I start with a tube that points in the direction of this Jacobi field, the good Jacobi field, the tube is going to start at x and rotate with my Jacobi field, but once it gets close to the final point of my geodesic it will have spread away from the point x, simply because the Jacobi field doesn't vanish there. So my tube moved away from x; this is a good tube for me, a tube that does not loop back to x. And what happens here is that I'm going to look in a neighborhood of that point, zooming in. This is a geodesic going out of the point x, and the green arrow points in the good direction coming from my Jacobi field. It may be that in the orthogonal direction I have a Jacobi field that starts out being zero and vanishes again at the end, but this is a closed condition and it's only one direction. That means I can cover this direction with very few bad balls and say that all the other ones are good balls; the rest are just good balls for me. So what happens is that all the tubes that start in the balls in the good directions are going to end up following the loop but not touching x again; they spread away from x. And I only have these very few bad sets, bad tubes, that may loop back to x, but I can control their number, and if I can control their number and say that they are very few, then I can control that part, and for the rest I get the logarithmic improvement in all the other tubes. And this is how we got this result. The fact that you can get the logarithmic improvements at a point, and similar results, actually holds when you are doing averages over submanifolds rather than just considering one point, but I do not want to state them here and bore you with them. This is the state of the art right now.
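As a small aside before closing, the counting behind the logarithmic gain can be sketched numerically. This is my own toy illustration, not code from the proof; the function name and the crude step count N = int(log lambda) are my choices. The point is just the arithmetic: if a tube is non-self-looping for N, roughly log lambda, time steps, Egorov-type mass conservation makes the N propagated copies essentially disjoint, so the initial tube carries at most 1/N of the total L2 mass, giving the 1/sqrt(log lambda) after taking square roots.

```python
import math

def tube_mass_bound(lam):
    """Toy bound: mass of one non-self-looping tube for an L2-normalized
    eigenfunction with eigenvalue lam**2, propagated for ~log(lam) steps."""
    n_steps = max(1, int(math.log(lam)))  # number of disjoint propagated copies
    total_mass = 1.0                      # ||phi||_{L^2}^2 = 1
    per_tube_mass = total_mass / n_steps  # disjoint copies share the total mass
    return math.sqrt(per_tube_mass)      # ||Op(chi_T) phi|| <= 1/sqrt(log lam)

# The bound shrinks as the eigenvalue grows: longer propagation time means
# more disjoint copies, hence less mass per tube.
```

Under these toy assumptions, doubling the number of propagation steps halves the squared mass per tube, which is exactly the 1/log lambda improvement described above.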
Thank you very much.

[Question from the audience: can you get improvements for the integral, beyond just taking the contrapositive of the statement?]

Not at the moment, and going in that direction is quite hard for us. Usually we do not think of it that way. What happens is that, at the moment, computing eigenfunctions is quite hard, and when you have weird geometries we do not know how to do it. So usually all the statements that we try to make are of the form: if I have this geometry, this dynamical system, on my manifold, then the eigenfunctions will do this, and then I get information on how my quantum particles are behaving. But not the other way around, because in general we do not know what these eigenfunctions look like. Only on the disk and the square can we compute them; otherwise, we don't know. Yeah.