So, I'm very pleased to have this opportunity to speak at this meeting celebrating the life and work of Marcel Berger. Marcel had a big influence on me; I won't say more about that now, because we're going to talk about it later in the afternoon. I want to talk about some work, joint with Aaron Naber and Wenshuai Jiang, concerning the structure of non-collapsed Gromov–Hausdorff limit spaces with Ricci curvature bounded below, subject to the non-collapsing condition. I want to discuss some results on the structure of such spaces: various things were known, and I want to talk about the new material. So H^k denotes k-dimensional Hausdorff measure, and the basic result I'll talk about today is that given such a limit space, you can decompose it into two pieces. One of them is R_ε, the ε-regular part. This is the part that's not exactly smooth, but where the tangent cones are very close to Euclidean space; in particular, R_ε is a manifold — it's bi-Hölder to a smooth Riemannian manifold. The other piece is part of the singular set, and with just a lower bound on Ricci the singular set can actually be dense. Singular means those points where the tangent cone is not Euclidean space, not R^n. It can be dense: you can construct examples by starting with a polytope and putting more and more faces on it, and so on.
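Schematically — in my notation, following the talk — the hypotheses and the decomposition just described read as follows:

```latex
% Non-collapsed limit of smooth manifolds, in the pointed Gromov-Hausdorff sense:
%   Ric_{M_i^n} >= -(n-1) g_i,   Vol(B_1(p_i)) >= v > 0,
%   (M_i^n, g_i, p_i) --pGH--> (X, d, p).
% The basic decomposition into the eps-regular and eps-singular parts:
\[
  X \;=\; \mathcal{R}_\varepsilon \,\cup\, \mathcal{S}_\varepsilon ,
\]
% where R_eps, the eps-regular set, is bi-Holder homeomorphic to a smooth
% Riemannian manifold, and S_eps contains the quantitatively singular points.
```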
But almost all of the singular set, in a certain quantitative sense — the sense of the theorem — is where the singularity is very weak: the ε-regular part also contains points where the tangent cone is not Euclidean space but is very close to it in the Gromov–Hausdorff sense. The piece that's left over, S_ε, has a certain structure. It was known previously, from our work with Colding, that it has Hausdorff codimension 2, but now there is a much stronger statement. First of all, this set is rectifiable. Second, it has a definite bound on its measure: if you take the part lying in a ball of radius 1, its (n − 2)-dimensional Hausdorff measure has a definite bound. More than that, a tube of radius r about S_ε has a volume bound with the exponent r² that you would expect if it were just a nice submanifold of codimension 2 in a Riemannian manifold. Later in the talk I'll mention how what was just said compares with what was previously known; what was known was seemingly quite close, but the difference between this result and the earlier ones requires new techniques. So, to be clear about the meaning of rectifiability in this context: we say a metric measure space — the measure being Hausdorff measure here — is k-rectifiable if you can decompose it, up to a set of measure zero, as a countable union of measurable subsets, each bi-Lipschitz to a positive-measure subset A_i of R^k. So it's something like a manifold, but only in a measure-theoretic sense, and in fact it would not be true, in the context of the previous slide, that S_ε is a manifold; rectifiable is the best you can say. As for the background — what was known: in work with Toby Colding we showed that you have such a decomposition where the singular set has Hausdorff codimension — I mean dimension — n − 2.
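In symbols — my schematic rendering, with constants as described in the talk rather than verbatim from the slides — the rectifiability and measure statements for S_ε are:

```latex
% (n-2)-rectifiability: up to an H^{n-2}-null set Z, S_eps is a countable union
% of bi-Lipschitz images of positive-measure subsets of R^{n-2}:
\[
  \mathcal{S}_\varepsilon \;\subseteq\; \mathcal{Z} \,\cup\, \bigcup_{i=1}^{\infty} \varphi_i(A_i),
  \qquad H^{n-2}(\mathcal{Z}) = 0, \quad A_i \subseteq \mathbb{R}^{n-2}, \ \varphi_i \ \text{bi-Lipschitz}.
\]
% Measure bound and tube bound, with C = C(n, v, eps):
\[
  H^{n-2}\!\left(\mathcal{S}_\varepsilon \cap B_1(p)\right) \;\le\; C,
  \qquad
  \operatorname{Vol}\!\left(T_r\!\left(\mathcal{S}_\varepsilon \cap B_1(p)\right)\right) \;\le\; C\, r^{2},
\]
% the exponent r^2 being what one expects for a nice codimension-2 submanifold.
```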
Dimension always means Hausdorff dimension — I should have said the Hausdorff dimension n − 2, not codimension n − 2. The methodology we used — we didn't really understand this, I think, when we started — was the classical methodology introduced in the context of minimal varieties by De Giorgi, Federer and Fleming, and later carried further by Almgren. But to implement this idea in Riemannian geometry required new techniques, which is what Toby and I developed, in a word because you don't have a background metric. In particular, what's constructed is a filtration of the singular set. The singular set is just the set of points where the tangent cone — when you keep magnifying and pass to a weak geometric limit, also known as a Gromov–Hausdorff limit — is not Euclidean space. That's the singular set, by definition. To repeat: the tangent cone is a pointed Gromov–Hausdorff limit of some subsequence that you get by fixing a point in your limit space and blowing up, that is, rescaling the metric by a sequence r_a^{-1} which goes to infinity. One of the basic results from the work with Toby is that if you do that on a non-collapsed space, then what you get is a metric cone. It need not be unique in general — of course most of the time it's just going to be R^n — but in any case, whatever you get is a metric cone on some cross-section. Here's a picture of a metric cone in two dimensions. The idea is that this cone is a limit: you can have convex surfaces closer and closer to it, converging in the obvious weak geometric sense — the cone is no longer smooth, even though the manifolds in the picture, two-dimensional surfaces, have non-negative curvature. Now, the filtration: in this case S^k is the set of points such that no tangent cone splits off R^{k+1} isometrically.
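The blow-up construction and the resulting filtration can be written compactly — again my own schematic notation for what the talk describes:

```latex
% Tangent cone at x: pointed GH limit of rescalings along some subsequence,
% blowing up by r_a^{-1} -> infinity:
\[
  \left(X,\; r_a^{-1} d,\; x\right) \;\xrightarrow{\ pGH\ }\; C(Z_x),
\]
% which, in the non-collapsed case, is always a metric cone (Cheeger-Colding).
% The filtration of the singular set by symmetry of tangent cones:
\[
  \mathcal{S}^k \;=\; \left\{\, x \in X \;:\; \text{no tangent cone at } x
  \ \text{splits off } \mathbb{R}^{k+1} \ \text{isometrically} \,\right\}.
\]
```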
If you want a picture in your head, you can think of the closed k-skeleton of a simplicial complex. And, as would be the case in that picture — that is, in a simplicial complex, if you thought of S^k as the closed k-skeleton — we have that the dimension of S^k is less than or equal to k. This was what was shown in the earlier context by Almgren, in the sense of Hausdorff measure, and it continues to be true here. The classical method was blow-up arguments. The cones are already defined by blow-up, in the sense of this rescaling and passing to Gromov–Hausdorff limits, and to obtain this dimension inequality — the bound dim S^k ≤ k — you need to do iterated blow-ups. So what's the point of doing a blow-up? These arguments are arguments by contradiction; there's a part that comes in from geometric measure theory, which will be described in a little more detail on the next slide. The point is: you want to show this dimension bound, and what you try to show is that if it were not true, it would not be true for some cone.
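The dimension bound just stated, together with the nesting of the strata, reads as follows (the identification of the full singular set with the top relevant stratum is the codimension-2 statement from earlier in the talk):

```latex
% Nested strata, and Almgren-type dimension bound (Hausdorff dimension):
\[
  \mathcal{S}^0 \subseteq \mathcal{S}^1 \subseteq \cdots \subseteq \mathcal{S}^{n-1},
  \qquad \dim_{H} \mathcal{S}^k \;\le\; k .
\]
% In the non-collapsed Ricci setting, the whole singular set satisfies
% S = S^{n-2}, i.e. it has Hausdorff codimension 2.
```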
A cone is a more special object than a general limit space, so that's helpful. The cones that arise in this k-skeleton context — including the original contexts of minimal surfaces and harmonic maps, or here — exhibit radial invariance. I'll say in a while where this comes from, but a cone with an arbitrary cross-section — which means, geometrically, that I take rescaled copies and stack them on top of each other — obviously exhibits radial invariance. In our context, more specifically, a cone means the formula for the distance in polar coordinates: topologically it's R_+ cross something, the cross-section, which could in principle be any metric space, and the distance squared is given by the law of cosines. That's what I call a metric cone; some people call it a Euclidean cone. In this context of limits with Ricci bounded below, the diameter of the cross-section is at most π. In fact, since the cone is a blown-up limit space, and blowing up — rescaling by a large constant — makes the lower Ricci bound smaller, in some general sense the cones have non-negative Ricci curvature in a limiting, generalized sense. Therefore the cross-section of the cone should have Ricci curvature bounded below by (n − 2) times the metric, just as it would when the cone is Euclidean space and the cross-section is the unit sphere. And that's consistent — I always get into trouble with this; I think I'll just use these arrows — with the bound on the diameter by Myers' theorem: the diameter bound is consistent with the idea that the cross-section should have Ricci curvature bounded below by n − 2. In fact, strictly speaking, this also comes from the splitting theorem, without which it wouldn't even be clear that the cross-section is connected. Okay, so why is this helpful in proving the inequality? Because any point on the cone except the vertex lies on a ray, which in the sense of Riemannian geometry is an infinite geodesic going out to infinity, each segment of which is minimal. So a point other than the vertex lies on a ray. Now suppose I blow up again: the ray becomes a line, and, if you like, by the splitting theorem the line splits off isometrically. So we can keep doing this iterated blow-up, always avoiding the set of vertices. Once we've done it once, we get something cross a line; whatever the vertex was in the non-line factor, now everything on the line factor is a vertex, but if we avoid it we can blow up again. Now the geometric measure theory — it's just a density argument — says that if the inequality failed to begin with, it would fail for a generic point on one of these blown-up cones, and then, by iteration, it would fail again; but eventually we split off R^{k+1}, and every tangent cone of something that splits off R^{k+1} of course also splits off R^{k+1}. Then the fact that S^k is not empty, but consists of points whose tangent cones do not split off R^{k+1}, means we have reached a contradiction. That's how the iterated blow-ups go. Now, the next thing I want to say — because it's important to say it — is why tangent cones are metric cones. We can look at the volume ratio: the denominator is the volume of the ball in the simply connected space of curvature identically equal to H, some hyperbolic space for example, and this is the volume ratio. As a function of r, with H fixed, this ratio is monotone decreasing and a priori bounded. It starts out, in the limit as r goes to zero in the smooth case, at exactly 1, because manifolds are infinitesimally Euclidean; and if we go out to, say, r = 1, the non-collapsing assumption says exactly that it is bounded below. So it's bounded above by 1 and bounded below by something involving the v that we assumed in our non-collapsing condition. And then there's a third thing, which is crucial, which I call coercivity, and which will be explained on the slide after this
one. So: the monotonicity is just the Bishop–Gromov inequality; the boundedness, as I mentioned, is just the non-collapsing assumption; and the last point is something we proved with Toby — the thing I'm calling coercivity. This refers to the almost-equality case of Bishop–Gromov. First, the equality case, which is much easier and was probably understood: suppose first that this monotone ratio, the script V_H, actually didn't change at all as a function of r. Then, as in earlier results in Riemannian geometry, the proof can be by a string of inequalities, each of which would have to be an equality if the ratio didn't change, and in the end you see that the space must be a cone — strictly speaking, in the Riemannian case it would have to be smooth. If instead of the volume of the ball you use the corresponding thing for the (n − 1)-dimensional volume of the boundary of the ball, and you assume that that ratio, which is also monotone, didn't change between two fixed values of r, then the conclusion is that it has to look like an annulus — strictly speaking, an annulus in a cone. When I say that, I'm assuming the lower bound is zero; but since we're talking about rescaling small balls, in the limiting case it's just as if the lower bound really were zero in our context, and otherwise it would be whatever it was, in the warped product with warping function as in the space of constant curvature H. The coercivity, however, says that something much stronger is true: namely, if this monotone quantity is almost constant on some interval, then in the Gromov–Hausdorff sense we are almost in the situation of an annulus — or, using the volume, if we're in a limit space where it actually could be a cone, then we almost have a cone. Now, this turned out not just to be a matter of following the string of equalities and imagining what would happen if each one were almost an equality; it really needed new ideas, which will be mentioned presently. But it is true, and this kind of stability — if you have almost the hypothesis, you have almost the conclusion — is much, much stronger, as we'll see. So now let's normalize: after an initial scaling we may as well take H = −1 — the slide says H = −(n − 1), that of n-dimensional hyperbolic space, but I think I meant H = −1, which means the lower bound on Ricci is −(n − 1); so that's a slight misprint. And let's call something of the form 2^{−j} a scale. Now if we think about it, we have this quantity as a function of r, and we evaluate it on each scale. It's pinched between 1 and the constant that comes from non-collapsing, but there are infinitely many scales between 0 and 2^0 = 1. Because of the monotonicity, on almost all of them it hardly changes at all — it's like a series where all the terms are positive and the sum is bounded, and there are infinitely many terms, so all but a definite number are extremely small. And when the change is small, we're in the situation where it almost looks like a cone. So, strangely, in my opinion: the equality case was probably understood, and the almost-equality case — which is nontrivial to prove — could at least have been guessed earlier in the game. If you just thought about it, you would see this picture, the implication being that on almost every scale — all apart from a definite number, in a quantitative sense — the ball must look, to the naked eye, like a cone after magnification. So this really could have been done — not proved, but at least conjectured — had anyone thought to do it, but I don't think anyone actually did. Okay, so now I want to emphasize a certain point that is going to become important in the rest of the talk, and that is that we
needed this effective estimate, which is what I was calling coercivity. One place it's used is in passing estimates on smooth manifolds, under this weak convergence, to tangent cones: to conclude that tangent cones of limit spaces are metric cones, you need more than the qualitative version — you have an effective estimate. I gave one consequence of it: on most scales a ball looks very close to a cone; you can't tell which scales are bad, but you can tell there are only so many bad ones. And yet, when you say the conclusion "all tangent cones are metric cones", that's an infinitesimal conclusion from an effective estimate. This suggests that the effective part isn't being completely used. Here at the bottom is just the remark that there is also an effective version of the splitting theorem — which in the non-effective case, for non-negative curvature, we did with Gromoll — and it applies even to possibly collapsed tangent cones, because they have non-negative Ricci in the generalized sense. Okay. I said earlier that in the non-collapsed case the singular set has codimension 2, which was part of our work with Toby. How would you prove something like that? It's really part of the general methodology that went back to the minimal surface — minimal varieties — story, and the most famous version of that, I would say, was what was eventually concluded by Jim Simons: for minimizing hypersurfaces, the singularities have codimension 7. The point is that once you know that at any singularity the tangent cone must be an actual cone, you can ask yourself what kind of cones you might have — they have to split off a certain Euclidean factor, the ones you're interested in — so what could they be? In our case, if you want to show the singular set has codimension 2, you say: if I had a cone which split off R^{n−1}, what could it be? Well, the cross-section has to be either two points or one point — it can't be three points; that would contradict the splitting theorem. So either it's really R^{n−1} cross a line, in which case it isn't singular at all, and we don't have to worry about that case, or it's R_+ cross R^{n−1}, that is to say, a half-space. Now, the half-space is a perfectly good cone — it's the cone on a point, cross R^{n−1} — but our question is whether it can arise as a limit space, and the answer is no. Here's the reason, if you think about it. When I say a pointed limit, the point is over here, on the boundary; so a ball around it looks like this half of a ball — it has boundary in the interior. But, on the other hand, the balls of which it's a limit don't have any boundary in there; their boundary is out at the edge. So there's something wrong with this picture, and you can make that into a proof: it's a perfectly good cone, but it doesn't arise as a limit space. And it was the same thing with Simons: the geometric measure theory technology said the cone you would get, if you had singularities, had to be minimizing; Simons, with the Simons equation, showed how to deform it to something with smaller area, so it couldn't have arisen in that way. It would be a perfectly good cone over a compact, smooth, minimizing thing in the sphere — by induction — but he showed, up to the appropriate dimension (where it turned out no longer to be true), that you could deform it to something smaller, and that was the proof. The same idea applies here. Okay. So I said that in the Riemannian case it wasn't just a matter of following a string of ODE inequalities; you really needed new techniques in the work with Toby, and some of them involved a particular kind of regularization, the idea being that distance functions are the natural functions in
Riemannian geometry. In Euclidean space they might even be smooth; in fact a coordinate function in Euclidean space, or in a Riemannian product, is a harmonic function with constant norm of the gradient, and the distance squared from a point has constant Laplacian, equal to 2n. But on a Riemannian manifold, of course, distance functions need not be smooth on the cut locus. So the idea was to approximate them by solutions of these equations with the same boundary values on the region you're looking at, and then measure how close the solution of the elliptic equation is to the corresponding distance function on your manifold. The point of this approximation is that these solutions actually are smooth, and you have control of things like the pointwise norm of the gradient by the Cheng–Yau estimate, while Bochner's formula gives you L² control of the Hessian. Then you can pass those results to control of the distance function itself — which you wouldn't have otherwise — in the form of integral estimates initially; so you use the regularized function to get integral estimates on the actual distance function, which are the relevant ones for the geometry. And then there was a basic technique for turning the integral estimates into estimates on the actual distance functions, involving, for instance, something called the segment inequality. So that was some of what was in the paper with Toby — a considerable amount of the important part. Now, there was also something about two-sided bounds. Suppose that in addition to the lower bound you assume an upper bound on the Ricci curvature. An upper bound is not significant in and of itself, but it adds something, basically because you have more regularity when you assume it in addition to a lower bound. There had been a conjecture going back to the 90s — due in particular to Mike Anderson, and probably some of the other early pioneers, like Gang Tian, maybe Nakajima — that the singular set in the case of a two-sided bound, which actually occurs in Mike Anderson's ICM talk, should have codimension 4 rather than codimension 2. Eventually we were able to prove this with Aaron Naber; it stayed open for a long time. Here's what was understood — maybe I'll just take a minute. Using the same sort of idea as I just explained over there: suppose, with the two-sided bound, you wanted to show the singular set had codimension 4. It was actually understood that if you could show it had codimension 3, there was a topological argument showing it then had codimension 4. So what it came down to was ruling out a tangent cone like this: here's the paper-cup cone, cross R^{n−2}, and you had to show this couldn't arise as a limit space — this perfectly good cone is not a limit space with the two-sided bound on Ricci. Now, in dimension two, of course, it's kind of obvious from the picture that it can't; but to show that this can't arise as a limit space with Ricci zero — because of the blowing-up part — turned out to be quite hard. There was a rather technically difficult argument, and we were eventually able to do it. So that was the hard part, getting to codimension 3; going from 3 to 4 was known to be easy once you had codimension 3. Now, also in 2012, some new ideas came in, in the work with Aaron, and that returns to the point I mentioned earlier: if you have effective results like this — what I was calling coercivity, the statement that a monotone function being almost constant means you're close to a cone in the Gromov–Hausdorff sense — you would like to make more use of them. And that's what we pointed out — or realized, maybe I should say — in this 2012 paper: that you are in fact able to do so. In the codimension-4 case, with the two-sided bound, what we showed was that for the singular set you
actually had an estimate on the tubular neighborhood, which is stronger than a bound on the Hausdorff dimension — and so it had codimension 4. So where are we? Okay — this is probably going to continue to happen, because it happens to me in every talk. Notice that if this were a smooth codimension-4 submanifold, then the volume of a tube of radius r around it would look like this, but without the η: without the η, the exponent over here would just be 4, with a constant in front. We almost showed that, in the sense that we showed that for any η > 0 the tube satisfies such a bound, and this was the first result of this type — not just something about the Hausdorff dimension. But there was this annoying point that the constant in front of the estimate blew up as η went to zero. It seems like a small thing, but it actually turned out to require totally new ideas to get rid of it, which I'll come to at the end. The corresponding thing in the codimension-4 case is that we showed you get a bound on — suppose you're outside; maybe let me just say it in words, I forget what might be on the next slide. Here's the bound on the tube around the singular set, and what it says, in effect, is that if you're not in the tube — namely, your distance from the singular set is r — then you have a definite bound on the Riemannian curvature.
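Schematically — my rendering of the statements as described in the talk, not the slides' exact bookkeeping — the two estimates in the codimension-4 case are:

```latex
% Two-sided Ricci bound: for every eta > 0, with C(n, v, eta) -> infinity
% as eta -> 0 (the "annoying" eta-dependence),
\[
  \operatorname{Vol}\!\left(T_r\!\left(\mathcal{S} \cap B_1(p)\right)\right)
  \;\le\; C(n, v, \eta)\; r^{\,4-\eta},
\]
% and, outside the tube, a sup bound on curvature on a ball of comparable
% radius: for x with d(x, S) >= r,
\[
  \sup_{B_{r/2}(x)} |\mathrm{Rm}| \;\le\; C(n, v, \eta)\; r^{-2}.
\]
```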
Not just at the point you're looking at, but on a ball of comparable radius around that point: this is the sup on a ball outside the tube, again with this annoying η, but nonetheless it's much stronger than just having a bound at the center point, for example. So this was a kind of new take on the subject — not just Hausdorff measure bounds. The idea is that if you have this sup bound and you have some equation — and these are all equations; the same kind of story held in all of these cases — then, when you have an elliptic estimate and this sort of sup bound, you rescale to unit size and get bounds on all the higher derivatives automatically, the appropriate bounds. For example, in the harmonic map case, where the a priori bound from the definition is an assumed L² bound on the norm of the gradient, this led to an L^p bound for every p < 3 — L³ itself is actually wrong. So the idea is both of these: controlling the size of the tube, and a definite amount of regularity, in this sense, outside the tube. We call this the regularity scale, sometimes called the curvature scale — the scale at which you have such a bound, with the power −2 — and it's a crucial part of the discussion and has applications. The way I certainly came to this was in work with Bruce Kleiner and Assaf Naor, in a very different context, and there was earlier work using ideas of this kind by Dorronsoro, Jones, Semmes, and others, each in a very different context from what I'm talking about here. The point about this, in my opinion, is that the work I'm talking about here stems from the 80s, but the most elementary case of it applies just to a function of one variable with a bound on its derivative. You look at all dyadic intervals and ask yourself, for a given ε, what is the sum of the lengths of those on which the function, after magnification to unit size — as if you looked at it under a microscope — is not ε-close to being linear; the linear function is not its derivative, just the best linear approximation. One way of saying that a function is differentiable — it's like saying the earth is flat — is that if you look at it under a microscope, it starts looking like its tangent line. But if all you know is a bound on the first derivative, you might have a function with, say, high frequency but small amplitude; if the frequency is high enough and the amplitude small enough, it still has derivative bounded by 1, and this function, although it doesn't look like its tangent line until you go to an extremely small scale, does look like a linear function most of the time — and that linear function is zero, that is, a constant linear function, because of the small amplitude. So that's the idea. And this simple idea was somehow not taught to undergraduates, which I think it should be; therefore, in these more advanced contexts stemming from the 60s, the kind of results I'm talking about here were missed — it wasn't understood. I have made a survey of various very distinguished analysts, and they had never heard of what I just told you over here; so it's an interesting sociological point. Now, we started a little late and I'm not sure how much time I have — how much time do I have? 15 — great, okay. So here are the new techniques that were used in the work with Aaron that I've described up to this point — this is the technique you need to get the result, but still with the exponent minus η and the coefficient depending on η: a quantitative version of the stratification, or what I call the filtration, which I'll come to; a quantitative version of what I call cone splitting — I'll explain what that is; something called the energy decomposition; a certain covering argument, which is not so difficult; and novel ε-regularity theorems. So now I want to explain what these terms refer to. So this was in
the work with Aaron from 2012. We want to take this filtration and make a more detailed analysis with a more effective definition — some people believed that all we did was make a definition, but I beg to differ. So let's introduce some parameters into the S^k; we call it S^k_{ε,r}, and a point is in S^k_{ε,r} if, looking at the interval of scales between r and 1, no ball on its own scale is ε-close to splitting off — the ball may be k-symmetric, but what I really want is that it's not (k+1)-symmetric up to an error of ε on its own scale; the r means down to scale r. So that's the quantitative version: all balls down to radius r have to fail to be ε-close, on their own scale, to splitting off an R^{k+1} factor isometrically. Then it's easy to see that the intersection of all of these, over r, is just the set of points where the tangent cones are ε away, say on a ball of size 1, from splitting off R^{k+1}; in fact, the S_ε that appeared on the very first transparency is this S^k_ε with k = n − 2. And then — well, on this one, I think there shouldn't be — yes, this is a misprint, there shouldn't be an ε there: the whole singular set is the union of these, and the same for the k-th element of the filtration. These things are very easy to check, but nonetheless fundamental: they allow you to introduce these quantitative ideas. So now another ingredient was cone splitting, and by that I mean the following. A version of this actually occurred in the earlier work, say on harmonic maps — you can find it, for example, in some nice notes by Leon Simon from those summer schools in Utah, with a very nice one-line explanation in the context of functions. I'm going to explain it geometrically. In our context, if you take a cone and cross it with, let's say, a Euclidean space, it's still a cone; moreover, if this was your original cone — say two-dimensional — then any point of the Euclidean factor, since it's an isometric product, is now a vertex: once you take the product, you can regard it as a cone with a whole R^{n−2} of vertices. The converse is also true — that's the point here: if you have a space that you can regard as a cone in two different ways, that is, with distinct vertices, then it must actually split off a line isometrically (by cone I mean a metric cone), and in the two ways you read it, the two cross-sections of your cones are isometric. I'll call that cone splitting: if I can regard a space as a cone in two distinct ways, then it must split off a line isometrically, through the two vertices; and there is an iterated version of it. So that's cone splitting. Then, as you might expect, since we're emphasizing effective versions, another ingredient is an effective version of cone splitting. It means that if the spaces are almost cones, and we almost have vertices at separated points, then the space almost splits off a line — you can just fill in what it would have to mean; it's an effective version of the previous statement. Maybe it's worth saying how you could check this — what am I talking about here? I have these points which look like vertices up to a small error, and now I'm saying I can actually find a regularized coordinate function by taking a difference. If it's going to split off a line, for example, just think of the following illustration in the plane, which splits off the x-axis — never mind about the y-axis for the moment. Take the two points, say (1, 0) and (−1, 0), and now look at the squared distance from each of these: this
one is maybe the r0 this one is r1 over there so so what we're looking at is now x squared minus one minus x squared plus one and if I divide it by four I should just get x which is the coordinate that's what this refers to up here so that's an analytic version again of what I was talking about over there so the difference of squared distance functions is illustrated over there and that's that's what's occurring over here so so this is now actually a generalization of what was on the initial transparency namely the initial transparency was the case of this but for the whole singular set because the whole singular set was really sn minus two so something similar actually this is saying something similar is true for all of these quantitative strata of the singular set so that's true more generally the thing that's special say to the n minus two or n minus four if you have the two sided bound is the regularity outside of it so and then and also the bound on the measure because of the regularity outside so this is just really a generalization basically of what was on the first transparency but without the rectifiability statement from the first transparency and with the eta in there okay so I just I won't be able to say what's involved in getting rid of the eta except maybe a word or two it's too technical but I will say a little bit more about how we arrived at this with the eta using the ideas which have been explained so far so in particular what I said early on was that there was a bound on the number of scales where it wouldn't look like a cone on everything else it looks as close to a cone as you want and we and that wasn't made full use of in the earlier work so so now here's the thing so there there's something which in the 2012 work with Aaron which we call the energy decomposition and it simply was so we fix an epsilon and then we fix a point and then we make a record of the scales such that it doesn't look on that scale within an epsilon error on its own 
So at every point there are some bad scales, but only a definite number of them. Which scales are the bad ones depends on the point, so I can't say, uniformly from point to point, which scales are bad; only that, having fixed epsilon, there are at most a definite number of them. What we did was to group together, if we wish, all of the points that have the same good and bad scales. Now if you think about it: no matter how far down I go, so I'm taking some finite radius r from the previous transparency, which could be extremely small, the number of bad scales, having fixed epsilon, is bounded. The fact that it's bounded means that these collections of points with the same good and bad scales are not as bad as you would think, because the number of collections can only grow, so to say, polynomially. It would be quite different if I were grouping points together without an a priori bound on the number of bad scales; then there would be many more groups. But here they grow only polynomially: that's the N^Q, the number of groups, because I have N scales but only a definite number Q of them, independent of N, are bad, so there are only roughly N^Q possibilities for the groups. That's a crucial point.

Now, why do I want to group them together? Because of the following. When I'm on a good scale and I look at all those points, if they're separated but fairly close together, and they look very conical, then that means the geometry almost splits off a line. If I didn't group them, I would be considering both points where it looked good and other points where it looked bad. That's why, at least on the good scales, which is almost every scale, I get this kind of line splitting, or Euclidean splitting, and I want to use that in a covering argument.
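The polynomial count of groups can be checked numerically. This is a minimal sketch, not part of the original argument: the function name and the choice Q = 3 are mine. With at most Q bad scales out of N, the number of possible good/bad patterns is a sum of binomial coefficients, which grows roughly like N^Q, in contrast to the 2^N patterns one would face without the a priori bound:

```python
from math import comb

def num_groups(N, Q):
    """Number of possible good/bad-scale patterns among N scales
    when at most Q of the scales are allowed to be bad."""
    return sum(comb(N, j) for j in range(Q + 1))

# Without an a priori bound, any subset of scales could be the bad set,
# giving 2**N groups.  With at most Q bad scales the count is polynomial
# in N (roughly N**Q for fixed Q), which is what makes it affordable to
# sum the covering estimates over all groups.
for N in (10, 20, 40):
    print(N, num_groups(N, 3), N**3, 2**N)
```

For N = 40 and Q = 3 this gives 10,701 groups, versus 2^40, which is about 10^12, without the bound.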
There will be finitely many bad scales, and on those I'll be able to do a very cheap kind of covering, so as to get the estimate on the volume of the tube. Here it says: on the good scales, if we're talking about S^k, the set we're trying to cover will lie very near to a k-dimensional Euclidean factor, so I need only roughly 2^k balls of half the radius to cover it and go down to the next scale. On a bad scale it's not close to a k-dimensional thing, but I can cover it with 2^n balls, n being the dimension, and there are only finitely many bad scales, so that just puts a constant in front of the whole estimate. Also, because there are so few groups, thanks to the a priori bound, I can fix my attention on one group and then just add the estimates to deal with the whole thing; it's true there are many groups, but their contribution is much smaller and gets absorbed into the eta.

So that's the idea, and this is just saying the same thing again in words: in the covering argument used to estimate the size of the quantitative stratification, or filtration, I cover, then recover with balls of half the size, and so on. If you think about it, I'm covering the set by a generalized Cantor set; you can think of it that way. When I go from one scale to the next, which is almost every time, having fixed a particular collection of good and bad scales, going down a scale I need only 2^l balls of half the radius, where l is at most k, the worst case being k. Whereas on the bad scales I just recover the whole ball, which costs a factor of 2^n, but there are only a bounded number of those, so it just goes into the constant. That's how the proof goes. And then there's a kind of soft epsilon-regularity theorem which tells you that outside of the tube, at the top stratum, things are really regular.
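Here is a toy model of that covering count, to illustrate why the bad scales only contribute a constant. This is my own schematic bookkeeping, not the actual proof: each good scale multiplies the number of balls by 2^k, each bad scale by 2^n, and the tube volume at radius r = 2^{-N} is modeled as (number of balls) times r^n:

```python
def ball_count(N, k, n, bad_scales):
    """Toy covering count: number of balls of radius 2**-N after N
    halvings, paying 2**n on bad scales and 2**k on good scales."""
    count = 1
    for scale in range(N):
        count *= 2**n if scale in bad_scales else 2**k
    return count

# With B bad scales the count is 2**(k*(N-B)) * 2**(n*B), so the modeled
# tube volume count * r**n equals 2**((n-k)*B) * r**(n-k): the expected
# codimension-(n-k) behavior, with a constant depending only on B, not N.
n, k, B = 4, 2, 3
for N in (10, 20):
    count = ball_count(N, k, n, set(range(B)))  # say the first B scales are bad
    r = 2.0 ** -N
    print(N, count * r**n / r**(n - k))         # constant in N: 2**((n-k)*B)
```

In this toy run the ratio is 2^{(n-k)B} = 64 at every depth N, mirroring the statement that finitely many bad scales only change the constant in front of the r^{n-k} estimate.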
In words, this says the following. Take the two-sided Ricci bound case, where the singular set has codimension four, so a singular point is allowed to split off R^{n-4}; but if we're at a point where, on a ball, the space is sufficiently close to splitting off R^{n-3}, then it must be smooth there. That's the relevant epsilon-regularity theorem. Let's imagine the metric is Einstein, but it's really the same, with C^{1,alpha} regularity, if Ricci is just bounded. It's actually a relatively soft epsilon-regularity theorem: if the tangent cone has enough symmetry, then the space must actually be smooth. This is where you have an equation, which corresponds, in words, to a two-sided bound, and the argument is a contradiction-compactness argument, where the compactness is in a weak topology and the equation, meaning the formula for Ricci in harmonic coordinates in this case, allows you to get back over the weak convergence, as in the blow-up arguments introduced by Mike Anderson.

Now, do I have a minute or two, or am I done? I know I'm over time, but I started late; I kept careful track of this. In the last minute or two I just want to say where this eta could come from, what was lost in this argument. Mainly, when you do the recovering, maybe the balls stick out a little bit, and something is lost there: you can't quite cover a ball by balls of half the size. You could probably get away with that by being more careful about the domains, using not balls but what are sometimes called Christ cubes or something like that. But the energy decomposition, which was so pleasing when it was discovered, is itself actually a source of the error. At worst you would still have a logarithmic error from it, say some power of log r in front of the r^4. The reason is that you did have quite a few of these groups, the ones I was denoting by script C; it's just that they weren't growing fast enough, so they could be absorbed into the eta. So you really need a different idea to get rid of them, and to get rid of the eta.

You can see this at a completely formal level. As I said before, imagine you have a series of positive terms whose sum is some definite constant C. The first observation, corresponding to the non-effective statements about tangent cones, is that this implies the terms a_n go to zero. The second observation is that, by Markov's inequality, there can be only a definite number of terms of at least a given size, since you know C: the number of terms of size at least x is at most C times x inverse. But that statement is still weaker than what you started with, namely that the sum of all the terms is C. To get rid of the eta, you have to somehow take advantage, in our context, of the full fact that the sum of the terms is C, and not just of the Markov-type consequence, which leaves you with the eta. This is possible, but certainly not easy.

So there is a paper by Naber and Daniele Valtorta which introduced some new tools: first, to deal with the rectifiability, which of course we also didn't know, a Reifenberg-type rectifiability theorem with much weaker hypotheses than previous ones, together with a very nontrivial covering argument; and also the concept of what's called a neck decomposition, which I won't have time to say anything about. Initially I thought I might, but then I decided it was hopeless. They proved this in the context of harmonic maps and minimal varifolds: they proved rectifiability and got rid of the eta, which was still left over from our previous work. And then there is a really remarkable paper in the Riemannian geometry context, but assuming a two-sided bound, by Naber and Wenshuai Jiang, where they proved again the rectifiability, this time in the setting of the two-sided bound.
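The formal point about the series can be written out; the second observation is the standard Markov (Chebyshev-type) estimate:

```latex
% Suppose a_1, a_2, \dots > 0 with \sum_i a_i = C.  Then:
% (1) a_n \to 0 as n \to \infty   (the non-effective statement), and
% (2) for every x > 0 the number of large terms is bounded:
\[
  \#\{\, i : a_i \ge x \,\} \;\le\; C\,x^{-1},
\]
% since x \cdot \#\{ i : a_i \ge x \} \le \sum_{a_i \ge x} a_i \le C.
% Both (1) and (2) are strictly weaker than the hypothesis
% \sum_i a_i = C itself; removing the eta requires exploiting
% the full sum, not just these consequences.
```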
Moreover, they proved this tube-type bound, which is much stronger: r^4, with no eta. And, completely remarkably, an a priori L^2 bound on the curvature. We had done this in the work with Aaron in dimension four, where the singular set consists of points, and we had done it in general, but with the eta. One reason this is particularly remarkable is that in the other contexts, harmonic maps and so on, it is not true: there what you get is weak L^4, where the earlier result, from the work of Aaron and myself, was L^p for every p less than four, and for harmonic maps and so on it's weak L^3. But here the curvature is actually in L^2, which is special to the Riemannian case. So this is a very fundamental, remarkable result, in my opinion.

The last thing I want to say is that what I was talking about today is the case where you have just a lower bound, and this requires completely new estimates beyond the two-sided bound case, one reason being that they use, in an essential way, something called a superconvexity estimate, which does not hold with just a lower bound. The neck decompositions certainly play a crucial role; I didn't get to say what they are. But it requires new estimates, and something in place of the Reifenberg argument: a more canonical version having to do with harmonic splitting functions and their behavior on neck regions, and in particular a sharpening of something that played a key role in the 2015 paper with Aaron, called the transformation theorem. Okay, I'll stop here. Any questions?

Question: Is there any chance that your main theorem characterizes non-collapsed Gromov-Hausdorff limits among metric measure spaces with a lower Ricci curvature bound?

I'm trying to make sense of what you're asking; maybe you could ask it a little more precisely, that would help me, or maybe I'm just not thinking clearly. It characterizes them among what? Among all metric measure spaces? No, I don't think that's right. There is a synthetic theory of Ricci bounded below, which includes as an assumption the fact that the space is infinitesimally Euclidean, as opposed to Finsler, but I think it's a strictly stronger condition. There may be another relevant thing one could say in this context: after almost 20 years, 17 or something, the synthetic theory is finally starting to prove structural results, as opposed to just more and more inequalities, which is something I advocated for quite a while. So without answering your question precisely, I suspect that if I understood it, the answer would probably be no; but anyway, something is coming closer to what you asked, I feel. Any other questions?

Question: Can you improve your results if you replace the Ricci bounds by bounds on the sectional curvature?

Yeah, I think then, well, let's see. With a two-sided bound, of course, there's the whole theory of bounded curvature, and you have compactness where the limits are smooth if they're non-collapsed, and there's collapsing theory. With a lower bound, certain things are true, like tangent cones are unique, so you can improve things in certain ways; but in certain ways it kind of resembles the Ricci case. I mean, definitely there are some stronger statements, and they're not my results; there's the theory of Alexandrov spaces, and maybe Karsten is the person to ask. But it's rather similar in a way. One crucial point, of course, is that tangent cones are unique, and you have strainers, and there's a whole discussion there. Any other questions? Thanks to the speaker again.