such that, one, the center points all happen to live inside this T. So automatically from these other conditions, there's a submanifold that contains these things. In some sense we kind of knew that already, if I'm not very precise about how nice a submanifold this is, right? This is the Reifenberg condition; we kind of got that before. What's that? K-dimensional, absolutely. Two, T is, and this will answer that question: one plus some constant here, depending on the non-collapsing, bilipschitz. So it's no longer bi-Hölder but bilipschitz to a ball in R^k. This says it's not just a Reifenberg submanifold; it has at least a bilipschitz structure to it. And three, the measure of our neck region is less than some constant, again depending on the non-collapsing, times delta. So exactly what the stupid example illustrates actually has to be true in general. Let me make a point, too. It's written up there, so I'll just point it out. Note, you should take this as an exercise, right? Those first three are exactly what I just wrote. There's no 10 there. Sorry, I changed the definition of neck region this morning, so there's no 10, just B_r. In particular, if we look at these center points and their radii, then we have the following sort of packing estimate on them. That is, the k-dimensional Hausdorff measure of the set of points with zero radii is bounded, and so is the sum of r_x^k. That's to say, if you were treating this like a Hausdorff content, the Hausdorff content of this set of balls is bounded. Why is that automatic from the other conditions? This is an exercise that's useful because, in essence, when you're trying to prove either packing estimates or bounds on things, it's always like this. So you have your submanifold T, which is bilipschitz to a ball of radius 1 in R^k. Imagine this set of center points, C. So what does it satisfy?
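In symbols, here is my reconstruction of the three conclusions plus the packing estimate (the notation is mine, a schematic summary rather than a verbatim statement):

```latex
% Schematic neck structure theorem for a (k,\delta,v)-neck region
% N \subset B_1(p) with center points C and radii \{r_x\}:
\begin{align*}
  &(1)\quad C \subset T, \\
  &(2)\quad T \ \text{is } (1+\epsilon(v))\text{-bilipschitz to a ball in } \mathbb{R}^k, \\
  &(3)\quad \mu(N) \;\le\; C(v)\,\delta.
\end{align*}
% Consequent packing estimate (the exercise), with
% C_0 = \{x \in C : r_x = 0\} and C_+ = \{x \in C : r_x > 0\}:
\[
  \mathcal{H}^k(C_0) \;+\; \sum_{x \in C_+} r_x^k \;\le\; C(v).
\]
```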
It satisfies that if we look at the balls of radius r_x, let's just say they're disjoint. Maybe you drop by tau squared, but whatever, they're disjoint. Now, if this were a collection of balls in the ball of radius 1 in R^k, then that condition would be automatic by a volume estimate; so that's your exercise. A volume estimate says it has to be true. But this isn't in R^k; they're scattered around all over the place. Well, now that we have our bilipschitz map, we get to turn that collection into a collection of balls in the ball of radius 1 in R^k and then apply the result. So it's a nice exercise. I'll say this in words. If you look in the notes, this is all there. Basically everything I'm saying in my lecture, in words, is in the notes. Right now, I mean, they're just notes, and half of it has come from the fact that I was expecting to say something here, and I figured I'd jot it down and clean it up later. So most of what I'll say is there, if you kind of half get what I'm saying and want to check it. So a neat point here is that instead of looking at that sum there, you can associate to this region a measure, the so-called packing measure. This is a very useful construction. It's the k-dimensional Hausdorff measure on C_0, plus the sum over C_+ of r_x^k times the Dirac delta at x. This is the so-called packing measure. In the same way that this C here is discretely approximating a submanifold, that measure is discretely approximating the k-dimensional Hausdorff measure on that submanifold. And one can prove from all this that it's actually an Ahlfors regular measure, and this is a useful point in applications. Upper and lower? Upper and lower, that's right. OK, great. And actually it also follows from this that there's a packing estimate on this T, but I'll skip that. OK, wonderful. We have done two of three painful things today.
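A minimal numerical sketch of the exercise, in my own notation (nothing here is from the lecture itself): if the balls B_{r_x}(x) are pairwise disjoint with centers in the unit ball of R^k and radii below 1, they all sit inside B(0, 2), so comparing total volumes forces the packing bound sum_x r_x^k <= 2^k.

```python
import math
import random

def pack_disjoint_balls(k, n_tries, rng):
    """Greedily collect pairwise-disjoint balls B(x, r_x) whose centers lie
    in the unit ball of R^k, mimicking the disjoint collection of center
    balls in the exercise."""
    centers, radii = [], []
    for _ in range(n_tries):
        x = [rng.uniform(-1, 1) for _ in range(k)]
        if math.dist(x, [0.0] * k) > 1:
            continue  # center must lie in the unit ball
        r = rng.uniform(0.01, 0.3)
        # keep the ball only if it is disjoint from every previous one
        if all(math.dist(x, c) > r + rc for c, rc in zip(centers, radii)):
            centers.append(x)
            radii.append(r)
    return centers, radii

# Volume comparison: the disjoint balls all sit inside B(0, 2), so
#   sum_x omega_k r_x^k = sum_x vol(B_{r_x}(x)) <= vol(B_2) = omega_k 2^k,
# hence sum_x r_x^k <= 2^k.  That is the "automatic" packing bound.
rng = random.Random(0)
for k in (1, 2, 3):
    _, radii = pack_disjoint_balls(k, 2000, rng)
    assert sum(r ** k for r in radii) <= 2 ** k
```

The bilipschitz map is exactly what lets you reduce the general case to this Euclidean volume comparison, at the cost of a constant.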
We've introduced neck regions, which is somehow the worst. We've said — so think of that picture there, which is not so bad — that if you happen to have a neck region, if by some miracle you're handed a random measure and it just so happens some ball has a neck region structure on it, then it's pretty well behaved. You can at least break it into these two pieces: the neck piece itself, which has a volume bound, and a submanifold, which has bilipschitz control. So now the last question that's remaining, if you have any hope of using this to prove a more general theorem, is: do these things exist? Because if these things don't exist, and I mean often, then so what? So what we're going to do next is talk about the neck decomposition. And I'm going to do that straight up here, because it's long. For all of these things, it's the type of thing where, if you write down the definition, and you think about some basic examples, and you ask yourself what all the properties of these examples are, and you list them, and you start asking which ones hold in general, then all the ones that answer yes are what appear here. I mean, that's basically what happens if you give yourself about three hours of thinking. So here's the setup. We have a measure. I'm going to assume here that there is a pointwise bound on this Dini sum of the beta-numbers. But actually, we can do better, and you have to if you actually want to handle more useful examples — actually, the examples I'll show you. I'm going to show you examples of this decomposition as well, and the examples won't all satisfy that, because that rules out most interesting things. You can replace this with a Carleson estimate. So you can instead assume the following: for every ball of radius r inside the ball of radius 1, an integral estimate holds instead. But it's got to be for every ball now.
So instead of assuming a pointwise bound, you can assume this integral bound holds in every ball. That may mean nothing to you — what's the difference? In the examples, you'll see the difference. I mean, the interesting examples actually satisfy this but not that. But I'll just mention it for now. So we have a measure, we have control over beta-numbers, assume pointwise, whatever. Then fix some constants in your background: fix a non-collapsing constant v, and fix a sort of Reifenberg constant delta. Then, as long as delta is sufficiently small, the claim is that your original ball of radius 1, which is where your measure lives, can be covered in this nasty way. And what it's being covered by is the following: three pieces, S^-, S^k, and S^+. In the spirit of all the examples I've been giving, the S^k is going to be like where it's behaving k-dimensionally, the S^+ is going to be like where it's behaving bigger than k-dimensionally, and the S^- is going to be like where it's behaving less than k-dimensionally. That's how you want to think. And S^+ will involve two sorts of pieces: a whole bunch of neck regions, and a whole bunch of other balls, which I'll tell you about exactly — they're actually easier than neck regions in some sense. And S^k, the k-dimensional piece: we got from the neck structure theorem that whenever you have a neck region, the C_0 part of it is k-rectifiable. So our k-dimensional piece is only the union of the k-rectifiable pieces that come from the neck regions. That's it; there's nothing else in our rectifiable piece, right? So it's actually quite simple from this point of view. And the conditions that hold are the following. As the name N would suggest, N_a is a neck region, a (k, delta, v)-neck region, and in particular the measure of this neck region N_a is bounded by delta times r_a^k.
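Schematically, my reconstruction of the decomposition statement looks like this (again, the notation is mine and the precise constants are hedged):

```latex
% Neck decomposition, schematic form:
\[
  B_1 \;\subset\; S^{-} \,\cup\, S^{k} \,\cup\, S^{+},
  \qquad
  S^{+} \;=\; \bigcup_a \mathcal{N}_a \,\cup\, \bigcup_b B_{r_b}(x_b),
  \qquad
  S^{k} \;=\; \bigcup_a C_{0,a},
\]
% where each \mathcal{N}_a \subset B_{r_a}(x_a) is a (k,\delta,v)-neck region
% with the scale-invariant mass bound \mu(\mathcal{N}_a) \le \delta\, r_a^k,
% each C_{0,a} is the k-rectifiable zero-radius center set of \mathcal{N}_a,
% and each b-ball carries small mass.  The content estimate
%   \sum_a r_a^k + \sum_b r_b^k \le C(n,\delta,v)
% says there are not "too many" of either kind of ball.
```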
So again, that's the scale-invariant thing: take that ball, rescale it to the ball of radius one, get the bound, rescale it back down — what do you get? You get that. And the C_0's are all k-rectifiable. Theorem 7 here means the neck structure theorem; I copied and pasted this from the notes. These b-balls, the other part of S^+, well, they're very simple: they just have measure bounds. So on each one of these balls, the mass of our measure mu isn't very big — it's at most this v we picked, which could be very small if you want; as small as you want, in fact. So note, both pieces in S^+ have measure bounds: the neck regions have measure bounds, the balls of radius r_b have measure bounds, they all have measure bounds. And what's the important part? This would be totally useless in practice if there were too many of these balls. So, on the one hand, we have to know there are at least some of these balls for this decomposition to be useful. On the other hand, we don't want there to be too many of these balls, because if there are, we're not really proving a mass bound on anything — it sums up to too much. So the second condition is actually that we have the content estimate that says we don't have too many of these balls. If you think for a minute about what this means — in fact, let's just do it, because it's actually a nice thing, and I just want to give a feel for what's happening. So remember, in our rectifiable theorem we're trying to decompose into a mu^+ plus a mu^k, and mu^+ is supposed to have bounded measure. Let's just define it to be mu restricted to the set S^+. Then, in this context anyway, why are we done? Why do we have a measure bound? It's just a combination of one, two, and three. So the mass of mu^+ on the entire ball is what?
It's less than or equal to the sum of the masses of the neck regions, plus the sum of the masses of our volume-bounded balls. Plus — is there anything else? There's nothing else. And what's this bounded by? These are both bounded by, well, some constant, whether it's v or delta, so I'll just say C(n, delta, v), times the sum of r_a^k plus the sum of r_b^k. And now that's bounded. So we get our total volume bound for that set in like four lines. Let me point this out for you PDE people out there. If you're talking about nonlinear harmonic maps or Einstein manifolds — say, for instance, L^2 curvature bounds for Einstein manifolds — the first way these sorts of effective estimates were proved is exactly like this. You do exactly this decomposition, you prove that on each of these neck regions and these balls you have a scale-invariant bound on whatever you're trying to control — the energy of your function, or the L^2 norm of the curvature, or whatever — and you prove there aren't too many balls and just sum the things up, right? It looks exactly like this, like verbatim. The proof of the bounds is quite different, but once you're at this point, it's verbatim. Here's the last one, probably, the way I'm writing it. Yeah, gamma. So keep in mind, as I said at the beginning, let gamma be one: rescale so gamma can be one, then rescale back. That's actually how I'm going to do it. I probably ought to keep track of it and keep the gamma there, but that is too much effort, so I just do a rescaling. Absolutely. So conditions one, two, and three — this is basically what controls S^+. I think this also controls S^k. Is there something left? Ah, yeah, one more piece, right? Sorry, I meant to have a break there. Notice, to finish the rectifiable Reifenberg, what do you have to do? We had to take our mu, split it into two pieces, and they're supposed to satisfy two conditions. This guy was supposed to have the mass bound, which we just proved.
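The four-line bound just described can be written out (my transcription of the chain of inequalities, with the constant lumped together):

```latex
% Mass bound for \mu_+ := \mu|_{S^+}:
\begin{align*}
  \mu_+(B_1)
  &\le \sum_a \mu(\mathcal{N}_a) \;+\; \sum_b \mu\big(B_{r_b}(x_b)\big) \\
  &\le C(n,\delta,v)\Big( \sum_a r_a^k \;+\; \sum_b r_b^k \Big) \\
  &\le C(n,\delta,v),
\end{align*}
% using the scale-invariant mass bounds on each piece in the second line
% and the content estimate on the two ball families in the last line.
```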
This guy was supposed to be k-rectifiable, where we'll have, let's just say, a k-dimensional Hausdorff measure bound, but even more. And that follows immediately as well, because, well, if we just look at the decomposition, what's left? What's left is to restrict mu to the S^k and S^- pieces. So we're going to define mu^k, by definition, as just being mu restricted to S^k and S^-. And what do we have? Well, S^- has k-dimensional Hausdorff measure zero. So this piece doesn't hurt the Hausdorff measure bounds at all, and it doesn't hurt anything if we're looking at the support of this guy. So the support is this and this, right? That's all we're trying to control about this guy, and it doesn't stop it being rectifiable. On the other hand, S^k is now just some countable union of k-rectifiable things, and therefore it's k-rectifiable. All that's left is to get the Hausdorff measure bound, and it's exactly the same as this: if we want H^k of the support of mu^k, then we end up getting the following bound. Well, we don't have to worry about the S^- piece — we do if we want the packing estimates, but let's ignore that. So it's the sum over a of H^k of C_{0,a}. And recall what our bound on this was — did I actually write it on this board? I did not. So recall, from the neck structure theorem, this was uniformly bounded, which means uniformly bounded by r_a^k. And again, by condition three, this is bounded. And then that gives us our decomposition. So, I mean, this is much more general than the rectifiable Reifenberg. Basically, the rectifiable Reifenberg I wrote down so I could give an understandable version. Actually, in most applications this is what you care about, but it takes a while to absorb this one; the other one's a little bit easier to wade into.
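The Hausdorff bound just sketched can be recorded the same way (again my transcription, same schematic notation as before):

```latex
% Bound on \mu^k := \mu|_{S^k \cup S^-}:  S^- has \mathcal{H}^k-measure zero,
% and S^k = \bigcup_a C_{0,a} is a countable union of k-rectifiable sets,
% hence k-rectifiable.  For the measure bound,
\[
  \mathcal{H}^k\big(\operatorname{supp}\mu^k \setminus S^-\big)
  \;\le\; \sum_a \mathcal{H}^k(C_{0,a})
  \;\le\; C \sum_a r_a^k
  \;\le\; C(n,\delta,v),
\]
% using the packing estimate from the neck structure theorem on each
% C_{0,a}, then the content estimate from condition three.
```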
I think we made the mistake, in that paper with Nick Edelen and Daniele Valtorta, of stating it more like this originally — and who the heck's going to get anything from that? Okay. So I'm going to keep this up because, in my last couple of minutes, I just want to give two examples of neck decompositions, so we can see how this works in a not completely trivial situation. I'll erase this. Okay, so let's do two examples — easy ones, reasonably speaking. Let's just look at R^2, why not, and let's look at the following. Here's our ball of radius one, and let me look at two lines inside here: here's some L and here's some L'. For simplicity's sake, I'm going to sort of mimic that example over there: I'm not going to let there be any other fluff floating around, but you can easily add the other fluff exactly the way we always do. We're going to let mu just be v times the Hausdorff measure on these two lines. So the mass of a ball is simply what? I take a set, restrict to these two lines, and then simply integrate what their one-dimensional measure is on these lines. That's what our measure is. Now, case in point, by the way: already this sort of Carleson estimate has to be used to apply this, not the pointwise bound. Why? Because what happens if I'm here, at the intersection, and I look at a ball of any radius? How close is the support of this guy to being contained inside a single affine line? It's some definite distance away, whatever it is, and it's the same definite distance away on every single ball. So if I go do that integral at this point, which is like summing over every scale, I clearly get infinity. So if I only had the pointwise bound, I couldn't even look at this example. But we can do the weaker estimate, which is certainly true: basically it blows up like a log here, so it's definitely going to be integrable, in a perfectly fine L^1 sort of sense, and scale-invariantly it will be bounded in every ball.
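In symbols, the point about the crossing can be recorded like this (my reconstruction; the exact normalization of the beta-numbers is hedged):

```latex
% At the crossing point p = L \cap L', the support is two transverse lines,
% so at every scale the best affine line misses it by a definite amount:
\[
  \beta(p, s) \;\ge\; c_0 > 0 \quad \text{for all } 0 < s < 1,
  \qquad\Longrightarrow\qquad
  \int_0^1 \beta(p,s)^2 \,\frac{ds}{s}
  \;\ge\; c_0^2 \int_0^1 \frac{ds}{s} \;=\; \infty.
\]
% For a point x at distance d from p, the Dini sum instead behaves like
% \log(1/d), which is integrable against \mathcal{H}^1; that is why the
% Carleson (ball-averaged) estimate holds even though the pointwise
% bound fails at p.
```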
So now what does our decomposition look like here? Actually, it's not so bad. First off, what do the neck regions look like? Well, let's make the following observation. Take a ball of some radius here, away from the intersection. Note that the measure on this ball looks exactly like that example over there: you've now ignored everything else, there's no other line out here, everything's exactly like this. So this here has a neck region structure exactly like that — let C_0 just be all of this piece here. So here's our N_1, something like that. Now halve the radius of that ball and do the same thing: here's our N_2. Halve the radius again: there's our N_3. And so forth and so on. You keep halving the size of the balls, so they keep missing all the other nonsense going on. And you do the exact same thing over here, and so forth and so on, over here and over here, to get all your neck balls. So that's it. Your neck balls in this case are simply — basically it's a covering where, every time you go down a scale, you drop the size of the radius by a scale. And now you ask: why does the content bound on this actually hold? What's the one-dimensional content of this? Basically a half, a fourth, an eighth, a sixteenth, and so forth. So the one-dimensional content is summable, and you get a bound. There's not a finite number of balls, but you get a bound on the one-dimensional content of the neck-region balls. And for the b-balls with bounded masses, it's exactly the same thing; I just do it over here instead. So over here I can call this an r_b and an x_b. The mass on this ball is zero — well, that certainly satisfies our condition. It would still satisfy our condition if we added some two-dimensional Hausdorff measure piece over here, for that matter, and you play the exact same game: halve, halve, halve, halve.
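The content bound for this halving construction can be checked exactly. This little sketch of mine (using exact rational arithmetic, so there is no rounding to worry about) just confirms that radii 1/2, 1/4, 1/8, ... give infinitely many balls whose total one-dimensional content stays below 1.

```python
from fractions import Fraction

# Halving the ball at each scale gives neck-ball radii 1/2, 1/4, 1/8, ...
# Infinitely many balls, but the one-dimensional content is a geometric
# series:  sum_j (1/2)^j = 1,  so every partial sum stays below 1.
radii = [Fraction(1, 2) ** j for j in range(1, 50)]
content = sum(radii)
assert content < 1                          # bounded, uniformly in the depth
assert 1 - content == Fraction(1, 2) ** 49  # exact geometric-series remainder
```

So the number of neck balls is infinite, but the quantity the content estimate actually controls, the sum of r_a^k, is finite; that is all the decomposition needs.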
Do it some more over here so you can fill it in. That's it, right? So this is what the easiest nontrivial example kind of looks like in this context. If I wanted to just be annoying, I could have easily added an H^2 piece here, plus a huge number, maybe alpha_0 as big as I want, times the Dirac delta at the origin, for instance, and this would still be a decomposition for that measure which would still satisfy the same conditions. Where does the origin go in this decomposition? It could be C_0? Exactly, that's a great point. What he's saying is I missed a point, and the way I should actually get this point in all of this is I should have a C_0 here which is a single point. So what we've basically done is take a measure which doesn't look one-dimensional, but we know it kind of looks one-dimensional on all these pieces, and we know how to bound it by regions where it looks one-dimensional, so we get to treat it like it's one-dimensional, and the number of such pieces is one-dimensionally bounded. All right, that's really what all this says, in words. One more example, and then you're free to even regret having come. By the way, tomorrow's lecture won't be this bad. It'll be technical, but it won't be newly technical, right? You won't have to absorb a new idea in the process, so it won't be quite so mysterious looking. So let me do something similar here, but let my L and L' be parallel this time. Stop that. Okay, all right, L' and L. I'm going to play the same game: I'm going to let mu just be the sum of those two. I've got two minutes, so I'm just going to say, here's our mu. And let's let the distance between these two guys be some r, and say r is super, super small. I'm drawing it as not so small, but let's say it's super, super small. So how should one actually do the decomposition in this case?
The way you should do it is this: if r is really, really small and you're on the ball of radius one, staring at this example, this L and L' look the same. You can't tell the difference between them; it looks like a single affine line that everything is stuck nearby, not two of them. You can't see the two at this scale. So what you're going to do is let your first neck region basically just cover both of these guys — here will be our first neck region, everything away from these guys here. There are too many lines floating around here; let me draw it like a tubular neighborhood instead — you're in a ball, so it's not so ugly. Then I'm saying, okay, I throw that out. So our first neck region is everything away from that. And you just keep going down, going down, going down, until you suddenly get close enough that you notice: hey, wait a minute, this wasn't one line, it's two lines. And then what you're going to do is treat these guys — that's the neck region — just like over there: cover it, do the exact same thing for this neck region, and fill in the rest with some balls with bounded mass. And now these are your last neck regions that all appear. So in this example, this is what it looks like: one neck region which becomes sort of a union of two neck regions as you go down. Okay, I'm done, thank you.