Today, we enter the meat a little bit more. So I've only actually got four slides here. But the point is going to be the following. I'll remind you what the rectifiable Reifenberg theorem was, and chat about it for just a minute. And then we're going to start going through some of the main ingredients necessary in the proof. And so this is going to involve introducing you to so-called neck regions and their structure and decomposition theory. And this can be a tad dense; it takes a few minutes to absorb it all. So the two things I want to point out in this regard are, first, that after I give definitions or theorems, we'll leave them up there while we talk and do examples, so we can keep glancing at them. And secondly, just as motivation: this notion of a neck structure and a neck region is not the one that was in the original proof of the rectifiable Reifenberg. I'm putting it here to streamline it with other proofs that are out there now. It is the notion that is used when you're studying singular sets and their stratification theory. And more importantly, where it really came from wasn't in studying the rectifiability of sets; it was in asking the next questions. Once you start to ask more refined questions, so for instance in the proof of the L2 curvature conjecture and the codimension four conjecture for Einstein manifolds, or the energy identity for Yang-Mills, these sorts of regions have to be there. It just turned out, once they were there, that they were actually the better way of saying the other things that had been said anyway. So that being said, these things appear quite a bit, so it's worth absorbing a bit. Okay, so let's just start with a bit of a reminder of what we're talking about. So what we want to prove here is the rectifiable Reifenberg theorem, and the statement was the following.
We have some non-negative measure supported on the ball of radius one, and recall we had these nice Jones beta numbers that measure, in some integral sense, how far away the support of the measure is from being an affine plane on any given ball. And if we have this integral, the Dini sum, being bounded, then what we concluded were the following two points. Well, we could decompose our measure. I mean, what we want are measure bounds and structure theory, or rectifiable structure, and you can't get that for the whole measure, but you can split the measure up into two pieces, one of which will have measure bounds, and the other of which will be rectifiable. And the rectifiability here is very strong, in the sense that it'll have finite measure, and actually even finite packing content and Minkowski content, and all these other niceties that we like to deal with. So recall, our examples from before said that the measure didn't have to have finite total mass, but when it didn't, the point is it has to become supported on something that's rectifiable; and conversely, it doesn't have to be rectifiable, but if it's not, it's got to have bounded measure, it's got to be able to float around some. So we want to understand how to prove this a little bit. Now let me make a few more remarks, because there were some comments afterwards. People were interested in the following points about this. First off, the measure bound is precisely gamma. So if the beta number bound there is gamma squared, then the measure bound is precisely gamma, so it's a nice relationship. And interestingly enough, the Hausdorff measure bound on the rectifiable piece has nothing to do with gamma. This sounds surprising; it's not surprising. Why is it not surprising? Think about the following. This is a fully scale invariant statement.
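To have something concrete to glance at while we talk, here is one standard way the objects above are written; the exact normalizations and constants here are my paraphrase of the slide, not a verbatim copy:

```latex
% Jones beta number of mu on the ball B_r(x): scale-invariant L^2 distance
% of the measure to the best affine k-plane L at that location and scale.
\beta_\mu^k(x,r)^2 \;=\; r^{-(k+2)} \inf_{L^k \text{ affine}}
    \int_{B_r(x)} d^2(y, L)\, d\mu(y)

% Dini-type hypothesis of the rectifiable Reifenberg theorem:
\int_0^2 \beta_\mu^k(x,r)^2 \,\frac{dr}{r} \;\le\; \Gamma^2

% Conclusion (schematically): a decomposition mu = mu_+ + mu_0 with
\mu_+(B_1) \le C(n)\,\Gamma, \qquad
\mu_0 = \mu|_S, \quad S \ k\text{-rectifiable}, \quad \mathcal{H}^k(S) \le C(n)
```

Note how the claims in the lecture are visible here: the measure bound scales with Gamma, while the Hausdorff bound on the rectifiable set does not.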
So let's take our measure and simply rescale it by gamma so that the estimate in (1) is now one, right? Now you'll go through the theorem, you'll get some measure bound, whatever it is, you'll get some bound on the Hausdorff measure of the rectifiable piece, whatever it is, but now when you rescale the measure back up, what happens? The measure multiplies by gamma, but the Hausdorff measure doesn't change at all. The set's the set, right? So the point is the two don't scale the same, so this is actually the correct statement that one gets out of this, for somewhat silly reasons. And the other point that's maybe worth making here is that one can actually do this in Hilbert spaces. So the measure doesn't need to live in R^n; you can make all those constants depend only on k, so the measure can live inside a Hilbert space. And what's much more subtle is that you can actually allow it to live inside a Banach space, but life becomes harder there. If you do that, you can't have the power two on the beta number; you get a different power, which is related to the so-called modulus of smoothness of your Banach space. So just as a random piece of interesting information. Okay, fine. So we're gonna try to understand what this is. What we're gonna do today is find a much more refined way of decomposing our measure, one that's better than this. We'll talk about it for a while, we'll work lots of examples, try to at least get our heads around it a bit, and then we'll show that the so-called neck decomposition exists, and we'll see how to actually prove this theorem from that in a few lines. And then with whatever time is left tomorrow, we'll discuss a little bit about how one constructs these neck regions and proves the structure theory and so forth. So let's start by trying to understand what these neck regions are.
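The scale-invariance remark can be recorded in one line. Since the squared beta number is linear in the measure, rescaling mu rescales the Dini hypothesis and the measure bound together, while leaving the support, and hence any Hausdorff-measure bound on the rectifiable set, untouched:

```latex
\beta_{\lambda\mu}^k(x,r)^2 = \lambda\,\beta_{\mu}^k(x,r)^2,
\qquad
(\lambda\mu)(A) = \lambda\,\mu(A),
\qquad
\operatorname{supp}(\lambda\mu) = \operatorname{supp}(\mu).
```

So a bound on the measure must scale with the beta hypothesis, while the Hausdorff bound on the rectifiable piece cannot depend on it.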
So let me give you a quick definition. Let's talk about something called epsilon linear independence. This is a very easy notion. If I give you some points in Euclidean space, you'd say they're linearly independent if no one of these points is inside the span of the other points. And what we want is basically just to say this in an effective way. And if you think for ten minutes, there's only one way to say it, and what you're gonna get is the following. So, definition: I'll call this epsilon linear independence. We call a set x_i, which is k plus one points the way I'm writing it... and I'm gonna put it inside a ball of radius R because I'm gonna automatically state things in a scale invariant way. Do I need to do this? Maybe I don't need to confuse you; we're not going through proofs, so I can put it in the ball of radius one. So we have k plus one points inside the ball of radius one. And we say that they are epsilon linearly independent if what? Well, not only can none of these points be in the span of the other points, but none of these points can be within epsilon of the span of the other points. So x_{i+1}, say, is not an element of the tube of radius epsilon around the span of the... let me make sure I say this carefully, actually... the points that came before. Okay, so that way. So take three points, right? As an example, we might say that these three points are linearly independent because that one's slightly spaced off from that one. But if I want them to be very independent, and my notion of close is this tube here around the span, so here's the span of the first two points, then they wouldn't be effectively linearly independent; I'd have to move this point a little bit further away. So this makes sense for the following reason, or rather, we're going to use it in the following way. So the Reifenberg condition basically says what?
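Before answering that, the epsilon independence test just described can be sketched numerically. This is a minimal check, assuming the "span of the points that came before" is read as the affine span through the first point; the function names are mine, not from the lecture:

```python
import numpy as np

def dist_to_affine_span(p, base, directions):
    """Distance from point p to the affine subspace base + span(directions)."""
    if len(directions) == 0:
        return np.linalg.norm(p - base)
    D = np.stack(directions, axis=1)           # n x m matrix of span directions
    # Least-squares projection of (p - base) onto span(directions).
    coeffs, *_ = np.linalg.lstsq(D, p - base, rcond=None)
    return np.linalg.norm(p - base - D @ coeffs)

def eps_linearly_independent(points, eps):
    """Check the lecture's condition: each point lies at distance > eps
    from the (affine) span of the points that came before it."""
    pts = [np.asarray(p, dtype=float) for p in points]
    base = pts[0]
    dirs = []
    for p in pts[1:]:
        if dist_to_affine_span(p, base, dirs) <= eps:
            return False
        dirs.append(p - base)
    return True
```

For example, (0,0), (1,0), (0,1) are 1/2-linearly independent, while (0,0), (1,0), (1, 0.1) are not, since the third point sits within 0.1 of the line through the first two.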
If you have a bunch of points, we say they satisfy a Reifenberg condition if they're densely close to a plane on every nearby ball. A weak version of this would be: well, maybe we can't guarantee a given set is densely close to an affine plane, but we can guarantee at least that it isn't contained inside a lower dimensional plane; it at least sees that many linearly independent directions. So this is how it's going to be used in a few moments. Okay, so let's do our first painful definition, neck regions, and we'll do examples. I mean, most of this lecture is going to be examples of various things. So, definition; we'll write it up here first and then I'll stick it upstairs. Let's let mu be a measure, and I'm going to let it be on a ball of radius one so that we don't have R factors floating around, but I will apply this to balls of radius R, scale invariantly. Meaning, if you want to know what it means: take the ball of radius R, go to the ball of radius one, apply this definition, put it back, and just see where the R factors appear. So: neck region. Let mu be a measure, say on the ball of radius one, and let's consider the following objects. This is all going to look pretty mysterious and we'll have to do some examples; I may be doing an example while we're talking about it. Let C be some closed subset. This is going to end up forming a set of center points for a bunch of balls that I want to use. And I want a radius function r_x centered on these points. So this pairing here is going to be telling me: ball centered here, with this radius. The reason I'm using a function is that we may allow for sets of points where the radius is just zero, right? So I need to allow this to be a zero set as well. Such that, our first condition here being simply that the collection of balls... ignore this constant for a second, say, and imagine they're just all disjoint balls.
And in practice what I'm actually going to do is... let me introduce you to three constants here. So the three constants that are going to appear over and over again are the following. There's tau, and that's just a dimensional constant; I think one can take it explicitly to be something stupid like this. Essentially it has to satisfy some covering condition, but it doesn't matter: it's very small, it's dimensional. So what I'm saying here is that the balls centered at these points may not be disjoint, but they're basically Vitali disjoint; they're almost disjoint. You drop by some definite factor and they're disjoint, right? The second constant that's going to appear here is delta. The delta is going to represent how small the beta numbers are, so how close this measure is to being contained inside k-dimensional affine planes. Whenever a delta appears in this talk, that's what it's representing, and whenever you see a tau, this is what it's representing; the same constants represent the same things. And nu is gonna be a non-collapsing scale. You'll see explicitly in two seconds what this means; it basically means that I'm going to insist that some of these balls have a definite amount of mu volume, and nu is how much mu volume I'm going to insist they have. So we call the following set N. It's the ball of radius one minus all those guys. It's an open set, right? By definition, this is the ball of radius one minus the union of all these closed balls here. So if I throw out all these closed balls, what I have left is some open set N. I'm gonna call it a neck region if the following happens. So let me actually draw you a picture first. I'm gonna give you an example that will be a neck region. You won't yet know why it's a neck region, but we'll check that it satisfies the conditions of a neck region, so you get a picture of what this is supposed to look like. Here's our ball of radius one.
Here's a bunch of center points, x in C. Here are the radii, r_x, right? And you can sort of see they're all, roughly speaking, living on top of a k-dimensional plane. In fact, in this example, they're exactly on some k-dimensional affine plane. The only condition I'm gonna put on these balls is that condition there, to satisfy the disjointness. And the measure I'm gonna stick here is gonna be a really dumb one. It's gonna be nu, the non-collapsing constant, times the k-dimensional Hausdorff measure on L, maybe restricted to the ball of radius one, because I'm insisting on this, right? So our measure is just integration over this guy. Our collection of center points is just a bunch of points on this guy, with a bunch of radii, that cover L. So they cover L, but they don't cover it too much, because if I drop to a slightly smaller ball, they become disjoint, right? That's it; this is what you wanna keep in mind. And the actual construction is basically a complicated version of this, right? That's all. [Question from the audience.] It depends on the indexing, but only up to a dimensional constant, right? If you rearranged them, they would still be epsilon independent up to like a C(n), right? If you want, you could ask this of any k plus one points; I'd be fine with that too, and that would probably be a more reasonable way of saying it anyway. Yeah, has a what now? Great, so what I could have done in this example, and skipped, is to draw a prettier picture, but let me do it since you're asking. I could have done the following. So here C also includes part of this closed interval here, right? So what I'm allowed to do is: this last ball covers part of it, but after I drop to the slightly smaller ball, I now miss it, right? So an accumulation point is perfectly acceptable. The radii can go to zero as you approach it, or not, and the radii can vary any way you want here, as long as that's satisfied and they cover it.
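The "cover, but become disjoint after shrinking" condition in this example is easy to state in code. Here is a sketch of the packing check; the lecture only requires that the shrunken balls {B_{tau r_x}(x)} be pairwise disjoint, and the function name is mine:

```python
import numpy as np

def tau_disjoint(centers, radii, tau):
    """Neck-region packing condition: after shrinking each ball B_{r_x}(x)
    by the dimensional factor tau, the balls {B_{tau*r_x}(x)} are pairwise
    disjoint, i.e. |x_i - x_j| >= tau * (r_i + r_j) for all pairs."""
    c = np.asarray(centers, float)
    r = np.asarray(radii, float)
    for i in range(len(c)):
        for j in range(i + 1, len(c)):
            if np.linalg.norm(c[i] - c[j]) < tau * (r[i] + r[j]):
                return False
    return True
```

For instance, unit balls centered at spacing one along a line overlap, but they pass the check for a small tau like 0.4, and fail it for tau = 0.6.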
Yeah, great. Actually, this is the set C_0; C_0 will be the set of points with zero radii, so I'll just put it there: it's the whole interval itself. Conditions. N1: the first condition is like a Reifenberg condition, a sort of discrete Reifenberg condition. For every ball of radius r inside the ball of radius one, with x in C as center point and r at least r_x, whatever the radius function gives there... so I'm never gonna look below that radius; that's the point, I only look at scales bigger than that radius... there exists a k-plane L, which, like in Reifenberg, may depend on the radius and the point, such that the Hausdorff distance between the center points on this ball and L on this ball is some dimensionally small amount. So again, this is satisfying a Reifenberg condition, but not a super great one: it's really, really small, but not necessarily delta small. I'm not insisting it be an arbitrarily small thing, just really small. If one thinks about the disjointness condition, you can't get away with better than that; it's the right way to think about it. And it holds only down to the appropriate radii. So in this example, despite my really awful picture, this is clearly true, because by definition they all live inside a single k-dimensional affine plane, so I can just pick that plane on every scale. And the only thing that you're losing here is how far away this is from this; and since these are disjoint but maximal, because they cover, one can check this kind of condition. So it's close. So you have a Reifenberg condition down to some scale, at least a bad Reifenberg condition down to some scale. Condition two. So far this has had almost nothing to do with the measure, right? The second condition does. I wanna look at the following subset of these center points; these are the non-collapsed points, as I'm gonna call them.
So that's the set of points such that the mu-volume of any ball around that point has some definite size, at least nu r^k, for r bigger than or equal to r_x, down to the disjoint scale. So that's simply a definition, right? A priori, there may be none; so far in the definition, anyway, there may be none. But these are the points which have at least some definite amount of mass as we go down. And the condition is that these points... well, we can't insist that they satisfy a Reifenberg condition, that's the problem. But we can insist that they be effectively linearly independent, which is why I introduced that notion just now. Meaning: on every single ball, there are k plus one elements of this set which are epsilon linearly independent. And again, that's for every center point x and every radius bigger than r_x, so that'll be my consistent choice of x and r_x here. No questions? Okay, well then, I'll put the last one in, just one more condition, and then we're done. Condition three. We also insist, for every single one of these center points, that if we integrate our beta number from r_x to two, then this is less than delta. So remember, in the end we'll be dealing with some sort of condition on the integral of this, which may just be bounded. But by assumption, if I'm on a ball like this, it's actually small, at least down to that scale. If I go integrating below that scale, all kinds of nonsense might occur, and it may not be small anymore, but down to that point, it's small. So there it is, written once and for all. Okay, condition two: why is that satisfied here? Well, my assumption that our measure was just nu (or V; I'm gonna confuse which letter is which) times the k-dimensional Hausdorff measure guaranteed that the non-collapsing condition actually held for every single point in here.
So certainly it has to span a k-dimensional subspace at each point, effectively; in fact it does so in a very strong sense. And why is condition three satisfied in the example? It's satisfied because the beta numbers are actually zero: the measure lives strictly inside a single k-dimensional plane. Okay. So this is the challenging definition for the course. Let me make a handful of comments about this. Did I put that in the wrong place? I put that in the wrong place. Okay, so I'll write it here. So for the most part, this condition doesn't even see mu infinitesimally, right? It sees mu on one scale, down to some point, unless the radius is zero. If the radius is zero, it's making a statement about how mu behaves all the way down. So in particular, if you want to start coming up with other examples, what you could do is change mu in all kinds of manners inside these balls here. As long as you only change it on scales below the r_x, this will remain a neck region, but mu could be all kinds of craziness inside these balls. Right, so it says nothing about mu below those scales; it's a purely effective condition at one scale. Okay. So let me make a couple of moral points here. We have a bad Reifenberg condition coming from here, right? But the point of this is gonna be the following. If we look at the best approximating subspace on each scale, I mean the one that minimizes this on any given scale, then the fact that there are at least these k plus one linearly independent points on each scale will guarantee that that best plane is basically a delta-good approximation. So at least in an integral sense, if we look at the best plane on each scale, they won't differ from each other, even if they differ from the measure in various ways.
This is gonna be super important, because this is essentially what's going to turn this crazy measure, with these crazy whatevers, into a classical Reifenberg situation in the end. It basically says that even if the measure's crazy, if we look at the best approximations, this is basically just the standard Reifenberg all over again. So had we actually gone carefully through the proof of the classical Reifenberg, the moral of how it goes through would at least be clear in this context, and we'll do some of that tomorrow. Point number two, and I'm gonna make this point in part for the people who are in the working seminar studying the singular set material: these neck regions, with almost the same definition, are exactly what appears when you're studying stratifications of nonlinear equations, except that condition three lets us cheat. So we're gonna wanna ask a couple of questions here. First off, what is the structure of these regions? This example suggests a whole bunch of nice things about the structure; we wanna see whether that holds for all neck regions. So: what is the structure of these neck regions, and do they exist? And this condition three is gonna let us cheat. For those of you who are in the singular set working seminar, you replace that condition instead with saying the drop of the monotone quantity for your equation is small. And turning that into a beta number statement is very challenging, because we wanna use these L2 subspace approximations to turn monotone drops into beta numbers, but that's only possible after the volume's been proven to be finite. So one has this giant loop that you have to be very careful about. We'll get to cheat and have a nice easy version here. Yeah, let's see. [Question.] So that's for a ball of radius... so let R be one here, I'm sorry, right? I did this scale invariantly up here. So this R is that s, right? And now you're saying condition two is for all s that are bigger than r_x, right?
Am I missing a point? No, no, that's okay. Yeah, R is one; R is one in that slide, yeah, exactly. I decided to un-scale-invariant things here and then forgot I'd done it in the slides, which sort of defeats the purpose somewhat. Okay, good. So wherever we see nu, think V, right? Let me do one more quick observation and then we'll talk about the structure of these things a little bit. Note that what's sort of happening here is that we have an arbitrary measure for which we're basically saying: on this region, we get to pretend a Reifenberg condition holds. This is gonna be the key point in the structure theory. And then let me do a second example real quick. Let me take exactly that setup over here, with the center points and the radii, but let me just change mu in the following sort of silly way, the same way we were doing it before when we were cooking up other examples. So mu, well, we can let it have this k-dimensional piece like over there, so nu times H^k on L^k. And then we can add to it some mu-minus, let's say, which is anything we want whose support is inside L. It could be Dirac deltas, it could be all kinds of nonsense; that doesn't change the beta numbers, it doesn't change anything else that's going on there, right? So this sum here is still a neck region in that exact same example; nothing's changed. And then, if we want, we can also add something like delta times the n-dimensional Lebesgue measure on the whole ball. All that changes, from our computations last time, is that the beta numbers are no longer zero, but they'll still be small and summable; they'll be summable in this case because they actually decay polynomially. So this is a slightly more involved example that would still satisfy the conditions of a neck region.
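To make the beta numbers in examples like these concrete, here is a sketch of computing beta^k(x,r)^2 for a discrete (weighted point-mass) approximation of mu. It uses the standard fact that the optimal affine k-plane is the weighted PCA plane through the mu-barycenter; the function name and interface are mine:

```python
import numpy as np

def beta_k_squared(points, weights, center, r, k):
    """Jones beta number beta^k(center, r)^2 for the discrete measure
    mu = sum_i weights[i] * delta_{points[i]}:
        r^(-(k+2)) * inf over affine k-planes L of
        sum over p in B_r(center) of w_p * dist(p, L)^2.
    The infimum is attained by the weighted PCA plane through the barycenter."""
    pts = np.asarray(points, float)
    w = np.asarray(weights, float)
    mask = np.linalg.norm(pts - center, axis=1) < r
    pts, w = pts[mask], w[mask]
    if len(pts) == 0:
        return 0.0
    bary = (w[:, None] * pts).sum(axis=0) / w.sum()   # mu-barycenter
    diffs = pts - bary
    # Weighted second-moment matrix: the sum of its smallest n-k eigenvalues
    # is the minimal weighted squared distance to an affine k-plane.
    M = np.einsum('i,ij,ik->jk', w, diffs, diffs)
    eigs = np.sort(np.linalg.eigvalsh(M))             # ascending order
    resid = eigs[: pts.shape[1] - k].sum()
    return resid / r ** (k + 2)
```

Point masses lying exactly on a line give beta^1 = 0, matching the first example above; perturbing one point off the line makes it strictly positive.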
When I've been giving talks on singular sets and energy identities or whatever else, I've been skipping all the neck structure stuff lately, because it's just impossible to do unless you have the entire rest of a course; if you don't have more than an hour, it's literally impossible. All you do is lose everybody in five minutes. It's probably still true in an hour, but there's really no chance if you have less. However, that's the advantage of recording a talk: one can sit there and absorb it later. Okay, so let me make a couple of observations now about this example, with that collection of center points. They're both almost dumb, obvious observations here; but what we want to know is whether they're dumb, obvious observations for an arbitrary neck region or not. So observation number one is the following: the center points C live in a well-behaved submanifold, in this case an affine subspace. And clearly that affine subspace is bi-Lipschitz to a ball in R^k, because it is a ball in R^k. The other thing, and this is something that's obvious but maybe you hadn't actually thought about: in this example here we have, once again with this extra piece, absolutely no measure control over mu whatsoever globally. However, we do have measure control over all of mu on the neck region. So mu of the neck region is bounded by some dimensional constant, I mean whatever the volume of the ball of radius one is (I think that just makes it less than one, but I'll stick it there anyway), times delta. So what you've now done is say: you've got this collection here, over which you have Hausdorff and Minkowski control; this is a very well-behaved set of balls. And away from it, the full-blown measure mu is bounded, right? This is very much in the spirit of what we're trying to do.
And the main theorem, of course, is gonna be that that's not an accident; that's what's always gonna happen. So you want to think of this as coming in two pieces. First off, I've given you this long, complicated definition of a region. A: is there any point? Do we understand more about these regions than we do about an arbitrary one, right? That's what the structure theorem is supposed to say in a second: yes, we do. And the second point: do these things exist? Because if they don't exist, who really cares anyway, right? That's gonna be the point of the decomposition theorem in a minute. So right now, we'll start with the structure theorem. Theorem, so we'll call this the neck structure theorem. So let mu be a measure on the ball of radius one, and N, which is B_1 minus those balls, a neck region; more precisely, a (k, delta, nu) neck region. Then the following holds. There exists some manifold T living inside my ball of radius one.