Okay, so why is this proof nice? It's nice because, in fact, if we now talk about the general Reifenberg theorem for a few minutes (most of the rest of today will be just hand-waving), it's exactly this, but reverse engineered. Even the estimate, and the way you estimate, is exactly the same. So this was Reifenberg's real idea for how to actually control these Reifenberg sets.

That is to say, let's make some recollections. A set $S$ is Reifenberg ($\epsilon$-Reifenberg, if you want) classically if for every $x \in S$ and every $r$ positive, maybe less than one, there exists some affine subspace, which we should really call $L_{x,r}$ because it depends on the point and the scale, such that on the ball of radius $r$ around $x$, $S$ is very close to looking like that subspace:
$$d_H\big(S \cap B_r(x),\; L_{x,r} \cap B_r(x)\big) \le \epsilon r.$$
So this is the condition, and this is where we're starting. In some sense, we're starting from the limit $S_\infty$ of the last example: we have this set, we know it's Reifenberg, so it always lives close to some subspace. And what we really want to see is the following. If you have such an $S$, then you can actually go backwards and build the $S_i$: a sequence of smooth submanifolds that are close to $S$ at scale $i$, whatever that means (we'll write it down carefully). And when we look at these smooth submanifolds and consider literally just the projection maps from one to the next, we get exactly estimate (1) from before, verbatim; maybe stick a $C$ here and a $C$ there, I don't know. But past that it's exactly that, which means everything else we just did goes through verbatim. I mean literally verbatim.

So, roughly speaking, let's try to see how we get our picture. Here's our crazy $S$ now; it's too crazy. What information do we need to build some sort of smooth manifold approximating it at scale $r$? Scale $r$ to me means on the ball of radius $r$: that's where it's going to approximate. In fact, all that really matters, and I want to emphasize this point for at least our hand-waving sketch of how we're going to do this for general measures, is that we need these subspaces and we need them to remain close to one another.

So here's an exercise. It's basically ten lines, but it's extremely nice because it puts the right picture in your head. Imagine now that the subspaces are fixed: for every $x \in S$ and every radius $r$, I have chosen a subspace $L_{x,r}$ that I know $S$ is $\epsilon r$-close to on that ball. Then the exercise is the following. The setup: we have an $\epsilon$-Reifenberg subset $S$ and these fixed choices of subspaces $L_{x,r}$, which are always close to $S$. Now choose two comparable balls: let $B_s(y)$ and $B_r(x)$, with $x, y \in S$, sit inside our original ball, and require them to be comparable. There are at least sixty ways of trying to write what that means; here is one: the ball of radius, let's say, $s/10$ around $y$ is strictly inside the ball of radius $r$ around $x$, which itself is strictly inside the ball of radius, I don't know, $100s$ or something around $y$,
$$B_{s/10}(y) \subset B_r(x) \subset B_{100s}(y).$$
So the radii are comparable, the distance between $x$ and $y$ is comparable to those radii, and the two balls are kind of the same. (The picture doesn't have to be drastic at all; the two balls can overlap quite tamely.)

Then the claim is simply that the two subspaces are basically the same. Again, there are about sixty ways of writing this; let me just write it in terms of the Hausdorff distance, since we're hopefully getting some feel for that. The Hausdorff distance between $L_{x,r}$ and $L_{y,s}$ on, I don't care, the ball of radius $1000r$, whatever, these two subspaces restricted to the same ball, is at most some dimensional constant times $\epsilon$ times $r$:
$$d_H\big(L_{x,r} \cap B_{1000r}(x),\; L_{y,s} \cap B_{1000r}(x)\big) \le C(n)\,\epsilon\, r.$$
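Since we won't prove the exercise, here is a small numerical sanity check that may help fix the picture. It is a sketch under made-up assumptions, not part of the lecture: the wiggly graph standing in for $S$, the choice of balls, and the use of a PCA best-fit line as a stand-in for $L_{x,r}$ are all mine, and the constant in front of $\epsilon r$ is not optimized.

```python
import numpy as np

def best_fit_line(pts):
    """PCA best-fit affine line through a point cloud: (center, unit direction)."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[0]

def dist_to_line(p, c, v):
    """Distance from the point p to the affine line through c with direction v."""
    d = p - c
    return np.linalg.norm(d - (d @ v) * v)

def hausdorff_on_ball(L1, L2, center, R, n=400):
    """Hausdorff distance between two affine lines restricted to B_R(center)."""
    def sample(c, v):
        c0 = c + ((center - c) @ v) * v          # point on the line nearest center
        t = np.linspace(-R, R, n)
        pts = c0 + t[:, None] * v
        return pts[np.linalg.norm(pts - center, axis=1) <= R]
    p1, p2 = sample(*L1), sample(*L2)
    return max(max(dist_to_line(p, *L2) for p in p1),
               max(dist_to_line(p, *L1) for p in p2))

# a set that is roughly eps-Reifenberg: the graph of an eps-small wiggle
eps = 0.01
t = np.linspace(-1, 1, 4000)
S = np.stack([t, eps * np.sin(2 * t)], axis=1)

x, r = S[2000], 0.3    # the ball B_r(x) ...
y, s = S[2400], 0.25   # ... and a comparable one: B_{s/10}(y) in B_r(x) in B_{100s}(y)
ball = lambda c, R: S[np.linalg.norm(S - c, axis=1) <= R]

L_xr = best_fit_line(ball(x, r))
L_ys = best_fit_line(ball(y, s))
print("d_H on B_r(x):", hausdorff_on_ball(L_xr, L_ys, x, r), "  eps*r:", eps * r)
# the two numbers come out the same order of magnitude, as the claim predicts
```

The output is just the claim in numbers: on comparable balls, the two best-approximating lines agree to within a constant times $\epsilon r$; they never hit at a definite angle.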
So the picture is this, and it's what you actually care about. Here's a ball of radius $r$, and here's my best approximating subspace on it. Here's another ball over there, roughly equivalent, around $y$ with whatever radius, and here's its best approximating subspace. Then these two subspaces look the same: they aren't hitting each other at angles. And in some sense this is all we care about. The exercise is basically a triangle inequality of sorts, but you have to go up a few scales.

Okay. So, thinking intuitively for two minutes: what are we trying to do? We're trying to take our set $S$, fix a background radius $r$, and build a smooth manifold, call it $S^r$, why not, which is really close to $S$ in the ball of radius $r$, but smooth, and not moving very much at scale $r$. How might you build such a thing? Think one-dimensionally, where it's actually super easy. Take your $S$ and cover it with a bunch of balls of radius $r$; let's just say they're all radius exactly $r$. (Let me erase the drawn radii so I don't confuse them with my subspaces.) For each one of these balls I have a best approximating subspace, and the subspaces are all close together. And what can I do with these things? I can glue them together.

You can absolutely make this rigorous. It takes a few lines, but it isn't worse than gluing functions together, just with more work: you take a partition of unity and average the subspaces out. In dimension one it's super simple; you can just picture it, which is all you need to do for the moment. You're going to get some smooth submanifold (I somehow drew it to be completely the opposite of all the lines, but you get the point). You have all these really, really close subspaces everywhere, and all you do is find a way of smoothly averaging them out. You can do this in about sixteen different ways; one is to choose a partition of unity and go through, one by one, gluing them together. You can make this rigorous: in high dimensions it takes a few lines, and in dimension one it takes hardly any. It's worth doing. And that builds your $S^r$.

And now the point is that you can do this for every radius; there was no restriction on the radius here. All we needed was that for every point of $S$, at that radius, we had well-behaved subspaces that were always close to one another. Where they actually came from is irrelevant at this point; we have them, and we can glue them together to get smooth submanifolds. And then the idea is simply: if you can do that for every radius, let's do it for some discrete set of radii, like $r_i = 2^{-i}$.
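Before building the discrete family, here is a minimal one-dimensional sketch of that gluing. Again, the specifics are assumptions for illustration: the wiggly graph plays the role of $S$, the local pieces are least-squares affine fits rather than literal Reifenberg planes, and the bump function and the spacing of the cover are made up.

```python
import numpy as np

def bump(t):
    """Smooth bump supported on (-1, 1)."""
    out = np.zeros_like(t)
    m = np.abs(t) < 1
    out[m] = np.exp(-1.0 / (1.0 - t[m] ** 2))
    return out

def glue(S_fun, r, x):
    """Build the smoothed graph S^r: cover [-1, 1] with intervals of radius r,
    fit a local affine piece ell_j on each, and average the pieces with a
    partition of unity: f = sum_j phi_j * ell_j."""
    centers = np.arange(-1.5, 1.5 + r, r)        # overlapping cover
    weights, pieces = [], []
    for c in centers:
        s = np.linspace(c - r, c + r, 50)
        a, b = np.polyfit(s, S_fun(s), 1)        # local affine approximation
        pieces.append(a * x + b)
        weights.append(bump((x - c) / r))
    W = np.array(weights)
    W /= W.sum(axis=0)                           # normalize: partition of unity
    return (W * np.array(pieces)).sum(axis=0)

# a wiggly 1-D "set" S (as a graph) and its smooth scale-r approximation S^r
S_fun = lambda s: 0.05 * np.sin(3 * s) + 0.01 * np.sin(40 * s)
x = np.linspace(-1, 1, 1000)
f = glue(S_fun, r=0.25, x=x)
print("max |S - S^r| on [-1, 1]:", np.abs(S_fun(x) - f).max())
```

The design point is the one from the lecture: each affine piece only makes sense locally, but because neighboring pieces are close, the weighted average is a single smooth graph that stays close to all of them, and shrinking $r$ picks up finer and finer wiggles.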
Here's our $S$. So I'm going to build my $S_i$: let's define $S_i$ to be $S^{r_i}$ with $r_i = 2^{-i}$, so I can use $i$'s instead of radii. $S_0$ is something like literally just an affine plane, nothing else, just as in our original example, where we took the first one to be an affine plane. $S_1$ maybe moves around a little bit more, and we go a little further still for $S_2$. And what you prove is that the projection map from each of these to the next satisfies exactly estimate (1) up there, exactly. And then you just compose, just as before, and that's Reifenberg.

[A question from the audience: can parallel lines be glued together?] Yes; in fact, it's easier to glue parallel lines together. Think of the following, which in dimension one you can make perfectly rigorous (in higher dimensions too, but it takes more work). You've got these two lines here. Take a partition of unity with two elements, $\varphi_1$ and $\varphi_2$, summing to one at each point. All you want is a function over, let's call it the $x$-axis, which I'll rotate to be parallel to the two lines, that is equal to the one height up to about here, equal to the other height after here, and smoothly goes from one to the other in between. Glued. The two heights differ by about $\epsilon r$, which is exactly how close the lines are. That's it. And if you think through how smooth such a thing is, it has scale-invariant estimates: on the ball of radius $r$ it lives between the two lines, its derivative is like $\epsilon$, and its second derivative is like $\epsilon r^{-1}$. It is scale-invariantly smooth at that scale, so if you rescale to a ball of radius one, it's a completely well-behaved smooth submanifold, which is why you can project from one to the other with these estimates. They're both really close to affine planes. (I drew $\epsilon$ as being huge there.)

Now, let's see. I have fifteen minutes left and I am halfway through my lecture, so let me decide where I'm going to head with this. We won't prove things; we'll just give some examples: see what can go wrong, why it goes wrong, and, roughly speaking, what you need to fix it, and we'll stop there.

So this was classical Reifenberg. How do we deal with a measure? We want to do similar sorts of things, where we have a measure, we have control over our so-called beta numbers, and we want to build approximations just like this. And in principle it should be clear that this is utterly impossible as stated; we need something else. This is what all the neck-region business was about yesterday: neck regions are the regions on which we'll be able to do this. But we would like to understand why, a little bit.

So first off, let me define the subspaces. We have a background measure now: a measure $\mu$, living in the ball of radius one, why not. There's in some sense only one natural way to define subspaces for it. We are studying this thing through its so-called beta numbers, so we should define the subspace on a given ball to be the minimizer of the beta quantity. So our $L_{x,r}$ (again, we have a ball of radius $r$ around $x$, which I guess I'll keep inside the ball of radius one, but it doesn't really matter, I suppose) is going to be the argmin: the affine $k$-dimensional subspace $L^k$ that actually minimizes
$$\beta_k(x,r)^2 = \inf_{L^k \text{ affine}} \; r^{-2-k} \int_{B_r(x)} d(y, L)^2 \, d\mu(y),$$
where the factor $r^{-2-k}$ is there just for scale invariance.
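For a concrete feel, here is one way to compute that minimizer when $\mu$ is a finite sum of point masses. The reduction to a weighted PCA (the minimizing $k$-plane passes through the $\mu$-barycenter of the ball and is spanned by the top $k$ eigenvectors of the second-moment matrix) is classical weighted least squares; the function and its setup are mine, a sketch rather than anything from the lecture.

```python
import numpy as np

def beta_minimizer(points, masses, x, r, k):
    """For mu = sum_i masses[i] * delta_{points[i]}, minimize
        r^(-2-k) * integral over B_r(x) of dist(y, L)^2 dmu(y)
    over affine k-planes L. Returns (beta_k(x,r)^2, barycenter, basis).

    Weighted least squares: the optimal L passes through the mu-barycenter
    of the ball and is spanned by the top-k eigenvectors of the second-moment
    matrix; the leftover (smallest) eigenvalues are exactly the cost.
    """
    inside = np.linalg.norm(points - x, axis=1) <= r
    pts, w = points[inside], masses[inside]
    bary = (w[:, None] * pts).sum(axis=0) / w.sum()
    diff = pts - bary
    moment = (w[:, None, None] * np.einsum('ni,nj->nij', diff, diff)).sum(axis=0)
    vals, vecs = np.linalg.eigh(moment)         # eigenvalues in ascending order
    beta2 = r ** (-2 - k) * vals[:-k].sum()     # mass living off the best k-plane
    return beta2, bary, vecs[:, -k:]

# sanity check: for mu = delta_p + delta_q, a ball containing both points
# gives beta_1 = 0, and the minimizing line is the one through p and q
p, q = np.array([0.0, 0.0]), np.array([0.1, 0.0])
b2, bary, basis = beta_minimizer(np.array([p, q]), np.array([1.0, 1.0]),
                                 x=np.array([0.0, 0.0]), r=0.5, k=1)
print(b2, basis.ravel())   # 0.0, and the direction (1, 0) up to sign
```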
So the minimizer is the subspace that, in some sense, lives closest to the support of $\mu$ inside this ball; that was our argument before. And if we're going to pick a subspace for every point and every scale, there's really nothing else to pick but this guy. Now, you can do this for a totally arbitrary measure. And here is the first point: what we want from these subspaces is exactly what we just showed in the exercise, namely that they be close on comparable balls. That's what we need in order to glue. And there is zero reason in the world for that to be true here. Even if the beta numbers are zero, so certainly for merely controlled beta numbers, this need not hold.

So let's do a nice example of this, illustrative but easy. I want a measure $\mu$ which is two Dirac deltas at some points $p$ and $q$. Let's choose $p$ and $q$ inside the ball of radius one, with their distance something like one-tenth, something on that scale; maybe one of them is the origin, I really don't care. Let me draw a bigger ball so I can stay close to the definitions. Okay: two points $p$ and $q$, and let $\mu$ be the sum of the Dirac deltas at these two points. The masses could be any reasonable size for all I care, but let's literally take $\mu = \delta_p + \delta_q$, why not.

Now, note the following: what are the beta numbers of this measure? The one-dimensional beta numbers are all zero. For every $x$ and every $r$, $\beta_1(x,r) = 0$, because there is an affine line that contains the support of the whole measure. So they're zero no matter what $x$ and $r$ are.

But let's see what happens. Let me pick a bunch of balls: one ball here, another like this, maybe another over here, and one more. These are all comparable balls, to be clear, all of radius $1/20$ or something like that. So what are the best subspaces for all these balls? For the two balls that see both points, you have no choice: the best subspace comes from the affine line connecting the two. So this one is $L_{0,1}$, and whatever this other ball is, I guess I'll call it $L_{-1/2,2}$; it's not drawn that way at all, but whatever. For those two balls I get the same affine subspace, and our closeness condition is certainly satisfied: if I look at just those two balls, my two affine lines are exactly the same. But now look at this ball, the one that sees only $p$. Here I can pick absolutely any line I want through that point. I could pick this one here; this could be $L_{p,1/30}$, say. And since I can pick it to be whatever I want, it is absolutely not the case that it has to be close to the line through $p$ and $q$.
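To see that failure in numbers, here is a tiny standalone check. The coordinates are made up to match the picture: $p$ at the origin, $q$ at distance $1/10$.

```python
import numpy as np

p, q = np.array([0.0, 0.0]), np.array([0.1, 0.0])   # the two point masses

def cost(c, v, atoms):
    """integral of dist(y, L)^2 dmu over the given atoms, L the line (c, v)."""
    v = v / np.linalg.norm(v)
    d = atoms - c
    return float((np.linalg.norm(d - np.outer(d @ v, v), axis=1) ** 2).sum())

# a ball of radius 1/30 around p sees only the atom at p, so EVERY line
# through p is a minimizer: the cost is zero at every angle
for theta in np.linspace(0.0, np.pi, 5, endpoint=False):
    v = np.array([np.cos(theta), np.sin(theta)])
    print(f"only p, angle {theta:.2f}: cost = {cost(p, v, np.array([p])):.1e}")

# whereas a ball containing BOTH atoms pins the line down to the one through p, q
print("both atoms, line through p and q:", cost(p, np.array([1.0, 0.0]), np.array([p, q])))
print("both atoms, tilted line:         ", cost(p, np.array([1.0, 1.0]), np.array([p, q])))
```

Nothing in the zero-beta condition ties the small ball's minimizer to the big ball's line; that is exactly the closeness that fails.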
So what went wrong in this picture? Easily enough, it seems to be the following. What's special about those two balls, and bad about this one? Both of the good balls contain points carrying a definite amount of mass (mass one, to be exact), which is nice: they're balls with mass, so they have some hope of actually controlling an integral like this. And how many such points do they contain? Two. More precisely, in those two balls there are two effectively linearly independent such points, while in the bad one there is only one. And if we're talking about one-dimensional affine lines, those are determined by two points, so we need two independent such points. And that's it. If you recall, the second statement of the neck structure was that it guaranteed, on every ball, at least $k+1$ effectively linearly independent points near which your measure had a lot of mass. And that's exactly what you need, in principle, to actually force your subspaces to live close together. Since I don't, by any means, have time to prove that for you, I'm going to stop. Thanks.