Recall this nice little construction where we had our two points that we started with, A and B, and an interval: the line segment from A to B, which we called L_{AB}. And we called this particular guy S_0. Then what we did was start making new examples by pushing out to isosceles triangles, boop, boop, boop, boop, moving out by some epsilon times the length in this direction, and continuing this process, so forth and so on, to get our S_i's. So as we had written before, our S_i was supposed to be a union of segments L_{a_{i,j} a_{i,j+1}}, a piecewise linear guy. And what we did was we let S_{i+1} equal the union of the epsilon push-outs — which is exactly what this thing here was — applied to each line segment from a_{i,j} to a_{i,j+1}, to get our next piecewise linear guy. And what we said was that we got this nice sort of Reifenberg curve in the end that was bi-Hölder, and we sort of outlined where the bi-Hölder estimate was going to come from. So, two points on this. One is that, as was pointed out by several of you, that's not Reifenberg. It's a one-line fix. What I did wrong was the following. The way I described it, I said, OK, let's push this out, and then let's push out again, and push out again, and keep pushing out like this. And of course that's silly, because if you just stare at this point, what you're going to have is an angle that looks like that after a finite number of steps, and clearly that's not Reifenberg. What you have to do is alternate. You can always go out this way or go out that way, and you have to alternate. So step two, for instance, should actually look like this, and then this guy for step three will be something like this — that one goes down — dump, dump, dump, dump, and so forth and so on. I mean, it's ten lines to check that it fixes it, but it's one line to fix it. So what you should really do is view the push-out at segment j of stage i as going out by (-1)^{i+j} times epsilon.
So if it's negative, that means you move in the non-oriented direction. And what happens when you do it like this is that the angles will still move in every direction, as they have to, but they'll be continuous with respect to one another. So when you zoom in at one point, it'll still look like a line on each scale, and so it'll have the Reifenberg condition. And every other computation we did goes through absolutely verbatim — none of it actually depended on this, just that one condition. So yeah, you've got to alternate. We will come back to this example. I think the way I'm going to do this is the following. I actually want to understand the bi-Hölder estimate a little better for this example. In fact, I want to do it differently than the way I outlined before, because it's more motivating of how Reifenberg will work. But lecture four is dedicated to the proof of the neck structure theorem, and as I said, the proof of the classical Reifenberg is set up to mimic this. So I'm going to come back to this example in lecture four and do it more carefully at the beginning. For now, we'll just move on and recall this as being a nice Reifenberg example that showed the sharpness of everything. So what I'm going to do today is the following. I want to talk about a more general Reifenberg theorem, something for measures, to deal with a few issues, and we're going to basically build a lot of background to try to understand it. I think I'm going to leave this here on that board; I am going to come back to a slight refinement of this example in a moment. So the idea is what? The Reifenberg theorem was nice, right? It said that if we had a set and we could approximate it by affine planes on all scales, then it had to be bi-Hölder to a manifold. And this is pretty interesting. In practice, when we want to apply these things to singular sets, it doesn't work like that. It's not quite so clean.
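To make the alternating sign concrete, here is a minimal numerical sketch of one refinement step. The function names and the choice of offset h = ε·L/2 are my own; that particular offset is what makes the length scale by √(1 + ε²), matching the computation we'll recall below.

```python
import math

def refine(points, eps, i):
    """One snowflake step: replace each segment of the polygonal curve S_i
    (given by its vertices) by the two legs of an isosceles triangle,
    pushing the midpoint out by (-1)**(i + j) * eps * L / 2 along the
    segment's normal, where L is the segment's length."""
    new_pts = [points[0]]
    for j in range(len(points) - 1):
        (x0, y0), (x1, y1) = points[j], points[j + 1]
        dx, dy = x1 - x0, y1 - y0
        L = math.hypot(dx, dy)
        nx, ny = -dy / L, dx / L        # unit normal to the segment
        sign = (-1) ** (i + j)          # alternate: this is the one-line fix
        h = sign * eps * L / 2          # offset giving length factor sqrt(1 + eps^2)
        mx, my = (x0 + x1) / 2 + h * nx, (y0 + y1) / 2 + h * ny
        new_pts += [(mx, my), (x1, y1)]
    return new_pts

def length(points):
    return sum(math.hypot(b[0] - a[0], b[1] - a[1])
               for a, b in zip(points, points[1:]))

S = [(0.0, 0.0), (1.0, 0.0)]            # S_0 = the segment from A to B
eps = 0.1
for i in range(5):
    S = refine(S, eps, i)
# each step multiplies the total length by sqrt(1 + eps^2)
```

With a fixed sign instead of `(-1) ** (i + j)`, each vertex accumulates turns in the same direction and develops a corner — exactly the failure of the Reifenberg condition described above.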
So there's a variety of issues that appear if you try to use the classical Reifenberg, and let me list them for you. The first issue — well, one of these is not such a big deal; the other two are bigger deals. So the Hausdorff distance used in the definition is like an L-infinity bound. And as happens in PDEs, you don't have L-infinity bounds; you have L^p bounds of some sort or another. In particular, for most of the equations this has been applied to so far, it's more like L^2 bounds, integral bounds. So in practice, we want an integral bound on things. This is going to be not horrible to fix. What we're going to end up doing at some point in this lecture is introducing something called the Jones beta number to fix this. The main point one really wants to think through is that you're not applying it to sets anymore, where L-infinity makes sense; you're going to apply it to measures. So we need to move to the measure world. And when you move to the measure world, it's now open to you to have things besides L-infinity — other L^p estimates. And this is what the Jones beta numbers are. We'll introduce those and work with those for probably half this lecture; this is what we want to get a feel for. Second problem. In applications, our sets or our measures, which are supposed to be like singular sets of things, have holes in practice. Actually, more to the point, they simply don't satisfy the Reifenberg condition. Which is to say, it's not a question of holes or no holes: you just don't have a Reifenberg condition. That might sound like a deal breaker. But what happens is basically the following, and this is where the notion of a neck region is going to come into play, along with the notion of a neck decomposition. A general measure is a general measure.
But we will break it into pieces, where it turns out that if we have some sort of integrable control on our beta numbers — even if it's some large number — then most pieces, or at least lots of balls, will have sort of a weak version of the Reifenberg condition. So on the whole it may not hold, but we can at least focus on so-called neck regions, where we have this sort of weak version to actually work with. So we'll need neck regions to deal with this, and the neck decomposition theorem, which is basically saying that most regions either are neck regions or at least carry some other bounds that are useful. And the third point, and this is also a deal breaker: bi-Hölder is too weak. In practice, when we're studying things — say, singular sets of nonlinear equations — we want two things. We want manifold structures, rectifiable structures, which I'll also define for you today, which are like bi-Lipschitz manifold structures. And we want volume bounds on the singular sets; we want to know the singular sets aren't too big. And bi-Hölder doesn't do this, right? Bi-Hölder is all kinds of crazy. We need control on gradients to do this. So what we need, essentially, is a stronger condition to get a stronger result, because we know Reifenberg doesn't do it. And to discuss this a little bit, I want to go back to this example real quick, just so we can see how it works even in this sort of silly situation where we keep doing a piecewise linear guy. And that is an ugly picture at this point — I just don't like leaving that up there; it bugs me. Let's fix that. We should stare at something pretty. Boop, boop, boop. Okay. So let's look at the same example, where we keep taking our piecewise linear guys and breaking them up into more piecewise linear guys, but let's do something else on top of this. So how did we get from our S_0 to our S_1? We went up by epsilon, right?
So to go up from here, we went up by epsilon. Well, instead, what we could do is move up by a different amount at each stage — there's no reason not to; it's a perfectly interesting thing to study. So we can move up by some epsilon_0 here, whatever that is. We can move up by some epsilon_1 here, whatever that is. And in general, we can move up by some epsilon_{i+1}, whatever that is, here. So it's the exact same construction. We can even insist these are all at most some epsilon if you want — I don't care. But we could, if we wanted, actually move out by less and less. And now what happens when we do this? Recall our computation from yesterday, which was simply that if we want to know how the length changed, then the length of S_{i+1} is the square root of 1 + epsilon^2 times the length of S_i. This is just because we do our nice little Pythagorean theorem here. And now, of course, if we're moving out by a different amount at each stage, it's going to be — I think this is an i the way this is written, but whatever — the square root of 1 + epsilon_i^2. Now, for curves, bi-Lipschitz structure, gradient structure, and volume are all the same, right? So life's very easy. So what we're really asking here — the problem was that the volume of this thing for a constant epsilon was going to infinity, right? That meant in principle that our actual volume was four times the product of the square roots of 1 + epsilon_i^2, and that clearly just goes to infinity if these are all a fixed epsilon. But now one can ask the question: when does it not go to infinity? And you get a pretty clear answer from this, and the answer is pretty interesting, because the square root of 1 + epsilon^2 — let me just fake it out a little bit here.
This is, roughly speaking, four times the product of 1 + epsilon_i^2/2, right? Taylor series — you're off a little bit if you want, because these are small. And this here is going to be finite if and only if the sum of the epsilon_i^2 is finite. And even though I'm being a little fake from here to here and here to here, this is certainly a true statement: it's finite if and only if the sum of epsilon_i^2 is finite. What that's telling you is that if I actually want a rectifiable curve here — an actual curve of finite length — then not only do these epsilon_i's have to go to zero, but that's not enough: they have to go to zero sufficiently fast that their squares are summable. The fact that there's a square here is really good and really bad. It's much better than it being epsilon_i. And the square is what lets you get away with it for nonlinear equations; it turns out that square is precisely what you can control for some nonlinear equations — no better, no worse. If that were any other power, you'd be in bad water. Let me point out that this sort of moral from this example was refined by David and Toro into a theorem which says that if you have a Reifenberg set for which, on each scale, you have an epsilon_i-Reifenberg condition, with the epsilon_i^2 summable at each point and bounded, then you actually do in fact get a rectifiable curve at the end. Okay, good. So now that we have some motivation of where we're heading, we need that and that. So let's start with the following. I'm going to crash-course you on content. So when you are doing this, or any type of quantitative analysis or PDE nowadays, you want to talk about sizes of sets, of course. And there's more than one notion of the size of a set, and which notion you deal with is actually quite important. The three notions I'm going to talk about here are Hausdorff, Minkowski, and packing content. Now, they're extremely similar.
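As a sanity check on that "finite if and only if Σ ε_i² < ∞" claim, here is a quick numerical comparison; the two specific sequences are my own choices. A constant ε_i makes the length product blow up, while ε_i = 1/(i+1), whose squares sum to π²/6, gives a bounded product.

```python
import math

def length_product(eps_seq):
    """Product of sqrt(1 + eps_i^2): the total length factor after
    applying the snowflake step with size eps_i at stage i."""
    p = 1.0
    for e in eps_seq:
        p *= math.sqrt(1.0 + e * e)
    return p

N = 10_000
constant = [0.1] * N                          # sum of squares = N * 0.01, diverges
summable = [1.0 / (i + 1) for i in range(N)]  # sum of squares -> pi^2 / 6

print(length_product(constant))   # huge: infinite length in the limit
print(length_product(summable))   # stays bounded: a rectifiable curve
```

The bounded case converges because log √(1 + ε²) ≈ ε²/2 for small ε, so the log of the product is comparable to Σ ε_i² — the same Taylor-series "fake" as on the board.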
Let me just give you the definitions so we can get some feel for what's happening here. This will make a difference when we start getting really careful about all of our analysis. So I'm taking a side route now; we're totally changing topics. Definition. This is of content. So let's take a set S. It doesn't need to be closed; it can be whatever you want, some set. And r > 0; both of these are fixed. Then the k-dimensional Hausdorff r-content — that's a lot of words; I'll write the first one out — of this set is the following. We denote it by H^k_r(S), and it's defined to be an inf. Essentially, we're going to look at the inf over all coverings of S by balls of radius at most r, and sum up their k-dimensional volumes. So it's the inf of the sum of ω_k r_i^k such that S is contained in the union of the balls B_{r_i}(x_i), and the r_i all have to be at most r in our assumption, right? So let's draw a picture. Here's S. I'm allowed to cover this thing by balls of whatever radii I want — it has to be covered, and they can't be bigger than r — and then I sum up the k-dimensional volumes of these things to figure out how big you are. The k-dimensional Minkowski r-content — so with an M — we can write two ways, actually. It's r^{k-n} times the volume of the ball of radius r around S. Equivalently, up to a constant, this is the inf over sums of the k-dimensional volumes of balls of radius r — the r_i are not allowed to vary anymore — such that S is contained inside the union of the balls B_r(x_i), right? So for Hausdorff, I can take any covering I want; for Minkowski, I have to cover by balls of radius exactly r. So it should be clear that this guy here should be bigger than that one in principle — I'm very much restricting the covers I have here; they all have to have the same size.
And three, the packing content of a set — if you want, the r-packing content, but actually in practice I don't care about the r so much — is now a sup: the sup of the sum of ω_k r_i^k such that the balls B_{r_i}(x_i) are disjoint, and the x_i are in S. So let me explain this. When you sup out something like this, basically a Vitali argument tells you the balls of five times those radii cover the set, right? So what's happening here is the following. Hausdorff says: if I can find some covering for which this is finite, by balls of any radii, then I have control. Minkowski says: if that covering is by balls of the same radius, then I have control. And packing says: I have to be able to cover by absolutely anything and have control. So packing content insists that not only is there some covering which is controlled, but every covering is controlled. Yes, thank you — they're all constrained; the r_i are constrained to be at most r. Thank you, exactly. I am running out of room. If you have my notes, this is written there verbatim. I'll try to write bigger after this; sorry, I was getting squished. Okay, so classically speaking, people control Hausdorff measure. This turns out to stink for lots of reasons. Sometimes it's all that's true, mind you, but it stinks. Let's just look at one example to understand why. In particular, if you're doing anything quantitative — if you're solving PDEs and trying to get estimates — this is a horrible thing to control. And let's just do one example to see this; in the problem session, your TA will work out a series of examples a little more thoroughly. You have an exercise on this, which I've written incorrectly, so he's going to fix it and then solve it. So let's let S be the rationals inside the ball of radius one. Then for any r, what do we know?
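Since the board version was cramped, here is my transcription of the three definitions above, written out cleanly:

```latex
% Hausdorff r-content: inf over coverings by balls of radii r_i <= r
\mathcal{H}^k_r(S) = \inf\Big\{ \sum_i \omega_k r_i^k \;:\;
    S \subseteq \bigcup_i B_{r_i}(x_i),\ r_i \le r \Big\}

% Minkowski r-content: all radii equal to r; equivalently a tube volume
\mathcal{M}^k_r(S) = r^{k-n}\,\mathrm{Vol}\big(B_r(S)\big)
  \approx \inf\Big\{ \sum_i \omega_k r^k \;:\;
    S \subseteq \bigcup_i B_r(x_i) \Big\}

% packing r-content: sup over disjoint balls centered on S
\mathcal{P}^k_r(S) = \sup\Big\{ \sum_i \omega_k r_i^k \;:\;
    B_{r_i}(x_i)\ \text{pairwise disjoint},\ x_i \in S,\ r_i \le r \Big\}
```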
The k-dimensional Hausdorff r-content of this set S — well, I don't know exactly what it is, but it goes to zero as r goes to zero. This holds for any k strictly bigger than zero. So if k is bigger than zero, then the k-dimensional r-content goes to zero; whereas if k is zero, it actually just goes to infinity. So what this means is that this set is zero-dimensional from a Hausdorff point of view; that's exactly what this statement says. Roughly speaking, how do you see this? Just to give an idea: this is a countable set, right? So what I can do is enumerate it in some way, pick the radius r, cover the first element by a ball of radius r, the second element by a ball of radius r/2, then r/4, then r/8, and so forth and so on. And in fact, I think we just proved it's like 2r — well, bounded by 2r anyway, is what we just proved. So countable sets are very well behaved from the Hausdorff point of view; they're zero-dimensional, is what this is saying. Now, why is that awful if you're doing analysis? Well, if you're doing quantitative analysis, the closure of this set is the entire ball. It's everything. You have no real control. If you're trying to stay away from this set — it may be small in a dimensional sense, but it's a big set in the sense that you can never get away from it. It's dense inside the ball of radius one. And one sees this directly through the Minkowski and packing content. So if we look at the Minkowski content — and this turns out to be roughly equivalent to the packing content for this set — then in fact this is just covering all the points by balls of radius r, which means you've now just covered the entire ball of radius one by balls of radius r. So all you're getting here is that it's roughly equal to r^{k-n}.
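The covering argument for the rationals can be checked numerically. This sketch uses the halving radii from the lecture (shifted by one step, r/2, r/4, …, which is one convenient choice that lands exactly on the 2r bound for k = 1, where ω_1 = 2):

```python
def hausdorff_content_bound(r, k=1, terms=60):
    """Upper bound for the k-dimensional Hausdorff r-content of a countable
    set: cover the i-th point in some enumeration by a ball of radius
    r / 2**(i + 1).  omega_k is the volume of the unit k-ball (omega_1 = 2)."""
    omega_k = 2.0  # k = 1
    return sum(omega_k * (r / 2 ** (i + 1)) ** k for i in range(terms))

for r in (1.0, 0.1, 0.01):
    print(r, hausdorff_content_bound(r))   # bound ~ 2r, so it -> 0 with r
```

The same geometric-series trick works for any k > 0, which is exactly why every countable set is zero-dimensional in the Hausdorff sense.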
So in particular, if k is less than n, it goes to infinity, right? And it's only bounded if k equals n. So from a Hausdorff point of view, this is a zero-dimensional set; from a Minkowski and packing point of view, it's an n-dimensional set. And actually, we like that, right? Because if we can control sets in a Minkowski or packing way — which will be very important in what we're saying later on — we're really doing a much better job of controlling our set. All right. So let me point out in words why you're going to care about packing estimates in a minute. The end results will prove packing estimates on things, not just Hausdorff, and we will directly use that in a corollary. What does a packing estimate give that the other estimates don't? Imagine you have some set you're trying to control. Maybe the first thing you've done is prove some sort of control — Hausdorff control or packing control — over the set. But now imagine you're trying to prove something more refined about it. So what you're going to want to do is cover this set, but not by some arbitrary collection of balls; you're going to want to cover it by balls that are special, right? This makes sense; this is what we do as analysts. We find nice balls to cover things by. But if those balls are nice, you may not know what radii they are. Maybe you can prove there exist some nice balls, but you have no idea what they look like — at which point a Hausdorff control is not sufficient. You have to know that you have a packing control, so that whatever covering this is, you can control the covering by nice balls. And we'll actually have a direct use of this by the end of the lecture, I think; at least I'll point out where it's directly used. Also, when you're proving a priori estimates for nonlinear equations, it's the Minkowski estimates that do it, not the Hausdorff estimates. So one really wants this sort of effective control. Okay, so fine.
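The Minkowski computation for the same dense set can be seen in one line: the r-tube around the rationals in B_1 is essentially all of B_{1+r}, so the content is forced to scale like r^{k-n}. A small sketch (the choice n = 2 and the sample radii are mine):

```python
import math

def minkowski_content(r, k, n=2):
    """k-dimensional Minkowski r-content of a set dense in the unit ball:
    the r-neighborhood fills B_{1+r}, so the tube volume is
    omega_n * (1 + r)**n, and the content is r**(k - n) times that."""
    omega_n = math.pi  # volume of the unit 2-ball
    return r ** (k - n) * omega_n * (1 + r) ** n

for r in (0.1, 0.01, 0.001):
    print(r, minkowski_content(r, k=1))   # blows up like 1/r since k < n
```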
We've crash-coursed packing estimates. Yay. Crash course number two: rectifiability. I'm going to focus on sets; you can do something very similar for measures. Okay, so what does rectifiable mean? What is a nice space? Well, the nicest space out there is Euclidean space. What's the next nicest thing you might be? You might be a manifold, right? So maybe you're Euclidean space, but geometrically speaking, maybe you bend a little bit. These are things that we know how to do analysis on; we like these things. So we like subsets of Euclidean space that are themselves manifolds. That would be best, right? It would be really nice — as with the Reifenberg set — to find a set which is actually a manifold in one sense or another. So what we want from these manifold structures are two pieces. One, we like the topology, but we're going to give that up in just a minute. And two, we like the fact that we can do analysis on them, because there's a bi-Lipschitz structure. There's a notion on manifolds of things like derivatives, gradients, integration — everything you need to work as an analyst, it's there. And if you think for a while about what is somehow the worst-looking set, completely awful, on which we still have all those things from Euclidean space, rectifiable sets kind of fall into that category. So they're very much like k-dimensional manifolds, but we're going to basically throw some stuff out, because it turns out that if you throw that stuff out, we can still work with them as analysts. So let me start with what's going to be an annoying definition. The flat-out definition is the following — and one can be sure people struggled for a decade to get this one right. So let S be a subset of Euclidean space. We say S is k-rectifiable. There are actually about sixteen different names you might throw in front of "rectifiable" for different versions of rectifiability.
I'm going to technically define what some people call countably rectifiable, and I'll point out the differences. S is k-rectifiable if there exists a countable collection of Lipschitz maps — whatever; if you want good English, you're going to have to look at the notes, I have limited room — f_i, which map from some sets S_i, which are subsets of R^k now (this is supposed to be k-rectifiable, so I have a whole bunch of subsets of R^k), into R^n, such that the images of these sets cover S up to a set of measure zero. That's the statement. Oops, did I define measure for you? I might have skipped that. That's okay, I still have my board over there; it's one line. Such that the k-dimensional measure — you just take the limit as r goes to zero; I'll write it precisely over there — of S minus the union of the images of these sets equals zero. So almost every point of S is in the image of a Lipschitz map coming from R^k. That doesn't sound like a lot if this is the first time you've seen this structure; that sounds pretty bad. However, you can essentially restrict yourself and assume that these Lipschitz maps are actually bi-Lipschitz maps, bi-Lipschitz embeddings; there's no harm in that. So view these things as actually being images that are fairly nice: they're each spanning a submanifold that's bi-Lipschitz. And the k-dimensional Hausdorff measure of a set S is literally just the limit of the Hausdorff r-content as r goes to zero. That's all. Okay, so let's talk about a few examples here, just to get a feel for essentially how nasty this can look. And basically, the example I'm going to give you is really more or less as nasty as it gets, give or take. But I'm going to follow that up by seeing how nice these sets are. So, examples. What's that? Right now, I'm not assuming anything about it, right? So what he's really asking about here are two things. First off, sometimes you might assume S has finite k-dimensional measure; that's oftentimes done.
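Written out cleanly, the definition on the board is (my transcription):

```latex
% S \subseteq \mathbb{R}^n is (countably) k-rectifiable if there exist
% Lipschitz maps f_i : S_i \to \mathbb{R}^n with S_i \subseteq \mathbb{R}^k
% such that
\mathcal{H}^k\Big( S \setminus \bigcup_{i=1}^{\infty} f_i(S_i) \Big) = 0,
% where the k-dimensional Hausdorff measure is the limit of the r-contents:
\qquad \mathcal{H}^k(S) = \lim_{r \to 0} \mathcal{H}^k_r(S).
```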
I'm not doing that. I will explicitly say when I assume it has finite k-dimensional measure and what I want — usually, because I care about packing and Minkowski anyway. Secondly, when you say this, you might actually assume it's a finite number of maps and not a countable number; some people call the version here countably rectifiable. And there's a whole bunch of other versions of this floating around. So, examples. Let S^k in R^n — and you can intersect this with the ball of radius one if you want it to be a compact set, that's fine — be a k-dimensional submanifold. I'm going to assume smooth, but for the point of the definition you really only have to assume it's Lipschitz. But let's just say smooth; why not? Clearly, this is k-rectifiable, right? This is the point. Essentially, at every point there's a neighborhood of this thing which is going to be diffeomorphic to a ball in R^k. Union all those maps up, and this is going to be k-rectifiable. First point of nastiness: let S̃^k be any measurable subset of S^k — your favorite measurable subset in the world. This is also k-rectifiable. So the key difference here is that sometimes people will sort of fakely view a k-rectifiable set as being a k-manifold. And this is kind of true, but the point is that it's true away from a set of measure zero, and that measure-zero set need not be closed, which means all topology goes away. So there can be all kinds of nastiness sitting inside here. It turns out not to be that bad, right? There are still going to be tangents at almost every point; it's still going to look like a manifold from every analysis point of view. So we're okay with this. How do you get this, by the way? Take your diffeomorphism maps for the submanifold, and for each of them just pull back the subset, and those are your S_i's over here. So any subset of R^k, in particular, is k-rectifiable.
And now, just to make this frustrating, let S — it has a tilde; maybe I'll call it a hat, whatever — be the union over all rationals q of S̃ translated by q. So think about dimension one for a second. I'll draw a picture; pictures are nice. So if we're in R^2, here's the plane R^2, and here could be my S^1. I can take any subset of that that I want, and now what I can start doing is moving it up and down by translations by rationals. So what I actually get is a countable union of all these things floating around here. And that's also k-rectifiable. So why can you still work with a countable union of such things? Because if you're an analyst, you can basically just work with each one individually, more or less. It's more subtle than that, but that's a first approximation. Okay, and now I'm not going to go through it, but it's at least written in the notes.