Camillo De Lellis probably has a book full of notes on this; he gave some lectures last week. But these are very well-behaved sets. Like manifolds, they have tangents: they look like Euclidean space at almost every point, so we can make sense of directional derivatives and so forth. In fact, that essentially characterizes what these sets look like: they are exactly the subsets whose tangents are Euclidean spaces almost everywhere. Oh, okay, fine. That case is harder. Cantor sets, yes, but even there one has to be extremely careful about what one means. You can take a Cantor set inside some two-dimensional space which is, say, one-and-a-half dimensional. It will not be 1-rectifiable, but it can be 2-rectifiable, right? So even there one has to be careful. It will be measure zero there, so it won't matter. Worse, you can get Cantor sets that are rectifiable, and you can also get ones that aren't; that is less trivial. If I finish early, maybe I'll write it out explicitly for you. It's not hard, but it would take five to eight minutes. Okay, so now we move on to, I think, our last real piece of meat: Jones' beta numbers. As we said in what I just erased, one major issue in applying Reifenberg to actual applications is that in principle we only have integral estimates, not pointwise estimates. For that reason, and a bunch of others as it turns out, it is more convenient to work with measures. So what we want is this: in the same sense that the Hausdorff distance was used to control how far a set is from an affine plane, we want to know how far a measure is from at least being contained inside a plane; that is in fact how it's going to be written.
So let's write the definition first. Given a measure mu and an integer k, we define what I'll call the L2 beta numbers (that's all I'll really work with in this lecture) as follows. It's a function of x and r, and you should view the pair (x, r) as a ball: what this is going to measure is how far mu is from being contained inside an affine k-dimensional subspace on the ball centered at x of radius r. That's what we're asking. And just as with the Reifenberg condition, we take an infimum over all affine subspaces, but what we infimize is a scale-invariant integral. So we take an affine subspace L, look at the distance function to L, and square it; this is beta squared, sorry, so beta itself is the square root of that. For every point in the support of mu, we ask how far that point is from L, square it, and integrate over the whole ball. It's a reasonable thing to do. And the normalization here is exactly scale-invariant: if I take the ball of radius r, rescale it to the ball of radius one, and pull mu back in the suitably normalized sense, then the beta number on the ball of radius one will be the same. We always love scale-invariant quantities floating around here; you should consider that an exercise. Okay, some comments. Note that this is not, in some sense, controlling how close L is to mu; it's controlling how close mu is to L. For instance, mu could be zero, and then this is zero, whereas an empty set would not be close to an affine plane the way we defined closeness before. So all we're asking now is how close mu is to being contained in a plane, and there can be holes: the support could just be a subset of the plane, and this would still be zero. We'll do some examples.
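In symbols, and assuming one common normalization (the exact powers of r are a convention; I'm using the one that makes the quantity dimensionless and scale-invariant), the definition just described reads:

```latex
\[
\beta^{k}_{\mu,2}(x,r)^{2}
\;=\;
\inf_{L\ \text{affine }k\text{-plane}}
\ \frac{1}{r^{k}}
\int_{B_{r}(x)}
\left(\frac{\operatorname{dist}(y,L)}{r}\right)^{2}
d\mu(y).
\]
```

With this normalization, beta^k(x, r) vanishes for all x and r exactly when the support of mu lies inside a single affine k-plane, and rescaling B_r(x) to B_1(0) with the matching pushforward of mu leaves the number unchanged, which is the scale invariance mentioned above.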
And actually, I'm going to do three examples, and they're meant to motivate the following question: imagine I have a completely arbitrary measure mu and I know something about its k-dimensional beta numbers. Maybe I know a lot; maybe they're super nice, a lot nicer than we'll end up assuming in the theorem. Maybe they're zero, or decaying polynomially, or maybe we know a lot about them. What can we say about mu, and what can't we say, in these cases? Because we want some fuel for what we can hope to control. So, example; let me start with the silly one. Say mu^k is a measure completely contained inside some affine k-plane. Then beta^k(x, r) is zero for every x and every positive r, right? And this is essentially an if-and-only-if condition. However, precisely with this in mind, let me pick a concrete example that does this. Let L^k be a subspace, say through the origin for the sake of argument, and let mu^k be some multiple of the Dirac delta at the origin plus some multiple of the k-dimensional Hausdorff measure restricted to L. Recall what this restriction means: the measure of a set is what I get by intersecting the set with L and measuring that with H^k. That's how you should understand it. Then it's just an instance of the above, but I really want to fixate on it for one second. The beta numbers are zero for every x and r, and, this is somehow the point, also for every choice of the coefficients alpha_0 and alpha_k. And why does that matter? It matters because it says that even if the beta numbers are flat-out zero all the time, there is no measure bound on this thing, right? And in the end a measure bound is part of our goal.
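If you want to experiment with these examples numerically, here is a small sketch, not from the lecture: it estimates the squared beta number of a discrete measure (a finite sum of weighted Dirac masses) using the classical fact that the affine k-plane minimizing the weighted sum of squared distances passes through the weighted centroid, with residual equal to the sum of the smallest n minus k eigenvalues of the weighted scatter matrix. The function name and normalization are my own choices.

```python
import numpy as np

def beta2_squared(points, weights, x, r, k):
    """Estimate beta^k_2(x, r)^2 for the measure sum_i w_i * delta_{p_i}.

    Normalization: r^{-(k+2)} times the minimum over affine k-planes L of
    sum_i w_i * dist(p_i, L)^2, taken over the points p_i in B_r(x).
    """
    mask = np.linalg.norm(points - x, axis=1) <= r
    p, w = points[mask], weights[mask]
    if w.sum() == 0:
        return 0.0  # empty measure on the ball: beta is zero
    c = (w[:, None] * p).sum(axis=0) / w.sum()  # weighted centroid
    d = p - c
    M = (w[:, None] * d).T @ d                  # weighted scatter matrix
    eig = np.linalg.eigvalsh(M)                 # eigenvalues, ascending
    resid = eig[: p.shape[1] - k].sum()         # best-plane squared residual
    return resid / r ** (k + 2)
```

On a discretization of the example above, masses on a line through the origin plus a huge atom at the origin, this returns zero no matter how large the weights are, which is exactly the "no mass bound" point being made.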
We want rectifiable structure on measures, and we want measure bounds on measures. This example says that even if the beta numbers are identically zero, you have no measure bound: the mass can be absolutely arbitrarily big, and nothing stops it from being large. However, it doesn't rule out being k-rectifiable, because of course the measure sits inside a k-dimensional plane. So now let's give an example that rules out being k-rectifiable, one that might make you think most of this lecture has been pointless. You guys have the good chalk. Let's see; example. Let me do something at the complete opposite extreme. Notice that although I called that example mu^k, I could have broken it into two pieces, which I might have called mu^k and mu^-, because one of those pieces is very k-dimensional and the other behaves like something of dimension strictly less than k: a Dirac delta is concentrating on something even less than k-dimensional in support. With that in mind, I'm going to define a mu^+, which will be supported on something bigger than k-dimensional. So let mu^+ be simply epsilon times the standard Lebesgue measure on R^n, say on the ball of radius one or two or whatever; the n-dimensional Hausdorff measure is the standard Lebesgue measure, and we'll take it on the ball of radius two. So all I'm doing is integrating against Lebesgue measure. And I can ask: what are its beta numbers? Now they're no longer zero, since the measure is not contained inside a k-plane, but they turn out to be extremely small. A nice exercise, which is like three lines, but you should do it, is that the k-dimensional beta numbers of this example satisfy beta^k(x, r)^2 equals a dimensional constant, comparable to the volume of the unit ball in Euclidean space, times epsilon times r^(n-k); I'm squaring the left side. So if k is, say, one, or anything less than n, look what's happening.
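The three-line exercise goes roughly as follows; taking the k-plane through the center as the competitor already gives the right order, so this is an upper bound rather than an exact evaluation:

```latex
Write $y=(u,v)\in\mathbb{R}^{k}\times\mathbb{R}^{n-k}$ and take $L$ the $k$-plane $\{v=0\}$ through $x$, so $\operatorname{dist}(y,L)=|v|$. Then
\[
\beta^{k}(x,r)^{2}
\;\le\;
\frac{\varepsilon}{r^{k+2}}\int_{B_{r}(x)}|v|^{2}\,dy
\;=\;
\frac{\varepsilon}{r^{k+2}}\cdot\frac{n-k}{n+2}\,\omega_{n}\,r^{n+2}
\;=\;
\frac{n-k}{n+2}\,\omega_{n}\,\varepsilon\,r^{\,n-k},
\]
using $\int_{B_{r}}|y|^{2}\,dy=\frac{n}{n+2}\,\omega_{n}\,r^{n+2}$ and symmetry among the $n$ coordinates.
```

Note in particular that the squared beta number is linear in the measure, hence linear in epsilon.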
This is always small, and not only is it small, it's decaying polynomially, right? You will never in your life have a practical example where the beta numbers behave better than that: small and decaying polynomially is awesome. So what's good and bad about this? What's terrible is that it's clearly not k-rectifiable. The beta numbers are small, and in fact decaying, and yet I have no k-dimensional structure on my measure. By the way, if you haven't thought much about measures before, these are simply a nice collection of examples to think about; you get to manipulate them a little and get your head around measures some. Okay. Now, I want to do one more example before stating theorems. Oh, let me first give an exercise, because I really want you to do this for the next part, so I'm going to write it down; this gets used constantly. Imagine I have two balls which are roughly comparable. The way I can write that is: the ball of radius 2s around y is contained inside the ball of radius r around x, which is contained inside the ball of radius, I don't know, 100s around y again. Pictorially: here's my ball of radius r around x, and here's basically what my ball around y looks like. It's contained in there, but if I multiply s by some big number, the ball of radius r is contained in that, right? So they're comparable balls. Then the claim is that the beta number of the big ball controls the beta number of the small ball: beta^k(y, s) is less than or equal to some dimensional constant times beta^k(x, r). This is one of those statements that is obvious if your head's in the right place, and if it's not obvious, well, you need to spend an hour thinking about these things. That's all; that's what happens.
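For what it's worth, here is the short version of that exercise. The containments $B_{2s}(y)\subset B_{r}(x)\subset B_{100s}(y)$ force $r\le 100\,s$, so for any affine $k$-plane $L$:

```latex
\[
\frac{1}{s^{k+2}}\int_{B_{s}(y)}\operatorname{dist}(z,L)^{2}\,d\mu
\;\le\;
\Big(\frac{r}{s}\Big)^{k+2}\,
\frac{1}{r^{k+2}}\int_{B_{r}(x)}\operatorname{dist}(z,L)^{2}\,d\mu
\;\le\;
100^{\,k+2}\,
\frac{1}{r^{k+2}}\int_{B_{r}(x)}\operatorname{dist}(z,L)^{2}\,d\mu,
\]
```

and taking the infimum over $L$ gives $\beta^{k}(y,s)^{2}\le 100^{\,k+2}\,\beta^{k}(x,r)^{2}$.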
So if you find yourself writing a page to prove it, you're thinking too hard, right? Note that the other direction is highly false: the big ball controls the small ball, not the other way around. Okay, that being said, I now want to present one more example, and then we'll do theorems. In fact, the last example is very easy; it just builds on the previous two. Let mu equal mu^k plus mu^+, exactly the sum of those two examples. That's it. Then the exact same computations, using the affine space L as your test plane, show that the beta numbers of this measure behave no worse than those of the two pieces; I'll throw a larger dimensional constant in front, because why not, but I think it's just a factor of four. So the beta numbers are small, and they're actually decaying. And why am I pointing this example out? Because now we have no k-rectifiable structure and no measure upper bounds, right? One seems doomed. However, what do we get from looking at this example? At least for this example, we can guess that the measure can be split; stupidly, it's defined that way, right? We can split it into two pieces, one of which is k-rectifiable, though maybe without measure bounds, and the other of which isn't k-rectifiable but has measure bounds. And the basic theorem here is that this is always true: if we have reasonable control over the beta numbers, we can always do such a decomposition. The measure is either bounded, or it's k-rectifiable, or it's a sum of two pieces like this. So the theorem, which is what I'm calling the rectifiable Reifenberg theorem, basically comes from the paper with Nick Edelen and Daniela Valtorta, though we won't phrase it quite this way.
We made sure to phrase it in as confusing a manner as possible when we wrote the paper. So let's take a nice Borel measure mu, and for the sake of argument say it lives inside the ball of radius one. I hope I have enough room here; I don't, so I'll just stop before I get to the end. The theorem: our measure mu is in the ball of radius one, and we assume the following. What do I mean by the beta numbers being nice? I mean that the integral over the ball of radius one of the integral from zero to two of beta^k(x, r)^2, with respect to dr over r, is bounded. Let me explain what this inner term means; it's basically a Dini-type condition. The nice way to think about it, uglier to write but easier to think about, in fact uses that exercise over there. Remember, if one is going to get rectifiability and volume control, it has to be a stronger condition than Reifenberg's in some sense; each of these quantities at a given scale is like Reifenberg's condition, so something in this information must be stronger. If we let r_i be, say, two to the minus i, these are what I'll call scales: one half, a fourth, an eighth, and so on. Then this inner integral at a given point is, roughly speaking, and I'm not being super precise here, the sum over all scales r_i less than or equal to two, say, of beta^k(x, r_i)^2, because dr over r gives unit mass to each dyadic range of scales. So I'm basically saying: at every point I look at how far the measure is, on the ball of radius one say, from being contained in some k-dimensional plane, and I sum that up with how far away it is at every scale below. This is definitely a stronger condition than Reifenberg's: it's not just one scale, it's the sum over all scales. And that is assumed, at least in integral, to be bounded.
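Written out, the assumption and its dyadic reformulation are roughly the following; Gamma is just my name for the bound:

```latex
\[
\int_{B_{1}(0)}\int_{0}^{2}\beta^{k}_{\mu}(x,r)^{2}\,\frac{dr}{r}\,d\mu(x)\;\le\;\Gamma,
\qquad\text{with}\qquad
\int_{0}^{2}\beta^{k}_{\mu}(x,r)^{2}\,\frac{dr}{r}
\;\simeq\;
\sum_{i\ge -1}\beta^{k}_{\mu}\big(x,2^{-i}\big)^{2},
\]
```

where the comparability of the integral with the dyadic sum is exactly the comparable-balls exercise: on each band $2^{-i-1}\le r\le 2^{-i}$, the measure $dr/r$ has total mass $\log 2$ and the beta numbers at the different radii are mutually comparable.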
There should absolutely be a square on the right-hand side; thank you. Then, how are we doing? Okay. We can break mu into a sum of two measures, mu equals mu^k plus mu^+, such that the following holds. One: the mass of mu^+ is bounded; we think of this as the larger-dimensional piece, like the Euclidean example over there, and it's even bounded by a multiple of that constant. And two: if I call K the support of mu^k, then K is k-rectifiable, k-rectifiable with finite Hausdorff mass; we'll do better than this in one second. So in particular, looking at this example, the measure is infinite, but the support of the measure, on the ball of radius one of course, the way we're discussing over here, has bounded, finite k-dimensional mass. In fact, there is actually a packing estimate, a Minkowski estimate: the volume of the ball of radius r around this set is bounded by C r^(n-k), so the set can't be dense; it has to be well-behaved, and balls around it have to be well-behaved, and the packing content is bounded by some dimensional constant. So not only is the Hausdorff content bounded but the packing content: any covering of this is bounded. What's that? Does the constant depend on Gamma? No, it does not; that's the whole point, exactly. Like over there: somehow you're going to split off a piece where there's not going to be a Gamma. This is actually the main thing one uses in the applications to singular sets. I'd like to write a couple of corollaries; I have about five minutes, and I think I can do it. So, a definition, because this is related to some of Tolsa's work once we put some density bounds in. Let's define densities of measures real quick. Mu here is an arbitrary measure, so what are the reasonable quantities with which to try to throw stuff out?
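Schematically, and glossing over the exact constants (I'm writing generic dimensional constants $C(n)$, following my reading of the Edelen-Naber-Valtorta phrasing), the conclusion is:

```latex
\[
\mu=\mu^{k}+\mu^{+},\qquad
\mu^{+}(B_{1})\le C(n)\,\Gamma,\qquad
S:=\operatorname{supp}\mu^{k}\ \text{is }k\text{-rectifiable},
\]
\[
\operatorname{Vol}\big(B_{r}(S)\cap B_{1}\big)\le C(n)\,r^{\,n-k}
\quad\text{for }0<r<1,
\]
```

the second line being the Minkowski estimate, which is stronger than, and in particular implies, the bound on the k-dimensional Hausdorff content of $S$ in $B_{1}$.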
We can define densities, because if we impose density bounds, then at least morally speaking one of these two examples gets thrown out, depending on whether we bound an upper or a lower density. Yes, this should be a k here; there's some fixed k, absolutely, thank you. Okay, so let me define what are more like weak upper and lower densities; the "weak" is because I'm going to take a sup where one might take an inf. The weak upper density is the limsup, as r goes to zero, of mu of the ball of radius r around x, divided by r to the k. And the weak lower density is the liminf, as r goes to zero, of the same quantity; I'll discuss this in the context of these two examples. In some sense, having one of these bounded rules out the measure being too large. I might be getting my inequalities backwards; I'm at a blackboard with 80 of you looking at me, so I'm going to write the corollary and we'll figure out the right direction from the corollary. In the exact same setup as the theorem, if we now assume upper or lower density bounds, which basically rules out one of these two examples, one can do a little better. If the lower density of mu has an upper bound at every point, then we simply get a mass bound on our ball. That is saying: if we have upper bounds everywhere on the lower density, then what we've basically ruled out is this example over here, so the measure can't actually be some ridiculously large thing, because the lower density being bounded from above prevents that.
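In symbols, the two quantities just defined are the following; on the two running examples they behave in exactly the way the corollary needs:

```latex
\[
\theta^{*,k}(\mu,x)=\limsup_{r\to 0}\frac{\mu(B_{r}(x))}{r^{k}},
\qquad
\theta^{k}_{*}(\mu,x)=\liminf_{r\to 0}\frac{\mu(B_{r}(x))}{r^{k}}.
\]
```

The atom in mu^k has lower k-density equal to infinity at the origin, so an upper bound on the lower density rules it out; while mu^+, epsilon times Lebesgue measure, has upper k-density zero everywhere (its mass in B_r is of order r^n, and n is bigger than k), so a lower bound on the upper density rules that one out.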
And two: if you have a lower bound on our sort of upper density, and for those of you who think I'm writing these incorrectly, I'm not; there's a reason this is weaker than how one might normally write upper and lower densities, then K, which is now the support of all of mu, is k-rectifiable. And if you assume both the upper and the lower bound, then mu itself is simply k-rectifiable with an upper mass bound. Okay, so let me make three points. Fine, so point three, which I won't write for time: if both these assumptions hold, then both conclusions hold. That is to say, mu is k-rectifiable with an upper mass bound. To put this in perspective against some other results in the literature: there are results of Azzam and Tolsa that give a characterization of when a measure is k-rectifiable. Basically, if that quantity there is pointwise finite almost everywhere, then they prove you're k-rectifiable, and Tolsa proved the opposite direction as well. So one can view this as an effective version of one of those directions: it doesn't give a bound on everything that is k-rectifiable; rather, it gives an effective way of extracting bounds when that quantity is actually bounded. And let me point out that if you think long and hard about this, you'll feel the hypothesis is too weak to actually pull that off, and that is exactly why the packing estimates matter: because these are sups and not infs in the way I'm writing things, you actually have to produce a very special covering in order to reach these conclusions, and the packing estimate is used directly in the proof. And I think I am exactly done, right? Okay, thanks. Oh, I'll see you next time.