with a set of points (x, f(x)) such that x is in L, then S satisfies the epsilon-Reifenberg condition. And I would never fully trust my epsilons; maybe it's 2 epsilon. I feel like it's epsilon, but whenever I say anything like that, I'm an analyst, and you might multiply it by 2 to be correct. In this case, by the way, what we're looking at is some graph. Here's our S. In this case, what is the mapping from S to a ball of radius 1 in R^k? The easiest way to get it, and this is not how it works in the general case, which is why the general case becomes more challenging, is just the projection map, right? You have a mapping from S to the k-plane L^k. This is a graph, so there's a one-to-one correspondence between points down on the plane and points up on the graph. Project them down, and you can use this condition to see that this is actually going to be a bi-Lipschitz map, not merely a bi-Hölder map. So it's a clean example to get your head around. [Audience question.] No, no, in fact this is a good question. Let me answer it in two steps, since it bears on the example I'm going to give in just a second. The theorem is sharp in the sense that the map is not bi-Lipschitz, it's just bi-Hölder, right? So to that degree you have a certain sharpness, but not in the other direction. Constructing maps to Euclidean space that are bi-Hölder but not bi-Lipschitz is actually pretty tricky. The next example is what's going to do it, and it shows the sharpness of the Reifenberg theorem, but it is by no means the worst a bi-Hölder map can get. To see what can happen in practice, we have to think stochastically for a second. If you take the Wiener measure on path space, then a generic curve is going to be Hölder continuous, C^alpha for every alpha less than one half, but its image will be two-dimensional, right? That's a lot worse than what's about to happen here, and there will be no Reifenberg condition associated to that.
I mean, there's no stretch of the imagination in which you can make a Reifenberg condition hold for that; that's really the point, even if we were to loosen up on exactly what we asked for. So you can have Hölder maps that are much worse than Reifenberg sets; Reifenberg sets are much better behaved than a generic Hölder map. What was I going to do? Oh, I was going to do an example, okay. Okay, so here's our interesting example. In fact, if you're new to all this, the most important thing you want to do, probably for the next two days even, is understand this example, right? It's gone through in detail in the notes, because basically everything we're going to end up wanting to know about the Reifenberg theorem is in it. It's even going to tell us how to prove the theorem in the end; it's going to give us all the right intuition for what's happening; and it's also going to tell us why these mappings are bi-Hölder and not bi-Lipschitz. So, the example: the so-called snowflake. Okay, let's start with an iterative construction. Everything is going to be in R^2 here, so it's a one-dimensional example in R^2. Let me take two points in R^2 and call them A and B, and let's consider the segment between them: let L_{AB} be, by definition, the segment between A and B, okay? Now, I'm going to fix some epsilon, which will be floating in the background. Roughly speaking, our example will be epsilon-Reifenberg, though it's probably going to be more like 20-epsilon-Reifenberg or some such nonsense. And what I want to do is perturb this from a single segment to a piecewise linear curve with two pieces. So I'm going to come out here, come down here, and call this point C. Let me do a bigger picture so I can say what C is.
I think there are too many people in the back for that small picture to work; we can designate a whole blackboard for a triangle, why not? Okay, I'm about to write some things that are somewhat precise, so I want to be careful. What I'm going to do is make an isosceles triangle here, with two pieces. The height here is going to be epsilon times the length of the original segment, so I'm just moving out by a factor of epsilon. That corner point I'm going to call C, by definition. And I'm going to say that L^epsilon_{AB} is, by definition, the union of L_{AC} and L_{CB}, right? So this new curve is piecewise linear. And now we're going to build our example as follows. Let A_{0,-1} be the point (-2, 0) and let A_{0,1} be the point (2, 0); I'm making these indices match what's written in the notes, just so it all lines up. This segment is going to be S_0, so our original curve is just a straight segment from (-2, 0) to (2, 0) in R^2, nothing complicated. And now, inductively: if S_i is a piecewise linear curve, say the union of segments L_{A_{i,j} A_{i,j+1}}, then the next step of the iteration is to epsilon all of these segments out as above: S_{i+1} is the union of the L^epsilon_{A_{i,j} A_{i,j+1}}, which by definition is just a union of a whole bunch of new segments L_{A_{i+1,j} A_{i+1,j+1}}, right? So let's draw the picture of the first couple, because this is one of those things that's much clearer in pictures than when you write the definition.
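For anyone following along at a keyboard, the iteration is easy to set down in code. This is a minimal Python sketch, not from the notes: the point-list representation and the name `snowflake_step` are mine, and I've normalized the apex height to (epsilon/2) times the segment length, one of those factor-of-2 choices in epsilon the lecturer warns about, chosen so that the total length of the curve is multiplied by exactly sqrt(1 + epsilon^2) per step.

```python
import math

def snowflake_step(points, eps):
    """One step of the construction: replace each segment [A, B] of a
    piecewise-linear curve by the two equal sides A-C, C-B of an isosceles
    triangle whose apex C sits over the midpoint of AB.  The apex height is
    (eps/2)*|AB|, so each new side has length (|AB|/2)*sqrt(1 + eps^2) and
    the total length of the curve is multiplied by sqrt(1 + eps^2)."""
    out = [points[0]]
    for (ax, ay), (bx, by) in zip(points, points[1:]):
        dx, dy = bx - ax, by - ay
        mx, my = (ax + bx) / 2, (ay + by) / 2      # midpoint of the segment
        # (-dy, dx) is a normal vector of length |AB|; scale it by eps/2
        out += [(mx - dy * eps / 2, my + dx * eps / 2), (bx, by)]
    return out

def curve_length(points):
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

eps = 0.1
S = [(-2.0, 0.0), (2.0, 0.0)]                      # S_0: segment from (-2,0) to (2,0)
for i in range(6):
    S = snowflake_step(S, eps)                     # builds S_1, S_2, ..., S_6
```

With this normalization, S_i has 2^i segments of equal length and total length 4 (1 + epsilon^2)^{i/2}, which is the formula that shows up in the lecture.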
So here's my S_1, and my S_2 is going to look like this, and I'm going to keep doing this, right? Iteratively, forever. And we're simply going to ask: what do we know about this example? A priori, nothing. Maybe it's a total, complete mess and a waste of time, but let's study what we can. So let's make some observations about one segment and its replacement, which probably would have been more fitting on the other board, but oh well. So what can we say? First, what's the Hausdorff distance between the segment L_{AB} and the two-piece curve L^epsilon_{AB}? This is a nice, easy thing to check: it's the height of the triangle; that's as far apart as the two get. So the Hausdorff distance between these two is precisely epsilon times the length of the original segment: d_H(L_{AB}, L^epsilon_{AB}) = epsilon * length(L_{AB}). On the other hand, what's another piece of information we might want to know? The length of the new curve. The square of the length is in fact a little easier, because we can apply the Pythagorean theorem on each half: each new side squared is half the base squared plus the height squared. Sum the silly things up and what do you have? You have that the new length is length(L^epsilon_{AB}) = sqrt(1 + epsilon^2) * length(L_{AB}). For the sake of today's lecture that's fine as it stands, but I'm going to observe something now, mainly so that when I observe it again two lectures down the road, it'll be clear. There's a neat little thing going on here: the Hausdorff distance is basically epsilon times the length, right? But how much does the volume go up? Basically by a factor of 1 + epsilon^2, right?
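As a sanity check on these two per-step facts, one can compare a single segment with its two-piece replacement numerically. A hedge on constants: with the apex height normalized as (epsilon/2) times the base, so that the length factor is exactly sqrt(1 + epsilon^2), the Hausdorff distance comes out as (epsilon/2) times the base length, which is within the factor of 2 in epsilon the lecturer says not to trust. The brute-force `hausdorff` on sampled points is my own illustration, not an efficient algorithm.

```python
import math

def sample(A, B, n=200):
    """n + 1 evenly spaced points on the segment from A to B."""
    return [(A[0] + t * (B[0] - A[0]) / n, A[1] + t * (B[1] - A[1]) / n)
            for t in range(n + 1)]

def hausdorff(P, Q):
    """Brute-force Hausdorff distance between two finite point clouds."""
    one_sided = lambda U, V: max(min(math.dist(u, v) for v in V) for u in U)
    return max(one_sided(P, Q), one_sided(Q, P))

eps = 0.1
A, B = (-2.0, 0.0), (2.0, 0.0)           # the base segment L_AB, length 4
C = (0.0, 2 * eps)                       # apex at height (eps/2) * |AB| = 2 * eps
base = sample(A, B)
wedge = sample(A, C) + sample(C, B)      # the replacement L^eps_AB, sampled

dH = hausdorff(base, wedge)              # approximately the apex height, (eps/2) * |AB|
ratio = (math.dist(A, C) + math.dist(C, B)) / math.dist(A, B)   # exactly sqrt(1 + eps^2)
```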
I mean that this has a Taylor expansion: sqrt(1 + epsilon^2) is about 1 + epsilon^2/2, so the length goes up at order epsilon squared, whereas the distance moves at order epsilon. This means nothing for us in this example, right? But I want to observe it so that when I've observed it a third time and actually use it, it makes sense. So what can we do with this? Well, once we have these two pieces of information, it's easy enough to figure out how the lengths of the S_i change and so forth and so on. So let's write all this down. What is the length of S_i? Well, the length of S_0 is 4, from (-2, 0) to (2, 0), and at each step the total length gets multiplied by sqrt(1 + epsilon^2), since the length of every segment is multiplied by that factor and I'm just summing the silly things up. So what I actually have here is length(S_i) = 4 (1 + epsilon^2)^{i/2}. (Check at i = 0: that's 4. Good, excellent.) So our length behaves like this. And on the other hand, and here's sort of the interesting point, what is the Hausdorff distance between S_i and S_j? Well, let's just think about going from S_1 to S_2 for a second. To go from S_1 to S_2, I don't really care what happened up to the construction of S_1; it is what it is. Now, to get to S_2, I do the usual procedure on each segment, so the Hausdorff distance between S_1 and S_2 is exactly epsilon times the length of one of the segments of S_1. Right, so let's go into the sum. Oh, I probably should have mentioned this before, but here's a neat point which we won't really use for anything; you can get around it pretty easily, but it's nice to know. This Hausdorff distance actually makes the collection of compact subsets a complete metric space, interestingly enough.
In particular, it satisfies a triangle inequality, which is why I'm allowed to call it a distance function. Call the verification an exercise; it's like four lines, but it's neat. Right, so to compare S_i and S_j for i < j, I'm just going to sum up the distances from S_k to S_{k+1}. What is the distance from S_k to S_{k+1}? All I'm doing is the isosceles-triangle construction one time on each segment, so d_H(S_i, S_j) <= sum over k from i to j of epsilon times L_k. Ah, I should have written that down: the S_i are piecewise linear curves, and all the segments of S_i have the same length by construction, right? They never change, and the length of any one of them is what I'll designate L_i. For L_i you play the exact same game as before: the original length is 4, and every time you go through the construction, each segment is halved and then multiplied by sqrt(1 + epsilon^2). So L_i = 4 (sqrt(1 + epsilon^2)/2)^i, because the total length gets multiplied by sqrt(1 + epsilon^2), but the number of segments doubles. And that means we have d_H(S_i, S_j) <= 4 epsilon times the sum over k from i to j of (sqrt(1 + epsilon^2)/2)^k. This is probably the most technical part of this lecture series, and it's just a geometric series. So the Hausdorff distance between S_i and S_j is at most, say, 8 epsilon times the value of the summand at i, that is, 8 epsilon (sqrt(1 + epsilon^2)/2)^i, because the series is geometric. And the ratio sqrt(1 + epsilon^2)/2 is comfortably below 1, right? I mean, 2 is much, much bigger than sqrt(1 + epsilon^2) for epsilon small.
And I'm actually going to define alpha so that this is 8 epsilon times 2^{-alpha i}; that is, 2^{-alpha} is by definition the quantity sqrt(1 + epsilon^2)/2, just so I can write it nice and cleanly. If I lost you in that, the key point is that the series is summable and small, right? So what's happening? It's a geometric series, and so the S_i form a Cauchy sequence in the Hausdorff distance. In other words, we expect this to limit to something. We've got all this craziness going on, getting crazier and crazier, but one actually expects a limit, since the distance between S_i and S_j decays geometrically like this, or decays at all, really. And although one could prove it directly, let me just point out that since the sequence is Cauchy in the Hausdorff distance, there really is a limit, because this is a complete metric space. One could prove the convergence by hand without too much work, but that's not helpful right now. So this implies that the S_i actually Hausdorff converge to some S sitting inside R^2, and this is going to be our example in the end, right? All I'm doing here is taking an interval, repeatedly pushing out these isosceles triangles, and we actually get a limit; and this limit, whatever it ends up looking like, is going to be our S. So let me draw that with less height for the wiggles. Okay, and now what's the key observation? I'm not going to prove it for you, because we don't have time, but it's in the notes, and besides, thinking intuitively is better anyway. You can read the notes for the details, but getting the intuition is more important right now. What I've done here, and what that computation over there is saying, is this: my original S_0 is precisely a line segment, and when I kept making these changes, not only did the first change not move far from it, but the sum of all the changes didn't move far from it, right?
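The geometric-series bound is short enough to put in code as well. This is my own sketch with my own normalization of the constants; the lecture's "say, 8" is the small-epsilon value of the exact constant 4 / (1 - sqrt(1 + epsilon^2)/2).

```python
import math

eps = 0.1
r = math.sqrt(1 + eps ** 2) / 2     # common ratio: segment length L_{k+1} = r * L_k, L_0 = 4
alpha = -math.log2(r)               # defined so that r = 2 ** (-alpha)

def dH_tail_bound(i, eps):
    """Bound on d_H(S_i, S_j), valid for every j > i: sum the one-step
    distances eps * L_k = eps * 4 * r**k over all k >= i and use the
    geometric series, giving 4 * eps * r**i / (1 - r)."""
    r = math.sqrt(1 + eps ** 2) / 2
    return 4 * eps * r ** i / (1 - r)
```

For eps = 0.1 the constant 4 / (1 - r) is about 8.04, the lecture's "say, 8"; and since r < 1, the bound goes to 0 geometrically in i, which is exactly the Cauchy property.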
So my final S, whatever it is, is within, say, a 2 epsilon neighborhood of my original segment S_0. I'd have to compute it out precisely; maybe it's 8 epsilon, maybe 4 epsilon, why not. But it's pretty close. So in other words, at least on the ball of radius 1, we see that this S, whatever it is, is very close to a line, an affine plane, right? In fact, if you just keep repeating the construction down at smaller scales and ask the same question, it's the same deal, right? Because if I look here instead, on this small ball, the curve at that stage is again an affine line, and the sum of all the changes past that point is small relative to that scale, which means again that no matter what S did right here, it's going to be close to that affine line, right? And if you make this precise, in, let's say, half a page, what you've just proved is that S actually is a Reifenberg set. So this implies, very sketchily, that S is 4 epsilon-Reifenberg, or maybe 8, I don't know. So this is our Reifenberg set. Okay, so first observations about this set. Why is this a non-trivial example? Let's see that, with nothing else, at the very least it shows the Reifenberg theorem is sharp, in the sense that you can't get bi-Lipschitz; bi-Hölder is the best you can hope for anyway, even if we don't know yet that it's achievable. And that's easy to compute, right? Because we come over here and look at the length of S_i, and what is it? It's 4 (1 + epsilon^2)^{i/2}, which is going to infinity. I mean, if this thing were bi-Lipschitz to an interval, it would have finite length, right? So we see immediately that whatever this thing is, it cannot possibly be bi-Lipschitz to Euclidean space, and neither can the S_i with uniform constants. So bi-Hölder is the most we can hope for. And now the question really is: do we even expect bi-Hölder? And what's nice about this example is that we can build the bi-Hölder maps completely explicitly.
So I'm going to say this in two ways; we can at least see that bi-Hölder is true for this example. So, question: what about the bi-Hölder map? It's not bi-Lipschitz, but what about bi-Hölder? For that, what we need is a mapping phi from, let me say, [-2, 2], why not, into S, which is supposed to be a bi-Hölder map, right? And now, in practice, I want to discuss basically two ways of building this. One way because it's the easiest way to see it. Both ways are in your notes, but the second one I typed up this morning, so it's not completely thorough. So one way we can do this is the following. Well, we have no idea how to build a map into S directly; it looks like a lost cause. But what we can try to do is build maps phi_i from [-2, 2] into S_i. Now, why is this reasonable? Because in some sense, there is actually only one reasonable map into S_i. It's piecewise linear, the domain is an interval, and the only reasonable thing to do is to make sure the map has constant speed as it goes along the curve, right? That's basically it. The curve has finite length even if that length is huge; it's just a piecewise linear thing. So one can choose phi_i, by definition, to be the parametrization proportional to arc length, and that's it; it's the unique constant-speed parametrization. So if one heads this way... I was going to stop here, but I have five minutes, so I guess I can say it enthusiastically instead. The observation you want to make here is the following; I'll just tell you what it is and you can look in the notes. Note the following. Every one of these maps, every one of them, sends -2 to this endpoint and 2 to that endpoint. Now, every one of these maps except phi_0 sends 0 to this apex point, because, note, that point never moves again in the construction.
Every one of these maps except the first two sends -1 to this point and 1 to that point, right? So we don't yet know any reasonable sense in which these maps converge, but we're at least seeing right away that they're stabilizing in a reasonable sense. And if you actually compute out what all this means, with exactly those computations over there, you get precisely bi-Hölder, with exactly that alpha; it's basically that computation rewritten in a new way. So one can explicitly write the map this way and see the bi-Hölder property, and see it all. It's a very nice thing to get your hands on. That being said, that's not the only way to do this. There's another way, which is also in your notes, and I would suggest you read that too, because it's a little more connected to what comes next; probably I'll start next time with it, because you should see it. So this example is neat, but it's not just neat. If you're an expert, it has essentially told you everything you need to know to prove the Reifenberg theorem, because the actual proof of the theorem for a completely general S is basically reverse engineering this example, right? You're going to say that, for a general S, the fact that it is a Reifenberg set means we can build approximations S_i just like these, and we can build mappings phi_i just like those, and we can estimate them just the way we did here, so that we can take limits and show the bi-Hölder property, right? So this is basically a baby version of what the proof of the Reifenberg theorem looks like, and it's actually extremely accurate. And maybe at the beginning of next time we'll say a little bit more about that. Thanks.
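As a postscript, the constant-speed parametrizations phi_i from the end of the lecture are completely explicit, and it's instructive to code them up. Below is a Python sketch: `arc_length_param` is my own helper name, and the hard-coded vertex lists for S_1 and S_2 use epsilon = 0.1 with the apex height taken as (epsilon/2) times the segment length, an assumption about the normalization. The point is the stabilization the lecture describes: phi_1 and phi_2 already agree at -2, 0, and 2.

```python
import math

def arc_length_param(points):
    """The unique constant-speed parametrization phi : [-2, 2] -> R^2 of the
    piecewise-linear curve through `points` (speed = total length / 4)."""
    seg = [math.dist(p, q) for p, q in zip(points, points[1:])]
    cum = [0.0]
    for s in seg:
        cum.append(cum[-1] + s)
    total = cum[-1]

    def phi(t):
        s = (t + 2) / 4 * total                 # rescale [-2, 2] onto [0, total]
        for k, sk in enumerate(seg):
            if s <= cum[k + 1] or k == len(seg) - 1:
                u = (s - cum[k]) / sk           # fraction of the way along segment k
                (ax, ay), (bx, by) = points[k], points[k + 1]
                return (ax + u * (bx - ax), ay + u * (by - ay))
    return phi

# Vertex lists of S_1 and S_2 for eps = 0.1 (apex height (eps/2) * segment length)
S1 = [(-2.0, 0.0), (0.0, 0.2), (2.0, 0.0)]
S2 = [(-2.0, 0.0), (-1.01, 0.2), (0.0, 0.2), (1.01, 0.2), (2.0, 0.0)]
phi1, phi2 = arc_length_param(S1), arc_length_param(S2)
# phi1 and phi2 agree at -2, 0, and 2: those vertices never move again.
```

Because every S_i has segments of equal length, the constant-speed map sends dyadic points of [-2, 2] to vertices, and vertices, once created, are fixed for the rest of the construction; that is the stabilization that makes the limit map exist.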