Okay, so first off, thank you very much for the chance to come here. It's my first time in Park City. Let me make a few comments about what these lectures will be about. The first two lectures should be uploaded now, so you can look at those or go get a hard copy, I guess. For those of you who looked at the abstract: the abstract for this talk was a little on the ambitious side. Case in point, I taught a course on this last year and in 10 weeks got through about two thirds of it. So we'll scale it down a little bit. What I want to focus on is the various versions of the Reifenberg theorem that exist out there and their applications, in particular to singularity theory. For the most part, I'll probably just talk about the Reifenberg theorems themselves in one guise or another, and there are some postdocs running courses on the applications to singularity theory. But I should point out that the proofs I'm going to give, which are not the ones in the literature right now, are specifically designed to motivate the singularity analysis. They're essentially a toy case of what actually happens in the singularity analysis, but a bit easier, because you get to assume away some of the difficulties that genuinely have to be dealt with there. So one can get a feel for all this. Let me start off with a couple of comments. One reason I opted to go in this direction is that, for a broad audience, what's really nice about Reifenberg theory is that it requires almost no background: basically multivariable calculus, and you should know what a measure is by the next talk, Thursday's talk. But at the same time, for those of you who are aspiring analysts out there, the techniques used here are things that you will use 60 times over.
I've worked hard to at least do the following in the notes. The notes are very complete, much more complete than my talks will be, of course. And whenever I make you suffer through technical details, the goal is that these are technical details you will use 60 times over in other situations that have nothing to do with this. That's the goal. If you're an expert, you'll be able to look at a lot of these statements and understand how to prove them without reading the notes, just because you're used to doing this sort of thing. That's what you should aspire to and aim toward. Reifenberg theory itself is the following: in one guise or another, it is about studying general sets or measures, comparing them to well-behaved sets or measures, and using the errors from those comparisons to say that your original set or measure is actually better behaved than you thought it was. You use this to pull in structure for your object. Now, as part of my introduction, I want to do two things. Obviously, I want to go through, lecture by lecture, what I'll be talking about. I also want to go through the four or five different versions of Reifenberg theory that appear in the literature and how they're used in applications. Since this will be pretty hand-wavy, I'll just be talking through it. So what I'm going to do first is jump ahead and give a precise statement of the classical Reifenberg theorem, and then jump back to my introduction. That way we at least have something precise in our heads when I start waving my hands about the other material, so you can envision what I mean for the rest. With that in mind, let's start by stating the classical Reifenberg theorem, if I can find it.
So the classical Reifenberg theorem says the following. In words: if you have a set in Euclidean space which at every scale is closely approximated by an affine plane, then in fact it has to be, topologically, a ball. Nothing complicated really happens with your set in this context. It was proved, as one would guess, by Reifenberg, with the idea of studying minimal surfaces at the time. To make this all precise, let's first pin down what I mean by approximating. So let's recall some things. Definition — by the way, let me know how the blackboard thing goes; I'm skeptical this is going to work well. I can take my notes and basically shove them into slide form for the next lecture if that might work better; we'll find out. Let A and B be subsets, and I'm going to say closed subsets of R^n — compact, why not; it doesn't matter. We want a notion of distance between these two sets: how far away are they from each other? There are about 60 ways one might do this, and in some sense the weakest reasonable one here is the so-called Hausdorff distance. So we define the Hausdorff distance d_H(A, B) to be the infimum over all r > 0 such that the following two conditions hold: A is inside the ball of radius r around B, and B is inside the ball of radius r around A. Pictorially speaking: here's one set, call it A, and here is this disjoint set, call it B. As long as I take a tube roughly bigger than the size of my hand here, then this tube around A contains B, and a tube of roughly the same size — maybe a little bigger — around B contains A. So they're always close to each other.
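Since the definition is so concrete, here is a minimal numerical sketch of it for finite point sets (the function name is my own; for finite sets the infimum in the definition is achieved, so it reduces to two max-min computations):

```python
import numpy as np

def hausdorff_distance(A, B):
    """Hausdorff distance between finite point sets A, B (rows are points).

    d_H(A, B) = inf { r > 0 : A is in the r-tube around B and vice versa },
    which for finite sets is the larger of the two one-sided sup-distances.
    """
    A, B = np.asarray(A, float), np.asarray(B, float)
    # D[i, j] = |A[i] - B[j]|, via broadcasting
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```

For example, the distance between {(0,0)} and {(3,4)} is 5, and the one-point set {(0,0)} is at Hausdorff distance 1 from {(0,0), (1,0)} even though it is a subset of it — containment alone does not make the distance zero.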
Said otherwise, every point of A is within r of a point of B, and every point of B is within r of a point of A — they're close to each other in this sense. By the way, there are about 6,000 exercises written throughout the notes; I'll list some of them on the board here. For reasons beyond my understanding, you have your exercise session right after this class, which means there's no way anyone's thinking about these yet, but let me just write some down. My exercises are designed so that if you have a good visual feel for what you're doing, they're not tricky; if you're having trouble and struggling, it's because you don't quite have the right picture yet. So, start with an exercise — make sure I spell this right. Exercise 1: let S be an arbitrary subset, say compact, and let Xi inside S be epsilon-dense — that is, the ball of radius epsilon around Xi contains S. Show that the Hausdorff distance between S and Xi is less than or equal to epsilon. Exercise 2 is the same basic deal. Start with some S, which I'm going to shove inside the ball of radius one in R^n; and R^n itself I'm going to view as sitting inside R^{n+2} by writing R^{n+2} = R^n x R^2, so S lives in the R^n slice — the zero slice — and is flat-looking in the extra directions. Now let S_epsilon = S x S^1(epsilon), where S^1(epsilon) is the circle of radius epsilon inside R^2. Show that the Hausdorff distance between these two sets, S and S_epsilon, is less than or equal to epsilon. So what are these two examples showing you? They're showing that dimension has basically nothing to do with this distance.
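Both exercises can be checked numerically. A hedged sketch (the sampling resolutions are arbitrary choices of mine, and `d_H` is a finite-point-set stand-in for the true Hausdorff distance): take S to be a fine sample of a unit segment, Xi an epsilon-dense subset of it, and S_eps = S x S^1(eps) sitting in R^2 x R^2.

```python
import numpy as np

def d_H(A, B):
    # Hausdorff distance between finite point sets (rows are points)
    D = np.linalg.norm(np.asarray(A, float)[:, None] - np.asarray(B, float)[None, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

eps = 0.1

# Exercise 1: an eps-dense subset Xi of S has d_H(S, Xi) <= eps
S = np.stack([np.linspace(0.0, 1.0, 101), np.zeros(101)], axis=1)  # segment in R^2
Xi = S[::10]                       # spacing 0.1, hence eps-dense in S
assert d_H(S, Xi) <= eps

# Exercise 2: S_eps = S x S^1(eps) inside R^2 x R^2 has d_H(S, S_eps) <= eps,
# even though S_eps is one dimension higher than S
theta = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
circle = eps * np.stack([np.cos(theta), np.sin(theta)], axis=1)
S4 = np.hstack([S, np.zeros((len(S), 2))])         # S in the zero slice of R^4
S_eps = np.array([[x, y, u, v] for x, y in S for u, v in circle])
assert d_H(S4, S_eps) <= eps + 1e-12
```

The lower-dimensional approximation (discrete points) and the higher-dimensional one (crossing with a tiny circle) both sit within eps, exactly as the exercises claim.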
I can take a set you picture as being a plane — a really nice set — and approximate it by something of lower dimension, discrete points, or by something of higher dimension, by crossing with a small circle and squeezing it down. So Hausdorff distance doesn't preserve much on its own. With this in mind, let's define the Reifenberg condition. Let S be closed — I'm going to put it in the ball of radius two, because it makes the statements simpler later on — inside R^n; I'm always working in R^n. Then we say S satisfies the epsilon-Reifenberg condition — big enough? I can upsize, just yell at me; by the way, I can't see if you raise hands, these lights are in my eyes, so you've got to yell — if for every ball B_r(x) with x in S and also inside the ball of radius two, let's say, there exists a k-dimensional affine subspace. They're always k-dimensional, some fixed k — just some k-dimensional plane stuck in there. And if you want, I can write it L_{x,r}, to symbolize that the plane is allowed to depend on the ball; it can change from ball to ball, otherwise this becomes trivial. The condition is that the Hausdorff distance between S on this ball — S intersect B_r(x), with x in S — and L_{x,r} on this ball is less than or equal to epsilon r. The factor epsilon r is the scale-invariant one: epsilon alone is totally meaningless without the r, since the condition would hold trivially once r is less than epsilon. This is a scale-invariant statement. What it means is that if I take this ball of radius r, rescale it to a ball of radius one, and pull back my affine plane and my set S, then they're epsilon-close. So you can always picture this as a condition on the ball of radius one after rescaling. And yes, k is absolutely fixed.
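To make the condition concrete, here is a hedged numerical sketch for k = 1 (the helper names, the PCA choice of best-fit line, and the test curve are all my own choices): on each ball we fit a line and compute the scale-invariant Hausdorff defect d_H(S n B_r(x), L n B_r(x)) / r, which the epsilon-Reifenberg condition asks to be at most epsilon on every ball.

```python
import numpy as np

def pca_line(P):
    """Best-fit affine line through the points P (rows), via PCA:
    returns (centroid, unit direction of maximal variance)."""
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c)
    return c, Vt[0]

def reifenberg_defect(S, x, r, n_line=200):
    """Scale-invariant defect d_H(S n B_r(x), L n B_r(x)) / r, where L is the
    best-fit line for the sample points of S inside the ball."""
    P = S[np.linalg.norm(S - x, axis=1) <= r]
    c, v = pca_line(P)
    t = np.linspace(-r, r, n_line)
    L = c + t[:, None] * v                       # sample the line...
    L = L[np.linalg.norm(L - x, axis=1) <= r]    # ...and keep the part in the ball
    D = np.linalg.norm(P[:, None] - L[None, :], axis=2)
    return max(D.min(axis=1).max(), D.min(axis=0).max()) / r

# a gentle graph over the x-axis: Reifenberg-flat for k = 1
xs = np.linspace(-1.0, 1.0, 2000)
S = np.stack([xs, 0.05 * np.sin(3.0 * xs)], axis=1)
defect = max(reifenberg_defect(S, S[i], r)
             for r in (0.25, 0.125) for i in range(300, 1701, 200))
```

For a graph with small gradient the defect stays small on every ball tested, consistent with the graphical example at the end of this lecture; for a set with a corner or a cusp the defect would stay bounded away from zero at small scales around the bad point.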
It's got to be the same k for everything. In fact, the way it's written, I'm sure you could prove a short theorem saying that k has to be fixed: if you know this holds for every single ball, k can't change, as long as the set is connected. Let's just say k is fixed. Now, Reifenberg's observation was the following fantastic thing. Imagine you have a set that looks — let's picture what this might look like — we've got some set wiggling around. On the ball of radius one, here's our S, and our L is just this plane here; it's very close. And if I zoom in on this smaller ball, then I have maybe this other plane, call it L-prime, and that's the plane for this ball. So for every ball, this thing looks close to a k-dimensional affine plane. Think of a smooth submanifold: if S is a smooth submanifold, this certainly holds — this condition is a very weak version of that, and in fact that's the motivation. If you picture a submanifold sitting inside R^n — we'll come back to this in a minute when I return to my introduction — one can clearly see this condition holds. But Reifenberg's idea was: how far away from a submanifold can a set satisfying this actually be? Remember, Hausdorff distance is a very weak notion. But at the same time, we're assuming this very weak condition holds on every ball. And Reifenberg observed that this is in fact not arbitrary: the only way it can happen is if S actually is, topologically, a ball. Slightly more precisely — so this is Reifenberg's theorem — let S satisfy the epsilon-Reifenberg condition. There's a cardinal rule about not erasing, and I have to erase; I'm going to do it over here, since I'd really rather not split the statement in half.
I've officially written that statement three times somewhere around here, so I'm going to keep it at that. So we have a set satisfying the epsilon-Reifenberg condition in the ball of radius two, let's say. Then for every alpha strictly less than one, if this epsilon is sufficiently small — depending only on the dimension and on this alpha, nothing else — there exists a map phi from S intersected with the ball of radius one in R^n into the ball of radius one in R^k, and I'll say what this means. We're picturing S as being like a k-dimensional submanifold; that's the picture you want to have. From the ball of radius one on this set into the ball of radius one in the right dimension, there is a bi-Hölder map, and I'll write what that means. That is to say — and one can actually be more precise than this, but this is good enough — for every x and y in this ball one has a two-sided estimate; wait, I should really make that an alpha, shouldn't I? So that exponent should be an alpha, and this one will be, say, a two minus alpha. The point is that this alpha is very close to one, so one exponent is a little less than one and the other a little bigger than one: it's almost a bi-Lipschitz condition, but not quite. That's what you should be visualizing. So what Reifenberg proved is that the set S actually is a manifold in a bi-Hölder sense: there is this bi-Hölder mapping — in particular a homeomorphism, which is the simplest way to think of it — from this set onto the ball of radius one in R^k. So S has to be a manifold; it can't be more complicated. And, you know, such a thing would be utterly useless except for the fact that there are actually quite a few situations where one can force conditions like this.
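To record the estimate from the board in symbols — with the two exponents alpha and two minus alpha as just described; constants are suppressed here, and the exact normalization varies between references:

```latex
% Bi-Hoelder estimate for the Reifenberg map
% \varphi : S \cap B_1(0) \to B_1(0) \subset \mathbb{R}^k,
% with \alpha < 1 close to 1, so both exponents are close to 1:
\[
  |x - y|^{\,2-\alpha}
  \;\le\;
  |\varphi(x) - \varphi(y)|
  \;\le\;
  |x - y|^{\,\alpha}
  \qquad \text{for all } x, y \in S \cap B_1(0).
\]
```

For points at distance less than one, the lower exponent 2 - alpha (slightly above 1) gives the lower bound and the upper exponent alpha (slightly below 1) gives the upper bound, which is the "almost bi-Lipschitz" picture.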
The most basic of these is the following: if you have something like a minimal surface, and its energy is close to being minimal in some sense, then in fact one can prove without a lot of effort that the Reifenberg condition holds. That's easy. And what you do then is just quote this theorem and say: well, now you're a manifold. You can actually do better, but that was the initial thought process. Okay, so this is our classical Reifenberg. How are we doing? Now that we've seen a precise statement, let me jump back to my introduction — quite literally, where are my notes? What I want to do now, real quick, just to get a flavor for it, is describe in a more sketchy sense the other versions of Reifenberg that exist out there and what their applications are. The first one, which we just studied, was the classical Reifenberg: if the set S is always approximable by affine planes, then it has to be bi-Hölder to a manifold. And the main application of this was in regularity theory for minimal surfaces, at least as a first step. Also — I don't know if anyone's ever sat down to do it, but I'd expect you could use this to prove the same regularity with just a lower bound on mean curvature, not a two-sided bound; I imagine someone's done this, and it's the same argument. The second version, which I will not talk about in my lectures but let me just mention, is Reifenberg for metric spaces. In words — because, oh my goodness, I'm 20 minutes in — if you have a metric space, you can mimic this situation almost completely. You replace the notion of Hausdorff distance by something called the Gromov-Hausdorff distance. It's essentially the same thing, but it's how you define notions of closeness for metric spaces instead.
So you can have a metric space which, on all balls, is Gromov-Hausdorff close to a ball in R^n, and when you have this you prove the exact same theorem: it has to be bi-Hölder to a manifold. It's a way of concluding manifold structure for rough metric spaces. The main application of this is due to Cheeger and Colding, in the early 90s or so, where they show that if you look at limits of manifolds with lower Ricci bounds, then away from a codimension two set you are actually, topologically, a manifold. And you prove that exactly this way. The moral is very similar to what I just said for minimal surfaces: there's a monotone quantity, and when this sort of energy is almost minimal, you can verify the Reifenberg condition pretty easily and then conclude a manifold structure — which is otherwise highly non-obvious for these spaces. So even these weak versions of Reifenberg have some pretty nice applications. The third type is the rectifiable Reifenberg. What one does now is start moving from sets to measures. You can take a measure and ask how close the support of the measure is to being contained inside some affine subspace, once again. And you can put conditions on an essentially arbitrary measure — namely, the appropriate Jones beta-number bounds, which we'll discuss thoroughly next lecture or the lecture after — under which every such measure can be split into two pieces: one of which is rectifiable, and I'll say what that means, and one of which has measure bounds. Rectifiable is basically being a k-manifold away from a set of measure zero; just think of a k-dimensional manifold with something thrown out.
We'll give a precise definition, and the other piece has to have measure bounds — I won't say this too carefully, because we'll spend a whole lecture stating the theorem. This is what's being used now to prove estimates for singular sets of various nonlinear equations: harmonic maps, minimal surfaces, free boundary problems, and so forth. The fourth direction generalizes the standard Reifenberg theorem by asking: instead of being close to a plane, what if you are close to a more general class of objects — either a plane or something else? For instance — and this is the main application I know of, and I think one of the papers studied in the postdoc class — what if you study sets that are always close to the zero sets of harmonic polynomials? In this case, you get to build not just a Hölder manifold structure but a stratification structure, because the zeros of harmonic polynomials carry a monotone quantity, called the frequency, which forces a pinching; and that forces a weak version of the same pinching on the set itself, so you get to transfer that information over. So this is also a really nice application of Reifenberg-type theorems. And the final one I'll mention offhand — by the way, the only ones we're going to talk about in these lectures are the classical and the rectifiable Reifenberg; I'm just throwing the others out there — is canonical Reifenberg theorems. Essentially this is the following — and I'm waving my hands, don't worry about it.
In certain applications, the rectifiable Reifenberg — the one you want for studying singular sets more generally — essentially does not transfer over to metric spaces, at least not in an applicable way. The problem is that when you start talking about a set being approximated by a k-dimensional something, you now have to talk about the underlying space being approximated by that thing too, otherwise it doesn't make sense. And it turns out those errors are fundamentally worse — not just aggravating and technical, but genuinely worse. So what you do instead is learn how to build Reifenberg maps differently. Essentially all of these theorems use the same construction: there's a construction going back to Reifenberg, everyone uses it, and the question is really how much information you can extract from it. But now you can instead ask your Reifenberg maps to solve equations. The way it actually works is that you solve an equation — you don't know the solution is a Reifenberg map; you have to prove it is — but what the map is then able to do, in these contexts, is bend itself to the underlying space better and get rid of those higher-order errors. In a paper where we prove structure of the singular set for spaces with lower Ricci curvature bounds, this is what you have to do: you need Reifenberg maps that solve equations. I think you can do this on R^n too, by the way; you won't get better results, but it's a neat way of proving it. Okay, so that's the crash course on more or less why I care about these things. Now let me say what we're going to do in our lectures — ideally; that's not good, but it's okay, I typed the plan this morning, so I can remember it. In lecture one, the rest of this lecture, we'll focus on the classical Reifenberg.
Realistically: what's in your notes is a very detailed proof of the classical Reifenberg theorem, and clearly I'm not getting through that in 20 minutes. But let me say a few words about it, and what I will do today, hopefully, is really work through a very clear example. There's something called the snowflake example, which basically is the Reifenberg proof in miniature — it has all the ideas you need to do the real thing. But what you'll find in your notes is not the standard proof of the classical Reifenberg. I'll talk about this a little more later, but it's rigged up to be more convenient for what we're going to do down the road in the other lectures. When we start studying more complicated situations — namely these rectifiable Reifenbergs — the standard Reifenberg proof breaks down in awful ways. You can fix it, and I won't go so far as to say you need a new idea to fix it, but it's pretty technically awful. The proof of the classical Reifenberg in the notes, on the other hand, will basically carry over and you'll never see any difference. That's the motivation for why the notes go the direction they go. So the rest of today will be on this, and I'm basically going to work up to stating the rectifiable Reifenberg. The version I'm stating isn't really in the literature; it's a refined version of things that have appeared before, and it seems to be the cleaner way of saying it that requires the least amount of background. Tomorrow we'll talk about things like what Jones beta-numbers are, what a rectifiable set is, and what packing content, Hausdorff measure, and Minkowski content are — these are basically the three ways you measure the sizes of sets, and basically all one needs to know for these lectures. Then we'll state the rectifiable Reifenberg in lecture three.
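The snowflake example mentioned above can be sketched concretely. A hedged sketch (the function name and this specific "tent" construction are my choices, not necessarily the exact construction in the notes): each segment is replaced by a two-segment tent whose sides make a small angle theta with it, so on every ball the set stays within roughly theta times the radius of a line, yet the limit curve is roughly self-similar with Hausdorff dimension about log 2 / log(2 cos theta) > 1 — so the bi-Hölder conclusion in the theorem really is the best one can hope for.

```python
import numpy as np

def snowflake(theta, depth):
    """Build a snowflake curve: at every step, replace each segment by a
    two-segment 'tent' whose sides make angle theta with the old segment.
    Returns the vertex array after `depth` iterations."""
    pts = np.array([[0.0, 0.0], [1.0, 0.0]])
    for _ in range(depth):
        out = [pts[0]]
        for a, b in zip(pts[:-1], pts[1:]):
            perp = np.array([-(b - a)[1], (b - a)[0]])  # 90-degree rotation of b - a
            # tent peak: bump the midpoint so each new side makes angle theta
            out.append((a + b) / 2.0 + 0.5 * np.tan(theta) * perp)
            out.append(b)
        pts = np.array(out)
    return pts

# each step multiplies the total length by 1 / cos(theta), so the limit
# curve has infinite length and is not rectifiable as a 1-dimensional set
pts = snowflake(0.3, 8)
```

Each new side has length (l/2)/cos(theta) over a base of length l, which is where the length factor 1/cos(theta) per step comes from; for small theta the growth per step is tiny but compounds at every scale.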
And to make things worse, I thought I had five lectures — not that it matters. In lecture three we'll outline the proof. In particular, we're going to talk about something called neck regions, and we're going to state a neck structure theorem and a neck decomposition theorem. Let me say two words about this. A random measure doesn't satisfy a Reifenberg condition — nowhere in the rectifiable Reifenberg is it assumed that you satisfy a Reifenberg condition. So what happens in practice is that you take your measure and start ripping it apart into pieces; this is the neck decomposition theorem. These pieces are either regions where at least a weak version of the Reifenberg condition holds, or regions where the measure is bounded. So we're able to tear measures apart into nice regions with one of these two behaviors. Then, in the neck structure theorem, we'll basically say that these weakly Reifenberg regions satisfy all the things you want them to satisfy — which is where the proof comes in, in some sense. Let me point out that the original proofs of the rectifiable Reifenberg don't go this way. I'm going this way for the following reason: even the original proofs in the singularity analysis don't use these things, but everything does now. When you look at recent papers on singularity analysis, this idea of a neck region and a neck structure appeared in the proof of the n minus 4 finiteness conjecture and the L2 finiteness conjecture, and it appeared in work on the energy identity conjecture for Yang-Mills. So these are things that actually appear in lots of situations.
So it seemed nice to set up the proof here in a way that introduces you to these objects in a slightly less technical setting, where you get to work with them with a bit more freedom — which is my way of saying that I am going to make some technical statements there; the motivation being that if we're going to state technical things, they should be things you can use elsewhere. And then lecture four: the proofs of these — the proof of the neck structure theorem. So this is what the week will look like. I have the first two lectures online right now, and I'll try to get the last two up, at least as much as we'll need for our talks, by Wednesday for sure. Okay, good. So let's talk more about the classical Reifenberg theorem — I still have it on the board, excellent. Generalizations aside, let's just talk about it for a few minutes, starting with a trivial example, shall we? Let S be a k-dimensional affine subspace intersected with the ball of radius two. Then the hypothesis and the conclusion of the Reifenberg theorem both hold. If you're thinking pictorially, this is completely clear: we have our ball of radius two and our affine subspace L_k, which equals S. What is the bi-Hölder map from S to a ball of radius one in R^k? Well, pick an isometry from L to R^k and just use that — essentially the identity map. So it holds for dumb reasons. Now a slightly less trivial example. If someone doesn't see this, you should be brave and say so, because it'll get worse. It is not a joke that some of the smartest people I know ask the stupidest questions, so no worries — in fact, I think they do it on purpose a lot of the time. Example — which I'm also going to call an exercise; you can harass your TA in the next session to give some details for this. It's like three lines, but it's worth thinking about before just writing down an answer.
Again, L is going to be some k-dimensional subspace. Let's draw a picture — just think of it as R^k inside R^n, whatever. And let F be a mapping from L into L-perp, the perpendicular subspace. So L is k-dimensional, L-perp is (n minus k)-dimensional, and F is just some nice mapping from R^k to R^{n-k}, essentially speaking, whose gradient is bounded by epsilon. Then if S is equal to the graph of F, say on the ball of radius two, whatever, which recall is,