OK. So yesterday I gave a bit of motivation for what I'm doing, and I set up the basic problem, the PDE formulation of it. So I'd like to find, well, given a Riemann surface (M, g₀) with conic singularities at p₁ through p_k with cone angles 2π(1 + β_j). So I'm writing the cone angles in a slightly funny way here. Yesterday I wrote them as 2π α_j, where α_j was any positive parameter, and I'm shifting that a little bit: it just turns out that this number β_j = α_j − 1 is slightly more natural in many formulas. But it's a choice. So start with anything that has conic singularities; it's easy to construct something with conic singularities, you can just paste together the metric in the usual way. Then what I'd like to do is find a conformal factor u, such that g = e^{2u} g₀ has curvature K_g equal to a constant. If u is bounded, then g has the same cone angles. So as long as I find a bounded conformal factor for the previously specified metric with conic singularities, the new metric will have the same cone angles, and I want to make the curvature constant. And remember, what that amounts to is solving Δ_{g₀} u − K_{g₀} + K_g e^{2u} = 0, where K_g is a constant I don't know in advance. So this is the Liouville equation. That's the PDE I want to study. Now, as I said yesterday, one of the mysterious things is that you can write down this PDE, and if you're unfamiliar with this particular area, you think: oh, sure, you solve it, it's an elliptic PDE, how hard can it be? And it turns out that the precise values of these constants are subtle, and if the data is too big, too small, or of the wrong sign, you run into trouble. So it's what's called a critical exponent problem: we're in two dimensions, and the critical power nonlinearity has become an exponential.
So this is the degenerate, two-dimensional case of the Yamabe problem, where this exponential becomes u to a power in higher dimensions. So this is the problem we'd like to study. And I stated two-thirds of the big theorem, which is the easy case, originally done by McOwen in the early 80s: if we're expecting K_g ≤ 0, which corresponds to χ(M, β) ≤ 0, then there exists a solution. So I'll remind you what this χ is in a moment. This solution is unique if we normalize K ≡ −1, which is the case where χ is negative, and it's unique up to scale if K = 0, that is, up to a constant factor. So essentially, we have a unique solution to this problem. OK, so I have several comments here. The first thing is, where does this χ come from? Remember, this χ came from the Euler characteristic calculation: if I have a conic metric and I integrate ∫_M K dA, no matter what metric I choose, it could be this g₀, or the constant curvature one, or any metric with conic singularities, this is going to equal 2π(χ(M) + Σ_{j=1}^k β_j). And these extra contributions came about precisely from integrating around the cone singularity: I applied Gauss-Bonnet on the manifold with boundary, computed the boundary term in Gauss-Bonnet, and then took the limit. Even if the metric is not exactly a flat cone, the contribution, as the circle shrinks down, converges to what it would be for a flat cone. Excuse me, now I have to do a little maneuver and move my board. OK, so let's say I put this here. So I'm going to prove this theorem of McOwen's. Now, one comment, which I didn't say explicitly last time: as in the smooth case, I'm going to first look at the case where χ is identically 0.
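The shrinking-circle limit just described can be checked symbolically on the exact flat cone. Here is a small sketch in Python with sympy (the tooling is my choice, not from the lecture), using the standard fact that for a metric dr² + f(r)² dθ² the Gauss-Bonnet boundary term over the circle r = c is 2π f′(c); the concentrated curvature −2πβ at the tip is exactly what makes the smooth part integrate to 2π(χ + Σ β_j).

```python
import sympy as sp

r, beta, c = sp.symbols('r beta c', positive=True)

# Exact flat cone dr^2 + f(r)^2 dtheta^2 with f(r) = (1 + beta) * r.
f = (1 + beta)*r

# Standard fact (assumed here): for such a metric, the Gauss-Bonnet
# boundary term over the circle r = c is 2*pi*f'(c).
boundary = 2*sp.pi*sp.diff(f, r).subs(r, c)
assert sp.simplify(boundary - 2*sp.pi*(1 + beta)) == 0  # independent of c

# The interior is flat, so Gauss-Bonnet on a small disk around the tip
# forces a concentrated curvature 2*pi*chi(disk) - boundary = -2*pi*beta.
tip_curvature = sp.simplify(2*sp.pi - boundary)
assert sp.simplify(tip_curvature + 2*sp.pi*beta) == 0
```

The boundary term comes out independent of c, which is why the limit as the circle shrinks is so clean.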
Now, the first case, which is χ(M, β) = 0, so we're expecting to get a flat metric. And as we saw last time, this corresponds to a linear equation. Now, I'm actually going to cheat a little bit and reformulate the problem as a perturbation off of a smooth metric rather than off of a conic metric. So I went through all this trouble to tell you that we're going to formulate it as a PDE where the background metric is singular, but now, in this case, I'm going to use a smooth background metric. So I take any smooth metric; let me call it g̃. Now, one of the things about conic singularities, which I mentioned last time but didn't do explicitly, is that I can think of the Gauss curvature, even in this flat case, as being smooth, or bounded, or 0 away from the cone points, but actually having a delta function at each cone point. So let me say that a little more carefully. I take any smooth metric g̃, and I want to find a conformal factor so that e^{2v} g̃ is my solution; it should satisfy the equation Δ_{g̃} v − K_{g̃} = 0. But the problem is that this typically cannot be solved. The reason is that, solving on a smooth background manifold, I'd need ∫ K_{g̃} dA = 0, and typically it won't be: that integral is 2π χ(M), the Euler characteristic, and what vanishes is this plus the additional cone angle contributions. So the point is I want to add some extra terms here. What I'm going to do is look at the modified equation Δ_{g̃} u = K_{g̃} + Σ_{j=1}^k 2π β_j δ_{p_j}. I'm putting a distribution on the right-hand side, a singular right-hand side.
So what that means is I'm taking the curvature on the smooth manifold, whatever it is, and modifying it by imposing these extra delta functions at the points p_j. What's the point of doing this? Well, if I integrate the right-hand side, and of course integrating a distribution just means taking the distributional pairing with the constant function 1, I get 2π χ(M) from the curvature term, and, I guess I need the coefficient 2π β_j on each delta here, the deltas contribute 2π times the sum of the β_j's, since each delta integrates to 1. So the total is 2π(χ(M) + Σ β_j), and that is equal to 0 in this case. OK, so now I want to appeal to solvability for elliptic equations in a slightly broader sense than I've used before. Let me start over here. On a smooth surface, if I take Δ_{g̃} u = f with f in any Sobolev space H^s(M), for any real s, and if ∫ f = 0, which again means the distributional pairing of f with the constant function is 0, then I can solve this: there exists a u, and u ∈ H^{s+2}(M). So this is elliptic solvability and regularity, but I want to apply it with s perhaps minus a million; in fact, all I need is anything less than −1. The reason I need that is that the delta function is not a function but a distribution of negative order. So there's no problem solving this equation; I'm just using elliptic regularity in this slightly extended, well, in the general setting. In other words, if you're dealing with linear analysis, then as long as you're willing to work with distributions, you can do a whole lot of stuff that you might ordinarily worry about, but you shouldn't worry about it. OK, so we can solve that equation, and we get some function u.
It's going to have some distributional order, but we'd like to understand a little better what it looks like. We have a very special right-hand side, and local elliptic regularity tells me that the solution is smooth away from the points p_j. So the question is: what does u look like near the points p_j? Here's the claim. Near p_j, u looks like β_j log|z| plus some function u_j, where z is a local holomorphic coordinate near p_j, and u_j is actually smooth. Why is this true? Well, simply because, excuse me, do I want a 2π? No, I don't need the 2π there; that's the way the normalization goes. (All constants are equal to 1 in my world. I think I've checked this at least once; I didn't check it twice.) The point is that Δ log|z| = 2π δ₀, the delta function at the origin. So if I subtract this off, u minus β_j log|z|, it kills off the delta function, and then the Laplacian of the difference is locally a smooth function; hence u minus that has to be smooth. That's a fairly standard application. Another way to say this is the following. Take the Green's function for the Laplacian, G(p_j, ·). Now, what is the Green's function? It's the integral kernel for the inverse of the Laplacian. But the Laplacian doesn't have an exact inverse; it only has an inverse on functions whose integral is 0. So in fact, if I think of Δ G as an operator, I can make it equal to the identity minus the projection onto the constants: as an operator on, say, L², or on any distribution, I take your distribution and project off the constants.
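The identity Δ log|z| = 2π δ₀ splits into two checkable pieces: log r is harmonic away from the origin, and the flux of its gradient through any circle around the origin is 2π. A quick sympy sketch (the tooling is my choice, not from the lecture):

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
u = sp.log(r)  # log|z| in polar coordinates

# Harmonic away from the origin: radial Laplacian u'' + u'/r annihilates log r.
lap = sp.diff(u, r, 2) + sp.diff(u, r)/r
assert sp.simplify(lap) == 0

# Distributional mass: flux of grad(log r) through the circle of radius r
# is the integral of (d/dr log r) * r dtheta, which equals 2*pi for every r.
flux = sp.integrate(sp.diff(u, r)*r, (theta, 0, 2*sp.pi))
assert sp.simplify(flux - 2*sp.pi) == 0
```

The flux being independent of the radius is exactly what pins down the coefficient 2π on the delta function.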
If I subtract that off, I have something with integral 0, and then I can invert. In other words, there's a nice linear operator that satisfies this equation, and I let G be its integral kernel. So if I take Δ G(p_j, ·), that's going to be the delta function at p_j minus a constant, which is the integral kernel of the projection onto constants, essentially 1 over the area. So now take û = Σ_{j=1}^k β_j G(p_j, ·). This is a well-defined global function that satisfies Δ û equals 2π times the sum of the β_j δ_{p_j}, matching the deltas in the modified equation, plus some constant. And that constant is exactly the one I'd need to subtract off the right-hand side so that I get integral 0. So in other words, my actual solution is going to look like Σ β_j G(p_j, ·) plus some final correction term v, where this v is smooth. That's putting together all of these claims: I have the local regularity claim, which tells me u looks like the right thing locally, but here I have well-defined global functions. And this v solves Δ v = K_{g̃} minus a constant; if you do the arithmetic, which I leave as an exercise, that constant is just the average of K_{g̃}. That's what these things sum up to with the β_j's. OK, why did I go through that so carefully? Just to point out what the natural conformal factor looks like locally: e^{2u} g̃ = e^{2β_j log|z|} e^{2v} g̃, and e^{2β_j log|z|} = |z|^{2β_j}. Well, g̃ is just a smooth metric, so locally it looks like e^{2φ} |dz|².
So namely, I've written my solution as a singular conformal factor times a smooth conformal factor. Locally I've used isothermal parameters: I've combined the conformal factor coming from the |dz|² with the e^{2v}, and I call the product e^{2φ}. So what I have is a nice C^∞ conformal factor multiplying the model metric |z|^{2β_j} |dz|². Now, in two dimensions there's a special thing that happens: this is another form of the flat conic metric. I didn't write this down in the very first lecture, because it's a little deceptive; it's a very two-dimensional phenomenon. If I write z = ρ e^{iθ}, then this metric is ρ^{2β_j}(dρ² + ρ² dθ²). And I'd like to compare this to dr² + (1+β_j)² r² dθ². A priori these look nothing alike, but I claim they're equal if I set r = ρ^{1+β_j}/(1+β_j). That's just a change of variable, a change of the radial function by a power, and it turns this conformal version into that one. So in other words, all this is saying is that in two dimensions, conic metrics are conformally smooth. Now, that's very far from true in higher dimensions, even topologically. It really just reflects the fact that if I have a conformal structure on a punctured disk in two dimensions, then it extends smoothly across the puncture. It's not that the metric extends smoothly, but that the conformal structure extends smoothly. That's a non-obvious fact; it's part of uniformization. So conformal classes cannot have isolated singularities; there are various ways to say it, and it's not hard, but it's not completely trivial either. There's a theorem there. OK, so that's what's behind this.
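The change of radial variable can be verified term by term: substituting r = ρ^{1+β}/(1+β) into dr² + (1+β)² r² dθ² must reproduce the conformal form ρ^{2β}(dρ² + ρ² dθ²). A short sympy check (tool choice mine):

```python
import sympy as sp

rho, beta = sp.symbols('rho beta', positive=True)

# Change of radial variable: r = rho^(1+beta) / (1+beta).
r = rho**(1 + beta)/(1 + beta)
dr_drho = sp.diff(r, rho)

# Coefficient of d rho^2: dr^2 = (dr/drho)^2 drho^2 must equal rho^(2 beta).
assert sp.simplify(dr_drho**2 - rho**(2*beta)) == 0

# Coefficient of d theta^2: (1+beta)^2 r^2 must equal rho^(2 beta) * rho^2.
assert sp.simplify((1 + beta)**2 * r**2 - rho**(2*beta)*rho**2) == 0
```

So both metric coefficients match exactly, for every β > −1, which is the "conic metrics are conformally smooth" statement in coordinates.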
OK, so in any case, what I've done here is I've solved my problem. So what we've done, finally, is found g = e^{2u} times the smooth metric, and g locally looks like e^{2φ} |z|^{2β_j} |dz|² near p_j. So that is a flat metric with conic singularities. OK, so that's the easiest case. And clearly, since I've solved a linear equation, we know exactly what the ambiguity is: adding a constant to the conformal factor u, and the constant just scales the whole metric. So that's the uniqueness I claimed in the theorem. OK, so the next case is slightly more interesting, though still pretty easy from an analytic point of view and very similar to what we did in the smooth case. This was McOwen's observation 30 years ago. So now let's look at the case χ(M, β) < 0. We know that necessarily entails K = −1 after normalizing, so we're trying to solve this equation with K = −1. Now I'm going to assume that g₀ is a conic metric: Δ_{g₀} u − K_{g₀} − e^{2u} = 0. This is the equation we solved in the smooth case yesterday. So how do we do this? Remember the steps. Step one was: let g₁ = e^{2u₀} g₀, which has the property that K_{g₁} is strictly negative. I did that by solving the equation Δ_{g₀} u₀ = K_{g₀} − K̄_{g₀}, where K̄_{g₀} is the average. So I need to solve a linear equation again, but now with respect to a background conic metric. Step two was: let g = e^{2u₁} g₁, and there I have to solve an equation where I now have the advantage that the corresponding zeroth-order term is strictly negative, and so there I used barriers. So now I want to state one black box: how do I solve linear equations on conic manifolds?
So I'm going to leave this as a black box for the moment; I'll fill it in later today or tomorrow. So I'm left with this equation, or let's take more generally (Δ_{g₀} − λ) u = f, because this is what I'll have to look at in step two. How do I solve equations like this, with g₀ conic? In fact, we need two things. One is the analog of the Green's function: namely, I'd like an inverse for this operator, and I'd like to understand some of its properties. So I'd like to know how to prove that something like this is invertible, and secondly, on what spaces it's invertible. I'm very much taking the point of view that I'm not looking at this in terms of a priori estimates, but in terms of an integral operator which inverts it. The second thing is regularity: namely, if I know the regularity of f, say it's in H^s or some appropriate function space, what does u look like? Now, the only interesting issue is what happens right near the conic singularity. Away from there, you're in the smooth setting, and you can just apply the standard local regularity theorems. So the question is, how am I supposed to measure regularity near the conic tip? Now, I want to give you a bit of an answer to part two, because the next thing we need to do is: we've solved this thing, but we don't know very much about what the solution looks like. And remember that in this barrier method, not only did we need to iteratively solve equations of this type, but we also needed to use the maximum principle. And now we're on an incomplete manifold, and the maximum principle looks disastrous, because the maximum might occur at the conic tip, where we can't apply those arguments at all. So how do we make the maximum principle work? That's our question. OK, so to do that, I actually have to tell you a little bit about the regularity.
So I have to tell you what the answer is here; the black box I'll fill in later is how those answers are derived. OK, so I'm going to define function spaces on which I can solve this equation. There are many different types of function spaces you can use, Sobolev or L^p spaces or things like this. I want to use Hölder spaces, a particular type of Hölder space. And the reason I use these is that we're dealing with a nonlinear equation, and you don't have to worry about doing nonlinear things to Hölder functions; they're easily controlled. Whereas if I have L^p functions, you have to worry about Sobolev arithmetic and so on; you can usually do it, but it's a lot messier, and you get into a lot of really extraneous difficulties. So I want to work with Hölder spaces. So let me first recall the definitions; maybe you do these before breakfast, but just in case not. In the ordinary case, C^{0,α} is the space of functions u which are bounded and continuous, and for which the semi-norm [u]_{0,α} = sup_{z ≠ z′} |u(z) − u(z′)| / |z − z′|^α is finite. So this is how we measure regularity. It says not only is u continuous, but there's a quantitative modulus of continuity for how quickly u(z) converges to u(z′) as z goes to z′; it goes at this rate. OK, now, there are many reasons for introducing these, but in the standard non-degenerate elliptic case, one of the nicest motivations is that it's not true that you can solve the Laplacian as a mapping from C² to C⁰, but you can if you look at it as a mapping from C^{2,α} to C^{0,α}. This is a well-defined mapping with a good inverse. So that's very classical stuff.
And the question is, how do I modify this definition in the presence of a conic singularity? OK, so what do I do? Well, I want function spaces on which the Laplacian is going to act nicely, so I actually have to look at what the Laplacian looks like. This is now getting into a theme that I'll be talking about at great length in the next few days. If I take the metric dr² + (1+β)² r² dθ², dropping the subscript j, then the Laplacian looks like ∂²/∂r² + (1/r) ∂/∂r + (1/((1+β)² r²)) ∂²/∂θ². Now, you can rewrite this as (1/r²) [ (r ∂/∂r)² + (1/(1+β)²) ∂²/∂θ² ]. That looks like a trivial rearrangement, it doesn't seem to matter, but it's actually a really important observation. It tells you that there's an underlying scaling going on, and what it does is isolate these vector fields: r ∂/∂r and ∂/∂θ are the fundamental objects. You make use of this in the following sense. You say: let's take these polar coordinates seriously. If I have my conic singularity, then by introducing polar coordinates, what I'm really doing is stretching that point out to a boundary circle: r is this variable, and θ goes around. So in other words, instead of being on a singular manifold, I'm now on a manifold with boundary. But the price we're paying is that I'm differentiating with respect to these vector fields. So the Laplacian on a conic manifold looks like, well, there's this 1/r² factor, which we can just multiply or divide away, so that's not very important; the important thing here is this r ∂/∂r and ∂/∂θ.
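The claim that the two forms of the Laplacian agree can be checked symbolically on an arbitrary function; here is a sympy sketch (tool choice mine):

```python
import sympy as sp

r, theta, beta = sp.symbols('r theta beta', positive=True)
f = sp.Function('f')(r, theta)  # arbitrary function of (r, theta)

# Usual form of the Laplacian of dr^2 + (1+beta)^2 r^2 dtheta^2.
lap1 = (sp.diff(f, r, 2) + sp.diff(f, r)/r
        + sp.diff(f, theta, 2)/((1 + beta)**2 * r**2))

# Rewritten form: (1/r^2) [ (r d/dr)^2 + (1/(1+beta)^2) d^2/dtheta^2 ].
rdr = lambda g: r*sp.diff(g, r)
lap2 = (rdr(rdr(f)) + sp.diff(f, theta, 2)/(1 + beta)**2)/r**2

assert sp.simplify(sp.expand(lap1 - lap2)) == 0
```

The point of the rewrite is that (r ∂/∂r)² = r² ∂²/∂r² + r ∂/∂r absorbs both the second-order term and the first-order 1/r term in one dilation-invariant operator.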
So I can look at operators which are built out of these vector fields, and those are the natural class of conic operators. So I'm going to define V_b to be the span over C^∞ of the vector fields r ∂/∂r and ∂/∂θ. And when I say span, I mean the algebraic span: namely, I take all locally finite sums of linear combinations of products of these vector fields. So that contains any operator of the form Σ a_{jl}(r, θ) (r ∂/∂r)^j ∂_θ^l, with j + l less than or equal to some constant. And this is the natural class of differential operators associated to a conic singularity; the Laplacian, up to this factor 1/r², is one of these. And so the question is: what are all the right objects that you normally use in analysis, associated to this funny class of operators? All we want to do is a sort of categorical construction. With ordinary Laplacians, we have ordinary Hölder spaces and standard mapping properties. You just want to ask: what's the analog of all of these things when we have a different class of vector fields? With this class of vector fields, the claim is we'd like to have new Hölder spaces, new types of pseudodifferential operators, everything modeled on that same type of degeneracy. So there's an answer to all of this, which I'll give you some description of. And I just want to point out that this is part of a much larger theory that basically says: any time you have some sort of geometric singularity, it could be a much more complicated stratified singularity, an edge, or a cone over an edge, or something much more intricate, I can do a similar unfolding with iterated polar coordinates. I get a class of vector fields which respect the iterated polar coordinate system, and I can build operators out of those.
Those turn out to be the natural geometric operators associated to that stratification, and one can build a whole elliptic theory around them. So this will incorporate, if you do it correctly, for example, taking a space with a conic singularity, then taking a cone over that, and then, say, taking a fibration where that cone is the fiber. So I'd have some very intricate singularity where there's an edge here and a second-order edge there. You can build complicated stratified singularities, and this methodology allows you to handle the analysis in all of these. OK, so this isolated conic singularity in two dimensions is really the entry-level version of this. [Question from the audience.] In some sense, quite a bit; the analysis you have to do gets harder. But if you have cusp-type singularities, there's a lot that can be done and has been done. [Question.] Well, no, because if you take complex geometry, you can have arbitrarily complicated singularities. And the way to phrase that question is via resolution: suppose you take an arbitrary algebraic singularity. We know there's a resolution, in the topological, even in the algebraic sense. What you'd like to know is: suppose you take a natural metric on the singular object, like the Fubini-Study metric; how does it get carried along in the resolution? And that's something that has not really been adequately answered in general. In cases where you have conic singularities or edge singularities and so on, we know the answer. [Question.] Sure, you can indeed; that's another way to do things. And of course, in two dimensions, a lot of this stuff simplifies, for the reason I said: it's conformally smooth. That's exactly right. The reason I don't want to do that is that some of the later constructions are really going to use this blow-up machinery.
Yeah, so in some sense, we're looking for an approach that builds in the right estimates. So I haven't told you what the right Hölder spaces are; all I've told you is that we have a shift of emphasis to these types of operators. And now, if I say these are the right types of operators, what are the natural function spaces for them to act on? And the answer is: instead of taking C¹, C², C³, which are just things stable under a couple of derivatives, I take C^l_b to be the space of u's which are continuous, and such that (r ∂/∂r)^j ∂_θ^i u is continuous for i + j ≤ l. So in other words, ordinarily you think of C^l as the set of things that you can differentiate up to l times and they stay continuous; here I want to differentiate with respect to these weird vector fields. That's my shift of perspective: I just do everything with respect to these vector fields, and that gives me the right C^l spaces. What are the right Hölder spaces, the fractional versions of these? You can write those down pretty easily too. So let me do C^{0,α}_b. This b stands for boundary; it's an old moniker, going back to a theory of Melrose, who started this in a slightly different context, and which has now been generalized greatly by quite a number of people. In any case, C^{0,α}_b is the set of functions u that are bounded, and then instead of the difference quotient I wrote before, the Hölder semi-norm is the supremum of |u(r, θ) − u(r′, θ′)| (r + r′)^α / ( |r − r′|^α + (r + r′)^α |θ − θ′|^α ). OK, that looks like a mess. Why did I do all of that?
Well, I've done that because there's a natural dilation acting here: (r, θ) ↦ (λr, θ), dilating the cone. And it's a very simple computation that these vector fields are invariant under dilation: r ∂/∂r and ∂/∂θ are invariant under dilations in r. So I'd expect that whatever this Hölder difference quotient should be, it should also be dilation invariant, and indeed it is: if I just dilate the coordinates, everything cancels out exactly as it should. Now what does this allow, and what does it prevent? The point is that this is some sort of fractional version. So what do these C_b spaces allow? If I take any function, let's say r^γ φ(θ), with γ > 0 and φ a C^∞ function on the circle, this is going to lie in C^l_b for every l. Why is that? Well, if I differentiate with respect to θ, nothing interesting happens. And if I took ∂/∂r, I'd be in trouble, but if I take r ∂/∂r, nothing serious happens: applying r ∂/∂r any number of times, say L times, to r^γ just brings down factors of γ, giving γ^L r^γ. So it doesn't get any worse. So these are the right function spaces, in that they allow fractional powers like this: these wouldn't be smooth in the ordinary sense, but they're smooth in this sense. So here's a theorem, which I'll state a little more carefully later on. Take the Laplacian with respect to a conic metric, as a mapping from C^{2,α}_b to C^{0,α}_b.
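Both claims, that r ∂/∂r only brings down powers of γ, and that the b-Hölder quotient is dilation invariant, are mechanical checks. A sketch in sympy plus a plain numerical spot-check (tooling mine; the sample values 0.3, 0.7, λ = 5, α = 0.5 are arbitrary):

```python
import sympy as sp

r, gamma = sp.symbols('r gamma', positive=True)

# (r d/dr)^j applied to r^gamma equals gamma^j * r^gamma for each j.
expr = r**gamma
for j in range(1, 4):
    expr = r*sp.diff(expr, r)
    assert sp.simplify(expr - gamma**j * r**gamma) == 0

# Dilation invariance of the b-Holder quotient (numerator |u(..)-u(..)|
# aside): check numerically that (r,th) -> (lam*r, th) leaves it fixed.
def quotient(r1, t1, r2, t2, a=0.5):
    return (r1 + r2)**a / (abs(r1 - r2)**a + (r1 + r2)**a * abs(t1 - t2)**a)

lam = 5.0
q1 = quotient(0.3, 1.0, 0.7, 2.0)
q2 = quotient(lam*0.3, 1.0, lam*0.7, 2.0)
assert abs(q1 - q2) < 1e-12
```

The (r + r′)^α in the numerator is precisely what makes the λ^α factors cancel between the two terms of the denominator.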
Well, this turns out to be not quite right, because remember, I still had that factor of 1/r² to worry about; the Laplacian had a 1/r² in front. So in fact, what I actually want to do is multiply the domain by r^γ and the target by r^{γ−2}: I take this Laplacian and let it act between weighted versions of these spaces, Δ : r^γ C^{2,α}_b → r^{γ−2} C^{0,α}_b. When I write r^γ times a space, all I mean is that I take any function in that space and multiply it by r^γ; they're just weighted versions. So if γ is positive, it forces things to decay at some rate; if γ is negative, it allows them to blow up at some rate. OK, so the claim is that this is Fredholm if and only if γ avoids a mysterious set of numbers. The values we're disallowing are the following: l/(1 + β_j), where these β_j are exactly the cone-angle parameters, and l is anything in the integers. This is a discrete set of values on the line, and as long as I don't take a weight which is exactly one of these funny values, I have a nice Fredholm operator. OK, so where does that come from? The answer is that I look at this operator, the Laplacian, acting on r^γ φ(θ), a function that happens to separate variables like that. Well, it's going to look like r^{γ−2} ( γ² + (1/(1+β_j)²) ∂²/∂θ² ) applied to φ. In other words, each r ∂/∂r turns into a factor of γ; applying it twice gives γ²; I have the ∂²/∂θ² with this extra factor; and r^γ comes out, but with the extra factor 1/r². Now, sometimes this is 0.
And in fact, this is 0 precisely when φ is an eigenfunction of this operator. So it's 0 when γ² − l²/(1+β_j)² = 0, the eigenvalues of −∂²/∂θ² being l². And if I rearrange that, I get γ = ±l/(1+β_j), which is exactly the statement: γ can't be one of these values. OK? OK, but why does having a homogeneous solution like this screw up Fredholmness? So notice that the true Laplacian looks like (1/r²)[ (r ∂/∂r)² + (1/(1+β)²) ∂²/∂θ² ], plus a bunch of higher-order terms if the metric is not exactly the flat model. So this is the principal part, and then there may be higher-order terms; that's just a local coordinate calculation. How does this screw up Fredholmness on Hölder spaces? Well, the basic problem, and this may look a little weird, is the following. Suppose I take the right-hand side at exactly one of these bad exponents: I try to solve Δu = r^{l/(1+β_j) − 2} cos(lθ), with cos(lθ) the corresponding eigenfunction. Well, you can solve it; there's no problem. The only issue is that you're going to get a solution which looks like r^{l/(1+β_j)} log r · cos(lθ). In other words, you can solve it, but in the wrong function space: the log costs you. What that actually means is that if I take a sequence of cut-offs of that function, this operator won't have closed range when γ has one of these bad values. So this is one of these funny, weird functional-analytic anomalies: precisely at these weights, you don't have closed range.
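Both halves of this computation can be checked on the model cone Laplacian: r^{l/(1+β)} cos(lθ) is exactly harmonic, and attaching a log r produces a right-hand side at the resonant weight. A sympy sketch with a sample cone parameter β = 1/2 and mode l = 3 (these sample values are my choice):

```python
import sympy as sp

r, th = sp.symbols('r th', positive=True)
beta = sp.Rational(1, 2)            # sample cone parameter (assumption)
l = 3                               # sample Fourier mode
a = sp.Integer(l)/(1 + beta)        # indicial root l/(1+beta) = 2 here

def lap(u):
    # Model conic Laplacian: (1/r^2)[ (r d/dr)^2 + (1/(1+beta)^2) d^2/dth^2 ].
    rdr = lambda g: r*sp.diff(g, r)
    return (rdr(rdr(u)) + sp.diff(u, th, 2)/(1 + beta)**2)/r**2

# r^(l/(1+beta)) cos(l th) is harmonic for the cone metric.
u0 = r**a * sp.cos(l*th)
assert sp.simplify(lap(u0)) == 0

# At the resonant weight, the log solution appears:
# Lap( r^a log r cos(l th) ) = 2a * r^(a-2) * cos(l th).
u1 = r**a * sp.log(r) * sp.cos(l*th)
assert sp.simplify(lap(u1) - 2*a*r**(a - 2)*sp.cos(l*th)) == 0
```

So dividing by 2a, u = r^a log r cos(lθ)/(2a) solves the resonant problem, but it lives outside r^a C^{2,α}_b because of the log, which is the loss of closed range in concrete form.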
So let me jump back to the scalar curvature problem — the Gauss curvature problem. I want to finish up that part of the geometric discussion before the end of the day, and then I'm going to come back to how I prove some of these theorems. But the upshot of what I want to say is the following. There's a lot more to elaborate here, but let me tell you what I obtain finally. So I solve this problem: Laplacian g0 of u minus K g0 minus e to the 2u equals 0. The solution u is actually going to have the following regularity: near any pj, u looks like a constant a0, plus two terms, which I'll write as a11 cosine theta plus a12 sine theta, all times r to the 1 over 1 plus beta j — so there's a funny fractional exponent here — plus an error term, which is of size r plus something living in r squared times C 2, alpha, b. In other words, the solution has something that looks like a Taylor series. In fact, I can continue this to a full expansion, but all I really care about is that there's a constant term, then a term which is not a standard smooth term — it has this fractional exponent — and then a remainder which is lower order for our purposes. OK, so how do I want to use this? Well, it's going to tell me that, up to diddling around with the finite dimensional cokernels and so on, I can solve each of those problems that I advertised, and it tells me what the solution to the nonlinear problem finally looks like. So now I want to address the question I raised at the end of that list: how do I deal with the maximum principle? Remember, I needed to find sub- and supersolutions, u lower bar less than or equal to u1, less than or equal to u2, and so on up to u upper bar — I needed to prove this chain of inequalities. What that corresponded to was looking at Laplacian minus lambda of u equals f.
I need to solve that repeatedly, where let's say f is less than or equal to 0. So I'm looking at a function which is superharmonic for this operator — or, doing it the other way around, subharmonic. And I'd like to say that such a superharmonic function does not attain a negative minimum. So what's the disastrous scenario? Here's my surface with the conic singularities, and here's a point, p1, say. Suppose the infimum of u occurs at this point p1. Then I can't apply the maximum principle: maybe u has a negative value there, and the arguments I gave just don't work. So what do I do? I use this regularity theorem, which says that whatever u is doing near there, it's not changing very rapidly — I have a very good estimate on how quickly it's changing. So I use local coordinates, and near p1, u is bounded by its constant value plus c times r to the 1 over 1 plus beta 1. That's what the estimate tells me. Now, on the other hand, what I'm going to do is look at u minus epsilon times r to the gamma, where gamma is between 0 and 1 over 1 plus beta 1. So I'm taking this function and modifying it a little bit. Now, it's not going to satisfy this equation anymore; it's going to satisfy a different one. OK? So what equation does it satisfy? If I take Laplacian minus lambda of u minus epsilon r to the gamma, I get this f, which came from Laplacian minus lambda of u, and then two terms coming from the Laplacian applied to the new piece and lambda applied to the new piece. The Laplacian term is going to be minus epsilon gamma squared times r to the gamma minus 2. There's no d theta dependence here; this is just a local computation in local coordinates.
So there's a minus epsilon gamma squared r to the gamma minus 2, and then a plus epsilon lambda r to the gamma. OK? So what does that do? Well, the point is we knew f was less than or equal to 0, and this new first term is extremely negative: gamma minus 2 is negative, so r to the gamma minus 2 blows up as r goes to 0. The last term is positive, which looks like a bad sign, but of course it's much lower order than the blowing-up term, so near the tip the whole right-hand side is still negative. In other words, what I've done is arrange that in some small neighborhood, the modified function is still superharmonic, and its minimum cannot occur at the tip — that's this estimate. If I look at u minus epsilon r to the gamma, that's less than or equal to the constant plus c times r to the 1 over 1 plus beta 1, minus epsilon times r to the gamma; and since gamma is less than 1 over 1 plus beta 1, the epsilon r to the gamma term wins out at small scales. So the minimum occurs away from the tip, still in a region where the function is superharmonic. I've shifted the minimum to a smooth point, and I can make the argument work. OK, so the crucial thing here was that I have a precise rate of decay at the conic singularity. It seems like I had to invoke a big theorem to do this — and you can get away with less — but this is the right sort of statement to work with: you need some regularity as you approach the point in order to make the maximum principle work. OK, that's exactly right — a priori it would not. Right, because you blew the point up to a circle. So that's a good point: when I'm looking at these function spaces, a priori, here's r equals 0, and I could have a function that takes on different values at r equals 0, which means it's not even continuous at the conic tip. A priori, those functions might be allowed in this business. So it's part of this regularity theorem that the a0 is a constant. Yeah, good point — I certainly should have pointed that out.
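The two small inequalities driving this barrier argument can be checked directly. Here is a sketch with my own sample numbers (beta = -1/2, so the regularity exponent 1/(1+beta) is 2, and gamma = 0.4; none of these specific values come from the lecture).

```python
# Barrier check: (Laplacian - lam) applied to -eps*r^gamma is negative near
# r = 0, and eps*r^gamma dominates C*r^(1/(1+beta)) at small scales when
# 0 < gamma < 1/(1+beta).
import sympy as sp

r, eps, lam, gamma = sp.symbols('r eps lam gamma', positive=True)

# flat-cone Laplacian of the radial barrier (no theta dependence)
barrier = -eps * r**gamma
extra = sp.expand(r * sp.diff(r * sp.diff(barrier, r), r) / r**2 - lam * barrier)
# extra = -eps*gamma^2*r^(gamma-2) + eps*lam*r^gamma

# near r = 0 the negative r^(gamma-2) term dominates the positive one:
f = sp.lambdify(r, extra.subs({eps: 0.01, lam: 1.0, gamma: 0.4}), 'math')
small_r_signs = [f(10.0**(-k)) < 0 for k in range(1, 7)]

# and C*r^(1/(1+beta)) - eps*r^gamma < 0 at small scales (beta = -1/2):
C, expo, g, e = 1.0, 2.0, 0.4, 0.5      # expo = 1/(1+beta), g = gamma
dominated = [C * s**expo - e * s**g < 0 for s in (0.1, 0.01, 0.001)]
```

So the perturbed function stays superharmonic near the tip while its value there is pushed above the nearby values, which is exactly how the minimum gets moved to a smooth point.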
In the case where I have a rational angle — say 2 pi over k, something like that — then I could have lifted to the k-fold cover, applied standard theory, and then descended. That's exactly right, and it's an important observation; thanks for reminding me. OK, so now I've solved the big problem in the k less than or equal to 0 case, and the solution was not particularly hard. McOwen's paper was three pages long; he did it differently, using the conformal method that Yaneer had mentioned before, but it's not very hard to do in this case. So before the end of today I want to talk a little bit about the positive case, and to show you where the geometric difficulties arise. I've hinted at some of the analytic difficulties — especially relevant if I wanted to generalize to more complicated situations or higher dimensions — but there are interesting geometric difficulties that you don't see until you get to the positive case. OK, so now let's look at the case chi of m, beta strictly positive. So I'm looking at a case with conic singularities where this combination is positive, and it's good to understand the model case. The model case is a football — or, as I say, an American football. It's a warped product over a circle, and I can write down the metric exactly: dr squared plus 1 plus beta, quantity squared, times sine squared r, d theta squared. At r equals 0 and at r equals pi it collapses — those are the two conic tips. The central circle at r equals pi over 2, which is a geodesic, has length 2 pi times 1 plus beta. So this is a spherical suspension, and it's the standard, easiest constant positive curvature conic space. We're going to see this a lot in the moduli theory later on; it turns out to be one of the fundamental bubbles. OK, so there's a model space.
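For the football one can verify the claims with the standard warped-product curvature formula K = -f''(r)/f(r) for a metric dr^2 + f(r)^2 dtheta^2. A quick symbolic check (my own verification, with f = (1+beta) sin r as on the board):

```python
# Football metric dr^2 + (1+beta)^2 sin^2(r) dtheta^2 on r in (0, pi):
# curvature +1, central geodesic of length 2*pi*(1+beta), cone angle
# 2*pi*(1+beta) at each tip.
import sympy as sp

r, beta = sp.symbols('r beta', positive=True)
f = (1 + beta) * sp.sin(r)

# Gauss curvature of a warped product dr^2 + f(r)^2 dtheta^2 is -f''/f
K = sp.simplify(-sp.diff(f, r, 2) / f)          # should be 1

# length of the central circle r = pi/2
length_central = sp.simplify(2 * sp.pi * f.subs(r, sp.pi / 2))

# cone angle at the tips: f(r) ~ (1+beta)*r as r -> 0, so the angle is
# 2*pi*(1+beta); the slope of f at 0 reads it off
tip_slope = sp.limit(f / r, r, 0)               # should be 1 + beta
```

The tip computation also shows why both ends are conic: the metric looks like dr^2 + (1+beta)^2 r^2 dtheta^2 near r = 0 (and symmetrically near r = pi).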
Now I'm going to define a certain map — I'll call it a developing map, though it's not quite the usual developing map. I'm going to assume for the moment that all the beta j's are less than 0. Remember, what that means is that 2 pi times 1 plus beta j is less than 2 pi: all of our conic singularities look exactly like the type I draw on the board, namely they have cone angle less than 2 pi, so I can embed them in R3 locally. OK, now something I could have said earlier, which goes along with the fact that the curvature of a conic metric looks like a delta singularity: I can smooth these guys out. If I take such a conic singularity, I can smooth out the tip a little, so there's a family of metrics g sub epsilon which converge to g0, with each g sub epsilon smooth. In other words, I might write down dr squared plus r squared times f sub epsilon of r, squared, d theta squared. This would be a good model, where f sub epsilon of 0 is equal to 1 for epsilon positive, and f sub 0 of 0 is equal to 1 plus beta j. You can find a smooth family of functions doing that, and I can calculate the curvature; it's not very hard to see that you can arrange such a family so the metrics stay non-negatively curved, or positively curved — the picture kind of makes this obvious. Anyway, I can find a positive curvature smoothing of these types of singularities precisely when the cone angles are less than 2 pi. Now here's a rather important consequence of this. Here's an important proposition: if all the cone angles are less than 2 pi, then M with the metric g0 is a geodesic length space, in the sense of Alexandrov. In particular, what this is going to mean is that between any two points there's a minimizing geodesic, and moreover, the geodesic never passes through a conic singularity in its interior. Let me say that more specifically — that's a little bit more than the definition.
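Here is one concrete choice of smoothing family (my own example, not necessarily the one intended on the board): take the warping function phi_eps(r) = alpha r + (1 - alpha) eps (1 - e^(-r/eps)) with alpha = 1 + beta, i.e. phi_eps = r f_eps(r) in the board's notation, in the metric dr^2 + phi_eps(r)^2 dtheta^2. It has slope 1 at the tip (so the smoothed metric is smooth there), converges to the cone alpha r as eps goes to 0, stays positively curved since phi_eps is concave, and its total cap curvature is 2 pi (1 - alpha) — the delta mass at the conic point.

```python
# A positively curved smoothing of the flat cone dr^2 + alpha^2 r^2 dtheta^2
import sympy as sp

r, eps = sp.symbols('r eps', positive=True)
alpha = sp.Rational(7, 10)          # sample value: cone angle 2*pi*alpha < 2*pi

phi = alpha * r + (1 - alpha) * eps * (1 - sp.exp(-r / eps))

# slope 1 at the tip => cone angle 2*pi there, i.e. a smooth point
tip_slope = sp.limit(sp.diff(phi, r), r, 0)

# Gauss curvature K = -phi''/phi is positive since phi'' < 0, phi > 0
K = sp.simplify(-sp.diff(phi, r, 2) / phi)

# total curvature of the cap: integral K dA = 2*pi*(phi'(0) - phi'(infinity))
total = 2 * sp.pi * (sp.limit(sp.diff(phi, r), r, 0)
                     - sp.limit(sp.diff(phi, r), r, sp.oo))
# total = 2*pi*(1 - alpha): the curvature concentrating at the tip as eps -> 0
```

As eps goes to 0 the curvature concentrates near r = 0 while its total stays fixed at 2 pi (1 - alpha), which is exactly the delta-singularity picture of the conic curvature.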
So I want to say, in particular, that no minimizing geodesic contains a conic point in its interior. One way to prove all of this is to use these smoothings. Each smoothed surface is a compact smooth manifold, so I can clearly find minimizing geodesics, and I just want to show that those geodesics stabilize. The important thing is that any minimizing geodesic which, for epsilon positive, goes near this sharp corner has to move away from there as epsilon goes to zero. And the proof is really just a comparison: if I take a flat cone and two points I want to connect, there are two candidate geodesics. I can think of a geodesic through the tip as a geodesic on the smooth locus that hits the cone point and goes off in some direction — that's one path connecting the points. But there's another geodesic which stays away from the tip, and that one is always shorter. If I know this picture is true in the flat case, it's very easy to generalize and show that sequences of minimizing geodesics on the smoothings stay a bounded distance away from the tip, so they don't converge to something passing through it. OK, so what does that actually mean geometrically? What's the implication? I'm going to define a map. I take M, remove a certain singular locus S, and define a map onto the football. Here's the map. Here are the conic singularities; maybe there are even some handles in here — it could be a higher genus surface. I take my favorite cone point, say pj, with cone angle 2 pi times 1 plus beta j. The football I'm going to call C of beta j, with curvature plus 1 — that's the football with cone angle 2 pi times 1 plus beta j and curvature plus 1. OK, so what I'm going to do is define — and here I did not say this properly.
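The flat-cone comparison can be made completely explicit: develop the cone dr^2 + alpha^2 r^2 dtheta^2 (alpha = 1 + beta < 1) into the plane, so the path through the tip has length r1 + r2, while the geodesic avoiding the tip has length given by the law of cosines with developed angle psi = alpha times the angular separation, valid whenever psi < pi — which always holds here since the separation is at most pi and alpha < 1. A numeric sketch with made-up sample points:

```python
# On a flat cone of angle 2*pi*alpha with alpha < 1, the geodesic avoiding
# the tip is strictly shorter than the broken path through the tip.
import math

def through_tip(r1, r2):
    # path that goes in to the cone point and back out
    return r1 + r2

def around_tip(r1, r2, dtheta, alpha):
    # develop the cone into the plane; psi is the developed angle
    psi = alpha * dtheta
    assert psi < math.pi, "comparison valid when the developed angle is < pi"
    return math.sqrt(r1**2 + r2**2 - 2 * r1 * r2 * math.cos(psi))

alpha = 0.7                              # cone angle 2*pi*0.7 < 2*pi
for dtheta in (0.5, 1.5, math.pi):       # up to antipodal angular separation
    assert around_tip(1.0, 2.0, dtheta, alpha) < through_tip(1.0, 2.0)
```

Since cos(psi) > -1 for psi < pi, the law-of-cosines length is strictly below r1 + r2, which is the inequality the smoothing argument inherits.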
I'm assuming that I have a metric of constant positive curvature, so K g0 is identically equal to plus 1. First of all, near a conic point there is an absolutely standard model for what this has to be: there are geodesic polar coordinates at the conic tip, and they look standard. Namely, any isolated conic singularity has a neighborhood isometric to a neighborhood of the tip of the football — this is the local model. That's really the same theorem as the corresponding one in the smooth case, that in a constant curvature space any small neighborhood looks like a ball in the sphere; it's a pretty easy theorem to prove. So I start off mapping this onto the football, and this neighborhood gets identified with that neighborhood. Now I choose any other point here, and I find the minimizing geodesic to it. Either that point is a conic point or it's not. If it's not a conic point, then the geodesic does not contain a conic point anywhere except this top point. Now, if you apply the standard comparison theorem, you see that I would reach a conjugate point if this geodesic had length greater than pi — namely, Jacobi fields which start off at 0 vanish again at distance pi, since the curvature is plus 1. In other words, this geodesic, because it's minimizing, has to have length less than or equal to pi. So I start laying it down on the football, and I get something of length less than pi, so it doesn't reach the opposite tip. I do that for every point, and what I'm going to get is some region with a polygonal boundary; and back in the manifold there's some sort of cut locus. This S is really like a cut locus: it's where minimizing geodesics meet on the other side. Some of the points in the cut locus will possibly be conic points, but not necessarily, and some of them won't be.
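The conjugate point fact used here is just the Jacobi equation J'' + K J = 0 with K = 1. A one-line symbolic check (my own restatement of the standard fact):

```python
# Jacobi field along a unit-speed geodesic in curvature +1:
# J'' + J = 0, J(0) = 0, J'(0) = 1  =>  J(t) = sin(t),
# which vanishes again at t = pi: first conjugate point at distance pi.
import sympy as sp

t = sp.symbols('t')
J = sp.Function('J')
sol = sp.dsolve(sp.Eq(J(t).diff(t, 2) + J(t), 0), J(t),
                ics={J(0): 0, J(t).diff(t).subs(t, 0): 1})
# sol.rhs is sin(t); its next zero is at t = pi
```

So a minimizing geodesic in a curvature +1 surface has length at most pi, which is why the developed geodesics never reach the opposite tip of the football.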
So what I have here is a polygonal region on the football. That this lower boundary consists of piecewise geodesics is a theorem — you have to prove it. And how do I recover the surface from this region? Well, just by side-pairing transformations of the polygon. In other words, I can think of any conic surface with curvature plus 1 and cone angles less than 2 pi as being obtained by side-pairing transformations from some polygonal region inside the football, even if it's higher genus. Yeah — is the cut locus a tree? Yes, it will be in general. Things can collapse over here, of course, so it's as complicated as in the smooth case, with some extra possibilities. But the main point is that you don't get any really new phenomena; it's just the same as in the smooth case, OK? OK, so now I'm going to do a computation, and I'll pretty much finish with that. Let me get it right. I'm going to call the image D of pj, beta j, and this is a Dirichlet polygon — it's really the same construction as in other constant curvature spaces, the Dirichlet polygon associated to my surface. OK, and what I want to do is compute the area of the football C of beta j — the one with curvature plus 1 — minus the area of D of pj, beta j. I want to compute that area, just the piece that's left over at the bottom. Well, the curvature of both pieces is plus 1, so I can write each area as the integral of K dA and apply the Gauss-Bonnet theorem. When I do that, for the football I get 4 pi times 1 plus beta j — that's what Gauss-Bonnet gives for the surface — and then minus 2 pi times, so I'm going to get chi; and let me just stick to the case where the surface is S2, the other case I'll come back to in a moment: chi of S2 plus the sum of the cone angle parameters beta i.
So this is the difference of areas — and I have to close one more parenthesis. This is just applying Gauss-Bonnet to both pieces, and here I have to do a somewhat fancy calculation to see that some of the boundary angles add up to 2 pi and some add up to less than 2 pi; those are the defects. This difference is greater than 0, because I knew the Dirichlet polygon fits inside the football. So what does that tell me? It says that 2 plus 2 beta j is greater than 2 plus the sum of the beta i's — yeah, that's what I want. The 2's cancel, a beta j comes off both sides, and what I get is beta j greater than the sum over i not equal to j of the beta i. And this has to be true for all j. So there's this weird angle constraint, and it's a necessary condition: if there's a constant curvature metric with cone angles 2 pi times 1 plus beta 1 through 2 pi times 1 plus beta k, all of the cone angles less than 2 pi, then you have this condition. This condition was discovered by Marc Troyanov. And it's a real constraint — different from the triangle inequality constraint, just some new constraint — and it's a necessary condition for the existence of spherical cone metrics. What I'll prove at the beginning of class next time, or at least indicate the proof of, since it takes some work, is that this is also sufficient. So this condition, when all the cone angles are less than 2 pi, is a necessary and sufficient condition. And furthermore, here's the culminating theorem, which I'll state right now: if all the cone angles are less than 2 pi, then there exists a spherical cone metric if and only if this Troyanov condition holds, and the solution is unique. The existence was done by Troyanov in the late 80s, and the uniqueness by Luo and Tian, using ideas from Kähler geometry.
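The inequality is easy to put into a little checker. A sketch with my own helper names: betas in (-1, 0) corresponds to cone angles in (0, 2 pi), and on S^2 the Gauss-Bonnet defect computed above equals 2 pi times (beta_j minus the sum of the other beta_i's).

```python
# Troyanov condition on S^2: beta_j > sum_{i != j} beta_i for every j.
import math

def troyanov_holds(betas):
    assert all(-1 < b < 0 for b in betas), "cone angles must lie in (0, 2*pi)"
    total = sum(betas)
    return all(b > total - b for b in betas)

def area_defect(betas, j):
    # Area(football C_{beta_j}) - Area(Dirichlet polygon), via Gauss-Bonnet:
    # 4*pi*(1 + beta_j) - 2*pi*(chi(S^2) + sum_i beta_i), chi(S^2) = 2
    return 4 * math.pi * (1 + betas[j]) - 2 * math.pi * (2 + sum(betas))

good = [-0.1, -0.1, -0.1]      # three equal mild cone angles: condition holds
bad = [-0.9, -0.1, -0.1]       # one very sharp cone: condition fails
```

Note that area_defect(betas, j) is exactly 2 pi (beta_j - sum of the others), so positivity of the leftover area and the angle inequality are the same statement.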
So this is a very different sort of theorem than I talked about before: you have non-trivial constraints, and the method of proof is essentially calculus of variations, a point of view I'll describe briefly. And it tells you that something weird is going on. A big question I want to discuss in the next three days is what happens when the cone angles are bigger than 2 pi — and to short-circuit ahead to the answer, there are much more complicated conditions, where you really don't yet understand the uniqueness, and the full existence question is not known. So it's really understanding what this condition means, both geometrically and analytically, that is going to occupy us. And how is it supposed to generalize when the cone angles are big? That's a good question. Here are two types of things that happen. You can think of spherical cone surfaces with three cone points, for example, as doubles of spherical triangles: if you have three cone points, the surface always looks like a spherical triangle doubled across its edges. That's a standard theorem — you can put the three points on the equator, and so on. OK, so let's draw some nearly degenerate spherical triangles. One degeneration: I take a triangle with a very small cone angle — it could have any size, reaching toward the equator or wherever — and let that beta go to 0; then you get very near to equality in the inequality. But the other degeneration, which is actually more indicative of the true situation, is that I take a triangle with any opening whatsoever which stretches from the north pole almost to the south pole — so you're almost at the football. In fact, here's something I'll show later: if I have three or more conic points with cone angles less than 2 pi, and I take any sequence of such surfaces satisfying the inequality, but with the inequality getting closer and closer to equality, then the surfaces are geometrically converging to a football.
So all but two of the cone points are somehow collecting on one side or the other. If you think about this — if you play around with it a little — there's a single bad guy here: the smallest of the beta j's. That's going to be the problem child. OK, anything else? Yeah, so there's also an analytic reason for this inequality. When Troyanov proved the existence theorem, he actually derived these inequalities from a purely analytic point of view, and I'll describe that tomorrow at the beginning of the lecture. It comes from sharp constants in the Moser-Trudinger inequality — a sort of generalized Moser-Trudinger inequality. So it's kind of remarkable: there's this beautiful geometric proof of why the condition is necessary, but it turns out to fit very closely with the analysis.