I want to continue, wrap up, and point toward a further direction. In the past day and a half I've been giving more of a survey, staying very light on details and, in some sense, avoiding the analysis more than is my wont. So now I'll get back to the analysis. I want to give you a bit more of a feel for what goes on under the hood, and for some of the questions I posed at the end of the last lecture.

First of all, let me review a little of what I said about parametrices. We are interested in the Laplacian for a conic metric $g$. As I've described several times, in polar coordinates the Laplacian has a singular form: it looks like $r^{-2}\big((r\partial_r)^2 + (1+\beta)^{-2}\partial_\theta^2\big)$ plus higher order terms. What I was describing was a set of methods for understanding singular operators of this type. These methods have been generalized in many different directions; this is really the very simplest version of understanding a singular operator. You'll see elsewhere in the literature, if you're interested in these sorts of things, people analyzing operators on cones and cylinders and so on. There are many different approaches, all doing roughly the same sorts of things; I'll point out one specific difference. But I think the basic philosophy is that, when you're looking at things like this, any method you try works, because cones and cylinders are so simple. If you look at slightly more complicated things, like edges, then the methods you use really make a huge difference. So I'm emphasizing methods that have great potential for generalization and have in fact been generalized.

As I talked about, the idea is that ideally we'd like to find an inverse to this operator. That is a functional analytic statement: for example, I'd like to say that the Laplacian is invertible as a map between two Sobolev spaces, or invertible up to a finite dimensional kernel and cokernel, or something like that. Once you know this, you can solve the equation: if I'm trying to solve $\Delta u = f$, then $u = Gf$, just abstractly, as an operator. If you know fine mapping properties of $G$, that gives you elliptic estimates: if $G$ increases regularity by two on $L^2$ or on $H^s$, then $u \in H^{s+2}$ whenever $f \in H^s$. All your familiar properties. Now, philosophically, this is doing exactly the same thing as a priori estimates do. It's just one of those things: either you're brought up in the world of parametrices, or you're brought up in the world of a priori estimates, or you do neither, but that's the worst course of action. So anyway, parametrices are good.

Now, what's better about them is to think of these not as functional analytic objects but as real things, namely as distributions (not to say the operators aren't real things). So the big point of view I espoused was: think of $G$ as a distribution on $M \times M$. There's a general theorem that goes back to the 50s, the Schwartz kernel theorem, which says any linear operator can be thought of as a distribution on $M \times M$. And it's one of those completely useless theorems, because it substitutes one black box for another.
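To pin down notation, here is the model computation behind that singular form, as a sketch; the normalization (angular variable $\theta$ of period $2\pi$, with the cone angle $2\pi(1+\beta)$ carried by the metric coefficient) is my assumption about the convention in use.

```latex
% Model cone of angle 2\pi(1+\beta):
%   g = dr^2 + (1+\beta)^2 r^2\, d\theta^2, \qquad r > 0,\ \theta \in [0, 2\pi).
% Its Laplacian, written in the singular form above:
\Delta_g \;=\; \partial_r^2 + \frac{1}{r}\,\partial_r + \frac{1}{(1+\beta)^2 r^2}\,\partial_\theta^2
\;=\; r^{-2}\Big( (r\partial_r)^2 + (1+\beta)^{-2}\,\partial_\theta^2 \Big).
% For an exact cone there are no further terms; a general conic metric
% contributes the "higher order terms" in r.
```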
However, the point is the following; I drew this picture the other day. If $M$ is a closed manifold, $M \times M$ is some product space. If $G$ is something interesting, like the inverse of an elliptic operator, then it has a very precise structure. So we think of $G$ not as an arbitrary distribution but as a geometric distribution, by which I mean that its singular set is very well controlled. In the case of inverses of elliptic operators on closed manifolds, here's the diagonal, and there's a general theorem, with the great example I wrote down before, the Newtonian potential. So two examples: one is the delta function on the diagonal, and the other is the Newtonian potential, $|x - \tilde x|^{2-n}$. You can write down many similar examples, and you can invent a general class of operators, called pseudodifferential operators, which are smooth away from the diagonal and have very precise singularities along the diagonal that generalize these examples. In general, a pseudodifferential operator has some sort of asymptotic series as you approach the diagonal; it may be very singular, it may even be supported on the diagonal, but you understand those singularities very well. Okay, so that's how you should do elliptic theory on closed manifolds. Obviously this has some drawbacks, but it's a very powerful point of view.

Now, if you're on a conic manifold, the point is that you can develop an analog of this story. So I drew this picture yesterday, where I think of $M \times M$ as a manifold with corners. Remember, my point of view was that I'd opened up the conic tip, so I think of the boundary of $M$ as a collection of circles: here's the conic manifold, here are the cone points, and I blew up each cone point, so for every cone point I now have a circle. So I have a manifold with boundary. I take the product with itself, and I get a manifold with corners of codimension two; that just means it's locally modeled, in the interior, on $\mathbb{R}^4$, near the boundary on a half space, and near the corner on a quadrant. In here there's a natural submanifold, the diagonal, and this operator, the inverse of the conic Laplacian, should be something with this pseudodifferential singularity along the diagonal. The big question was what it does near the new bits, the boundary faces and the corner.

And the answer, which I'll tell you, is the following: you blow up the corner. This is one way of understanding the answer; there are others, but this is somehow the most natural. So what I'm doing, and it's a funny picture, is that for every point in the corner I look at the set of inward-pointing unit normal vectors. All I'm doing is encoding all the possible directions of approach to the corner: each point on the corner corresponds to a whole quarter circle, namely all the possible angles of approach. So all I'm doing is taking polar coordinates seriously, and adding a face at $\rho = 0$ here. I lift this picture, and the diagonal, to this new space. It's just a slightly more complicated manifold in this particular case, and it has several advantages.
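In coordinates, the blow-up is just this polar-coordinates substitution; a minimal sketch, where calling the two boundary defining functions $x$ and $\tilde x$ is my choice of notation.

```latex
% Near the corner, let x, \tilde{x} \ge 0 be defining functions for the two
% boundary faces. Blowing up the corner \{x = \tilde{x} = 0\} means passing to
\rho = \sqrt{x^2 + \tilde{x}^2} \;\ge\; 0, \qquad
\omega = \frac{(x,\tilde{x})}{\rho} \;\in\; \{\omega \in \mathbb{S}^1 : \omega_1, \omega_2 \ge 0\}.
% Each corner point is replaced by a quarter circle of directions of approach,
% and \{\rho = 0\} becomes the new front face.
```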
One advantage is that the diagonal now intersects the boundary transversely. It doesn't do that down here, and that's a huge advantage, technically. And then there's the theorem, which goes back to many people; I want to attribute it properly, because this type of thing was studied starting in the 60s by Kondratiev and his school in Russia, and, as I said, by many people since for the conic setting. He obviously didn't use this language. The theorem at the end of the day is the following: $G$ is polyhomogeneous, and I'll describe what that means, on this space, the blown-up double space $M^2_b$. So this is $M^2$, and this is its blow-up, with, let me just say briefly, a $\Psi$DO singularity at the lifted diagonal.

So let me decode and parse this statement. What is a polyhomogeneous distribution? If I have a manifold with boundary, let's use boundary coordinates $x$ and $y$; this is guaranteed to drive the complex analysts up the wall, because here $x$ is real, $x \ge 0$, equal to zero on the boundary. I say a function $u(x,y)$ is polyhomogeneous if two things are true. The first is that $u$ lies in some fixed space; I don't care which, let's say $L^\infty$ for example, but it could be a weighted $L^\infty$ space, a weighted $L^2$ space, a Besov space, whatever. And every one of its derivatives with respect to these b-vector fields, $(x\partial_x)^j \partial_y^i u$, lies in exactly the same space. That's a regularity statement, and it's a strong one in the interior: it says $u$ is smooth there. But as you approach the boundary you're losing control of normal derivatives in a certain sense; every normal derivative gets accompanied by a power of $x$. The technical word for this is that $u$ is conormal. If you're a microlocal analyst, what that really means is that its wave front set as a distribution lies in the conormal bundle of the boundary, but that doesn't matter if you don't know the terminology.

The second thing is that, not only is $u$ conormal, but $u$ has an expansion. Excuse me, is there a question? So $u$ looks like a sum of terms $x^{\gamma_j}(\log x)^{\ell}\, a_{j\ell}(y)$, where $j$ goes from zero to infinity and $\ell$ goes from zero to $N_j$. What I'm saying is that this is something like a Taylor series expansion at the boundary: the $a_{j\ell}$ are smooth functions, and you really want to think of this as a generalized Taylor series. The difference is that you may have weird exponents; there's no reason the $\gamma_j$ should be non-negative integers, they can be anything. And sometimes they're accompanied by powers of $\log x$, where the $\ell$'s are non-negative integers, finitely many for each $\gamma_j$. All I require is that the $\gamma_j$ tend to plus infinity; in fact the $\gamma_j$ could even be complex, in which case all I care about is that their real parts tend to plus infinity.

Okay, so that may seem like a bizarre class of functions, and I could stand here all day making arguments that this is the most natural class on a manifold with boundary. On a manifold with boundary you should not be thinking about smooth functions, you should be thinking about polyhomogeneous functions: these are the smooth functions of this setting.
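Written out, the two conditions are these; a sketch, with a weighted $L^\infty$ space standing in for whichever fixed reference space you prefer.

```latex
% Conormality: for the fixed reference space (say a weighted L^\infty),
(x\partial_x)^{j}\,\partial_y^{\alpha}\, u \;\in\; x^{\gamma_0} L^{\infty}
\qquad \text{for all } j \text{ and } \alpha.
% Polyhomogeneity: in addition, u has a generalized Taylor expansion at x = 0,
u \;\sim\; \sum_{j=0}^{\infty}\; \sum_{\ell=0}^{N_j} a_{j\ell}(y)\; x^{\gamma_j}\,(\log x)^{\ell},
\qquad a_{j\ell} \in C^{\infty}, \quad \operatorname{Re}\gamma_j \to +\infty.
```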
You just have to change your frame of reference. So to say that $G$ is polyhomogeneous means that it has an expansion like that at each of these boundary faces, and then it has this extra funny singularity, which we knew about already, along the diagonal.

So what do the expansions look like in this particular case? Let me draw a simplified picture. As I come to this front face (there's the diagonal, I'm not going to worry about it too much), let the boundary defining function be $\rho$, a function which vanishes on that front face; it's like the polar radial variable there. Then $G$ is going to look like a series $\sum A_j\, \rho^j$, but with $j$ starting from two. Why start from two? In this case it's precisely because the Laplacian has this $1/r^2$ in front; that accounts for the shift. But what this is saying is that $G$ is essentially smooth up to this face; for that I didn't need to invent all this fancy language.

What about up to this side face? Here I was using $x$'s and $y$'s, but let me switch back to $r$'s and $\theta$'s. Up to this face, $G$ looks like $\sum g_\ell\, r^{\ell/(1+\beta)}$. I'm near a cone point with cone angle $2\pi(1+\beta)$; I've dropped the subscript $j$ on $\beta$. So I have a specific cone angle, and these are the precise exponents that arise. And what happens at the corner where these faces meet? It's like what happens at a corner for smooth functions: you have a Taylor series at one face, a Taylor series at the other, and a joint product-type series at the corner. So the point is that nothing very exciting happens. The whole moral of this theorem is that once you've thought of it in the right way, once you've done this blow-up, you have something which really looks like a smooth object. And there's a similar type of expansion at the other side face. So you can see that the exponents at the front face are kind of universal, they just have to do with the dimension, but at the side faces you really see the geometry, you see the cone angles.

There's a very similar theorem for conic spaces in higher dimensions; there's nothing special about the cross section being a circle or this being two dimensional. And as I said, there are analogs of this type of theorem for more complicated singular sets: edges, stratified spaces, and so on. There are a variety of expositions, and I wish I could say that the book I've been writing on this is done, but I can't say that. I have a bunch of technical papers on this sort of thing, and there are some survey papers describing bits and pieces, so it's not poorly exposed, but it's technically exposed, let's put it that way.

Okay, so what does this buy you? Yes, a question. Excuse me? So the singularity along the diagonal: when I say $\Psi$DO singularity, well, we're in two dimensions, so it looks like $\log|z - \tilde z|$. The $g_\ell$'s here are smooth functions along this face. Right, and these $A_j$'s are the Taylor coefficients at the front face, and they are actually singular at one point: the $A_j$'s are singular at $r = \tilde r$.
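Schematically, the two expansions side by side; a sketch, where the stated dependence of the coefficients is my guess at the bookkeeping.

```latex
% At the front face, with defining function \rho:
G \;\sim\; \sum_{j=2}^{\infty} A_j\, \rho^{\,j},
\qquad A_j \text{ smooth except where the lifted diagonal hits, } r = \tilde r.
% At a side face, near a cone point of angle 2\pi(1+\beta):
G \;\sim\; \sum_{\ell=0}^{\infty} g_\ell\; r^{\,\ell/(1+\beta)},
\qquad g_\ell \text{ smooth in the remaining variables}.
% At the corner the two series combine into a joint product-type expansion,
% exactly as for a smooth function of two variables.
```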
So that's this extra singularity, but it has product-type behavior: the point is that the pseudodifferential singularity is not degenerating as I approach the front face; it's just multiplied by $\rho^2$. So you have something that looks like $\log|z - \tilde z|$, which is what you expect in two dimensions, times $\rho^2$, and that's it.

Logarithmic terms appear often. These exponents, and we saw these numbers before, are called indicial roots; they come about from trying to find formal solutions. And if you read your Fuchs and other 19th century mathematics, you know that you get log terms when indicial roots differ by integers; typically, when there's some numerical coincidence, you get log terms. So for certain cone angles, high up in the series, you get log terms, almost surely. Yes, that's exactly right. But you want to think of this as what the beginning of the series looks like.

Okay, now, what does having a pseudodifferential kernel like that buy you? Well, first of all, it inverts the operator, but you solve this not at the level of operators; you solve it at the level of distributions. Namely, what I really solved is: I have this very concrete distribution such that the Laplacian in the $z$ variable, applied to $G(z, \tilde z)$ lifted up to this space, equals the delta function $\delta(z - \tilde z)$. So I've solved this at the level of distributions, not at the level of operators.

Now, by the way, I should say a few words about the proof, since I've just asserted this statement. How do you prove a statement like this? You do it backwards. You say: I'm hoping this is the answer; then I construct something that should be the right kind of thing, and then I prove, by uniqueness, that it is the answer. And when I say I construct it, this is the whole point of having these polyhomogeneous expansions: any time you have something discrete like this, you can solve term by term. So all you're doing is solving this equation in Taylor series, formally, in three or four different places. You solve it formally along the diagonal; that's what the standard pseudodifferential symbol calculus does. And you solve it in Taylor series at each of the boundary faces. So you're doing something really boneheaded, and you get a very good approximation; then you do some small extra arguments to show that the actual solution has those series expansions. You guess the answer, you construct something with pretty much the right behavior, then by abstract reasons, using what you've constructed, an inverse exists, and then you show it really looks like what you said. If you're familiar with the construction of the heat kernel, the Hadamard parametrix method, that's exactly what you do: you find the local expansion of the heat kernel as $t \to 0$, you know a heat kernel exists by abstract arguments, and then you prove that the abstract heat kernel must really have the expansion you constructed. Very similar strategy.

Okay, so what does this buy you once you've done it? The answer is, for one thing, that you're not tied to one function space.
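Here is the indicial computation those roots come from, as a sketch, using the model operator normalized as above:

```latex
% Separate variables: try u = r^{\gamma} e^{i\ell\theta} in the model operator.
r^{-2}\Big( (r\partial_r)^2 + (1+\beta)^{-2}\,\partial_\theta^2 \Big)\, r^{\gamma} e^{i\ell\theta}
\;=\; r^{\gamma-2}\Big( \gamma^2 - \frac{\ell^2}{(1+\beta)^2} \Big)\, e^{i\ell\theta}.
% Vanishing of the bracket gives the indicial roots
\gamma \;=\; \pm\,\frac{\ell}{1+\beta}, \qquad \ell = 0, 1, 2, \dots
% For \ell = 0 the root \gamma = 0 is double, and the second solution is \log r:
% the prototype of how log terms enter when roots coincide or resonate.
```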
So you can prove, a posteriori, that $G$ is bounded between all sorts of function spaces. $G$ is bounded between these b-Sobolev spaces, $H^s_b \to H^{s+2}_b$; the subscript $b$ just means I'm always measuring derivatives with the vector fields $r\partial_r$ and $\partial_\theta$. So these are ordinary Sobolev spaces, but defined with respect to these weird vector fields. You can also prove that $G$ is bounded between, say, $\mathcal{C}^{\ell,\alpha}_b$ and $\mathcal{C}^{\ell+2,\alpha}_b$, et cetera. Namely, dream up your favorite function space, and as long as pseudodifferential operators act on it in some reasonable way, you can prove that $G$ is bounded, and what that means is that the Laplacian will be Fredholm on those spaces. If you think back to a priori estimate methods, you have to do one elaborate strategy for Hölder spaces and another elaborate strategy for Sobolev spaces; you usually don't think about those because they were done 50 years ago, but this approach allows you to go back and forth immediately. The minute you have this kernel bounded between various function spaces, you get that the Laplacian is Fredholm between the corresponding spaces. So anyway, that point of view has a lot of merit.

In particular, it also allows you to do the following. Suppose $f$ is in $\mathcal{C}^{0,\alpha}_b$, and let $u = Gf$. Let's think in terms of this picture: here is $r$, $\tilde r$, here's the diagonal; I've suppressed the $\theta$ variables. What does $u = Gf$ mean? It really means I'm looking at
$$u(r,\theta) = \int G(r, \theta, \tilde r, \tilde\theta)\, f(\tilde r, \tilde\theta)\; \tilde r\, d\tilde r\, d\tilde\theta,$$
if I normalize the measure that way. It's just that integration; of course it may need to be interpreted as a distributional pairing, but there it is. What that really means is that I pull $f$ up to be a function of the second variable, so it's independent of the first, I multiply by this kernel, and then I push forward, integrating along the fibers. Okay, so there's $u$.

Well, if we know everything about the pointwise behavior of this kernel, all these really detailed series expansions, then basically what happens along the diagonal is exactly what gives you the gain of regularity of $u$: $u$ is two orders better, and that comes purely from the diagonal. And the fact that you have these series expansions at the faces, with this extra vanishing, this $\rho^2$ at the front face, tells you a little more. Namely, I can decompose $u$ as the part that comes from near the diagonal and the part that comes from the side face:
$$u = a_0 + \big(a_{11}\cos\theta + a_{12}\sin\theta\big)\, r^{1/(1+\beta)} + \tilde u, \qquad \tilde u \in r^2\,\mathcal{C}^{2,\alpha}_b.$$
The remainder $\tilde u$ comes from what happens along the diagonal, and the explicit leading terms come from what happens along this left face. So this decomposition theorem, which I used crucially in the analysis when I was doing these nonlinear arguments, comes from just studying the pointwise behavior of this kernel.

Okay, so now let me tell you a little about how you go on to apply this machinery in our conic metric problem. First of all, notice that I cheated a little: I told you the series involves all the exponents $j/(1+\beta)$, $j$ a non-negative integer.
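In display form, the upshot; a sketch, where the exact Hölder exponents carried along are my guess at the bookkeeping.

```latex
% Mapping properties, read off a posteriori from the kernel:
G : H^{s}_{b} \to H^{s+2}_{b}, \qquad
G : \mathcal{C}^{\,\ell,\alpha}_{b} \to \mathcal{C}^{\,\ell+2,\alpha}_{b}.
% Decomposition of u = Gf near a cone point of angle 2\pi(1+\beta),
% for f \in \mathcal{C}^{0,\alpha}_b:
u \;=\; a_0 \;+\; \big( a_{11}\cos\theta + a_{12}\sin\theta \big)\, r^{1/(1+\beta)}
\;+\; \tilde{u}, \qquad \tilde{u} \in r^{2}\,\mathcal{C}^{\,2,\alpha}_{b}.
% The gain of two b-derivatives comes from the diagonal singularity; the
% explicit leading terms come from the side-face expansion.
```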
I always stopped at $r^{1/(1+\beta)}$ and said the error term is of order $r^2$. Anybody doing the arithmetic would realize I was cheating when the cone angle is bigger than $2\pi$, or at least in a certain range. What I really have is that $u$ looks like a sum of terms $(a_{j1}\cos j\theta + a_{j2}\sin j\theta)\, r^{j/(1+\beta)}$. In fact, if I were to look at, for example, a solution of $\Delta_g u = f$, with $f$ compactly supported away from the conic tip (here I've done the blow-up, here's the support of $f$), then $u$ is not the zero function, but it has a complete series expansion near the boundary, term by term homogeneous. This is just for the sake of argument. And did I get all the right terms? Yes: this is for $j$ from one to infinity, and for $j = 0$ I have a constant $a_0$, plus possibly an $\tilde a_0 \log r$; there could be a log there.

So a general solution has a complete series expansion, and all I've been doing is truncating it. And where is the right place to truncate? Well, I'm interested in where the junk terms coming from the diagonal, from this $f$, interfere with the homogeneous terms. So what I really want to keep is the part of the series, $r^{j/(1+\beta)}$ times all this junk, where $j/(1+\beta)$ is between zero and two. These are the relevant guys, the relevant indicial roots.

Now, where do they occur, and how many of them do I need to take? Let's do the arithmetic: $j/(1+\beta) < 2$ is the same as $j < 2 + 2\beta$. So we have various cases. When $\beta < 0$, remember that's cone angle less than $2\pi$, there may be only one: I always have $j = 0$, the trivial one, and maybe $j = 1$; but if $\beta$ is between $-1$ and $-1/2$, then $1/(1+\beta)$ is bigger than two, so even that one isn't there. And what do you have typically when $\beta > 0$? Here's zero, here's two, and you typically have a bunch of indicial roots; the higher $\beta$ is, the more of them crowd into this interval between zero and two, and then there's a bunch of others beyond. In other words, these terms contain a certain amount of, well, it turns out geometry of the conic tip, and they get swamped by the error terms the minute you go above order $r^2$.

Okay, so how do we use this? You can right away see the following: that's the general picture when $\beta > 0$, while when $\beta < 0$ you have at most one nontrivial root in here. And notice: when $j = 1$ and $\beta < 0$, we have $1 + \beta < 1$, so $1/(1+\beta) > 1$. So here's one; you might have a root between one and two, but between zero and one you definitely have nothing, and that's very significant. Let me describe why that's significant, and let's now get to some analytic problems.
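Here's a throwaway numerical check of that root counting; the function and the sample values of $\beta$ are mine, not anything from the lecture.

```python
def roots_in_window(beta, lo=0.0, hi=2.0):
    """Indicial roots j/(1+beta), j >= 1, lying strictly between lo and hi.

    j/(1+beta) < hi is equivalent to j < hi*(1+beta); for hi = 2 that is
    the condition j < 2 + 2*beta from the lecture.
    """
    roots = []
    j = 1
    while j / (1 + beta) < hi:
        if j / (1 + beta) > lo:
            roots.append((j, j / (1 + beta)))
        j += 1
    return roots

# Cone angle is 2*pi*(1+beta): beta < 0 means angle < 2*pi.
for beta in (-0.75, -0.25, 1.0, 3.0):
    print(beta, roots_in_window(beta))
# beta = -0.75: []                    1/(1+beta) = 4, nothing below 2
# beta = -0.25: [(1, 1.333...)]       one root, and it lies above 1
# beta =  1.0 : [(1, 0.5), (2, 1.0), (3, 1.5)]
# beta =  3.0 : [(j, j/4) for j = 1..7], roots crowd into (0, 2)
```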
So now let me remind you of a theorem I mentioned the other day. By the way, the picture with roots crowding in is the case $\beta > 0$; when $\beta < 0$ they're all way out beyond two; there are infinitely many of them, but they've all moved out. So I talked about this moduli space: let's take curvature $k$, where $k$ could be zero, one, or minus one, and conic; call it $\mathrm{Met}_k^{\mathrm{conic}}$. I claim, or rather let's try to see what would go into saying, that this is a smooth manifold. The theorem I mentioned to you is that it is smooth, at least when all the cone angles are less than $2\pi$, and we'll see exactly what condition comes into play.

Well, how would you check that something is a smooth manifold? Now I'm looking at the space of all metrics, not just working conformally. So what I'm really asking is: suppose I have a solution $g$ in $\mathrm{Met}_k^{\mathrm{conic}}$, and I want to know when nearby tensors are still in there. In other words, here's the point $g$, here's $\mathrm{Met}_k^{\mathrm{conic}}$, and this is hopefully a hypersurface in the space of all conic metrics.

Now I want to remind you of a familiar picture in Riemannian geometry. If I look at the space of all metrics, it's just some huge infinite dimensional blob, but it has distinguished directions. At a point, there are the conformal directions: I can multiply by a positive function. There are the diffeomorphism orbits: I can pull a metric back by diffeomorphisms. And then there's typically, well, in two dimensions, a finite dimensional amount left over. Excuse me, this is a terrible picture, let me try again: here are the conformal directions, here are the diffeomorphism orbits, and then there's a finite dimensional piece left over. In other words, if I can find a nice slice transverse to all these conformal classes and to the diffeomorphism orbits, and if everything is diffeomorphism invariant, then the space of distinct conformal classes is represented by the directions perpendicular to the diffeomorphism orbits and lying in this slice. Has everybody seen that picture? This is one way of thinking about Teichmüller space, of course.
Okay, so we want to do the same thing here. This space of constant curvature metrics is supposed to be, well, I've always been talking about it conformally: starting with a metric and trying to find a conformal factor moving it onto this slice. So, going back to the picture, what I'd like is an operator whose level set is that slice, and that operator is just the one taking any metric to its Gauss curvature: I fix $g$, and I take small $h$ to the Gauss curvature $K(g + h)$. This is not working in conformal classes; this is working amongst all metrics. Then I want to do the obvious thing, which is differentiate, and ideally use the implicit function theorem: I look at the set of all $h$ such that $K(g+h) = K(g)$, a level set, and I'd like to show it's a smooth manifold by the implicit function theorem.

Okay, so what is $DK$ at $g$? It's a little bit of a mess, but not too bad. Acting on symmetric two-tensors, it has a covariant Laplacian minus a certain curvature operator, and then an extra bit that looks like $\delta_g^* B_g$. These are natural operators on symmetric two-tensors and one-forms; the first part is a generalized Laplacian on symmetric tensors. This $B_g$ is called the Bianchi operator: applied to $h$, one thing it does is take the divergence of $h$; the other thing it does is take the trace of $h$, which is a function, and take $d$ of it; and it adds them together (with the usual conventional factor of one half on the trace term). And $\delta_g^*$ is the adjoint of the divergence. If I wrote all this in coordinates it would be horrible; it's not so good even now, but it's not so terrible.

So what does the implicit function theorem tell you? If I want to prove the level set is a manifold, I want to show this linearization is surjective. Now, the way it's done in the compact smooth case, which we can try to adapt here, is this: if it's surjective in just some special directions, then it's surjective. It's clearly a very under-determined operator, with a huge infinite dimensional null space, which will be the tangent space to this level set. So if it's surjective in some nice directions, for example the conformal directions, then it's surjective on everything.

So what does this look like in the conformal directions, $DK_g$ applied to $u \cdot g$, something purely conformal? You can compute what each piece gives, and what you get is not too shocking. Let me just remind you: if I look at the conformal problem by itself, the equation is $\Delta_g u - K_g + K_{\tilde g}\, e^{2u} = 0$. If I linearize around $u = 0$, with both curvatures equal to $k$, the exponential term brings down a factor of two, and I get $(\Delta_g + 2k)u$. And I claim that when you work through this and apply $DK_g$ in the purely conformal directions, you get the same thing. That's a good exercise, it's not very hard.

Okay, so what do we need to prove? That this operator is actually surjective. And now we're on a compact manifold with conic singularities.
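Here's that linearization worked out; a sketch, where the sign convention for $\Delta$ and the form of the conformal curvature equation are the standard ones I'm assuming.

```latex
% Conformal problem in two dimensions: for \tilde{g} = e^{2u} g,
\Delta_g u \;-\; K_g \;+\; K_{\tilde g}\, e^{2u} \;=\; 0.
% Linearize at u = 0 around a metric with K_g = K_{\tilde g} = k:
\frac{d}{dt}\Big|_{t=0}\Big( \Delta_g(tu) - k + k\,e^{2tu} \Big)
\;=\; \Delta_g u + 2k\,u \;=\; (\Delta_g + 2k)\,u.
% So surjectivity in the conformal directions is invertibility of
% \Delta_g + 2k, i.e. the question of whether 2k is an eigenvalue of -\Delta_g.
```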
If you were on a compact smooth manifold, there would be one and only one condition: is $2k$ in the spectrum? Remember my sign convention: this Laplacian has leading term $+\partial_r^2$, so its spectrum runs to minus infinity, and the question is whether $2k$ is an eigenvalue of $-\Delta$; I apologize, I'm doing this around curvature $k$. If $k$ is negative, as in the hyperbolic case, I feel pretty good: $\Delta + 2k$ should be invertible. If $k = 0$, there's a null space, the constants, and that corresponds to adding a constant; remember, we're just scaling, so that's nothing too serious. The more serious case is $k = +1$. So basically what we need to understand is: is two in the spectrum?

I've already told you the tools: this $G$ can be used to understand the spectral theory of conic operators, and so, something I won't prove formally but will write down: $\Delta_g$ has discrete spectrum, say on $L^2$ of the cone. If I take $-\Delta_g$, it has eigenvalues $0 = \lambda_0 < \lambda_1 \le \lambda_2 \le \cdots$, discrete spectrum going off to plus infinity, with bottom eigenvalue zero. The next eigenvalue, well, I stated this estimate the other day, and it turns out $\lambda_1$ is the one we're worried about: what if it's two? Suppose we're in the case $k = +1$; then we're definitely in the range where this could equal two.

Now, I'm going to sketch the proof of the claim: if all the $\beta$'s are less than zero, so all cone angles are less than $2\pi$, then $\lambda_1 \ge 2$, and it equals two only for a football. So it's a very nice rigidity theorem. The corresponding theorem for smooth manifolds is well known; I think it's due to Obata, an old theorem in Riemannian geometry and geometric analysis, and the claim is that the same sort of argument works here.

So here's the proof. Let $u$ be the $\lambda_1$-eigenfunction, and take $du$. I start with $\lambda_1 \int |du|^2$, which is the same as $\int \langle \Delta_1\, du,\, du\rangle$, where $\Delta_1$ is the Laplacian on one-forms; I'm just using the fact that $\Delta_1 \circ d = d \circ \Delta_0$, these commute. Now I can do some rearrangement, and the claim is that this equals $\int \big(|\nabla du|^2 + |du|^2\big)$. This is exactly the step where I have to do an integration by parts; I'll return to that momentarily. Then Cauchy–Schwarz tells me that $|\nabla du|^2$ is at least one half the square of its trace, so this is bounded below by $\tfrac12 \int (\Delta u)^2 + \int |du|^2$, and $\int (\Delta u)^2 = \lambda_1 \int |du|^2$. Altogether, if I rearrange, pulling that term over, I get $\tfrac12 \lambda_1 \ge 1$. The Cauchy–Schwarz step is pure linear algebra; the other step is integration by parts. And the linear algebra is just a statement about the Hessian; how should I say it?
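Assembled in one place, and hedged by my sign conventions ($-\Delta u = \lambda_1 u$, curvature $K \equiv 1$ so $\mathrm{Ric} = g$, and $\Delta_1$ the Hodge Laplacian on one-forms):

```latex
% Eigenvalue estimate, modulo the boundary terms discussed below:
\lambda_1 \int |du|^2
  \;=\; \int \langle \Delta_1\, du,\, du\rangle          % since \Delta_1 d = d\,\Delta_0
  \;=\; \int \big( |\nabla du|^2 + |du|^2 \big)          % Weitzenbock: \Delta_1 = \nabla^*\nabla + \mathrm{Ric}
  \;\ge\; \tfrac12 \int (\Delta u)^2 + \int |du|^2       % |\mathrm{Hess}\,u|^2 \ge \tfrac12(\operatorname{tr}\mathrm{Hess}\,u)^2
  \;=\; \tfrac{\lambda_1}{2} \int |du|^2 + \int |du|^2.  % \int (\Delta u)^2 = \lambda_1 \int |du|^2
% Dividing by \int |du|^2 gives \lambda_1 \ge \lambda_1/2 + 1, i.e. \lambda_1 \ge 2.
```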
The pointwise statement is that $\nabla du$ is the Hessian of $u$, and for a symmetric two-tensor in two dimensions, Cauchy–Schwarz gives $|\mathrm{Hess}\, u|^2 \ge \tfrac12 (\mathrm{tr}\,\mathrm{Hess}\, u)^2 = \tfrac12 (\Delta u)^2$; the one half is the dimension. And why is $\int (\Delta u)^2 = \lambda_1 \int |du|^2$? Look at $\int (\Delta u)^2$: that's $\int \delta du \cdot \delta du$. If I move one of these $\delta$'s across, that's $\int \langle d\delta\, du, du\rangle$, which is the same as $\int \langle \Delta_1\, du, du\rangle$, as claimed; there's no $\delta d$ term, because that would involve $d^2 = 0$. So from here to there it's moving this $\delta$ over, and the rearrangement gives $\tfrac12 \lambda_1 \ge 1$; it's an inequality, $\lambda_1 \ge 2$. For the case of equality you have to work a little harder; in fact you have to use that interesting behavior of geodesics I described, namely that geodesics don't pass through cone points in their interiors, in order to show that equality holds only for footballs. I'm not going to prove the equality case.

But where does the cone angle issue come in? The answer is that I did integrations by parts, and there are boundary terms. The boundary term is essentially what I get integrating by parts in the Weitzenböck step: I'm really using the standard Weitzenböck formula $\Delta_1 = \nabla^*\nabla + \mathrm{Ric}$, and in $\int \langle \nabla^*\nabla\, du, du\rangle$ the integration by parts produces a boundary integral of something like $\langle \nabla_{\partial_r}\, du,\, du\rangle$: two derivatives of $u$ in one factor, one derivative in the other. The integration by parts works if this boundary term goes to zero. So suppose $u$ has a term like $r^{1/(1+\beta)}$ in its expansion. If I differentiate that twice, I get $r^{1/(1+\beta) - 2}$; the $du$ factor gives another $r^{1/(1+\beta) - 1}$; and then there's an extra $r$ because the measure is $r\, d\theta$. Altogether, the quantity that needs to go to zero as $r \to 0$ is $r^{2/(1+\beta) - 2}$, and that goes to zero precisely when $1/(1+\beta) > 1$. So namely, the integration by parts works exactly in the cone angle less than $2\pi$ case, and you had to use these delicate expansions to see it.

Okay, so this is a little bit of the machine I wanted to show you. You apply this and you say: when the curvature is one and we're not on a football, two is not in the spectrum, this linearization is surjective, and hence I get exactly this picture: the space of conic spherical metrics is a hypersurface.
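The power counting for that boundary term, written out as a sketch under the same normalization:

```latex
% Boundary term over the circle \{r = \epsilon\} around a cone point:
\int_{r=\epsilon} \big\langle \nabla_{\partial_r}\, du,\; du \big\rangle\; r\, d\theta.
% If u \sim r^{1/(1+\beta)}, then schematically
\underbrace{r^{\frac{1}{1+\beta}-2}}_{\text{two derivatives}}
\cdot
\underbrace{r^{\frac{1}{1+\beta}-1}}_{\text{one derivative}}
\cdot
\underbrace{r}_{\text{measure}}
\;=\; r^{\frac{2}{1+\beta}-2}
\;\longrightarrow\; 0 \;\text{ as } r \to 0
\iff \frac{1}{1+\beta} > 1 \iff \beta < 0,
% i.e. exactly when the cone angle 2\pi(1+\beta) is less than 2\pi.
```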
So here's $g$, and I have a hypersurface in the space of all conic metrics, and I've just proved that. Now, to analyze further, and when I said there was a good Teichmüller theory: I need a good slice theorem saying the diffeomorphism orbits do what they should, so there's more analysis; I need to understand what this extra direction is, which is an enhancement of this argument. Okay, any questions about that before we go on? As opposed to applying what to this? Well, I'm trying to find metrics of constant curvature, so that's the natural operator in sight; I'm looking at the slice of constant curvature metrics. Good, any other questions?

Okay, there's a lot more you can do in this direction. I wanted to mention at some point, without describing it in any detail, that there's a heat flow approach to all of this, which I did some years ago with Yanir Rubinstein and Natasha Sesum, where you start with a conic metric and let it flow to a constant curvature conic metric. You have to do similar analysis, but with nonlinear heat equations rather than these Laplacians, and there are a lot of surprises. We worked it out carefully when the cone angle is less than $2\pi$, and it replicates these results, but of course it tells you many other things. And there's an interesting bit there: you can also see what happens to the heat flow away from the Troyanov regime, when, in the spherical case, the cone angles don't satisfy these linear constraints; you can go read our paper to see what happens.

Okay, so I sketched all of this to finally get to the point. Suppose you want to generalize this, because generalize we must, to the case of arbitrary cone angles. Remember, this is our last frontier: spherical cone surfaces with cone angles bigger than $2\pi$. Does this generalize? The first observation, and I should say that a lot of the cone angle less than $2\pi$ story is worked out in a paper with Hartmut Weiss, and in trying to generalize it I've been working with my postdoc Xuwen Zhu; the first thing we realized was that all of the stuff in that paper with Weiss, this good Teichmüller theory and so on, works precisely if two is not an eigenvalue. Let me state it informally: everything works if two is not in the spectrum. It's a very simple condition: I have the scalar Laplacian, and I'm interested in two not being in its spectrum; this is for the spherical case, curvature plus one. You do have to work harder; there are quadratic differentials with multiple zeros and all sorts of things to avoid, but as long as two is not in the spectrum, you can do all the renormalizations and make everything work. You don't get the eigenvalue estimate, though, because that's no longer true. So when I say everything, I mean the Teichmüller theory works, everything except the eigenvalue estimate.
So it could be that in that case the spectrum, which always starts at zero, has a lot of eigenvalues between zero and two, and in fact that's what happens: as the cone angle opens, there's a spectral flow, and a bunch of eigenvalues cross two. Now you can ask: can you find interesting examples where two is an eigenvalue? I'll give you one interesting example: any football, no matter what the cone angle is, because I can find an eigenfunction that depends only on the radial variable. You can take footballs with any cone angle; it's a really big football, I'm not a sportsman. So a football of any cone angle has eigenvalue two. Here's another interesting class of examples: take any branched cover of such a football. Now the original cone points have unwrapped, and I have a bunch of other cone points with angles that are integer multiples of $2\pi$. Those have eigenvalue two as well, because I just take that eigenfunction and pull it back.

Now, I had a conjecture for a while that these were the only examples. I'd love to prove that; however, I now think it's false, and in fact I have good circumstantial evidence that it's false. Yes, you can take branched covers of the sphere over itself; you can just unwind it. Conformally, a football just looks like the sphere with two branch points, right? Now take two other points and take an unwrapping along those, and you can still have something spherical. Of course you can get examples in higher genus much more easily. Well, no, I mean, it corresponds to a conformal vector field downstairs, but not in the unwrapping, not in the branched cover; so it stays okay. No, excuse me: the vector field becomes meromorphic, but the function does not. That's exactly right; that's the point.

Okay, so let me finally give you some punch lines. In general, we're trying to understand what happens when two is in the spectrum. One thing I'd love to understand: this problem has been studied via character varieties and many other approaches, and people have come up with various conditions involving Schwarzian derivatives and other things. I have no idea how to relate those conditions to eigenvalue two. They should be the same, but I don't see how.

So now let me talk about the final thing: how do you get rid of the cokernel when two is in the spectrum? In other words, we've seen that to get a nice smooth moduli space, at the end of the day I need the operator to be surjective. If it isn't surjective, what do I do about that? What I'd like is to find some geometric motions which compensate for the cokernel. This is a standard method. So here I am at some point of the putative moduli space, and if the differential, say in the conformal directions, is surjective, then there's no singularity there. You might think I might be helped by looking at non-conformal directions, and in a certain sense that's right, but let me sidestep that for the moment.
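The football example can be checked by hand; a sketch, writing the football as a surface of revolution (this particular coordinate form of the metric is my choice):

```latex
% Football of cone angle 2\pi(1+\beta) at both poles:
%   g = dr^2 + (1+\beta)^2 \sin^2 r \, d\theta^2, \quad r \in (0,\pi),\ \theta \in [0,2\pi).
% The radial function u = \cos r satisfies
\Delta u \;=\; u'' + (\cot r)\, u'
       \;=\; -\cos r - (\cot r)\sin r \;=\; -2\cos r \;=\; -2u,
% so -\Delta u = 2u for every cone angle: the eigenvalue two never leaves.
% Pulling u back under any branched cover gives the covering examples.
```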
So I need to make the operator surjective, right? If it's not surjective, what do I do? Well, I can try to look at the differential in non-conformal directions, or I can try to compensate in some other way. Now let me tell you what we discovered, which in some sense was obvious from the Mondello–Panov point of view I sketched yesterday. Take the surface, here are the cone points, and suppose two is in the spectrum, $\lambda_j = 2$ for some $j$. What is that going to correspond to? Well, what I want to do is look at some of these cone points with cone angles bigger than $2\pi$, and, as I described geometrically yesterday, you can split these: you can take one of these cone points and think of it as a limit of families in which two cone points come together. So there's a geometric motion. That in some sense corresponds to moving horizontally, because remember, in the conic case this is the non-conformal direction: I'm moving the marked conformal class, and moving the marked conformal class means moving the points.

So the question is whether I can understand what happens as I move this. Now, I described rather explicitly what this looks like: the conformal factor is $\beta_1 \log|z| + \beta_2 \log|z - p|$, and I let $p$ go to zero. If I differentiate this family with respect to $p$, I get something with a negative power: I get a $1/z$. In fact, here's a little computation for you to do: remember $z = \rho e^{i\theta}$, and I set $r = \rho^{1+\beta}/(1+\beta)$, this funny change of variables. If I change the $1/z$ to the $r$ variable, I get an $r^{-1/(1+\beta)}$. So, in other words, looking at coalescing families, in this case where just two points come together, I get these negative exponents.

Okay, so that's the secret, and I just want to say a few words about what happens here. We had the possibility of all these indicial roots, but each of them has a mirror on the other side: along with $j/(1+\beta)$ I have $-j/(1+\beta)$. And every one of these $r^{-j/(1+\beta)}$'s corresponds to breaking a point into a cluster of $j$ points or $j+1$ points. But we have to understand that analytically; it's a generalization of this computation.

So what that suggests, and what we had to work out, is the following picture: how do I deform the surface, and how do I actually do the analysis, when I have a family of surfaces with a family of points that are coalescing? What I'd like to understand is what happens to all of this analysis as the points cluster; how do I make sense of that? So, if you'll allow me two minutes, I'll draw a few pictures describing it. The first point is: how do I parameterize? If I'm on the plane and I have two points coming together, how do I parameterize that? Well, I can assume their center of mass is the origin, and then I can normalize by the angle; there's an extra angle, and then there's the distance between them.
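That little computation, written out schematically; a sketch, with constants and real parts suppressed:

```latex
% Coalescing family of conformal factors:
u_p \;=\; \beta_1 \log|z| \;+\; \beta_2 \log|z - p|.
% Differentiate in the family parameter at p = 0:
\partial_p u_p \big|_{p=0} \;=\; -\beta_2\, \partial_p \log|z-p|\big|_{p=0}
\;\sim\; \frac{1}{z}.
% Pass to the geometric radial variable: z = \rho e^{i\theta},
% r = \rho^{1+\beta}/(1+\beta), so \rho \sim r^{1/(1+\beta)} and
\frac{1}{z} \;\sim\; \rho^{-1} \;\sim\; r^{-\frac{1}{1+\beta}} :
% differentiating the coalescing family realizes the negative indicial root.
```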
So altogether there's some distance $\rho$ going to zero, say the distance to the center of mass, and there's an extra angular coordinate which I'm suppressing. So I just have a one-dimensional parameter space, this $\rho$, which corresponds to two points colliding. For every arrangement of the two points I have the sphere with two marked points on it; again I can put them in normal position, say vertically arranged. So I have a whole family of surfaces: here is where the points have come together, and at any $\rho > 0$ I have two points at distance $\rho$ apart. If you like, I have two interesting submanifolds: the trace of this point and the trace of that point. So let me draw that picture a little more coherently. I have a half space, this direction is $\rho$, and I have a family of planes; this is just the model case. For every $\rho$ I have one of those points, and for every $\rho$ the other; all I'm doing is marking the two points in the plane at parameter $\rho$.

Now, you know I like to blow things up, so I want to blow up these two traces. However, they collide, and when they collide I have to do something about it; you solve problems like that by blowing up first. So here's the picture, and it's kind of a funny picture. I first blow up the place where the points collide, and I get a hemisphere: that corresponds to all directions of approach. When I lift the two traces of the points, I get antennae, bunny ears, right? And then I blow these up. So now I have a picture that looks like this: there is the universal curve for two points colliding. For every slice, at each $\rho$, I have a copy of the plane, punctured at the two points and blown up there. But as $\rho \to 0$ these two points come together, and at $\rho = 0$ I get something funny: I have the original surface, blown up at the point where they collided, but then I have this extra hemisphere.

So what is this extra hemisphere? Any guesses? I always want to blow up, just as on a conic surface I wanted to take the conic points and blow them up; that's all I'm doing, for every $\rho$ I blow up those conic points. But those conic points move: around each of them I pull up coordinates, but now those coordinate systems are coalescing. How do I resolve all of that together? I have to do this extra blow-up first. What this extra surface corresponds to is some sort of bubble, and if I think of it geometrically, in terms of conic metrics on the surface, what it actually looks like is an asymptotically flat space with two conic singularities at fixed distance from each other. So this thing I can naturally identify with a space whose opening cone angle at infinity is $2\pi\big((1+\beta_1) + (1+\beta_2) - 1\big)$, that is, $2\pi(1 + \beta_1 + \beta_2)$. Namely, this end looks like the big end of a cone, asymptotically; these look like the small ends of cones; and here are the two conic points, with angles $2\pi(1+\beta_1)$ and $2\pi(1+\beta_2)$. And so what I get at $\rho = 0$ is two pieces: I have the incomplete conic surface, where the points have merged.
And I have this extra bubble, which actually corresponds to a complete manifold, well, with these two extra conic singularities, but complete at one end. And the moral of the story is that to understand this degenerating analysis, I have to do analysis on every one of these boundary hypersurfaces: on these, which we'd essentially already done; here; and then on this extra complete space. But this shows you that these are the directions that are really relevant, and the theorem one can eventually prove is that the family of solutions, as the points collide, is polyhomogeneous on this space; it's smooth here.

So let me conclude with one last picture: what happens with three points, you might well wonder. It turns out that with three points you have these three traces coming in, and you get a more complicated picture with more boundary faces. One boundary face is a hemisphere carrying three points, though there's an extra angular dependence and those points move around. There's another region, which I'll draw in a different picture, which looks like the previous one but where two of the points have come together: here's the bubble where the two points merge, and here's the third point, and this sits as a boundary face of that. Now, this looks kind of horrible, but you just do it by induction. What it very clearly shows is that, to do the analysis for three points, either you're in the case where the normalized bubble carries three distinct points, which you know how to do; these are points which are not clustering together, normalized on the bubble to stay at about diameter one from each other. Or I have two points which have come together, already normalized back here, interacting with the third point. Okay, so there's a picture like this for every number of points $k$.

And what we can do, as I say, is first of all show that this is enough to get surjectivity, at the price of declustering these points: if I decluster each of these points a certain amount, I get surjectivity of the corresponding operator, and I get that solutions are polyhomogeneous on these weird spaces. Okay, so I'll stop there.

It's very interesting to compare it to the stable curves point of view. No, that's exactly right: there it's hyperbolic metrics typically, and here essentially flat metrics; it turns out I have a flat conic metric here. It's a different way of bubbling off than stable curves, that's exactly right. And as I hinted three or four days ago, at a point you probably missed: any time you have a phenomenon with points coalescing, you can use this. I've already started working out what happens with quadratic differentials and related objects as points collide, and this gives you a fine understanding, fine in the sense of very detailed, of the asymptotics of various quantities associated to them. The point being that this is a space on which those degenerations are smooth. So this is spherical conic metrics for cone angles bigger than $2\pi$, right?
And so a further picture: remember I have this funny polygonal region, from Mondello and Panov, of allowable $\beta$'s, and I can ask where in there this kind of thing occurs: when do eigenvalues equal two? It turns out there are infinitely many separating hypersurfaces where an eigenvalue equals two. I know that because this is a connected set and I can go out along it, and there's a spectral flow: way out there, I know there are a lot of eigenvalues less than two, so I must have crossed places where an eigenvalue equals two; there have to be separating hypersurfaces. I can't do that integration by parts; that's a really naive reason, but that argument just doesn't work when the cone angles are bigger than $2\pi$. And it shouldn't work, because the statement isn't necessarily true. This is for curvature plus one: all of this analysis applies on a surface of arbitrary genus, but only the curvature plus one case is at issue; I never have this problem with the eigenvalues when the curvature is zero or minus one. So the only time you might have this problem with the eigenvalues is when the curvature is plus one.