There's a unification between the two results. So my setting for now is that I have a Riemannian 3-manifold with scalar curvature greater than or equal to 0. So again, it's a special case of the constraint equations, and in tomorrow's lecture I'll talk about what we can say beyond that. And so I had these three general situations. I had the asymptotically flat case, where the main geometric result is the positive mass theorem. And then I had the two finite cases. The first of these, which I actually called case 2 before, is the Brown-York mass. So remember what it requires is that I have some region Omega (it's the idea of a quasi-local mass), and the boundary of Omega is a smooth surface Sigma. The hypotheses for the Brown-York mass are that the Gauss curvature of the surface, that is, of the induced metric on the surface from the 3-manifold, is positive at each point of Sigma, and the other assumption is that the mean curvature of Sigma is positive. That means the mean curvature vector points inward; a sphere in R^3 has positive mean curvature in this convention. And under these conditions, there was a very beautiful quasi-local mass quantity, which we called M_BY. It's a function of Sigma, well, really a function of Omega, but really of the boundary geometry. And what it is is 1 over 8 pi times the integral over Sigma of H_0, which is a comparison mean curvature, minus H, integrated with respect to the area element on Sigma. The H_0 is gotten by isometrically embedding Sigma into R^3. So I have my region in the curved manifold, and I can isometrically embed the boundary by some map phi into R^3, and what I get is a convex surface in R^3; this is phi of Sigma. That's the Weyl embedding problem, a very fundamental result in differential geometry which has been known since about 1950.
And then the H_0 is the mean curvature over here. So if I take a point P on Sigma, I can look at the corresponding point phi of P, and H_0 is the mean curvature there. So I think of H_0 as a function on Sigma: H_0 of P is the mean curvature at phi of P. And so I can compare these two functions, and the Brown-York mass is given by this quantity. We discussed what it looks like for standard spheres in Schwarzschild, and some of its properties, last time. In particular, one of the main properties is the positivity. This is a theorem of Shi and Tam: the Brown-York mass is greater than or equal to 0, and it's 0 only if the region Omega is actually isometric to the region it bounds in flat R^3. So that's the Brown-York mass. And then there is the other theorem, which we described; it's not quite a quasi-local mass, but a situation where we have a region Omega contained in M. The boundary of Omega in this case is assumed to be polyhedral. We discussed particularly the case of the cube, where the boundary consists of six faces, six smooth surfaces which meet along edges, so it has the combinatorial type of a cube. And we also alluded to the other most important case, the case of a tetrahedron, where you have four triangular faces and edges. So in this polyhedral case, there's also a theorem. These are conjectures of Gromov, and as I indicated, Gromov sketched a proof in the cubical case, but they were done rigorously in the cube and tetrahedral cases by Chao Li. And the theorem here is that the maximum dihedral angle (the dihedral angles are defined at each point on the one-dimensional edges) is greater than or equal to a comparison angle, let me call it theta naught: in the cube case, theta naught is pi over 2, and in the tetrahedral case, it's the dihedral angle of a regular tetrahedron.
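To recap the first case in symbols (this is just a transcription of what was said above, using the notation of the lecture):

```latex
% Brown-York quasi-local mass of \Sigma = \partial\Omega, assuming
% K_\Sigma > 0 and H > 0, with \varphi : \Sigma \to \mathbb{R}^3 the
% Weyl isometric embedding and H_0(p) the mean curvature at \varphi(p):
M_{BY}(\Sigma) \;=\; \frac{1}{8\pi} \int_{\Sigma} \bigl( H_0 - H \bigr)\, dA .

% Shi-Tam positivity:
M_{BY}(\Sigma) \;\ge\; 0, \qquad
M_{BY}(\Sigma) = 0 \iff \Omega \text{ is a domain in flat } \mathbb{R}^3 .
```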
So this is the dihedral angle of the regular model. And again, equality holds only in the trivial case, meaning that the region Omega is a flat region and the boundary faces are actually planar surfaces. So this is a kind of angle comparison theorem. And I want to try to unify these, so let me point out the strengths and the weaknesses of each. Let me start with strengths. In the case of the Brown-York mass, a strength is that the quantity computed is actually a quantitative measure of the quasi-local mass of the region, and it's one which works quite well in many situations. So a key strength for Brown-York is that the conclusion is quite satisfactory, somehow sharp. And a key strength of the polyhedral case is that it requires weaker conditions on the boundary of Omega. In the polyhedral case (I guess I didn't say it) all that's required on the faces is that the mean curvature is greater than or equal to zero. So in other words, the first assumption, of positive Gauss curvature, is not there in the polyhedral case. And that's a good thing, because the assumption of positive Gauss curvature doesn't seem to be natural. It certainly strongly restricts the kinds of geometries you can apply this to, but it's really essential even for the definition: you can't define the comparison unless you know that the boundary surface can be isometrically embedded into R^3. So you can't get started without the assumption that the Gauss curvature is positive. And so the main weakness of Brown-York is that K_Sigma positive should not be necessary; one would like to remove that. And the key weakness of the polyhedral situation is that it's not that quantitative. So in the polyhedral case, it's kind of a weak conclusion.
For example, if we compare it to the triangle comparison theorems, which we discussed in the first lecture: for a triangle in a surface with non-negative curvature, it would only be saying that the largest angle of the triangle is at least pi over 3. For an equilateral triangle the angles are pi over 3, so the corresponding statement would only say that the largest angle is at least pi over 3. And that's weaker than the statement you get from Gauss-Bonnet, which gives us that the sum of the angles is greater than or equal to pi, and of course much weaker than what you get from the Toponogov theorem, which actually compares individual angles with those of a specific comparison triangle. OK, so the conclusion is a bit weak, and it would be nice to combine these two. And there's a kind of natural way to try to do it. So let me write down a possible statement, which I don't have much evidence for, but I also don't know counterexamples; it sort of comes from formally combining these ideas. So how can we combine these two? Well, one of the main problems, if you remove the positive Gauss curvature assumption from the Brown-York mass, is that you don't have a comparison; there's no model. In the polyhedral theorem, there isn't really a model polyhedron in R^3: you're simply comparing the polyhedron to the regular polyhedron of the same combinatorial type, either the cube or the tetrahedron (and you can do it for other polyhedral types as well), but there's not a specific model like there is in the Brown-York case. And so I would like to point out that if you look at a polyhedral surface, and assuming the mean curvature H is non-negative, the geometry is largely determined by the one-skeleton. So in other words, if you look at a polyhedral surface, then you have the edges, which are these arcs.
So this is called the one-skeleton. And if you know the one-skeleton, then you sort of know the best possible choices for the faces, because you could take each triangle and just fill it in with a minimal surface. It's not very hard to see that if you make H smaller on the faces, you decrease the dihedral angles. It's just like in the geodesic case: I pointed out that you get the same triangle comparison theorem if, instead of taking geodesics, you take curves with positive geodesic curvature (a geodesic has k equal to 0). When you do that, you're fattening up the triangle, so you increase the angles; the dihedral angles are bigger for the more convex one. And so you can think of the one-skeleton as the main object that determines the geometry of the polyhedral surface. So you could ask the question: can I isometrically embed the one-skeleton into R^3? And actually, that's a rather easy question to answer. If you take a single curve, of course, you can always embed it in R^3, or even R^2, in many different ways; you just need a curve with the same length, since for curves there's no intrinsic geometry. On the other hand, there are restrictions for a collection of curves like this. If you had a triangle, then you can always embed it because of the triangle inequality. But if you have four points, you can't necessarily embed a four-point metric space into R^3. On the other hand, the conditions under which you can do it are very simple conditions which you can check; they are basically linear algebra conditions. So it's not unreasonable to assume that the one-skeleton can be isometrically embedded in R^3. In other words, you're just assuming that there is a tetrahedron in R^3 with the same corresponding edge lengths. So I can assume that I have a tetrahedron; let's label its vertices p1, p2, p3, and p4, and take corresponding points p1 bar, p2 bar, p3 bar, and p4 bar.
And then I could look at the edge between p1 and p2 and require that it have the same length as the corresponding edge, and similarly for all the others. That's a condition which you can't always satisfy: there are four-point metric spaces which cannot be isometrically embedded in R^3. But again, it's a much, much easier question than the problem of embedding a surface into R^3; that's a very difficult problem in general, and it's really only known globally in the positive Gauss curvature case. Being able to embed the one-skeleton is, as I say, really a linear-algebraic computation, which is very standard; the question of embedding finite metric spaces into Euclidean spaces goes back to about 1930. OK. So let's assume we have the one-skeleton embedded in this way. Then we could look at the Brown-York conclusion, which would say that the integral over Sigma of H_0 minus H, dA, is greater than or equal to 0. On the other hand, if the faces have zero mean curvature, then the only contribution to these integrals comes from the edges. And so the question is, what does this mean? This is like what we did in going from the Gauss-Bonnet theorem for a smooth curve to the triangle comparison, where we can think of approximating a triangle by rounding off the corners with a smooth curve; when we do that, we get a contribution from the corners. And so this inequality really restricts to a condition on the dihedral angles. In fact, the claim is: if you take a polyhedron and approximate it by a smooth surface and look at the limit of the total mean curvature, what you get, say for the polyhedron with boundary Sigma naught (the boundary of the tetrahedron), behaves like an integral over the edges.
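Going back to the four-point embedding question for a moment: the "linear algebra conditions" mentioned above can be sketched concretely. The function name and example metrics below are my own, not from the lecture; the criterion itself is the classical Gram-matrix test going back to Menger and Schoenberg in the 1930s.

```python
def det(m):
    """Determinant of a small square matrix (Laplace expansion along row 0)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def embeds_in_r3(D, tol=1e-9):
    """Check whether a 4-point metric space with distance matrix D embeds
    isometrically in R^3. Classical criterion: center at point 0 and form
    the Gram matrix G[i][j] = (D[0][i]^2 + D[0][j]^2 - D[i][j]^2) / 2 for
    i, j in {1,2,3}; the space embeds iff G is positive semidefinite,
    i.e. all principal minors of G are non-negative."""
    G = [[(D[0][i] ** 2 + D[0][j] ** 2 - D[i][j] ** 2) / 2
          for j in (1, 2, 3)] for i in (1, 2, 3)]
    subsets = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]
    for S in subsets:
        minor = [[G[a][b] for b in S] for a in S]
        if det(minor) < -tol:
            return False
    return True

# Regular tetrahedron with unit edges: embeddable.
T = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]

# One point at distance 1 from three points that are mutually at distance 2:
# satisfies the triangle inequality but does not embed, since the
# circumradius of the outer equilateral triangle is 2/sqrt(3) > 1.
B = [[0, 1, 1, 1], [1, 0, 2, 2], [1, 2, 0, 2], [1, 2, 2, 0]]
```

So checking whether the one-skeleton of a tetrahedron embeds really is a finite computation on the six edge lengths.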
And so it's pi minus the dihedral angle theta naught, integrated with respect to arc length. Because if you smooth the surface along these edges, you get a contribution to the mean curvature that is concentrated along the edges. Along the edge direction nothing really happens, things are bounded, but transverse to the edge the normal turns by the exterior angle, which is pi minus the dihedral angle. And so you can argue by a limiting argument that the total mean curvature is really this integral of pi minus the dihedral angle; in particular, it's pi times the total length of the edges minus the integral over the edges of theta naught. OK, and it's a local calculation, so I could similarly say that, morally, the integral of H is also an integral of pi minus the dihedral angles. In the model space this is H naught, and in the original space it's also true; well, actually, if I don't assume the faces are minimal, then there's an extra term. The integral of H in the manifold would be the integral over the faces of H (those are smooth surfaces), plus the integral over the edges E of pi minus theta with respect to arc length, and that's pi times the length of E minus the integral over E of theta ds; in the model it's pi times the length of E naught minus the integral over E naught of theta naught ds. So in particular, if I look at the difference of these two, the pi-times-length terms are the same, because I'm assuming that the edges are isometric; in fact, each corresponding edge length is the same. And so this inequality reduces to an inequality for the integral over the edges of the tetrahedron in the manifold. And again, I can think of theta naught as a function back here, because each point p here has a corresponding point phi of p over there.
And I can take the dihedral angle theta naught there and just think of it as a function of p. Note that the order gets reversed: H_0 minus H becomes theta minus theta naught, because H integrates to pi minus theta. So the conclusion is that the integral over the edges of theta minus theta naught, ds, is greater than or equal to 0. That is how the Brown-York conclusion would read, were it to hold in this polyhedral setting. I should say, what happens to the integral over the faces? Well, it comes in with the right sign, so you can actually throw it away; the optimal case, if you like, is when H is 0 on the faces. And this statement, I claim, is exactly analogous to the Gauss-Bonnet statement: if I do it for a triangle, the integral over the edges becomes a sum of point masses at the vertices, and the contribution at each vertex is exactly the interior angle, so it's saying the sum of the angles is at least the comparison value. So this is a question. It would be, I think, a very big improvement on the polyhedral case, where we're working with the maximum of theta, a rather weak statement: saying the maximum is bigger than something only says there's some point where the dihedral angle is pretty big. This statement would be, I think, a much more precise statement. On the other hand, the problem is I have no idea how to prove it. I've thought about various cases, and it's actually very non-trivial even when Omega is a polyhedral domain in R^3. But on the other hand, I couldn't come up with counterexamples directly either. So there may be more restrictions needed on the edges; you could, for example, require the edges to be geodesics. That's not required in the theorem, but it may be that you need some further restrictions. At least formally, though, the Brown-York conclusion in the polyhedral case would correspond to this. I think it's a very interesting question whether the conclusion can be improved in the polyhedral case in this way. It could, for example, give a way of defining a quasi-local mass.
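Written out as a formula, the speculative edge inequality described above (with the sign bookkeeping from the smoothing argument) would be:

```latex
% Conjectural polyhedral analog of the Brown-York inequality.
% E = edges of \partial\Omega in M, \theta = dihedral angle along E,
% \theta_0 = dihedral angle of the model tetrahedron in R^3 at the
% corresponding point; edge lengths match by assumption, so the
% \pi \cdot \mathrm{length}(E) terms cancel in the difference:
\int_{E} \bigl( \theta - \theta_0 \bigr)\, ds
\;\ge\; \int_{\text{faces}} H \, dA \;\ge\; 0 .
```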
So this quantity would then be more stable. If I perturb the polyhedron, the maximum of theta can jump around; it can change rather badly. But these integrals would seem to be much more stable under perturbation, so this is much more likely, I think, to give an effective notion of quasi-local mass. OK, so those are just some speculations. Again, I'm only writing this down because it's the formal analog; I wouldn't state it as a conjecture without further hypotheses. But I think it's an interesting question to ask. OK, so those were some closing comments about the two notions I discussed last time. What I want to do in the rest of today's lecture is describe how these ideas generalize to higher dimensions. In mathematics, of course, studying geometry in all dimensions is important, but higher-dimensional gravity is also studied a lot by physicists, so there are various motivations for considering higher-dimensional gravity, and in particular the idea of defining quasi-local masses is, again, I think a useful notion. So let me put one thing to rest first, concerning the Brown-York mass. If I now consider, instead of M^3, a manifold (M^n, g) with scalar curvature non-negative, then the Brown-York mass is not going to be a good notion at all, because you almost never have the isometric embedding. Even if the boundary geometry is just a perturbation of a sphere (of course, the standard sphere embeds), if you perturb it generically by a little bit, you won't be able to embed it isometrically in R^n. So it's almost never true that Sigma can be isometrically embedded into R^n as a hypersurface. Actually, if you could embed it, the proof of positivity I mentioned, which is due to Shi and Tam, does work in higher dimensions as well; it's just that the applicability is extremely limited.
So the problem is that the isometric embedding problem as a hypersurface, when the surface has dimension bigger than 2, is an over-determined problem. A Riemannian metric on Sigma has components g_ij locally, where i and j run from 1 to n minus 1. The question of isometric embedding is the problem of finding a map phi, with components phi^1 up to phi^n, so n embedding functions. And the equations which make it isometric are that g_ij equals the dot product of the partial of phi with respect to x^i with the partial of phi with respect to x^j, for all i and j. So the number of equations is the number of independent components of the metric, which is n minus 1 times n over 2, while the number of unknowns is only n. So I would be looking for n functions satisfying this much larger number of equations. You can see that when n is 3, the count is 3 equations and 3 unknowns. But if n is 4 or higher, there are many more equations than unknowns, and so even for a small perturbation (you could take the standard sphere and perturb it randomly) you won't be able to find a solution. You do not expect to be able to do that. And so the whole idea pretty much falls flat: it would only be for extremely special domains that you could actually define it. So the Brown-York mass doesn't generalize very well to higher dimensions. However, the other two notions, the asymptotically flat case and the polyhedral case, actually do generalize, and I want to describe how that works a little bit today. It involves some interesting geometry. So let me first state the claim: if I take n greater than or equal to 4, there are extensions of, first of all, the asymptotically flat case. That would be the positive mass theorem.
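The dimension count just described can be checked mechanically. This is a small illustrative script (the function name is mine, not from the lecture):

```python
# For a hypersurface Sigma^{n-1} embedded in R^n, count the isometric
# embedding equations g_ij = (d phi / dx^i) . (d phi / dx^j) against the
# number of unknown embedding functions phi^1, ..., phi^n.

def embedding_count(n):
    """Return (#equations, #unknowns) for embedding Sigma^{n-1} -> R^n."""
    equations = (n - 1) * n // 2   # independent metric components g_ij
    unknowns = n                   # embedding functions phi^1..phi^n
    return equations, unknowns

for n in range(3, 8):
    eqs, unk = embedding_count(n)
    status = "determined" if eqs <= unk else "over-determined"
    print(f"n = {n}: {eqs} equations, {unk} unknowns ({status})")
```

For n = 3 the system is formally determined (3 equations, 3 unknowns), which is why the Weyl problem is solvable; for every n at least 4 it is over-determined, matching the claim that generic perturbations of the sphere do not embed.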
And the other one would be the polyhedral case, which I called case 3 before. So they both do have successful generalizations. Actually, for the polyhedral case the complete work is very recent: in higher dimensions it's not yet fully worked out for the tetrahedral case, but the cubical case appears in a paper of Chao Li that was just posted to the arXiv a week ago, I think. So let me start with the positive mass theorem, because the cubical case, as in three dimensions, is very closely related to the ideas I described for proving it. So suppose we have (M^n, g) with scalar curvature greater than or equal to 0, and suppose it's asymptotically flat. Again, I'm going to assume special asymptotics, like I did before; again, this can be justified, because you can take arbitrary asymptotics and perturb a little bit, keeping the mass essentially the same, to achieve these asymptotics. So I'm going to assume that near infinity (remember, M minus a compact set K is diffeomorphic to R^n minus a ball, and we can choose coordinates x^1 up to x^n) the metric is a conformal factor, u to the power 4 over n minus 2, times delta, the Euclidean metric, on M minus some large compact set. So I'll assume again that the metric near infinity is conformally flat. And then the same set of ideas we talked about in the three-dimensional case works similarly. Again, the proof is by contradiction, just as it was for the polyhedral case as well. So let's assume that the mass m is negative. Now u here is a harmonic function outside a compact set, and so u of x has an expansion: it's 1 plus m over 2 mod x to the n minus 2 (the Green's function for the Laplacian on R^n is 1 over r to the n minus 2), plus terms that vanish more quickly for large x.
And so the definition of the mass we can just take to be that number m that occurs in the expansion. In particular, if m is negative, that gives us very specific restrictions on the geometry near infinity, and the one we're interested in is the following. If we take a large slab (we could use any coordinate, but say we take x^n equals lambda, with lambda quite large, so we're out in the asymptotic region, and x^n equals minus lambda down here), then the same calculation we did before shows that this slab is mean convex: the mean curvature vector points inward, both on the top and the bottom. In particular, that allows me to construct area-minimizing hypersurfaces which lie in the slab: I can take a big circle out here and construct them, and do what I did before. Now there is an issue here. I'm always going to work in cases where the surfaces are smooth in this talk, so I assume also that n is less than or equal to 7, because there's this weird phenomenon for volume minimization: in dimensions 8 or higher, you can have singularities for these minimizing hypersurfaces. Let's not deal with that issue here, since it would add too much complication; let's just assume that our hypersurfaces are regular. Then we let the circle tend to infinity, and we produce an asymptotically planar, area-minimizing (in particular stable) hypersurface Sigma. So that's what we did in three dimensions. And in three dimensions, we were able to use the stability inequality plus the Gauss-Bonnet theorem on Sigma, which is then a two-dimensional surface, to rule this out. Now, it's a bit different in higher dimensions.
So the important remark is that finding an asymptotically planar minimizing hypersurface turns out not to be sufficient; we have to figure out some way to replace that Gauss-Bonnet argument. In fact, for n greater than or equal to 4, there are lots of asymptotically flat area-minimizing hypersurfaces: there exists a Sigma asymptotic to any plane at infinity, so you don't need the barriers at all. This is true whether or not the mass is negative. So there's a difference between the three-dimensional case and the case of dimension four and higher. That may not be obvious, so let me explain it. In other words, it's not sufficient just to have one area-minimizing hypersurface. You can ask, well, what part of the argument doesn't work? In that argument, we used the fact that the plane was two-dimensional, because we wanted our variation to approach a constant: the cutoff argument we used for a constant variation doesn't work in the higher-dimensional case, because the volume growth is too large. And in fact there are a ton of these surfaces. You can understand this quite easily, because there's a nice analog: minimal hypersurfaces which are close to planes behave like graphs of harmonic functions. So this hypersurface may be quite complicated inside, but near infinity it is the graph of a function, x^n equals u of x, and the condition that the surface be minimal is essentially the Laplace equation near infinity; when the surface is very flat, the function is basically harmonic. So you can understand this phenomenon in a linear model. There's a model problem, which is quite simple: suppose I want to solve, on R^n, Laplacian u equals f, where f has compact support, and suppose I want u to be asymptotic to a constant.
Here the constant c is given, and the question is, can I do that? That's sort of what I'm trying to do, right? I'm trying to say that for any plane, I can construct this minimal surface; it's not really the graph of a harmonic function, but near infinity it's pretty close to that. So the question is, when can you do that? Well, it looks a little like Newtonian gravity: if f were a mass distribution, then you know that you can solve this in R^3 and get the Newtonian potential. And in fact the answer (let me work on R^k, because k is actually n minus one) is yes for k greater than or equal to three, but no for k equals two. It's related to the fact that the Green's function of the Laplace operator on R^2 has a logarithm, and so it's not bounded near infinity. So you can force these solutions to approach any given constant, over any plane, in higher dimensions, that is, if n is at least four. And you can ask what would happen in two dimensions if you tried to construct a solution like this. You could take a big disk of radius sigma, solve Laplacian u equals f there with u equals c on the boundary, and then let the radius of the disk go to infinity. What would happen is that the solutions u sigma would go either to plus infinity or minus infinity on compact sets; the limit of u sigma does not exist. And exactly the same thing happens in the three-dimensional case when you look at these minimal surfaces: if you don't have the negative mass condition, you don't have the barriers, the mean convex region. What would happen if you did this in Schwarzschild, say, with positive mass?
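The dimension dependence in the model problem comes entirely from the fundamental solution of the Laplacian. A minimal illustrative sketch (the function name is mine, and the normalizing constants are dropped, since only the growth behavior matters):

```python
import math

# Model problem: solve Laplacian u = f (f compactly supported) on R^k
# with u asymptotic to a given constant. The obstruction is the behavior
# of the fundamental solution: it decays for k >= 3 but grows
# logarithmically for k = 2.

def fundamental_solution(k, r):
    """Fundamental solution of the Laplacian on R^k at radius r, up to a
    dimensional constant: log(r) for k = 2, r^(2-k) for k >= 3."""
    if k == 2:
        return math.log(r)
    return r ** (2 - k)

# For k >= 3 the tail tends to 0, so the Newtonian-type potential leaves
# the asymptotic constant unchanged; for k = 2 it is unbounded.
for r in (10.0, 100.0, 1000.0):
    print(r, fundamental_solution(2, r), fundamental_solution(3, r))
```

This is the linear shadow of the statement in the lecture: for k at least 3 (so n at least 4) the solutions can be pinned to any constant at infinity, while for k = 2 they drift off.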
What would happen is that these minimizing surfaces would just drift away: on compact sets, they would go to infinity on one side or the other, depending on how you chose your boundary data. OK, so that's a good linear analog. Of course, the minimal surface problem is non-linear, so it's a little different, but it turns out there's no difficulty there: there are lots and lots of complete, asymptotically planar, area-minimizing hypersurfaces. So what do we do about that? Well, the idea of the proof is that it's not enough to have just one of these things; you have to have a very special one, which I'll call strongly stable. And it's actually somewhat related to the free boundary condition that we imposed in the cubical case. Let me be a little brief about it, because I want to spend some time on the finite version, the cubical case. So the basic setup: let me draw a picture. I have the upper boundary of the slab, x^n equals lambda, and the lower one, x^n equals minus lambda, and I can take a cylinder of large radius and, for each of the circles at various heights on this cylinder, minimize area. I get minimizing surfaces, and they'll all lie between minus lambda and lambda because of the mean convexity of the slab. But then what I have to think of doing is sliding the boundary circle up and down. So I can ask: what's the best height to take? And it turns out the best height is not lambda or minus lambda. The mean convexity condition, together with the angle condition at the boundary, guarantees that there's some special choice strictly inside. So there exists some height, which will depend on the radius sigma; let's call it h sigma.
So that surface is the smallest: the area of the minimizer with boundary at any height h is greater than or equal to the area of this special one. And these h sigma are strictly in the interior of the interval from minus lambda to lambda, again because of the geometry of the situation. So in other words, I don't just construct a surface, I construct a best one, allowing the boundary to slide up and down. Then I let sigma go to infinity, and I produce a limiting surface Sigma, which I'm going to call strongly stable. So what's important about it? A minimal hypersurface being stable means that if I do a compactly supported variation, the area goes up to second order. Strongly stable I'm going to define to be the condition that I'm also allowed to choose variations which are translations near infinity: the second variation of area is non-negative for all such variations phi. As usual, I take Sigma and its normal, and the variations are defined by functions. If I take phi compactly supported, then stability means that the second variation is non-negative for compactly supported variations; we wrote that down as an eigenvalue condition on a linear operator on Sigma, the Jacobi operator. The requirement here is that I allow not only phi in C-infinity compactly supported on Sigma, but also phi with phi minus one compactly supported; in other words, I allow phi to be constant near infinity. I have this because I built in the extra translation: the surface actually minimizes not only for compactly supported variations which fix the boundary, but also for ones which move the boundary in a parallel manner.
And that's the way we originally did the positive mass theorem in higher dimensions. What happens (and I'll explain this in a little more detail in a minute) is the following. This Sigma has an induced metric from M; call it G1. The stability condition, just with compactly supported variations, guarantees that I can choose a conformal factor to make the scalar curvature zero. So the first point is: stability implies there exists a u1 so that u1 to the power 4 over n minus 1 minus 2, that is, 4 over n minus 3, times G1 is scalar-flat. Let me call this metric G1 hat, so the scalar curvature of G1 hat is identically zero. That's solving an equation which uses the eigenvalue condition coming from stability, and you only need compactly supported variations to do it. This u1 tends to one at infinity, so in particular the metric G1 hat is again asymptotically flat. So on this hypersurface, I get a metric with zero scalar curvature which is again asymptotically flat. And what I want to say is that if I started with negative mass on the original manifold, then this will also have negative mass, and that's where the extra variation comes in. The key step is: if I take (M, g) and assume negative mass to begin with, that implies that Sigma with the metric G1 hat has mass m hat negative. So it's an inductive argument: if I start with negative mass, I can produce this strongly stable hypersurface (a very special one) which again carries a metric with zero scalar curvature and negative mass, and then I can slice down to three dimensions. That's how the argument goes. So let me just explain this m hat being negative.
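The conformal change just described, written in formulas (my transcription of the lecture's statements; here k = n minus 1 is the dimension of Sigma):

```latex
% Conformal Laplacian on a k-dimensional manifold, k = n - 1:
L_{G_1} u \;=\; \Delta_{G_1} u \;-\; c(k)\, R_{G_1}\, u,
\qquad c(k) = \frac{k-2}{4(k-1)} .

% If u_1 > 0 solves L_{G_1} u_1 = 0 with u_1 \to 1 at infinity, then
\hat G_1 \;=\; u_1^{4/(k-2)}\, G_1 \;=\; u_1^{4/(n-3)}\, G_1
% is scalar-flat, R_{\hat G_1} \equiv 0, and asymptotically flat.
```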
So the point is the stability inequality: delta^2 Sigma >= 0 implies that the integral over sigma of |grad phi|^2 plus the special constant c(n-1) times the scalar curvature R_1 = R(g_1) times phi^2 is strictly positive, if phi is non-zero, say. Here R_1 is the scalar curvature of the metric g_1, and the constant c(n-1) is the constant in the conformal Laplacian: generally c(k) = (k-2)/(4(k-1)), and the operator associated with that is called the conformal Laplacian. So it's a little bit of conformal geometry which is used in this. Now, this function u_1 is a solution of the corresponding operator: the Laplacian of u_1 on sigma minus c(n-1) R_1 u_1 is identically zero. And what I want to do is replace phi by u_1. Now u_1 is asymptotic to one at infinity, so it's not quite compactly supported, but it's close enough that you can use it as a variation, since constants near infinity are allowed. So I replace phi^2 by u_1^2, and I get that the quantity is positive. Everything here is integrable, so I can write it as the limit, as sigma tends to infinity, of the integral over the ball of radius sigma of |grad u_1|^2 + c(n-1) R_1 u_1^2, and that limit is greater than zero. On the other hand, this can be written as a boundary integral: if I integrate by parts, I get minus u_1 times the Laplacian of u_1 plus the c(n-1) R_1 u_1^2 term, and using the equation, I can write that as a boundary term.
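In symbols, the computation just described is (a sketch, with c(k) = (k-2)/(4(k-1)) as above):

```latex
\int_\Sigma \left( |\nabla \varphi|^2 + c(n-1)\,R_1\,\varphi^2 \right) d\mu \;>\; 0
\quad (\varphi \not\equiv 0),
\qquad
\Delta u_1 - c(n-1)\,R_1\,u_1 = 0 .

% Taking \varphi = u_1 and integrating by parts over a large ball B_\sigma:
\int_{B_\sigma} |\nabla u_1|^2 + c(n-1)\,R_1\,u_1^2
= \int_{B_\sigma} \big( -\Delta u_1 + c(n-1)\,R_1\,u_1 \big)\, u_1
  + \int_{\partial B_\sigma} u_1\, \frac{\partial u_1}{\partial \nu}
= \int_{\partial B_\sigma} u_1\, \frac{\partial u_1}{\partial \nu} .
```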
So it's just like for a harmonic function: the Dirichlet integral gives you a boundary integral, and that's true here too, since u_1 solves the corresponding operator. So the interior integral is over B_sigma, a ball, and the boundary integral is over its boundary sphere. What I get is u_1 times du_1/dnu, integrated with respect to the area measure, and the limit of that is positive. On the other hand, u_1 is asymptotic to one, so it will have the expansion u_1(x) = 1 + M hat / (2 |x|^{n-3}) + lower-order terms; a nice expansion with the M hat appearing there. And you can see, by the same calculation as in the first lecture, what this limit is: when I differentiate, u_1 is approaching one so I can forget it in the limit, and du_1/dr is just minus a constant times M hat. So as sigma goes to infinity, the left-hand side converges to minus some positive constant times M hat, and in particular that shows that M hat is strictly negative. So that's how the proof goes. There's this additional step, which is somewhat close to what we did in the cubical case, where we solved this free boundary problem. So let me claim that there's again a localization of this theorem, and the localization I'll talk about is again the case of a cubical polyhedron. The proof is somewhat similar to last time, but again the Gauss-Bonnet theorem has to be replaced by a different method, and in particular it's a somewhat interesting conformal argument.
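The asymptotics behind this limit, as a sketch (here omega_{n-2} denotes the area of the unit sphere in R^{n-1}; the exact positive constant depends on the mass normalization):

```latex
u_1(x) = 1 + \frac{\hat m}{2\,|x|^{\,n-3}} + \text{(lower order)},
\qquad
\frac{\partial u_1}{\partial r} = -\,\frac{(n-3)\,\hat m}{2\,|x|^{\,n-2}} + \dots

% The sphere \partial B_\sigma has area \sim \omega_{n-2}\,\sigma^{n-2}, so
\lim_{\sigma \to \infty} \int_{\partial B_\sigma} u_1\,\frac{\partial u_1}{\partial r}
= -\,\frac{(n-3)\,\omega_{n-2}}{2}\,\hat m \;>\; 0
\quad \Longrightarrow \quad \hat m < 0 .
```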
So just to summarize the steps here, of course I haven't done everything in detail, but the idea in the asymptotically flat case is: we started with an asymptotically flat manifold with negative mass; we produced this special strongly stable, area minimizing, asymptotically planar hypersurface; and we then used a conformal deformation and the strong stability to show that it inherits the structure we started with. That is, it becomes a zero scalar curvature, asymptotically flat metric with negative mass. So we had negative mass originally, and we get negative mass for the slice also. The same strategy is used in the cubical case in higher dimensions, but the conformal argument is somewhat more involved than the one here; it uses, I think, a rather interesting conformal geometry problem for manifolds with boundary, and I'll try to explain it if I don't run out of time. So now the claim, the theorem, is this. I take omega, now a finite domain in my manifold; really it's just a manifold with boundary. And I assume the boundary of omega, which I'll again call sigma for lack of imagination, is cubical. That means it's made up of a certain number of faces, 2n faces, which are hypersurfaces, and they meet transversely along the edges. Now the n-dimensional cube is somewhat more complicated than the three-dimensional cube: the three-dimensional cube just had the interior, the faces, the edges and the vertices; the n-dimensional cube has the interior, the faces, the codimension-two edges, codimension-three edges, on down to the vertices. So it's a more complicated singular structure that occurs here, and in particular analyzing the behavior of the minimizing surfaces is somewhat harder in this case. But of course I'm not going to talk about that; I'm just going to give the formal argument.
So we assume that this boundary is cubical, and of course we always assume that the scalar curvature is non-negative. Again, for an n-dimensional cube, we define the dihedral angles to be the angles at the codimension-two edges; so for an n-dimensional cube those are the (n-2)-dimensional edges, where two faces come together. And we assume of course that the mean curvature h is greater than or equal to zero on the faces. Then the theorem is that the maximum, taken over points p in E^{n-2}, the (n-2)-dimensional edges, of the dihedral angle theta(p) is greater than or equal to pi over two, with equality only in the trivial case, meaning that sigma is the boundary of a standard cube or standard rectangular solid in R^n. Okay, so that's the theorem. And the idea of the proof is very much parallel to the asymptotically flat case, so let me just say it in words and then try to fill in at least a few of the geometric details that go into it. The basic idea is like what we did there: the proof is by contradiction, the same way this proof of the positive mass theorem goes. So we assume, for the sake of contradiction, that the maximum dihedral angle is strictly smaller than pi over two. So we start with an n-dimensional region whose boundary is cubical, and what we're going to do is construct a slice: we're going to again minimize area and produce an (n-1)-dimensional manifold with cubical boundary which also has maximum dihedral angle less than pi over two. In fact the argument will never increase the maximum dihedral angle; the dihedral angles will only go down in the construction. So how do we do it?
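So, in symbols, the polyhedral comparison theorem being described is:

```latex
R_g \ge 0 \ \text{in } \Omega, \quad H \ge 0 \ \text{on the faces}
\quad \Longrightarrow \quad
\max_{p \,\in\, E^{\,n-2}} \theta(p) \;\ge\; \frac{\pi}{2},
```

with equality only in the trivial case of a standard rectangular solid in R^n.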
Well, we do something similar to what we did last time. It's a little hard to draw an n-dimensional cube, but we choose two opposite faces, an upper face and a lower face. And because the dihedral angles are less than pi over two, and because the mean curvature is non-negative on all of the faces, the top and the bottom in particular, we can argue that there is an area minimizing surface that separates the two faces and doesn't touch the top or the bottom. So there's a hypersurface, this is my sigma, and sigma is now (n-1)-dimensional. It separates the bottom face from the top face, and it has the property that it minimizes area among all surfaces that do that. And the reason that surface exists: you might worry that when you minimize, the surface just gets pushed up and touches the top, or touches the bottom. The reason that doesn't happen is the mean convexity, so it can't touch in the interior of a face, and it can't touch at the boundary because of the angle condition. So the picture is like the picture I drew for geodesics last time: because the angle is strictly smaller than pi over two, I can construct this minimizing surface, and it won't hit the top or the bottom as long as these are mean convex and the angle is smaller than pi over two. So in n dimensions we can do that; the only caveat is that we always assume that n is at most seven, so that sigma is non-singular, a smooth hypersurface. So we construct this sigma, and then what we want to show is that sigma is in fact an (n-1)-dimensional counterexample. So this sigma has a metric; let me use the same notation and call the induced metric g_1. And the claim is that there exists a function u_1, again positive on sigma. And we take g_1 hat = u_1^{4/(n-3)} g_1.
I can choose the metric so that (sigma, g_1 hat) has zero scalar curvature, R(g_1 hat) identically zero inside, and the boundary of sigma has zero mean curvature at the smooth points. In other words, similar to what we did in the asymptotically flat case, we take the induced metric and conformally deform it; there we kept it asymptotically flat, here we keep it with the correct boundary condition, namely the mean curvature being non-negative, and it's most convenient to make it zero when we solve this problem. So: zero scalar curvature inside, zero mean curvature on the boundary. And moreover, because the surface meets the original boundary orthogonally, the dihedral angles are the same: if I take a dihedral angle at a point in a codimension-two face of sigma, that will be the same as the dihedral angle of sigma, the original boundary, at the same point. So the claim is that the maximum over points p in E^{n-3}(sigma), the codimension-two edges of sigma, of the dihedral angle for this metric is less than or equal to the maximum we started with, which is assumed to be less than pi over two. So we don't increase the dihedral angles, because the surface meets the boundary orthogonally, and so the dihedral angle is the same as it was originally at that point. Okay, and so how do we do that? Well, let me say a little more about the conformal part of it, which, well, maybe many of you haven't studied any conformal geometry, but even for people who have, it's not a completely standard construction. In particular, this conformal factor u_1 satisfies both the right interior equation and the correct boundary condition to make the mean curvature zero. So this would be h, I should call it h_1 hat, the mean curvature of the boundary in the metric g_1 hat.
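The claim for the slice, collected in one place (E^{n-3}(Sigma) denotes the codimension-two, that is (n-3)-dimensional, edges of Sigma):

```latex
\hat g_1 = u_1^{\,4/(n-3)}\, g_1, \qquad
R(\hat g_1) \equiv 0 \ \text{in } \Sigma, \qquad
\hat H_1 = 0 \ \text{on } \partial\Sigma \ \text{(at smooth points)},

\max_{p \,\in\, E^{\,n-3}(\Sigma)} \theta_1(p)
\;\le\; \max_{p \,\in\, E^{\,n-2}} \theta(p) \;<\; \frac{\pi}{2} .
```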
Okay, so let me say a little bit about the conformal geometry that's needed in the proof of this. I think it's of interest in its own right, and it just works out perfectly in this case; it's kind of remarkable how the second variation inequality gives exactly the right constants to make everything work out. So let me digress a little, now that I've given the strategy: we're going to start with a counterexample in dimension n, slice and get a counterexample in dimension n-1, and since we've already done the three-dimensional case, we'll eventually get down to three dimensions. So let me fill in a little background on conformal geometry here. I'm interested now in a manifold with boundary, with a background metric, let me call it g_0. I want to consider conformally varying it: if the manifold is k-dimensional, I write g = u^{4/(k-2)} g_0. Then there's one very standard conformal calculation which everybody in relativity and geometry should know; if you know the conformal method for solving the constraint equations, for example, it's used there heavily, and it's also used in almost any work in conformal geometry. It says that the scalar curvature of the metric g is given by a very nice formula: minus the constant c(k)^{-1}, the same constant I had earlier, times the power u^{-(k+2)/(k-2)}, times a linear operator, let's call it L_0(u), where L_0(u) is the Laplacian of u with respect to g_0 minus the same constant c(k) times the scalar curvature of g_0 times u. So L_0 is just a linear operator on M: the Laplacian plus a zero-order term.
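Explicitly, the standard conformal transformation law for scalar curvature in dimension k reads:

```latex
g = u^{4/(k-2)}\, g_0, \qquad
R_g = -\,c(k)^{-1}\, u^{-\frac{k+2}{k-2}}\, L_0 u,
\qquad
L_0 u = \Delta_{g_0} u - c(k)\, R_{g_0}\, u,
\qquad
c(k) = \frac{k-2}{4(k-1)} .

% Since u > 0, zero scalar curvature is a linear equation:
R_g = 0 \iff L_0 u = 0 .
```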
The zero-order term is the background scalar curvature times the constant, and this constant c(k), the one I used before, is (k-2)/(4(k-1)). Okay, so that's one important calculation in conformal geometry. The other important calculation is something I alluded to earlier, and let me write it down in this setting. So now I have M as a manifold with boundary, and I want the relation between the mean curvature of the boundary with respect to g, which I'll call H, and H_0, the mean curvature of the boundary with respect to g_0. I used this earlier; there's a nice conformal formula for this too: H = u^{-2/(k-2)} times the quantity H_0 minus (k-1) times the logarithmic normal derivative of u^{2/(k-2)}, where I choose the outward normal nu. I had written this down before with the metric in the form v^2 g_0, and my v is u^{2/(k-2)}; the logarithmic derivative term works out to 2(k-1)/(k-2) times (d_nu u)/u. And now we're particularly interested in the case when the mean curvature of the new metric is zero, so we want this quantity to be zero. If we want to prescribe some other mean curvature, it's a lot more complicated, because the factor of u plays a role and it becomes a nonlinear boundary condition; but if we want the mean curvature to be zero, then the boundary condition is very simple. And notice, by the way, that this constant 2(k-1)/(k-2) is related to c(k): it's just 1/(2 c(k)).
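In symbols (with the sign conventions used in this lecture; nu is the outward normal):

```latex
H_g = u^{-2/(k-2)} \left( H_0 \;-\; \frac{2(k-1)}{k-2}\, \frac{\partial_\nu u}{u} \right),
\qquad
\frac{2(k-1)}{k-2} = \frac{1}{2\,c(k)} .

% Setting H_g = 0 and multiplying through by 2\,c(k)\,u gives the linear condition
\partial_\nu u \;-\; 2\,c(k)\, H_0\, u \;=\; 0 \quad \text{on } \partial M .
```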
Two c(k) is (k-2)/(2(k-1)), so that coefficient is indeed 1/(2c(k)). And if this mean curvature is zero, then I can forget the power of u in front, and I get a linear boundary condition: the normal derivative of u minus, multiplying through by 2c(k), the background mean curvature H_0 times u, is zero. So the condition that I impose zero mean curvature on the boundary is a linear boundary condition, which is a good boundary condition from the point of view of eigenvalue problems, if you like. And inside I have a nice linear operator: if I want to make R_g zero, then R_g = 0 is equivalent to saying L_0(u) = 0, which is also a linear problem, because again the power of u in front is irrelevant. So zero scalar curvature is a linear condition inside, and zero mean curvature is a linear boundary condition, and I can study this in a natural way. If I think of it variationally, I can write E(phi) = the integral over M of |grad phi|^2 plus c(k) times the scalar curvature R_0 times phi^2, d mu, and then, to get the boundary condition, I subtract 2 c(k) times the integral over the boundary of M of H_0 times phi^2. If I look at that quadratic form and try to diagonalize it, or look for eigenfunctions, then it has a discrete spectrum. An eigenfunction will satisfy L_0(u) = -lambda u inside, and it will satisfy the boundary condition which comes out when I compute the variation of the form, which is exactly this one: d_nu u - 2 c(k) H_0 u = 0. So in other words, if I think of the eigenfunctions as giving me conformal factors, those will automatically make the boundary
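The variational formulation just described, written out:

```latex
E(\varphi) = \int_M \left( |\nabla \varphi|^2 + c(k)\,R_0\,\varphi^2 \right) d\mu
\;-\; 2\,c(k) \int_{\partial M} H_0\,\varphi^2\, dA .

% Eigenfunctions of the associated problem satisfy
L_0 u = -\,\lambda\, u \ \text{in } M,
\qquad
\partial_\nu u - 2\,c(k)\, H_0\, u = 0 \ \text{on } \partial M,
```

so the boundary condition arises as the natural boundary condition of the quadratic form E.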
mean curvature zero. Okay, and so in particular you can do the sort of thing you do with closed manifolds; there's what I call a trichotomy theorem. You know, if you take a closed two-dimensional surface, there are three cases: there's the case of positive Euler characteristic, in which case there's a positively curved metric on the surface; there's the case of zero Euler characteristic, in which case there's a flat metric; and there's the case of negative Euler characteristic. The same kind of thing is true here for conformal metrics. The claim is, in fact let me just do part of it since I'm running out of time: the lowest eigenvalue lambda_1 is greater than or equal to zero if and only if there exists a positive function u such that the scalar curvature of u^{4/(k-2)} g_0 is greater than or equal to zero in M, and the mean curvature of the boundary of M with respect to this metric, let's call it H_g, is identically zero. So the non-negativity of the lowest eigenvalue of this quadratic form is exactly the condition we're looking at: exactly the condition that I can conformally deform the metric on my manifold so that its scalar curvature is non-negative inside, with zero mean curvature on the boundary. That's because, if lambda_1 is non-negative, the scalar curvature becomes non-negative, or positive. And the way it works, maybe I don't have time to do all of the calculations, but the beautiful thing is that the stability inequality on sigma, so I'm going to apply this general result on my hypersurface sigma_1, the stability, which I'll write delta^2 sigma_1 >= 0, and that of course allows variations which move the boundary, as I wrote down yesterday when I did the three-dimensional case, so maybe I don't have time to write it again, but the claim is this
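The part of the trichotomy stated here, in symbols:

```latex
\lambda_1 \;\ge\; 0
\quad \Longleftrightarrow \quad
\exists\, u > 0 \ \text{on } M:\quad
R\!\left( u^{4/(k-2)}\, g_0 \right) \ge 0 \ \text{in } M
\quad \text{and} \quad
H\!\left( u^{4/(k-2)}\, g_0 \right) \equiv 0 \ \text{on } \partial M .
```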
being non-negative for all variations phi implies that the lambda_1 over here is non-negative. And remember that in the stability inequality there's a boundary term that comes in, and there's also a term involving the scalar curvature inside, and it turns out the constant in front of the boundary term and the constant in front of the scalar curvature term work out exactly correctly to reduce to this case. So notice the constants here: here I have a c(k), and here I have a 2c(k), and it turns out that those constants work out exactly right, so that the stability of this sigma_1 implies lambda_1 is non-negative, and that's where the conclusion comes from: there exists the conformal factor u as over there. So that's the argument. Again, the argument is by downward induction: you start with an n-dimensional counterexample to the cubical theorem, you slice, you produce this (n-1)-dimensional hypersurface, and because of the variational structure of that (n-1)-dimensional hypersurface, you end up conformally changing the metric on it to give you an (n-1)-dimensional counterexample. And then eventually you slice down to three dimensions, which we already handled: in the three-dimensional case we used the Gauss-Bonnet theorem on the two-dimensional slice. Okay, so I'm one minute over, so maybe I'll stop here for today. Yeah.