Well, this is part two time-wise, but theme-wise it's more of part one. Maybe we'll get to lecture two sometime this afternoon, but I think we're carrying on tomorrow; we'll make a plan at the end of the day about what we're going to do tomorrow. So we're in the middle of getting to the main point. If you could just stay for the rest of the semester. What we're trying to do is to get this uniform bound, for some k, on the size ρ(x) of sections at any point x of X. And it's quite elementary to reduce, by a double compactness argument, to a much more tractable-looking local statement. The compactness reduces us to the following situation: we have a sequence of manifolds of the kind we're considering converging in the Gromov-Hausdorff sense to Z, and we have a point P in this limit space Z. And then we want to say that the bound holds, for suitable b and k depending a priori on P, for points which are close to P. For that to make sense, let's recall the definition of the Gromov-Hausdorff distance. This involves looking at metrics on the disjoint union of the X_i and Z, so we can suppose that we have chosen suitable such metrics, and then we know what it means to talk about the distance between a point in X_i and the point P. Is that clear to everyone? We should recall the definition of that. So what we want to say, call it (*), is that there exist k(P) and r(P) such that (*) holds if d(x, P) ≤ r(P), for x in X_i with i ≥ some i(P), and with k equal to k(P). If the bound is true for one k, there's an easy argument that it's true, after changing b suitably, for all multiples of that k. But as you'll see, we need to be aware that if it's true for k it need not be true for k + 1, for reasons that you'll see. But let's say that for simplicity, which will be right. So what would this mean? If what we're trying to do failed, there would be a sequence of X_i, with the corresponding b_i going to zero, for which it fails. So we take a convergent subsequence to get some Z. And once we've got this Z, if for each point P we've got this little r(P), then we can cover Z by a finite number of the balls of radius half r(P), or something like that, and then get the estimate over all of the X_i. So this is compactness both of Z and Gromov-Hausdorff compactness of the class of spaces we're considering. But there's nothing difficult in the Gromov-Hausdorff theory here, just straightforward arguments. Of course, as we were discussing at lunch, this is very non-constructive; this is just the first non-constructive step. We're not going to get any kind of actual information about what our constants are, because at every stage we're saying: if it fails, then we do something. So now let's focus on proving this statement. We've got some fixed P in Z, and we know we have these tangent cones. A tangent cone; remember, we're not saying this is unique. What that means is that we can choose a sequence of scalings so that Z, at a fixed size, approaches as close as we like to this cone. And that sequence of scalings corresponds to choosing a corresponding sequence of powers k that we can take. There are many parameters and things to get in the right order in this, but let's not try to write it all down.
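To fix notation, here is one way the local statement and the covering step could be written out; the names b(P), k(P), r(P), i(P) follow the lecture, and the remark about multiples of k is my gloss on the "easy argument":

\[
(\ast)\qquad \rho_{k(P)}(x) \;\ge\; b(P)
\quad\text{whenever}\quad x \in X_i,\quad d(x,P) \le r(P),\quad i \ge i(P),
\]
% where rho_k(x) = sup { |s(x)| : s in H^0(X_i, L^k), ||s||_{L^2} = 1 }.
% Covering step: if no uniform bound held, take X_i with b_i -> 0, pass to a
% Gromov-Hausdorff limit Z, cover Z by finitely many balls B(P_j, r(P_j)/2),
% and let k be a common multiple of the k(P_j): a bound for k passes, with a
% smaller b, to any multiple mk (use s -> s^m together with the sup bound
% from Moser iteration), so a single k works near every P_j, contradicting
% b_i -> 0.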
We've tried writing it down very carefully in our paper, so let me not try to say it all very carefully here. Let's suppose we choose a scaling so that Z looks very close to this cone, as close as we want for the purposes we're going to find. Then we choose i very large, so that X_i is very close to Z at this scale. So we do it in that order. What we conclude is that, after suitable scalings and for suitable i, we can construct a map χ from our U into X_i which, after we've scaled the metric, is essentially a holomorphic isometric embedding. Let's just recall what U was. We took our cone and then we just cut it: we remove a neighbourhood of the singular set and cut off in radius, with the parameters that we're just carrying along for the moment. That's our U. By just writing down the definitions and doing things right, we can embed U inside our X_i, not exactly holomorphically, and it's a bit different from what we were doing before, but such that the map is arbitrarily close to being holomorphic and arbitrarily close to being isometric. So it's essentially a holomorphic isometric embedding. There's one other parameter we're going to have, which we'll also keep free; call it ρ₀, say. We choose a point at distance ρ₀ from the vertex and a little open set around that point which lies in the smooth part of this U. Again, all of these are parameters that we're going to be allowed to adjust. But you see, now we're basically in the position that we were in before. Let's write it down again: here is X_i with our line bundle, where we choose the k corresponding to the scaling that we want; and over the cone we have our line bundle, which we're calling Λ. So there are two things we have to do to get a holomorphic section that we know something about. One is that we would like to lift the map χ to a map of the line bundles, in such a way that the holomorphic structures of the line bundles roughly match up. I prefer to think about that in terms of connections, being a gauge theory person by training: we want to lift this approximate isometry to an approximate identification of the bundles with connection. Secondly, we want a good cutoff function, which as we said is going to have the form β = β_δ β_R β_η, where the last factor means a function on Y lifted up to the cone in the obvious sense. And by good, what we mean is that we want the L² norm of |∇β| times the norm of the model section, e^{-|z|²/2}, to be small; here |z| is just notation for the distance to the vertex, so the section decays exponentially fast, Gaussianly fast, as we go away on the cone. And by small, I mean as small as we like on adjusting all the parameters. So we have two problems. What is the problem about χ̂? The geometry down here tells us that the curvatures of these bundles are essentially the same. So if we pull back the connection and compare it with the model connection, we get a connection with very small curvature. But the problem is that it might be close to a flat connection which is not the trivial one: we can easily project it to a flat connection, but that flat connection might have some non-trivial holonomy. And secondly, we need a good cutoff function.
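In symbols, the good-cutoff condition might be recorded as follows; this is a sketch, with the Gaussian exponent as quoted in the lecture and the factorisation of β following the notation above:

\[
\beta \;=\; \beta_\delta\,\beta_R\,\beta_\eta,
\qquad
\int_{U} |\nabla\beta|^2 \, e^{-|z|^2/2}\, dV \;<\; \epsilon,
\]
% with |z| the distance to the vertex; beta_R cuts off at large radius R,
% beta_delta cuts off a delta-neighbourhood of the vertex, and beta_eta,
% a function on Y pulled up to the cone, cuts off an eta-neighbourhood of
% the singular set.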
How do we achieve this smallness? Let's come back to that in a moment. Suppose we don't have any holonomy, and suppose we've got this good cutoff function. What do we then do? Then we can just say exactly the same words as we said before. We transplant our cut-off section to a section over X_i, which is not quite holomorphic but close to being holomorphic, and project it to get a holomorphic section, now working on our genuine space X_i, which is close to the cone. [Question: the region we're trying to work in is not in the image of the map χ, it's a little bit outside it; so do we know that the Sobolev constants are bounded there?] That's a very good point. We do: part of the reason for choosing the hypotheses that we have is that they give a bound on the Sobolev constant, which is what comes into the estimate in the Moser iteration. That's a very important point. OK, so these are the two places where the work has to be done. Now, the holonomy. This is a problem that actually could occur; you can pretty much write down examples where it does. This cone might be something like C²/±1, in which case, when you remove the vertex, the fundamental group is Z₂, and you really could get a non-trivial flat bundle. So you really might not be able to do the lifting. But on the other hand, if you take the square of your bundle, then you can, because π₁ is just Z₂. So although it doesn't work for k, if you take the square it does. Taking the square just means you need to do more rescaling: rather than working in this portion of the cone, you need to go down to half the size. So the same principle applies, but you need to be a bit careful in how you set it up. Let's just think about the dangers, so as not to minimize them. When you take the complement of this singular set, you get an open manifold, and you don't know, for example, that it has finitely generated homology. As you go further and further towards the singular set, you might get more and more loops to have holonomy around. So it could be quite difficult, and you need to be very careful about the order you do things in. [Question: what about infinite order? If the homology isn't finitely generated, why couldn't you get a copy of Z for the holonomy?] That's what I'm coming back to. So you need to take care with the order you do things in, but you can deal with finite order essentially by the argument we just described. In reality, one believes that you will only get finite order in this situation, so probably that's all that happens, but it's hard to prove. But even supposing you had some non-trivial Betti number: then you'd have a torus of representations, given by H¹ of whatever the space is with real coefficients modulo H¹ with integer coefficients; it's a torus T^k. So we need to contemplate that we might have a holonomy obstruction in this torus. But actually the same argument works, because we only ever need these structures to be approximately the same; we can cope with a bit of error. When we make these sections, they're not going to be exactly holomorphic anywhere, so we've already got some errors.
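As a gloss on that torus, writing U' for the open manifold in question: the identity component of the moduli space of flat U(1) connections on U' is, as I understand it,

\[
T \;=\; H^1(U';\mathbb{R}) \,/\, H^1(U';\mathbb{Z}) \;\cong\; T^{k},
\qquad k \;=\; b_1(U'),
\]
% so a non-trivial first Betti number produces a positive-dimensional torus
% in which the holonomy obstruction can sit; the saving grace, as in the
% lecture, is that we only need the holonomy to be approximately trivial.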
So we don't mind having a bit more error, and we don't mind if we've got a bit of holonomy. We don't need our connection to be exactly the trivial flat one: it could be in some little neighbourhood of the origin in this torus and we'd still be okay. But then, by a basic and famous pigeonhole principle, once we've chosen that neighbourhood, we can choose a finite n so that for any point of the torus there is some power less than n which lies in that neighbourhood. So we can always choose some power in a fixed range such that the holonomy is very small, sufficiently small for our purposes. Choosing that power just corresponds to working on a bigger portion of the cone, in which you're allowed to scale up and you need to be able to scale down. It takes quite a lot of care to write that down with the words in the right order, but that's the basic idea. All right, so now on to the cutoff functions. For β_R there isn't any problem, because the Gaussian decays rapidly as you go out to large R, so the obvious choice works. [Question: but the δ cutoff is very close to the vertex, where the Gaussian is of order one?] Exactly, that's just what I'm coming to: there the Gaussian doesn't help, but a basic computation does. If we do the cutoff at scale δ, the derivative has size δ^{-1}, so the square of the derivative has size δ^{-2}; but the volume of the region where this happens has size δ^{2n}. Because n is bigger than one, there's no problem: the integral goes to zero with δ. So that deals with that. So the real problem is this. We want to construct a cutoff function on Y for the singular set, and all we know about the singular set is that its Hausdorff codimension is at least four. So let's just finish with that. I'm sure this is a completely well-known thing to experts, but it wasn't well known to us; it is a nice thing. So what we want to say is: we have our Y, and we're not really using anything about Y except that the volume of balls is comparable to the Euclidean volume, so this could be very general. And we have our singular set, let's call it Σ, in Y. The codimension we want is bigger than two; that's the crucial thing, in the Hausdorff sense. Then there exists a cutoff function, call it G, vanishing on an arbitrarily small neighbourhood of Σ, such that the integral of |∇G|² is less than an arbitrary ε. So for any closed set of codimension bigger than two, we can find a cutoff function making this L² norm of the derivative as small as we like. Let's just do this and then pretty much stop. The definition of Hausdorff codimension says that we can cover Σ, which is a compact set, by a finite number of balls of radius r_i such that, for some λ bigger than zero, the sum of the r_i^{n-2-λ} is as small as we like; here n is the dimension of Y (this is 2n − 1 in our previous notation, but that would look confusing). In particular the sum of the r_i^{n-2} is small, and we have the extra power λ in hand.
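Two computations from this passage, written out; the bookkeeping with the extra power λ is my reading of "we had the extra power at hand":

\[
\int |\nabla\beta_\delta|^2\,dV \;\lesssim\; \delta^{-2}\cdot\delta^{2n}
\;=\; \delta^{2n-2} \;\longrightarrow\; 0 \qquad (\delta\to 0,\ n>1),
\]
\[
\Sigma \subset \bigcup_i B(x_i,r_i),\qquad
\sum_i r_i^{\,n-2} \;\le\; \Big(\max_i r_i\Big)^{\lambda}\,\sum_i r_i^{\,n-2-\lambda},
\]
% careful: in the first line 2n is the real dimension of the cone, while in
% the second line n is the dimension of Y, following the lecture's change of
% notation. Codimension 2 + lambda lets us make sum r_i^{n-2-lambda} small,
% hence sum r_i^{n-2} small, with lambda left over to absorb the logarithm
% that appears below.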
And then, by a standard elementary covering argument, we can arrange the cover so that the balls of, say, one-tenth the radius are disjoint. On each of these balls we take a standard cutoff function f_i: there's a ball B_i of radius r_i, and f_i is equal to one on B_i, supported in the twice-size ball 2B_i, with derivative bounded by some fixed constant times r_i^{-1}. Then I can write down f, the sum of the f_i. This is now a function which is positive on a neighbourhood of Σ. And then, taking a suitable function ψ of one real variable, I can write down g = ψ(f). Here ψ just wants to be something with bounded derivative, equal to one at zero and vanishing for values greater than one, or something like that. So g is a function equal to one away from Σ and vanishing on a small neighbourhood of Σ, which is the G we want. The crucial thing is to estimate the L² norm of the derivative of this. But because ψ has bounded derivative, it suffices to estimate the L² norm of the derivative of f. So we need to estimate that. Let's first imagine what is actually very likely to happen anyway: that the supports of the f_i are disjoint. Then the derivative of f is the sum of the derivatives of the f_i, and when we take the square there are no cross terms; we just get the sum of the squares. So, this is just imaginary for the moment, each term has size r_i^{-2}, just as in the calculation we did before, and it's supported on a set of volume r_i^n, so the total is essentially the sum of the r_i^{n-2}. And that's exactly the quantity we made very small. So that would be fine. The problem is that we do have the cross terms. If each ball only met a fixed number of other balls, then we'd be all right: the cross terms would only multiply this sum by a fixed number. So the difficulty has to do with the intersections of balls. But there's a simple way of overcoming that. Let I be the index set, and let I_α be the set of i where r_i is approximately 2^{-α}, say between 2^{-α} and 2^{-α+1}; we can suppose the largest r_i is 1. The observation is that, because we chose the one-tenth-size balls to be disjoint, not too many balls of the same or bigger scale can intersect a given one. So if j is in I_α, then for each β ≤ α the number of balls in I_β which intersect B_j is bounded by some universal number. What we're saying is: if we have some balls of essentially fixed size which all meet some much smaller ball, then we can't have too many of them, because the tenth-size balls are disjoint; if we had too many, we'd look at the tenth-size balls and get more volume than we know is there. It's just a basic packing argument. What we're using is that, in our situation, the volume of balls is bounded above and below, in terms of the radius, by the Euclidean volume.
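The disjoint-support computation, and the dyadic bookkeeping, in symbols; this is my transcription of the blackboard calculation:

\[
\int |\nabla f|^2 \;=\; \sum_i \int |\nabla f_i|^2
\;\le\; C \sum_i r_i^{-2}\, r_i^{\,n}
\;=\; C \sum_i r_i^{\,n-2}
\quad\text{(if the supports are disjoint),}
\]
\[
I_\alpha \;=\; \{\, i \;:\; 2^{-\alpha} \le r_i < 2^{-\alpha+1} \,\}.
\]
% Packing fact: for j in I_alpha and any beta <= alpha, the number of i in
% I_beta with 2B_i meeting 2B_j is universally bounded, because the disjoint
% tenth-size balls (1/10)B_i all lie in a ball of radius comparable to
% 2^{-beta} around B_j, and each has volume comparable to (2^{-beta})^n.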
So in the end, when you do that, we can write: the integral of |∇f|² is less than or equal to, let me just get a 2 in, twice the sum over pairs with r_j less than or equal to r_i of the integral of |∇f_i| |∇f_j|. Because r_i is the bigger one, |∇f_i| is at most a constant times r_j^{-1} too, so the integrand is less than a constant times r_j^{-2}. So now we sum first over i and then over j. The contribution from the ball of size r_j is what we had before, of order r_j^{n-2}. But when we sum over the i's that contribute, we're going to get a log(r_j^{-1}) term coming in, because at each scale we have a bounded number of balls. So all that happens when we take the intersections into account is that we get an extra log term, the sum of r_j^{n-2} log(r_j^{-1}), and that's still controlled, because we had the extra power λ at hand. Okay, so that's the end of what I'm saying about the technical side; this is the proof of the main thing. Let's go back to how we get from this technical theorem, the bound on ρ, to proving that the Gromov-Hausdorff limits are algebraic, the thing we started off saying. To write that out in full takes some work, but the main point is very simple. First of all, we can choose k such that ρ is non-zero everywhere, so by sections of L^k we do get a map into projective space, given by the sections, where we think of the projective space with the Fubini-Study metric corresponding to the L² norm on the sections of L^k. So what we want to do is to estimate the derivative of this map; what shall we call it? Say F. What is this map given by? In a local patch, we take the sections s_0, s_1, up to s_N, say, and divide by s_0. The derivative has a contribution from the derivatives of the sections, but that's bounded by what we said before. The only way it could get large is if s_0 gets small. But that's just what we've proved: we can always choose an s_0 which is not small. So the lower bound on our sections, our control of ρ, gives us a fixed bound on this derivative. Is that clear? Because we only need to control the size of the denominator. So that means that when we take X_i converging to Z in the Gromov-Hausdorff sense, it follows just from the definitions that this uniformly Lipschitz map extends by continuity to a map from Z to projective space. Moreover the image must be an algebraic variety: it's just the limit of the images of our maps, and if we take a sequence of algebraic varieties with bounded volume, they're going to have a limit in that sense; things are going to match up. So what you see first is that you get a map from the Gromov-Hausdorff limit into projective space whose image is an algebraic variety of some kind. And then you need to do a bit more work to show that you can actually arrange this to be a homeomorphism, and all the other more refined statements that we made. But the basic point is that if you take any two given points in Z, then by taking sufficiently large k you can separate them by these localised sections that we chose. So although for a given k two points of Z might possibly come together, by going to a larger k you can separate them out by sections.
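Here is a sketch of why the control of ρ bounds the derivative of F; the affine chart and the quotient-rule estimate are my reconstruction of "we only need to control the denominator":

\[
F \;=\; \Big(\frac{s_1}{s_0}, \ldots, \frac{s_N}{s_0}\Big),
\qquad
\Big|\, d\big(s_j/s_0\big) \Big| \;\le\;
\frac{ |\nabla s_j|\,|s_0| \;+\; |s_j|\,|\nabla s_0| }{ |s_0|^2 },
\]
% the numerator is bounded by the sup and gradient estimates for L^2-unit
% holomorphic sections (Moser iteration), and near any given point we may
% take s_0 to be a section with |s_0| >= b there, by the lower bound on rho;
% so |dF| is bounded by a fixed constant, uniformly in i.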
You use a section which is large at one point and small at the other, and vice versa. Okay, so that's the end of this morning's lecture. Let's see; I think it makes sense to go on with this a bit. One last question. [Question inaudible.] Well, we're not doing that here. So the question I want to discuss now is: what is the best way of choosing a metric, a Hermitian metric on a vector space, given the data we have, namely an endomorphism of it? We think of this as analogous to asking: what is the best way of choosing a metric on a manifold, given the complex structure? And there's a good answer to this question which is essentially familiar. If we have a metric H, then our endomorphism α has an adjoint, formed in the natural way. So what we'd like to say is: we'd like to normalize the metric such that α commutes with its adjoint. And it's a familiar theorem when you can do this. Because what you prove is that if you have this property, the endomorphism is diagonalizable; conversely, if it's diagonalizable, you can find such a metric. So precisely the question of whether you can do this is the question of diagonalizability, and if you can do it, the metric is essentially unique. So this is the kind of prototype for what we would like to do. We have a good criterion, and we can say exactly when we can solve the problem, but we can't always do so; likewise we can't always find a Kähler-Einstein metric on a manifold, and we have to do something slightly more subtle. So this is the use of this general theory, which, as we've already mentioned, is due to a number of other people. A different way of thinking about this is that we choose an identification of our vector space with C^m, where we fix the standard metric on C^m. But now we can absorb the choice of this identification into the action of GL(m, C). So another way of talking about the same thing is to say that we consider an endomorphism of C^m, but now we consider its whole orbit under conjugation, and we're trying to find in this orbit a normal matrix. And it's just an elementary variation to say that's the same as finding an extremum of the norm, the standard norm of an endomorphism, on the orbit. So if we can minimize the norm in our orbit, we will satisfy this condition. And the Kempf-Ness theorem says that you can do the same for any representation of such a group; you have the same story. Having a closed orbit, which in the endomorphism example is the diagonalizability condition, is exactly the condition under which you can minimize the norm, and the minimizer is unique up to the action of the compact group. So in general, we consider a compact group G, its complexification G^c, and a representation of G^c on a vector space W; then if we have a point of W, the orbit is closed exactly when we can minimize the norm on it, and the points with closed orbits are precisely the polystable ones. These are the right objects: on the one hand they're good for moduli theory, on the other hand they correspond to these preferred representatives minimizing the norm. But there's a more general way of setting things up, in which we consider some Kähler manifold A with a line bundle Λ over it. We suppose that G and G^c act on A, and we have a lift of the action to this line bundle. And then we have a similar story. In the case we were considering, A would be the projective space of W, and Λ the tautological line bundle.
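For the endomorphism prototype, the variational statement can be checked directly. This is a standard calculation; the Hilbert-Schmidt norm is my choice of norm here:

\[
\frac{d}{dt}\Big|_{t=0} \big\| e^{th} A\, e^{-th} \big\|^2
\;=\; 2\,\big\langle\, [A, A^*],\, h \,\big\rangle,
\qquad h = h^*,\quad \|A\|^2 = \operatorname{tr}(A A^*),
\]
% so A is a critical point of the norm on its conjugation orbit exactly when
% [A, A^*] = 0, that is, when A is normal. The Kempf-Ness picture then says
% the minimum is attained precisely when the orbit is closed (here: when A
% is diagonalizable), and the minimizer is unique up to unitary conjugation.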
So we get back to the previous picture when A is projective space. The crucial thing is that if we're just interested in a single orbit, as we will be in our interpretation, then we can restrict to a single G^c-orbit, and everything is invariant under G. So you can pull back your data: the function we're trying to minimize, which in the vector space picture is just the norm of the vector, restricted to the orbit, is invariant under G, so it defines a function F on G^c/G. So we can write this down, and everything can be expressed in terms of the geometry of the symmetric space G^c/G. In fact, what it boils down to is that this is a convex function: we have geodesics on G^c/G, and the functions you get from this picture are convex along geodesics. That's why they have good minimization properties. Conversely, if you have a convex function, then essentially you can return to the previous picture. So studying an orbit in this situation, forgetting the geometry of all these inputs, is literally just studying convex functions on G^c/G. What we'd like to do, if we get to it in time, and this is something people have been aware of for some years, is to fit our Kähler-Einstein problem into an infinite-dimensional differential-geometric setting of this kind. Let's talk about that tomorrow. For now, let's just talk about the one-parameter subgroups; this will make a connection with what we've been talking about. We want a numerical criterion for whether we have these stability conditions. Let's think about our endomorphism example, and an orbit that isn't closed. The one-parameter subgroups are a way of seeing why we can't minimize the norm in such an orbit: when we conjugate by a suitable one-parameter subgroup, we can make the norm strictly decrease; for every value of the parameter we stay in the same orbit, but the infimum is only attained in the limit, outside the orbit. Now, if we act by a one-parameter subgroup on any vector, then of course we're going to decompose the vector into pieces with certain weights; in the matrix example the weights are differences of the exponents, and if we take a bigger representation we get some bigger collection of weights. So under this one-parameter subgroup there are three things that could happen. The norm could go down to zero; that's when all the weights appearing are strictly positive. It could be not going down to zero but bounded; that's when we have a zero weight but no negative weights. Or it could go off to infinity; that's when there are negative weights. So the crucial numerical invariant that we need, to detect what's happening, is the smallest weight appearing in the non-zero components. So the numerical criterion we take is: we just consider each one-parameter subgroup, and we look at the weights of the action; the question of whether the vector goes down to zero or blows up is just a question of the sign of this smallest weight. And this is exactly where the Futaki invariant, which is the crucial thing, comes from. In fact the numerical criterion of Hilbert and Mumford says precisely that you can detect stability by restricting to one-parameter subgroups: the orbit is closed if and only if this weight, with the appropriate sign convention, is positive for every one-parameter subgroup. That's where we'll stop. Tomorrow I'll begin by carrying on with this kind of formal story, about moment maps and how they connect with what we've been talking about.
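The weight picture for a one-parameter subgroup λ, with the three cases as t goes to 0; the sign convention in the last line is the usual Hilbert-Mumford one, as I read the lecture:

\[
\lambda(t)\cdot v \;=\; \sum_{w} t^{\,w}\, v_w \quad (v_w \ne 0),
\qquad
\mu \;=\; \min\{\, w \;:\; v_w \ne 0 \,\}.
\]
% As t -> 0:  mu > 0 :  |lambda(t) v| -> 0, so 0 lies in the orbit closure;
%             mu = 0 :  the norm stays bounded away from 0 and infinity;
%             mu < 0 :  the norm blows up.
% Hilbert-Mumford criterion (roughly): the orbit of v is closed and v is
% stable iff the norm blows up along every non-trivial lambda, i.e. mu < 0
% for all lambda; equivalently the weight -mu is strictly positive for every
% one-parameter subgroup.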