OK, I think I'm going to start. Let me quickly recall what we introduced last time: a generalization of the filling function. We started with a metric space X. Then we supposed there's a bigger space Y that contains X, and for a closed Lipschitz curve c in X we defined the filling area of this curve in the bigger space Y as the smallest mass of an integral 2-current S in Y whose boundary is exactly the current induced by c. Maybe I'm just going to call that current, the integral 1-current induced by c, T_c. So that means you're allowed to leave the space X. For example, think of the following: take this lattice, the one-dimensional grid in R^2. If you already saw geometric group theory, you know the so-called Dehn function; if you just look at curves in the grid itself, you can't fill them. However, if you make your space bigger, if you embed the grid somewhere where, say, into every square you can put a cap, a half-sphere, then suddenly you can fill. So a space can have a lot of holes, and if you pass to a suitable bigger space Y, you will be able to fill. Let me just remind you: from there we defined the so-called filling area function of X in Y, which looks at all curves of length at most r and takes the one that is worst to fill. Of course we can do the same thing if we're just interested in fillings by disks. That would be the same definition, except that instead of taking an integral 2-current, you take the parametrized Hausdorff area of a Lipschitz disk, that is, of a Lipschitz map from the unit disk into Y whose restriction to the boundary circle is c. We already said that, because with currents you have many more competitors than just Lipschitz maps of the disk, in principle the current version should be smaller...
Sorry, I should have a zero here, a superscript zero, to show that this is a different function: it only uses disks. So with disks you have fewer competitors, and hence the current version should be smaller. The problem is that we're taking two different measures: here we have Hausdorff measure, there mass of currents, and they only agree up to a constant. If you have a Riemannian manifold, for example, they are the same. So we can define the filling area function with disks for X in Y, and up to a constant this is bigger or equal to the filling function up there. Now let's do one more thing quickly, because we will need it even if we're only interested in Riemannian manifolds and fillings inside the Riemannian manifold itself. So even if you think "this guy is just doing more and more general things" — no, we'll actually need this. You can ask yourself: given a metric space X, in which bigger metric space can you fill best? The best is actually a Banach space, an L-infinity space. For given X, let L^infinity(X) be the space of all bounded functions on X — just functions that are bounded, no measurability or anything — with the supremum norm. This clearly gives you a Banach space, and you can embed X isometrically into it, so you can view X as a subset. This is done by the so-called Kuratowski embedding. How do you embed? First fix a point and define a map from X into L^infinity(X) using distance functions: to x in X we need to associate a bounded function, so for y in X we would like to take y |-> d(x, y). If X is bounded, this gives you a bounded function, and you easily check that this is an isometric embedding.
If X is unbounded, we have to make this bounded, and the way you do that is simply by subtracting something, using the fixed base point: y |-> d(x, y) - d(x_0, y). This gives you a bounded function, and you can check immediately that the resulting map is isometric, that is, distance preserving. Now, L^infinity spaces like this have a very nice feature. It's not awfully hard to prove, but it's definitely a non-trivial fact. Here I should maybe say that X need not even be a metric space: for the construction of L^infinity(X) you don't need a metric, it works for any set. The nice feature is the following: whenever you have a Lipschitz map from a subset A of a metric space B into such an L^infinity space, say it is lambda-Lipschitz, then there exists a Lipschitz extension to the whole of B with the same constant — a lambda-Lipschitz extension. I will not prove this; you can find it in many books on Lipschitz extensions. Applied to our situation, what does that mean? We have the Kuratowski embedding, which is isometric. So if X sits in a bigger space Y, we can extend this isometric embedding to a 1-Lipschitz map from Y into L^infinity(X). In our setting from before, that means there exists a 1-Lipschitz extension of the Kuratowski embedding from all of Y to L^infinity(X). As a direct consequence — because 1-Lipschitz maps do not increase mass and do not increase Hausdorff area — for any curve c, filling in L^infinity(X) is at least as good as filling in any other metric space Y that contains X.
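A small numeric sketch of the Kuratowski embedding just described — not from the lecture, just a toy finite metric space of my own choosing. The point is that the sup distance between the functions d(x, .) - d(x_0, .) and d(x', .) - d(x_0, .) is attained at y = x (or y = x'), which gives exactly d(x, x'):

```python
import itertools, math

# A finite metric space: four points in the plane with Euclidean distance.
points = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]

def d(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

base = points[0]  # the fixed base point x0

def kuratowski(x):
    """x -> the bounded function y |-> d(x, y) - d(x0, y), stored as a tuple of values."""
    return tuple(d(x, y) - d(base, y) for y in points)

def sup_dist(f, g):
    """Supremum distance in L^infinity over the (finite) ground set."""
    return max(abs(a - b) for a, b in zip(f, g))

# Isometry check: ||kappa(x) - kappa(x')||_inf equals d(x, x') for all pairs,
# since |d(x,y) - d(x',y)| <= d(x,x') with equality at y = x.
for x, xp in itertools.combinations(points, 2):
    assert abs(sup_dist(kuratowski(x), kuratowski(xp)) - d(x, xp)) < 1e-12
```

On an infinite space the same formulas work verbatim; the finite ground set here is only so the supremum is a finite maximum.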
And of course the same thing works with Hausdorff area. So L^infinity(X) is the best space to fill in — best in the sense that you need the least area. Now, quickly, we'll need the following for later: if V is a finite-dimensional normed space, then we can embed it linearly and isometrically into little l^infinity, the space of all bounded sequences — in the notation up there, this would be L^infinity(N). How do you do that? You choose a countable dense subset (xi_n) of the unit ball of the dual V* — the dual is still a finite-dimensional space, so such a countable dense subset exists — and you map v to the sequence (xi_n(v)). You check immediately that this is isometric and, of course, linear. And little l^infinity, since it is just L^infinity(N), has exactly the same extension property as above, because that held for every set X and for any pair A contained in B. Any questions so far? So now I'm going to state the theorem that I already announced a few times, and then we're going to prove it. This is the theorem saying that if the filling area function grows a bit better than in Euclidean space, then it actually has to be linear already — quadratic drops down to linear. You start with a geodesic space; I'm going to state a more general version. Let X be a geodesic metric space. Suppose, first of all, that there exists a metric space Y, again geodesic and complete — completeness we need in order for integral currents in Y to make sense.
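A toy version of this embedding, assuming nothing beyond what was just said: V = R^2 with the l^1 norm, whose dual unit ball is the l^infinity ball. Instead of a countable dense subset I take a finite grid of functionals that happens to contain the four extreme points (+-1, +-1), which already realize the supremum:

```python
# V = R^2 with the l^1 norm; its dual V* is R^2 with the l^infinity norm.
def norm_l1(v):
    return abs(v[0]) + abs(v[1])

# Finite stand-in for a countable dense subset of the dual unit ball.
functionals = [(a / 2.0, b / 2.0) for a in range(-2, 3) for b in range(-2, 3)]

def embed(v):
    """v -> the sequence (xi_n(v)) of values of the chosen functionals at v."""
    return tuple(a * v[0] + b * v[1] for (a, b) in functionals)

def sup_norm(seq):
    return max(abs(t) for t in seq)

# Isometric: the sup of |xi(v)| over the dual unit ball recovers ||v||_1,
# and the map is clearly linear in v.
v = (3.0, -2.0)
assert abs(sup_norm(embed(v)) - norm_l1(v)) < 1e-12
```

For a general norm the grid would only give the norm up to a small error, which is why one takes a dense sequence and gets an exact isometry in the limit.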
So Y contains X, and Y is at finite distance from X, meaning there exists a number such that any point in Y is at most that far from a point of X. And suppose that the filling area function taking curves in X and filling them in Y grows at most quadratically. Note that, up to constants, this function is smaller or equal to the one with the superscript zero, so this is actually a weaker condition to ask. And here there's nothing about the constant: the constant doesn't matter, it can be 100,000 — the condition can read "smaller or equal to 100,000 times r squared". Now, secondly — can you still read this? is that not too far below? — suppose furthermore, and here comes the slightly-below-Euclidean condition: there exist epsilon > 0 and r_0 such that the filling area function in X, now taking the best, the smallest filling you can find, is bounded by something a little better than in the Euclidean plane, namely (1 - epsilon)/(4 pi) times r squared, for all sufficiently large r, meaning r >= r_0. So you don't necessarily need this in the space itself; you can go to the best space you want — the smallest filling you can find anywhere in the universe. Then the conclusion is: X is Gromov hyperbolic, and thus the filling area function, even with disks, is at most linear. Is the statement clear? In particular, you can replace the L^infinity(X) by just X; and if you replace both Y and L^infinity(X) by X, then that's the first statement I gave about this theorem. So this is a generalization, a strengthening, of what we had.
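Collecting the hypotheses just listed, a cleaned-up restatement (the notation for the two filling functions is mine, following the boards above):

```latex
\textbf{Theorem.}\quad Let $X$ be a geodesic metric space. Suppose:
\begin{itemize}
\item[(1)] there is a complete geodesic space $Y \supset X$, at finite Hausdorff
  distance from $X$, with
  $\operatorname{FillArea}_{X,Y}(r) \le C\, r^2$ for all $r$ and some $C < \infty$;
\item[(2)] there exist $\varepsilon > 0$ and $r_0$ with
  $\operatorname{FillArea}_{X,\,L^\infty(X)}(r) \le \tfrac{1-\varepsilon}{4\pi}\, r^2$
  for all $r \ge r_0$.
\end{itemize}
Then $X$ is Gromov hyperbolic.
```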
And even if you're just interested in the classical case, where you have X here and X there as well, you actually end up proving exactly the same thing, so that's why I stated it right away like this. Okay, I should make one remark — I should have made it already a little while ago. Let V be a two-dimensional normed space, so take X to be a normed plane, and let Omega_V be an isoperimetric region. That means you solve the isoperimetric problem: for a given area, you find the region with smallest perimeter — with respect to any area measure you want. You can choose Lebesgue measure, or the Hausdorff measure, or any other; translation-invariant measures all agree up to a constant, and of course that doesn't change the shape of the region. So take an isoperimetric region. Then its Hausdorff measure equals the isoperimetric constant of this space times the length of the perimeter squared. What I claim is that the isoperimetric constant of any normed plane is bigger or equal to 1/(4 pi). The inequality isn't the other way around — don't be confused. This just means that the area equals a constant times the length squared, and the constant is bigger or equal to the Euclidean one. In some sense it says that among all two-dimensional normed spaces, the Euclidean plane is the one where you can fill best: for a given boundary length you need the least area. Of course you now think — actually I don't want to go into too much detail about why this is true. You might say this is the crucial point of the theorem; actually, I don't think so.
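Where the Euclidean value comes from: in the Euclidean plane the isoperimetric region is the round disk, and for a disk of radius $r$,

```latex
\ell(\partial B_r) = 2\pi r,
\qquad
\mathcal{H}^2(B_r) = \pi r^2
  = \pi \left( \frac{\ell(\partial B_r)}{2\pi} \right)^{\!2}
  = \frac{1}{4\pi}\, \ell(\partial B_r)^2 ,
```

so the Euclidean isoperimetric constant is exactly $\frac{1}{4\pi}$, and the claim is that every normed plane has constant at least this.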
So you could actually restate the theorem as follows: suppose we have a metric space where, on a large scale, we can fill a little bit better than in the best normed plane; then the space is Gromov hyperbolic. Clearly no normed space is Gromov hyperbolic, so among the two-dimensional normed spaces themselves you can definitely not have a theorem giving you Gromov hyperbolicity. And even if this constant were different, we would end up with the same constant in the conclusion, which is still the best thing you can expect, because among all normed planes this is the extremal value. Actually the same is true if, instead of the Hausdorff measure, you take the mass of the current induced by the region: the mass of that current is also bigger or equal to 1/(4 pi) times the length squared, so the minimum of the two is bigger or equal to that. In fact one has more — this is a result from convex geometry. If you take the filling area of the boundary curve of Omega_V in little l^infinity, into which we were able to embed V linearly and isometrically, then this is still bigger or equal to 1/(4 pi) times the length squared. So the filling area is actually equal to the area of the flat region itself: even the best filling anywhere is no better. Let me just illustrate this real quick in a picture. Here you have your two-dimensional normed plane V, embedded into l^infinity, the best space to fill in; here you have your isoperimetric region in V; and now you look at fillings in l^infinity.
Here you fill either with disks or with currents; it doesn't matter. Now, if we replaced V and l^infinity by the two-dimensional Euclidean plane inside a Hilbert space, the claim would be trivial, because then we have the orthogonal projection, which is 1-Lipschitz. However, in general normed spaces you don't have such a projection anymore — in general there exists no 1-Lipschitz projection. So you can't just argue: "if I take a filling out in l^infinity, I project it down into V and it covers the whole region." That doesn't work. For the mass, though — and recall that the mass of currents is lower semi-continuous — one can show that there actually exists a linear projection such that the mass is not increased. It is not 1-Lipschitz, it distorts lengths, but it does not increase the mass of 2-currents. And yes, this is completely general, with the caveat that the two dimensions here have to be the same: whenever you have a k-dimensional normed space embedded into some other normed space, there is a projection which does not increase the mass of k-currents. This is a result due to Gromov, who gave the argument, and you can find an easy proof in a paper of Alvarez and Thompson — it's called something with volumes of normed spaces, I think. It's a very nice paper where they describe all kinds of different area measures on a normed space and the properties they have. So, maybe quite surprisingly —
— for 50 years, or 60 years probably by now, the analogous statement for the Hausdorff measure was not known at all. There is actually still a big open question that asks: if you take a region in a k-dimensional normed space embedded into another normed space, is the flat region a volume minimizer among all fillings that go outside? Are planar domains minimizers for the Hausdorff measure? This is one of the Busemann–Petty problems: if you have a k-dimensional plane V in a normed space, are domains in V Hausdorff-measure minimizers, in dimension k, among all k-dimensional surfaces — disk-type or not — with the same boundary? That's been a big open problem; I think for all k above two it's still completely open. For k equal to two, only very recently, in 2012, the answer was shown to be yes — this is a paper by Dima Burago and Ivanov from 2012 in GAFA. And what I want to say here actually uses their answer. When I proved my theorem, this was still completely unknown, because that was four years earlier. But you can circumvent the problem by introducing yet another area measure, comparable to the Hausdorff measure, for which the minimizing property of flat regions was known; and since a constant beating 1/(4 pi) for the Hausdorff measure also beats the corresponding constant for this other measure, I could use that instead.
So what I'm going to present now just uses the Burago–Ivanov answer, because then it's a little bit easier. So this is still a big open problem except in dimension two. Okay, are there any questions? We are almost ready for the proof. We need one more ingredient, and this is a generalization of Rademacher's theorem to Lipschitz maps into metric spaces. We already saw something like this with Emanuele: Emanuele went from R^n into the space of Q-tuples. Now we're going to go into an arbitrary metric space. These are called metric derivatives, because the notion is metric rather than really giving you a derivative. We start with a Lipschitz map from an open set in R^m into an arbitrary metric space. Now, of course, you can't make sense of a difference quotient in a metric space. You could say: that's what we would like for the directional derivative, we would like to take a limit of (phi(x + r v) - phi(x))/r — but if X is just a metric space, that doesn't make sense. You could of course say: let's embed X into a normed space, then we can make sense of it. However, if you take L^infinity, in general the derivative does not exist — you can't pass to a limit. The spaces where you always can are the normed spaces with the Radon–Nikodym property, and L^infinity doesn't have it. So the idea — this is now all due to Bernd Kirchheim, in the 90s, I think the paper is from '94 — is to work purely metrically: instead of looking at differences, we just look at the distances, and we would like to show that distance-wise the map behaves like a derivative. So now you can ask — sorry, I have f's and phi's all over the place, sorry.
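The distance quotient just described can be sketched numerically. A minimal illustration, not from the lecture — the map phi and the sup-metric target are my own toy choices:

```python
import math

def d_inf(p, q):
    """Sup-metric on R^2, playing the role of the target metric space X."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

def phi(x):
    """A Lipschitz map R -> (R^2, d_inf); purely illustrative."""
    return (x, 2.0 * math.sin(x))

def metric_derivative(phi, x, v, r=1e-6):
    """Difference quotient d(phi(x + r v), phi(x)) / r approximating md phi_x(v)."""
    return d_inf(phi(x + r * v), phi(x)) / r

# At x = 0 the classical derivative is (1, 2), so md phi_0(v) should be
# ||(v, 2v)||_inf = 2|v|; the quotient below comes out close to 2.0.
print(metric_derivative(phi, 0.0, 1.0))
```

Note that only distances in the target are used, exactly as in the definition; no difference of points in X is ever formed.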
So now you can ask yourself: does this limit, which is like a directional derivative — the limit as r goes to 0 of d(phi(x + r v), phi(x))/r — exist? If the limit exists, we call it the metric derivative of phi at the point x in direction v, written md phi_x(v). Bernd showed that yes: at almost every x in your open set, this limit exists for every v — and much more. Actually md phi_x is then a seminorm, with even better properties that we'll see. Just a quick remark: if X is equal to some R^n, we have Rademacher's theorem, which tells you that a Lipschitz map is differentiable almost everywhere in the classical Frechet sense, and then clearly the metric derivative exists at all of these points in every direction, and it is just the Euclidean norm of the image of v under the differential. So let me give you Bernd's theorem. If phi is as above, then the metric derivative exists in almost every point in all directions — exists for almost every x and all v. Furthermore, we have something much stronger. Sorry? Oh, yes, sorry — yes, of course. I think I said it, didn't I? I hope at least I said it; otherwise, of course, that's complete nonsense. Actually, one can say much more, namely that locally, in the right sense, the distances behave almost like the metric derivative, up to an error which only depends on the Euclidean norm. So furthermore, there exist countably many compact sets C_i with the following properties: firstly, they fill out almost everything of U; secondly, the metric derivative exists at every point of a given C_i and in every direction — for all x in C_i and for all v. Now comes the crucial property.
For every epsilon and every i, there exists a radius r(i, epsilon), depending on the set C_i and on epsilon, such that for every x in C_i the distance d(phi(x + v), phi(x + w)) is comparable to md phi_x(v - w), in the sense that the difference of the two is bounded in absolute value by epsilon times the Euclidean norm |v - w|, for all directions v, w of Euclidean norm at most r(i, epsilon) and such that at least one of the two points, say x + v, is also in C_i. The first and second parts of the theorem will be more important for us, but here is the picture: you have the set C_i, and for every point x in it, if you go in two directions, to x + v and x + w, then the distance between the images is comparable to the metric derivative in the direction v - w, up to an error. So let me give a few consequences of this — let me do it over here. Consequence: the metric derivative at any point of a C_i is a seminorm. This follows almost directly from the crucial property; you can check immediately that md phi_x has all the properties of a norm, except that maybe certain vectors get value zero — for every x in every C_i. Furthermore, just quickly: if phi is bi-Lipschitz, for example, then it's a norm, because then the metric derivative can never be zero in any direction. That's a first consequence; a second consequence, which we will need, is the following. Suppose x in C_i is such that md phi_x is actually a norm. Then for every delta — so what happens if this is a norm? Well, any two norms on R^m are comparable.
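In formulas, the comparison argument about to be spelled out: since md phi_x is a norm on R^m and all norms there are comparable, there is some lambda > 0 with lambda |u| <= md phi_x(u) for all u. Given delta, apply Kirchheim's estimate with epsilon = lambda * delta:

```latex
\bigl|\, d(\varphi(x+v), \varphi(x+w)) - \mathrm{md}\,\varphi_x(v-w) \,\bigr|
  \;\le\; \varepsilon\, |v-w|
  \;\le\; \delta\, \mathrm{md}\,\varphi_x(v-w),
```

hence

```latex
(1-\delta)\,\mathrm{md}\,\varphi_x(v-w)
  \;\le\; d(\varphi(x+v), \varphi(x+w))
  \;\le\; (1+\delta)\,\mathrm{md}\,\varphi_x(v-w),
```

which is exactly the (1 + delta)-bi-Lipschitz statement, valid for |v|, |w| below the radius belonging to this epsilon.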
So that means the Euclidean norm is comparable to md phi_x; in particular the Euclidean norm is bounded by this norm up to a constant. Since I can make the epsilon in the crucial property as small as I want, epsilon times the Euclidean norm can be made much, much smaller than md phi_x(v - w). That means d(phi(x + v), phi(x + w)) is bounded from above by (1 plus a very small constant) times md phi_x(v - w), and from below by (1 minus a small constant) times the same. So once you endow R^m with this norm, the map becomes an almost-isometry: a (1 + delta)-bi-Lipschitz map, for every delta that you want. And of course, the smaller the delta, the closer you have to stay, because you still have the condition that v and w be small. So: for every delta there exists a radius, say r(i, delta), such that the map phi restricted to the intersection of C_i with the ball around x of radius r(i, delta) — now with the metric coming from the norm md phi_x — is (1 + delta)-bi-Lipschitz onto its image in X. The upshot of this is, I think, a very remarkable thing: whenever you have a bi-Lipschitz map from an open set of R^m into a metric space, then in this metric space you find pieces that are arbitrarily close to pieces of normed spaces of dimension m. And this is exactly what we're going to use. One should say one more remark — I'll make it down here because I will not really go into it. The open set U can be replaced by a merely measurable set. Then of course you say: what is the problem if U is just measurable? Well, you can't even go in the direction x + r v —
— I mean, that doesn't even make sense anymore if U is just measurable. Well, there's a trick to circumvent that: we can embed X into L^infinity(X), and we already saw that once you have a Lipschitz map into there, you can extend it. So if you have phi from a measurable set to X, you can extend it to the whole of R^m as a map into L^infinity(X). Then you have Kirchheim's theorem there, and you just have to check that at Lebesgue density points — where your sets C_i fill out almost a whole neighborhood of the point — the metric derivative does not depend on the extension. Which kind of makes sense: when you zoom in, you only have tiny holes whose size is much, much smaller than the relevant distances, and since the map is Lipschitz, if you hit a point where it's not defined, you can just move a tiny bit away and it doesn't make a big difference. So the measurable case you can handle by a trick like this. Any questions so far? So now, let's prove our theorem. We will not use the full strength; we will only need the upshot from before — that we have almost-isometric pieces of normed planes in our space. That is going to be the crucial thing. So now we're going to prove the theorem, or give at least an outline or a sketch of the proof — this is the theorem about the 1/(4 pi). We assume by contradiction. So what do we have to show? We have to show that if the filling area function is small enough, then the space is Gromov hyperbolic. So assume by contradiction that X is not Gromov hyperbolic. We said that Gromov hyperbolicity — by a theorem of Gromov that I stated — is equivalent to all asymptotic cones being metric trees. Remember asymptotic cones: you scale down the metric —
— you blow down the space and you pass to a certain limit, and the theorem says that X is Gromov hyperbolic if and only if every asymptotic cone is a metric tree. So by this theorem of Gromov, there exists an asymptotic cone X_omega which is not a metric tree. Note that whenever X is geodesic, all asymptotic cones will be geodesic as well — that's not hard. So not every geodesic triangle — three points together with three geodesics — looks like a tripod. From this you can then check immediately, and it's very easy, that there exists a closed Lipschitz curve c: [0,1] -> X_omega which is non-trivial in the following sense. Using the same notation as earlier today, let T_c be the integral 1-current induced by c, just by integration — the push-forward, where I interpret c as being parametrized on the interval [0,1]. Then T_c is non-zero. Actually you can characterize metric trees as follows: a geodesic metric space is a metric tree if and only if every integral 1-current T without boundary is zero. That's an easy characterization. Let me just show you quickly, or just indicate, how that works. We said we have a triangle that is not a tripod. Basically you take such a triangle and you just need to find a Lipschitz 1-form that doesn't cancel out when you integrate over it: you take a function f which is one here and then goes to zero very quickly outside —
— these sides here are geodesics, and a Lipschitz 1-form is a pair (f, pi): for pi you take a distance function, say from this corner point, and since the side is a geodesic, integrating the derivative of pi along it just gives one. And you choose the neighborhood where f is non-zero so small that the rest of the curve doesn't come inside; you can do that precisely because the triangle is not a tripod. So that's a very easy thing. Okay, so now: because the filling area function of X in Y is at most quadratic, we showed — or at least gave an indication of the proof — that every asymptotic cone has an honest quadratic isoperimetric inequality; that we had as a proposition at the end of Wednesday. And we don't actually need the whole thing; we just need the existence of an integral 2-current S in the asymptotic cone whose boundary is the current T_c up there. We even know there is one whose mass is at most quadratic in the mass of T_c. Now, clearly, because T_c is non-zero, S can also not be zero. So S is not equal to zero. Now, we said that integer rectifiable currents are basically just sums of integrations over bi-Lipschitz pieces: we had the representation theorem, which is basically like in Euclidean space, saying that integer rectifiable 2-currents are just integrations over countably H^2-rectifiable sets. That gives us that S is a sum of currents of the following form — remember this — where the theta_i are integer-valued L^1 functions on certain subsets K_i of R^2, and the phi_i are bi-Lipschitz.
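In the notation of the Ambrosio–Kirchheim representation theorem, a sketch of the decomposition just invoked (the precise bookkeeping was in the earlier lecture; the symbols K_i, theta_i, phi_i are as on the board):

```latex
S \;=\; \sum_{i=1}^{\infty} \varphi_{i\#} \llbracket \theta_i \rrbracket,
\qquad
\theta_i \in L^1(K_i, \mathbb{Z}),\quad K_i \subset \mathbb{R}^2,\quad
\varphi_i : K_i \to X_\omega \ \text{bi-Lipschitz},
```

where $\llbracket \theta_i \rrbracket$ denotes integration over $K_i$ weighted by $\theta_i$, and $\varphi_{i\#}$ is the push-forward.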
The phi_i are bi-Lipschitz maps from subsets of R^2 into X, into the asymptotic cone. Each summand is basically just integration over the domain, weighted by the function theta_i, and then pushed forward. Since S is not zero, there exists at least one summand that is not zero. So we now have one bi-Lipschitz map: there exists phi from a certain subset K of R^2 to X_omega which is bi-Lipschitz, with the H^2 measure of the domain strictly bigger than zero. Now we basically use just the upshot of Bernd's theorem: we take a good point — one of those in the C_i's — we zoom in, and what we get is an almost-isometric map once we endow R^2 with the right norm. So the basic picture is the following. You have your set K here, you take a good point x, and now I can use R^2 with the norm md phi_x as a normed plane; this piece is basically contained in that normed space. So now I take an isoperimetric region in there, the way we had it, and — since not the whole region might lie in K — you can rescale this isoperimetric region, make it smaller, and choose a partition of its boundary: finitely many points on the boundary. After rescaling more and more, more and more of these points are extremely close to the set K. And we know that the map phi into X_omega is going to be (1 + delta)-bi-Lipschitz on these small pieces, with delta basically going to zero. So that gives us finitely many points in X_omega, and a (1 + delta)-bi-Lipschitz map from these finitely many points.
But each point of X_ω is by definition an equivalence class of a sequence of points in X, so in the end you get finitely many points in X such that, after rescaling, their mutual distances are almost exactly the prescribed ones. Let me record this — I'm combining a few steps, but you'll be able to fill them in yourself. By Kirchheim's theorem there exists a two-dimensional normed space V = (ℝ², ‖·‖) with the following property: if you take an isoperimetric region Ω in V, a finite set Γ in its boundary, and δ > 0, then there exists an arbitrarily small r > 0 and a map ψ from (Γ, with the metric of the norm) — now into X itself instead of X_ω, since we only ever have finitely many points and can compare the asymptotic cone with the space itself after rescaling, i.e. blowing down — such that ψ is (1+δ)-bi-Lipschitz into X with the rescaled metric. That is just the steps I explained put together. The rest I'll argue in pictures. Here is our two-dimensional normed plane V; here is our isoperimetric region Ω in V; on its boundary we take a partition, that's Γ; and we have our map ψ into X with the rescaled metric, a (1+δ)-bi-Lipschitz map on the points of Γ. Now we would like to construct a curve: the main idea is simply that, having an isoperimetric region here, we can almost map its boundary curve to a curve over there.
We construct a curve simply by joining the images of consecutive partition points by geodesics; that gives a curve c in this space. We already know that short curves in X can be filled very efficiently in L∞(X, d): we can make the filling constant very close to optimal. So we get a filling, a current S, with mass(S) ≤ (1−ε)/(4π) · length(c)². Now I want to transport this back into V. I could try to extend the (1+δ)-Lipschitz map defined on these points, but here is the problem: when you extend a Lipschitz map into a normed space of dimension k, the Lipschitz constant can in general increase by a factor of √k. So we cannot extend the map back into V directly — it would increase the constant too much. But we have already seen the trick over there: embed V linearly and isometrically into L∞. Then the inverse map ψ⁻¹, at first defined only on these finitely many image points, can be extended as a map into L∞, and the extension is (1+δ)-Lipschitz. Now we transport the current S by pushing it forward under this map. What does this give? The boundary of S is the curve c, so the push-forward is something that doesn't quite fill the boundary of Ω — it's almost a filling, and if you make the partition very fine, you can complete it to a filling without losing too much.
And what is the mass of this push-forward? Since the map is (1+δ)-Lipschitz, the mass is at most (1+δ)² times the mass of S, which is at most (1−ε)/(4π) times length(c)² — and length(c) is at most (1+δ) times the perimeter of Ω, since the map was (1+δ)-bi-Lipschitz on the partition. So you have almost a filling whose mass is bounded by (1−ε)/(4π) times the perimeter squared, times a factor like (1+δ)⁴. After filling in the small remaining gaps, you still get a filling of an isoperimetric region in a normed plane that beats the constant 1/(4π) — but we already said that in normed spaces this is just not possible. From here you can complete the proof rather easily, I think. So let's stop here and take a break — let's make it eight minutes, like Emanuele, that's fair. If you have questions, ask; then we look at the other theorem. I'm afraid I won't be able to prove the whole thing there, but I'll give at least some indications. OK, I'll start again. Now we shift gears completely. I'd like to briefly indicate how, with similar techniques and the result from last time, one gets spaces where the filling function is not exactly polynomial. We're now going to use Riemannian manifolds: nilpotent Lie groups with a left-invariant Riemannian metric, nilpotent of step two. The statement: there exists G, nilpotent of step two, with a left-invariant Riemannian metric, such that its filling function — where you fill inside the space itself, of course — does not grow like R^α for any real number α. One remark: we can of course not take L∞ here,
because when you fill in L∞ the filling is always quadratic — it's a normed space, so given a curve you just cone over a point with straight lines, and that gives a quadratic filling. So don't mess with that. Let me quickly recall what a step-two nilpotent group is; we always take our groups simply connected. So these are the preparations. G has a Lie algebra 𝔤 — basically the tangent space at the identity — together with a Lie bracket. "Of step two" means the following: take the subspace of 𝔤 generated by the Lie brackets, i.e. by all [v, w] and their linear combinations. For step two this has to be non-trivial — a proper subspace which is not zero — and if you pass to one more step, taking an element of this subspace and bracketing it with anything in the Lie algebra, you get zero. That means "of step two". For example, in Euclidean space you would already be finished at the first stage, because the Lie bracket is abelian, i.e. everything vanishes. So let us denote this subspace of 𝔤 by W. Now I'll show you that the Lie algebra has a very nice structure. Let V be any complementary subspace: a subspace of 𝔤 such that 𝔤 is the direct sum V ⊕ W. What can we deduce? First of all, any element of W bracketed with an element of V — or with anything in 𝔤 — gives zero.
So we have [V, W] = 0 — the zero subspace — and of course the same for [W, W]. And the linear subspace generated by Lie brackets of elements of V is exactly W. In other words — and this is basically the definition — 𝔤 is a Carnot algebra and G a Carnot group. Why is the span of brackets of V-elements all of W? We know that W is generated by brackets, but whenever one entry of the bracket lies in W the bracket is zero anyway; so elements of W cannot generate anything, and W must be generated by brackets of V-elements alone. In general, a Carnot group of step k is a nilpotent group of step k whose Lie algebra admits a decomposition like this into k subspaces, such that the first subspace generates the second under the bracket, and so on. For step two this is automatically the case, as we just saw. In higher steps there actually are groups where you cannot get a decomposition with these properties, but in step two the world is nice and easy — in this sense. Now, we would like to work with this space, which is a bit complicated, but fortunately we have two very nice facts from the theory of Lie groups. These are important facts, not at all easy to prove; we'll just use them here — you can take them almost as definitions if you want. First: the Lie exponential map exp: 𝔤 → G is a diffeomorphism, and this diffeomorphism lets us pull back the multiplication on G to a multiplication on 𝔤.
And we can determine precisely what the pulled-back multiplication is: this is the Baker–Campbell–Hausdorff formula. Denoting the pulled-back multiplication by ⋆, we have v ⋆ v′ = v + v′ + ½[v, v′] for v, v′ in the Lie algebra, and with this multiplication exp becomes a group isomorphism from (𝔤, ⋆) to G. That the pullback multiplication has this form in step two is due to the Baker–Campbell–Hausdorff formula: for any step k you can determine the product explicitly, and it is this expression plus higher-order terms with more brackets — but since we have step two, everything else vanishes. So instead of working in G, we can just work in 𝔤 with this multiplication. Any questions so far? OK, so now we endow G with a left-invariant Riemannian metric. Remember the plan: we again use asymptotic cones — we want to understand the large-scale behavior of G when we rescale the metric. Since any two left-invariant Riemannian metrics are bi-Lipschitz to each other, we can use any one we like, and a good choice — remembering that 𝔤 = V ⊕ W — is an inner product at the identity, translated around the space, which makes these two subspaces orthogonal. So let d₀ be the distance of a left-invariant Riemannian metric such that at the origin V and W are orthogonal subspaces; that makes life a little easier.
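As a sanity check on the step-two BCH product v ⋆ v′ = v + v′ + ½[v, v′], here is a small sketch (not from the lecture; my own toy coordinates) in the first Heisenberg algebra, where 𝔤 = ℝ³ with bracket [(a,b,c),(a′,b′,c′)] = (0, 0, ab′ − a′b). One can verify numerically that ⋆ is associative and that −v is the inverse of v:

```python
# Step-two BCH product v * v' = v + v' + 1/2 [v, v'] in the
# Heisenberg Lie algebra (toy coordinates, V = span(e1,e2), W = span(e3)).
def bracket(v, w):
    # [v, w] lands in W: only the third component is non-zero
    return (0.0, 0.0, v[0] * w[1] - w[0] * v[1])

def star(v, w):
    return tuple(vi + wi + 0.5 * bi for vi, wi, bi in zip(v, w, bracket(v, w)))

x, y, z = (1, 0, 0), (0, 1, 0), (2, -1, 3)
# associativity: (x * y) * z == x * (y * z)
assert star(star(x, y), z) == star(x, star(y, z))
# -v is the inverse of v, and 0 is the identity
v = (1.0, 2.0, 3.0)
assert star(v, tuple(-c for c in v)) == (0.0, 0.0, 0.0)
```

Note that ⋆ is non-commutative: star(x, y) and star(y, x) differ in the W-component, which is exactly the bracket showing up.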
Now, when you rescale the metric, we'll soon see that certain curves become important — the horizontal curves. Before that, one more construction that will play a big role. We define Lie-algebra homomorphisms, or via exp, group homomorphisms δ_r: always identifying G with 𝔤, write any element of 𝔤 as v + w with v ∈ V and w ∈ W, and set δ_r(v + w) = r·v + r²·w — the V-part scales by r, the W-part by r². Remember the first Heisenberg group: going an amount ℓ in the x-direction and then in the y-direction, the horizontal path climbs very steeply, and a loop of length ℓ ends up at height about ℓ² in the z-direction. So when you scale horizontally by a factor r, the vertical direction naturally scales by r²: scaling by 1/ℓ and 1/ℓ² takes you back down to 1. In that sense the two scalings are compatible, and you can check immediately that with the ⋆-multiplication, δ_r is a group homomorphism — if you put an r instead of the r² on W, it would not be. Check this directly. Now we define the special curves, the horizontal curves; the Heisenberg path that goes in the x-direction and then climbs steeply in the yz-direction is one of them, and they will play a big role because they govern the large-scale geometry of our group G. First we define a sub-bundle of the tangent bundle: it is the left translation of our first subspace V around the group.
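One can check the homomorphism property of δ_r directly in the same toy Heisenberg coordinates as above (a sketch, not from the lecture): the ½-bracket term picks up exactly the factor r·r = r² that the W-scaling provides.

```python
# delta_r(v1, v2, w) = (r*v1, r*v2, r^2*w) is a homomorphism for the
# step-two product (Heisenberg toy model: V = span(e1,e2), W = span(e3)).
def bracket(v, w):
    return (0.0, 0.0, v[0] * w[1] - w[0] * v[1])

def star(v, w):
    return tuple(vi + wi + 0.5 * bi for vi, wi, bi in zip(v, w, bracket(v, w)))

def delta(r, v):
    # scale V by r, W by r^2
    return (r * v[0], r * v[1], r * r * v[2])

x, y, r = (1.0, 2.0, 3.0), (-0.5, 4.0, 1.0), 2.0
lhs = delta(r, star(x, y))        # delta_r(x * y)
rhs = star(delta(r, x), delta(r, y))  # delta_r(x) * delta_r(y)
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))
```

With `r * v[2]` instead of `r * r * v[2]` the assertion fails, which is exactly the lecturer's remark that an r on W would not give a homomorphism.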
This is a sub-bundle of the tangent bundle, called the horizontal bundle, and you can explicitly compute the fiber at a point x ∈ G (always identified with 𝔤): using the ⋆-multiplication, a one-line computation shows it is the set of all v + ½[x, v] with v ∈ V. Now, a curve c in G, say piecewise C¹ (or just C¹), is called horizontal if at every point t where the derivative is defined, c′(t) lies in the horizontal space at c(t). These are special curves, so let me quickly give an example — a very easy construction, again using this multiplication. Whenever you start with a curve in V, you can lift it to a horizontal curve in G. Let c₁ be a piecewise-C¹ curve in V, and define c₂(t) = ½∫₀ᵗ [c₁(s), c₁′(s)] ds. For every t this is an element of W, because brackets always lie in W and W is a subspace. So c := c₁ + c₂ is a curve in G = V ⊕ W, and it is horizontal: its derivative is c₁′(t) plus the derivative of the integral, which is exactly ½[c₁(t), c₁′(t)], so c′(t) lies in the horizontal space at c(t). So for every curve in V we have a horizontal curve in G. And clearly, if I take the length of c with respect to the metric d₀,
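The lift formula c₂(t) = ½∫₀ᵗ [c₁(s), c₁′(s)] ds can be computed numerically in the Heisenberg toy model (a sketch, not from the lecture): for a closed curve in V the lift ends at height ½∮(x dy − y dx), the signed enclosed area. For the unit circle that is π:

```python
# Horizontal lift of the unit circle c1(s) = (cos s, sin s) in the
# Heisenberg group: the W-component of [c1, c1'] is x*y' - y*x', so the
# endpoint height is 1/2 * integral (x y' - y x') ds = enclosed area = pi.
import math

n = 200000
z = 0.0
ds = 2 * math.pi / n
for i in range(n):
    s = 2 * math.pi * i / n
    x, y = math.cos(s), math.sin(s)
    dx, dy = -math.sin(s), math.cos(s)   # c1'(s)
    z += 0.5 * (x * dy - y * dx) * ds    # 1/2 [c1, c1'] integrand

assert abs(z - math.pi) < 1e-3
```

This is exactly the Heisenberg picture from the lecture: a loop of length ℓ in V lifts to a curve ending at height of order ℓ² in the W-direction.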
this is simply the length of c₁ in V with the Euclidean metric — the inner product we fixed there. So we have a means of creating horizontal curves, and a means of computing their lengths which is extremely easy. Please interrupt me if you have questions. Now we define the so-called Carnot–Carathéodory metric, a new metric which is better adapted to the large scale, as we will see. It is usually denoted d_c or d_cc — let me write d_cc, because c and 0 look extremely close in my bad handwriting. For any x and y in G, we define the cc-distance: instead of taking the infimum of lengths of all curves from x to y, we take the infimum of lengths — with respect to the usual metric d₀ — only over curves c that are horizontal, piecewise C¹ as always, and join x to y. Right away from the definition, because we only allow horizontal curves, this quantity is at least the Riemannian distance: d_cc(x, y) ≥ d₀(x, y), since d₀ is the infimum of lengths over all curves. In particular d_cc is non-degenerate: it is non-zero whenever x ≠ y. The triangle inequality is also clear, simply because concatenations of piecewise-C¹ horizontal curves are again such curves: going through a third point might only miss a shortcut. The only thing we have to check is: can we actually join any two points by a horizontal curve — is this actually a metric?
To show this, we need a horizontal piecewise-C¹ curve between any two points. In step two, again, this is easy. In general, for a group of higher step, this is a very non-trivial fact which goes back to Chow and Rashevskii — but in step two the world is much nicer. So let's show d_cc is a metric, i.e. finite. Since left translations preserve horizontal curves, we may assume without loss of generality that x = 0. Write y = y₁ + y₂ with y₁ ∈ V and y₂ ∈ W. Now, W is the subspace generated by brackets, so we can write y₂ as a linear combination of brackets of elements of V: y₂ = [v₁, v₁′] + ⋯ + [v_k, v_k′]. How do we reach y from 0? That's relatively easy using the lifting construction. First we go up to y₂. Take the curve in V that starts at 0, goes to v₁, then to v₁ + v₁′, then back in the negative v₁-direction, then in the negative v₁′-direction — a parallelogram. Call this curve c₁; it lies entirely in V. Now lift it: we know exactly what the lift looks like — it's just that integral — and a trivial computation shows that the horizontal lift c̄₁
(sorry, it was perhaps clumsy of me to reuse the name c₁ — don't confuse it with the earlier c₁; I'll keep it) starts at 0 and ends at [v₁, v₁′]. That's a trivial computation: on the first leg, the bracket of the curve with its own derivative is zero; once you're at v₁, the curve is v₁ + t·v₁′, its derivative is v₁′, and the bracket contributes ½[v₁, v₁′]; computing the remaining legs strictly, you end up exactly at [v₁, v₁′]. So in G, as you go around the parallelogram in V, the horizontal lift climbs: when you traverse the v₁′-leg it goes up, on the way back in the v₁-direction nothing happens, and after completing the loop, c̄₁ ends up at the bracket [v₁, v₁′]. Then you do the same with v₂, and so on; after k such steps, the concatenation of c̄₁ up to c̄_k gives a horizontal curve from 0 to y₂. Finally, for the last leg, you just go in the direction y₁; its bracket with y₂ is zero, so the lift is just the straight segment. This gives a horizontal curve c from 0 to y, and hence d_cc(0, y) ≤ length(c) < ∞. So d_cc is indeed a metric.
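The parallelogram computation can be verified numerically in the Heisenberg toy coordinates (a sketch, not from the lecture): lifting the loop 0 → v₁ → v₁+v₁′ → v₁′ → 0 with v₁ = e₁, v₁′ = e₂ should end at [e₁, e₂] = (0, 0, 1) — height 1, the signed area of the unit square.

```python
# Lift of a piecewise-linear loop in V = R^2 into the Heisenberg group:
# the endpoint height is 1/2 * integral (x y' - y x'), the signed area.
def lift_height(path, steps=1000):
    """Integrate 1/2 (x y' - y x') along a piecewise-linear path in V."""
    z = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        for i in range(steps):
            t = (i + 0.5) / steps
            x, y = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
            z += 0.5 * (x * (y1 - y0) - y * (x1 - x0)) / steps
    return z

# the parallelogram loop with v1 = e1, v1' = e2
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
assert abs(lift_height(square) - 1.0) < 1e-9   # ends at [e1, e2]
```

Traversing the loop in the opposite orientation gives −1, i.e. the lift ends at −[e₁, e₂], consistent with the antisymmetry of the bracket.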
Actually, a closer inspection of this argument gives you something better: the supremum L of all cc-distances d_cc(0, y), over y in the d₀-ball of radius one around 0, is finite. Why? We said the horizontal lift has the same length as the underlying curve in V, so the total length here is essentially the sum of twice the side lengths of the parallelograms, plus the final leg. And how many terms do you need? Basically at most k ≤ 2·dim V of them. Since on bounded balls the metric d₀ is bi-Lipschitz to the Euclidean metric, you can compare all these quantities with Euclidean ones, and the finiteness of L follows — it comes essentially from the construction, plus the fact that locally (G, d₀) is bi-Lipschitz to Euclidean space. Why do I want this? A direct consequence is the following. We already saw that the cc-metric is at least the d₀-metric; now one might hope for d_cc(x, y) ≤ L·d₀(x, y) — well, that's not quite true, but once you add another L it is: d_cc(x, y) ≤ L·d₀(x, y) + L, and this is now trivial. Where does it come from? You have a point x and a point y, and you want to bound d_cc(x, y). Take a d₀-geodesic from x to y — we're in a complete Riemannian manifold, so one exists.
Now cut this geodesic into pieces of length one. The cc-metric is left-invariant, so for each piece — a pair of points at d₀-distance one — we know the cc-distance is at most L. So you get L, L, and so on, and maybe a leftover piece, which also contributes at most L. Adding up gives the bound immediately. Why is this nice? In the terminology of our first lecture, it says that (G, d₀) as a metric space is quasi-isometric to (G, d_cc): on a large scale they are bi-Lipschitz — once you ignore points that are within distance L of each other, the two metrics agree up to the multiplicative constant L. And now rescale both metrics by a factor 1/r: the additive term becomes L/r, which goes to zero. So as you rescale more and more, the metrics look bi-Lipschitz — up to bi-Lipschitz homeomorphism they look almost the same — and hence the asymptotic cones are bi-Lipschitz: the asymptotic cone (G, d₀)_ω is bi-Lipschitz to the asymptotic cone of (G, d_cc). At first sight that doesn't help at all — why not? We wanted to understand this cone, and we've just replaced it by another asymptotic cone. But it does help, because one can show that the asymptotic cone of (G, d_cc) is just (G, d_cc) itself. Why is that?
Well, remember the homomorphisms δ_r, which rescale the Lie algebra. Note — again a trivial computation — that if c is a horizontal curve in G, then length_{d₀}(δ_r ∘ c) = r · length_{d₀}(c). This is simply because horizontal velocities lie in (translates of) V, and vectors in V are scaled by r; the W-directions would be scaled by r², so for general curves you can still say something, but only an inequality. This tells us that δ_r, once you pass to the cc-metric, is a homothety: d_cc(δ_r(x), δ_r(y)) = r · d_cc(x, y). So the space is a bit like a cone over itself: when you rescale the metric, you always get an isometric copy of the same space. And since (G, d_cc) is proper, it follows that any asymptotic cone of (G, d_cc) is just the space itself — a little bit like Euclidean space. So we now know at least that the asymptotic cone of (G, d₀) is bi-Lipschitz to (G, d_cc). Actually one can show much more: for a Carnot group of any step, every asymptotic cone is isometric to (G, d_cc). That is a deep theorem of Pansu. But here, for step two and up to bi-Lipschitz, it's extremely easy. Any questions? So now we understand a little better the large-scale geometry of the groups we want to study.
Now we do a little construction — also quite useful, I think, and again quite easy — for getting from one step-two group to another step-two group. Start again with G of step two, with Lie algebra 𝔤 = V ⊕ W as before. I can construct a new step-two group as follows: take any subspace of W and quotient it out; the Lie bracket descends to the quotient, and that gives me a new group. I'll just do it for one vector, because that's what we need. Let u be any element of W — non-zero, of course, otherwise it makes little sense — and define a new group G_u whose Lie algebra is 𝔤_u = V ⊕ (W / ⟨u⟩), where ⟨u⟩ = ℝu is the subspace generated by u: you take one vector out. With the induced bracket this is again a Lie algebra of step two, and with the ⋆-multiplication we defined it gives you again a group of exactly the same kind. It is of step two unless W was already one-dimensional — so for the Heisenberg groups this doesn't work; well, of course it works, but you just get ℝ², which is not very useful. So you can create lots of different groups this way. And now the crucial proposition, which gives our non-polynomial growth, is the following.
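In coordinates the quotient construction is very concrete. A sketch (toy coordinates, not from the lecture) in the free step-two algebra on V = ℝ³, where W has basis e₁₂, e₁₃, e₂₃ with [e_i, e_j] = e_ij: modding out u = e₁₂ just drops the e₁₂-coordinate of the bracket.

```python
# Quotient of the free step-two algebra on V = R^3 by the bracket
# direction u = e12: the induced bracket forgets the e12-component.
def bracket_free(v, w):
    # components (e12, e13, e23) of [v, w] for v, w in V = R^3
    return (v[0] * w[1] - v[1] * w[0],
            v[0] * w[2] - v[2] * w[0],
            v[1] * w[2] - v[2] * w[1])

def bracket_quotient(v, w):
    # bracket in W / <e12>: drop the e12-component
    return bracket_free(v, w)[1:]

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
assert bracket_free(e1, e2) == (1, 0, 0)    # [e1, e2] = e12
assert bracket_quotient(e1, e2) == (0, 0)   # e12 dies in the quotient
assert bracket_quotient(e1, e3) == (1, 0)   # e13 survives
```

The quotient bracket is still antisymmetric and still lands in the (now two-dimensional) W-part, so the resulting algebra is again step two, as stated.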
I'm stating it as one joint proposition, even though the different parts were proved maybe eleven or twelve years apart and are due to different people. So let G and u be as above. The first part is due to Olshanskii and Sapir, around 1999 or 2000 (another proof was given by Robert Young, around 2006). It says: if G has filling area function at most quadratic — always with a left-invariant Riemannian metric — then the filling area function of the new group G_u is bounded by R² log R on a large scale. So starting from quadratic, you don't lose much. Now you can complement it with the second part — that's where I came in, with the geometric-measure-theory argument, in 2011. Many people suspected, and Olshanskii and Sapir even wrote that they were unable to prove, that the filling function of G_u should again be quadratic. It was more or less expected. However, it is not always quadratic. The second part needs nothing from the first; it's a completely independent statement: if u is such that u cannot be written as a single bracket [v, v′] for any v, v′ ∈ V — it's not "simple"; you can always write any u as a sum of such brackets, but in general not as one bracket — then the filling function of G_u is not quadratic. Actually, it's not quadratic even for the filling functions with higher-dimensional topological competitors, since the filling area function is bounded above by the one with the zero, the disk version.
So the filling function of G_u has to grow faster than quadratic. And once you can meet both conditions — quadratic filling for G, and this condition on u — then you're done: you have a non-polynomial filling function. So let me just remark on the existence of such a u; this is actually very easy. First remark: whenever dim W ≥ 2·dim V, there exists a u as in the proposition. Why? Look at the map V × V → W sending (v, v′) to [v, v′]. The domain has dimension 2·dim V, so since the map is bilinear, its image can have dimension at most 2·dim V. That's exactly why Ben asked his question — but of course the image is a cone, not a subspace. So what you can do: restrict the first variable to the boundary of the unit ball in V, which has dimension dim V − 1, and you still generate everything, because you can put the scaling factor into the other variable. So the image has dimension at most 2·dim V − 1, and if dim W ≥ 2·dim V there must be an element of W outside the image — you get the u right away. Now what does this give us? We already said the higher Heisenberg groups have a quadratic isoperimetric inequality — the problem there is that the dimension of the vertical space W is one.
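There is a concrete way to see an element that is a sum of brackets but not a single bracket. A sketch (my own coordinates, not from the lecture): in the free step-two algebra on V = ℝ⁴, elements of W can be identified with antisymmetric 4×4 matrices, and a single bracket [v, v′] corresponds to the rank-≤2 matrix vv′ᵀ − v′vᵀ, whose Pfaffian vanishes; u = e₁₂ + e₃₄ has Pfaffian 1 ≠ 0, hence rank 4, hence is not a single bracket.

```python
# Single brackets in the free step-two algebra on R^4 are exactly the
# antisymmetric matrices of rank <= 2, detected by a vanishing Pfaffian.
def pfaffian4(a):
    # Pfaffian of an antisymmetric 4x4 matrix
    return a[0][1] * a[2][3] - a[0][2] * a[1][3] + a[0][3] * a[1][2]

def single_bracket(v, w):
    # the antisymmetric matrix of [v, w]: entries v_i w_j - v_j w_i
    return [[v[i] * w[j] - v[j] * w[i] for j in range(4)] for i in range(4)]

u = [[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]]  # e12 + e34
assert pfaffian4(u) == 1                                         # rank 4
assert pfaffian4(single_bracket((1, 2, 3, 4), (0, 1, 1, 0))) == 0  # rank <= 2
```

Here dim V = 4 and dim W = 6 < 2·dim V, so this particular example is not covered by the dimension-count remark above — it shows the condition dim W ≥ 2·dim V is sufficient but not necessary for such a u to exist.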
However, there is a construction exactly analogous to the Heisenberg case: the way you pass from H^1 to H^n works for any step-two group. You pass to a similar construction where the new V-space is n copies of the original V-space and the W-space is the same W-space. So we need a group whose W-space has extremely high dimension, and these are the free nilpotent groups; free basically means that the Lie bracket has as few relations as possible. For example, you can build such a Lie algebra as follows. Take any vector space V with basis e_1, ..., e_k. Now let W (this is a definition) have basis e_ij for 1 <= i < j <= k. So W has much bigger dimension than k, namely k(k-1)/2. And define the Lie bracket in such a way that [e_i, e_j] = e_ij. This does give you a Lie bracket, and the dimension of W is very big. The group G with Lie algebra V + W is called free of step two. Now, this group does not satisfy the hypothesis of the first part: one can actually show that it has a cubic filling function, by an argument very similar to the one for the Heisenberg group. But then, a little bit like going from H^1 to H^2, you can pass to what one usually calls the central product. Let me call it G_Z: its V-space is V + V, its W-space is the same W, and the Lie bracket is just the sum of the Lie brackets. Concretely, the bracket of G_Z is the following.
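The free step-two algebra just defined can be modeled concretely by identifying W with the antisymmetric k x k matrices, so that [v, v'] = v v'^T - v' v^T. Here is a small numpy sketch (my own illustration, not from the lecture; the names `bracket`, `k`, `u` are mine). It checks the dimension count dim W = k(k-1)/2, and it also exhibits a concrete u that is not a single bracket: any single bracket is an antisymmetric matrix of rank at most 2, while u = [e_1, e_2] + [e_3, e_4] has rank 4.

```python
import numpy as np

def bracket(v, vp):
    """Lie bracket in the free step-2 algebra, with W modeled as
    antisymmetric matrices: [v, v'] = v v'^T - v' v^T."""
    return np.outer(v, vp) - np.outer(vp, v)

k = 5
dim_V = k
dim_W = k * (k - 1) // 2          # one basis element e_ij per pair i < j
assert dim_W == 10

# A single bracket [v, v'] is an antisymmetric matrix of rank <= 2.
rng = np.random.default_rng(0)
v, vp = rng.standard_normal(k), rng.standard_normal(k)
assert np.linalg.matrix_rank(bracket(v, vp)) <= 2

# u = e_12 + e_34 has rank 4, hence it is NOT a single bracket [v, v'].
e = np.eye(k)
u = bracket(e[0], e[1]) + bracket(e[2], e[3])
print(np.linalg.matrix_rank(u))   # 4
```

The rank criterion gives an alternative, very explicit route to the existence of u, complementing the dimension-count argument from the remark.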
Any element of G_Z you can write as v_1 + v_2 + w, with v_1, v_2 in the two copies of V and w in W; likewise a second element v_1' + v_2' + w'. The w's don't contribute anything to the Lie bracket, so the bracket in G_Z is [v_1, v_1'] + [v_2, v_2']. This gives you a Lie bracket, and hence a new group. Now if k is big enough, the dimension of W, which is almost k^2, is much, much bigger than the dimension 2k of the new V-space. And, similarly to the higher Heisenberg groups, one can show (this is again in the same Olshanskii-Sapir paper) that the filling area function of G central product with itself is quadratic. So, since W has very big dimension, you can achieve both conditions: there exist G and u satisfying both hypotheses of the proposition. That is basically it; now one would only have to prove the proposition. The first part is actually relatively easy; you can just do it in pictures. For the second one you need to work a little bit more. I would probably need about 40 more minutes, and I'm already over time by a lot. So let me just ask: who would be interested in me continuing for half an hour? You can just say no and I'll accept it, but if even one person is interested, I'll gladly give it. So if someone is interested, I'll be here in half an hour. You are interested? Okay, so we'll do it.
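To see when the central product just described meets the dimension condition dim W >= 2 dim V from the earlier remark, one can just compare k(k-1)/2 against 2 * (2k). A quick sketch; the arithmetic and the threshold value of k are my own, the lecture only says "k big enough":

```python
# Central product of the free step-2 group on k generators with itself:
#   new horizontal space: V + V, of dimension 2k
#   vertical space:       the same W, of dimension k(k-1)/2
def dims(k):
    return 2 * k, k * (k - 1) // 2   # (dim V_Z, dim W)

# Condition from the existence remark: dim W >= 2 * dim V_Z.
k = next(k for k in range(2, 100) if dims(k)[1] >= 2 * dims(k)[0])
print(k, dims(k))  # smallest such k is 9: dim V_Z = 18, dim W = 36
```

Since dim W grows quadratically in k while dim V_Z grows only linearly, the condition holds for all larger k as well.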
Let's do it in half an hour then. Is that okay? Yes, perfect; in half an hour. And actually, at least the first part you can do in ten minutes with pictures, and everyone will understand everything. For the second one we need the machinery that we set up, so we're ready for that as well. Okay, thank you very much for your attention.