OK, so maybe I will stop there. I'm really glad to see that so many of you decided to come even though it's really late in the day; I hope you will find it worthwhile. Earlier today, I discussed the space SL(2,Z)\SL(2,R). In particular, we obtained some understanding of this homogeneous space: we saw that it can be viewed as the unit tangent bundle of a certain hyperbolic surface of genus zero, except that one has to be a bit careful at the elliptic points. Today, or now, I will discuss the space where we put a general d instead of 2, and we will have a different way to visualize it. Namely, we will see that this space can be viewed as the space of all Euclidean lattices in dimension d having covolume 1, and then I will discuss the geometry of this space a bit. OK, so I will use the following notation. We let G = SL(d,R); it's a larger dimension now, a Lie group of dimension d squared minus 1. And let Gamma be the following discrete subgroup: SL(d,Z), the subgroup of all integer matrices in G. It is actually a lattice in G, though that is something I won't prove completely; we will see parts of the proof of the fact that Gamma is a lattice in G. It's not obvious. But that Gamma is discrete is obvious. And also, let X be the homogeneous space Gamma\G, that is, SL(d,Z)\SL(d,R). OK. So the first thing is, I want to make precise this claim that this space can be identified with the family of all Euclidean lattices in dimension d. So what is a lattice in R^d? You can take the following as a definition, or as a fact. Here R^d is also a Lie group, with addition as the group operation, and its Haar measure is Lebesgue measure. And because I have a Lie group here, I have already defined what a lattice is.
It's a discrete subgroup such that the quotient manifold has finite volume with respect to Lebesgue measure. In that sense, the following is a fact. A lattice, or Euclidean lattice — we call all these lattices Euclidean lattices — is a set of the following form. It turns out that every lattice is just the set of all integer linear combinations of some given basis of R^d. So I write it like this: the set of all combinations m_1 v_1 + ... + m_d v_d with integer coefficients m_1, ..., m_d, where v_1, ..., v_d is some R-linear basis of R^d. This is not hard to prove; I won't prove it here. Any discrete subgroup of this Lie group is of this form, except with some set of linearly independent vectors, not necessarily a spanning set — so it might be a shorter sum. But then it's easy to see that the covolume is finite if and only if the vectors really span R^d, so that they form a basis. And we can also easily write out a fundamental domain for R^d modulo L. Explicitly, we can take F to be the set of all linear combinations x_1 v_1 + ... + x_d v_d, where x_1, ..., x_d are arbitrary numbers in the interval from 0 to 1. This is a fundamental domain, or fundamental cell — really a parallelepiped — of the quotient space R^d modulo L. Or, if I should use the notation where we mod out by the lattice from the left, I would write L\R^d; these are the same because R^d is commutative, so there's no problem. OK. And the covolume of this lattice is by definition the volume of this quotient with respect to our Haar measure — and I've fixed the Haar measure to be Lebesgue measure. That's the volume of any fundamental domain, so in particular the volume of this fundamental domain F.
And it's easy to see, or it's known, that this volume equals a determinant. So the covolume of L equals the volume of F, and it also equals the absolute value of the determinant of the matrix I get by taking v_1, v_2, ..., v_d as row vectors — a d-by-d matrix. I will view all vectors as row vectors in this talk. I take that determinant and take the absolute value; that is the covolume of L, just to set notation. OK, and maybe a picture. If d = 2, then a lattice might look as follows: some set of points in the plane — maybe the drawing isn't very good. Then, for example, v_1 might be this vector and v_2 this vector, and the fundamental cell would be this parallelogram. Sorry, maybe the picture is too small, but OK. And the first result I want to state is that we have a bijection from this homogeneous space X onto the set of all Euclidean lattices of covolume 1. So consider the following map, from the homogeneous space to the set of lattices in R^d; let me call it j if I need a name. It maps any point, or any coset, in X to the lattice Z^d g. This should be understood as the set I get by taking all integer vectors and multiplying each by g: by definition, it is the set of all mg, where m runs through the integer vectors in R^d. And remember, a vector is always a row vector, so this multiplication makes sense and gives a new vector in R^d. In other words, this is the standard integer lattice to which I have applied the map g, moving it pointwise in some way. So the claim is that this map is a bijection onto the set of lattices of covolume 1 — not onto the set of all lattices, but onto those of covolume 1. Often we will identify X with this set of lattices, at least in our minds; it's very useful to do this identification.
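As a quick numerical sanity check of the covolume formula just stated — a sketch in Python, where the basis vectors are arbitrary illustrative choices of mine:

```python
import numpy as np

# A lattice L = Z^2 . g, with the basis vectors as the rows of g.
v1 = np.array([2.0, 1.0])
v2 = np.array([0.5, 1.0])
g = np.array([v1, v2])

# covol(L) = |det| of the matrix of row vectors.
covol = abs(np.linalg.det(g))
print(covol)  # 1.5 for this basis

# Rescaling g by c scales the covolume by c^d (here d = 2),
# so g / covol**(1/2) spans a lattice of covolume 1.
g1 = g / covol ** 0.5
print(abs(np.linalg.det(g1)))  # 1.0
```

So any lattice can be rescaled to covolume 1, which is why restricting to covolume-1 lattices loses essentially nothing.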
OK, so I won't write out much of the proof — it's a really good exercise, and we will discuss it in the tutorial — but let me say a few words about it. First, let's see what the map does. If I have some g, and I'm looking at the coset Gamma g, what lattice do I get? Well, we can name the rows of g: let v_1 be the top row vector of g, v_2 the next row vector, and so on. If I do this, then clearly the corresponding lattice — the image under my map — is by definition Z^d g, and if you carry this out, you get the set of integer linear combinations of v_1 up to v_d. So now it is clear that we get a lattice of covolume 1, because the covolume, according to what I said there, is the absolute value of the determinant of the matrix of these row vectors, and that's just the determinant of g. So that's one fact. Then you have to check that the map is well defined: a single point of X can be represented by many different elements g, and to go from one representative to another, you multiply by some element of Gamma — that means inserting a matrix gamma in Gamma in front. The point is that multiplying Z^d by gamma from the right gives back Z^d; that's why the map is well defined. And then what more? That the map is surjective is really clear from the definition, if you look there. Then you have to check injectivity, and that's more or less checking that modding out by Gamma is exactly what you have to do to keep track of the non-uniqueness in choosing a basis of a lattice. So that's a good exercise to do. OK, so let me stress again that for d = 2 we now have two really strong and useful pictures of this homogeneous space X: it is the unit tangent bundle of the modular hyperbolic surface, and it is also the space of covolume-one lattices in the plane.
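The well-definedness step can be checked concretely. A sketch, with gamma an arbitrary illustrative element of SL(2,Z): the point is that gamma and its inverse are both integer matrices, so m maps to m gamma is a bijection of Z^2 onto itself, and hence Z^2 (gamma g) = Z^2 g.

```python
import numpy as np

gamma = np.array([[2, 1],
                  [1, 1]])            # integer entries, det = 1
assert round(np.linalg.det(gamma)) == 1

# The inverse is again an integer matrix (by Cramer's rule the entries
# are cofactors divided by det = 1), so Z^d . gamma = Z^d.
gamma_inv = np.linalg.inv(gamma)
print(np.round(gamma_inv).astype(int))   # [[ 1 -1] [-1  2]]
assert np.allclose(gamma_inv, np.round(gamma_inv))

# Hence gamma @ g and g generate the same lattice; in particular the
# covolume is unchanged, since det(gamma) = 1.
g = np.array([[2.0, 1.0], [0.5, 1.0]])
assert np.isclose(abs(np.linalg.det(gamma @ g)), abs(np.linalg.det(g)))
```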
And this will be really useful in later lectures. OK, so the main topic of the talk is to discuss the geometry of this space X. It's a non-compact space — what does it look like at infinity, and so on? But first I want to do some more basic things. I want to discuss a slight extension of this. For applications it's really useful to also have the space of affine Euclidean lattices. So, affine lattices in R^d: I define an affine lattice to be a translate of a lattice. That is, an affine lattice in R^d is a set L' in R^d of the form L' = w + L, meaning pointwise addition, where w is some vector in R^d and L is a lattice in R^d. And we define the covolume — maybe it's not really proper to call it the covolume of L' — to be the same as the covolume of L. Then the space of affine lattices of covolume one can be identified with another homogeneous space. I just wanted to state this, because we will need it in later lectures. For that, I have to introduce a new Lie group, which I will call G' here: the affine special linear group ASL(d,R). This is the group of all affine linear maps on R^d with determinant one, but I define it in a concrete way as the semidirect product of SL(d,R) and R^d. A semidirect product of groups is, as a set, just the Cartesian product, but then I have to tell you what the group law is. The group law is that for (g_1, w_1) and (g_2, w_2) in this Cartesian product, the product is defined to be (g_1 g_2, w_1 g_2 + w_2). This group G' acts on R^d by affine linear maps, just as G acts on R^d by linear maps. And the action is from the right — I'm multiplying row vectors from the right by my matrices. So the action is as follows: if I take a vector v in R^d, and I want to act on it by an element (g, w) in G',
the result is defined to be vg + w — so it's an affine linear map. OK. And I just want to state the analogue of Theorem 1; maybe I won't even write it, I will just say it in words. Let's also introduce Gamma' to be the group of integer elements in ASL(d,R), so Gamma' = ASL(d,Z). And, to save some time, Theorem 1' says that the homogeneous space X' = Gamma'\G' can be identified with the set of all affine lattices of covolume one. And the map is the same as before — an element (g, w) goes to Z^d g + w — except now the element lies in ASL(d,R). OK, so we have that out of the way. And then, before I come to the geometry of X, I also have to introduce a good parametrization of G, namely the Iwasawa decomposition of G. You have already seen this for SL(2,R); now we'll do the general case SL(d,R). You will remember it, I hope. OK. So now I have to introduce some subgroups. Just as for SL(2), I will let A be the subgroup of diagonal matrices with positive entries a_1, a_2, ..., a_d on the diagonal and zeros off the diagonal. And I will let N be the group of upper triangular matrices with ones along the diagonal: ones on the diagonal, arbitrary real entries n_{1,2}, n_{1,3}, n_{2,3}, and so on above it, and zeros below. And I will let K be SO(d), the group of all rotations of R^d. Just to remind you, it's the group of all elements k in SL(d,R) such that k times k-transpose equals the identity matrix — so, all rotations of R^d. OK. Now here's the Iwasawa decomposition. What it says is this: G = NAK, and every element of G has a unique decomposition as such a product.
So, to state it in more precise terms: for any g in G there exist unique elements n in N, a in A, and k in K such that g = nak. And I guess we won't use it very explicitly, but it's also a fact that this map, from the Cartesian product of the three groups onto G, is a diffeomorphism — that the map itself is smooth is clear at once, but its inverse is smooth as well, so both directions are smooth. OK. I wanted to say a few words about the proof. It is really just Gram-Schmidt orthogonalization, if you like, but I wanted to have it on the board to see how it works, and we will need to look back at this proof later. So here is an outline. Let g be given, and again call the row vectors of g v_1, v_2, and so on. The trick is to apply the Gram-Schmidt orthogonalization process to this basis v_1, ..., v_d — and, the way I've set things up, I should apply it to the basis in the opposite order: I start with v_d, then v_{d-1}, and so on down to v_1, in this order. Remember what the Gram-Schmidt process does — and note that these vectors do form a basis, since the determinant of g is 1, so it's non-zero. First we take v_d and multiply it by a constant to get a unit vector. Then we take v_{d-1} and add an appropriate multiple of that unit vector to get a vector orthogonal to it, and then scale by a real number to get a unit vector as well. Then we take the next vector, subtract the appropriate linear combination to make it orthogonal to the previous vectors, and scale to a unit vector. In this way we get unit vectors which are mutually orthogonal. OK, so we get an orthonormal basis w_d, w_{d-1}, ..., w_1, and it has the following property.
Well, I can state it in many ways, but I want to state it as follows: the vector v_j belongs to the linear span of w_d, w_{d-1}, ..., w_j, for every j. But I also have control on the sign of the w_j coefficient: actually, v_j is a positive real number times w_j, plus something in the span of the remaining vectors w_{j+1}, ..., w_d. This is true for every j — provided that in each scaling step I make sure to scale by a positive real number. OK, and now if I write this fact out in matrix form, I get what I want. I had g equal to the matrix with rows v_1, ..., v_d. Now I can view this as a matrix product: a d-by-d matrix times the d-by-d matrix whose rows are the w's — both are d-by-d because I've stacked d row vectors of length d. And the first factor I can write out. First of all, v_d is a positive number a_d times w_d, so the bottom row has a_d in the last entry and zeros elsewhere. Then v_{d-1} has a positive number a_{d-1} in the next-to-last entry, some real number in the last entry, and zeros before. And it continues like this. So I get an upper triangular matrix with positive real numbers along the diagonal and some real numbers above. The point is that these a_1 to a_d are positive numbers. And if you look at this, it's easy to see that this upper triangular factor is in the product A times N — you can easily work out what such products are — and it's unique. And the second factor, by construction, is an orthogonal matrix, since its rows are the orthonormal basis vectors.
But it also has determinant 1: the upper triangular factor has positive determinant and g has determinant 1, so the orthogonal factor has positive determinant, hence determinant exactly 1 — so it really lies in K = SO(d), and there was no mistake in the definition of K after all. And for the proof to match the statement exactly, note that the upper triangular factor sits in AN rather than NA; A and N certainly don't commute element-wise, but it's nice to check that AN = NA as sets, so this is fine. That ends the proof of the Iwasawa decomposition; it is easy to go in and check that we have uniqueness as well. Next, I wanted to say a few words about Haar measure on G. There are many ways to write out an explicit Haar measure on G, but they are all a little bit complicated. For us, maybe the one important format will be to write out the Haar measure in terms of this parametrization. So I write it out here; I will not prove it — it's not genuinely difficult, but you have to work a bit with Jacobians. I will call my Haar measure mu. By the way, remember that Haar measure is unique up to a positive constant. And I should really stress whether it's a left or right Haar measure, but it turns out that for this group any left Haar measure is also right invariant, so I can just say Haar measure. This is a fairly general feature in these lectures, because whenever there exists a lattice inside G, the Haar measure has to be both left and right invariant. But OK, we haven't yet proved that G has a lattice. So here's the formula. In the parametrization g = nak, a Haar measure is given by dn, times a certain factor — the product over all pairs i < j of a_j divided by a_i, where for my diagonal matrix a I always let a_j denote the j-th diagonal entry —
and then times a Haar measure on the group A, which I write out explicitly as a product over j from 1 to d minus 1, and then dk. Here dk is some Haar measure on SO(d) — on K, in other words. I will not write it out explicitly, and I guess we will not really need it very often; we can often work with just some Haar measure. And dn is a Haar measure on the group N, which is very easy to write out explicitly — I won't do it, but in words, you can just take the product of Lebesgue measures in all the entries above the diagonal. And then in fact this measure, the product of da_j / a_j for j from 1 to d minus 1, is a Haar measure on A. Note that when I write it like this, the variables are dependent: A is a (d-1)-dimensional group, and a_d is 1 over a_1 times ... times a_{d-1}, if you like — the product of all entries is, of course, 1. So to make sense of this, I should take a product of d minus 1 such measures. OK, so there is this extra factor of the a_j / a_i arising, and you simply have to compute it. Actually, it turns out — going back to somebody's question — that if you would instead use the parametrization g = ank, with the A factor first, then you would have the same formula without this factor, so it's a bit easier. But we often want this parametrization; it has a clear geometric meaning, for many reasons. OK, I hope we can give some good exercises on this in the tutorial. There are other ways to define Haar measure, but as I said, they are a bit difficult. Let me just point out that on the full general linear group it is easy to give a Haar measure. Haar measure on GL(d,R): if I call the entries of the matrix x_11, x_12, x_21, and so on, then simply Lebesgue measure with respect to these variables, times the determinant raised to the power minus d, gives a Haar measure.
So d-nu of g, a Haar measure on that group, can be taken to be the determinant of g raised to the power minus d, times d-squared-dimensional Lebesgue measure in the matrix entries. So it's just a remark. And when you have this, SL(d,R) sits inside GL(d,R) as a codimension-one submanifold, and it's not too hard: you would like to somehow take a delta function imposing that the determinant equals 1, and you can make sense of that, but I don't want to discuss it more now. OK, so now I want to discuss the geometry of X, the geometry of Gamma\G — or in other words, the geometry of the space of all Euclidean lattices of covolume 1. This is a non-compact space, and it's interesting to try to understand what it looks like at infinity. And one nice question to ask is: if you have a sequence of lattices of covolume 1, when is such a sequence not relatively compact? When does such a sequence tend to infinity, and when is it possible to find a subsequence which converges to some lattice? The answer is given by Mahler's criterion, which I will state a bit later. But first I will introduce Siegel sets. Siegel sets give a more precise description of the geometry at infinity than just a criterion for tending to infinity: there are many really different kinds of lattices lying far out in the cusp of X — many different ways to tend to infinity — and this is captured by Siegel sets. OK, so what one would really like to do, of course, is to find a fundamental domain for Gamma\G. And it can be done: Minkowski gave a precise description of an exact fundamental domain. But it gets more and more complicated to decode — it has more and more bounding sides as d grows, and so forth — so we won't go into this.
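As an aside before we come to Siegel sets: the NAK decomposition from the proof above is easy to compute numerically. Here is a sketch in Python, via an RQ factorization (g = upper triangular times orthogonal), with a sign fix mirroring the positive-scaling choice in the Gram-Schmidt argument; the function name and the test matrix are my own choices.

```python
import numpy as np

def iwasawa(g):
    """Decompose g in SL(d,R) as g = n a k, with n unipotent upper
    triangular, a a positive diagonal, k in SO(d) (row-vector convention)."""
    # RQ factorization g = B k (B upper triangular, k orthogonal),
    # obtained from numpy's QR applied to the flipped transpose:
    # if g^T J = q r, then g = (J r^T J)(J q^T).
    q, r = np.linalg.qr(np.flipud(g).T)
    B = r.T[::-1, ::-1]        # J r^T J: upper triangular
    k = q.T[::-1]              # J q^T: orthogonal
    # Fix signs so the diagonal of B is positive: g = (B s)(s k).
    s = np.sign(np.diag(B))
    B = B * s                  # rescale column j by s[j]
    k = s[:, None] * k         # rescale row j by s[j]
    a = np.diag(B).copy()
    n = B / a                  # divide column j by a[j]: unit diagonal
    return n, a, k

g = np.array([[2.0, 1.0],
              [3.0, 2.0]])     # det = 1
n, a, k = iwasawa(g)
print(np.allclose(n @ np.diag(a) @ k, g))  # True
```

Note that when det g = 1, the orthogonal factor automatically has determinant 1 and the a_j multiply to 1, exactly as in the proof.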
A Siegel set is a set which contains a fundamental domain, and it has a rather simple structure. So it can often be used if you want to integrate over X and only need an upper bound, for instance: then you can replace the integral by an integral over the Siegel set. OK, so I will give the definition. Remember that A is the group of diagonal matrices with positive entries. For any positive number t, let A_t be the following subset: the set of all diagonal matrices, always with positive entries and determinant 1, such that a_{i+1} is less than or equal to t times a_i for all i — and when I say for all i, of course it means for i = 1, 2, ..., d minus 1. So if t = 1, this simply says that the entries are monotonically decreasing. But I will actually have to take t slightly larger than 1, so it means the entries are more or less decreasing: they may increase a little bit from one entry to the next, but they can't increase much. And OK, now I will define a Siegel set. For any t positive and any compact subset F_N of N, I define S = S_{F_N, t} to be simply the product set F_N times A_t times K — a pointwise product, if you like. And this makes sense if you remember the Iwasawa decomposition: the first factor is a subset of N, the second a subset of A, and the third is all of K, so the product sits factor-wise, so to say, inside NAK, which we saw was equal to G. OK. So that is a Siegel set — and maybe it's bad notation to call the compact set F_N; I just want to remember that it's a subset of N. Any such set is called a Siegel set, and sometimes one actually works with other Siegel sets, but now I will make a specific choice. And here is maybe today's main result.
So let me fix F_N to be the following subset of N, defined by giving inequalities on the entries: the requirement is simply that all the entries n_{ij} above the diagonal have absolute value less than or equal to one half, for all i, j where that makes sense. Clearly this is a compact subset of N. And then also, let's take t to be 2 over the square root of 3. So I make these specific choices, and the corresponding Siegel set I will just call S, because I'm only going to talk about this one now: S is the Siegel set coming from this compact subset of N with my specific choice of t. And the claim — Theorem 3 — is that this S contains a fundamental domain for Gamma\G. In other words, if I take Gamma times S, with pointwise set multiplication, then I get all of G: any element of G can be written as a product of some element of Gamma and some element of my Siegel set. OK, so I hope to outline the proof of this, but first I will speak about several consequences. Perhaps first, let me note that this 2 over root 3 is the inverse of a height you have seen before, in the fundamental domain we had for d = 2. Recall the fundamental domain in the upper half plane for the modular group: it is bounded by the vertical lines at minus one half and one half and by the unit circle through minus 1 and 1. Now, if you translate this Siegel set from a subset of G into a subset of the unit tangent bundle, it turns out to be the following region: instead of the circular arc, the bottom boundary is a horizontal line through the two corner points of the fundamental domain — so we overshoot by a little bit — and the Siegel set is the full unit tangent bundle lying above that region. So it's a three-dimensional infinite box. That is the Siegel set in the case d = 2, if you go from the Iwasawa decomposition notation to the unit tangent bundle picture.
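Back in coordinates for general d, the A_t condition is trivial to test numerically. A small sketch (the helper name and sample values are mine), illustrating that with t = 2/sqrt(3) the diagonal entries may increase slightly from one position to the next, but not by much:

```python
import numpy as np

T = 2 / np.sqrt(3)  # the specific t from Theorem 3, about 1.1547

def in_A_t(a, t=T):
    """Check the Siegel condition a_{i+1} <= t * a_i for a positive
    diagonal (a_1, ..., a_d) of determinant 1."""
    a = np.asarray(a, dtype=float)
    assert np.all(a > 0) and np.isclose(a.prod(), 1.0)
    return bool(np.all(a[1:] <= t * a[:-1]))

print(in_A_t([2.0, 1.0, 0.5]))        # True: entries decreasing
print(in_A_t([1.0, 1.1, 1 / 1.1]))    # True: a small increase is allowed
print(in_A_t([0.5, 1.0, 2.0]))        # False: increases too fast
```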
And the point is that the height of those corner points is the square root of 3 over 2, which is exactly the inverse of this t, if you work it out. So that's a good exercise to do, to get a picture of the Siegel set in the simplest case. And maybe I should also say a word about lattices. In terms of lattices, what the theorem says is: any lattice can be represented by some matrix in G, and if I choose the lattice basis correctly, then I will get some element of S. I have more than one choice of such a lattice basis, but the point is that every lattice has some basis whose corresponding matrix lies in S. We can call such a basis a reduced lattice basis, if we want. It's not a really well-defined concept — a lattice can have many reduced lattice bases — but very roughly what it means is that the vectors of such a basis are not unnecessarily long, and the angles between the vectors are not too small. And you will see this more precisely soon. OK. So, a first corollary of this — and it's not a trivial corollary, you have to work a bit — is that the Haar measure of the quotient space Gamma\G is finite. So Gamma is a lattice in G: I promised this earlier and haven't proved anything yet, but it follows from the theorem. And the proof is by working a bit harder and showing that the Haar measure of the set S itself is finite. Remember that the Haar measure of this quotient is by definition the Haar measure of any fundamental domain, and it's a simple fact that any two fundamental domains have the same measure. And now, since the theorem shows that S contains a fundamental domain, it follows that the Haar measure of the quotient must be less than or equal to the Haar measure of S.
And then you verify, with some work, that the Haar measure of S is finite, and we get the corollary. So how do you verify that? Well, you go back to the formula we had for Haar measure, and then you can integrate: that formula is well adapted for integrating over S, because it gives the Haar measure decomposed with respect to the factors. So the only difficult part is to integrate over A_t with respect to the measure I wrote out. It's a nice exercise to show that this (d-1)-dimensional integral — and now it's a really concrete integral in d minus 1 variables — is finite. And let me also point out a somewhat stronger fact, which is really fairly complicated to prove; I will definitely not try to prove it. It is that S actually contains only finitely many representatives of any point of X — in fact, the supremum over all cosets of the number of representatives in S is finite, which is a posteriori stronger. So it's just a remark: the Siegel set doesn't overshoot by too much, no matter where in X we are looking. And then also I will tell you the answer to the question I threw out earlier: if you have a sequence of lattices, when does this sequence diverge, and when can you find a subsequence which converges? Well, the answer is the following. Corollary 2 — this is also a corollary of Theorem 3 — Mahler's criterion. In other words, I want to give a condition on a given subset C of X, of Gamma\G, for C to have compact closure. And the answer is that C has compact closure if and only if there is some ball around the origin in Euclidean space which all the lattices in C avoid — avoid, of course, except for the origin itself, which every lattice contains.
Or in other words: compactness is detected by a lower bound on the shortest non-zero vector. To tell whether a lattice is far out in the cusp, you just look at its shortest non-zero vector; if that vector is really short, then the lattice is far out in the cusp. So OK, a precise statement: C has compact closure if and only if there exists some positive number r such that for every lattice corresponding to a point of C, the intersection of the lattice with the ball of radius r around the origin contains no lattice point other than the origin itself — and this should hold for all Gamma g in C. OK, so this is a really pleasant exercise to think about, once you have Theorem 3; let me just say a couple of sentences. The hard part is to check that if the condition is fulfilled — if there is such a ball avoided by all the lattices except at the origin — then any sequence of such lattices has a convergent subsequence. And what you do is, for each such lattice, take a representative in the Siegel set, so we get a sequence of elements of S. Now, originally I wanted to find a subsequence convergent in the quotient, but I can actually find a subsequence that converges as elements of the Lie group G itself, and that immediately implies convergence in the quotient as well. And OK, so how do you find such a subsequence? Well, we have compactness: if you look at the definition of the Siegel set, the N-component lies in a compact set and the K-component lies in the compact group K, so it's easy to extract subsequences converging in those two components. So what remains to show is that I can find a convergent subsequence in the A_t-component — and well, then you have to look at the picture.
What does the lattice look like in terms of these a_1, a_2, and so on? Well, basically, a_d is more or less the length of the shortest vector of the lattice. So the point is that the hypothesis gives a lower bound on a_d: a_d is bounded from below by a positive number. And if you have such a bound from below, then the inequalities a_{i+1} <= t a_i, together with the condition that the product of the a_i equals 1, imply that I can extract a subsequence of the A_t-components which converges — if you think about it. OK, and now I really don't want to go over time. I've at least tried to sell Theorem 3, but I've managed to leave no time to give the proof. Let me just begin a little bit of the outline — one or two sentences. The first thing to note concerns the set F_N. Let me set Gamma_N equal to the intersection of Gamma and N; this is the group of all upper triangular integer matrices with ones on the diagonal. And then note that the pointwise product of Gamma_N with my chosen F_N — the one from Theorem 3 — is equal to all of N. This is easy to check by just writing out the product. In fact, F_N is even essentially a fundamental domain for the quotient Gamma_N\N: there is not much over-representation, except at the boundary. OK, so granting that, I'm looking at the set Gamma times S, and I want to prove that this is all of G. It equals Gamma times F_N times A_t times K; from Gamma I can break out the subgroup Gamma_N, which is no more than Gamma, and then use the fact that Gamma_N F_N = N. So I get Gamma S equals Gamma times N times A_t times K. So now, given an element g of the group G, I want to prove that the orbit Gamma g meets the set N A_t K: that for any given g, I can multiply from the left by Gamma and land in this set.
And if I interpret that in terms of lattices — if I let L be the corresponding lattice — then what I want to prove is the following: there exists a lattice basis v_1, ..., v_d of L such that the matrix with these rows lies in the region N A_t K. (OK, writing it this way is not doing much yet.) And what I wanted to do, going back to motivate some things I said earlier, is to point out what this means geometrically for the lattice basis. So we have a given lattice basis, or a given element of SL(d,R), and I showed you how to do the Iwasawa decomposition: the proof was to apply the Gram-Schmidt orthogonalization process to this basis, but backwards. If you go into that proof, you can read off what it means for this matrix to lie in this set, and I want to just write it out. The condition in the box over there is equivalent to the following: the distance from the vector v_{i+1} to a certain subspace V_{i+2} — I will define this subspace in a moment — is less than or equal to t times the distance from the vector v_i to the subspace V_{i+1}, and this should hold for all i. This is the geometric interpretation of the diagonal entry a_i: it is exactly the distance from v_i to V_{i+1}. And here the subspace V_i is just the linear span of v_i up to v_d, and V_{d+1} is the zero subspace. So in particular, when i equals d minus 1, the largest value, the first distance is just the length of v_d; that length should be at most t times the distance from v_{d-1} to V_d, and so forth. And now the task is to construct such a basis, and you do it basically by a greedy algorithm: you pick v_d to be a shortest non-zero vector of the lattice, then take v_{d-1} to be a shortest possible vector lying outside the subspace spanned by v_d, and so forth. And you can check that it works. OK, sorry for going over time. Thank you.
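To close, Mahler's criterion can be illustrated numerically: the lattice Z^2 times diag(eps, 1/eps) has covolume 1 but contains a non-zero vector of length eps, so as eps tends to 0 these lattices escape every compact subset of X. A brute-force sketch (the search box is a heuristic cutoff, not a rigorous bound; the names are mine):

```python
import numpy as np
from itertools import product

def shortest_nonzero(g, box=10):
    """Brute-force the length of a shortest non-zero vector of the
    lattice Z^d . g (rows act from the right: points are m @ g).
    Only integer m with |m_i| <= box are tried -- a heuristic cutoff."""
    d = g.shape[0]
    best = np.inf
    for m in product(range(-box, box + 1), repeat=d):
        if any(m):  # skip m = 0
            best = min(best, np.linalg.norm(np.array(m) @ g))
    return best

print(shortest_nonzero(np.eye(2)))     # 1.0 for Z^2 itself
eps = 0.1
g = np.diag([eps, 1 / eps])            # covolume 1, deep in the cusp
print(shortest_nonzero(g))             # 0.1: the short vector (eps, 0)
```

By Mahler's criterion, any family of covolume-1 lattices on which this quantity is bounded away from 0 is contained in a compact subset of X.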