So, we will have the same notation as before: X is the homogeneous space Γ\G, where G = SL(d,R) and Γ = SL(d,Z), and μ is Haar measure on G. This Haar measure is both left and right invariant, and we normalize it so that it gives measure one to any fundamental domain of Γ in G; it then induces a probability measure on this homogeneous space. Now you could ask me for the precise constant, because I gave a formula for Haar measure on G in terms of the Iwasawa parametrization, and one can ask what the constant in front is. Well, I can point you to the literature; I can't quote it here, but it contains a product of zeta values and Gamma functions, or equivalently a product of zeta values and the volumes of the unit balls in dimensions 2, 3, 4, up to d.

And then, just to set your mind in the right direction, let me say what a random lattice is. Well, we can mean different things by "random Euclidean lattice". Remember that this homogeneous space is identified with the set of Euclidean lattices of covolume one, the coset Γg corresponding to the lattice Z^d g. With that identification, if I pick a random point in X with respect to this probability measure, it gives rise to some random lattice, and it is natural to call this a random lattice. So for now, and for all of today, when I say random lattice I will mean that kind of lattice. Actually, when we come to quasicrystals, we will have occasion to say random lattice and mean random with respect to another probability measure on the space of lattices, one which is singular with respect to this one and only picks up certain lattices. But today, a random lattice means the Euclidean lattice Z^d g corresponding to a random point Γg in my homogeneous space with the probability measure μ.

Okay. So let me write out Siegel's volume formula again. It says that for any L¹ function f on R^d, if I take the expected value, with respect to a random lattice, of the Siegel transform of f, that is, if I sum f over all points of the lattice except the origin, then I get the integral of f:

∫_X Σ_{m ∈ Z^d g ∖ {0}} f(m) dμ(g) = ∫_{R^d} f(x) dx,

where dx is d-dimensional Lebesgue measure. The origin will always be in the lattice, so we may just as well remove it; if I included it, that would just add a term f(0), so it is no complication.

Let me also state it in an equivalent form, again to help you think of this. Equivalently, for any Borel subset B of R^d,

∫_X #(Z^d g ∩ B ∖ {0}) dμ(g) = vol(B).

Okay. So what is this? It is just the expected number of points which a random lattice has in B, so it is something easy to think of. And to see that these are equivalent: the implication from the first to the second is clear, take f to be the characteristic function of B (and if B has infinite volume, write it as a countable union of sets of finite volume). The other direction is also easy: maybe start with non-negative functions, approximate by simple functions, and add up.
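To spell out that equivalence argument in one display (a sketch, using nothing beyond what is on the board):

```latex
% (1) => (2): take f = \chi_B, so \sum_{m \in L \setminus \{0\}} f(m) = \#(L \cap B \setminus \{0\});
% if vol(B) = \infty, exhaust B by a countable union of sets of finite volume
% and use monotone convergence.
% (2) => (1): for a non-negative simple function f = \sum_i c_i \chi_{B_i}, by linearity
\int_X \sum_{m \in \mathbb{Z}^d g \setminus \{0\}} f(m)\, d\mu(g)
  = \sum_i c_i \int_X \#\big(\mathbb{Z}^d g \cap B_i \setminus \{0\}\big)\, d\mu(g)
  = \sum_i c_i \operatorname{vol}(B_i)
  = \int_{\mathbb{R}^d} f(x)\, dx ;
% for general f \ge 0 take simple functions f_n \nearrow f and use monotone
% convergence; for f \in L^1 split into positive and negative parts.
```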
So, it's like going into the definition of the Lebesgue integral. And it works for arbitrary Borel sets: if B is unbounded but has finite volume, it is immediate; and if B has infinite volume, then both sides are infinite. Everything here is well defined since we only have non-negative integrands.

In the language of random point processes: taking a random lattice is a random point process, and asking for the expected number of points in a given set is asking for the intensity of the point process. So what Siegel's formula says is that the intensity of this point process is just Lebesgue volume measure.

Okay. So let me give a proof of this. Much of the proof is very down to earth and conceptual, but then there is a technicality, and this will lead us back so that I can connect a little with what I said in lecture four.

So let us consider this integral for any Borel set B. It is a function of B, and it is easily seen to be a Borel measure. Define ν by

ν(B) = ∫_X #(Z^d g ∩ B ∖ {0}) dμ(g) for any Borel set B.

Then ν is a Borel measure on R^d, and it has some important properties. Maybe the key property, which will also be important for the generalization I come to later, is that ν is SL(d,R)-invariant, that is, G-invariant. What does this mean? It means that ν(Bg) = ν(B) for any Borel set B and any g in SL(d,R), where Bg is the image of B under the linear map g.

So why is this? Well, I leave it as an exercise, but let me say it in words. Call the group element g₁. What happens if I replace B by Bg₁ in the integral? In order to connect it with the corresponding integral over B, I want to substitute: I transform the whole integral by multiplying from the right by g₁⁻¹. Then g becomes gg₁⁻¹, and I can replace gg₁⁻¹ by g because Haar measure is right invariant. It is a nice exercise to write this out really carefully.

Okay. So this is really the key property. And there are not many G-invariant Borel measures on R^d. But there are still some complications. First, let me stress that in this computation we used the fact that Haar measure is right invariant; on the other hand, we used left invariance to see that we have a well-defined measure on X in the first place. And actually, whenever a Lie group G contains a lattice, its Haar measure has to be both left and right invariant, so there is no conflict. Another property that is important is that ν gives measure zero to the singleton set {0}; that is clear since we removed the origin.

Now, having all this, one knows that ν has to be Lebesgue measure times a constant. But there is one more problem: this constant could be infinite. What if the integral is infinite for every open set B? So we have to rule that out too, and that is the technically hardest part of this: to prove that ν(B) is finite for every bounded B, that is, for every B whose closure is compact, every B contained in some ball. I will talk about that in a bit.
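The invariance exercise, written out (a sketch; only right invariance of Haar measure is used):

```latex
% For x \in \mathbb{R}^d:  x \in \mathbb{Z}^d g \cap B g_1 \iff x g_1^{-1} \in \mathbb{Z}^d g g_1^{-1} \cap B,
% so the counts match up, and the substitution h = g g_1^{-1} gives
\nu(B g_1)
  = \int_X \#\big(\mathbb{Z}^d g \cap B g_1 \setminus \{0\}\big)\, d\mu(g)
  = \int_X \#\big(\mathbb{Z}^d g g_1^{-1} \cap B \setminus \{0\}\big)\, d\mu(g)
  = \int_X \#\big(\mathbb{Z}^d h \cap B \setminus \{0\}\big)\, d\mu(h)
  = \nu(B),
% where the last step is the right invariance of Haar measure, d\mu(g) = d\mu(h).
```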
But if we have all of this, then it follows that ν is Lebesgue measure times a constant: there is some c ≥ 0 (a priori this constant might be zero) such that ν = c · vol, with vol the d-dimensional Lebesgue measure.

Okay. Now, how can we nail down this constant? I still have to come back to the finiteness part, but one would like to choose B in a nice way to get hold of c. I think it is hard to compute ν(B) for any given set B, but if you let B be a larger and larger ball, then you can say something. So we have this equality of measures, ν(B) = c · vol(B), for every B; take B to be B_r, the ball around the origin of radius r. For such a ball, note that for any fixed lattice the integrand, which is the number of points of my fixed lattice in B_r, excluding the origin if you like, is asymptotic to the volume of the ball as r tends to infinity. In other words, if I divide by r^d,

#(Z^d g ∩ B_r ∖ {0}) / r^d → vol(B₁^d) as r → ∞,

where B₁^d is the unit ball. So if I take the identity ν(B_r) = c · vol(B_r) and divide by r^d, the integrand tends pointwise to vol(B₁^d). Now, if we could just apply Lebesgue's dominated convergence theorem, we would conclude that ν(B_r)/r^d also tends to vol(B₁^d), forcing c = 1. The only issue is that X is not compact, so we do not have uniform convergence over the whole space.

Still, even without a dominating function, this already allows you to conclude that the constant c has to be at least 1, by Fatou's lemma. Because, for instance, on any compact set of lattices we actually have uniform convergence; this is easy to see, I just claim it without going into all the details. It is an easy fact that on a compact set of lattices you can apply Gauss's classical argument: to count the points in a large ball, you increase the radius a little and decrease it a little and compare volumes.

So the hard part here is to find the good majorant which shows that you can actually use Lebesgue's dominated convergence theorem. Once we have this, there remain two technical difficulties: to prove the finiteness of ν on bounded sets, and to show that it is permissible to change the order of limits, that is, to take the limit as r goes to infinity, plugging in B_r and dividing through by r^d, inside the integral. Okay, and what goes into that? It is actually possible to give a nice uniform bound; you just have to do a computation. I am not going to do it fully, but I will refer to things we know, and I encourage you to look at it; I am happy to discuss it with you in the tutorials. So take this integrand, and let us not divide through by r^d right now.
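If you want to experiment, here is a small numerical sketch of the pointwise statement for a fixed lattice (not part of the proof; the matrix g below is just an arbitrary fixed element of SL(2,R) chosen for illustration):

```python
import itertools, math
import numpy as np

# Numerical sketch: for a FIXED unimodular lattice L = Z^2 g, the ratio
# #(L ∩ B_r) / vol(B_r) tends to 1 as r grows (Gauss's counting argument).

g = np.array([[1.3, 0.7],
              [0.4, (1.0 + 0.7 * 0.4) / 1.3]])   # chosen so that det(g) = 1
assert abs(np.linalg.det(g) - 1.0) < 1e-9

def count_in_ball(g, r):
    """Count points m @ g of Z^2 g with norm <= r, by brute force."""
    # |m g| <= r forces |m| <= r * |g^{-1}|_op, so a finite box suffices.
    M = int(math.ceil(r * np.linalg.norm(np.linalg.inv(g), 2))) + 1
    ms = np.array(list(itertools.product(range(-M, M + 1), repeat=2)))
    return int(np.sum(np.linalg.norm(ms @ g, axis=1) <= r))

for r in [5.0, 10.0, 20.0, 40.0]:
    ratio = count_in_ball(g, r) / (math.pi * r**2)
    print(f"r = {r:5.1f}:  #(L ∩ B_r)/vol(B_r) = {ratio:.4f}")  # slowly -> 1.0
```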
I'm integrating over a fundamental domain of Γ in G, but in my lecture number four I pointed out that the Siegel domain contains a fundamental domain. So I get something at least as large if I integrate over the full Siegel domain S, the one from theorem three of lecture number four; the fundamental domain is a subset of it, and I have the same integrand. Recall the definition: S is the product F_N · A_t · K, where F_N is the fixed compact piece of the unipotent group N from that lecture, t is a certain positive number (it is actually equal to the explicit constant there), and A_t, if you remember, is the set of all diagonal matrices a = diag(a₁, …, a_d) with positive entries and determinant one such that a_{i+1} ≤ t · a_i for all indices i. And K is the compact group SO(d).

In this parametrization g = nak, the Haar measure was

dμ(nak) = const · ∏_{1 ≤ i < j ≤ d} (a_j / a_i) · dn · ∏_{i=1}^{d−1} (da_i / a_i) · dk,

where dn is Haar measure on N; in the middle product i runs from 1 to d−1, while in the first, more complicated product j runs all the way up to d; and dk is Haar measure on the compact group K, which we do not have to care about.

Okay, so now, up to a constant, the point is that we need to bound the integrand, the number of lattice points in the ball of radius r minus the origin, for a given Iwasawa decomposition of the lattice basis. Let me just write out the bound; it is a good exercise, to understand what the Iwasawa parameters say about the lattice:

#(Z^d nak ∩ B_r ∖ {0}) ≤ ∏_{j=1}^{d} (1 + 2r/a_j).

The way you prove this, again a good exercise, is to look at the possibilities for the integer vector m = (m₁, …, m_d). It turns out that if you start with m₁, you find that |m₁| a₁ has to be at most r, so the number of choices of m₁ is at most the factor 1 + 2r/a₁. Then for given m₁, you ask how many choices there are of m₂, and for any given m₁ there are at most 1 + 2r/a₂ choices of that coordinate, and so on. That is how you get it.

Okay. I am not going to prove it; it is a nice exercise to prove that bound, and I want to give a picture very soon to show you why we get such a bound. But now it is a really explicit integral: we have a bound that does not depend on n or k, so we can basically cross out those integrations, and we are left with an integral only over this (d−1)-dimensional space of the a_i. It is a fun, elementary, but not easy problem to show that this is bounded; a more difficult problem than the one you had yesterday. And if we can prove that it is finite for any given positive r, then it is easy to see what happens when we divide through by a large r^d.
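Here is a quick numerical sanity check of that counting bound in d = 2 (a sketch, not a proof; all parameter ranges below are arbitrary choices for the test):

```python
import itertools, math
import numpy as np

# Check in d = 2 that  #(Z^d n a k ∩ B_r \ {0}) <= prod_j (1 + 2r/a_j)
# for lattice bases in Iwasawa form g = n a k (row-vector convention).

rng = np.random.default_rng(0)
r = 3.0

def count_in_ball_no_origin(g, r):
    M = int(math.ceil(r * np.linalg.norm(np.linalg.inv(g), 2))) + 1
    ms = np.array([m for m in itertools.product(range(-M, M + 1), repeat=2)
                   if m != (0, 0)])
    return int(np.sum(np.linalg.norm(ms @ g, axis=1) <= r))

for _ in range(100):
    u = rng.uniform(-0.5, 0.5)                   # unipotent parameter of n
    a1 = rng.uniform(0.2, 5.0); a2 = 1.0 / a1    # a = diag(a1, a2), det = 1
    th = rng.uniform(0.0, 2 * math.pi)           # rotation angle for k
    n = np.array([[1.0, u], [0.0, 1.0]])
    k = np.array([[math.cos(th), math.sin(th)],
                  [-math.sin(th), math.cos(th)]])
    g = n @ np.diag([a1, a2]) @ k
    assert count_in_ball_no_origin(g, r) <= (1 + 2*r/a1) * (1 + 2*r/a2)
print("bound held for all 100 random Iwasawa samples")
```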
If I can prove it for one r, then I also get that it is at most some constant, depending on d of course, times r^d, for all r ≥ 1. So this is what you can prove, and to prove it, it suffices to treat the case r = 1; you are in better shape when r is larger. And this gives a majorant: if we divide through by r^d, it gives exactly the kind of majorant which shows that this function is a good dominating function in Lebesgue's dominated convergence theorem. So what I sketched earlier will work.

Let me say a few more words about this. Note that the integrand ∏_j (1 + 2r/a_j) is at least one; if I replaced it by one, we would get back the exercise that was on the board in yesterday's tutorial. So what we have here is a stronger statement. When I talked to some people about that exercise, I said I do not know a way of doing it except actually working hard: when I did it once, you do each integral step by step and try to get the best bound, and there is no really nice substitution that makes it easy. But let me point out one useful substitution. Let q_i be the ratio a_{i+1}/a_i, for i = 1, 2, up to d−1. These are positive numbers, and clearly if I know these d−1 numbers, I can determine all d numbers a_i, using the fact that their product is one. Now, the formulas are going to be slightly complicated, but no real complication arises, and the beauty of this substitution is that the domain of integration becomes really simple: I am just integrating each variable q_i from 0 up to t. You have to work out the Jacobian, a little bit of work, but in the end you get a really simple domain of integration and then a product of the q_i raised to some powers, and the only thing to check is that each power is larger than −1. So it is not too hard, for the problem in yesterday's tutorial. If you want to do the stronger statement, it is harder; you have to split the domain of integration a bit, and I leave that to you. I will now simply refer to Siegel's beautiful paper (C. L. Siegel, A Mean Value Theorem in Geometry of Numbers, Annals of Mathematics, 1945), where you can see all of this carried out. He uses slightly different notation, but his q_i are more or less the same. It is a really short paper, eight or nine pages, and everything I am saying here is done on one page.
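Since the substitution is left as an exercise, here is the simplest case d = 2 worked out (my own small example; t is the constant of the Siegel domain):

```latex
% d = 2: one variable q = a_2/a_1 \in (0, t], with a_1 a_2 = 1, so
% a_1 = q^{-1/2}, a_2 = q^{1/2}, and da_1/a_1 = -\tfrac{1}{2}\, dq/q.
% The Haar factor becomes
\frac{a_2}{a_1}\,\frac{da_1}{a_1} \;=\; q \cdot \frac{1}{2}\,\frac{dq}{q} \;=\; \frac{dq}{2}
\qquad \text{(up to orientation),}
% and the integral to bound is
\int_0^{t} \Big(1 + \frac{2r}{a_1}\Big)\Big(1 + \frac{2r}{a_2}\Big)\,\frac{dq}{2}
 \;=\; \int_0^{t} \big(1 + 2r\,q^{1/2}\big)\big(1 + 2r\,q^{-1/2}\big)\,\frac{dq}{2}
 \;<\; \infty,
% finite because the worst power, q^{-1/2}, has exponent -1/2 > -1.
```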
Now, I had a little picture I wanted to draw, just to show you what we are fighting against. When we are far out in the cusp of the homogeneous space X, what can such a lattice look like? Well, it has to have a really short non-zero vector, that we know, but there are different shapes it can have. So take d = 3, and suppose Γg is far out in the cusp. The lattice must have some short vector, and if I take the multiples of this short vector, the lattice looks very densely packed along one line at least. Then there are two possibilities. One is that only a₃ is small: then the lattice consists of densely packed parallel lines, the short vector has length a₃, the distance between neighbouring lines within a plane is a₂, and the distance to the next parallel plane is a₁. That is, if the distance between the lines is really much larger than a₃, and the distance to the next plane much larger than a₂, then this is all unique: the representation in the Siegel domain is essentially unique. The other possibility is that a₃ is small but a₂ is also small: then the lattice is really thickly packed in some plane, and since a₂ and a₃ are small, a₁ has to be really large, so the planes are very far apart. So I hope this gives you some feeling for what a lattice far out at infinity looks like. These are the beasts we are fighting against, the beasts that make this integral large; and when you split the domain of integration, they fall into different subsets.

Okay. Okay, so now we have Siegel's integration formula for random lattices. So that allows us to conclude the proof that Jens outlined yesterday. Once we have this, we know that the expected number of points that a random lattice has in a set of small volume is always small. This allows you to do the approximation: if you remember, we had a long, thin cone, and we applied a rotation and then a dilation to make it into a normalized cone of height one. But it was not quite the cone; it had a rounded cap, and it is an approximation only to replace the rounded cone by the flat-capped one. The point is that we can do that because the small difference between the rounded cap and the flat cap is a set of really small volume, so the expected number of points of the random lattice in that small difference is small, and with very high probability there is no point in it at all.

But now we would like to do this kind of argument for more general point sets. So maybe I start here again, going back to the setting which Jens had in his lecture yesterday. We start with some fixed locally finite point set P in R^d; P will be fixed throughout this discussion. Locally finite means that P has only finitely many points in any compact set. And then we let ξ_t be the random point set

ξ_t := P K_v D_t,

that is, the point set P rotated by the random rotation K_v, which depends on v, and then dilated by D_t. Here D_t is the diagonal matrix

D_t = diag(t^{−1}, t^{1/(d−1)}, …, t^{1/(d−1)}) ∈ SL(d,R),

and K_v is in SO(d) with the property that v K_v = e₁, the first standard unit vector. The point is that we take v at random in the unit sphere S₁^{d−1}, with respect to some fixed probability measure λ, and we assume that λ is absolutely continuous with respect to the Lebesgue area measure ω on the sphere.

Okay. So this is the object we are interested in. For each positive t we now have a random point set, and we are asking: does the distribution of this random point set tend to something interesting when t goes to infinity? So the main question for us is: does this random point set tend in distribution to some random point set, some point process? Jens explained yesterday what we mean by tending in distribution: actually, we look at the finite-dimensional distributions. But let me anyway say some words about how this all fits into general probability theory, because you can put a topology on the family of all locally finite point sets.
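To make the basic objects concrete, here is a sketch of D_t and one possible construction of K_v (the QR-based construction is my own choice for illustration; only the defining property v K_v = e₁ matters):

```python
import numpy as np

# Row-vector convention, as in the lecture: points transform as x -> x K_v D_t.

def D(t, d):
    """D_t = diag(t^{-1}, t^{1/(d-1)}, ..., t^{1/(d-1)}), an element of SL(d,R)."""
    return np.diag([1.0 / t] + [t ** (1.0 / (d - 1))] * (d - 1))

def K(v):
    """Some orthogonal K with v K = e_1, for a unit row vector v (via QR)."""
    d = len(v)
    A = np.eye(d)
    A[:, 0] = v                          # matrix whose first column is v
    Q, R = np.linalg.qr(A)
    Q[:, 0] *= np.sign(R[0, 0])          # make the first column equal to +v
    if np.linalg.det(Q) < 0:             # flip another column to land in SO(d)
        Q[:, -1] *= -1
    return Q                             # columns orthonormal, so v @ Q = e_1

rng = np.random.default_rng(1)
d, t = 3, 10.0
v = rng.normal(size=d); v /= np.linalg.norm(v)   # a random direction on S^{d-1}
assert np.allclose(v @ K(v), np.eye(d)[0])       # v K_v = e_1
assert abs(np.linalg.det(D(t, d)) - 1.0) < 1e-12 # det D_t = 1
# A point x of P is mapped to the point  x @ K(v) @ D(t, d)  of xi_t.
```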
Naively, this topology is very intuitive; you can call it the topology of pointwise convergence. It is a metrizable topology, and it makes the set of all locally finite point sets a separable space. It is not really complete, but if you think about what you have to complete it with: if you allow multiple points, point sets with multiplicities, then you get a complete metric space. So you have a topology, and a random point set is just a random element of this metric space. But it turns out that to check convergence in distribution, you only have to check the finite-dimensional distributions: you test against arbitrary Borel sets and count the number of points in each fixed Borel set. So everything we are doing fits into a general framework, and I refer, for example, to Kallenberg's book, Foundations of Modern Probability, where you can find all this worked out.

Okay. So what Jens showed yesterday is that if you start with P a lattice, then this holds, and the answer is a random lattice: if P is a lattice and you define ξ_t to be this randomly rotated and dilated version of the lattice, then as t goes to infinity, ξ_t converges in distribution to a random lattice. And tomorrow, no, this evening, we will start discussing the case when P is a fixed quasicrystal; I will prove that in certain cases you also get a limit that we can describe. But I think it is a really interesting question to ask: is there some general theory that you can develop? Can you find more examples where you can prove such things? I do not really have any idea how to go about it. But let me say one thing: at least we can prove that this family is, what is it called now, relatively compact in distribution. So from any subsequence you can always extract a further subsequence that converges to something. So it gives you a first handle: when you want to characterize the possible limits, you can do what I described in the introductory lecture. You can start asking: if I have some limit of some subsequence, can I characterize it? And so on.

So let me write this out as a theorem. We have to put one assumption on the fixed set P. Recall from Jens's lecture that P ∩ B_R is the set of points of P at distance less than R from the origin. The assumption is that the number of points of P in the ball of radius R should not grow more rapidly than the volume of that ball:

lim sup_{R→∞} #(P ∩ B_R) / vol(B_R) < ∞.

Theorem 2. Under this assumption, the family (ξ_t), say for t ≥ 1, is relatively compact in distribution. Meaning: for any sequence of t-values t₁ < t₂ < ⋯ tending to infinity, there exists a subsequence, that is, positive integers j₁ < j₂ < ⋯, such that ξ_{t_{j_k}} tends in distribution to some random point set ξ as k → ∞.

Okay, it is a rather abstract theorem, but what we will do with it will become concrete very soon. So now I will just say some words which may not make sense to all of you, but soon I will get to something concrete. There is a well-known theorem of Prokhorov that tells you that in order to show relative compactness, it suffices to prove that this family is tight.
So that means that for any ε, no matter how small, we can find a compact set such that the probability of being outside that compact set is less than ε when t is large; and now compact means compact in the space of locally finite point sets, which is rather abstract. But it turns out, using the same machinery by which convergence in distribution can be tested on counts in Borel sets, that it actually suffices to prove the following: for any given Borel set B in R^d, or actually I can take it to be a ball B_ρ of radius ρ, say, the family

#(ξ_t ∩ B_ρ), t ≥ 1,

is tight. For each t this is an integer-valued random variable, and to say that the family is tight is just to say that, given ε, there is some large integer M such that the probability that the count exceeds M is less than ε, uniformly for large t. Okay, so this is what I need to prove. And actually we can prove a slightly stronger statement, which implies it by Markov's inequality: the expected value is bounded uniformly in t. So it suffices to prove the following, and we will be able to prove it:

sup_{t ≥ 1} E[#(ξ_t ∩ B_ρ)] < ∞.

Okay, so what is this? Now it is something really concrete, so let us just start the computation and see. The expected number of points in the ball is by definition an integral over the sphere, because v is the random variable: taking the expected value means integrating over the unit sphere with respect to my given fixed probability measure λ, which is arbitrary except that it is absolutely continuous. And I count the number of points of the random point set in B_ρ, where by definition the random point set is the fixed set P rotated by the random rotation K_v and then dilated by D_t:

E[#(ξ_t ∩ B_ρ)] = ∫_{S₁^{d−1}} #(P K_v D_t ∩ B_ρ) dλ(v).

(Say again? The random point set?) Yes: my random point set was defined, for fixed t, to be exactly this, so the randomness sits in v. I have a map, and I am really integrating over my space of point sets with respect to its distribution.

Okay, and now I want to bound this, and in order to get something manageable, I want to have the spherical area measure here instead of λ. And, sorry, sorry, that requires an extra assumption which I should have stated: as far as I can see, we need to assume that the density of λ is bounded. So on top of absolute continuity, we also assume that λ′ ∈ L^∞, where λ′ is the density of λ: that is, λ is given by λ′, an L¹ function on the unit sphere, times the area measure ω on the unit sphere, dλ = λ′ dω. I was working a bit too late yesterday, maybe. Okay.
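As an aside, here is what this expected value looks like numerically in the simplest toy case, P = Z² with λ the uniform measure on the circle (so λ′ is constant and bounded); the averages should stay bounded as t grows, which is exactly the tightness bound:

```python
import itertools, math
import numpy as np

# Monte Carlo sketch of E[#(xi_t ∩ B_rho)] for P = Z^2 \ {0}, lambda uniform.
# (The origin is excluded, matching the standing assumption 0 not in P.)

def count_xi_in_ball(t, theta, rho):
    """#(P K_v D_t ∩ B_rho) for v = (cos theta, sin theta), brute force."""
    Kv = np.array([[math.cos(theta), -math.sin(theta)],
                   [math.sin(theta), math.cos(theta)]])   # v @ Kv = e_1
    g = Kv @ np.diag([1.0 / t, t])                        # K_v D_t, det = 1
    M = int(math.ceil(rho * t)) + 1        # |m g| <= rho forces |m| <= rho * t
    ms = np.array([m for m in itertools.product(range(-M, M + 1), repeat=2)
                   if m != (0, 0)])
    return int(np.sum(np.linalg.norm(ms @ g, axis=1) <= rho))

rng = np.random.default_rng(2)
rho, N = 1.5, 300
for t in [1.0, 2.0, 5.0, 10.0, 20.0]:
    avg = np.mean([count_xi_in_ball(t, rng.uniform(0, 2 * math.pi), rho)
                   for _ in range(N)])
    print(f"t = {t:5.1f}:  E[#(xi_t ∩ B_rho)] ≈ {avg:.3f}")
```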
Okay, so then this is bounded from above by taking the sup norm of the density out: I approximate from above and get the same integral against the area measure,

E[#(ξ_t ∩ B_ρ)] ≤ ‖λ′‖_∞ ∫_{S₁^{d−1}} #(P K_v D_t ∩ B_ρ) dω(v).

And the reason this helps is the following. Now I kind of go backwards relative to what Jens did yesterday: to see what the integrand is, apply the inverses of the dilation and the rotation. We get

∫_{S₁^{d−1}} #(P ∩ B_ρ D_t^{−1} K_v^{−1}) dω(v).

So what is B_ρ D_t^{−1}? It is the ball dilated by the inverse diagonal matrix, so it is an ellipsoid, a really long and thin ellipsoid in R^d, having one semi-axis equal to ρt and all the other semi-axes equal to ρ t^{−1/(d−1)}. (Did I write it incorrectly before? No, it is the inverse of that matrix; I just forgot the factor ρ on the radii.) And then I am rotating it: I am averaging over all rotations of this ellipsoid. So now it is quite explicit. For each point x in P, I get a contribution which is quite explicit; you can compute it in terms of J-Bessel functions, I think, but we do not really need to compute it fully. We just give it a name. So I change the order of integration and summation, write this as a sum over the points of P, and then I get some function, call it A_t, which, if you think about it, depends only on the ratio |x|/ρ:

∫_{S₁^{d−1}} #(P ∩ B_ρ D_t^{−1} K_v^{−1}) dω(v) = Σ_{x ∈ P} A_t(|x|/ρ).

And this is a slowly, nicely decreasing function. Now you can do summation by parts, using integration by parts and so forth, to actually get the bound; I do not have time to write it out. So what this is, is a kind of lattice point counting problem, except we do not have a sharp cutoff; we have a nicely decaying cutoff, so it is in a sense easier than counting all points of P in a large ball. And it is clear that you can treat such a nicely decaying cutoff if you have a bound on the counting function: if you have a bound on the sup, you can easily get an upper bound. What you get is the area of the unit sphere, times the sup norm of λ′, times the supremum of that counting ratio (the supremum now, instead of the lim sup), times the volume of the ball we were counting in:

E[#(ξ_t ∩ B_ρ)] ≤ ω(S₁^{d−1}) · ‖λ′‖_∞ · sup_{R>0} [#(P ∩ B_R)/vol(B_R)] · vol(B_ρ).

Now somebody can catch me on a slight error here: this supremum is finite unless P has a point at the origin. So we have to assume also that there is no point at zero. That would not be any problem to treat; that point would just sit there always, so it is not a problem. But if I assume that there is no point at the origin, then the fact that the lim sup is finite implies that the supremum is also finite. So I have a good bound, uniform in t, and we have proved the tightness that we wanted. Thank you.
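Since the change of variables goes by quickly, here is the chain written in one display, together with the reason the per-point contribution depends only on |x|/ρ (a sketch, assuming 0 ∉ P):

```latex
\mathbb{E}\,\#(\xi_t \cap B_\rho)
  = \int_{S_1^{d-1}} \#\big(P K_v D_t \cap B_\rho\big)\, d\lambda(v)
  \le \|\lambda'\|_\infty \int_{S_1^{d-1}} \#\big(P \cap B_\rho D_t^{-1} K_v^{-1}\big)\, d\omega(v)
  = \|\lambda'\|_\infty \sum_{x \in P} A_t\!\Big(\tfrac{|x|}{\rho}\Big),
% using x K_v D_t \in B_\rho \iff x K_v \in B_\rho D_t^{-1}. Here B_\rho D_t^{-1} is an
% ellipsoid with semi-axes \rho t, \rho t^{-1/(d-1)}, \dots, \rho t^{-1/(d-1)}, rotationally
% symmetric about the e_1-axis. Since K_v is orthogonal and v K_v = e_1, the angle
% between x K_v and e_1 equals the angle between x and v; averaging over v with the
% rotation-invariant measure \omega therefore gives a contribution depending only on
% |x| (and on \rho, by scaling, only through the ratio |x|/\rho).
```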
(Answering a question:) Right, the origin is not a point of P; that is the assumption.

And I wanted to write out a fact that also follows from this proof and that will be important soon; so let me make an addition to the theorem. In the counting above, if ρ is large, then the small values of R contribute only a vanishing amount asymptotically, so for large ρ I can actually replace the supremum by the lim sup. This is a technical point that will be important. So it turns out, and I hope you can read this, that

lim sup_{ρ→∞} E[#(ξ_t ∩ B_ρ)] / vol(B_ρ) ≤ S_λ · θ̄(P),

uniformly in t, where S_λ := ω(S₁^{d−1}) · ‖λ′‖_∞ is the constant from the bound above, which I call S_λ for simplicity, and θ̄(P) := lim sup_{R→∞} #(P ∩ B_R)/vol(B_R). (Sorry, at first I forgot to divide through by the volume of the ball of radius ρ; as stated now it makes sense, and I hope it is legible for everybody.) So this is the bound you get when ρ goes to infinity.

Okay. And now I want to come to the statement of... okay, maybe I should first say a few words about the program. We would like to be able to get hold of this limit for many point sets. What we know how to do today is the case of a lattice and the case of a quasicrystal, which we will speak more about, and maybe some trivial examples. But whenever we can get hold of the limiting random point set, we need to have access to the technique which Jens showed you yesterday; in particular, we need to be able to show that the expected number of points of the limiting point set in a small region is always small. So we need some kind of generalization of the Siegel volume formula, and this is what one can prove. What I am going to write out now is actually a special case of Veech's formula, from his paper from '98, in a kind of trivialized version: you need to do the computation we just did, but Veech needs to do much more in his setting, so I am assuming something stronger than Veech assumes.

Theorem 3. Let ξ be a random point set in R^d; here I forget that I obtained ξ as a limit, this is a statement about a general random point set. Assume that ξ is SL(d,R)-invariant (maybe there is a better word, invariant in distribution): I do not mean that every point set I can get as a realization is SL(d,R)-invariant; I mean that the distribution of the point set is SL(d,R)-invariant. To give a random point set corresponds to giving a probability measure on the set of point sets, and I am assuming that this probability measure is invariant under the natural action of SL(d,R); it is clear what this means once you go down into the technicalities. And remember that a random lattice is SL(d,R)-invariant in this sense; that was the key point in the proof of Siegel's theorem. Assume also that the expected number of points of ξ in any bounded set B is finite, and finally that the probability that the origin belongs to ξ is zero, so this random point set almost surely never touches the origin.
And then the statement is that the intensity of this random point set is a constant times Lebesgue measure: there is some finite real number c ≥ 0 such that for any Borel set B in R^d,

E[#(ξ ∩ B)] = c · vol(B).

Okay, and the proof is actually just the same as the one I gave for Siegel's theorem; here I have assumed everything that I need, so just look back at that proof. What we do is define a Borel measure on R^d by ν(B) := E[#(ξ ∩ B)]; it is easily seen to be a Borel measure, and the finiteness assumption says that it is finite on any bounded set. This Borel measure is also invariant under SL(d,R), and it gives mass zero to the origin. And the only such measures are Lebesgue measure times a constant (SL(d,R) acts transitively on R^d ∖ {0}, and an invariant Radon measure there is unique up to scaling). So the proof which I gave for Theorem 1, Siegel's theorem, works here as well.

Yeah, so maybe I take one more minute just to connect things. In Theorem 3 I did not assume anything about ξ being a limit. But let me just write out what we can get by combining Theorem 3 with Theorem 2 and the addition to it; call it Theorem 4. Maybe it is not adapted to all other point sets, but for a quasicrystal this will give us what we want, and likewise for a random lattice.

Theorem 4. Assume the following about the fixed point set P. First, θ̄(P), the lim sup I erased, of the number of points in a large ball divided by the volume of the ball, is finite. Second, the random point set ξ_t, which we get by rotating P by a random amount and then dilating by the fixed amount t, converges in distribution to some random point set ξ. And finally, assume that ξ is SL(d,R)-invariant. This turns out to be the case for a lattice, and it turns out to be the case when P is a quasicrystal, and I think it is reasonable to hope for such a thing to hold in many other situations. Because somehow, when we take the point set P, rotate it at random, and dilate by a really huge amount, we are seeing all sorts of versions of it; of course we are not applying an arbitrary SL(d,R) transformation, but still one can hope that this could hold in many situations. And it is a really interesting problem to characterize which point processes are SL(d,R)-invariant.

Anyway, the statement now is that then Theorem 3 applies. So the intensity of ξ, the limiting point process, is just Lebesgue measure times a constant: E[#(ξ ∩ B)] = c · vol(B) for some constant c. And we also have some information about c, by looking back at Theorem 2 and the addition I just gave:

c ≤ S_λ · θ̄(P),

where S_λ is, as before, the sup norm of the density of λ times the area of the unit sphere. Let me just point out that if λ is the uniform probability measure on the sphere, the normalized area measure, then S_λ is just 1; that gives you a feeling for it. So we have an upper bound.
And let me also point out that we can get a lower bound on c if we have a lower bound on the asymptotic density of the limiting process. This is really making an assumption about the limiting point process: suppose I happen to be in a situation where I can see that

lim inf_{R→∞} #(ξ ∩ B_R) / vol(B_R) ≥ θ₁ almost surely,

that is, for almost every realization of the limiting point process I have this lower bound on the number of points in large balls. If I have that, then I also have the lower bound c ≥ θ₁. And this is just an observation. I did this for a random lattice: I pointed out that for every lattice of covolume 1, the number of points in the ball of radius R is asymptotic to the volume of the ball; the rate of convergence may depend on which lattice, but it holds for every lattice. Okay. Sorry for going over time.
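Written out, the observation is one application of Fatou's lemma (a sketch; θ₁ as above):

```latex
% Since \mathbb{E}\,\#(\xi \cap B_R) = c\,\mathrm{vol}(B_R) exactly, Fatou gives
c \;=\; \liminf_{R\to\infty} \frac{\mathbb{E}\,\#(\xi \cap B_R)}{\mathrm{vol}(B_R)}
  \;\ge\; \mathbb{E}\Big[\liminf_{R\to\infty} \frac{\#(\xi \cap B_R)}{\mathrm{vol}(B_R)}\Big]
  \;\ge\; \theta_1 .
% For the random-lattice limit, every covolume-one lattice L satisfies
% \#(L \cap B_R)/\mathrm{vol}(B_R) \to 1, so \theta_1 = 1; combined with the upper
% bound (for uniform \lambda, S_\lambda = 1 and \bar\theta(P) = 1), this pins down c = 1.
```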