Thank you very much. Thank you for the invitation, and thank you all for coming. As Mike said, if you have questions, please do ask them; let's keep an informal, in-person atmosphere as much as we can. I've attended many of these lectures and I think they're a great service to the community, so thank you for that. This is joint work with Or Ordentlich, who is in the electrical engineering department at the Hebrew University, and Oded Regev, who is in the computer science department at NYU. As you can see, the topic is a topic in classical geometry of numbers. This is a field that has enjoyed a lot of recent attention from both computer scientists and electrical engineers, for completely different and actually unrelated reasons, and they are now, I think, the experts making the most progress in this field, so I'm happy to have such great coauthors. The new results that we have are based on work coming from a third direction, in which none of us is really an expert, something called the discrete Kakeya problem. Specifically, the new work is based on a recent result of Dhar and Dvir that I'll tell you a little about at the end of the talk. So what is this about? We're interested in the covering volume of a lattice, so let me define everything. What is a lattice? A lattice is a subset of R^n which is spanned over Z by n linearly independent vectors v_1, ..., v_n. So L = span_Z{v_1, ..., v_n}; namely, it is all the integer linear combinations of these n vectors, and n is always going to be the dimension. Another way to say this: L is the image of the standard integer lattice Z^n under a matrix A, where A has the v_i as its columns.
That is another way of saying it. Geometry of numbers is the field in mathematics, initiated by Minkowski, that studies the geometric properties of such sets, and the questions I'll be talking about are classical in this field. The first invariant of a lattice is the covolume, defined to be |det A|. A lattice may have many different choices of the v_i, that is, many different bases, but the covolume does not depend on the choice. What is the geometric meaning of the covolume? A measurable set Omega in R^n, say a Borel set, is called a fundamental domain for a lattice L if it tiles space perfectly: you can write R^n as the disjoint union of the translates of Omega by elements of the lattice. Let me give an example. Take the lattice to be the standard integer lattice Z^n and take Omega to be the unit cube, closed on the left and open on the right in every coordinate. Then obviously every element of R^n can be written as a lattice translate of a point of this unit cube; it's just the standard tiling of your floor by square tiles. Or, if you like, you reduce any vector mod 1 to get an element of the cube, and the difference between that element and the original vector is an element of the integer lattice; that's why we get this tiling. And obviously, if you translate the lattice, so if L' = L + z is a translate of a lattice, then you can also translate the fundamental domain, and Omega + z will be a fundamental domain for the translated lattice; the determinant of L is measuring the volume, the standard Lebesgue volume, of this set. So what this discussion tells us is that the covolume of L is the volume of a fundamental domain. It's not hard to see, it's a standard exercise, that this does not depend on the choice of fundamental domain.
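To make the definitions concrete, here is a minimal Python sketch for n = 2 (the function names and the specific basis are our own illustration, not from the talk): it computes the covolume from a basis matrix and reduces a point of R^2 modulo the lattice, landing in the half-open fundamental parallelepiped.

```python
import math

def covolume(v1, v2):
    """Covolume of the lattice spanned by v1, v2: |det A| for A = [v1 v2]."""
    return abs(v1[0] * v2[1] - v1[1] * v2[0])

def reduce_mod_lattice(x, v1, v2):
    """Representative of x in the parallelepiped {a*v1 + b*v2 : 0 <= a, b < 1}."""
    det = v1[0] * v2[1] - v1[1] * v2[0]
    # Cramer's rule: solve a*v1 + b*v2 = x, then keep only the fractional parts.
    a = (x[0] * v2[1] - x[1] * v2[0]) / det
    b = (v1[0] * x[1] - v1[1] * x[0]) / det
    a -= math.floor(a)
    b -= math.floor(b)
    return (a * v1[0] + b * v2[0], a * v1[1] + b * v2[1])

# Two different bases of the same lattice give the same covolume.
print(covolume((2, 0), (1, 1)), covolume((2, 0), (3, 1)))
```

A point minus its reduction is always a lattice vector, which is exactly the tiling statement above.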
That's the first standard quantity. Now we take a lattice and try to cover space by translates of some other set, which might not be a fundamental domain. Denote this set by K; it is some set we want to use to cover space efficiently, and we will always take it to be convex and compact. We define a function N_{L,K}(x) to be the number of translates of K by elements of the lattice that cover the point x. So if I fix L and K and vary x, I get a function from R^n to the nonnegative integers: the number of times the point gets covered. Let me show you some pictures so we are all on the same page, but first let me write down the number that is going to show up in all the formulas: vol(K)/covol(L), the volume of K divided by the covolume of L. I'll explain in a minute that this is the average number of times a point in R^n gets covered by the translates of K by elements of L, and this is the very important number for us. So let me show some pictures so you get a feeling for this. Here the black points are the points of a lattice and the circle is my set K. We translate K by the lattice points and we get this union of circles; they don't cover space, and they do overlap. So this point here is covered zero times, this point here is covered once, and points in the overlaps are covered twice; N assumes three possible values, zero, one and two, in this example. As we increase the radius of the ball, you see that the balls intersect more and more, and after a while
Every point gets covered: in this picture every point is covered at least once, and points in these corner spots are actually covered three times. This is the kind of picture you might have in mind. As you can see, these pictures are periodic: they don't change when you shift by an element of L. So if you want to understand the average number of times a point gets covered, you just have to average over a fundamental domain. The parallelogram I'm drawing is a fundamental domain Omega for this lattice, and if I integrate the function N_{L,K} over it I get the average number of times a point gets covered. That is, vol(K)/covol(L) = (1/vol(Omega)) * integral over Omega of N_{L,K}(x) dx. This equality is a simple exercise that I'll leave to you; it's just a Fubini computation. This is a crucial formula for us. Now, just for understanding the results, two standard definitions. The pair (K, L) of a convex set and a lattice is called a packing if this function never exceeds one; then the translates are disjoint. Either a point is covered or it is not, but when it is covered it cannot be covered by two different translates. The pair (K, L) is called a covering if N_{L,K}(x) is always at least one, which happens exactly when every point gets covered. So these are the notions of packings and coverings, and what people have been interested in for many, many years is, given a convex body, to find a lattice for which you get an efficient packing or an efficient covering of space by the translates of the convex body. Those are the covering and packing problems.
So let me state them formally. The covering volume of the pair (K, L) is traditionally denoted by a capital Theta, and it is defined to be the infimum, and I'll write down the formula and explain: you take your convex body K and you dilate it. (There was some question that I couldn't make out. Maybe not. Okay, so let me continue.) We take our convex body K, we make it larger or smaller by multiplying by r, and we look for the threshold, the minimal dilation needed to make this into a covering; then we measure the average number of times points get covered by this dilated copy of K. So Theta(K, L) is the infimum of vol(rK)/covol(L) over all r for which (rK, L) is a covering. If we go back to the pictures I showed you earlier, you see that as I move from one picture down to the next, I am dilating the ball, making it larger, and at a certain instant it starts to become a covering. At the first moment it is a covering, I measure the average number of times the points get covered: points here are covered once, points here are covered twice, and there are a few points covered three times; the average is some number bigger than one by some small amount. That's the quantity we are measuring. So the difference Theta(K, L) - 1 is a measurement of the wastefulness, if you like, of this covering. And we make a similar definition for the packing quantity, denoted in the literature by rho: rho(K, L) is defined to be the supremum of vol(rK)/covol(L), where now we range over all dilates rK which give a packing. There is a question in the chat: do you assume that K is symmetric with respect to a point? No, I don't, actually. Sometimes it is assumed and sometimes it is not; I am not going to assume that.
So this is the similar measurement. The number Theta(K, L) is always at least one, and the number rho(K, L) is always at most one: in the covering situation we are averaging a function which is at least one everywhere, and in the packing situation we are averaging a function which is at most one everywhere. So we get these inequalities, and the difference 1 - rho(K, L) measures the proportion of space left uncovered by the packing. I hope the quantities are clear. We've defined this for a given K and L; the next order of business is to fix K and search for the optimal covering, ranging over all lattices. Let's start with covering: Theta_n(K), with a superscript n, is the infimum of Theta(K, L) over all lattices L in R^n, and rho_n(K) is the supremum of the corresponding packing quantities. Now, it is not hard to check that these are truly geometric quantities: if I take K and expand it or translate it, I don't change the value. And if I have covered space optimally with a given lattice and I dilate the lattice, the value of Theta_n(K) will not change, and likewise rho_n(K) will not change if I replace L by a dilate. These are the quantities we are interested in. The problem of finding these numbers has been around for a very, very long time; it makes an appearance even in Hilbert's 1900 list of problems, and it has been studied ever since, but still not much is known. The specific case that people care about more than others is the case of the Euclidean ball, and that is the problem that has attracted the most attention. Sorry, I want to keep this notation unambiguous, my notation is nonstandard, so let me substitute a subscript.
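As a toy illustration of both definitions (our own sketch; the lecture's examples are only the pictures), one can compute Theta for the Euclidean disk and the square lattice: the covering radius of Z^2 is sqrt(2)/2, attained at the deep hole (1/2, 1/2), so Theta(B^2, Z^2) = pi/2, and dilating the lattice leaves Theta unchanged, exactly the invariance just described.

```python
import math

def covering_radius(scale, grid=200):
    """Covering radius of scale*Z^2 w.r.t. the Euclidean ball: the largest
    distance from any point to the lattice, approximated over a grid that
    contains the deep hole exactly."""
    worst = 0.0
    for gi in range(grid + 1):
        for gj in range(grid + 1):
            x, y = scale * gi / grid, scale * gj / grid
            dx = x - scale * round(x / scale)  # offset to a nearest lattice point
            dy = y - scale * round(y / scale)
            worst = max(worst, math.hypot(dx, dy))
    return worst

def theta_disk(scale):
    """Theta(B^2, scale*Z^2) = vol(r* B^2) / covol(scale*Z^2), r* = covering radius."""
    r = covering_radius(scale)
    return math.pi * r * r / scale ** 2

print(theta_disk(1.0), theta_disk(2.0), math.pi / 2)
```

Both calls return pi/2 up to rounding: dilating L rescales the covering radius but the ratio vol(r*K)/covol(L) is unchanged.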
So rho_n is my notation for the packing density of the Euclidean ball in n dimensions. And very little is actually known about precise values: the optimal lattice packing is known in dimensions up to eight and in dimension 24, and not in any other dimensions, and the optimal lattice covering is known in dimensions two to five, and that's it. Dimension three is due to Gauss; I mean, various mathematicians have contributed to what we know, and 24 is a very difficult case, where we see the Leech lattice making its appearance. The open questions, the questions I would like to discuss, which are also classical, are: how does Theta_n(K) behave as n tends to infinity? What is the behavior as the dimension gets larger and larger? Of course, to make sense of this question, K is always a convex set in R^n, so it's not about a fixed K: we fix a family of convex sets, one in each dimension, and we are interested in this quantity; for example, you might be interested in this behavior for the Euclidean ball. We are interested in this behavior, getting bounds which are uniform and work for all possible convex bodies K. And of course I would be very happy to be able to say things also about the corresponding quantity rho for packing, but we don't have any results of that type, so that's not what I am going to be discussing in this talk; but just to give you the background, let's start with what's known about rho, just so you'll appreciate that not much is known. So for the packing: if you look at the packing density of the Euclidean ball in dimension n, we have the lower bound rho_n >= c * n * (1/2)^n. About the constants: there are going to be lots of constants appearing in this talk, and they are going to change from one line to the next, but I'm always going to denote them by c; it's not the same constant.
You shouldn't be confused by this notation, and note this is the value of the packing density in dimension n, not rho to the power n. The upper bound is roughly 0.67 to the power n, so there is a huge gap between the lower bound and the upper bound. The lower bound was proved in a sequence of papers, and I'll mention just a few names; I'm trying to impress on you that this is a classical problem, which of course it is. Minkowski initiated this study: in 1896 he proved the lower bound with this exponent, (1/2)^n, but without the factor of n. Rogers, in, sorry, 1947, was the first to get a linear improvement, the extra factor of n. And the current record holder for this kind of problem is actually Venkatesh, who in 2013 provided the best known constant in front. But as you can see, the lower bound is very, very far from the upper bound; the upper bound was established by Kabatiansky and Levenshtein. Wait, is this for any convex body? No, I'm just explaining what's known for the Euclidean ball, just so you get a feel for the problem. So this is just to show you that there's a huge gap in our understanding, and this problem has been around for a very long time, for packing. Let's pass to covering. Here is what was known before our results, and then I will state our results. For the covering problem, let's start with the Euclidean ball. We have lower and upper bounds from the 1950s: the upper bound, theta_n <= c * n * (log n)^c, is due to Rogers in 1959, and the lower bound, theta_n >= c * n, is due to Coxeter, Few and Rogers, also in 1959. (Sorry, there should be a constant here.)
So, as you can see, the upper and lower bounds are very close to each other: this one is linear in n, that one is also linear in n, and the difference is a power of a logarithm. So it's not a huge gap, but it is still an open question to nail this down. That's the case of the Euclidean ball, but for a general convex body there was a bigger gap in what we knew. Here is what I want to state. If you take the worst case, the worst convex body in dimension n, and look at its covering volume, Rogers, in the same paper in fact, proved an upper bound of n^{c log log n}, so a superpolynomial function of n. And for the lower bound, if you look at the worst case, in other words the convex body for which the covering is as inefficient as possible: nobody has found a convex body which is known to be less efficient than the Euclidean ball, so the same Coxeter-Few-Rogers bound c * n, coming from the ball, appears here. So now you can see that for general convex bodies we actually have a very big gap between the upper bound and the lower bound in this problem. Our first theorem, which I'll call Theorem A, improves the situation: for the same quantity that I wrote down here, we prove the bound c * n^2, a constant times n squared. That's our first result. The next set of results I want to describe are for typical lattices, and I'll explain why I am interested in typical lattices. What this result says is that if you give me any convex body whatsoever, I can find a lattice for which it covers, with the average number of times a point gets covered bounded by this number; but I am not going to be able to actually produce that lattice for you explicitly. What I can say is that, with respect to some probability measure on the space of lattices, a typical lattice will actually behave in this way. That's what I want to describe next.
So there is a natural measure, which I will call the Siegel measure. There is, oops, sorry, a natural measure on the set of lattices in dimension n. I'll denote this set by the curly letter X_n; so X_n is the set of lattices L in R^n, and I am going to look only at lattices of covolume one. For all of these geometric problems, restricting to lattices of covolume one makes no difference. But when I do that, this set becomes a quotient of a Lie group by a discrete subgroup: the famous quotient SL_n(R)/SL_n(Z). To see this, you just notice that if I look at a coset A * SL_n(Z), think of this as a coset in this coset space, I can map it to the lattice A * Z^n, the standard lattice multiplied by A, as I was doing before. This gives me a general lattice of covolume one, as I described before, and this map is a bijection; it identifies the space of all lattices of covolume one with this quotient. That is very nice: in particular, X_n becomes a homogeneous space for a Lie group, and the Haar measure on the Lie group can be used to produce a nice measure on the space of lattices, and that is the measure I'd like to use. So mu_n is the Siegel measure on X_n, i.e. the unique SL_n(R)-invariant probability measure on this space. It turns out, and it's a fact related to the theory of Haar measures, which is why Haar's name appears in this discussion, and it was first observed by Siegel, that there is a unique such probability measure on this space. I guess I'm going a little slower than I intended. With respect to this measure I can state the following theorem; I'll write it all down and then explain what's going on. The statement bounds the measure, and this is a probability measure.
So I'm telling you that the measure of a certain set is large, which means a typical lattice satisfies this property. For every epsilon I can find a constant C such that a typical lattice, where typical means outside a set of measure epsilon, satisfies the bound I had before: mu_n({L : Theta(K, L) <= C * n^2}) >= 1 - epsilon. In Theorem A, I told you that there exists one lattice for which Theta(K, L) is less than a constant times n^2; now I am asserting that a large measure of lattices have this property, so this is the measure statement behind Theorem A. Let me write it down in words: a typical lattice gives a covering with Theta(K, L) <= C * n^2, for each K, and the constant C is independent of the convex body K. That's our second theorem. You might wonder why I'm stating a theorem about the behavior of typical lattices in such a convoluted way. Usually, if we work with measures, the next thing one might ask is: what is the average with respect to this measure, the expectation of this covering volume? If I were to prove to you that this expectation is of this order, that would be a result of a similar flavor. However, the expectation is completely different, in fact. Remark: the expectation with respect to the measure mu_n of this quantity, varying L according to this measure, is infinite. This probability measure has what is known as a thin part of the space, the cusp, and in this cusp the covering volume explodes to such an extent that the expectation is infinite, even though the cusp occupies a small measure in the space. So that's just to explain that what I've stated is the natural way to express the fact written over here. Okay, I've explained the results in the first half of the title; I haven't told you about almost uniform covers, so that's what I'd like to do now.
Before doing that, let me explain the idea. Again, I'm turning back to these pictures. What you see in these pictures is that as I pass down these slides I am dilating the ball, and eventually the balls cover. Now, what would happen if I were to dilate the balls even further? Points are going to get covered many times. Just imagine the radius of this ball being not this size but much, much bigger, say this size: points are going to be covered by lots and lots of balls, and you might ask about this function N. The fact is that this function becomes almost uniform: it hardly fluctuates, becoming almost a constant function. That's what we want to understand. Beyond just showing that this eventually happens, we would like to somehow put our finger on the volume at which the function becomes almost uniform. I'll formalize that; it's the next quantity we want to measure. One more thing to say here: all the results I have mentioned so far already appeared in one paper, and what I am describing now is the newer material. So let's define, for K and L as before, a quantity eta(K, L). This is the maximum, as x ranges over R^n, of |N_{L,K}(x) / (vol(K)/covol(L)) - 1|: the ratio of the number of times x gets covered to the average, minus one. The average covering number is the expectation, and this is the fluctuation of the ratio of the actual quantity to the expectation. On average this ratio is one, but we are taking the worst case, in other words the point x where this ratio is as far from one as possible. Notice that if eta is less than one, this function N must be strictly positive everywhere, so we have a covering. So note: eta(K, L) < 1 implies (K, L) is a covering. Okay.
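To see the fluctuation quantity in action, here is a small Python experiment (again our own sketch, with K the Euclidean disk and L = Z^2, none of it from the talk): eta is approximated by maximising over a grid in the fundamental domain, and it visibly drops as the disk is dilated.

```python
import math

def multiplicity(x, y, r):
    """N(x, y): number of translates of the radius-r disk by Z^2 covering the point."""
    reach = int(r) + 2
    return sum(1 for i in range(-reach, reach + 1)
                 for j in range(-reach, reach + 1)
                 if (x - i) ** 2 + (y - j) ** 2 <= r * r)

def eta(r, grid=50):
    """max over x of |N(x)/average - 1|: the worst-case relative fluctuation."""
    mean = math.pi * r * r  # vol(rK)/covol(Z^2), the average covering number
    return max(abs(multiplicity(gi / grid, gj / grid, r) / mean - 1)
               for gi in range(grid) for gj in range(grid))

print(eta(0.8), eta(2.5))  # the fluctuation shrinks as the radius grows
```

For r = 0.8 the disk already covers (the covering radius of Z^2 is about 0.707), so eta < 1, and for r = 2.5 the covering multiplicity is already much closer to its mean.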
Now let's define another quantity. Given a positive epsilon, define psi_{K,L}(epsilon) to be the infimum of the quantity I measured before, vol(rK)/covol(L), over all r with eta(rK, L) <= epsilon. (Question: did you mean eta greater than one or less than one? Less than one. And what does that mean? Saying that this maximum is less than one means that at every point the ratio deviates from one by less than one, so N cannot be zero: if N were zero, we would have |0 - 1|, which is one. So N has to be positive at every point. Is that okay? Yeah.) So I'm defining something; let's call it the epsilon-smooth density. That's just a name that is supposed to help convey the idea, but it is defined by this formula: I increase r until the first moment at which the quantity eta defined before becomes smaller than epsilon. As I said, as I increase the radius, as I showed in the pictures, this quantity eventually goes to zero, so I wait for the first moment at which it is less than epsilon, and I call the resulting density the epsilon-smooth density. It is the minimal average number of times points get covered that ensures that they get covered almost uniformly, where "almost uniformly" is measured by the parameter epsilon. That's a bit of a mouthful, perhaps, but nevertheless let me state the theorem. I have two parameters in this theorem, epsilon and sigma, and I take a supremum over all convex bodies; I'll write it down and then explain. Okay, here's the theorem; let's try to figure out what it says. The first thing to notice is that we had a power of two in the previous results and now we have a power of three. We have a parameter sigma which you should think of as small, so you should think of this quantity as being very close to n cubed, a little bit bigger than n cubed.
And this is saying that if we take any convex body, that's why we have a sup here, and we want to dilate so that we get an epsilon-uniform cover, then it is enough to dilate until the volume, the average covering number, is about n cubed. In other words, the quantity I mentioned before is roughly n^3. So let me write, sort of in words, what we're talking about, comparing Theorem B to Theorem C. Theorem B means, approximately: for any K, with high probability, if we take a dilate of K of volume about n^2, then for most L we get a covering. That was about Theorem B; now I'll switch color and talk about Theorem C. Theorem C, approximately, says: for any K, with high probability, if we take a dilate of K of volume about n^3 now, then for most L this is an epsilon-uniform covering, i.e. eta is less than epsilon. Okay, so that's the meaning of Theorem C. I'm really grateful for the opportunity to present it here, because I'm hoping some of you will be interested in this quantity. In contrast to the previous quantity that we discussed, the covering volume, this is not a quantity that has been on anybody's radar in the pure mathematics literature. But it is very, very natural from the point of view of the computer science and electrical engineering applications that I mentioned before, and, well, we like it, we think it's interesting, so this is an opportunity to publicize it. So those are the three results, Theorems A, B and C. I have a little less than 10 minutes, and I'd like to say something about proofs. At the beginning of the talk I mentioned the names of Dhar and Dvir; their work gave us a crucial input in the proof of Theorem C.
The earlier work, in which we proved Theorems A and B, also got a crucial input from this field, the study of the discrete Kakeya problem, and I'd like to say a little about that. So what is the discrete Kakeya problem? This is an ingredient in the proof. We take a prime number p and look at the field F_p with p elements, and at the vector space F_p^n of dimension n over this field. A subset S of this finite set is called a Kakeya set if it contains a line in every direction. What that means, and I'll write it with slightly complicated notation because that will be useful later, is: for any element ell of the Grassmannian Gr(1, F_p^n) of one-dimensional subspaces of F_p^n, that's this notation appearing here, you can take that line and translate it so that the translated line x + ell lies in S. So it's clear that the larger S is, the better chance it has of being a Kakeya set, since it must contain lines in all possible directions. The belief for a while was that S indeed has to be quite large in order to satisfy this property, and there were many conjectures and breakthroughs, many of them associated with the name of Dvir. Dvir proved in 2008 the following well-known result: if S is a Kakeya set in F_p^n, then |S| >= c_n * p^n, with c_n a constant depending only on the dimension. I'm not going to do justice to this result, but it should be stated: he provided a lower bound on the size of a Kakeya set showing that its relative size within the entire vector space is bounded below independently of p. It was a breakthrough; people had much weaker lower bounds before him.
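The definition is easy to play with computationally; here is a brute-force Kakeya-set checker over F_p^n in Python (our own toy code, feasible only for tiny p and n):

```python
from itertools import product

def directions(p, n):
    """One representative for each element of Gr(1, F_p^n), i.e. each direction."""
    seen, dirs = set(), []
    for d in product(range(p), repeat=n):
        if any(d) and d not in seen:
            dirs.append(d)
            for t in range(1, p):  # mark all nonzero scalar multiples as seen
                seen.add(tuple((t * c) % p for c in d))
    return dirs

def is_kakeya(s, p, n):
    """True iff s contains a full affine line x + F_p * d in every direction d."""
    pts = set(s)
    for d in directions(p, n):
        has_line = any(
            all(tuple((x[i] + t * d[i]) % p for i in range(n)) in pts
                for t in range(p))
            for x in product(range(p), repeat=n))
        if not has_line:
            return False
    return True

full = set(product(range(3), repeat=2))   # all of F_3^2: trivially a Kakeya set
one_line = {(0, 0), (1, 0), (2, 0)}       # a single line covers only one direction
print(is_kakeya(full, 3, 2), is_kakeya(one_line, 3, 2))
```

There are (p^n - 1)/(p - 1) directions, four of them for p = 3, n = 2, which is why a single line cannot be Kakeya.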
What we actually care about in this work is Kakeya sets of rank two, so let me define Kakeya sets of rank r in general: S in F_p^n is called a Kakeya set of rank r if, and it's the same definition except I replace the Grassmannian of lines with the Grassmannian of r-dimensional subspaces, S contains a translate of every r-dimensional subspace. Kopparty, Lev, Saraf and Sudan in 2011 obtained a lower bound for such sets, and I'm focusing on the case of rank two; their lower bound had the following form, and this was crucial for our work. Let me be very, very brief: this theorem here is what we needed to prove Theorems A and B, together with a slight improvement of it, which I won't go into. For Theorem C we needed a further result of Dhar and Dvir, as I mentioned at the very beginning of the talk; that result came out earlier this year, or maybe last year, I'm not completely certain. So further improvements along the lines of these works are what feeds into our work. Okay. So, briefly, what do these problems have to do with each other? They seem completely unrelated, perhaps. How does this work? I'll give you the idea in a nutshell: the connection to the covering problem, so you'll understand our technique. First step: we are given a compact set and we are trying to find a lattice for which we get an economical covering using this compact set. The first step is called discretization. What does that mean? I'll show it by pictures, but let me write it down: put L inside a denser lattice L' = (1/p) * L, where p is a prime, for reasons that will become apparent in step two, and cover L' instead of all of R^n. Let me show you the picture; think of p as being five. The original lattice is these black points.
And I'm introducing a new lattice L', the red points: the original lattice scaled down by a factor of five, so every fundamental domain here now contains five squared, which is 25, red points. And I'm telling you that if I want to understand how large the balls should be so that they cover all of space, it is enough to be able to cover the red points. Then there is this idea that if I inflate just a little bit more, I will be able to cover all of space and I won't lose much. That's the first idea; it requires some work, and it's called discretization. Now that we have that, the second step, and I'll end with this. Step two: replace L with the span of L and v, for some v in L'/L, which is F_p^n. Okay, so what's going on? L'/L, all those red points you saw before taken modulo L, can be naturally identified, as an additive group, with this vector space. If I choose an element of this vector space, I can add it to L and thicken the lattice by one generator. I'll show you an example, right here. What would happen if I enlarge the collection of black points by adding all of these new black points? I am making the lattice thicker by adding one vector and spanning a new lattice. Well, the circle gets translated in a certain direction, and as you see in this picture, maybe it already covers all the red points, and then we win. That's going to be our goal: to choose this extra vector so that adding it to the lattice, which just means pushing the set around using it, covers the red points. That's what we are going to try to do, and if you think about what that means, this picture explains what's going on. This is the fundamental domain for our covering problem.
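Step two can also be sketched in code (a toy model of ours, not from the talk): the covered red points form a subset C of F_p^n, adding a vector w/p to the lattice replaces C by the union of its translates C + t*w over t in F_p, and one can simply test each candidate direction w.

```python
from itertools import product

P, N = 3, 2  # the pictures use p = 5; we take p = 3 to keep the example tiny

def thickening_works(covered, w, p=P, n=N):
    """After adding w/p to the lattice, the covered subset of F_p^n becomes the
    union of covered + t*w over t in F_p; return True iff that is everything."""
    hit = set()
    for t in range(p):
        for c in covered:
            hit.add(tuple((c[i] + t * w[i]) % p for i in range(n)))
    return len(hit) == p ** n

full = set(product(range(P), repeat=N))
covered = full - {(0, 0), (1, 1)}   # two uncovered "red" points
good = [w for w in product(range(P), repeat=N)
        if any(w) and thickening_works(covered, w)]
print(len(good))  # two points cannot contain a 3-point line, so every w works
```

The strategy fails for every w exactly when the uncovered set contains a full line in every direction, i.e. is a Kakeya set, which is the link to the Kakeya bounds above.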
Some of the points are already covered, those in this light blue color, and some of the points that we want to cover are not yet covered, those in this red set. So take the picture of all the points I've already covered and all the points I'd like to cover, and think about what it would mean for my strategy to fail. It would mean that I cannot push this picture around and cover everything; and if I can't push this picture around and cover everything, that means that in any direction I take, the red points contain a full line, which means that the red points are a Kakeya set. So the relation between the two problems is: what makes my strategy fail is the complement of the points I've covered forming a Kakeya set. And if I know that a Kakeya set has to be large, that means that whenever the strategy fails the uncovered points have to be many; equivalently, once enough points are covered, the complement cannot be a Kakeya set and the strategy succeeds. That's the starting point of our analysis. I know this is not much of an explanation, but it's the best I could do in the 50 minutes. Thank you for your attention.