My lecture today, as I announced yesterday, will be devoted to lattice basis reduction. The role that it plays in my series of lectures is that lattice basis reduction is very helpful for what I will mostly be doing on Thursday, which is called linear algebra over Z. By linear algebra over Z I mean computations with finitely generated abelian groups, and those will in our applications often be the additive groups of orders in number fields, their ideals, and also their residue class rings. What lattice basis reduction does for you is that it helps keep the numbers small in the course of those algorithms, which is one of the major problems that one has to address in this area. Let me start by telling you what a lattice is. There are two definitions that are often used. The first makes use of a concept that you may know from linear algebra, namely that of a Euclidean vector space. A Euclidean vector space is by definition a finite-dimensional (FD means finite-dimensional) vector space (VS means vector space) over R, the reals. Such a vector space, call it E (E for Euclidean), is equipped with an inner product. An inner product, as you may recall from linear algebra, is a map from E × E to the reals. This map should be bilinear over the reals, so if you fix one of the variables, it is linear as a function of the other variable. It should be symmetric: ⟨x, y⟩ equals ⟨y, x⟩. And it should be positive definite, which means that ⟨x, x⟩, the inner product of x with itself, is strictly positive whenever x is a nonzero vector in my vector space. If you have such a vector space, then it has a metric: the distance from x to y is by definition the square root of the inner product of the difference with itself. That is indeed a metric: it satisfies the triangle inequality, and it is zero only when x equals y. And that metric also gives you a topology.
The Euclidean vector spaces that we will encounter are typically just R^n with the usual inner product, and it is a theorem from linear algebra that every Euclidean vector space is actually isomorphic to a unique such R^n, including the metric. So that is a Euclidean vector space. Once you have the notion of a Euclidean vector space, then a lattice is defined, in the first definition, to be a discrete subgroup, call it L, of a Euclidean vector space (EVS means Euclidean vector space). Often one restricts this Euclidean vector space to be the subspace spanned by L itself. So if you call this vector space E, then we say L is of full rank in E if L spans E over R; RL is by definition the real vector space spanned by L, and you may as well make E a bit smaller if RL is not already equal to E. That will often be tacitly assumed. It is an elementary theorem that every lattice is, as a group, a free abelian group, isomorphic to Z^n, where n is a finite non-negative integer. So the rank of L is at most the dimension of E, and it is equal to the dimension of E in the case of full rank. That is the first definition. This first definition also comes with a natural way of representing a lattice. So, first of all, how do you input a lattice, how do you specify a lattice, how do you output a lattice in an algorithm? Well, you write down a Euclidean vector space, for example just R^n, and you list a set of linearly independent vectors in it; then L is the subgroup generated by those vectors, and elements of L are likewise represented as vectors in E. In order to make this all meaningful in the context of algorithms, it is natural to require that L is not just sitting in your E, which you often present as R^n, but actually sitting in Q^n.
So the coordinates of all elements of L are rational numbers, and if you clear denominators, which you can often do without changing any of your problems, you may take L inside Z^n. Then an element of L is simply represented as an n-tuple of integers, the inner product of two elements being the traditional inner product. That is the first definition, and it gives rise to the first possibility of specifying lattices. But I will typically not start with a Euclidean vector space; I will just look at a finitely generated abelian group. So let me give you a second definition, which is equivalent to the first one. In the second definition, a lattice is a finitely generated (FG means finitely generated) free abelian group, so that means isomorphic to Z^n; let's call it L. It comes with some extra structure: it is equipped with a map that I write in the same way, with two variables both ranging over L. This map should again be symmetric, ⟨x, y⟩ the same as ⟨y, x⟩, and it should be bilinear, but not over the reals, because L is not a real vector space, but bilinear over the integers: if you fix one of the arguments, then the map as a function of the other argument is an additive group homomorphism. Symmetric and bilinear, and there is one more property, namely that there is a strictly positive real number r such that for all x in L except the zero element, the inner product of x with itself, for which I will often write q(x) (q because it is a quadratic function), is at least r. This reflects the discreteness requirement in the other definition of a lattice. This is often the way that we consider lattices. Before I get into the way of representing a lattice in this sense, let me tell you in what way these two definitions are equivalent. It is clear that if you have such a discrete subgroup, then, as I told you, it is finitely generated and free.
And if you restrict the inner product on E to L, then you get a map satisfying all of these conditions. Conversely, once you have a lattice in the second sense, you can take E to be the tensor product of L with the reals, which is a real vector space that has a basis over the reals equal to any basis of L over Z. You extend the inner product bilinearly to E × E, you sit down and you prove that E with that inner product is indeed a Euclidean vector space, and then L is actually embedded into it, in the sense that I mentioned here; it has full rank: the rank of L over Z is the dimension of E over the reals. This gives rise to a different way of representing the elements of L, which is a little bit closer to the way I am going to apply lattice basis reduction. It is as follows. You start from L being a free abelian group, so you may as well say that L is Z^n itself, and the elements of L are represented by their coefficient vectors on some basis of L over Z. You should not think, of course, that the inner product is the traditional one, so you also have to write down the Gram matrix, as it is called. Gram was a Danish actuarial mathematician who lived from 1850 to 1916, and when he was 65 years of age, as he walked to a meeting of the Danish Academy of Sciences, he was hit by a bicycle, and he died. So this is the claim to fame of Gram, the Gram matrix. The Gram matrix is simply the symmetric matrix of inner products: I call the basis vectors b_i, so b_i is the element of L that corresponds to the i-th standard basis vector of Z^n, and if i and j range from 1 to n, you get the matrix of real numbers ⟨b_i, b_j⟩, and those inner products, because of the bilinearity, determine the entire inner product. Then, if you want this matrix of inner products to really define a lattice, it should be a symmetric matrix, and it should be positive definite.
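To make this concrete, here is a small sketch (my own illustration, not part of the lecture) of building the Gram matrix for the standard inner product on Q^n and checking positive definiteness via Sylvester's criterion (all leading principal minors positive), in exact rational arithmetic:

```python
from fractions import Fraction

def gram_matrix(basis):
    """Gram matrix G[i][j] = <b_i, b_j> for the standard inner product on Q^n."""
    dot = lambda u, v: sum(Fraction(x) * Fraction(y) for x, y in zip(u, v))
    return [[dot(bi, bj) for bj in basis] for bi in basis]

def determinant(M):
    """Exact determinant by Gaussian elimination with pivoting over Q."""
    M = [[Fraction(x) for x in row] for row in M]
    n, det = len(M), Fraction(1)
    for k in range(n):
        p = next((r for r in range(k, n) if M[r][k] != 0), None)
        if p is None:
            return Fraction(0)
        if p != k:
            M[k], M[p] = M[p], M[k]
            det = -det
        det *= M[k][k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            M[r] = [a - f * b for a, b in zip(M[r], M[k])]
    return det

def is_positive_definite(G):
    """Sylvester's criterion: all leading principal minors are positive."""
    return all(determinant([row[:k] for row in G[:k]]) > 0
               for k in range(1, len(G) + 1))
```

For the basis (1, 0), (1, 2) of a lattice in Q², the Gram matrix is [[1, 1], [1, 5]], its determinant is 4, and the determinant of the lattice is the square root, 2.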
And for algorithmic purposes it should have rational entries, and often we will simply assume that it has integer entries. The way we are going to apply this in the context of linear algebra over Z is as follows: here you have the type of object, namely Z^n, that you want to work with when you do linear algebra over Z, and you have a certain problem that you want to solve, concerning kernels and cokernels and several other types of problems. What you do is encode your problem in terms of the inner product, so that the properties you are after can be expressed by a lattice element having, for example, a small norm. I will give examples of this, if not later today, then at least on Thursday. And in this case the elements of L are simply integer vectors. Those are the two definitions, and for the rest of what I have to say today it does not make much difference which one you prefer, but, as I mentioned, my preference is in general the second. So, let me tell you a few basic properties. First of all, I have to define for you the determinant of a lattice. I write d(L), and the determinant of L is defined by taking a basis of L over Z, writing down the Gram matrix of inner products, which is a symmetric positive definite matrix, taking its determinant, which is a positive real number, and then taking the square root. You take the square root because that is what has a nice interpretation: if you embed L in a vector space E as a subgroup of full rank, then, whatever inner product you have, you have a natural volume there, and you can divide E by L; that gives you a compact topological group, which inherits a measure from the measure on E, and you can consider the volume of that group.
You can interpret it as the volume of a parallelepiped spanned by the basis vectors; that volume is the square root of that determinant, and it is independent of the choice of the basis, as is already clear from this expression. In practice, we will never take that square root, because when we do computations, as I mentioned, the determinant will be a rational number or even an integer, and there is no reason that the square root will also be a rational number or an integer. When you do computations with the determinant, you just use the square of the determinant; that will always be sufficient. As I mentioned, you can have several bases of the same lattice: you can apply an invertible integer matrix, n by n, to such a basis to get another basis. What lattice basis reduction does for you is that it produces a particularly pleasant basis. Pleasant has several definitions, and I will give one that is appropriate for the reduction algorithm I will talk about, but roughly speaking, such a basis will be pleasant if the vectors in the basis are relatively short, and if they are, maybe not quite orthogonal to each other (orthogonal means pairwise inner product zero), but at least such that they have small inner products. When you have a basis satisfying those conditions, then you can solve several problems, as I will make clear. The way in which you find a reduced basis, a basis that is reduced in the sense of having those nice properties, is that you start from an arbitrary basis, for example the standard basis of Z^n, and you start cleaning it up, so to speak. You try to make the vectors, for example, orthogonal. That is done by a technique that depends on the Gram-Schmidt orthogonalization process, which you may know from linear algebra, and that proceeds as follows. You start from any basis, which is the same as saying a sequence of linearly independent vectors in some Euclidean vector space.
So b_1, b_2, through b_n are sitting in my Euclidean vector space, and they generate my lattice if you like, but the lattice will for the moment not play much of a role; we do want them to be linearly independent over the reals, so that they generate the lattice as a group. Then you define an orthogonalization, which you call b_1*, b_2*, through b_n*, and they also lie in E; in fact, they lie in the space spanned by the b_i. You also define real numbers μ_ij, ranging over all i and j with j less than i, and the definition is as follows. What is b_1*? It is actually just b_1. In general, b_i* is b_i minus a contribution coming from the smaller b_j*: you subtract Σ_{j<i} μ_ij b_j*. For i equal to 1 there are no smaller ones, so this sum, whatever it is, is empty and b_1* is b_1, but in general you have these coefficients that you use to subtract previous b_j*. So what are these μ's? Well, these μ's are engineered in such a manner that all these b_i* are pairwise orthogonal, and that, as you may have seen in linear algebra, is what you achieve by taking μ_ij to be the inner product of b_i with b_j*, divided by the inner product of b_j* with itself. Let me, in case you have never seen it, try to draw a picture of what is happening here. Suppose that I have here a two-dimensional plane, this blackboard, and we have here b_1, and we have here, for example, b_2, and they are linearly independent. Then, if you orthogonalize them, you want to subtract from b_2 a multiple of b_1 so that it becomes orthogonal. So if you take this projection, then you find here μ_21 b_1, that is this vector, and here you find b_2*. μ_21 is the ratio of this length to the length of b_1, which looks as if it is about three quarters.
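The orthogonalization just described can be sketched in code (my own illustration; exact rational arithmetic, so there are no rounding issues):

```python
from fractions import Fraction

def gram_schmidt(basis):
    """Return (bstar, mu) with b_i* = b_i - sum_{j<i} mu[i][j] * b_j*,
    where mu[i][j] = <b_i, b_j*> / <b_j*, b_j*>.  Exact over the rationals."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    bstar, mu = [], []
    for vec in basis:
        b = [Fraction(x) for x in vec]
        v, row = list(b), []
        for w in bstar:
            m = dot(b, w) / dot(w, w)   # mu_ij, engineered so the b_i* are orthogonal
            row.append(m)
            v = [a - m * c for a, c in zip(v, w)]
        bstar.append(v)
        mu.append(row)
    return bstar, mu
```

For example, for b_1 = (1, 0) and b_2 = (3, 4), one gets b_1* = (1, 0), μ_21 = 3, and b_2* = (0, 4), which is indeed orthogonal to b_1*.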
And if you now add a third vector, let's say that this is a vector perpendicular to your screen, then we can, for example, do something like this. Here we have b_3, so that is in front of the blackboard, and, in fact, I don't know whether you can see this, but your screen is of course two-dimensional; my blackboard is a three-dimensional blackboard, this arrow I drew perpendicular to the blackboard, and b_3 also has this distance to the blackboard. You have here the vector that you get by projecting b_3 to the plane that is orthogonal to b_1: that vector there is b_3 minus μ_31 b_1, or b_1*. Projecting further gives b_3*, which you get by subtracting from this element, so this element is also equal to b_3* plus μ_32 b_2*. So you first project orthogonally to b_1, and then you project also orthogonally to b_2*. The μ's are always the ratios of the vectors that you subtract to the lengths of the corresponding b_j*. Here μ_31 is about minus three quarters, it points in that direction, whereas μ_32 will be about, well, I think it is about plus three. So that is what is happening here, and you see that each b_i* is the component of b_i that is orthogonal to the space spanned by the previous b's: the space spanned by b_1* through b_{i−1}*, which happens to be the same as the space spanned by the b's without the stars. So that is orthogonalization, and if you perform it, then you end up with a set of orthogonal vectors. But even if the original inner products are integers, that does not mean that the μ's are integers, so there is no reason whatsoever that the additive group generated by the b's will be unchanged under this operation. That is something we have to pay attention to. What one can show here, first of all, about these Gram matrices, is the following; maybe what I do is the following.
Suppose that I take L_i to be the lattice of rank i generated by the first i b's, so L_0 is just the zero group, L_1 is the group generated by b_1, and L_n will be L itself. If you star everything, then you get, in general, a different lattice. Oh, I see that I made a typo here, this R should be a Z: the lattice L_i is generated by the b's over Z, and I do the same thing with the b_i* to get L_i*. So, you see that if you look at this two-dimensional plane again, then my original L consists of these points, generated by b_1 and b_2, whereas the new lattice will be shifted a bit, generated by this element and that element, so it is really a rectangular lattice, whereas the other one has an angle. It is the case that if you compute the determinants of those lattices, then they are the same: the determinant of L_i equals the determinant of L_i*. What is also the case, because L_i* has a basis of orthogonal vectors, is that its Gram matrix is a diagonal matrix, since any two different b_i* are orthogonal to each other. That means you can conclude from what I am saying here that the squared length of b_i*, the inner product of b_i* with itself, is the quotient of the squared determinants of two successive lattices: q(b_i*) equals d(L_i)² divided by d(L_{i−1})². And that already gives rise to several interesting inequalities. For example, it is quite clear that b_i equals b_i* plus an error term, you might call it, which is orthogonal to b_i*, so you see that q(b_i*) is at most q(b_i).
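As a quick sanity check of these identities (my own toy example, not from the lecture), one can verify q(b_i*) = d(L_i)²/d(L_{i−1})² and q(b_i*) ≤ q(b_i) on a rank-2 lattice in Q²:

```python
from fractions import Fraction as F

# Toy basis of a rank-2 lattice in Q^2 (my own example).
b1, b2 = (F(1), F(0)), (F(3), F(4))

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Gram-Schmidt: b1* = b1, b2* = b2 - mu21 * b1*.
mu21 = dot(b2, b1) / dot(b1, b1)
b2s = tuple(x - mu21 * y for x, y in zip(b2, b1))

# Squared determinants of the flag L_1 inside L_2 (Gram determinants).
dL1_sq = dot(b1, b1)
dL2_sq = dot(b1, b1) * dot(b2, b2) - dot(b1, b2) ** 2  # det of the 2x2 Gram matrix

# q(b_i*) = d(L_i)^2 / d(L_{i-1})^2  (with d(L_0)^2 = 1), and q(b_i*) <= q(b_i).
assert dot(b1, b1) == dL1_sq
assert dot(b2s, b2s) == dL2_sq / dL1_sq
assert dot(b2s, b2s) <= dot(b2, b2)
```

Here d(L_1)² = 1 and d(L_2)² = 16, so q(b_2*) = 16, while q(b_2) = 25.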
Taking the product of these inequalities between positive real numbers over all i, and using the fact that L and L* have the same determinant, you see that d(L)², which equals the product of the q(b_i*), is at most the product of the q(b_i). That is called Hadamard's inequality, which is a very useful inequality if you want to do estimations, as you have to when you prove that certain algorithms run in polynomial time. Many of these things I am forgoing the proofs of, and that is because they are, first of all, quite elementary and easy, and secondly, most of them you can find in the notes; and if you cannot find them in the notes, then they are even too trivial for the notes, in which case you find them in the exercises of the notes. So that is the Gram-Schmidt orthogonalization, and using these b_i* I can define when I call a basis reduced, and that is what we are after. So let me erase these inequalities. Bases for lattices are always understood to be Z-bases. A basis b_1 through b_n for a lattice L is called reduced if two conditions are satisfied. The first is that for all i less than n, if you look at these Gram-Schmidt orthogonalizations, then you really would like the lengths q(b_i*) to form an increasing function of i. I would really love to be able to require that, but I cannot really do so, because such a basis may simply not exist, so you have to allow a certain slack here, and the easiest choice is just a factor of one half: q(b_{i+1}*) is at least q(b_i*)/2. The number 2 here can be replaced by any number that is at least four thirds, and in fact you can make it into a parameter, as has been done in the notes, but I will just stick to the number 2, that is, the factor one half. Secondly, we require that for all j less than i, those real numbers μ_ij are no more than one half in absolute value.
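The two conditions are easy to test mechanically. The following checker is my own sketch, using the lecture's slack factor of one half:

```python
from fractions import Fraction

def is_reduced(basis):
    """Check the lecture's two conditions, with slack factor 1/2:
    (1) q(b_{i+1}*) >= q(b_i*) / 2, and (2) |mu_ij| <= 1/2 for all j < i.
    Exact rational arithmetic; basis vectors are integer or rational tuples."""
    def dot(u, v):
        return sum(x * y for x, y in zip(u, v))
    bstar, mus = [], []
    for vec in basis:
        b = [Fraction(x) for x in vec]
        v = list(b)
        for w in bstar:
            m = dot(b, w) / dot(w, w)   # mu_ij
            mus.append(m)
            v = [a - m * c for a, c in zip(v, w)]
        bstar.append(v)
    q = [dot(v, v) for v in bstar]
    cond1 = all(2 * q[i + 1] >= q[i] for i in range(len(q) - 1))
    cond2 = all(abs(m) <= Fraction(1, 2) for m in mus)
    return cond1 and cond2
```

The standard basis of Z^n is reduced; the basis (1, 0), (3, 4) of the earlier example is not, since μ_21 = 3 violates the second condition.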
That is what I call a reduced basis, and what lattice basis reduction does for you is that it starts from any basis of your lattice and changes it into a reduced basis. Before I tell you the essential mechanism behind that algorithm, let me tell you a few good properties of reduced bases, so that you have some motivation to actually compute one. Most of these are again exercises in the notes, or maybe proved in the notes. So suppose b_1 through b_n is a reduced basis. The first property is that b_1 is pretty short. I would love to be able to say that it is the shortest non-zero vector of L, but that is not quite true; what is true is that q(b_1) is at most 2^{n−1} times q(x) for each non-zero x in L. So this is what I am claiming here: if L is not the zero lattice, then each non-zero vector is at least as long as b_1 divided by this unfortunately somewhat exponential factor. One of the surprises of the whole theory is that we will see many of these exponential factors, and many applications don't seem to be bothered by them, even though you would really prefer to find a better approximation; for many applications this is already good enough. This is for the shortest vector. You can also look for what people might think of as the next shortest vector; that has to be defined with some care, and I refer to the notes, where you get the so-called successive minima. But let me not talk about those. What is also true is that the b_i are not much longer than their stars: as I told you a moment ago, q(b_i*) is no more than q(b_i), and in fact they are of comparable size. It also means that with a reduced basis, this Hadamard inequality, which says that d(L)² is at most the product of the q(b_i),
This is true for any basis, but if you take a reduced basis, then you actually have an opposite inequality, which you get by multiplying out the previous bounds over all i: the product of the q(b_i) is at most, I think, something like 2^{n(n−1)/2} times d(L)². So you see that in this sense these vectors b_i are almost optimal; equality in Hadamard holds only if all the b_i coincide with the b_i*, that is, if they are all orthogonal. Okay, let me tell you one more good property of reduced bases, which is not about finding short vectors in the lattice itself, but finding short vectors in a coset of the lattice, and that goes as follows. I take F to be the following set: the set of sums Σ_{i=1}^{n} c_i b_i*, where the coefficients c_i run from minus a half to plus a half. So I look at my vector space where everything is lying; the b_i* are an orthogonal basis for a certain subspace in there, the whole space in the full-rank case, and now we allow not integer or arbitrary real coefficients, but coefficients from minus a half to plus a half. F means fundamental domain, and that is because, if you include F in E (take L in E of full rank, the full-rank case) and you take the quotient E mod L, then you have here a bijective map. In other words, for each x in E you have a unique pair (y, z) with y in F and z in L such that x equals y plus z. So if you are given an element of the vector space, then you would like to round it to a lattice vector, the z in this case, that is a good approximation to x in the sense that y is small; and y does look small, because it lies in F, and all the coefficients defining F are small. The good property of a reduced basis is that this is in a sense close to best possible, namely: if the basis is reduced, then q(y) is again, up to an exponential factor, 2^{n−1} in this case, times the squared distance of x to the lattice.
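The decomposition x = y + z can be computed by rounding one Gram-Schmidt coefficient at a time, from the last down to the first (this is often called the nearest-plane procedure). The following is my own sketch, in exact rational arithmetic, with half-open rounding so the coefficients of y land in [−1/2, 1/2):

```python
import math
from fractions import Fraction

def round_to_lattice(x, basis):
    """Decompose x = y + z with z in the lattice L spanned by `basis` and
    y in the fundamental domain F = { sum_i c_i b_i* : -1/2 <= c_i < 1/2 }."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    bas = [[Fraction(a) for a in b] for b in basis]
    bstar = []
    for b in bas:                        # Gram-Schmidt orthogonalization
        v = list(b)
        for w in bstar:
            v = [a - (dot(b, w) / dot(w, w)) * c for a, c in zip(v, w)]
        bstar.append(v)
    y = [Fraction(a) for a in x]
    z = [Fraction(0)] * len(x)
    for i in reversed(range(len(bas))):  # peel off b_n, ..., b_1
        c = dot(y, bstar[i]) / dot(bstar[i], bstar[i])
        k = math.floor(c + Fraction(1, 2))   # nearest integer, ties rounded up
        y = [a - k * b for a, b in zip(y, bas[i])]
        z = [a + k * b for a, b in zip(z, bas[i])]
    return y, z
```

For example, with L = Z² and x = (9/4, −3/4), this returns y = (1/4, 1/4) and z = (2, −1).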
That is, the minimum over all w in the lattice of the squared length of x minus w. You would really like to make this small, and you see that up to this factor you are managing pretty well, all due to the basis being reduced. You will see plenty of other good properties of reduced bases, so I hope that this serves as sufficiently strong motivation for you to be interested in knowing how you make such a basis. This is done by a lattice basis reduction algorithm, and the most popular one listens to the name LLL, because it involves three L's. It proceeds as follows. The theorem is that there exists a PTA, and as you may recall from yesterday, PTA means polynomial-time algorithm, that on input a lattice (and I told you how to specify a lattice and elements of a lattice, in particular a basis) produces as output a reduced basis for the same lattice. I am not going to give you the full proof of this, but I will give you a sketch of the algorithm, and that is as follows. Well, you are given a basis already, because in every way in which I told you one can specify a lattice, it comes with a basis. So b_1 through b_n is a given basis, but I am going to change it. What I do first is test the two conditions, and let me first look at the second condition. Suppose that for some j less than i, this |μ_ij| is greater than one half. If you find such a pair, then you can decide to do something about it: you replace the basis vector b_i by itself minus an integer multiple of the j-th basis vector, and that integer multiple you get by taking this number μ_ij and rounding it to a nearest integer. If you look at the definition of the μ's, you will see that this particular μ_ij will by this process be replaced by itself minus that rounding, so the new |μ_ij| will be at most one half. That is a step you can take.
And it is not so very difficult to show, it may be the subject of an exercise, that if you keep doing this, then at a certain moment the second condition will be satisfied; in the notes this condition is, I think, called being Gram-Schmidt reduced. And of course, if there are several such pairs, then you have a choice; there is a certain strategy that you can follow, which I don't want to say anything about. The second thing that may happen is that you find that the first inequality is violated: q(b_{i+1}*) is actually quite small compared to q(b_i*). In this case, I want you to act only if the corresponding μ, that is μ_{i+1,i}, has indeed been taken care of, so that it is at most one half in absolute value. The picture that you should think of here is this: you have here b_i*, and here you have b_{i+1}*, and then you have a μ that is between minus a half and plus a half; this one is about a third, let's say. And then here you have this vector b_{i+1}* plus μ times b_i*, which is the projection of b_{i+1} orthogonal to the space spanned by the smaller b_j's, the j's that are strictly smaller than i. So, if you project b_i and b_{i+1} orthogonally to this (i−1)-dimensional space, then b_i maps here, b_{i+1} maps there, and if I orthogonalize b_{i+1} also with respect to b_i, it goes there. You see that if this distance is at most one over the square root of two times the other distance, which is what I have here, and this μ is no more than one half, then what you can do is swap b_i and b_{i+1}. The reason that you do this is that it enables you to make an improvement, because then, if I look at q of the new b_i*: the new b_i is b_{i+1}, and the new b_i* is the projection of that one to this plane, so this new one will be exactly the vector
And that I wrote down here: this is the old b_{i+1}* plus the corresponding μ times the old b_i*, and you see that these two are orthogonal. So this q is the same as q(b_{i+1}*) plus μ² times q(b_i*); the first term is strictly less than half of the old q(b_i*), and the second is at most a quarter of it, so that gives you a half plus a quarter, which is three quarters, times q(b_i*). So if I make this swap, then it is very easy to see that all the b_j* are unchanged except b_i* and b_{i+1}*, and the new b_i* will have squared length at most three quarters of the squared length of the old b_i*. And that is really a big improvement, because it means that every time you apply this second step, the product of all these determinants that I mentioned, let's take the squares, the quantity Π_{i=1}^{n} d(L_i)², gets changed by a factor of three quarters or less. If you think of it as an integer, this number has a lower bound, so that means you cannot apply this step too often, and hence, after a number of steps that can be bounded in a good manner, you will find that you have produced a reduced basis. Okay, so that is an outline of the lattice basis reduction algorithm. You do have to be a bit more careful in specifying which choices you make before you can prove that it runs in polynomial time, but I will just be happy if you consult the literature about this, and I will take this result for granted. The day after tomorrow, I will tell you how to use this in order to solve all sorts of problems in linear algebra over the integers. Okay, I thank you for your attention. Okay, thank you. Are there other questions? No questions? Is something frozen on your end? We don't see either of you anymore. Can you hear us?
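The whole outline, size-reduce against earlier vectors, then swap when the first condition fails, can be written down naively. This is my own toy version with the lecture's slack factor of one half, recomputing the Gram-Schmidt vectors from scratch each round; it is transparent rather than efficient, and is not the carefully specified polynomial-time variant the lecture alludes to:

```python
from fractions import Fraction

def lll(basis):
    """Toy basis reduction: swap b_{i-1}, b_i whenever q(b_i*) < q(b_{i-1}*)/2."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    def gs(bs):  # Gram-Schmidt orthogonalization, exact over Q
        bstar = []
        for b in bs:
            v = list(b)
            for w in bstar:
                v = [a - (dot(b, w) / dot(w, w)) * c for a, c in zip(v, w)]
            bstar.append(v)
        return bstar
    b = [[Fraction(a) for a in v] for v in basis]
    i = 1
    while i < len(b):
        bstar = gs(b)
        # Size-reduce b_i: make |mu_ij| <= 1/2, highest j first.
        # Note this does not change any b_j*.
        for j in reversed(range(i)):
            k = round(dot(b[i], bstar[j]) / dot(bstar[j], bstar[j]))
            if k:
                b[i] = [a - k * c for a, c in zip(b[i], b[j])]
        # Swap step if the first condition fails at position i.
        if 2 * dot(bstar[i], bstar[i]) < dot(bstar[i - 1], bstar[i - 1]):
            b[i - 1], b[i] = b[i], b[i - 1]
            i = max(i - 1, 1)
        else:
            i += 1
    return b
```

For instance, the basis (1, 0), (3, 4) gets size-reduced to (1, 0), (0, 4) in one step, and (3, 0), (1, 1) triggers a swap and comes out as (1, 1), (1, −2).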
Yes, now I can see and hear you. Okay, I think we had a freeze, but I guess it was the best time to have a freeze in this lecture. Yeah, it froze right after you stopped talking, after you finished your lecture. Actually, there was another thing I had to say, namely, to congratulate you on your birthday. Happy birthday, Bjorn. Oh, thank you. Okay. All right, well, that wasn't a question, but are there other questions? Well, the question may be what age you are reaching. No, that's not the kind of question we're asking for here. Okay, private. No, it's not very private; I mean, I think you can find it on Wikipedia, but I don't know. All right. So I guess there are some questions in the chat, perhaps. Let's see. There are some questions about why the one half: why use the constant one half in the definition of a reduced basis? No question that you can replace it by any number at most three quarters; you don't want this number to get beyond one, because you want there to be some improvement. So if you replace the half by two thirds, that is still fine, as long as the sum is less than one. If you really push it to the limit, so that the half becomes three quarters, then you get a one here, and then the only improvement is sitting in the strict inequality sign: the integer does decrease, but not very quickly. So what people often do is that, rather than the number two, they take any parameter, which you may choose yourself, that is at least four thirds, so that its inverse is at most three quarters. And then, yeah, I think it is up to you, but I just, for simplicity, picked the number two, which is the easiest rational number greater than four thirds to write down, the one of smallest height. Okay, good. So, let's see, are there other specific questions that people want to ask?
Please feel free to unmute yourself and ask if you have a question that was not covered. I have a question, actually: if you do take three quarters, is it still polynomial time? The proof breaks down, but maybe it is polynomial time for some other reason; I think that is actually an open problem. In dimension two, actually in fixed dimension, I do think it is polynomial time. So if you want to have fun thinking about something, that's an interesting question: replace the half by three quarters; is there a polynomial-time variant of this algorithm? Okay. Okay, but I won't push it. Okay. Any more? Okay, there's a question: is there a notion of reduced basis for infinite-dimensional lattices? Yeah, well, that is a good question. I think they are called Hilbert lattices, and I think we have been doing a lot with them, especially Dan, who is doing the problem workshop. But I do not think that lattice basis reduction is one of the subjects that we covered there. So there is the notion of a Hilbert lattice, which is a discrete subgroup of a Hilbert space. And here is another open problem: is every Hilbert lattice free as an abelian group? Open problem. Make yourself famous, solve my problems. What they are is what is called almost free: an abelian group is almost free by definition if every countable subgroup is free. Okay, so this is only a question for uncountable dimension? Yeah, correct. Yes. Okay, the rank of the lattice is equal to the orthogonal dimension of the Hilbert space that it spans. Okay, so orthogonal dimension, because I guess the actual dimension is larger. Yeah, when you talk about a basis of a Hilbert space, it is not a basis in the algebraic sense: not everything is a finite linear combination, but it is an infinite linear combination. Yeah, okay. And those things also occur in algebraic number theory, but that would take another 50 minutes to explain. Okay, so some other summer school. Okay, any other questions? No, then on to the next event.
Okay, so thank you again. First of all,