Hello everyone. In this video I'm going to present some recent work done with my colleagues Paul Kirchner and Pierre-Alain Fouque on the fast reduction of algebraic lattices.

First of all, a quick recap on lattices and the celebrated LLL algorithm. A lattice is a discrete subgroup of a Euclidean space, for instance R^n: a set of regularly spaced points in space endowed with a group structure. Another way of saying it is that we take a free Z-module of finite rank and endow it with an inner product on the ambient space, which is the lattice tensored with R. A very interesting property of lattices is that for a lattice of fixed rank n, Minkowski's theorem guarantees that we can find a nonzero vector which is small compared to the normalized covolume, a quantity which encodes how dense the lattice is. The problem is that even though we know such a short vector exists, finding it computationally is very hard. However, in 1982, Lenstra, Lenstra, and Lovász proved that there exists a polynomial-time algorithm which, given any lattice, produces a lattice vector of length at most 2^n times that of the shortest vector. So the exact problem is hard, but we have polynomial-time approximation of the shortest vector.

This makes it possible to solve many problems arising in number theory, such as the simultaneous Diophantine approximation problem. It also allows us to find minimal polynomials of algebraic numbers. In particular, and this was the first application of the LLL algorithm as presented in the original paper, it allows us to factor polynomials over the rational numbers. In cryptanalysis, lattice reduction has a huge number of applications: it solves knapsack problems in some specific settings, it attacks RSA with small public exponent, and of course it allows us to break and to estimate the security of all the lattice-based schemes. More importantly for what follows, it is also very useful in computational number theory: it is a basic building block of ideal arithmetic and of HNF computation over modules, and it is used everywhere in algebraic number theory to control the size of the elements appearing inside more sophisticated algorithms.

So first of all, let's look at the blueprint of this algorithm, that is, the basic ideas that make it possible to reduce lattices in polynomial time. Suppose we want to reduce an arbitrary lattice of rank n. The first thing we do is compute the QR decomposition or, equivalently, the Gram-Schmidt vectors associated to the input basis, which encode the defect of orthogonality of the input basis vectors. This is a polynomial-time computation. Then we apply a procedure which, given the QR decomposition as input, slightly reduces the size of the vectors: using integral roundings, it finds shorter vectors and controls the size of the coefficients appearing during the reduction. Once that is done, through an iterative design, the algorithm calls the reduction of rank-two projected sublattices. Without entering into more detail here, the basic idea behind the algorithm is that there is a polynomial-time reduction from the reduction of a rank-n lattice to the reduction of a large, but still polynomial, number of rank-two sublattices.
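As an illustration of this size-reduction step, here is a minimal sketch in Python with NumPy (my own toy example, not the implementation of this work, which is written in PARI/GP): it reads the Gram-Schmidt coefficients off the R factor of a QR decomposition and shortens each basis vector by integrally rounded multiples of the previous ones.

```python
import numpy as np

def size_reduce(B):
    """Size-reduce the columns of the basis matrix B (a toy sketch).

    Subtracts from each basis vector the rounded Gram-Schmidt
    projections onto the previous vectors, so that every
    coefficient mu = R[j, i] / R[j, j] ends up in [-1/2, 1/2].
    """
    B = B.astype(float).copy()
    n = B.shape[1]
    for i in range(1, n):
        for j in range(i - 1, -1, -1):
            # The Gram-Schmidt coefficients live in the R factor of QR.
            R = np.linalg.qr(B)[1]
            mu = R[j, i] / R[j, j]
            B[:, i] -= round(mu) * B[:, j]  # integral rounding
    return B

B = np.array([[1, 7, 3], [0, 1, 9], [0, 0, 1]])
print(size_reduce(B))
```

Recomputing the QR factorization inside the loop is of course wasteful; real implementations update the Gram-Schmidt data incrementally, but the rounding logic is the point here.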
This can be done efficiently using, for instance, Gauss's algorithm, which completely reduces lattices of rank two. Using this subprocedure, we just continue over and over until nothing changes, and at that point we say that the basis is reduced.

Quantitatively speaking, a naive analysis of this procedure gives a dependency which is sextic in the dimension of the lattice and cubic in the bit size of the input. However, if we replace naive arithmetic, in particular the handling of rational coefficients by numerator and denominator, with floating-point representations, we can prove that the dependency in the bit size decreases to quadratic. This was epitomized by the work of Nguyen and Stehlé in 2009. More recently, using a recursive strategy, Neumaier and Stehlé proved that we can actually be quasi-linear in the bit size of the input, at the price of being quasi-quartic in the dimension.

So now let's focus on the main proposal of this work and see how we can provide a reduction algorithm which is efficient for a generalization of the notion of lattice: algebraic lattices. To define what an algebraic lattice is, we first need to set up what a number field is. It is a finite extension of the field Q of rationals, which can be realized as the quotient of Q[x] by the ideal generated by some irreducible polynomial P. Inside this field we find a particularly interesting ring, the ring of integers, defined as the set of elements of the number field which are annihilated by some monic polynomial with integral coefficients. For instance, if you take the most basic number field of all, Q itself, viewed as an extension of degree one over itself, then the ring of integers is just the ring Z of rational integers. If you take something slightly bigger, say the quadratic extension Q(i), then the ring of integers of Q(i) is nothing else than the Gaussian integers Z[i], the elements of the form a + bi with a and b integers.

Recall that a bit earlier we defined a lattice as a discrete subgroup of a Euclidean space, and in fact we gave a slightly more general definition: a free Z-module of finite rank endowed with an inner product on the ambient space. To generalize this notion, we can change the ring over which the lattice is considered: if we fix a number field L, we define an algebraic lattice to be a free O_L-module of finite rank, endowed with an inner product on some ambient space. You could ask what an inner product on this unusual space looks like; I won't enter into details here, but it can be defined in a canonical way using all the embeddings of the number field into C.

With this definition in hand, we can now wonder about the possible reduction of such objects. To give a bit of context before explaining how we do that, recall that the so-called ideal lattices and module lattices are just particular forms of algebraic lattices; the concept of algebraic lattice encompasses everything appearing in the field of lattice-based cryptography. In particular, exploiting the specific algebraic structure of such lattices makes it possible to design more compact and more efficient cryptographic primitives.
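For concreteness, here is a minimal sketch of Gauss's (Lagrange's) rank-two reduction in plain Python, the base case the whole design ultimately bottoms out on; the actual work uses Schönhage's asymptotically faster variant instead.

```python
import numpy as np

def gauss_reduce(u, v):
    """Gauss / Lagrange reduction of a rank-two lattice basis (u, v).

    Alternately subtracts the integrally rounded projection and swaps,
    until u is a shortest nonzero vector of the lattice spanned by u, v.
    """
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    if np.dot(u, u) > np.dot(v, v):
        u, v = v, u
    while True:
        # Reduce v against u with an integral rounding ...
        m = round(np.dot(u, v) / np.dot(u, u))
        v = v - m * u
        # ... and stop once the reduction no longer shortens v below u.
        if np.dot(v, v) >= np.dot(u, u):
            return u, v
        u, v = v, u

print(gauss_reduce([1, 0], [127, 1]))  # -> ([1, 0], [0, 1])
```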
In particular, it's no surprise that many lattice-based candidates of the NIST post-quantum competition rely on either ideal lattices or module lattices. So having a reduction procedure for this very particular kind of lattice is of the utmost interest for the security evaluation of post-quantum candidates. The question is then: how do we reduce these particular lattices to find short vectors and give security estimates for the primitives?

A very simple idea is the following: if I take an algebraic lattice, I can always descend the full lattice to the integers Z and get a larger integral lattice which has the same geometric properties. This amounts to completely forgetting the algebraic structure and keeping only the underlying metric structure. If you then reduce it using LLL, the resulting algorithm is as fast as the LLL algorithm you would use over Z, and it achieves an exponential approximation factor. A first work in this direction was done by Fieker and Stehlé in 2010. But this is a bit sad: you have a very structured object, a module lattice or an ideal lattice or an algebraic lattice, and you completely discard all of its algebraic properties.

On the other hand, what you could do is try to mimic the behavior of LLL and replace all the rounding procedures I described by direct calls to SVP or CVP oracles. This line of work has recently produced very interesting reductions, in particular LPSW19 and MS20, the latter presented at another venue of this same conference: they prove that we can get a reduction which runs in polynomial time but calls SVP oracles, or in some cases CVP oracles, to mimic the LLL algorithm. So we get something which is exponential time in the field degree, because we need these oracles, but which gives polynomial approximation factors in the rank of the lattice. Our goal for today is to stay in polynomial time, but to use the algebraic structure to design an LLL-type reduction which uses no CVP or SVP oracles.

So let's try to do exactly the same as we would do for LLL. Take a rank-n algebraic lattice and try to emulate the behavior of LLL. The first thing we do is the QR decomposition, and it is exactly the same as the QR decomposition over the integers. Then we do the size-reduction procedure, and here it is not a plain integral rounding; we need to be a bit more careful, because we are actually manipulating quotients of polynomials. Let's take the simplest thing we can do and round coefficient-wise. That works, but it will not suffice: we have to introduce a very specific trick, unit rounding. In short, when dealing with number fields we have a plethora of so-called units, elements of algebraic norm one, which act on the way the embeddings of elements are balanced, and here we need a very fine tuning of this balance to ensure that our reduction is fast. Then we use exactly the same design as LLL: we reduce the reduction of a rank-n lattice to the reduction of multiple rank-two projected sublattices. We can actually do this in a parallel way, which is what we use in the implementation, but I will come back to that later. The question now is how to reduce these rank-two projected sublattices. If the base field is already Z, or rather Q, then we know how to do it.
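To illustrate the coefficient-wise rounding, here is a small hedged sketch in Python: elements of the number field are represented as coefficient vectors of polynomial representatives modulo P, and each rational coefficient is rounded to the nearest integer. This lands in the ring of integers when that ring is Z[x]/(P), as for power-of-two cyclotomic fields; the unit rounding mentioned above is considerably subtler and is not reproduced here.

```python
from fractions import Fraction

def round_coefficientwise(elem):
    """Naive rounding of a number field element toward Z[x]/(P).

    `elem` is the list of rational coefficients of a polynomial
    representative modulo P, e.g. x^4 + 1 for the 8th cyclotomic
    field.  Rounding each coefficient is the simplest analogue of
    the integral rounding used by LLL over Z.
    """
    return [round(c) for c in elem]

# (3/2) + (7/3) x - (1/5) x^2 in Q[x]/(x^4 + 1)
elem = [Fraction(3, 2), Fraction(7, 3), Fraction(-1, 5), Fraction(0)]
print(round_coefficientwise(elem))  # [2, 2, 0, 0]
```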
We just use Gauss's algorithm, or even a faster algorithm like Schönhage's. But if the base field is not Q, then instead of calling an SVP oracle, what we can do is descend this rank-two projected sublattice to the subfield sitting just under the field we are working in. This means we now need to reduce another full-rank lattice of larger dimension, and we do that recursively. Once this reduction is done, what we get from it is a short vector, and this short vector needs to be lifted back up to the upper field; we can do that using a kind of generalization of the Euclidean algorithm. Then we do exactly as in the original LLL and cycle until the basis is reduced.

Let me highlight the two main differences with the original LLL algorithm, apart of course from the recursive structure. One is the unit rounding, which is necessary to ensure polynomial time; the other is the lift, which simply does not exist and is not needed in the classical reduction. You can then optimize things further, in particular the QR decomposition and size-reduction subprocedures, using a symplectic structure. This will be the final part of the talk, but as a quick spoiler: the idea is that you can halve the computation time of this whole part.

To give you another view of the algorithm, take a suitable tower of number fields with a big field K_h on top, and suppose we want to reduce a rank-d lattice over K_h. As I said, we call the reduction of multiple instances of rank-two projected sublattices, and we descend each such sublattice to the subfield right under. This gives us new lattices, and we again call the reduction to reduce the problem to rank-two sublattices, and we continue until we reach Z. When Z is reached, we can use Schönhage's algorithm to completely reduce these rank-two lattices. We find a short vector v, complete it into a reduced basis, plug that back in, and it yields a short vector of the lattice one level up, which we lift using this generalization of the Euclidean procedure. We continue going up and up, and at some point we have fully reduced the rank-two projected sublattices appearing in the reduction. This is the basic idea of how the recursive strategy over the tower of number fields works.

Now, a bit of heuristic complexity, because in this reduction, and in particular in the behavior of the lifting part, we couldn't prove everything, so we rely on some mild heuristics. For cyclotomic fields of sufficiently smooth conductor, the complexity of this reduction procedure for rank-two modules, where the input is represented as a matrix M of bit size B, is heuristically Õ(n²B), which is way faster than what a classical LLL algorithm would achieve on a lattice of the same size: we decrease the dependency in the dimension from n^4 to n². One thing to notice is that we lose slightly on the approximation factor: LLL retrieves vectors within an approximation factor 2^O(n), and here we get something in 2^Õ(n). So we have a slight loss in the approximation factor, but in exchange we are much faster.
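To make the descent step concrete, here is a runnable toy example in Python of restriction of scalars in the simplest case, K = Q(i) over Q: an element a + bi of Z[i] acts on the Q-basis (1, i) as a 2x2 integer matrix, so a rank-two module basis over Z[i] descends to a rank-four lattice basis over Z. This is my own minimal illustration of the general mechanism, not code from the paper.

```python
import numpy as np

def as_real_matrix(a, b):
    """Matrix of multiplication by a + b*i on the Q-basis (1, i) of Q(i)."""
    return np.array([[a, -b],
                     [b,  a]])

def descend_basis(M):
    """Descend a rank-2 basis over Z[i] to a rank-4 basis over Z.

    M is a 2x2 matrix of Gaussian integers, each given as a pair
    (a, b) meaning a + b*i.  Every entry is replaced by its 2x2
    multiplication matrix, giving a 4x4 integer matrix whose
    columns span the descended lattice.
    """
    blocks = [[as_real_matrix(*M[i][j]) for j in range(2)] for i in range(2)]
    return np.block(blocks)

# A rank-2 module basis over Z[i]:  [[1+2i, 3], [0, 1-i]]
M = [[(1, 2), (3, 0)],
     [(0, 0), (1, -1)]]
print(descend_basis(M))
```

The same idea applies at every floor of the tower: a rank-two module over K_i becomes a rank-four module over the subfield K_{i-1}, which is why the recursion always feeds on rank-two blocks.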
So now let's see how we can improve the complexity of the algorithm using symplectic symmetries. The goal here is to halve the time of the size-reduction part I mentioned, and this can be seen as a generalization of the work of Gama, Howgrave-Graham, and Nguyen in 2006 on symplectic lattice reduction.

First of all, let's draw up a small table of differences between Euclidean and symplectic spaces. To construct a Euclidean space, you take a real vector space and endow it with a symmetric bilinear form. For a symplectic space, you endow it instead with an antisymmetric bilinear form, say ω. The subgroup of linear transformations preserving the symmetric bilinear form of a Euclidean space is called the orthogonal group, whereas the group of transformations preserving the symplectic form is called the symplectic group associated to the form. We have a notion of nice canonical bases for both of these spaces: for Euclidean spaces these are the orthonormal bases, the bases for which the matrix of the inner product is the identity; for symplectic spaces these are the bases for which the matrix of ω has an identity block and a minus identity block on the antidiagonal, and such a basis is called a Darboux basis.

Why is this useful? If you think of size reduction as a discretization of the orthogonalization process, then you can adapt size reduction to work as a discretization of the Darboux process. For instance, to size-reduce a basis, you start with the first vector, use it to orthogonalize the second, then the third, the fourth, and so on. The interesting idea for a Darboux basis is this: you start with the first vector and use it to reduce the second one, but, because of the symplectic symmetry, the reducedness of the second vector also gives you, for free, the reduction of the penultimate vector. Then for the third vector you orthogonalize it, in some sense, against the first two, and for free you get the third vector counting from the end, and you continue like this until you reach the middle of the basis. So for the cost of orthogonalizing only half of the basis, you get the full orthogonalization of the Darboux basis, thanks to the symplectic symmetries.

The point is that any rank-two lattice is naturally symplectic, up to scaling the form you consider, which is just the determinant form; in particular, any rank-two algebraic lattice becomes naturally symplectic for a suitable form. What we would like, then, is to carry this symplectic symmetry through each level of the recursion tree of the algorithm I presented earlier, which means we need to construct a full symplectic structure compatible with the descent. Using all of these techniques together, you can halve the reduction time at each level of the recursion tree, and thus gain a polynomial factor in the overall complexity. So heuristically we get an improved complexity using this symplectic trick, and the overall complexity for a rank-two module over cyclotomic fields becomes something in Õ(n^(2-ε) B).
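As a quick sanity check of the claim that rank-two lattices are naturally symplectic, here is a tiny Python verification (my own illustration, not code from the paper) of the 2x2 identity A^T J A = det(A) · J: every rank-two basis change preserves the standard antisymmetric form up to the determinant scaling, which is exactly the "symplectic up to scaling" statement above.

```python
import numpy as np

# Matrix of the standard symplectic form omega(x, y) = x1*y2 - x2*y1,
# i.e. the determinant form det([x | y]).
J = np.array([[0, 1],
              [-1, 0]])

def preserves_omega(A):
    """Check the rank-two identity A^T J A == det(A) * J.

    It holds for every 2x2 matrix A, which is why any rank-two
    lattice carries a symplectic structure once the form is
    rescaled by the determinant.
    """
    return np.allclose(A.T @ J @ A, np.linalg.det(A) * J)

rng = np.random.default_rng(0)
for _ in range(5):
    A = rng.integers(-9, 10, size=(2, 2)).astype(float)
    print(preserves_omega(A))  # always True in rank two
```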
Here B is still the bit size, and the "minus something" depends on the field you are considering: for instance, for power-of-two cyclotomic fields you get something in Õ(n^(log₂ 3) B), which is better than the n²B estimate you obtain from counting swaps when analyzing the algorithm with the potential method. This is very interesting, because the symplectic technique actually beats the number of swaps that the potential analysis suggests is required. All in all, you get a vector within an approximation factor 2^Õ(n) of the normalized covolume.

In practice, and this is implemented in the PARI/GP language, this technique allows us to reduce algebraic lattices in dimension 2^11 in four days, where fplll would take an estimated 40,000 years, an improvement of something like four million. We were also able to reconstruct the Gentry-Szydlo algorithm and run it in dimension 1024 in 100 hours, where fplll on the same kind of lattices would take, say, 10 years, which is again a huge improvement.

Thank you for your attention, and I'll be happy to answer questions in the corresponding Q&A session.