Hi everyone, Thomas here. I'm glad to present some new results on faster polynomial-time lattice reduction algorithms. This is joint work with my colleagues Paul Kirchner and Pierre-Alain Fouque.

First of all, let me make some recalls on lattices and on the LLL algorithm. What are we working with here? To set the stage, we take the Euclidean space, here R^2 in the picture, and a bunch of linearly independent vectors in this space. If you consider the set of all possible integral linear combinations of these vectors, you get a very regular structure, a kind of grid in high dimension. It actually has the structure of a group: it is a subgroup of the Euclidean space, and by construction it is discrete. This algebraic structure is called a lattice.

In a lattice you might want to measure the density of the points of your structure, and for that you use a geometric invariant called the covolume. It can be computed from your basis vectors as the square root of the determinant of the Gram matrix, that is to say the matrix of inner products of your vectors. It also corresponds to the volume of the parallelepiped spanned by your vectors. I say it is a geometric invariant because it is invariant under a change of basis.

Actually, I am already speaking of having multiple bases of your lattice, so the question is: how can I go from a skewed, very bad basis to a nice one like the first one I showed? A solution, at least in dimension 2, is to use the shortest vector to reduce the length of the longest one. More precisely, you take the shortest element in the coset of the long vector modulo the line spanned by the short one; you then get a shorter vector, and you repeat as long as you can, and at some point this process stops. This is in substance the so-called Lagrange-Gauss reduction algorithm. This algorithm has very interesting properties; in particular, the output basis satisfies that its first vector is one of the shortest vectors of the lattice. Moreover, you can prove from that that the length of this shortest vector is smaller than (4/3)^(1/4) times the square root of the covolume of the lattice, a constant which is independent of the lattice itself. More generally, in any dimension, in any rank, we get the following theorem, which is the Minkowski-Hermite theorem for the first minimum: in a lattice Λ of rank d, the length of the shortest vector is smaller than some constant, depending only on the dimension, multiplied by the normalized covolume covol(Λ)^(1/d).

So we know that such short vectors exist. However, finding them algorithmically is a hard problem; it has been proved to be NP-hard. However, in '82, Lenstra, Lenstra and Lovász showed that there exists a polynomial-time algorithm which, given any lattice Λ, gives you a lattice vector at most 2^d times longer than the shortest vector. This might seem pretty big, but actually this LLL algorithm is quite useful. For instance, we can solve the simultaneous Diophantine approximation problem, which is finding approximations of real numbers by rationals with a common denominator. From that, you can use the same method to find the minimal polynomial of an algebraic number. You can factor polynomials over the rationals, which was the original application of Lenstra, Lenstra and Lovász. And of course you can do cryptanalysis with it, which might be what interests me the most in the context of this conference. For instance, you can solve knapsack problems in very specific settings.
You can break RSA with small public exponent, and you can also, of course, attack lattice-based cryptography. Moreover, it also helps computations in algebraic number theory: in particular, it is very important in normal form computations, for working with ideals, and also to control the size of the elements appearing in your computations and ensure polynomial time.

So let's see how we can construct, or rather reconstruct, this LLL algorithm. First of all, if you take any basis, say v_1, ..., v_d, of your lattice Λ, then you get a filtration from that basis, constructed as follows: you start from the zero sublattice, then consider Λ_1, the lattice spanned by v_1, then Λ_2, spanned by v_1 and v_2, and so on until you reach the full lattice. So you get an increasing sequence of sublattices. Now you might want to quantify the quality of your filtration, and to do so we again use the covolume, or rather the degree, which is the logarithm of the covolume, of each of the sublattices appearing in the filtration. In turn you get a bunch of real numbers, corresponding to the degree of each element appearing in your filtration.

We have seen that the Gauss reduction algorithm helps to reduce lattices in dimension 2, but given a filtration we also get a bunch of rank-2 lattices: all the quotients Λ_{i+1}/Λ_{i-1} are of rank 2, so we can use the Gauss reduction algorithm on these quotients. More precisely, what we are going to do is use the Gauss algorithm on the projected sublattice corresponding to this quotient, then lift the result and put it back in our basis, and more generally in the corresponding filtration. If we look at the effect of this operation, say at place i, we can easily see that all the degrees are invariant except the degree of Λ_i, which is replaced by that of some Λ'_i, and in particular we can show that this new degree is smaller than the previous one. So, all in all, Gauss reduction is a local tool, since we only apply it at a very specific position of the filtration to densify it there, that is, to reduce the covolume.

Okay, so from that operation we can construct a simple iterative algorithm that performs Gauss reduction steps as long as possible. We just start from the beginning of the basis, say the first two vectors, then move on to the second and third, and so on. But maybe doing so will undo the reduction of the already reduced first, second and third vectors, so that we may need to redo a reduction step earlier on. We then continue, go on, maybe go back, and go on again, but in the end we finish by reaching the end of the basis with no more Gauss step possible anywhere; in that case we say that the basis is LLL-reduced.
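To make this iterative sweep concrete, here is a minimal textbook-style sketch in Python/NumPy. It is written in the classical Gram-Schmidt formulation with the usual Lovász parameter delta = 3/4, which is equivalent to the projected rank-2 Gauss-step picture used in this talk; the function name and the small example basis are just for illustration, and this is not the implementation discussed here.

```python
import numpy as np

def lll_reduce(B, delta=0.75):
    """Textbook LLL sweep on the rows of B (one basis vector per row)."""
    B = np.array(B, dtype=float)
    d = B.shape[0]

    def gram_schmidt(B):
        # Orthogonalized vectors B* and the mu coefficients of the current basis.
        Bstar = np.zeros_like(B)
        mu = np.zeros((d, d))
        for i in range(d):
            Bstar[i] = B[i]
            for j in range(i):
                mu[i, j] = B[i] @ Bstar[j] / (Bstar[j] @ Bstar[j])
                Bstar[i] -= mu[i, j] * Bstar[j]
        return Bstar, mu

    k = 1
    while k < d:
        # Size-reduce vector k against the previous ones (recomputing the
        # Gram-Schmidt data each time keeps the sketch simple, not fast).
        for j in range(k - 1, -1, -1):
            _, mu = gram_schmidt(B)
            B[k] -= round(mu[k, j]) * B[j]
        Bstar, mu = gram_schmidt(B)
        # Lovasz condition on the projected rank-2 block (b_{k-1}, b_k):
        # if it fails, a Gauss-style swap is needed and we step back.
        if Bstar[k] @ Bstar[k] >= (delta - mu[k, k - 1] ** 2) * (Bstar[k - 1] @ Bstar[k - 1]):
            k += 1
        else:
            B[[k - 1, k]] = B[[k, k - 1]]
            k = max(k - 1, 1)
    return B

# Example: reduce a skewed basis of a rank-3 lattice.
print(lll_reduce([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))
```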
If we analyze this algorithm, we find a dependency which is sextic in the rank and cubic in the size of the integers. However, if we are careful enough, we can use floating-point arithmetic instead of exact arithmetic; a first step was made by Nguyen and Stehlé in 2009, where they showed that we can do LLL reduction with a dependency which is only quadratic in the logarithm of the size of the elements appearing in the basis. More recently, Neumaier and Stehlé showed that with a recursive strategy you can go down to a quartic, or rather quasi-quartic, dependency in the dimension and a quasi-linear dependency in the logarithm of the size. However, this algorithm is purely theoretical, and the constants hidden in the big O are too large for it to be concretely implemented. The question now is: how can we improve this dependency on the rank? Since the quasi-linear dependency in the size already seems close to optimal, let's focus on the rank dependency.

So, how do we get faster lattice reduction? We are going to completely change the structure that is used: instead of the iterative back-and-forth strategy of LLL, we are going to use parallelization and recursion on the rank. Let me explain that a bit more. I said that all of the reduction boils down to doing Gauss steps on projected rank-2 sublattices. What we can do first, and this was already hinted at in work from '93, is to do parallel Gauss steps everywhere it is possible, as long as they do not overlap: basically on the first two vectors, on the projection of the third and fourth, and so on. Then you do all these reductions in parallel. But if you stop there, you never get any interaction between, say, the second and the third vector. So what you do is shift all your windows and do the same parallel Gauss reduction on blocks straddling the previous ones, and then you continue, again and again, and in the end you will reach a reduced basis.

If you take a full round of local reduction, that is, all the odd steps and all the even steps together, and you look at the effect of this operation on the degrees, you are basically applying a discretized version of the Laplacian operator: each degree is replaced by the average of its neighbours' previous degrees. This is very reminiscent of the diffusion behaviour of the solution of the heat equation, because the heat equation basically says that at each infinitesimal increment of time you apply the Laplacian operator on your space; here, at each time step, we apply a discrete Laplacian operator to the profile of degrees. Since we know that the characteristic time of diffusion for the heat equation is quadratic in the diameter of the space, we get exactly the same kind of property here: in substance, the number of steps you need to reach a stable state is roughly quadratic in the diameter of the space, which here is the rank of the lattice.
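Just to illustrate this diffusion heuristic numerically, here is a small, self-contained simulation of the averaging dynamics on a degree profile with fixed endpoints (the degrees of the trivial sublattice and of the full lattice do not move). It is only an illustration of the heat-equation analogy, not of the reduction algorithm itself; the function name and the profiles are made up for the example.

```python
import numpy as np

def rounds_to_flatten(profile, tol=1e-3):
    """Apply the discrete averaging step (each inner degree replaced by the
    mean of its two neighbours) until the profile is essentially at its
    equilibrium, the straight line between the two fixed endpoints."""
    x = np.array(profile, dtype=float)
    target = np.linspace(x[0], x[-1], len(x))   # balanced (linear) profile
    rounds = 0
    while np.max(np.abs(x - target)) > tol:
        x[1:-1] = (x[:-2] + x[2:]) / 2          # discrete Laplacian smoothing
        rounds += 1
    return rounds

# The number of rounds grows roughly quadratically with the profile length,
# mirroring the diffusion time of the heat equation.
for d in (16, 32, 64, 128):
    step_profile = np.concatenate([np.ones(d // 2), np.zeros(d // 2)])
    print(d, rounds_to_flatten(step_profile))
```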
Okay. So now we can remark, in addition, that all these operations are actually local, so we can do the same with big blocks instead of blocks of size two, and recurse inside the blocks. For instance, here we take a few big blocks and apply parallel reduction on each of them; each of those reductions consists of the same recursive calls, alternating odd and even steps, and so on. When that is done, we shift all our windows by half a block and do the same, reducing by calling recursively on the shifted ranges, and we do this again and again on the shifted windows.

So we want to exploit this locality property in the algorithmic design, and doing so amounts to designing the algorithm using only block matrix operations. Let's now see in more detail what these block operations are, precisely.

First of all, we need to invert triangular matrices; this will be used everywhere. So, as a warm-up, take a block triangular matrix, say with blocks A and D on the diagonal and C above. You can write its inverse directly, in the style of Schur complements: it has A^{-1} and D^{-1} on the diagonal, and the block -A^{-1} C D^{-1} in the corner. The formula is pretty explicit, so we can construct a recursive algorithm from it, and the complexity of this algorithm amounts to matrix multiplication: you can invert for the cost of matrix multiplication.

Okay, so now let's see how we can compute the so-called Cholesky decomposition by blocks. The Cholesky decomposition is an algorithmic tool to handle filtrations: I said previously that we were directly using filtrations, but in practice we need an algorithmic grip on them, so we want to work with matrices, and to do so we look for a kind of normal form of the basis. It amounts to finding an orthogonal transformation that puts the basis in triangular form, because triangular form is the way to encode a filtration as a matrix. Basically, it just means that we look at the lattice modulo any rotation of the space, and we pick a good rotation so that the matrix becomes triangular; it is nothing much more than that. So how can we do this computation by blocks? Instead of working directly with the basis, we work with the Gram matrix, which is symmetric, and we divide it into blocks A, B, C. For the first corner, suppose we can directly find the decomposition, that is, a triangular matrix L_A such that L_A^T L_A = A. Then we construct, as before, the Schur complement of A, the matrix S = C - B^T A^{-1} B, and we can decompose it recursively, finding L_S such that L_S^T L_S = S. With a bit of computation one can show that the Cholesky factor of the whole matrix is built from L_A, L_S and L_A^{-T} B. From that we can directly derive a block Cholesky algorithm, which is the transcription of this construction; once again, the costly step, apart from the recursive calls, is matrix multiplication, and you can prove that the whole computation amounts to matrix multiplication.
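As an illustration, here is a minimal NumPy sketch of this block Cholesky recursion, written in the L^T L convention used above; it only demonstrates the Schur-complement construction and is not the algorithm of the paper, which additionally interleaves reduction steps and has to control precision. The function name and the random test matrix are just for the example.

```python
import numpy as np

def block_cholesky(G):
    """Return an upper-triangular L with L.T @ L == G, for G symmetric
    positive definite, using the block Schur-complement recursion."""
    n = G.shape[0]
    if n == 1:
        return np.sqrt(G)                      # 1x1 base case
    h = n // 2
    A, B, C = G[:h, :h], G[:h, h:], G[h:, h:]
    LA = block_cholesky(A)                     # factor of the top-left block
    X = np.linalg.solve(LA.T, B)               # off-diagonal block LA^{-T} B
    S = C - X.T @ X                            # Schur complement C - B^T A^{-1} B
    LS = block_cholesky(S)                     # recurse on the Schur complement
    L = np.zeros_like(G, dtype=float)
    L[:h, :h], L[:h, h:], L[h:, h:] = LA, X, LS
    return L

# Quick check on a random positive definite Gram matrix.
M = np.random.rand(6, 6)
G = M.T @ M + 6 * np.eye(6)
L = block_cholesky(G)
print(np.allclose(L.T @ L, G))                 # True
```

Apart from the two recursive calls, the work is the triangular solve and the matrix products for the Schur complement, which is why the overall cost is that of matrix multiplication.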
Now, the final tool we need for our reduction is the so-called size reduction. What is size reduction? It is a process that reduces the off-diagonal elements using a lattice-compatible transformation, that is to say a unimodular transformation: a transformation matrix with integer coefficients and determinant plus or minus one. So how can we reduce the off-diagonal elements? Here I have normalized the matrix R that we construct with the QR decomposition, dividing each row by the length of its diagonal entry. We can use the same block-structure idea: start by reducing the upper part of the off-diagonal elements; supposing we can do that, we then also reduce the lower part, and, as for the triangular inversion, it remains to handle correctly the upper-right block. How can we do that? Write the transformation as U = [[U_1, X], [0, U_2]], with X an integral matrix which is unknown for the moment. The effect of U on the triangular matrix written by blocks as [[A, C], [0, D]] is to apply the transformation U_1 to A, the transformation U_2 to D, and to turn the upper-right block into A X + C U_2. So if we suppose that U_1 and U_2 are correctly chosen, so that A U_1 and D U_2 are as small as we want, then we just want A X + C U_2 to be close to zero, with X integral. Basically we get a recursive algorithm from that, which amounts to size-reducing A, size-reducing D, and taking for X the rounding of -A^{-1} C U_2; then you are basically done. This is in substance a block variant of a reduction introduced by Seysen, so let's call it Seysen size reduction.

So we can move on to the actual reduction procedure, which amounts to using our general design, applying parallel reduction and recursion, together with all of our block tools. Once the dimension is two, we use the Gauss algorithm or, if you want to be asymptotically more efficient, Schönhage's algorithm, which is quasi-linear in the bit size. Then, for a certain number of iterations, which is basically quadratic in the diameter as I said, we do the following reduction: we start by computing the Cholesky decomposition with the block Cholesky algorithm I showed, then we Seysen-reduce the basis to reduce the size of all the coefficients appearing, and then we apply the window strategy and reduce all the blocks recursively. Here we introduce a condition, reminiscent of the Lovász condition in the original LLL algorithm, which basically says whether or not you need to reduce, so that if a block is already reduced enough you just move on to the next one, and the whole process eventually stops.

If you look at the running time and at where most of the computation is done, you can see that, essentially for free, when the dimension is small enough with respect to the rank and to the size of the entries appearing, you do not have to go all the way down to the leaves of the recursion tree: you can stop a bit before and use a more costly reduction there, such as BKZ reduction, and the overall running time of the algorithm remains unchanged. So basically, for free, you can use something slightly more powerful than the Gauss algorithm, in a slightly larger dimension, and in the end you get vectors which are slightly shorter than before.

All in all, with this little trick, we can heuristically conjecture that in time which amounts to matrix multiplication, so d^ω multiplied by the condition number C of your basis, we can find a vector which is within 2^(d log log C / log C) times the normalized covolume. This d log log C / log C is basically because we use BKZ instead of going all the way down to rank two with the Gauss algorithm; otherwise we would just get 2^d, as with LLL.
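Before turning to the experiments, here is a minimal NumPy sketch of the Seysen-style block size reduction described a moment ago, following the recursion literally (X is the rounding of -A^{-1} C U_2). It is only an illustration: the actual procedure also normalizes R and controls precision, and the function name and the random test matrix are made up for the example.

```python
import numpy as np

def seysen_size_reduce(R):
    """Return an integer unimodular matrix U (block upper triangular, so of
    determinant +-1) reducing the off-diagonal part of the upper-triangular
    matrix R: the new upper-right block A X + C U2 equals A times a matrix
    whose entries have absolute value at most 1/2."""
    n = R.shape[0]
    if n == 1:
        return np.eye(1)
    h = n // 2
    A, C, D = R[:h, :h], R[:h, h:], R[h:, h:]
    U1 = seysen_size_reduce(A)                  # reduce the upper-left block
    U2 = seysen_size_reduce(D)                  # reduce the lower-right block
    X = -np.rint(np.linalg.solve(A, C @ U2))    # integer corner block
    U = np.zeros((n, n))
    U[:h, :h], U[:h, h:], U[h:, h:] = U1, X, U2
    return U

# Demo: the top-level residue A^{-1} (A X + C U2) has entries bounded by 1/2.
n = 8
R = np.triu(100 * np.random.randn(n, n), 1) + np.diag(np.random.uniform(1, 10, n))
U = seysen_size_reduce(R)
residue = np.linalg.solve(R[: n // 2, : n // 2], (R @ U)[: n // 2, n // 2:])
print(np.max(np.abs(residue)) <= 0.5 + 1e-9)   # True
```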
As for the running time, this d^ω bound is conjectural right now, but we made a lot of experiments, and in particular here you have on the left some traces of execution. The abscissa is the log of the dimension of the lattices we are considering, from 2^7 to 2^11, and in ordinates you get the log of the running time in seconds. We also increased the condition number of the matrices linearly, and we see that the slopes of these graphs are almost all equal and around 2.7, which is the exponent we used for matrix multiplication. So, just before moving on to other experiments and results, I would like to point out that we can reduce this d^ω dependency for knapsack-like lattices.