Hi everyone, I'm Johanna from Inria, and I'm going to talk about lattice sieving via quantum random walks. First of all, I will introduce some notions about lattice sieving and quantum computing. Then I will explain how our algorithm works and give some results about its complexity.

A lattice is defined as the set of all integer linear combinations of a given basis of vectors. From this, we can define the shortest vector problem, SVP: given a lattice, we have to find the shortest non-zero vector in this lattice. In this example, we have the basis b1 and b2 in blue, and the lattice vectors are represented by black dots. The shortest vector problem asks us to find the shortest vector of the lattice spanned by b1 and b2, which is the vector in red.

So, why do we want to solve SVP? From a cryptography point of view, it is an NP-hard problem, and it is hard on average. Several problems derive from SVP, such as SIS, LWE, or NTRU, and several cryptosystems believed to be quantum-resistant are based on these derived problems. From a cryptanalysis point of view, all of these cryptosystems are broken if we can find a reduced basis of the lattice that is used, and the BKZ algorithm finds a reduced basis using an oracle that solves SVP as a subroutine. So, in a nutshell, the security of these cryptosystems directly relies on the complexity of solving SVP.

There are several methods to solve SVP. The two main practical ones today are enumeration and sieving, and today all of the methods we know to solve SVP run in exponential time. In this presentation, I focus on the sieving method. It is a heuristic method, and the main heuristic is that lattice vectors behave like random vectors. This implies that vectors of norm at most R lie on the border of the sphere of radius R in dimension d. This heuristic has been validated by experiments, so it is not far from reality to consider that lattice vectors behave like random vectors.

This is the algorithm for the sieve step. The idea is that we begin with a list of N lattice vectors of norm at most R, and we take a parameter gamma strictly smaller than one. As output, the algorithm returns a list of the same size N, with N lattice vectors of norm at most gamma R. So the output vectors are shorter than the input vectors. To do this, we simply check each pair of vectors in the list: we check whether the norm of the difference is smaller than gamma R, and if it is the case, we add v minus w, the difference, to the output list.

Here we see the sphere of dimension d and radius R, with our vectors all over the sphere. If v and w are lattice vectors, then v minus w is a combination of two lattice vectors, so it is also a lattice vector. And if v and w are close in angle, then v minus w can be shorter than v or w. The limiting angle for this to be true is when v and w form an angle of pi over 3: if they form a bigger angle, they will not reduce each other.

To solve SVP using this algorithm, we begin with a lattice, defined by a basis of vectors, and as output we want the shortest vector of the lattice. It is a probabilistic algorithm, so it will return the shortest vector with high probability.
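As an illustration of this sieve step, here is a minimal Python sketch, assuming the lattice vectors are stored as NumPy arrays; the function name sieve_step and the choice to stop once N output vectors are found are illustration choices of mine, not code from the talk.

```python
import numpy as np

def sieve_step(vectors, gamma, R):
    """One sieve step: from N lattice vectors of norm <= R,
    return (up to) N lattice vectors of norm <= gamma * R."""
    output = []
    n = len(vectors)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = vectors[i] - vectors[j]
            # v - w is a combination of two lattice vectors, hence a lattice
            # vector itself; keep it if it is short enough (and non-zero).
            if 0 < np.linalg.norm(diff) <= gamma * R:
                output.append(diff)
                if len(output) == n:
                    return output
    return output
```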
To do this, the first step is to generate a list of N lattice vectors, for example with Klein's algorithm. Here, the value of N is the smallest value such that we have at least one vector that reduces with another vector in a list of N vectors; that is, it is the smallest value such that we can have N vectors in input and N vectors in output. So we have our list. While the list does not contain a short vector, we apply a sieve step to it with the parameter gamma close to one, and then we return the shortest vector we have found with this method.

At the beginning, when we have just generated the list L, the norms of the vectors are at most R, for a certain R which is the maximum of the norms of the vectors. After the first iteration of the while loop, the norms of the vectors are at most gamma R. After the second iteration, we gain a factor gamma again, so it is gamma squared times R. After only a number of iterations polynomial in d, the dimension of the lattice, we reach gamma to the power poly(d) times R, which is exponentially smaller than the R we started from. So we can expect that there is a short vector in the sphere of this radius.

The time complexity of this algorithm is N squared, because we have N squared pairs to check in the list of size N. The space complexity is just N, because we have to store the N vectors of the list.

This algorithm can be improved by using filtering. The idea is that we only want to check pairs of close vectors, because pairs of vectors at an angle larger than pi over 3 do not interest us: they do not reduce each other. To do this, we use filters. A filter is characterized by a center s, which is basically a vector of the sphere, and an angle alpha. Here we have the sphere and a filter of center s and angle alpha; v is at an angle at most alpha with s, so we can say that v is in the filter of center s and angle alpha, while w is outside this filter.

To improve the sieve using this, the first step is to generate filters all over the sphere. To do this, we consider a code, and the codewords act as the centers s of the filters. Then we add each vector to its nearest filters, those whose centers are at an angle at most alpha with it; to do this, we use the list decoding algorithm provided by the code structure. And then we search for reducing pairs: for each vector, we search for a reducing one within its filters. So instead of checking in the whole list, we only check within the filters. This third step can be done classically by checking each vector one by one, or quantumly by a Grover search.

About the complexity: for the original sieve it was this, and we see that using LSF reduces the time exponent a lot. Our goal was to reduce this time exponent again.

Now for the quantum computing part. One of the most famous quantum algorithms is Grover's algorithm. The idea is that we have N data and a check function f that takes an x as input and returns 0 or 1. The goal of Grover's algorithm is to return an x such that f(x) equals 1. Grover's algorithm does this in about square root of N steps, whereas in the classical model we have to take about N steps until we find the correct answer.

Another very useful quantum algorithm is the quantum random walk. It takes as input a graph and a function f as before. We can choose f as we want, and a vertex on which f evaluates to 1 is what we call a marked vertex.
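To make the filtering idea concrete, here is a toy Python sketch under simplifying assumptions: the filter centers are plain unit vectors checked by brute force, whereas the construction described in the talk uses a structured code with list decoding to make this step efficient. The names bucket_by_filters and find_reductions are mine.

```python
import numpy as np

def bucket_by_filters(vectors, centers, alpha):
    """Assign each vector to every filter whose (unit-norm) center is within
    angle alpha of it. A real implementation would use a random product code
    with list decoding instead of this brute-force scan."""
    cos_alpha = np.cos(alpha)
    buckets = {i: [] for i in range(len(centers))}
    for v in vectors:
        u = v / np.linalg.norm(v)
        for i, s in enumerate(centers):
            if np.dot(u, s) >= cos_alpha:  # angle(v, s) <= alpha
                buckets[i].append(v)
    return buckets

def find_reductions(buckets, gamma, R):
    """Search for reducing pairs only inside each filter bucket,
    instead of over the whole list."""
    out = []
    for vecs in buckets.values():
        for i in range(len(vecs)):
            for j in range(i + 1, len(vecs)):
                d = vecs[i] - vecs[j]
                if 0 < np.linalg.norm(d) <= gamma * R:
                    out.append(d)
    return out
```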
And the quantum random walk returns a marked vertex. I will explain this further with an example, to be more concrete.

So, now, our algorithm. It is basically a sieve step: we begin with N lattice vectors of norm at most R, and we return a list of the same number of lattice vectors, but shorter ones. The main idea was to take the previous quantum algorithm and to replace the Grover search step by a quantum random walk.

For the first step, we sample a code and the associated alpha-filters, which are filters of angle alpha. Then we insert each list vector into its unique nearest alpha-filter. Basically, this separates the sphere into large areas of angle alpha. With the parameter c we can choose the number of vectors in each alpha area, and we will optimize this parameter c later.

Here we have again our sphere of dimension d and a filter of center s. Since we consider that the lattice vectors behave like random vectors, they are in fact lying on the border of the filter, at angle alpha with the center s. These are d-dimensional vectors, and if we take the difference between two of them, it is the same as considering their residual vectors on the (d-1)-dimensional sphere. And we have an equivalence between the angles we are searching for: we want reducing pairs on the d-dimensional sphere, that is, pairs of vectors at an angle at most pi over 3, and the corresponding angle between two residual vectors is a certain theta*_alpha, which only depends on alpha.

Now for step two. We have our areas of angle alpha, and for each of these areas we search in the corresponding filter to find all of the solutions in this area. To do this, we start by constructing a vertex: we choose at random N to the power c1, which is another parameter, vectors from the current alpha-filter. Then we sample a second layer of filtering, the beta-filters of angle beta, and we insert each vertex vector into its nearest beta-filter. And then we perform quantum random walks in order to find all of the reducing pairs inside the alpha-filter.

The graph we are using is the Johnson graph, here with our parameters N to the power c and N to the power c1. It is constructed this way: each vertex is a set of N^c1 vectors chosen from the N^c vectors of the current alpha-filter, and there is an edge between two vertices if they differ by exactly one vector. To see a simpler example, here is the Johnson graph with parameters 5 and 2: we have five possible elements, each vertex is a set of two elements among these five, and there is an edge between two vertices if and only if they differ by exactly one element.

For the quantum random walk, our graph is now well defined, and we say that a vertex is marked if and only if it contains a pair of vectors at an angle at most theta*_alpha, that is, residual vectors whose corresponding d-dimensional vectors reduce each other. The quantum random walk returns such a marked vertex.

Now, to explain how this quantum random walk works: here we have the graph, very simplified, and here the current vertex we are working on. At the beginning, we have our N^c1 vectors.
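Here is a small Python sketch of the Johnson graph from the example, together with the marked-vertex test as I understand it; the names johnson_graph and is_marked are mine, and theta_star stands for the angle theta*_alpha from the talk.

```python
import numpy as np
from itertools import combinations

def johnson_graph(n, k):
    """Johnson graph J(n, k): vertices are the k-element subsets of
    {0, ..., n-1}; two vertices are adjacent iff they differ by exactly
    one element, i.e. their intersection has size k - 1."""
    vertices = [frozenset(c) for c in combinations(range(n), k)]
    edges = [(u, v) for u, v in combinations(vertices, 2)
             if len(u & v) == k - 1]
    return vertices, edges

def is_marked(vertex_vectors, theta_star):
    """A vertex is marked iff it holds two residual vectors at angle <= theta_star."""
    cos_t = np.cos(theta_star)
    for u, v in combinations(vertex_vectors, 2):
        if np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)) >= cos_t:
            return True
    return False

# The small example from the talk: J(5, 2) has 10 vertices and 30 edges.
vertices, edges = johnson_graph(5, 2)
print(len(vertices), len(edges))
```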
These are residual vectors, and we consider that they lie on the sphere of dimension d-1. Then we generate the filters, the beta-filters, and we add each vector to its nearest filters. Now we begin the walk. On the graph, we move by adding a new vector v_new from the current alpha-filter. We compute its nearest filters and then we check, within those filters, whether there is a vector that reduces with v_new. If it is the case, we win and we return the result. If we do not find a reducing pair, we move to the next vertex: we delete one vector at random and we add another v_new from the current alpha-filter. We do this again and again until we find a vector close to v_new, and by taking the difference of these two vectors, we find a shorter vector.

On this scheme I described a classical random walk, choosing at random a vector to be deleted and a vector to be added. For the quantum random walk, the difference is that we do not choose at random, because we take a quantum superposition of all the possible neighbor vertices.

For the complexity of one quantum random walk, we need five values. The setup cost S is the cost to construct the first vertex and to compute the beta-filters for each vector. Then we have the update cost U: we construct the quantum superposition of all the neighbors of the current vertex and, during this step, we do the search for vectors reducing with v_new within its beta-filters. By doing this search during the update step, the checking step C is immediate: it takes constant time because the work has already been done. Then we also need epsilon, the probability for a vertex to be marked, that is, to contain two vectors that reduce each other. And finally delta, the spectral gap of the graph. The overall complexity of a quantum random walk is given by this formula, and the only difference between a quantum random walk and the classical random walk is the square roots over epsilon and delta.

To summarize our algorithm: step one separates the sphere into large areas of angle alpha, and then, during the second step, we search for all of the solutions inside each alpha area. In fact, during this we miss some of the solutions, for example when we have a reducing pair v and w with v in one alpha area and w in another one. So we have to run these steps again and again, changing the alpha areas by changing the filters, until we get the N reduced vectors.

For the complexity of this algorithm: in fact we are doing N quantum random walks, so the overall complexity is this one. The values of S, epsilon, delta and U depend only on three parameters that we can choose as we want: c, so that we have N^c vectors per alpha-filter; c1, so that we have N^c1 vectors per vertex in the graph; and c2, so that we have N^c2 vectors per beta-filter. After a numerical optimization, we get three optimized parameters that give a solution to SVP in dimension d in time 2^(0.2570 d). We also computed the space complexities, with three different types of memory. For the QRAM and quantum memory, it is quite practical, because their space exponents are not as high as the classical one. Compared to other sieving algorithms, it is of much the same order, but it requires a model that allows QRAM and quantum memory. So, finally, we have trade-offs.
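The formula itself is only on the slide and not in the transcript; the standard quantum-walk cost in the MNRS framework, which matches the description of S, U, C, epsilon and delta given here, would read as follows (this reconstruction is mine):

```latex
% Reconstructed quantum-walk cost (MNRS framework): S = setup cost,
% U = update cost, C = checking cost, \varepsilon = probability that a
% vertex is marked, \delta = spectral gap of the graph.
S \;+\; \frac{1}{\sqrt{\varepsilon}}\left(\frac{1}{\sqrt{\delta}}\,U \;+\; C\right)
```

The corresponding classical walk would cost S + (1/epsilon)((1/delta) U + C), so the square roots over epsilon and delta are indeed the only difference, as noted in the talk.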
In fact, if we have as much quantum memory as we want, we can run the algorithm in the optimal time. But if we have limited quantum memory, we can still run the algorithm with less of it: by choosing the right parameters, we get a higher running time, but we still get a solution. It is the same for a QRAM limitation, and we see that the trade-off curves are very close to affine.

To give a synthesis of these trade-offs: if we choose the best parameters for having zero QRAM and zero quantum memory, we recover the time complexity of the current best classical sieving algorithm for SVP. If we add some QRAM, we recover the complexity of the previous best quantum algorithm. And when we add some QRAM and quantum memory, we obtain our optimal time.

To conclude, we showed with this work that the time needed to break lattice-based cryptosystems is in fact lower than previously thought. Concretely, that means that if one claimed to have a cryptosystem with 128 bits of security, it in fact has a few bits of security less, which is not a lot, and it can be fixed with a slight increase of the parameters. Thank you very much for your attention.