Two papers. We start with the paper "Analyzing Blockwise Lattice Algorithms Using Dynamical Systems" by Guillaume Hanrot, Xavier Pujol, and Damien Stehlé. The speaker is Xavier. Thank you.

So I will present the paper "Analyzing Blockwise Lattice Algorithms Using Dynamical Systems". As was said, it is joint work with Guillaume Hanrot and Damien Stehlé.

First, the context. Lattices are mathematical objects that provide hard problems which can be used to build various cryptographic primitives, such as encryption, hash functions, and so on. The best known attacks against lattice-based cryptosystems rely on what are called block-wise lattice reduction algorithms. That is why we study these algorithms: to assess the security of lattice-based cryptosystems. The most widely used of the lattice reduction algorithms is called BKZ, and that is the one we will study in this talk. It is the most widely used because in practice it is the most efficient, despite the fact that it is not well understood in theory: no reasonable time bound is known for any variant of BKZ. Our contribution is to give the first worst-case analysis of the BKZ algorithm. To do that, we use a new technique: we introduce a model of the algorithm, which is interesting in itself for understanding other aspects of this algorithm and of other lattice reduction algorithms.

The talk is in two parts. I will first give a few reminders on lattices and lattice reduction and state the main result. Then I will explain the main ideas behind the result and the model we used to obtain it.

A lattice is a grid of points: the set of all integer linear combinations of n linearly independent vectors. Here we have two vectors that generate the lattice represented by the black dots. These two vectors are called a basis of the lattice, and as long as the dimension, the number of vectors, is at least two, every lattice has infinitely many bases. The problem of lattice reduction consists in finding bases of the lattice made of rather short and rather orthogonal vectors. For instance, a more reduced basis of this lattice would be this one. In dimension two it is easy; finding reduced bases becomes hard when the dimension grows.

Now, all bases of the same lattice have the same determinant; it is an invariant of the lattice. The ratio between the norm of the first vector of a basis and the n-th root of the determinant gives a measure of how well a basis is reduced. This ratio is called the Hermite factor, and to put it simply, our goal is to find bases of lattices with small Hermite factors. It is known that the Hermite factor can always be made as small as the square root of γ_n, Hermite's constant, which grows linearly in n, the lattice dimension.

A more precise way to measure lattice reduction consists in considering the Gram-Schmidt orthogonalization of the basis. I will keep the same notation during the whole talk: x_i is the logarithm of the norm of the i-th Gram-Schmidt vector of the basis, and the shape of the x_i's shows how well the lattice is reduced.

There are various notions of lattice reduction. The strongest is called HKZ reduction. When a basis is HKZ-reduced, its Hermite factor is optimal, the square root of γ_n, and the shape of the x_i's is very flat; it is a strong reduction.
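To make these quantities concrete, here is a minimal sketch (my own illustration, not from the paper) that computes the profile of the x_i's and the Hermite factor of a concrete basis; the basis B below is an arbitrary example.

```python
import numpy as np

def gram_schmidt_log_norms(B):
    """Return x_i = log ||b_i*||, where b_1*, ..., b_n* is the
    Gram-Schmidt orthogonalization of the rows of B."""
    B = np.array(B, dtype=float)
    Bstar = B.copy()
    for i in range(1, len(B)):
        for j in range(i):
            mu = (B[i] @ Bstar[j]) / (Bstar[j] @ Bstar[j])
            Bstar[i] -= mu * Bstar[j]
    return np.log(np.linalg.norm(Bstar, axis=1))

def hermite_factor(B):
    """||b_1|| / det(L)^(1/n), using det(L) = prod_i ||b_i*||."""
    x = gram_schmidt_log_norms(B)
    return np.exp(x[0] - x.mean())

B = [[5, 0, 0], [4, 3, 0], [2, 2, 1]]   # arbitrary full-rank integer basis
print(gram_schmidt_log_norms(B))        # the profile (x_1, ..., x_n)
print(hermite_factor(B))                # about 2.03 here; reduction shrinks it
```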
On the other hand, it takes exponential time to compute an HKZ-reduced basis, so it is impractical. At the opposite end, LLL reduction is a weak reduction: it achieves only an exponential Hermite factor, and there is a big gap between consecutive x_i's, but it can be computed in polynomial time. What we study is a compromise between the two: BKZ. It is not a single algorithm but a hierarchy of algorithms that takes a parameter β. When β equals 2, BKZ is equivalent to LLL; when β equals n, it is equivalent to HKZ. In between, BKZ achieves an exponential Hermite factor, but the constant in the exponent can be made as small as we want by increasing β.

BKZ makes use of HKZ in small dimension: if we apply BKZ to an n-dimensional lattice with parameter β, it makes use of HKZ in dimension β, so it takes time at least exponential in β. To have something continuous between HKZ and LLL, we would also like the running time to be polynomial in n. But as I said, we have no bound on the complexity of BKZ, so we do not know whether this question mark can be replaced by poly(n).

A brief history of BKZ and other algorithms of the same family: the definition was given in 1987 and the algorithms a few years later. Experimental results, in particular by Gama and Nguyen, show that it is in fact very unlikely that BKZ is polynomial in n, that is, it is unlikely that this question mark can be poly(n). On the other hand, there are block-wise algorithms, still between HKZ and LLL, that do have a complexity polynomial in n. In practice, however, BKZ is the most efficient algorithm and achieves the best compromise. That is why it is really worth trying to understand and bound the complexity of BKZ even though we know other algorithms.

So what does the BKZ algorithm do? BKZ is just a loop, and each iteration of the loop does a small part of the reduction: the strong HKZ reduction is applied at most n times in small dimension, in dimension β. This is the main step of the algorithm, and one such step takes time polynomial in n. The problem is: how many times is this small reduction step applied? In the standard version of BKZ, it is applied until a full loop of HKZ reductions changes nothing, but no bound is known on when that happens.

So what can we do? Here is a curve that shows the evolution of the quality of the basis during an execution of BKZ: time on one axis, quality on the other; when the Hermite factor decreases, the quality improves. During the first tours, the first 100 or so, the quality improves quickly. But after that, there are many thousands of loops in this dimension, and between tour 200 and the end, nearly nothing happens.

That is the basis of our result. We did not prove that BKZ ends in time polynomial in n, but we proved something nearly as good: we can simply stop BKZ after a polynomial number of iterations, polynomial in n (n^3 times a factor that is nearly negligible). After this polynomial time, the quality of the BKZ output is nearly as good as what can be proved for the standard version of BKZ, for which we have no time bound. In red is the bound for the standard version of BKZ; ours differs only by some ε and some small constant. So that is the main idea: just stop BKZ earlier. I will now explain the model that we used to prove this result.
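To fix ideas, here is a rough skeleton of this interrupted variant (my own sketch, not the paper's pseudocode). The helper hkz_reduce_block, which stands for the strong HKZ/SVP subroutine applied to a projected block of dimension at most β, is hypothetical and not shown; it is the part that costs time exponential in β.

```python
def bkz_interrupted(B, beta, max_tours):
    """BKZ-beta stopped after max_tours tours (the 'interrupted' variant).
    Standard BKZ instead loops until a whole tour changes nothing, with no
    known bound on when that happens. hkz_reduce_block(B, k, size) is a
    hypothetical helper that HKZ-reduces the projected block starting at
    position k and returns whether the basis changed."""
    n = len(B)
    for _ in range(max_tours):        # polynomially many tours suffice
        changed = False
        for k in range(n - 1):        # one tour sweeps all positions
            size = min(beta, n - k)   # blocks shrink near the end
            changed |= hkz_reduce_block(B, k, size)
        if not changed:               # standard termination, if it ever fires
            return B
    return B
```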
For most lattice algorithms, the analysis relies on a potential function: a numerical function of the input that decreases during the execution of the algorithm and cannot go below a certain value, so the algorithm must stop. With BKZ this does not work; we did not manage to make it work. So we used a different technique. Instead of having just one number, we have a full vector: the vector of the x_i's, that is, the logarithms of the norms of the Gram-Schmidt vectors. We first analyze a model in which we make only one assumption: HKZ reduction always follows a fixed pattern, which corresponds to a sort of worst-case HKZ-reduced basis. This implies that knowing only the x_i's, not the complete basis, gives all the information needed to simulate the algorithm.

I will show how the model works on this animation. This is a small basis of dimension 9 that is very weakly reduced at the beginning, and I will apply the model of BKZ. It consists in doing HKZ reduction at each position in small dimension; say the parameter is 4, so we do HKZ in dimension 4. I start at position 1, and it flattens the first block of x_i's. The same thing happens at position 2, and so on, until the end. This is one loop of BKZ, and then it starts again. The main assumption of the model is that the shape of the HKZ block at each step is exactly the same. This makes it a linear algebra problem, because the first step consists in taking the mean of the first four x_i's, which is a matrix multiplication, and then adding a fixed shape, which means adding a constant vector. The same holds at position 2, and so on until the end. So a full loop of BKZ is just a combination of matrix multiplications and vector additions, and can itself be expressed as multiplication by one matrix A, whose expression can be computed, plus a constant vector γ.

So now this is a linear algebra problem, the analysis of which is mainly technical. It is in two steps. The first is to find a fixed point of the system: it can be shown that the system has a fixed point that is nearly a straight line, and the slope of the fixed point is directly related to the Hermite factor; it corresponds to the Hermite factor given in the theorem. The second step is proving that the largest eigenvalue is smaller than 1, to show that the iteration converges in polynomial time. It is not quite that simple, but up to small details, the largest eigenvalue is 1 minus one over a polynomial in n, and this is enough to prove that the dynamical system converges to the fixed point in polynomial time. This leads to the claimed complexity bound. That was the analysis of the model.
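To make the model concrete, here is a toy simulation (my own re-implementation under the assumption above, not the paper's exact system). One tour acts directly on the vector of x_i's: each block is replaced by its mean plus a fixed zero-sum shape. The shape below is invented for illustration, where the paper's comes from worst-case HKZ-reduced bases, and for simplicity this toy only touches full-size blocks.

```python
import numpy as np

def model_tour(x, beta, shape):
    """One BKZ tour in the model: replace each block of beta consecutive
    x_i's by its mean (which preserves the block determinant) plus the
    fixed HKZ shape. Each block update is affine in x, so a whole tour
    is x -> A x + gamma for some matrix A and constant vector gamma."""
    x = x.copy()
    for k in range(len(x) - beta + 1):
        x[k:k + beta] = x[k:k + beta].mean() + shape
    return x

n, beta = 9, 4
shape = np.linspace(1.0, -1.0, beta)   # illustrative decreasing profile...
shape -= shape.mean()                  # ...made zero-sum so block sums are kept
x = np.linspace(8.0, 0.0, n)           # very weakly reduced starting profile
for _ in range(100):
    x = model_tour(x, beta, shape)
print(x)  # settles on a fixed profile; in the paper's model it is nearly a
          # straight line whose slope gives the Hermite factor
```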
Our result, however, is a result not on the model but on the real algorithm, and the analysis cannot be transposed directly: we have to do some transformation and some averaging. In the end, though, a rigorous adaptation of what was done on the model, which was a sort of worst-case analysis, carries over to the real algorithm, and it is enough to obtain the result on the quality of the BKZ output.

In conclusion, we have the first analysis of BKZ. We do not prove that it terminates in polynomial time, but the idea is simply to stop it earlier, and that works: the output has good quality. For that we used a new method, which relies on a model that can itself be used to prove other things about block-wise algorithms. We hope that it will lead to better strategies for reducing lattices and to improved block-wise lattice algorithms. One remaining problem is that ours is a worst-case analysis, and in practice the real BKZ algorithm behaves better than what is proved in the worst case. So there is still a gap we would like to explain, and we expect the model can be improved to make it more predictive. Thank you.

We have time for a short question. [Inaudible question.] No, this is a worst-case analysis; we do not know anything about standard BKZ, we know things only about the interrupted BKZ. We have a worst-case analysis of that, and we can prove that it is nearly as good as the original. Yes. So there is a small gap, and that is what I was saying here: we would like to improve the model to fill this gap. But yes, there is a small gap. OK, let's thank the speaker again.