Two papers. We start with the paper "Analyzing Blockwise Lattice Algorithms Using Dynamical Systems" by Guillaume Hanrot, Xavier Pujol, and Damien Stehlé. The speaker is Xavier.

Thank you. So I will present the paper "Analyzing Blockwise Lattice Algorithms Using Dynamical Systems". As was said, it is joint work with Guillaume Hanrot and Damien Stehlé. First, the context. Lattices are mathematical objects that provide hard problems that can be used to build various cryptographic primitives, such as encryption, hash functions, and so on. The best known attacks against lattice-based cryptosystems rely on what are called block-wise lattice reduction algorithms. That is why we want to study these algorithms here: to assess the security of lattice-based cryptosystems. The most widely used of the lattice reduction algorithms is called BKZ, and that is the one we will study in this talk. It is the most widely used because in practice it is the most efficient, despite the fact that it is not well understood in theory; that is, no reasonable bound is known on the running time of BKZ. Our contribution is to give a first worst-case analysis of the BKZ algorithm. To do that, we use a new technique: we introduce a model of the algorithm, which is interesting in itself for understanding other aspects of this algorithm and of other lattice reduction algorithms.

So the talk is in two parts. I will first give a few reminders on lattices and lattice reduction and state the main result, and then I will explain the main ideas behind the result and the model we used to obtain it.

A lattice is a grid of points: the set of all integral linear combinations of a fixed number of linearly independent vectors, say n linearly independent vectors. So here we have two vectors that generate the lattice represented by the black dots. The two vectors are called a basis of the lattice, and as long as the dimension, the number of vectors, is at least two, every lattice has infinitely many bases. The problem of lattice reduction consists in finding a basis of the lattice made of rather short and rather orthogonal vectors. For instance, a more reduced basis of this lattice would be this one. When the dimension is two, it is easy; when the dimension grows, it becomes harder. So finding a reduced basis is hard when the dimension grows.

Now, all bases of the same lattice have the same determinant; it is an invariant of the lattice. The ratio between the norm of the first vector of a basis and the n-th root of the determinant gives a measure of how well a basis is reduced. This ratio is called the Hermite factor. To make things simple, our goal is to find bases of lattices with small Hermite factors. It is known that the Hermite factor can always be made as small as the square root of gamma_n, a quantity that grows linearly in n, the lattice dimension.

A more precise way to measure lattice reduction consists in considering the Gram–Schmidt orthogonalization of the basis. I will keep the same notation during the whole talk: x_i is the log of the norm of the i-th Gram–Schmidt vector of the basis. The shape of the x_i's shows how well the lattice is reduced.

There are various notions of lattice reduction. The strongest is called HKZ reduction. When a basis is HKZ-reduced, its Hermite factor is optimal, namely the square root of gamma_n, and the shape of the x_i's is very flat, which means it is a strong reduction.
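As an aside, not from the talk: a minimal sketch in Python of how one could compute the Gram–Schmidt log-norms x_i and the Hermite factor of a given basis. The function names are mine, and the 2-dimensional example basis is purely illustrative.

```python
import numpy as np

def gram_schmidt_log_norms(B):
    """Return x_i = log ||b_i*||, the log-norms of the Gram-Schmidt vectors of B (rows)."""
    n = B.shape[0]
    Bstar = np.zeros(B.shape, dtype=float)
    for i in range(n):
        v = B[i].astype(float)
        for j in range(i):
            # Remove the component of b_i along the j-th Gram-Schmidt vector.
            v -= (v @ Bstar[j]) / (Bstar[j] @ Bstar[j]) * Bstar[j]
        Bstar[i] = v
    return np.log(np.linalg.norm(Bstar, axis=1))

def hermite_factor(B):
    """||b_1|| / det(L)^(1/n); det(L) is the product of the Gram-Schmidt norms."""
    x = gram_schmidt_log_norms(B)
    return float(np.exp(x[0] - x.mean()))

# A badly reduced 2-dimensional basis: very long first vector.
B = np.array([[100, 1], [1, 0]])
print(gram_schmidt_log_norms(B))   # x_1 large, x_2 small (steep profile)
print(hermite_factor(B))           # about 100: far from reduced
```

Since the determinant is invariant, reduction can only reshuffle the x_i's; a reduced basis is one whose profile is flat, which is exactly what the HKZ picture on the slide shows.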
On the other hand, it takes exponential time to compute an HKZ-reduced basis, so it is impractical. At the opposite end, LLL reduction is a weak reduction: it achieves only an exponential Hermite factor, and there is a big gap between the x_i's, but it can be computed in polynomial time. What we study is a compromise between the two: BKZ. It is not a single algorithm; in fact, it is a hierarchy of algorithms parametrized by beta. When beta equals 2, BKZ is equivalent to LLL; when beta equals n, it is equivalent to HKZ. Between the two, BKZ achieves an exponential Hermite factor, but the constant in the exponent can be made as small as we want by increasing the parameter beta.

BKZ makes use of HKZ in small dimension: if we apply BKZ to an n-dimensional lattice with parameter beta, it makes use of HKZ in dimension beta, so it takes time at least exponential in beta. To have something continuous between HKZ and LLL, we would like the complexity to be polynomial in n. But as I said, we have no bound on the complexity of BKZ, so we do not know whether this question mark can be replaced by poly(n).

A brief history of BKZ and other algorithms of the same family: the definition was given in 1987, and the algorithms a few years later. Experimental results, in particular by Gama and Nguyen, show that it is in fact very unlikely that BKZ is polynomial in n; that is, it is unlikely that this question mark can be poly(n). On the other hand, there are block-wise algorithms, still between HKZ and LLL, that do have complexity polynomial in n. However, in practice, BKZ is the most efficient algorithm and achieves the best compromise. That is why it is really worth trying to understand the complexity of BKZ even though we know other algorithms.

So what does the BKZ algorithm do? BKZ is just a loop, and at each step of the loop we do a small part of the reduction. During this small part, the strong HKZ reduction algorithm is applied at most n times in small dimension, in dimension beta. This is the main step of the algorithm, and one step of BKZ takes time polynomial in n. The problem is: how many times is this small reduction step applied? In the standard version of BKZ, it is applied until a whole pass of HKZ reductions changes nothing inside the loop, and we cannot bound how long that takes.

So what can we do? Here is a curve that shows the evolution of the quality of the basis during an execution of BKZ: time on one axis, quality on the other; when the Hermite factor decreases, the quality improves. During the first loops, say the first 100 loops, the quality improves quickly. But after that, there are many thousands of loops in this dimension, and between loop 200 and the end, nearly nothing happens. That is the basis of our result: we did not prove that BKZ runs in time polynomial in n, but we proved something nearly as good, namely that we can simply stop BKZ after a polynomial number of iterations, polynomial in n: n^3 times a factor which is nearly negligible. After this polynomial time, the quality of the BKZ output is nearly as good as what we can prove for the standard version of BKZ, for which we have no time bound. In red is the bound for the standard version of BKZ; ours differs only by some epsilon and a small constant. So that is the main idea: just stop BKZ earlier. I will now explain the model that we use to prove this result.
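Before that, to fix ideas, here is a hedged sketch of the interrupted BKZ loop just described. The HKZ reduction of a projected block is the expensive part (exponential in beta) and is not implemented here; it is passed in as an oracle, so the names and the interface are mine, not the paper's.

```python
def interrupted_bkz(B, beta, tours, hkz_reduce_block):
    """Sketch of BKZ-beta stopped after a fixed number of tours.

    hkz_reduce_block(B, k, end) is an oracle that HKZ-reduces the block
    of B projected at positions k..end-1 and returns the updated basis.
    The talk's point: about n^3 times a nearly negligible factor many
    tours already give almost the full quality guarantee.
    """
    n = len(B)
    for _ in range(tours):
        for k in range(n - 1):          # one loop: one small HKZ call per position
            end = min(k + beta, n)      # blocks shrink near the end of the basis
            B = hkz_reduce_block(B, k, end)
    return B
```

The standard version would instead loop until a whole pass changes nothing; the result says the early cutoff loses almost nothing in output quality.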
For most lattice algorithms, the analysis relies on a potential function: a numerical function of the input that decreases during the execution of the algorithm, and since it cannot go below a certain value, the algorithm must stop. With BKZ this does not work, or at least we did not manage to make it work. So we used a different technique. Instead of having just one number, we have a full vector: the vector of the x_i's, that is, the logs of the norms of the Gram–Schmidt vectors. We first analyze a model in which we make only one assumption: HKZ reduction always produces a fixed pattern, corresponding to a sort of worst-case HKZ-reduced basis. This implies that knowing only the x_i's, not the complete basis, gives all the information needed to simulate the algorithm.

I will show how the model works on this animation. This is a small basis of dimension 9 that is very, very weakly reduced at the beginning, and I apply the model of BKZ. It consists in doing HKZ reduction at each position in small dimension; say the parameter is 4, so we do HKZ in dimension 4. I start at position 1, and it flattens the first block of x_i's. Same thing at position 2, and so on until the end. This is one loop of BKZ, and then it starts again. The main assumption of the model is that the shape of the HKZ block at each step is exactly the same. And as I said, it is now a linear algebra problem, because the first step consists in taking the mean of the first four x_i's, which is a matrix multiplication, and then adding a fixed shape, which means we add a constant vector. Same thing at position 2, and so on until the end. So a full loop of BKZ is just a combination of matrix multiplications and vector additions, and can itself be expressed as multiplication by one matrix A, whose expression can be computed, plus a constant vector gamma.

This is a linear algebra problem, the analysis of which is merely technical. It is in two steps. First, there is a fixed point of the system: it can be shown that the system has a fixed point which is nearly a line, and the slope of this line is directly related to the Hermite factor; it corresponds to the Hermite factor given in the theorem. The second step of the study of the model is proving that the eigenvalues are smaller than 1, so that the system converges in polynomial time. It is not quite that simple, but up to small details, the largest eigenvalue is 1 minus something of order 1 over a polynomial in n, and this is enough to prove that the dynamical system converges to the fixed point in polynomial time. This leads to the claimed complexity bound.

This was the analysis of the model. But our result is a result not on the model but on the real algorithm, so this cannot be transposed directly: we have to do some transformation and some averaging. In the end, though, a rigorous adaptation of what was done on the model, which was a sort of worst-case analysis, can be transposed to the real algorithm, and it is enough to obtain the result on the quality of the BKZ output.

So, in conclusion, we have a first analysis of BKZ. We do not prove that it ends in polynomial time, but the idea is to just stop it earlier, and it works: the output has good quality. For that, we used a new method, which relies on a model that can also be used to prove other things about block-wise algorithms.
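Again as an aside, a minimal sketch of the dynamical-system model as described above, on the dimension-9, beta-4 example from the animation. The `shape` vector below is a zero-sum placeholder for the worst-case HKZ pattern, not the one computed in the paper, and the shrinking tail blocks of a real BKZ tour are omitted for simplicity.

```python
import numpy as np

def model_tour(x, beta, shape):
    """One loop of BKZ in the model: each step replaces a block of the x_i's
    by its mean plus a fixed zero-sum 'shape', preserving the block sum
    (the log-determinant). Every step is affine in x, so a full tour is an
    affine map x -> A x + gamma for a computable matrix A and vector gamma."""
    x = x.copy()
    n = len(x)
    for k in range(n - beta + 1):
        x[k:k + beta] = x[k:k + beta].mean() + shape
    return x

n, beta = 9, 4
shape = np.linspace(1.0, -1.0, beta)    # placeholder HKZ shape (already zero-sum)
x = np.linspace(16.0, -16.0, n)         # a very weakly reduced starting profile
for _ in range(100):
    x = model_tour(x, beta, shape)
print(x)    # the iterates approach a fixed point that is nearly a straight line
```

In this toy version, one can check the two steps of the analysis numerically: the iteration has a nearly linear fixed point, and convergence toward it is geometric.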
So we hope that it will lead to better strategies to reduce lattices and to improve block-wise lattice algorithms. Another issue is that this is a worst-case analysis, and in practice the real BKZ algorithm behaves better than what is proved in the worst case. So there is still a gap we would like to explain, and we expect we could improve the model to make it more predictive. Thank you.

We have time for a short question. No, this is a worst-case analysis; we do not know anything for the standard BKZ, we know things only for the interrupted BKZ. So we have a worst-case analysis of that, and we can prove that it is nearly as good as the original. Yes, so there is a small gap, and that is what I was saying here: we would like to improve the model to fill this gap, but yes, there is a small gap. OK, thanks to Xavier again.

The second talk is "Pseudorandom Knapsacks and the Sample Complexity of LWE Search-to-Decision Reductions" by Daniele Micciancio and Petros Mol. The speaker is Petros.

Do you hear me? OK. So my name is Petros Mol, and today I will be talking about search-to-decision reductions for knapsack families and the LWE problem. This is joint work with my advisor, Daniele Micciancio. Is the timing correct? Really? OK.

Let me start by recalling the LWE problem. In the LWE problem, we have a secret n-dimensional vector s with coefficients from Z_q, where n and q are public parameters. We are given m samples, where each sample is a pair: the first component is a vector of the same dimension as the secret, chosen uniformly at random from Z_q^n, and each b_i is a noisy inner product of the random vector with the secret, where the noise is some small scalar drawn from a publicly known distribution. The goal is to find this secret s. For the rest of the talk, I will use this more succinct representation of LWE: we group all the samples into an m-by-n matrix A with random entries, and b compactly represents all the b_i's.

A little background on LWE. It was introduced by Oded Regev in 2005. It is a generalization of the well-known learning parity with noise problem to larger moduli. Since it was introduced, it has been very successful in cryptographic constructions, and as you can see here, especially in the last three years it has served as the underlying hardness assumption for many fancy primitives, especially in public-key crypto.

A slightly more formal look at LWE. The parameters are n, the size of the secret; m, the number of samples; q, the modulus; and chi, the error distribution. This is exactly the problem I defined on my first slide: given these m random samples, this m-by-n random matrix A and m noisy inner products with some fixed and common secret s, find s, or equivalently, find the error vector e. In the decision version of the problem, we are again given this m-by-n random matrix A, and our goal is to decide whether the second component of the input, this vector t, corresponds to an LWE instance, that is, m noisy inner products with a secret s, or is some random vector from Z_q^m, completely independent of everything else.

The talk focuses on understanding the hardness of these two problems and how their hardness relates to each other. But before doing that: why do we even care in the first place? In cryptography, we do need decision problems; these are the ones that are appropriate for crypto constructions.
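To make the two versions concrete, here is a minimal sketch of generating search and decision LWE inputs. Everything here is illustrative: the parameters are arbitrary, and bounded uniform noise stands in for the error distribution chi (the talk's results concern polynomially bounded noise).

```python
import numpy as np

rng = np.random.default_rng(0)

def lwe_instance(n=16, m=64, q=257, noise_bound=4):
    """Search LWE: return (A, b, s) with b = A s + e mod q and small e."""
    s = rng.integers(0, q, size=n)                           # secret in Z_q^n
    A = rng.integers(0, q, size=(m, n))                      # uniform m x n matrix
    e = rng.integers(-noise_bound, noise_bound + 1, size=m)  # small error vector
    b = (A @ s + e) % q
    return A, b, s

def decision_input(n=16, m=64, q=257):
    """Decision LWE: (A, t) where t is either A s + e or uniform in Z_q^m."""
    A, b, _ = lwe_instance(n, m, q)
    u = rng.integers(0, q, size=m)                           # independent uniform t
    return (A, b) if rng.random() < 0.5 else (A, u)
```

The search solver must output s (or equivalently e) given one sample of the first kind; the distinguisher must tell the two branches of the second kind apart.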
And that is the case for LWE, too. In particular, all known LWE-based constructions rely on the decision version of LWE, mainly due to the fact that most of the security definitions we have in crypto have a very strong indistinguishability flavor. On the other hand, from an algorithmic point of view, we understand the hardness of the search problems better, so over time, they have undergone more study by the community. The nice thing about search-to-decision reductions is that they bring those two families of problems together: given a search-to-decision reduction, we are able to prove claims such as "a primitive pi is secure under some notion of security" just by assuming that the corresponding search problem is hard.

Our results. We provide a toolset for studying search-to-decision reductions for LWE in the special case where the noise is polynomially bounded. As an added feature, these reductions turn out to be sample preserving; I will come back to that at the end of my talk. We give similar search-to-decision results for generalized knapsack functions, and in particular, we give some powerful and easy-to-check criteria for proving search-to-decision equivalence. As we will see, the first two bullets are closely related. Along the way, we use some techniques from Fourier analysis in a new context, and this might be interesting because these ideas might find applications elsewhere.

Let me actually start from the second bullet; I will come back to LWE towards the end of the talk. So what is a knapsack family? It is parametrized by some integer m and a finite abelian group G. We also have a set S of integer multipliers: S is a subset of the integers whose maximum absolute value s is polynomially bounded in the parameter m. A random knapsack function has as domain the m-dimensional vectors of multipliers and as range the group G. To sample a function from the family, we just sample m independent, uniformly random group elements. Evaluation simply takes each group element g_i, multiplies it by the corresponding multiplier from the vector of multipliers, and adds everything up to form a single group element. For ease of notation, I will use this dot-product notation instead of the right-hand-side sum.

Many instances in cryptography are actually subcases of this general description. Maybe the most representative example is random modular subset sum: this is the specific case where the set of multipliers is just the binary set, so either we take an element or we omit it, and the underlying group is the cyclic group Z_M.

Now, if we fix a specific distribution on the vector of integer multipliers, we have the same two problems for knapsack functions. The search problem: given a description of a member of the knapsack family and the image of some input x that follows a pre-specified distribution, find this unknown input x. The decision problem is defined very naturally: we are given a random member of the knapsack family, together with either the image of some unknown input x or a completely uniform and independent element u, and our task is to say which is the case.
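A minimal sketch of the knapsack family for the modular subset-sum case just mentioned: the group is the cyclic group Z_M and the multipliers are binary. Names and parameters are mine.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_knapsack(m, M):
    """A random member of the family: m uniform elements of the group Z_M."""
    return rng.integers(0, M, size=m)

def knapsack_eval(g, x, M):
    """f_g(x) = <g, x> mod M: multiply each g_i by x_i and add everything up."""
    return int(g @ x) % M

m, M = 32, 1009
g = sample_knapsack(m, M)
x = rng.integers(0, 2, size=m)     # binary multipliers S = {0, 1}: subset sum
y = knapsack_eval(g, x, M)
# Search: given (g, y), find x.  Decision: given (g, t), decide whether
# t = f_g(x) for x drawn from the input distribution, or t is uniform in Z_M.
```

Replacing Z_M by another finite abelian group and {0, 1} by another small multiplier set gives the general family discussed next.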
For ease of presentation, I will write K(G, D) for the family of knapsack functions defined over the group G with input distribution D. I will also borrow standard terms from cryptography: whenever the search problem is hard, I will just say that the corresponding family is one-way, and similarly, if the decision problem is hard, I will say that the family is pseudorandom, or a PRG for short.

So what do we know about search-to-decision connections for knapsack problems? Impagliazzo and Naor, in their paper from '89, proved that the decision problem is as hard as the search problem for the specific case where the multipliers are binary, so this is the subset sum problem, and G is the cyclic group Z_M. Fischer and Stern extended the result to the vector group Z_2^k, with the input distribution uniform over all m-bit vectors with a pre-specified Hamming weight.

We would like to take a step further and ask ourselves: is this true for other families, for other groups, or for other input distributions? Ideally, we would like a clean search-to-decision equivalence for every finite abelian group G and every distribution. Unfortunately, that is not the case. However, it turns out that if we add the additional condition in the red frame, the implication becomes true. This is, in fact, the main technical theorem of our paper. It basically says that the decision problem is hard assuming the search problem is hard and also that some other related decision problems are hard; by related, I mean problems defined for knapsack families whose ranges are related groups, namely quotient groups of the initial group G.

I do not want to spend too much time on this condition. It might seem at first sight to be a very artificial condition just to make the proof work; the only thing I will say is that, in reality, this condition is not restrictive at all. It is much less restrictive than it seems. In particular, in many interesting cases, this restriction holds in a very strong information-theoretic sense, so in reality we really do not have to check anything else for many interesting groups, and we get a clean search-to-decision equivalence. For example, this is the case for any group and any input distribution as long as the multipliers are binary: if we work over any finite abelian group G and the input distribution is over binary strings, then we get search-to-decision equivalence unconditionally. Just note that this implication alone generalizes the results of Impagliazzo–Naor and Fischer–Stern. The same holds for groups with prime exponent and any distribution. Another nice thing is that we can use all the nice tricks from information theory, the leftover hash lemma, bounds on the entropy of distributions, et cetera, and get similar results for other classes of groups.

One quick slide about the proof. We want to show how to solve the search problem given a distinguisher for the decision problem. We follow the approach of Impagliazzo and Naor and break the proof into two parts by introducing the intermediate notion of a predictor. So this is, again, the outline of Impagliazzo and Naor; however, due to the generality of our result, the details are fairly, fairly different.
In particular, in step one, Impagliazzo and Naor used the Goldreich–Levin hardcore predicate. In our case, we have to derive general conditions that allow us to invert such a knapsack function given only noisy predictions for x dot r, for random vectors r, modulo some number that is not necessarily prime. So neither Goldreich–Levin nor the Goldreich–Rubinfeld–Sudan results suffice in this setting. For this step, we use as a tool a nice algorithm developed by Akavia, Goldwasser, and Safra for learning heavy Fourier coefficients of general functions. In step two, we manage to show that there is a way to use the distinguisher to get a predictor that satisfies exactly the general condition of step one. I will not get into the details; this is actually the most technical part of the proof, and you can take a look at the paper if you are interested.

Let me now come back to LWE, as promised in the beginning of my talk. For that part, we use a known duality between LWE and knapsack problems. If you flip an LWE instance, that is, if you see it from a different point of view, it turns out that the LWE instance can be transformed into a knapsack instance, where in this matrix G I have grouped all m elements of the group into one big matrix. In the language of error-correcting codes, G is exactly the parity-check matrix of the q-ary code generated by the matrix A. The important thing to notice is that the only quantity that unifies these two problems is the vector e, the error vector: the error vector of LWE becomes the unknown input of the knapsack instance. It also turns out that all the distributions work out nicely: if you start from a random matrix A, you get a random matrix G. A similar transformation works the other way around as well, which is very convenient. If you put all the pieces together, you can go all the way from search LWE to decision LWE through the corresponding knapsack problems and use all the machinery and the strong theorem we have for knapsack search-to-decision.

So what does this mean in particular for LWE? Using these connections, we are able to re-prove all previously known such reductions with bounded error for learning with errors and for LPN, and we are also able to get search-to-decision reductions for instances of LWE not considered before. As an additional benefit, our reductions are sample preserving. All the previous reductions prove that m samples from the LWE distribution are indistinguishable, but they have to assume that the corresponding search problem is hard given a higher number of samples, still polynomially related to m, but higher. In contrast, our reductions are sample preserving: if one can solve the decision problem given m samples, then one can also solve the search version given the same number m of samples. Of course, the caveat is that there is a degradation in the success probability: if you start with distinguishing advantage epsilon, the inverting probability will be lower. But this seems to be unavoidable.

All right, so why should we care about the number of samples? The reason is that in natural instantiations of schemes, there is a certain number of LWE samples exposed.
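Going back to the duality transformation described above, here is a sketch of the LWE-to-knapsack direction under simplifying assumptions of mine: q is prime and the top n-by-n block of A is invertible mod q (otherwise permute rows first). Then G = [-A2·A1^{-1} | I] satisfies G·A = 0 mod q, so G is a parity-check matrix of the q-ary code generated by A, and G·b = G·e mod q turns the LWE error e into the unknown knapsack input.

```python
import numpy as np
from sympy import Matrix, eye

def parity_check(A, q):
    """Return G with G * A = 0 (mod q): a parity-check matrix of the q-ary
    code generated by A. Assumes q prime and A's top n x n block invertible."""
    m, n = A.shape
    A1 = Matrix(A[:n, :].tolist())      # top n x n block of A
    A2 = Matrix(A[n:, :].tolist())      # bottom (m - n) x n block of A
    left = (-A2 * A1.inv_mod(q)).applyfunc(lambda v: v % q)
    G = left.row_join(eye(m - n))       # G = [ -A2 * A1^{-1} | I ]
    return np.array(G.tolist(), dtype=int)

# For an LWE pair (A, b = A s + e mod q): (G @ b) % q == (G @ e) % q,
# a knapsack image of the error vector e under the matrix G.
```

The distributions also work out as claimed in the talk: a uniform A yields a suitably random G, which is what lets the knapsack theorem transfer back to LWE.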
And now the nice thing about sample-preserving reductions is that we can base the security of these schemes on the hardness of the search problem when given exactly m samples. From a concrete algorithmic point of view, that is interesting because some known attacks against the one-wayness of LWE, such as some lattice attacks and some recent algebraic attacks, seem to be sensitive to the number of samples exposed. In the extreme manifestation of this phenomenon, there are parameter ranges of LWE where, given enough samples, we can completely break LWE, whereas with fewer samples the attack does not seem to work.

OK, so let me conclude with some open problems. All the sample-preserving reductions that I mentioned work whenever the noise is bounded. It would be nice to extend this to the unbounded-noise case; especially lately, LWE with super-polynomial noise has been used in several applications, so this problem is well motivated. Let me just mention that we do know how to get search-to-decision reductions for LWE for some parameters with super-polynomial noise, by the work of Peikert, but these reductions are not sample preserving. The landscape is similar for the algebraic variant of LWE, ring-LWE: we do know search-to-decision reductions, but unfortunately, no sample-preserving ones are known. Thank you.

Very short question, if any. OK, then let's thank the speaker again.