Hello, and thank you for clicking on our video. Today I am going to present to you our work with Corentin Jeudy, Adeline Roux-Langlois and Weiqiang Wen. Before I state our contribution, I would like to give you its context: we contribute to the theoretical understanding of the hardness assumptions that underlie cryptography based on structured lattices. By this I mean that we are not really doing cryptography today, so I will not show you any construction; instead I will give you some theoretical reductions that deepen our understanding. Our main result can be summarized in the following statement: we show a classical reduction from a worst-case lattice problem to the module learning with errors problem with a small modulus and a linear rank. When I say classical, I mean that the reduction is not quantum. The worst-case lattice problem I am talking about is the approximate gap shortest vector problem over module lattices, and the module learning with errors problem has an underlying ring of degree n, which will be our asymptotic parameter throughout the presentation. When I say a small modulus, I mean that the modulus can be as small as polynomial in the ring degree, and the rank is also linear in the ring degree. This will be the outline of the talk: I will recap some notions about module lattice problems, then give some motivation for our result and some more technical details, before concluding with some open questions. So let's start. The first thing we need is the shortest vector problem. Any lattice, which is a discrete additive subgroup of the Euclidean space, has a minimum, namely the smallest norm of a non-zero vector in the lattice, and we can define the approximate gap shortest vector problem for an approximation factor gamma, a lattice Lambda and some positive parameter delta.
The task is to distinguish whether the minimum is smaller than delta or larger than gamma times delta; if the minimum lies between delta and gamma times delta, any answer is correct. You can already see that if the approximation factor becomes larger, the problem becomes easier. As we like to draw pictures in dimension 2, here we have a lattice of dimension 2 generated by two vectors b1 and b2, and the minimum in this case happens to be the norm of b1. If I give you a first circle, in blue, of radius delta 1, you can answer directly that the minimum is smaller than delta 1; and if I give you a second circle, in orange, of radius delta 2, you can answer directly that in this case the minimum is even larger than twice delta 2. What we really need in this talk is the gap shortest vector problem over module lattices. So let K be a number field of degree n and R its ring of integers. If you are not familiar with number-theoretic notions, you can think of K as the field of polynomials with rational coefficients modulo x^n + 1, and of R as the ring of polynomials with integer coefficients modulo x^n + 1. Every number field has what we call a canonical embedding, a field homomorphism from K to the Euclidean space, and this canonical embedding is equipped with some special symmetries. Any module over R of rank d then defines a module lattice, via this canonical embedding, in the Euclidean space of dimension dn, and every ideal, which is a module of rank 1, defines an ideal lattice of dimension n. If I lost you a bit, what I would really like you to take away is that not every lattice in dimension dn is a module lattice; module lattices are somehow special lattices. We can then define the approximate gap shortest vector problem over module lattices by simply replacing the general lattice by a module lattice in the definition.
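As a toy illustration of this decision problem (my own sketch, not part of the talk), the following code brute-forces the lattice minimum of a small 2-dimensional lattice and answers the GapSVP promise problem; the basis and parameters are made-up toy values, and the enumeration bound only makes sense in such tiny dimensions.

```python
import itertools
import math

def lattice_minimum(basis, coeff_bound=10):
    """Brute-force lambda_1: shortest non-zero vector over small integer
    combinations of the basis vectors (a toy search, fine in dimension 2)."""
    best = math.inf
    for c in itertools.product(range(-coeff_bound, coeff_bound + 1),
                               repeat=len(basis)):
        if all(ci == 0 for ci in c):
            continue  # skip the zero vector, which is excluded from the minimum
        v = [sum(ci * bi[j] for ci, bi in zip(c, basis))
             for j in range(len(basis[0]))]
        best = min(best, math.sqrt(sum(x * x for x in v)))
    return best

def gap_svp(basis, gamma, delta):
    """Answer GapSVP_gamma: 'YES' if lambda_1 <= delta, 'NO' if
    lambda_1 >= gamma*delta; inside the promise gap any answer is accepted."""
    lam = lattice_minimum(basis)
    if lam <= delta:
        return "YES"
    if lam >= gamma * delta:
        return "NO"
    return "EITHER"
```

For the basis b1 = (3, 1), b2 = (1, 2), the minimum is the norm of b2 - b1... more precisely sqrt(5), so `gap_svp([[3, 1], [1, 2]], gamma=2, delta=3)` answers "YES" while `delta=1` gives "NO", mirroring the two circles in the picture.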
So again, the task is to decide whether the minimum is smaller than delta or larger than gamma times delta, for an approximation factor gamma. That was the first lattice problem; now for the second. The learning with errors problem may be familiar to you, but I will still recap the definition. We set the quotient ring Z_q to be Z modulo qZ, and the LWE problem is given by a matrix A, sampled uniformly at random, together with a vector b of dimension m given by b = A s + e, where s is a secret vector of dimension d and e a vector of dimension m of small norm; you can think of a Gaussian. The search variant of the learning with errors problem asks you to find the secret s, and the decision variant asks you to distinguish whether (A, b) comes from this learning with errors distribution or whether b is sampled from the uniform distribution. So this is the learning with errors problem, and as I already said, we are interested in the variant over modules, so again we go to the number-theoretic framework: we simply replace the ring of integers Z by the ring of integers R of some number field, and we set the quotient ring R_q to be R modulo qR. We then have the same picture, where now the matrix A is sampled uniformly at random over R_q, with m the number of rows and d, the rank, the number of columns. The first thing I would like to remark is that d = 1 is a special case that we call the ring LWE problem. The second thing I would like to remark is that the matrix A somehow hides some structure: every element of A, say the first entry a11, defines a structured n-by-n matrix over Z_q, where n is the ring degree. We should not forget this, but for the rest of this talk it does not play an important role.
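To make the objects concrete, here is a small sketch of my own (not from the talk) that generates a plain LWE sample over Z_q; the parameters q, d, m and the uniform-in-an-interval noise are arbitrary toy choices standing in for a narrow Gaussian error.

```python
import random

def lwe_sample(q, d, m, noise_bound=2, rng=random):
    """Sample a plain LWE instance: uniform A in Z_q^{m x d}, secret s in
    Z_q^d, small error e in {-noise_bound, ..., noise_bound}^m, and
    b = A*s + e mod q."""
    s = [rng.randrange(q) for _ in range(d)]
    A = [[rng.randrange(q) for _ in range(d)] for _ in range(m)]
    e = [rng.randint(-noise_bound, noise_bound) for _ in range(m)]
    b = [(sum(aij * sj for aij, sj in zip(row, s)) + ei) % q
         for row, ei in zip(A, e)]
    return A, b, s, e
```

The search variant would ask to recover s from (A, b), and the decision variant to tell such a pair apart from (A, uniform b). In the module version, the Z_q entries are replaced by elements of R_q, which is exactly the hidden n-by-n structure just mentioned.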
Now that we have the two lattice problems, I would like to continue by motivating our results. So what do we know for LWE? Regev introduced a quantum reduction from the approximate gap shortest vector problem to the LWE problem, where the modulus can be small, and Peikert gave a classical reduction where the LWE modulus has to be large. Brakerski, Langlois, Peikert, Regev and Stehlé then merged, or took the positive of, both results to give a classical reduction where the modulus can also be small. The situation is slightly different for module LWE. Langlois and Stehlé gave a quantum reduction where the modulus can be small, for any rank, and it was folklore that it should be possible to adapt Peikert's proof to the module learning with errors problem, at the expense of having a large modulus and covering only the search variant, as there is no search-to-decision reduction for an exponentially large modulus for module LWE. And now comes our work. We thought: why could we not do the same as for LWE? So we somehow take the positive of both, and we show a classical reduction for a small modulus, going down to the decision variant, but at the expense of needing a linear rank. And why do we care? As you may have already heard, there is a NIST standardization process going on: the crypto community puts a lot of effort into standardizing possible post-quantum schemes, and among the third-round candidates multiple ones are based on the module LWE problem and its variants. There are Kyber and Dilithium, but there is also Saber, which uses a deterministic variant of the module learning with errors problem. To be honest, those schemes, in order to maximize efficiency, use ranks which lie rather between two and five, so much smaller than the ring degree n, which is more around five hundred or a thousand; so our result does not have a direct practical impact on those schemes. So let's move on to the technical details. Here I wanted to recall the statement from the beginning: I have shown you what the worst-case lattice problem is, and I have also recapped the module learning with errors problem.
Now I would like to show you how we managed to do this classical reduction for a small modulus. In fact we follow the high-level idea of the LWE analogue by Brakerski, Langlois, Peikert, Regev and Stehlé from 2013. We have three steps. The first step is a classical reduction from the gap shortest vector problem over module lattices down to the decision version of module LWE, at the expense of an exponentially large modulus. What we do is take the proof of Peikert, which gives us a classical reduction, and also use a more recent result by Peikert, Regev and Stephens-Davidowitz that lets us go down directly to the decision version. We continue with step two, which is a hardness reduction for module LWE with a binary secret; this is an interesting problem on its own. Here we take one of the first results in this direction for LWE and use a more intelligent noise flooding, making the parameters much better than the original ones while keeping the proof much simpler than in the BLPRS paper. Then there is a last step where we need to shrink the modulus, because so far it is exponentially large, while for practical schemes we need a polynomially large modulus. Here we really need the binary-secret hardness, because the loss in the reduction depends on the norm of the secret, and by taking a binary secret we minimize this loss. Today we are only going to focus on step two, the hardness of the binary module LWE problem. Let's start with a sample (A, As + e), where the secret s is binary, so taken over R modulo 2R, and of dimension d; we want to show that it is hard to find the secret s. The first step is something people call the lossy argument: we replace the matrix A by a multiple-secrets module LWE sample, so A becomes B C + Z.
Then A s + e becomes (B C + Z) s + e, that is, B (C s) + Z s + e, and here you can already see that this argument does not work if the number of columns of A is too small: we somehow need to hide a thinner matrix B inside A, and if A already has only one column, we cannot make it thinner, so we cannot hide anything. The second step is to argue that, by the leftover hash lemma, C s can be replaced by s', where s' is a uniform element over R_q. In order to apply the leftover hash lemma, the rank d has to be larger than l log q, so you can see that even if I take l equal to 1, the rank has to be at least logarithmic in q; and as you remember, in this big picture q was exponentially large, so the rank has to be large, linear in fact. The third key ingredient is noise flooding: we want to argue that Z s + e on one side and e' on the other, where e and e' follow the same distribution, are indistinguishable for an adversary. We end up with a sample of the module LWE problem with a uniform secret s'. So if it is hard to find this s', and we show that those distributions are close enough, then, assuming the hardness of module LWE with multiple secrets, module LWE with a binary secret is also hard to solve. Now let's focus on the noise flooding. In the original paper by Goldwasser, Kalai, Peikert and Vaikuntanathan, they use the statistical distance to measure how far away two probability distributions are; in our work we take an alternative measure, the Rényi divergence. I give you the definitions of those probability measures, but I guess for you it is just important that they are two different ways of calculating how far two probability distributions are from each other, a bit like taking two different norms on the same Euclidean space.
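Coming back for a moment to the leftover hash lemma step: to see why the rank d must grow with log q, here is a small numerical experiment of my own (a one-row toy version of C s over Z_q, not from the paper). For a fixed random vector c, it enumerates all binary secrets s and measures how far the distribution of the inner product of c and s modulo q is from uniform.

```python
import itertools
import random

def lhl_distance(q, d, seed=1):
    """Statistical distance from uniform of <c, s> mod q, where c is a fixed
    random vector in Z_q^d and s ranges over all 2^d binary vectors."""
    rng = random.Random(seed)
    c = [rng.randrange(1, q) for _ in range(d)]
    counts = [0] * q
    for s in itertools.product((0, 1), repeat=d):
        counts[sum(ci * si for ci, si in zip(c, s)) % q] += 1
    total = 2 ** d
    # 0.5 * sum of |empirical mass - uniform mass| over the q residues
    return 0.5 * sum(abs(k / total - 1 / q) for k in counts)
```

With q = 17, rank d = 4 gives only 16 possible secrets, fewer than q, so the distance is necessarily bounded away from zero, while d = 12 already yields a much flatter distribution. In the reduction, q is exponentially large, which is why the rank ends up linear in n.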
For example, if you take two Gaussians of width beta, one of them shifted by some vector s, the Rényi divergence is given by this value and the statistical distance by that value. There is an important difference between the two measures. Both fulfill a probability preservation property: for an event E, the probability that this event happens under the distribution Q bounds from above the probability that it happens under the distribution P. The difference is that for the statistical distance this bound is additive, whereas for the Rényi divergence the property becomes multiplicative: if you can ensure that Q(E) is negligible, then you can make sure that P(E) is negligible. To make this chain of implications work, you need the statistical distance to be negligible, but you only need the Rényi divergence to be constant, and you can see that this is much better: constant is much easier to achieve than negligible. We can see this in our concrete example: if you have the Gaussian of width beta and the same Gaussian shifted by s, and we assume that the norm of s is bounded by alpha, then for the statistical distance we need the ratio alpha over beta to be negligible, but for the Rényi divergence we only need this ratio to be constant, by an argument using a Taylor expansion at zero; so we get much better parameters. One caveat is that the Rényi divergence only works for search problems, and this is why we do everything for the search variant and apply a search-to-decision reduction at the end. So this was the high-level picture, step 1, step 2, step 3; today we only saw step 2, but if you are interested in steps 1 and 3, I invite you to look at the paper.
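This contrast between the two measures can also be checked numerically. The following sketch of mine (using discretized one-dimensional Gaussians as a stand-in for the ones in the talk) computes the statistical distance and the order-2 Rényi divergence between a Gaussian of width beta and its shift by alpha: the statistical distance scales like alpha over beta, while the Rényi divergence stays bounded whenever that ratio is bounded.

```python
import math

def shifted_gaussian_pair(beta, alpha, grid=2000, span=12.0):
    """Discretize N(0, beta^2) and N(alpha, beta^2) on a common grid of
    grid+1 points covering [-span*beta, span*beta]."""
    xs = [-span * beta + i * (2 * span * beta / grid) for i in range(grid + 1)]
    p = [math.exp(-x * x / (2 * beta * beta)) for x in xs]
    q = [math.exp(-(x - alpha) ** 2 / (2 * beta * beta)) for x in xs]
    zp, zq = sum(p), sum(q)
    return [v / zp for v in p], [v / zq for v in q]

def stat_distance(p, q):
    """Statistical (total variation) distance between two discrete laws."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def renyi2(p, q):
    """Order-2 Renyi divergence R_2(P || Q) = sum_i p_i^2 / q_i."""
    return sum(a * a / b for a, b in zip(p, q) if b > 0)
```

With alpha/beta = 1/2, the order-2 Rényi divergence sits near exp(1/4) no matter how both parameters are scaled, which is the "a constant ratio suffices" behaviour exploited in the noise flooding; the statistical distance, by contrast, only becomes negligible if alpha/beta itself is negligible.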
Let me finish with some open questions. The first remark is that there is a lot of related work going on right now where the secret is taken from a small distribution, not necessarily binary but maybe of small bounded norm; I refer to entropic LWE, and there is work on entropic module LWE and ring LWE going on as well. There is also some work in progress on our side, where we try to refine the proof of hardness for binary module LWE and make it independent of the number of samples. And some big open questions remain that we could not answer: what about the classical hardness, and the hardness of binary secrets, for smaller ranks, in particular for rank equal to one, that is, the ring LWE problem? Maybe a first step would be to generalize some of our results that are restricted to specific number fields and make them valid for more general sets of number fields. This is the end of my talk; I thank you for listening, and I hope to see you at Asiacrypt online.