Hi everyone, my name is Alice Pellet-Mary and I'm going to present joint work with Damien Stehlé on the hardness of NTRU. Let me start with a bit of context. If you're not familiar with NTRU, here is a quick recap. NTRU is an algorithmic problem based on lattices. It is conjectured to be post-quantum, meaning that we don't know how to break it even with a quantum computer. It uses structured lattices, which makes it quite efficient, and it has already been used in many different cryptographic constructions. As an example, three of the submissions that made it to round three of the NIST post-quantum standardization process use NTRU: two encryption schemes and one signature scheme. Another characteristic of NTRU is that it is quite old for a lattice-based cryptographic assumption: it has been around for more than 20 years and has already been studied quite a lot. But there is one aspect of NTRU which has not been studied so much so far, and which is the topic of our article: the relations between the hardness of different variants of NTRU, and between NTRU and other algorithmic problems. This is what we call a reduction; on my picture, an arrow means a reduction. For instance, this was the state of the art before our article: you can reduce decision NTRU to search NTRU, which means, in other words, that if you can solve search NTRU, then you can solve decision NTRU. This is not a very surprising reduction; it is quite immediate. I'm going to define search and decision NTRU more formally later, but the idea of search NTRU is to recover some secret information, whereas decision NTRU only asks you to distinguish whether something is an NTRU instance or uniform.
If you can recover the secret information related to NTRU, then you can say "oh, it's an NTRU instance", and if you don't get any secret, then you say it is uniform. So if you can solve the search variant of a problem, you can usually solve its decision variant. This was the only reduction known for NTRU, and what we do in this work is prove a bunch of other reductions. Let me comment a bit on this picture of what we prove. We introduce two search variants of NTRU; I'm going to define them, but we realized that there are two quite natural ways of defining the search variant of NTRU, and for the moment we don't know whether the two search variants are equivalent. The first search variant we can prove equivalent to the decision variant of NTRU. The second search variant we can prove is at least as hard as some worst-case problem on ideal lattices; ideal lattices are just lattices with some extra structure. The picture is not yet complete, because what we would really like for cryptography is a guarantee on the hardness of the decision NTRU problem, since this is the one we usually use to construct cryptographic primitives. We would like to base its hardness on some worst-case problem on ideal lattices, for instance, and you can see that for the moment we don't have a reduction from a worst-case lattice problem to decision NTRU. But this is already a beginning. OK, so now let me go into more detail about these results. I will start by defining NTRU and the three variants of NTRU I've told you about: the decision variant and the two search variants. First of all, I'm going to use number fields and rings of integers. For this talk I fix a power-of-two cyclotomic number field, but you can take any nice number field.
The parameters of the reductions I'm going to show you depend on some quantities of the number field, but if the discriminant is not too big and you know a good basis of the ring of integers, everything goes through the same way. And if you are not familiar with number fields and rings of integers, for most of the talk you can just imagine that K is the field Q of rational numbers, R is the ring Z of integers, and R_Q is Z/QZ. I will point out the one place where it matters that we are in a number field and not just over the rationals and the integers; for most of the talk, it is not important. So let me start by defining an NTRU instance. An NTRU instance is essentially an element H of R modulo Q that can be written as H = F/G mod Q for some small F and G. I parameterize my NTRU instances by two parameters: Q, which is a modulus, and gamma, which measures how small F and G are. The smallness condition takes the following form: I consider the Euclidean norms of F and G (when F and G are polynomials, this is the Euclidean norm of the vector of coefficients of the polynomial), and in a (gamma, Q)-instance these norms are bounded by sqrt(Q)/gamma. The larger gamma, the smaller F and G. I call gamma the gap, because we will see later that if you take H uniform mod Q, you expect to be able to write it as F/G with F and G of the order of sqrt(Q). So for gamma equal to one, essentially every H is an NTRU instance; but when you increase gamma, the number of gamma-NTRU instances decreases, because you impose stronger and stronger conditions on the size of F and G. Now consider a small pair F, G that can be used to write H.
I'm going to call that an NTRU trapdoor, a trapdoor for H. An important remark here is that F and G are not unique: there can be multiple ways of writing H as F/G. But they are semi-unique, in the sense that the span of the vector (F, G) is unique. Said differently, if I have two trapdoors (F, G) and (F', G'), then F' = alpha·F and G' = alpha·G for some alpha in the field. Yet another way to write it: F/G is equal to F'/G', where the division is performed over the field K, not modulo Q. So this quantity is unique, and I'm going to call it H_K; it is a kind of lift of H over the rationals. Okay, now that I've defined NTRU instances, I can define the different variants of the NTRU problem. The decision variant is what you would expect: you are given an element H which is either uniform modulo Q or sampled from some distribution over NTRU instances, and you are asked to distinguish whether H is an NTRU instance or uniform. That's very natural, so I'm not going to spend much time on it. The search problem is a bit more tricky. Probably the most natural variant you would write down if asked to define search NTRU is the one we call the NTRU vector problem: I give you some H which is an NTRU instance, so H can be written as F/G mod Q for some small F and G, and I ask you to recover a trapdoor, that is, another pair (F, G), not necessarily the same one, such that F and G are small and H = F/G. So that's a natural search variant. I'm going to comment on the name NTRU vector in a few seconds, but let me first define the other variant, which I also believe makes a lot of sense.
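If you want to play with these definitions, here is a toy numerical sketch of my own, using the simplification I mentioned where R = Z and R_Q = Z/QZ, so instances are just integers mod Q; the values of q and gamma are assumptions for illustration, not parameters from the paper:

```python
import math
import random
from fractions import Fraction

q, gamma = 10007, 5                     # toy modulus and gap (assumed values)
bound = math.isqrt(q) // gamma          # trapdoor size bound, about sqrt(q)/gamma

# Sample an NTRU instance h = f/g mod q with |f|, |g| <= sqrt(q)/gamma
while True:
    f = random.randint(-bound, bound)
    g = random.randint(-bound, bound)
    if g != 0 and math.gcd(g, q) == 1:
        break
h = f * pow(g, -1, q) % q               # modular inverse of g, then h = f/g mod q
assert (g * h - f) % q == 0             # (f, g) is a trapdoor for h

# Trapdoors are not unique: (3f, 3g) also satisfies the relation...
assert (3 * g * h - 3 * f) % q == 0
# ...but the lift h_K = f/g computed over the field K is the same for both
assert Fraction(f, g) == Fraction(3 * f, 3 * g)
```

The last assertion is exactly the semi-uniqueness above: trapdoors differ by a scalar alpha, here alpha = 3, but the lift H_K does not.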
The issue with the NTRU vector variant is that, as I've said, you are asked to recover one trapdoor, but what you have to recover is not unique. So you could also say: okay, maybe I don't really care about recovering F and G small; I just care about recovering the lift of H in K, that is, F divided by G in K, what I called H_K on the previous slide. The good point of this variant, which I'm going to call NTRU module, is that the solution is unique: there is a unique H_K. Said differently, just to be sure we're on the same page: I do not necessarily want to recover a small trapdoor, but any multiple of this trapdoor, so (alpha·F, alpha·G) for any alpha, which can be very large; I'm happy with anything in the span of the vector (F, G). So that's all for the two search NTRU problems and the decision NTRU problem. I also wanted to comment on the following: I told you at the beginning that NTRU is a lattice problem, but from the definition it might not be so clear, so let me introduce the lattice behind the NTRU problems. It is a module lattice of rank two. If you are not familiar with module lattices, just take R = Z, and then this is a lattice of dimension two over the integers; otherwise, it is a lattice of rank two over the ring R. So this is a basis B_H of the lattice Lambda_H associated to an element H in R_Q. The key property of this lattice is that you can also define it as the set of all vectors (G, F), in this order and not (F, G), such that G·H = F modulo Q. If G is invertible, this is exactly H = F/G mod Q. So this lattice contains all ways to write H as F/G; but not everything in the lattice is a trapdoor: a trapdoor is, essentially, a vector in the lattice with small F and G. So let me give you a bunch of properties.
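Again in the simplified R = Z picture, this rank-two lattice can be written down explicitly; the basis below, (1, h) and (0, q), is the standard one for this construction, and the small trapdoor values are toy choices of mine:

```python
q = 10007
f, g = 5, 7                         # a small trapdoor (toy values)
h = f * pow(g, -1, q) % q           # the NTRU instance h = f/g mod q

# Basis of the rank-two lattice L_h = {(g', f') : g' * h = f' mod q}
b1, b2 = (1, h), (0, q)

# The trapdoor vector (g, f) lies in L_h: it equals g*b1 + k*b2 for an integer k
k = (f - g * h) // q
assert f - g * h == k * q           # k is indeed an integer
assert (g * b1[0] + k * b2[0], g * b1[1] + k * b2[1]) == (g, f)
```

So the trapdoor really is a vector of the lattice spanned by b1 and b2, which is the picture to keep in mind for the next slides.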
If you sample H uniformly at random mod Q, then you expect, for instance using the Gaussian heuristic, that the shortest vector of this lattice Lambda_H has norm roughly sqrt(Q), up to small polynomial factors in the dimension. On the other hand, if you take H to be an NTRU instance, then you get a short vector in this lattice: since H can be written as F/G, you get a vector (G, F) in the lattice with G and F smaller than sqrt(Q)/gamma (strictly speaking the bound should be sqrt(2)·sqrt(Q)/gamma, because it is a vector with two coordinates). In any case, you get a much shorter vector in the lattice than what you would expect if H were uniform. This is what we call a unique-SVP instance: lambda_1 is much smaller than expected, especially when gamma is large. If I rephrase my different NTRU problems in terms of this lattice: the decision NTRU problem asks to distinguish whether the lattice has an unexpectedly short vector or not, that is, to guess the value of lambda_1, the norm of the shortest vector of the lattice. The NTRU vector variant asks to find a short vector in the lattice Lambda_H, which is why we called it NTRU vector. And in the NTRU module variant, we do not want to recover a short vector, but the dimension-one subspace spanned by a short vector: we know that all the short vectors are on the same line, and we want to recover this line. Over a ring, this line is a rank-one module, and this is why we call this variant the NTRU module problem. Essentially, we want to find the direction in which the lattice Lambda_H is dense. Okay, this concludes the definition of the three variants of NTRU. Now I would like to discuss what we know about NTRU: what we knew before, and what we know thanks to our results.
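This unique-SVP gap can be seen numerically. The sketch below, still over R = Z with toy parameters of mine, brute-forces lambda_1 of the rank-two lattice, which is only feasible because q is tiny; the planted instance has a vector far shorter than the roughly sqrt(q) that the Gaussian heuristic predicts for uniform h:

```python
import math
import random

def lambda1(h, q):
    """Brute-force the shortest nonzero vector of L_h = {(g, f): g*h = f mod q}."""
    best = float(q)                        # (0, q) is always in the lattice
    for g in range(1, q):
        f = g * h % q
        if f > q // 2:
            f -= q                         # centered representative of g*h mod q
        best = min(best, math.hypot(g, f))
    return best

q = 10007
f, g = 5, 7                                # planted small trapdoor (toy values)
h_ntru = f * pow(g, -1, q) % q
h_unif = random.randrange(1, q)            # uniform h, for comparison

# Planted case: lambda_1 is at most |(g, f)| = sqrt(74), far below sqrt(q) ~ 100;
# the uniform case typically lands near the Gaussian-heuristic value sqrt(q).
assert lambda1(h_ntru, q) <= math.hypot(g, f)
print(lambda1(h_ntru, q), lambda1(h_unif, q))
```

Distinguishing these two situations is exactly decision NTRU in this toy setting.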
Let me discuss the reductions that were previously known for NTRU. There are a few, which I didn't show on the first picture because they don't really fit on it. First, we know that if you sample F and G from some Gaussian distribution, but a large one, then you can prove, at least for cyclotomic fields, that F/G is indistinguishable from uniform over R_Q. This is not the regime we are interested in in this work: here we assume F and G are smaller than sqrt(Q), whereas this result takes them from a distribution slightly larger than sqrt(Q). So it means that the decision variant of NTRU is provably hard if F and G are sufficiently large, which, again, is not the regime of interest here, because we want a regime where F and G are smaller than sqrt(Q), so that NTRU instances are sparse among the elements of R_Q. There is also another reduction showing that decision NTRU cannot be harder than Ring-LWE. It is not very interesting as a guarantee for NTRU, because it is an upper bound on the hardness of NTRU, but it is something. That's essentially all we knew about reductions for NTRU. Let me also say a few words about attacks, because that's also important: maybe if we cannot prove a reduction, it is because there is an attack. I'm only going to consider polynomial-time attacks here. Because NTRU is a lattice problem, you can of course run any lattice reduction algorithm; in polynomial time, you would run LLL. This breaks the decision variant of NTRU and the module variant of NTRU if the gap is large enough: for LLL, if the gap is larger than 2^n.
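To give a feel for the lattice-reduction attack in the toy R = Z setting, here is a sketch using Lagrange-Gauss reduction, the two-dimensional ancestor of LLL; this is not the subfield or overstretched-NTRU attacks discussed next, just my minimal illustration of how reducing the basis of Lambda_h recovers a planted trapdoor when the gap is large:

```python
def gauss_reduce(u, v):
    """Lagrange-Gauss reduction: returns a shortest basis of a 2D integer lattice."""
    while True:
        if u[0] ** 2 + u[1] ** 2 > v[0] ** 2 + v[1] ** 2:
            u, v = v, u                     # keep u the shorter vector
        m = round((u[0] * v[0] + u[1] * v[1]) / (u[0] ** 2 + u[1] ** 2))
        if m == 0:
            return u, v                     # u is now a shortest lattice vector
        v = (v[0] - m * u[0], v[1] - m * u[1])

q = 10007
f, g = 5, 7                                 # planted trapdoor (toy values)
h = f * pow(g, -1, q) % q

u, _ = gauss_reduce((1, h), (0, q))         # reduce the basis of L_h
assert (abs(u[0]), abs(u[1])) == (g, f)     # the shortest vector is +-(g, f)
```

In rank two over Z this reduction is exact and polynomial time; over a ring of large degree, LLL only gives an exponential approximation factor, which is where the 2^n condition on the gap comes from.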
But in the case of NTRU we also have attacks that are specific to NTRU, not to all module or general lattice problems, and which break the decision and module variants of NTRU for regimes of parameters different from those of LLL. The condition here is maybe a bit complicated, but one example to keep in mind: if the modulus is 2^sqrt(n) and the gap is roughly sqrt(Q), meaning that F and G are very small, say of constant size, then you can break decision NTRU and NTRU module, whereas plain LLL would not break these parameters. Okay, that is essentially all we know about attacks on and reductions for NTRU. And here is what we prove, in a bit more detail; I now use the real names of the NTRU problems. We prove that the decision NTRU variant is equivalent to the NTRU module variant, that the NTRU module variant is no harder than the NTRU vector variant, and that the NTRU vector variant is at least as hard as worst-case ideal SVP. In this picture, the black arrows are the immediate ones: we have already seen that if you can solve a search variant you can solve the decision variant, and if you can find a short vector you can find the span of this short vector. This arrow is maybe less obvious, but still very easy to prove: it says that if you can solve NTRU module and you can solve ideal SVP, that is, find short vectors in ideal lattices, then you can solve the NTRU vector problem. In red you have the two interesting arrows, the ones that require some work and that are more meaningful in terms of security guarantees: they show that decision NTRU is at least as hard as the module variant of NTRU, and that NTRU vector is at least as hard as some worst-case ideal lattice problem.
Something I want to emphasize about this picture is that the red arrows in particular require some very specific distributions of NTRU instances and some specific ranges of parameters. So not all arrows compose: sometimes you get incompatible conditions when trying to compose them. If you want to know how the gap degrades along the reductions, you can pause on this slide, but essentially the black reductions have no loss, and the red ones, the interesting ones, have a small loss; you can look at the details. Okay, I wanted to try to plot everything we know about NTRU on one slide. It is a complicated slide and I'm not sure it is very useful, but still, here it is; you can pause if you want a closer look. On the x-axis you have the modulus Q, and on the y-axis the size of F and G. So it is not the gap that I'm plotting, but the upper bound on the size of F and G, and the scale is log-log: you go from polynomial to 2^n, with 2^sqrt(n) in between, because various parameter ranges are of interest for NTRU. In gray you have the area where the size of F and G is larger than sqrt(Q): there, essentially every H is an NTRU instance, so the problem is vacuous. In purple you have the area where decision NTRU is unconditionally hard: if F and G are of this size and Gaussian, then F/G is indistinguishable from uniform. In red you have the area covered by the subfield attacks, the Kirchner-Fouque attack, and the LLL attack, so all the attacks on NTRU; the red area is where you should not choose your parameters, since there are polynomial-time attacks. And all the reductions can be positioned on this picture. The equivalence between decision NTRU and NTRU module has few restrictions on it, so it covers quite a large part of the space.
The reduction from ideal SVP to the NTRU vector problem, on the other hand, is restricted to the size of F and G being sufficiently large, and there are two areas here: in blue, a reduction that works for all number fields, at least those with small discriminant, classically and in polynomial time; in light blue, a quantum reduction, which only works for cyclotomic fields. Okay, let me move on to the techniques of the two interesting reductions, starting with the reduction from ideal SVP to NTRU vector. This reduction is one-to-one: we start with an ideal and transform it into an NTRU instance, such that any solution to the NTRU instance gives us a small vector in the ideal. For simplicity, I'm going to assume my ideal I is principal, generated by some element z which I know: I is the set of all integer multiples of z. And I have some element g in I which is small. Here is the place where the intuition R = Z can be misleading: over Z, z itself is the shortest element of the ideal, and the problem is trivially solved. But in the general case, where R is the ring of integers of a number field, the shortest vector g is usually a non-trivial multiple of z, and this is the multiple we would like to find. So how do we transform this into an NTRU instance? We just do a bit of manipulation: we multiply the relation by Q/z to get that g·(Q/z) is a multiple of Q, and calling H this Q/z, we get g·H = 0 mod Q. The problem is that this looks like NTRU, but H is not in the right place: Q/z is an element of the field K, whereas in NTRU we want H to be in the ring R, modulo Q. So we add one more line and round Q/z.
So we get that g times the rounding of Q/z equals what is left, namely minus g times the fractional part of Q/z, modulo Q. Now I take H to be the rounding of Q/z and F to be minus g times the fractional part of Q/z, and I get g·H = F mod Q. Here g is small because it is a short vector of I, and F is of the same order as g, because each coordinate of the fractional part lies between minus one-half and one-half. So F and g are both small and we get an NTRU instance. What I have done so far: given a principal ideal, I can construct an NTRU instance such that if g is a small element of the ideal, then there exists a trapdoor (F, g) for the NTRU instance H. This is not quite sufficient yet, but you can prove that the converse holds for this NTRU instance: any trapdoor (F', G') for H has G' an element of I, and since it is a trapdoor, G' is also small. So a trapdoor gives you a small element of the ideal, and you can solve ideal SVP by solving NTRU on the input H. To conclude on this reduction: I presented it in the case of principal ideals, but you can extend it to any non-principal ideal. The idea is that you can always write a non-principal ideal as the intersection of a principal ideal, for which you know a generator, with the ring of integers; if you write it like this, you can do everything I said with that generator z and it works. Another extension: the reduction I presented is one-to-one, transforming one ideal lattice into one NTRU instance, but what we would really like is worst-case to average-case, meaning that if you can solve the NTRU problem for some average-case distribution of instances, then you can solve ideal SVP for all ideal lattices.
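The arithmetic of this rounding step is easy to check numerically. The sketch below uses R = Z, where, as I said, the ideal problem itself is trivial, but the identities are exactly the same; q, z and g are toy values of mine:

```python
from fractions import Fraction

q = 10007
z = 13                          # generator of the principal ideal I = z*Z (toy value)
g = 4 * z                       # a small element of I

t = Fraction(q, z)              # q/z, an element of the field K
h = round(t)                    # H = round(q/z), now an element of the ring R
frac = t - h                    # fractional part of q/z, in [-1/2, 1/2]
F = -g * frac                   # F = -g * {q/z}: an integer because z divides g
assert F.denominator == 1
F = int(F)

assert (g * h - F) % q == 0       # g * H = F mod q: an NTRU relation
assert abs(F) <= abs(g) // 2 + 1  # and F is about as small as g
```

Unrolling the assertion: g·H + g·{q/z} = g·(q/z) = (g/z)·q, which is 0 mod q since g is a multiple of z, which is exactly the relation on the slide.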
We achieve that by using a previous work, which showed a worst-case to average-case reduction for ideal lattices; we then apply the one-to-one reduction I just showed you to go from average-case ideal lattices to average-case NTRU. So this was the idea of the first reduction. Let me now move on to the second reduction, from decision NTRU to NTRU module. This reduction is not one-to-one, so let me phrase the problem a bit differently. My objective, given some NTRU instance H, is to recover H_K: if you remember NTRU module, we want to recover F/G with the division performed over the rationals, and we are given as input H = F/G modulo Q. This is a reduction, so we can assume we have an oracle that solves decision NTRU. To give you the intuition, I'm going to assume that I have what I call a perfect oracle: an oracle such that, given H = F/G, it says yes if there exist F and G smaller than some bound B. I don't know B, but there is some bound B such that if H can be written as F/G with F and G smaller than B, the oracle says yes, and if H cannot be written this way, the oracle says no. It is perfect in the sense that there really is a sharp bound B separating yes from no, and not some continuous degradation of the success probability. What can I do with such an oracle? The idea is: pick any integers x and y you want, and compute x·H + y. Written as a fraction, this is (x·F + y·G)/G mod Q, and now I can query the oracle on this new instance H'. The oracle will say yes if x·F + y·G is smaller than B, and no, with very high probability, if x·F + y·G is larger than B. So by querying the oracle on H', I learn whether x·F + y·G is smaller than B or not.
The important property is that I can do this for any x and y of my choice. In particular, I can choose x and y so that they only modify one coordinate of F and G at a time, so I can separate the coordinates of F and G and recover them one by one. This lets me rephrase my problem in a simpler, one-dimensional way: F and G are real numbers, there is some bound B which I don't know, and I can choose x and y as I want and learn whether x·F + y·G is larger than B or not; I want to recover F/G. A quick remark: I cannot hope to recover F and G themselves, only F/G, because if you scale F, G and B by some alpha, all of which are unknown to the solver, the behavior of the oracle stays exactly the same. So I cannot hope to distinguish (F, G) from (alpha·F, alpha·G), but I can hope to recover F/G, and it is actually not complicated to do so here. The idea is to fix, for instance, some x_0 and increase y_0 until the oracle's answer changes from yes to no; when it changes, we know we have found an equation x_0·F + y_0·G = B. We do that twice, get two equations, and then solve for F/G. So that is really an easy algorithm. Again, this is a simplified version: there are more technical details to handle, and the main technicality is that the oracle is not going to be perfect. The only thing you can assume about the real oracle is that it says yes when its input is distributed as an NTRU instance and no when its input is uniform; for everything in between, we don't know what happens.
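The perfect-oracle algorithm just described can be simulated end to end. In my sketch below, the hidden f, g, B are arbitrary positive toy reals, the oracle is the idealized one from the discussion (yes iff x·f + y·g is below the hidden bound), and I use bisection rather than a linear scan to find each threshold, which is the same idea, just faster:

```python
# Hidden quantities: the solver only sees the oracle's yes/no answers
f, g, B = 3.7, 1.9, 100.0              # assumed positive toy reals

def oracle(x, y):
    """Perfect decision oracle: yes iff x*f + y*g is below the hidden bound B."""
    return x * f + y * g <= B

def threshold(x, lo=0.0, hi=1e6, iters=80):
    """Bisect for the y where the oracle flips, i.e. where x*f + y*g = B."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if oracle(x, mid):
            lo = mid                    # still yes: the bound is above us
        else:
            hi = mid                    # already no: the bound is below us
    return lo

y1 = threshold(1)                      # first equation:  1*f + y1*g = B
y2 = threshold(2)                      # second equation: 2*f + y2*g = B
ratio = y1 - y2                        # subtracting the equations gives f = (y1 - y2)*g
assert abs(ratio - f / g) < 1e-6       # so the solver has recovered f/g
```

Note that the solver never learns f, g or B individually, only their ratio, exactly as the scaling argument above predicts.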
In order to deal with that, we don't do the proof and the algorithm by hand as I showed you, but we use a framework introduced by Peikert, Regev and Stephens-Davidowitz to prove the hardness of Ring-LWE, called the oracle hidden center problem framework. Essentially, the only thing you need to show in their framework is that the success probability of the oracle satisfies some nice conditions, such as being Lipschitz, and then you can invoke the framework and it gives you an algorithm to find the solution F/G. So let me conclude. Here is a summary of the reductions we prove, and there is a bunch of open questions related to them. One open question concerns the distributions: I mentioned that we don't use the same distributions of NTRU instances for all the reductions, so it would be nice to have one distribution that works for all of them. Another open question is whether we can prove the hardness of decision NTRU based on some worst-case ideal SVP problem; a natural approach would be to prove that worst-case ideal SVP reduces to NTRU module instead of NTRU vector, but so far we don't know how to do that. Finally, the last open problem is whether we could replace the worst-case problem on ideal lattices by a worst-case problem on module lattices of rank two or more, because for the moment those are believed to be somewhat safer than ideal lattices. That's all for me. Thank you for watching this video.