Hi, everyone. I'm going to talk about lossiness and entropic hardness for Ring-LWE. I'm Zvika, and this is joint work with Nico Döttling.

Let's start by talking about the learning with errors (LWE) problem, which I'm sure a lot of you have heard about. Learning with errors is essentially the problem of solving a system of approximate random linear equations modulo q. Let's be a little more concrete. We have this matrix A on the slide, a short and wide matrix, n by m in my notation, where you can think of m as some polynomial in n. This matrix is uniform modulo q. Here q is a global modulus that's going to remain unspecified in the talk; it actually ranges from polynomial in n to exponential in n, but for the purpose of the talk you can just think of it as some large polynomial in n.

So we have this random matrix A, and it gets multiplied by a secret vector s, which is also uniform modulo q. To the product s·A we add a noise vector e, a vector of i.i.d. elements sampled from a noise distribution supported only on elements that are much, much smaller than q in absolute value. The result, s·A + e mod q, is denoted by b; this is the outcome of our system of approximately linear equations. The learning with errors problem, in its search version, is the problem of finding, given A and b, the values of the variables of the system, namely s. There's also the decision version, which is also going to be important for us: distinguish (A, b) from (A, u), where u is a uniform vector that has nothing to do with s and e.

As many of you probably know, learning with errors has been super useful for cryptography. It has a simple linear-algebraic structure that makes it very easy to use, and it provides strong security properties, such as presumed post-quantum security and worst-case to average-case hardness reductions. This makes it an invaluable building block for many cryptographic constructions. However, in many cases LWE is not efficient enough to be used in schemes designed for the real world, and to improve efficiency there are variants of LWE that rely on algebraic number theory. In particular, the Ring-LWE problem is such a variant, and it is going to be the focus of this work and of this talk.

However, to make everybody's life easier, there's not going to be any algebraic number theory in the talk. Instead, we're going to present an abstraction that will let us talk about these algebraic variants without thinking about polynomial rings and things of this sort. We call our abstraction structured LWE, and it goes as follows. First, I'm just going to change the notation: rather than thinking of A as one wide matrix, I'm going to partition it into square blocks A_1, A_2, and so forth, and I'm also going to partition the vectors e and b into chunks accordingly. Now, rather than thinking of LWE as finding s given A and b, I can think of it as the problem of finding s given a collection of pairs (A_i, b_i), and likewise I can define the decision version in a similar manner.
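Since everything in the talk is phrased in this block view, here is a minimal numpy sketch of sampling a structured LWE instance. All concrete values (n, q, sigma, the number of blocks) are illustrative choices of mine, not parameters from the paper:

```python
# A minimal sketch of an LWE instance in the "structured" block view
# described above; toy parameters, illustrative only.
import numpy as np

n, q, sigma, num_blocks = 64, 3329, 3.0, 8
rng = np.random.default_rng(0)

s = rng.integers(0, q, size=n)               # uniform secret s in Z_q^n

samples = []
for _ in range(num_blocks):
    A_i = rng.integers(0, q, size=(n, n))    # uniform square block A_i
    e_i = np.rint(rng.normal(0, sigma, n)).astype(int)  # small Gaussian noise
    b_i = (s @ A_i + e_i) % q                # b_i = s*A_i + e_i mod q
    samples.append((A_i, b_i))

# Search version: recover s from the pairs (A_i, b_i).
# Decision version: distinguish the b_i from independent uniform vectors.
```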
Now I can say that standard LWE is a structured LWE problem where the distribution of the A_i is uniform and the distribution of the e_i is Gaussian. But I can start playing with the problem by changing the distributions of the A_i and e_i, and thus get different variants.

The Ring-LWE problem, which is the focus of this work, and the related Polynomial-LWE problem, can also be presented as structured LWE. Now, rather than the A_i being uniform, each A_i is drawn from a distribution that represents multiplication by a random ring element. Since ring multiplication is a bilinear operation, it can be represented as a matrix, where matrix multiplication corresponds to multiplication over the ring; this class of matrices forms a multiplicative group, which is going to be important for us. The vectors e_i are still going to be Gaussian, but not i.i.d. Gaussian; they can be slightly weird Gaussians, shaped according to the geometry of the ring, but this is not going to matter much for the purpose of the talk. This lets us define Ring-LWE in the form of structured LWE, which is what we need for this work and for the talk. (A concrete sketch of such a ring multiplication matrix appears below.) Let me just point out that there are other variants of LWE that can be captured by this abstraction: Module-LWE, Middle-Product LWE, and Order-LWE, for example, are also cases where the multiplication has this bilinear form and can therefore be represented by matrix multiplication. We don't make use of this in this work, but it could be interesting to think about independently.

In this work we study entropic hardness for Ring-LWE, but let's start by talking about entropic hardness for structured LWE in general, without distinguishing between the different flavors. The question here is what happens when s is not sampled from a uniform distribution, but rather from some other distribution. The first question, I guess, is why we should even care. One case where this is useful is leakage resilience: sometimes s is used as a secret key for some cryptographic scheme, and we can consider a case where an adversary obtains some partial information about s. The distribution of s in the eyes of the adversary is then no longer uniform, and the question is whether the scheme remains secure. Another case is that sometimes, for example in fully homomorphic encryption, using a different distribution for s gives you better functionality as well as better efficiency; there it is sometimes useful to sample s from a binary distribution rather than a uniform one. This leads us to the distributions we should care about in the context of entropic LWE. Of course, the distribution has to have non-trivial entropy; otherwise the LWE problem is obviously not hard. Beyond that, we care about small or binary distributions, where s is sampled uniformly but only over elements that are small, or only over binary elements; and we care about general entropic distributions, where we don't know the exact distribution and all we know is that it has sufficient entropy.

All right, so now that we hopefully care: what is the answer? Is the problem still secure or not?
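Before turning to that question, here is the promised sketch of the "multiplication by a ring element as a matrix" view, assuming the common power-of-two cyclotomic choice R = Z_q[x]/(x^n + 1). The paper's abstraction is agnostic to the concrete ring, so this is one illustrative instantiation, not the paper's definition:

```python
# Multiplication by a fixed ring element a, as an n x n matrix, in the ring
# R = Z_q[x]/(x^n + 1) (one common Ring-LWE choice; illustrative only).
import numpy as np

def mul_matrix(a, q):
    """Return M_a such that x @ M_a = coefficients of x(X)*a(X) mod (X^n + 1, q)."""
    n = len(a)
    M = np.zeros((n, n), dtype=int)
    for i in range(n):          # row i holds the coefficients of X^i * a(X)
        for j in range(n):
            k = i + j
            if k < n:
                M[i, k] = (M[i, k] + a[j]) % q
            else:               # X^n = -1, so the term wraps with a sign flip
                M[i, k - n] = (M[i, k - n] - a[j]) % q
    return M

# Note: (M_a @ M_b) % q == M_{a*b mod (X^n+1)}, so these matrices commute,
# and the matrices of invertible ring elements form a multiplicative group.
```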
Well, for standard LWE there is a sequence of known results that essentially establishes hardness for small and binary distributions, and recently also for general entropic distributions. So this is good. However, for Ring-LWE almost nothing is known, and I'm going to come back to this "almost" later on. And the question is why: since these problems are so similar, why can we not simply translate the techniques from the LWE regime so that they also hold for Ring-LWE?

Let's start by going over the main ideas that allow us to prove entropic hardness for standard LWE. All of the works mentioned here use the same basic idea, which is to rely on lossiness in order to prove entropic hardness. What is this idea? The idea is to replace the A_i with a different distribution, one that is still computationally indistinguishable from uniform, but is supported on matrices that lose some information about s in the multiplication process. This is going to help in proving entropic hardness, as we will see in a minute. In particular, the distribution that is normally used replaces each A_i by, essentially, an LWE instance in itself. We sample a global uniform matrix B; as you can see, B is a very narrow matrix. It has the same number of rows as A_i, but it is very narrow, and it is universal: there is only one B in the system, not a B_i. Now, for each i we sample a C_i, a matrix of the appropriate dimensions so that B times C_i has the same dimensions as A_i; so B·C_i is a square matrix, and C_i is random and secret. To that we add F_i, which is just a matrix of i.i.d. noise, exactly as in LWE. The LWE assumption, in its decisional version, asserts that this distribution is computationally indistinguishable from the uniform distribution, which is the prescribed distribution of the A_i. So we can make the substitution and still remain computationally indistinguishable from the original distribution.

Now let's see why making this substitution actually helps in proving entropic hardness. From this point on, the argument is going to be information-theoretic: we're going to show that once we replace the distribution of the A_i, even a computationally unbounded adversary is not able to recover the value of s given these samples. So let's see what kind of information the adversary can learn about s when it is given s·A_i + e_i, and of course also A_i itself. The first thing the adversary learns is the value s·B. This term appears everywhere: whenever you have s·A_i, you get s·B·C_i + s·F_i, so s·B is a value that potentially leaks. However, since B is a very narrow matrix, s·B has very small dimension, and therefore s·B does not leak a lot of information about s.
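Here is a minimal sketch of this lossy substitution, again with illustrative parameters; in particular k is much smaller than n, which is what makes s @ B a short vector:

```python
# A sketch of the lossy substitution for standard LWE: each uniform A_i is
# swapped for B @ C_i + F_i, which decision-LWE says is computationally
# indistinguishable from uniform. Parameters are illustrative only.
import numpy as np

n, k, q, sigma = 64, 8, 3329, 3.0   # k << n makes B narrow
rng = np.random.default_rng(1)

B = rng.integers(0, q, size=(n, k))          # one global, public, narrow B

def lossy_block():
    C_i = rng.integers(0, q, size=(k, n))    # fresh secret uniform C_i
    F_i = np.rint(rng.normal(0, sigma, (n, n))).astype(int)  # i.i.d. noise
    return (B @ C_i + F_i) % q               # "low rank + noise" fake A_i

# Information-theoretically, s @ (B @ C_i + F_i) + e_i reveals only
# s @ B (a short vector, so bounded leakage) and s @ F_i + e_i.
```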
So this is a bounded amount of information. The second thing that is leaked is all the terms of the form s·F_i, and I also want to add the e_i that appears alongside them, because this is actually going to be important. Given s·B and the values s·F_i + e_i, this is all the information that an unbounded adversary can learn about s from these samples. And this is where the approaches I mentioned diverge: how to deal with s·F_i + e_i. What I'm going to describe is the approach from our previous work with Nico, where we had a technique that we called flooding at the source. What we did there was the following. Consider the noise e_i, the noise from the LWE sample. We showed that you can pull out of this Gaussian e_i a smaller Gaussian and push it back in front of the F_i. That is, the distribution s·F_i + e_i is statistically identical to a distribution where you take s, add a Gaussian e' to it, multiply the sum by F_i, and then add some e''_i. Note that e' is a single global value: it does not depend on i; only the e''_i depend on i. This means that even a computationally unbounded adversary cannot obtain more information about s than can be obtained from s + e', because you never see s naked; you only ever see it with e' added.

This led us to define the notion of noise lossiness of s: the amount of information about s, sampled from some distribution, that is lost when you add Gaussian noise to it. What we showed is that this noise lossiness essentially dictates the entropic hardness of the LWE problem you get. On one hand you have the lossiness that comes from the noise; on the other hand you have the additional side information that comes from s·B. As long as s·B does not give you enough information to recover from the noise lossiness, you get entropic hardness, because the adversary is not going to be able to recover the original s.

And we can see that noise lossiness has the properties we would expect when we talk about entropic hardness. First, the more entropy s has, the better its noise lossiness is going to be. Intuitively, more entropy means the distribution of s is denser in the space of all possible s's, so when we add Gaussian noise it is more likely that a given value of s gets confused with a neighboring value. Likewise, if s has smaller Euclidean norm, that is also better: the smaller norm means the space of possible s's is smaller, just a small ball containing all the elements of small norm, so the distribution is again dense in this ball, and adding a Gaussian again makes it likely for a given s to be confused with a neighboring s. Lastly, notice that the Gaussian parameter of the noise e_i also matters, because the amount of e' that we are able to pull back through the F_i depends on the amount of noise in the e_i that we have.
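Schematically, and with notation that is mine rather than necessarily the paper's (nu_sigma for noise lossiness, H-tilde_infinity for average conditional min-entropy, D_sigma for the Gaussian), the two ingredients just described are:

```latex
% Flooding at the source: the per-sample Gaussian e_i is split so that a
% smaller Gaussian e' can be pushed in front of F_i, with e' shared across
% all samples (schematic; exact parameters as in the prior work):
\[
  s F_i + e_i \;\approx_{\mathrm{stat}}\; (s + e')\,F_i + e''_i .
\]
% Noise lossiness of a secret distribution S, roughly: the uncertainty that
% remains about s after it is flooded with Gaussian noise of parameter sigma
% (one natural formalization; the paper's definition may differ in details):
\[
  \nu_\sigma(S) \;=\; \tilde{H}_\infty\big(s \,\big|\, s + e'\big),
  \qquad s \leftarrow S,\ e' \leftarrow D_\sigma .
\]
```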
So the larger the noise in the LWE sample, the better the parameters you get for the entropic hardness, which also makes sense. So this is the notion of noise lossiness, and this is one way to address the problem in the context of standard LWE.

However, if you try to apply this to Ring-LWE, you run into a problem, and the reason is the following. What we did here was replace the matrix A_i with a matrix that is close to a low-rank matrix, in the sense that this lossy distribution is a low-rank matrix, B·C_i, plus some noise. But recall the structure of Ring-LWE: if we think of it as a structured LWE problem, we said that the A_i here is a ring multiplication matrix, and a ring multiplication matrix simply cannot have the form of a low-rank matrix plus noise. So we just cannot make this substitution in the context of Ring-LWE.

And now is perhaps the time to talk about that exception. There is one work, with Bolboceanu, Perlman, and Sharma, where we show that in a very specific case, where the modulus splits completely over the ring, and under a very non-standard assumption, we can get some entropic hardness result for Ring-LWE. In that particular case, you can think of it as presenting the A_i matrix as a low-rank-plus-noise matrix, but this requires a non-standard assumption.

All right, so let's see how we can get around this problem and actually get lossiness for Ring-LWE. We said that we wanted to replace A_i with a close-to-low-rank matrix, but we cannot do it. So how about being bolder and trying to replace A_i with a close-to-rank-zero matrix?
So, just maybe, try to get A_i to be computationally indistinguishable from a short matrix F_i. Well, this of course doesn't work, because A_i is not a short matrix. However, we make the observation that what actually matters is the row span of F, and not F itself. So what we're going to do is replace the matrix A_i with a matrix that has a short matrix in its row span. In particular, we take A_i and replace it with a product H·Z_i, where H is a global, big matrix and Z_i is a short matrix; the row span of this product contains Z_i, which, as we are going to see, gives us what we want.

Now, this does not offend the structure of the A_i as ring multiplication operations, because we can sample H and Z_i as ring multiplication matrices as well: H as the multiplication matrix of a large ring element, Z_i of a short ring element; their product then corresponds to the ring multiplication matrix of the product of these two ring elements. So in terms of structure we can make this substitution; we don't run into the problem we had before with low rank plus noise. This is already a good thing.

We also note that this is not a new assumption; in fact, we arrived at it in the reverse route. You can present known assumptions, such as the NTRU assumption or DSPR, the Decisional Small Polynomial Ratio problem, in exactly this form, where H and Z_i represent ring multiplication matrices of the appropriate ring elements. We should also note that this idea of replacing A_i with, say, an NTRU instance was used in Peikert's LWE survey to relate the hardness of NTRU to the hardness of Ring-LWE; we are going to show that it can also be used to prove entropic hardness. (A toy sketch of this substitution appears right below.)

Also notice that the assumption can come in many flavors. In particular, we said that the Z_i are short, but the question is how short. The shorter the Z_i, the better for us, since we get better lossiness; not surprisingly, this also makes the assumption stronger. However, if we only want mildly short Z_i's, which are good enough to get some entropic hardness, we can show that this DSPR-style assumption converges with Ring-LWE. So we can get entropic hardness, not the strongest possible results, by assuming just the Ring-LWE problem itself; if we want stronger entropic hardness, we need to make stronger assumptions, such as DSPR.
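Here is a toy sketch of the "global H times fresh short Z_i" shape in the ring view. The concrete distributions are illustrative only: in the real NTRU/DSPR-flavored assumption, h has special structure (for example, an NTRU-style ratio of short ring elements), and whether such samples look uniform is exactly the assumption being discussed:

```python
# Toy sketch of the substitution for Ring-LWE: replace each uniform a_i by
# h * z_i with one global h and a fresh short z_i (illustrative only).
import numpy as np

n, q = 64, 3329
rng = np.random.default_rng(2)

def poly_mul(a, b):                       # multiplication in Z_q[x]/(x^n + 1)
    c = np.zeros(2 * n - 1, dtype=int)
    for i in range(n):
        c[i:i + n] += a[i] * b
    return (c[:n] - np.append(c[n:], 0)) % q   # reduce via X^n = -1

# Placeholder global h; the actual assumption uses a structured h,
# e.g. an NTRU-style public element, not a fresh uniform one.
h = rng.integers(0, q, size=n)

def substituted_block():
    z_i = rng.integers(-2, 3, size=n)     # fresh short ring element z_i
    return poly_mul(h, z_i)               # plays the role of a_i = h * z_i

# As matrices, a_i = M_h @ M_{z_i}; since M_h is invertible for typical h,
# the row span of a_i equals that of M_{z_i}, so it contains a short matrix.
```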
All right, let's see what information about s a computationally unbounded adversary can deduce after we make the substitution of A_i with H·Z_i. Note that an unbounded adversary gets to see all the A_i's, and can therefore derive H out of them; so H is not secret to an unbounded adversary. What the unbounded adversary sees is s·H·Z_i + e_i for a bunch of i's. Again, we use the technique of flooding at the source: we pull out of the e_i a noise term e', such that we can write s·H·Z_i + e_i as a bracketed term s·H + e', where e' is global, all of which gets multiplied by Z_i, plus some additional e''_i. Once we do this, we notice that all the information about s is contained inside the bracket. So all an unbounded adversary learns about s is the term s·H + e'; the fact that it is formally a sequence of elements and so forth is not going to matter from now on.

When we look at this, we see that it already gives us some flavor of entropic hardness. Let's define s' = s·H, so what we have in the bracket is s' + e'. We notice that if s' has good noise lossiness, then we get entropic hardness. Why is that? Well, if s' has high noise lossiness, then a computationally unbounded adversary cannot derive s' from seeing s' + e'. However, since H is just a global invertible matrix, if the adversary cannot deduce s', then it also cannot deduce s. So if s' has high noise lossiness, we are in good shape. And this is already useful, because multiplying by H is an invertible transformation, so the entropy of the distribution of s' is the same as the entropy of the distribution of s. Since all high-entropy distributions have pretty good noise lossiness, if we know that s has sufficiently high entropy, then s', whatever H is, also has sufficiently high entropy, and therefore pretty good noise lossiness. So this already gives us some entropic hardness for general entropic distributions.

However, what we want is to relate the entropic hardness to the noise lossiness of s itself, which will allow us to get better results; for example, this will allow us to get better results for s with low norm, which is the ultimate result we want, a result comparable to what we know in the standard LWE setting. So let's recap: the information an unbounded adversary has about s is s·H + e', and we want to somehow relate this to the noise lossiness of s itself. Remember, noise lossiness measures the information about s that is lost when you just add Gaussian noise to s itself, not to s·H.
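To recap the last two steps in symbols (schematic rendering, notation mine):

```latex
% Flooding at the source in the substituted instance: every sample collapses
% onto a single noisy copy of s*H, since e' below is global across all i:
\[
  s\,(H Z_i) + e_i \;\approx_{\mathrm{stat}}\; (sH + e')\,Z_i + e''_i ,
\]
% so the entire view of an unbounded adversary reduces to the bracket
% s' + e'  with  s' = sH;  and since H is invertible, recovering s' is
% equivalent to recovering s.
```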
In order to do that, we recall an additional property of the NTRU or DSPR assumptions, and this can actually be assumed without loss of generality from the assumption the way we described it: the inverse of H mod q is a matrix Z_0 which is itself short. This is a global matrix, and it is short, and we're going to use this property to derive our ultimate result. Let's see how. We know that mod q the inverse of H is the short matrix Z_0, which means we can take the entire expression we have, s·H + e', and multiply it by Z_0. Let's see what happens when we multiply by Z_0. Well, the H cancels out (note that this whole thing is done modulo q), and what we get is s + e'·Z_0.

Now we have s plus something, which is what we want for noise lossiness; but this e'·Z_0 is some kind of Gaussian, and not exactly the Gaussian that we want. The e' is a nice Gaussian; think of it as continuous, or as a Gaussian over the integers. However, after you multiply by Z_0, what you get is a Gaussian over the lattice defined by the rows of Z_0, because you get a linear combination of the rows of Z_0, and this is not the Gaussian we have in the definition of noise lossiness. Furthermore, we need to take into account that this Gaussian can be very far from spherical, depending on the properties of Z_0. So what we actually need to do is condition on the case where Z_0 has nice singular values, so that e'·Z_0 is a Gaussian over the lattice of Z_0 which is close to spherical. To handle this, we define a notion called sometimes lossiness: we show that the nice case holds with some noticeable probability, not necessarily probability close to one, and we show that this actually suffices in order to prove entropic hardness. This is an additional technical difficulty, but let me not get into the details.

Going slightly back: we have s + e'·Z_0, where e'·Z_0 is a Gaussian over the lattice of Z_0. Let's see how this expression relates to the expression for noise lossiness, where noise lossiness asks how much information about s is lost when you add a standard Gaussian, not a Gaussian over a lattice. So, how much information about s is leaked with a Gaussian over a lattice? You can notice that you are going to get more information than what you get from the standard noise lossiness. In particular, we can take the expression s + e'·Z_0 and reduce it modulo the lattice itself: we can check which coset of the lattice of Z_0 this expression belongs to, and the coset of this expression is the same as the coset of s itself. So the coset of s within the lattice of Z_0 is something that can be deduced from s + e'·Z_0, and could not be deduced if we had just added standard Gaussian noise to s. So we get some additional leakage on top of what the noise lossiness tells us. To conclude: from this expression, the information about s that we lose is the noise lossiness (the slide says plus, but it's actually minus) minus the information contained in the coset of s. You lose the information in the noise lossiness, and possibly gain back the information contained in the coset of s.
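In symbols, the step we just did, writing Lambda(Z_0) for the lattice generated by the rows of Z_0 (a schematic rendering, not the paper's verbatim statement):

```latex
% Multiplying the adversary's view by Z_0 = H^{-1} mod q:
\[
  (sH + e')\,Z_0 \;=\; s\,(H Z_0) + e' Z_0 \;\equiv\; s + e' Z_0 \pmod{q},
\]
% where e'Z_0 is a (possibly skewed) Gaussian over the lattice Lambda(Z_0)
% spanned by the rows of Z_0.  Reducing modulo the lattice then leaks the
% coset of s:
\[
  s + e' Z_0 \;\bmod\; \Lambda(Z_0) \;=\; s \;\bmod\; \Lambda(Z_0).
\]
```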
So if the noise lossiness is large enough to compensate for this additional information contained in the coset, then we're in good shape. We therefore have an interest in choosing the parameters so that the lattice of Z_0 has as few cosets as possible. What does having as few cosets as possible mean? It means the lattice of Z_0 needs to be as dense as we can make it, which translates to the matrix Z_0 being as short as we can make it: the smaller the elements of Z_0 are, the fewer cosets it is going to have. And this makes sense, because a smaller Z_0 translates into a stronger hardness assumption. So in order to get better entropic results, we need to make a stronger hardness assumption, which makes sense: the stronger the hardness assumption we make, the smaller we can take Z_0, and the smaller this loss that comes from the coset becomes.

Annoyingly, we cannot take Z_0 small enough to allow a binary distribution on s. The leakage from the coset is large enough that, within reasonable parameters, we cannot get the noise lossiness of s to compensate for it when s is a binary distribution, simply because a binary distribution cannot have that much entropy; it can only have n bits of entropy. So the question of entropic Ring-LWE with a binary secret still remains open. However, we did manage to relate the entropic hardness of Ring-LWE to this notion of noise lossiness, minus the additional loss that comes from the coset of s, which translates to the parameters of the hardness assumption we are making. So this is the ultimate result that we have in this work.

All right, let's conclude. What we saw is that the notion of noise lossiness, defined in previous work in the context of LWE, also implies entropic hardness in the case of Ring-LWE, where we need to compensate for an additional loss that comes from leaking the coset of s with respect to some lattice that is parameterized in the assumption. What hardness assumption do we need in order to get entropic hardness? Well, we can rely on just the standard Ring-LWE assumption, and this gives us entropic hardness in the case where we start from sufficiently high noise lossiness for our secret s and the noise e_i is sufficiently large. So this already gives us some entropic hardness. If we want better parameters for entropic hardness, we need to make a stronger assumption, in particular this Decisional Small Polynomial Ratio assumption; that already gives us entropic hardness for modest noise lossiness and noise values for the Ring-LWE instance. And for the ultimate parameters, binary Ring-LWE, we still don't know how to get it; we don't know how to get our techniques to work in this parameter regime. So this is an interesting open problem.
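The shape of this tradeoff, schematically (my rendering; the exact terms and side conditions are in the paper):

```latex
% Writing Lambda(Z_0) for the lattice generated by the rows of Z_0, the
% secret retains, roughly,
\[
  \tilde{H}_\infty\big(s \mid \text{adversary's view}\big)
  \;\gtrsim\;
  \nu_\sigma(S) \;-\; \log\big|\,\mathbb{Z}^n / \Lambda(Z_0)\,\big| ,
\]
% so entropic hardness follows whenever the noise lossiness of S dominates
% the coset leakage.  A shorter Z_0 (a stronger assumption) means a denser
% lattice, fewer cosets, and hence a smaller loss term.
```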
We also present this new abstraction, structured LWE. We actually came up with this abstraction only for our own convenience, so that we wouldn't need to work with algebraic number theory throughout the entire paper; we wanted to abstract things away as much as possible for our own sake. But maybe it's an interesting abstraction in its own right, and maybe it can be useful elsewhere; for example, maybe you can use it to show entropic hardness for other variants of the LWE problem. And with that I will leave you. Thank you very much.