Hi everyone. This talk will be about hardness of the learning with errors problem on general entropic distributions. I am Nico, and this is joint work with Zvika Brakerski.

One of the motivations for this work is leakage-resilient cryptography. What happens if the secret key of a scheme was chosen using bad random coins, or if additional side information about the secret key was later leaked to an adversary? In this talk, we will consider this problem at the assumption level. That is, we will look at a hardness assumption which has built-in leakage resilience. In effect, many constructions from this hardness assumption will inherit its leakage resilience.

The hardness assumption we consider in this talk is the learning with errors problem, which can be phrased as follows. In the search LWE problem, you are given a uniformly random matrix A over Z_q and a vector sA + e, that is, a noisy vector in the row span of A. The goal is to find the vector s. So we can say that the LWE problem asks you to solve a noisy system of linear equations over Z_q. In the standard version of the LWE problem, the vector s is chosen uniformly at random from Z_q^n and the error vector e comes from a short distribution chi. When A is an n-times-m matrix, we call n the dimension of the problem and m the number of samples. If we are given O(n log q) samples and the error distribution is, say, q/4-bounded, then the system has a unique solution s, except with negligible probability. So it is fair to require that an adversary recover the same s that was used to compute sA + e in order to win the experiment.

In the entropic version of the LWE problem, the vector s is not chosen uniformly at random but rather from a distribution S. The only guarantee we have about S is that it has a certain amount of min-entropy. So we might actually think of the distribution S as being chosen adversarially, with the only guarantee that it does have a certain amount of min-entropy. In the decisional version of the LWE problem, the goal is not to recover the secret vector s but rather to distinguish sA + e from a uniformly random vector u, given the matrix A. The same holds for the entropic version.

Let's briefly talk about the error distributions we consider. A natural choice are Gaussian error distributions, and it turns out LWE with Gaussian errors has some desirable features. Notably, for Gaussian errors, we know reductions from worst-case lattice problems to LWE. The approximation factor of the underlying worst-case problem relates to a quantity called the modulus-to-noise ratio alpha. Smaller values of alpha are better, as they correspond to harder worst-case problems. Furthermore, for Gaussian errors in the regime of m greater than n log q, the problem allows re-randomization at the cost of a small increase in alpha. For the techniques in this work, it will be important that the error distribution is Gaussian, so for the rest of this talk we will focus entirely on Gaussian error distributions.

By now, LWE has turned out to be one of the most versatile standard assumptions in cryptography. To name just a few applications, we have constructions of public-key encryption, oblivious transfer and multiparty computation, fully homomorphic encryption, attribute-based encryption for all circuits, and non-interactive zero-knowledge from LWE. Obfuscation aside, there are rather few primitives which have instantiations from other standard assumptions but not from LWE. In fact, LWE is essentially the only standard assumption from which we know FHE and ABE for all circuits.
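To make the search LWE problem described above concrete, here is a minimal numpy sketch of how an instance could be generated. The parameters and the rounded-normal error below are purely illustrative toy choices, not a secure instantiation.

```python
import numpy as np

# Toy, insecure parameters, chosen only to illustrate the shapes involved.
n, m, q, sigma = 16, 256, 3329, 3.0

rng = np.random.default_rng(0)

A = rng.integers(0, q, size=(n, m))                      # uniform matrix A over Z_q, n x m
s = rng.integers(0, q, size=n)                           # uniform secret s in Z_q^n
e = np.rint(rng.normal(0, sigma, size=m)).astype(int)    # short error: rounded Gaussian

b = (s @ A + e) % q                                      # noisy vector in the row span of A

# Search LWE: given (A, b), recover s.
# Decisional LWE: distinguish (A, b) from (A, u) with u uniform in Z_q^m.
```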
Another reason for the popularity of the LWE problem is that it is conjectured to be post-quantum secure. That is, we believe that this problem is hard even for efficient quantum algorithms.

The problem of LWE with entropic secrets, or more generally LWE with secrets that are not chosen uniformly, has been studied in the literature. All the results in this list are obtained by a reduction from standard LWE to entropic LWE, and we will discuss how these reductions affect the modulus-to-noise ratio of the underlying LWE problem. Applebaum et al. showed that the secret can be chosen from the same distribution as the error without affecting the modulus-to-noise ratio; this is often referred to as the Hermite normal form of LWE. The notion of entropic LWE was first studied in a work by Goldwasser et al. They showed that LWE remains hard if the secret is chosen from a binary entropic distribution of sufficient min-entropy. Their proof, however, relies on a statistical drowning step, and this affects the parameters in an undesirable way: the modulus-to-noise ratio of the underlying LWE problem increases by a superpolynomial factor. Two works, by Brakerski et al. and by Micciancio, provided improved reductions for the case of binary entropic secrets. Their reductions cause only a small parameter loss in the modulus-to-noise ratio. Alwen et al. provided a reduction for the slightly more general setting where the entropic distribution is ball-bounded. Their reduction causes a polynomial loss in the modulus-to-noise ratio. All these works left open the case of general entropic distributions, that is, whether entropic LWE is hard for general entropic distributions.

This brings us to our results. In this work, we provide a new characterization of the hardness of entropic LWE in terms of a quantity we call noise lossiness, nu_delta(S). Assuming the hardness of standard LWE with dimension k, modulus q and Gaussian parameter gamma, we get that entropic LWE with dimension n, m samples, the same modulus q and Gaussian parameter sigma is hard if the noise lossiness of S is at least k log q plus something super-logarithmic in the security parameter. Here delta is a parameter which depends on gamma, m and sigma.

Instantiating this for general distributions, we find that the noise lossiness of S is lower bounded by the min-entropy of S minus n log(q/delta), and we get that entropic LWE is hard for general distributions if the min-entropy of S is greater than or equal to k log q plus this expression in n, q, gamma, m and sigma. For r-bounded distributions S, we show that the noise lossiness is lower bounded by the min-entropy of S minus a constant C times the square root of n, times r, divided by delta; this constant C is just the square root of 2 pi times log e. Consequently, we find that in this setting entropic LWE is hard if the min-entropy of S is lower bounded by k log(gamma r) plus C times gamma times the square root of n m, times r, divided by sigma. Note that here, in the ball-bounded or r-bounded regime, the entropy bound is actually independent of q.

These parameters are for the search version of the problem. For general min-entropy distributions S, hardness of decisional entropic LWE follows directly with the same parameters if q is prime. For r-bounded S, we can use a search-to-decision reduction. Now, especially the entropy requirements for general distributions are far from desirable.
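For reference, the main hardness condition just stated can be written compactly as follows. This is a condensed restatement in the notation of the talk; the concrete value of delta given here is inferred, up to constants, from the technical overview later in the talk, where sigma_1 plays the role of delta.

```latex
% Assuming standard LWE with dimension k, modulus q and Gaussian parameter \gamma,
% entropic LWE with dimension n, m samples, modulus q and Gaussian parameter \sigma
% is hard whenever
\nu_{\delta}(\mathcal{S}) \;\ge\; k \log q \;+\; \omega(\log \lambda),
% where \lambda is the security parameter and, up to constants,
\delta \;\approx\; \frac{\sigma}{\gamma \sqrt{m}}.
```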
As the min-entropy of S is at most n log q, we need to choose sigma of a similar order as q to get something meaningful here. So one might ask if this strong entropy requirement is perhaps just an artifact of our proof technique. It turns out that this condition is in fact inherent: our entropy requirement is tight up to an additive term of O(n log lambda), where lambda is the security parameter. For the case of composite moduli, we provide matching attacks. That is, we exhibit an entropic distribution of secrets with slightly less entropy than what the positive result asks for, for which entropic LWE turns out to be easy. For the case of arbitrary moduli, that is, including prime moduli, we show a black-box impossibility: no reduction which only makes black-box use of an adversary can establish hardness of entropic LWE from any falsifiable assumption, not just LWE. So, completing the table, our work gives a tight characterization, in terms of entropy, of the distributions of secrets for which entropic LWE is hard.

Now I'll give you an overview of the techniques we use to obtain these results. The central idea in this work is to look at how noise affects the entropic secret directly. So fix an entropic distribution S, let lowercase s be chosen from this distribution, and let e be a Gaussian error we add on top of s. The noise lossiness is defined as the average conditional min-entropy of s given s + e, and it measures the amount of information lost about s when we add Gaussian noise to it. A nice way of thinking about this is in terms of the success probability of a maximum-likelihood decoder. For flat distributions S, this decoder decodes s + e to the s' in whose Voronoi cell it lies. So if we ask the noise lossiness to be large, we essentially require S to be a bad error-correcting code against Gaussian noise. Our goal will be to show that a sufficiently high noise lossiness implies hardness of entropic LWE.

So here's an intuition on how we compute the noise lossiness of general distributions. The basic question amounts to: how many spheres of radius O(delta) can you pack into Z_q^n with very little overlap? It turns out the logarithm of this threshold number is n log(q/delta), and the information loss amounts to how much you go beyond the threshold, that is, the min-entropy of S minus n log(q/delta). In the language of coding theory, this bound on the noise lossiness is in fact a strong converse coding theorem for general channels. In the ball-bounded setting, the support of the distribution S is confined to a much smaller space, hence there will be a much higher overlap of the spheres. So the noise lossiness kicks in a lot sooner: we get non-trivial noise lossiness once the min-entropy of S is above this constant times the square root of n times r divided by delta. Modulo a small simplification, this is consistent with the converse coding theorem for power-constrained Gaussian channels, where the power constraint corresponds to the radius r, or rather its square.

Essentially all prior works on entropic LWE use lossiness arguments, and so will we. So let's recall the main ideas. We will replace the uniformly random LWE matrix A with a pseudorandom matrix. Then we'll show that adding noise to sA loses information about s, that is, s is no longer fully specified by A and sA + e. To construct this pseudorandom matrix, we will take the product of two low-rank matrices B and C and add a discrete Gaussian matrix F.
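In symbols, and modulo small constants, the quantities just discussed look as follows. This is a condensed restatement of the definitions and bounds described above; the precise statements with all constants are in the paper.

```latex
% Noise lossiness: average conditional min-entropy of s given s + e,
% where s is drawn from S and e is a Gaussian of parameter \delta.
\nu_{\delta}(\mathcal{S}) \;:=\; \tilde{H}_{\infty}\!\left(s \,\middle|\, s + e\right)

% General distributions over Z_q^n (sphere-packing intuition):
\nu_{\delta}(\mathcal{S}) \;\ge\; H_{\infty}(\mathcal{S}) \;-\; n \log\!\left(\tfrac{q}{\delta}\right)

% r-bounded distributions, with C = \sqrt{2\pi}\,\log e:
\nu_{\delta}(\mathcal{S}) \;\ge\; H_{\infty}(\mathcal{S}) \;-\; C\,\frac{\sqrt{n}\,r}{\delta}
```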
So our pseudorandom matrix is the sum of a low-rank matrix BC and a short matrix F. Here, the matrix B is either chosen uniformly from Z_q^(n times k) or chosen from a discrete Gaussian, whereas the matrix C is chosen uniformly at random. Such a matrix is indistinguishable from a uniformly random matrix under the standard LWE assumption, via a simple hybrid argument.

We're going to use the following facts about Gaussian distributions. It is well known that the sum of two independent Gaussians is again a Gaussian, though with a different covariance matrix: adding two independent Gaussians amounts to adding the two covariance matrices. We will take a reverse perspective on this. That is, we want to express a spherical Gaussian as the sum of a target Gaussian and a residual Gaussian. More specifically, for a given matrix F, we want to decompose the spherical Gaussian e with parameter sigma into e1 F + e2, such that e1 is also a spherical Gaussian, though with a smaller Gaussian parameter sigma_1. Note that if e1 is Gaussian, then e1 F is also Gaussian. We can show that such a decomposition exists if sigma is greater than or equal to the spectral norm of F times sigma_1. On the last slide, I chose such a matrix F from a discrete Gaussian, and that is the same distribution, or the same F, which we will use for this decomposition. Choosing F via a discrete Gaussian allows us to bound the spectral norm of F by O(gamma times the square root of m).

We're now ready to sketch our reduction for the hardness of entropic LWE for distributions with a sufficient amount of noise lossiness. As a first step, as in prior works, we replace the matrix A with the lossy mode BC + F. As discussed above, this change goes unnoticed under standard LWE. So instead of giving the adversary the matrix A and sA + e, we choose the matrix A by computing BC + F, and the sample by s(BC + F) + e. Now we just rewrite this by distributing s on the right-hand side, so we compute the right-hand side as sBC + sF + e. As we've seen on the last slide, we can decompose the Gaussian error e into e1 F + e2, and as we've just discussed, this is possible because the spectral norm of the matrix F is sufficiently small. Rewriting this again, we obtain the expression sBC + (s + e1)F + e2.

We will now show that s has high min-entropy given this information. So from the view of an adversary who sees BC + F and sBC + (s + e1)F + e2, s will have high min-entropy. Recall that we said earlier that we require an LWE search adversary to find the unique secret s. Thus, if s has high min-entropy, even a computationally unbounded adversary will have only negligible success probability in guessing the correct s. So let's compute this min-entropy. Assume for now that the matrices BC and F are fixed. Then the information that we give the adversary, BC + F and sBC + (s + e1)F + e2, can actually be computed from sB and s + e1. The term e2 doesn't matter anymore because it is an independent Gaussian, independent of e1. Now, the matrix B has rank k, that is, we can describe the term sB with k log q bits. Therefore, we can apply the min-entropy chain rule and lower bound this conditional min-entropy by the min-entropy of s given s + e1, minus k log q. But the first term is exactly the noise lossiness of S with parameter sigma_1.
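To illustrate the Gaussian decomposition step numerically, here is a small numpy sketch. It uses continuous Gaussians and a rounded-normal matrix as stand-ins for the discrete Gaussians in the actual proof, "parameter" is used loosely for the standard deviation, and all values are toy choices for illustration only.

```python
import numpy as np

# Decomposition e = e1 @ F + e2: a spherical Gaussian e over R^m (parameter sigma)
# is written as e1 @ F, with e1 spherical of parameter sigma_1, plus a residual
# Gaussian e2. This works whenever sigma >= ||F|| * sigma_1 (spectral norm).
n, m, gamma, sigma = 8, 32, 2.0, 200.0

rng = np.random.default_rng(1)

# Stand-in for a discrete Gaussian matrix F with parameter gamma.
F = np.rint(rng.normal(0, gamma, size=(n, m)))

# Choose sigma_1 just below sigma / ||F|| so the residual covariance stays PSD.
sigma1 = sigma / (np.linalg.norm(F, 2) * 1.01)

# Residual covariance: sigma^2 * I_m - sigma_1^2 * F^T F.
cov2 = sigma**2 * np.eye(m) - sigma1**2 * F.T @ F

N = 20000
E1 = rng.normal(0, sigma1, size=(N, n))                    # spherical, parameter sigma_1
E2 = rng.multivariate_normal(np.zeros(m), cov2, size=N)    # residual Gaussian
E = E1 @ F + E2

# Empirically, E behaves like a spherical Gaussian of parameter sigma:
print(np.cov(E, rowvar=False)[:3, :3].round(1))  # roughly sigma^2 on the diagonal, ~0 elsewhere
print(sigma**2)
```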
So we can conclude that the entropic LWE search problem is hard if the noise lossiness of S is greater than or equal to k log q plus something super-logarithmic in the security parameter. We remark that the k log q term can be improved to something that is actually independent of q if we have the guarantee that both s and B are short, which is the case for r-bounded secrets S.

To wrap up, we have seen that LWE with general, non-short entropic secrets is still secure, provided that we only have a small entropy defect in the distribution of secrets, or likewise that we only have a small amount of leakage about the secret. We have identified a property of distributions, called noise lossiness, which characterizes under which parameters entropic LWE will be secure. Furthermore, there is an inherent reason why we can only deal with small entropy defects for general distributions, namely, there are both matching attacks and a black-box impossibility. Finally, LWE, or entropic LWE, with short secrets tolerates a much higher entropy defect, or much higher leakage.

To conclude, here are some open problems. What can we say if the leakage includes information about the noise term e? Does the black-box impossibility result extend to the quantum setting, or to quantum reductions? After all, quantum reductions are a staple of lattice-based crypto. And lastly, what about structured variants of LWE, such as Ring-LWE? Can we prove anything meaningful about entropic variants of these problems? That's all. Thanks.