Hi, my name is Nicholas Genise, I'm a postdoc at Rutgers University, and I'm presenting our paper, Improved Discrete Gaussian and Sub-Gaussian Analysis for Lattice Cryptography. This is joint work with Daniele Micciancio, Chris Peikert, and Michael Walter.

Part one: motivation and background. It seems that every time a discrete Gaussian analysis arises in lattice cryptography, a specialized theorem is proved in whatever paper it arises in, even though many of these theorems are morally the same. So the goal of our paper was to give a simple, modular approach to which the pen-and-paper proofs of previous discrete Gaussian analyses reduce. In the end, this comes down to representing the operation we are performing on a discrete Gaussian as a linear transformation, and then the analysis reduces to a smoothness condition on the kernel of that linear transformation.

Just to refresh: a lattice lambda is a discrete subgroup of Euclidean space. Important when working with discrete Gaussians is the dual lattice. The dual lattice of lambda is another lattice, lambda star, lying in the span of lambda: it is the set of all vectors whose inner products with the vectors of lambda land back in the integers.

To start, the spherical discrete Gaussian over lambda of width s and center c (where c does not necessarily need to be in the lattice) is the probability distribution proportional to the Gaussian function parameterized by s and c. It's important to note that if s and c are omitted, then s equals one and the Gaussian is zero-centered, that is, c equals zero. We can extend this definition to non-spherical discrete Gaussians, which are important in many applications, say digital signatures. A non-spherical discrete Gaussian of shape sigma, for some positive definite matrix sigma, is the probability distribution proportional to the same Gaussian function, except that we replace the scalar s with a square root of sigma. We denote the distribution by a square root of sigma because each sigma has infinitely many square roots, and oftentimes, say in a pen-and-paper proof, it may be useful to use one square root instead of another.

Discrete Gaussians arise nearly everywhere in lattices. For example, they arise in our tightest transference theorems for lattices, which relate the shortness of vectors in the primal lattice to the shortness of vectors in the dual lattice. They give us our tightest worst-case to average-case reductions, say for the short integer solution (SIS) problem and the learning with errors (LWE) problem. They give us our fastest provable algorithms for solving the shortest vector problem and the closest vector problem in the worst case. Important for applications, they give us the lattice-based digital signatures with the smallest signatures, and, building on top of this, they give us strong lattice trapdoors. This includes applications like identity-based encryption, attribute-based encryption, multilinear maps, and more; they also give us function-hiding encryption and fully homomorphic encryption.
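Before moving on to smoothness, here is a minimal LaTeX rendering of the definitions above; this is standard notation from the literature, my own summary rather than a verbatim slide.

```latex
\[
\Lambda^* = \{\, \mathbf{w} \in \mathrm{span}(\Lambda) : \langle \mathbf{w}, \mathbf{v} \rangle \in \mathbb{Z} \text{ for all } \mathbf{v} \in \Lambda \,\}, \qquad
\rho_{s,\mathbf{c}}(\mathbf{x}) = \exp\!\big(-\pi \|\mathbf{x}-\mathbf{c}\|^2 / s^2\big),
\]
\[
D_{\Lambda,s,\mathbf{c}}(\mathbf{x}) = \frac{\rho_{s,\mathbf{c}}(\mathbf{x})}{\rho_{s,\mathbf{c}}(\Lambda)} \ \ (\mathbf{x} \in \Lambda), \qquad
\rho_{\sqrt{\Sigma}}(\mathbf{x}) = \exp\!\big(-\pi\, \mathbf{x}^{T} \Sigma^{-1} \mathbf{x}\big),
\]
```

with the convention that omitted parameters default to s = 1 and c = 0, and with the non-spherical discrete Gaussian defined proportionally to rho of square root of sigma in the same way.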
The most important notion when working with discrete Gaussians is smoothness. What that means, or the definition of the smoothing parameter, is that the epsilon smoothing parameter of lambda is the smallest s greater than zero such that, when you scale the dual lattice by s and sum up the Gaussian function evaluated at each point of the scaled lattice, what we call the Gaussian mass, this mass is at most one plus epsilon. We say the discrete Gaussian on lambda of width s is smooth if s is at least the smoothing parameter of lambda for some negligible epsilon; and when I say negligible, I mean negligible in the underlying security parameter of whatever application we are using the discrete Gaussian in. The most important consequence of a discrete Gaussian being smooth is that it behaves like a continuous Gaussian, which makes analyzing the discrete Gaussian much simpler.

We can extend this to non-spherical discrete Gaussians. We say that a square root of some positive definite matrix sigma is greater than the smoothing parameter if the same consequence holds, or equivalently, if when you scale the dual lattice by the transpose of that square root, the Gaussian mass is at most one plus epsilon. Note that before, when we said a scalar s was greater than the smoothing parameter, we were comparing two real numbers; in the matrix case, this greater-than-or-equal-to sign is just notation. But the consequences are the same, and the most important one is that a smooth discrete Gaussian is shift invariant for any center in the span of the lattice. Some other important consequences: all of our efficient algorithms for sampling discrete Gaussians, or sampling distributions statistically close to discrete Gaussians, rely on smoothness, and, importantly for cryptography, we get our tightest worst-case to average-case reductions for the short integer solution problem.

Part two: the modular framework via linear transformations of discrete Gaussians. The modular approach is to, first, express the sampling portion of the experiment as a single discrete Gaussian; second, express the output of the experiment as a linear transformation applied to the discrete Gaussian from step one; and third, something we call showing smoothness in the kernel lattice. Let me talk about step three in more detail. The underlying question is: when is the linear transformation of the discrete Gaussian statistically close to sampling a discrete Gaussian over the transformed lattice coset with the appropriate covariance? In short, we need two conditions to hold. First, we need the kernel of the linear transformation to be spanned by lattice vectors in lambda. Second, we need the initial discrete Gaussian, of width s or of shape given by a square root of sigma, to be smooth over this kernel lattice, that is, the lattice intersected with the kernel of the linear transformation. More concisely, we need a smooth, full-rank kernel lattice. When these two conditions hold, the linear transformation of the discrete Gaussian is statistically close to a discrete Gaussian over the transformed coset with the appropriate covariance.
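In symbols, the smoothing parameter and the linear transformation theorem read roughly as follows; this is my rendering of the conditions just described, and the paper should be consulted for the precise statement and distance bound.

```latex
\[
\eta_\varepsilon(\Lambda) = \min\big\{\, s > 0 : \rho(s \cdot \Lambda^*) \le 1 + \varepsilon \,\big\}, \qquad
\sqrt{\Sigma} \ge \eta_\varepsilon(\Lambda) \;:\Longleftrightarrow\; \rho\big(\sqrt{\Sigma}^{\,T} \Lambda^*\big) \le 1 + \varepsilon,
\]
\[
T\big(D_{\Lambda+\mathbf{c},\sqrt{\Sigma}}\big) \approx D_{T\Lambda + T\mathbf{c},\, T\sqrt{\Sigma}}
\quad \text{when } \Lambda \cap \ker T \text{ spans } \ker T
\text{ and } \sqrt{\Sigma} \ge \eta_\varepsilon(\Lambda \cap \ker T),
\]
```

where the approximation means statistical closeness up to a distance that vanishes with epsilon.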
So let's go over the simplest example we can think of: taking two independent discrete Gaussians, adding them together, and asking when their sum is statistically close to the discrete Gaussian we would expect. Let's sample x1 over a lattice coset a1 with width s1, and likewise x2 over a2 with width s2, independently. The output of the experiment is just the sum of x1 and x2. Step one is to write the joint distribution of x1 and x2 as a single discrete Gaussian. This is easy because they are sampled independently: it's a discrete Gaussian over the Cartesian product of the cosets with the appropriate diagonal covariance. Expressing the output as a linear transformation is also easy: x1 plus x2 is represented by the matrix T given by two identity matrices concatenated, and now we are analyzing this linear transformation applied to the joint distribution. The linear transformation theorem says all we need to show is that the kernel lattice is full rank, meaning the intersection of lambda1 and lambda2 is full rank in the ambient space, and that we are smooth over this kernel lattice. Here the kernel lattice is the set of all pairs of vectors (v, -v) such that v is in the intersection. When this condition holds, the statistical experiment of sampling x1 and x2 independently and summing them is close to the expected discrete Gaussian.

As a quick sanity check, what happens if we analyze this distribution without the linear transformation theorem? Fix some z in a1 plus a2, so z is the sum of some little a1 and little a2, each little ai in Ai. The probability mass mapped to z then comes from the pairs (a1 + k, a2 - k), where k ranges over the intersection of the two lattices. So right away, one step of analyzing the distribution reveals the kernel lattice naturally.

Part three: generating LWE samples. In our paper, we show another application of the linear transformation theorem: generating statistically independent LWE samples given a batch of samples. Real quick, to review: in the learning with errors distribution, you are given a short, fat random matrix A modulo q, and then, for some secret vector s and some secret error vector e, you are given s transpose times A plus that error vector. Really, you can see this as treating the matrix A as a generator matrix for a code modulo q: you are encoding a secret s and scrambling the low-order bits of the codeword. The name comes from treating the distribution on a sample-by-sample basis, but here we are going to assume that we have many samples at once. So given m LWE samples, where m is roughly n log q, we want to generate new samples with the same LWE secret such that the new samples are statistically close to independent. The reason we show this in the paper is that the linear transformation theorem gives a more direct proof with a slightly tighter analysis. What I mean is that previous LWE self-reduction proofs artificially added continuous Gaussian noise as an intermediate step, which slightly increased the output noise. It's important to note, though, that the algorithmic steps of the reduction in our paper and in previous reductions are the same.
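To make the shape of this re-randomization concrete, here is a minimal Python sketch of my own; it assumes the step takes the usual self-reduction form a' = Ax, b' = <b, x> with x drawn from a discrete Gaussian over Z^m, uses toy parameters that are far too small to be secure, and approximates the discrete Gaussian sampler by enumeration over a truncated support.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dgauss_z(s, size):
    """Sample D_{Z,s} by enumerating probabilities over [-10s, 10s];
    the tail mass beyond that range is negligible for this illustration."""
    tau = int(np.ceil(10 * s))
    support = np.arange(-tau, tau + 1)
    probs = np.exp(-np.pi * support.astype(float) ** 2 / s ** 2)
    return rng.choice(support, size=size, p=probs / probs.sum())

# Toy LWE parameters (illustration only, not secure).
n, m, q, s = 8, 64, 257, 4.0

# A batch of m LWE samples: b = secret^T A + e (mod q).
A = rng.integers(0, q, size=(n, m))
secret = rng.integers(0, q, size=n)
e = sample_dgauss_z(2.0, m)
b = (secret @ A + e) % q

# Re-randomization: combine the batch with a discrete Gaussian x over Z^m
# to produce one fresh-looking sample (a_new, b_new) under the same secret.
x = sample_dgauss_z(s, m)
a_new = (A @ x) % q
b_new = int(b @ x) % q

# Sanity check: b_new - secret^T a_new = <e, x> (mod q), the combined noise.
print((b_new - int(secret @ a_new)) % q, int(e @ x) % q)
```

The linear transformation theorem is what justifies treating the output (a_new together with its noise <e, x>) as statistically close to a fresh discrete-Gaussian LWE sample, without the intermediate continuous noise used in earlier proofs.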
As a secondary contribution of our paper, we give a concrete analysis of sub-Gaussian random matrices. Our motivation here is lattice trapdoors for the short integer solution problem. If you recall the SIS problem, or here I'll go over the inhomogeneous version: you are given a short, fat random matrix A modulo q and a syndrome vector u modulo q, and to solve the problem you have to produce a vector x with small integer entries such that the matrix-vector product A times x equals u modulo q. Of course, x has to be non-zero if u is zero, and we make "small" rigorous with a norm bound beta; you can think of the norm as just the Euclidean norm. Really, this is a lattice problem on a lattice coset: the coset of the q-ary lattice defined by A, given by all integer vectors x such that A times x equals u modulo q.

The trapdoor scheme where the sub-Gaussian matrices arise is the MP12 trapdoor scheme. You can think of this as a signature scheme where the verification key is the SIS matrix A, and the secret signing key is a trapdoor matrix R, random with small integer entries, such that when you matrix-multiply A times R you get the gadget matrix G modulo q. This gadget matrix is any matrix for which the SIS problem is easy. Algorithmically, the trapdoor inversions are sampled as a discrete Gaussian over the appropriate lattice coset, and the reason we care about sub-Gaussian concentration bounds is that the width of that discrete Gaussian scales linearly with the largest singular value of the random matrix R. Now, you could always implement the scheme and measure the largest singular value to get an idea of the security, but really we would like to know the hardness of the scheme before implementation.

Okay, so the way we analyze these singular values is through sub-Gaussian analysis. A random variable X is sub-Gaussian with parameter s if its tails are dominated by those of a Gaussian of width s. Some examples are discrete Gaussians and also uniform random variables over a bounded range, say plus or minus one. A random vector is sub-Gaussian if it is sub-Gaussian in every direction, or equivalently, if its inner product with every unit vector u is sub-Gaussian. Now, given a random matrix R with independently sampled sub-Gaussian rows, we are concerned with tight concentration bounds on the matrix's singular values. For the remainder, we assume m is larger than n, and we denote by s1 the largest singular value.

A problem with previous concentration bounds is that they have unknown constants floating around. One common concentration bound: if R has independent sub-Gaussian entries with parameter s, then the largest singular value of R is bounded by C * s * (sqrt(m) + sqrt(n) + t), where C is some unknown constant and t controls the tightness of the bound. If instead R has independent sub-Gaussian isotropic rows — by isotropic, I mean the cross-covariances are zero and the covariance matrix is the identity — then the largest singular value is bounded by sqrt(m), the term depending on the larger dimension, plus an unknown constant with a quadratic dependence on the sub-Gaussian parameter times the square root of the smaller dimension, roughly sqrt(m) + C * s^2 * (sqrt(n) + t).
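To get a numerical feel for these bounds, here is a small Python experiment of my own (not the paper's test code): uniform plus-or-minus-one entries are sub-Gaussian with standard deviation one, so the empirical largest singular value can be compared directly against sqrt(m) + sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(1)

# Dimensions in the regime m > n discussed above.
n, m, trials = 100, 400, 20

s1_values = []
for _ in range(trials):
    # Independent uniform +/-1 (Rademacher) entries: sub-Gaussian, variance 1.
    R = rng.choice([-1.0, 1.0], size=(n, m))
    # Largest singular value (SVD returns singular values in descending order).
    s1_values.append(np.linalg.svd(R, compute_uv=False)[0])

s1_values = np.array(s1_values)
print(f"empirical s1: mean = {s1_values.mean():.2f}, max = {s1_values.max():.2f}")
print(f"sqrt(m) + sqrt(n) = {np.sqrt(m) + np.sqrt(n):.2f}")
```

The heuristic discussed next predicts that the empirical values concentrate near sqrt(m) + sqrt(n), rather than near the much larger provable bounds with their unknown constants.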
So one would expect that if the entries have some variance sigma squared, the largest singular value scales with that standard deviation. In our paper, that is what we show, and we also extract the exact constant: using the techniques behind the previous theorems, we show that the constant in the concentration bound is around 38, and that the whole quantity scales with the standard deviation. You'll notice that this constant is really too large to be useful in applications, because of the square-root dependence on the dimension. So instead, we come up with a heuristic and verify it experimentally. For matrices with independent sub-Gaussian entries from some common distributions in lattice cryptography, we show experimentally that the largest singular value is tightly concentrated around sigma, the standard deviation, times the square root of m plus the square root of n, the quantity that depends on the dimensions. For continuous Gaussians, this is a theorem known as Gordon's theorem. So really what our heuristic says is that, for independent entries from these common distributions, the singular values behave as if the entries were drawn from a continuous Gaussian.

In conclusion, we give a simple modular framework for analyzing operations on discrete Gaussians, and we give a concrete analysis of sub-Gaussian matrices. To motivate the paper in the introduction, I gave you a picture like this, and here is the updated picture: the dark rectangles are all explicit, or shown in our paper, and the light rectangles are results that are not shown in our paper but that I believe hold true, or have worked out offline. Thank you for your time.