Hello, everybody, welcome to the lattice session. This is Daniele Micciancio, talking about trapdoors for hard lattices. As you know, mainstream cryptography is based on problems like factoring integers, extracting discrete logarithms, and many other problems from number theory. In this talk, as well as in the next talks, you will hear about lattice cryptography: cryptographic functions based on computational problems on point lattices, which I am going to define soon. Informally, a lattice is a discrete arrangement of points in space.

Lattice cryptography is interesting for several reasons. On one hand, the typical operations in lattice cryptography are additions of vectors in n-dimensional space. These are very simple operations, consisting of many additions of small numbers, that can be executed in parallel. Also, lattice cryptography, as far as we know, is resistant against quantum computing: there is no known polynomial-time quantum algorithm for lattice problems. Moreover, the constructions are supported by a surprising connection between average-case and worst-case complexity, which justifies the choice of certain probability distributions for picking random keys. And recently, lattice cryptography has generated some of the most amazing discoveries in cryptography, like the development of fully homomorphic encryption by Craig Gentry, which, as far as we know, can be built from lattice assumptions but nothing else: all known constructions, so far, are based on lattices.

So what is a lattice? A lattice is a set of points in space arranged in a regular manner. You can define a lattice as the set of all integer combinations of n linearly independent vectors in Euclidean space. In this example we have a two-dimensional lattice generated by B1 and B2, and the lattice points are given by sums and differences of these vectors; for instance, this point is B1 + B2. Using matrix notation, you can represent the lattice by a matrix B with the basis vectors as columns, and each lattice point is given by a matrix-vector multiplication: you multiply the basis matrix B by an integer vector x.

The same lattice can have several bases. For example, these two vectors C1 and C2 generate exactly the same lattice as B1 and B2. In fact, you can define lattices in a basis-independent manner: a lattice is a discrete additive subgroup of Euclidean space. Lattice points can be added and subtracted, but they do not form a vector space; it is a discrete set of points. This suggests other possible ways to define lattices. For example, you can define a lattice as the set of integer solutions to a system of linear equations. In this example we have a lattice described by a matrix A; A is not a basis for the lattice. The lattice is defined as the set of all integer solutions to the system Ax = 0. This is clearly a lattice: it is a subgroup, because it is closed under addition and subtraction, and it is a discrete set of points, because all the points are integer.

Now, there is a slightly different but even simpler way to introduce lattices, which is going to play a role in this talk. Start from the simplest possible lattice in n-dimensional space: the set Z^n of all vectors with integer coordinates. This is a very simple, regular, orthogonal lattice, and it turns out that any other lattice can be obtained from it.
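[Editor's note: a minimal numpy sketch of the two views of a lattice described above; the matrices are toy examples of my own, not the ones on the slides.]

```python
import numpy as np

# View 1: the lattice generated by a basis B is { B @ x : x an integer vector }.
B = np.array([[2, 1],
              [0, 3]])              # columns are the basis vectors B1, B2
x = np.array([1, 1])
print(B @ x)                        # the lattice point B1 + B2 -> [3 3]

# A different basis generates the same lattice exactly when C = B @ U for a
# unimodular U (integer matrix with determinant +/- 1).
U = np.array([[1, 1],
              [0, 1]])
C = B @ U                           # C1, C2 span the same lattice as B1, B2
print(round(np.linalg.det(U)))      # 1

# View 2: the integer solutions of A x = 0 also form a lattice -- closed
# under addition and subtraction, and discrete (all points are integer).
A = np.array([[1, 2, 3]])
x1 = np.array([1, 1, -1])           # 1 + 2 - 3 = 0
x2 = np.array([3, 0, -1])           # 3 + 0 - 3 = 0
for v in (x1, x2, x1 + x2, x1 - x2):
    print(A @ v)                    # all [0]
```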
The lattice generated by a matrix B can be thought of as the result of applying a linear transformation to Z^n: think of B as a linear transformation rather than a generating set. We take the integer lattice and apply the transformation B to turn it into a different lattice. Now, it is clear that all lattices have the same algebraic structure: they all have the structure of an additive group. By applying a linear transformation, you can map any n-dimensional lattice to any other n-dimensional lattice, while at the same time changing the geometry of the lattice, which is the typical parameter of interest in both computational problems and cryptographic constructions. For example, applying a matrix B changes the minimum distance of the lattice, the smallest distance between any two lattice points. We will come back to this view of lattices in a moment; first, let's move to cryptography.

The typical one-way function based on lattices is specified by an n-by-m matrix A with integer entries chosen at random modulo q. Here n is the security parameter; think of n as the main parameter that describes the scheme. The modulus q is polynomial in n, so numbers modulo q are very small numbers: they have on the order of log n bits and typically fit in a single machine register, nothing like the big numbers used in RSA. The other parameter m, the number of columns of A, is a larger integer, but still roughly linear in n: it is typically set to about n log q, so it is quasi-linear in the security parameter.

The simplest function you can define using this matrix is matrix-vector multiplication: use the matrix A as the key, and on input a vector x output f_A(x) = Ax mod q. Of course, if x is unrestricted, this function is easy to invert: inverting it is the same as solving a system of linear equations. To make this function hard to invert, we restrict the input x to the set of short vectors. If you do so, you get a function which is hard to invert. This function was introduced by Ajtai; the corresponding inversion problem is called the small integer solution (SIS) problem, the problem of finding a small solution to a system of linear equations, and it easily yields collision-resistant hash functions. For typical settings, where x is short but not too short, x can take more than q^n possible values; q^n is the size of the range of the function. So the function is still hard to invert, but it is also surjective: every possible output can be obtained by choosing x appropriately.

Another function, also used in many cryptographic constructions, is the learning with errors (LWE) function, introduced by Regev. This function is also indexed by a matrix A. This time, we multiply A on the left by an arbitrary vector s; s is not short, it is a random vector modulo q. Then we perturb the result by adding a very short noise vector e. Now, x and e play essentially the same role in these two constructions, but typically in the LWE function g, the error e is even shorter.
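[Editor's note: a hedged numpy sketch of these two functions; the names f_A and g_A and the toy parameters are mine, and real instantiations use carefully chosen distributions and sizes.]

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 8, 257
m = n * int(np.ceil(np.log2(q)))     # m ~ n log q, quasi-linear in n

A = rng.integers(0, q, size=(n, m))  # the random key

def f_A(x):
    """Ajtai / SIS direction: x must be a short integer vector."""
    return (A @ x) % q

def g_A(s, e):
    """Regev / LWE direction: s is uniform mod q, e is very short noise."""
    return (s @ A + e) % q

x = rng.integers(0, 2, size=m)       # short input: a 0/1 vector
s = rng.integers(0, q, size=n)       # not short: uniform mod q
e = rng.integers(-2, 3, size=m)      # very short error
print(f_A(x))                        # n numbers mod q (surjective direction)
print(g_A(s, e))                     # m numbers mod q (injective direction)
```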
Now, there is a close connection between these functions and lattices. Let's look at the first function, f. The matrix A defines a lattice via the homogeneous system Ax = 0 mod q: that is the set of all the vectors which are mapped to 0 by the function. When you evaluate the function on an arbitrary input x, you are specifying a coset, a shifted copy of this lattice, and the inversion problem corresponds to finding a small element in this coset. Using linear algebra, the output can be used to identify an arbitrary vector in the coset, as well as the lattice of solutions to the homogeneous system. So the problem is: given a shifted copy of a lattice, find the shortest element, or equivalently, find the element in this lattice coset which is closest to the origin, or to some other target vector.

For the other function g, the connection to lattices is even more direct. You can think of the rows of the matrix A as a basis for a lattice. It is not quite a basis, because here we are working modulo q, so our lattices are always periodic modulo q: you can think of everything as repeating regularly when you shift by q units, which in the picture is represented by this q-by-q box inside the lattice. When you multiply A on the left by a vector s, you are essentially picking a random lattice point inside this box, using the rows of A as a basis. Then you add the error vector e and obtain a perturbed lattice point. The inversion problem corresponds to recovering the original input. As I told you, for this function e is typically short enough that the function is injective: if you put spheres of this size around every lattice point, you get disjoint spheres, so the output of the function uniquely determines the input. For f, on the other hand, there are many points inside this larger circle, which gives the set of all points in the domain of the function that are mapped to the same coset.

Just as a remark: syntactic differences aside, f and g are essentially the same function. You can show that they are equivalent using lattice duality. The real difference between the two is that in the first one, x is chosen to be short but not too short, and the corresponding function is surjective but not injective; while in the second, the error vector is very short, giving an injective function which is not surjective. If you swap the lengths of x and e in the two constructions, the situation gets reversed, and again you get equally good one-way functions based on lattices.

Anyway, this is not the main point of this talk. The main point is that computing these functions f and g in the forward direction gives collision-resistant hashing, CPA-secure encryption, and a little more. The more advanced cryptographic functions based on lattices also require inverting these functions: they require computing preimages. Inversion for g is very easy to define: it is an injective function, and you want to find the unique preimage. In the case of surjective functions which are not injective, you have to be more careful. The notion of inversion that turned out to be useful in most applications was put forward by Gentry, Peikert, and Vaikuntanathan: you want to find not an arbitrary preimage, but to sample a preimage according to the conditional input distribution. The input x is usually chosen using a Gaussian-like distribution, and you want to sample from the lattice coset of preimages, again with a discrete Gaussian distribution. This can be done if you have a good basis for the lattice.
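[Editor's note: to make the coset picture concrete, here is a tiny brute-force check of my own, with toy parameters small enough that a 0/1 collision must exist by counting.]

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n, m, q = 2, 8, 5                  # 2^m = 256 inputs > q^n = 25 outputs
A = rng.integers(0, q, size=(n, m))

# Find two 0/1 inputs with the same image (guaranteed by pigeonhole).
seen = {}
for bits in product((0, 1), repeat=m):
    x = np.array(bits)
    u = tuple((A @ x) % q)
    if u in seen:
        x1, x2 = seen[u], x
        break
    seen[u] = x

v = x1 - x2                        # short, nonzero, and in the kernel lattice
print((A @ v) % q)                 # [0 0]
# The preimages of any output form the coset x1 + {v : A v = 0 (mod q)};
# GPV inversion means sampling a short element of such a coset.
```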
Concretely, if you have a basis consisting of short vectors, then you can both solve the unique decoding problem for g and the Gaussian sampling problem for f. Once you also know how to invert these functions, you can do quite a few things with them: CCA-secure encryption, identity-based encryption, hierarchical IBE, oblivious transfer, and many, many more constructions.

And it is known how to generate matrices together with a basis consisting of short vectors. The first construction was proposed by Ajtai in 1999, and a simpler construction was given by Alwen and Peikert in 2009. However, the constructions are very complicated, and the resulting algorithms are not entirely practical. Either they solve the inversion problem with good performance in terms of the quality of the solutions, but the algorithms are slow, use floating-point numbers, and are inherently sequential; or there is a different class of algorithms with desirable properties, like parallelization and the ability to move most of the computation to an offline stage, but with worse performance in terms of the quality of the solutions returned.

To illustrate this quantitative aspect, let's go over the parameters. We have a matrix A, which is n by m; typically m is about n log q. And you have a good basis, consisting of vectors of length roughly sigma. This sigma will be the standard deviation of each coordinate of the vectors produced by the inversion algorithm. If you run your inversion algorithm, you obtain preimages of length about sigma times the square root of the dimension. For the function to be one-way, finding vectors of this length beta must be hard. So methods for generating a random lattice together with a trapdoor should aim to produce small sigma and m: you can set n and q so that finding vectors of length beta is hard, and the smaller m and sigma are, the harder the corresponding inversion problem. You get, at the same time, smaller keys and a harder inversion problem underlying the constructions.

In our work, we give a new method to generate lattices together with a new notion of trapdoor, which is closely related to a basis but not quite the same. The method is very simple and fast: key generation consists of a single matrix multiplication, while previous methods required complex algebra, Hermite normal form computations, and other polynomial-time algorithms which are not quite practical. The inversion is also very simple and practical: it is parallelizable, most of the work can be done in an offline stage, and there is no trade-off between efficiency and quality. It gives the best possible quality, in terms of sigma, the standard deviation of the inversion algorithm, together with efficiency. And the parameters m and sigma are quite a bit smaller than in previous work. As an example, we get a dimension smaller by a factor 32 and a standard deviation smaller by a factor 25 for typical instantiations, which already gives an improvement between one and two orders of magnitude in the size of the resulting keys. Plus, we use a different notion of trapdoor which, instead of being a full basis, is a matrix of roughly half the dimensions, and that gives an additional factor in performance. Once you use this within applications, you can also exploit the special structure of our trapdoor to get further improvements.
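[Editor's note: to ground the parameter discussion, a few lines of arithmetic with toy numbers of my own choosing, not values from the talk.]

```python
import math

n = 256                          # main security parameter
q = 2 ** 24                      # polynomial-size modulus, log2(q) = 24
m = n * round(math.log2(q))      # ~ n log q = 6144 columns
sigma = 10.0                     # per-coordinate std. dev. of preimages
beta = sigma * math.sqrt(m)      # preimage length sigma * sqrt(m) ~ 784
print(m, round(beta))
# Smaller m and sigma shrink beta, which means smaller keys and a harder
# underlying lattice problem at the same time.
```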
For example, in all the applications based on key delegation, like hierarchical identity-based encryption, you can reduce the size of the key from quadratic in the dimension to linear in the dimension. Here is a brief summary of the parameters; I don't really ask you to go over the numbers, but just to give you a sense, you get somewhere between one and two orders of magnitude of improvement. In the asymptotic setting, the improvement can also be significant: it can be a factor logarithmic in q, so it grows to infinity as the dimension grows.

Now I'll give a brief sketch of how the method works. We start from a very simple, fixed lattice G. Here I drew the integer lattice, just to suggest that the lattice is very simple; but it is not quite Z^n, because Z^n does not work. It is a lattice which is almost as simple as Z^n but has some desirable features, and one of the desirable features is that it allows very fast inversion of these functions. Then we transform this lattice into a random one by applying a random transformation, which is also nice: it has low distortion, and it can be computed very efficiently in both the forward and the backward direction.

This gives a very simple method to solve the inversion problem in the random lattice: start from an arbitrary target point, use the inverse transformation to map it to the simple lattice G, solve the inversion problem in G, which can be done incredibly fast (the inversion problem, which is typically quadratic, is solved here in linear time, and very fast in practice), and then map the solution forward to the original lattice. That is how you use the trapdoor: to go back and forth between these two lattices, the random one, where the problem is hard without knowing the trapdoor, and the fixed lattice. Of course, the transformation introduces distortion, so you need some extra work to fix that; but this can be done by choosing proper parameters, using transformations with low distortion, and using a perturbation technique from the Gaussian sampling algorithm of Peikert from CRYPTO 2010.

Now I'll give a brief description of this magic lattice G that allows all this. Take a vector g with k entries, where k is the logarithm of q; for simplicity, let q be a power of 2, though this is not necessary. The vector is g = (1, 2, 4, ..., q/2), all the powers of 2 up to q/2, and we take the lattice defined by the linear equation gx = 0 mod q. This lattice has a very simple and nice basis: a basis with 2 along the diagonal and -1 below it. When you multiply a column of this basis by g, you are subtracting one coordinate from twice the previous one, and they cancel out; for the last column, you have q/2 multiplied by 2, and again you get 0 modulo q. So this is a very simple lattice with short basis vectors. It is almost orthogonal: if you take the Gram-Schmidt orthogonalization of the basis, you get twice the identity. It is sparse: most of the entries are 0. And it is low-dimensional: the dimension of this lattice is only k = log q. The sparsity of the basis already gives very fast inversion algorithms, by exploiting sparsity within the known algorithms; but you can also get very fast and efficient specialized algorithms, as sketched below.
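[Editor's note: the gadget in code, a minimal sketch of my own assuming q = 16, so k = 4.]

```python
import numpy as np

q = 16
k = q.bit_length() - 1             # k = log2(q) = 4
g = 2 ** np.arange(k)              # g = (1, 2, 4, 8) = (1, 2, ..., q/2)

# Basis: 2 on the diagonal, -1 just below it (columns 2e_i - e_{i+1}, 2e_k).
S = 2 * np.eye(k, dtype=int) - np.eye(k, k, -1, dtype=int)
print(S)
print((g @ S) % q)                 # [0 0 0 0]: every column satisfies g.s = 0
# Gram-Schmidt of this lower-triangular basis is exactly 2*I, so it is
# almost orthogonal, sparse, and only k = log2(q) dimensional.
```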
To give a hint of what you can do, consider the problem of inverting the function f for this lattice: you want to find a small solution x, say a 0/1 vector, such that gx equals some given y. How can you find x? You can use one of the generic algorithms, or, even easier, you can just take the binary representation of y: that gives you a vector which is mapped by g to the desired value y. So taking the binary representation of an integer already gives a good inversion algorithm.

Now, of course, this is a small-dimensional lattice, which by itself does not seem very useful. So first, we move to high dimension by a tensor construction: take a block-diagonal matrix G with many copies of g along the diagonal. It is clear that the inversion problem for this lattice consists of n parallel inversion problems for the low-dimensional lattice, so we again have a linear-time, almost perfectly parallelizable inversion algorithm. By combining the bases together, we also get a good basis, sparse and almost orthogonal, which can be used with the previous algorithms; but there are also new specialized algorithms that solve the inversion problem very efficiently in practice.

Now, this is a fixed lattice, so by itself it cannot be used as a cryptographic key; we want to turn it into a random matrix. This is done in two steps. First, we combine G with a random matrix A-bar. Using the fact that G generates the entire space, it is very easy to reduce the inversion problem for this extended matrix to the inversion problem for G; adding columns is something that has already been done in previous work, and there are standard methods to adapt the inversion algorithm efficiently. Then, to get a truly random matrix, we multiply our [A-bar | G] matrix by a random unimodular transformation. Unimodular means that the determinant is one: the matrix has the identity on the two diagonal blocks and a random matrix R with small entries in the upper-right corner. When you compute the product, you get an expression of the form [A-bar | A-bar R + G], where A-bar R is going to be almost random: you set the parameters in such a way that, by the leftover hash lemma, this random matrix acts as a sort of one-time pad and hides the underlying matrix G. As a side remark, when G is equal to 0, this resembles a previous construction of Ajtai that only gave a weak trapdoor, just one short vector. And the resulting construction can be further improved if you settle for pseudorandomness.

Just one last slide. This transformation matrix also has a very simple inverse: you just need to change the sign of R to get the inverse transformation. So it is very easy to compute back and forth, giving simple and practical inversion algorithms for the resulting random matrix A. Plus, you can batch many executions using fast matrix multiplication. You can also use this trapdoor to obtain a short, good basis for the new lattice, but you don't really need that: all the applications in lattice cryptography can be adapted to make direct use of the matrix R rather than building a full basis.

Concluding, we described a simple method to generate random lattices with trapdoors, which is much more efficient and practical than previous methods, and brings lattice cryptography closer to practicality.
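[Editor's note: to tie the pieces together, a compact end-to-end sketch of my own, with toy parameters: inverting G by binary decomposition, and using the trapdoor R to pull an inversion for the random-looking A back to an inversion for G. The real algorithm adds Gaussian perturbation (the Peikert CRYPTO 2010 technique mentioned above) so the output does not leak R; that step is omitted here.]

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 4, 16
k = q.bit_length() - 1                    # k = log2(q)
g = 2 ** np.arange(k)                     # the gadget vector (1, 2, ..., q/2)
G = np.kron(np.eye(n, dtype=int), g)      # block-diagonal gadget, n x nk

def g_inverse(u):
    """A 0/1 preimage of u under G: the binary digits of each entry of u."""
    z = np.zeros(n * k, dtype=int)
    for i, ui in enumerate(u % q):
        for j in range(k):
            z[i * k + j] = (int(ui) >> j) & 1
    return z

u = rng.integers(0, q, size=n)            # an arbitrary target
z = g_inverse(u)
print(np.array_equal((G @ z) % q, u))     # True: G z = u (mod q), z is 0/1

# Randomize: A = [Abar | Abar R + G] mod q, with R (small entries) as trapdoor.
mbar = n * k
Abar = rng.integers(0, q, size=(n, mbar))
R = rng.integers(-1, 2, size=(mbar, n * k))   # entries in {-1, 0, 1}
A = np.concatenate([Abar, (Abar @ R + G) % q], axis=1)

# With R, inverting A reduces to inverting G: for x = (-R z, z) we get
# A x = -Abar R z + (Abar R + G) z = G z = u (mod q), and x is small.
x = np.concatenate([-R @ z, z])
print(np.array_equal((A @ x) % q, u))     # True
```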
Thanks for your attention. We have time for a quick question. Oh, thanks, that was unexpected; I'll be happy to take questions offline. We started a bit late, so we have time for one question. Rings? Yeah, the construction adapts in a pretty much trivial way to ring LWE and ideal lattices. The adaptation is straightforward, and the paper gives some details on how to do that. But yes, it works just as well. OK, then let's thank Daniele again.