This is joint work with Chris Peikert and Alon Rosen, from Georgia Tech and IDC respectively. In this talk I will present some new, efficient constructions of pseudorandom functions from lattices, and towards this end we also introduce a new technique which de-randomizes the learning with errors problem. The talk is divided into three parts. First I'll give some background. Then we'll talk about the de-randomization technique, which we call learning with rounding. And lastly, we will give a direct construction of a PRF. Let's start.

First we define what a pseudorandom function is. These are families of functions indexed by a seed s. Once you select a function from this family using the seed, it is deterministic, and it should be indistinguishable from a truly random function on the same domain and range by any computationally efficient adversary, where the adversary is allowed to make adaptive queries. In this talk the domain will be the set of k-bit strings, and random objects will be shown in red.

Pseudorandom functions are a cornerstone of symmetric-key cryptography. They are used for encryption schemes, and as was pointed out two days ago, they are also used as a primitive for message authentication codes. They are also used for some very simple identification protocols, like identification friend-or-foe.

Let's talk about how we go about constructing pseudorandom functions. The first approach is practical block ciphers like AES and Blowfish. These objects are extremely fast and super efficient; they are used all over, and they are secure against all known cryptanalytic attacks. However, we would like some sort of reduction to a complexity-theoretic assumption that has been well studied in theory. Towards this end, Goldreich, Goldwasser, and Micali gave the famous GGM construction, which builds pseudorandom functions from any secure length-doubling PRG. While simple, it is not that efficient: it is sequential, requiring k iterations for a k-bit input.
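To make that sequentiality concrete, here is a minimal sketch of GGM evaluation. The SHA-256-based length-doubling PRG below is just an illustrative stand-in, not part of any construction from the talk; any secure length-doubling PRG would be used in its place.

```python
# A minimal sketch of GGM evaluation, assuming a length-doubling PRG G.
# SHA-256 is used here only as an illustrative stand-in for G. Note the
# k sequential PRG calls needed for a k-bit input.
import hashlib

def prg(seed: bytes) -> tuple[bytes, bytes]:
    """Stand-in length-doubling PRG: 32-byte seed -> two 32-byte halves."""
    return (hashlib.sha256(seed + b'\x00').digest(),
            hashlib.sha256(seed + b'\x01').digest())

def ggm_prf(seed: bytes, x: str) -> bytes:
    """F_seed(x): walk the GGM tree, taking the left or right PRG half per input bit."""
    node = seed
    for bit in x:                      # one PRG call per bit -- inherently sequential
        left, right = prg(node)
        node = right if bit == '1' else left
    return node

if __name__ == "__main__":
    print(ggm_prf(b'\x00' * 32, '10110010').hex())
```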
So to remedy this, Naor-Reingold and then Naor-Reingold-Rosen, in a slew of papers, gave direct constructions of PRFs whose security is based on number-theoretic problems like factoring and discrete log. However, they suffer from two problems: despite being theoretically efficient, they require huge exponentiations, and these assumptions also fall to quantum attacks.

So how do we remedy these last two bullets? Well, the session topic and Daniel's talk should give you some clue, and the answer is lattices. The advantages we have seen before: lattices are simple and efficient, they resist quantum attacks, and they have very good worst-case to average-case reductions. So how do pseudorandomness and lattices stack up? What is the state of the art? You can construct a PRG from lattices, but then you would have to feed it into the inefficient GGM construction, which loses the efficiency that lattices provide. You could also try to construct a pseudorandom function directly, but then you need to answer adversarial queries, and the problem with these lattice-based assumptions is that they need fresh, short errors with each query. I'll talk about this problem in detail; it is the roadblock we hit and attempt to solve.

With that, we come to our results: efficient lattice-based pseudorandom functions, of two kinds. The first is a synthesizer-based construction in the style of Naor-Reingold '95, and then there are direct, subset-product-based constructions analogous to Naor-Reingold '97 and Naor-Reingold-Rosen 2000. Towards this end, there is a technique for de-randomizing the learning with errors problem, generating the errors deterministically somehow. In this talk I will cover the de-randomization technique; the details of the direct construction can be found in the paper on ePrint.

So let's talk about the de-randomization technique. I will be talking about the learning with errors problem, more specifically the ring variant, although all the results carry over to the general version as well. I will use the polynomial ring R, which is just the polynomials with integer coefficients reduced modulo the polynomial x^n + 1, which is cyclotomic when n is a power of two. The cyclotomicity is used for efficiency; I'll show you exactly when. We also use the ring R_q, which is the same ring except that the coefficients are now taken modulo q.

The LWE problem here is for the adversary to distinguish between two worlds. In the first world, the adversary gets a uniformly random element a_i from R_q together with its noisy ring product a_i * s + e_i with some element s, also from R_q. In the second world, both components of each sample are drawn completely uniformly at random. A couple of points: the secret s is uniform and is fixed across all the samples, and the errors e_i are drawn fresh for each sample according to a Gaussian distribution, so they are short, typically some small poly(n). This problem has been well studied in the literature, and its hardness is based on worst-case lattice problems.

The LWE problem by itself gives you a pseudorandom generator if you wish, but the problem is the random error e: each time you take a sample, you have to deal with getting a fresh error. This is a roadblock when you are dealing with query access from an adversary, that is, when you are trying to design a pseudorandom function. We could get around this problem if we could somehow generate these errors deterministically. Can we do that? The answer is affirmative. Instead of adding an error, we round to a sparser subgroup, namely the integers modulo p for some p less than q. What we do is divide Z_q into p chunks of size q/p each, round every element to one of these p values, and interpret the result as a number modulo p. This is the central technique that we use throughout. We extend this operation to the ring by rounding each coefficient, giving an operation from R_q to R_p.

This lets us define the problem we propose, the ring-LWR problem, where LWR stands for learning with rounding. Here, instead of distinguishing noisy ring products, you try to distinguish rounded ring products from uniform, this time over R_q cross R_p. The intuition is that LWE essentially hides the lower-order bits by adding short noise at the end; LWR instead discards those lower-order bits.
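As a small illustration of the rounding step just described, here is a sketch of the map from Z_q to Z_p and its coefficient-wise extension to the ring. The parameters and the nearest-integer rounding convention are chosen for illustration only.

```python
# A minimal sketch of the rounding map from Z_q to Z_p and its coefficient-wise
# extension from R_q to R_p. Toy parameters; nearest-integer rounding is one
# natural convention for mapping Z_q onto p evenly spaced values.
def round_to_p(x: int, q: int, p: int) -> int:
    """Round x in {0, ..., q-1} to the nearest multiple of q/p, read as an element of Z_p."""
    return round(p * x / q) % p

def round_ring(poly: list[int], q: int, p: int) -> list[int]:
    """Extend the rounding to R_q -> R_p by rounding each coefficient."""
    return [round_to_p(c, q, p) for c in poly]

# Example with q = 2^16 and p = 2^8: rounding keeps roughly the top 8 bits
# of each coefficient and discards the low-order bits.
q, p = 2**16, 2**8
print(round_to_p(300, q, p))              # -> 1
print(round_ring([0, 511, 65535], q, p))  # -> [0, 2, 0]  (65535 wraps around mod p)
```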
Now let's talk about the reduction for LWR. We prove that it is as hard as ring-LWE, provided the ratio of the moduli, the source modulus q over the target modulus p, is superpolynomial, and the error rate is small.

The reduction idea is very simple. The first step says that if you sample an element uniformly modulo q and round it, and p divides q, then the result is exactly a uniform element modulo p. The divisibility condition is not really necessary: if p does not divide q, you get statistical closeness instead of exact equality. Then we invoke the ring-LWE assumption and replace this uniform element with the noisy ring product, before rounding. And then, since the error is short and the moduli are set up in this fashion, the rounding of the noisy ring product is, with high probability, exactly the same as the rounding of the ring product alone. The actual proof is slightly more subtle and requires a more careful corner-case analysis.

The LWR assumption in itself directly gives you objects called synthesizers, which Naor and Reingold defined in their '95 paper. Synthesizers give you a k-bit PRF through a depth-log-k tree of synthesizers. I won't go into the details; they can be found in the paper.

Now let's move on to the direct construction. The synthesizer construction, while an improvement over GGM, where instead of a length-k sequence you get a log-k-level tree, is still only in NC2. Could we somehow improve this? The answer again is affirmative, and this is the construction. For the key, we choose a uniformly from R_q and we choose the elements s_1 through s_k from the error distribution of LWE, and then we compute this function. It may look complicated, but it is really simple: the input x_1 through x_k just indexes a subset of the s_i. If x_i is one we include s_i, and if it is zero we don't. Once we have that subset, we take the subset product, multiply by a, and round. This product can be computed efficiently, and this is where the cyclotomicity comes in: we can apply an FFT or Chinese-remainder-theorem style transformation to convert the polynomial product into a coordinate-wise product, and once we have that coordinate-wise product we can take discrete logs to reduce it to a subset-sum style operation. The construction is motivated by the direct constructions of Naor-Reingold and Naor-Reingold-Rosen, which are essentially very similar, except that they require exponentiations and we don't.
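To make the construction concrete, here is a self-contained sketch of the subset-product PRF with schoolbook ring arithmetic and toy parameters. A real implementation would keep everything in the FFT/CRT representation just mentioned, and the small-coefficient secrets below merely stand in for the LWE error distribution.

```python
# A minimal sketch of the direct subset-product PRF
#   F_{a, s_1..s_k}(x) = round( a * prod_{i : x_i = 1} s_i )   mapping R_q to R_p.
# Schoolbook multiplication mod (x^n + 1, q) is used for clarity; an efficient
# implementation would work in an FFT/CRT representation. All parameters below
# are toy values, purely for illustration.
import random

def ring_mul(a: list[int], b: list[int], q: int, n: int) -> list[int]:
    """Polynomial product modulo (x^n + 1, q), i.e. negacyclic convolution."""
    c = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                c[k] = (c[k] + ai * bj) % q
            else:                       # reduce using x^n = -1
                c[k - n] = (c[k - n] - ai * bj) % q
    return c

def prf(a: list[int], s: list[list[int]], x: list[int], q: int, p: int, n: int) -> list[int]:
    """Multiply a by the subset product of the s_i selected by x, then round to R_p."""
    acc = a
    for s_i, bit in zip(s, x):
        if bit == 1:
            acc = ring_mul(acc, s_i, q, n)
    return [round(p * c / q) % p for c in acc]

if __name__ == "__main__":
    n, k, q, p = 8, 4, 2**20, 2**8
    a = [random.randrange(q) for _ in range(n)]
    # "Short" secrets s_i with small coefficients, standing in for the LWE error distribution.
    s = [[random.randrange(-2, 3) % q for _ in range(n)] for _ in range(k)]
    print(prf(a, s, [1, 0, 1, 1], q, p, n))
```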
Let's talk about the proof of security of this PRF; I'll leave the construction up here. It is much like the LWR reduction, but slightly more complicated, because we have to handle adaptive adversarial queries. The strategy is simple: we try to embed the LWE challenge on each bit of the input. We define a new function, f-tilde of x, which is essentially the same as f of x, except that on the first bit we embed the LWE challenge in the following way. If x_1 is zero, we answer as usual: we multiply a with the subset product of the rest. If x_1 is one, we multiply a with s_1 and add a short error, and then take the product with the rest, so compared to the original function there is an extra error term in the product. Now, note that the s elements were all chosen to be short, and the error is also short, so this combined product is also short.

However, it grows exponentially in k, so you have to select your moduli appropriately. Then we employ the same argument as in the LWR reduction: with high probability, the two functions give the same value. Now, since we embedded the LWE challenge, we use a hybrid argument: by the ring-LWE assumption, we replace the first-bit answer with a completely uniform element. That gives a new function, f-prime of x, which uses only x_2 through x_k to produce its answer; so we have stripped off one bit. Then we embed the LWE challenge on the second bit to get another function, then on the third bit, and so on, until we reach a uniform function. That is how the proof works.

In conclusion, what we discussed today was the de-randomization technique for LWE, learning with rounding, which gives us synthesizers, and we also gave a direct construction of PRFs based on lattices. Recently, a couple of weeks back, Mark Zhandry posted a paper on ePrint showing that these constructions also yield quantum-secure PRFs, in the sense that the adversary is allowed to make superposition queries. So that is an exciting development.

Some open questions remain. The big open problem is to improve the ratio between the moduli q and p, which roughly corresponds to the LWE error rate. Currently our proofs need this ratio to be superpolynomial, n^omega(1): namely n^Theta(log k), and n^Theta(k) in the direct PRF. The hope would be to somehow match the error rates of LWE itself, which can be as small as about square root of n; we would want to achieve that. And that is the best we can hope for, because Arora and Ge showed a subexponential attack on LWE if you go below square root of n. For the actual PRF constructions, we would at least want these ratios to be poly(n). We could also look for efficient PRFs from other learning problems, like learning parity with noise or the subset-sum problem. These are all directions one could look at. That's all. I can take any questions.