I'm Peter Scholl, and this is my talk on efficient pseudo-random correlation generators from ring LPN. This is joint work with my co-authors Elette Boyle, Geoffroy Couteau, Niv Gilboa, Yuval Ishai and Lisa Kohl. The main motivation for this work is secure multi-party computation in the pre-processing model. In this model, the parties involved, here Alice and Bob, are given access to a source of correlated randomness generated in a pre-processing phase by a trusted dealer. Then, in an online phase, the parties take the correlated randomness together with their inputs x and y, and exchange some messages to evaluate the function f on their inputs. The main advantages of using correlated randomness are that the online phase can be information-theoretically secure and also very efficient. In fact, the overhead of this protocol is just a small constant factor on top of evaluating and sending a circuit description of the function in the clear. However, the main drawback of this approach is that to generate the correlated randomness in a secure way, the parties have to run an interactive protocol. This pre-processing protocol is what typically dominates the overall secure computation cost in practice. A pseudo-random correlation generator, or PCG, is a tool that can potentially avoid this expensive pre-processing phase in secure computation. A PCG is defined for a target correlation (R0, R1), which might be, for example, a set of many oblivious transfers on random strings or a large number of multiplication triples. A PCG consists of two algorithms: firstly, a seed generation algorithm, which outputs a pair (k0, k1) of correlated seeds, together with an expansion algorithm, which takes one of the seeds and locally expands it to produce a large pseudo-random output. There's a correctness requirement, which says that the two expanded outputs should be indistinguishable from an actual sample from the correlation.
Secondly, the security requirement says that even if I'm given one of the seeds, say k0, then the other seed's expanded output should be indistinguishable from an output of the correlation, sampled conditioned on the expanded output from seed k0. Given a PCG to produce some correlated randomness, all we need now is a secure setup protocol to actually generate the PCG seeds without a trusted third party. Putting these things together, we obtain a secure computation protocol with what we call a silent pre-processing phase. Here we just have a small setup protocol to generate the short seeds, followed by silent and completely local expansion to produce the correlated randomness used in the protocol. Several previous works have constructed PCGs for useful classes of correlations. For instance, from learning with errors, we can construct a PCG for arbitrary additively secret-shared correlations using homomorphic secret sharing. However, this is based on fully homomorphic encryption and is computationally very expensive. There are also recent constructions based on the learning parity with noise (LPN) assumption instead of LWE. These can be used for simple correlations like vector oblivious linear function evaluation and oblivious transfer. These constructions have been shown to have good concrete efficiency in some recent works which have optimized them and presented implementations. LPN can also be used to construct more complex correlations like oblivious linear function evaluation, multiplication triples, and even higher-degree correlations. Unfortunately, these constructions are not practical, since they have at least a quadratic computational cost in n, where n is the length of the expanded correlated output. There are also older constructions of PCGs for simple linear classes of correlations, such as pseudo-random secret sharing, as well as a construction for one-time truth tables from our paper last year.
The main contributions of this work are efficient PCGs for correlations like oblivious linear function evaluation and multiplication triples, based on the ring LPN assumption. These constructions can silently expand n OLEs or triples in quasi-linear (in n) runtime, improving upon the quadratic cost of the previous LPN-based constructions. We also present several extensions and variants, for authenticated triples, multi-party triples, and other bilinear correlations such as matrix triples. We also present secure protocols for setting up the seeds of these PCGs with active security. In particular, for our two-party PCG for authenticated triples, this implies a silent pre-processing protocol for the SPDZ multi-party computation protocol. This protocol has very good concrete efficiency: the size of each party's seed can be as small as around a megabyte, and the runtimes for expanding the seeds are comparable to the times for generating triples using the Overdrive protocol, which is a non-silent protocol based on homomorphic encryption. On the technical side, some of our techniques were actually inspired by work on fully homomorphic encryption using ring LWE: for instance, a method for switching from LPN to ring LPN to avoid a quadratic blow-up, and a bootstrapping-style technique to optimize the PCG seed generation. Our most efficient constructions also rely on new arithmetic variants of ring LPN over polynomial rings. Since these hadn't been studied previously, we also provide new security analysis to improve our confidence in these assumptions. Our main goal will be to construct PCGs for the oblivious linear function evaluation (OLE) and multiplication triple correlations. OLE is a two-party functionality between a sender and a receiver: the receiver has input x, the sender has inputs u and v, and the receiver obtains w = ux + v.
Since we're constructing a correlation, we'll consider a random variant of this, where both parties' inputs are sampled uniformly. The related notion of multiplication triples is an n-party correlation, which samples random values a, b, and c, where c = a·b, and then distributes to each party an additive sharing of a, b, and c. So party i gets values a_i, b_i, c_i, where a is the sum of the a_i, and so on. These two correlations are related in the two-party setting: two random OLEs can be locally converted into a single multiplication triple. Finally, we will also consider authenticated multiplication triples, which are a stronger variant used in actively secure protocols such as SPDZ. Here, as well as a, b, and c, each party also gets an additive share of these values multiplied by a random MAC key delta. Before moving on to the constructions, I'll introduce distributed point functions. These are a secret-shared form of point function, which is a function parameterized by points alpha and beta, as well as a domain size n; on input x in the domain, it outputs beta if x equals alpha, and zero on every other input. A distributed point function consists of a key generation algorithm, which samples two correlated keys, and an evaluation algorithm, which takes one of the keys as well as a public input x, and outputs a secret sharing of the point function applied to x. There are very efficient constructions of distributed point functions based on any length-doubling pseudo-random generator, following a GGM-style tree construction. There's also a highly efficient setup protocol by Doerner and shelat for generating a pair of keys for a secret point function in a distributed manner. As a warm-up, we'll see that a distributed point function can be used to construct a simple PCG to compress a random unit vector.
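As a quick illustration of the two-OLE-to-triple conversion mentioned above, here is a toy Python sketch over a prime field. The modulus and all variable names are illustrative choices of mine, not from the talk: each party uses one OLE as receiver and one as sender to obtain shares of the cross terms a0·b1 and a1·b0.

```python
import random

p = 2**31 - 1  # an illustrative prime modulus

def random_ole():
    """One random OLE: receiver gets (x, w), sender gets (u, v), w = u*x + v mod p."""
    x, u, v = (random.randrange(p) for _ in range(3))
    return (x, (u * x + v) % p), (u, v)

# Two random OLEs with the roles swapped between the parties.
(x0, w0), (u1, v1) = random_ole()   # P0 is receiver, P1 is sender
(x1, w1), (u2, v2) = random_ole()   # P1 is receiver, P0 is sender

# Each party derives its triple shares locally.
a0, b0 = x0, u2                     # P0's shares of a and b
a1, b1 = x1, u1                     # P1's shares of a and b
c0 = (a0 * b0 + w0 - v2) % p        # P0's share of c (w0 - v1 shares a0*b1)
c1 = (a1 * b1 + w1 - v1) % p        # P1's share of c (w1 - v2 shares a1*b0)

# The reconstructed values form a multiplication triple c = a*b.
a, b, c = (a0 + a1) % p, (b0 + b1) % p, (c0 + c1) % p
assert c == (a * b) % p
```

The point of the sketch is that no communication is needed: each share is computed purely from values that party already holds.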
The key generation algorithm simply samples the distributed point function keys for a secret point alpha and value beta, and then, to expand this to a unit vector, each party simply locally evaluates its key on every input x in the domain to obtain a length-n vector; when the two vectors are summed together, one entry will be non-zero and the rest will all be zero. We can easily extend this from secret-sharing a unit vector to any sparse vector with t non-zero entries. The naive approach is to take one DPF for every point in the vector, evaluate these locally, and then add up the resulting shares. We can obtain a more efficient approach if the sparse vector can be obtained by concatenating t smaller unit vectors of the same length; however, this requires some additional structure on the positions of the non-zero entries in the sparse vector. The security of our constructions relies on arithmetic variants of the learning parity with noise assumption. In its most basic form, the primal version of LPN says that given A·s + e, where A is a large random public matrix, s is a secret vector, and e is a sparse error vector, this is indistinguishable from a uniformly random vector. Here, if we're over a large field, e has small Hamming weight, but the non-zero entries of e are uniformly random. This is also equivalent to LPN in its dual version, also known as the syndrome decoding problem, where we don't have a secret s, but the sparse vector e is simply multiplied by a public compressing random matrix H. We'll also consider variants where the matrices A and H are more structured, instead of being simply uniform, and we can do the same for the error vector e, for example using the regular structure from the previous slide. The starting point for our constructions is the PCG for the tensor product correlation from our paper from Crypto last year.
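To make the dual-LPN compression step concrete, here is a toy Python sketch with small illustrative parameters of my own choosing: each party locally multiplies its additive share of a sparse vector by the public matrix H, and by linearity the two results are shares of H·e, which dual LPN says is pseudo-random.

```python
import random

p = 97                 # small illustrative prime field
N, n, t = 64, 16, 4    # long vector length N, compressed length n, weight t

# Public compressing matrix H (n x N), as in dual LPN / syndrome decoding.
H = [[random.randrange(p) for _ in range(N)] for _ in range(n)]

# A t-sparse error vector e with uniformly random non-zero entries.
e = [0] * N
for pos in random.sample(range(N), t):
    e[pos] = random.randrange(1, p)

# Additive secret sharing of e (in the PCG, these shares come from t DPFs).
e0 = [random.randrange(p) for _ in range(N)]
e1 = [(e[i] - e0[i]) % p for i in range(N)]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) % p for i in range(len(M))]

# Each party compresses its share locally; by linearity the results
# are additive shares of H*e.
r0, r1 = mat_vec(H, e0), mat_vec(H, e1)
assert [(r0[i] + r1[i]) % p for i in range(n)] == mat_vec(H, e)
```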
Here the target correlation we want to consider is where the two parties each have a random vector, x0 and x1, as well as an n-by-n matrix, where these matrices are additive secret shares of the tensor product of the two vectors; here the tensor product denotes the n-by-n matrix consisting of every pairwise product. Note that this construction inherently has a quadratic computational cost just to compute the entire output, because of the fact that it's an n-by-n matrix. However, having a tensor product does mean that the parties can just take the n diagonal entries of these matrices, and these can be locally converted into n independent instances of OLE. The construction from the Crypto 2019 paper is based on two main ideas. Firstly, the fact that the tensor product preserves sparsity: if we have a tensor product of two sparse vectors, then the resulting matrix is still somewhat sparse, in that if the original vectors have t non-zero entries, then their tensor product has at most t squared. Secondly, they use the idea that we can randomize sparse vectors using the LPN assumption. For example, using dual LPN, we can take a secret-shared sparse vector, and then each party can locally multiply this by the public compressing matrix H to obtain something that is pseudo-random under LPN. Putting these two things together, the construction looks like this. Firstly, the generation algorithm will sample two sparse vectors, e and f, and give one of these to each of the parties. We'll then generate DPF seeds for the tensor product matrix e tensor f, which has at most t squared non-zero entries. In the expansion phase, the parties will evaluate the DPF instances on the n squared points in the domain to obtain matrices R0 and R1, which are secret sharings of the tensor product. Finally, to randomize this into a uniform tensor product, they will apply the LPN assumption twice: first, multiply the shares on the left by the matrix H, then, secondly, multiply on the right by the transpose of H.
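The double application of LPN can be sanity-checked with a toy computation: multiplying e tensor f by H on the left and H-transpose on the right yields exactly (He) tensor (Hf). A small Python sketch (all parameters are illustrative choices of mine):

```python
import random

p = 97
N, n, t = 8, 4, 2   # compress length-N vectors to length n, weight-t sparsity

H = [[random.randrange(p) for _ in range(N)] for _ in range(n)]

def sparse_vec():
    v = [0] * N
    for pos in random.sample(range(N), t):
        v[pos] = random.randrange(1, p)
    return v

e, f = sparse_vec(), sparse_vec()

# Tensor product of two t-sparse vectors: at most t^2 non-zero entries.
T = [[e[i] * f[j] % p for j in range(N)] for i in range(N)]
assert sum(x != 0 for row in T for x in row) <= t * t

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) % p
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) % p for i in range(len(M))]

# H * (e tensor f) * H^T equals (H e) tensor (H f): the two LPN
# applications commute with the tensor product.
Ht = [[H[i][j] for i in range(n)] for j in range(N)]
left = mat_mul(mat_mul(H, T), Ht)
he, hf = mat_vec(H, e), mat_vec(H, f)
right = [[he[i] * hf[j] % p for j in range(n)] for i in range(n)]
assert left == right
```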
You can see that this gives us the tensor product of He and Hf, which is pseudo-random under LPN. The problem with this construction is that the parties inherently have to compute an entire n-by-n tensor product, which is computationally very expensive. Even if they only want to obtain n OLEs, which would be the diagonal entries of this matrix, we don't see a natural way to obtain these without computing everything else. So the natural question is whether there's a way of modifying the matrix H, relying on a variant of LPN, that would allow us to compute these diagonal entries directly. A natural first attempt is to try using a matrix H consisting of Vandermonde matrices, based on evaluating polynomials. This does actually allow you to do the computation efficiently; unfortunately, however, it turns out to be insecure due to an algebraic attack. But based on a slight variant of this, using the ring LPN assumption over polynomial rings, it turns out we can actually build something secure. The precise assumption we rely on is ring LPN over a polynomial ring R_p, which is polynomials over Z_p modulo x^N + 1, where N is a power of two. This assumption states that given a random polynomial a, together with a·e + e', where e and e' are sparse polynomials, this is pseudo-random. Our most efficient constructions will also use a reducible form of this, where x^N + 1 actually splits into linear factors modulo p. This means that R_p is isomorphic to Z_p^N, which means that a multiplication triple in R_p can be locally converted into N independent triples in Z_p. Since the reducible form of ring LPN hasn't previously been studied, in the paper we perform some security analysis, where we observe that it only appears slightly weaker than the standard irreducible form of ring LPN, or even standard LPN.
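To illustrate the reducible case, here is a toy Python sketch with illustrative parameters of my own choosing (p congruent to 1 mod 2N, so that x^N + 1 splits into linear factors mod p): a multiplication triple over R_p evaluates, at each root of x^N + 1, to a scalar triple over Z_p.

```python
import random

p, N = 17, 4                    # p = 17 is 1 mod 2N = 8, so x^N + 1 splits mod p
assert (p - 1) % (2 * N) == 0

# Find the N roots of x^N + 1 mod p by brute force (fine at toy sizes).
roots = [r for r in range(p) if pow(r, N, p) == p - 1]
assert len(roots) == N

def poly_mul_mod(a, b):
    """Multiply two degree-<N polynomials modulo x^N + 1 over Z_p."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] = (c[i + j] + ai * bj) % p
            else:                         # reduce using x^N = -1
                c[i + j - N] = (c[i + j - N] - ai * bj) % p
    return c

def poly_eval(f, x):
    return sum(ci * pow(x, i, p) for i, ci in enumerate(f)) % p

# One multiplication triple over R_p = Z_p[x]/(x^N + 1)...
a = [random.randrange(p) for _ in range(N)]
b = [random.randrange(p) for _ in range(N)]
c = poly_mul_mod(a, b)

# ...unpacks into N independent triples over Z_p by evaluating at the roots.
for r in roots:
    assert poly_eval(c, r) == poly_eval(a, r) * poly_eval(b, r) % p
```

Evaluation at the roots is exactly the CRT isomorphism from R_p to Z_p^N, and it is a local linear map, so secret shares of the ring triple unpack the same way.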
The main attack that you have to watch out for is that an attacker can exploit sparse factors of the polynomial x^N + 1 and use these to reduce an instance down to a smaller dimension. However, there's a tradeoff here, since reducing the dimension also increases the noise rate; so as long as we carefully choose the parameters, we can ensure that this is still secure. Using ring LPN instead of LPN, the construction of the PCG will now use a polynomial product instead of a tensor product. So the seed generation algorithm will sample some sparse polynomials e, e', f, f', and then multiply every pair of these: this is just a length-two tensor product over the polynomial ring. Note that each of these polynomial products, reduced modulo x^N + 1, is another polynomial of degree less than N with at most t squared non-zero coefficients. In the expansion phase, after obtaining shares of each of these polynomial products, we can then randomize them using LPN by multiplying with a public random polynomial a. This gives the parties shares of the product of (a·e + e') with (a·f + f'), since this is linear in the products of the sparse polynomials. This is pseudo-random under ring LPN, and can then be locally unpacked into Z_p to obtain N multiplication triples over Z_p. So now, instead of an n squared cost to produce n triples in Z_p, we've managed to reduce this. Concretely, the DPF expansion phase requires O(n·t^2) PRG operations; however, this can be reduced to O(n·t), or even better, using other techniques. The polynomial products then require O(n log n) arithmetic operations. The seed size is around that of t squared DPFs, which is O(t^2 · lambda · log n) bits. One nice thing about this construction is that we can go from multiplication triples to authenticated triples almost for free, by simply multiplying everything by the random MAC key.
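The local expansion step can be sketched in toy Python. The parameters, and the direct additive sharing of the cross products (standing in for the DPF-based compression), are illustrative simplifications of mine: each party combines its shares of the four sparse products e·f, e·f', e'·f, e'·f' with the public polynomial a to get a share of (ae + e')(af + f').

```python
import random

p, N, t = 17, 8, 2   # toy parameters: ring Z_p[x]/(x^N + 1), weight-t noise

def poly_mul_mod(a, b):
    """Schoolbook multiplication modulo x^N + 1 over Z_p."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] = (c[i + j] + ai * bj) % p
            else:
                c[i + j - N] = (c[i + j - N] - ai * bj) % p
    return c

def padd(u, v):
    return [(x + y) % p for x, y in zip(u, v)]

def share(v):
    """Additive secret sharing of a coefficient vector."""
    s0 = [random.randrange(p) for _ in v]
    return s0, [(x - y) % p for x, y in zip(v, s0)]

def sparse():
    v = [0] * N
    for pos in random.sample(range(N), t):
        v[pos] = random.randrange(1, p)
    return v

a = [random.randrange(p) for _ in range(N)]   # public ring-LPN polynomial
e, e1, f, f1 = sparse(), sparse(), sparse(), sparse()

# Shares of the four sparse cross products (each has at most t^2 non-zero
# coefficients; here shared directly, in place of the DPF seeds).
cross = {name: share(poly_mul_mod(u, v))
         for name, (u, v) in {'ef': (e, f), 'ef1': (e, f1),
                              'e1f': (e1, f), 'e1f1': (e1, f1)}.items()}

a2 = poly_mul_mod(a, a)

def expand(party):
    # Locally combine: a^2*<ef> + a*<ef'> + a*<e'f> + <e'f'>
    out = poly_mul_mod(a2, cross['ef'][party])
    out = padd(out, poly_mul_mod(a, cross['ef1'][party]))
    out = padd(out, poly_mul_mod(a, cross['e1f'][party]))
    return padd(out, cross['e1f1'][party])

z0, z1 = expand(0), expand(1)
x = padd(poly_mul_mod(a, e), e1)   # x = a*e + e'
y = padd(poly_mul_mod(a, f), f1)   # y = a*f + f'
assert padd(z0, z1) == poly_mul_mod(x, y)
```

The key point is that (ae + e')(af + f') expands into a linear combination of the four sparse products, so each party's combination is entirely local.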
Since multiplying by a scalar doesn't affect sparsity, this adds only a small constant-factor overhead on top of the basic construction. Finally, in the paper, we present several extensions to the construction, including PCGs for inner product correlations, matrix multiplication triples, and generalized degree-two correlations. And except for the authenticated triples construction, we can extend all of these to the multi-party setting. As well as our PCG constructions, we give efficient setup protocols for generating the PCG seeds in a distributed way. Here, in our OLE or triples construction, the main challenge is to set up DPF keys for a product of two sparse polynomials in the ring R_p with malicious security. The first step is a secret-shared sparse polynomial multiplication; importantly, we want the communication to be independent of the degree of the polynomials. We show that this can be done with t squared multiplications over Z_p, as well as t squared binary addition circuits to compute XOR sharings of the non-zero positions in the polynomial product. Once we have this, we also need a maliciously secure protocol to set up the DPF seeds. We do this by extending the semi-honest Doerner-shelat protocol from CCS 2017, by doing the computation on authenticated inputs and adding a correctness check to verify the authenticity of the outputs. The protocol is not fully actively secure: it does still allow selective-failure attacks. However, we show that these only result in at most one bit of leakage on the LPN error vector. Using this protocol, we give some efficiency estimates for generating authenticated multiplication triples for the SPDZ protocol for actively secure two-party computation. In this protocol, the PCG setup phase requires around 25,000 triples to generate the first PCG seeds; this costs around four megabytes of communication.
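To illustrate the t-squared cost structure of sparse polynomial multiplication (here in the clear, without secret sharing; the maliciously secure protocol in the paper works over shares), a toy Python sketch with parameters of my own choosing:

```python
import random

p, N, t = 97, 64, 4   # toy prime, ring degree, and sparsity

def sparse_poly():
    """A t-sparse polynomial represented as {position: coefficient}."""
    return {pos: random.randrange(1, p) for pos in random.sample(range(N), t)}

def sparse_mul(f, g):
    """Multiply two sparse polynomials mod x^N + 1 using only t^2
    coefficient products; output positions are sums of input positions."""
    h = {}
    for i, fi in f.items():
        for j, gj in g.items():
            k, sign = (i + j, 1) if i + j < N else (i + j - N, -1)
            h[k] = (h.get(k, 0) + sign * fi * gj) % p
    return h

f, g = sparse_poly(), sparse_poly()
h = sparse_mul(f, g)
assert len(h) <= t * t   # the product stays t^2-sparse

# Cross-check against dense schoolbook multiplication mod x^N + 1.
df = [f.get(i, 0) for i in range(N)]
dg = [g.get(i, 0) for i in range(N)]
dh = [0] * N
for i in range(N):
    for j in range(N):
        if i + j < N:
            dh[i + j] = (dh[i + j] + df[i] * dg[j]) % p
        else:
            dh[i + j - N] = (dh[i + j - N] - df[i] * dg[j]) % p
assert dh == [h.get(i, 0) for i in range(N)]
```

Only the t-squared products of coefficients and the additions of position indices matter, which is why the communication of the secret-shared version can be made independent of the degree N.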
In the silent expansion phase, these can then be expanded to produce a million triples in under 20 seconds of computation. We can then perform a bootstrapping phase, where we take some of the expanded triples and reserve them to generate the next pair of PCG seeds. In this way, the cost of the 25,000 seed triples essentially only needs to be paid once, and can be seen as a one-time setup. For comparison, the state-of-the-art Overdrive protocol for generating SPDZ triples using homomorphic encryption requires a one-time distributed key generation setup, and then costs around 20 seconds of computation time, as well as two gigabytes of bandwidth, to produce the same number of triples. So using a PCG, it seems that we can obtain the benefits of silent pre-processing with roughly comparable computation costs to non-silent approaches. In summary, we've shown that PCGs for OLE and multiplication triples give a practical approach using the ring LPN assumption. This expands the types of correlations we can generate efficiently using PCGs, and gives us the benefit of silent pre-processing compared with previous protocols. There are still some interesting open problems that remain. For example, our triple generation construction can only silently generate triples over reasonably large finite fields; it would be nice to extend this to work over the integers modulo 2^k, or modulo 2. Also, the matrix triples construction we give is not that efficient, and it would be great to improve it, as well as to have better constructions for higher-degree correlations. And finally, I'd like to see some more study of the security of ring LPN, including the different variants which we use in our constructions. Thanks for listening, and I hope you've enjoyed this presentation.