Hi, my name is Peter, and I'm going to tell you about this paper on homomorphic secret sharing and public-key silent OT, which we build using new techniques for the Paillier cryptosystem. This is joint work with Claudio Orlandi and Sophia Yakoubov from Aarhus University. I'll start by introducing homomorphic secret sharing, and then move on to our main technical highlight, which is a new share conversion procedure based on Paillier encryption. We'll use this to construct a distributed multiplication protocol for homomorphic secret sharing. Later, I'll show how to use the same share conversion procedure to build a different object, called a pseudorandom correlation function. This can be used to produce a large quantity of correlated randomness, used in two-party or multi-party computation protocols. Our constructions have some nice features, like a simple public-key setup, and we can use them to build either vector OLE correlations or oblivious transfer correlations.

Homomorphic secret sharing is a form of succinct distributed computation on private inputs. In this talk, I'll be focusing on the two-server setting, where each input x is run through some share algorithm which outputs two shares, x0 and x1, given to the two servers, Alice and Bob. Then, when the servers want to evaluate some function, say a program P, across these inputs, they can locally perform an evaluation procedure which leads to a share of the result. Security requires that each of these shares individually completely hides all information about x. However, later on, when the parties want to reconstruct the output, they can combine these shares by just adding them together, to obtain the result P(x). Note that this reconstruction procedure really is just addition; some forms of HSS allow more complex reconstruction, but here we'll just be focusing on the simplest kind of additive reconstruction. Now, if you're familiar with standard secret sharing, then you'll know that using any linear
secret sharing scheme, such as additive secret sharing or Shamir secret sharing, you can build HSS for linear functions, just by applying the linear function to the shares. At the other end of the scale, using learning with errors and a circular security assumption, we can actually build HSS for any class of functions. Unfortunately, this construction isn't really so efficient, as it builds on top of expensive forms of fully homomorphic encryption. In between these two extremes, there are quite a lot of different possibilities, depending on what kind of assumptions we want to use. Using just one-way functions, for instance, we can build HSS for simple types of functions, like point functions, whereas using learning parity with noise, we can build HSS for more powerful classes, like low-degree polynomials. And assuming LWE, this time without expensive types of fully homomorphic encryption, we can get HSS for branching programs, which is a fairly large class of circuits, including logarithmic-depth circuits. Interestingly, using the DDH or Paillier assumptions, which aren't even known to imply fully homomorphic encryption, we can get the same type of HSS for branching programs.

There is a big drawback of these constructions, though, which is that there is a chance the result of the computation is incorrect, with some probability that is inverse polynomial in the security parameter. In this work, we improve upon the Paillier construction by reducing this error probability to something negligible. At the same time, we also manage to increase the message space, which was previously bounded to a polynomial size, to be exponential in the security parameter, matching all of the other known constructions. So we want to build HSS for branching programs. Now, the model of branching programs we'll consider is actually very simple.
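As a brief aside, the baseline linear case mentioned a moment ago can be sketched in a few lines of toy Python (an illustration of additive secret sharing, not code from the paper; the modulus and function are my own choices): each server applies the public linear function to its shares locally, and the outputs still reconstruct by addition.

```python
import secrets

MOD = 2**61 - 1  # toy prime modulus, chosen for illustration

def share(x):
    """Additively secret-share x into two shares that each hide x."""
    x0 = secrets.randbelow(MOD)
    x1 = (x - x0) % MOD
    return x0, x1

def eval_linear(coeffs, shares):
    """Each server applies the public linear function to its own shares."""
    return sum(c * s for c, s in zip(coeffs, shares)) % MOD

# Share two inputs; the servers evaluate f(x, y) = 3x + 5y locally.
x0, x1 = share(10)
y0, y1 = share(20)
out0 = eval_linear([3, 5], [x0, y0])  # Alice's output share
out1 = eval_linear([3, 5], [x1, y1])  # Bob's output share
assert (out0 + out1) % MOD == 3 * 10 + 5 * 20  # additive reconstruction
```

The point of HSS is to go beyond such linear functions while keeping exactly this kind of local evaluation and additive reconstruction.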
You can just think of it as arithmetic circuits with addition and multiplication gates, with the one constraint that in every multiplication gate, at least one of the inputs is actually an input wire to the circuit itself. So we can consider two types of values: input wires to the circuit, and intermediate computation values, which we'll call memory values. In the HSS construction, every input wire is going to be given out to the parties in an encrypted form, under some additively homomorphic encryption scheme. Then, for each memory value during the computation, the parties will end up holding a secret sharing of the corresponding message, under some linear secret sharing scheme.

With this, the types of operations we'll support are: firstly, addition, where we can add either two memory values or two input values, which is a simple local operation by linearity of the secret sharing scheme, or the homomorphism of the encryption scheme. Secondly, multiplication: because of the restricted model, this is only between an input wire and a memory wire, and this is the most complex part, where the magic happens. And finally, at the end, any memory value can be reconstructed, which just means revealing the shares.

Now I'm going to focus on the multiplication protocol, which is the most complex part of the HSS construction. So we have two values x and y we want to multiply: x is an input wire to the actual computation, so it's given in encrypted form, while y is a memory value.
We have additive secret shares of it. The parties will do some kind of distributed decryption-and-multiplication step, which ends up with them getting multiplicative shares of the product x times y. This comes in the form of group elements g0 and g1, where g0 times the inverse of g1 is g^(xy), for some fixed group element g. I'm not going to go into the details of this step, because it's essentially the same as in previous works. The only thing to point out is that since there's some kind of decryption of the ciphertext involved here, we do actually need the secret key to be involved. So instead of additive shares of just y, we're actually going to give the parties additive shares of d times y, where d is the secret key. Now, once we have these multiplicative shares, the most important step is the distributed discrete log algorithm, which will convert the multiplicative shares into additive shares, and that's the main step where this Paillier-based construction is different.

Let's start with a quick recap of the original distributed discrete log procedure by Boyle, Gilboa and Ishai, which works for the general DDH case of prime-order groups. So here we have the two group elements g0 and g1, which are multiplicative shares of xy, and we want to convert them to additive shares of xy. Looking at all the group elements on this circle we have here, we can consider them separated by multiplication by the generator g. If we keep going around from g1, we eventually get to g0, and then finally all the way up to g1 times g^(n-1) and back to g1, since n is the order of the group in this case. So if we look at the distance between g0 and g1, in terms of multiplication by powers of g, then this is of course the x times y value we're interested in. So one natural approach to obtaining the shares we want is just to fix some public element h on the circle here, then have Alice and Bob count the distance from their original inputs g0 and g1 up to h, in terms of multiplications by g. Of course, this will
get us valid additive shares (or subtractive shares, actually) of the message xy. Now, there's of course a big problem here, which is that these shares xy_0 and xy_1 will generally be very large, and computing this distance is going to involve a potentially exponential number of multiplications. So we want to make this more efficient. One thing you can do is to have many distinguished elements h which are publicly identifiable. Then the distance will be reduced, and if you choose them carefully, you can make this run in polynomial time. You do also have to make sure the message space has polynomial size, so that x times y is not too large. But the problem now is that this introduces some inherent failure probability: if one of the distinguished points falls between the two values g0 and g1, then Alice and Bob will measure the distance to different points. This gives an incorrect message, and the result of the computation will be wrong. So despite various optimizations and improvements over the years, all of the constructions for HSS based on this technique have this inverse polynomial failure probability, and are limited to a polynomial-size message space; and that's even for a variant over Paillier groups, which has been looked at previously.

So let's try to translate this to the case of Paillier encryption, and see what we can do better. Now, with Paillier, we're no longer working in a cyclic group; we're actually working with the integers modulo N^2 under multiplication, where N is an RSA modulus. The important fact here is that the element 1 + N generates a subgroup of this where the discrete logarithm problem is easy. So the first thing we'll do in our case is to replace the fixed element g with the special base 1 + N. Note that the original inputs g0 and g1 are actually still in the larger group.
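Jumping ahead a little, the whole Paillier share-conversion step can be sketched in a few lines of toy Python (an insecurely small modulus, and my own variable names, purely for illustration): each party maps its multiplicative share g_i to a common point h = g_i mod N, then solves the easy discrete log of h divided by g_i in the subgroup generated by 1 + N, where (1+N)^z = 1 + zN mod N^2.

```python
import secrets
from math import gcd

# Toy RSA modulus -- far too small to be secure, for illustration only.
p, q = 1000003, 1000033
N = p * q
N2 = N * N

def ddlog(g):
    """Distributed discrete log: map g in Z*_{N^2} to an additive share.
    h = (g mod N), lifted back into Z_{N^2}, is the same for both parties'
    shares; h * g^{-1} lies in the subgroup <1+N>, where dlog is easy."""
    h = g % N
    t = (h * pow(g, -1, N2)) % N2  # equals (1+N)^z = 1 + z*N mod N^2
    return ((t - 1) // N) % N      # read off the exponent z

# Build multiplicative shares g0, g1 with g0 / g1 = (1+N)^{xy} mod N^2.
xy = 123456789
while True:
    g1 = secrets.randbelow(N2)
    if gcd(g1, N) == 1:
        break
g0 = (g1 * pow(1 + N, xy, N2)) % N2

z0, z1 = ddlog(g0), ddlog(g1)
assert (z1 - z0) % N == xy  # subtractive shares of xy mod N
```

Each party runs `ddlog` on its own share with no interaction at all, which is exactly the local share-conversion property the construction needs.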
They're not in this easy subgroup; that's kind of inherent, because of the way the first part of the protocol, the distributed decryption, works. But when you look at their ratio, you end up mapping into the easy discrete log subgroup. The second change we want to make is that instead of having multiple h's, which leads to the inverse polynomial error probability, we're going to have just a single distinguished element h. This actually makes things really easy, because now Alice and Bob can just take h, divide it by g0 or g1, and get 1 + N raised to the power of their final share; they can solve the discrete logarithm and get the result.

The only problem is, of course: how do you actually find this h? Because now, if you look at the circle, this isn't actually the entire group; remember, the group is not cyclic. It's going to be a coset, determined by the original value g1, under multiplication by powers of 1 + N. If the parties choose h to be the special value determined by just reducing g0 or g1 modulo N, and mapping this back into the larger group mod N^2, this actually gets us what we want. First, notice that this does indeed give the same h for both g0 and g1, simply because they have this special ratio (1+N)^(xy): reducing mod N removes the N, so they are in fact equal modulo N. The second important thing is that this h also happens to be in the same coset as g0 and g1, which of course is what we need for this to be correct. The proof of this is actually very simple; just for some quick intuition, the idea is that by reducing g1 modulo N, we get a unique representative of the entire coset defined by g1. We can do this for any group element, it doesn't have to be g1, and it always gives a unique coset representative. And after mapping h back up to modulo N^2, if we reduce mod N again, of course we get the same value h, which means that h also has to be in this coset. So this is actually all we need to make
the distributed discrete logarithm algorithm work. And it gives us this nice property of only having a single element h, so we can support both a large message space and a negligible probability of error for each multiplication.

Just to summarize quickly what we've learned so far: we have this basic HSS construction from Paillier, with improved correctness error and a larger message space. One catch which I haven't mentioned is that, to repeatedly do the multiplication procedure, we do additionally need to give out encryptions of the inputs multiplied by the secret key, and this involves introducing a circular security assumption. However, we do have additional variants of HSS following the same blueprint. The first is a public-key variant, where you can share these inputs without knowing the private key, based on a variant of ElGamal over the Paillier group. And finally, we have a variant without a circular security assumption, using the Brakerski-Goldwasser encryption scheme, but this has much larger ciphertext sizes. Also note that there has been a work concurrent to ours, which achieves similar results using the Damgård-Jurik cryptosystem instead, and one nice thing about that is they don't need to assume circular security.

All right, so in the second half of the talk, we'll be looking at pseudorandom correlation functions. A pseudorandom correlation function, or PCF, gives us a way of obtaining a large amount of correlated randomness with minimal interaction. Before talking about the details of PCFs, let me say a little bit more about what I mean by correlations in general. A correlation can be seen as a distribution which, in this case, outputs a pair of strings r0 and r1, given privately to Alice and Bob. Depending on the correlation, this can often be used in secure two-party and multi-party computation protocols to help improve efficiency. One common example is the oblivious transfer correlation: here, Bob is the sender, who's given two random strings s0 and s1,
while Alice gets one of them, s_b, and a random bit b. Bob doesn't learn which string Alice got, and Alice doesn't learn the other string. Another useful correlation is oblivious linear evaluation, or OLE. Here, Alice and Bob get random field elements x and y, together with an additive secret sharing of the product x times y. In vector OLE, each sample from the correlation given to Alice and Bob will essentially be a single OLE sample, with the restriction that the value y given to Bob will always be the same across every output of the correlation.

In a PCF, Alice and Bob are first given a pair of correlated keys, k0 and k1, coming from some key generation distribution. Then, given the key and a public nonce, they can both locally run an evaluation procedure to obtain a single output from the correlation, r0 and r1. And they can repeat this as many times as they want, using different nonces, to obtain fresh samples from the correlation. PCFs have only been studied relatively recently, since a paper from last year. From that, we know that using the learning with errors assumption, we can get PCFs for any type of additively secret-shared correlation, but this relies on homomorphic secret sharing and is quite expensive. On the other hand, if we use a new variable-density variant of the learning parity with noise assumption, we can obtain PCFs with better concrete efficiency for simple correlations like oblivious transfer and degree-two correlations. In this work, we extend the class of PCFs we can build from standard assumptions with reasonable efficiency, by building PCFs for vector OLE and OT based on the Paillier assumption or quadratic residuosity. Now, compared with the other constructions, note that since these are all public-key type constructions, they require exponentiations for every evaluation of the PCF. So computationally, this is going to be much slower than LPN-based alternatives. But as a trade-off, we get by with much smaller key sizes, conceptually simpler
constructions, and reliance on classical number-theoretic assumptions.

So let's take a look at our pseudorandom correlation function for vector OLE. This is going to be a very simple application of the HSS multiplication procedure I showed you in the first part of this video, where we'll use it to multiply the secret value x_i, given to Alice, with the secret value y, given to Bob. Now, the x_i value is actually going to come from the public nonce given to both parties. The important thing here is that, given some public form of randomness in the nonce, we can use it to obliviously sample a Paillier encryption, where nobody knows the underlying message x. As well as that, in the private keys given to each of the parties, we're going to give them a secret sharing of the product d times y, where d is the Paillier decryption key. And of course, we'll give y to Bob, and we'll give the decryption key to Alice, so that she can use it to recover the message x. Given this, we have everything we need to run the HSS multiplication procedure: we multiply the ciphertext encrypting x with the shares of d times y, which gives Alice and Bob shares of x times y. And we can repeat this on input of a fresh nonce, getting a fresh encryption of a new value x, and a new vector OLE sample.

So this is a very simple construction of a PCF for vector OLE. In the paper, we also have a similar construction for generating random OT correlations instead. The main difference is that it uses Goldwasser-Micali encryption instead of Paillier. This is an encryption scheme that encrypts bits, and one key point there is that we actually have to repeat the process several times, to build up an OT on strings one bit at a time. Both of these PCFs do require some kind of trusted setup, in the form of the PCF keys, which are these additive shares of d times y: Alice has the decryption key d, and Bob has the secret scalar y. We show that we can actually remove this trusted setup and replace it with a
public-key setup, consisting of a single message sent from each of Alice and Bob. We formalize this by building a non-interactive vector OLE protocol, based on the same distributed discrete log procedure used in our HSS multiplication. Putting everything together, we obtain public-key pseudorandom correlation functions, which have the following structure. As part of our setup, we do rely on a common reference string, in the form of an RSA modulus N', given to both parties, where no one knows the secret factorization. Then Alice and Bob just have to exchange their public keys with a single message; they can use this to obtain their private PCF keys, which are the secret shares of d times y. Note that d here is a private key corresponding to a different RSA modulus than the one, N', in the CRS. Then, given any public nonce, the parties can do their HSS multiplication to get the PCF output, and of course repeat this as many times as they want.

So the main thing I want you to take away from this talk is this very nice technical trick, in the form of a share conversion procedure for Paillier encryption, which allows you to locally convert multiplicative shares of a message x in the exponent into additive shares of x. We show that this is very powerful. We can use it to build homomorphic secret sharing for branching programs, where we get negligible error and a large plaintext space. Additionally, we get new constructions of pseudorandom correlation functions, which allow you to produce an arbitrary number of vector OLEs or oblivious transfers, based on the same techniques. And in addition, we have this public-key setup procedure, to allow Alice and Bob to generate this correlated randomness after just exchanging one message each.

I'll conclude by mentioning a few open problems which I think might be interesting to look at. As I mentioned before, the PCF for oblivious transfer is quite a bit less efficient, since it requires
repeating this multiplication procedure one bit at a time, and this leads to a big security-parameter overhead in terms of the number of exponentiations; it would be great to remove this. Another limitation of our constructions is that if you want the public-key setup, then you do need this common reference string, in the form of a trusted RSA modulus. For the PCFs, it would also be interesting to try to expand the class of correlations we can construct: for instance, instead of just vector OLE, building full OLE correlations from Paillier. Another interesting direction might be to look at constructing public-key PCFs from other assumptions. For instance, we know how to do a lot of things from learning with errors, but actually nothing in the public-key setting from LPN. And finally, a big limitation of all of the constructions is that they only work for two parties; it would be really nice to have some techniques which allow us to go beyond this. Thanks for listening, and I hope you enjoyed the talk.