Thank you very much for the introduction. And thanks, everyone, for being here. So this talk is about homomorphic secret sharing, which is a very powerful form of secret sharing that allows you to do computations on secret-shared values, such that you end up with a secret sharing of the result, in a non-interactive way. Just as in regular secret sharing, in the two-party setting, which I'm going to focus on here, we have a share algorithm which outputs two shares of a secret value x. And the security requirement is that individually, each of these shares completely hides the secret. On top of this, we require a homomorphic evaluation procedure, which takes as additional input a public program P, and then homomorphically evaluates this program on the shares locally. For correctness, we require that after evaluation, we end up with a secret sharing of the result of P applied to the input x. And I'm going to consider a very strong form of reconstruction for the secret sharing scheme, where the outputs of the evaluation procedure can simply be added together to give the correct result applied to the inputs. This might seem very strong, but it's actually very useful and important, particularly for applications like secure computation, where we want to get very low communication overhead. As a simple example of homomorphic secret sharing, you can just take standard additive secret sharing over some finite field, where the share algorithm will simply sample two random shares which sum up to the secret. Then, to evaluate any linear program P, we can simply apply P to the shares individually, and these will sum up to give the correct result, P(x). But if we want to get homomorphic secret sharing for more complex functionalities, then we're going to have to work a lot harder. 
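[Slide note] The additive-sharing example just described can be sketched in a few lines of Python. This is a toy illustration only; the modulus Q and the linear program are arbitrary choices, not parameters from the talk:

```python
import secrets

Q = 2**61 - 1  # an illustrative prime modulus for the field Z_Q

def share(x):
    """Additive secret sharing: two random shares summing to x mod Q."""
    s0 = secrets.randbelow(Q)
    s1 = (x - s0) % Q
    return s0, s1

def eval_linear(coeffs, shares):
    """Apply a public linear program p(x) = sum_i c_i * x_i locally to one party's shares."""
    return sum(c * s for c, s in zip(coeffs, shares)) % Q

# Share two inputs, then evaluate p(x1, x2) = 3*x1 + 5*x2 on each side separately.
a0, a1 = share(7)
b0, b1 = share(11)
y0 = eval_linear([3, 5], [a0, b0])
y1 = eval_linear([3, 5], [a1, b1])
assert (y0 + y1) % Q == 3 * 7 + 5 * 11  # shares of the output sum to p(x)
```

Each party touches only its own shares, yet the two outputs recombine by simple addition, which is exactly the strong reconstruction property mentioned above.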
So the first line of work in this field goes back to 2014, when Gilboa and Ishai showed how to get this for simple classes of functions, including point functions and interval functions, using only a pseudorandom generator. These constructions are also very efficient, with a nice low computational overhead. At the other end of the scale, we know that we can get homomorphic secret sharing for arbitrary circuits by building on top of specific kinds of fully homomorphic encryption, based on lattice assumptions. But as with FHE generally, these have a very high computational overhead. In particular, you either need some expensive bootstrapping procedure, or, if you avoid this, then you have some kind of noise growth which increases during the computation, and the ciphertext multiplication procedure gets more expensive to handle this. At Crypto 2016, Boyle, Gilboa and Ishai presented a very exciting work which constructed homomorphic secret sharing for log-depth circuits based on just the DDH assumption. This was a really amazing feasibility result, because it managed to avoid using fully homomorphic encryption for what's actually a very expressive class of functions. On the downside, from a practical point of view, this is still not really very efficient. In particular, it has some kind of inherent non-negligible correctness error from applying the homomorphic evaluation procedure, and to make this reasonably small you have to add a lot of extra computation. On top of that, the plaintext space is also limited to be only of polynomial size. So in this work, the main question we're interested in is: can we build some kind of efficient homomorphic secret sharing from lattice-based assumptions, without relying on fully homomorphic encryption? Let me stress that this is not a feasibility question. Of course, we know that if we allow lattice-based cryptography, we can build FHE. It's much more about the concrete efficiency. 
So we're looking for something that avoids, or at least has a much more efficient, homomorphic evaluation procedure than using FHE. The result we get is something which fits somewhere in the middle of this picture: a construction of homomorphic secret sharing for NC1 circuits based on lattices, which is more efficient than FHE. The bottom line of our result is that we avoid the costly homomorphic multiplication procedure which is present in all known fully homomorphic, or even somewhat homomorphic, encryption schemes, and we get something much cheaper, roughly akin to the cost of just a decryption operation in a lattice-based encryption scheme. In practice, this can be up to around an order of magnitude faster. And then, compared with the DDH-based construction from Crypto 2016, we manage to get negligible correctness error, and we can also support much larger plaintext spaces without too much extra cost. Some of the highlights from our work are a set of techniques which we apply to lattice-based encryption schemes when used in this distributed setting, which may have other applications. We also present some optimizations for specific settings of our scheme, and then look at some applications, where we show that we get some nice concrete efficiency improvements. Okay, so I'll start by giving a high-level overview of the construction from Crypto 2016 based on DDH. This is also relevant to our work, which follows a similar blueprint. The computational model we'll consider is something called restricted multiplication straight-line programs. This is a class of programs that encompasses all branching programs, and in particular logspace computations and NC1, or log-depth, circuits. In these programs, we have two different data types, which can be divided into input values and memory values. In the DDH construction, the input values correspond to inputs of the program, which are given in a kind of encrypted form with some public ciphertext. 
And for every memory value, which can be seen as an intermediate computation value, the parties actually have some additive secret sharing of some representation of the underlying value. Whenever you see this yellow and red box on the slides, from now on it indicates that something is being additively secret shared. In one of these programs, we can do different kinds of operations. We can do additions on two values which have been stored in memory; because we have additive secret sharing, this is just simple addition on the shares. And then we can do a multiplication, which always has to be between one of the input values, given as a ciphertext, and one of the memory values, given as two secret shares. This will always result in another memory value, in secret-shared form. On top of this, we can also do conversions, by loading input values into memory, but not the other way around. In the DDH construction, it's this multiplication step which is by far the most complex part of the process, and which leads to the non-negligible correctness error we'd like to try and avoid. Okay, so in our construction, the main tool we use is an abstraction of most lattice-based encryption schemes with a property which we call nearly linear decryption. If you take pretty much any encryption scheme based on LWE or Ring-LWE and look at the decryption equation, it always has this nice property that you can divide it into two parts: firstly, a linear part, which we'll call LinDec, and then a non-linear part. The linear part of the equation takes as input the secret key and a public ciphertext encrypting some value x, and what it ends up with is an approximation of q over p times the message, where q is the modulus where ciphertexts lie and p, which is smaller than q, is the plaintext modulus. 
So then the second part of the decryption actually recovers the message, just by rounding this to the nearest multiple of q over p and then scaling down to recover x modulo p. So how can we exploit this nearly linear decryption to do homomorphic secret sharing multiplication without any kind of FHE machinery? Our main idea is to do multiplication via distributed decryption. So imagine the parties first have additive secret shares of the secret key for this encryption scheme, and some public ciphertext encrypting a value x. If we apply the linear part of decryption, this function LinDec, where each party just applies it locally to its secret key share given the ciphertext — because this is a linear function — then this gives them additive shares of this approximation of q over p times x. Now, the problem here is that since we want homomorphic secret sharing where we get additive shares of the output, we actually need a way to get this down to exact additive shares — some method to remove the noise in this sharing. The way we do this is with a local rounding trick, based on a previous paper by Dodis et al. We have a rounding lemma, which roughly states that if the parties have an additive secret sharing where individually the shares are uniformly random, but the sum of the shares is somewhat close to a multiple of q over p, then if the parties just round their shares locally and then sum them up, this is equivalent to first adding the shares and then rounding afterwards, except with some small probability. In this case, the failure probability corresponds to one of the random shares falling into one of these red regions on the graph on the slide, and as long as you choose the modulus q to be superpolynomially larger than the modulus p, this happens with negligible probability. 
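[Slide note] The rounding lemma can be checked numerically. This is a toy sketch with illustrative parameters (in a real scheme, q would be superpolynomially larger than p in the security parameter):

```python
import random

rng = random.Random(0)
p = 2**10          # plaintext modulus
q = p * 2**30      # ciphertext modulus, chosen much larger than p

def round_share(t):
    """Round t to the nearest multiple of q/p and scale down (exact integer arithmetic)."""
    return ((t * p + q // 2) // q) % p

x = 123                              # the shared plaintext
e = rng.randrange(-50, 50)           # small noise, |e| far below q/p
t0 = rng.randrange(q)                # uniformly random first share
t1 = ((q // p) * x + e - t0) % q     # shares sum to (q/p)*x + e mod q

# Each party rounds locally; the rounded shares recombine to x mod p,
# unless t0 lands in a narrow bad region (probability roughly p*|e|/q).
assert (round_share(t0) + round_share(t1)) % p == x
```

Local rounding commutes with recombination here precisely because the gap between q and p swamps both the noise e and the width of the bad region.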
Okay, so putting these two things together, so far we've got a multiplication procedure which takes as input, on the one hand, one of these memory values, which now we're saying are secret shares of the value y multiplied by the secret key, and on the other, an input value, which is a ciphertext encrypting x known to both parties. The parties apply the linear decryption function and the rounding function, and end up with secret shares of x times y. But going beyond one multiplication, ideally we'd like to apply this again, and for this it seems like we need to get shares of x times y, also times the secret key, to do another multiplication. Now, this is actually quite easy to get. We just make one small modification: instead of having an encryption of x, we have an encryption of x times the secret key, and then this multiplication by the secret key just ends up in the output — we get shares of x times y times the secret key. But this is not quite enough to keep going. The problem is that after doing the rounding step, we took the shares modulo q and brought them down to modulo p, and this seems incompatible with the previous multiplication, where we required shares starting out modulo q. One thing we could do at this point, to do another multiplication, is take a different encryption scheme which has a ciphertext space modulo p, and encrypts messages at a much smaller level, modulo some p1 smaller than p, and continue at the smaller level. But then, if we want to do several multiplications, we need some kind of levelled scheme, with a whole chain of decreasing moduli, where the gap between each pair of moduli has to be superpolynomial for the rounding trick to work. This is clearly not going to be practical beyond a few multiplications. So ideally, what we want is some way of taking shares modulo p and converting them back up to shares modulo q, so we can use them again. 
So a way of doing this is a simple trick we call the lifting lemma, which roughly states that if we have random additive secret shares modulo p of some value z, with the additional requirement that this value z is much smaller than the modulus p, then these shares will actually already give a correct sharing of z modulo q. This means you can just take the shares modulo p and use them straight away for the next level, modulo q, without having to do any extra work. I like to think of this as the "do nothing" lemma. The reason this works: if you look at the scale which represents Z_p, going from minus p over two up to p over two, then in most cases, when z is small, you expect the two shares to fall one on each side — one negative and one positive. This means that when you sum them up over the integers, this is already just a sharing of z over the integers, so clearly you can take it mod q as well. And just as with the rounding lemma, the only thing we have to watch out for is the corner case where one of the shares falls into the red region, which is small enough this time if we choose p to be superpolynomially larger than the magnitude of the value z. So now, putting these three things together, we can actually get a complete multiplication procedure based on this distributed decryption technique. We have a public input ciphertext, this time encrypting x times the secret key, and we have a memory value, which represents two additive secret shares of y times the secret key, and then we apply the three techniques of linear decryption, rounding and lifting, to finally end up with another memory value of the product. Because these are shares modulo q again, we can just use this for the next multiplication with another input value, say w, and keep on going as many times as we want. 
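[Slide note] The "do nothing" lemma is easy to see in code, provided shares are kept in centered representation. Again a toy sketch; the moduli p, q and the bound on z are illustrative:

```python
import random

rng = random.Random(1)
p = 2**40          # current share modulus
q = 2**80          # larger modulus we want to lift the shares to
z = 1234           # small shared value, |z| much smaller than p

def centered(t, m):
    """Representative of t mod m in the centered range (-m/2, m/2]."""
    t %= m
    return t - m if t > m // 2 else t

s0 = rng.randrange(p)      # uniformly random share mod p
s1 = (z - s0) % p          # shares sum to z mod p

c0, c1 = centered(s0, p), centered(s1, p)
# With overwhelming probability one centered share is negative and one positive,
# so they sum to z over the integers -- and hence also modulo q, with no work.
assert c0 + c1 == z
assert (c0 + c1) % q == z % q
```

The only failure case is s0 landing in a region of width about |z| near p/2, matching the red region on the slide; its probability is roughly |z|/p.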
So what this gives us is a homomorphic evaluation procedure that allows us to evaluate an arbitrary branching program, as long as we have the guarantee that every intermediate plaintext value in the computation has bounded magnitude, satisfying an equation where the two inequalities come from the rounding and lifting lemmas, making sure that the values of the shares don't fall into these red regions from the pictures. Now, you might have noticed a slight problem with this, which is that we have to have these encryptions of x times the secret key for every input value x. In general, that might give you a circular security issue, which you have to watch out for. But thankfully, for all of these lattice-based encryption schemes, there are actually some results which show that these schemes are naturally circular secure, even when encrypting linear functions of the secret key, which is essentially due to the linear property of the decryption equation. So this is not a problem for security, and actually, on top of this, we can exploit the proofs in these works, which prove circular security, to also give us an efficient mechanism for generating these encryptions of x times the secret key. This can be done given only the public key and the input value x. This is really nice, because with the homomorphic secret sharing scheme, it gives you an efficient way of generating the two shares which you need for evaluation in just a public-key manner, without any special kind of setup. Okay, so I'll now give a quick example of an application — something you can do with homomorphic secret sharing. One of the interesting things you can do is a generalized form of two-server private information retrieval, which you can think of as making private queries to a public database. 
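[Slide note] A single multiplication step of the kind described above can be sketched with an insecure, scalar "LWE-like" toy. Real schemes use vectors over rings with proper noise distributions, and all names and parameters here are illustrative, not the paper's:

```python
import random

rng = random.Random(2)
p = 2**16
q = p * 2**48                 # large gap between p and q so local rounding succeeds
s = rng.randrange(2**8)       # small toy "secret key" (a scalar; not secure)

def encrypt(m):
    """Toy ciphertext (c0, c1) with c1 - s*c0 = (q/p)*m + small noise mod q."""
    c0 = rng.randrange(q)
    e = rng.randrange(-8, 8)
    c1 = (s * c0 + (q // p) * m + e) % q
    return c0, c1

def share_mod(v, m):
    v0 = rng.randrange(m)
    return v0, (v - v0) % m

def round_share(t):
    return ((t * p + q // 2) // q) % p

x, y = 9, 5
c0, c1 = encrypt(x)               # public input ciphertext
y0, y1 = share_mod(y, q)          # memory value: shares of y
ys0, ys1 = share_mod(y * s, q)    # memory value: shares of y * s

# Each party applies the linear decryption function to its own shares only:
t0 = (y0 * c1 - ys0 * c0) % q
t1 = (y1 * c1 - ys1 * c0) % q
# t0 + t1 = y*(c1 - s*c0) = (q/p)*x*y + y*e mod q; local rounding yields shares of x*y.
assert (round_share(t0) + round_share(t1)) % p == (x * y) % p
```

To keep going, the same step is run against a ciphertext encrypting x times s, producing shares of x*y*s, and the lifting lemma brings the rounded shares back up to modulo q for the next multiplication.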
So imagine we have two non-colluding servers, who each have a copy of some public database, and a client who has some private query as input, and the client wants to interact with the servers to obtain the result of the query. Using homomorphic secret sharing, the client can just split the query up into two shares, then send each share to the servers, who will locally do the homomorphic evaluation procedure and send back the results. The client just needs to add together the shares — this is very cheap computationally — and recover the result. Compared with other kinds of private information retrieval, there are less expensive versions of this, but they offer much less expressive classes of functionalities. We obtain a more enhanced set of features: things like conjunctive keyword searches and pattern matching can be supported. At the other end of the scale, compared with doing the same kind of thing with somewhat homomorphic encryption, although we have this two-server setting instead of one, we end up getting much smaller ciphertexts — around three times smaller — and we can reduce the computational overhead by around an order of magnitude for this application. Okay, so summing up, we showed how to build an efficient homomorphic secret sharing scheme from lattice-based encryption schemes, using these three main tricks of nearly linear decryption, the rounding trick and the lifting trick, which put together give us HSS for these restricted multiplication programs, covering any kind of branching program and log-depth circuits. In the paper, we also present a few extensions and optimizations for this, including optimizations for the special case of degree-two programs, and a secret-key homomorphic secret sharing which can be applied to the private information retrieval example. We also present an extension supporting batched operations, so you can do parallel computations on many independent plaintext values. 
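[Slide note] In the simplest, purely linear case — a plain index query — the two-server pattern just described already works with additive sharing alone. This toy sketch shows that base case; the HSS construction extends the same pattern to richer queries such as keyword search:

```python
import random

rng = random.Random(3)
Q = 2**31 - 1
db = [17, 42, 99, 7]          # public database, held by both servers

def client_query(index, n):
    """Split a one-hot query vector into two additive shares, one per server."""
    onehot = [1 if i == index else 0 for i in range(n)]
    sh0 = [rng.randrange(Q) for _ in range(n)]
    sh1 = [(b - r) % Q for b, r in zip(onehot, sh0)]
    return sh0, sh1

def server_answer(share, db):
    """Each server evaluates the (linear) query locally on its share alone."""
    return sum(s * d for s, d in zip(share, db)) % Q

sh0, sh1 = client_query(2, len(db))
ans = (server_answer(sh0, db) + server_answer(sh1, db)) % Q
assert ans == db[2]           # neither server alone learns which index was queried
```

Each share individually is uniformly random, so a single server sees nothing about the query; the client only performs one addition to reconstruct.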
To conclude, I want to end by giving a few interesting open questions, which I'd be very happy to see solved. Firstly, all of the techniques which we use seem restricted to the two-party setting. In particular, the rounding and lifting tricks fail with constant probability as soon as you go to just three parties. This seems kind of inherent to our techniques, and it would be nice to have a way to work around it. Interestingly, this is also the case for pretty much all known efficient homomorphic secret sharing schemes — we can't get anything that is reasonably efficient beyond the two-party case. This is the big open problem. A related issue with our scheme is that, because of these tricks again, we require the modulus q to be superpolynomial in the security parameter, and if we could reduce this, then we could get much smaller ciphertexts in the scheme. Finally, I mentioned that we have an extension to allow batched, parallel operations on plaintexts, and compared with using similar techniques in somewhat homomorphic encryption — not fully homomorphic encryption — we have to work a little harder there, and it would be nice to get something that is more efficient. Okay, so that's everything I wanted to say, and thank you for listening. [Session chair] All right, so are there any questions? If you have questions, please walk up to the microphones. You can use this one. [Audience] I can't be bothered to walk. Can you just give a quick overview of how you did the batching? Because you need to keep the things small, yeah? [Speaker] Yes, so that's part of the difficulty with batching — we need the coefficient representation of the plaintext to be small. So you do batching by fixing some kind of bound and then doing the CRT transformation on that, when you're over this ring, and then you have to make sure that the actual coefficient representation of the polynomial doesn't grow too large after a few multiplications. 
Yeah, so it's okay for, like, degree two or three or something, but it's not going to scale well beyond that. [Session chair] Question? [Audience] So using the larger message space comes at the cost of a superpolynomial modulus, which in turn, I guess, implies also assuming lattice problems are hard for superpolynomial factors. [Speaker] Yes. [Audience] And is there any way to get some result of this type from a standard complexity assumption with polynomial factors — if you keep the modulus but the message space is smaller, for example? [Speaker] So in our construction, even with a small message space, we still need a superpolynomial q, exactly because of the rounding and lifting tricks. And even if you do HSS from, say, somewhat or fully homomorphic encryption, if you want additive reconstruction, then you'll still need the superpolynomial q. We don't know how to avoid that for additive reconstruction, at least efficiently. [Session chair] All right, so, final question? Okay, then let's thank Peter again.