So, this is joint work with Niv Gilboa and Yuval Ishai, right here, you guys. And the focus is on what's referred to as the circuit size barrier. So suppose I have Alice and Bob with some inputs, and they want to compute some circuit C. If we don't care about security, the communication required here is minimal: I can just send my input over, the computation is performed, and the output is sent back. But this is a crypto conference, so we want to do this securely, and by security here I mean that nothing beyond the output is revealed. Interestingly enough, when we go to this secure setting, essentially all known techniques for achieving this require communication that grows with the entire circuit size, as opposed to just the inputs and outputs, and there can be a very large gap between the two. This is what I mean by the circuit size barrier. Essentially all of the tools we know how to use, even for restricted classes of programs like branching programs or Boolean formulas, still require this entire-circuit-size amount of communication. The one exception is the recent breakthrough works in fully homomorphic encryption, which essentially allow you to get security while mimicking the original procedure, but underneath a layer of encryption. This breaks the circuit size barrier: it gives you communication comparable, again, to just the inputs and outputs. But of course, fully homomorphic encryption is also non-ideal in certain ways. For example, even though we get these great asymptotics, concretely these schemes involve heavy machinery and high concrete costs. And in addition, despite a lot of work and a lot of different candidates over the years, essentially all of them rely on the same class of assumptions, boiling down in one way or another to some sort of noisy encoding over lattices.
And in particular, there are no candidates to date based on, let's say, 20th-century assumptions. So let's rewind to one year ago, Eurocrypt 2015, where I presented exactly this slide, in work with in fact the same authors, on what's known as function secret sharing. Don't worry about what that is now; of course you should look it up afterward. The important part I want to point to is this giant region in red: one of the things we showed is that if you were able to construct this object, this function secret sharing tool, for sufficiently rich classes of programs, for example NC1 and above, then this would give you succinct two-party computation protocols, essentially breaking the circuit size barrier for comparable classes of circuits. And you'll see this is in red: we took it, in fact, as a negative result, some sort of barrier saying that if you wanted to build this kind of function secret sharing, then probably you would need something like fully homomorphic encryption. So I'm happy to report that in this work we essentially turn the tables on this negative result, by constructing exactly this notion of function secret sharing based not on fully homomorphic encryption, but just on the standard decisional Diffie-Hellman assumption. We achieve this for the class of branching programs, which in particular includes NC1, log-space, and a number of other things. Actually, I should take a half step back: what we achieve is not exactly function secret sharing, in the sense that we have a small probability of error in the reconstruction procedure. But as it turns out, if you boost within the application, this is still enough to give you, for example, succinct MPC. Okay, so for the core theorem of this work, it's actually going to be a bit more convenient to talk about the dual notion of function secret sharing, which is that of homomorphic secret sharing.
So keep in mind, I come with some secret, and I have a secret sharing scheme such that if I want to get secret shares of a function of this secret, there is a procedure for homomorphically evaluating on the secret shares independently, without communicating back and forth. And the resulting values, these output shares, are exactly an additive secret sharing of the target function evaluation, F on my input. Throughout the talk, w is going to represent an input. Okay, so a couple of notes here. First of all, you can actually consider more general reconstruction procedures; it will be convenient for us, and gives us some further applications, to consider explicitly just addition. One of the benefits of that is that these output shares really are elements of the output space of the function. So for example, if my function F outputs a single bit, then the shares that we have to exchange are literally one bit each: not even an encryption, just one bit. Okay, so this is homomorphic secret sharing. And as I mentioned, we achieve this for the class of branching programs, so F coming from branching programs, with some noticeable failure probability. And this failure probability can be traded off in exchange for runtime. Note in particular that this delta is going to have to be noticeable, one over polynomial. In the rest of the talk, by the way, lambda is the security parameter for DDH. Okay, so as a consequence of this core result, we get a collection of different applications following from DDH. The first of which is, of course, secure two-party computation. And here, for the class of branching programs, we can essentially match what you could hope for in terms of the communication of secure computation.
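To fix the syntax, here is a toy Python sketch of what homomorphic secret sharing with additive reconstruction looks like: Share splits the secret, each party runs Eval locally, and the outputs add up to F applied to the secret. For runnability I instantiate it only for the trivial case of linear functions (the work in the talk handles the much richer class of branching programs; all names here are my own):

```python
import random

Q = 2**31 - 1  # work over Z_Q for the demo

def share(w):
    """Split secret w into two additive shares (a toy HSS for linear F only)."""
    s0 = random.randrange(Q)
    return s0, (w - s0) % Q

def eval_linear(shares_b, coeffs):
    """Homomorphic evaluation of F(w) = sum_i c_i * w_i on one party's shares,
    with no interaction: linear maps commute with additive sharing."""
    return sum(c * s for c, s in zip(coeffs, shares_b)) % Q

ws = [5, 7, 11]                      # the secrets
coeffs = [2, 3, 1]                   # F(w) = 2*w0 + 3*w1 + w2
shares = [share(w) for w in ws]
y0 = eval_linear([s0 for s0, _ in shares], coeffs)   # party 0, locally
y1 = eval_linear([s1 for _, s1ib in [(0, 0)] or shares], coeffs) if False else \
     eval_linear([s1 for _, s1 in shares], coeffs)   # party 1, locally
# The output shares are elements of the output space, and they add up to F(w):
assert (y0 + y1) % Q == sum(c * w for c, w in zip(coeffs, ws)) % Q
```

The point of the additive reconstruction is visible in the last line: the two output shares are just numbers in the output space, and combining them is a single addition.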
Okay, going beyond branching programs, for a much, much richer class of circuits, we can basically use this branching program result as a mini tool in order to shrink the communication complexity to sublinear. So even for this large class of circuits, which I'll say a bit more about afterward, we can break the circuit size barrier. An additional application that I won't have much time to talk about, but which is actually one of the applications that does require additive reconstruction at the end, is two-server private information retrieval. Here you basically have a client who wants to make secret branching program queries on a database. Now, I've mentioned this a couple of times, but I really want to drive the point home that branching programs are actually a pretty rich class of computations. If somebody tomorrow made it illegal to do any computation beyond this class, I would still be pretty happy to live in that world. For example, things like approximations, and essentially all cryptographic primitives, have some sort of instantiation within it. Okay, so for the remainder of the talk, I'll go through the construction. I'll actually talk mostly about a simplified version, and then a little about how we get beyond that; I'll go through applications, and then I'll conclude. Okay, so let's get started. It's going to be convenient for us, instead of working directly with branching programs, to work with the following class of programs, known as restricted-multiplication straight-line programs, or RMS. These programs have various values in memory, you have inputs, and you're allowed to do four different types of instructions. The first is to take an input value and load it into memory.
You can take two values in memory and add them together; you can take a value in memory and multiply it by an input; or you can output a value currently held in memory as an element of any Z_m of your choice. Okay, so what we're going to do is build homomorphic secret sharing that supports evaluation of these restricted-multiplication programs over the integers, and correctness will hold as long as all of the intermediate computation values, these integers, stay sufficiently small in magnitude as you go along. As a special case of this, if you just think of 0/1 binary values throughout the computation, this is already rich enough to capture log-space and branching programs. I have a very cool slide about that, but definitely no time to go through it. Okay, so even if you just want to compute one of these branching programs, you can get some benefits by working with a larger plaintext space than just {0,1}. And also, out of interest, if you allow this larger plaintext space, this now captures an entire exotic complexity class known as ReachFewL. Okay, so let's do a warm-up. Remember we're in the world of DDH, so we have some DDH group and a generator g. And for the time being, suppose it were the case that g^w is a secure encryption of w. It's not, I hate to break that to you, but suppose for the time being that it is. So here's how we're going to generate the secret shares. I come with my input w. By the way, this line denotes that this will be one party's share and this will be the other party's share. The first thing I'll do is give additive secret shares of each of the values: just random values that add up to the correct value. And the second piece will be to give this sort of pseudo-encryption g^w of my input to each of the parties; note that this is the same value on both sides.
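Stepping back to the instruction set for a moment, here is a minimal in-the-clear evaluator for RMS programs, as a hedged sketch (the instruction encoding and the bound check are my own; in the actual scheme each of these instructions is executed on shares, not on plaintext values):

```python
def run_rms(program, inputs, bound=2**20):
    """Evaluate a restricted-multiplication straight-line (RMS) program over
    the integers. Correctness in the HSS setting would additionally require
    every intermediate memory value to stay small, modeled here by `bound`."""
    mem, outs = {}, []
    for ins in program:
        if ins[0] == "load":          # ("load", dst, input_index)
            mem[ins[1]] = inputs[ins[2]]
        elif ins[0] == "add":         # ("add", dst, src1, src2): memory + memory
            mem[ins[1]] = mem[ins[2]] + mem[ins[3]]
        elif ins[0] == "mul_input":   # ("mul_input", dst, src, input_index)
            mem[ins[1]] = mem[ins[2]] * inputs[ins[3]]   # memory * input only
        elif ins[0] == "output":      # ("output", src, modulus): reduce mod Z_m
            outs.append(mem[ins[1]] % ins[2])
        assert all(abs(v) <= bound for v in mem.values()), "value grew too large"
    return outs

# AND of two bits as an RMS program: load x0, multiply by input x1, output mod 2.
prog = [("load", "a", 0), ("mul_input", "a", "a", 1), ("output", "a", 2)]
print(run_rms(prog, [1, 1]))  # [1]
```

Note there is no memory-times-memory multiplication instruction; that restriction is exactly what makes the class evaluatable under the warm-up scheme.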
Okay, so how can we do homomorphic evaluation on this? Let's step through the four different instructions we have to support. For loading a value into memory, the invariant we'll maintain is that any value in memory is held as an additive secret share between us. So loading an input is already trivially taken care of, because this is part of the original secret sharing. Now that we have this, it's straightforward to see how to add values in memory: I have an additive share of this and an additive share of that; I can just add them locally, and that gives me an additive share of the sum. Okay, so multiplication, let's skip over that for a second. And let me just say that outputting a value from memory is also reasonably straightforward, with a little bit of a twist: you basically just output your additive secret share. Okay, so the tricky part is how to do this multiplication, and here is where we're going to use the fact that, as part of the input value w, we have this encryption, or pseudo-encryption. Okay? So we'll take this g^w that each of us holds, and raise it to our additive secret share of the memory value x. Now take a look at this: if I multiply these two values, what do I get? (Quiet audience.) All right, I'll tell you the answer: you get shares, multiplicative shares now, of g to the product, g^(wx). Okay, this is good, this is progress. We're trying to get at this wx; we want to get shares of it somehow. But what we need in order to continue with the computation are additive secret shares. So how can we get from these multiplicative shares to additive ones? It sort of seems like we're stuck, right? Essentially, this would require us to take discrete log. Or does it? Okay? So again, our goal now is to get back to additive secret shares, and one of the core technical ideas in this work is the following share conversion procedure.
Okay, so our goal, again: we have these multiplicative shares of g^z, and we want to somehow get to an additive secret sharing of the exponent z. Okay, so let's look at this graphically. The top part and the bottom part represent the views of the two different parties. Graphically, think of each of us stepping along the successive powers of the generator g of our DDH group. The fact that we have a multiplicative sharing of g^z means that we can derive values that differ along this path by exactly z steps, okay? So what we're going to do is consider a set of special random points, these red points here, whose density is approximately delta, and whose identity we can efficiently test and agree upon. This can be achieved, for example, just by sharing a pseudorandom function: a point g^i is said to be a red point if the evaluation of the PRF on it is zero. Okay, so now our share conversion procedure is as follows. Starting from whatever your share is, say this party starts here and this one here, just iteratively step along the path, each time multiplying by one copy of the generator g, until you hit one of these special red points. And your output will be exactly the number of steps it took you to reach that point. Unless, of course, we also have an escape condition: if you go beyond a certain number of steps without seeing a special point, you just output zero, for example. Okay, so why does this help us? Well, there are three different cases for where these points can lie relative to our values. The first case is the good case for us, where there is no red point in the zone between our two values, and there is a red point in the zone before we would abort.
So in this case, essentially this is good, because each of us is going to output our distance with respect to the same special point, which means the values we output will differ additively by exactly this value z. So this is the good case, but there are two additional cases that would be bad for us. The first is if there is a point in this red zone between our values, and the second is if there are no points in either zone, in which case at least one of us is going to abort. But never fear: you can set the probability of these events essentially as low as you want, okay? So what does this probability correspond to? It roughly scales as the size of this window times the density of the red points. So for whatever value I want to shrink the error down to, I can just make these points less and less dense. And as a trade-off, I shrink the error in exchange for needing to take additional steps, traveling further along this path before I expect to hit one of these points. Okay, so this is the share conversion procedure. And actually, a note out of interest: this procedure is a little bit reminiscent of some techniques in cryptanalysis, for example a technique of van Oorschot and Wiener, and I think it's kind of cool that we actually turn this into a constructive technique. Okay, so summarizing this warm-up construction: as you may recall, loading to memory, addition, and generating output were not a problem. And to multiply an input by a memory value, we first take g^w, raise it to the additive share of x, and then run the share conversion procedure. And note, by the way, that the error of the share conversion also scales with the size of the payload.
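The share conversion step can be sketched in Python as follows, as a toy demonstration only: the modulus and generator are far too small to be secure, the shared "PRF" is modeled by a hash, and all names are my own. Party 0's group element sits z steps ahead of party 1's on the path of powers of g; each walks to its first red point, and whenever they stop at the same point, their step counts form additive shares of z over the integers:

```python
import hashlib
import random

p = 2**61 - 1          # toy modulus for the group Z_p^* (NOT a secure choice)
g = 3                  # assumed generator, for demo purposes only
DELTA_INV = 64         # red points have density delta = 1/64
CAP = 10 * DELTA_INV   # escape condition: give up after this many steps

def is_red(h):
    """A point is 'red' if the shared PRF (modeled by a hash) vanishes mod 1/delta."""
    d = hashlib.sha256(h.to_bytes(8, "big")).digest()
    return int.from_bytes(d[:4], "big") % DELTA_INV == 0

def convert(h):
    """Step h, h*g, h*g^2, ... to the first red point; return (steps taken, point)."""
    for t in range(CAP):
        if is_red(h):
            return t, h
        h = (h * g) % p
    return None            # no red point found within the window

random.seed(1)
successes = 0
for _ in range(50):
    z = random.randint(0, 2)                      # small payload in the exponent
    h1 = pow(g, random.randrange(1, p - 1), p)    # party 1's multiplicative share
    h0 = (h1 * pow(g, z, p)) % p                  # party 0 is z steps ahead
    r0, r1 = convert(h0), convert(h1)
    if r0 and r1 and r0[1] == r1[1]:              # good case: same red point
        o0, o1 = -r0[0], r1[0]                    # local integer outputs
        assert o0 + o1 == z                       # additive shares of z
        successes += 1
print(successes, "of 50 conversions landed in the good case")
```

The failure events from the slide show up directly: a trial is not counted as a success when a red point falls strictly between the two starting positions, or when a party exhausts CAP steps, and both events get rarer as the density shrinks, at the cost of more stepping.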
So it's important when designing the scheme; this is where we get the restriction that correctness holds only as long as the plaintexts remain small. Okay, so this is fantastic, I hope you think so too. But of course, this was assuming a kind of magical land where g^w actually is a secure encryption. So I'm not going to spend too much time on how we get beyond this, because I think the warm-up actually gives most of the core technical ideas, but let me give you a little bit of a flavor of how we remove these restrictions. The first step is that instead of taking this g^w as some sort of encryption, we replace it with ElGamal, and it turns out we can mimic a similar step of going from this encryption using the additive shares, this time using a sort of linear algebra in the exponent: we give out not only shares of the memory value x itself, but also end up maintaining as an invariant that you have additive shares of the ElGamal secret key times x. Okay, and then, I'm not going to explain why, but hopefully I'll pique your interest: we'll also need an encryption of the input times the secret key, and in order to make the share conversion procedure work out, we need to keep the payload small, so we use some bit decomposition tricks. Okay, so combining these tools, with a little bit of work, gives us a scheme that's secure. But now we've shifted the assumption: we have ElGamal, which is great, but note that we're still giving out an encryption of the secret key. So this is secure assuming that ElGamal is circular secure. And the final step is removing this circular security assumption, and there are two potential ways you can do this.
So the first is the typical approach from fully homomorphic encryption: instead of giving an encryption of the secret key under itself, you consider a sequence, a sort of leveled version, where in each level I give you an encryption of one key under the next. This blows up the share size by the depth of the computation, since you need a new key for each multiplication level. However, unlike fully homomorphic encryption, in our construction we can actually get rid of circular security altogether: in place of ElGamal, it turns out we can use the encryption scheme of Boneh, Halevi, Hamburg, and Ostrovsky, which is provably circular secure in exactly the way we need, based on standard DDH. Okay, so one last comment before I move on to applications. As I've described it, this secret sharing scheme is individual for each person: if I want to generate shares of my input, I need, for example, to generate my own ElGamal secret key and do everything with respect to that key. And the question is whether you can hope to get homomorphic computation across different parties' inputs. So for example, say I have some general key generation procedure that gives us a shared public key and separate evaluation keys, such that any party who comes along can use this public key to share his input, so that later on these servers, for example, can compute across the different shares. Okay? This will be important for the next piece, which is: now let's shift gears and talk about applications. Okay, so the whole thing that brought us into all of this is the question of secure two-party computation, and suppose what we want to evaluate is a branching program. Using this public-key version of HSS, the first thing we'll do is jointly run the key generation procedure, so that we now share a public key and each of us takes one of the evaluation keys.
Now, given this, the parties secret-share their inputs using the homomorphic secret sharing scheme, and then we can locally evaluate using the homomorphic evaluation procedure. So remember, what we end up with are secret shares that agree with the correct values except with some probability delta. So of course, one thing we can do is just run this multiple times independently. And interestingly, in the case of secure two-party computation, we have an issue not only with correctness but also potentially with security, if you reveal whether or not errors occurred. But don't worry: you can take care of that by taking these multiple copies and running another small MPC just to securely output the majority value. Okay, and taking a quick look at the communication required for these three parts, and using a few tricks with hybrid encryption techniques to cut down some of the costs, we essentially get back down to what I promised: communication scaling as the input size, plus a polynomial in the security parameter times the output size. Okay, so going beyond branching programs, we can actually apply these sorts of techniques directly to larger classes of circuits. The intuition is that I want to view my more general circuit structure as a collection of larger branching-program gates. And if I have a circuit that I can put in this form (and essentially all natural, non-diabolical circuits can be put in this form), then by running exactly what I told you on the previous slide, we only need to pay for the size of the inputs and outputs of each of these gates; we don't have to communicate with respect to the entire branching program size. Okay, of course, there's a little bit more work: we have to take care of the error, but we can basically do that by adding the error encoding and decoding straight into this branching-program gate.
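The boosting step just described is plain repetition: run the evaluation k times independently and take a majority, which the parties do inside a small MPC so that the error pattern stays hidden. A quick sketch of how fast the residual error (a binomial tail) dies off; the function name is my own:

```python
from math import comb

def majority_failure(delta, k):
    """Probability that a majority vote over k independent runs, each wrong
    with probability delta < 1/2, outputs the wrong value: the probability
    of strictly more than k/2 failures (binomial tail)."""
    return sum(comb(k, j) * delta**j * (1 - delta)**(k - j)
               for j in range((k // 2) + 1, k + 1))

# Even with a large per-run error of 1/4, repetition with an odd number of
# copies drives the overall error down exponentially in k.
for k in (1, 11, 41):
    print(k, majority_failure(0.25, k))
```

This is why a noticeable per-run delta, one over polynomial, is still good enough for the application: polynomially many repetitions make the overall failure negligible.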
Okay, so to summarize our results: we give a construction of this delta-homomorphic secret sharing based on DDH for the class of branching programs. This gives us various sorts of low-communication secure two-party computation protocols, and also this two-server generalized private information retrieval. A couple of reflective comments before I talk about open questions. I think this error parameter delta is very interesting; in some sense it can maybe be viewed as an analog of the noise that occurs in FHE. Indeed, it places some limit on how far you can compute. But interestingly, in FHE, if you want to improve the error, you typically increase the size of the ciphertext, whereas in our construction, in order to battle the error, you just increase your runtime. Another note is that this public-key HSS version can actually be viewed as a sort of weak form of threshold fully homomorphic encryption. The reason it's a weak version is that the parties hold secret evaluation keys. But one of the things I guess we show is that standard threshold FHE is maybe a little bit of an overkill for these applications of secure two-party computation. Okay, so very quickly, some open questions. I think this is a really fascinating area that's really just being opened up right now, as far as we can tell. Going beyond branching programs is an obvious question: is there some way of supporting, for example, multiplication of two memory values directly, or doing some sort of FHE-style bootstrapping procedure? Also, our construction is really limited to two parties, and the reason is exactly this share conversion procedure: the stepping along really relies on the fact that we have two values differing by exactly the payload z. Are there ways to extend beyond two parties?
If you were in the spooky encryption session just before this one, they showed some GMW-style techniques that allowed them to go to additional parties; unfortunately, it seems those don't quite work in our case. Can you do a similar approach under different assumptions? Factoring, for example, or RSA, seem very much in line with this. And of course, all sorts of optimizations: improving the complexity, improving the error. There's a lot to be done there. So with that, I'm happy to report that as of yesterday we have a complete full version of the paper up on ePrint, and I invite you to take a look. Thank you.