All right, thanks, and thanks to everybody for showing up so early in the morning. Although I saw that there was coffee outside, so you're expected to pay attention.

Okay, so: group-based secure computation. Let me start by saying what I mean by this. Here is a somewhat philosophical look at the landscape of secure computation approaches. In one category I put the classical approaches, the things we typically talk about, like Yao- and GMW-based protocols. These have been around for decades and have been fine-tuned and optimized in a lot of fantastic ways, with a lot of practical systems. But one inherent downside of everything in this category is that the communication has to grow with the entire size of the circuit you want to compute.

In comparison, in 2009 we had these breakthrough works in fully homomorphic encryption, where we found the first ways of getting secure computation in which the communication grows asymptotically not with the circuit size but essentially just with the input and output size, as you would want. This is fantastic in terms of asymptotic costs. Unfortunately, there are a lot of downsides if you actually look at the concrete costs. FHE has been around since 2009, and there has been tremendous work and tremendous progress in optimizations, but what exists now is still much more expensive in essentially every other respect. I'm not even going to go into the extremes; let's just say there is much to be desired.

One of the negatives that contributes to this, not only from a theoretical standpoint but also in terms of practicality, is that all of these FHE-based constructions, despite a lot of work on basing them on different assumptions, essentially rely on the same narrow window of noisy encodings based on lattices. One downside, beyond just wanting a wider spectrum of assumptions, is that there are generic lattice attacks, which means you have to crank up the security parameters, and this too contributes to poor efficiency.

Okay, so now shifting to the topic of this talk: it's a different approach to secure computation, one of the things we introduced last year in our Crypto paper, based on what's known as homomorphic secret sharing. In the title of the talk we refer to this as group-based secure computation, and the reason is that, for the first time, we get such constructions outside of these lattice-based ones.
We have constructions of homomorphic secret sharing based on discrete-log-type assumptions, and I'll talk a lot about this today.

So what is homomorphic secret sharing? This is going to be our central friend today. It's an extension of standard secret sharing: think of some secret x split into two shares, where we require standard security, so if you see just one of the two shares, you learn nothing about x. The homomorphic aspect is that I can locally compute on a single share. Say there's some program P we'd like to evaluate: each party locally evaluates on its own share, and the outcomes have an additive reconstruction. So again: I have share x_0, you have share x_1, we locally evaluate, and the results add up to P(x).

An interesting relaxation that's going to be important today is what we call Delta-HSS. This is homomorphic secret sharing but with some probability of error, Delta, where the probability is taken over the randomness of the Share procedure. In turn, we allow the runtime of homomorphic evaluation to scale with 1/Delta.
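To fix the syntax, here is a minimal toy sketch of additive-reconstruction HSS, restricted to linear programs only, where plain additive secret sharing already does the job. This is just to illustrate the interface; the actual DDH construction, which handles branching programs, is of course far more involved.

```python
import secrets

q = (1 << 127) - 1  # public modulus (illustrative; the real scheme fixes a DDH group order)

def share(x):
    """Split secret x into two additive shares over Z_q."""
    x0 = secrets.randbelow(q)
    x1 = (x - x0) % q
    return x0, x1  # each share alone is uniformly distributed, hence hides x

def eval_linear(coeffs, shares):
    # Locally evaluate the linear program P(x_1..x_n) = sum_j c_j * x_j
    # on one party's shares; by linearity the two outputs add up to P(x).
    return sum(c * s for c, s in zip(coeffs, shares)) % q

# Secrets x = (3, 5); program P = 2*x1 + 7*x2.
s0, s1 = [], []
for x in (3, 5):
    a, b = share(x)
    s0.append(a)
    s1.append(b)

coeffs = [2, 7]
y0 = eval_linear(coeffs, s0)  # party 0, entirely local
y1 = eval_linear(coeffs, s1)  # party 1, entirely local
assert (y0 + y1) % q == 2 * 3 + 7 * 5  # additive reconstruction of P(x)
```

For linear programs the error probability is zero; the Delta in Delta-HSS only shows up once multiplications enter the picture.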
Concretely, we're talking about, say, an inverse-polynomial error probability here.

In the Crypto work from last year we showed essentially two things. First, we gave a construction of Delta-HSS, which I'll go into in more detail later. But we also showed that even Delta-HSS is enough to give you, say, succinct two-party secure computation. Roughly, the framework is: think of two parties with secret inputs. We run a small generic secure protocol just to generate the homomorphic secret shares, and the bulk of the work is done locally, via homomorphic evaluation. At the end, we run some recombination protocol to get the final answer. So the communication here does not grow with the circuit size, only with the input and output size. So this gives two different results: the construction itself, and the way of putting it together.

This framework is very interesting, particularly from a theoretical perspective: we can do things like beat the circuit-size barrier for communication. But as it stood before, it was an interesting direction that was completely theoretical. In this work we try to make strides in a couple of dimensions: not only seeing what is the best we can do asymptotically, in the theoretical direction, but also whether we can push this toward something that can really be practical, and I think there is a lot of exciting stuff to be said there. In particular, we look at three dimensions of improvement in secure computation: round complexity, communication, and computation.

The first result, I'll say up front, is completely theoretical: we show that we can get two-round multi-party computation from DDH, with a little bit of setup. Previously, this result was known only from learning with errors, that is, lattice assumptions, or from indistinguishability obfuscation.

Next, in a different direction, we look at improvements in communication complexity. First, we shave off the poly(lambda) security-parameter factors: we get the communication down to input size plus output size, with only an additive poly(lambda) term. As one corollary, if you consider the specific functionality of generating 1-bit oblivious transfers, we can generate n of them with total communication essentially 4n plus a low-order term. Previously, achieving any constant overhead times n was known only from poly-stretch local pseudorandom generator assumptions or from Phi-hiding, and even in those cases the constant was significantly larger.

The third category really treads forward in the direction of practicality: we have a lot of different optimizations, and compared to the Crypto result we're talking orders-of-magnitude improvement.

I'm going to try to give you a flavor of each of these directions today, but to do that I'll start with a little crash course on the Crypto paper, so bear with me here. In the Crypto paper we gave a construction, based on DDH, of Delta-HSS for the class of branching programs. Essentially, we do that by showing how to support two homomorphic evaluation procedures. The first is just addition of two values. The second is a restricted form of multiplication; I'll sometimes refer to this as RMS multiplication.
RMS stands for restricted-multiplication straight-line programs. What does this allow? It lets me multiply any intermediate computation value v by an input. So this is not general circuits, because it doesn't let me multiply two intermediate values, but together these operations are enough for branching programs. (Ignore this arrow here.)

At a high level, think about three ways to encode a Z_q element. Why Z_q? We're talking about DDH, so let G be a DDH-hard group of prime order q, and let g be a generator throughout. The first type of encoding of a value u is just to put it in the exponent, g^u. The second is a standard additive secret sharing over Z_q. And the third is a pair of group-element shares such that the group elements differ in discrete log by the encoded value.

First observations about these encodings: if I stay within any one level, I have an additive homomorphism; you can convince yourself of that. But the second, special thing that's going to be helpful for us is that there's a natural pairing procedure. Suppose we both hold a level-1 encoding g^u, and across the two of us we have additive secret shares of some value v. Then if each of us takes g^u and raises it to our own share of v, we get exactly a level-3 encoding of the product: the two group elements differ in discrete log by uv.
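Here is a minimal sketch of the encoding levels and the pairing step, using toy parameters I've picked purely for illustration (a prime-order subgroup mod 2039; a real instantiation needs a cryptographic-size DDH group):

```python
import secrets

# Toy parameters: p = 2q + 1, and we work in the order-q subgroup of
# squares mod p, generated by g = 4. Illustration only, not secure.
p, q, g = 2039, 1019, 4

u, v = 123, 456                    # u: an input; v: an intermediate value

lvl1 = pow(g, u, p)                # level 1: group encoding g^u, held by both parties

v0 = secrets.randbelow(q)          # level 2: shares of v in difference form,
v1 = (v0 + v) % q                  # i.e. v1 - v0 = v (mod q)

h0 = pow(lvl1, v0, p)              # pairing: each party raises g^u to its
h1 = pow(lvl1, v1, p)              # own share of v, entirely locally

# Level 3: the two group elements differ in discrete log by u*v,
# since h1 = g^(u*(v0+v)) = h0 * g^(u*v).
assert h1 == (h0 * pow(g, (u * v) % q, p)) % p
```

The point is that the pairing is non-interactive: each party touches only its own share, yet the pair (h0, h1) jointly encodes the product uv.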
Okay, so now let's see how we get secret sharing with homomorphic evaluation. Roughly: as part of the secret sharing of a value, I give you level-1 and level-2 encodings of my inputs. For the homomorphic evaluation, we maintain the invariant that for every partial computation value, we hold additive shares of that value across the two of us. This holds for the inputs to begin with. If we want to perform an addition, it follows directly from the additive homomorphism of the additive shares.

Now let's see how to do an RMS multiplication; as you may suspect, there's going to be something fishy going on here. What do we have? By the invariant, for a partial computation value we hold additive secret shares. In addition, for each input I gave you this extra helper information as part of the initial secret sharing, namely g^u. Using the pairing procedure, we can get level-3 shares of the product. The question is how to get back to additive shares, which we need in order to maintain the invariant and move to the next step.

This is the share conversion procedure, which is really one of the interesting, technical, and somewhat mysterious parts of this work. Think of taking the cyclic group G and flattening it out into steps of multiplication by the generator. The fact that our shares differ in discrete log by some payload, in this case z = uv, means that you hold some group element and I hold some group element sitting at positions that differ by exactly that payload. How do we do the conversion? Ahead of time, we both agree on some random sprinkling of special points, these red points here, of a desired density delta. Once we get our level-3 shares, each of us simply outputs the distance, in multiplications by the generator, from our own share to the first special point we hit. As long as no special point falls between us, this gives exactly what we want: my distance and your distance differ by exactly the gap z. So aside from an error probability of roughly delta that a special point lands in the gap, we get a correct conversion.

I'm certainly hiding a number of things under the rug about this construction. In particular, it's certainly not okay for me to give you g^u for a secret value u. What we do in reality is replace g^u by an ElGamal encryption of u, and there's a bit of machinery needed to modify the pairing accordingly. But a lot of the basic structure is exactly as I described. One takeaway for later: we have to run this share conversion for every RMS multiplication, once where the gap is the multiplied value itself, and also once for each bit of the ElGamal secret key times that value. I'll leave it as a mystery why; please ask me afterward if you're interested.

Moving forward, our first result is two-round multi-party computation from DDH. A little bit of context: two rounds are known to be necessary for secure computation, and if you're interested only in the two-party regime, there are fairly simple ways to get down to two rounds of interaction. The question becomes interesting in the multi-party setting.
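Going back to the share-conversion step for a moment, here's a minimal toy sketch. The group parameters are the same illustrative ones as before, and for simplicity the "special point" predicate is just divisibility of the representative by 8, standing in for the parties' agreed random sprinkling of density delta = 1/8:

```python
p, g = 2039, 4               # toy group parameters; illustration only
delta_inv = 8                # aim for special-point density delta = 1/8

def is_special(h):
    # Stand-in for the agreed pseudorandom special points: here, an
    # element is "special" if its representative is divisible by 8.
    return h % delta_inv == 0

def dist_to_special(h):
    """Number of multiplications by g until the first special point."""
    steps = 0
    while not is_special(h):
        h = (h * g) % p
        steps += 1
    return steps

# The parties hold level-3 shares: group elements whose discrete logs
# differ by a small payload z (here z = 1, e.g. a bit being converted).
z = 1
h0 = 12                        # party 0's group element
h1 = (h0 * pow(g, z, p)) % p   # party 1's element sits z steps ahead

d0 = dist_to_special(h0)
d1 = dist_to_special(h1)
# Unless a special point falls strictly between the two elements
# (probability roughly z * delta), both walks stop at the same point
# and the step counts differ by exactly z: additive shares of z again.
assert d0 - d1 == z
```

The expected walk length is about 1/delta steps, which is exactly why the evaluation runtime in Delta-HSS scales with 1/Delta.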
Okay, and so this is something That we didn't even really know about until much much more recently and there's been a sequence of works achieving two round computation under various assumptions, so LWE and IO and here this is sort of an improvement of setup So in this work We we get something that's a little bit weaker than these the more recent versions here The difference is this is a common random string versus we require public key infrastructure And also we have a limitation on the number a constant number of parties But the punch line here is that even when you look at these restrictions anything here with two rounds and Was not known at all under DDH. So this is what we do Okay, so for the starting point of this construction I want to consider a little bit of a generalized version of the the two-party secure computation slide that you saw from before So remember before there was this phase where we're two people were running some sort of secure protocol to get shares of the inputs Now I want to think about Since we need to get to the multi-party setting. I'm going to consider this client server model Okay, so in the same way we can if we had all these clients who actually have inputs Let's temporarily assume that we have two servers to help us Okay, so so first we'll run some sort of protocol to get these shares The servers will do a homomorphic evaluation So I want to put a little little bit of a side note here So if you can homomorphically evaluate the program that you're interested in directly Then that's fantastic at this point. 
That would mean if you can if you're interested in computing a branching program But even for general circuits This is not a problem because we can essentially use standard tricks of instead homomorphically evaluating Randomized encoding so you could think of it basically if I have some a very deep circuit I can squash this down into Yau garbling for example of the circuit Okay, so say we have this homomorphic evaluation and recall that this gives us the property that aside from some error parameter Delta That these shares will add to the correct output Okay, so you can deal with correctness in terms of Like iterating this many times Okay, but if you if you stop right here Then this would actually give you a security issue and the reason is that if I exchange these shares So some of them have errors and the error is going to be dependent on the inputs Okay, so because of that you have to sort of clean up and hide where the errors occurred So there's an additional protocol to output the correct value Okay, so what do we have in terms of rounds? 
We have a constant number of rounds, but a fairly large constant. To get down to two rounds, there are three primary steps. First, we have to clean up the input-sharing step so it runs in a single round; this is where the public-key infrastructure comes in. Second, we remove the need for the extra clean-up MPC by making it safe to just exchange the shares, even though that reveals where the errors occurred. And third, we support more than two servers: you can think of this client-server model as a multi-party computation in which at least one of two designated parties is honest, and we need to extend this to more servers.

I'm not going to talk about steps one and three in any detail. Step one uses homomorphic properties of ElGamal in a careful way, and step three extends to more servers using fairly standard server-emulation tricks.

For step two, to give you a little bit of a flavor: again, the issue is that I can't just exchange the homomorphically evaluated shares, because the errors leak information about the inputs. What exactly do they leak? Remember, an error occurs when a red point falls between our two positions. But if, for example, we're secret sharing the value zero, then there is no gap between us at all, so no error can occur. So the error event is directly a function of the intermediate computation value. And we can show that this is exactly the kind of setting where we have solutions from leakage-resilient circuit compilers: basically, instead of evaluating the standard circuit itself, I evaluate one that is resilient to this kind of partial information leakage.

That's the first part. An additional challenge, if you remember the takeaway slide, is that I run the share conversion not only on the partial computation values but also on the bits of the ElGamal secret key times those values. We have to address this leakage as well, and we do so by adding further randomization and secret-sharing tricks.

Again, I'm just trying to give you a flavor of each direction. In the second direction, optimizing communication, I want to give you one clean, I think rather cute, takeaway: the notion of punctured OT. In standard oblivious transfer, think of a large database where the receiver is interested in, say, one position. But what if the receiver is interested in almost all of the positions? As one of our techniques, we give a cheap protocol for this almost-all oblivious transfer via punctured pseudorandom functions. The protocol is actually quite simple.
Basically, you run a generic MPC whose output is that the sender receives a pseudorandom function key and the receiver receives the same key punctured at the positions he is not supposed to learn. Given this, the sender can mask each database value with the corresponding PRF evaluation and send them all over, and the receiver can unmask exactly the positions that were not punctured out. We use this piece as a cheaper way of dealing with leakage: in the previous part I told you about the blow-up from leakage-resilient compilers, but if you're willing to communicate a little more via this punctured OT, you can make things much more efficient. This goes together with a couple of additional tricks we have to introduce.

Finally, I want to give you a flavor of some of the concrete optimizations. As a baseline, what is the cost of the homomorphic evaluation? The cost really comes from multiplication, and there are two parts to it. The first is the pairing procedure I mentioned, which corresponds to exponentiations and a product in the group. The second is the share conversion, which, as described, takes on the order of 1/delta steps in expectation, where each step is a multiplication by the generator plus a test of whether we've hit a special point.

Let me describe some of our optimizations. The first is to use what we call conversion-friendly groups: we work in Z_p*, where 2 is a generator and the prime p is very close to a power of two. What does this buy me? Each time I need to multiply by the generator, I can essentially do a shift followed by a small addition. The second big optimization is to redefine what the distinguished points are.
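As a quick aside, the conversion-friendly-group trick just mentioned can be sketched as follows. The modulus here is an illustrative prime of the right shape, p = 2^k - c with c small (the real scheme uses a much larger prime, chosen so that 2 actually generates the group; here I only illustrate the arithmetic):

```python
k, c = 31, 19            # illustrative: p = 2^31 - 19, so 2^k ≡ c (mod p)
p = (1 << k) - c

def times_generator(x):
    """Multiply by the generator 2 mod p via a shift plus a small addition."""
    y = x << 1
    if y >> k:                        # overflowed past 2^k:
        y = (y & ((1 << k) - 1)) + c  # fold the top bit back in as +c
    return y - p if y >= p else y     # at most one final correction

for x in (1, 12345, p - 1):
    assert times_generator(x) == (2 * x) % p
```

So each step of the conversion walk costs a shift, a conditional small add, and a compare, rather than a full modular multiplication.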
So what are these red points, and how are they defined? We have a provable optimization that de-randomizes them, which I won't describe, but a heuristic way of looking at it is: instead of saying a point is special when some pseudorandom function evaluates to a special value, what if I just say a group element is special if a certain window of its bit representation is all zeros? Doing this allows a lot of amortization and huge improvements in the speed of testing. Ultimately you can get this down to, on average, even less than one machine-word operation per step of the walk.

At a high level, there are tons of other optimizations we've considered. The bottom line is that, compared to FHE, one thing we have going for us is that the share size is quite promising. This was first established by estimates, and we now have a follow-up work that is actually starting to look at implementations. Ultimately, for these RMS homomorphic multiplications with error probability delta, it seems you can get about 200,000 times delta multiplications per second.

So, summing up: we have these three types of results, going from theoretical down to practical. I want to end with a couple of open questions, because I think this is really a fascinating area. First, there are a lot of questions just about this homomorphic secret sharing component itself. Is there any way to go beyond branching programs, say, from DDH?
We know from learning with errors that you can get HSS, but that basically goes through fully homomorphic encryption. Also, all of the solutions I've talked about are for the two-party setting; not the MPC application, which you can extend to multiple parties, but the homomorphic secret sharing tool itself. As soon as you go from two to three parties, we know very little. It's almost embarrassing how big the gap is, and in particular that's because the share conversion procedure really relies on there being exactly two parties. Can you get this sort of thing from other assumptions? For the share conversion, can you get a better error-versus-runtime trade-off? Right now it's directly inversely proportional, and it's not clear whether you can do better. Can you do some form of better fault tolerance at the program level? Looking at the two-round MPC from DDH: can you remove some of our restrictions, for example supporting a polynomial number of parties, or removing the public-key infrastructure?

And I didn't really talk about it, but since I'm here I want to push the agenda: I spoke mostly about high-end HSS, which supports, say, branching programs and uses public-key tools. But we've also looked a little at low-end HSS, and it turns out you can get some really interesting function classes based just on one-way functions, and there have been some interesting works with applications building on top of these. So there are a lot of questions here: what can you do, how far can you go, and so on. All right, so with this I will conclude.