All right. Welcome everyone. It's my great pleasure to introduce three amazing talks in the session on multi-party computation. So the first talk is Garbled Circuits With Sublinear Evaluator. This is a work by Abida Haque, David Heath, Vladimir Kolesnikov, Steve Lu, Rafail Ostrovsky, and Akash Shah, and David will give the talk. It's a busy day for David. Hello. Okay, there we go. So thanks for that nice introduction. It's good to be back on the stage. So I'm going to be talking again about garbled circuits, specifically about a technique that we call garbled circuits with sublinear evaluator. I'd like to jump straight in. First, a reminder to everybody about the basic execution of a garbled circuit, because we're going to play with some of the parameters here. In a garbled circuit, we have two parties: a so-called generator, who takes the circuit and produces a garbling of it, some kind of encrypted representation, and an evaluator, who runs that garbled circuit. There are basically three steps to a garbled circuit protocol. First, the generator garbles the circuit, then the garbling is sent across the network, and then the evaluator evaluates it. Traditionally, the parties pay cost linear in the circuit size in all three of these steps: the generator's work to garble is linear, the communication is linear, and the evaluator's work is linear. So in this work, what we want to do is play a little bit with this and ask: can we squeeze some of this and get sublinear cost in some parts of the scheme? In particular, I want to think about a special case, where somewhere in the middle of your computation you have a conditional dispatch over a set of circuits.
So in particular, the generator and evaluator agree that the garbled circuit will choose one out of n different circuits and then run that one circuit at runtime. And what we would like to achieve here is that the communication is sublinear, and also that the evaluator's work is sublinear in the number of circuits. The reason we think that making the evaluator's computation sublinear is more interesting than making the generator's sublinear is because, issues of adaptivity aside, the evaluator's work is essentially the entire online phase of a garbled-circuit-based protocol. The generator can make the garbled circuit offline, perhaps overnight, and then when the computation is actually needed, the evaluator does the actual work. So that's why it makes sense to try to optimize the evaluator. So, to restate: for functions with conditionals, we'd like to achieve sublinear communication and sublinear computation for one party. Now, that's a mouthful, so for brevity I'm going to call this double sublinearity "compactness." So we want to achieve compact 2PC, which again means sublinear communication and sublinear computation for one party. There is relevant related work in the area of compact 2PC. Probably the most relevant is all of the work on fully homomorphic encryption, which trivially achieves this notion of compactness: one party encrypts their input and sends it across the wire, so that party's work and the communication are automatically sublinear in the actual computation being done. And there is also work by myself and Vlad Kolesnikov on a technique called stacked garbling, which I'm going to tell you about in just a second, and which also considers the setting where there is one of many circuits and we would like to evaluate exactly one of them.
There we achieved significantly improved communication cost, but it does not achieve this notion of compactness, because both parties pay at least linear cost in the size of the entire function. Okay, so I actually want to talk a little bit more about this stacked garbling technique, because it is really the starting point for our technical contribution: we build on stacked garbling to achieve compact 2PC using only symmetric-key techniques. In particular, I'm going to show you how to achieve compact 2PC for functions of this particular form using roughly square-root communication and evaluator computation. So the idea of stacked garbling is that we slightly change the way our generator and evaluator garble and evaluate the circuits, respectively. In particular, our generator is not going to simply go off and garble all of these branches independently. Instead, for each branch, he selects a pseudorandom generator seed, and from these seeds he derives garblings of each of the branches. What I mean by this is: during the process of garbling a circuit, the generator makes lots of random decisions about how, roughly speaking, to encrypt that circuit. What I'm saying here is that the generator should derive all of those random choices from a pseudorandom generator, starting from some short seed. Okay. And now, when it comes time to actually send the garbled circuit across the network, the generator is not going to send each of these garblings separately. Instead, he is going to add (XOR) all of these garbled circuits together and send only the sum across the wire.
Okay, and the point here is that the size of this garbling is now independent of the number of branches: because we've added everything together, we're only paying for essentially one branch instead of all of them. So now the evaluator has received this stacked garbled circuit, and somehow she would like to evaluate some branch. For the purposes of this talk, we'll assume she's allowed to learn which branch is being evaluated. So what happens here is that we add a little bit of extra machinery to the garbled circuit, which I'm not going to describe, and the garbled circuit uses this machinery to reveal to the evaluator the pseudorandom generator seed of each of the inactive branches. Now, because the generator's decisions were all derived from these seeds, the evaluator can locally reconstruct each of the inactive garbled circuits. I'd like to point out, though, that the evaluator does not receive the seed for the single active branch. This is very important: for security reasons, we can't give her that particular seed, because if she were to receive it, she would essentially be able to decrypt the generator's input. So she does not get that one. And now, with this combination of information available to her, she can use simple linear algebra, XORing the reconstructed inactive garblings out of the stack, to compute the garbled circuit corresponding to the single active branch, even though she never got its seed. From here she just evaluates normally: she has the garbled circuit she needed for the active branch, so she can just evaluate. So this technique achieves sublinear communication, but it does not achieve sublinear computation for either party, in particular because the evaluator is garbling each of the branches: she pays linear work in the total size of the circuit in order to garble all of these things and unstack.
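The stack-and-unstack flow just described can be sketched in a few lines. This is only a toy illustration: `prg`, `garble`, and the byte-string "garblings" are stand-ins for the real construction, in which the seed determines all of the generator's random choices while garbling a branch.

```python
import hashlib

SIZE = 32  # toy size of one garbled branch, in bytes

def prg(seed: bytes, n: int) -> bytes:
    """Expand a short seed into n pseudorandom bytes (toy PRG: SHA-256 in counter mode)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def garble(seed: bytes, size: int) -> bytes:
    # Stand-in for real garbling: all the generator's choices come from one seed.
    return prg(seed, size)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Generator: one seed per branch; stack (XOR) all garblings and send only the sum.
seeds = [b"seed-%d" % i for i in range(4)]
stacked = bytes(SIZE)
for s in seeds:
    stacked = xor(stacked, garble(s, SIZE))

# Evaluator: receives `stacked` plus the seeds of the INACTIVE branches only
# (say branch 2 is active). She regenerates the inactive garblings locally
# and XORs them out, recovering the active garbling without ever seeing its seed.
active = 2
recovered = stacked
for i, s in enumerate(seeds):
    if i != active:
        recovered = xor(recovered, garble(s, SIZE))

assert recovered == garble(seeds[active], SIZE)
```

Note that communication is one branch-sized string regardless of the number of branches, but the evaluator still regenerates every inactive branch, which is exactly the linear computation the talk goes on to attack.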
Now, here's our approach. Essentially, what I'm going to do is rearrange some of the pieces of stacked garbling to drive down the cost of the evaluator. The crucial idea is that we take the branches of our conditional and arrange them into a kind of grid. Specifically, the generator makes roughly square root n buckets, and each of these buckets holds roughly square root n circuits. What happens is that the generator pseudorandomly chooses circuits to go into each of these buckets, with replacement. With replacement is very important, as we'll see in a moment. Okay. So he pseudorandomly populates each of these buckets, and we add extra circuits (this is why it's only roughly square root n) such that with overwhelming probability, every single circuit appears somewhere in this grid. Then the generator uses the stacked garbling technique I already described to stack together the contents of each bucket: he chooses a pseudorandom seed for each circuit in each bucket, garbles them all, and stacks each bucket separately. Then he sends each of these stacked garblings across the wire to the evaluator. The point here, again, is that there are only square root n of these buckets, so this is only square-root communication in proportion to the number of branches. Okay. So now the evaluator has received all of these stacked garbled circuits. But actually, I've been lying to you a little bit about what we do here, because what happens next is that the garbled circuit declares to the evaluator: hey, this particular location in the grid is the circuit you need to evaluate. And remember, we're in the general 2PC setting; she should not be learning the identity of the active branch of this conditional, as that would not be secure.
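A rough sketch of the grid construction follows. The parameters are illustrative only: the paper tunes the slack so that the chance of some circuit missing the grid is negligible, while here a logarithmic slack factor just makes coverage overwhelmingly likely for this toy size.

```python
import math
import random

def populate_grid(n: int, rng: random.Random):
    """Toy grid: ~sqrt(n) buckets, each holding ~sqrt(n) circuit indices
    times a logarithmic slack factor, sampled uniformly WITH replacement.
    The slack makes it overwhelmingly likely every circuit lands somewhere."""
    n_buckets = math.isqrt(n) + 1
    bucket_size = (math.isqrt(n) + 1) * 2 * n.bit_length()  # sqrt(n) * O(log n)
    return [[rng.randrange(n) for _ in range(bucket_size)]
            for _ in range(n_buckets)]

grid = populate_grid(64, random.Random(1))
covered = {i for bucket in grid for i in bucket}
assert covered == set(range(64))  # every circuit appears in some bucket
```

Because the sampling is with replacement, a circuit can appear in several buckets (or several times in one bucket), which is what later makes it safe to reveal the identities of a bucket's other occupants.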
So to deal with this, I'm going to add just a little bit more: instead of using the circuits directly, we use a universal circuit encoding of each circuit. A universal circuit is basically a circuit that can be programmed, via some extra inputs, to represent any circuit up to some fixed size. So we choose a universal circuit big enough that it can represent any of the circuits in our conditional. And, importantly, universal circuits have low overhead: only logarithmic overhead in the general case, and for some special cases, like the one I'll talk about at the very end of this talk, there is either constant overhead or no overhead at all. So in fact, what has actually happened is that the generator garbled each of these universal circuits and sent that stacked garbling of universal circuits across the wire to the evaluator. And so now the evaluator's view is that she has this grid of circuits, and she has no idea what any particular circuit in the grid is. Okay. So, as I said, we add some extra machinery, and the garbled circuit reveals to the evaluator: hey, evaluator, this is the circuit you should be evaluating. The important point is that, at this point, we can arrange things so that the evaluator can completely discard every other bucket; she doesn't even have to think about them. She only needs to consider the single bucket that holds the active branch. This is important because there are only square root n things in this bucket, which is how we get our square-root overhead in terms of computation. The next thing that happens is that the garbled circuit machinery reveals a little bit more information to the evaluator.
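For intuition about what "programming" a universal circuit means, here is a toy illustration (not the paper's construction): a single 2-input gate whose 4-bit truth table is supplied as an input can compute any 2-input Boolean gate, and composing such gates gives one fixed topology that can encode many different circuits. This is why a garbled universal circuit does not betray which branch it encodes.

```python
def universal_gate(table: int, x: int, y: int) -> int:
    """A programmable 2-input gate: `table` is a 4-bit truth table,
    and bit (2x + y) of it is the gate's output on inputs (x, y)."""
    return (table >> (2 * x + y)) & 1

AND_TABLE = 0b1000  # output 1 only on (x, y) = (1, 1)
XOR_TABLE = 0b0110  # output 1 on (0, 1) and (1, 0)

assert universal_gate(AND_TABLE, 1, 1) == 1
assert universal_gate(AND_TABLE, 1, 0) == 0
assert universal_gate(XOR_TABLE, 1, 0) == 1
assert universal_gate(XOR_TABLE, 1, 1) == 0
```

The same fixed wiring computes AND or XOR depending only on the programming bits, which in the protocol come in as (hidden) extra inputs.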
Namely, it reveals to the evaluator the identities of the other circuits in this bucket. So I'm not going to tell you the identity of the actual active branch, but I will tell you everything else in this bucket. This is completely fine because we sampled all of the branches in this bucket with replacement. So for instance, the fact that circuit zero is in this bucket does not rule out the possibility that the active branch might also be circuit zero. And from here, the garbled circuit reveals yet more information to the evaluator: here are the generator's pseudorandom seeds that were used, as part of the stacked garbling procedure, to garble each of the inactive branches in this bucket. And from here, she can, just as before, run the stacked garbling procedure to garble these universal circuits, because she knows which branch each one actually encodes, so she knows the programming of each of these inactive branches. And again, this allows her to obtain the correct garbling of the single active universal circuit. From here she can evaluate normally, and by the properties of the universal circuit, it gives her the correct answer for the active branch. Okay. And again, emphasizing that this is only square-root work. Now I'd like to mention that there is actually a surprising technical challenge here, which I skimmed over. I mentioned all of this information that has to be revealed to the evaluator. In particular, you can see there's a challenge just in the fact that the garbled circuit has to tell the evaluator the location of the single active branch. That means there has to be enough machinery in the garbled circuit that, regardless of which branch is actually active, it can point the evaluator to the correct place in this grid. And doing this with compact work is actually surprisingly tricky. Okay.
The crucial insight that allows us to build all of this machinery efficiently, so that we can achieve compactness in a true sense, is that we are not actually going to pseudorandomly populate each bucket. Instead, we pseudorandomly populate only one bucket, the first bucket. Then we choose the contents of every other bucket based on the pseudorandom choices we have already made. In particular, as you can see here, the second bucket is identical to the first bucket except that we have shifted everything by one index. What this means is that we can now put machinery into the garbled circuit that, roughly speaking, based on the square root n positions in this first bucket, can calculate, for whichever active branch you want to run, where that branch is in this grid, and it can calculate that position compactly. Okay. So, those were the core technical ideas of this GCWise approach: garbled circuits with sublinear evaluator. Again, we organize things into square root n sized buckets; for each of these buckets, we stack the branches together; and at runtime the evaluator only considers one bucket. This is how we get the combination of sublinear communication cost and sublinear computation. To wrap up, I want to tell you about one, we think, quite interesting use case where GCWise is useful. The idea is something we call garbled private information retrieval, and I'd like to define that for you. We have our circuit, and we also have some public database agreed to by the parties. What I'd like to have happen is that somewhere in the middle of our garbled computation, the garbled circuit reads some element from this large database, without adding extra rounds of interaction: we want to keep the constant-round property of our garbled circuits while reading this element into the circuit.
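Going back to the shifted-bucket trick: because bucket j is just bucket 0 cyclically shifted by j positions, the location of any circuit in any bucket follows from bucket 0's contents alone. A toy sketch (illustrative only; the real machinery does this inside the garbled circuit):

```python
import random

def make_grid(bucket0, n_buckets):
    """Bucket j is bucket 0 cyclically shifted by j positions."""
    return [bucket0[j:] + bucket0[:j] for j in range(n_buckets)]

def locate(bucket0, target, bucket_index):
    """Slot of `target` inside the given bucket, computed from bucket 0
    alone: roughly sqrt(n) work, independent of the number of buckets."""
    pos0 = bucket0.index(target)           # where target sits in bucket 0
    return (pos0 - bucket_index) % len(bucket0)

rng = random.Random(7)
bucket0 = [rng.randrange(16) for _ in range(12)]  # only this bucket is sampled
grid = make_grid(bucket0, n_buckets=5)            # the rest are derived shifts

target = bucket0[3]
for j in range(5):
    assert grid[j][locate(bucket0, target, j)] == target
```

The point is that one pass over bucket 0's roughly square root n entries suffices to locate any branch anywhere in the grid, which is what lets the pointing machinery itself stay compact.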
And we want all of this to cost sublinear in the size of the database. Actually, the techniques I've shown you pretty much automatically solve this problem. What you do is view this database as a kind of conditional branch, where you make n different circuits. So, for instance, the first circuit takes no inputs and outputs the content of the first cell of memory, and so on and so forth. These are our n conditional branches, and now, given the techniques I've already shown you, you just conditionally dispatch over these n branches, paying only square-root cost. Now, I should mention that this garbled private information retrieval is interesting because techniques like garbled RAM actually don't work here: what we want is to pay sublinear cost even if we're only reading one item, whereas garbled RAM has to be amortized over a large number of accesses. So this was GCWise. Again, the idea is to achieve compact 2PC, that is, sublinear communication and sublinear evaluator computation, for the specific class of functions that has this conditional branching in it. And at the end I showed you this garbled PIR, which we think is a pretty cool and interesting application of the idea. And with that, I'll be happy to take any questions. Thank you. Thank you very much, David. So, are there any questions? Fantastic. And if other people have questions, please line up to the microphone. Hello. Thanks for the great... is this working? Yeah. Great talk. Thank you. There's one bit I didn't understand, I understood everything else, so that's why I'm asking. Can you go back to, like, slide 19? There you said that it is fine to reveal the identity of the inactive circuits because we sampled with replacement and so on. But what if that green circuit was one of the others? Since I know the seed, I could, like, check the identity now, right?
For example, if that was C zero, I can take the seed that I know corresponds to C zero and check. Yeah, so the important point is that even if the circuits are the same, we garble them starting from different randomness, different pseudorandom seeds. And so, even if you see two encryptions of the same circuit, you can't tell that they're the same circuit. Okay. Yeah, thank you. Yeah. I have a question: if we generalize this grid from 2D to a higher dimension, what are the trade-offs for the parameters? Higher dimension. I'm not actually sure. I mean, there may be some natural way to interpret that, but the way I view it is that the two dimensions are, respectively, about communication and computation. So what you could do is put flex into this matrix: you make your matrix wide and then computation goes up, or you make your matrix narrow and then communication goes up, right? But I'm not sure what it would mean to generalize to a higher dimension, because it's really just these two parameters that we're playing with. I see. So, just a quick follow-up: does that mean the identification process is not clear when it's in a higher dimension? Because I think that's the part I wasn't really sure about from this talk. Can you try to clarify? I'm not sure what you mean. Because it feels like the communication will for sure go down, like it will go down to n to the one over d if d is the dimension, right? But it sounds like the more complex thing is how you identify which part of the grid you care about, right? Yeah. So again, the problem is, if you could characterize what it means to increase the dimension from, say, even just two to three, then that would be awesome. But for me, I don't even know what it means to have a third dimension. What is the third dimension here? Oh, I mean, like, just, you know, there's a matrix, but you can have a higher-dimensional tensor, right?
You put... no, it doesn't work that way. Well, so I'm saying, of course you can do that, but mechanically, what are we doing with this extra dimension? Because what we have here are two dimensions. One is how big the bucket is, meaning how many things the evaluator is going to have to reconstruct with this garbling process. And the second dimension is how many buckets we have, meaning how much communication we're paying. But I'm just not sure what a third dimension would represent, what it would encode mechanically, and what the parties would be doing with it. Okay, cool. Thanks. Thank you. All right. Thank you very much. So let's thank the speaker again. So we should be ready for the next talk. The next talk will be on highly efficient OT-based multiplication protocols. This is