All right, so welcome to the first afternoon session, on round complexity. The session will proceed in two rounds. In the first round, we have a talk on unconditionally secure computation with reduced interaction, by Ivan Damgård, Jesper Buus Nielsen, Rafail Ostrovsky, and Adi Rosen, and Ivan will give the talk.

Okay, thanks. So, as the program says, this is joint work with Jesper from Aarhus and with Rafail and Adi. This talk, as the title says, is about unconditionally secure multi-party computation protocols, things you have probably all heard of, like BGW and so on. On one hand, as you probably also know, they're great, because they're computationally really, really efficient. Much more efficient, of course, than the FHE-based stuff, and actually, if you only look at computation, even more efficient than things based on Yao garbling, for instance. You just need very, very simple field operations, linear combinations, and that's it. On the other hand, they're not so great, because they require lots of interaction. If you require the protocol to be efficient in the circuit size of the function you're computing, then for all we know you need a number of rounds proportional to the depth of the circuit you're computing, the communication will be proportional to the circuit size, and if you don't consider amortization over several computations, you'll also have to invest something proportional to the number of players, which I call n here. So that's too bad, and a really hard problem, of course, is whether we can improve this significantly. Or, in the words of this nice guy here: is there something like the FHE of unconditional security, if you will? And we don't know. This talk is about some partial answers to that. The big problem, of course, remains open, but there are some things we can say. Basically, there are two things we consider here.
One is message complexity, and the other is round complexity. So let's take message complexity first. Here the question we ask is: how many messages do you need to send to compute non-trivial functions securely? The generic example of a non-trivial function is where everybody has an input bit, we want to compute the AND of all these bits, and everybody learns the result. This is related to round complexity, of course, but it's not the same thing, and there's some related work on this in various different and more limited models, such as considering only the sum modulo two as the function, for instance. In general, we actually know very little about this. Before we get to the results, it's in fact worthwhile thinking about how to count messages. That may seem like a trivial thing, but actually it's not quite trivial: if the protocol sometimes, but not always, sends a message in a given time slot, should we be charged for this? Because the absence of a message is also a signal. Well, there are two ways you can go. You can say: if sometimes a message is sent at that point in time and sometimes it's not, then even when it's not sent the receiver can conclude something, so we should always charge for a time slot if it's ever used. That's called conservative message complexity. Or you can say: no, no, I only want to be charged for the cost of physically moving some bits around. That's called liberal message complexity, and then we count only the expected number of messages sent. Those are the two ends of the spectrum; there are also models in between. For our work, it doesn't really matter much, because it turns out that the bounds we get are pretty close even for those two extreme ways of counting. So if you like some other model better, well, we pretty much have the answer for that model too.
A little bit more about the actual security model and the types of functions. So, as I said, Boolean functions, one input bit per player, everybody gets the result. In the security model, we assume the adversary sees the message pattern: when A sends to B, even if they are both honest, the adversary knows that a message was sent, but of course not its content. The default is to assume static corruption and a constant number of players, so the number of players doesn't get to grow with the security parameter if you use statistical security. By this we rule out tricks like selecting a small committee of players at random that does the work, and hoping that the adversary didn't corrupt those guys. But actually we can also consider adaptive corruption and any number of players, and all the bounds will be the same. Okay, so what are the functions? There's one class we call difficult functions. Not what you usually call difficult, but those are the functions for which we can get bounds. Think of basically anything that's not blatantly trivial: XOR or threshold functions, say. What we need precisely is, first, that each input must sometimes influence the output. And secondly, slightly more complicated maybe: for each honest player P, if the adversary is only given the output, he can't always figure out what that player's input is. For example, think of three players and one corruption, and say we are computing the AND. So I'm corrupt and I have a one, and say the output is zero. This means that the other guys, Alessandra and Daniel here, could have one and zero, or they could have zero and one; I have no way to tell. So for each single player, I can't know exactly what he or she has. That's the kind of thing you want. Okay, there's also something called very difficult functions, such as AND and threshold functions, but not including XOR.
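The second condition can be checked mechanically on a truth table. Here is a small sanity check (my own illustration, not code from the paper) of the "difficult function" condition on the 3-player AND example just described: a corrupt player holding input 1 who sees output 0 cannot pin down any single honest player's input bit.

```python
from itertools import product

def and3(bits):
    return int(all(bits))

def consistent_honest_inputs(f, corrupt_input, output):
    """Honest-input pairs (x2, x3) consistent with the corrupt player's view."""
    return [(b, c) for b, c in product([0, 1], repeat=2)
            if f((corrupt_input, b, c)) == output]

pairs = consistent_honest_inputs(and3, 1, 0)
print(pairs)  # [(0, 0), (0, 1), (1, 0)] -> each honest input could be 0 or 1
```

Since both 0 and 1 appear in each coordinate of the consistent pairs, no single honest input is determined by the corrupt player's view.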
There we require the same thing as for difficult functions, but in addition the truth table has to contain an embedded AND. What that means is that if I restrict the inputs appropriately, I get an AND by only using those inputs. Okay, so here are the results we get. Again, this is for t corruptions out of the n players. I simplified the expressions from the paper a little to make them easier to read, but this is the essence. You can see that they're all quite similar: they're all Ω(n·t) asymptotically, but there are some small differences. You can see, for instance, that going from difficult to very difficult doesn't make much of a difference; it's like one half expected message in the liberal case and one message in the conservative case. Going from liberal to conservative makes a little more of a difference, essentially n/2 messages, but not very much really. Okay, so why should we believe that something like this is true? Well, here's the rough intuition. Here's our player, Charlie Brown. The first rough observation is: he's going to have to communicate with at least t+1 players before his input becomes determined. Because if he only talks to t players and that conversation determines his input, those t guys could be the corrupted guys, and then the adversary knows his input. That's not good, of course. The next, somewhat vague, intuition is that if Charlie Brown's input is only determined after this (t+1)-st send operation, then you can't really start computing the function before that message is sent. So that ought to mean that Charlie must receive another message later on to let him know what the result was. So we think there should be this additional message, okay?
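The embedded-AND condition can also be made concrete. The following is one simple reading of that definition for three players (my illustration; the paper's formal definition may be stated more generally): fix one player's input to a constant and relabel the other two players' input values, and ask whether some such restriction equals the 2-input AND.

```python
from itertools import product

def has_embedded_and3(f):
    """Does a 3-player Boolean function contain an embedded 2-input AND?"""
    for fixed in range(3):
        others = [i for i in range(3) if i != fixed]
        for c in (0, 1):
            # Each remaining player maps selector bit b to pair[b].
            for pair1, pair2 in product([(0, 1), (1, 0)], repeat=2):
                x = [0, 0, 0]
                x[fixed] = c
                ok = True
                for b1, b2 in product([0, 1], repeat=2):
                    x[others[0]], x[others[1]] = pair1[b1], pair2[b2]
                    if f(tuple(x)) != (b1 & b2):
                        ok = False
                        break
                if ok:
                    return True
    return False

maj3 = lambda x: int(x[0] + x[1] + x[2] >= 2)  # a threshold function
xor3 = lambda x: x[0] ^ x[1] ^ x[2]

print(has_embedded_and3(maj3))  # True  -> candidate "very difficult" function
print(has_embedded_and3(xor3))  # False -> no embedded AND, matching the talk
```

This matches the classification in the talk: threshold functions are very difficult, while XOR is difficult but not very difficult.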
So if we count the number of receives and sends that the protocol has to do to cater for Charlie Brown here, and believe that intuition is true, then for each player we count t+1 sends and receives in the first phase, and two sends and receives in the last phase, because that last message that tells Charlie Brown the output must be both sent and received. This is for every player, so we multiply by n, and since each message must be sent and received, we divide by two to get the number of messages. Okay, so that's the kind of bound we expect. Now, I've been waving my hands madly here, of course, so the main technical challenge is to show that this intuition is actually true; a priori there's no guarantee that the protocol has to behave like this, it just seems that way. But we actually show that for very difficult and difficult functions, the protocol really must behave roughly like this. I say roughly because, as you saw on the previous slide, not all the bounds are exactly that expression; one of them actually is. And it turns out there are good reasons why the bounds can't all be the same, because there's a family of cases, which I'll get to now, where these bounds are actually tight. One obvious reason they don't have to be the same is liberal message complexity: Charlie Brown gets the result, and we just have to tell him one bit. So in principle, with liberal complexity, we could tell him by sending a message or not sending a message. That means in expectation we send only half a message on average, you could think. That's the reason for this difference, for instance. Okay, so there are some cases where the bounds are tight.
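The counting argument above can be written down as a one-line formula (a sketch of the hand-waving count; the paper's exact bounds differ slightly between the liberal and conservative models):

```python
# Per player: (t + 1) send/receive events before the input is determined,
# plus one final output message, which is both sent and received (2 events).
# Summing over n players counts each message twice, so we divide by two.
def expected_message_bound(n, t):
    return n * ((t + 1) + 2) / 2

print(expected_message_bound(3, 1))  # 6.0, matching the conservative AND case
```

For n = 3 players and t = 1 corruption this gives 6 messages, which is exactly the conservative bound for three-party AND mentioned shortly.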
For t = 1, one corruption, any number of players, and functions computable in deterministic log space, we have a positive result based on something called private simultaneous messages (PSM) protocols, introduced by Feige, Kilian, and Naor and generalized by Ishai and other people. For that case and those kinds of functions, we can actually exactly match those bounds. I think there's one of the four cases from before where we're off by one message, but apart from that. One, I think, kind of cute corollary of this is that for three players to securely compute the AND of an input bit from each player, where everybody gets the result, with one semi-honest corruption, six messages are exactly necessary and sufficient in the conservative case. And for XOR, the answer is five. Right, so that was message complexity; if you came in late, you can wake up and start here. Now, here's something about the number of rounds. As I said, we really have no idea what the lower bound is, if there is indeed a lower bound on the number of rounds in general. But what we ask here instead is: maybe we can reduce the interaction for some of the players. More precisely, we ask: can some of the players be lazy? A lazy player is defined as follows. If you're lazy, you send one message initially to the other guys, who are supposed to do the work. They do something, and then later on, I can relax and lean back, one message arrives from these guys, and then I know the result. And by the way, maybe a remark is in order here. If you insist on synchronous protocols, you might think: well, then I have to stay awake and watch all the rounds go by until I finally get the result. But nobody says you have to use a medium that requires synchrony. For instance, Garfield here could just send a message immediately, then lean back and eat some pizza until an email arrives with the result.
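To give a feel for the PSM model the positive result builds on, here is the classic PSM-style protocol for XOR (a standard textbook example, shown as a sketch of the flavor of protocol, not the paper's construction): the players hold common randomness r_1, ..., r_n XOR-ing to zero, each sends a single masked message x_i XOR r_i, and together the messages reveal only the XOR of the inputs.

```python
import secrets

def shared_randomness(n):
    """Random bits r_1..r_n whose XOR is 0, dealt in a setup phase."""
    r = [secrets.randbits(1) for _ in range(n - 1)]
    last = 0
    for bit in r:
        last ^= bit
    return r + [last]

def psm_xor(inputs):
    r = shared_randomness(len(inputs))
    messages = [x ^ ri for x, ri in zip(inputs, r)]  # one message per player
    out = 0
    for m in messages:
        out ^= m  # the referee just XORs what it receives
    return out

print(psm_xor([1, 0, 1, 1]))  # 1, the XOR of the inputs
```

Each individual message is a uniformly random bit, so the referee learns nothing beyond the XOR itself; the masks cancel because they XOR to zero.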
So it doesn't really have to be that way. Now, the results we have on this. There are two rather natural cases: one is where there are 3t+1 players with t malicious corruptions, and the other is where you have 2t+1 players with t semi-honest corruptions. What we show in both cases is that you can compute any function with unconditional security while allowing up to t players to be lazy. And on the other hand, at most t players can be lazy in both cases; that lower bound follows, in fact, from the message complexity bounds I was showing before. So what can we say about this? For the positive result in the semi-honest case, there are known results already which say: take any functionality, then I can compute that functionality securely in a model where all players except one are semi-honest corrupt, if you are allowed to get correlated randomness from the beginning. So in the preprocessing model, any functionality, even a reactive functionality, can be computed securely, even with up to n minus one corruptions. What a lazy player P can therefore do is think of what he would do in one of the standard protocols as a reactive functionality, and set up correlated randomness allowing the non-lazy players to emulate him. Then he just securely sends this correlated randomness to the other guys. This is semi-honest corruption, so he does do that correctly, and correctness is fine. Then the other guys run some standard protocol for the ordinary honest-majority setting, compute the result, and send it back to P. So that's pretty straightforward. It gets a bit harder in the malicious case. There, n is 3t+1, and the worst case is when the t lazy players you designate happen to be honest.
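The communication pattern of a lazy player can be sketched as follows. This is a toy version (my illustration, specialized to XOR; the paper's construction instead ships general correlated randomness that lets the workers emulate the lazy player in any protocol): one message out, all interaction among the workers, one message back with the result.

```python
import secrets

def lazy_first_message(x, num_workers):
    """Additive (XOR) shares of the lazy player's input, one per worker."""
    shares = [secrets.randbits(1) for _ in range(num_workers - 1)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]  # XOR of the shares equals x

def workers_compute_xor(worker_inputs, lazy_shares):
    out = 0
    for v in worker_inputs + lazy_shares:
        out ^= v
    return out  # sent back to the lazy player as the single reply

shares = lazy_first_message(1, 3)
print(workers_compute_xor([0, 1, 1], shares))  # 1 = XOR(0, 1, 1, 1)
```

Any proper subset of the shares is uniformly random, so semi-honest workers learn nothing about the lazy player's input while still being able to fold it into the computation.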
Because that means that among the guys who are supposed to do the work, we've now lost t honest players. So what we have left is n' = 2t+1 players, out of which we only have an honest majority now. On one hand, that's good news, because with an honest majority it's known, from Rabin and Ben-Or back in the late eighties, that you can actually compute any function securely in that case. But only if you have broadcast: you have to tolerate a small error probability, and you need broadcast given for free. And of course, if you only have an honest majority, as we know, you can't do broadcast from scratch; that requires fewer than a third of the players to be corrupt. So what we do to solve this problem is come up with a broadcast protocol of a special form. It is designed for all 3t+1 players, but the form is that we can select t players (of course, that's the lazy guys) who each send one message to each of the other guys, and after that, the remaining n minus t players can do broadcast among themselves. I think that's a little bit interesting just independently. What's a little surprising about it is that, of course, you don't know whether those t players you select are honest; they could all be corrupt, in principle, and then they'll send you complete garbage. In that case, of course, we're saved by the fact that all the other guys are honest, but they don't know this in advance. So we need things to self-correct, kind of, but this turns out to work out, based on previous work on broadcast with preprocessing. Okay, so what about the lower bound? Why can't you have more lazy players than this? I'll give you the argument just for the semi-honest case; that's the simplest. So here are seven players, okay? So t is three. What if you had a protocol that tolerates four lazy players? Let's imagine we had this.
I'm going to designate the cartoon figures to be the first t lazy players plus one additional lazy player, and then we have three hardworking cryptographers on the other side; they will be the active guys. The message pattern mandated by the laziness of these guys says that initially these guys send a message to the active guys, the active guys do some work among themselves, and then one message is sent back to the lazy guys. Now, as the drawing indicates, if there were a protocol like this, I could of course make a three-party protocol out of it by just letting one party emulate all of these guys in his head, a second one emulate these guys, and the third one emulate just Snoopy. And if the original protocol is secure and computes whatever function you like, then you have here something that will do a three-party computation of the AND, for instance. But the problem is, of course, that this is only four messages, and that's too little. If you go back to the lower bounds, even with the most tolerant way of counting messages, at least four and a half messages in expectation are necessary, it turns out. So this is just not possible. You could imagine, maybe some of you are thinking: what if some of these lazy players were allowed to communicate among themselves? Maybe that would help somehow. Maybe Charlie Brown sends a message to Snoopy and to the other guys first, and maybe that can be used later on to decode the result, or something like this. Well, this generalizes a little bit, as long as you can isolate things in this way. And if I use conservative message complexity, I could even allow a message here, because then it's five messages against a lower bound of six. So as long as you can isolate one guy that only sends to and receives from the other guys, the same lower bound will still hold.
It's only if everybody really talks to everybody that I can't show anything, but that's also natural, because then I think you could actually do it. So that's tight, in some way. Okay, so there are two take-home messages. First, essentially any non-trivial Boolean function requires basically n times t messages to be computed if you insist on unconditional security, if all players get the result, and if the message pattern is known to the adversary. There are other results saying that if the message pattern is secret, then the bounds change; and if the message pattern is secret and you have a computational assumption, then you can do much better, of course. And secondly, if you have t semi-honest or malicious corruptions, and if those are maximal compared to the number of players, you can have up to t of the players be lazy: you send once to the other guys, you lean back, and you wait for an email that arrives with the result in the end. Okay, and that was what I had. Thanks.